Feb 13 20:16:03.469879 kernel: microcode: updated early: 0xf4 -> 0x100, date = 2024-02-05
Feb 13 20:16:03.469892 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Thu Feb 13 17:44:05 -00 2025
Feb 13 20:16:03.469899 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=ed9b5d8ea73d2e47b8decea8124089e04dd398ef43013c1b1a5809314044b1c3
Feb 13 20:16:03.469904 kernel: BIOS-provided physical RAM map:
Feb 13 20:16:03.469908 kernel: BIOS-e820: [mem 0x0000000000000000-0x00000000000997ff] usable
Feb 13 20:16:03.469912 kernel: BIOS-e820: [mem 0x0000000000099800-0x000000000009ffff] reserved
Feb 13 20:16:03.469917 kernel: BIOS-e820: [mem 0x00000000000e0000-0x00000000000fffff] reserved
Feb 13 20:16:03.469921 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003fffffff] usable
Feb 13 20:16:03.469925 kernel: BIOS-e820: [mem 0x0000000040000000-0x00000000403fffff] reserved
Feb 13 20:16:03.469929 kernel: BIOS-e820: [mem 0x0000000040400000-0x0000000081b25fff] usable
Feb 13 20:16:03.469933 kernel: BIOS-e820: [mem 0x0000000081b26000-0x0000000081b26fff] ACPI NVS
Feb 13 20:16:03.469938 kernel: BIOS-e820: [mem 0x0000000081b27000-0x0000000081b27fff] reserved
Feb 13 20:16:03.469942 kernel: BIOS-e820: [mem 0x0000000081b28000-0x000000008afccfff] usable
Feb 13 20:16:03.469946 kernel: BIOS-e820: [mem 0x000000008afcd000-0x000000008c0b1fff] reserved
Feb 13 20:16:03.469951 kernel: BIOS-e820: [mem 0x000000008c0b2000-0x000000008c23afff] usable
Feb 13 20:16:03.469956 kernel: BIOS-e820: [mem 0x000000008c23b000-0x000000008c66cfff] ACPI NVS
Feb 13 20:16:03.469961 kernel: BIOS-e820: [mem 0x000000008c66d000-0x000000008eefefff] reserved
Feb 13 20:16:03.469966 kernel: BIOS-e820: [mem 0x000000008eeff000-0x000000008eefffff] usable
Feb 13 20:16:03.469970 kernel: BIOS-e820: [mem 0x000000008ef00000-0x000000008fffffff] reserved
Feb 13 20:16:03.469975 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Feb 13 20:16:03.469979 kernel: BIOS-e820: [mem 0x00000000fe000000-0x00000000fe010fff] reserved
Feb 13 20:16:03.469984 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec00fff] reserved
Feb 13 20:16:03.469988 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved
Feb 13 20:16:03.469993 kernel: BIOS-e820: [mem 0x00000000ff000000-0x00000000ffffffff] reserved
Feb 13 20:16:03.469997 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000086effffff] usable
Feb 13 20:16:03.470002 kernel: NX (Execute Disable) protection: active
Feb 13 20:16:03.470006 kernel: APIC: Static calls initialized
Feb 13 20:16:03.470011 kernel: SMBIOS 3.2.1 present.
Feb 13 20:16:03.470016 kernel: DMI: Supermicro SYS-5019C-MR-PH004/X11SCM-F, BIOS 1.9 09/16/2022
Feb 13 20:16:03.470021 kernel: tsc: Detected 3400.000 MHz processor
Feb 13 20:16:03.470026 kernel: tsc: Detected 3399.906 MHz TSC
Feb 13 20:16:03.470030 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Feb 13 20:16:03.470035 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Feb 13 20:16:03.470040 kernel: last_pfn = 0x86f000 max_arch_pfn = 0x400000000
Feb 13 20:16:03.470045 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 23), built from 10 variable MTRRs
Feb 13 20:16:03.470050 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Feb 13 20:16:03.470054 kernel: last_pfn = 0x8ef00 max_arch_pfn = 0x400000000
Feb 13 20:16:03.470060 kernel: Using GB pages for direct mapping
Feb 13 20:16:03.470065 kernel: ACPI: Early table checksum verification disabled
Feb 13 20:16:03.470070 kernel: ACPI: RSDP 0x00000000000F05B0 000024 (v02 SUPERM)
Feb 13 20:16:03.470076 kernel: ACPI: XSDT 0x000000008C54E0C8 00010C (v01 SUPERM SUPERM 01072009 AMI 00010013)
Feb 13 20:16:03.470081 kernel: ACPI: FACP 0x000000008C58A670 000114 (v06 01072009 AMI 00010013)
Feb 13 20:16:03.470086 kernel: ACPI: DSDT 0x000000008C54E268 03C404 (v02 SUPERM SMCI--MB 01072009 INTL 20160527)
Feb 13 20:16:03.470091 kernel: ACPI: FACS 0x000000008C66CF80 000040
Feb 13 20:16:03.470097 kernel: ACPI: APIC 0x000000008C58A788 00012C (v04 01072009 AMI 00010013)
Feb 13 20:16:03.470102 kernel: ACPI: FPDT 0x000000008C58A8B8 000044 (v01 01072009 AMI 00010013)
Feb 13 20:16:03.470107 kernel: ACPI: FIDT 0x000000008C58A900 00009C (v01 SUPERM SMCI--MB 01072009 AMI 00010013)
Feb 13 20:16:03.470112 kernel: ACPI: MCFG 0x000000008C58A9A0 00003C (v01 SUPERM SMCI--MB 01072009 MSFT 00000097)
Feb 13 20:16:03.470117 kernel: ACPI: SPMI 0x000000008C58A9E0 000041 (v05 SUPERM SMCI--MB 00000000 AMI. 00000000)
Feb 13 20:16:03.470122 kernel: ACPI: SSDT 0x000000008C58AA28 001B1C (v02 CpuRef CpuSsdt 00003000 INTL 20160527)
Feb 13 20:16:03.470127 kernel: ACPI: SSDT 0x000000008C58C548 0031C6 (v02 SaSsdt SaSsdt 00003000 INTL 20160527)
Feb 13 20:16:03.470133 kernel: ACPI: SSDT 0x000000008C58F710 00232B (v02 PegSsd PegSsdt 00001000 INTL 20160527)
Feb 13 20:16:03.470138 kernel: ACPI: HPET 0x000000008C591A40 000038 (v01 SUPERM SMCI--MB 00000002 01000013)
Feb 13 20:16:03.470143 kernel: ACPI: SSDT 0x000000008C591A78 000FAE (v02 SUPERM Ther_Rvp 00001000 INTL 20160527)
Feb 13 20:16:03.470148 kernel: ACPI: SSDT 0x000000008C592A28 0008F4 (v02 INTEL xh_mossb 00000000 INTL 20160527)
Feb 13 20:16:03.470153 kernel: ACPI: UEFI 0x000000008C593320 000042 (v01 SUPERM SMCI--MB 00000002 01000013)
Feb 13 20:16:03.470158 kernel: ACPI: LPIT 0x000000008C593368 000094 (v01 SUPERM SMCI--MB 00000002 01000013)
Feb 13 20:16:03.470163 kernel: ACPI: SSDT 0x000000008C593400 0027DE (v02 SUPERM PtidDevc 00001000 INTL 20160527)
Feb 13 20:16:03.470168 kernel: ACPI: SSDT 0x000000008C595BE0 0014E2 (v02 SUPERM TbtTypeC 00000000 INTL 20160527)
Feb 13 20:16:03.470173 kernel: ACPI: DBGP 0x000000008C5970C8 000034 (v01 SUPERM SMCI--MB 00000002 01000013)
Feb 13 20:16:03.470179 kernel: ACPI: DBG2 0x000000008C597100 000054 (v00 SUPERM SMCI--MB 00000002 01000013)
Feb 13 20:16:03.470184 kernel: ACPI: SSDT 0x000000008C597158 001B67 (v02 SUPERM UsbCTabl 00001000 INTL 20160527)
Feb 13 20:16:03.470189 kernel: ACPI: DMAR 0x000000008C598CC0 000070 (v01 INTEL EDK2 00000002 01000013)
Feb 13 20:16:03.470194 kernel: ACPI: SSDT 0x000000008C598D30 000144 (v02 Intel ADebTabl 00001000 INTL 20160527)
Feb 13 20:16:03.470199 kernel: ACPI: TPM2 0x000000008C598E78 000034 (v04 SUPERM SMCI--MB 00000001 AMI 00000000)
Feb 13 20:16:03.470204 kernel: ACPI: SSDT 0x000000008C598EB0 000D8F (v02 INTEL SpsNm 00000002 INTL 20160527)
Feb 13 20:16:03.470209 kernel: ACPI: WSMT 0x000000008C599C40 000028 (v01 SUPERM 01072009 AMI 00010013)
Feb 13 20:16:03.470214 kernel: ACPI: EINJ 0x000000008C599C68 000130 (v01 AMI AMI.EINJ 00000000 AMI. 00000000)
Feb 13 20:16:03.470219 kernel: ACPI: ERST 0x000000008C599D98 000230 (v01 AMIER AMI.ERST 00000000 AMI. 00000000)
Feb 13 20:16:03.470224 kernel: ACPI: BERT 0x000000008C599FC8 000030 (v01 AMI AMI.BERT 00000000 AMI. 00000000)
Feb 13 20:16:03.470230 kernel: ACPI: HEST 0x000000008C599FF8 00027C (v01 AMI AMI.HEST 00000000 AMI. 00000000)
Feb 13 20:16:03.470235 kernel: ACPI: SSDT 0x000000008C59A278 000162 (v01 SUPERM SMCCDN 00000000 INTL 20181221)
Feb 13 20:16:03.470240 kernel: ACPI: Reserving FACP table memory at [mem 0x8c58a670-0x8c58a783]
Feb 13 20:16:03.470245 kernel: ACPI: Reserving DSDT table memory at [mem 0x8c54e268-0x8c58a66b]
Feb 13 20:16:03.470250 kernel: ACPI: Reserving FACS table memory at [mem 0x8c66cf80-0x8c66cfbf]
Feb 13 20:16:03.470255 kernel: ACPI: Reserving APIC table memory at [mem 0x8c58a788-0x8c58a8b3]
Feb 13 20:16:03.470259 kernel: ACPI: Reserving FPDT table memory at [mem 0x8c58a8b8-0x8c58a8fb]
Feb 13 20:16:03.470265 kernel: ACPI: Reserving FIDT table memory at [mem 0x8c58a900-0x8c58a99b]
Feb 13 20:16:03.470270 kernel: ACPI: Reserving MCFG table memory at [mem 0x8c58a9a0-0x8c58a9db]
Feb 13 20:16:03.470275 kernel: ACPI: Reserving SPMI table memory at [mem 0x8c58a9e0-0x8c58aa20]
Feb 13 20:16:03.470280 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58aa28-0x8c58c543]
Feb 13 20:16:03.470285 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58c548-0x8c58f70d]
Feb 13 20:16:03.470290 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58f710-0x8c591a3a]
Feb 13 20:16:03.470295 kernel: ACPI: Reserving HPET table memory at [mem 0x8c591a40-0x8c591a77]
Feb 13 20:16:03.470300 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c591a78-0x8c592a25]
Feb 13 20:16:03.470305 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c592a28-0x8c59331b]
Feb 13 20:16:03.470311 kernel: ACPI: Reserving UEFI table memory at [mem 0x8c593320-0x8c593361]
Feb 13 20:16:03.470316 kernel: ACPI: Reserving LPIT table memory at [mem 0x8c593368-0x8c5933fb]
Feb 13 20:16:03.470321 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c593400-0x8c595bdd]
Feb 13 20:16:03.470326 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c595be0-0x8c5970c1]
Feb 13 20:16:03.470331 kernel: ACPI: Reserving DBGP table memory at [mem 0x8c5970c8-0x8c5970fb]
Feb 13 20:16:03.470335 kernel: ACPI: Reserving DBG2 table memory at [mem 0x8c597100-0x8c597153]
Feb 13 20:16:03.470340 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c597158-0x8c598cbe]
Feb 13 20:16:03.470345 kernel: ACPI: Reserving DMAR table memory at [mem 0x8c598cc0-0x8c598d2f]
Feb 13 20:16:03.470350 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c598d30-0x8c598e73]
Feb 13 20:16:03.470355 kernel: ACPI: Reserving TPM2 table memory at [mem 0x8c598e78-0x8c598eab]
Feb 13 20:16:03.470361 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c598eb0-0x8c599c3e]
Feb 13 20:16:03.470366 kernel: ACPI: Reserving WSMT table memory at [mem 0x8c599c40-0x8c599c67]
Feb 13 20:16:03.470371 kernel: ACPI: Reserving EINJ table memory at [mem 0x8c599c68-0x8c599d97]
Feb 13 20:16:03.470376 kernel: ACPI: Reserving ERST table memory at [mem 0x8c599d98-0x8c599fc7]
Feb 13 20:16:03.470381 kernel: ACPI: Reserving BERT table memory at [mem 0x8c599fc8-0x8c599ff7]
Feb 13 20:16:03.470386 kernel: ACPI: Reserving HEST table memory at [mem 0x8c599ff8-0x8c59a273]
Feb 13 20:16:03.470391 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c59a278-0x8c59a3d9]
Feb 13 20:16:03.470396 kernel: No NUMA configuration found
Feb 13 20:16:03.470401 kernel: Faking a node at [mem 0x0000000000000000-0x000000086effffff]
Feb 13 20:16:03.470407 kernel: NODE_DATA(0) allocated [mem 0x86effa000-0x86effffff]
Feb 13 20:16:03.470412 kernel: Zone ranges:
Feb 13 20:16:03.470417 kernel:   DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Feb 13 20:16:03.470422 kernel:   DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Feb 13 20:16:03.470427 kernel:   Normal   [mem 0x0000000100000000-0x000000086effffff]
Feb 13 20:16:03.470432 kernel: Movable zone start for each node
Feb 13 20:16:03.470437 kernel: Early memory node ranges
Feb 13 20:16:03.470442 kernel:   node   0: [mem 0x0000000000001000-0x0000000000098fff]
Feb 13 20:16:03.470449 kernel:   node   0: [mem 0x0000000000100000-0x000000003fffffff]
Feb 13 20:16:03.470455 kernel:   node   0: [mem 0x0000000040400000-0x0000000081b25fff]
Feb 13 20:16:03.470461 kernel:   node   0: [mem 0x0000000081b28000-0x000000008afccfff]
Feb 13 20:16:03.470484 kernel:   node   0: [mem 0x000000008c0b2000-0x000000008c23afff]
Feb 13 20:16:03.470490 kernel:   node   0: [mem 0x000000008eeff000-0x000000008eefffff]
Feb 13 20:16:03.470512 kernel:   node   0: [mem 0x0000000100000000-0x000000086effffff]
Feb 13 20:16:03.470518 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000086effffff]
Feb 13 20:16:03.470524 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Feb 13 20:16:03.470529 kernel: On node 0, zone DMA: 103 pages in unavailable ranges
Feb 13 20:16:03.470535 kernel: On node 0, zone DMA32: 1024 pages in unavailable ranges
Feb 13 20:16:03.470540 kernel: On node 0, zone DMA32: 2 pages in unavailable ranges
Feb 13 20:16:03.470546 kernel: On node 0, zone DMA32: 4325 pages in unavailable ranges
Feb 13 20:16:03.470551 kernel: On node 0, zone DMA32: 11460 pages in unavailable ranges
Feb 13 20:16:03.470557 kernel: On node 0, zone Normal: 4352 pages in unavailable ranges
Feb 13 20:16:03.470562 kernel: On node 0, zone Normal: 4096 pages in unavailable ranges
Feb 13 20:16:03.470567 kernel: ACPI: PM-Timer IO Port: 0x1808
Feb 13 20:16:03.470573 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1])
Feb 13 20:16:03.470578 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1])
Feb 13 20:16:03.470584 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1])
Feb 13 20:16:03.470589 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1])
Feb 13 20:16:03.470595 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1])
Feb 13 20:16:03.470600 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1])
Feb 13 20:16:03.470605 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1])
Feb 13 20:16:03.470611 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1])
Feb 13 20:16:03.470616 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1])
Feb 13 20:16:03.470621 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1])
Feb 13 20:16:03.470626 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1])
Feb 13 20:16:03.470632 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1])
Feb 13 20:16:03.470638 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1])
Feb 13 20:16:03.470643 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1])
Feb 13 20:16:03.470648 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1])
Feb 13 20:16:03.470654 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1])
Feb 13 20:16:03.470659 kernel: IOAPIC[0]: apic_id 2, version 32, address 0xfec00000, GSI 0-119
Feb 13 20:16:03.470664 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Feb 13 20:16:03.470670 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Feb 13 20:16:03.470675 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Feb 13 20:16:03.470682 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Feb 13 20:16:03.470687 kernel: TSC deadline timer available
Feb 13 20:16:03.470692 kernel: smpboot: Allowing 16 CPUs, 0 hotplug CPUs
Feb 13 20:16:03.470698 kernel: [mem 0x90000000-0xdfffffff] available for PCI devices
Feb 13 20:16:03.470703 kernel: Booting paravirtualized kernel on bare hardware
Feb 13 20:16:03.470708 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Feb 13 20:16:03.470714 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:16 nr_cpu_ids:16 nr_node_ids:1
Feb 13 20:16:03.470719 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u262144
Feb 13 20:16:03.470725 kernel: pcpu-alloc: s197032 r8192 d32344 u262144 alloc=1*2097152
Feb 13 20:16:03.470731 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15
Feb 13 20:16:03.470737 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=ed9b5d8ea73d2e47b8decea8124089e04dd398ef43013c1b1a5809314044b1c3
Feb 13 20:16:03.470742 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 13 20:16:03.470748 kernel: random: crng init done
Feb 13 20:16:03.470753 kernel: Dentry cache hash table entries: 4194304 (order: 13, 33554432 bytes, linear)
Feb 13 20:16:03.470758 kernel: Inode-cache hash table entries: 2097152 (order: 12, 16777216 bytes, linear)
Feb 13 20:16:03.470763 kernel: Fallback order for Node 0: 0
Feb 13 20:16:03.470769 kernel: Built 1 zonelists, mobility grouping on. Total pages: 8232415
Feb 13 20:16:03.470775 kernel: Policy zone: Normal
Feb 13 20:16:03.470781 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 13 20:16:03.470786 kernel: software IO TLB: area num 16.
Feb 13 20:16:03.470792 kernel: Memory: 32720308K/33452980K available (12288K kernel code, 2301K rwdata, 22736K rodata, 42976K init, 2216K bss, 732412K reserved, 0K cma-reserved)
Feb 13 20:16:03.470797 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1
Feb 13 20:16:03.470802 kernel: ftrace: allocating 37923 entries in 149 pages
Feb 13 20:16:03.470808 kernel: ftrace: allocated 149 pages with 4 groups
Feb 13 20:16:03.470813 kernel: Dynamic Preempt: voluntary
Feb 13 20:16:03.470818 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 13 20:16:03.470825 kernel: rcu: RCU event tracing is enabled.
Feb 13 20:16:03.470830 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16.
Feb 13 20:16:03.470836 kernel: Trampoline variant of Tasks RCU enabled.
Feb 13 20:16:03.470841 kernel: Rude variant of Tasks RCU enabled.
Feb 13 20:16:03.470846 kernel: Tracing variant of Tasks RCU enabled.
Feb 13 20:16:03.470852 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 13 20:16:03.470857 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16
Feb 13 20:16:03.470862 kernel: NR_IRQS: 33024, nr_irqs: 2184, preallocated irqs: 16
Feb 13 20:16:03.470868 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Feb 13 20:16:03.470873 kernel: Console: colour VGA+ 80x25
Feb 13 20:16:03.470879 kernel: printk: console [tty0] enabled
Feb 13 20:16:03.470885 kernel: printk: console [ttyS1] enabled
Feb 13 20:16:03.470890 kernel: ACPI: Core revision 20230628
Feb 13 20:16:03.470895 kernel: hpet: HPET dysfunctional in PC10. Force disabled.
Feb 13 20:16:03.470901 kernel: APIC: Switch to symmetric I/O mode setup
Feb 13 20:16:03.470906 kernel: DMAR: Host address width 39
Feb 13 20:16:03.470911 kernel: DMAR: DRHD base: 0x000000fed91000 flags: 0x1
Feb 13 20:16:03.470917 kernel: DMAR: dmar0: reg_base_addr fed91000 ver 1:0 cap d2008c40660462 ecap f050da
Feb 13 20:16:03.470922 kernel: DMAR: RMRR base: 0x0000008cf18000 end: 0x0000008d161fff
Feb 13 20:16:03.470928 kernel: DMAR-IR: IOAPIC id 2 under DRHD base 0xfed91000 IOMMU 0
Feb 13 20:16:03.470934 kernel: DMAR-IR: HPET id 0 under DRHD base 0xfed91000
Feb 13 20:16:03.470939 kernel: DMAR-IR: Queued invalidation will be enabled to support x2apic and Intr-remapping.
Feb 13 20:16:03.470944 kernel: DMAR-IR: Enabled IRQ remapping in x2apic mode
Feb 13 20:16:03.470950 kernel: x2apic enabled
Feb 13 20:16:03.470955 kernel: APIC: Switched APIC routing to: cluster x2apic
Feb 13 20:16:03.470961 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x3101f59f5e6, max_idle_ns: 440795259996 ns
Feb 13 20:16:03.470966 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 6799.81 BogoMIPS (lpj=3399906)
Feb 13 20:16:03.470972 kernel: CPU0: Thermal monitoring enabled (TM1)
Feb 13 20:16:03.470978 kernel: process: using mwait in idle threads
Feb 13 20:16:03.470983 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Feb 13 20:16:03.470988 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4
Feb 13 20:16:03.470994 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Feb 13 20:16:03.470999 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit
Feb 13 20:16:03.471004 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall
Feb 13 20:16:03.471009 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS
Feb 13 20:16:03.471015 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Feb 13 20:16:03.471020 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT
Feb 13 20:16:03.471025 kernel: RETBleed: Mitigation: Enhanced IBRS
Feb 13 20:16:03.471030 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Feb 13 20:16:03.471036 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Feb 13 20:16:03.471042 kernel: TAA: Mitigation: TSX disabled
Feb 13 20:16:03.471047 kernel: MMIO Stale Data: Mitigation: Clear CPU buffers
Feb 13 20:16:03.471052 kernel: SRBDS: Mitigation: Microcode
Feb 13 20:16:03.471058 kernel: GDS: Mitigation: Microcode
Feb 13 20:16:03.471063 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Feb 13 20:16:03.471068 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Feb 13 20:16:03.471073 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Feb 13 20:16:03.471079 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Feb 13 20:16:03.471084 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Feb 13 20:16:03.471089 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Feb 13 20:16:03.471096 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64
Feb 13 20:16:03.471101 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64
Feb 13 20:16:03.471106 kernel: x86/fpu: Enabled xstate features 0x1f, context size is 960 bytes, using 'compacted' format.
Feb 13 20:16:03.471112 kernel: Freeing SMP alternatives memory: 32K
Feb 13 20:16:03.471117 kernel: pid_max: default: 32768 minimum: 301
Feb 13 20:16:03.471122 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Feb 13 20:16:03.471127 kernel: landlock: Up and running.
Feb 13 20:16:03.471133 kernel: SELinux: Initializing.
Feb 13 20:16:03.471138 kernel: Mount-cache hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 13 20:16:03.471143 kernel: Mountpoint-cache hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 13 20:16:03.471149 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd)
Feb 13 20:16:03.471155 kernel: RCU Tasks: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Feb 13 20:16:03.471160 kernel: RCU Tasks Rude: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Feb 13 20:16:03.471166 kernel: RCU Tasks Trace: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Feb 13 20:16:03.471171 kernel: Performance Events: PEBS fmt3+, Skylake events, 32-deep LBR, full-width counters, Intel PMU driver.
Feb 13 20:16:03.471177 kernel: ... version:                4
Feb 13 20:16:03.471182 kernel: ... bit width:              48
Feb 13 20:16:03.471187 kernel: ... generic registers:      4
Feb 13 20:16:03.471193 kernel: ... value mask:             0000ffffffffffff
Feb 13 20:16:03.471198 kernel: ... max period:             00007fffffffffff
Feb 13 20:16:03.471204 kernel: ... fixed-purpose events:   3
Feb 13 20:16:03.471209 kernel: ... event mask:             000000070000000f
Feb 13 20:16:03.471215 kernel: signal: max sigframe size: 2032
Feb 13 20:16:03.471220 kernel: Estimated ratio of average max frequency by base frequency (times 1024): 1445
Feb 13 20:16:03.471225 kernel: rcu: Hierarchical SRCU implementation.
Feb 13 20:16:03.471231 kernel: rcu: 	Max phase no-delay instances is 400.
Feb 13 20:16:03.471236 kernel: NMI watchdog: Enabled. Permanently consumes one hw-PMU counter.
Feb 13 20:16:03.471241 kernel: smp: Bringing up secondary CPUs ...
Feb 13 20:16:03.471247 kernel: smpboot: x86: Booting SMP configuration:
Feb 13 20:16:03.471253 kernel: .... node  #0, CPUs:        #1  #2  #3  #4  #5  #6  #7  #8  #9 #10 #11 #12 #13 #14 #15
Feb 13 20:16:03.471259 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Feb 13 20:16:03.471264 kernel: smp: Brought up 1 node, 16 CPUs
Feb 13 20:16:03.471269 kernel: smpboot: Max logical packages: 1
Feb 13 20:16:03.471275 kernel: smpboot: Total of 16 processors activated (108796.99 BogoMIPS)
Feb 13 20:16:03.471280 kernel: devtmpfs: initialized
Feb 13 20:16:03.471285 kernel: x86/mm: Memory block size: 128MB
Feb 13 20:16:03.471291 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x81b26000-0x81b26fff] (4096 bytes)
Feb 13 20:16:03.471296 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x8c23b000-0x8c66cfff] (4399104 bytes)
Feb 13 20:16:03.471302 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 13 20:16:03.471308 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear)
Feb 13 20:16:03.471313 kernel: pinctrl core: initialized pinctrl subsystem
Feb 13 20:16:03.471318 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 13 20:16:03.471324 kernel: audit: initializing netlink subsys (disabled)
Feb 13 20:16:03.471329 kernel: audit: type=2000 audit(1739477757.042:1): state=initialized audit_enabled=0 res=1
Feb 13 20:16:03.471334 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 13 20:16:03.471339 kernel: thermal_sys: Registered thermal governor 'user_space'
Feb 13 20:16:03.471345 kernel: cpuidle: using governor menu
Feb 13 20:16:03.471351 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 13 20:16:03.471356 kernel: dca service started, version 1.12.1
Feb 13 20:16:03.471362 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000)
Feb 13 20:16:03.471367 kernel: PCI: Using configuration type 1 for base access
Feb 13 20:16:03.471372 kernel: ENERGY_PERF_BIAS: Set to 'normal', was 'performance'
Feb 13 20:16:03.471378 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Feb 13 20:16:03.471383 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Feb 13 20:16:03.471388 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Feb 13 20:16:03.471394 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Feb 13 20:16:03.471400 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Feb 13 20:16:03.471405 kernel: ACPI: Added _OSI(Module Device)
Feb 13 20:16:03.471410 kernel: ACPI: Added _OSI(Processor Device)
Feb 13 20:16:03.471416 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 13 20:16:03.471421 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 13 20:16:03.471426 kernel: ACPI: 12 ACPI AML tables successfully acquired and loaded
Feb 13 20:16:03.471432 kernel: ACPI: Dynamic OEM Table Load:
Feb 13 20:16:03.471437 kernel: ACPI: SSDT 0xFFFF94C240E3F400 000400 (v02 PmRef Cpu0Cst 00003001 INTL 20160527)
Feb 13 20:16:03.471442 kernel: ACPI: Dynamic OEM Table Load:
Feb 13 20:16:03.471450 kernel: ACPI: SSDT 0xFFFF94C241E0C800 000683 (v02 PmRef Cpu0Ist 00003000 INTL 20160527)
Feb 13 20:16:03.471456 kernel: ACPI: Dynamic OEM Table Load:
Feb 13 20:16:03.471461 kernel: ACPI: SSDT 0xFFFF94C240DE5000 0000F4 (v02 PmRef Cpu0Psd 00003000 INTL 20160527)
Feb 13 20:16:03.471488 kernel: ACPI: Dynamic OEM Table Load:
Feb 13 20:16:03.471493 kernel: ACPI: SSDT 0xFFFF94C241E0F000 0005FC (v02 PmRef ApIst 00003000 INTL 20160527)
Feb 13 20:16:03.471498 kernel: ACPI: Dynamic OEM Table Load:
Feb 13 20:16:03.471518 kernel: ACPI: SSDT 0xFFFF94C240E54000 000AB0 (v02 PmRef ApPsd 00003000 INTL 20160527)
Feb 13 20:16:03.471523 kernel: ACPI: Dynamic OEM Table Load:
Feb 13 20:16:03.471528 kernel: ACPI: SSDT 0xFFFF94C241EC0C00 00030A (v02 PmRef ApCst 00003000 INTL 20160527)
Feb 13 20:16:03.471535 kernel: ACPI: _OSC evaluated successfully for all CPUs
Feb 13 20:16:03.471540 kernel: ACPI: Interpreter enabled
Feb 13 20:16:03.471545 kernel: ACPI: PM: (supports S0 S5)
Feb 13 20:16:03.471550 kernel: ACPI: Using IOAPIC for interrupt routing
Feb 13 20:16:03.471556 kernel: HEST: Enabling Firmware First mode for corrected errors.
Feb 13 20:16:03.471561 kernel: mce: [Firmware Bug]: Ignoring request to disable invalid MCA bank 14.
Feb 13 20:16:03.471566 kernel: HEST: Table parsing has been initialized.
Feb 13 20:16:03.471571 kernel: GHES: APEI firmware first mode is enabled by APEI bit and WHEA _OSC.
Feb 13 20:16:03.471577 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Feb 13 20:16:03.471583 kernel: PCI: Using E820 reservations for host bridge windows
Feb 13 20:16:03.471588 kernel: ACPI: Enabled 9 GPEs in block 00 to 7F
Feb 13 20:16:03.471594 kernel: ACPI: \_SB_.PCI0.XDCI.USBC: New power resource
Feb 13 20:16:03.471599 kernel: ACPI: \_SB_.PCI0.SAT0.VOL0.V0PR: New power resource
Feb 13 20:16:03.471605 kernel: ACPI: \_SB_.PCI0.SAT0.VOL1.V1PR: New power resource
Feb 13 20:16:03.471610 kernel: ACPI: \_SB_.PCI0.SAT0.VOL2.V2PR: New power resource
Feb 13 20:16:03.471615 kernel: ACPI: \_SB_.PCI0.CNVW.WRST: New power resource
Feb 13 20:16:03.471621 kernel: ACPI: \_TZ_.FN00: New power resource
Feb 13 20:16:03.471626 kernel: ACPI: \_TZ_.FN01: New power resource
Feb 13 20:16:03.471631 kernel: ACPI: \_TZ_.FN02: New power resource
Feb 13 20:16:03.471638 kernel: ACPI: \_TZ_.FN03: New power resource
Feb 13 20:16:03.471643 kernel: ACPI: \_TZ_.FN04: New power resource
Feb 13 20:16:03.471648 kernel: ACPI: \PIN_: New power resource
Feb 13 20:16:03.471654 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-fe])
Feb 13 20:16:03.471724 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Feb 13 20:16:03.471776 kernel: acpi PNP0A08:00: _OSC: platform does not support [AER]
Feb 13 20:16:03.471822 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability LTR]
Feb 13 20:16:03.471831 kernel: PCI host bridge to bus 0000:00
Feb 13 20:16:03.471882 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Feb 13 20:16:03.471925 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Feb 13 20:16:03.471966 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Feb 13 20:16:03.472007 kernel: pci_bus 0000:00: root bus resource [mem 0x90000000-0xdfffffff window]
Feb 13 20:16:03.472047 kernel: pci_bus 0000:00: root bus resource [mem 0xfc800000-0xfe7fffff window]
Feb 13 20:16:03.472087 kernel: pci_bus 0000:00: root bus resource [bus 00-fe]
Feb 13 20:16:03.472144 kernel: pci 0000:00:00.0: [8086:3e31] type 00 class 0x060000
Feb 13 20:16:03.472200 kernel: pci 0000:00:01.0: [8086:1901] type 01 class 0x060400
Feb 13 20:16:03.472248 kernel: pci 0000:00:01.0: PME# supported from D0 D3hot D3cold
Feb 13 20:16:03.472300 kernel: pci 0000:00:08.0: [8086:1911] type 00 class 0x088000
Feb 13 20:16:03.472346 kernel: pci 0000:00:08.0: reg 0x10: [mem 0x9551f000-0x9551ffff 64bit]
Feb 13 20:16:03.472397 kernel: pci 0000:00:12.0: [8086:a379] type 00 class 0x118000
Feb 13 20:16:03.472445 kernel: pci 0000:00:12.0: reg 0x10: [mem 0x9551e000-0x9551efff 64bit]
Feb 13 20:16:03.472540 kernel: pci 0000:00:14.0: [8086:a36d] type 00 class 0x0c0330
Feb 13 20:16:03.472586 kernel: pci 0000:00:14.0: reg 0x10: [mem 0x95500000-0x9550ffff 64bit]
Feb 13 20:16:03.472632 kernel: pci 0000:00:14.0: PME# supported from D3hot D3cold
Feb 13 20:16:03.472682 kernel: pci 0000:00:14.2: [8086:a36f] type 00 class 0x050000
Feb 13 20:16:03.472728 kernel: pci 0000:00:14.2: reg 0x10: [mem 0x95512000-0x95513fff 64bit]
Feb 13 20:16:03.472776 kernel: pci 0000:00:14.2: reg 0x18: [mem 0x9551d000-0x9551dfff 64bit]
Feb 13 20:16:03.472826 kernel: pci 0000:00:15.0: [8086:a368] type 00 class 0x0c8000
Feb 13 20:16:03.472872 kernel: pci 0000:00:15.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit]
Feb 13 20:16:03.472924 kernel: pci 0000:00:15.1: [8086:a369] type 00 class 0x0c8000
Feb 13 20:16:03.472971 kernel: pci 0000:00:15.1: reg 0x10: [mem 0x00000000-0x00000fff 64bit]
Feb 13 20:16:03.473020 kernel: pci 0000:00:16.0: [8086:a360] type 00 class 0x078000
Feb 13 20:16:03.473068 kernel: pci 0000:00:16.0: reg 0x10: [mem 0x9551a000-0x9551afff 64bit]
Feb 13 20:16:03.473116 kernel: pci 0000:00:16.0: PME# supported from D3hot
Feb 13 20:16:03.473173 kernel: pci 0000:00:16.1: [8086:a361] type 00 class 0x078000
Feb 13 20:16:03.473222 kernel: pci 0000:00:16.1: reg 0x10: [mem 0x95519000-0x95519fff 64bit]
Feb 13 20:16:03.473269 kernel: pci 0000:00:16.1: PME# supported from D3hot
Feb 13 20:16:03.473318 kernel: pci 0000:00:16.4: [8086:a364] type 00 class 0x078000
Feb 13 20:16:03.473365 kernel: pci 0000:00:16.4: reg 0x10: [mem 0x95518000-0x95518fff 64bit]
Feb 13 20:16:03.473414 kernel: pci 0000:00:16.4: PME# supported from D3hot
Feb 13 20:16:03.473484 kernel: pci 0000:00:17.0: [8086:a352] type 00 class 0x010601
Feb 13 20:16:03.473548 kernel: pci 0000:00:17.0: reg 0x10: [mem 0x95510000-0x95511fff]
Feb 13 20:16:03.473593 kernel: pci 0000:00:17.0: reg 0x14: [mem 0x95517000-0x955170ff]
Feb 13 20:16:03.473640 kernel: pci 0000:00:17.0: reg 0x18: [io 0x6050-0x6057]
Feb 13 20:16:03.473685 kernel: pci 0000:00:17.0: reg 0x1c: [io 0x6040-0x6043]
Feb 13 20:16:03.473731 kernel: pci 0000:00:17.0: reg 0x20: [io 0x6020-0x603f]
Feb 13 20:16:03.473780 kernel: pci 0000:00:17.0: reg 0x24: [mem 0x95516000-0x955167ff]
Feb 13 20:16:03.473826 kernel: pci 0000:00:17.0: PME# supported from D3hot
Feb 13 20:16:03.473876 kernel: pci 0000:00:1b.0: [8086:a340] type 01 class 0x060400
Feb 13 20:16:03.473924 kernel: pci 0000:00:1b.0: PME# supported from D0 D3hot D3cold
Feb 13 20:16:03.473979 kernel: pci 0000:00:1b.4: [8086:a32c] type 01 class 0x060400
Feb 13 20:16:03.474028 kernel: pci 0000:00:1b.4: PME# supported from D0 D3hot D3cold
Feb 13 20:16:03.474078 kernel: pci 0000:00:1b.5: [8086:a32d] type 01 class 0x060400
Feb 13 20:16:03.474126 kernel: pci 0000:00:1b.5: PME# supported from D0 D3hot D3cold
Feb 13 20:16:03.474175 kernel: pci 0000:00:1c.0: [8086:a338] type 01 class 0x060400
Feb 13 20:16:03.474223 kernel: pci 0000:00:1c.0: PME# supported from D0 D3hot D3cold
Feb 13 20:16:03.474276 kernel: pci 0000:00:1c.3: [8086:a33b] type 01 class 0x060400
Feb 13 20:16:03.474323 kernel: pci 0000:00:1c.3: PME# supported from D0 D3hot D3cold
Feb 13 20:16:03.474374 kernel: pci 0000:00:1e.0: [8086:a328] type 00 class 0x078000
Feb 13 20:16:03.474420 kernel: pci 0000:00:1e.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit]
Feb 13 20:16:03.474492 kernel: pci 0000:00:1f.0: [8086:a309] type 00 class 0x060100
Feb 13 20:16:03.474557 kernel: pci 0000:00:1f.4: [8086:a323] type 00 class 0x0c0500
Feb 13 20:16:03.474607 kernel: pci 0000:00:1f.4: reg 0x10: [mem 0x95514000-0x955140ff 64bit]
Feb 13 20:16:03.474652 kernel: pci 0000:00:1f.4: reg 0x20: [io 0xefa0-0xefbf]
Feb 13 20:16:03.474704 kernel: pci 0000:00:1f.5: [8086:a324] type 00 class 0x0c8000
Feb 13 20:16:03.474750 kernel: pci 0000:00:1f.5: reg 0x10: [mem 0xfe010000-0xfe010fff]
Feb 13 20:16:03.474803 kernel: pci 0000:01:00.0: [15b3:1015] type 00 class 0x020000
Feb 13 20:16:03.474850 kernel: pci 0000:01:00.0: reg 0x10: [mem 0x92000000-0x93ffffff 64bit pref]
Feb 13 20:16:03.474901 kernel: pci 0000:01:00.0: reg 0x30: [mem 0x95200000-0x952fffff pref]
Feb 13 20:16:03.474949 kernel: pci 0000:01:00.0: PME# supported from D3cold
Feb 13 20:16:03.474996 kernel: pci 0000:01:00.0: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref]
Feb 13 20:16:03.475044 kernel: pci 0000:01:00.0: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs)
Feb 13 20:16:03.475095 kernel: pci 0000:01:00.1: [15b3:1015] type 00 class 0x020000
Feb 13 20:16:03.475145 kernel: pci 0000:01:00.1: reg 0x10: [mem 0x90000000-0x91ffffff 64bit pref]
Feb 13 20:16:03.475192 kernel: pci 0000:01:00.1: reg 0x30: [mem 0x95100000-0x951fffff pref]
Feb 13 20:16:03.475242 kernel: pci 0000:01:00.1: PME# supported from D3cold
Feb 13 20:16:03.475290 kernel: pci 0000:01:00.1: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref]
Feb 13 20:16:03.475337 kernel: pci 0000:01:00.1: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs)
Feb 13 20:16:03.475385 kernel: pci 0000:00:01.0: PCI bridge to [bus 01]
Feb 13 20:16:03.475431 kernel: pci 0000:00:01.0:   bridge window [mem 0x95100000-0x952fffff]
Feb 13 20:16:03.475501 kernel: pci 0000:00:01.0:   bridge window [mem 0x90000000-0x93ffffff 64bit pref]
Feb 13 20:16:03.475563
kernel: pci 0000:00:1b.0: PCI bridge to [bus 02] Feb 13 20:16:03.475616 kernel: pci 0000:03:00.0: working around ROM BAR overlap defect Feb 13 20:16:03.475666 kernel: pci 0000:03:00.0: [8086:1533] type 00 class 0x020000 Feb 13 20:16:03.475715 kernel: pci 0000:03:00.0: reg 0x10: [mem 0x95400000-0x9547ffff] Feb 13 20:16:03.475761 kernel: pci 0000:03:00.0: reg 0x18: [io 0x5000-0x501f] Feb 13 20:16:03.475809 kernel: pci 0000:03:00.0: reg 0x1c: [mem 0x95480000-0x95483fff] Feb 13 20:16:03.475857 kernel: pci 0000:03:00.0: PME# supported from D0 D3hot D3cold Feb 13 20:16:03.475904 kernel: pci 0000:00:1b.4: PCI bridge to [bus 03] Feb 13 20:16:03.475951 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff] Feb 13 20:16:03.475999 kernel: pci 0000:00:1b.4: bridge window [mem 0x95400000-0x954fffff] Feb 13 20:16:03.476050 kernel: pci 0000:04:00.0: working around ROM BAR overlap defect Feb 13 20:16:03.476098 kernel: pci 0000:04:00.0: [8086:1533] type 00 class 0x020000 Feb 13 20:16:03.476146 kernel: pci 0000:04:00.0: reg 0x10: [mem 0x95300000-0x9537ffff] Feb 13 20:16:03.476193 kernel: pci 0000:04:00.0: reg 0x18: [io 0x4000-0x401f] Feb 13 20:16:03.476241 kernel: pci 0000:04:00.0: reg 0x1c: [mem 0x95380000-0x95383fff] Feb 13 20:16:03.476289 kernel: pci 0000:04:00.0: PME# supported from D0 D3hot D3cold Feb 13 20:16:03.476338 kernel: pci 0000:00:1b.5: PCI bridge to [bus 04] Feb 13 20:16:03.476385 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff] Feb 13 20:16:03.476431 kernel: pci 0000:00:1b.5: bridge window [mem 0x95300000-0x953fffff] Feb 13 20:16:03.476507 kernel: pci 0000:00:1c.0: PCI bridge to [bus 05] Feb 13 20:16:03.476582 kernel: pci 0000:06:00.0: [1a03:1150] type 01 class 0x060400 Feb 13 20:16:03.476630 kernel: pci 0000:06:00.0: enabling Extended Tags Feb 13 20:16:03.476678 kernel: pci 0000:06:00.0: supports D1 D2 Feb 13 20:16:03.476727 kernel: pci 0000:06:00.0: PME# supported from D0 D1 D2 D3hot D3cold Feb 13 20:16:03.476775 kernel: pci 0000:00:1c.3: PCI bridge 
to [bus 06-07] Feb 13 20:16:03.476822 kernel: pci 0000:00:1c.3: bridge window [io 0x3000-0x3fff] Feb 13 20:16:03.476870 kernel: pci 0000:00:1c.3: bridge window [mem 0x94000000-0x950fffff] Feb 13 20:16:03.476923 kernel: pci_bus 0000:07: extended config space not accessible Feb 13 20:16:03.476977 kernel: pci 0000:07:00.0: [1a03:2000] type 00 class 0x030000 Feb 13 20:16:03.477027 kernel: pci 0000:07:00.0: reg 0x10: [mem 0x94000000-0x94ffffff] Feb 13 20:16:03.477077 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x95000000-0x9501ffff] Feb 13 20:16:03.477128 kernel: pci 0000:07:00.0: reg 0x18: [io 0x3000-0x307f] Feb 13 20:16:03.477179 kernel: pci 0000:07:00.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Feb 13 20:16:03.477229 kernel: pci 0000:07:00.0: supports D1 D2 Feb 13 20:16:03.477306 kernel: pci 0000:07:00.0: PME# supported from D0 D1 D2 D3hot D3cold Feb 13 20:16:03.477370 kernel: pci 0000:06:00.0: PCI bridge to [bus 07] Feb 13 20:16:03.477418 kernel: pci 0000:06:00.0: bridge window [io 0x3000-0x3fff] Feb 13 20:16:03.477470 kernel: pci 0000:06:00.0: bridge window [mem 0x94000000-0x950fffff] Feb 13 20:16:03.477502 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 0 Feb 13 20:16:03.477508 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 1 Feb 13 20:16:03.477514 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 0 Feb 13 20:16:03.477520 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 0 Feb 13 20:16:03.477525 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 0 Feb 13 20:16:03.477531 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 0 Feb 13 20:16:03.477537 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 0 Feb 13 20:16:03.477542 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 0 Feb 13 20:16:03.477548 kernel: iommu: Default domain type: Translated Feb 13 20:16:03.477555 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Feb 13 20:16:03.477561 kernel: PCI: Using ACPI for IRQ 
routing Feb 13 20:16:03.477566 kernel: PCI: pci_cache_line_size set to 64 bytes Feb 13 20:16:03.477572 kernel: e820: reserve RAM buffer [mem 0x00099800-0x0009ffff] Feb 13 20:16:03.477577 kernel: e820: reserve RAM buffer [mem 0x81b26000-0x83ffffff] Feb 13 20:16:03.477583 kernel: e820: reserve RAM buffer [mem 0x8afcd000-0x8bffffff] Feb 13 20:16:03.477588 kernel: e820: reserve RAM buffer [mem 0x8c23b000-0x8fffffff] Feb 13 20:16:03.477594 kernel: e820: reserve RAM buffer [mem 0x8ef00000-0x8fffffff] Feb 13 20:16:03.477599 kernel: e820: reserve RAM buffer [mem 0x86f000000-0x86fffffff] Feb 13 20:16:03.477649 kernel: pci 0000:07:00.0: vgaarb: setting as boot VGA device Feb 13 20:16:03.477701 kernel: pci 0000:07:00.0: vgaarb: bridge control possible Feb 13 20:16:03.477750 kernel: pci 0000:07:00.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Feb 13 20:16:03.477758 kernel: vgaarb: loaded Feb 13 20:16:03.477764 kernel: clocksource: Switched to clocksource tsc-early Feb 13 20:16:03.477770 kernel: VFS: Disk quotas dquot_6.6.0 Feb 13 20:16:03.477776 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Feb 13 20:16:03.477781 kernel: pnp: PnP ACPI init Feb 13 20:16:03.477827 kernel: system 00:00: [mem 0x40000000-0x403fffff] has been reserved Feb 13 20:16:03.477877 kernel: pnp 00:02: [dma 0 disabled] Feb 13 20:16:03.477924 kernel: pnp 00:03: [dma 0 disabled] Feb 13 20:16:03.477972 kernel: system 00:04: [io 0x0680-0x069f] has been reserved Feb 13 20:16:03.478015 kernel: system 00:04: [io 0x164e-0x164f] has been reserved Feb 13 20:16:03.478060 kernel: system 00:05: [io 0x1854-0x1857] has been reserved Feb 13 20:16:03.478105 kernel: system 00:06: [mem 0xfed10000-0xfed17fff] has been reserved Feb 13 20:16:03.478150 kernel: system 00:06: [mem 0xfed18000-0xfed18fff] has been reserved Feb 13 20:16:03.478192 kernel: system 00:06: [mem 0xfed19000-0xfed19fff] has been reserved Feb 13 20:16:03.478235 kernel: system 00:06: [mem 0xe0000000-0xefffffff] has 
been reserved Feb 13 20:16:03.478280 kernel: system 00:06: [mem 0xfed20000-0xfed3ffff] has been reserved Feb 13 20:16:03.478322 kernel: system 00:06: [mem 0xfed90000-0xfed93fff] could not be reserved Feb 13 20:16:03.478364 kernel: system 00:06: [mem 0xfed45000-0xfed8ffff] has been reserved Feb 13 20:16:03.478407 kernel: system 00:06: [mem 0xfee00000-0xfeefffff] could not be reserved Feb 13 20:16:03.478462 kernel: system 00:07: [io 0x1800-0x18fe] could not be reserved Feb 13 20:16:03.478545 kernel: system 00:07: [mem 0xfd000000-0xfd69ffff] has been reserved Feb 13 20:16:03.478602 kernel: system 00:07: [mem 0xfd6c0000-0xfd6cffff] has been reserved Feb 13 20:16:03.478658 kernel: system 00:07: [mem 0xfd6f0000-0xfdffffff] has been reserved Feb 13 20:16:03.478699 kernel: system 00:07: [mem 0xfe000000-0xfe01ffff] could not be reserved Feb 13 20:16:03.478741 kernel: system 00:07: [mem 0xfe200000-0xfe7fffff] has been reserved Feb 13 20:16:03.478782 kernel: system 00:07: [mem 0xff000000-0xffffffff] has been reserved Feb 13 20:16:03.478830 kernel: system 00:08: [io 0x2000-0x20fe] has been reserved Feb 13 20:16:03.478839 kernel: pnp: PnP ACPI: found 10 devices Feb 13 20:16:03.478845 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Feb 13 20:16:03.478850 kernel: NET: Registered PF_INET protocol family Feb 13 20:16:03.478856 kernel: IP idents hash table entries: 262144 (order: 9, 2097152 bytes, linear) Feb 13 20:16:03.478862 kernel: tcp_listen_portaddr_hash hash table entries: 16384 (order: 6, 262144 bytes, linear) Feb 13 20:16:03.478868 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Feb 13 20:16:03.478873 kernel: TCP established hash table entries: 262144 (order: 9, 2097152 bytes, linear) Feb 13 20:16:03.478881 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Feb 13 20:16:03.478886 kernel: TCP: Hash tables configured (established 262144 bind 65536) Feb 13 20:16:03.478892 
kernel: UDP hash table entries: 16384 (order: 7, 524288 bytes, linear) Feb 13 20:16:03.478898 kernel: UDP-Lite hash table entries: 16384 (order: 7, 524288 bytes, linear) Feb 13 20:16:03.478903 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Feb 13 20:16:03.478909 kernel: NET: Registered PF_XDP protocol family Feb 13 20:16:03.478957 kernel: pci 0000:00:15.0: BAR 0: assigned [mem 0x95515000-0x95515fff 64bit] Feb 13 20:16:03.479005 kernel: pci 0000:00:15.1: BAR 0: assigned [mem 0x9551b000-0x9551bfff 64bit] Feb 13 20:16:03.479054 kernel: pci 0000:00:1e.0: BAR 0: assigned [mem 0x9551c000-0x9551cfff 64bit] Feb 13 20:16:03.479102 kernel: pci 0000:01:00.0: BAR 7: no space for [mem size 0x00800000 64bit pref] Feb 13 20:16:03.479151 kernel: pci 0000:01:00.0: BAR 7: failed to assign [mem size 0x00800000 64bit pref] Feb 13 20:16:03.479199 kernel: pci 0000:01:00.1: BAR 7: no space for [mem size 0x00800000 64bit pref] Feb 13 20:16:03.479248 kernel: pci 0000:01:00.1: BAR 7: failed to assign [mem size 0x00800000 64bit pref] Feb 13 20:16:03.479294 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Feb 13 20:16:03.479342 kernel: pci 0000:00:01.0: bridge window [mem 0x95100000-0x952fffff] Feb 13 20:16:03.479387 kernel: pci 0000:00:01.0: bridge window [mem 0x90000000-0x93ffffff 64bit pref] Feb 13 20:16:03.479436 kernel: pci 0000:00:1b.0: PCI bridge to [bus 02] Feb 13 20:16:03.479518 kernel: pci 0000:00:1b.4: PCI bridge to [bus 03] Feb 13 20:16:03.479566 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff] Feb 13 20:16:03.479612 kernel: pci 0000:00:1b.4: bridge window [mem 0x95400000-0x954fffff] Feb 13 20:16:03.479658 kernel: pci 0000:00:1b.5: PCI bridge to [bus 04] Feb 13 20:16:03.479708 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff] Feb 13 20:16:03.479753 kernel: pci 0000:00:1b.5: bridge window [mem 0x95300000-0x953fffff] Feb 13 20:16:03.479800 kernel: pci 0000:00:1c.0: PCI bridge to [bus 05] Feb 13 20:16:03.479847 kernel: pci 0000:06:00.0: PCI bridge to [bus 
07] Feb 13 20:16:03.479895 kernel: pci 0000:06:00.0: bridge window [io 0x3000-0x3fff] Feb 13 20:16:03.479972 kernel: pci 0000:06:00.0: bridge window [mem 0x94000000-0x950fffff] Feb 13 20:16:03.480019 kernel: pci 0000:00:1c.3: PCI bridge to [bus 06-07] Feb 13 20:16:03.480065 kernel: pci 0000:00:1c.3: bridge window [io 0x3000-0x3fff] Feb 13 20:16:03.480112 kernel: pci 0000:00:1c.3: bridge window [mem 0x94000000-0x950fffff] Feb 13 20:16:03.480156 kernel: pci_bus 0000:00: Some PCI device resources are unassigned, try booting with pci=realloc Feb 13 20:16:03.480198 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Feb 13 20:16:03.480239 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Feb 13 20:16:03.480280 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Feb 13 20:16:03.480321 kernel: pci_bus 0000:00: resource 7 [mem 0x90000000-0xdfffffff window] Feb 13 20:16:03.480362 kernel: pci_bus 0000:00: resource 8 [mem 0xfc800000-0xfe7fffff window] Feb 13 20:16:03.480408 kernel: pci_bus 0000:01: resource 1 [mem 0x95100000-0x952fffff] Feb 13 20:16:03.480457 kernel: pci_bus 0000:01: resource 2 [mem 0x90000000-0x93ffffff 64bit pref] Feb 13 20:16:03.480540 kernel: pci_bus 0000:03: resource 0 [io 0x5000-0x5fff] Feb 13 20:16:03.480583 kernel: pci_bus 0000:03: resource 1 [mem 0x95400000-0x954fffff] Feb 13 20:16:03.480629 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff] Feb 13 20:16:03.480672 kernel: pci_bus 0000:04: resource 1 [mem 0x95300000-0x953fffff] Feb 13 20:16:03.480719 kernel: pci_bus 0000:06: resource 0 [io 0x3000-0x3fff] Feb 13 20:16:03.480764 kernel: pci_bus 0000:06: resource 1 [mem 0x94000000-0x950fffff] Feb 13 20:16:03.480809 kernel: pci_bus 0000:07: resource 0 [io 0x3000-0x3fff] Feb 13 20:16:03.480853 kernel: pci_bus 0000:07: resource 1 [mem 0x94000000-0x950fffff] Feb 13 20:16:03.480861 kernel: PCI: CLS 64 bytes, default 64 Feb 13 20:16:03.480867 kernel: DMAR: No ATSR found Feb 13 20:16:03.480873 kernel: DMAR: No SATC 
found Feb 13 20:16:03.480879 kernel: DMAR: dmar0: Using Queued invalidation Feb 13 20:16:03.480926 kernel: pci 0000:00:00.0: Adding to iommu group 0 Feb 13 20:16:03.480972 kernel: pci 0000:00:01.0: Adding to iommu group 1 Feb 13 20:16:03.481022 kernel: pci 0000:00:08.0: Adding to iommu group 2 Feb 13 20:16:03.481068 kernel: pci 0000:00:12.0: Adding to iommu group 3 Feb 13 20:16:03.481115 kernel: pci 0000:00:14.0: Adding to iommu group 4 Feb 13 20:16:03.481161 kernel: pci 0000:00:14.2: Adding to iommu group 4 Feb 13 20:16:03.481208 kernel: pci 0000:00:15.0: Adding to iommu group 5 Feb 13 20:16:03.481254 kernel: pci 0000:00:15.1: Adding to iommu group 5 Feb 13 20:16:03.481299 kernel: pci 0000:00:16.0: Adding to iommu group 6 Feb 13 20:16:03.481375 kernel: pci 0000:00:16.1: Adding to iommu group 6 Feb 13 20:16:03.481423 kernel: pci 0000:00:16.4: Adding to iommu group 6 Feb 13 20:16:03.481473 kernel: pci 0000:00:17.0: Adding to iommu group 7 Feb 13 20:16:03.481519 kernel: pci 0000:00:1b.0: Adding to iommu group 8 Feb 13 20:16:03.481566 kernel: pci 0000:00:1b.4: Adding to iommu group 9 Feb 13 20:16:03.481612 kernel: pci 0000:00:1b.5: Adding to iommu group 10 Feb 13 20:16:03.481659 kernel: pci 0000:00:1c.0: Adding to iommu group 11 Feb 13 20:16:03.481704 kernel: pci 0000:00:1c.3: Adding to iommu group 12 Feb 13 20:16:03.481750 kernel: pci 0000:00:1e.0: Adding to iommu group 13 Feb 13 20:16:03.481798 kernel: pci 0000:00:1f.0: Adding to iommu group 14 Feb 13 20:16:03.481846 kernel: pci 0000:00:1f.4: Adding to iommu group 14 Feb 13 20:16:03.481892 kernel: pci 0000:00:1f.5: Adding to iommu group 14 Feb 13 20:16:03.481940 kernel: pci 0000:01:00.0: Adding to iommu group 1 Feb 13 20:16:03.481988 kernel: pci 0000:01:00.1: Adding to iommu group 1 Feb 13 20:16:03.482037 kernel: pci 0000:03:00.0: Adding to iommu group 15 Feb 13 20:16:03.482086 kernel: pci 0000:04:00.0: Adding to iommu group 16 Feb 13 20:16:03.482132 kernel: pci 0000:06:00.0: Adding to iommu group 17 Feb 13 
20:16:03.482184 kernel: pci 0000:07:00.0: Adding to iommu group 17 Feb 13 20:16:03.482192 kernel: DMAR: Intel(R) Virtualization Technology for Directed I/O Feb 13 20:16:03.482198 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Feb 13 20:16:03.482204 kernel: software IO TLB: mapped [mem 0x0000000086fcd000-0x000000008afcd000] (64MB) Feb 13 20:16:03.482209 kernel: RAPL PMU: API unit is 2^-32 Joules, 3 fixed counters, 655360 ms ovfl timer Feb 13 20:16:03.482215 kernel: RAPL PMU: hw unit of domain pp0-core 2^-14 Joules Feb 13 20:16:03.482221 kernel: RAPL PMU: hw unit of domain package 2^-14 Joules Feb 13 20:16:03.482226 kernel: RAPL PMU: hw unit of domain dram 2^-14 Joules Feb 13 20:16:03.482275 kernel: platform rtc_cmos: registered platform RTC device (no PNP device found) Feb 13 20:16:03.482285 kernel: Initialise system trusted keyrings Feb 13 20:16:03.482291 kernel: workingset: timestamp_bits=39 max_order=23 bucket_order=0 Feb 13 20:16:03.482296 kernel: Key type asymmetric registered Feb 13 20:16:03.482302 kernel: Asymmetric key parser 'x509' registered Feb 13 20:16:03.482307 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Feb 13 20:16:03.482313 kernel: io scheduler mq-deadline registered Feb 13 20:16:03.482319 kernel: io scheduler kyber registered Feb 13 20:16:03.482324 kernel: io scheduler bfq registered Feb 13 20:16:03.482371 kernel: pcieport 0000:00:01.0: PME: Signaling with IRQ 121 Feb 13 20:16:03.482418 kernel: pcieport 0000:00:1b.0: PME: Signaling with IRQ 122 Feb 13 20:16:03.482487 kernel: pcieport 0000:00:1b.4: PME: Signaling with IRQ 123 Feb 13 20:16:03.482548 kernel: pcieport 0000:00:1b.5: PME: Signaling with IRQ 124 Feb 13 20:16:03.482594 kernel: pcieport 0000:00:1c.0: PME: Signaling with IRQ 125 Feb 13 20:16:03.482640 kernel: pcieport 0000:00:1c.3: PME: Signaling with IRQ 126 Feb 13 20:16:03.482690 kernel: thermal LNXTHERM:00: registered as thermal_zone0 Feb 13 20:16:03.482716 kernel: ACPI: thermal: Thermal 
Zone [TZ00] (28 C) Feb 13 20:16:03.482722 kernel: ERST: Error Record Serialization Table (ERST) support is initialized. Feb 13 20:16:03.482741 kernel: pstore: Using crash dump compression: deflate Feb 13 20:16:03.482747 kernel: pstore: Registered erst as persistent store backend Feb 13 20:16:03.482753 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Feb 13 20:16:03.482758 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Feb 13 20:16:03.482764 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Feb 13 20:16:03.482770 kernel: 00:03: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Feb 13 20:16:03.482775 kernel: hpet_acpi_add: no address or irqs in _CRS Feb 13 20:16:03.482826 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x1B, rev-id 16) Feb 13 20:16:03.482834 kernel: i8042: PNP: No PS/2 controller found. Feb 13 20:16:03.482876 kernel: rtc_cmos rtc_cmos: RTC can wake from S4 Feb 13 20:16:03.482919 kernel: rtc_cmos rtc_cmos: registered as rtc0 Feb 13 20:16:03.482961 kernel: rtc_cmos rtc_cmos: setting system clock to 2025-02-13T20:16:02 UTC (1739477762) Feb 13 20:16:03.483005 kernel: rtc_cmos rtc_cmos: alarms up to one month, y3k, 114 bytes nvram Feb 13 20:16:03.483013 kernel: intel_pstate: Intel P-state driver initializing Feb 13 20:16:03.483019 kernel: intel_pstate: Disabling energy efficiency optimization Feb 13 20:16:03.483026 kernel: intel_pstate: HWP enabled Feb 13 20:16:03.483032 kernel: NET: Registered PF_INET6 protocol family Feb 13 20:16:03.483038 kernel: Segment Routing with IPv6 Feb 13 20:16:03.483044 kernel: In-situ OAM (IOAM) with IPv6 Feb 13 20:16:03.483049 kernel: NET: Registered PF_PACKET protocol family Feb 13 20:16:03.483055 kernel: Key type dns_resolver registered Feb 13 20:16:03.483060 kernel: microcode: Microcode Update Driver: v2.2. 
Feb 13 20:16:03.483066 kernel: IPI shorthand broadcast: enabled Feb 13 20:16:03.483072 kernel: sched_clock: Marking stable (2492000736, 1449268047)->(4504534218, -563265435) Feb 13 20:16:03.483078 kernel: registered taskstats version 1 Feb 13 20:16:03.483084 kernel: Loading compiled-in X.509 certificates Feb 13 20:16:03.483090 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 0cc219a306b9e46e583adebba1820decbdc4307b' Feb 13 20:16:03.483095 kernel: Key type .fscrypt registered Feb 13 20:16:03.483101 kernel: Key type fscrypt-provisioning registered Feb 13 20:16:03.483107 kernel: ima: Allocated hash algorithm: sha1 Feb 13 20:16:03.483112 kernel: ima: No architecture policies found Feb 13 20:16:03.483118 kernel: clk: Disabling unused clocks Feb 13 20:16:03.483124 kernel: Freeing unused kernel image (initmem) memory: 42976K Feb 13 20:16:03.483130 kernel: Write protecting the kernel read-only data: 36864k Feb 13 20:16:03.483136 kernel: Freeing unused kernel image (rodata/data gap) memory: 1840K Feb 13 20:16:03.483142 kernel: Run /init as init process Feb 13 20:16:03.483147 kernel: with arguments: Feb 13 20:16:03.483153 kernel: /init Feb 13 20:16:03.483158 kernel: with environment: Feb 13 20:16:03.483164 kernel: HOME=/ Feb 13 20:16:03.483169 kernel: TERM=linux Feb 13 20:16:03.483175 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Feb 13 20:16:03.483183 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Feb 13 20:16:03.483190 systemd[1]: Detected architecture x86-64. Feb 13 20:16:03.483196 systemd[1]: Running in initrd. Feb 13 20:16:03.483202 systemd[1]: No hostname configured, using default hostname. Feb 13 20:16:03.483208 systemd[1]: Hostname set to . 
Feb 13 20:16:03.483213 systemd[1]: Initializing machine ID from random generator. Feb 13 20:16:03.483219 systemd[1]: Queued start job for default target initrd.target. Feb 13 20:16:03.483226 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 20:16:03.483232 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 20:16:03.483238 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Feb 13 20:16:03.483244 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Feb 13 20:16:03.483250 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Feb 13 20:16:03.483256 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Feb 13 20:16:03.483262 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Feb 13 20:16:03.483270 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Feb 13 20:16:03.483276 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 20:16:03.483282 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 13 20:16:03.483288 systemd[1]: Reached target paths.target - Path Units. Feb 13 20:16:03.483294 systemd[1]: Reached target slices.target - Slice Units. Feb 13 20:16:03.483300 systemd[1]: Reached target swap.target - Swaps. Feb 13 20:16:03.483306 systemd[1]: Reached target timers.target - Timer Units. Feb 13 20:16:03.483312 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Feb 13 20:16:03.483319 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 13 20:16:03.483325 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). 
Feb 13 20:16:03.483330 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Feb 13 20:16:03.483336 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Feb 13 20:16:03.483342 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Feb 13 20:16:03.483348 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 20:16:03.483354 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 20:16:03.483360 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Feb 13 20:16:03.483367 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Feb 13 20:16:03.483373 kernel: tsc: Refined TSC clocksource calibration: 3407.998 MHz Feb 13 20:16:03.483379 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd208cfc, max_idle_ns: 440795283699 ns Feb 13 20:16:03.483384 kernel: clocksource: Switched to clocksource tsc Feb 13 20:16:03.483390 systemd[1]: Finished network-cleanup.service - Network Cleanup. Feb 13 20:16:03.483396 systemd[1]: Starting systemd-fsck-usr.service... Feb 13 20:16:03.483402 systemd[1]: Starting systemd-journald.service - Journal Service... Feb 13 20:16:03.483418 systemd-journald[267]: Collecting audit messages is disabled. Feb 13 20:16:03.483433 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Feb 13 20:16:03.483440 systemd-journald[267]: Journal started Feb 13 20:16:03.483463 systemd-journald[267]: Runtime Journal (/run/log/journal/4bca16d8887646f5b1ed4ca2f5dfa50a) is 8.0M, max 639.9M, 631.9M free. Feb 13 20:16:03.495381 systemd-modules-load[269]: Inserted module 'overlay' Feb 13 20:16:03.496454 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 20:16:03.511451 systemd[1]: Started systemd-journald.service - Journal Service. Feb 13 20:16:03.518657 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. 
Feb 13 20:16:03.518803 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 20:16:03.518909 systemd[1]: Finished systemd-fsck-usr.service. Feb 13 20:16:03.519881 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Feb 13 20:16:03.533487 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Feb 13 20:16:03.533694 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Feb 13 20:16:03.559426 kernel: Bridge firewalling registered Feb 13 20:16:03.534824 systemd-modules-load[269]: Inserted module 'br_netfilter' Feb 13 20:16:03.559639 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Feb 13 20:16:03.648819 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 20:16:03.658809 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 20:16:03.689770 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 20:16:03.730863 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 20:16:03.743559 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 20:16:03.745275 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Feb 13 20:16:03.768874 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 20:16:03.769491 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 20:16:03.773707 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Feb 13 20:16:03.777688 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Feb 13 20:16:03.789011 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Feb 13 20:16:03.790380 systemd-resolved[306]: Positive Trust Anchors: Feb 13 20:16:03.790386 systemd-resolved[306]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 20:16:03.790410 systemd-resolved[306]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 20:16:03.910829 dracut-cmdline[309]: dracut-dracut-053 Feb 13 20:16:03.910829 dracut-cmdline[309]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=ed9b5d8ea73d2e47b8decea8124089e04dd398ef43013c1b1a5809314044b1c3 Feb 13 20:16:03.792030 systemd-resolved[306]: Defaulting to hostname 'linux'. Feb 13 20:16:03.810737 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 20:16:03.810792 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 20:16:04.027480 kernel: SCSI subsystem initialized Feb 13 20:16:04.041477 kernel: Loading iSCSI transport class v2.0-870. 
Feb 13 20:16:04.053479 kernel: iscsi: registered transport (tcp) Feb 13 20:16:04.074075 kernel: iscsi: registered transport (qla4xxx) Feb 13 20:16:04.074092 kernel: QLogic iSCSI HBA Driver Feb 13 20:16:04.097059 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Feb 13 20:16:04.124715 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Feb 13 20:16:04.159344 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Feb 13 20:16:04.159363 kernel: device-mapper: uevent: version 1.0.3 Feb 13 20:16:04.168105 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Feb 13 20:16:04.203520 kernel: raid6: avx2x4 gen() 53162 MB/s Feb 13 20:16:04.224485 kernel: raid6: avx2x2 gen() 53866 MB/s Feb 13 20:16:04.250593 kernel: raid6: avx2x1 gen() 45216 MB/s Feb 13 20:16:04.250611 kernel: raid6: using algorithm avx2x2 gen() 53866 MB/s Feb 13 20:16:04.277689 kernel: raid6: .... xor() 30799 MB/s, rmw enabled Feb 13 20:16:04.277707 kernel: raid6: using avx2x2 recovery algorithm Feb 13 20:16:04.298453 kernel: xor: automatically using best checksumming function avx Feb 13 20:16:04.403455 kernel: Btrfs loaded, zoned=no, fsverity=no Feb 13 20:16:04.409521 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Feb 13 20:16:04.429745 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 20:16:04.459920 systemd-udevd[496]: Using default interface naming scheme 'v255'. Feb 13 20:16:04.462451 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 20:16:04.497703 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Feb 13 20:16:04.517683 dracut-pre-trigger[508]: rd.md=0: removing MD RAID activation Feb 13 20:16:04.561775 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. 
Feb 13 20:16:04.590845 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 13 20:16:04.678921 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 20:16:04.705665 kernel: pps_core: LinuxPPS API ver. 1 registered Feb 13 20:16:04.705729 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Feb 13 20:16:04.706452 kernel: cryptd: max_cpu_qlen set to 1000 Feb 13 20:16:04.721481 kernel: PTP clock support registered Feb 13 20:16:04.721508 kernel: ACPI: bus type USB registered Feb 13 20:16:04.731212 kernel: usbcore: registered new interface driver usbfs Feb 13 20:16:04.732452 kernel: usbcore: registered new interface driver hub Feb 13 20:16:04.732468 kernel: usbcore: registered new device driver usb Feb 13 20:16:04.743454 kernel: libata version 3.00 loaded. Feb 13 20:16:04.745780 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Feb 13 20:16:04.889536 kernel: AVX2 version of gcm_enc/dec engaged. 
Feb 13 20:16:04.889563 kernel: AES CTR mode by8 optimization enabled Feb 13 20:16:04.889578 kernel: ahci 0000:00:17.0: version 3.0 Feb 13 20:16:04.890216 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller Feb 13 20:16:04.890320 kernel: ahci 0000:00:17.0: AHCI 0001.0301 32 slots 7 ports 6 Gbps 0x7f impl SATA mode Feb 13 20:16:04.890422 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 1 Feb 13 20:16:04.890529 kernel: ahci 0000:00:17.0: flags: 64bit ncq sntf clo only pio slum part ems deso sadm sds apst Feb 13 20:16:04.890626 kernel: xhci_hcd 0000:00:14.0: hcc params 0x200077c1 hci version 0x110 quirks 0x0000000000009810 Feb 13 20:16:04.890723 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller Feb 13 20:16:04.890821 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 2 Feb 13 20:16:04.890918 kernel: scsi host0: ahci Feb 13 20:16:04.891014 kernel: xhci_hcd 0000:00:14.0: Host supports USB 3.1 Enhanced SuperSpeed Feb 13 20:16:04.891111 kernel: scsi host1: ahci Feb 13 20:16:04.891205 kernel: hub 1-0:1.0: USB hub found Feb 13 20:16:04.891320 kernel: scsi host2: ahci Feb 13 20:16:04.891414 kernel: hub 1-0:1.0: 16 ports detected Feb 13 20:16:04.891527 kernel: scsi host3: ahci Feb 13 20:16:04.891621 kernel: hub 2-0:1.0: USB hub found Feb 13 20:16:04.891730 kernel: scsi host4: ahci Feb 13 20:16:04.891822 kernel: hub 2-0:1.0: 10 ports detected Feb 13 20:16:04.891925 kernel: scsi host5: ahci Feb 13 20:16:04.892017 kernel: igb: Intel(R) Gigabit Ethernet Network Driver Feb 13 20:16:04.892035 kernel: scsi host6: ahci Feb 13 20:16:04.892129 kernel: igb: Copyright (c) 2007-2014 Intel Corporation. 
Feb 13 20:16:04.892144 kernel: ata1: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516100 irq 127 Feb 13 20:16:04.892158 kernel: ata2: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516180 irq 127 Feb 13 20:16:04.892172 kernel: ata3: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516200 irq 127 Feb 13 20:16:04.892185 kernel: ata4: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516280 irq 127 Feb 13 20:16:04.892199 kernel: ata5: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516300 irq 127 Feb 13 20:16:04.892213 kernel: ata6: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516380 irq 127 Feb 13 20:16:04.892229 kernel: ata7: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516400 irq 127 Feb 13 20:16:04.765776 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Feb 13 20:16:05.002489 kernel: pps pps0: new PPS source ptp0 Feb 13 20:16:05.002569 kernel: igb 0000:03:00.0: added PHC on eth0 Feb 13 20:16:05.002645 kernel: igb 0000:03:00.0: Intel(R) Gigabit Ethernet Network Connection Feb 13 20:16:05.002712 kernel: igb 0000:03:00.0: eth0: (PCIe:2.5Gb/s:Width x1) 3c:ec:ef:6a:ef:16 Feb 13 20:16:05.002776 kernel: igb 0000:03:00.0: eth0: PBA No: 010000-000 Feb 13 20:16:05.002841 kernel: igb 0000:03:00.0: Using MSI-X interrupts. 4 rx queue(s), 4 tx queue(s) Feb 13 20:16:05.002904 kernel: mlx5_core 0000:01:00.0: firmware version: 14.27.1016 Feb 13 20:16:05.492709 kernel: mlx5_core 0000:01:00.0: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link) Feb 13 20:16:05.492791 kernel: pps pps1: new PPS source ptp1 Feb 13 20:16:05.492860 kernel: igb 0000:04:00.0: added PHC on eth1 Feb 13 20:16:05.492928 kernel: igb 0000:04:00.0: Intel(R) Gigabit Ethernet Network Connection Feb 13 20:16:05.492992 kernel: igb 0000:04:00.0: eth1: (PCIe:2.5Gb/s:Width x1) 3c:ec:ef:6a:ef:17 Feb 13 20:16:05.493058 kernel: igb 0000:04:00.0: eth1: PBA No: 010000-000 Feb 13 20:16:05.493121 kernel: igb 0000:04:00.0: Using MSI-X interrupts. 
4 rx queue(s), 4 tx queue(s) Feb 13 20:16:05.493183 kernel: usb 1-14: new high-speed USB device number 2 using xhci_hcd Feb 13 20:16:05.637814 kernel: ata4: SATA link down (SStatus 0 SControl 300) Feb 13 20:16:05.637833 kernel: ata1: SATA link up 6.0 Gbps (SStatus 133 SControl 300) Feb 13 20:16:05.637847 kernel: ata5: SATA link down (SStatus 0 SControl 300) Feb 13 20:16:05.637861 kernel: ata6: SATA link down (SStatus 0 SControl 300) Feb 13 20:16:05.637874 kernel: hub 1-14:1.0: USB hub found Feb 13 20:16:05.638009 kernel: ata3: SATA link down (SStatus 0 SControl 300) Feb 13 20:16:05.638025 kernel: hub 1-14:1.0: 4 ports detected Feb 13 20:16:05.638137 kernel: ata2: SATA link up 6.0 Gbps (SStatus 133 SControl 300) Feb 13 20:16:05.638152 kernel: ata7: SATA link down (SStatus 0 SControl 300) Feb 13 20:16:05.638166 kernel: ata1.00: ATA-11: Micron_5300_MTFDDAK480TDT, D3MU001, max UDMA/133 Feb 13 20:16:05.638180 kernel: mlx5_core 0000:01:00.0: E-Switch: Total vports 10, per vport: max uc(1024) max mc(16384) Feb 13 20:16:05.638286 kernel: ata2.00: ATA-11: Micron_5300_MTFDDAK480TDT, D3MU001, max UDMA/133 Feb 13 20:16:05.638301 kernel: mlx5_core 0000:01:00.0: Port module event: module 0, Cable plugged Feb 13 20:16:05.638402 kernel: ata1.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA Feb 13 20:16:05.638418 kernel: ata2.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA Feb 13 20:16:05.638431 kernel: ata1.00: Features: NCQ-prio Feb 13 20:16:05.638445 kernel: ata2.00: Features: NCQ-prio Feb 13 20:16:05.638465 kernel: ata1.00: configured for UDMA/133 Feb 13 20:16:05.638479 kernel: ata2.00: configured for UDMA/133 Feb 13 20:16:05.638492 kernel: scsi 0:0:0:0: Direct-Access ATA Micron_5300_MTFD U001 PQ: 0 ANSI: 5 Feb 13 20:16:05.638597 kernel: scsi 1:0:0:0: Direct-Access ATA Micron_5300_MTFD U001 PQ: 0 ANSI: 5 Feb 13 20:16:05.638699 kernel: igb 0000:03:00.0 eno1: renamed from eth0 Feb 13 20:16:05.638802 kernel: ata1.00: Enabling discard_zeroes_data Feb 13 
20:16:05.638817 kernel: ata2.00: Enabling discard_zeroes_data Feb 13 20:16:05.638831 kernel: sd 0:0:0:0: [sda] 937703088 512-byte logical blocks: (480 GB/447 GiB) Feb 13 20:16:05.638924 kernel: sd 1:0:0:0: [sdb] 937703088 512-byte logical blocks: (480 GB/447 GiB) Feb 13 20:16:05.639017 kernel: igb 0000:04:00.0 eno2: renamed from eth1 Feb 13 20:16:05.639118 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Feb 13 20:16:05.639212 kernel: sd 1:0:0:0: [sdb] 4096-byte physical blocks Feb 13 20:16:05.639305 kernel: sd 0:0:0:0: [sda] Write Protect is off Feb 13 20:16:05.639399 kernel: sd 1:0:0:0: [sdb] Write Protect is off Feb 13 20:16:05.639495 kernel: sd 0:0:0:0: [sda] Mode Sense: 00 3a 00 00 Feb 13 20:16:05.639589 kernel: sd 1:0:0:0: [sdb] Mode Sense: 00 3a 00 00 Feb 13 20:16:05.639682 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Feb 13 20:16:05.639774 kernel: sd 1:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Feb 13 20:16:05.639866 kernel: sd 0:0:0:0: [sda] Preferred minimum I/O size 4096 bytes Feb 13 20:16:05.639961 kernel: sd 1:0:0:0: [sdb] Preferred minimum I/O size 4096 bytes Feb 13 20:16:05.640054 kernel: ata1.00: Enabling discard_zeroes_data Feb 13 20:16:05.640069 kernel: ata2.00: Enabling discard_zeroes_data Feb 13 20:16:05.640082 kernel: sd 1:0:0:0: [sdb] Attached SCSI disk Feb 13 20:16:05.640174 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Feb 13 20:16:05.640189 kernel: mlx5_core 0000:01:00.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Feb 13 20:16:05.640288 kernel: GPT:9289727 != 937703087 Feb 13 20:16:05.640302 kernel: GPT:Alternate GPT header not at the end of the disk. Feb 13 20:16:05.640318 kernel: GPT:9289727 != 937703087 Feb 13 20:16:05.640331 kernel: GPT: Use GNU Parted to correct GPT errors. 
Feb 13 20:16:05.640345 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 13 20:16:05.640358 kernel: mlx5_core 0000:01:00.1: firmware version: 14.27.1016 Feb 13 20:16:06.058954 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Feb 13 20:16:06.059438 kernel: mlx5_core 0000:01:00.1: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link) Feb 13 20:16:06.059876 kernel: usb 1-14.1: new low-speed USB device number 3 using xhci_hcd Feb 13 20:16:06.060413 kernel: BTRFS: device fsid e9c87d9f-3864-4b45-9be4-80a5397f1fc6 devid 1 transid 38 /dev/sda3 scanned by (udev-worker) (697) Feb 13 20:16:06.060512 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by (udev-worker) (547) Feb 13 20:16:06.060558 kernel: ata1.00: Enabling discard_zeroes_data Feb 13 20:16:06.060595 kernel: hid: raw HID events driver (C) Jiri Kosina Feb 13 20:16:06.060632 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 13 20:16:06.060668 kernel: usbcore: registered new interface driver usbhid Feb 13 20:16:06.060705 kernel: usbhid: USB HID core driver Feb 13 20:16:06.060764 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.0/0003:0557:2419.0001/input/input0 Feb 13 20:16:06.060815 kernel: hid-generic 0003:0557:2419.0001: input,hidraw0: USB HID v1.00 Keyboard [HID 0557:2419] on usb-0000:00:14.0-14.1/input0 Feb 13 20:16:06.061232 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.1/0003:0557:2419.0002/input/input1 Feb 13 20:16:06.061277 kernel: hid-generic 0003:0557:2419.0002: input,hidraw1: USB HID v1.00 Mouse [HID 0557:2419] on usb-0000:00:14.0-14.1/input1 Feb 13 20:16:06.061684 kernel: mlx5_core 0000:01:00.1: E-Switch: Total vports 10, per vport: max uc(1024) max mc(16384) Feb 13 20:16:06.062035 kernel: mlx5_core 0000:01:00.1: Port module event: module 1, Cable plugged Feb 13 20:16:06.062415 kernel: mlx5_core 0000:01:00.1: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Feb 13 20:16:04.967273 
systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Feb 13 20:16:06.080681 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: renamed from eth1 Feb 13 20:16:05.040369 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 20:16:06.096665 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: renamed from eth0 Feb 13 20:16:05.059597 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 13 20:16:05.069542 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 20:16:05.069613 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 20:16:05.080580 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 20:16:05.102632 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Feb 13 20:16:06.151709 disk-uuid[714]: Primary Header is updated. Feb 13 20:16:06.151709 disk-uuid[714]: Secondary Entries is updated. Feb 13 20:16:06.151709 disk-uuid[714]: Secondary Header is updated. Feb 13 20:16:05.112483 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 20:16:05.112609 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 20:16:05.123500 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 20:16:05.138648 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 20:16:05.148918 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Feb 13 20:16:05.170268 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 20:16:05.188604 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 20:16:05.197672 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Feb 13 20:16:05.549216 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Micron_5300_MTFDDAK480TDT ROOT. Feb 13 20:16:05.579122 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Micron_5300_MTFDDAK480TDT EFI-SYSTEM. Feb 13 20:16:05.593608 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Micron_5300_MTFDDAK480TDT USR-A. Feb 13 20:16:05.604518 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Micron_5300_MTFDDAK480TDT USR-A. Feb 13 20:16:05.619141 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Micron_5300_MTFDDAK480TDT OEM. Feb 13 20:16:05.636577 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Feb 13 20:16:06.665603 kernel: ata1.00: Enabling discard_zeroes_data Feb 13 20:16:06.673008 disk-uuid[715]: The operation has completed successfully. Feb 13 20:16:06.681662 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 13 20:16:06.706715 systemd[1]: disk-uuid.service: Deactivated successfully. Feb 13 20:16:06.706785 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Feb 13 20:16:06.745696 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Feb 13 20:16:06.771566 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Feb 13 20:16:06.771624 sh[744]: Success Feb 13 20:16:06.805218 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Feb 13 20:16:06.827398 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Feb 13 20:16:06.835769 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Feb 13 20:16:06.907428 kernel: BTRFS info (device dm-0): first mount of filesystem e9c87d9f-3864-4b45-9be4-80a5397f1fc6 Feb 13 20:16:06.907459 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Feb 13 20:16:06.917056 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Feb 13 20:16:06.924070 kernel: BTRFS info (device dm-0): disabling log replay at mount time Feb 13 20:16:06.929928 kernel: BTRFS info (device dm-0): using free space tree Feb 13 20:16:06.942500 kernel: BTRFS info (device dm-0): enabling ssd optimizations Feb 13 20:16:06.943995 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Feb 13 20:16:06.954021 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Feb 13 20:16:06.959567 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Feb 13 20:16:06.988065 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Feb 13 20:16:07.055526 kernel: BTRFS info (device sda6): first mount of filesystem 84d576e4-038f-4c76-aa8e-6cfd81e812ea Feb 13 20:16:07.055539 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Feb 13 20:16:07.055546 kernel: BTRFS info (device sda6): using free space tree Feb 13 20:16:07.055554 kernel: BTRFS info (device sda6): enabling ssd optimizations Feb 13 20:16:07.055561 kernel: BTRFS info (device sda6): auto enabling async discard Feb 13 20:16:07.055567 kernel: BTRFS info (device sda6): last unmount of filesystem 84d576e4-038f-4c76-aa8e-6cfd81e812ea Feb 13 20:16:07.055878 systemd[1]: Finished ignition-setup.service - Ignition (setup). Feb 13 20:16:07.078085 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Feb 13 20:16:07.132769 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. 
Feb 13 20:16:07.158654 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 20:16:07.177066 ignition[804]: Ignition 2.20.0 Feb 13 20:16:07.169261 systemd-networkd[927]: lo: Link UP Feb 13 20:16:07.177070 ignition[804]: Stage: fetch-offline Feb 13 20:16:07.169263 systemd-networkd[927]: lo: Gained carrier Feb 13 20:16:07.177087 ignition[804]: no configs at "/usr/lib/ignition/base.d" Feb 13 20:16:07.171583 systemd-networkd[927]: Enumeration completed Feb 13 20:16:07.177092 ignition[804]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Feb 13 20:16:07.171635 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 20:16:07.177142 ignition[804]: parsed url from cmdline: "" Feb 13 20:16:07.172337 systemd-networkd[927]: eno1: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 20:16:07.177144 ignition[804]: no config URL provided Feb 13 20:16:07.179669 unknown[804]: fetched base config from "system" Feb 13 20:16:07.177147 ignition[804]: reading system config file "/usr/lib/ignition/user.ign" Feb 13 20:16:07.179673 unknown[804]: fetched user config from "system" Feb 13 20:16:07.177169 ignition[804]: parsing config with SHA512: a83d9722e19296f6ddd79d43a571960a068dc997c95b6948ab9c6edabce3f219f580b4d9a5ce55c6a3cd6ab0117d755237f90dc0804c018714edc4d6f422b9b2 Feb 13 20:16:07.188956 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 20:16:07.179892 ignition[804]: fetch-offline: fetch-offline passed Feb 13 20:16:07.200248 systemd-networkd[927]: eno2: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 20:16:07.179895 ignition[804]: POST message to Packet Timeline Feb 13 20:16:07.205971 systemd[1]: Reached target network.target - Network. 
Feb 13 20:16:07.179903 ignition[804]: POST Status error: resource requires networking Feb 13 20:16:07.213784 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Feb 13 20:16:07.179945 ignition[804]: Ignition finished successfully Feb 13 20:16:07.226659 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Feb 13 20:16:07.238102 ignition[940]: Ignition 2.20.0 Feb 13 20:16:07.228917 systemd-networkd[927]: enp1s0f0np0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 20:16:07.425626 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: Link up Feb 13 20:16:07.238109 ignition[940]: Stage: kargs Feb 13 20:16:07.414639 systemd-networkd[927]: enp1s0f1np1: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 20:16:07.238271 ignition[940]: no configs at "/usr/lib/ignition/base.d" Feb 13 20:16:07.238282 ignition[940]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Feb 13 20:16:07.239156 ignition[940]: kargs: kargs passed Feb 13 20:16:07.239161 ignition[940]: POST message to Packet Timeline Feb 13 20:16:07.239179 ignition[940]: GET https://metadata.packet.net/metadata: attempt #1 Feb 13 20:16:07.239731 ignition[940]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:58136->[::1]:53: read: connection refused Feb 13 20:16:07.440297 ignition[940]: GET https://metadata.packet.net/metadata: attempt #2 Feb 13 20:16:07.441497 ignition[940]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:59481->[::1]:53: read: connection refused Feb 13 20:16:07.628587 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: Link up Feb 13 20:16:07.629107 systemd-networkd[927]: eno1: Link UP Feb 13 20:16:07.629272 systemd-networkd[927]: eno2: Link UP Feb 13 20:16:07.629388 systemd-networkd[927]: enp1s0f0np0: Link UP Feb 13 
20:16:07.629537 systemd-networkd[927]: enp1s0f0np0: Gained carrier Feb 13 20:16:07.640728 systemd-networkd[927]: enp1s0f1np1: Link UP Feb 13 20:16:07.674680 systemd-networkd[927]: enp1s0f0np0: DHCPv4 address 147.75.90.163/31, gateway 147.75.90.162 acquired from 145.40.83.140 Feb 13 20:16:07.842624 ignition[940]: GET https://metadata.packet.net/metadata: attempt #3 Feb 13 20:16:07.843759 ignition[940]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:35620->[::1]:53: read: connection refused Feb 13 20:16:08.418218 systemd-networkd[927]: enp1s0f1np1: Gained carrier Feb 13 20:16:08.644265 ignition[940]: GET https://metadata.packet.net/metadata: attempt #4 Feb 13 20:16:08.645357 ignition[940]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:37897->[::1]:53: read: connection refused Feb 13 20:16:08.866070 systemd-networkd[927]: enp1s0f0np0: Gained IPv6LL Feb 13 20:16:09.570071 systemd-networkd[927]: enp1s0f1np1: Gained IPv6LL Feb 13 20:16:10.246852 ignition[940]: GET https://metadata.packet.net/metadata: attempt #5 Feb 13 20:16:10.248388 ignition[940]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:48638->[::1]:53: read: connection refused Feb 13 20:16:13.451434 ignition[940]: GET https://metadata.packet.net/metadata: attempt #6 Feb 13 20:16:13.963846 ignition[940]: GET result: OK Feb 13 20:16:14.319894 ignition[940]: Ignition finished successfully Feb 13 20:16:14.324983 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Feb 13 20:16:14.358902 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Feb 13 20:16:14.387023 ignition[955]: Ignition 2.20.0 Feb 13 20:16:14.387039 ignition[955]: Stage: disks Feb 13 20:16:14.387396 ignition[955]: no configs at "/usr/lib/ignition/base.d" Feb 13 20:16:14.387414 ignition[955]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Feb 13 20:16:14.388783 ignition[955]: disks: disks passed Feb 13 20:16:14.388791 ignition[955]: POST message to Packet Timeline Feb 13 20:16:14.388817 ignition[955]: GET https://metadata.packet.net/metadata: attempt #1 Feb 13 20:16:14.760030 ignition[955]: GET result: OK Feb 13 20:16:15.975416 ignition[955]: Ignition finished successfully Feb 13 20:16:15.978982 systemd[1]: Finished ignition-disks.service - Ignition (disks). Feb 13 20:16:15.994737 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Feb 13 20:16:16.012732 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Feb 13 20:16:16.033721 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 13 20:16:16.055797 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 20:16:16.075798 systemd[1]: Reached target basic.target - Basic System. Feb 13 20:16:16.104565 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Feb 13 20:16:16.140326 systemd-fsck[973]: ROOT: clean, 14/553520 files, 52654/553472 blocks Feb 13 20:16:16.152171 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Feb 13 20:16:16.165712 systemd[1]: Mounting sysroot.mount - /sysroot... Feb 13 20:16:16.261513 kernel: EXT4-fs (sda9): mounted filesystem c5993b0e-9201-4b44-aa01-79dc9d6c9fc9 r/w with ordered data mode. Quota mode: none. Feb 13 20:16:16.262025 systemd[1]: Mounted sysroot.mount - /sysroot. Feb 13 20:16:16.271929 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Feb 13 20:16:16.313789 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Feb 13 20:16:16.358668 kernel: BTRFS: device label OEM devid 1 transid 18 /dev/sda6 scanned by mount (982) Feb 13 20:16:16.358683 kernel: BTRFS info (device sda6): first mount of filesystem 84d576e4-038f-4c76-aa8e-6cfd81e812ea Feb 13 20:16:16.358691 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Feb 13 20:16:16.358699 kernel: BTRFS info (device sda6): using free space tree Feb 13 20:16:16.322771 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Feb 13 20:16:16.389655 kernel: BTRFS info (device sda6): enabling ssd optimizations Feb 13 20:16:16.389667 kernel: BTRFS info (device sda6): auto enabling async discard Feb 13 20:16:16.390983 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Feb 13 20:16:16.403067 systemd[1]: Starting flatcar-static-network.service - Flatcar Static Network Agent... Feb 13 20:16:16.414743 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Feb 13 20:16:16.414775 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Feb 13 20:16:16.467271 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Feb 13 20:16:16.480602 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Feb 13 20:16:16.507675 coreos-metadata[1000]: Feb 13 20:16:16.490 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Feb 13 20:16:16.502792 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Feb 13 20:16:16.540566 coreos-metadata[999]: Feb 13 20:16:16.490 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Feb 13 20:16:16.553765 initrd-setup-root[1014]: cut: /sysroot/etc/passwd: No such file or directory Feb 13 20:16:16.564568 initrd-setup-root[1021]: cut: /sysroot/etc/group: No such file or directory Feb 13 20:16:16.574561 initrd-setup-root[1028]: cut: /sysroot/etc/shadow: No such file or directory Feb 13 20:16:16.584575 initrd-setup-root[1035]: cut: /sysroot/etc/gshadow: No such file or directory Feb 13 20:16:16.591228 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Feb 13 20:16:16.625663 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Feb 13 20:16:16.650650 kernel: BTRFS info (device sda6): last unmount of filesystem 84d576e4-038f-4c76-aa8e-6cfd81e812ea Feb 13 20:16:16.626397 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Feb 13 20:16:16.660457 systemd[1]: sysroot-oem.mount: Deactivated successfully. Feb 13 20:16:16.686164 ignition[1102]: INFO : Ignition 2.20.0 Feb 13 20:16:16.686164 ignition[1102]: INFO : Stage: mount Feb 13 20:16:16.700558 ignition[1102]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 20:16:16.700558 ignition[1102]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Feb 13 20:16:16.700558 ignition[1102]: INFO : mount: mount passed Feb 13 20:16:16.700558 ignition[1102]: INFO : POST message to Packet Timeline Feb 13 20:16:16.700558 ignition[1102]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Feb 13 20:16:16.695276 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Feb 13 20:16:16.761663 coreos-metadata[999]: Feb 13 20:16:16.712 INFO Fetch successful Feb 13 20:16:16.761663 coreos-metadata[999]: Feb 13 20:16:16.755 INFO wrote hostname ci-4152.2.1-a-5d3d77ba07 to /sysroot/etc/hostname Feb 13 20:16:16.757236 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. 
Feb 13 20:16:17.046089 coreos-metadata[1000]: Feb 13 20:16:17.045 INFO Fetch successful Feb 13 20:16:17.065914 ignition[1102]: INFO : GET result: OK Feb 13 20:16:17.084875 systemd[1]: flatcar-static-network.service: Deactivated successfully. Feb 13 20:16:17.084926 systemd[1]: Finished flatcar-static-network.service - Flatcar Static Network Agent. Feb 13 20:16:17.514747 ignition[1102]: INFO : Ignition finished successfully Feb 13 20:16:17.517586 systemd[1]: Finished ignition-mount.service - Ignition (mount). Feb 13 20:16:17.547713 systemd[1]: Starting ignition-files.service - Ignition (files)... Feb 13 20:16:17.559609 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Feb 13 20:16:17.618614 kernel: BTRFS: device label OEM devid 1 transid 19 /dev/sda6 scanned by mount (1125) Feb 13 20:16:17.618643 kernel: BTRFS info (device sda6): first mount of filesystem 84d576e4-038f-4c76-aa8e-6cfd81e812ea Feb 13 20:16:17.626712 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Feb 13 20:16:17.632613 kernel: BTRFS info (device sda6): using free space tree Feb 13 20:16:17.647289 kernel: BTRFS info (device sda6): enabling ssd optimizations Feb 13 20:16:17.647305 kernel: BTRFS info (device sda6): auto enabling async discard Feb 13 20:16:17.649232 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Feb 13 20:16:17.679912 ignition[1142]: INFO : Ignition 2.20.0
Feb 13 20:16:17.679912 ignition[1142]: INFO : Stage: files
Feb 13 20:16:17.693660 ignition[1142]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 20:16:17.693660 ignition[1142]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet"
Feb 13 20:16:17.693660 ignition[1142]: DEBUG : files: compiled without relabeling support, skipping
Feb 13 20:16:17.693660 ignition[1142]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Feb 13 20:16:17.693660 ignition[1142]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Feb 13 20:16:17.693660 ignition[1142]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Feb 13 20:16:17.693660 ignition[1142]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Feb 13 20:16:17.693660 ignition[1142]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Feb 13 20:16:17.693660 ignition[1142]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Feb 13 20:16:17.693660 ignition[1142]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Feb 13 20:16:17.683724 unknown[1142]: wrote ssh authorized keys file for user: core
Feb 13 20:16:17.824521 ignition[1142]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Feb 13 20:16:17.896097 ignition[1142]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Feb 13 20:16:17.912603 ignition[1142]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Feb 13 20:16:17.912603 ignition[1142]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Feb 13 20:16:17.912603 ignition[1142]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Feb 13 20:16:17.912603 ignition[1142]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Feb 13 20:16:17.912603 ignition[1142]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 13 20:16:17.912603 ignition[1142]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 13 20:16:17.912603 ignition[1142]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 13 20:16:17.912603 ignition[1142]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 13 20:16:17.912603 ignition[1142]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 20:16:17.912603 ignition[1142]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 20:16:17.912603 ignition[1142]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Feb 13 20:16:17.912603 ignition[1142]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Feb 13 20:16:17.912603 ignition[1142]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Feb 13 20:16:17.912603 ignition[1142]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1
Feb 13 20:16:18.392478 ignition[1142]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Feb 13 20:16:18.619529 ignition[1142]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Feb 13 20:16:18.619529 ignition[1142]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Feb 13 20:16:18.649671 ignition[1142]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 13 20:16:18.649671 ignition[1142]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 13 20:16:18.649671 ignition[1142]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Feb 13 20:16:18.649671 ignition[1142]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Feb 13 20:16:18.649671 ignition[1142]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Feb 13 20:16:18.649671 ignition[1142]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 20:16:18.649671 ignition[1142]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 20:16:18.649671 ignition[1142]: INFO : files: files passed
Feb 13 20:16:18.649671 ignition[1142]: INFO : POST message to Packet Timeline
Feb 13 20:16:18.649671 ignition[1142]: INFO : GET https://metadata.packet.net/metadata: attempt #1
Feb 13 20:16:19.213198 ignition[1142]: INFO : GET result: OK
Feb 13 20:16:19.571561 ignition[1142]: INFO : Ignition finished successfully
Feb 13 20:16:19.574017 systemd[1]: Finished ignition-files.service - Ignition (files).
Feb 13 20:16:19.612695 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Feb 13 20:16:19.623097 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Feb 13 20:16:19.644013 systemd[1]: ignition-quench.service: Deactivated successfully.
Feb 13 20:16:19.644120 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Feb 13 20:16:19.687753 initrd-setup-root-after-ignition[1179]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 20:16:19.687753 initrd-setup-root-after-ignition[1179]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 20:16:19.725742 initrd-setup-root-after-ignition[1183]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 20:16:19.692342 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 20:16:19.702744 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Feb 13 20:16:19.752657 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Feb 13 20:16:19.812041 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Feb 13 20:16:19.812096 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Feb 13 20:16:19.830855 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Feb 13 20:16:19.851658 systemd[1]: Reached target initrd.target - Initrd Default Target.
Feb 13 20:16:19.872859 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Feb 13 20:16:19.887872 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Feb 13 20:16:19.964592 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 20:16:19.996999 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Feb 13 20:16:20.013828 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Feb 13 20:16:20.028746 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 20:16:20.038829 systemd[1]: Stopped target timers.target - Timer Units.
Feb 13 20:16:20.067809 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Feb 13 20:16:20.067957 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 20:16:20.096264 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Feb 13 20:16:20.118114 systemd[1]: Stopped target basic.target - Basic System.
Feb 13 20:16:20.137217 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Feb 13 20:16:20.156222 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 20:16:20.177080 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Feb 13 20:16:20.198111 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Feb 13 20:16:20.218082 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 20:16:20.239140 systemd[1]: Stopped target sysinit.target - System Initialization.
Feb 13 20:16:20.260138 systemd[1]: Stopped target local-fs.target - Local File Systems.
Feb 13 20:16:20.280089 systemd[1]: Stopped target swap.target - Swaps.
Feb 13 20:16:20.297972 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Feb 13 20:16:20.298373 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 20:16:20.323204 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Feb 13 20:16:20.343121 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 20:16:20.363976 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Feb 13 20:16:20.364420 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 20:16:20.386093 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Feb 13 20:16:20.386513 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Feb 13 20:16:20.426787 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Feb 13 20:16:20.427267 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 20:16:20.448312 systemd[1]: Stopped target paths.target - Path Units.
Feb 13 20:16:20.466968 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Feb 13 20:16:20.467414 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 20:16:20.488109 systemd[1]: Stopped target slices.target - Slice Units.
Feb 13 20:16:20.505076 systemd[1]: Stopped target sockets.target - Socket Units.
Feb 13 20:16:20.525087 systemd[1]: iscsid.socket: Deactivated successfully.
Feb 13 20:16:20.525392 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 20:16:20.548130 systemd[1]: iscsiuio.socket: Deactivated successfully.
Feb 13 20:16:20.548430 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 20:16:20.566179 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Feb 13 20:16:20.566606 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 20:16:20.585165 systemd[1]: ignition-files.service: Deactivated successfully.
Feb 13 20:16:20.585568 systemd[1]: Stopped ignition-files.service - Ignition (files).
Feb 13 20:16:20.696647 ignition[1204]: INFO : Ignition 2.20.0
Feb 13 20:16:20.696647 ignition[1204]: INFO : Stage: umount
Feb 13 20:16:20.696647 ignition[1204]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 20:16:20.696647 ignition[1204]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet"
Feb 13 20:16:20.696647 ignition[1204]: INFO : umount: umount passed
Feb 13 20:16:20.696647 ignition[1204]: INFO : POST message to Packet Timeline
Feb 13 20:16:20.696647 ignition[1204]: INFO : GET https://metadata.packet.net/metadata: attempt #1
Feb 13 20:16:20.603182 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Feb 13 20:16:20.603588 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Feb 13 20:16:20.631728 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Feb 13 20:16:20.656573 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Feb 13 20:16:20.656780 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 20:16:20.690707 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Feb 13 20:16:20.704635 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Feb 13 20:16:20.704764 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 20:16:20.726741 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Feb 13 20:16:20.726810 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 20:16:20.789959 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Feb 13 20:16:20.794698 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Feb 13 20:16:20.794946 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Feb 13 20:16:20.877706 systemd[1]: sysroot-boot.service: Deactivated successfully.
Feb 13 20:16:20.877979 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Feb 13 20:16:21.341498 ignition[1204]: INFO : GET result: OK
Feb 13 20:16:22.245045 ignition[1204]: INFO : Ignition finished successfully
Feb 13 20:16:22.246049 systemd[1]: ignition-mount.service: Deactivated successfully.
Feb 13 20:16:22.246134 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Feb 13 20:16:22.264118 systemd[1]: Stopped target network.target - Network.
Feb 13 20:16:22.280677 systemd[1]: ignition-disks.service: Deactivated successfully.
Feb 13 20:16:22.280910 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Feb 13 20:16:22.299880 systemd[1]: ignition-kargs.service: Deactivated successfully.
Feb 13 20:16:22.300018 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Feb 13 20:16:22.317948 systemd[1]: ignition-setup.service: Deactivated successfully.
Feb 13 20:16:22.318105 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Feb 13 20:16:22.335936 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Feb 13 20:16:22.336097 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Feb 13 20:16:22.344093 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Feb 13 20:16:22.344260 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Feb 13 20:16:22.371342 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Feb 13 20:16:22.380595 systemd-networkd[927]: enp1s0f0np0: DHCPv6 lease lost
Feb 13 20:16:22.388692 systemd-networkd[927]: enp1s0f1np1: DHCPv6 lease lost
Feb 13 20:16:22.389030 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Feb 13 20:16:22.407614 systemd[1]: systemd-resolved.service: Deactivated successfully.
Feb 13 20:16:22.407892 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Feb 13 20:16:22.426871 systemd[1]: systemd-networkd.service: Deactivated successfully.
Feb 13 20:16:22.427224 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Feb 13 20:16:22.447323 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Feb 13 20:16:22.447570 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 20:16:22.480623 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Feb 13 20:16:22.506633 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Feb 13 20:16:22.506781 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 20:16:22.525830 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 13 20:16:22.525917 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Feb 13 20:16:22.543899 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Feb 13 20:16:22.544051 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Feb 13 20:16:22.563919 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Feb 13 20:16:22.564083 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 20:16:22.583177 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 20:16:22.604865 systemd[1]: systemd-udevd.service: Deactivated successfully.
Feb 13 20:16:22.605321 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 20:16:22.635842 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Feb 13 20:16:22.635883 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Feb 13 20:16:22.660561 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Feb 13 20:16:22.660589 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 20:16:22.680649 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Feb 13 20:16:22.680725 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 20:16:22.719637 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Feb 13 20:16:22.719802 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Feb 13 20:16:22.749865 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 20:16:22.750005 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 20:16:22.813558 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Feb 13 20:16:22.840640 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Feb 13 20:16:22.840792 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 20:16:23.031539 systemd-journald[267]: Received SIGTERM from PID 1 (systemd).
Feb 13 20:16:22.861752 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 20:16:22.861890 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 20:16:22.883755 systemd[1]: network-cleanup.service: Deactivated successfully.
Feb 13 20:16:22.884012 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Feb 13 20:16:22.903717 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Feb 13 20:16:22.903973 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Feb 13 20:16:22.924836 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Feb 13 20:16:22.968949 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Feb 13 20:16:22.988589 systemd[1]: Switching root.
Feb 13 20:16:23.114639 systemd-journald[267]: Journal stopped
Feb 13 20:16:03.469879 kernel: microcode: updated early: 0xf4 -> 0x100, date = 2024-02-05
Feb 13 20:16:03.469892 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Thu Feb 13 17:44:05 -00 2025
Feb 13 20:16:03.469899 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=ed9b5d8ea73d2e47b8decea8124089e04dd398ef43013c1b1a5809314044b1c3
Feb 13 20:16:03.469904 kernel: BIOS-provided physical RAM map:
Feb 13 20:16:03.469908 kernel: BIOS-e820: [mem 0x0000000000000000-0x00000000000997ff] usable
Feb 13 20:16:03.469912 kernel: BIOS-e820: [mem 0x0000000000099800-0x000000000009ffff] reserved
Feb 13 20:16:03.469917 kernel: BIOS-e820: [mem 0x00000000000e0000-0x00000000000fffff] reserved
Feb 13 20:16:03.469921 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003fffffff] usable
Feb 13 20:16:03.469925 kernel: BIOS-e820: [mem 0x0000000040000000-0x00000000403fffff] reserved
Feb 13 20:16:03.469929 kernel: BIOS-e820: [mem 0x0000000040400000-0x0000000081b25fff] usable
Feb 13 20:16:03.469933 kernel: BIOS-e820: [mem 0x0000000081b26000-0x0000000081b26fff] ACPI NVS
Feb 13 20:16:03.469938 kernel: BIOS-e820: [mem 0x0000000081b27000-0x0000000081b27fff] reserved
Feb 13 20:16:03.469942 kernel: BIOS-e820: [mem 0x0000000081b28000-0x000000008afccfff] usable
Feb 13 20:16:03.469946 kernel: BIOS-e820: [mem 0x000000008afcd000-0x000000008c0b1fff] reserved
Feb 13 20:16:03.469951 kernel: BIOS-e820: [mem 0x000000008c0b2000-0x000000008c23afff] usable
Feb 13 20:16:03.469956 kernel: BIOS-e820: [mem 0x000000008c23b000-0x000000008c66cfff] ACPI NVS
Feb 13 20:16:03.469961 kernel: BIOS-e820: [mem 0x000000008c66d000-0x000000008eefefff] reserved
Feb 13 20:16:03.469966 kernel: BIOS-e820: [mem 0x000000008eeff000-0x000000008eefffff] usable
Feb 13 20:16:03.469970 kernel: BIOS-e820: [mem 0x000000008ef00000-0x000000008fffffff] reserved
Feb 13 20:16:03.469975 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Feb 13 20:16:03.469979 kernel: BIOS-e820: [mem 0x00000000fe000000-0x00000000fe010fff] reserved
Feb 13 20:16:03.469984 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec00fff] reserved
Feb 13 20:16:03.469988 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved
Feb 13 20:16:03.469993 kernel: BIOS-e820: [mem 0x00000000ff000000-0x00000000ffffffff] reserved
Feb 13 20:16:03.469997 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000086effffff] usable
Feb 13 20:16:03.470002 kernel: NX (Execute Disable) protection: active
Feb 13 20:16:03.470006 kernel: APIC: Static calls initialized
Feb 13 20:16:03.470011 kernel: SMBIOS 3.2.1 present.
Feb 13 20:16:03.470016 kernel: DMI: Supermicro SYS-5019C-MR-PH004/X11SCM-F, BIOS 1.9 09/16/2022
Feb 13 20:16:03.470021 kernel: tsc: Detected 3400.000 MHz processor
Feb 13 20:16:03.470026 kernel: tsc: Detected 3399.906 MHz TSC
Feb 13 20:16:03.470030 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Feb 13 20:16:03.470035 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Feb 13 20:16:03.470040 kernel: last_pfn = 0x86f000 max_arch_pfn = 0x400000000
Feb 13 20:16:03.470045 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 23), built from 10 variable MTRRs
Feb 13 20:16:03.470050 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Feb 13 20:16:03.470054 kernel: last_pfn = 0x8ef00 max_arch_pfn = 0x400000000
Feb 13 20:16:03.470060 kernel: Using GB pages for direct mapping
Feb 13 20:16:03.470065 kernel: ACPI: Early table checksum verification disabled
Feb 13 20:16:03.470070 kernel: ACPI: RSDP 0x00000000000F05B0 000024 (v02 SUPERM)
Feb 13 20:16:03.470076 kernel: ACPI: XSDT 0x000000008C54E0C8 00010C (v01 SUPERM SUPERM 01072009 AMI 00010013)
Feb 13 20:16:03.470081 kernel: ACPI: FACP 0x000000008C58A670 000114 (v06 01072009 AMI 00010013)
Feb 13 20:16:03.470086 kernel: ACPI: DSDT 0x000000008C54E268 03C404 (v02 SUPERM SMCI--MB 01072009 INTL 20160527)
Feb 13 20:16:03.470091 kernel: ACPI: FACS 0x000000008C66CF80 000040
Feb 13 20:16:03.470097 kernel: ACPI: APIC 0x000000008C58A788 00012C (v04 01072009 AMI 00010013)
Feb 13 20:16:03.470102 kernel: ACPI: FPDT 0x000000008C58A8B8 000044 (v01 01072009 AMI 00010013)
Feb 13 20:16:03.470107 kernel: ACPI: FIDT 0x000000008C58A900 00009C (v01 SUPERM SMCI--MB 01072009 AMI 00010013)
Feb 13 20:16:03.470112 kernel: ACPI: MCFG 0x000000008C58A9A0 00003C (v01 SUPERM SMCI--MB 01072009 MSFT 00000097)
Feb 13 20:16:03.470117 kernel: ACPI: SPMI 0x000000008C58A9E0 000041 (v05 SUPERM SMCI--MB 00000000 AMI. 00000000)
Feb 13 20:16:03.470122 kernel: ACPI: SSDT 0x000000008C58AA28 001B1C (v02 CpuRef CpuSsdt 00003000 INTL 20160527)
Feb 13 20:16:03.470127 kernel: ACPI: SSDT 0x000000008C58C548 0031C6 (v02 SaSsdt SaSsdt 00003000 INTL 20160527)
Feb 13 20:16:03.470133 kernel: ACPI: SSDT 0x000000008C58F710 00232B (v02 PegSsd PegSsdt 00001000 INTL 20160527)
Feb 13 20:16:03.470138 kernel: ACPI: HPET 0x000000008C591A40 000038 (v01 SUPERM SMCI--MB 00000002 01000013)
Feb 13 20:16:03.470143 kernel: ACPI: SSDT 0x000000008C591A78 000FAE (v02 SUPERM Ther_Rvp 00001000 INTL 20160527)
Feb 13 20:16:03.470148 kernel: ACPI: SSDT 0x000000008C592A28 0008F4 (v02 INTEL xh_mossb 00000000 INTL 20160527)
Feb 13 20:16:03.470153 kernel: ACPI: UEFI 0x000000008C593320 000042 (v01 SUPERM SMCI--MB 00000002 01000013)
Feb 13 20:16:03.470158 kernel: ACPI: LPIT 0x000000008C593368 000094 (v01 SUPERM SMCI--MB 00000002 01000013)
Feb 13 20:16:03.470163 kernel: ACPI: SSDT 0x000000008C593400 0027DE (v02 SUPERM PtidDevc 00001000 INTL 20160527)
Feb 13 20:16:03.470168 kernel: ACPI: SSDT 0x000000008C595BE0 0014E2 (v02 SUPERM TbtTypeC 00000000 INTL 20160527)
Feb 13 20:16:03.470173 kernel: ACPI: DBGP 0x000000008C5970C8 000034 (v01 SUPERM SMCI--MB 00000002 01000013)
Feb 13 20:16:03.470179 kernel: ACPI: DBG2 0x000000008C597100 000054 (v00 SUPERM SMCI--MB 00000002 01000013)
Feb 13 20:16:03.470184 kernel: ACPI: SSDT 0x000000008C597158 001B67 (v02 SUPERM UsbCTabl 00001000 INTL 20160527)
Feb 13 20:16:03.470189 kernel: ACPI: DMAR 0x000000008C598CC0 000070 (v01 INTEL EDK2 00000002 01000013)
Feb 13 20:16:03.470194 kernel: ACPI: SSDT 0x000000008C598D30 000144 (v02 Intel ADebTabl 00001000 INTL 20160527)
Feb 13 20:16:03.470199 kernel: ACPI: TPM2 0x000000008C598E78 000034 (v04 SUPERM SMCI--MB 00000001 AMI 00000000)
Feb 13 20:16:03.470204 kernel: ACPI: SSDT 0x000000008C598EB0 000D8F (v02 INTEL SpsNm 00000002 INTL 20160527)
Feb 13 20:16:03.470209 kernel: ACPI: WSMT 0x000000008C599C40 000028 (v01 SUPERM 01072009 AMI 00010013)
Feb 13 20:16:03.470214 kernel: ACPI: EINJ 0x000000008C599C68 000130 (v01 AMI AMI.EINJ 00000000 AMI. 00000000)
Feb 13 20:16:03.470219 kernel: ACPI: ERST 0x000000008C599D98 000230 (v01 AMIER AMI.ERST 00000000 AMI. 00000000)
Feb 13 20:16:03.470224 kernel: ACPI: BERT 0x000000008C599FC8 000030 (v01 AMI AMI.BERT 00000000 AMI. 00000000)
Feb 13 20:16:03.470230 kernel: ACPI: HEST 0x000000008C599FF8 00027C (v01 AMI AMI.HEST 00000000 AMI. 00000000)
Feb 13 20:16:03.470235 kernel: ACPI: SSDT 0x000000008C59A278 000162 (v01 SUPERM SMCCDN 00000000 INTL 20181221)
Feb 13 20:16:03.470240 kernel: ACPI: Reserving FACP table memory at [mem 0x8c58a670-0x8c58a783]
Feb 13 20:16:03.470245 kernel: ACPI: Reserving DSDT table memory at [mem 0x8c54e268-0x8c58a66b]
Feb 13 20:16:03.470250 kernel: ACPI: Reserving FACS table memory at [mem 0x8c66cf80-0x8c66cfbf]
Feb 13 20:16:03.470255 kernel: ACPI: Reserving APIC table memory at [mem 0x8c58a788-0x8c58a8b3]
Feb 13 20:16:03.470259 kernel: ACPI: Reserving FPDT table memory at [mem 0x8c58a8b8-0x8c58a8fb]
Feb 13 20:16:03.470265 kernel: ACPI: Reserving FIDT table memory at [mem 0x8c58a900-0x8c58a99b]
Feb 13 20:16:03.470270 kernel: ACPI: Reserving MCFG table memory at [mem 0x8c58a9a0-0x8c58a9db]
Feb 13 20:16:03.470275 kernel: ACPI: Reserving SPMI table memory at [mem 0x8c58a9e0-0x8c58aa20]
Feb 13 20:16:03.470280 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58aa28-0x8c58c543]
Feb 13 20:16:03.470285 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58c548-0x8c58f70d]
Feb 13 20:16:03.470290 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58f710-0x8c591a3a]
Feb 13 20:16:03.470295 kernel: ACPI: Reserving HPET table memory at [mem 0x8c591a40-0x8c591a77]
Feb 13 20:16:03.470300 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c591a78-0x8c592a25]
Feb 13 20:16:03.470305 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c592a28-0x8c59331b]
Feb 13 20:16:03.470311 kernel: ACPI: Reserving UEFI table memory at [mem 0x8c593320-0x8c593361]
Feb 13 20:16:03.470316 kernel: ACPI: Reserving LPIT table memory at [mem 0x8c593368-0x8c5933fb]
Feb 13 20:16:03.470321 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c593400-0x8c595bdd]
Feb 13 20:16:03.470326 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c595be0-0x8c5970c1]
Feb 13 20:16:03.470331 kernel: ACPI: Reserving DBGP table memory at [mem 0x8c5970c8-0x8c5970fb]
Feb 13 20:16:03.470335 kernel: ACPI: Reserving DBG2 table memory at [mem 0x8c597100-0x8c597153]
Feb 13 20:16:03.470340 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c597158-0x8c598cbe]
Feb 13 20:16:03.470345 kernel: ACPI: Reserving DMAR table memory at [mem 0x8c598cc0-0x8c598d2f]
Feb 13 20:16:03.470350 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c598d30-0x8c598e73]
Feb 13 20:16:03.470355 kernel: ACPI: Reserving TPM2 table memory at [mem 0x8c598e78-0x8c598eab]
Feb 13 20:16:03.470361 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c598eb0-0x8c599c3e]
Feb 13 20:16:03.470366 kernel: ACPI: Reserving WSMT table memory at [mem 0x8c599c40-0x8c599c67]
Feb 13 20:16:03.470371 kernel: ACPI: Reserving EINJ table memory at [mem 0x8c599c68-0x8c599d97]
Feb 13 20:16:03.470376 kernel: ACPI: Reserving ERST table memory at [mem 0x8c599d98-0x8c599fc7]
Feb 13 20:16:03.470381 kernel: ACPI: Reserving BERT table memory at [mem 0x8c599fc8-0x8c599ff7]
Feb 13 20:16:03.470386 kernel: ACPI: Reserving HEST table memory at [mem 0x8c599ff8-0x8c59a273]
Feb 13 20:16:03.470391 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c59a278-0x8c59a3d9]
Feb 13 20:16:03.470396 kernel: No NUMA configuration found
Feb 13 20:16:03.470401 kernel: Faking a node at [mem 0x0000000000000000-0x000000086effffff]
Feb 13 20:16:03.470407 kernel: NODE_DATA(0) allocated [mem 0x86effa000-0x86effffff]
Feb 13 20:16:03.470412 kernel: Zone ranges:
Feb 13 20:16:03.470417 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Feb 13 20:16:03.470422 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Feb 13 20:16:03.470427 kernel: Normal [mem 0x0000000100000000-0x000000086effffff]
Feb 13 20:16:03.470432 kernel: Movable zone start for each node
Feb 13 20:16:03.470437 kernel: Early memory node ranges
Feb 13 20:16:03.470442 kernel: node 0: [mem 0x0000000000001000-0x0000000000098fff]
Feb 13 20:16:03.470449 kernel: node 0: [mem 0x0000000000100000-0x000000003fffffff]
Feb 13 20:16:03.470455 kernel: node 0: [mem 0x0000000040400000-0x0000000081b25fff]
Feb 13 20:16:03.470461 kernel: node 0: [mem 0x0000000081b28000-0x000000008afccfff]
Feb 13 20:16:03.470484 kernel: node 0: [mem 0x000000008c0b2000-0x000000008c23afff]
Feb 13 20:16:03.470490 kernel: node 0: [mem 0x000000008eeff000-0x000000008eefffff]
Feb 13 20:16:03.470512 kernel: node 0: [mem 0x0000000100000000-0x000000086effffff]
Feb 13 20:16:03.470518 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000086effffff]
Feb 13 20:16:03.470524 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Feb 13 20:16:03.470529 kernel: On node 0, zone DMA: 103 pages in unavailable ranges
Feb 13 20:16:03.470535 kernel: On node 0, zone DMA32: 1024 pages in unavailable ranges
Feb 13 20:16:03.470540 kernel: On node 0, zone DMA32: 2 pages in unavailable ranges
Feb 13 20:16:03.470546 kernel: On node 0, zone DMA32: 4325 pages in unavailable ranges
Feb 13 20:16:03.470551 kernel: On node 0, zone DMA32: 11460 pages in unavailable ranges
Feb 13 20:16:03.470557 kernel: On node 0, zone Normal: 4352 pages in unavailable ranges
Feb 13 20:16:03.470562 kernel: On node 0, zone Normal: 4096 pages in unavailable ranges
Feb 13 20:16:03.470567 kernel: ACPI: PM-Timer IO Port: 0x1808
Feb 13 20:16:03.470573 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1])
Feb 13 20:16:03.470578 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1])
Feb 13 20:16:03.470584 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1])
Feb 13 20:16:03.470589 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1])
Feb 13 20:16:03.470595 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1])
Feb 13 20:16:03.470600 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1])
Feb 13 20:16:03.470605 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1])
Feb 13 20:16:03.470611 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1])
Feb 13 20:16:03.470616 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1])
Feb 13 20:16:03.470621 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1])
Feb 13 20:16:03.470626 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1])
Feb 13 20:16:03.470632 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1])
Feb 13 20:16:03.470638 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1])
Feb 13 20:16:03.470643 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1])
Feb 13 20:16:03.470648 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1])
Feb 13 20:16:03.470654 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1])
Feb 13 20:16:03.470659 kernel: IOAPIC[0]: apic_id 2, version 32, address 0xfec00000, GSI 0-119
Feb 13 20:16:03.470664 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Feb 13 20:16:03.470670 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Feb 13 20:16:03.470675 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Feb 13 20:16:03.470682 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Feb 13 20:16:03.470687 kernel: TSC deadline timer available
Feb 13 20:16:03.470692 kernel: smpboot: Allowing 16 CPUs, 0 hotplug CPUs
Feb 13 20:16:03.470698 kernel: [mem 0x90000000-0xdfffffff] available for PCI devices
Feb 13 20:16:03.470703 kernel: Booting paravirtualized kernel on bare hardware
Feb 13 20:16:03.470708 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Feb 13 20:16:03.470714 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:16 nr_cpu_ids:16 nr_node_ids:1
Feb 13 20:16:03.470719 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u262144
Feb 13 20:16:03.470725 kernel: pcpu-alloc: s197032 r8192 d32344 u262144 alloc=1*2097152
Feb 13 20:16:03.470731 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15
Feb 13 20:16:03.470737 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=ed9b5d8ea73d2e47b8decea8124089e04dd398ef43013c1b1a5809314044b1c3
Feb 13 20:16:03.470742 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 13 20:16:03.470748 kernel: random: crng init done
Feb 13 20:16:03.470753 kernel: Dentry cache hash table entries: 4194304 (order: 13, 33554432 bytes, linear)
Feb 13 20:16:03.470758 kernel: Inode-cache hash table entries: 2097152 (order: 12, 16777216 bytes, linear)
Feb 13 20:16:03.470763 kernel: Fallback order for Node 0: 0
Feb 13 20:16:03.470769 kernel: Built 1 zonelists, mobility grouping on. Total pages: 8232415
Feb 13 20:16:03.470775 kernel: Policy zone: Normal
Feb 13 20:16:03.470781 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 13 20:16:03.470786 kernel: software IO TLB: area num 16.
Feb 13 20:16:03.470792 kernel: Memory: 32720308K/33452980K available (12288K kernel code, 2301K rwdata, 22736K rodata, 42976K init, 2216K bss, 732412K reserved, 0K cma-reserved)
Feb 13 20:16:03.470797 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1
Feb 13 20:16:03.470802 kernel: ftrace: allocating 37923 entries in 149 pages
Feb 13 20:16:03.470808 kernel: ftrace: allocated 149 pages with 4 groups
Feb 13 20:16:03.470813 kernel: Dynamic Preempt: voluntary
Feb 13 20:16:03.470818 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 13 20:16:03.470825 kernel: rcu: RCU event tracing is enabled.
Feb 13 20:16:03.470830 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16.
Feb 13 20:16:03.470836 kernel: Trampoline variant of Tasks RCU enabled.
Feb 13 20:16:03.470841 kernel: Rude variant of Tasks RCU enabled.
Feb 13 20:16:03.470846 kernel: Tracing variant of Tasks RCU enabled.
Feb 13 20:16:03.470852 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 13 20:16:03.470857 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16
Feb 13 20:16:03.470862 kernel: NR_IRQS: 33024, nr_irqs: 2184, preallocated irqs: 16
Feb 13 20:16:03.470868 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Feb 13 20:16:03.470873 kernel: Console: colour VGA+ 80x25
Feb 13 20:16:03.470879 kernel: printk: console [tty0] enabled
Feb 13 20:16:03.470885 kernel: printk: console [ttyS1] enabled
Feb 13 20:16:03.470890 kernel: ACPI: Core revision 20230628
Feb 13 20:16:03.470895 kernel: hpet: HPET dysfunctional in PC10. Force disabled.
Feb 13 20:16:03.470901 kernel: APIC: Switch to symmetric I/O mode setup
Feb 13 20:16:03.470906 kernel: DMAR: Host address width 39
Feb 13 20:16:03.470911 kernel: DMAR: DRHD base: 0x000000fed91000 flags: 0x1
Feb 13 20:16:03.470917 kernel: DMAR: dmar0: reg_base_addr fed91000 ver 1:0 cap d2008c40660462 ecap f050da
Feb 13 20:16:03.470922 kernel: DMAR: RMRR base: 0x0000008cf18000 end: 0x0000008d161fff
Feb 13 20:16:03.470928 kernel: DMAR-IR: IOAPIC id 2 under DRHD base 0xfed91000 IOMMU 0
Feb 13 20:16:03.470934 kernel: DMAR-IR: HPET id 0 under DRHD base 0xfed91000
Feb 13 20:16:03.470939 kernel: DMAR-IR: Queued invalidation will be enabled to support x2apic and Intr-remapping.
Feb 13 20:16:03.470944 kernel: DMAR-IR: Enabled IRQ remapping in x2apic mode Feb 13 20:16:03.470950 kernel: x2apic enabled Feb 13 20:16:03.470955 kernel: APIC: Switched APIC routing to: cluster x2apic Feb 13 20:16:03.470961 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x3101f59f5e6, max_idle_ns: 440795259996 ns Feb 13 20:16:03.470966 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 6799.81 BogoMIPS (lpj=3399906) Feb 13 20:16:03.470972 kernel: CPU0: Thermal monitoring enabled (TM1) Feb 13 20:16:03.470978 kernel: process: using mwait in idle threads Feb 13 20:16:03.470983 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Feb 13 20:16:03.470988 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4 Feb 13 20:16:03.470994 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Feb 13 20:16:03.470999 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit Feb 13 20:16:03.471004 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall Feb 13 20:16:03.471009 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS Feb 13 20:16:03.471015 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Feb 13 20:16:03.471020 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT Feb 13 20:16:03.471025 kernel: RETBleed: Mitigation: Enhanced IBRS Feb 13 20:16:03.471030 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Feb 13 20:16:03.471036 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Feb 13 20:16:03.471042 kernel: TAA: Mitigation: TSX disabled Feb 13 20:16:03.471047 kernel: MMIO Stale Data: Mitigation: Clear CPU buffers Feb 13 20:16:03.471052 kernel: SRBDS: Mitigation: Microcode Feb 13 20:16:03.471058 kernel: GDS: Mitigation: Microcode Feb 13 20:16:03.471063 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 
floating point registers' Feb 13 20:16:03.471068 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Feb 13 20:16:03.471073 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Feb 13 20:16:03.471079 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers' Feb 13 20:16:03.471084 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR' Feb 13 20:16:03.471089 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Feb 13 20:16:03.471096 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64 Feb 13 20:16:03.471101 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64 Feb 13 20:16:03.471106 kernel: x86/fpu: Enabled xstate features 0x1f, context size is 960 bytes, using 'compacted' format. Feb 13 20:16:03.471112 kernel: Freeing SMP alternatives memory: 32K Feb 13 20:16:03.471117 kernel: pid_max: default: 32768 minimum: 301 Feb 13 20:16:03.471122 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Feb 13 20:16:03.471127 kernel: landlock: Up and running. Feb 13 20:16:03.471133 kernel: SELinux: Initializing. Feb 13 20:16:03.471138 kernel: Mount-cache hash table entries: 65536 (order: 7, 524288 bytes, linear) Feb 13 20:16:03.471143 kernel: Mountpoint-cache hash table entries: 65536 (order: 7, 524288 bytes, linear) Feb 13 20:16:03.471149 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd) Feb 13 20:16:03.471155 kernel: RCU Tasks: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Feb 13 20:16:03.471160 kernel: RCU Tasks Rude: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Feb 13 20:16:03.471166 kernel: RCU Tasks Trace: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Feb 13 20:16:03.471171 kernel: Performance Events: PEBS fmt3+, Skylake events, 32-deep LBR, full-width counters, Intel PMU driver. Feb 13 20:16:03.471177 kernel: ... 
version: 4 Feb 13 20:16:03.471182 kernel: ... bit width: 48 Feb 13 20:16:03.471187 kernel: ... generic registers: 4 Feb 13 20:16:03.471193 kernel: ... value mask: 0000ffffffffffff Feb 13 20:16:03.471198 kernel: ... max period: 00007fffffffffff Feb 13 20:16:03.471204 kernel: ... fixed-purpose events: 3 Feb 13 20:16:03.471209 kernel: ... event mask: 000000070000000f Feb 13 20:16:03.471215 kernel: signal: max sigframe size: 2032 Feb 13 20:16:03.471220 kernel: Estimated ratio of average max frequency by base frequency (times 1024): 1445 Feb 13 20:16:03.471225 kernel: rcu: Hierarchical SRCU implementation. Feb 13 20:16:03.471231 kernel: rcu: Max phase no-delay instances is 400. Feb 13 20:16:03.471236 kernel: NMI watchdog: Enabled. Permanently consumes one hw-PMU counter. Feb 13 20:16:03.471241 kernel: smp: Bringing up secondary CPUs ... Feb 13 20:16:03.471247 kernel: smpboot: x86: Booting SMP configuration: Feb 13 20:16:03.471253 kernel: .... node #0, CPUs: #1 #2 #3 #4 #5 #6 #7 #8 #9 #10 #11 #12 #13 #14 #15 Feb 13 20:16:03.471259 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. 
Feb 13 20:16:03.471264 kernel: smp: Brought up 1 node, 16 CPUs Feb 13 20:16:03.471269 kernel: smpboot: Max logical packages: 1 Feb 13 20:16:03.471275 kernel: smpboot: Total of 16 processors activated (108796.99 BogoMIPS) Feb 13 20:16:03.471280 kernel: devtmpfs: initialized Feb 13 20:16:03.471285 kernel: x86/mm: Memory block size: 128MB Feb 13 20:16:03.471291 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x81b26000-0x81b26fff] (4096 bytes) Feb 13 20:16:03.471296 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x8c23b000-0x8c66cfff] (4399104 bytes) Feb 13 20:16:03.471302 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Feb 13 20:16:03.471308 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear) Feb 13 20:16:03.471313 kernel: pinctrl core: initialized pinctrl subsystem Feb 13 20:16:03.471318 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Feb 13 20:16:03.471324 kernel: audit: initializing netlink subsys (disabled) Feb 13 20:16:03.471329 kernel: audit: type=2000 audit(1739477757.042:1): state=initialized audit_enabled=0 res=1 Feb 13 20:16:03.471334 kernel: thermal_sys: Registered thermal governor 'step_wise' Feb 13 20:16:03.471339 kernel: thermal_sys: Registered thermal governor 'user_space' Feb 13 20:16:03.471345 kernel: cpuidle: using governor menu Feb 13 20:16:03.471351 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Feb 13 20:16:03.471356 kernel: dca service started, version 1.12.1 Feb 13 20:16:03.471362 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000) Feb 13 20:16:03.471367 kernel: PCI: Using configuration type 1 for base access Feb 13 20:16:03.471372 kernel: ENERGY_PERF_BIAS: Set to 'normal', was 'performance' Feb 13 20:16:03.471378 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Feb 13 20:16:03.471383 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Feb 13 20:16:03.471388 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Feb 13 20:16:03.471394 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Feb 13 20:16:03.471400 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Feb 13 20:16:03.471405 kernel: ACPI: Added _OSI(Module Device) Feb 13 20:16:03.471410 kernel: ACPI: Added _OSI(Processor Device) Feb 13 20:16:03.471416 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Feb 13 20:16:03.471421 kernel: ACPI: Added _OSI(Processor Aggregator Device) Feb 13 20:16:03.471426 kernel: ACPI: 12 ACPI AML tables successfully acquired and loaded Feb 13 20:16:03.471432 kernel: ACPI: Dynamic OEM Table Load: Feb 13 20:16:03.471437 kernel: ACPI: SSDT 0xFFFF94C240E3F400 000400 (v02 PmRef Cpu0Cst 00003001 INTL 20160527) Feb 13 20:16:03.471442 kernel: ACPI: Dynamic OEM Table Load: Feb 13 20:16:03.471450 kernel: ACPI: SSDT 0xFFFF94C241E0C800 000683 (v02 PmRef Cpu0Ist 00003000 INTL 20160527) Feb 13 20:16:03.471456 kernel: ACPI: Dynamic OEM Table Load: Feb 13 20:16:03.471461 kernel: ACPI: SSDT 0xFFFF94C240DE5000 0000F4 (v02 PmRef Cpu0Psd 00003000 INTL 20160527) Feb 13 20:16:03.471488 kernel: ACPI: Dynamic OEM Table Load: Feb 13 20:16:03.471493 kernel: ACPI: SSDT 0xFFFF94C241E0F000 0005FC (v02 PmRef ApIst 00003000 INTL 20160527) Feb 13 20:16:03.471498 kernel: ACPI: Dynamic OEM Table Load: Feb 13 20:16:03.471518 kernel: ACPI: SSDT 0xFFFF94C240E54000 000AB0 (v02 PmRef ApPsd 00003000 INTL 20160527) Feb 13 20:16:03.471523 kernel: ACPI: Dynamic OEM Table Load: Feb 13 20:16:03.471528 kernel: ACPI: SSDT 0xFFFF94C241EC0C00 00030A (v02 PmRef ApCst 00003000 INTL 20160527) Feb 13 20:16:03.471535 kernel: ACPI: _OSC evaluated successfully for all CPUs Feb 13 20:16:03.471540 kernel: ACPI: Interpreter enabled Feb 13 20:16:03.471545 kernel: ACPI: PM: (supports S0 S5) Feb 13 20:16:03.471550 kernel: ACPI: Using IOAPIC 
for interrupt routing Feb 13 20:16:03.471556 kernel: HEST: Enabling Firmware First mode for corrected errors. Feb 13 20:16:03.471561 kernel: mce: [Firmware Bug]: Ignoring request to disable invalid MCA bank 14. Feb 13 20:16:03.471566 kernel: HEST: Table parsing has been initialized. Feb 13 20:16:03.471571 kernel: GHES: APEI firmware first mode is enabled by APEI bit and WHEA _OSC. Feb 13 20:16:03.471577 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Feb 13 20:16:03.471583 kernel: PCI: Using E820 reservations for host bridge windows Feb 13 20:16:03.471588 kernel: ACPI: Enabled 9 GPEs in block 00 to 7F Feb 13 20:16:03.471594 kernel: ACPI: \_SB_.PCI0.XDCI.USBC: New power resource Feb 13 20:16:03.471599 kernel: ACPI: \_SB_.PCI0.SAT0.VOL0.V0PR: New power resource Feb 13 20:16:03.471605 kernel: ACPI: \_SB_.PCI0.SAT0.VOL1.V1PR: New power resource Feb 13 20:16:03.471610 kernel: ACPI: \_SB_.PCI0.SAT0.VOL2.V2PR: New power resource Feb 13 20:16:03.471615 kernel: ACPI: \_SB_.PCI0.CNVW.WRST: New power resource Feb 13 20:16:03.471621 kernel: ACPI: \_TZ_.FN00: New power resource Feb 13 20:16:03.471626 kernel: ACPI: \_TZ_.FN01: New power resource Feb 13 20:16:03.471631 kernel: ACPI: \_TZ_.FN02: New power resource Feb 13 20:16:03.471638 kernel: ACPI: \_TZ_.FN03: New power resource Feb 13 20:16:03.471643 kernel: ACPI: \_TZ_.FN04: New power resource Feb 13 20:16:03.471648 kernel: ACPI: \PIN_: New power resource Feb 13 20:16:03.471654 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-fe]) Feb 13 20:16:03.471724 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Feb 13 20:16:03.471776 kernel: acpi PNP0A08:00: _OSC: platform does not support [AER] Feb 13 20:16:03.471822 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability LTR] Feb 13 20:16:03.471831 kernel: PCI host bridge to bus 0000:00 Feb 13 20:16:03.471882 kernel: pci_bus 0000:00: root bus resource [io 
0x0000-0x0cf7 window] Feb 13 20:16:03.471925 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Feb 13 20:16:03.471966 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Feb 13 20:16:03.472007 kernel: pci_bus 0000:00: root bus resource [mem 0x90000000-0xdfffffff window] Feb 13 20:16:03.472047 kernel: pci_bus 0000:00: root bus resource [mem 0xfc800000-0xfe7fffff window] Feb 13 20:16:03.472087 kernel: pci_bus 0000:00: root bus resource [bus 00-fe] Feb 13 20:16:03.472144 kernel: pci 0000:00:00.0: [8086:3e31] type 00 class 0x060000 Feb 13 20:16:03.472200 kernel: pci 0000:00:01.0: [8086:1901] type 01 class 0x060400 Feb 13 20:16:03.472248 kernel: pci 0000:00:01.0: PME# supported from D0 D3hot D3cold Feb 13 20:16:03.472300 kernel: pci 0000:00:08.0: [8086:1911] type 00 class 0x088000 Feb 13 20:16:03.472346 kernel: pci 0000:00:08.0: reg 0x10: [mem 0x9551f000-0x9551ffff 64bit] Feb 13 20:16:03.472397 kernel: pci 0000:00:12.0: [8086:a379] type 00 class 0x118000 Feb 13 20:16:03.472445 kernel: pci 0000:00:12.0: reg 0x10: [mem 0x9551e000-0x9551efff 64bit] Feb 13 20:16:03.472540 kernel: pci 0000:00:14.0: [8086:a36d] type 00 class 0x0c0330 Feb 13 20:16:03.472586 kernel: pci 0000:00:14.0: reg 0x10: [mem 0x95500000-0x9550ffff 64bit] Feb 13 20:16:03.472632 kernel: pci 0000:00:14.0: PME# supported from D3hot D3cold Feb 13 20:16:03.472682 kernel: pci 0000:00:14.2: [8086:a36f] type 00 class 0x050000 Feb 13 20:16:03.472728 kernel: pci 0000:00:14.2: reg 0x10: [mem 0x95512000-0x95513fff 64bit] Feb 13 20:16:03.472776 kernel: pci 0000:00:14.2: reg 0x18: [mem 0x9551d000-0x9551dfff 64bit] Feb 13 20:16:03.472826 kernel: pci 0000:00:15.0: [8086:a368] type 00 class 0x0c8000 Feb 13 20:16:03.472872 kernel: pci 0000:00:15.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit] Feb 13 20:16:03.472924 kernel: pci 0000:00:15.1: [8086:a369] type 00 class 0x0c8000 Feb 13 20:16:03.472971 kernel: pci 0000:00:15.1: reg 0x10: [mem 0x00000000-0x00000fff 64bit] Feb 13 
20:16:03.473020 kernel: pci 0000:00:16.0: [8086:a360] type 00 class 0x078000 Feb 13 20:16:03.473068 kernel: pci 0000:00:16.0: reg 0x10: [mem 0x9551a000-0x9551afff 64bit] Feb 13 20:16:03.473116 kernel: pci 0000:00:16.0: PME# supported from D3hot Feb 13 20:16:03.473173 kernel: pci 0000:00:16.1: [8086:a361] type 00 class 0x078000 Feb 13 20:16:03.473222 kernel: pci 0000:00:16.1: reg 0x10: [mem 0x95519000-0x95519fff 64bit] Feb 13 20:16:03.473269 kernel: pci 0000:00:16.1: PME# supported from D3hot Feb 13 20:16:03.473318 kernel: pci 0000:00:16.4: [8086:a364] type 00 class 0x078000 Feb 13 20:16:03.473365 kernel: pci 0000:00:16.4: reg 0x10: [mem 0x95518000-0x95518fff 64bit] Feb 13 20:16:03.473414 kernel: pci 0000:00:16.4: PME# supported from D3hot Feb 13 20:16:03.473484 kernel: pci 0000:00:17.0: [8086:a352] type 00 class 0x010601 Feb 13 20:16:03.473548 kernel: pci 0000:00:17.0: reg 0x10: [mem 0x95510000-0x95511fff] Feb 13 20:16:03.473593 kernel: pci 0000:00:17.0: reg 0x14: [mem 0x95517000-0x955170ff] Feb 13 20:16:03.473640 kernel: pci 0000:00:17.0: reg 0x18: [io 0x6050-0x6057] Feb 13 20:16:03.473685 kernel: pci 0000:00:17.0: reg 0x1c: [io 0x6040-0x6043] Feb 13 20:16:03.473731 kernel: pci 0000:00:17.0: reg 0x20: [io 0x6020-0x603f] Feb 13 20:16:03.473780 kernel: pci 0000:00:17.0: reg 0x24: [mem 0x95516000-0x955167ff] Feb 13 20:16:03.473826 kernel: pci 0000:00:17.0: PME# supported from D3hot Feb 13 20:16:03.473876 kernel: pci 0000:00:1b.0: [8086:a340] type 01 class 0x060400 Feb 13 20:16:03.473924 kernel: pci 0000:00:1b.0: PME# supported from D0 D3hot D3cold Feb 13 20:16:03.473979 kernel: pci 0000:00:1b.4: [8086:a32c] type 01 class 0x060400 Feb 13 20:16:03.474028 kernel: pci 0000:00:1b.4: PME# supported from D0 D3hot D3cold Feb 13 20:16:03.474078 kernel: pci 0000:00:1b.5: [8086:a32d] type 01 class 0x060400 Feb 13 20:16:03.474126 kernel: pci 0000:00:1b.5: PME# supported from D0 D3hot D3cold Feb 13 20:16:03.474175 kernel: pci 0000:00:1c.0: [8086:a338] type 01 class 0x060400 Feb 
13 20:16:03.474223 kernel: pci 0000:00:1c.0: PME# supported from D0 D3hot D3cold Feb 13 20:16:03.474276 kernel: pci 0000:00:1c.3: [8086:a33b] type 01 class 0x060400 Feb 13 20:16:03.474323 kernel: pci 0000:00:1c.3: PME# supported from D0 D3hot D3cold Feb 13 20:16:03.474374 kernel: pci 0000:00:1e.0: [8086:a328] type 00 class 0x078000 Feb 13 20:16:03.474420 kernel: pci 0000:00:1e.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit] Feb 13 20:16:03.474492 kernel: pci 0000:00:1f.0: [8086:a309] type 00 class 0x060100 Feb 13 20:16:03.474557 kernel: pci 0000:00:1f.4: [8086:a323] type 00 class 0x0c0500 Feb 13 20:16:03.474607 kernel: pci 0000:00:1f.4: reg 0x10: [mem 0x95514000-0x955140ff 64bit] Feb 13 20:16:03.474652 kernel: pci 0000:00:1f.4: reg 0x20: [io 0xefa0-0xefbf] Feb 13 20:16:03.474704 kernel: pci 0000:00:1f.5: [8086:a324] type 00 class 0x0c8000 Feb 13 20:16:03.474750 kernel: pci 0000:00:1f.5: reg 0x10: [mem 0xfe010000-0xfe010fff] Feb 13 20:16:03.474803 kernel: pci 0000:01:00.0: [15b3:1015] type 00 class 0x020000 Feb 13 20:16:03.474850 kernel: pci 0000:01:00.0: reg 0x10: [mem 0x92000000-0x93ffffff 64bit pref] Feb 13 20:16:03.474901 kernel: pci 0000:01:00.0: reg 0x30: [mem 0x95200000-0x952fffff pref] Feb 13 20:16:03.474949 kernel: pci 0000:01:00.0: PME# supported from D3cold Feb 13 20:16:03.474996 kernel: pci 0000:01:00.0: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref] Feb 13 20:16:03.475044 kernel: pci 0000:01:00.0: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs) Feb 13 20:16:03.475095 kernel: pci 0000:01:00.1: [15b3:1015] type 00 class 0x020000 Feb 13 20:16:03.475145 kernel: pci 0000:01:00.1: reg 0x10: [mem 0x90000000-0x91ffffff 64bit pref] Feb 13 20:16:03.475192 kernel: pci 0000:01:00.1: reg 0x30: [mem 0x95100000-0x951fffff pref] Feb 13 20:16:03.475242 kernel: pci 0000:01:00.1: PME# supported from D3cold Feb 13 20:16:03.475290 kernel: pci 0000:01:00.1: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref] Feb 13 20:16:03.475337 kernel: 
pci 0000:01:00.1: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs) Feb 13 20:16:03.475385 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Feb 13 20:16:03.475431 kernel: pci 0000:00:01.0: bridge window [mem 0x95100000-0x952fffff] Feb 13 20:16:03.475501 kernel: pci 0000:00:01.0: bridge window [mem 0x90000000-0x93ffffff 64bit pref] Feb 13 20:16:03.475563 kernel: pci 0000:00:1b.0: PCI bridge to [bus 02] Feb 13 20:16:03.475616 kernel: pci 0000:03:00.0: working around ROM BAR overlap defect Feb 13 20:16:03.475666 kernel: pci 0000:03:00.0: [8086:1533] type 00 class 0x020000 Feb 13 20:16:03.475715 kernel: pci 0000:03:00.0: reg 0x10: [mem 0x95400000-0x9547ffff] Feb 13 20:16:03.475761 kernel: pci 0000:03:00.0: reg 0x18: [io 0x5000-0x501f] Feb 13 20:16:03.475809 kernel: pci 0000:03:00.0: reg 0x1c: [mem 0x95480000-0x95483fff] Feb 13 20:16:03.475857 kernel: pci 0000:03:00.0: PME# supported from D0 D3hot D3cold Feb 13 20:16:03.475904 kernel: pci 0000:00:1b.4: PCI bridge to [bus 03] Feb 13 20:16:03.475951 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff] Feb 13 20:16:03.475999 kernel: pci 0000:00:1b.4: bridge window [mem 0x95400000-0x954fffff] Feb 13 20:16:03.476050 kernel: pci 0000:04:00.0: working around ROM BAR overlap defect Feb 13 20:16:03.476098 kernel: pci 0000:04:00.0: [8086:1533] type 00 class 0x020000 Feb 13 20:16:03.476146 kernel: pci 0000:04:00.0: reg 0x10: [mem 0x95300000-0x9537ffff] Feb 13 20:16:03.476193 kernel: pci 0000:04:00.0: reg 0x18: [io 0x4000-0x401f] Feb 13 20:16:03.476241 kernel: pci 0000:04:00.0: reg 0x1c: [mem 0x95380000-0x95383fff] Feb 13 20:16:03.476289 kernel: pci 0000:04:00.0: PME# supported from D0 D3hot D3cold Feb 13 20:16:03.476338 kernel: pci 0000:00:1b.5: PCI bridge to [bus 04] Feb 13 20:16:03.476385 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff] Feb 13 20:16:03.476431 kernel: pci 0000:00:1b.5: bridge window [mem 0x95300000-0x953fffff] Feb 13 20:16:03.476507 kernel: pci 0000:00:1c.0: PCI 
bridge to [bus 05] Feb 13 20:16:03.476582 kernel: pci 0000:06:00.0: [1a03:1150] type 01 class 0x060400 Feb 13 20:16:03.476630 kernel: pci 0000:06:00.0: enabling Extended Tags Feb 13 20:16:03.476678 kernel: pci 0000:06:00.0: supports D1 D2 Feb 13 20:16:03.476727 kernel: pci 0000:06:00.0: PME# supported from D0 D1 D2 D3hot D3cold Feb 13 20:16:03.476775 kernel: pci 0000:00:1c.3: PCI bridge to [bus 06-07] Feb 13 20:16:03.476822 kernel: pci 0000:00:1c.3: bridge window [io 0x3000-0x3fff] Feb 13 20:16:03.476870 kernel: pci 0000:00:1c.3: bridge window [mem 0x94000000-0x950fffff] Feb 13 20:16:03.476923 kernel: pci_bus 0000:07: extended config space not accessible Feb 13 20:16:03.476977 kernel: pci 0000:07:00.0: [1a03:2000] type 00 class 0x030000 Feb 13 20:16:03.477027 kernel: pci 0000:07:00.0: reg 0x10: [mem 0x94000000-0x94ffffff] Feb 13 20:16:03.477077 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x95000000-0x9501ffff] Feb 13 20:16:03.477128 kernel: pci 0000:07:00.0: reg 0x18: [io 0x3000-0x307f] Feb 13 20:16:03.477179 kernel: pci 0000:07:00.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Feb 13 20:16:03.477229 kernel: pci 0000:07:00.0: supports D1 D2 Feb 13 20:16:03.477306 kernel: pci 0000:07:00.0: PME# supported from D0 D1 D2 D3hot D3cold Feb 13 20:16:03.477370 kernel: pci 0000:06:00.0: PCI bridge to [bus 07] Feb 13 20:16:03.477418 kernel: pci 0000:06:00.0: bridge window [io 0x3000-0x3fff] Feb 13 20:16:03.477470 kernel: pci 0000:06:00.0: bridge window [mem 0x94000000-0x950fffff] Feb 13 20:16:03.477502 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 0 Feb 13 20:16:03.477508 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 1 Feb 13 20:16:03.477514 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 0 Feb 13 20:16:03.477520 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 0 Feb 13 20:16:03.477525 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 0 Feb 13 20:16:03.477531 kernel: ACPI: PCI: Interrupt link LNKF configured 
for IRQ 0 Feb 13 20:16:03.477537 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 0 Feb 13 20:16:03.477542 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 0 Feb 13 20:16:03.477548 kernel: iommu: Default domain type: Translated Feb 13 20:16:03.477555 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Feb 13 20:16:03.477561 kernel: PCI: Using ACPI for IRQ routing Feb 13 20:16:03.477566 kernel: PCI: pci_cache_line_size set to 64 bytes Feb 13 20:16:03.477572 kernel: e820: reserve RAM buffer [mem 0x00099800-0x0009ffff] Feb 13 20:16:03.477577 kernel: e820: reserve RAM buffer [mem 0x81b26000-0x83ffffff] Feb 13 20:16:03.477583 kernel: e820: reserve RAM buffer [mem 0x8afcd000-0x8bffffff] Feb 13 20:16:03.477588 kernel: e820: reserve RAM buffer [mem 0x8c23b000-0x8fffffff] Feb 13 20:16:03.477594 kernel: e820: reserve RAM buffer [mem 0x8ef00000-0x8fffffff] Feb 13 20:16:03.477599 kernel: e820: reserve RAM buffer [mem 0x86f000000-0x86fffffff] Feb 13 20:16:03.477649 kernel: pci 0000:07:00.0: vgaarb: setting as boot VGA device Feb 13 20:16:03.477701 kernel: pci 0000:07:00.0: vgaarb: bridge control possible Feb 13 20:16:03.477750 kernel: pci 0000:07:00.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Feb 13 20:16:03.477758 kernel: vgaarb: loaded Feb 13 20:16:03.477764 kernel: clocksource: Switched to clocksource tsc-early Feb 13 20:16:03.477770 kernel: VFS: Disk quotas dquot_6.6.0 Feb 13 20:16:03.477776 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Feb 13 20:16:03.477781 kernel: pnp: PnP ACPI init Feb 13 20:16:03.477827 kernel: system 00:00: [mem 0x40000000-0x403fffff] has been reserved Feb 13 20:16:03.477877 kernel: pnp 00:02: [dma 0 disabled] Feb 13 20:16:03.477924 kernel: pnp 00:03: [dma 0 disabled] Feb 13 20:16:03.477972 kernel: system 00:04: [io 0x0680-0x069f] has been reserved Feb 13 20:16:03.478015 kernel: system 00:04: [io 0x164e-0x164f] has been reserved Feb 13 20:16:03.478060 kernel: system 00:05: [io 
0x1854-0x1857] has been reserved Feb 13 20:16:03.478105 kernel: system 00:06: [mem 0xfed10000-0xfed17fff] has been reserved Feb 13 20:16:03.478150 kernel: system 00:06: [mem 0xfed18000-0xfed18fff] has been reserved Feb 13 20:16:03.478192 kernel: system 00:06: [mem 0xfed19000-0xfed19fff] has been reserved Feb 13 20:16:03.478235 kernel: system 00:06: [mem 0xe0000000-0xefffffff] has been reserved Feb 13 20:16:03.478280 kernel: system 00:06: [mem 0xfed20000-0xfed3ffff] has been reserved Feb 13 20:16:03.478322 kernel: system 00:06: [mem 0xfed90000-0xfed93fff] could not be reserved Feb 13 20:16:03.478364 kernel: system 00:06: [mem 0xfed45000-0xfed8ffff] has been reserved Feb 13 20:16:03.478407 kernel: system 00:06: [mem 0xfee00000-0xfeefffff] could not be reserved Feb 13 20:16:03.478462 kernel: system 00:07: [io 0x1800-0x18fe] could not be reserved Feb 13 20:16:03.478545 kernel: system 00:07: [mem 0xfd000000-0xfd69ffff] has been reserved Feb 13 20:16:03.478602 kernel: system 00:07: [mem 0xfd6c0000-0xfd6cffff] has been reserved Feb 13 20:16:03.478658 kernel: system 00:07: [mem 0xfd6f0000-0xfdffffff] has been reserved Feb 13 20:16:03.478699 kernel: system 00:07: [mem 0xfe000000-0xfe01ffff] could not be reserved Feb 13 20:16:03.478741 kernel: system 00:07: [mem 0xfe200000-0xfe7fffff] has been reserved Feb 13 20:16:03.478782 kernel: system 00:07: [mem 0xff000000-0xffffffff] has been reserved Feb 13 20:16:03.478830 kernel: system 00:08: [io 0x2000-0x20fe] has been reserved Feb 13 20:16:03.478839 kernel: pnp: PnP ACPI: found 10 devices Feb 13 20:16:03.478845 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Feb 13 20:16:03.478850 kernel: NET: Registered PF_INET protocol family Feb 13 20:16:03.478856 kernel: IP idents hash table entries: 262144 (order: 9, 2097152 bytes, linear) Feb 13 20:16:03.478862 kernel: tcp_listen_portaddr_hash hash table entries: 16384 (order: 6, 262144 bytes, linear) Feb 13 20:16:03.478868 kernel: Table-perturb 
hash table entries: 65536 (order: 6, 262144 bytes, linear) Feb 13 20:16:03.478873 kernel: TCP established hash table entries: 262144 (order: 9, 2097152 bytes, linear) Feb 13 20:16:03.478881 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Feb 13 20:16:03.478886 kernel: TCP: Hash tables configured (established 262144 bind 65536) Feb 13 20:16:03.478892 kernel: UDP hash table entries: 16384 (order: 7, 524288 bytes, linear) Feb 13 20:16:03.478898 kernel: UDP-Lite hash table entries: 16384 (order: 7, 524288 bytes, linear) Feb 13 20:16:03.478903 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Feb 13 20:16:03.478909 kernel: NET: Registered PF_XDP protocol family Feb 13 20:16:03.478957 kernel: pci 0000:00:15.0: BAR 0: assigned [mem 0x95515000-0x95515fff 64bit] Feb 13 20:16:03.479005 kernel: pci 0000:00:15.1: BAR 0: assigned [mem 0x9551b000-0x9551bfff 64bit] Feb 13 20:16:03.479054 kernel: pci 0000:00:1e.0: BAR 0: assigned [mem 0x9551c000-0x9551cfff 64bit] Feb 13 20:16:03.479102 kernel: pci 0000:01:00.0: BAR 7: no space for [mem size 0x00800000 64bit pref] Feb 13 20:16:03.479151 kernel: pci 0000:01:00.0: BAR 7: failed to assign [mem size 0x00800000 64bit pref] Feb 13 20:16:03.479199 kernel: pci 0000:01:00.1: BAR 7: no space for [mem size 0x00800000 64bit pref] Feb 13 20:16:03.479248 kernel: pci 0000:01:00.1: BAR 7: failed to assign [mem size 0x00800000 64bit pref] Feb 13 20:16:03.479294 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Feb 13 20:16:03.479342 kernel: pci 0000:00:01.0: bridge window [mem 0x95100000-0x952fffff] Feb 13 20:16:03.479387 kernel: pci 0000:00:01.0: bridge window [mem 0x90000000-0x93ffffff 64bit pref] Feb 13 20:16:03.479436 kernel: pci 0000:00:1b.0: PCI bridge to [bus 02] Feb 13 20:16:03.479518 kernel: pci 0000:00:1b.4: PCI bridge to [bus 03] Feb 13 20:16:03.479566 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff] Feb 13 20:16:03.479612 kernel: pci 0000:00:1b.4: bridge window [mem 0x95400000-0x954fffff] Feb 
13 20:16:03.479658 kernel: pci 0000:00:1b.5: PCI bridge to [bus 04] Feb 13 20:16:03.479708 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff] Feb 13 20:16:03.479753 kernel: pci 0000:00:1b.5: bridge window [mem 0x95300000-0x953fffff] Feb 13 20:16:03.479800 kernel: pci 0000:00:1c.0: PCI bridge to [bus 05] Feb 13 20:16:03.479847 kernel: pci 0000:06:00.0: PCI bridge to [bus 07] Feb 13 20:16:03.479895 kernel: pci 0000:06:00.0: bridge window [io 0x3000-0x3fff] Feb 13 20:16:03.479972 kernel: pci 0000:06:00.0: bridge window [mem 0x94000000-0x950fffff] Feb 13 20:16:03.480019 kernel: pci 0000:00:1c.3: PCI bridge to [bus 06-07] Feb 13 20:16:03.480065 kernel: pci 0000:00:1c.3: bridge window [io 0x3000-0x3fff] Feb 13 20:16:03.480112 kernel: pci 0000:00:1c.3: bridge window [mem 0x94000000-0x950fffff] Feb 13 20:16:03.480156 kernel: pci_bus 0000:00: Some PCI device resources are unassigned, try booting with pci=realloc Feb 13 20:16:03.480198 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Feb 13 20:16:03.480239 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Feb 13 20:16:03.480280 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Feb 13 20:16:03.480321 kernel: pci_bus 0000:00: resource 7 [mem 0x90000000-0xdfffffff window] Feb 13 20:16:03.480362 kernel: pci_bus 0000:00: resource 8 [mem 0xfc800000-0xfe7fffff window] Feb 13 20:16:03.480408 kernel: pci_bus 0000:01: resource 1 [mem 0x95100000-0x952fffff] Feb 13 20:16:03.480457 kernel: pci_bus 0000:01: resource 2 [mem 0x90000000-0x93ffffff 64bit pref] Feb 13 20:16:03.480540 kernel: pci_bus 0000:03: resource 0 [io 0x5000-0x5fff] Feb 13 20:16:03.480583 kernel: pci_bus 0000:03: resource 1 [mem 0x95400000-0x954fffff] Feb 13 20:16:03.480629 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff] Feb 13 20:16:03.480672 kernel: pci_bus 0000:04: resource 1 [mem 0x95300000-0x953fffff] Feb 13 20:16:03.480719 kernel: pci_bus 0000:06: resource 0 [io 0x3000-0x3fff] Feb 13 20:16:03.480764 
kernel: pci_bus 0000:06: resource 1 [mem 0x94000000-0x950fffff]
Feb 13 20:16:03.480809 kernel: pci_bus 0000:07: resource 0 [io 0x3000-0x3fff]
Feb 13 20:16:03.480853 kernel: pci_bus 0000:07: resource 1 [mem 0x94000000-0x950fffff]
Feb 13 20:16:03.480861 kernel: PCI: CLS 64 bytes, default 64
Feb 13 20:16:03.480867 kernel: DMAR: No ATSR found
Feb 13 20:16:03.480873 kernel: DMAR: No SATC found
Feb 13 20:16:03.480879 kernel: DMAR: dmar0: Using Queued invalidation
Feb 13 20:16:03.480926 kernel: pci 0000:00:00.0: Adding to iommu group 0
Feb 13 20:16:03.480972 kernel: pci 0000:00:01.0: Adding to iommu group 1
Feb 13 20:16:03.481022 kernel: pci 0000:00:08.0: Adding to iommu group 2
Feb 13 20:16:03.481068 kernel: pci 0000:00:12.0: Adding to iommu group 3
Feb 13 20:16:03.481115 kernel: pci 0000:00:14.0: Adding to iommu group 4
Feb 13 20:16:03.481161 kernel: pci 0000:00:14.2: Adding to iommu group 4
Feb 13 20:16:03.481208 kernel: pci 0000:00:15.0: Adding to iommu group 5
Feb 13 20:16:03.481254 kernel: pci 0000:00:15.1: Adding to iommu group 5
Feb 13 20:16:03.481299 kernel: pci 0000:00:16.0: Adding to iommu group 6
Feb 13 20:16:03.481375 kernel: pci 0000:00:16.1: Adding to iommu group 6
Feb 13 20:16:03.481423 kernel: pci 0000:00:16.4: Adding to iommu group 6
Feb 13 20:16:03.481473 kernel: pci 0000:00:17.0: Adding to iommu group 7
Feb 13 20:16:03.481519 kernel: pci 0000:00:1b.0: Adding to iommu group 8
Feb 13 20:16:03.481566 kernel: pci 0000:00:1b.4: Adding to iommu group 9
Feb 13 20:16:03.481612 kernel: pci 0000:00:1b.5: Adding to iommu group 10
Feb 13 20:16:03.481659 kernel: pci 0000:00:1c.0: Adding to iommu group 11
Feb 13 20:16:03.481704 kernel: pci 0000:00:1c.3: Adding to iommu group 12
Feb 13 20:16:03.481750 kernel: pci 0000:00:1e.0: Adding to iommu group 13
Feb 13 20:16:03.481798 kernel: pci 0000:00:1f.0: Adding to iommu group 14
Feb 13 20:16:03.481846 kernel: pci 0000:00:1f.4: Adding to iommu group 14
Feb 13 20:16:03.481892 kernel: pci 0000:00:1f.5: Adding to iommu group 14
Feb 13 20:16:03.481940 kernel: pci 0000:01:00.0: Adding to iommu group 1
Feb 13 20:16:03.481988 kernel: pci 0000:01:00.1: Adding to iommu group 1
Feb 13 20:16:03.482037 kernel: pci 0000:03:00.0: Adding to iommu group 15
Feb 13 20:16:03.482086 kernel: pci 0000:04:00.0: Adding to iommu group 16
Feb 13 20:16:03.482132 kernel: pci 0000:06:00.0: Adding to iommu group 17
Feb 13 20:16:03.482184 kernel: pci 0000:07:00.0: Adding to iommu group 17
Feb 13 20:16:03.482192 kernel: DMAR: Intel(R) Virtualization Technology for Directed I/O
Feb 13 20:16:03.482198 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Feb 13 20:16:03.482204 kernel: software IO TLB: mapped [mem 0x0000000086fcd000-0x000000008afcd000] (64MB)
Feb 13 20:16:03.482209 kernel: RAPL PMU: API unit is 2^-32 Joules, 3 fixed counters, 655360 ms ovfl timer
Feb 13 20:16:03.482215 kernel: RAPL PMU: hw unit of domain pp0-core 2^-14 Joules
Feb 13 20:16:03.482221 kernel: RAPL PMU: hw unit of domain package 2^-14 Joules
Feb 13 20:16:03.482226 kernel: RAPL PMU: hw unit of domain dram 2^-14 Joules
Feb 13 20:16:03.482275 kernel: platform rtc_cmos: registered platform RTC device (no PNP device found)
Feb 13 20:16:03.482285 kernel: Initialise system trusted keyrings
Feb 13 20:16:03.482291 kernel: workingset: timestamp_bits=39 max_order=23 bucket_order=0
Feb 13 20:16:03.482296 kernel: Key type asymmetric registered
Feb 13 20:16:03.482302 kernel: Asymmetric key parser 'x509' registered
Feb 13 20:16:03.482307 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Feb 13 20:16:03.482313 kernel: io scheduler mq-deadline registered
Feb 13 20:16:03.482319 kernel: io scheduler kyber registered
Feb 13 20:16:03.482324 kernel: io scheduler bfq registered
Feb 13 20:16:03.482371 kernel: pcieport 0000:00:01.0: PME: Signaling with IRQ 121
Feb 13 20:16:03.482418 kernel: pcieport 0000:00:1b.0: PME: Signaling with IRQ 122
Feb 13 20:16:03.482487 kernel: pcieport 0000:00:1b.4: PME: Signaling with IRQ 123
Feb 13 20:16:03.482548 kernel: pcieport 0000:00:1b.5: PME: Signaling with IRQ 124
Feb 13 20:16:03.482594 kernel: pcieport 0000:00:1c.0: PME: Signaling with IRQ 125
Feb 13 20:16:03.482640 kernel: pcieport 0000:00:1c.3: PME: Signaling with IRQ 126
Feb 13 20:16:03.482690 kernel: thermal LNXTHERM:00: registered as thermal_zone0
Feb 13 20:16:03.482716 kernel: ACPI: thermal: Thermal Zone [TZ00] (28 C)
Feb 13 20:16:03.482722 kernel: ERST: Error Record Serialization Table (ERST) support is initialized.
Feb 13 20:16:03.482741 kernel: pstore: Using crash dump compression: deflate
Feb 13 20:16:03.482747 kernel: pstore: Registered erst as persistent store backend
Feb 13 20:16:03.482753 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Feb 13 20:16:03.482758 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 13 20:16:03.482764 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Feb 13 20:16:03.482770 kernel: 00:03: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Feb 13 20:16:03.482775 kernel: hpet_acpi_add: no address or irqs in _CRS
Feb 13 20:16:03.482826 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x1B, rev-id 16)
Feb 13 20:16:03.482834 kernel: i8042: PNP: No PS/2 controller found.
Feb 13 20:16:03.482876 kernel: rtc_cmos rtc_cmos: RTC can wake from S4
Feb 13 20:16:03.482919 kernel: rtc_cmos rtc_cmos: registered as rtc0
Feb 13 20:16:03.482961 kernel: rtc_cmos rtc_cmos: setting system clock to 2025-02-13T20:16:02 UTC (1739477762)
Feb 13 20:16:03.483005 kernel: rtc_cmos rtc_cmos: alarms up to one month, y3k, 114 bytes nvram
Feb 13 20:16:03.483013 kernel: intel_pstate: Intel P-state driver initializing
Feb 13 20:16:03.483019 kernel: intel_pstate: Disabling energy efficiency optimization
Feb 13 20:16:03.483026 kernel: intel_pstate: HWP enabled
Feb 13 20:16:03.483032 kernel: NET: Registered PF_INET6 protocol family
Feb 13 20:16:03.483038 kernel: Segment Routing with IPv6
Feb 13 20:16:03.483044 kernel: In-situ OAM (IOAM) with IPv6
Feb 13 20:16:03.483049 kernel: NET: Registered PF_PACKET protocol family
Feb 13 20:16:03.483055 kernel: Key type dns_resolver registered
Feb 13 20:16:03.483060 kernel: microcode: Microcode Update Driver: v2.2.
Feb 13 20:16:03.483066 kernel: IPI shorthand broadcast: enabled
Feb 13 20:16:03.483072 kernel: sched_clock: Marking stable (2492000736, 1449268047)->(4504534218, -563265435)
Feb 13 20:16:03.483078 kernel: registered taskstats version 1
Feb 13 20:16:03.483084 kernel: Loading compiled-in X.509 certificates
Feb 13 20:16:03.483090 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 0cc219a306b9e46e583adebba1820decbdc4307b'
Feb 13 20:16:03.483095 kernel: Key type .fscrypt registered
Feb 13 20:16:03.483101 kernel: Key type fscrypt-provisioning registered
Feb 13 20:16:03.483107 kernel: ima: Allocated hash algorithm: sha1
Feb 13 20:16:03.483112 kernel: ima: No architecture policies found
Feb 13 20:16:03.483118 kernel: clk: Disabling unused clocks
Feb 13 20:16:03.483124 kernel: Freeing unused kernel image (initmem) memory: 42976K
Feb 13 20:16:03.483130 kernel: Write protecting the kernel read-only data: 36864k
Feb 13 20:16:03.483136 kernel: Freeing unused kernel image (rodata/data gap) memory: 1840K
Feb 13 20:16:03.483142 kernel: Run /init as init process
Feb 13 20:16:03.483147 kernel: with arguments:
Feb 13 20:16:03.483153 kernel: /init
Feb 13 20:16:03.483158 kernel: with environment:
Feb 13 20:16:03.483164 kernel: HOME=/
Feb 13 20:16:03.483169 kernel: TERM=linux
Feb 13 20:16:03.483175 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 13 20:16:03.483183 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Feb 13 20:16:03.483190 systemd[1]: Detected architecture x86-64.
Feb 13 20:16:03.483196 systemd[1]: Running in initrd.
Feb 13 20:16:03.483202 systemd[1]: No hostname configured, using default hostname.
Feb 13 20:16:03.483208 systemd[1]: Hostname set to .
Feb 13 20:16:03.483213 systemd[1]: Initializing machine ID from random generator.
Feb 13 20:16:03.483219 systemd[1]: Queued start job for default target initrd.target.
Feb 13 20:16:03.483226 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 20:16:03.483232 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 20:16:03.483238 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Feb 13 20:16:03.483244 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 20:16:03.483250 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Feb 13 20:16:03.483256 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Feb 13 20:16:03.483262 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Feb 13 20:16:03.483270 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Feb 13 20:16:03.483276 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 20:16:03.483282 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 20:16:03.483288 systemd[1]: Reached target paths.target - Path Units.
Feb 13 20:16:03.483294 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 20:16:03.483300 systemd[1]: Reached target swap.target - Swaps.
Feb 13 20:16:03.483306 systemd[1]: Reached target timers.target - Timer Units.
Feb 13 20:16:03.483312 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 20:16:03.483319 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 20:16:03.483325 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Feb 13 20:16:03.483330 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Feb 13 20:16:03.483336 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 20:16:03.483342 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 20:16:03.483348 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 20:16:03.483354 systemd[1]: Reached target sockets.target - Socket Units.
Feb 13 20:16:03.483360 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Feb 13 20:16:03.483367 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 20:16:03.483373 kernel: tsc: Refined TSC clocksource calibration: 3407.998 MHz
Feb 13 20:16:03.483379 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd208cfc, max_idle_ns: 440795283699 ns
Feb 13 20:16:03.483384 kernel: clocksource: Switched to clocksource tsc
Feb 13 20:16:03.483390 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Feb 13 20:16:03.483396 systemd[1]: Starting systemd-fsck-usr.service...
Feb 13 20:16:03.483402 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 20:16:03.483418 systemd-journald[267]: Collecting audit messages is disabled.
Feb 13 20:16:03.483433 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 20:16:03.483440 systemd-journald[267]: Journal started
Feb 13 20:16:03.483463 systemd-journald[267]: Runtime Journal (/run/log/journal/4bca16d8887646f5b1ed4ca2f5dfa50a) is 8.0M, max 639.9M, 631.9M free.
Feb 13 20:16:03.495381 systemd-modules-load[269]: Inserted module 'overlay'
Feb 13 20:16:03.496454 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 20:16:03.511451 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 20:16:03.518657 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Feb 13 20:16:03.518803 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 20:16:03.518909 systemd[1]: Finished systemd-fsck-usr.service.
Feb 13 20:16:03.519881 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Feb 13 20:16:03.533487 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 13 20:16:03.533694 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 20:16:03.559426 kernel: Bridge firewalling registered
Feb 13 20:16:03.534824 systemd-modules-load[269]: Inserted module 'br_netfilter'
Feb 13 20:16:03.559639 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 20:16:03.648819 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 20:16:03.658809 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 20:16:03.689770 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 20:16:03.730863 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 20:16:03.743559 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 20:16:03.745275 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 20:16:03.768874 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 20:16:03.769491 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 20:16:03.773707 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 20:16:03.777688 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 20:16:03.789011 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Feb 13 20:16:03.790380 systemd-resolved[306]: Positive Trust Anchors:
Feb 13 20:16:03.790386 systemd-resolved[306]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 20:16:03.790410 systemd-resolved[306]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 20:16:03.910829 dracut-cmdline[309]: dracut-dracut-053
Feb 13 20:16:03.910829 dracut-cmdline[309]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=ed9b5d8ea73d2e47b8decea8124089e04dd398ef43013c1b1a5809314044b1c3
Feb 13 20:16:03.792030 systemd-resolved[306]: Defaulting to hostname 'linux'.
Feb 13 20:16:03.810737 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 13 20:16:03.810792 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Feb 13 20:16:04.027480 kernel: SCSI subsystem initialized
Feb 13 20:16:04.041477 kernel: Loading iSCSI transport class v2.0-870.
Feb 13 20:16:04.053479 kernel: iscsi: registered transport (tcp)
Feb 13 20:16:04.074075 kernel: iscsi: registered transport (qla4xxx)
Feb 13 20:16:04.074092 kernel: QLogic iSCSI HBA Driver
Feb 13 20:16:04.097059 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Feb 13 20:16:04.124715 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Feb 13 20:16:04.159344 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 13 20:16:04.159363 kernel: device-mapper: uevent: version 1.0.3
Feb 13 20:16:04.168105 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Feb 13 20:16:04.203520 kernel: raid6: avx2x4 gen() 53162 MB/s
Feb 13 20:16:04.224485 kernel: raid6: avx2x2 gen() 53866 MB/s
Feb 13 20:16:04.250593 kernel: raid6: avx2x1 gen() 45216 MB/s
Feb 13 20:16:04.250611 kernel: raid6: using algorithm avx2x2 gen() 53866 MB/s
Feb 13 20:16:04.277689 kernel: raid6: .... xor() 30799 MB/s, rmw enabled
Feb 13 20:16:04.277707 kernel: raid6: using avx2x2 recovery algorithm
Feb 13 20:16:04.298453 kernel: xor: automatically using best checksumming function avx
Feb 13 20:16:04.403455 kernel: Btrfs loaded, zoned=no, fsverity=no
Feb 13 20:16:04.409521 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 20:16:04.429745 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 20:16:04.459920 systemd-udevd[496]: Using default interface naming scheme 'v255'.
Feb 13 20:16:04.462451 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 20:16:04.497703 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Feb 13 20:16:04.517683 dracut-pre-trigger[508]: rd.md=0: removing MD RAID activation
Feb 13 20:16:04.561775 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 20:16:04.590845 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 20:16:04.678921 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 20:16:04.705665 kernel: pps_core: LinuxPPS API ver. 1 registered
Feb 13 20:16:04.705729 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Feb 13 20:16:04.706452 kernel: cryptd: max_cpu_qlen set to 1000
Feb 13 20:16:04.721481 kernel: PTP clock support registered
Feb 13 20:16:04.721508 kernel: ACPI: bus type USB registered
Feb 13 20:16:04.731212 kernel: usbcore: registered new interface driver usbfs
Feb 13 20:16:04.732452 kernel: usbcore: registered new interface driver hub
Feb 13 20:16:04.732468 kernel: usbcore: registered new device driver usb
Feb 13 20:16:04.743454 kernel: libata version 3.00 loaded.
Feb 13 20:16:04.745780 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Feb 13 20:16:04.889536 kernel: AVX2 version of gcm_enc/dec engaged.
Feb 13 20:16:04.889563 kernel: AES CTR mode by8 optimization enabled
Feb 13 20:16:04.889578 kernel: ahci 0000:00:17.0: version 3.0
Feb 13 20:16:04.890216 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller
Feb 13 20:16:04.890320 kernel: ahci 0000:00:17.0: AHCI 0001.0301 32 slots 7 ports 6 Gbps 0x7f impl SATA mode
Feb 13 20:16:04.890422 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 1
Feb 13 20:16:04.890529 kernel: ahci 0000:00:17.0: flags: 64bit ncq sntf clo only pio slum part ems deso sadm sds apst
Feb 13 20:16:04.890626 kernel: xhci_hcd 0000:00:14.0: hcc params 0x200077c1 hci version 0x110 quirks 0x0000000000009810
Feb 13 20:16:04.890723 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller
Feb 13 20:16:04.890821 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 2
Feb 13 20:16:04.890918 kernel: scsi host0: ahci
Feb 13 20:16:04.891014 kernel: xhci_hcd 0000:00:14.0: Host supports USB 3.1 Enhanced SuperSpeed
Feb 13 20:16:04.891111 kernel: scsi host1: ahci
Feb 13 20:16:04.891205 kernel: hub 1-0:1.0: USB hub found
Feb 13 20:16:04.891320 kernel: scsi host2: ahci
Feb 13 20:16:04.891414 kernel: hub 1-0:1.0: 16 ports detected
Feb 13 20:16:04.891527 kernel: scsi host3: ahci
Feb 13 20:16:04.891621 kernel: hub 2-0:1.0: USB hub found
Feb 13 20:16:04.891730 kernel: scsi host4: ahci
Feb 13 20:16:04.891822 kernel: hub 2-0:1.0: 10 ports detected
Feb 13 20:16:04.891925 kernel: scsi host5: ahci
Feb 13 20:16:04.892017 kernel: igb: Intel(R) Gigabit Ethernet Network Driver
Feb 13 20:16:04.892035 kernel: scsi host6: ahci
Feb 13 20:16:04.892129 kernel: igb: Copyright (c) 2007-2014 Intel Corporation.
Feb 13 20:16:04.892144 kernel: ata1: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516100 irq 127
Feb 13 20:16:04.892158 kernel: ata2: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516180 irq 127
Feb 13 20:16:04.892172 kernel: ata3: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516200 irq 127
Feb 13 20:16:04.892185 kernel: ata4: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516280 irq 127
Feb 13 20:16:04.892199 kernel: ata5: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516300 irq 127
Feb 13 20:16:04.892213 kernel: ata6: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516380 irq 127
Feb 13 20:16:04.892229 kernel: ata7: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516400 irq 127
Feb 13 20:16:04.765776 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Feb 13 20:16:05.002489 kernel: pps pps0: new PPS source ptp0
Feb 13 20:16:05.002569 kernel: igb 0000:03:00.0: added PHC on eth0
Feb 13 20:16:05.002645 kernel: igb 0000:03:00.0: Intel(R) Gigabit Ethernet Network Connection
Feb 13 20:16:05.002712 kernel: igb 0000:03:00.0: eth0: (PCIe:2.5Gb/s:Width x1) 3c:ec:ef:6a:ef:16
Feb 13 20:16:05.002776 kernel: igb 0000:03:00.0: eth0: PBA No: 010000-000
Feb 13 20:16:05.002841 kernel: igb 0000:03:00.0: Using MSI-X interrupts. 4 rx queue(s), 4 tx queue(s)
Feb 13 20:16:05.002904 kernel: mlx5_core 0000:01:00.0: firmware version: 14.27.1016
Feb 13 20:16:05.492709 kernel: mlx5_core 0000:01:00.0: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link)
Feb 13 20:16:05.492791 kernel: pps pps1: new PPS source ptp1
Feb 13 20:16:05.492860 kernel: igb 0000:04:00.0: added PHC on eth1
Feb 13 20:16:05.492928 kernel: igb 0000:04:00.0: Intel(R) Gigabit Ethernet Network Connection
Feb 13 20:16:05.492992 kernel: igb 0000:04:00.0: eth1: (PCIe:2.5Gb/s:Width x1) 3c:ec:ef:6a:ef:17
Feb 13 20:16:05.493058 kernel: igb 0000:04:00.0: eth1: PBA No: 010000-000
Feb 13 20:16:05.493121 kernel: igb 0000:04:00.0: Using MSI-X interrupts. 4 rx queue(s), 4 tx queue(s)
Feb 13 20:16:05.493183 kernel: usb 1-14: new high-speed USB device number 2 using xhci_hcd
Feb 13 20:16:05.637814 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Feb 13 20:16:05.637833 kernel: ata1: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
Feb 13 20:16:05.637847 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Feb 13 20:16:05.637861 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Feb 13 20:16:05.637874 kernel: hub 1-14:1.0: USB hub found
Feb 13 20:16:05.638009 kernel: ata3: SATA link down (SStatus 0 SControl 300)
Feb 13 20:16:05.638025 kernel: hub 1-14:1.0: 4 ports detected
Feb 13 20:16:05.638137 kernel: ata2: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
Feb 13 20:16:05.638152 kernel: ata7: SATA link down (SStatus 0 SControl 300)
Feb 13 20:16:05.638166 kernel: ata1.00: ATA-11: Micron_5300_MTFDDAK480TDT, D3MU001, max UDMA/133
Feb 13 20:16:05.638180 kernel: mlx5_core 0000:01:00.0: E-Switch: Total vports 10, per vport: max uc(1024) max mc(16384)
Feb 13 20:16:05.638286 kernel: ata2.00: ATA-11: Micron_5300_MTFDDAK480TDT, D3MU001, max UDMA/133
Feb 13 20:16:05.638301 kernel: mlx5_core 0000:01:00.0: Port module event: module 0, Cable plugged
Feb 13 20:16:05.638402 kernel: ata1.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA
Feb 13 20:16:05.638418 kernel: ata2.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA
Feb 13 20:16:05.638431 kernel: ata1.00: Features: NCQ-prio
Feb 13 20:16:05.638445 kernel: ata2.00: Features: NCQ-prio
Feb 13 20:16:05.638465 kernel: ata1.00: configured for UDMA/133
Feb 13 20:16:05.638479 kernel: ata2.00: configured for UDMA/133
Feb 13 20:16:05.638492 kernel: scsi 0:0:0:0: Direct-Access ATA Micron_5300_MTFD U001 PQ: 0 ANSI: 5
Feb 13 20:16:05.638597 kernel: scsi 1:0:0:0: Direct-Access ATA Micron_5300_MTFD U001 PQ: 0 ANSI: 5
Feb 13 20:16:05.638699 kernel: igb 0000:03:00.0 eno1: renamed from eth0
Feb 13 20:16:05.638802 kernel: ata1.00: Enabling discard_zeroes_data
Feb 13 20:16:05.638817 kernel: ata2.00: Enabling discard_zeroes_data
Feb 13 20:16:05.638831 kernel: sd 0:0:0:0: [sda] 937703088 512-byte logical blocks: (480 GB/447 GiB)
Feb 13 20:16:05.638924 kernel: sd 1:0:0:0: [sdb] 937703088 512-byte logical blocks: (480 GB/447 GiB)
Feb 13 20:16:05.639017 kernel: igb 0000:04:00.0 eno2: renamed from eth1
Feb 13 20:16:05.639118 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Feb 13 20:16:05.639212 kernel: sd 1:0:0:0: [sdb] 4096-byte physical blocks
Feb 13 20:16:05.639305 kernel: sd 0:0:0:0: [sda] Write Protect is off
Feb 13 20:16:05.639399 kernel: sd 1:0:0:0: [sdb] Write Protect is off
Feb 13 20:16:05.639495 kernel: sd 0:0:0:0: [sda] Mode Sense: 00 3a 00 00
Feb 13 20:16:05.639589 kernel: sd 1:0:0:0: [sdb] Mode Sense: 00 3a 00 00
Feb 13 20:16:05.639682 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Feb 13 20:16:05.639774 kernel: sd 1:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Feb 13 20:16:05.639866 kernel: sd 0:0:0:0: [sda] Preferred minimum I/O size 4096 bytes
Feb 13 20:16:05.639961 kernel: sd 1:0:0:0: [sdb] Preferred minimum I/O size 4096 bytes
Feb 13 20:16:05.640054 kernel: ata1.00: Enabling discard_zeroes_data
Feb 13 20:16:05.640069 kernel: ata2.00: Enabling discard_zeroes_data
Feb 13 20:16:05.640082 kernel: sd 1:0:0:0: [sdb] Attached SCSI disk
Feb 13 20:16:05.640174 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Feb 13 20:16:05.640189 kernel: mlx5_core 0000:01:00.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic)
Feb 13 20:16:05.640288 kernel: GPT:9289727 != 937703087
Feb 13 20:16:05.640302 kernel: GPT:Alternate GPT header not at the end of the disk.
Feb 13 20:16:05.640318 kernel: GPT:9289727 != 937703087
Feb 13 20:16:05.640331 kernel: GPT: Use GNU Parted to correct GPT errors.
Feb 13 20:16:05.640345 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Feb 13 20:16:05.640358 kernel: mlx5_core 0000:01:00.1: firmware version: 14.27.1016
Feb 13 20:16:06.058954 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Feb 13 20:16:06.059438 kernel: mlx5_core 0000:01:00.1: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link)
Feb 13 20:16:06.059876 kernel: usb 1-14.1: new low-speed USB device number 3 using xhci_hcd
Feb 13 20:16:06.060413 kernel: BTRFS: device fsid e9c87d9f-3864-4b45-9be4-80a5397f1fc6 devid 1 transid 38 /dev/sda3 scanned by (udev-worker) (697)
Feb 13 20:16:06.060512 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by (udev-worker) (547)
Feb 13 20:16:06.060558 kernel: ata1.00: Enabling discard_zeroes_data
Feb 13 20:16:06.060595 kernel: hid: raw HID events driver (C) Jiri Kosina
Feb 13 20:16:06.060632 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Feb 13 20:16:06.060668 kernel: usbcore: registered new interface driver usbhid
Feb 13 20:16:06.060705 kernel: usbhid: USB HID core driver
Feb 13 20:16:06.060764 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.0/0003:0557:2419.0001/input/input0
Feb 13 20:16:06.060815 kernel: hid-generic 0003:0557:2419.0001: input,hidraw0: USB HID v1.00 Keyboard [HID 0557:2419] on usb-0000:00:14.0-14.1/input0
Feb 13 20:16:06.061232 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.1/0003:0557:2419.0002/input/input1
Feb 13 20:16:06.061277 kernel: hid-generic 0003:0557:2419.0002: input,hidraw1: USB HID v1.00 Mouse [HID 0557:2419] on usb-0000:00:14.0-14.1/input1
Feb 13 20:16:06.061684 kernel: mlx5_core 0000:01:00.1: E-Switch: Total vports 10, per vport: max uc(1024) max mc(16384)
Feb 13 20:16:06.062035 kernel: mlx5_core 0000:01:00.1: Port module event: module 1, Cable plugged
Feb 13 20:16:06.062415 kernel: mlx5_core 0000:01:00.1: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic)
Feb 13 20:16:04.967273 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 20:16:06.080681 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: renamed from eth1
Feb 13 20:16:05.040369 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 20:16:06.096665 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: renamed from eth0
Feb 13 20:16:05.059597 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 20:16:05.069542 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 20:16:05.069613 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 20:16:05.080580 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 20:16:05.102632 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Feb 13 20:16:06.151709 disk-uuid[714]: Primary Header is updated.
Feb 13 20:16:06.151709 disk-uuid[714]: Secondary Entries is updated.
Feb 13 20:16:06.151709 disk-uuid[714]: Secondary Header is updated.
Feb 13 20:16:05.112483 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 20:16:05.112609 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 20:16:05.123500 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 20:16:05.138648 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 20:16:05.148918 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 20:16:05.170268 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 20:16:05.188604 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 20:16:05.197672 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 20:16:05.549216 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Micron_5300_MTFDDAK480TDT ROOT.
Feb 13 20:16:05.579122 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Micron_5300_MTFDDAK480TDT EFI-SYSTEM.
Feb 13 20:16:05.593608 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Micron_5300_MTFDDAK480TDT USR-A.
Feb 13 20:16:05.604518 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Micron_5300_MTFDDAK480TDT USR-A.
Feb 13 20:16:05.619141 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Micron_5300_MTFDDAK480TDT OEM.
Feb 13 20:16:05.636577 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Feb 13 20:16:06.665603 kernel: ata1.00: Enabling discard_zeroes_data
Feb 13 20:16:06.673008 disk-uuid[715]: The operation has completed successfully.
Feb 13 20:16:06.681662 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Feb 13 20:16:06.706715 systemd[1]: disk-uuid.service: Deactivated successfully.
Feb 13 20:16:06.706785 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Feb 13 20:16:06.745696 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Feb 13 20:16:06.771566 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Feb 13 20:16:06.771624 sh[744]: Success
Feb 13 20:16:06.805218 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Feb 13 20:16:06.827398 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Feb 13 20:16:06.835769 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Feb 13 20:16:06.907428 kernel: BTRFS info (device dm-0): first mount of filesystem e9c87d9f-3864-4b45-9be4-80a5397f1fc6
Feb 13 20:16:06.907459 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Feb 13 20:16:06.917056 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Feb 13 20:16:06.924070 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Feb 13 20:16:06.929928 kernel: BTRFS info (device dm-0): using free space tree
Feb 13 20:16:06.942500 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Feb 13 20:16:06.943995 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Feb 13 20:16:06.954021 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Feb 13 20:16:06.959567 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Feb 13 20:16:06.988065 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Feb 13 20:16:07.055526 kernel: BTRFS info (device sda6): first mount of filesystem 84d576e4-038f-4c76-aa8e-6cfd81e812ea
Feb 13 20:16:07.055539 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Feb 13 20:16:07.055546 kernel: BTRFS info (device sda6): using free space tree
Feb 13 20:16:07.055554 kernel: BTRFS info (device sda6): enabling ssd optimizations
Feb 13 20:16:07.055561 kernel: BTRFS info (device sda6): auto enabling async discard
Feb 13 20:16:07.055567 kernel: BTRFS info (device sda6): last unmount of filesystem 84d576e4-038f-4c76-aa8e-6cfd81e812ea
Feb 13 20:16:07.055878 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Feb 13 20:16:07.078085 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Feb 13 20:16:07.132769 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 20:16:07.158654 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Feb 13 20:16:07.177066 ignition[804]: Ignition 2.20.0
Feb 13 20:16:07.169261 systemd-networkd[927]: lo: Link UP
Feb 13 20:16:07.177070 ignition[804]: Stage: fetch-offline
Feb 13 20:16:07.169263 systemd-networkd[927]: lo: Gained carrier
Feb 13 20:16:07.177087 ignition[804]: no configs at "/usr/lib/ignition/base.d"
Feb 13 20:16:07.171583 systemd-networkd[927]: Enumeration completed
Feb 13 20:16:07.177092 ignition[804]: no config dir at "/usr/lib/ignition/base.platform.d/packet"
Feb 13 20:16:07.171635 systemd[1]: Started systemd-networkd.service - Network Configuration.
Feb 13 20:16:07.177142 ignition[804]: parsed url from cmdline: ""
Feb 13 20:16:07.172337 systemd-networkd[927]: eno1: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 20:16:07.177144 ignition[804]: no config URL provided
Feb 13 20:16:07.179669 unknown[804]: fetched base config from "system"
Feb 13 20:16:07.177147 ignition[804]: reading system config file "/usr/lib/ignition/user.ign"
Feb 13 20:16:07.179673 unknown[804]: fetched user config from "system"
Feb 13 20:16:07.177169 ignition[804]: parsing config with SHA512: a83d9722e19296f6ddd79d43a571960a068dc997c95b6948ab9c6edabce3f219f580b4d9a5ce55c6a3cd6ab0117d755237f90dc0804c018714edc4d6f422b9b2
Feb 13 20:16:07.188956 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 20:16:07.179892 ignition[804]: fetch-offline: fetch-offline passed
Feb 13 20:16:07.200248 systemd-networkd[927]: eno2: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 20:16:07.179895 ignition[804]: POST message to Packet Timeline
Feb 13 20:16:07.205971 systemd[1]: Reached target network.target - Network.
Feb 13 20:16:07.179903 ignition[804]: POST Status error: resource requires networking
Feb 13 20:16:07.213784 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Feb 13 20:16:07.179945 ignition[804]: Ignition finished successfully
Feb 13 20:16:07.226659 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Feb 13 20:16:07.238102 ignition[940]: Ignition 2.20.0
Feb 13 20:16:07.228917 systemd-networkd[927]: enp1s0f0np0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 20:16:07.425626 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: Link up
Feb 13 20:16:07.238109 ignition[940]: Stage: kargs
Feb 13 20:16:07.414639 systemd-networkd[927]: enp1s0f1np1: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 20:16:07.238271 ignition[940]: no configs at "/usr/lib/ignition/base.d"
Feb 13 20:16:07.238282 ignition[940]: no config dir at "/usr/lib/ignition/base.platform.d/packet"
Feb 13 20:16:07.239156 ignition[940]: kargs: kargs passed
Feb 13 20:16:07.239161 ignition[940]: POST message to Packet Timeline
Feb 13 20:16:07.239179 ignition[940]: GET https://metadata.packet.net/metadata: attempt #1
Feb 13 20:16:07.239731 ignition[940]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:58136->[::1]:53: read: connection refused
Feb 13 20:16:07.440297 ignition[940]: GET https://metadata.packet.net/metadata: attempt #2
Feb 13 20:16:07.441497 ignition[940]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:59481->[::1]:53: read: connection refused
Feb 13 20:16:07.628587 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: Link up
Feb 13 20:16:07.629107 systemd-networkd[927]: eno1: Link UP
Feb 13 20:16:07.629272 systemd-networkd[927]: eno2: Link UP
Feb 13 20:16:07.629388 systemd-networkd[927]: enp1s0f0np0: Link UP
Feb 13 20:16:07.629537 systemd-networkd[927]: enp1s0f0np0: Gained carrier
Feb 13 20:16:07.640728 systemd-networkd[927]: enp1s0f1np1: Link UP
Feb 13 20:16:07.674680 systemd-networkd[927]: enp1s0f0np0: DHCPv4 address 147.75.90.163/31, gateway 147.75.90.162 acquired from 145.40.83.140
Feb 13 20:16:07.842624 ignition[940]: GET https://metadata.packet.net/metadata: attempt #3
Feb 13 20:16:07.843759 ignition[940]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:35620->[::1]:53: read: connection refused
Feb 13 20:16:08.418218 systemd-networkd[927]: enp1s0f1np1: Gained carrier
Feb 13 20:16:08.644265 ignition[940]: GET https://metadata.packet.net/metadata: attempt #4
Feb 13 20:16:08.645357 ignition[940]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:37897->[::1]:53: read: connection refused
Feb 13 20:16:08.866070 systemd-networkd[927]: enp1s0f0np0: Gained IPv6LL
Feb 13 20:16:09.570071 systemd-networkd[927]: enp1s0f1np1: Gained IPv6LL
Feb 13 20:16:10.246852 ignition[940]: GET https://metadata.packet.net/metadata: attempt #5
Feb 13 20:16:10.248388 ignition[940]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:48638->[::1]:53: read: connection refused
Feb 13 20:16:13.451434 ignition[940]: GET https://metadata.packet.net/metadata: attempt #6
Feb 13 20:16:13.963846 ignition[940]: GET result: OK
Feb 13 20:16:14.319894 ignition[940]: Ignition finished successfully
Feb 13 20:16:14.324983 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Feb 13 20:16:14.358902 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Feb 13 20:16:14.387023 ignition[955]: Ignition 2.20.0
Feb 13 20:16:14.387039 ignition[955]: Stage: disks
Feb 13 20:16:14.387396 ignition[955]: no configs at "/usr/lib/ignition/base.d"
Feb 13 20:16:14.387414 ignition[955]: no config dir at "/usr/lib/ignition/base.platform.d/packet"
Feb 13 20:16:14.388783 ignition[955]: disks: disks passed
Feb 13 20:16:14.388791 ignition[955]: POST message to Packet Timeline
Feb 13 20:16:14.388817 ignition[955]: GET https://metadata.packet.net/metadata: attempt #1
Feb 13 20:16:14.760030 ignition[955]: GET result: OK
Feb 13 20:16:15.975416 ignition[955]: Ignition finished successfully
Feb 13 20:16:15.978982 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Feb 13 20:16:15.994737 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Feb 13 20:16:16.012732 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Feb 13 20:16:16.033721 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 20:16:16.055797 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 20:16:16.075798 systemd[1]: Reached target basic.target - Basic System. Feb 13 20:16:16.104565 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Feb 13 20:16:16.140326 systemd-fsck[973]: ROOT: clean, 14/553520 files, 52654/553472 blocks Feb 13 20:16:16.152171 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Feb 13 20:16:16.165712 systemd[1]: Mounting sysroot.mount - /sysroot... Feb 13 20:16:16.261513 kernel: EXT4-fs (sda9): mounted filesystem c5993b0e-9201-4b44-aa01-79dc9d6c9fc9 r/w with ordered data mode. Quota mode: none. Feb 13 20:16:16.262025 systemd[1]: Mounted sysroot.mount - /sysroot. Feb 13 20:16:16.271929 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Feb 13 20:16:16.313789 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Feb 13 20:16:16.358668 kernel: BTRFS: device label OEM devid 1 transid 18 /dev/sda6 scanned by mount (982) Feb 13 20:16:16.358683 kernel: BTRFS info (device sda6): first mount of filesystem 84d576e4-038f-4c76-aa8e-6cfd81e812ea Feb 13 20:16:16.358691 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Feb 13 20:16:16.358699 kernel: BTRFS info (device sda6): using free space tree Feb 13 20:16:16.322771 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Feb 13 20:16:16.389655 kernel: BTRFS info (device sda6): enabling ssd optimizations Feb 13 20:16:16.389667 kernel: BTRFS info (device sda6): auto enabling async discard Feb 13 20:16:16.390983 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Feb 13 20:16:16.403067 systemd[1]: Starting flatcar-static-network.service - Flatcar Static Network Agent... 
Feb 13 20:16:16.414743 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Feb 13 20:16:16.414775 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Feb 13 20:16:16.467271 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Feb 13 20:16:16.480602 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Feb 13 20:16:16.507675 coreos-metadata[1000]: Feb 13 20:16:16.490 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Feb 13 20:16:16.502792 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Feb 13 20:16:16.540566 coreos-metadata[999]: Feb 13 20:16:16.490 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Feb 13 20:16:16.553765 initrd-setup-root[1014]: cut: /sysroot/etc/passwd: No such file or directory Feb 13 20:16:16.564568 initrd-setup-root[1021]: cut: /sysroot/etc/group: No such file or directory Feb 13 20:16:16.574561 initrd-setup-root[1028]: cut: /sysroot/etc/shadow: No such file or directory Feb 13 20:16:16.584575 initrd-setup-root[1035]: cut: /sysroot/etc/gshadow: No such file or directory Feb 13 20:16:16.591228 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Feb 13 20:16:16.625663 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Feb 13 20:16:16.650650 kernel: BTRFS info (device sda6): last unmount of filesystem 84d576e4-038f-4c76-aa8e-6cfd81e812ea Feb 13 20:16:16.626397 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Feb 13 20:16:16.660457 systemd[1]: sysroot-oem.mount: Deactivated successfully. 
Feb 13 20:16:16.686164 ignition[1102]: INFO : Ignition 2.20.0 Feb 13 20:16:16.686164 ignition[1102]: INFO : Stage: mount Feb 13 20:16:16.700558 ignition[1102]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 20:16:16.700558 ignition[1102]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Feb 13 20:16:16.700558 ignition[1102]: INFO : mount: mount passed Feb 13 20:16:16.700558 ignition[1102]: INFO : POST message to Packet Timeline Feb 13 20:16:16.700558 ignition[1102]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Feb 13 20:16:16.695276 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Feb 13 20:16:16.761663 coreos-metadata[999]: Feb 13 20:16:16.712 INFO Fetch successful Feb 13 20:16:16.761663 coreos-metadata[999]: Feb 13 20:16:16.755 INFO wrote hostname ci-4152.2.1-a-5d3d77ba07 to /sysroot/etc/hostname Feb 13 20:16:16.757236 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Feb 13 20:16:17.046089 coreos-metadata[1000]: Feb 13 20:16:17.045 INFO Fetch successful Feb 13 20:16:17.065914 ignition[1102]: INFO : GET result: OK Feb 13 20:16:17.084875 systemd[1]: flatcar-static-network.service: Deactivated successfully. Feb 13 20:16:17.084926 systemd[1]: Finished flatcar-static-network.service - Flatcar Static Network Agent. Feb 13 20:16:17.514747 ignition[1102]: INFO : Ignition finished successfully Feb 13 20:16:17.517586 systemd[1]: Finished ignition-mount.service - Ignition (mount). Feb 13 20:16:17.547713 systemd[1]: Starting ignition-files.service - Ignition (files)... Feb 13 20:16:17.559609 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Feb 13 20:16:17.618614 kernel: BTRFS: device label OEM devid 1 transid 19 /dev/sda6 scanned by mount (1125) Feb 13 20:16:17.618643 kernel: BTRFS info (device sda6): first mount of filesystem 84d576e4-038f-4c76-aa8e-6cfd81e812ea Feb 13 20:16:17.626712 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Feb 13 20:16:17.632613 kernel: BTRFS info (device sda6): using free space tree Feb 13 20:16:17.647289 kernel: BTRFS info (device sda6): enabling ssd optimizations Feb 13 20:16:17.647305 kernel: BTRFS info (device sda6): auto enabling async discard Feb 13 20:16:17.649232 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Feb 13 20:16:17.679912 ignition[1142]: INFO : Ignition 2.20.0 Feb 13 20:16:17.679912 ignition[1142]: INFO : Stage: files Feb 13 20:16:17.693660 ignition[1142]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 20:16:17.693660 ignition[1142]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Feb 13 20:16:17.693660 ignition[1142]: DEBUG : files: compiled without relabeling support, skipping Feb 13 20:16:17.693660 ignition[1142]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Feb 13 20:16:17.693660 ignition[1142]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Feb 13 20:16:17.693660 ignition[1142]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Feb 13 20:16:17.693660 ignition[1142]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Feb 13 20:16:17.693660 ignition[1142]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Feb 13 20:16:17.693660 ignition[1142]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Feb 13 20:16:17.693660 ignition[1142]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Feb 13 
20:16:17.683724 unknown[1142]: wrote ssh authorized keys file for user: core Feb 13 20:16:17.824521 ignition[1142]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Feb 13 20:16:17.896097 ignition[1142]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Feb 13 20:16:17.912603 ignition[1142]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Feb 13 20:16:17.912603 ignition[1142]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Feb 13 20:16:17.912603 ignition[1142]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Feb 13 20:16:17.912603 ignition[1142]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Feb 13 20:16:17.912603 ignition[1142]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 13 20:16:17.912603 ignition[1142]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 13 20:16:17.912603 ignition[1142]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 13 20:16:17.912603 ignition[1142]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 13 20:16:17.912603 ignition[1142]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Feb 13 20:16:17.912603 ignition[1142]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Feb 13 20:16:17.912603 ignition[1142]: INFO : files: createFilesystemsFiles: createFiles: op(9): 
[started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Feb 13 20:16:17.912603 ignition[1142]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Feb 13 20:16:17.912603 ignition[1142]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Feb 13 20:16:17.912603 ignition[1142]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 Feb 13 20:16:18.392478 ignition[1142]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Feb 13 20:16:18.619529 ignition[1142]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Feb 13 20:16:18.619529 ignition[1142]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Feb 13 20:16:18.649671 ignition[1142]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 13 20:16:18.649671 ignition[1142]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 13 20:16:18.649671 ignition[1142]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Feb 13 20:16:18.649671 ignition[1142]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Feb 13 20:16:18.649671 ignition[1142]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Feb 13 20:16:18.649671 ignition[1142]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" 
Feb 13 20:16:18.649671 ignition[1142]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Feb 13 20:16:18.649671 ignition[1142]: INFO : files: files passed Feb 13 20:16:18.649671 ignition[1142]: INFO : POST message to Packet Timeline Feb 13 20:16:18.649671 ignition[1142]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Feb 13 20:16:19.213198 ignition[1142]: INFO : GET result: OK Feb 13 20:16:19.571561 ignition[1142]: INFO : Ignition finished successfully Feb 13 20:16:19.574017 systemd[1]: Finished ignition-files.service - Ignition (files). Feb 13 20:16:19.612695 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Feb 13 20:16:19.623097 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Feb 13 20:16:19.644013 systemd[1]: ignition-quench.service: Deactivated successfully. Feb 13 20:16:19.644120 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Feb 13 20:16:19.687753 initrd-setup-root-after-ignition[1179]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 20:16:19.687753 initrd-setup-root-after-ignition[1179]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Feb 13 20:16:19.725742 initrd-setup-root-after-ignition[1183]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 20:16:19.692342 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 20:16:19.702744 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Feb 13 20:16:19.752657 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Feb 13 20:16:19.812041 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Feb 13 20:16:19.812096 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. 
Feb 13 20:16:19.830855 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Feb 13 20:16:19.851658 systemd[1]: Reached target initrd.target - Initrd Default Target. Feb 13 20:16:19.872859 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Feb 13 20:16:19.887872 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Feb 13 20:16:19.964592 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 20:16:19.996999 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Feb 13 20:16:20.013828 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Feb 13 20:16:20.028746 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 20:16:20.038829 systemd[1]: Stopped target timers.target - Timer Units. Feb 13 20:16:20.067809 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 13 20:16:20.067957 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 20:16:20.096264 systemd[1]: Stopped target initrd.target - Initrd Default Target. Feb 13 20:16:20.118114 systemd[1]: Stopped target basic.target - Basic System. Feb 13 20:16:20.137217 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Feb 13 20:16:20.156222 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Feb 13 20:16:20.177080 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Feb 13 20:16:20.198111 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Feb 13 20:16:20.218082 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Feb 13 20:16:20.239140 systemd[1]: Stopped target sysinit.target - System Initialization. Feb 13 20:16:20.260138 systemd[1]: Stopped target local-fs.target - Local File Systems. 
Feb 13 20:16:20.280089 systemd[1]: Stopped target swap.target - Swaps. Feb 13 20:16:20.297972 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 13 20:16:20.298373 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Feb 13 20:16:20.323204 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Feb 13 20:16:20.343121 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 20:16:20.363976 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Feb 13 20:16:20.364420 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 20:16:20.386093 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 13 20:16:20.386513 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Feb 13 20:16:20.426787 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 13 20:16:20.427267 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 20:16:20.448312 systemd[1]: Stopped target paths.target - Path Units. Feb 13 20:16:20.466968 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 13 20:16:20.467414 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 20:16:20.488109 systemd[1]: Stopped target slices.target - Slice Units. Feb 13 20:16:20.505076 systemd[1]: Stopped target sockets.target - Socket Units. Feb 13 20:16:20.525087 systemd[1]: iscsid.socket: Deactivated successfully. Feb 13 20:16:20.525392 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Feb 13 20:16:20.548130 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 13 20:16:20.548430 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 13 20:16:20.566179 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. 
Feb 13 20:16:20.566606 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 20:16:20.585165 systemd[1]: ignition-files.service: Deactivated successfully. Feb 13 20:16:20.585568 systemd[1]: Stopped ignition-files.service - Ignition (files). Feb 13 20:16:20.696647 ignition[1204]: INFO : Ignition 2.20.0 Feb 13 20:16:20.696647 ignition[1204]: INFO : Stage: umount Feb 13 20:16:20.696647 ignition[1204]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 20:16:20.696647 ignition[1204]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Feb 13 20:16:20.696647 ignition[1204]: INFO : umount: umount passed Feb 13 20:16:20.696647 ignition[1204]: INFO : POST message to Packet Timeline Feb 13 20:16:20.696647 ignition[1204]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Feb 13 20:16:20.603182 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Feb 13 20:16:20.603588 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Feb 13 20:16:20.631728 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Feb 13 20:16:20.656573 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 13 20:16:20.656780 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 20:16:20.690707 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Feb 13 20:16:20.704635 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 13 20:16:20.704764 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 20:16:20.726741 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 13 20:16:20.726810 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Feb 13 20:16:20.789959 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 13 20:16:20.794698 systemd[1]: initrd-cleanup.service: Deactivated successfully. 
Feb 13 20:16:20.794946 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Feb 13 20:16:20.877706 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 13 20:16:20.877979 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Feb 13 20:16:21.341498 ignition[1204]: INFO : GET result: OK Feb 13 20:16:22.245045 ignition[1204]: INFO : Ignition finished successfully Feb 13 20:16:22.246049 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 13 20:16:22.246134 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Feb 13 20:16:22.264118 systemd[1]: Stopped target network.target - Network. Feb 13 20:16:22.280677 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 13 20:16:22.280910 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Feb 13 20:16:22.299880 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 13 20:16:22.300018 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Feb 13 20:16:22.317948 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 13 20:16:22.318105 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Feb 13 20:16:22.335936 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Feb 13 20:16:22.336097 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Feb 13 20:16:22.344093 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 13 20:16:22.344260 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Feb 13 20:16:22.371342 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Feb 13 20:16:22.380595 systemd-networkd[927]: enp1s0f0np0: DHCPv6 lease lost Feb 13 20:16:22.388692 systemd-networkd[927]: enp1s0f1np1: DHCPv6 lease lost Feb 13 20:16:22.389030 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Feb 13 20:16:22.407614 systemd[1]: systemd-resolved.service: Deactivated successfully. 
Feb 13 20:16:22.407892 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Feb 13 20:16:22.426871 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 13 20:16:22.427224 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Feb 13 20:16:22.447323 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 13 20:16:22.447570 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Feb 13 20:16:22.480623 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Feb 13 20:16:22.506633 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 13 20:16:22.506781 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 20:16:22.525830 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 13 20:16:22.525917 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Feb 13 20:16:22.543899 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 13 20:16:22.544051 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Feb 13 20:16:22.563919 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Feb 13 20:16:22.564083 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 20:16:22.583177 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 20:16:22.604865 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 13 20:16:22.605321 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 20:16:22.635842 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 13 20:16:22.635883 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Feb 13 20:16:22.660561 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. 
Feb 13 20:16:22.660589 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 20:16:22.680649 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 13 20:16:22.680725 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Feb 13 20:16:22.719637 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 13 20:16:22.719802 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Feb 13 20:16:22.749865 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 20:16:22.750005 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 20:16:22.813558 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Feb 13 20:16:22.840640 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Feb 13 20:16:22.840792 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 20:16:23.031539 systemd-journald[267]: Received SIGTERM from PID 1 (systemd). Feb 13 20:16:22.861752 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 20:16:22.861890 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 20:16:22.883755 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 13 20:16:22.884012 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Feb 13 20:16:22.903717 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 13 20:16:22.903973 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Feb 13 20:16:22.924836 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Feb 13 20:16:22.968949 systemd[1]: Starting initrd-switch-root.service - Switch Root... Feb 13 20:16:22.988589 systemd[1]: Switching root. 
Feb 13 20:16:23.114639 systemd-journald[267]: Journal stopped Feb 13 20:16:24.745870 kernel: SELinux: policy capability network_peer_controls=1 Feb 13 20:16:24.745887 kernel: SELinux: policy capability open_perms=1 Feb 13 20:16:24.745894 kernel: SELinux: policy capability extended_socket_class=1 Feb 13 20:16:24.745901 kernel: SELinux: policy capability always_check_network=0 Feb 13 20:16:24.745907 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 13 20:16:24.745912 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 13 20:16:24.745919 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 13 20:16:24.745925 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 13 20:16:24.745930 kernel: audit: type=1403 audit(1739477783.251:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Feb 13 20:16:24.745937 systemd[1]: Successfully loaded SELinux policy in 73.639ms. Feb 13 20:16:24.745945 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 7.089ms. Feb 13 20:16:24.745952 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Feb 13 20:16:24.745959 systemd[1]: Detected architecture x86-64. Feb 13 20:16:24.745965 systemd[1]: Detected first boot. Feb 13 20:16:24.745972 systemd[1]: Hostname set to . Feb 13 20:16:24.745980 systemd[1]: Initializing machine ID from random generator. Feb 13 20:16:24.745986 zram_generator::config[1254]: No configuration found. Feb 13 20:16:24.745993 systemd[1]: Populated /etc with preset unit settings. Feb 13 20:16:24.746000 systemd[1]: initrd-switch-root.service: Deactivated successfully. Feb 13 20:16:24.746006 systemd[1]: Stopped initrd-switch-root.service - Switch Root. 
Feb 13 20:16:24.746013 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Feb 13 20:16:24.746020 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Feb 13 20:16:24.746028 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Feb 13 20:16:24.746034 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Feb 13 20:16:24.746043 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Feb 13 20:16:24.746050 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Feb 13 20:16:24.746057 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Feb 13 20:16:24.746063 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Feb 13 20:16:24.746070 systemd[1]: Created slice user.slice - User and Session Slice. Feb 13 20:16:24.746078 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 20:16:24.746085 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 20:16:24.746092 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Feb 13 20:16:24.746098 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Feb 13 20:16:24.746105 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Feb 13 20:16:24.746111 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Feb 13 20:16:24.746118 systemd[1]: Expecting device dev-ttyS1.device - /dev/ttyS1... Feb 13 20:16:24.746125 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 20:16:24.746133 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. 
Feb 13 20:16:24.746139 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Feb 13 20:16:24.746146 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Feb 13 20:16:24.746155 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Feb 13 20:16:24.746162 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 20:16:24.746168 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 13 20:16:24.746175 systemd[1]: Reached target slices.target - Slice Units. Feb 13 20:16:24.746183 systemd[1]: Reached target swap.target - Swaps. Feb 13 20:16:24.746190 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Feb 13 20:16:24.746197 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Feb 13 20:16:24.746204 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Feb 13 20:16:24.746211 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Feb 13 20:16:24.746218 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 20:16:24.746226 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Feb 13 20:16:24.746233 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Feb 13 20:16:24.746240 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Feb 13 20:16:24.746247 systemd[1]: Mounting media.mount - External Media Directory... Feb 13 20:16:24.746254 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 20:16:24.746261 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Feb 13 20:16:24.746268 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Feb 13 20:16:24.746276 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... 
Feb 13 20:16:24.746284 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 13 20:16:24.746291 systemd[1]: Reached target machines.target - Containers. Feb 13 20:16:24.746298 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Feb 13 20:16:24.746305 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 20:16:24.746312 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Feb 13 20:16:24.746319 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Feb 13 20:16:24.746326 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 20:16:24.746333 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 20:16:24.746342 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 20:16:24.746349 kernel: ACPI: bus type drm_connector registered Feb 13 20:16:24.746356 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Feb 13 20:16:24.746363 kernel: fuse: init (API version 7.39) Feb 13 20:16:24.746369 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 20:16:24.746376 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 13 20:16:24.746383 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Feb 13 20:16:24.746390 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Feb 13 20:16:24.746398 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Feb 13 20:16:24.746406 systemd[1]: Stopped systemd-fsck-usr.service. 
Feb 13 20:16:24.746412 kernel: loop: module loaded Feb 13 20:16:24.746419 systemd[1]: Starting systemd-journald.service - Journal Service... Feb 13 20:16:24.746434 systemd-journald[1357]: Collecting audit messages is disabled. Feb 13 20:16:24.746458 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Feb 13 20:16:24.746465 systemd-journald[1357]: Journal started Feb 13 20:16:24.746480 systemd-journald[1357]: Runtime Journal (/run/log/journal/fc78eeca90b9494dabef92fd00ff671f) is 8.0M, max 639.9M, 631.9M free. Feb 13 20:16:23.636974 systemd[1]: Queued start job for default target multi-user.target. Feb 13 20:16:23.658709 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Feb 13 20:16:23.659593 systemd[1]: systemd-journald.service: Deactivated successfully. Feb 13 20:16:24.785515 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Feb 13 20:16:24.806542 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Feb 13 20:16:24.826513 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 13 20:16:24.847602 systemd[1]: verity-setup.service: Deactivated successfully. Feb 13 20:16:24.847627 systemd[1]: Stopped verity-setup.service. Feb 13 20:16:24.879396 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 20:16:24.879419 systemd[1]: Started systemd-journald.service - Journal Service. Feb 13 20:16:24.889865 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Feb 13 20:16:24.900748 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Feb 13 20:16:24.911731 systemd[1]: Mounted media.mount - External Media Directory. Feb 13 20:16:24.921720 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. 
Feb 13 20:16:24.931750 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Feb 13 20:16:24.941704 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Feb 13 20:16:24.951785 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Feb 13 20:16:24.962802 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 20:16:24.973881 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 13 20:16:24.974034 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Feb 13 20:16:24.987168 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 20:16:24.987436 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 20:16:24.999378 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 20:16:24.999773 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 20:16:25.011398 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 20:16:25.011793 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 20:16:25.023617 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 13 20:16:25.024020 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Feb 13 20:16:25.034384 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 20:16:25.034786 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 20:16:25.045387 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Feb 13 20:16:25.056343 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Feb 13 20:16:25.068343 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Feb 13 20:16:25.080338 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. 
Feb 13 20:16:25.116877 systemd[1]: Reached target network-pre.target - Preparation for Network. Feb 13 20:16:25.150760 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Feb 13 20:16:25.163430 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Feb 13 20:16:25.173718 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 13 20:16:25.173821 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 13 20:16:25.187852 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Feb 13 20:16:25.211696 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Feb 13 20:16:25.223264 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Feb 13 20:16:25.233684 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 20:16:25.234998 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Feb 13 20:16:25.250641 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Feb 13 20:16:25.255175 systemd-journald[1357]: Time spent on flushing to /var/log/journal/fc78eeca90b9494dabef92fd00ff671f is 14.867ms for 1357 entries. Feb 13 20:16:25.255175 systemd-journald[1357]: System Journal (/var/log/journal/fc78eeca90b9494dabef92fd00ff671f) is 8.0M, max 195.6M, 187.6M free. Feb 13 20:16:25.282957 systemd-journald[1357]: Received client request to flush runtime journal. Feb 13 20:16:25.270604 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 20:16:25.271249 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... 
Feb 13 20:16:25.280582 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 20:16:25.281260 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 20:16:25.291322 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Feb 13 20:16:25.314148 systemd[1]: Starting systemd-sysusers.service - Create System Users... Feb 13 20:16:25.324471 kernel: loop0: detected capacity change from 0 to 210664 Feb 13 20:16:25.330420 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Feb 13 20:16:25.343630 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Feb 13 20:16:25.349454 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Feb 13 20:16:25.359646 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Feb 13 20:16:25.370683 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Feb 13 20:16:25.381669 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Feb 13 20:16:25.396502 kernel: loop1: detected capacity change from 0 to 140992 Feb 13 20:16:25.398868 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Feb 13 20:16:25.409656 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 20:16:25.419659 systemd[1]: Finished systemd-sysusers.service - Create System Users. Feb 13 20:16:25.432963 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Feb 13 20:16:25.459706 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Feb 13 20:16:25.460493 kernel: loop2: detected capacity change from 0 to 8 Feb 13 20:16:25.471217 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... 
Feb 13 20:16:25.484214 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 13 20:16:25.484694 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Feb 13 20:16:25.496156 udevadm[1393]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Feb 13 20:16:25.498478 systemd-tmpfiles[1407]: ACLs are not supported, ignoring. Feb 13 20:16:25.498488 systemd-tmpfiles[1407]: ACLs are not supported, ignoring. Feb 13 20:16:25.501005 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 20:16:25.510502 kernel: loop3: detected capacity change from 0 to 138184 Feb 13 20:16:25.574492 kernel: loop4: detected capacity change from 0 to 210664 Feb 13 20:16:25.576597 ldconfig[1383]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 13 20:16:25.577960 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Feb 13 20:16:25.594518 kernel: loop5: detected capacity change from 0 to 140992 Feb 13 20:16:25.613500 kernel: loop6: detected capacity change from 0 to 8 Feb 13 20:16:25.620492 kernel: loop7: detected capacity change from 0 to 138184 Feb 13 20:16:25.632542 (sd-merge)[1412]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-packet'. Feb 13 20:16:25.632792 (sd-merge)[1412]: Merged extensions into '/usr'. Feb 13 20:16:25.635080 systemd[1]: Reloading requested from client PID 1389 ('systemd-sysext') (unit systemd-sysext.service)... Feb 13 20:16:25.635089 systemd[1]: Reloading... Feb 13 20:16:25.662499 zram_generator::config[1437]: No configuration found. Feb 13 20:16:25.731637 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Feb 13 20:16:25.769285 systemd[1]: Reloading finished in 133 ms. Feb 13 20:16:25.813744 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Feb 13 20:16:25.825003 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Feb 13 20:16:25.855757 systemd[1]: Starting ensure-sysext.service... Feb 13 20:16:25.864762 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Feb 13 20:16:25.877827 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 20:16:25.889327 systemd-tmpfiles[1495]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 13 20:16:25.889624 systemd-tmpfiles[1495]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Feb 13 20:16:25.890050 systemd[1]: Reloading requested from client PID 1494 ('systemctl') (unit ensure-sysext.service)... Feb 13 20:16:25.890057 systemd[1]: Reloading... Feb 13 20:16:25.890173 systemd-tmpfiles[1495]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 13 20:16:25.890362 systemd-tmpfiles[1495]: ACLs are not supported, ignoring. Feb 13 20:16:25.890411 systemd-tmpfiles[1495]: ACLs are not supported, ignoring. Feb 13 20:16:25.892149 systemd-tmpfiles[1495]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 20:16:25.892152 systemd-tmpfiles[1495]: Skipping /boot Feb 13 20:16:25.896982 systemd-tmpfiles[1495]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 20:16:25.896985 systemd-tmpfiles[1495]: Skipping /boot Feb 13 20:16:25.904066 systemd-udevd[1496]: Using default interface naming scheme 'v255'. Feb 13 20:16:25.920513 zram_generator::config[1522]: No configuration found. 
Feb 13 20:16:25.961755 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input2 Feb 13 20:16:25.961809 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1557) Feb 13 20:16:25.961831 kernel: ACPI: button: Sleep Button [SLPB] Feb 13 20:16:25.973459 kernel: mousedev: PS/2 mouse device common for all mice Feb 13 20:16:25.973613 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Feb 13 20:16:25.985462 kernel: IPMI message handler: version 39.2 Feb 13 20:16:25.985519 kernel: i801_smbus 0000:00:1f.4: SPD Write Disable is set Feb 13 20:16:26.019354 kernel: ACPI: button: Power Button [PWRF] Feb 13 20:16:26.019381 kernel: i801_smbus 0000:00:1f.4: SMBus using PCI interrupt Feb 13 20:16:26.019525 kernel: i2c i2c-0: 2/4 memory slots populated (from DMI) Feb 13 20:16:26.034703 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 20:16:26.061454 kernel: iTCO_vendor_support: vendor-support=0 Feb 13 20:16:26.061494 kernel: ipmi device interface Feb 13 20:16:26.077457 kernel: mei_me 0000:00:16.0: Device doesn't have valid ME Interface Feb 13 20:16:26.077723 kernel: mei_me 0000:00:16.4: Device doesn't have valid ME Interface Feb 13 20:16:26.097854 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Micron_5300_MTFDDAK480TDT OEM. 
Feb 13 20:16:26.116234 kernel: ipmi_si: IPMI System Interface driver Feb 13 20:16:26.116281 kernel: ipmi_si dmi-ipmi-si.0: ipmi_platform: probing via SMBIOS Feb 13 20:16:26.132065 kernel: ipmi_platform: ipmi_si: SMBIOS: io 0xca2 regsize 1 spacing 1 irq 0 Feb 13 20:16:26.132094 kernel: ipmi_si: Adding SMBIOS-specified kcs state machine Feb 13 20:16:26.132104 kernel: ipmi_si IPI0001:00: ipmi_platform: probing via ACPI Feb 13 20:16:26.162152 kernel: ipmi_si IPI0001:00: ipmi_platform: [io 0x0ca2] regsize 1 spacing 1 irq 0 Feb 13 20:16:26.162230 kernel: ipmi_si dmi-ipmi-si.0: Removing SMBIOS-specified kcs state machine in favor of ACPI Feb 13 20:16:26.162300 kernel: ipmi_si: Adding ACPI-specified kcs state machine Feb 13 20:16:26.162312 kernel: ipmi_si: Trying ACPI-specified kcs state machine at i/o address 0xca2, slave address 0x20, irq 0 Feb 13 20:16:26.131715 systemd[1]: Condition check resulted in dev-ttyS1.device - /dev/ttyS1 being skipped. Feb 13 20:16:26.131839 systemd[1]: Reloading finished in 241 ms. Feb 13 20:16:26.193263 kernel: iTCO_wdt iTCO_wdt: Found a Intel PCH TCO device (Version=6, TCOBASE=0x0400) Feb 13 20:16:26.193557 kernel: iTCO_wdt iTCO_wdt: initialized. heartbeat=30 sec (nowayout=0) Feb 13 20:16:26.209046 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 20:16:26.211402 kernel: intel_rapl_common: Found RAPL domain package Feb 13 20:16:26.211439 kernel: intel_rapl_common: Found RAPL domain core Feb 13 20:16:26.218186 kernel: intel_rapl_common: Found RAPL domain dram Feb 13 20:16:26.237748 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 20:16:26.256526 systemd[1]: Finished ensure-sysext.service. Feb 13 20:16:26.274452 kernel: ipmi_si IPI0001:00: The BMC does not support clearing the recv irq bit, compensating, but the BMC needs to be fixed. 
Feb 13 20:16:26.284863 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 20:16:26.295642 systemd[1]: Starting audit-rules.service - Load Audit Rules... Feb 13 20:16:26.313454 kernel: ipmi_si IPI0001:00: IPMI message handler: Found new BMC (man_id: 0x002a7c, prod_id: 0x1b0f, dev_id: 0x20) Feb 13 20:16:26.313938 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Feb 13 20:16:26.314010 augenrules[1693]: No rules Feb 13 20:16:26.324626 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 20:16:26.325219 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 20:16:26.335026 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 20:16:26.345074 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 20:16:26.357041 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 20:16:26.366604 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 20:16:26.367134 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Feb 13 20:16:26.378115 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Feb 13 20:16:26.389407 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 20:16:26.390320 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Feb 13 20:16:26.391224 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Feb 13 20:16:26.417046 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... 
Feb 13 20:16:26.423452 kernel: ipmi_si IPI0001:00: IPMI kcs interface initialized Feb 13 20:16:26.431452 kernel: ipmi_ssif: IPMI SSIF Interface driver Feb 13 20:16:26.452644 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 20:16:26.462546 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 20:16:26.463055 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Feb 13 20:16:26.463267 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 20:16:26.463349 systemd[1]: Finished audit-rules.service - Load Audit Rules. Feb 13 20:16:26.463500 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Feb 13 20:16:26.463633 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 20:16:26.463698 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 20:16:26.463833 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 20:16:26.463896 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 20:16:26.464033 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 20:16:26.464096 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 20:16:26.464229 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 20:16:26.464291 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 20:16:26.464470 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Feb 13 20:16:26.464605 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Feb 13 20:16:26.469466 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... 
Feb 13 20:16:26.469497 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 20:16:26.469527 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 20:16:26.470139 systemd[1]: Starting systemd-update-done.service - Update is Completed... Feb 13 20:16:26.470972 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Feb 13 20:16:26.470998 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 13 20:16:26.471213 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Feb 13 20:16:26.477079 systemd[1]: Finished systemd-update-done.service - Update is Completed. Feb 13 20:16:26.479100 lvm[1723]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 20:16:26.492096 systemd[1]: Started systemd-userdbd.service - User Database Manager. Feb 13 20:16:26.524335 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Feb 13 20:16:26.531878 systemd-resolved[1707]: Positive Trust Anchors: Feb 13 20:16:26.531884 systemd-resolved[1707]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 20:16:26.531907 systemd-resolved[1707]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 20:16:26.534539 systemd-resolved[1707]: Using system hostname 'ci-4152.2.1-a-5d3d77ba07'. Feb 13 20:16:26.539039 systemd-networkd[1706]: lo: Link UP Feb 13 20:16:26.539042 systemd-networkd[1706]: lo: Gained carrier Feb 13 20:16:26.541517 systemd-networkd[1706]: bond0: netdev ready Feb 13 20:16:26.542437 systemd-networkd[1706]: Enumeration completed Feb 13 20:16:26.546825 systemd-networkd[1706]: enp1s0f0np0: Configuring with /etc/systemd/network/10-1c:34:da:5c:19:e8.network. Feb 13 20:16:26.602681 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Feb 13 20:16:26.613621 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 20:16:26.623715 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 20:16:26.633683 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 20:16:26.645486 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 13 20:16:26.655533 systemd[1]: Reached target network.target - Network. Feb 13 20:16:26.663518 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 20:16:26.674535 systemd[1]: Reached target sysinit.target - System Initialization. 
Feb 13 20:16:26.684579 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Feb 13 20:16:26.695556 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Feb 13 20:16:26.706527 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Feb 13 20:16:26.717526 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 13 20:16:26.717542 systemd[1]: Reached target paths.target - Path Units. Feb 13 20:16:26.725522 systemd[1]: Reached target time-set.target - System Time Set. Feb 13 20:16:26.735603 systemd[1]: Started logrotate.timer - Daily rotation of log files. Feb 13 20:16:26.745599 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Feb 13 20:16:26.756517 systemd[1]: Reached target timers.target - Timer Units. Feb 13 20:16:26.765455 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Feb 13 20:16:26.776548 systemd[1]: Starting docker.socket - Docker Socket for the API... Feb 13 20:16:26.792941 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Feb 13 20:16:26.808116 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Feb 13 20:16:26.820278 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Feb 13 20:16:26.822456 lvm[1748]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 20:16:26.831896 systemd[1]: Listening on docker.socket - Docker Socket for the API. Feb 13 20:16:26.845973 systemd[1]: Reached target sockets.target - Socket Units. 
Feb 13 20:16:26.846492 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: Link up Feb 13 20:16:26.860458 kernel: bond0: (slave enp1s0f0np0): Enslaving as a backup interface with an up link Feb 13 20:16:26.863560 systemd[1]: Reached target basic.target - Basic System. Feb 13 20:16:26.865067 systemd-networkd[1706]: enp1s0f1np1: Configuring with /etc/systemd/network/10-1c:34:da:5c:19:e9.network. Feb 13 20:16:26.871589 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Feb 13 20:16:26.871614 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Feb 13 20:16:26.888577 systemd[1]: Starting containerd.service - containerd container runtime... Feb 13 20:16:26.899313 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Feb 13 20:16:26.909170 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Feb 13 20:16:26.918287 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Feb 13 20:16:26.922213 coreos-metadata[1751]: Feb 13 20:16:26.922 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Feb 13 20:16:26.923043 coreos-metadata[1751]: Feb 13 20:16:26.922 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata) Feb 13 20:16:26.928236 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Feb 13 20:16:26.929521 dbus-daemon[1752]: [system] SELinux support is enabled Feb 13 20:16:26.929931 jq[1755]: false Feb 13 20:16:26.937708 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Feb 13 20:16:26.938340 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... 
Feb 13 20:16:26.945523 extend-filesystems[1757]: Found loop4 Feb 13 20:16:26.945523 extend-filesystems[1757]: Found loop5 Feb 13 20:16:26.963307 kernel: EXT4-fs (sda9): resizing filesystem from 553472 to 116605649 blocks Feb 13 20:16:26.948212 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Feb 13 20:16:26.963378 extend-filesystems[1757]: Found loop6 Feb 13 20:16:26.963378 extend-filesystems[1757]: Found loop7 Feb 13 20:16:26.963378 extend-filesystems[1757]: Found sda Feb 13 20:16:26.963378 extend-filesystems[1757]: Found sda1 Feb 13 20:16:26.963378 extend-filesystems[1757]: Found sda2 Feb 13 20:16:26.963378 extend-filesystems[1757]: Found sda3 Feb 13 20:16:26.963378 extend-filesystems[1757]: Found usr Feb 13 20:16:26.963378 extend-filesystems[1757]: Found sda4 Feb 13 20:16:26.963378 extend-filesystems[1757]: Found sda6 Feb 13 20:16:26.963378 extend-filesystems[1757]: Found sda7 Feb 13 20:16:26.963378 extend-filesystems[1757]: Found sda9 Feb 13 20:16:26.963378 extend-filesystems[1757]: Checking size of /dev/sda9 Feb 13 20:16:26.963378 extend-filesystems[1757]: Resized partition /dev/sda9 Feb 13 20:16:27.143643 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1615) Feb 13 20:16:27.143670 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: Link up Feb 13 20:16:27.143830 kernel: bond0: (slave enp1s0f1np1): Enslaving as a backup interface with an up link Feb 13 20:16:27.143849 kernel: bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond Feb 13 20:16:27.143864 kernel: bond0: (slave enp1s0f0np0): link status definitely up, 10000 Mbps full duplex Feb 13 20:16:27.143884 kernel: bond0: active interface up! Feb 13 20:16:27.143993 extend-filesystems[1765]: resize2fs 1.47.1 (20-May-2024) Feb 13 20:16:26.963925 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... 
Feb 13 20:16:27.014694 systemd-networkd[1706]: bond0: Configuring with /etc/systemd/network/05-bond0.network. Feb 13 20:16:27.016222 systemd-networkd[1706]: enp1s0f0np0: Link UP Feb 13 20:16:27.016475 systemd-networkd[1706]: enp1s0f0np0: Gained carrier Feb 13 20:16:27.177936 sshd_keygen[1780]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 13 20:16:27.037887 systemd-networkd[1706]: enp1s0f1np1: Reconfiguring with /etc/systemd/network/10-1c:34:da:5c:19:e8.network. Feb 13 20:16:27.178010 update_engine[1782]: I20250213 20:16:27.151263 1782 main.cc:92] Flatcar Update Engine starting Feb 13 20:16:27.178010 update_engine[1782]: I20250213 20:16:27.152109 1782 update_check_scheduler.cc:74] Next update check in 3m20s Feb 13 20:16:27.038066 systemd-networkd[1706]: enp1s0f1np1: Link UP Feb 13 20:16:27.178183 jq[1783]: true Feb 13 20:16:27.038307 systemd-networkd[1706]: enp1s0f1np1: Gained carrier Feb 13 20:16:27.049849 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Feb 13 20:16:27.057625 systemd-networkd[1706]: bond0: Link UP Feb 13 20:16:27.057887 systemd-networkd[1706]: bond0: Gained carrier Feb 13 20:16:27.058047 systemd-timesyncd[1708]: Network configuration changed, trying to establish connection. Feb 13 20:16:27.058487 systemd-timesyncd[1708]: Network configuration changed, trying to establish connection. Feb 13 20:16:27.058769 systemd-timesyncd[1708]: Network configuration changed, trying to establish connection. Feb 13 20:16:27.058882 systemd-timesyncd[1708]: Network configuration changed, trying to establish connection. Feb 13 20:16:27.060858 systemd[1]: Starting systemd-logind.service - User Login Management... Feb 13 20:16:27.082328 systemd[1]: Starting tcsd.service - TCG Core Services Daemon... Feb 13 20:16:27.109805 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. 
Feb 13 20:16:27.110188 systemd[1]: Starting update-engine.service - Update Engine... Feb 13 20:16:27.110980 systemd-logind[1777]: Watching system buttons on /dev/input/event3 (Power Button) Feb 13 20:16:27.110993 systemd-logind[1777]: Watching system buttons on /dev/input/event2 (Sleep Button) Feb 13 20:16:27.111004 systemd-logind[1777]: Watching system buttons on /dev/input/event0 (HID 0557:2419) Feb 13 20:16:27.111163 systemd-logind[1777]: New seat seat0. Feb 13 20:16:27.117224 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Feb 13 20:16:27.143828 systemd[1]: Started dbus.service - D-Bus System Message Bus. Feb 13 20:16:27.170899 systemd[1]: Started systemd-logind.service - User Login Management. Feb 13 20:16:27.188780 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Feb 13 20:16:27.207702 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 13 20:16:27.207799 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Feb 13 20:16:27.207971 systemd[1]: motdgen.service: Deactivated successfully. Feb 13 20:16:27.208060 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Feb 13 20:16:27.217995 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 13 20:16:27.218083 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Feb 13 20:16:27.228687 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. 
Feb 13 20:16:27.241423 (ntainerd)[1795]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Feb 13 20:16:27.242981 jq[1793]: true Feb 13 20:16:27.245363 dbus-daemon[1752]: [system] Successfully activated service 'org.freedesktop.systemd1' Feb 13 20:16:27.246567 tar[1792]: linux-amd64/helm Feb 13 20:16:27.251458 kernel: bond0: (slave enp1s0f1np1): link status definitely up, 10000 Mbps full duplex Feb 13 20:16:27.253380 systemd[1]: tcsd.service: Skipped due to 'exec-condition'. Feb 13 20:16:27.253484 systemd[1]: Condition check resulted in tcsd.service - TCG Core Services Daemon being skipped. Feb 13 20:16:27.255900 systemd[1]: Started update-engine.service - Update Engine. Feb 13 20:16:27.267757 systemd[1]: Starting issuegen.service - Generate /run/issue... Feb 13 20:16:27.275547 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 13 20:16:27.275638 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Feb 13 20:16:27.286599 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 13 20:16:27.286680 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Feb 13 20:16:27.312619 systemd[1]: Started locksmithd.service - Cluster reboot manager. Feb 13 20:16:27.324487 systemd[1]: issuegen.service: Deactivated successfully. Feb 13 20:16:27.324589 systemd[1]: Finished issuegen.service - Generate /run/issue. Feb 13 20:16:27.324757 bash[1824]: Updated "/home/core/.ssh/authorized_keys" Feb 13 20:16:27.334797 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. 
Feb 13 20:16:27.347337 systemd[1]: Starting sshkeys.service... Feb 13 20:16:27.347601 locksmithd[1831]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 13 20:16:27.355429 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Feb 13 20:16:27.369160 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Feb 13 20:16:27.381510 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Feb 13 20:16:27.392966 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Feb 13 20:16:27.403440 coreos-metadata[1845]: Feb 13 20:16:27.403 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Feb 13 20:16:27.405598 systemd[1]: Started getty@tty1.service - Getty on tty1. Feb 13 20:16:27.414551 systemd[1]: Started serial-getty@ttyS1.service - Serial Getty on ttyS1. Feb 13 20:16:27.424713 systemd[1]: Reached target getty.target - Login Prompts. Feb 13 20:16:27.435054 containerd[1795]: time="2025-02-13T20:16:27.434970828Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Feb 13 20:16:27.447405 containerd[1795]: time="2025-02-13T20:16:27.447380647Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 13 20:16:27.448219 containerd[1795]: time="2025-02-13T20:16:27.448202609Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 13 20:16:27.448252 containerd[1795]: time="2025-02-13T20:16:27.448230502Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." 
type=io.containerd.event.v1 Feb 13 20:16:27.448252 containerd[1795]: time="2025-02-13T20:16:27.448240887Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 13 20:16:27.448342 containerd[1795]: time="2025-02-13T20:16:27.448334060Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Feb 13 20:16:27.448365 containerd[1795]: time="2025-02-13T20:16:27.448344459Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Feb 13 20:16:27.448384 containerd[1795]: time="2025-02-13T20:16:27.448377649Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 20:16:27.448400 containerd[1795]: time="2025-02-13T20:16:27.448385360Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 13 20:16:27.448491 containerd[1795]: time="2025-02-13T20:16:27.448480867Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 20:16:27.448491 containerd[1795]: time="2025-02-13T20:16:27.448490224Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 13 20:16:27.448539 containerd[1795]: time="2025-02-13T20:16:27.448498269Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 20:16:27.448539 containerd[1795]: time="2025-02-13T20:16:27.448503504Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." 
type=io.containerd.snapshotter.v1 Feb 13 20:16:27.448588 containerd[1795]: time="2025-02-13T20:16:27.448543349Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 13 20:16:27.448813 containerd[1795]: time="2025-02-13T20:16:27.448804590Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 13 20:16:27.448865 containerd[1795]: time="2025-02-13T20:16:27.448856500Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 20:16:27.448865 containerd[1795]: time="2025-02-13T20:16:27.448864851Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 13 20:16:27.448912 containerd[1795]: time="2025-02-13T20:16:27.448904487Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 13 20:16:27.448938 containerd[1795]: time="2025-02-13T20:16:27.448931212Z" level=info msg="metadata content store policy set" policy=shared Feb 13 20:16:27.460076 containerd[1795]: time="2025-02-13T20:16:27.460036274Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 13 20:16:27.460076 containerd[1795]: time="2025-02-13T20:16:27.460058044Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 13 20:16:27.460076 containerd[1795]: time="2025-02-13T20:16:27.460067071Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Feb 13 20:16:27.460076 containerd[1795]: time="2025-02-13T20:16:27.460075686Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." 
type=io.containerd.streaming.v1 Feb 13 20:16:27.460164 containerd[1795]: time="2025-02-13T20:16:27.460084200Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 13 20:16:27.460164 containerd[1795]: time="2025-02-13T20:16:27.460150414Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 13 20:16:27.460327 containerd[1795]: time="2025-02-13T20:16:27.460282901Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 13 20:16:27.460354 containerd[1795]: time="2025-02-13T20:16:27.460339976Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Feb 13 20:16:27.460354 containerd[1795]: time="2025-02-13T20:16:27.460349416Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Feb 13 20:16:27.460401 containerd[1795]: time="2025-02-13T20:16:27.460357182Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Feb 13 20:16:27.460401 containerd[1795]: time="2025-02-13T20:16:27.460365578Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 13 20:16:27.460401 containerd[1795]: time="2025-02-13T20:16:27.460372850Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 13 20:16:27.460401 containerd[1795]: time="2025-02-13T20:16:27.460379494Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 13 20:16:27.460401 containerd[1795]: time="2025-02-13T20:16:27.460386285Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." 
type=io.containerd.service.v1 Feb 13 20:16:27.460401 containerd[1795]: time="2025-02-13T20:16:27.460393957Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 13 20:16:27.460401 containerd[1795]: time="2025-02-13T20:16:27.460400647Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 13 20:16:27.460503 containerd[1795]: time="2025-02-13T20:16:27.460407363Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 13 20:16:27.460503 containerd[1795]: time="2025-02-13T20:16:27.460413707Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 13 20:16:27.460503 containerd[1795]: time="2025-02-13T20:16:27.460429282Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 13 20:16:27.460503 containerd[1795]: time="2025-02-13T20:16:27.460438154Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 13 20:16:27.460503 containerd[1795]: time="2025-02-13T20:16:27.460445472Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 13 20:16:27.460503 containerd[1795]: time="2025-02-13T20:16:27.460462869Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 13 20:16:27.460503 containerd[1795]: time="2025-02-13T20:16:27.460476085Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 13 20:16:27.460503 containerd[1795]: time="2025-02-13T20:16:27.460483795Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 13 20:16:27.460503 containerd[1795]: time="2025-02-13T20:16:27.460490069Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." 
type=io.containerd.grpc.v1 Feb 13 20:16:27.460503 containerd[1795]: time="2025-02-13T20:16:27.460497008Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 13 20:16:27.460643 containerd[1795]: time="2025-02-13T20:16:27.460504217Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Feb 13 20:16:27.460643 containerd[1795]: time="2025-02-13T20:16:27.460512372Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Feb 13 20:16:27.460643 containerd[1795]: time="2025-02-13T20:16:27.460519586Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 13 20:16:27.460643 containerd[1795]: time="2025-02-13T20:16:27.460525945Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Feb 13 20:16:27.460643 containerd[1795]: time="2025-02-13T20:16:27.460532687Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 13 20:16:27.460643 containerd[1795]: time="2025-02-13T20:16:27.460540657Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Feb 13 20:16:27.460643 containerd[1795]: time="2025-02-13T20:16:27.460551765Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Feb 13 20:16:27.460643 containerd[1795]: time="2025-02-13T20:16:27.460559548Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 13 20:16:27.460643 containerd[1795]: time="2025-02-13T20:16:27.460565179Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 13 20:16:27.460643 containerd[1795]: time="2025-02-13T20:16:27.460588565Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
type=io.containerd.tracing.processor.v1 Feb 13 20:16:27.460643 containerd[1795]: time="2025-02-13T20:16:27.460597322Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Feb 13 20:16:27.460643 containerd[1795]: time="2025-02-13T20:16:27.460603565Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 13 20:16:27.460643 containerd[1795]: time="2025-02-13T20:16:27.460609978Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Feb 13 20:16:27.460818 containerd[1795]: time="2025-02-13T20:16:27.460615094Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 13 20:16:27.460818 containerd[1795]: time="2025-02-13T20:16:27.460621744Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Feb 13 20:16:27.460818 containerd[1795]: time="2025-02-13T20:16:27.460627999Z" level=info msg="NRI interface is disabled by configuration." Feb 13 20:16:27.460818 containerd[1795]: time="2025-02-13T20:16:27.460633879Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Feb 13 20:16:27.460872 containerd[1795]: time="2025-02-13T20:16:27.460805156Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false 
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 13 20:16:27.460872 containerd[1795]: time="2025-02-13T20:16:27.460832115Z" level=info msg="Connect containerd service" Feb 13 20:16:27.460872 containerd[1795]: time="2025-02-13T20:16:27.460849026Z" level=info msg="using legacy CRI server" Feb 13 20:16:27.460872 containerd[1795]: time="2025-02-13T20:16:27.460853178Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Feb 13 20:16:27.460999 containerd[1795]: time="2025-02-13T20:16:27.460908034Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 13 20:16:27.461249 containerd[1795]: time="2025-02-13T20:16:27.461204806Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 20:16:27.461475 containerd[1795]: time="2025-02-13T20:16:27.461421525Z" level=info msg="Start subscribing containerd event" Feb 13 20:16:27.461517 containerd[1795]: time="2025-02-13T20:16:27.461493665Z" level=info msg="Start recovering state" Feb 13 20:16:27.461742 containerd[1795]: time="2025-02-13T20:16:27.461568184Z" level=info msg="Start event monitor" Feb 13 20:16:27.461762 containerd[1795]: time="2025-02-13T20:16:27.461729297Z" level=info 
msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 13 20:16:27.461802 containerd[1795]: time="2025-02-13T20:16:27.461794943Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 13 20:16:27.461976 containerd[1795]: time="2025-02-13T20:16:27.461964692Z" level=info msg="Start snapshots syncer" Feb 13 20:16:27.461995 containerd[1795]: time="2025-02-13T20:16:27.461978500Z" level=info msg="Start cni network conf syncer for default" Feb 13 20:16:27.461995 containerd[1795]: time="2025-02-13T20:16:27.461984010Z" level=info msg="Start streaming server" Feb 13 20:16:27.462028 containerd[1795]: time="2025-02-13T20:16:27.462017480Z" level=info msg="containerd successfully booted in 0.027517s" Feb 13 20:16:27.462077 systemd[1]: Started containerd.service - containerd container runtime. Feb 13 20:16:27.480452 kernel: EXT4-fs (sda9): resized filesystem to 116605649 Feb 13 20:16:27.501231 extend-filesystems[1765]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Feb 13 20:16:27.501231 extend-filesystems[1765]: old_desc_blocks = 1, new_desc_blocks = 56 Feb 13 20:16:27.501231 extend-filesystems[1765]: The filesystem on /dev/sda9 is now 116605649 (4k) blocks long. Feb 13 20:16:27.532537 extend-filesystems[1757]: Resized filesystem in /dev/sda9 Feb 13 20:16:27.532537 extend-filesystems[1757]: Found sdb Feb 13 20:16:27.502229 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 13 20:16:27.552633 tar[1792]: linux-amd64/LICENSE Feb 13 20:16:27.552633 tar[1792]: linux-amd64/README.md Feb 13 20:16:27.502363 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Feb 13 20:16:27.561780 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
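The resize entries above report `/dev/sda9` growing from 553472 to 116605649 ext4 blocks at the 4 KiB block size resize2fs prints as "(4k)". A minimal sketch of what those block counts mean in bytes (values taken from the log; the helper name is mine, not from any tool here):

```python
# Convert the ext4 block counts reported by resize2fs above into sizes.
# Block counts come from the log; 4096 is the "(4k)" block size it reports.
BLOCK_SIZE = 4096

def blocks_to_gib(blocks: int) -> float:
    """Convert an ext4 block count at a 4 KiB block size to GiB."""
    return blocks * BLOCK_SIZE / 2**30

old_blocks = 553_472      # /dev/sda9 before the online resize
new_blocks = 116_605_649  # /dev/sda9 after the online resize

print(f"before: {blocks_to_gib(old_blocks):.2f} GiB")
print(f"after:  {blocks_to_gib(new_blocks):.2f} GiB")
```

This is the usual Flatcar first-boot pattern: the root partition ships small and is grown online to fill the disk, here from roughly 2 GiB to roughly 445 GiB.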
Feb 13 20:16:27.923149 coreos-metadata[1751]: Feb 13 20:16:27.923 INFO Fetching https://metadata.packet.net/metadata: Attempt #2 Feb 13 20:16:28.769602 systemd-networkd[1706]: bond0: Gained IPv6LL Feb 13 20:16:28.769897 systemd-timesyncd[1708]: Network configuration changed, trying to establish connection. Feb 13 20:16:28.897663 systemd-timesyncd[1708]: Network configuration changed, trying to establish connection. Feb 13 20:16:28.897769 systemd-timesyncd[1708]: Network configuration changed, trying to establish connection. Feb 13 20:16:28.899009 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Feb 13 20:16:28.910849 systemd[1]: Reached target network-online.target - Network is Online. Feb 13 20:16:28.929580 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:16:28.940059 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Feb 13 20:16:28.957880 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Feb 13 20:16:29.592291 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Feb 13 20:16:29.603923 (kubelet)[1885]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 20:16:29.667172 kernel: mlx5_core 0000:01:00.0: lag map: port 1:1 port 2:2 Feb 13 20:16:29.667284 kernel: mlx5_core 0000:01:00.0: shared_fdb:0 mode:queue_affinity Feb 13 20:16:30.103713 kubelet[1885]: E0213 20:16:30.103660 1885 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 20:16:30.104828 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 20:16:30.104906 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 20:16:30.842213 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Feb 13 20:16:30.859846 systemd[1]: Started sshd@0-147.75.90.163:22-139.178.68.195:49052.service - OpenSSH per-connection server daemon (139.178.68.195:49052). Feb 13 20:16:30.916603 sshd[1908]: Accepted publickey for core from 139.178.68.195 port 49052 ssh2: RSA SHA256:oUDdG+WEMOtgWcJIqrYZLULMXB2a3NPP3tsueyJY4Nc Feb 13 20:16:30.917271 sshd-session[1908]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:16:30.922988 systemd-logind[1777]: New session 1 of user core. Feb 13 20:16:30.923806 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Feb 13 20:16:30.950898 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Feb 13 20:16:30.964426 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Feb 13 20:16:30.992929 systemd[1]: Starting user@500.service - User Manager for UID 500... 
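The kubelet exit above is expected on first boot: `/var/lib/kubelet/config.yaml` is typically only written once `kubeadm init` or `kubeadm join` runs, so until then the unit fails and systemd keeps retrying. A sketch of the precondition being checked (the function and the temp-dir setup are illustrative, not kubelet's actual code):

```python
import os
import tempfile

def kubelet_config_ready(root: str) -> bool:
    """Illustrative check: does the bootstrap-written kubelet config
    file exist under the given root and contain anything?"""
    path = os.path.join(root, "var/lib/kubelet/config.yaml")
    return os.path.isfile(path) and os.path.getsize(path) > 0

with tempfile.TemporaryDirectory() as root:
    # First boot: kubeadm has not run yet, so the file is missing.
    assert kubelet_config_ready(root) is False

    # After bootstrap, the file would exist and the check passes.
    cfg_dir = os.path.join(root, "var/lib/kubelet")
    os.makedirs(cfg_dir)
    with open(os.path.join(cfg_dir, "config.yaml"), "w") as f:
        f.write("kind: KubeletConfiguration\n")
    assert kubelet_config_ready(root) is True
```

So the `status=1/FAILURE` lines here are noise until the node is joined to a cluster, at which point the same unit starts cleanly.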
Feb 13 20:16:31.006354 (systemd)[1912]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 13 20:16:31.091346 systemd[1912]: Queued start job for default target default.target. Feb 13 20:16:31.099638 coreos-metadata[1751]: Feb 13 20:16:31.092 INFO Fetch successful Feb 13 20:16:31.099991 systemd[1912]: Created slice app.slice - User Application Slice. Feb 13 20:16:31.100006 systemd[1912]: Reached target paths.target - Paths. Feb 13 20:16:31.100014 systemd[1912]: Reached target timers.target - Timers. Feb 13 20:16:31.100681 systemd[1912]: Starting dbus.socket - D-Bus User Message Bus Socket... Feb 13 20:16:31.106297 systemd[1912]: Listening on dbus.socket - D-Bus User Message Bus Socket. Feb 13 20:16:31.106325 systemd[1912]: Reached target sockets.target - Sockets. Feb 13 20:16:31.106335 systemd[1912]: Reached target basic.target - Basic System. Feb 13 20:16:31.106355 systemd[1912]: Reached target default.target - Main User Target. Feb 13 20:16:31.106370 systemd[1912]: Startup finished in 92ms. Feb 13 20:16:31.106557 systemd[1]: Started user@500.service - User Manager for UID 500. Feb 13 20:16:31.107205 coreos-metadata[1845]: Feb 13 20:16:31.107 INFO Fetch successful Feb 13 20:16:31.130611 systemd[1]: Started session-1.scope - Session 1 of User core. Feb 13 20:16:31.140359 unknown[1845]: wrote ssh authorized keys file for user: core Feb 13 20:16:31.169206 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Feb 13 20:16:31.176355 update-ssh-keys[1921]: Updated "/home/core/.ssh/authorized_keys" Feb 13 20:16:31.180104 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Feb 13 20:16:31.192405 systemd[1]: Finished sshkeys.service. Feb 13 20:16:31.215716 systemd[1]: Starting packet-phone-home.service - Report Success to Packet... 
Feb 13 20:16:31.229434 systemd[1]: Started sshd@1-147.75.90.163:22-139.178.68.195:49056.service - OpenSSH per-connection server daemon (139.178.68.195:49056). Feb 13 20:16:31.267502 sshd[1933]: Accepted publickey for core from 139.178.68.195 port 49056 ssh2: RSA SHA256:oUDdG+WEMOtgWcJIqrYZLULMXB2a3NPP3tsueyJY4Nc Feb 13 20:16:31.268137 sshd-session[1933]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:16:31.270696 systemd-logind[1777]: New session 2 of user core. Feb 13 20:16:31.281583 systemd[1]: Started session-2.scope - Session 2 of User core. Feb 13 20:16:31.351491 sshd[1935]: Connection closed by 139.178.68.195 port 49056 Feb 13 20:16:31.351587 sshd-session[1933]: pam_unix(sshd:session): session closed for user core Feb 13 20:16:31.368129 systemd[1]: sshd@1-147.75.90.163:22-139.178.68.195:49056.service: Deactivated successfully. Feb 13 20:16:31.368921 systemd[1]: session-2.scope: Deactivated successfully. Feb 13 20:16:31.369719 systemd-logind[1777]: Session 2 logged out. Waiting for processes to exit. Feb 13 20:16:31.370417 systemd[1]: Started sshd@2-147.75.90.163:22-139.178.68.195:49062.service - OpenSSH per-connection server daemon (139.178.68.195:49062). Feb 13 20:16:31.383440 systemd-logind[1777]: Removed session 2. Feb 13 20:16:31.420299 sshd[1940]: Accepted publickey for core from 139.178.68.195 port 49062 ssh2: RSA SHA256:oUDdG+WEMOtgWcJIqrYZLULMXB2a3NPP3tsueyJY4Nc Feb 13 20:16:31.421551 sshd-session[1940]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:16:31.425788 systemd-logind[1777]: New session 3 of user core. Feb 13 20:16:31.439794 systemd[1]: Started session-3.scope - Session 3 of User core. 
Feb 13 20:16:31.519750 sshd[1942]: Connection closed by 139.178.68.195 port 49062 Feb 13 20:16:31.520515 sshd-session[1940]: pam_unix(sshd:session): session closed for user core Feb 13 20:16:31.526859 systemd[1]: sshd@2-147.75.90.163:22-139.178.68.195:49062.service: Deactivated successfully. Feb 13 20:16:31.530513 systemd[1]: session-3.scope: Deactivated successfully. Feb 13 20:16:31.532001 systemd-logind[1777]: Session 3 logged out. Waiting for processes to exit. Feb 13 20:16:31.532585 systemd-logind[1777]: Removed session 3. Feb 13 20:16:31.599311 systemd[1]: Finished packet-phone-home.service - Report Success to Packet. Feb 13 20:16:31.614315 systemd[1]: Reached target multi-user.target - Multi-User System. Feb 13 20:16:31.624248 systemd[1]: Startup finished in 2.678s (kernel) + 20.413s (initrd) + 8.446s (userspace) = 31.538s. Feb 13 20:16:31.647991 login[1851]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Feb 13 20:16:31.650765 systemd-logind[1777]: New session 4 of user core. Feb 13 20:16:31.651392 systemd[1]: Started session-4.scope - Session 4 of User core. Feb 13 20:16:31.654055 login[1850]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Feb 13 20:16:31.656310 systemd-logind[1777]: New session 5 of user core. Feb 13 20:16:31.656886 systemd[1]: Started session-5.scope - Session 5 of User core. Feb 13 20:16:40.314426 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 13 20:16:40.328709 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:16:40.569377 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
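The startup summary above breaks total boot time into kernel, initrd, and userspace phases. The components sum to the reported total up to millisecond rounding, since systemd rounds each phase independently; a quick check:

```python
# Phase timings as printed by systemd in the startup summary (seconds).
kernel, initrd, userspace = 2.678, 20.413, 8.446
total_reported = 31.538

total = kernel + initrd + userspace
# Each phase is rounded to the millisecond on its own, so allow ~1-2 ms slack.
assert abs(total - total_reported) < 0.002
print(f"computed {total:.3f}s vs reported {total_reported:.3f}s")
```

The 20-second initrd phase dominates here, consistent with the verity/Ignition first-boot work and network bring-up visible earlier in the log.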
Feb 13 20:16:40.571637 (kubelet)[1984]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 20:16:40.592521 kubelet[1984]: E0213 20:16:40.592474 1984 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 20:16:40.594507 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 20:16:40.594585 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 20:16:41.544612 systemd[1]: Started sshd@3-147.75.90.163:22-139.178.68.195:39518.service - OpenSSH per-connection server daemon (139.178.68.195:39518). Feb 13 20:16:41.570642 sshd[2002]: Accepted publickey for core from 139.178.68.195 port 39518 ssh2: RSA SHA256:oUDdG+WEMOtgWcJIqrYZLULMXB2a3NPP3tsueyJY4Nc Feb 13 20:16:41.571310 sshd-session[2002]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:16:41.573899 systemd-logind[1777]: New session 6 of user core. Feb 13 20:16:41.588693 systemd[1]: Started session-6.scope - Session 6 of User core. Feb 13 20:16:41.641352 sshd[2004]: Connection closed by 139.178.68.195 port 39518 Feb 13 20:16:41.641546 sshd-session[2002]: pam_unix(sshd:session): session closed for user core Feb 13 20:16:41.658146 systemd[1]: sshd@3-147.75.90.163:22-139.178.68.195:39518.service: Deactivated successfully. Feb 13 20:16:41.658960 systemd[1]: session-6.scope: Deactivated successfully. Feb 13 20:16:41.659708 systemd-logind[1777]: Session 6 logged out. Waiting for processes to exit. Feb 13 20:16:41.660405 systemd[1]: Started sshd@4-147.75.90.163:22-139.178.68.195:39528.service - OpenSSH per-connection server daemon (139.178.68.195:39528). 
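The roughly ten-second gap between the first kubelet failure (20:16:30) and the scheduled restart (20:16:40, "restart counter is at 1") reflects the unit's restart delay; a `RestartSec=10` setting would match, though the unit file itself is not shown in this log. A sketch computing the interval from the logged timestamps:

```python
from datetime import datetime

# Timestamps copied from the log: the first kubelet failure and the
# systemd-scheduled restart about ten seconds later.
fmt = "%b %d %H:%M:%S.%f"
failed    = datetime.strptime("Feb 13 20:16:30.104906", fmt)
restarted = datetime.strptime("Feb 13 20:16:40.314426", fmt)

delay = (restarted - failed).total_seconds()
print(f"restart delay: {delay:.1f}s")
```

With the config file still missing, the restarted kubelet fails the same way here, and systemd will keep cycling it on the same interval until the node is bootstrapped.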
Feb 13 20:16:41.660960 systemd-logind[1777]: Removed session 6.
Feb 13 20:16:41.694462 sshd[2009]: Accepted publickey for core from 139.178.68.195 port 39528 ssh2: RSA SHA256:oUDdG+WEMOtgWcJIqrYZLULMXB2a3NPP3tsueyJY4Nc
Feb 13 20:16:41.695402 sshd-session[2009]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:16:41.699043 systemd-logind[1777]: New session 7 of user core.
Feb 13 20:16:41.710726 systemd[1]: Started session-7.scope - Session 7 of User core.
Feb 13 20:16:41.765255 sshd[2011]: Connection closed by 139.178.68.195 port 39528
Feb 13 20:16:41.766008 sshd-session[2009]: pam_unix(sshd:session): session closed for user core
Feb 13 20:16:41.784263 systemd[1]: sshd@4-147.75.90.163:22-139.178.68.195:39528.service: Deactivated successfully.
Feb 13 20:16:41.787880 systemd[1]: session-7.scope: Deactivated successfully.
Feb 13 20:16:41.791426 systemd-logind[1777]: Session 7 logged out. Waiting for processes to exit.
Feb 13 20:16:41.802190 systemd[1]: Started sshd@5-147.75.90.163:22-139.178.68.195:39534.service - OpenSSH per-connection server daemon (139.178.68.195:39534).
Feb 13 20:16:41.804739 systemd-logind[1777]: Removed session 7.
Feb 13 20:16:41.859983 sshd[2016]: Accepted publickey for core from 139.178.68.195 port 39534 ssh2: RSA SHA256:oUDdG+WEMOtgWcJIqrYZLULMXB2a3NPP3tsueyJY4Nc
Feb 13 20:16:41.860828 sshd-session[2016]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:16:41.864003 systemd-logind[1777]: New session 8 of user core.
Feb 13 20:16:41.874674 systemd[1]: Started session-8.scope - Session 8 of User core.
Feb 13 20:16:41.937811 sshd[2018]: Connection closed by 139.178.68.195 port 39534
Feb 13 20:16:41.938526 sshd-session[2016]: pam_unix(sshd:session): session closed for user core
Feb 13 20:16:41.952269 systemd[1]: sshd@5-147.75.90.163:22-139.178.68.195:39534.service: Deactivated successfully.
Feb 13 20:16:41.955900 systemd[1]: session-8.scope: Deactivated successfully.
Feb 13 20:16:41.959339 systemd-logind[1777]: Session 8 logged out. Waiting for processes to exit.
Feb 13 20:16:41.974182 systemd[1]: Started sshd@6-147.75.90.163:22-139.178.68.195:39542.service - OpenSSH per-connection server daemon (139.178.68.195:39542).
Feb 13 20:16:41.976724 systemd-logind[1777]: Removed session 8.
Feb 13 20:16:42.024910 sshd[2023]: Accepted publickey for core from 139.178.68.195 port 39542 ssh2: RSA SHA256:oUDdG+WEMOtgWcJIqrYZLULMXB2a3NPP3tsueyJY4Nc
Feb 13 20:16:42.025858 sshd-session[2023]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:16:42.029394 systemd-logind[1777]: New session 9 of user core.
Feb 13 20:16:42.050751 systemd[1]: Started session-9.scope - Session 9 of User core.
Feb 13 20:16:42.113265 sudo[2026]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Feb 13 20:16:42.113413 sudo[2026]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Feb 13 20:16:42.137510 sudo[2026]: pam_unix(sudo:session): session closed for user root
Feb 13 20:16:42.138445 sshd[2025]: Connection closed by 139.178.68.195 port 39542
Feb 13 20:16:42.138684 sshd-session[2023]: pam_unix(sshd:session): session closed for user core
Feb 13 20:16:42.164545 systemd[1]: sshd@6-147.75.90.163:22-139.178.68.195:39542.service: Deactivated successfully.
Feb 13 20:16:42.166127 systemd[1]: session-9.scope: Deactivated successfully.
Feb 13 20:16:42.167612 systemd-logind[1777]: Session 9 logged out. Waiting for processes to exit.
Feb 13 20:16:42.169138 systemd[1]: Started sshd@7-147.75.90.163:22-139.178.68.195:39544.service - OpenSSH per-connection server daemon (139.178.68.195:39544).
Feb 13 20:16:42.170265 systemd-logind[1777]: Removed session 9.
Feb 13 20:16:42.206916 sshd[2031]: Accepted publickey for core from 139.178.68.195 port 39544 ssh2: RSA SHA256:oUDdG+WEMOtgWcJIqrYZLULMXB2a3NPP3tsueyJY4Nc
Feb 13 20:16:42.207657 sshd-session[2031]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:16:42.210253 systemd-logind[1777]: New session 10 of user core.
Feb 13 20:16:42.223674 systemd[1]: Started session-10.scope - Session 10 of User core.
Feb 13 20:16:42.276179 sudo[2035]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Feb 13 20:16:42.276334 sudo[2035]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Feb 13 20:16:42.278256 sudo[2035]: pam_unix(sudo:session): session closed for user root
Feb 13 20:16:42.280957 sudo[2034]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Feb 13 20:16:42.281112 sudo[2034]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Feb 13 20:16:42.306044 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Feb 13 20:16:42.340322 augenrules[2057]: No rules
Feb 13 20:16:42.341971 systemd[1]: audit-rules.service: Deactivated successfully.
Feb 13 20:16:42.342386 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Feb 13 20:16:42.344511 sudo[2034]: pam_unix(sudo:session): session closed for user root
Feb 13 20:16:42.346845 sshd[2033]: Connection closed by 139.178.68.195 port 39544
Feb 13 20:16:42.347601 sshd-session[2031]: pam_unix(sshd:session): session closed for user core
Feb 13 20:16:42.370330 systemd[1]: sshd@7-147.75.90.163:22-139.178.68.195:39544.service: Deactivated successfully.
Feb 13 20:16:42.373797 systemd[1]: session-10.scope: Deactivated successfully.
Feb 13 20:16:42.377073 systemd-logind[1777]: Session 10 logged out. Waiting for processes to exit.
Feb 13 20:16:42.379963 systemd[1]: Started sshd@8-147.75.90.163:22-139.178.68.195:39552.service - OpenSSH per-connection server daemon (139.178.68.195:39552).
Feb 13 20:16:42.382612 systemd-logind[1777]: Removed session 10.
Feb 13 20:16:42.444914 sshd[2065]: Accepted publickey for core from 139.178.68.195 port 39552 ssh2: RSA SHA256:oUDdG+WEMOtgWcJIqrYZLULMXB2a3NPP3tsueyJY4Nc
Feb 13 20:16:42.445756 sshd-session[2065]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:16:42.449126 systemd-logind[1777]: New session 11 of user core.
Feb 13 20:16:42.466869 systemd[1]: Started session-11.scope - Session 11 of User core.
Feb 13 20:16:42.520597 sudo[2068]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Feb 13 20:16:42.520748 sudo[2068]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Feb 13 20:16:42.958779 systemd[1]: Starting docker.service - Docker Application Container Engine...
Feb 13 20:16:42.958872 (dockerd)[2096]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Feb 13 20:16:43.213779 dockerd[2096]: time="2025-02-13T20:16:43.213672918Z" level=info msg="Starting up"
Feb 13 20:16:43.279137 dockerd[2096]: time="2025-02-13T20:16:43.279088638Z" level=info msg="Loading containers: start."
Feb 13 20:16:43.418455 kernel: Initializing XFRM netlink socket
Feb 13 20:16:43.434567 systemd-timesyncd[1708]: Network configuration changed, trying to establish connection.
Feb 13 20:16:44.552864 systemd-resolved[1707]: Clock change detected. Flushing caches.
Feb 13 20:16:44.552989 systemd-timesyncd[1708]: Contacted time server [2604:2dc0:202:300::140d]:123 (2.flatcar.pool.ntp.org).
Feb 13 20:16:44.553024 systemd-timesyncd[1708]: Initial clock synchronization to Thu 2025-02-13 20:16:44.552840 UTC.
Feb 13 20:16:44.590598 systemd-networkd[1706]: docker0: Link UP
Feb 13 20:16:44.621597 dockerd[2096]: time="2025-02-13T20:16:44.621564877Z" level=info msg="Loading containers: done."
Feb 13 20:16:44.632569 dockerd[2096]: time="2025-02-13T20:16:44.632548110Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Feb 13 20:16:44.632664 dockerd[2096]: time="2025-02-13T20:16:44.632603929Z" level=info msg="Docker daemon" commit=8b539b8df24032dabeaaa099cf1d0535ef0286a3 containerd-snapshotter=false storage-driver=overlay2 version=27.2.1
Feb 13 20:16:44.632700 dockerd[2096]: time="2025-02-13T20:16:44.632669545Z" level=info msg="Daemon has completed initialization"
Feb 13 20:16:44.648210 dockerd[2096]: time="2025-02-13T20:16:44.648178581Z" level=info msg="API listen on /run/docker.sock"
Feb 13 20:16:44.648402 systemd[1]: Started docker.service - Docker Application Container Engine.
Feb 13 20:16:45.425303 containerd[1795]: time="2025-02-13T20:16:45.425243708Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.10\""
Feb 13 20:16:45.984553 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2284505841.mount: Deactivated successfully.
Feb 13 20:16:46.858562 containerd[1795]: time="2025-02-13T20:16:46.858536139Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:16:46.858829 containerd[1795]: time="2025-02-13T20:16:46.858713086Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.10: active requests=0, bytes read=32678214"
Feb 13 20:16:46.859208 containerd[1795]: time="2025-02-13T20:16:46.859194254Z" level=info msg="ImageCreate event name:\"sha256:172a4e0b731db1008c5339e0b8ef232f5c55632099e37cccfb9ba786c19580c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:16:46.861131 containerd[1795]: time="2025-02-13T20:16:46.861116622Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:63b2b4b4e9b5dcb5b1b6cec9d5f5f538791a40cd8cb273ef530e6d6535aa0b43\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:16:46.861675 containerd[1795]: time="2025-02-13T20:16:46.861662402Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.10\" with image id \"sha256:172a4e0b731db1008c5339e0b8ef232f5c55632099e37cccfb9ba786c19580c4\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:63b2b4b4e9b5dcb5b1b6cec9d5f5f538791a40cd8cb273ef530e6d6535aa0b43\", size \"32675014\" in 1.43639673s"
Feb 13 20:16:46.861714 containerd[1795]: time="2025-02-13T20:16:46.861676624Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.10\" returns image reference \"sha256:172a4e0b731db1008c5339e0b8ef232f5c55632099e37cccfb9ba786c19580c4\""
Feb 13 20:16:46.873004 containerd[1795]: time="2025-02-13T20:16:46.872982654Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.10\""
Feb 13 20:16:47.950204 containerd[1795]: time="2025-02-13T20:16:47.950143661Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:16:47.950415 containerd[1795]: time="2025-02-13T20:16:47.950337437Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.10: active requests=0, bytes read=29611545"
Feb 13 20:16:47.950801 containerd[1795]: time="2025-02-13T20:16:47.950758917Z" level=info msg="ImageCreate event name:\"sha256:f81ad4d47d77570472cf20a1f6b008ece135be405b2f52f50ed6820f2b6f9a5f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:16:47.952345 containerd[1795]: time="2025-02-13T20:16:47.952305393Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:99b3336343ea48be24f1e64774825e9f8d5170bd2ed482ff336548eb824f5f58\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:16:47.952978 containerd[1795]: time="2025-02-13T20:16:47.952936776Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.10\" with image id \"sha256:f81ad4d47d77570472cf20a1f6b008ece135be405b2f52f50ed6820f2b6f9a5f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:99b3336343ea48be24f1e64774825e9f8d5170bd2ed482ff336548eb824f5f58\", size \"31058091\" in 1.079934068s"
Feb 13 20:16:47.952978 containerd[1795]: time="2025-02-13T20:16:47.952953207Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.10\" returns image reference \"sha256:f81ad4d47d77570472cf20a1f6b008ece135be405b2f52f50ed6820f2b6f9a5f\""
Feb 13 20:16:47.964673 containerd[1795]: time="2025-02-13T20:16:47.964610558Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.10\""
Feb 13 20:16:48.783154 containerd[1795]: time="2025-02-13T20:16:48.783099678Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:16:48.783360 containerd[1795]: time="2025-02-13T20:16:48.783316717Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.10: active requests=0, bytes read=17782130"
Feb 13 20:16:48.783697 containerd[1795]: time="2025-02-13T20:16:48.783655495Z" level=info msg="ImageCreate event name:\"sha256:64edffde4bf75617ad8fc73556d5e80d34b9425c79106b7f74b2059243b2ffe8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:16:48.785503 containerd[1795]: time="2025-02-13T20:16:48.785474526Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:cf7eb256192f1f51093fe278c209a9368f0675eb61ed01b148af47d2f21c002d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:16:48.786013 containerd[1795]: time="2025-02-13T20:16:48.785973128Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.10\" with image id \"sha256:64edffde4bf75617ad8fc73556d5e80d34b9425c79106b7f74b2059243b2ffe8\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:cf7eb256192f1f51093fe278c209a9368f0675eb61ed01b148af47d2f21c002d\", size \"19228694\" in 821.342826ms"
Feb 13 20:16:48.786013 containerd[1795]: time="2025-02-13T20:16:48.785988105Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.10\" returns image reference \"sha256:64edffde4bf75617ad8fc73556d5e80d34b9425c79106b7f74b2059243b2ffe8\""
Feb 13 20:16:48.797374 containerd[1795]: time="2025-02-13T20:16:48.797327775Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.10\""
Feb 13 20:16:49.568739 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2803680464.mount: Deactivated successfully.
Feb 13 20:16:49.737087 containerd[1795]: time="2025-02-13T20:16:49.737035291Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:16:49.737290 containerd[1795]: time="2025-02-13T20:16:49.737223512Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.10: active requests=0, bytes read=29057858"
Feb 13 20:16:49.737513 containerd[1795]: time="2025-02-13T20:16:49.737469980Z" level=info msg="ImageCreate event name:\"sha256:a21d1b47e857207628486a387f670f224051a16b74b06a1b76d07a96e738ab54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:16:49.738497 containerd[1795]: time="2025-02-13T20:16:49.738451692Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:d112e804e548fce28d9f1e3282c9ce54e374451e6a2c41b1ca9d7fca5d1fcc48\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:16:49.738886 containerd[1795]: time="2025-02-13T20:16:49.738842325Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.10\" with image id \"sha256:a21d1b47e857207628486a387f670f224051a16b74b06a1b76d07a96e738ab54\", repo tag \"registry.k8s.io/kube-proxy:v1.30.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:d112e804e548fce28d9f1e3282c9ce54e374451e6a2c41b1ca9d7fca5d1fcc48\", size \"29056877\" in 941.493267ms"
Feb 13 20:16:49.738886 containerd[1795]: time="2025-02-13T20:16:49.738860605Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.10\" returns image reference \"sha256:a21d1b47e857207628486a387f670f224051a16b74b06a1b76d07a96e738ab54\""
Feb 13 20:16:49.749390 containerd[1795]: time="2025-02-13T20:16:49.749341979Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Feb 13 20:16:50.252084 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1105219847.mount: Deactivated successfully.
Feb 13 20:16:50.714585 containerd[1795]: time="2025-02-13T20:16:50.714527070Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:16:50.714804 containerd[1795]: time="2025-02-13T20:16:50.714744292Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761"
Feb 13 20:16:50.715149 containerd[1795]: time="2025-02-13T20:16:50.715108929Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:16:50.716763 containerd[1795]: time="2025-02-13T20:16:50.716720737Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:16:50.717885 containerd[1795]: time="2025-02-13T20:16:50.717843061Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 968.479668ms"
Feb 13 20:16:50.717885 containerd[1795]: time="2025-02-13T20:16:50.717858941Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\""
Feb 13 20:16:50.729305 containerd[1795]: time="2025-02-13T20:16:50.729285291Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Feb 13 20:16:51.225275 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2096049250.mount: Deactivated successfully.
Feb 13 20:16:51.226699 containerd[1795]: time="2025-02-13T20:16:51.226679130Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:16:51.226864 containerd[1795]: time="2025-02-13T20:16:51.226852340Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290"
Feb 13 20:16:51.227321 containerd[1795]: time="2025-02-13T20:16:51.227309673Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:16:51.228440 containerd[1795]: time="2025-02-13T20:16:51.228430493Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:16:51.228956 containerd[1795]: time="2025-02-13T20:16:51.228904450Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 499.599064ms"
Feb 13 20:16:51.228956 containerd[1795]: time="2025-02-13T20:16:51.228918423Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\""
Feb 13 20:16:51.240696 containerd[1795]: time="2025-02-13T20:16:51.240644367Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\""
Feb 13 20:16:51.726155 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Feb 13 20:16:51.741696 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 20:16:51.742650 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount881153687.mount: Deactivated successfully.
Feb 13 20:16:51.959504 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 20:16:51.961895 (kubelet)[2539]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Feb 13 20:16:51.984390 kubelet[2539]: E0213 20:16:51.984299 2539 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Feb 13 20:16:51.985618 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 13 20:16:51.985693 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 13 20:16:52.987530 containerd[1795]: time="2025-02-13T20:16:52.987472904Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:16:52.987736 containerd[1795]: time="2025-02-13T20:16:52.987662134Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238571"
Feb 13 20:16:52.988121 containerd[1795]: time="2025-02-13T20:16:52.988081557Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:16:52.990072 containerd[1795]: time="2025-02-13T20:16:52.990033863Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:16:52.990622 containerd[1795]: time="2025-02-13T20:16:52.990584747Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 1.749902905s"
Feb 13 20:16:52.990622 containerd[1795]: time="2025-02-13T20:16:52.990601893Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\""
Feb 13 20:16:54.568926 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 20:16:54.591795 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 20:16:54.605004 systemd[1]: Reloading requested from client PID 2740 ('systemctl') (unit session-11.scope)...
Feb 13 20:16:54.605013 systemd[1]: Reloading...
Feb 13 20:16:54.642520 zram_generator::config[2779]: No configuration found.
Feb 13 20:16:54.708283 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 20:16:54.767193 systemd[1]: Reloading finished in 161 ms.
Feb 13 20:16:54.831404 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 20:16:54.835738 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 20:16:54.843416 systemd[1]: kubelet.service: Deactivated successfully.
Feb 13 20:16:54.843909 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 20:16:54.847678 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 20:16:55.108799 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 20:16:55.115844 (kubelet)[2849]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Feb 13 20:16:55.136768 kubelet[2849]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 13 20:16:55.136768 kubelet[2849]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Feb 13 20:16:55.136768 kubelet[2849]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 13 20:16:55.137675 kubelet[2849]: I0213 20:16:55.137629 2849 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Feb 13 20:16:55.531266 kubelet[2849]: I0213 20:16:55.531222 2849 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
Feb 13 20:16:55.531266 kubelet[2849]: I0213 20:16:55.531236 2849 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb 13 20:16:55.531362 kubelet[2849]: I0213 20:16:55.531356 2849 server.go:927] "Client rotation is on, will bootstrap in background"
Feb 13 20:16:55.540594 kubelet[2849]: I0213 20:16:55.540582 2849 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Feb 13 20:16:55.543383 kubelet[2849]: E0213 20:16:55.543372 2849 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://147.75.90.163:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 147.75.90.163:6443: connect: connection refused
Feb 13 20:16:55.556021 kubelet[2849]: I0213 20:16:55.556012 2849 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Feb 13 20:16:55.556182 kubelet[2849]: I0213 20:16:55.556138 2849 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Feb 13 20:16:55.556275 kubelet[2849]: I0213 20:16:55.556152 2849 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4152.2.1-a-5d3d77ba07","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Feb 13 20:16:55.556275 kubelet[2849]: I0213 20:16:55.556255 2849 topology_manager.go:138] "Creating topology manager with none policy"
Feb 13 20:16:55.556275 kubelet[2849]: I0213 20:16:55.556262 2849 container_manager_linux.go:301] "Creating device plugin manager"
Feb 13 20:16:55.556391 kubelet[2849]: I0213 20:16:55.556318 2849 state_mem.go:36] "Initialized new in-memory state store"
Feb 13 20:16:55.556983 kubelet[2849]: I0213 20:16:55.556958 2849 kubelet.go:400] "Attempting to sync node with API server"
Feb 13 20:16:55.556983 kubelet[2849]: I0213 20:16:55.556966 2849 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Feb 13 20:16:55.556983 kubelet[2849]: I0213 20:16:55.556977 2849 kubelet.go:312] "Adding apiserver pod source"
Feb 13 20:16:55.556983 kubelet[2849]: I0213 20:16:55.556986 2849 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Feb 13 20:16:55.557253 kubelet[2849]: W0213 20:16:55.557229 2849 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://147.75.90.163:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 147.75.90.163:6443: connect: connection refused
Feb 13 20:16:55.557283 kubelet[2849]: E0213 20:16:55.557258 2849 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://147.75.90.163:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 147.75.90.163:6443: connect: connection refused
Feb 13 20:16:55.557283 kubelet[2849]: W0213 20:16:55.557261 2849 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://147.75.90.163:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4152.2.1-a-5d3d77ba07&limit=500&resourceVersion=0": dial tcp 147.75.90.163:6443: connect: connection refused
Feb 13 20:16:55.557316 kubelet[2849]: E0213 20:16:55.557288 2849 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://147.75.90.163:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4152.2.1-a-5d3d77ba07&limit=500&resourceVersion=0": dial tcp 147.75.90.163:6443: connect: connection refused
Feb 13 20:16:55.560219 kubelet[2849]: I0213 20:16:55.560189 2849 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Feb 13 20:16:55.561434 kubelet[2849]: I0213 20:16:55.561405 2849 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Feb 13 20:16:55.561511 kubelet[2849]: W0213 20:16:55.561446 2849 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Feb 13 20:16:55.561910 kubelet[2849]: I0213 20:16:55.561901 2849 server.go:1264] "Started kubelet"
Feb 13 20:16:55.562008 kubelet[2849]: I0213 20:16:55.561966 2849 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Feb 13 20:16:55.562008 kubelet[2849]: I0213 20:16:55.561995 2849 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Feb 13 20:16:55.562160 kubelet[2849]: I0213 20:16:55.562152 2849 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Feb 13 20:16:55.562573 kubelet[2849]: I0213 20:16:55.562566 2849 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Feb 13 20:16:55.562613 kubelet[2849]: I0213 20:16:55.562603 2849 volume_manager.go:291] "Starting Kubelet Volume Manager"
Feb 13 20:16:55.564004 kubelet[2849]: I0213 20:16:55.563990 2849 reconciler.go:26] "Reconciler: start to sync state"
Feb 13 20:16:55.564058 kubelet[2849]: I0213 20:16:55.564042 2849 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Feb 13 20:16:55.564128 kubelet[2849]: E0213 20:16:55.564107 2849 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://147.75.90.163:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4152.2.1-a-5d3d77ba07?timeout=10s\": dial tcp 147.75.90.163:6443: connect: connection refused" interval="200ms"
Feb 13 20:16:55.564382 kubelet[2849]: W0213 20:16:55.564353 2849 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://147.75.90.163:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 147.75.90.163:6443: connect: connection refused
Feb 13 20:16:55.564430 kubelet[2849]: E0213 20:16:55.564399 2849 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://147.75.90.163:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 147.75.90.163:6443: connect: connection refused
Feb 13 20:16:55.564492 kubelet[2849]: I0213 20:16:55.564481 2849 factory.go:221] Registration of the systemd container factory successfully
Feb 13 20:16:55.564555 kubelet[2849]: I0213 20:16:55.564540 2849 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Feb 13 20:16:55.564874 kubelet[2849]: E0213 20:16:55.564860 2849 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Feb 13 20:16:55.566969 kubelet[2849]: I0213 20:16:55.566950 2849 server.go:455] "Adding debug handlers to kubelet server"
Feb 13 20:16:55.575797 kubelet[2849]: I0213 20:16:55.575766 2849 factory.go:221] Registration of the containerd container factory successfully
Feb 13 20:16:55.575952 kubelet[2849]: E0213 20:16:55.575858 2849 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://147.75.90.163:6443/api/v1/namespaces/default/events\": dial tcp 147.75.90.163:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4152.2.1-a-5d3d77ba07.1823dddb6b3d2601 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4152.2.1-a-5d3d77ba07,UID:ci-4152.2.1-a-5d3d77ba07,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4152.2.1-a-5d3d77ba07,},FirstTimestamp:2025-02-13 20:16:55.561889281 +0000 UTC m=+0.444245332,LastTimestamp:2025-02-13 20:16:55.561889281 +0000 UTC m=+0.444245332,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4152.2.1-a-5d3d77ba07,}"
Feb 13 20:16:55.586056 kubelet[2849]: I0213 20:16:55.586042 2849 cpu_manager.go:214] "Starting CPU manager" policy="none"
Feb 13 20:16:55.586056 kubelet[2849]: I0213 20:16:55.586052 2849 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Feb 13 20:16:55.586130 kubelet[2849]: I0213 20:16:55.586063 2849 state_mem.go:36] "Initialized new in-memory state store"
Feb 13 20:16:55.587139 kubelet[2849]: I0213 20:16:55.587126 2849 policy_none.go:49] "None policy: Start"
Feb 13 20:16:55.587413 kubelet[2849]: I0213 20:16:55.587402 2849 memory_manager.go:170] "Starting memorymanager" policy="None"
Feb 13 20:16:55.587444 kubelet[2849]: I0213 20:16:55.587416 2849 state_mem.go:35]
"Initializing new in-memory state store" Feb 13 20:16:55.588599 kubelet[2849]: I0213 20:16:55.588586 2849 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 20:16:55.589182 kubelet[2849]: I0213 20:16:55.589152 2849 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 13 20:16:55.589182 kubelet[2849]: I0213 20:16:55.589182 2849 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 20:16:55.589235 kubelet[2849]: I0213 20:16:55.589193 2849 kubelet.go:2337] "Starting kubelet main sync loop" Feb 13 20:16:55.589235 kubelet[2849]: E0213 20:16:55.589219 2849 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 20:16:55.589484 kubelet[2849]: W0213 20:16:55.589452 2849 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://147.75.90.163:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 147.75.90.163:6443: connect: connection refused Feb 13 20:16:55.589532 kubelet[2849]: E0213 20:16:55.589495 2849 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://147.75.90.163:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 147.75.90.163:6443: connect: connection refused Feb 13 20:16:55.590443 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Feb 13 20:16:55.616856 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Feb 13 20:16:55.619655 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Feb 13 20:16:55.644922 kubelet[2849]: I0213 20:16:55.644841 2849 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 20:16:55.645313 kubelet[2849]: I0213 20:16:55.645231 2849 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 20:16:55.645493 kubelet[2849]: I0213 20:16:55.645462 2849 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 20:16:55.646994 kubelet[2849]: E0213 20:16:55.646944 2849 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4152.2.1-a-5d3d77ba07\" not found" Feb 13 20:16:55.666559 kubelet[2849]: I0213 20:16:55.666512 2849 kubelet_node_status.go:73] "Attempting to register node" node="ci-4152.2.1-a-5d3d77ba07" Feb 13 20:16:55.667279 kubelet[2849]: E0213 20:16:55.667218 2849 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://147.75.90.163:6443/api/v1/nodes\": dial tcp 147.75.90.163:6443: connect: connection refused" node="ci-4152.2.1-a-5d3d77ba07" Feb 13 20:16:55.689482 kubelet[2849]: I0213 20:16:55.689364 2849 topology_manager.go:215] "Topology Admit Handler" podUID="bf5d926418320884626c76b12aff11c8" podNamespace="kube-system" podName="kube-apiserver-ci-4152.2.1-a-5d3d77ba07" Feb 13 20:16:55.693279 kubelet[2849]: I0213 20:16:55.693225 2849 topology_manager.go:215] "Topology Admit Handler" podUID="01fdd65b79eabd9be483096d47df8cce" podNamespace="kube-system" podName="kube-controller-manager-ci-4152.2.1-a-5d3d77ba07" Feb 13 20:16:55.696970 kubelet[2849]: I0213 20:16:55.696923 2849 topology_manager.go:215] "Topology Admit Handler" podUID="2bab0b148e0a917d7c3633e8a2244efb" podNamespace="kube-system" podName="kube-scheduler-ci-4152.2.1-a-5d3d77ba07" Feb 13 20:16:55.711375 systemd[1]: Created slice kubepods-burstable-podbf5d926418320884626c76b12aff11c8.slice - libcontainer container 
kubepods-burstable-podbf5d926418320884626c76b12aff11c8.slice. Feb 13 20:16:55.742169 systemd[1]: Created slice kubepods-burstable-pod01fdd65b79eabd9be483096d47df8cce.slice - libcontainer container kubepods-burstable-pod01fdd65b79eabd9be483096d47df8cce.slice. Feb 13 20:16:55.765578 kubelet[2849]: E0213 20:16:55.765489 2849 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://147.75.90.163:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4152.2.1-a-5d3d77ba07?timeout=10s\": dial tcp 147.75.90.163:6443: connect: connection refused" interval="400ms" Feb 13 20:16:55.771585 systemd[1]: Created slice kubepods-burstable-pod2bab0b148e0a917d7c3633e8a2244efb.slice - libcontainer container kubepods-burstable-pod2bab0b148e0a917d7c3633e8a2244efb.slice. Feb 13 20:16:55.865765 kubelet[2849]: I0213 20:16:55.865515 2849 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/01fdd65b79eabd9be483096d47df8cce-ca-certs\") pod \"kube-controller-manager-ci-4152.2.1-a-5d3d77ba07\" (UID: \"01fdd65b79eabd9be483096d47df8cce\") " pod="kube-system/kube-controller-manager-ci-4152.2.1-a-5d3d77ba07" Feb 13 20:16:55.865765 kubelet[2849]: I0213 20:16:55.865673 2849 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/01fdd65b79eabd9be483096d47df8cce-kubeconfig\") pod \"kube-controller-manager-ci-4152.2.1-a-5d3d77ba07\" (UID: \"01fdd65b79eabd9be483096d47df8cce\") " pod="kube-system/kube-controller-manager-ci-4152.2.1-a-5d3d77ba07" Feb 13 20:16:55.866111 kubelet[2849]: I0213 20:16:55.865779 2849 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2bab0b148e0a917d7c3633e8a2244efb-kubeconfig\") pod \"kube-scheduler-ci-4152.2.1-a-5d3d77ba07\" (UID: 
\"2bab0b148e0a917d7c3633e8a2244efb\") " pod="kube-system/kube-scheduler-ci-4152.2.1-a-5d3d77ba07" Feb 13 20:16:55.866111 kubelet[2849]: I0213 20:16:55.865837 2849 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/bf5d926418320884626c76b12aff11c8-ca-certs\") pod \"kube-apiserver-ci-4152.2.1-a-5d3d77ba07\" (UID: \"bf5d926418320884626c76b12aff11c8\") " pod="kube-system/kube-apiserver-ci-4152.2.1-a-5d3d77ba07" Feb 13 20:16:55.866111 kubelet[2849]: I0213 20:16:55.865914 2849 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/bf5d926418320884626c76b12aff11c8-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4152.2.1-a-5d3d77ba07\" (UID: \"bf5d926418320884626c76b12aff11c8\") " pod="kube-system/kube-apiserver-ci-4152.2.1-a-5d3d77ba07" Feb 13 20:16:55.866111 kubelet[2849]: I0213 20:16:55.866030 2849 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/01fdd65b79eabd9be483096d47df8cce-k8s-certs\") pod \"kube-controller-manager-ci-4152.2.1-a-5d3d77ba07\" (UID: \"01fdd65b79eabd9be483096d47df8cce\") " pod="kube-system/kube-controller-manager-ci-4152.2.1-a-5d3d77ba07" Feb 13 20:16:55.866543 kubelet[2849]: I0213 20:16:55.866122 2849 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/01fdd65b79eabd9be483096d47df8cce-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4152.2.1-a-5d3d77ba07\" (UID: \"01fdd65b79eabd9be483096d47df8cce\") " pod="kube-system/kube-controller-manager-ci-4152.2.1-a-5d3d77ba07" Feb 13 20:16:55.866543 kubelet[2849]: I0213 20:16:55.866186 2849 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/bf5d926418320884626c76b12aff11c8-k8s-certs\") pod \"kube-apiserver-ci-4152.2.1-a-5d3d77ba07\" (UID: \"bf5d926418320884626c76b12aff11c8\") " pod="kube-system/kube-apiserver-ci-4152.2.1-a-5d3d77ba07" Feb 13 20:16:55.866543 kubelet[2849]: I0213 20:16:55.866278 2849 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/01fdd65b79eabd9be483096d47df8cce-flexvolume-dir\") pod \"kube-controller-manager-ci-4152.2.1-a-5d3d77ba07\" (UID: \"01fdd65b79eabd9be483096d47df8cce\") " pod="kube-system/kube-controller-manager-ci-4152.2.1-a-5d3d77ba07" Feb 13 20:16:55.871616 kubelet[2849]: I0213 20:16:55.871490 2849 kubelet_node_status.go:73] "Attempting to register node" node="ci-4152.2.1-a-5d3d77ba07" Feb 13 20:16:55.871869 kubelet[2849]: E0213 20:16:55.871855 2849 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://147.75.90.163:6443/api/v1/nodes\": dial tcp 147.75.90.163:6443: connect: connection refused" node="ci-4152.2.1-a-5d3d77ba07" Feb 13 20:16:56.036390 containerd[1795]: time="2025-02-13T20:16:56.036255366Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4152.2.1-a-5d3d77ba07,Uid:bf5d926418320884626c76b12aff11c8,Namespace:kube-system,Attempt:0,}" Feb 13 20:16:56.064702 containerd[1795]: time="2025-02-13T20:16:56.064643127Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4152.2.1-a-5d3d77ba07,Uid:01fdd65b79eabd9be483096d47df8cce,Namespace:kube-system,Attempt:0,}" Feb 13 20:16:56.076325 containerd[1795]: time="2025-02-13T20:16:56.076291368Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4152.2.1-a-5d3d77ba07,Uid:2bab0b148e0a917d7c3633e8a2244efb,Namespace:kube-system,Attempt:0,}" Feb 13 20:16:56.166877 kubelet[2849]: E0213 20:16:56.166750 2849 controller.go:145] "Failed to ensure lease exists, 
will retry" err="Get \"https://147.75.90.163:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4152.2.1-a-5d3d77ba07?timeout=10s\": dial tcp 147.75.90.163:6443: connect: connection refused" interval="800ms" Feb 13 20:16:56.274108 kubelet[2849]: I0213 20:16:56.274090 2849 kubelet_node_status.go:73] "Attempting to register node" node="ci-4152.2.1-a-5d3d77ba07" Feb 13 20:16:56.274343 kubelet[2849]: E0213 20:16:56.274323 2849 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://147.75.90.163:6443/api/v1/nodes\": dial tcp 147.75.90.163:6443: connect: connection refused" node="ci-4152.2.1-a-5d3d77ba07" Feb 13 20:16:56.536603 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2805556079.mount: Deactivated successfully. Feb 13 20:16:56.538410 containerd[1795]: time="2025-02-13T20:16:56.538368949Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 20:16:56.538656 containerd[1795]: time="2025-02-13T20:16:56.538607251Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 20:16:56.539136 containerd[1795]: time="2025-02-13T20:16:56.539094012Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 20:16:56.539786 containerd[1795]: time="2025-02-13T20:16:56.539748536Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 20:16:56.539911 containerd[1795]: time="2025-02-13T20:16:56.539859310Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Feb 13 
20:16:56.540592 containerd[1795]: time="2025-02-13T20:16:56.540557684Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 20:16:56.540898 containerd[1795]: time="2025-02-13T20:16:56.540860538Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 20:16:56.542267 containerd[1795]: time="2025-02-13T20:16:56.542231467Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 477.515162ms" Feb 13 20:16:56.542591 containerd[1795]: time="2025-02-13T20:16:56.542555618Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 20:16:56.543365 containerd[1795]: time="2025-02-13T20:16:56.543328846Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 506.832439ms" Feb 13 20:16:56.544477 containerd[1795]: time="2025-02-13T20:16:56.544408699Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 
468.084701ms" Feb 13 20:16:56.659647 containerd[1795]: time="2025-02-13T20:16:56.659596819Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:16:56.659647 containerd[1795]: time="2025-02-13T20:16:56.659628103Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:16:56.659647 containerd[1795]: time="2025-02-13T20:16:56.659634743Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:16:56.659647 containerd[1795]: time="2025-02-13T20:16:56.659614508Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:16:56.659647 containerd[1795]: time="2025-02-13T20:16:56.659641984Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:16:56.659647 containerd[1795]: time="2025-02-13T20:16:56.659649289Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:16:56.659808 containerd[1795]: time="2025-02-13T20:16:56.659675059Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:16:56.659808 containerd[1795]: time="2025-02-13T20:16:56.659691862Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:16:56.661848 containerd[1795]: time="2025-02-13T20:16:56.659354861Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:16:56.661848 containerd[1795]: time="2025-02-13T20:16:56.661835394Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:16:56.661913 containerd[1795]: time="2025-02-13T20:16:56.661865330Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:16:56.661944 containerd[1795]: time="2025-02-13T20:16:56.661928734Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:16:56.679760 systemd[1]: Started cri-containerd-6d6ae412ec6ebcebc8cdf38e0875b37dc3cb5466d957c6e35c470154f68724f4.scope - libcontainer container 6d6ae412ec6ebcebc8cdf38e0875b37dc3cb5466d957c6e35c470154f68724f4. Feb 13 20:16:56.680491 systemd[1]: Started cri-containerd-7d9b89055036ca8ff49c91235e92f6f94a966beb8ed19226ee45fd9a02cf729d.scope - libcontainer container 7d9b89055036ca8ff49c91235e92f6f94a966beb8ed19226ee45fd9a02cf729d. Feb 13 20:16:56.682062 systemd[1]: Started cri-containerd-82440610512ec245e47d234560ab104c60bc7cfc1b9570ed6133da300a0d345d.scope - libcontainer container 82440610512ec245e47d234560ab104c60bc7cfc1b9570ed6133da300a0d345d. 
Feb 13 20:16:56.702083 containerd[1795]: time="2025-02-13T20:16:56.702052685Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4152.2.1-a-5d3d77ba07,Uid:bf5d926418320884626c76b12aff11c8,Namespace:kube-system,Attempt:0,} returns sandbox id \"6d6ae412ec6ebcebc8cdf38e0875b37dc3cb5466d957c6e35c470154f68724f4\"" Feb 13 20:16:56.702727 containerd[1795]: time="2025-02-13T20:16:56.702713487Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4152.2.1-a-5d3d77ba07,Uid:2bab0b148e0a917d7c3633e8a2244efb,Namespace:kube-system,Attempt:0,} returns sandbox id \"7d9b89055036ca8ff49c91235e92f6f94a966beb8ed19226ee45fd9a02cf729d\"" Feb 13 20:16:56.703878 containerd[1795]: time="2025-02-13T20:16:56.703867168Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4152.2.1-a-5d3d77ba07,Uid:01fdd65b79eabd9be483096d47df8cce,Namespace:kube-system,Attempt:0,} returns sandbox id \"82440610512ec245e47d234560ab104c60bc7cfc1b9570ed6133da300a0d345d\"" Feb 13 20:16:56.704015 containerd[1795]: time="2025-02-13T20:16:56.704004390Z" level=info msg="CreateContainer within sandbox \"7d9b89055036ca8ff49c91235e92f6f94a966beb8ed19226ee45fd9a02cf729d\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 13 20:16:56.704045 containerd[1795]: time="2025-02-13T20:16:56.704013190Z" level=info msg="CreateContainer within sandbox \"6d6ae412ec6ebcebc8cdf38e0875b37dc3cb5466d957c6e35c470154f68724f4\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 13 20:16:56.704801 containerd[1795]: time="2025-02-13T20:16:56.704790139Z" level=info msg="CreateContainer within sandbox \"82440610512ec245e47d234560ab104c60bc7cfc1b9570ed6133da300a0d345d\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 13 20:16:56.709957 containerd[1795]: time="2025-02-13T20:16:56.709915944Z" level=info msg="CreateContainer within sandbox 
\"6d6ae412ec6ebcebc8cdf38e0875b37dc3cb5466d957c6e35c470154f68724f4\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"047b3b6809dde8d2a51c795b6f59182effd78d067a6bcd687e92f224b99fd107\"" Feb 13 20:16:56.710195 containerd[1795]: time="2025-02-13T20:16:56.710150153Z" level=info msg="StartContainer for \"047b3b6809dde8d2a51c795b6f59182effd78d067a6bcd687e92f224b99fd107\"" Feb 13 20:16:56.711371 containerd[1795]: time="2025-02-13T20:16:56.711332937Z" level=info msg="CreateContainer within sandbox \"7d9b89055036ca8ff49c91235e92f6f94a966beb8ed19226ee45fd9a02cf729d\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"36d73c1905326e1944cc22f462820b91ab899331d36a0c59b731a600b1a41263\"" Feb 13 20:16:56.711573 containerd[1795]: time="2025-02-13T20:16:56.711521713Z" level=info msg="StartContainer for \"36d73c1905326e1944cc22f462820b91ab899331d36a0c59b731a600b1a41263\"" Feb 13 20:16:56.712535 containerd[1795]: time="2025-02-13T20:16:56.712493691Z" level=info msg="CreateContainer within sandbox \"82440610512ec245e47d234560ab104c60bc7cfc1b9570ed6133da300a0d345d\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"fed2d2a744866de2a3f9a7d342374cdd7a0907e5b879797d034f726e7756692b\"" Feb 13 20:16:56.712739 containerd[1795]: time="2025-02-13T20:16:56.712686671Z" level=info msg="StartContainer for \"fed2d2a744866de2a3f9a7d342374cdd7a0907e5b879797d034f726e7756692b\"" Feb 13 20:16:56.733734 systemd[1]: Started cri-containerd-047b3b6809dde8d2a51c795b6f59182effd78d067a6bcd687e92f224b99fd107.scope - libcontainer container 047b3b6809dde8d2a51c795b6f59182effd78d067a6bcd687e92f224b99fd107. Feb 13 20:16:56.734333 systemd[1]: Started cri-containerd-36d73c1905326e1944cc22f462820b91ab899331d36a0c59b731a600b1a41263.scope - libcontainer container 36d73c1905326e1944cc22f462820b91ab899331d36a0c59b731a600b1a41263. 
Feb 13 20:16:56.735924 systemd[1]: Started cri-containerd-fed2d2a744866de2a3f9a7d342374cdd7a0907e5b879797d034f726e7756692b.scope - libcontainer container fed2d2a744866de2a3f9a7d342374cdd7a0907e5b879797d034f726e7756692b. Feb 13 20:16:56.748645 kubelet[2849]: W0213 20:16:56.748583 2849 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://147.75.90.163:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 147.75.90.163:6443: connect: connection refused Feb 13 20:16:56.748645 kubelet[2849]: E0213 20:16:56.748622 2849 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://147.75.90.163:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 147.75.90.163:6443: connect: connection refused Feb 13 20:16:56.757194 containerd[1795]: time="2025-02-13T20:16:56.757166064Z" level=info msg="StartContainer for \"36d73c1905326e1944cc22f462820b91ab899331d36a0c59b731a600b1a41263\" returns successfully" Feb 13 20:16:56.757280 containerd[1795]: time="2025-02-13T20:16:56.757180139Z" level=info msg="StartContainer for \"047b3b6809dde8d2a51c795b6f59182effd78d067a6bcd687e92f224b99fd107\" returns successfully" Feb 13 20:16:56.758712 containerd[1795]: time="2025-02-13T20:16:56.758691807Z" level=info msg="StartContainer for \"fed2d2a744866de2a3f9a7d342374cdd7a0907e5b879797d034f726e7756692b\" returns successfully" Feb 13 20:16:57.076387 kubelet[2849]: I0213 20:16:57.076370 2849 kubelet_node_status.go:73] "Attempting to register node" node="ci-4152.2.1-a-5d3d77ba07" Feb 13 20:16:57.241865 kubelet[2849]: E0213 20:16:57.241814 2849 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4152.2.1-a-5d3d77ba07\" not found" node="ci-4152.2.1-a-5d3d77ba07" Feb 13 20:16:57.349136 kubelet[2849]: I0213 20:16:57.348911 2849 kubelet_node_status.go:76] "Successfully registered node" 
node="ci-4152.2.1-a-5d3d77ba07" Feb 13 20:16:57.559145 kubelet[2849]: I0213 20:16:57.559076 2849 apiserver.go:52] "Watching apiserver" Feb 13 20:16:57.564327 kubelet[2849]: I0213 20:16:57.564269 2849 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Feb 13 20:16:57.612646 kubelet[2849]: E0213 20:16:57.612421 2849 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-4152.2.1-a-5d3d77ba07\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4152.2.1-a-5d3d77ba07" Feb 13 20:16:57.612646 kubelet[2849]: E0213 20:16:57.612494 2849 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-4152.2.1-a-5d3d77ba07\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4152.2.1-a-5d3d77ba07" Feb 13 20:16:57.612646 kubelet[2849]: E0213 20:16:57.612520 2849 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4152.2.1-a-5d3d77ba07\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4152.2.1-a-5d3d77ba07" Feb 13 20:16:58.611394 kubelet[2849]: W0213 20:16:58.611339 2849 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 13 20:16:59.551319 systemd[1]: Reloading requested from client PID 3164 ('systemctl') (unit session-11.scope)... Feb 13 20:16:59.551326 systemd[1]: Reloading... Feb 13 20:16:59.588463 zram_generator::config[3203]: No configuration found. Feb 13 20:16:59.654927 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 20:16:59.721554 systemd[1]: Reloading finished in 170 ms. 
Feb 13 20:16:59.745531 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:16:59.745662 kubelet[2849]: I0213 20:16:59.745578 2849 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 20:16:59.745662 kubelet[2849]: E0213 20:16:59.745520 2849 event.go:319] "Unable to write event (broadcaster is shut down)" event="&Event{ObjectMeta:{ci-4152.2.1-a-5d3d77ba07.1823dddb6b3d2601 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4152.2.1-a-5d3d77ba07,UID:ci-4152.2.1-a-5d3d77ba07,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4152.2.1-a-5d3d77ba07,},FirstTimestamp:2025-02-13 20:16:55.561889281 +0000 UTC m=+0.444245332,LastTimestamp:2025-02-13 20:16:55.561889281 +0000 UTC m=+0.444245332,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4152.2.1-a-5d3d77ba07,}" Feb 13 20:16:59.757499 systemd[1]: kubelet.service: Deactivated successfully. Feb 13 20:16:59.757617 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:16:59.757643 systemd[1]: kubelet.service: Consumed 1.001s CPU time, 124.6M memory peak, 0B memory swap peak. Feb 13 20:16:59.775491 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:16:59.992555 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:17:00.005034 (kubelet)[3267]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 20:17:00.042059 kubelet[3267]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 20:17:00.042059 kubelet[3267]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 20:17:00.042059 kubelet[3267]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 20:17:00.042344 kubelet[3267]: I0213 20:17:00.042089 3267 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 20:17:00.045617 kubelet[3267]: I0213 20:17:00.045567 3267 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Feb 13 20:17:00.045617 kubelet[3267]: I0213 20:17:00.045583 3267 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 20:17:00.045766 kubelet[3267]: I0213 20:17:00.045725 3267 server.go:927] "Client rotation is on, will bootstrap in background" Feb 13 20:17:00.046817 kubelet[3267]: I0213 20:17:00.046780 3267 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 13 20:17:00.047611 kubelet[3267]: I0213 20:17:00.047572 3267 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 20:17:00.058905 kubelet[3267]: I0213 20:17:00.058860 3267 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 13 20:17:00.059061 kubelet[3267]: I0213 20:17:00.059008 3267 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 20:17:00.059198 kubelet[3267]: I0213 20:17:00.059032 3267 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4152.2.1-a-5d3d77ba07","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Feb 13 20:17:00.059198 kubelet[3267]: I0213 20:17:00.059178 3267 topology_manager.go:138] "Creating topology manager with none policy" Feb 
13 20:17:00.059198 kubelet[3267]: I0213 20:17:00.059188 3267 container_manager_linux.go:301] "Creating device plugin manager" Feb 13 20:17:00.059331 kubelet[3267]: I0213 20:17:00.059220 3267 state_mem.go:36] "Initialized new in-memory state store" Feb 13 20:17:00.059331 kubelet[3267]: I0213 20:17:00.059293 3267 kubelet.go:400] "Attempting to sync node with API server" Feb 13 20:17:00.059331 kubelet[3267]: I0213 20:17:00.059303 3267 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 20:17:00.059331 kubelet[3267]: I0213 20:17:00.059318 3267 kubelet.go:312] "Adding apiserver pod source" Feb 13 20:17:00.059331 kubelet[3267]: I0213 20:17:00.059331 3267 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 20:17:00.059797 kubelet[3267]: I0213 20:17:00.059776 3267 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Feb 13 20:17:00.059936 kubelet[3267]: I0213 20:17:00.059924 3267 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 20:17:00.060248 kubelet[3267]: I0213 20:17:00.060234 3267 server.go:1264] "Started kubelet" Feb 13 20:17:00.060310 kubelet[3267]: I0213 20:17:00.060277 3267 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 20:17:00.060367 kubelet[3267]: I0213 20:17:00.060312 3267 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 20:17:00.060582 kubelet[3267]: I0213 20:17:00.060566 3267 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 20:17:00.061397 kubelet[3267]: I0213 20:17:00.061385 3267 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 20:17:00.061472 kubelet[3267]: I0213 20:17:00.061460 3267 volume_manager.go:291] "Starting Kubelet Volume Manager" Feb 13 20:17:00.061519 kubelet[3267]: E0213 20:17:00.061466 3267 
kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4152.2.1-a-5d3d77ba07\" not found" Feb 13 20:17:00.061519 kubelet[3267]: I0213 20:17:00.061498 3267 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Feb 13 20:17:00.061617 kubelet[3267]: I0213 20:17:00.061604 3267 reconciler.go:26] "Reconciler: start to sync state" Feb 13 20:17:00.061703 kubelet[3267]: E0213 20:17:00.061679 3267 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 20:17:00.061762 kubelet[3267]: I0213 20:17:00.061710 3267 server.go:455] "Adding debug handlers to kubelet server" Feb 13 20:17:00.061988 kubelet[3267]: I0213 20:17:00.061972 3267 factory.go:221] Registration of the systemd container factory successfully Feb 13 20:17:00.062077 kubelet[3267]: I0213 20:17:00.062059 3267 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 20:17:00.062956 kubelet[3267]: I0213 20:17:00.062939 3267 factory.go:221] Registration of the containerd container factory successfully Feb 13 20:17:00.069485 kubelet[3267]: I0213 20:17:00.069443 3267 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 20:17:00.070265 kubelet[3267]: I0213 20:17:00.070247 3267 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Feb 13 20:17:00.070360 kubelet[3267]: I0213 20:17:00.070270 3267 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 20:17:00.070360 kubelet[3267]: I0213 20:17:00.070283 3267 kubelet.go:2337] "Starting kubelet main sync loop" Feb 13 20:17:00.070360 kubelet[3267]: E0213 20:17:00.070322 3267 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 20:17:00.082788 kubelet[3267]: I0213 20:17:00.082768 3267 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 20:17:00.082788 kubelet[3267]: I0213 20:17:00.082780 3267 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 20:17:00.082788 kubelet[3267]: I0213 20:17:00.082795 3267 state_mem.go:36] "Initialized new in-memory state store" Feb 13 20:17:00.082923 kubelet[3267]: I0213 20:17:00.082914 3267 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 13 20:17:00.082946 kubelet[3267]: I0213 20:17:00.082922 3267 state_mem.go:96] "Updated CPUSet assignments" assignments={} Feb 13 20:17:00.082946 kubelet[3267]: I0213 20:17:00.082936 3267 policy_none.go:49] "None policy: Start" Feb 13 20:17:00.083235 kubelet[3267]: I0213 20:17:00.083225 3267 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 20:17:00.083266 kubelet[3267]: I0213 20:17:00.083239 3267 state_mem.go:35] "Initializing new in-memory state store" Feb 13 20:17:00.083351 kubelet[3267]: I0213 20:17:00.083345 3267 state_mem.go:75] "Updated machine memory state" Feb 13 20:17:00.085874 kubelet[3267]: I0213 20:17:00.085861 3267 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 20:17:00.085999 kubelet[3267]: I0213 20:17:00.085974 3267 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 20:17:00.086052 kubelet[3267]: I0213 20:17:00.086044 3267 
plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 20:17:00.168945 kubelet[3267]: I0213 20:17:00.168886 3267 kubelet_node_status.go:73] "Attempting to register node" node="ci-4152.2.1-a-5d3d77ba07" Feb 13 20:17:00.171062 kubelet[3267]: I0213 20:17:00.170964 3267 topology_manager.go:215] "Topology Admit Handler" podUID="01fdd65b79eabd9be483096d47df8cce" podNamespace="kube-system" podName="kube-controller-manager-ci-4152.2.1-a-5d3d77ba07" Feb 13 20:17:00.171299 kubelet[3267]: I0213 20:17:00.171216 3267 topology_manager.go:215] "Topology Admit Handler" podUID="2bab0b148e0a917d7c3633e8a2244efb" podNamespace="kube-system" podName="kube-scheduler-ci-4152.2.1-a-5d3d77ba07" Feb 13 20:17:00.171530 kubelet[3267]: I0213 20:17:00.171479 3267 topology_manager.go:215] "Topology Admit Handler" podUID="bf5d926418320884626c76b12aff11c8" podNamespace="kube-system" podName="kube-apiserver-ci-4152.2.1-a-5d3d77ba07" Feb 13 20:17:00.179792 kubelet[3267]: W0213 20:17:00.179730 3267 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 13 20:17:00.179792 kubelet[3267]: I0213 20:17:00.179780 3267 kubelet_node_status.go:112] "Node was previously registered" node="ci-4152.2.1-a-5d3d77ba07" Feb 13 20:17:00.180228 kubelet[3267]: W0213 20:17:00.179778 3267 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 13 20:17:00.180228 kubelet[3267]: I0213 20:17:00.179944 3267 kubelet_node_status.go:76] "Successfully registered node" node="ci-4152.2.1-a-5d3d77ba07" Feb 13 20:17:00.180666 kubelet[3267]: W0213 20:17:00.180590 3267 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 13 20:17:00.180882 kubelet[3267]: E0213 20:17:00.180712 3267 kubelet.go:1928] "Failed 
creating a mirror pod for" err="pods \"kube-apiserver-ci-4152.2.1-a-5d3d77ba07\" already exists" pod="kube-system/kube-apiserver-ci-4152.2.1-a-5d3d77ba07" Feb 13 20:17:00.363609 kubelet[3267]: I0213 20:17:00.363363 3267 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/01fdd65b79eabd9be483096d47df8cce-k8s-certs\") pod \"kube-controller-manager-ci-4152.2.1-a-5d3d77ba07\" (UID: \"01fdd65b79eabd9be483096d47df8cce\") " pod="kube-system/kube-controller-manager-ci-4152.2.1-a-5d3d77ba07" Feb 13 20:17:00.363609 kubelet[3267]: I0213 20:17:00.363477 3267 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/01fdd65b79eabd9be483096d47df8cce-kubeconfig\") pod \"kube-controller-manager-ci-4152.2.1-a-5d3d77ba07\" (UID: \"01fdd65b79eabd9be483096d47df8cce\") " pod="kube-system/kube-controller-manager-ci-4152.2.1-a-5d3d77ba07" Feb 13 20:17:00.363609 kubelet[3267]: I0213 20:17:00.363544 3267 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/01fdd65b79eabd9be483096d47df8cce-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4152.2.1-a-5d3d77ba07\" (UID: \"01fdd65b79eabd9be483096d47df8cce\") " pod="kube-system/kube-controller-manager-ci-4152.2.1-a-5d3d77ba07" Feb 13 20:17:00.363609 kubelet[3267]: I0213 20:17:00.363604 3267 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/bf5d926418320884626c76b12aff11c8-k8s-certs\") pod \"kube-apiserver-ci-4152.2.1-a-5d3d77ba07\" (UID: \"bf5d926418320884626c76b12aff11c8\") " pod="kube-system/kube-apiserver-ci-4152.2.1-a-5d3d77ba07" Feb 13 20:17:00.364192 kubelet[3267]: I0213 20:17:00.363662 3267 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/bf5d926418320884626c76b12aff11c8-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4152.2.1-a-5d3d77ba07\" (UID: \"bf5d926418320884626c76b12aff11c8\") " pod="kube-system/kube-apiserver-ci-4152.2.1-a-5d3d77ba07" Feb 13 20:17:00.364192 kubelet[3267]: I0213 20:17:00.363715 3267 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/01fdd65b79eabd9be483096d47df8cce-ca-certs\") pod \"kube-controller-manager-ci-4152.2.1-a-5d3d77ba07\" (UID: \"01fdd65b79eabd9be483096d47df8cce\") " pod="kube-system/kube-controller-manager-ci-4152.2.1-a-5d3d77ba07" Feb 13 20:17:00.364192 kubelet[3267]: I0213 20:17:00.363764 3267 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/01fdd65b79eabd9be483096d47df8cce-flexvolume-dir\") pod \"kube-controller-manager-ci-4152.2.1-a-5d3d77ba07\" (UID: \"01fdd65b79eabd9be483096d47df8cce\") " pod="kube-system/kube-controller-manager-ci-4152.2.1-a-5d3d77ba07" Feb 13 20:17:00.364192 kubelet[3267]: I0213 20:17:00.363810 3267 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2bab0b148e0a917d7c3633e8a2244efb-kubeconfig\") pod \"kube-scheduler-ci-4152.2.1-a-5d3d77ba07\" (UID: \"2bab0b148e0a917d7c3633e8a2244efb\") " pod="kube-system/kube-scheduler-ci-4152.2.1-a-5d3d77ba07" Feb 13 20:17:00.364192 kubelet[3267]: I0213 20:17:00.363854 3267 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/bf5d926418320884626c76b12aff11c8-ca-certs\") pod \"kube-apiserver-ci-4152.2.1-a-5d3d77ba07\" (UID: \"bf5d926418320884626c76b12aff11c8\") " 
pod="kube-system/kube-apiserver-ci-4152.2.1-a-5d3d77ba07" Feb 13 20:17:01.059924 kubelet[3267]: I0213 20:17:01.059867 3267 apiserver.go:52] "Watching apiserver" Feb 13 20:17:01.062381 kubelet[3267]: I0213 20:17:01.062365 3267 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Feb 13 20:17:01.077237 kubelet[3267]: W0213 20:17:01.076916 3267 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 13 20:17:01.077237 kubelet[3267]: E0213 20:17:01.076985 3267 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-4152.2.1-a-5d3d77ba07\" already exists" pod="kube-system/kube-scheduler-ci-4152.2.1-a-5d3d77ba07" Feb 13 20:17:01.077374 kubelet[3267]: W0213 20:17:01.077358 3267 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 13 20:17:01.077430 kubelet[3267]: E0213 20:17:01.077404 3267 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4152.2.1-a-5d3d77ba07\" already exists" pod="kube-system/kube-apiserver-ci-4152.2.1-a-5d3d77ba07" Feb 13 20:17:01.077642 kubelet[3267]: W0213 20:17:01.077602 3267 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 13 20:17:01.077671 kubelet[3267]: E0213 20:17:01.077640 3267 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-4152.2.1-a-5d3d77ba07\" already exists" pod="kube-system/kube-controller-manager-ci-4152.2.1-a-5d3d77ba07" Feb 13 20:17:01.090678 kubelet[3267]: I0213 20:17:01.090630 3267 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4152.2.1-a-5d3d77ba07" podStartSLOduration=1.090617372 
podStartE2EDuration="1.090617372s" podCreationTimestamp="2025-02-13 20:17:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 20:17:01.090568015 +0000 UTC m=+1.080849926" watchObservedRunningTime="2025-02-13 20:17:01.090617372 +0000 UTC m=+1.080899282" Feb 13 20:17:01.094720 kubelet[3267]: I0213 20:17:01.094656 3267 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4152.2.1-a-5d3d77ba07" podStartSLOduration=1.094643244 podStartE2EDuration="1.094643244s" podCreationTimestamp="2025-02-13 20:17:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 20:17:01.094577057 +0000 UTC m=+1.084858968" watchObservedRunningTime="2025-02-13 20:17:01.094643244 +0000 UTC m=+1.084925153" Feb 13 20:17:01.098873 kubelet[3267]: I0213 20:17:01.098807 3267 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4152.2.1-a-5d3d77ba07" podStartSLOduration=3.098794091 podStartE2EDuration="3.098794091s" podCreationTimestamp="2025-02-13 20:16:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 20:17:01.098756388 +0000 UTC m=+1.089038299" watchObservedRunningTime="2025-02-13 20:17:01.098794091 +0000 UTC m=+1.089075998" Feb 13 20:17:04.000546 sudo[2068]: pam_unix(sudo:session): session closed for user root Feb 13 20:17:04.001198 sshd[2067]: Connection closed by 139.178.68.195 port 39552 Feb 13 20:17:04.001378 sshd-session[2065]: pam_unix(sshd:session): session closed for user core Feb 13 20:17:04.003060 systemd[1]: sshd@8-147.75.90.163:22-139.178.68.195:39552.service: Deactivated successfully. Feb 13 20:17:04.003948 systemd[1]: session-11.scope: Deactivated successfully. 
Feb 13 20:17:04.004029 systemd[1]: session-11.scope: Consumed 3.174s CPU time, 197.0M memory peak, 0B memory swap peak. Feb 13 20:17:04.004711 systemd-logind[1777]: Session 11 logged out. Waiting for processes to exit. Feb 13 20:17:04.005273 systemd-logind[1777]: Removed session 11. Feb 13 20:17:13.196716 update_engine[1782]: I20250213 20:17:13.196590 1782 update_attempter.cc:509] Updating boot flags... Feb 13 20:17:13.238499 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (3435) Feb 13 20:17:13.253731 kubelet[3267]: I0213 20:17:13.253683 3267 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 13 20:17:13.253930 containerd[1795]: time="2025-02-13T20:17:13.253880127Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Feb 13 20:17:13.254048 kubelet[3267]: I0213 20:17:13.253983 3267 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 13 20:17:13.266502 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (3438) Feb 13 20:17:13.357716 kubelet[3267]: I0213 20:17:13.357615 3267 topology_manager.go:215] "Topology Admit Handler" podUID="cca88610-6310-4cc8-923a-ec3a69d62985" podNamespace="kube-system" podName="kube-proxy-clzrt" Feb 13 20:17:13.373871 systemd[1]: Created slice kubepods-besteffort-podcca88610_6310_4cc8_923a_ec3a69d62985.slice - libcontainer container kubepods-besteffort-podcca88610_6310_4cc8_923a_ec3a69d62985.slice. 
Feb 13 20:17:13.461883 kubelet[3267]: I0213 20:17:13.461583 3267 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/cca88610-6310-4cc8-923a-ec3a69d62985-kube-proxy\") pod \"kube-proxy-clzrt\" (UID: \"cca88610-6310-4cc8-923a-ec3a69d62985\") " pod="kube-system/kube-proxy-clzrt" Feb 13 20:17:13.461883 kubelet[3267]: I0213 20:17:13.461719 3267 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cca88610-6310-4cc8-923a-ec3a69d62985-xtables-lock\") pod \"kube-proxy-clzrt\" (UID: \"cca88610-6310-4cc8-923a-ec3a69d62985\") " pod="kube-system/kube-proxy-clzrt" Feb 13 20:17:13.461883 kubelet[3267]: I0213 20:17:13.461788 3267 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cca88610-6310-4cc8-923a-ec3a69d62985-lib-modules\") pod \"kube-proxy-clzrt\" (UID: \"cca88610-6310-4cc8-923a-ec3a69d62985\") " pod="kube-system/kube-proxy-clzrt" Feb 13 20:17:13.461883 kubelet[3267]: I0213 20:17:13.461844 3267 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4cxjr\" (UniqueName: \"kubernetes.io/projected/cca88610-6310-4cc8-923a-ec3a69d62985-kube-api-access-4cxjr\") pod \"kube-proxy-clzrt\" (UID: \"cca88610-6310-4cc8-923a-ec3a69d62985\") " pod="kube-system/kube-proxy-clzrt" Feb 13 20:17:13.575893 kubelet[3267]: E0213 20:17:13.575834 3267 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Feb 13 20:17:13.575893 kubelet[3267]: E0213 20:17:13.575896 3267 projected.go:200] Error preparing data for projected volume kube-api-access-4cxjr for pod kube-system/kube-proxy-clzrt: configmap "kube-root-ca.crt" not found Feb 13 20:17:13.576276 kubelet[3267]: E0213 20:17:13.576034 3267 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/cca88610-6310-4cc8-923a-ec3a69d62985-kube-api-access-4cxjr podName:cca88610-6310-4cc8-923a-ec3a69d62985 nodeName:}" failed. No retries permitted until 2025-02-13 20:17:14.075979895 +0000 UTC m=+14.066261874 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-4cxjr" (UniqueName: "kubernetes.io/projected/cca88610-6310-4cc8-923a-ec3a69d62985-kube-api-access-4cxjr") pod "kube-proxy-clzrt" (UID: "cca88610-6310-4cc8-923a-ec3a69d62985") : configmap "kube-root-ca.crt" not found Feb 13 20:17:14.293126 containerd[1795]: time="2025-02-13T20:17:14.292999931Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-clzrt,Uid:cca88610-6310-4cc8-923a-ec3a69d62985,Namespace:kube-system,Attempt:0,}" Feb 13 20:17:14.304484 kubelet[3267]: I0213 20:17:14.304457 3267 topology_manager.go:215] "Topology Admit Handler" podUID="e2c31b9a-2418-4820-97f0-31eaad2a72e5" podNamespace="tigera-operator" podName="tigera-operator-7bc55997bb-5vpkb" Feb 13 20:17:14.306356 containerd[1795]: time="2025-02-13T20:17:14.306306029Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:17:14.306356 containerd[1795]: time="2025-02-13T20:17:14.306342099Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:17:14.306356 containerd[1795]: time="2025-02-13T20:17:14.306349465Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:17:14.306501 containerd[1795]: time="2025-02-13T20:17:14.306395252Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:17:14.309510 systemd[1]: Created slice kubepods-besteffort-pode2c31b9a_2418_4820_97f0_31eaad2a72e5.slice - libcontainer container kubepods-besteffort-pode2c31b9a_2418_4820_97f0_31eaad2a72e5.slice. Feb 13 20:17:14.327742 systemd[1]: Started cri-containerd-e864eb4eb841d2c96ca7473d89b27a8334e7ecc661add371e1fb712b7ab42e99.scope - libcontainer container e864eb4eb841d2c96ca7473d89b27a8334e7ecc661add371e1fb712b7ab42e99. Feb 13 20:17:14.337444 containerd[1795]: time="2025-02-13T20:17:14.337418024Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-clzrt,Uid:cca88610-6310-4cc8-923a-ec3a69d62985,Namespace:kube-system,Attempt:0,} returns sandbox id \"e864eb4eb841d2c96ca7473d89b27a8334e7ecc661add371e1fb712b7ab42e99\"" Feb 13 20:17:14.338831 containerd[1795]: time="2025-02-13T20:17:14.338818146Z" level=info msg="CreateContainer within sandbox \"e864eb4eb841d2c96ca7473d89b27a8334e7ecc661add371e1fb712b7ab42e99\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 13 20:17:14.344600 containerd[1795]: time="2025-02-13T20:17:14.344556351Z" level=info msg="CreateContainer within sandbox \"e864eb4eb841d2c96ca7473d89b27a8334e7ecc661add371e1fb712b7ab42e99\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"3e9410b81af8226c72c32a38a0a011d575ef0c64aee9323343d9a4f773241daf\"" Feb 13 20:17:14.344923 containerd[1795]: time="2025-02-13T20:17:14.344886995Z" level=info msg="StartContainer for \"3e9410b81af8226c72c32a38a0a011d575ef0c64aee9323343d9a4f773241daf\"" Feb 13 20:17:14.370027 kubelet[3267]: I0213 20:17:14.369976 3267 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/e2c31b9a-2418-4820-97f0-31eaad2a72e5-var-lib-calico\") pod \"tigera-operator-7bc55997bb-5vpkb\" (UID: \"e2c31b9a-2418-4820-97f0-31eaad2a72e5\") " 
pod="tigera-operator/tigera-operator-7bc55997bb-5vpkb" Feb 13 20:17:14.370027 kubelet[3267]: I0213 20:17:14.369999 3267 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6zb4h\" (UniqueName: \"kubernetes.io/projected/e2c31b9a-2418-4820-97f0-31eaad2a72e5-kube-api-access-6zb4h\") pod \"tigera-operator-7bc55997bb-5vpkb\" (UID: \"e2c31b9a-2418-4820-97f0-31eaad2a72e5\") " pod="tigera-operator/tigera-operator-7bc55997bb-5vpkb" Feb 13 20:17:14.372595 systemd[1]: Started cri-containerd-3e9410b81af8226c72c32a38a0a011d575ef0c64aee9323343d9a4f773241daf.scope - libcontainer container 3e9410b81af8226c72c32a38a0a011d575ef0c64aee9323343d9a4f773241daf. Feb 13 20:17:14.389304 containerd[1795]: time="2025-02-13T20:17:14.389273194Z" level=info msg="StartContainer for \"3e9410b81af8226c72c32a38a0a011d575ef0c64aee9323343d9a4f773241daf\" returns successfully" Feb 13 20:17:14.612880 containerd[1795]: time="2025-02-13T20:17:14.612645268Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7bc55997bb-5vpkb,Uid:e2c31b9a-2418-4820-97f0-31eaad2a72e5,Namespace:tigera-operator,Attempt:0,}" Feb 13 20:17:14.624262 containerd[1795]: time="2025-02-13T20:17:14.624217740Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:17:14.624262 containerd[1795]: time="2025-02-13T20:17:14.624248998Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:17:14.624262 containerd[1795]: time="2025-02-13T20:17:14.624259888Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:17:14.624417 containerd[1795]: time="2025-02-13T20:17:14.624306356Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:17:14.647646 systemd[1]: Started cri-containerd-24ec8331362fd35d594ceed5f40042ac07ec8e137fae6a30f08c28bea434d048.scope - libcontainer container 24ec8331362fd35d594ceed5f40042ac07ec8e137fae6a30f08c28bea434d048. Feb 13 20:17:14.675998 containerd[1795]: time="2025-02-13T20:17:14.675970872Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7bc55997bb-5vpkb,Uid:e2c31b9a-2418-4820-97f0-31eaad2a72e5,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"24ec8331362fd35d594ceed5f40042ac07ec8e137fae6a30f08c28bea434d048\"" Feb 13 20:17:14.677024 containerd[1795]: time="2025-02-13T20:17:14.677007112Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\"" Feb 13 20:17:15.128417 kubelet[3267]: I0213 20:17:15.128264 3267 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-clzrt" podStartSLOduration=2.128230403 podStartE2EDuration="2.128230403s" podCreationTimestamp="2025-02-13 20:17:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 20:17:15.127801366 +0000 UTC m=+15.118083348" watchObservedRunningTime="2025-02-13 20:17:15.128230403 +0000 UTC m=+15.118512369" Feb 13 20:17:15.188056 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1983908773.mount: Deactivated successfully. Feb 13 20:17:16.021537 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1093922471.mount: Deactivated successfully. 
Feb 13 20:17:16.227307 containerd[1795]: time="2025-02-13T20:17:16.227262466Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:17:16.227519 containerd[1795]: time="2025-02-13T20:17:16.227455988Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=21762497" Feb 13 20:17:16.227845 containerd[1795]: time="2025-02-13T20:17:16.227809689Z" level=info msg="ImageCreate event name:\"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:17:16.228891 containerd[1795]: time="2025-02-13T20:17:16.228855712Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:17:16.229684 containerd[1795]: time="2025-02-13T20:17:16.229648337Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"21758492\" in 1.552621349s" Feb 13 20:17:16.229684 containerd[1795]: time="2025-02-13T20:17:16.229663025Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\"" Feb 13 20:17:16.230647 containerd[1795]: time="2025-02-13T20:17:16.230636718Z" level=info msg="CreateContainer within sandbox \"24ec8331362fd35d594ceed5f40042ac07ec8e137fae6a30f08c28bea434d048\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Feb 13 20:17:16.234214 containerd[1795]: time="2025-02-13T20:17:16.234170565Z" level=info msg="CreateContainer within sandbox 
\"24ec8331362fd35d594ceed5f40042ac07ec8e137fae6a30f08c28bea434d048\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"b49f9df33ced4224252d1a52672edf655d1da30cf34c5e0835c090b5cf9d78d6\"" Feb 13 20:17:16.234353 containerd[1795]: time="2025-02-13T20:17:16.234313554Z" level=info msg="StartContainer for \"b49f9df33ced4224252d1a52672edf655d1da30cf34c5e0835c090b5cf9d78d6\"" Feb 13 20:17:16.256768 systemd[1]: Started cri-containerd-b49f9df33ced4224252d1a52672edf655d1da30cf34c5e0835c090b5cf9d78d6.scope - libcontainer container b49f9df33ced4224252d1a52672edf655d1da30cf34c5e0835c090b5cf9d78d6. Feb 13 20:17:16.268019 containerd[1795]: time="2025-02-13T20:17:16.267963784Z" level=info msg="StartContainer for \"b49f9df33ced4224252d1a52672edf655d1da30cf34c5e0835c090b5cf9d78d6\" returns successfully" Feb 13 20:17:17.132697 kubelet[3267]: I0213 20:17:17.132589 3267 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7bc55997bb-5vpkb" podStartSLOduration=1.579172678 podStartE2EDuration="3.132551751s" podCreationTimestamp="2025-02-13 20:17:14 +0000 UTC" firstStartedPulling="2025-02-13 20:17:14.676691241 +0000 UTC m=+14.666973163" lastFinishedPulling="2025-02-13 20:17:16.230070325 +0000 UTC m=+16.220352236" observedRunningTime="2025-02-13 20:17:17.132428046 +0000 UTC m=+17.122710035" watchObservedRunningTime="2025-02-13 20:17:17.132551751 +0000 UTC m=+17.122833713" Feb 13 20:17:19.158532 kubelet[3267]: I0213 20:17:19.158494 3267 topology_manager.go:215] "Topology Admit Handler" podUID="d5bd188f-5958-46b8-909e-178f7436febb" podNamespace="calico-system" podName="calico-typha-68ccd97bf6-5czw4" Feb 13 20:17:19.164942 systemd[1]: Created slice kubepods-besteffort-podd5bd188f_5958_46b8_909e_178f7436febb.slice - libcontainer container kubepods-besteffort-podd5bd188f_5958_46b8_909e_178f7436febb.slice. 
Feb 13 20:17:19.175985 kubelet[3267]: I0213 20:17:19.175958 3267 topology_manager.go:215] "Topology Admit Handler" podUID="f5960550-b202-42d3-92d4-6b98b4d12969" podNamespace="calico-system" podName="calico-node-fc7qg" Feb 13 20:17:19.179517 systemd[1]: Created slice kubepods-besteffort-podf5960550_b202_42d3_92d4_6b98b4d12969.slice - libcontainer container kubepods-besteffort-podf5960550_b202_42d3_92d4_6b98b4d12969.slice. Feb 13 20:17:19.204955 kubelet[3267]: I0213 20:17:19.204841 3267 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f5960550-b202-42d3-92d4-6b98b4d12969-xtables-lock\") pod \"calico-node-fc7qg\" (UID: \"f5960550-b202-42d3-92d4-6b98b4d12969\") " pod="calico-system/calico-node-fc7qg" Feb 13 20:17:19.204955 kubelet[3267]: I0213 20:17:19.204938 3267 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/f5960550-b202-42d3-92d4-6b98b4d12969-var-lib-calico\") pod \"calico-node-fc7qg\" (UID: \"f5960550-b202-42d3-92d4-6b98b4d12969\") " pod="calico-system/calico-node-fc7qg" Feb 13 20:17:19.205497 kubelet[3267]: I0213 20:17:19.205047 3267 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pd8fh\" (UniqueName: \"kubernetes.io/projected/f5960550-b202-42d3-92d4-6b98b4d12969-kube-api-access-pd8fh\") pod \"calico-node-fc7qg\" (UID: \"f5960550-b202-42d3-92d4-6b98b4d12969\") " pod="calico-system/calico-node-fc7qg" Feb 13 20:17:19.205497 kubelet[3267]: I0213 20:17:19.205150 3267 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/f5960550-b202-42d3-92d4-6b98b4d12969-flexvol-driver-host\") pod \"calico-node-fc7qg\" (UID: \"f5960550-b202-42d3-92d4-6b98b4d12969\") " pod="calico-system/calico-node-fc7qg" Feb 13 
20:17:19.205497 kubelet[3267]: I0213 20:17:19.205246 3267 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/f5960550-b202-42d3-92d4-6b98b4d12969-policysync\") pod \"calico-node-fc7qg\" (UID: \"f5960550-b202-42d3-92d4-6b98b4d12969\") " pod="calico-system/calico-node-fc7qg" Feb 13 20:17:19.205497 kubelet[3267]: I0213 20:17:19.205425 3267 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/f5960550-b202-42d3-92d4-6b98b4d12969-node-certs\") pod \"calico-node-fc7qg\" (UID: \"f5960550-b202-42d3-92d4-6b98b4d12969\") " pod="calico-system/calico-node-fc7qg" Feb 13 20:17:19.206245 kubelet[3267]: I0213 20:17:19.205581 3267 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nk6mf\" (UniqueName: \"kubernetes.io/projected/d5bd188f-5958-46b8-909e-178f7436febb-kube-api-access-nk6mf\") pod \"calico-typha-68ccd97bf6-5czw4\" (UID: \"d5bd188f-5958-46b8-909e-178f7436febb\") " pod="calico-system/calico-typha-68ccd97bf6-5czw4" Feb 13 20:17:19.206245 kubelet[3267]: I0213 20:17:19.205700 3267 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f5960550-b202-42d3-92d4-6b98b4d12969-lib-modules\") pod \"calico-node-fc7qg\" (UID: \"f5960550-b202-42d3-92d4-6b98b4d12969\") " pod="calico-system/calico-node-fc7qg" Feb 13 20:17:19.206245 kubelet[3267]: I0213 20:17:19.205790 3267 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/f5960550-b202-42d3-92d4-6b98b4d12969-var-run-calico\") pod \"calico-node-fc7qg\" (UID: \"f5960550-b202-42d3-92d4-6b98b4d12969\") " pod="calico-system/calico-node-fc7qg" Feb 13 20:17:19.206245 kubelet[3267]: I0213 
20:17:19.205883 3267 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/f5960550-b202-42d3-92d4-6b98b4d12969-cni-net-dir\") pod \"calico-node-fc7qg\" (UID: \"f5960550-b202-42d3-92d4-6b98b4d12969\") " pod="calico-system/calico-node-fc7qg" Feb 13 20:17:19.206245 kubelet[3267]: I0213 20:17:19.205973 3267 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/d5bd188f-5958-46b8-909e-178f7436febb-typha-certs\") pod \"calico-typha-68ccd97bf6-5czw4\" (UID: \"d5bd188f-5958-46b8-909e-178f7436febb\") " pod="calico-system/calico-typha-68ccd97bf6-5czw4" Feb 13 20:17:19.206862 kubelet[3267]: I0213 20:17:19.206065 3267 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/f5960550-b202-42d3-92d4-6b98b4d12969-cni-log-dir\") pod \"calico-node-fc7qg\" (UID: \"f5960550-b202-42d3-92d4-6b98b4d12969\") " pod="calico-system/calico-node-fc7qg" Feb 13 20:17:19.206862 kubelet[3267]: I0213 20:17:19.206156 3267 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d5bd188f-5958-46b8-909e-178f7436febb-tigera-ca-bundle\") pod \"calico-typha-68ccd97bf6-5czw4\" (UID: \"d5bd188f-5958-46b8-909e-178f7436febb\") " pod="calico-system/calico-typha-68ccd97bf6-5czw4" Feb 13 20:17:19.206862 kubelet[3267]: I0213 20:17:19.206251 3267 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f5960550-b202-42d3-92d4-6b98b4d12969-tigera-ca-bundle\") pod \"calico-node-fc7qg\" (UID: \"f5960550-b202-42d3-92d4-6b98b4d12969\") " pod="calico-system/calico-node-fc7qg" Feb 13 20:17:19.206862 kubelet[3267]: I0213 20:17:19.206349 3267 
reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/f5960550-b202-42d3-92d4-6b98b4d12969-cni-bin-dir\") pod \"calico-node-fc7qg\" (UID: \"f5960550-b202-42d3-92d4-6b98b4d12969\") " pod="calico-system/calico-node-fc7qg" Feb 13 20:17:19.306012 kubelet[3267]: I0213 20:17:19.305927 3267 topology_manager.go:215] "Topology Admit Handler" podUID="64b01e33-b95d-49cd-8822-d1aaf4457527" podNamespace="calico-system" podName="csi-node-driver-kcvvn" Feb 13 20:17:19.306639 kubelet[3267]: E0213 20:17:19.306587 3267 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kcvvn" podUID="64b01e33-b95d-49cd-8822-d1aaf4457527" Feb 13 20:17:19.310414 kubelet[3267]: E0213 20:17:19.310351 3267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:17:19.310414 kubelet[3267]: W0213 20:17:19.310410 3267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:17:19.310982 kubelet[3267]: E0213 20:17:19.310532 3267 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:17:19.311329 kubelet[3267]: E0213 20:17:19.311263 3267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:17:19.311986 kubelet[3267]: W0213 20:17:19.311515 3267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:17:19.312211 kubelet[3267]: E0213 20:17:19.311588 3267 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:17:19.312728 kubelet[3267]: E0213 20:17:19.312628 3267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:17:19.312728 kubelet[3267]: W0213 20:17:19.312688 3267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:17:19.313043 kubelet[3267]: E0213 20:17:19.312736 3267 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:17:19.313319 kubelet[3267]: E0213 20:17:19.313291 3267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:17:19.313411 kubelet[3267]: W0213 20:17:19.313321 3267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:17:19.313411 kubelet[3267]: E0213 20:17:19.313351 3267 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:17:19.313840 kubelet[3267]: E0213 20:17:19.313816 3267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:17:19.313840 kubelet[3267]: W0213 20:17:19.313839 3267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:17:19.314071 kubelet[3267]: E0213 20:17:19.313867 3267 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:17:19.314563 kubelet[3267]: E0213 20:17:19.314521 3267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:17:19.314563 kubelet[3267]: W0213 20:17:19.314562 3267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:17:19.314884 kubelet[3267]: E0213 20:17:19.314599 3267 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:17:19.315736 kubelet[3267]: E0213 20:17:19.315700 3267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:17:19.315736 kubelet[3267]: W0213 20:17:19.315733 3267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:17:19.316027 kubelet[3267]: E0213 20:17:19.315773 3267 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:17:19.316230 kubelet[3267]: E0213 20:17:19.316191 3267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:17:19.316230 kubelet[3267]: W0213 20:17:19.316225 3267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:17:19.316512 kubelet[3267]: E0213 20:17:19.316254 3267 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:17:19.324633 kubelet[3267]: E0213 20:17:19.324050 3267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:17:19.324633 kubelet[3267]: W0213 20:17:19.324093 3267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:17:19.324633 kubelet[3267]: E0213 20:17:19.324170 3267 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:17:19.324633 kubelet[3267]: E0213 20:17:19.324443 3267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:17:19.324633 kubelet[3267]: W0213 20:17:19.324477 3267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:17:19.324633 kubelet[3267]: E0213 20:17:19.324502 3267 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:17:19.397406 kubelet[3267]: E0213 20:17:19.397384 3267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:17:19.397406 kubelet[3267]: W0213 20:17:19.397401 3267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:17:19.397538 kubelet[3267]: E0213 20:17:19.397419 3267 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:17:19.397597 kubelet[3267]: E0213 20:17:19.397588 3267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:17:19.397597 kubelet[3267]: W0213 20:17:19.397596 3267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:17:19.397679 kubelet[3267]: E0213 20:17:19.397606 3267 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:17:19.397821 kubelet[3267]: E0213 20:17:19.397810 3267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:17:19.397821 kubelet[3267]: W0213 20:17:19.397820 3267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:17:19.397915 kubelet[3267]: E0213 20:17:19.397831 3267 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:17:19.398032 kubelet[3267]: E0213 20:17:19.398020 3267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:17:19.398032 kubelet[3267]: W0213 20:17:19.398030 3267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:17:19.398126 kubelet[3267]: E0213 20:17:19.398041 3267 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:17:19.398192 kubelet[3267]: E0213 20:17:19.398182 3267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:17:19.398192 kubelet[3267]: W0213 20:17:19.398191 3267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:17:19.398272 kubelet[3267]: E0213 20:17:19.398201 3267 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:17:19.398357 kubelet[3267]: E0213 20:17:19.398348 3267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:17:19.398357 kubelet[3267]: W0213 20:17:19.398355 3267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:17:19.398433 kubelet[3267]: E0213 20:17:19.398365 3267 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:17:19.398499 kubelet[3267]: E0213 20:17:19.398490 3267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:17:19.398499 kubelet[3267]: W0213 20:17:19.398497 3267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:17:19.398579 kubelet[3267]: E0213 20:17:19.398506 3267 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:17:19.398656 kubelet[3267]: E0213 20:17:19.398648 3267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:17:19.398656 kubelet[3267]: W0213 20:17:19.398655 3267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:17:19.398732 kubelet[3267]: E0213 20:17:19.398664 3267 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:17:19.398813 kubelet[3267]: E0213 20:17:19.398804 3267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:17:19.398813 kubelet[3267]: W0213 20:17:19.398811 3267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:17:19.398892 kubelet[3267]: E0213 20:17:19.398820 3267 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:17:19.398971 kubelet[3267]: E0213 20:17:19.398962 3267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:17:19.398971 kubelet[3267]: W0213 20:17:19.398970 3267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:17:19.399048 kubelet[3267]: E0213 20:17:19.398978 3267 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:17:19.399104 kubelet[3267]: E0213 20:17:19.399096 3267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:17:19.399104 kubelet[3267]: W0213 20:17:19.399103 3267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:17:19.399177 kubelet[3267]: E0213 20:17:19.399111 3267 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:17:19.399226 kubelet[3267]: E0213 20:17:19.399219 3267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:17:19.399226 kubelet[3267]: W0213 20:17:19.399226 3267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:17:19.399303 kubelet[3267]: E0213 20:17:19.399234 3267 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:17:19.399349 kubelet[3267]: E0213 20:17:19.399342 3267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:17:19.399390 kubelet[3267]: W0213 20:17:19.399350 3267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:17:19.399390 kubelet[3267]: E0213 20:17:19.399358 3267 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:17:19.399478 kubelet[3267]: E0213 20:17:19.399469 3267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:17:19.399478 kubelet[3267]: W0213 20:17:19.399477 3267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:17:19.399556 kubelet[3267]: E0213 20:17:19.399486 3267 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:17:19.399600 kubelet[3267]: E0213 20:17:19.399592 3267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:17:19.399600 kubelet[3267]: W0213 20:17:19.399599 3267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:17:19.399676 kubelet[3267]: E0213 20:17:19.399608 3267 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:17:19.399715 kubelet[3267]: E0213 20:17:19.399710 3267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:17:19.399754 kubelet[3267]: W0213 20:17:19.399717 3267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:17:19.399754 kubelet[3267]: E0213 20:17:19.399726 3267 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:17:19.399842 kubelet[3267]: E0213 20:17:19.399835 3267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:17:19.399842 kubelet[3267]: W0213 20:17:19.399842 3267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:17:19.399916 kubelet[3267]: E0213 20:17:19.399850 3267 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:17:19.399959 kubelet[3267]: E0213 20:17:19.399954 3267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:17:19.399998 kubelet[3267]: W0213 20:17:19.399961 3267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:17:19.399998 kubelet[3267]: E0213 20:17:19.399969 3267 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:17:19.400079 kubelet[3267]: E0213 20:17:19.400071 3267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:17:19.400079 kubelet[3267]: W0213 20:17:19.400078 3267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:17:19.400155 kubelet[3267]: E0213 20:17:19.400087 3267 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:17:19.400200 kubelet[3267]: E0213 20:17:19.400192 3267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:17:19.400241 kubelet[3267]: W0213 20:17:19.400200 3267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:17:19.400241 kubelet[3267]: E0213 20:17:19.400209 3267 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:17:19.408655 kubelet[3267]: E0213 20:17:19.408599 3267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:17:19.408655 kubelet[3267]: W0213 20:17:19.408608 3267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:17:19.408655 kubelet[3267]: E0213 20:17:19.408617 3267 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:17:19.408655 kubelet[3267]: I0213 20:17:19.408637 3267 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/64b01e33-b95d-49cd-8822-d1aaf4457527-registration-dir\") pod \"csi-node-driver-kcvvn\" (UID: \"64b01e33-b95d-49cd-8822-d1aaf4457527\") " pod="calico-system/csi-node-driver-kcvvn" Feb 13 20:17:19.408852 kubelet[3267]: E0213 20:17:19.408826 3267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:17:19.408852 kubelet[3267]: W0213 20:17:19.408836 3267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:17:19.408852 kubelet[3267]: E0213 20:17:19.408848 3267 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:17:19.408950 kubelet[3267]: I0213 20:17:19.408861 3267 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/64b01e33-b95d-49cd-8822-d1aaf4457527-varrun\") pod \"csi-node-driver-kcvvn\" (UID: \"64b01e33-b95d-49cd-8822-d1aaf4457527\") " pod="calico-system/csi-node-driver-kcvvn" Feb 13 20:17:19.409054 kubelet[3267]: E0213 20:17:19.409043 3267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:17:19.409091 kubelet[3267]: W0213 20:17:19.409053 3267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:17:19.409091 kubelet[3267]: E0213 20:17:19.409065 3267 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:17:19.409091 kubelet[3267]: I0213 20:17:19.409082 3267 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/64b01e33-b95d-49cd-8822-d1aaf4457527-kubelet-dir\") pod \"csi-node-driver-kcvvn\" (UID: \"64b01e33-b95d-49cd-8822-d1aaf4457527\") " pod="calico-system/csi-node-driver-kcvvn" Feb 13 20:17:19.409285 kubelet[3267]: E0213 20:17:19.409274 3267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:17:19.409317 kubelet[3267]: W0213 20:17:19.409286 3267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:17:19.409317 kubelet[3267]: E0213 20:17:19.409298 3267 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:17:19.409421 kubelet[3267]: E0213 20:17:19.409413 3267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:17:19.409457 kubelet[3267]: W0213 20:17:19.409422 3267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:17:19.409457 kubelet[3267]: E0213 20:17:19.409434 3267 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:17:19.409588 kubelet[3267]: E0213 20:17:19.409576 3267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:17:19.409628 kubelet[3267]: W0213 20:17:19.409589 3267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:17:19.409628 kubelet[3267]: E0213 20:17:19.409601 3267 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:17:19.409765 kubelet[3267]: E0213 20:17:19.409754 3267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:17:19.409812 kubelet[3267]: W0213 20:17:19.409765 3267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:17:19.409812 kubelet[3267]: E0213 20:17:19.409781 3267 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:17:19.409812 kubelet[3267]: I0213 20:17:19.409804 3267 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/64b01e33-b95d-49cd-8822-d1aaf4457527-socket-dir\") pod \"csi-node-driver-kcvvn\" (UID: \"64b01e33-b95d-49cd-8822-d1aaf4457527\") " pod="calico-system/csi-node-driver-kcvvn" Feb 13 20:17:19.409968 kubelet[3267]: E0213 20:17:19.409958 3267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:17:19.409998 kubelet[3267]: W0213 20:17:19.409967 3267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:17:19.409998 kubelet[3267]: E0213 20:17:19.409978 3267 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:17:19.410090 kubelet[3267]: E0213 20:17:19.410083 3267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:17:19.410116 kubelet[3267]: W0213 20:17:19.410090 3267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:17:19.410116 kubelet[3267]: E0213 20:17:19.410096 3267 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:17:19.410238 kubelet[3267]: E0213 20:17:19.410227 3267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:17:19.410272 kubelet[3267]: W0213 20:17:19.410240 3267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:17:19.410272 kubelet[3267]: E0213 20:17:19.410251 3267 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:17:19.410351 kubelet[3267]: E0213 20:17:19.410345 3267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:17:19.410378 kubelet[3267]: W0213 20:17:19.410352 3267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:17:19.410378 kubelet[3267]: E0213 20:17:19.410358 3267 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:17:19.410477 kubelet[3267]: E0213 20:17:19.410470 3267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:17:19.410477 kubelet[3267]: W0213 20:17:19.410477 3267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:17:19.410536 kubelet[3267]: E0213 20:17:19.410483 3267 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:17:19.410651 kubelet[3267]: E0213 20:17:19.410642 3267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:17:19.410678 kubelet[3267]: W0213 20:17:19.410652 3267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:17:19.410678 kubelet[3267]: E0213 20:17:19.410661 3267 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:17:19.410725 kubelet[3267]: I0213 20:17:19.410677 3267 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7rnj9\" (UniqueName: \"kubernetes.io/projected/64b01e33-b95d-49cd-8822-d1aaf4457527-kube-api-access-7rnj9\") pod \"csi-node-driver-kcvvn\" (UID: \"64b01e33-b95d-49cd-8822-d1aaf4457527\") " pod="calico-system/csi-node-driver-kcvvn" Feb 13 20:17:19.410815 kubelet[3267]: E0213 20:17:19.410808 3267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:17:19.410844 kubelet[3267]: W0213 20:17:19.410815 3267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:17:19.410844 kubelet[3267]: E0213 20:17:19.410823 3267 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:17:19.410925 kubelet[3267]: E0213 20:17:19.410919 3267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:17:19.410953 kubelet[3267]: W0213 20:17:19.410925 3267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:17:19.410953 kubelet[3267]: E0213 20:17:19.410932 3267 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:17:19.468610 containerd[1795]: time="2025-02-13T20:17:19.468529223Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-68ccd97bf6-5czw4,Uid:d5bd188f-5958-46b8-909e-178f7436febb,Namespace:calico-system,Attempt:0,}" Feb 13 20:17:19.479991 containerd[1795]: time="2025-02-13T20:17:19.479947024Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:17:19.480199 containerd[1795]: time="2025-02-13T20:17:19.479991137Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:17:19.480231 containerd[1795]: time="2025-02-13T20:17:19.480197075Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:17:19.480263 containerd[1795]: time="2025-02-13T20:17:19.480240612Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:17:19.480956 containerd[1795]: time="2025-02-13T20:17:19.480939592Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-fc7qg,Uid:f5960550-b202-42d3-92d4-6b98b4d12969,Namespace:calico-system,Attempt:0,}" Feb 13 20:17:19.490841 containerd[1795]: time="2025-02-13T20:17:19.490759788Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:17:19.491015 containerd[1795]: time="2025-02-13T20:17:19.490808879Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:17:19.491015 containerd[1795]: time="2025-02-13T20:17:19.491003690Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:17:19.491085 containerd[1795]: time="2025-02-13T20:17:19.491046426Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:17:19.508764 systemd[1]: Started cri-containerd-e18eb29252953ead25211751d5453893c3c2265f5e2fd1c8299dfe5df94a188e.scope - libcontainer container e18eb29252953ead25211751d5453893c3c2265f5e2fd1c8299dfe5df94a188e. Feb 13 20:17:19.510571 systemd[1]: Started cri-containerd-e4ac01215380ff3fbc397c88d560637cb936265063f048065866bd5bd584d2d5.scope - libcontainer container e4ac01215380ff3fbc397c88d560637cb936265063f048065866bd5bd584d2d5. Feb 13 20:17:19.511101 kubelet[3267]: E0213 20:17:19.511089 3267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:17:19.511101 kubelet[3267]: W0213 20:17:19.511100 3267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:17:19.511169 kubelet[3267]: E0213 20:17:19.511113 3267 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:17:19.511238 kubelet[3267]: E0213 20:17:19.511230 3267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:17:19.511238 kubelet[3267]: W0213 20:17:19.511236 3267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:17:19.511295 kubelet[3267]: E0213 20:17:19.511244 3267 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:17:19.511342 kubelet[3267]: E0213 20:17:19.511337 3267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:17:19.511342 kubelet[3267]: W0213 20:17:19.511342 3267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:17:19.511403 kubelet[3267]: E0213 20:17:19.511348 3267 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:17:19.511468 kubelet[3267]: E0213 20:17:19.511459 3267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:17:19.511468 kubelet[3267]: W0213 20:17:19.511467 3267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:17:19.511541 kubelet[3267]: E0213 20:17:19.511476 3267 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:17:19.511583 kubelet[3267]: E0213 20:17:19.511564 3267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:17:19.511583 kubelet[3267]: W0213 20:17:19.511569 3267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:17:19.511583 kubelet[3267]: E0213 20:17:19.511575 3267 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:17:19.511689 kubelet[3267]: E0213 20:17:19.511652 3267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:17:19.511689 kubelet[3267]: W0213 20:17:19.511657 3267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:17:19.511689 kubelet[3267]: E0213 20:17:19.511663 3267 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:17:19.511793 kubelet[3267]: E0213 20:17:19.511764 3267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:17:19.511793 kubelet[3267]: W0213 20:17:19.511768 3267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:17:19.511793 kubelet[3267]: E0213 20:17:19.511774 3267 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:17:19.511931 kubelet[3267]: E0213 20:17:19.511854 3267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:17:19.511931 kubelet[3267]: W0213 20:17:19.511859 3267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:17:19.511931 kubelet[3267]: E0213 20:17:19.511865 3267 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:17:19.512029 kubelet[3267]: E0213 20:17:19.511943 3267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:17:19.512029 kubelet[3267]: W0213 20:17:19.511948 3267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:17:19.512029 kubelet[3267]: E0213 20:17:19.511954 3267 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:17:19.512133 kubelet[3267]: E0213 20:17:19.512039 3267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:17:19.512133 kubelet[3267]: W0213 20:17:19.512047 3267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:17:19.512133 kubelet[3267]: E0213 20:17:19.512055 3267 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:17:19.512226 kubelet[3267]: E0213 20:17:19.512182 3267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:17:19.512226 kubelet[3267]: W0213 20:17:19.512187 3267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:17:19.512226 kubelet[3267]: E0213 20:17:19.512194 3267 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:17:19.512298 kubelet[3267]: E0213 20:17:19.512292 3267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:17:19.512298 kubelet[3267]: W0213 20:17:19.512296 3267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:17:19.512362 kubelet[3267]: E0213 20:17:19.512303 3267 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:17:19.512422 kubelet[3267]: E0213 20:17:19.512416 3267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:17:19.512422 kubelet[3267]: W0213 20:17:19.512421 3267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:17:19.512485 kubelet[3267]: E0213 20:17:19.512428 3267 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:17:19.512581 kubelet[3267]: E0213 20:17:19.512570 3267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:17:19.512603 kubelet[3267]: W0213 20:17:19.512581 3267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:17:19.512603 kubelet[3267]: E0213 20:17:19.512592 3267 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:17:19.512694 kubelet[3267]: E0213 20:17:19.512689 3267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:17:19.512718 kubelet[3267]: W0213 20:17:19.512694 3267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:17:19.512718 kubelet[3267]: E0213 20:17:19.512702 3267 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:17:19.512791 kubelet[3267]: E0213 20:17:19.512786 3267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:17:19.512813 kubelet[3267]: W0213 20:17:19.512792 3267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:17:19.512813 kubelet[3267]: E0213 20:17:19.512798 3267 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:17:19.512947 kubelet[3267]: E0213 20:17:19.512941 3267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:17:19.512971 kubelet[3267]: W0213 20:17:19.512947 3267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:17:19.512971 kubelet[3267]: E0213 20:17:19.512955 3267 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:17:19.513048 kubelet[3267]: E0213 20:17:19.513043 3267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:17:19.513071 kubelet[3267]: W0213 20:17:19.513048 3267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:17:19.513071 kubelet[3267]: E0213 20:17:19.513055 3267 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:17:19.513142 kubelet[3267]: E0213 20:17:19.513137 3267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:17:19.513164 kubelet[3267]: W0213 20:17:19.513142 3267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:17:19.513164 kubelet[3267]: E0213 20:17:19.513149 3267 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:17:19.513232 kubelet[3267]: E0213 20:17:19.513227 3267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:17:19.513232 kubelet[3267]: W0213 20:17:19.513232 3267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:17:19.513279 kubelet[3267]: E0213 20:17:19.513238 3267 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:17:19.513336 kubelet[3267]: E0213 20:17:19.513330 3267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:17:19.513365 kubelet[3267]: W0213 20:17:19.513336 3267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:17:19.513365 kubelet[3267]: E0213 20:17:19.513342 3267 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:17:19.513517 kubelet[3267]: E0213 20:17:19.513507 3267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:17:19.513517 kubelet[3267]: W0213 20:17:19.513515 3267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:17:19.513591 kubelet[3267]: E0213 20:17:19.513523 3267 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:17:19.513641 kubelet[3267]: E0213 20:17:19.513634 3267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:17:19.513641 kubelet[3267]: W0213 20:17:19.513640 3267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:17:19.513684 kubelet[3267]: E0213 20:17:19.513646 3267 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:17:19.513750 kubelet[3267]: E0213 20:17:19.513744 3267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:17:19.513772 kubelet[3267]: W0213 20:17:19.513750 3267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:17:19.513772 kubelet[3267]: E0213 20:17:19.513757 3267 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:17:19.513899 kubelet[3267]: E0213 20:17:19.513892 3267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:17:19.513921 kubelet[3267]: W0213 20:17:19.513899 3267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:17:19.513921 kubelet[3267]: E0213 20:17:19.513905 3267 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:17:19.519145 kubelet[3267]: E0213 20:17:19.519126 3267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:17:19.519145 kubelet[3267]: W0213 20:17:19.519143 3267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:17:19.519247 kubelet[3267]: E0213 20:17:19.519158 3267 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:17:19.522761 containerd[1795]: time="2025-02-13T20:17:19.522738630Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-fc7qg,Uid:f5960550-b202-42d3-92d4-6b98b4d12969,Namespace:calico-system,Attempt:0,} returns sandbox id \"e4ac01215380ff3fbc397c88d560637cb936265063f048065866bd5bd584d2d5\"" Feb 13 20:17:19.523687 containerd[1795]: time="2025-02-13T20:17:19.523644197Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Feb 13 20:17:19.534701 containerd[1795]: time="2025-02-13T20:17:19.534679888Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-68ccd97bf6-5czw4,Uid:d5bd188f-5958-46b8-909e-178f7436febb,Namespace:calico-system,Attempt:0,} returns sandbox id \"e18eb29252953ead25211751d5453893c3c2265f5e2fd1c8299dfe5df94a188e\"" Feb 13 20:17:21.071981 kubelet[3267]: E0213 20:17:21.071847 3267 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kcvvn" podUID="64b01e33-b95d-49cd-8822-d1aaf4457527" Feb 13 20:17:21.447232 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount3596137110.mount: Deactivated successfully. Feb 13 20:17:21.485751 containerd[1795]: time="2025-02-13T20:17:21.485698301Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:17:21.485977 containerd[1795]: time="2025-02-13T20:17:21.485846244Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=6855343" Feb 13 20:17:21.486254 containerd[1795]: time="2025-02-13T20:17:21.486241350Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:17:21.487428 containerd[1795]: time="2025-02-13T20:17:21.487413240Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:17:21.487723 containerd[1795]: time="2025-02-13T20:17:21.487701649Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 1.964040032s" Feb 13 20:17:21.487723 containerd[1795]: time="2025-02-13T20:17:21.487716444Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\"" Feb 13 20:17:21.488163 containerd[1795]: time="2025-02-13T20:17:21.488152816Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\"" Feb 13 20:17:21.488704 
containerd[1795]: time="2025-02-13T20:17:21.488668697Z" level=info msg="CreateContainer within sandbox \"e4ac01215380ff3fbc397c88d560637cb936265063f048065866bd5bd584d2d5\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Feb 13 20:17:21.493803 containerd[1795]: time="2025-02-13T20:17:21.493784369Z" level=info msg="CreateContainer within sandbox \"e4ac01215380ff3fbc397c88d560637cb936265063f048065866bd5bd584d2d5\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"712bdea0ea917bb510871ba3b1d10c252dc6bbe9b7569567b3dd2c40f98b9ed7\"" Feb 13 20:17:21.494081 containerd[1795]: time="2025-02-13T20:17:21.494067000Z" level=info msg="StartContainer for \"712bdea0ea917bb510871ba3b1d10c252dc6bbe9b7569567b3dd2c40f98b9ed7\"" Feb 13 20:17:21.522771 systemd[1]: Started cri-containerd-712bdea0ea917bb510871ba3b1d10c252dc6bbe9b7569567b3dd2c40f98b9ed7.scope - libcontainer container 712bdea0ea917bb510871ba3b1d10c252dc6bbe9b7569567b3dd2c40f98b9ed7. Feb 13 20:17:21.536388 containerd[1795]: time="2025-02-13T20:17:21.536365645Z" level=info msg="StartContainer for \"712bdea0ea917bb510871ba3b1d10c252dc6bbe9b7569567b3dd2c40f98b9ed7\" returns successfully" Feb 13 20:17:21.541294 systemd[1]: cri-containerd-712bdea0ea917bb510871ba3b1d10c252dc6bbe9b7569567b3dd2c40f98b9ed7.scope: Deactivated successfully. 
Feb 13 20:17:21.809096 containerd[1795]: time="2025-02-13T20:17:21.808995383Z" level=info msg="shim disconnected" id=712bdea0ea917bb510871ba3b1d10c252dc6bbe9b7569567b3dd2c40f98b9ed7 namespace=k8s.io Feb 13 20:17:21.809096 containerd[1795]: time="2025-02-13T20:17:21.809031344Z" level=warning msg="cleaning up after shim disconnected" id=712bdea0ea917bb510871ba3b1d10c252dc6bbe9b7569567b3dd2c40f98b9ed7 namespace=k8s.io Feb 13 20:17:21.809096 containerd[1795]: time="2025-02-13T20:17:21.809037621Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 20:17:22.437449 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-712bdea0ea917bb510871ba3b1d10c252dc6bbe9b7569567b3dd2c40f98b9ed7-rootfs.mount: Deactivated successfully. Feb 13 20:17:23.071190 kubelet[3267]: E0213 20:17:23.071103 3267 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kcvvn" podUID="64b01e33-b95d-49cd-8822-d1aaf4457527" Feb 13 20:17:23.434633 containerd[1795]: time="2025-02-13T20:17:23.434584855Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:17:23.434821 containerd[1795]: time="2025-02-13T20:17:23.434784508Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=29850141" Feb 13 20:17:23.435194 containerd[1795]: time="2025-02-13T20:17:23.435151812Z" level=info msg="ImageCreate event name:\"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:17:23.436126 containerd[1795]: time="2025-02-13T20:17:23.436088654Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:17:23.436492 containerd[1795]: time="2025-02-13T20:17:23.436465497Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"31343217\" in 1.948297886s" Feb 13 20:17:23.436492 containerd[1795]: time="2025-02-13T20:17:23.436481356Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\"" Feb 13 20:17:23.436928 containerd[1795]: time="2025-02-13T20:17:23.436917698Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Feb 13 20:17:23.439873 containerd[1795]: time="2025-02-13T20:17:23.439828020Z" level=info msg="CreateContainer within sandbox \"e18eb29252953ead25211751d5453893c3c2265f5e2fd1c8299dfe5df94a188e\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Feb 13 20:17:23.444098 containerd[1795]: time="2025-02-13T20:17:23.444048741Z" level=info msg="CreateContainer within sandbox \"e18eb29252953ead25211751d5453893c3c2265f5e2fd1c8299dfe5df94a188e\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"d06d84ad8e275efeeee1b2b363f52a56711acc3e6f3cd77da4c264dcbfa2843c\"" Feb 13 20:17:23.444251 containerd[1795]: time="2025-02-13T20:17:23.444223424Z" level=info msg="StartContainer for \"d06d84ad8e275efeeee1b2b363f52a56711acc3e6f3cd77da4c264dcbfa2843c\"" Feb 13 20:17:23.465758 systemd[1]: Started cri-containerd-d06d84ad8e275efeeee1b2b363f52a56711acc3e6f3cd77da4c264dcbfa2843c.scope - libcontainer container 
d06d84ad8e275efeeee1b2b363f52a56711acc3e6f3cd77da4c264dcbfa2843c. Feb 13 20:17:23.488617 containerd[1795]: time="2025-02-13T20:17:23.488567052Z" level=info msg="StartContainer for \"d06d84ad8e275efeeee1b2b363f52a56711acc3e6f3cd77da4c264dcbfa2843c\" returns successfully" Feb 13 20:17:24.160601 kubelet[3267]: I0213 20:17:24.160401 3267 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-68ccd97bf6-5czw4" podStartSLOduration=1.258688453 podStartE2EDuration="5.160362731s" podCreationTimestamp="2025-02-13 20:17:19 +0000 UTC" firstStartedPulling="2025-02-13 20:17:19.535183608 +0000 UTC m=+19.525465519" lastFinishedPulling="2025-02-13 20:17:23.436857886 +0000 UTC m=+23.427139797" observedRunningTime="2025-02-13 20:17:24.159701681 +0000 UTC m=+24.149983665" watchObservedRunningTime="2025-02-13 20:17:24.160362731 +0000 UTC m=+24.150644689" Feb 13 20:17:25.070756 kubelet[3267]: E0213 20:17:25.070662 3267 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kcvvn" podUID="64b01e33-b95d-49cd-8822-d1aaf4457527" Feb 13 20:17:25.140743 kubelet[3267]: I0213 20:17:25.140687 3267 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 20:17:26.127523 containerd[1795]: time="2025-02-13T20:17:26.127499746Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:17:26.127731 containerd[1795]: time="2025-02-13T20:17:26.127691712Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154" Feb 13 20:17:26.127936 containerd[1795]: time="2025-02-13T20:17:26.127924021Z" level=info msg="ImageCreate event 
name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:17:26.129055 containerd[1795]: time="2025-02-13T20:17:26.129009270Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:17:26.129486 containerd[1795]: time="2025-02-13T20:17:26.129439703Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 2.692507654s" Feb 13 20:17:26.129486 containerd[1795]: time="2025-02-13T20:17:26.129461298Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\"" Feb 13 20:17:26.130521 containerd[1795]: time="2025-02-13T20:17:26.130504596Z" level=info msg="CreateContainer within sandbox \"e4ac01215380ff3fbc397c88d560637cb936265063f048065866bd5bd584d2d5\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Feb 13 20:17:26.134854 containerd[1795]: time="2025-02-13T20:17:26.134838603Z" level=info msg="CreateContainer within sandbox \"e4ac01215380ff3fbc397c88d560637cb936265063f048065866bd5bd584d2d5\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"602a28e107e5301d3ccfd984b78835002f983d08730d8f07c9d5b00e3f4e3fa3\"" Feb 13 20:17:26.135048 containerd[1795]: time="2025-02-13T20:17:26.135036643Z" level=info msg="StartContainer for \"602a28e107e5301d3ccfd984b78835002f983d08730d8f07c9d5b00e3f4e3fa3\"" Feb 13 20:17:26.157579 systemd[1]: Started 
cri-containerd-602a28e107e5301d3ccfd984b78835002f983d08730d8f07c9d5b00e3f4e3fa3.scope - libcontainer container 602a28e107e5301d3ccfd984b78835002f983d08730d8f07c9d5b00e3f4e3fa3. Feb 13 20:17:26.172268 containerd[1795]: time="2025-02-13T20:17:26.172244976Z" level=info msg="StartContainer for \"602a28e107e5301d3ccfd984b78835002f983d08730d8f07c9d5b00e3f4e3fa3\" returns successfully" Feb 13 20:17:26.668621 kubelet[3267]: I0213 20:17:26.668573 3267 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 20:17:26.703408 systemd[1]: cri-containerd-602a28e107e5301d3ccfd984b78835002f983d08730d8f07c9d5b00e3f4e3fa3.scope: Deactivated successfully. Feb 13 20:17:26.756062 kubelet[3267]: I0213 20:17:26.755962 3267 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Feb 13 20:17:26.795591 kubelet[3267]: I0213 20:17:26.795507 3267 topology_manager.go:215] "Topology Admit Handler" podUID="36ab6480-30fc-4b9c-bca0-5bd3671f05dc" podNamespace="calico-apiserver" podName="calico-apiserver-ff9d88786-v9j7t" Feb 13 20:17:26.797123 kubelet[3267]: I0213 20:17:26.797049 3267 topology_manager.go:215] "Topology Admit Handler" podUID="f56e4292-4136-4f41-a2d5-245b627504ec" podNamespace="kube-system" podName="coredns-7db6d8ff4d-cnhpl" Feb 13 20:17:26.798229 kubelet[3267]: I0213 20:17:26.798169 3267 topology_manager.go:215] "Topology Admit Handler" podUID="a01e5e0d-d13b-4c53-a211-8d274b915928" podNamespace="calico-apiserver" podName="calico-apiserver-ff9d88786-8wrbb" Feb 13 20:17:26.799649 kubelet[3267]: I0213 20:17:26.799576 3267 topology_manager.go:215] "Topology Admit Handler" podUID="853990b6-ca4d-4293-b605-05f073fd0f8e" podNamespace="calico-system" podName="calico-kube-controllers-55886c68fb-v8l5j" Feb 13 20:17:26.800722 kubelet[3267]: I0213 20:17:26.800612 3267 topology_manager.go:215] "Topology Admit Handler" podUID="542fc81f-a219-47bd-b6bf-841062c32db8" podNamespace="kube-system" podName="coredns-7db6d8ff4d-frmtf" Feb 13 20:17:26.814283 
systemd[1]: Created slice kubepods-besteffort-pod36ab6480_30fc_4b9c_bca0_5bd3671f05dc.slice - libcontainer container kubepods-besteffort-pod36ab6480_30fc_4b9c_bca0_5bd3671f05dc.slice. Feb 13 20:17:26.821116 systemd[1]: Created slice kubepods-burstable-podf56e4292_4136_4f41_a2d5_245b627504ec.slice - libcontainer container kubepods-burstable-podf56e4292_4136_4f41_a2d5_245b627504ec.slice. Feb 13 20:17:26.827521 systemd[1]: Created slice kubepods-besteffort-poda01e5e0d_d13b_4c53_a211_8d274b915928.slice - libcontainer container kubepods-besteffort-poda01e5e0d_d13b_4c53_a211_8d274b915928.slice. Feb 13 20:17:26.832546 systemd[1]: Created slice kubepods-besteffort-pod853990b6_ca4d_4293_b605_05f073fd0f8e.slice - libcontainer container kubepods-besteffort-pod853990b6_ca4d_4293_b605_05f073fd0f8e.slice. Feb 13 20:17:26.837186 systemd[1]: Created slice kubepods-burstable-pod542fc81f_a219_47bd_b6bf_841062c32db8.slice - libcontainer container kubepods-burstable-pod542fc81f_a219_47bd_b6bf_841062c32db8.slice. 
Feb 13 20:17:26.965703 kubelet[3267]: I0213 20:17:26.965420 3267 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f56e4292-4136-4f41-a2d5-245b627504ec-config-volume\") pod \"coredns-7db6d8ff4d-cnhpl\" (UID: \"f56e4292-4136-4f41-a2d5-245b627504ec\") " pod="kube-system/coredns-7db6d8ff4d-cnhpl" Feb 13 20:17:26.965703 kubelet[3267]: I0213 20:17:26.965588 3267 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/36ab6480-30fc-4b9c-bca0-5bd3671f05dc-calico-apiserver-certs\") pod \"calico-apiserver-ff9d88786-v9j7t\" (UID: \"36ab6480-30fc-4b9c-bca0-5bd3671f05dc\") " pod="calico-apiserver/calico-apiserver-ff9d88786-v9j7t" Feb 13 20:17:26.965703 kubelet[3267]: I0213 20:17:26.965681 3267 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7n6kh\" (UniqueName: \"kubernetes.io/projected/f56e4292-4136-4f41-a2d5-245b627504ec-kube-api-access-7n6kh\") pod \"coredns-7db6d8ff4d-cnhpl\" (UID: \"f56e4292-4136-4f41-a2d5-245b627504ec\") " pod="kube-system/coredns-7db6d8ff4d-cnhpl" Feb 13 20:17:26.966696 kubelet[3267]: I0213 20:17:26.965740 3267 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tfwhg\" (UniqueName: \"kubernetes.io/projected/a01e5e0d-d13b-4c53-a211-8d274b915928-kube-api-access-tfwhg\") pod \"calico-apiserver-ff9d88786-8wrbb\" (UID: \"a01e5e0d-d13b-4c53-a211-8d274b915928\") " pod="calico-apiserver/calico-apiserver-ff9d88786-8wrbb" Feb 13 20:17:26.966696 kubelet[3267]: I0213 20:17:26.965818 3267 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hsbd6\" (UniqueName: \"kubernetes.io/projected/853990b6-ca4d-4293-b605-05f073fd0f8e-kube-api-access-hsbd6\") pod 
\"calico-kube-controllers-55886c68fb-v8l5j\" (UID: \"853990b6-ca4d-4293-b605-05f073fd0f8e\") " pod="calico-system/calico-kube-controllers-55886c68fb-v8l5j" Feb 13 20:17:26.966696 kubelet[3267]: I0213 20:17:26.965875 3267 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/542fc81f-a219-47bd-b6bf-841062c32db8-config-volume\") pod \"coredns-7db6d8ff4d-frmtf\" (UID: \"542fc81f-a219-47bd-b6bf-841062c32db8\") " pod="kube-system/coredns-7db6d8ff4d-frmtf" Feb 13 20:17:26.966696 kubelet[3267]: I0213 20:17:26.965927 3267 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4nsl2\" (UniqueName: \"kubernetes.io/projected/36ab6480-30fc-4b9c-bca0-5bd3671f05dc-kube-api-access-4nsl2\") pod \"calico-apiserver-ff9d88786-v9j7t\" (UID: \"36ab6480-30fc-4b9c-bca0-5bd3671f05dc\") " pod="calico-apiserver/calico-apiserver-ff9d88786-v9j7t" Feb 13 20:17:26.966696 kubelet[3267]: I0213 20:17:26.965976 3267 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/853990b6-ca4d-4293-b605-05f073fd0f8e-tigera-ca-bundle\") pod \"calico-kube-controllers-55886c68fb-v8l5j\" (UID: \"853990b6-ca4d-4293-b605-05f073fd0f8e\") " pod="calico-system/calico-kube-controllers-55886c68fb-v8l5j" Feb 13 20:17:26.967697 kubelet[3267]: I0213 20:17:26.966053 3267 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/a01e5e0d-d13b-4c53-a211-8d274b915928-calico-apiserver-certs\") pod \"calico-apiserver-ff9d88786-8wrbb\" (UID: \"a01e5e0d-d13b-4c53-a211-8d274b915928\") " pod="calico-apiserver/calico-apiserver-ff9d88786-8wrbb" Feb 13 20:17:26.967697 kubelet[3267]: I0213 20:17:26.966199 3267 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-47f42\" (UniqueName: \"kubernetes.io/projected/542fc81f-a219-47bd-b6bf-841062c32db8-kube-api-access-47f42\") pod \"coredns-7db6d8ff4d-frmtf\" (UID: \"542fc81f-a219-47bd-b6bf-841062c32db8\") " pod="kube-system/coredns-7db6d8ff4d-frmtf" Feb 13 20:17:27.074291 systemd[1]: Created slice kubepods-besteffort-pod64b01e33_b95d_49cd_8822_d1aaf4457527.slice - libcontainer container kubepods-besteffort-pod64b01e33_b95d_49cd_8822_d1aaf4457527.slice. Feb 13 20:17:27.075761 containerd[1795]: time="2025-02-13T20:17:27.075693924Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kcvvn,Uid:64b01e33-b95d-49cd-8822-d1aaf4457527,Namespace:calico-system,Attempt:0,}" Feb 13 20:17:27.119515 containerd[1795]: time="2025-02-13T20:17:27.119396514Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-ff9d88786-v9j7t,Uid:36ab6480-30fc-4b9c-bca0-5bd3671f05dc,Namespace:calico-apiserver,Attempt:0,}" Feb 13 20:17:27.125757 containerd[1795]: time="2025-02-13T20:17:27.125641112Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-cnhpl,Uid:f56e4292-4136-4f41-a2d5-245b627504ec,Namespace:kube-system,Attempt:0,}" Feb 13 20:17:27.131049 containerd[1795]: time="2025-02-13T20:17:27.130940225Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-ff9d88786-8wrbb,Uid:a01e5e0d-d13b-4c53-a211-8d274b915928,Namespace:calico-apiserver,Attempt:0,}" Feb 13 20:17:27.136580 containerd[1795]: time="2025-02-13T20:17:27.136437103Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-55886c68fb-v8l5j,Uid:853990b6-ca4d-4293-b605-05f073fd0f8e,Namespace:calico-system,Attempt:0,}" Feb 13 20:17:27.140826 containerd[1795]: time="2025-02-13T20:17:27.140701180Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-frmtf,Uid:542fc81f-a219-47bd-b6bf-841062c32db8,Namespace:kube-system,Attempt:0,}" Feb 13 20:17:27.164918 systemd[1]: 
run-containerd-io.containerd.runtime.v2.task-k8s.io-602a28e107e5301d3ccfd984b78835002f983d08730d8f07c9d5b00e3f4e3fa3-rootfs.mount: Deactivated successfully. Feb 13 20:17:27.375750 containerd[1795]: time="2025-02-13T20:17:27.375668453Z" level=info msg="shim disconnected" id=602a28e107e5301d3ccfd984b78835002f983d08730d8f07c9d5b00e3f4e3fa3 namespace=k8s.io Feb 13 20:17:27.375750 containerd[1795]: time="2025-02-13T20:17:27.375713885Z" level=warning msg="cleaning up after shim disconnected" id=602a28e107e5301d3ccfd984b78835002f983d08730d8f07c9d5b00e3f4e3fa3 namespace=k8s.io Feb 13 20:17:27.375750 containerd[1795]: time="2025-02-13T20:17:27.375719328Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 20:17:27.421521 containerd[1795]: time="2025-02-13T20:17:27.421483586Z" level=error msg="Failed to destroy network for sandbox \"b3497dacf36ec8b56010ca6778e1a849107e29c7f04852168e3e16e0ca2c7348\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:17:27.421668 containerd[1795]: time="2025-02-13T20:17:27.421537299Z" level=error msg="Failed to destroy network for sandbox \"4101e018441a5f170b331eaff8faff2caa5b81435959ed6c493ff174cf8464eb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:17:27.421716 containerd[1795]: time="2025-02-13T20:17:27.421701458Z" level=error msg="encountered an error cleaning up failed sandbox \"b3497dacf36ec8b56010ca6778e1a849107e29c7f04852168e3e16e0ca2c7348\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:17:27.421754 containerd[1795]: 
time="2025-02-13T20:17:27.421743630Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-ff9d88786-8wrbb,Uid:a01e5e0d-d13b-4c53-a211-8d274b915928,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b3497dacf36ec8b56010ca6778e1a849107e29c7f04852168e3e16e0ca2c7348\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:17:27.421818 containerd[1795]: time="2025-02-13T20:17:27.421762157Z" level=error msg="encountered an error cleaning up failed sandbox \"4101e018441a5f170b331eaff8faff2caa5b81435959ed6c493ff174cf8464eb\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:17:27.421818 containerd[1795]: time="2025-02-13T20:17:27.421707750Z" level=error msg="Failed to destroy network for sandbox \"fd52d3f97e9a1405278690dbfe9773e4c5bf6ddadf750c07b522c7e0964feb47\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:17:27.421818 containerd[1795]: time="2025-02-13T20:17:27.421797928Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kcvvn,Uid:64b01e33-b95d-49cd-8822-d1aaf4457527,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4101e018441a5f170b331eaff8faff2caa5b81435959ed6c493ff174cf8464eb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:17:27.421917 containerd[1795]: time="2025-02-13T20:17:27.421803212Z" level=error msg="Failed to 
destroy network for sandbox \"9ad3372cb116d6054d70b1eaa52144318f392d424263acd32de291b731176c1b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:17:27.421949 kubelet[3267]: E0213 20:17:27.421882 3267 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b3497dacf36ec8b56010ca6778e1a849107e29c7f04852168e3e16e0ca2c7348\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:17:27.421949 kubelet[3267]: E0213 20:17:27.421935 3267 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b3497dacf36ec8b56010ca6778e1a849107e29c7f04852168e3e16e0ca2c7348\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-ff9d88786-8wrbb" Feb 13 20:17:27.422058 containerd[1795]: time="2025-02-13T20:17:27.421930809Z" level=error msg="Failed to destroy network for sandbox \"7ed1a43b73d432d232df1df54c79d9006e1b5d16a86f107e55387abae9ed7d7f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:17:27.422058 containerd[1795]: time="2025-02-13T20:17:27.421997133Z" level=error msg="encountered an error cleaning up failed sandbox \"fd52d3f97e9a1405278690dbfe9773e4c5bf6ddadf750c07b522c7e0964feb47\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:17:27.422058 containerd[1795]: time="2025-02-13T20:17:27.422027194Z" level=error msg="encountered an error cleaning up failed sandbox \"9ad3372cb116d6054d70b1eaa52144318f392d424263acd32de291b731176c1b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:17:27.422146 kubelet[3267]: E0213 20:17:27.421953 3267 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b3497dacf36ec8b56010ca6778e1a849107e29c7f04852168e3e16e0ca2c7348\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-ff9d88786-8wrbb" Feb 13 20:17:27.422146 kubelet[3267]: E0213 20:17:27.421958 3267 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4101e018441a5f170b331eaff8faff2caa5b81435959ed6c493ff174cf8464eb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:17:27.422146 kubelet[3267]: E0213 20:17:27.421982 3267 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-ff9d88786-8wrbb_calico-apiserver(a01e5e0d-d13b-4c53-a211-8d274b915928)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-ff9d88786-8wrbb_calico-apiserver(a01e5e0d-d13b-4c53-a211-8d274b915928)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"b3497dacf36ec8b56010ca6778e1a849107e29c7f04852168e3e16e0ca2c7348\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-ff9d88786-8wrbb" podUID="a01e5e0d-d13b-4c53-a211-8d274b915928" Feb 13 20:17:27.422237 containerd[1795]: time="2025-02-13T20:17:27.422059381Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-frmtf,Uid:542fc81f-a219-47bd-b6bf-841062c32db8,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9ad3372cb116d6054d70b1eaa52144318f392d424263acd32de291b731176c1b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:17:27.422237 containerd[1795]: time="2025-02-13T20:17:27.422076635Z" level=error msg="encountered an error cleaning up failed sandbox \"7ed1a43b73d432d232df1df54c79d9006e1b5d16a86f107e55387abae9ed7d7f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:17:27.422237 containerd[1795]: time="2025-02-13T20:17:27.422029013Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-cnhpl,Uid:f56e4292-4136-4f41-a2d5-245b627504ec,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"fd52d3f97e9a1405278690dbfe9773e4c5bf6ddadf750c07b522c7e0964feb47\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:17:27.422237 containerd[1795]: time="2025-02-13T20:17:27.422096840Z" level=error msg="RunPodSandbox 
for &PodSandboxMetadata{Name:calico-kube-controllers-55886c68fb-v8l5j,Uid:853990b6-ca4d-4293-b605-05f073fd0f8e,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7ed1a43b73d432d232df1df54c79d9006e1b5d16a86f107e55387abae9ed7d7f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:17:27.422324 kubelet[3267]: E0213 20:17:27.421989 3267 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4101e018441a5f170b331eaff8faff2caa5b81435959ed6c493ff174cf8464eb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-kcvvn" Feb 13 20:17:27.422324 kubelet[3267]: E0213 20:17:27.422008 3267 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4101e018441a5f170b331eaff8faff2caa5b81435959ed6c493ff174cf8464eb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-kcvvn" Feb 13 20:17:27.422324 kubelet[3267]: E0213 20:17:27.422037 3267 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-kcvvn_calico-system(64b01e33-b95d-49cd-8822-d1aaf4457527)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-kcvvn_calico-system(64b01e33-b95d-49cd-8822-d1aaf4457527)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4101e018441a5f170b331eaff8faff2caa5b81435959ed6c493ff174cf8464eb\\\": plugin type=\\\"calico\\\" failed (add): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-kcvvn" podUID="64b01e33-b95d-49cd-8822-d1aaf4457527" Feb 13 20:17:27.422393 kubelet[3267]: E0213 20:17:27.422145 3267 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9ad3372cb116d6054d70b1eaa52144318f392d424263acd32de291b731176c1b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:17:27.422393 kubelet[3267]: E0213 20:17:27.422154 3267 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fd52d3f97e9a1405278690dbfe9773e4c5bf6ddadf750c07b522c7e0964feb47\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:17:27.422393 kubelet[3267]: E0213 20:17:27.422161 3267 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7ed1a43b73d432d232df1df54c79d9006e1b5d16a86f107e55387abae9ed7d7f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:17:27.422393 kubelet[3267]: E0213 20:17:27.422174 3267 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fd52d3f97e9a1405278690dbfe9773e4c5bf6ddadf750c07b522c7e0964feb47\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-cnhpl" Feb 13 20:17:27.422478 kubelet[3267]: E0213 20:17:27.422185 3267 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7ed1a43b73d432d232df1df54c79d9006e1b5d16a86f107e55387abae9ed7d7f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-55886c68fb-v8l5j" Feb 13 20:17:27.422478 kubelet[3267]: E0213 20:17:27.422166 3267 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9ad3372cb116d6054d70b1eaa52144318f392d424263acd32de291b731176c1b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-frmtf" Feb 13 20:17:27.422478 kubelet[3267]: E0213 20:17:27.422201 3267 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7ed1a43b73d432d232df1df54c79d9006e1b5d16a86f107e55387abae9ed7d7f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-55886c68fb-v8l5j" Feb 13 20:17:27.422478 kubelet[3267]: E0213 20:17:27.422201 3267 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9ad3372cb116d6054d70b1eaa52144318f392d424263acd32de291b731176c1b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container 
is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-frmtf" Feb 13 20:17:27.422551 kubelet[3267]: E0213 20:17:27.422224 3267 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-55886c68fb-v8l5j_calico-system(853990b6-ca4d-4293-b605-05f073fd0f8e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-55886c68fb-v8l5j_calico-system(853990b6-ca4d-4293-b605-05f073fd0f8e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7ed1a43b73d432d232df1df54c79d9006e1b5d16a86f107e55387abae9ed7d7f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-55886c68fb-v8l5j" podUID="853990b6-ca4d-4293-b605-05f073fd0f8e" Feb 13 20:17:27.422551 kubelet[3267]: E0213 20:17:27.422189 3267 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fd52d3f97e9a1405278690dbfe9773e4c5bf6ddadf750c07b522c7e0964feb47\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-cnhpl" Feb 13 20:17:27.422551 kubelet[3267]: E0213 20:17:27.422246 3267 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-frmtf_kube-system(542fc81f-a219-47bd-b6bf-841062c32db8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-frmtf_kube-system(542fc81f-a219-47bd-b6bf-841062c32db8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9ad3372cb116d6054d70b1eaa52144318f392d424263acd32de291b731176c1b\\\": plugin type=\\\"calico\\\" failed (add): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-frmtf" podUID="542fc81f-a219-47bd-b6bf-841062c32db8" Feb 13 20:17:27.422628 kubelet[3267]: E0213 20:17:27.422251 3267 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-cnhpl_kube-system(f56e4292-4136-4f41-a2d5-245b627504ec)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-cnhpl_kube-system(f56e4292-4136-4f41-a2d5-245b627504ec)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fd52d3f97e9a1405278690dbfe9773e4c5bf6ddadf750c07b522c7e0964feb47\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-cnhpl" podUID="f56e4292-4136-4f41-a2d5-245b627504ec" Feb 13 20:17:27.422657 containerd[1795]: time="2025-02-13T20:17:27.422633582Z" level=error msg="Failed to destroy network for sandbox \"b4ce2a92348a0c26ae24d67b825325d18f8179c342c157b6519d8610f65f514b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:17:27.422781 containerd[1795]: time="2025-02-13T20:17:27.422768785Z" level=error msg="encountered an error cleaning up failed sandbox \"b4ce2a92348a0c26ae24d67b825325d18f8179c342c157b6519d8610f65f514b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:17:27.422803 containerd[1795]: time="2025-02-13T20:17:27.422792886Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-ff9d88786-v9j7t,Uid:36ab6480-30fc-4b9c-bca0-5bd3671f05dc,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b4ce2a92348a0c26ae24d67b825325d18f8179c342c157b6519d8610f65f514b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:17:27.422893 kubelet[3267]: E0213 20:17:27.422883 3267 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b4ce2a92348a0c26ae24d67b825325d18f8179c342c157b6519d8610f65f514b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:17:27.422921 kubelet[3267]: E0213 20:17:27.422898 3267 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b4ce2a92348a0c26ae24d67b825325d18f8179c342c157b6519d8610f65f514b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-ff9d88786-v9j7t" Feb 13 20:17:27.422921 kubelet[3267]: E0213 20:17:27.422907 3267 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b4ce2a92348a0c26ae24d67b825325d18f8179c342c157b6519d8610f65f514b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-ff9d88786-v9j7t" Feb 13 20:17:27.422960 kubelet[3267]: E0213 20:17:27.422920 3267 pod_workers.go:1298] "Error 
syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-ff9d88786-v9j7t_calico-apiserver(36ab6480-30fc-4b9c-bca0-5bd3671f05dc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-ff9d88786-v9j7t_calico-apiserver(36ab6480-30fc-4b9c-bca0-5bd3671f05dc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b4ce2a92348a0c26ae24d67b825325d18f8179c342c157b6519d8610f65f514b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-ff9d88786-v9j7t" podUID="36ab6480-30fc-4b9c-bca0-5bd3671f05dc" Feb 13 20:17:28.139193 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9ad3372cb116d6054d70b1eaa52144318f392d424263acd32de291b731176c1b-shm.mount: Deactivated successfully. Feb 13 20:17:28.139244 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7ed1a43b73d432d232df1df54c79d9006e1b5d16a86f107e55387abae9ed7d7f-shm.mount: Deactivated successfully. Feb 13 20:17:28.139278 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b3497dacf36ec8b56010ca6778e1a849107e29c7f04852168e3e16e0ca2c7348-shm.mount: Deactivated successfully. Feb 13 20:17:28.139311 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-fd52d3f97e9a1405278690dbfe9773e4c5bf6ddadf750c07b522c7e0964feb47-shm.mount: Deactivated successfully. Feb 13 20:17:28.139341 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b4ce2a92348a0c26ae24d67b825325d18f8179c342c157b6519d8610f65f514b-shm.mount: Deactivated successfully. Feb 13 20:17:28.139371 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4101e018441a5f170b331eaff8faff2caa5b81435959ed6c493ff174cf8464eb-shm.mount: Deactivated successfully. 
Feb 13 20:17:28.151360 kubelet[3267]: I0213 20:17:28.151318 3267 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4101e018441a5f170b331eaff8faff2caa5b81435959ed6c493ff174cf8464eb" Feb 13 20:17:28.151755 containerd[1795]: time="2025-02-13T20:17:28.151695857Z" level=info msg="StopPodSandbox for \"4101e018441a5f170b331eaff8faff2caa5b81435959ed6c493ff174cf8464eb\"" Feb 13 20:17:28.151891 containerd[1795]: time="2025-02-13T20:17:28.151824504Z" level=info msg="Ensure that sandbox 4101e018441a5f170b331eaff8faff2caa5b81435959ed6c493ff174cf8464eb in task-service has been cleanup successfully" Feb 13 20:17:28.151924 containerd[1795]: time="2025-02-13T20:17:28.151913167Z" level=info msg="TearDown network for sandbox \"4101e018441a5f170b331eaff8faff2caa5b81435959ed6c493ff174cf8464eb\" successfully" Feb 13 20:17:28.151924 containerd[1795]: time="2025-02-13T20:17:28.151921995Z" level=info msg="StopPodSandbox for \"4101e018441a5f170b331eaff8faff2caa5b81435959ed6c493ff174cf8464eb\" returns successfully" Feb 13 20:17:28.152135 containerd[1795]: time="2025-02-13T20:17:28.152109016Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kcvvn,Uid:64b01e33-b95d-49cd-8822-d1aaf4457527,Namespace:calico-system,Attempt:1,}" Feb 13 20:17:28.152551 kubelet[3267]: I0213 20:17:28.152541 3267 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9ad3372cb116d6054d70b1eaa52144318f392d424263acd32de291b731176c1b" Feb 13 20:17:28.152673 containerd[1795]: time="2025-02-13T20:17:28.152662414Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Feb 13 20:17:28.152807 containerd[1795]: time="2025-02-13T20:17:28.152794816Z" level=info msg="StopPodSandbox for \"9ad3372cb116d6054d70b1eaa52144318f392d424263acd32de291b731176c1b\"" Feb 13 20:17:28.152915 containerd[1795]: time="2025-02-13T20:17:28.152900247Z" level=info msg="Ensure that sandbox 
9ad3372cb116d6054d70b1eaa52144318f392d424263acd32de291b731176c1b in task-service has been cleanup successfully" Feb 13 20:17:28.152947 kubelet[3267]: I0213 20:17:28.152931 3267 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b3497dacf36ec8b56010ca6778e1a849107e29c7f04852168e3e16e0ca2c7348" Feb 13 20:17:28.153008 containerd[1795]: time="2025-02-13T20:17:28.152998851Z" level=info msg="TearDown network for sandbox \"9ad3372cb116d6054d70b1eaa52144318f392d424263acd32de291b731176c1b\" successfully" Feb 13 20:17:28.153008 containerd[1795]: time="2025-02-13T20:17:28.153007159Z" level=info msg="StopPodSandbox for \"9ad3372cb116d6054d70b1eaa52144318f392d424263acd32de291b731176c1b\" returns successfully" Feb 13 20:17:28.153105 containerd[1795]: time="2025-02-13T20:17:28.153097095Z" level=info msg="StopPodSandbox for \"b3497dacf36ec8b56010ca6778e1a849107e29c7f04852168e3e16e0ca2c7348\"" Feb 13 20:17:28.153183 containerd[1795]: time="2025-02-13T20:17:28.153174948Z" level=info msg="Ensure that sandbox b3497dacf36ec8b56010ca6778e1a849107e29c7f04852168e3e16e0ca2c7348 in task-service has been cleanup successfully" Feb 13 20:17:28.153214 containerd[1795]: time="2025-02-13T20:17:28.153203834Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-frmtf,Uid:542fc81f-a219-47bd-b6bf-841062c32db8,Namespace:kube-system,Attempt:1,}" Feb 13 20:17:28.153276 containerd[1795]: time="2025-02-13T20:17:28.153265976Z" level=info msg="TearDown network for sandbox \"b3497dacf36ec8b56010ca6778e1a849107e29c7f04852168e3e16e0ca2c7348\" successfully" Feb 13 20:17:28.153298 containerd[1795]: time="2025-02-13T20:17:28.153276495Z" level=info msg="StopPodSandbox for \"b3497dacf36ec8b56010ca6778e1a849107e29c7f04852168e3e16e0ca2c7348\" returns successfully" Feb 13 20:17:28.153403 systemd[1]: run-netns-cni\x2d6c0d8cc4\x2da769\x2d473b\x2d8856\x2d3a368f4e2a44.mount: Deactivated successfully. 
Feb 13 20:17:28.153465 containerd[1795]: time="2025-02-13T20:17:28.153427006Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-ff9d88786-8wrbb,Uid:a01e5e0d-d13b-4c53-a211-8d274b915928,Namespace:calico-apiserver,Attempt:1,}" Feb 13 20:17:28.153488 kubelet[3267]: I0213 20:17:28.153460 3267 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fd52d3f97e9a1405278690dbfe9773e4c5bf6ddadf750c07b522c7e0964feb47" Feb 13 20:17:28.153660 containerd[1795]: time="2025-02-13T20:17:28.153648469Z" level=info msg="StopPodSandbox for \"fd52d3f97e9a1405278690dbfe9773e4c5bf6ddadf750c07b522c7e0964feb47\"" Feb 13 20:17:28.153815 containerd[1795]: time="2025-02-13T20:17:28.153803102Z" level=info msg="Ensure that sandbox fd52d3f97e9a1405278690dbfe9773e4c5bf6ddadf750c07b522c7e0964feb47 in task-service has been cleanup successfully" Feb 13 20:17:28.154189 containerd[1795]: time="2025-02-13T20:17:28.153913497Z" level=info msg="TearDown network for sandbox \"fd52d3f97e9a1405278690dbfe9773e4c5bf6ddadf750c07b522c7e0964feb47\" successfully" Feb 13 20:17:28.154189 containerd[1795]: time="2025-02-13T20:17:28.153923967Z" level=info msg="StopPodSandbox for \"fd52d3f97e9a1405278690dbfe9773e4c5bf6ddadf750c07b522c7e0964feb47\" returns successfully" Feb 13 20:17:28.154189 containerd[1795]: time="2025-02-13T20:17:28.154121732Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-cnhpl,Uid:f56e4292-4136-4f41-a2d5-245b627504ec,Namespace:kube-system,Attempt:1,}" Feb 13 20:17:28.154189 containerd[1795]: time="2025-02-13T20:17:28.154145132Z" level=info msg="StopPodSandbox for \"b4ce2a92348a0c26ae24d67b825325d18f8179c342c157b6519d8610f65f514b\"" Feb 13 20:17:28.154321 kubelet[3267]: I0213 20:17:28.153870 3267 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b4ce2a92348a0c26ae24d67b825325d18f8179c342c157b6519d8610f65f514b" Feb 13 20:17:28.154352 containerd[1795]: 
time="2025-02-13T20:17:28.154238701Z" level=info msg="Ensure that sandbox b4ce2a92348a0c26ae24d67b825325d18f8179c342c157b6519d8610f65f514b in task-service has been cleanup successfully" Feb 13 20:17:28.154352 containerd[1795]: time="2025-02-13T20:17:28.154316465Z" level=info msg="TearDown network for sandbox \"b4ce2a92348a0c26ae24d67b825325d18f8179c342c157b6519d8610f65f514b\" successfully" Feb 13 20:17:28.154352 containerd[1795]: time="2025-02-13T20:17:28.154324412Z" level=info msg="StopPodSandbox for \"b4ce2a92348a0c26ae24d67b825325d18f8179c342c157b6519d8610f65f514b\" returns successfully" Feb 13 20:17:28.154424 kubelet[3267]: I0213 20:17:28.154336 3267 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7ed1a43b73d432d232df1df54c79d9006e1b5d16a86f107e55387abae9ed7d7f" Feb 13 20:17:28.154535 containerd[1795]: time="2025-02-13T20:17:28.154501319Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-ff9d88786-v9j7t,Uid:36ab6480-30fc-4b9c-bca0-5bd3671f05dc,Namespace:calico-apiserver,Attempt:1,}" Feb 13 20:17:28.154535 containerd[1795]: time="2025-02-13T20:17:28.154529375Z" level=info msg="StopPodSandbox for \"7ed1a43b73d432d232df1df54c79d9006e1b5d16a86f107e55387abae9ed7d7f\"" Feb 13 20:17:28.154626 containerd[1795]: time="2025-02-13T20:17:28.154617006Z" level=info msg="Ensure that sandbox 7ed1a43b73d432d232df1df54c79d9006e1b5d16a86f107e55387abae9ed7d7f in task-service has been cleanup successfully" Feb 13 20:17:28.154700 containerd[1795]: time="2025-02-13T20:17:28.154691562Z" level=info msg="TearDown network for sandbox \"7ed1a43b73d432d232df1df54c79d9006e1b5d16a86f107e55387abae9ed7d7f\" successfully" Feb 13 20:17:28.154700 containerd[1795]: time="2025-02-13T20:17:28.154699012Z" level=info msg="StopPodSandbox for \"7ed1a43b73d432d232df1df54c79d9006e1b5d16a86f107e55387abae9ed7d7f\" returns successfully" Feb 13 20:17:28.154871 containerd[1795]: time="2025-02-13T20:17:28.154860362Z" level=info msg="RunPodSandbox 
for &PodSandboxMetadata{Name:calico-kube-controllers-55886c68fb-v8l5j,Uid:853990b6-ca4d-4293-b605-05f073fd0f8e,Namespace:calico-system,Attempt:1,}" Feb 13 20:17:28.155237 systemd[1]: run-netns-cni\x2d4e912712\x2dee1d\x2d6556\x2d7743\x2dba97b9c28d23.mount: Deactivated successfully. Feb 13 20:17:28.155295 systemd[1]: run-netns-cni\x2d66088655\x2d9b6d\x2d1aec\x2dc926\x2dc9020ad5b85e.mount: Deactivated successfully. Feb 13 20:17:28.155329 systemd[1]: run-netns-cni\x2d65480429\x2d4819\x2df0c0\x2ddb33\x2d4bebe6ede616.mount: Deactivated successfully. Feb 13 20:17:28.157607 systemd[1]: run-netns-cni\x2d97387740\x2da248\x2d82fd\x2d207e\x2dc6781609b7c7.mount: Deactivated successfully. Feb 13 20:17:28.157654 systemd[1]: run-netns-cni\x2d263dbbab\x2df014\x2d34e7\x2d7241\x2dd7ed8123b51c.mount: Deactivated successfully. Feb 13 20:17:28.194787 containerd[1795]: time="2025-02-13T20:17:28.194747405Z" level=error msg="Failed to destroy network for sandbox \"16ed88c7929a4c6125d9006188b80e4edc5622a36d52f3e519a53143f17d7ef7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:17:28.195050 containerd[1795]: time="2025-02-13T20:17:28.194995681Z" level=error msg="Failed to destroy network for sandbox \"d1b03cea096b09213f490a993ed65f8af74b79974ee36e7254a5f60edefd9c53\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:17:28.195125 containerd[1795]: time="2025-02-13T20:17:28.195109892Z" level=error msg="encountered an error cleaning up failed sandbox \"16ed88c7929a4c6125d9006188b80e4edc5622a36d52f3e519a53143f17d7ef7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container 
is running and has mounted /var/lib/calico/" Feb 13 20:17:28.195203 containerd[1795]: time="2025-02-13T20:17:28.195188850Z" level=error msg="encountered an error cleaning up failed sandbox \"d1b03cea096b09213f490a993ed65f8af74b79974ee36e7254a5f60edefd9c53\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:17:28.195269 containerd[1795]: time="2025-02-13T20:17:28.195255176Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kcvvn,Uid:64b01e33-b95d-49cd-8822-d1aaf4457527,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"d1b03cea096b09213f490a993ed65f8af74b79974ee36e7254a5f60edefd9c53\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:17:28.195310 containerd[1795]: time="2025-02-13T20:17:28.195195312Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-ff9d88786-8wrbb,Uid:a01e5e0d-d13b-4c53-a211-8d274b915928,Namespace:calico-apiserver,Attempt:1,} failed, error" error="failed to setup network for sandbox \"16ed88c7929a4c6125d9006188b80e4edc5622a36d52f3e519a53143f17d7ef7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:17:28.195411 kubelet[3267]: E0213 20:17:28.195391 3267 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"16ed88c7929a4c6125d9006188b80e4edc5622a36d52f3e519a53143f17d7ef7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" Feb 13 20:17:28.195456 kubelet[3267]: E0213 20:17:28.195437 3267 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"16ed88c7929a4c6125d9006188b80e4edc5622a36d52f3e519a53143f17d7ef7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-ff9d88786-8wrbb" Feb 13 20:17:28.195500 kubelet[3267]: E0213 20:17:28.195466 3267 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"16ed88c7929a4c6125d9006188b80e4edc5622a36d52f3e519a53143f17d7ef7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-ff9d88786-8wrbb" Feb 13 20:17:28.195500 kubelet[3267]: E0213 20:17:28.195392 3267 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d1b03cea096b09213f490a993ed65f8af74b79974ee36e7254a5f60edefd9c53\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:17:28.195543 containerd[1795]: time="2025-02-13T20:17:28.195443245Z" level=error msg="Failed to destroy network for sandbox \"e924f016fb6c20430a67bbab9156e89c69d4fea27ab974c6c2a114f4e60ddfaf\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:17:28.195563 kubelet[3267]: E0213 20:17:28.195500 3267 kuberuntime_sandbox.go:72] "Failed to 
create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d1b03cea096b09213f490a993ed65f8af74b79974ee36e7254a5f60edefd9c53\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-kcvvn" Feb 13 20:17:28.195563 kubelet[3267]: E0213 20:17:28.195505 3267 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-ff9d88786-8wrbb_calico-apiserver(a01e5e0d-d13b-4c53-a211-8d274b915928)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-ff9d88786-8wrbb_calico-apiserver(a01e5e0d-d13b-4c53-a211-8d274b915928)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"16ed88c7929a4c6125d9006188b80e4edc5622a36d52f3e519a53143f17d7ef7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-ff9d88786-8wrbb" podUID="a01e5e0d-d13b-4c53-a211-8d274b915928" Feb 13 20:17:28.195563 kubelet[3267]: E0213 20:17:28.195519 3267 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d1b03cea096b09213f490a993ed65f8af74b79974ee36e7254a5f60edefd9c53\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-kcvvn" Feb 13 20:17:28.195653 kubelet[3267]: E0213 20:17:28.195547 3267 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-kcvvn_calico-system(64b01e33-b95d-49cd-8822-d1aaf4457527)\" with CreatePodSandboxError: \"Failed to create 
sandbox for pod \\\"csi-node-driver-kcvvn_calico-system(64b01e33-b95d-49cd-8822-d1aaf4457527)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d1b03cea096b09213f490a993ed65f8af74b79974ee36e7254a5f60edefd9c53\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-kcvvn" podUID="64b01e33-b95d-49cd-8822-d1aaf4457527" Feb 13 20:17:28.195691 containerd[1795]: time="2025-02-13T20:17:28.195635980Z" level=error msg="encountered an error cleaning up failed sandbox \"e924f016fb6c20430a67bbab9156e89c69d4fea27ab974c6c2a114f4e60ddfaf\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:17:28.195691 containerd[1795]: time="2025-02-13T20:17:28.195667568Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-cnhpl,Uid:f56e4292-4136-4f41-a2d5-245b627504ec,Namespace:kube-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"e924f016fb6c20430a67bbab9156e89c69d4fea27ab974c6c2a114f4e60ddfaf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:17:28.195758 kubelet[3267]: E0213 20:17:28.195743 3267 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e924f016fb6c20430a67bbab9156e89c69d4fea27ab974c6c2a114f4e60ddfaf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:17:28.195779 kubelet[3267]: E0213 
20:17:28.195766 3267 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e924f016fb6c20430a67bbab9156e89c69d4fea27ab974c6c2a114f4e60ddfaf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-cnhpl" Feb 13 20:17:28.195796 kubelet[3267]: E0213 20:17:28.195776 3267 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e924f016fb6c20430a67bbab9156e89c69d4fea27ab974c6c2a114f4e60ddfaf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-cnhpl" Feb 13 20:17:28.195814 kubelet[3267]: E0213 20:17:28.195795 3267 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-cnhpl_kube-system(f56e4292-4136-4f41-a2d5-245b627504ec)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-cnhpl_kube-system(f56e4292-4136-4f41-a2d5-245b627504ec)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e924f016fb6c20430a67bbab9156e89c69d4fea27ab974c6c2a114f4e60ddfaf\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-cnhpl" podUID="f56e4292-4136-4f41-a2d5-245b627504ec" Feb 13 20:17:28.196077 containerd[1795]: time="2025-02-13T20:17:28.196065575Z" level=error msg="Failed to destroy network for sandbox \"4d57310ec40dcb370935e6e9bdfcb6c64a98c34f917a4d8dbc684a1876240bd1\"" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:17:28.196205 containerd[1795]: time="2025-02-13T20:17:28.196194531Z" level=error msg="encountered an error cleaning up failed sandbox \"4d57310ec40dcb370935e6e9bdfcb6c64a98c34f917a4d8dbc684a1876240bd1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:17:28.196224 containerd[1795]: time="2025-02-13T20:17:28.196216564Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-frmtf,Uid:542fc81f-a219-47bd-b6bf-841062c32db8,Namespace:kube-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"4d57310ec40dcb370935e6e9bdfcb6c64a98c34f917a4d8dbc684a1876240bd1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:17:28.196295 kubelet[3267]: E0213 20:17:28.196283 3267 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4d57310ec40dcb370935e6e9bdfcb6c64a98c34f917a4d8dbc684a1876240bd1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:17:28.196314 kubelet[3267]: E0213 20:17:28.196304 3267 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4d57310ec40dcb370935e6e9bdfcb6c64a98c34f917a4d8dbc684a1876240bd1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-frmtf" Feb 13 20:17:28.196334 kubelet[3267]: E0213 20:17:28.196316 3267 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4d57310ec40dcb370935e6e9bdfcb6c64a98c34f917a4d8dbc684a1876240bd1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-frmtf" Feb 13 20:17:28.196356 kubelet[3267]: E0213 20:17:28.196333 3267 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-frmtf_kube-system(542fc81f-a219-47bd-b6bf-841062c32db8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-frmtf_kube-system(542fc81f-a219-47bd-b6bf-841062c32db8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4d57310ec40dcb370935e6e9bdfcb6c64a98c34f917a4d8dbc684a1876240bd1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-frmtf" podUID="542fc81f-a219-47bd-b6bf-841062c32db8" Feb 13 20:17:28.197806 containerd[1795]: time="2025-02-13T20:17:28.197759460Z" level=error msg="Failed to destroy network for sandbox \"fedba8a051bccde5aefce8bb80d817a2869be0c7484736372697428c813b98ff\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:17:28.197954 containerd[1795]: time="2025-02-13T20:17:28.197915026Z" level=error msg="encountered an error cleaning up failed sandbox \"fedba8a051bccde5aefce8bb80d817a2869be0c7484736372697428c813b98ff\", marking sandbox state as 
SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:17:28.197954 containerd[1795]: time="2025-02-13T20:17:28.197938927Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-ff9d88786-v9j7t,Uid:36ab6480-30fc-4b9c-bca0-5bd3671f05dc,Namespace:calico-apiserver,Attempt:1,} failed, error" error="failed to setup network for sandbox \"fedba8a051bccde5aefce8bb80d817a2869be0c7484736372697428c813b98ff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:17:28.198043 kubelet[3267]: E0213 20:17:28.198030 3267 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fedba8a051bccde5aefce8bb80d817a2869be0c7484736372697428c813b98ff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:17:28.198070 kubelet[3267]: E0213 20:17:28.198052 3267 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fedba8a051bccde5aefce8bb80d817a2869be0c7484736372697428c813b98ff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-ff9d88786-v9j7t" Feb 13 20:17:28.198089 kubelet[3267]: E0213 20:17:28.198063 3267 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fedba8a051bccde5aefce8bb80d817a2869be0c7484736372697428c813b98ff\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-ff9d88786-v9j7t" Feb 13 20:17:28.198108 kubelet[3267]: E0213 20:17:28.198091 3267 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-ff9d88786-v9j7t_calico-apiserver(36ab6480-30fc-4b9c-bca0-5bd3671f05dc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-ff9d88786-v9j7t_calico-apiserver(36ab6480-30fc-4b9c-bca0-5bd3671f05dc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fedba8a051bccde5aefce8bb80d817a2869be0c7484736372697428c813b98ff\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-ff9d88786-v9j7t" podUID="36ab6480-30fc-4b9c-bca0-5bd3671f05dc" Feb 13 20:17:28.198660 containerd[1795]: time="2025-02-13T20:17:28.198610530Z" level=error msg="Failed to destroy network for sandbox \"21e4bf6e21241b54d8bfddcb494e2519a92b48eb90676fa95896db0ca262525c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:17:28.198787 containerd[1795]: time="2025-02-13T20:17:28.198750891Z" level=error msg="encountered an error cleaning up failed sandbox \"21e4bf6e21241b54d8bfddcb494e2519a92b48eb90676fa95896db0ca262525c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:17:28.198787 containerd[1795]: time="2025-02-13T20:17:28.198774697Z" level=error 
msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-55886c68fb-v8l5j,Uid:853990b6-ca4d-4293-b605-05f073fd0f8e,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"21e4bf6e21241b54d8bfddcb494e2519a92b48eb90676fa95896db0ca262525c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:17:28.198910 kubelet[3267]: E0213 20:17:28.198865 3267 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"21e4bf6e21241b54d8bfddcb494e2519a92b48eb90676fa95896db0ca262525c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:17:28.198910 kubelet[3267]: E0213 20:17:28.198884 3267 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"21e4bf6e21241b54d8bfddcb494e2519a92b48eb90676fa95896db0ca262525c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-55886c68fb-v8l5j" Feb 13 20:17:28.198910 kubelet[3267]: E0213 20:17:28.198895 3267 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"21e4bf6e21241b54d8bfddcb494e2519a92b48eb90676fa95896db0ca262525c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-55886c68fb-v8l5j" Feb 13 20:17:28.198972 kubelet[3267]: E0213 
20:17:28.198911 3267 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-55886c68fb-v8l5j_calico-system(853990b6-ca4d-4293-b605-05f073fd0f8e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-55886c68fb-v8l5j_calico-system(853990b6-ca4d-4293-b605-05f073fd0f8e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"21e4bf6e21241b54d8bfddcb494e2519a92b48eb90676fa95896db0ca262525c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-55886c68fb-v8l5j" podUID="853990b6-ca4d-4293-b605-05f073fd0f8e" Feb 13 20:17:29.136842 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d1b03cea096b09213f490a993ed65f8af74b79974ee36e7254a5f60edefd9c53-shm.mount: Deactivated successfully. Feb 13 20:17:29.157277 kubelet[3267]: I0213 20:17:29.157250 3267 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4d57310ec40dcb370935e6e9bdfcb6c64a98c34f917a4d8dbc684a1876240bd1" Feb 13 20:17:29.157849 containerd[1795]: time="2025-02-13T20:17:29.157807220Z" level=info msg="StopPodSandbox for \"4d57310ec40dcb370935e6e9bdfcb6c64a98c34f917a4d8dbc684a1876240bd1\"" Feb 13 20:17:29.158169 containerd[1795]: time="2025-02-13T20:17:29.158095065Z" level=info msg="Ensure that sandbox 4d57310ec40dcb370935e6e9bdfcb6c64a98c34f917a4d8dbc684a1876240bd1 in task-service has been cleanup successfully" Feb 13 20:17:29.158362 containerd[1795]: time="2025-02-13T20:17:29.158332599Z" level=info msg="TearDown network for sandbox \"4d57310ec40dcb370935e6e9bdfcb6c64a98c34f917a4d8dbc684a1876240bd1\" successfully" Feb 13 20:17:29.158362 containerd[1795]: time="2025-02-13T20:17:29.158358697Z" level=info msg="StopPodSandbox for 
\"4d57310ec40dcb370935e6e9bdfcb6c64a98c34f917a4d8dbc684a1876240bd1\" returns successfully" Feb 13 20:17:29.158651 containerd[1795]: time="2025-02-13T20:17:29.158623479Z" level=info msg="StopPodSandbox for \"9ad3372cb116d6054d70b1eaa52144318f392d424263acd32de291b731176c1b\"" Feb 13 20:17:29.158782 containerd[1795]: time="2025-02-13T20:17:29.158727253Z" level=info msg="TearDown network for sandbox \"9ad3372cb116d6054d70b1eaa52144318f392d424263acd32de291b731176c1b\" successfully" Feb 13 20:17:29.158888 containerd[1795]: time="2025-02-13T20:17:29.158780989Z" level=info msg="StopPodSandbox for \"9ad3372cb116d6054d70b1eaa52144318f392d424263acd32de291b731176c1b\" returns successfully" Feb 13 20:17:29.158946 kubelet[3267]: I0213 20:17:29.158806 3267 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="16ed88c7929a4c6125d9006188b80e4edc5622a36d52f3e519a53143f17d7ef7" Feb 13 20:17:29.159280 containerd[1795]: time="2025-02-13T20:17:29.159248690Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-frmtf,Uid:542fc81f-a219-47bd-b6bf-841062c32db8,Namespace:kube-system,Attempt:2,}" Feb 13 20:17:29.159396 containerd[1795]: time="2025-02-13T20:17:29.159369384Z" level=info msg="StopPodSandbox for \"16ed88c7929a4c6125d9006188b80e4edc5622a36d52f3e519a53143f17d7ef7\"" Feb 13 20:17:29.159630 containerd[1795]: time="2025-02-13T20:17:29.159603656Z" level=info msg="Ensure that sandbox 16ed88c7929a4c6125d9006188b80e4edc5622a36d52f3e519a53143f17d7ef7 in task-service has been cleanup successfully" Feb 13 20:17:29.159885 containerd[1795]: time="2025-02-13T20:17:29.159857303Z" level=info msg="TearDown network for sandbox \"16ed88c7929a4c6125d9006188b80e4edc5622a36d52f3e519a53143f17d7ef7\" successfully" Feb 13 20:17:29.159885 containerd[1795]: time="2025-02-13T20:17:29.159881112Z" level=info msg="StopPodSandbox for \"16ed88c7929a4c6125d9006188b80e4edc5622a36d52f3e519a53143f17d7ef7\" returns successfully" Feb 13 20:17:29.160096 
kubelet[3267]: I0213 20:17:29.160087 3267 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e924f016fb6c20430a67bbab9156e89c69d4fea27ab974c6c2a114f4e60ddfaf" Feb 13 20:17:29.160122 containerd[1795]: time="2025-02-13T20:17:29.160093828Z" level=info msg="StopPodSandbox for \"b3497dacf36ec8b56010ca6778e1a849107e29c7f04852168e3e16e0ca2c7348\"" Feb 13 20:17:29.160158 containerd[1795]: time="2025-02-13T20:17:29.160137668Z" level=info msg="TearDown network for sandbox \"b3497dacf36ec8b56010ca6778e1a849107e29c7f04852168e3e16e0ca2c7348\" successfully" Feb 13 20:17:29.160188 containerd[1795]: time="2025-02-13T20:17:29.160158169Z" level=info msg="StopPodSandbox for \"b3497dacf36ec8b56010ca6778e1a849107e29c7f04852168e3e16e0ca2c7348\" returns successfully" Feb 13 20:17:29.160306 containerd[1795]: time="2025-02-13T20:17:29.160293089Z" level=info msg="StopPodSandbox for \"e924f016fb6c20430a67bbab9156e89c69d4fea27ab974c6c2a114f4e60ddfaf\"" Feb 13 20:17:29.160349 containerd[1795]: time="2025-02-13T20:17:29.160339371Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-ff9d88786-8wrbb,Uid:a01e5e0d-d13b-4c53-a211-8d274b915928,Namespace:calico-apiserver,Attempt:2,}" Feb 13 20:17:29.160409 containerd[1795]: time="2025-02-13T20:17:29.160397442Z" level=info msg="Ensure that sandbox e924f016fb6c20430a67bbab9156e89c69d4fea27ab974c6c2a114f4e60ddfaf in task-service has been cleanup successfully" Feb 13 20:17:29.160493 containerd[1795]: time="2025-02-13T20:17:29.160482884Z" level=info msg="TearDown network for sandbox \"e924f016fb6c20430a67bbab9156e89c69d4fea27ab974c6c2a114f4e60ddfaf\" successfully" Feb 13 20:17:29.160493 containerd[1795]: time="2025-02-13T20:17:29.160491876Z" level=info msg="StopPodSandbox for \"e924f016fb6c20430a67bbab9156e89c69d4fea27ab974c6c2a114f4e60ddfaf\" returns successfully" Feb 13 20:17:29.160597 containerd[1795]: time="2025-02-13T20:17:29.160587345Z" level=info msg="StopPodSandbox for 
\"fd52d3f97e9a1405278690dbfe9773e4c5bf6ddadf750c07b522c7e0964feb47\"" Feb 13 20:17:29.160617 kubelet[3267]: I0213 20:17:29.160605 3267 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fedba8a051bccde5aefce8bb80d817a2869be0c7484736372697428c813b98ff" Feb 13 20:17:29.160650 containerd[1795]: time="2025-02-13T20:17:29.160629601Z" level=info msg="TearDown network for sandbox \"fd52d3f97e9a1405278690dbfe9773e4c5bf6ddadf750c07b522c7e0964feb47\" successfully" Feb 13 20:17:29.160675 containerd[1795]: time="2025-02-13T20:17:29.160650318Z" level=info msg="StopPodSandbox for \"fd52d3f97e9a1405278690dbfe9773e4c5bf6ddadf750c07b522c7e0964feb47\" returns successfully" Feb 13 20:17:29.160817 systemd[1]: run-netns-cni\x2d34b3ec3e\x2d2f97\x2d9d19\x2d6169\x2d324a61376616.mount: Deactivated successfully. Feb 13 20:17:29.160876 containerd[1795]: time="2025-02-13T20:17:29.160821634Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-cnhpl,Uid:f56e4292-4136-4f41-a2d5-245b627504ec,Namespace:kube-system,Attempt:2,}" Feb 13 20:17:29.160904 containerd[1795]: time="2025-02-13T20:17:29.160822819Z" level=info msg="StopPodSandbox for \"fedba8a051bccde5aefce8bb80d817a2869be0c7484736372697428c813b98ff\"" Feb 13 20:17:29.161010 containerd[1795]: time="2025-02-13T20:17:29.160999414Z" level=info msg="Ensure that sandbox fedba8a051bccde5aefce8bb80d817a2869be0c7484736372697428c813b98ff in task-service has been cleanup successfully" Feb 13 20:17:29.161099 containerd[1795]: time="2025-02-13T20:17:29.161086966Z" level=info msg="TearDown network for sandbox \"fedba8a051bccde5aefce8bb80d817a2869be0c7484736372697428c813b98ff\" successfully" Feb 13 20:17:29.161130 containerd[1795]: time="2025-02-13T20:17:29.161098530Z" level=info msg="StopPodSandbox for \"fedba8a051bccde5aefce8bb80d817a2869be0c7484736372697428c813b98ff\" returns successfully" Feb 13 20:17:29.161163 kubelet[3267]: I0213 20:17:29.161153 3267 pod_container_deletor.go:80] "Container 
not found in pod's containers" containerID="21e4bf6e21241b54d8bfddcb494e2519a92b48eb90676fa95896db0ca262525c" Feb 13 20:17:29.161245 containerd[1795]: time="2025-02-13T20:17:29.161233227Z" level=info msg="StopPodSandbox for \"b4ce2a92348a0c26ae24d67b825325d18f8179c342c157b6519d8610f65f514b\"" Feb 13 20:17:29.161294 containerd[1795]: time="2025-02-13T20:17:29.161284416Z" level=info msg="TearDown network for sandbox \"b4ce2a92348a0c26ae24d67b825325d18f8179c342c157b6519d8610f65f514b\" successfully" Feb 13 20:17:29.161326 containerd[1795]: time="2025-02-13T20:17:29.161293258Z" level=info msg="StopPodSandbox for \"b4ce2a92348a0c26ae24d67b825325d18f8179c342c157b6519d8610f65f514b\" returns successfully" Feb 13 20:17:29.161350 containerd[1795]: time="2025-02-13T20:17:29.161342931Z" level=info msg="StopPodSandbox for \"21e4bf6e21241b54d8bfddcb494e2519a92b48eb90676fa95896db0ca262525c\"" Feb 13 20:17:29.161432 containerd[1795]: time="2025-02-13T20:17:29.161421335Z" level=info msg="Ensure that sandbox 21e4bf6e21241b54d8bfddcb494e2519a92b48eb90676fa95896db0ca262525c in task-service has been cleanup successfully" Feb 13 20:17:29.161505 containerd[1795]: time="2025-02-13T20:17:29.161496013Z" level=info msg="TearDown network for sandbox \"21e4bf6e21241b54d8bfddcb494e2519a92b48eb90676fa95896db0ca262525c\" successfully" Feb 13 20:17:29.161505 containerd[1795]: time="2025-02-13T20:17:29.161504073Z" level=info msg="StopPodSandbox for \"21e4bf6e21241b54d8bfddcb494e2519a92b48eb90676fa95896db0ca262525c\" returns successfully" Feb 13 20:17:29.161565 containerd[1795]: time="2025-02-13T20:17:29.161499725Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-ff9d88786-v9j7t,Uid:36ab6480-30fc-4b9c-bca0-5bd3671f05dc,Namespace:calico-apiserver,Attempt:2,}" Feb 13 20:17:29.161698 containerd[1795]: time="2025-02-13T20:17:29.161688973Z" level=info msg="StopPodSandbox for \"7ed1a43b73d432d232df1df54c79d9006e1b5d16a86f107e55387abae9ed7d7f\"" Feb 13 20:17:29.161735 
containerd[1795]: time="2025-02-13T20:17:29.161725770Z" level=info msg="TearDown network for sandbox \"7ed1a43b73d432d232df1df54c79d9006e1b5d16a86f107e55387abae9ed7d7f\" successfully" Feb 13 20:17:29.161753 containerd[1795]: time="2025-02-13T20:17:29.161735702Z" level=info msg="StopPodSandbox for \"7ed1a43b73d432d232df1df54c79d9006e1b5d16a86f107e55387abae9ed7d7f\" returns successfully" Feb 13 20:17:29.161769 kubelet[3267]: I0213 20:17:29.161750 3267 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d1b03cea096b09213f490a993ed65f8af74b79974ee36e7254a5f60edefd9c53" Feb 13 20:17:29.161941 containerd[1795]: time="2025-02-13T20:17:29.161931060Z" level=info msg="StopPodSandbox for \"d1b03cea096b09213f490a993ed65f8af74b79974ee36e7254a5f60edefd9c53\"" Feb 13 20:17:29.161979 containerd[1795]: time="2025-02-13T20:17:29.161943976Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-55886c68fb-v8l5j,Uid:853990b6-ca4d-4293-b605-05f073fd0f8e,Namespace:calico-system,Attempt:2,}" Feb 13 20:17:29.162026 containerd[1795]: time="2025-02-13T20:17:29.162016726Z" level=info msg="Ensure that sandbox d1b03cea096b09213f490a993ed65f8af74b79974ee36e7254a5f60edefd9c53 in task-service has been cleanup successfully" Feb 13 20:17:29.162097 containerd[1795]: time="2025-02-13T20:17:29.162087583Z" level=info msg="TearDown network for sandbox \"d1b03cea096b09213f490a993ed65f8af74b79974ee36e7254a5f60edefd9c53\" successfully" Feb 13 20:17:29.162097 containerd[1795]: time="2025-02-13T20:17:29.162095614Z" level=info msg="StopPodSandbox for \"d1b03cea096b09213f490a993ed65f8af74b79974ee36e7254a5f60edefd9c53\" returns successfully" Feb 13 20:17:29.162238 containerd[1795]: time="2025-02-13T20:17:29.162227646Z" level=info msg="StopPodSandbox for \"4101e018441a5f170b331eaff8faff2caa5b81435959ed6c493ff174cf8464eb\"" Feb 13 20:17:29.162300 containerd[1795]: time="2025-02-13T20:17:29.162273494Z" level=info msg="TearDown network for sandbox 
\"4101e018441a5f170b331eaff8faff2caa5b81435959ed6c493ff174cf8464eb\" successfully" Feb 13 20:17:29.162324 containerd[1795]: time="2025-02-13T20:17:29.162300970Z" level=info msg="StopPodSandbox for \"4101e018441a5f170b331eaff8faff2caa5b81435959ed6c493ff174cf8464eb\" returns successfully" Feb 13 20:17:29.162318 systemd[1]: run-netns-cni\x2d21617ac8\x2d8141\x2df36c\x2d634d\x2dc7b317365638.mount: Deactivated successfully. Feb 13 20:17:29.162370 systemd[1]: run-netns-cni\x2dd57542bb\x2d36b4\x2dca23\x2d5789\x2dc76c36f65822.mount: Deactivated successfully. Feb 13 20:17:29.162404 systemd[1]: run-netns-cni\x2dcc9ec10f\x2d56f0\x2d3219\x2d1bc4\x2d99ade3b21b58.mount: Deactivated successfully. Feb 13 20:17:29.162467 containerd[1795]: time="2025-02-13T20:17:29.162457187Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kcvvn,Uid:64b01e33-b95d-49cd-8822-d1aaf4457527,Namespace:calico-system,Attempt:2,}" Feb 13 20:17:29.164103 systemd[1]: run-netns-cni\x2d17cfcc7d\x2d94a6\x2d1141\x2d370e\x2dc5955f922174.mount: Deactivated successfully. Feb 13 20:17:29.164168 systemd[1]: run-netns-cni\x2d2cd4dfe3\x2d789e\x2d4d01\x2db932\x2d4383f03e806a.mount: Deactivated successfully. 
Feb 13 20:17:29.199809 containerd[1795]: time="2025-02-13T20:17:29.199780877Z" level=error msg="Failed to destroy network for sandbox \"68b777db6f50b3f0cb4f0f1a6c46d29799a1daa82d37d65230cc719071ff323c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:17:29.200005 containerd[1795]: time="2025-02-13T20:17:29.199987763Z" level=error msg="encountered an error cleaning up failed sandbox \"68b777db6f50b3f0cb4f0f1a6c46d29799a1daa82d37d65230cc719071ff323c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:17:29.200046 containerd[1795]: time="2025-02-13T20:17:29.200031343Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-frmtf,Uid:542fc81f-a219-47bd-b6bf-841062c32db8,Namespace:kube-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"68b777db6f50b3f0cb4f0f1a6c46d29799a1daa82d37d65230cc719071ff323c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:17:29.200168 containerd[1795]: time="2025-02-13T20:17:29.200147336Z" level=error msg="Failed to destroy network for sandbox \"2d9c5629e507dde38691e79e7bbfe3f190891c2fac1fe7d5ba94df8f3cc59e20\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:17:29.200361 kubelet[3267]: E0213 20:17:29.200198 3267 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"68b777db6f50b3f0cb4f0f1a6c46d29799a1daa82d37d65230cc719071ff323c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:17:29.200361 kubelet[3267]: E0213 20:17:29.200258 3267 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"68b777db6f50b3f0cb4f0f1a6c46d29799a1daa82d37d65230cc719071ff323c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-frmtf" Feb 13 20:17:29.200440 kubelet[3267]: E0213 20:17:29.200358 3267 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"68b777db6f50b3f0cb4f0f1a6c46d29799a1daa82d37d65230cc719071ff323c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-frmtf" Feb 13 20:17:29.200440 kubelet[3267]: E0213 20:17:29.200396 3267 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-frmtf_kube-system(542fc81f-a219-47bd-b6bf-841062c32db8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-frmtf_kube-system(542fc81f-a219-47bd-b6bf-841062c32db8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"68b777db6f50b3f0cb4f0f1a6c46d29799a1daa82d37d65230cc719071ff323c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-frmtf" 
podUID="542fc81f-a219-47bd-b6bf-841062c32db8" Feb 13 20:17:29.200534 containerd[1795]: time="2025-02-13T20:17:29.200348662Z" level=error msg="encountered an error cleaning up failed sandbox \"2d9c5629e507dde38691e79e7bbfe3f190891c2fac1fe7d5ba94df8f3cc59e20\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:17:29.200534 containerd[1795]: time="2025-02-13T20:17:29.200381530Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-ff9d88786-8wrbb,Uid:a01e5e0d-d13b-4c53-a211-8d274b915928,Namespace:calico-apiserver,Attempt:2,} failed, error" error="failed to setup network for sandbox \"2d9c5629e507dde38691e79e7bbfe3f190891c2fac1fe7d5ba94df8f3cc59e20\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:17:29.200609 kubelet[3267]: E0213 20:17:29.200475 3267 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2d9c5629e507dde38691e79e7bbfe3f190891c2fac1fe7d5ba94df8f3cc59e20\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:17:29.200609 kubelet[3267]: E0213 20:17:29.200503 3267 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2d9c5629e507dde38691e79e7bbfe3f190891c2fac1fe7d5ba94df8f3cc59e20\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-ff9d88786-8wrbb" Feb 13 20:17:29.200609 kubelet[3267]: E0213 20:17:29.200523 3267 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2d9c5629e507dde38691e79e7bbfe3f190891c2fac1fe7d5ba94df8f3cc59e20\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-ff9d88786-8wrbb" Feb 13 20:17:29.200694 kubelet[3267]: E0213 20:17:29.200545 3267 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-ff9d88786-8wrbb_calico-apiserver(a01e5e0d-d13b-4c53-a211-8d274b915928)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-ff9d88786-8wrbb_calico-apiserver(a01e5e0d-d13b-4c53-a211-8d274b915928)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2d9c5629e507dde38691e79e7bbfe3f190891c2fac1fe7d5ba94df8f3cc59e20\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-ff9d88786-8wrbb" podUID="a01e5e0d-d13b-4c53-a211-8d274b915928" Feb 13 20:17:29.203061 containerd[1795]: time="2025-02-13T20:17:29.203032235Z" level=error msg="Failed to destroy network for sandbox \"53ad3a4fafcda47a238d07836dd93b5693aae139440c6f77a6d1eb3fbc605fac\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:17:29.203382 containerd[1795]: time="2025-02-13T20:17:29.203258116Z" level=error msg="encountered an error cleaning up failed sandbox 
\"53ad3a4fafcda47a238d07836dd93b5693aae139440c6f77a6d1eb3fbc605fac\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:17:29.203382 containerd[1795]: time="2025-02-13T20:17:29.203314922Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-ff9d88786-v9j7t,Uid:36ab6480-30fc-4b9c-bca0-5bd3671f05dc,Namespace:calico-apiserver,Attempt:2,} failed, error" error="failed to setup network for sandbox \"53ad3a4fafcda47a238d07836dd93b5693aae139440c6f77a6d1eb3fbc605fac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:17:29.203486 containerd[1795]: time="2025-02-13T20:17:29.203425276Z" level=error msg="Failed to destroy network for sandbox \"742662f7025f6bf31a0897647c434208ed695dd3a3c685264cb7223a8679270d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:17:29.203512 kubelet[3267]: E0213 20:17:29.203454 3267 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"53ad3a4fafcda47a238d07836dd93b5693aae139440c6f77a6d1eb3fbc605fac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:17:29.203512 kubelet[3267]: E0213 20:17:29.203484 3267 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"53ad3a4fafcda47a238d07836dd93b5693aae139440c6f77a6d1eb3fbc605fac\": plugin type=\"calico\" 
failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-ff9d88786-v9j7t" Feb 13 20:17:29.203512 kubelet[3267]: E0213 20:17:29.203498 3267 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"53ad3a4fafcda47a238d07836dd93b5693aae139440c6f77a6d1eb3fbc605fac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-ff9d88786-v9j7t" Feb 13 20:17:29.203578 kubelet[3267]: E0213 20:17:29.203521 3267 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-ff9d88786-v9j7t_calico-apiserver(36ab6480-30fc-4b9c-bca0-5bd3671f05dc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-ff9d88786-v9j7t_calico-apiserver(36ab6480-30fc-4b9c-bca0-5bd3671f05dc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"53ad3a4fafcda47a238d07836dd93b5693aae139440c6f77a6d1eb3fbc605fac\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-ff9d88786-v9j7t" podUID="36ab6480-30fc-4b9c-bca0-5bd3671f05dc" Feb 13 20:17:29.203683 containerd[1795]: time="2025-02-13T20:17:29.203665514Z" level=error msg="encountered an error cleaning up failed sandbox \"742662f7025f6bf31a0897647c434208ed695dd3a3c685264cb7223a8679270d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 
13 20:17:29.203717 containerd[1795]: time="2025-02-13T20:17:29.203703236Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kcvvn,Uid:64b01e33-b95d-49cd-8822-d1aaf4457527,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"742662f7025f6bf31a0897647c434208ed695dd3a3c685264cb7223a8679270d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:17:29.203804 kubelet[3267]: E0213 20:17:29.203789 3267 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"742662f7025f6bf31a0897647c434208ed695dd3a3c685264cb7223a8679270d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:17:29.203844 kubelet[3267]: E0213 20:17:29.203813 3267 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"742662f7025f6bf31a0897647c434208ed695dd3a3c685264cb7223a8679270d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-kcvvn" Feb 13 20:17:29.203844 kubelet[3267]: E0213 20:17:29.203823 3267 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"742662f7025f6bf31a0897647c434208ed695dd3a3c685264cb7223a8679270d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-kcvvn" Feb 13 20:17:29.203945 
kubelet[3267]: E0213 20:17:29.203840 3267 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-kcvvn_calico-system(64b01e33-b95d-49cd-8822-d1aaf4457527)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-kcvvn_calico-system(64b01e33-b95d-49cd-8822-d1aaf4457527)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"742662f7025f6bf31a0897647c434208ed695dd3a3c685264cb7223a8679270d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-kcvvn" podUID="64b01e33-b95d-49cd-8822-d1aaf4457527" Feb 13 20:17:29.203998 containerd[1795]: time="2025-02-13T20:17:29.203867226Z" level=error msg="Failed to destroy network for sandbox \"a6724c0edda64bd1462a71a9c7e44436716599495667250f4790209be0006577\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:17:29.204033 containerd[1795]: time="2025-02-13T20:17:29.204010994Z" level=error msg="encountered an error cleaning up failed sandbox \"a6724c0edda64bd1462a71a9c7e44436716599495667250f4790209be0006577\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:17:29.204065 containerd[1795]: time="2025-02-13T20:17:29.204034859Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-cnhpl,Uid:f56e4292-4136-4f41-a2d5-245b627504ec,Namespace:kube-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"a6724c0edda64bd1462a71a9c7e44436716599495667250f4790209be0006577\": plugin type=\"calico\" 
failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:17:29.204105 kubelet[3267]: E0213 20:17:29.204094 3267 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a6724c0edda64bd1462a71a9c7e44436716599495667250f4790209be0006577\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:17:29.204137 kubelet[3267]: E0213 20:17:29.204109 3267 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a6724c0edda64bd1462a71a9c7e44436716599495667250f4790209be0006577\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-cnhpl" Feb 13 20:17:29.204137 kubelet[3267]: E0213 20:17:29.204119 3267 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a6724c0edda64bd1462a71a9c7e44436716599495667250f4790209be0006577\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-cnhpl" Feb 13 20:17:29.204197 kubelet[3267]: E0213 20:17:29.204134 3267 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-cnhpl_kube-system(f56e4292-4136-4f41-a2d5-245b627504ec)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-cnhpl_kube-system(f56e4292-4136-4f41-a2d5-245b627504ec)\\\": rpc error: code = Unknown 
desc = failed to setup network for sandbox \\\"a6724c0edda64bd1462a71a9c7e44436716599495667250f4790209be0006577\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-cnhpl" podUID="f56e4292-4136-4f41-a2d5-245b627504ec" Feb 13 20:17:29.204369 containerd[1795]: time="2025-02-13T20:17:29.204353022Z" level=error msg="Failed to destroy network for sandbox \"d7f28ca0ecacdf622e04d94b3cfec4244085c08948f732c8def05b038973d2ca\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:17:29.204537 containerd[1795]: time="2025-02-13T20:17:29.204523250Z" level=error msg="encountered an error cleaning up failed sandbox \"d7f28ca0ecacdf622e04d94b3cfec4244085c08948f732c8def05b038973d2ca\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:17:29.204557 containerd[1795]: time="2025-02-13T20:17:29.204548773Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-55886c68fb-v8l5j,Uid:853990b6-ca4d-4293-b605-05f073fd0f8e,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"d7f28ca0ecacdf622e04d94b3cfec4244085c08948f732c8def05b038973d2ca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:17:29.204634 kubelet[3267]: E0213 20:17:29.204600 3267 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"d7f28ca0ecacdf622e04d94b3cfec4244085c08948f732c8def05b038973d2ca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:17:29.204634 kubelet[3267]: E0213 20:17:29.204614 3267 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d7f28ca0ecacdf622e04d94b3cfec4244085c08948f732c8def05b038973d2ca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-55886c68fb-v8l5j" Feb 13 20:17:29.204634 kubelet[3267]: E0213 20:17:29.204626 3267 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d7f28ca0ecacdf622e04d94b3cfec4244085c08948f732c8def05b038973d2ca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-55886c68fb-v8l5j" Feb 13 20:17:29.204695 kubelet[3267]: E0213 20:17:29.204643 3267 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-55886c68fb-v8l5j_calico-system(853990b6-ca4d-4293-b605-05f073fd0f8e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-55886c68fb-v8l5j_calico-system(853990b6-ca4d-4293-b605-05f073fd0f8e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d7f28ca0ecacdf622e04d94b3cfec4244085c08948f732c8def05b038973d2ca\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" pod="calico-system/calico-kube-controllers-55886c68fb-v8l5j" podUID="853990b6-ca4d-4293-b605-05f073fd0f8e" Feb 13 20:17:30.141147 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-68b777db6f50b3f0cb4f0f1a6c46d29799a1daa82d37d65230cc719071ff323c-shm.mount: Deactivated successfully. Feb 13 20:17:30.164256 kubelet[3267]: I0213 20:17:30.164238 3267 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="742662f7025f6bf31a0897647c434208ed695dd3a3c685264cb7223a8679270d" Feb 13 20:17:30.164594 containerd[1795]: time="2025-02-13T20:17:30.164575851Z" level=info msg="StopPodSandbox for \"742662f7025f6bf31a0897647c434208ed695dd3a3c685264cb7223a8679270d\"" Feb 13 20:17:30.165161 containerd[1795]: time="2025-02-13T20:17:30.165139280Z" level=info msg="Ensure that sandbox 742662f7025f6bf31a0897647c434208ed695dd3a3c685264cb7223a8679270d in task-service has been cleanup successfully" Feb 13 20:17:30.165722 containerd[1795]: time="2025-02-13T20:17:30.165705337Z" level=info msg="TearDown network for sandbox \"742662f7025f6bf31a0897647c434208ed695dd3a3c685264cb7223a8679270d\" successfully" Feb 13 20:17:30.165994 containerd[1795]: time="2025-02-13T20:17:30.165722760Z" level=info msg="StopPodSandbox for \"742662f7025f6bf31a0897647c434208ed695dd3a3c685264cb7223a8679270d\" returns successfully" Feb 13 20:17:30.166125 kubelet[3267]: I0213 20:17:30.166108 3267 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2d9c5629e507dde38691e79e7bbfe3f190891c2fac1fe7d5ba94df8f3cc59e20" Feb 13 20:17:30.166190 containerd[1795]: time="2025-02-13T20:17:30.166177879Z" level=info msg="StopPodSandbox for \"d1b03cea096b09213f490a993ed65f8af74b79974ee36e7254a5f60edefd9c53\"" Feb 13 20:17:30.166244 containerd[1795]: time="2025-02-13T20:17:30.166232286Z" level=info msg="TearDown network for sandbox \"d1b03cea096b09213f490a993ed65f8af74b79974ee36e7254a5f60edefd9c53\" successfully" Feb 13 20:17:30.166285 containerd[1795]: 
time="2025-02-13T20:17:30.166245330Z" level=info msg="StopPodSandbox for \"d1b03cea096b09213f490a993ed65f8af74b79974ee36e7254a5f60edefd9c53\" returns successfully" Feb 13 20:17:30.166424 containerd[1795]: time="2025-02-13T20:17:30.166409020Z" level=info msg="StopPodSandbox for \"4101e018441a5f170b331eaff8faff2caa5b81435959ed6c493ff174cf8464eb\"" Feb 13 20:17:30.166470 containerd[1795]: time="2025-02-13T20:17:30.166457965Z" level=info msg="StopPodSandbox for \"2d9c5629e507dde38691e79e7bbfe3f190891c2fac1fe7d5ba94df8f3cc59e20\"" Feb 13 20:17:30.166521 containerd[1795]: time="2025-02-13T20:17:30.166484033Z" level=info msg="TearDown network for sandbox \"4101e018441a5f170b331eaff8faff2caa5b81435959ed6c493ff174cf8464eb\" successfully" Feb 13 20:17:30.166551 containerd[1795]: time="2025-02-13T20:17:30.166522437Z" level=info msg="StopPodSandbox for \"4101e018441a5f170b331eaff8faff2caa5b81435959ed6c493ff174cf8464eb\" returns successfully" Feb 13 20:17:30.166597 containerd[1795]: time="2025-02-13T20:17:30.166583824Z" level=info msg="Ensure that sandbox 2d9c5629e507dde38691e79e7bbfe3f190891c2fac1fe7d5ba94df8f3cc59e20 in task-service has been cleanup successfully" Feb 13 20:17:30.166706 containerd[1795]: time="2025-02-13T20:17:30.166692501Z" level=info msg="TearDown network for sandbox \"2d9c5629e507dde38691e79e7bbfe3f190891c2fac1fe7d5ba94df8f3cc59e20\" successfully" Feb 13 20:17:30.166733 containerd[1795]: time="2025-02-13T20:17:30.166707104Z" level=info msg="StopPodSandbox for \"2d9c5629e507dde38691e79e7bbfe3f190891c2fac1fe7d5ba94df8f3cc59e20\" returns successfully" Feb 13 20:17:30.166782 containerd[1795]: time="2025-02-13T20:17:30.166767644Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kcvvn,Uid:64b01e33-b95d-49cd-8822-d1aaf4457527,Namespace:calico-system,Attempt:3,}" Feb 13 20:17:30.166867 containerd[1795]: time="2025-02-13T20:17:30.166854973Z" level=info msg="StopPodSandbox for 
\"16ed88c7929a4c6125d9006188b80e4edc5622a36d52f3e519a53143f17d7ef7\"" Feb 13 20:17:30.166938 containerd[1795]: time="2025-02-13T20:17:30.166907009Z" level=info msg="TearDown network for sandbox \"16ed88c7929a4c6125d9006188b80e4edc5622a36d52f3e519a53143f17d7ef7\" successfully" Feb 13 20:17:30.166938 containerd[1795]: time="2025-02-13T20:17:30.166937010Z" level=info msg="StopPodSandbox for \"16ed88c7929a4c6125d9006188b80e4edc5622a36d52f3e519a53143f17d7ef7\" returns successfully" Feb 13 20:17:30.167018 kubelet[3267]: I0213 20:17:30.166938 3267 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="68b777db6f50b3f0cb4f0f1a6c46d29799a1daa82d37d65230cc719071ff323c" Feb 13 20:17:30.167077 containerd[1795]: time="2025-02-13T20:17:30.167064496Z" level=info msg="StopPodSandbox for \"b3497dacf36ec8b56010ca6778e1a849107e29c7f04852168e3e16e0ca2c7348\"" Feb 13 20:17:30.167127 containerd[1795]: time="2025-02-13T20:17:30.167116872Z" level=info msg="TearDown network for sandbox \"b3497dacf36ec8b56010ca6778e1a849107e29c7f04852168e3e16e0ca2c7348\" successfully" Feb 13 20:17:30.167166 containerd[1795]: time="2025-02-13T20:17:30.167126245Z" level=info msg="StopPodSandbox for \"b3497dacf36ec8b56010ca6778e1a849107e29c7f04852168e3e16e0ca2c7348\" returns successfully" Feb 13 20:17:30.167249 systemd[1]: run-netns-cni\x2da1c9fef9\x2daf3d\x2d7ac3\x2dde88\x2d00a65a9c130e.mount: Deactivated successfully. 
Feb 13 20:17:30.167308 containerd[1795]: time="2025-02-13T20:17:30.167262986Z" level=info msg="StopPodSandbox for \"68b777db6f50b3f0cb4f0f1a6c46d29799a1daa82d37d65230cc719071ff323c\"" Feb 13 20:17:30.167353 containerd[1795]: time="2025-02-13T20:17:30.167342330Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-ff9d88786-8wrbb,Uid:a01e5e0d-d13b-4c53-a211-8d274b915928,Namespace:calico-apiserver,Attempt:3,}" Feb 13 20:17:30.167416 containerd[1795]: time="2025-02-13T20:17:30.167403203Z" level=info msg="Ensure that sandbox 68b777db6f50b3f0cb4f0f1a6c46d29799a1daa82d37d65230cc719071ff323c in task-service has been cleanup successfully" Feb 13 20:17:30.167516 containerd[1795]: time="2025-02-13T20:17:30.167502514Z" level=info msg="TearDown network for sandbox \"68b777db6f50b3f0cb4f0f1a6c46d29799a1daa82d37d65230cc719071ff323c\" successfully" Feb 13 20:17:30.167516 containerd[1795]: time="2025-02-13T20:17:30.167512998Z" level=info msg="StopPodSandbox for \"68b777db6f50b3f0cb4f0f1a6c46d29799a1daa82d37d65230cc719071ff323c\" returns successfully" Feb 13 20:17:30.167706 containerd[1795]: time="2025-02-13T20:17:30.167694890Z" level=info msg="StopPodSandbox for \"4d57310ec40dcb370935e6e9bdfcb6c64a98c34f917a4d8dbc684a1876240bd1\"" Feb 13 20:17:30.167753 containerd[1795]: time="2025-02-13T20:17:30.167740449Z" level=info msg="TearDown network for sandbox \"4d57310ec40dcb370935e6e9bdfcb6c64a98c34f917a4d8dbc684a1876240bd1\" successfully" Feb 13 20:17:30.167795 containerd[1795]: time="2025-02-13T20:17:30.167752556Z" level=info msg="StopPodSandbox for \"4d57310ec40dcb370935e6e9bdfcb6c64a98c34f917a4d8dbc684a1876240bd1\" returns successfully" Feb 13 20:17:30.167850 kubelet[3267]: I0213 20:17:30.167800 3267 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a6724c0edda64bd1462a71a9c7e44436716599495667250f4790209be0006577" Feb 13 20:17:30.167911 containerd[1795]: time="2025-02-13T20:17:30.167894455Z" level=info msg="StopPodSandbox for 
\"9ad3372cb116d6054d70b1eaa52144318f392d424263acd32de291b731176c1b\"" Feb 13 20:17:30.167976 containerd[1795]: time="2025-02-13T20:17:30.167950422Z" level=info msg="TearDown network for sandbox \"9ad3372cb116d6054d70b1eaa52144318f392d424263acd32de291b731176c1b\" successfully" Feb 13 20:17:30.168006 containerd[1795]: time="2025-02-13T20:17:30.167976593Z" level=info msg="StopPodSandbox for \"9ad3372cb116d6054d70b1eaa52144318f392d424263acd32de291b731176c1b\" returns successfully" Feb 13 20:17:30.168129 containerd[1795]: time="2025-02-13T20:17:30.168118359Z" level=info msg="StopPodSandbox for \"a6724c0edda64bd1462a71a9c7e44436716599495667250f4790209be0006577\"" Feb 13 20:17:30.168228 containerd[1795]: time="2025-02-13T20:17:30.168213474Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-frmtf,Uid:542fc81f-a219-47bd-b6bf-841062c32db8,Namespace:kube-system,Attempt:3,}" Feb 13 20:17:30.168293 containerd[1795]: time="2025-02-13T20:17:30.168278903Z" level=info msg="Ensure that sandbox a6724c0edda64bd1462a71a9c7e44436716599495667250f4790209be0006577 in task-service has been cleanup successfully" Feb 13 20:17:30.168412 containerd[1795]: time="2025-02-13T20:17:30.168401589Z" level=info msg="TearDown network for sandbox \"a6724c0edda64bd1462a71a9c7e44436716599495667250f4790209be0006577\" successfully" Feb 13 20:17:30.168438 containerd[1795]: time="2025-02-13T20:17:30.168412541Z" level=info msg="StopPodSandbox for \"a6724c0edda64bd1462a71a9c7e44436716599495667250f4790209be0006577\" returns successfully" Feb 13 20:17:30.168575 containerd[1795]: time="2025-02-13T20:17:30.168563457Z" level=info msg="StopPodSandbox for \"e924f016fb6c20430a67bbab9156e89c69d4fea27ab974c6c2a114f4e60ddfaf\"" Feb 13 20:17:30.168626 containerd[1795]: time="2025-02-13T20:17:30.168615358Z" level=info msg="TearDown network for sandbox \"e924f016fb6c20430a67bbab9156e89c69d4fea27ab974c6c2a114f4e60ddfaf\" successfully" Feb 13 20:17:30.168654 containerd[1795]: 
time="2025-02-13T20:17:30.168624645Z" level=info msg="StopPodSandbox for \"e924f016fb6c20430a67bbab9156e89c69d4fea27ab974c6c2a114f4e60ddfaf\" returns successfully" Feb 13 20:17:30.168685 kubelet[3267]: I0213 20:17:30.168627 3267 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="53ad3a4fafcda47a238d07836dd93b5693aae139440c6f77a6d1eb3fbc605fac" Feb 13 20:17:30.168765 containerd[1795]: time="2025-02-13T20:17:30.168750657Z" level=info msg="StopPodSandbox for \"fd52d3f97e9a1405278690dbfe9773e4c5bf6ddadf750c07b522c7e0964feb47\"" Feb 13 20:17:30.168825 containerd[1795]: time="2025-02-13T20:17:30.168800225Z" level=info msg="TearDown network for sandbox \"fd52d3f97e9a1405278690dbfe9773e4c5bf6ddadf750c07b522c7e0964feb47\" successfully" Feb 13 20:17:30.168856 containerd[1795]: time="2025-02-13T20:17:30.168824709Z" level=info msg="StopPodSandbox for \"fd52d3f97e9a1405278690dbfe9773e4c5bf6ddadf750c07b522c7e0964feb47\" returns successfully" Feb 13 20:17:30.168917 containerd[1795]: time="2025-02-13T20:17:30.168907894Z" level=info msg="StopPodSandbox for \"53ad3a4fafcda47a238d07836dd93b5693aae139440c6f77a6d1eb3fbc605fac\"" Feb 13 20:17:30.169021 containerd[1795]: time="2025-02-13T20:17:30.169008987Z" level=info msg="Ensure that sandbox 53ad3a4fafcda47a238d07836dd93b5693aae139440c6f77a6d1eb3fbc605fac in task-service has been cleanup successfully" Feb 13 20:17:30.169050 containerd[1795]: time="2025-02-13T20:17:30.169032535Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-cnhpl,Uid:f56e4292-4136-4f41-a2d5-245b627504ec,Namespace:kube-system,Attempt:3,}" Feb 13 20:17:30.169117 containerd[1795]: time="2025-02-13T20:17:30.169104245Z" level=info msg="TearDown network for sandbox \"53ad3a4fafcda47a238d07836dd93b5693aae139440c6f77a6d1eb3fbc605fac\" successfully" Feb 13 20:17:30.169145 containerd[1795]: time="2025-02-13T20:17:30.169117210Z" level=info msg="StopPodSandbox for 
\"53ad3a4fafcda47a238d07836dd93b5693aae139440c6f77a6d1eb3fbc605fac\" returns successfully" Feb 13 20:17:30.169260 containerd[1795]: time="2025-02-13T20:17:30.169248839Z" level=info msg="StopPodSandbox for \"fedba8a051bccde5aefce8bb80d817a2869be0c7484736372697428c813b98ff\"" Feb 13 20:17:30.169311 containerd[1795]: time="2025-02-13T20:17:30.169301804Z" level=info msg="TearDown network for sandbox \"fedba8a051bccde5aefce8bb80d817a2869be0c7484736372697428c813b98ff\" successfully" Feb 13 20:17:30.169335 containerd[1795]: time="2025-02-13T20:17:30.169310932Z" level=info msg="StopPodSandbox for \"fedba8a051bccde5aefce8bb80d817a2869be0c7484736372697428c813b98ff\" returns successfully" Feb 13 20:17:30.169383 kubelet[3267]: I0213 20:17:30.169374 3267 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d7f28ca0ecacdf622e04d94b3cfec4244085c08948f732c8def05b038973d2ca" Feb 13 20:17:30.169469 containerd[1795]: time="2025-02-13T20:17:30.169456397Z" level=info msg="StopPodSandbox for \"b4ce2a92348a0c26ae24d67b825325d18f8179c342c157b6519d8610f65f514b\"" Feb 13 20:17:30.169518 containerd[1795]: time="2025-02-13T20:17:30.169508592Z" level=info msg="TearDown network for sandbox \"b4ce2a92348a0c26ae24d67b825325d18f8179c342c157b6519d8610f65f514b\" successfully" Feb 13 20:17:30.169551 containerd[1795]: time="2025-02-13T20:17:30.169517264Z" level=info msg="StopPodSandbox for \"b4ce2a92348a0c26ae24d67b825325d18f8179c342c157b6519d8610f65f514b\" returns successfully" Feb 13 20:17:30.169610 systemd[1]: run-netns-cni\x2d2f1506bc\x2d4d7d\x2d9b5f\x2df781\x2d9489dd6de624.mount: Deactivated successfully. 
Feb 13 20:17:30.169680 containerd[1795]: time="2025-02-13T20:17:30.169606574Z" level=info msg="StopPodSandbox for \"d7f28ca0ecacdf622e04d94b3cfec4244085c08948f732c8def05b038973d2ca\"" Feb 13 20:17:30.169704 containerd[1795]: time="2025-02-13T20:17:30.169696868Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-ff9d88786-v9j7t,Uid:36ab6480-30fc-4b9c-bca0-5bd3671f05dc,Namespace:calico-apiserver,Attempt:3,}" Feb 13 20:17:30.169686 systemd[1]: run-netns-cni\x2d4e6839cf\x2da014\x2d9136\x2d0126\x2d20aac57b409e.mount: Deactivated successfully. Feb 13 20:17:30.169749 containerd[1795]: time="2025-02-13T20:17:30.169711808Z" level=info msg="Ensure that sandbox d7f28ca0ecacdf622e04d94b3cfec4244085c08948f732c8def05b038973d2ca in task-service has been cleanup successfully" Feb 13 20:17:30.169791 containerd[1795]: time="2025-02-13T20:17:30.169782680Z" level=info msg="TearDown network for sandbox \"d7f28ca0ecacdf622e04d94b3cfec4244085c08948f732c8def05b038973d2ca\" successfully" Feb 13 20:17:30.169826 containerd[1795]: time="2025-02-13T20:17:30.169790407Z" level=info msg="StopPodSandbox for \"d7f28ca0ecacdf622e04d94b3cfec4244085c08948f732c8def05b038973d2ca\" returns successfully" Feb 13 20:17:30.169902 containerd[1795]: time="2025-02-13T20:17:30.169892742Z" level=info msg="StopPodSandbox for \"21e4bf6e21241b54d8bfddcb494e2519a92b48eb90676fa95896db0ca262525c\"" Feb 13 20:17:30.169936 containerd[1795]: time="2025-02-13T20:17:30.169926619Z" level=info msg="TearDown network for sandbox \"21e4bf6e21241b54d8bfddcb494e2519a92b48eb90676fa95896db0ca262525c\" successfully" Feb 13 20:17:30.169936 containerd[1795]: time="2025-02-13T20:17:30.169932143Z" level=info msg="StopPodSandbox for \"21e4bf6e21241b54d8bfddcb494e2519a92b48eb90676fa95896db0ca262525c\" returns successfully" Feb 13 20:17:30.170037 containerd[1795]: time="2025-02-13T20:17:30.170028062Z" level=info msg="StopPodSandbox for \"7ed1a43b73d432d232df1df54c79d9006e1b5d16a86f107e55387abae9ed7d7f\"" Feb 13 
20:17:30.170074 containerd[1795]: time="2025-02-13T20:17:30.170063074Z" level=info msg="TearDown network for sandbox \"7ed1a43b73d432d232df1df54c79d9006e1b5d16a86f107e55387abae9ed7d7f\" successfully" Feb 13 20:17:30.170074 containerd[1795]: time="2025-02-13T20:17:30.170068710Z" level=info msg="StopPodSandbox for \"7ed1a43b73d432d232df1df54c79d9006e1b5d16a86f107e55387abae9ed7d7f\" returns successfully" Feb 13 20:17:30.170239 containerd[1795]: time="2025-02-13T20:17:30.170229130Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-55886c68fb-v8l5j,Uid:853990b6-ca4d-4293-b605-05f073fd0f8e,Namespace:calico-system,Attempt:3,}" Feb 13 20:17:30.171262 systemd[1]: run-netns-cni\x2dafb6acb8\x2d04cf\x2d741d\x2d9987\x2d94fede93ef58.mount: Deactivated successfully. Feb 13 20:17:30.171306 systemd[1]: run-netns-cni\x2d58efc637\x2dbabb\x2d267c\x2d6df3\x2d030a6afa223b.mount: Deactivated successfully. Feb 13 20:17:30.171338 systemd[1]: run-netns-cni\x2d1df807b4\x2db7a1\x2db6b8\x2d2226\x2d99233a326f4a.mount: Deactivated successfully. 
Feb 13 20:17:30.275287 containerd[1795]: time="2025-02-13T20:17:30.275243204Z" level=error msg="Failed to destroy network for sandbox \"58c1979e9a2a6e936d17dced152cdd4de3bc1944cc1ff89a55fe1e1b1bfbd636\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:17:30.275408 containerd[1795]: time="2025-02-13T20:17:30.275340838Z" level=error msg="Failed to destroy network for sandbox \"26c4c6208b535369ade9ba2d1c9efe43b14de5e700df1bb94d808459209f5627\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:17:30.275495 containerd[1795]: time="2025-02-13T20:17:30.275480161Z" level=error msg="Failed to destroy network for sandbox \"59336c9984fec613f36c76eda4de6d6c6f327505190ae7d66c8b70b1d1898a5a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:17:30.275543 containerd[1795]: time="2025-02-13T20:17:30.275528842Z" level=error msg="encountered an error cleaning up failed sandbox \"58c1979e9a2a6e936d17dced152cdd4de3bc1944cc1ff89a55fe1e1b1bfbd636\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:17:30.275582 containerd[1795]: time="2025-02-13T20:17:30.275567145Z" level=error msg="encountered an error cleaning up failed sandbox \"26c4c6208b535369ade9ba2d1c9efe43b14de5e700df1bb94d808459209f5627\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:17:30.275619 containerd[1795]: time="2025-02-13T20:17:30.275607724Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-cnhpl,Uid:f56e4292-4136-4f41-a2d5-245b627504ec,Namespace:kube-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"26c4c6208b535369ade9ba2d1c9efe43b14de5e700df1bb94d808459209f5627\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:17:30.275652 containerd[1795]: time="2025-02-13T20:17:30.275567547Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-ff9d88786-8wrbb,Uid:a01e5e0d-d13b-4c53-a211-8d274b915928,Namespace:calico-apiserver,Attempt:3,} failed, error" error="failed to setup network for sandbox \"58c1979e9a2a6e936d17dced152cdd4de3bc1944cc1ff89a55fe1e1b1bfbd636\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:17:30.275716 containerd[1795]: time="2025-02-13T20:17:30.275659466Z" level=error msg="encountered an error cleaning up failed sandbox \"59336c9984fec613f36c76eda4de6d6c6f327505190ae7d66c8b70b1d1898a5a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:17:30.275716 containerd[1795]: time="2025-02-13T20:17:30.275689460Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kcvvn,Uid:64b01e33-b95d-49cd-8822-d1aaf4457527,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"59336c9984fec613f36c76eda4de6d6c6f327505190ae7d66c8b70b1d1898a5a\": plugin type=\"calico\" 
failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:17:30.275785 kubelet[3267]: E0213 20:17:30.275727 3267 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"58c1979e9a2a6e936d17dced152cdd4de3bc1944cc1ff89a55fe1e1b1bfbd636\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:17:30.275785 kubelet[3267]: E0213 20:17:30.275754 3267 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"59336c9984fec613f36c76eda4de6d6c6f327505190ae7d66c8b70b1d1898a5a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:17:30.275785 kubelet[3267]: E0213 20:17:30.275768 3267 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"58c1979e9a2a6e936d17dced152cdd4de3bc1944cc1ff89a55fe1e1b1bfbd636\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-ff9d88786-8wrbb" Feb 13 20:17:30.275785 kubelet[3267]: E0213 20:17:30.275774 3267 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"59336c9984fec613f36c76eda4de6d6c6f327505190ae7d66c8b70b1d1898a5a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="calico-system/csi-node-driver-kcvvn" Feb 13 20:17:30.275903 kubelet[3267]: E0213 20:17:30.275782 3267 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"58c1979e9a2a6e936d17dced152cdd4de3bc1944cc1ff89a55fe1e1b1bfbd636\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-ff9d88786-8wrbb" Feb 13 20:17:30.275903 kubelet[3267]: E0213 20:17:30.275787 3267 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"59336c9984fec613f36c76eda4de6d6c6f327505190ae7d66c8b70b1d1898a5a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-kcvvn" Feb 13 20:17:30.275903 kubelet[3267]: E0213 20:17:30.275727 3267 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"26c4c6208b535369ade9ba2d1c9efe43b14de5e700df1bb94d808459209f5627\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:17:30.275963 containerd[1795]: time="2025-02-13T20:17:30.275786142Z" level=error msg="Failed to destroy network for sandbox \"3baac9f550f08e67ebd65d0dc083ed7988985fc017ce08034e9dbf44d3c29e50\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:17:30.275963 containerd[1795]: time="2025-02-13T20:17:30.275896675Z" level=error 
msg="Failed to destroy network for sandbox \"4221ea701a39ec4dd3a0d137cab931e4952b6af05d86d5f2ccc74c4d565d63d6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:17:30.276001 kubelet[3267]: E0213 20:17:30.275806 3267 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-ff9d88786-8wrbb_calico-apiserver(a01e5e0d-d13b-4c53-a211-8d274b915928)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-ff9d88786-8wrbb_calico-apiserver(a01e5e0d-d13b-4c53-a211-8d274b915928)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"58c1979e9a2a6e936d17dced152cdd4de3bc1944cc1ff89a55fe1e1b1bfbd636\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-ff9d88786-8wrbb" podUID="a01e5e0d-d13b-4c53-a211-8d274b915928" Feb 13 20:17:30.276001 kubelet[3267]: E0213 20:17:30.275829 3267 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"26c4c6208b535369ade9ba2d1c9efe43b14de5e700df1bb94d808459209f5627\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-cnhpl" Feb 13 20:17:30.276001 kubelet[3267]: E0213 20:17:30.275844 3267 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"26c4c6208b535369ade9ba2d1c9efe43b14de5e700df1bb94d808459209f5627\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-cnhpl" Feb 13 20:17:30.276075 kubelet[3267]: E0213 20:17:30.275806 3267 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-kcvvn_calico-system(64b01e33-b95d-49cd-8822-d1aaf4457527)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-kcvvn_calico-system(64b01e33-b95d-49cd-8822-d1aaf4457527)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"59336c9984fec613f36c76eda4de6d6c6f327505190ae7d66c8b70b1d1898a5a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-kcvvn" podUID="64b01e33-b95d-49cd-8822-d1aaf4457527" Feb 13 20:17:30.276075 kubelet[3267]: E0213 20:17:30.275866 3267 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-cnhpl_kube-system(f56e4292-4136-4f41-a2d5-245b627504ec)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-cnhpl_kube-system(f56e4292-4136-4f41-a2d5-245b627504ec)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"26c4c6208b535369ade9ba2d1c9efe43b14de5e700df1bb94d808459209f5627\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-cnhpl" podUID="f56e4292-4136-4f41-a2d5-245b627504ec" Feb 13 20:17:30.276136 containerd[1795]: time="2025-02-13T20:17:30.275989240Z" level=error msg="encountered an error cleaning up failed sandbox \"3baac9f550f08e67ebd65d0dc083ed7988985fc017ce08034e9dbf44d3c29e50\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" 
failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:17:30.276136 containerd[1795]: time="2025-02-13T20:17:30.276039534Z" level=error msg="encountered an error cleaning up failed sandbox \"4221ea701a39ec4dd3a0d137cab931e4952b6af05d86d5f2ccc74c4d565d63d6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:17:30.276136 containerd[1795]: time="2025-02-13T20:17:30.276055090Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-frmtf,Uid:542fc81f-a219-47bd-b6bf-841062c32db8,Namespace:kube-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"3baac9f550f08e67ebd65d0dc083ed7988985fc017ce08034e9dbf44d3c29e50\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:17:30.276136 containerd[1795]: time="2025-02-13T20:17:30.276068558Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-ff9d88786-v9j7t,Uid:36ab6480-30fc-4b9c-bca0-5bd3671f05dc,Namespace:calico-apiserver,Attempt:3,} failed, error" error="failed to setup network for sandbox \"4221ea701a39ec4dd3a0d137cab931e4952b6af05d86d5f2ccc74c4d565d63d6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:17:30.276223 kubelet[3267]: E0213 20:17:30.276138 3267 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3baac9f550f08e67ebd65d0dc083ed7988985fc017ce08034e9dbf44d3c29e50\": plugin type=\"calico\" 
failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:17:30.276223 kubelet[3267]: E0213 20:17:30.276163 3267 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3baac9f550f08e67ebd65d0dc083ed7988985fc017ce08034e9dbf44d3c29e50\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-frmtf" Feb 13 20:17:30.276223 kubelet[3267]: E0213 20:17:30.276178 3267 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3baac9f550f08e67ebd65d0dc083ed7988985fc017ce08034e9dbf44d3c29e50\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-frmtf" Feb 13 20:17:30.276223 kubelet[3267]: E0213 20:17:30.276142 3267 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4221ea701a39ec4dd3a0d137cab931e4952b6af05d86d5f2ccc74c4d565d63d6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:17:30.276300 kubelet[3267]: E0213 20:17:30.276227 3267 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4221ea701a39ec4dd3a0d137cab931e4952b6af05d86d5f2ccc74c4d565d63d6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-ff9d88786-v9j7t" Feb 13 20:17:30.276300 kubelet[3267]: E0213 20:17:30.276242 3267 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4221ea701a39ec4dd3a0d137cab931e4952b6af05d86d5f2ccc74c4d565d63d6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-ff9d88786-v9j7t" Feb 13 20:17:30.276300 kubelet[3267]: E0213 20:17:30.276263 3267 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-ff9d88786-v9j7t_calico-apiserver(36ab6480-30fc-4b9c-bca0-5bd3671f05dc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-ff9d88786-v9j7t_calico-apiserver(36ab6480-30fc-4b9c-bca0-5bd3671f05dc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4221ea701a39ec4dd3a0d137cab931e4952b6af05d86d5f2ccc74c4d565d63d6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-ff9d88786-v9j7t" podUID="36ab6480-30fc-4b9c-bca0-5bd3671f05dc" Feb 13 20:17:30.276368 kubelet[3267]: E0213 20:17:30.276201 3267 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-frmtf_kube-system(542fc81f-a219-47bd-b6bf-841062c32db8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-frmtf_kube-system(542fc81f-a219-47bd-b6bf-841062c32db8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3baac9f550f08e67ebd65d0dc083ed7988985fc017ce08034e9dbf44d3c29e50\\\": plugin type=\\\"calico\\\" failed 
(add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-frmtf" podUID="542fc81f-a219-47bd-b6bf-841062c32db8" Feb 13 20:17:30.276549 containerd[1795]: time="2025-02-13T20:17:30.276535645Z" level=error msg="Failed to destroy network for sandbox \"d735a1491d84d0eaf4a12bd073967a73bb758324b20ddd2802535023b346c9b0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:17:30.276691 containerd[1795]: time="2025-02-13T20:17:30.276677705Z" level=error msg="encountered an error cleaning up failed sandbox \"d735a1491d84d0eaf4a12bd073967a73bb758324b20ddd2802535023b346c9b0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:17:30.276727 containerd[1795]: time="2025-02-13T20:17:30.276704478Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-55886c68fb-v8l5j,Uid:853990b6-ca4d-4293-b605-05f073fd0f8e,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"d735a1491d84d0eaf4a12bd073967a73bb758324b20ddd2802535023b346c9b0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:17:30.276781 kubelet[3267]: E0213 20:17:30.276770 3267 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d735a1491d84d0eaf4a12bd073967a73bb758324b20ddd2802535023b346c9b0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:17:30.276805 kubelet[3267]: E0213 20:17:30.276786 3267 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d735a1491d84d0eaf4a12bd073967a73bb758324b20ddd2802535023b346c9b0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-55886c68fb-v8l5j" Feb 13 20:17:30.276805 kubelet[3267]: E0213 20:17:30.276795 3267 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d735a1491d84d0eaf4a12bd073967a73bb758324b20ddd2802535023b346c9b0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-55886c68fb-v8l5j" Feb 13 20:17:30.276849 kubelet[3267]: E0213 20:17:30.276810 3267 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-55886c68fb-v8l5j_calico-system(853990b6-ca4d-4293-b605-05f073fd0f8e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-55886c68fb-v8l5j_calico-system(853990b6-ca4d-4293-b605-05f073fd0f8e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d735a1491d84d0eaf4a12bd073967a73bb758324b20ddd2802535023b346c9b0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-55886c68fb-v8l5j" podUID="853990b6-ca4d-4293-b605-05f073fd0f8e" Feb 13 20:17:31.171054 
kubelet[3267]: I0213 20:17:31.171037 3267 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="26c4c6208b535369ade9ba2d1c9efe43b14de5e700df1bb94d808459209f5627" Feb 13 20:17:31.171359 containerd[1795]: time="2025-02-13T20:17:31.171339966Z" level=info msg="StopPodSandbox for \"26c4c6208b535369ade9ba2d1c9efe43b14de5e700df1bb94d808459209f5627\"" Feb 13 20:17:31.171515 containerd[1795]: time="2025-02-13T20:17:31.171476159Z" level=info msg="Ensure that sandbox 26c4c6208b535369ade9ba2d1c9efe43b14de5e700df1bb94d808459209f5627 in task-service has been cleanup successfully" Feb 13 20:17:31.171589 containerd[1795]: time="2025-02-13T20:17:31.171576345Z" level=info msg="TearDown network for sandbox \"26c4c6208b535369ade9ba2d1c9efe43b14de5e700df1bb94d808459209f5627\" successfully" Feb 13 20:17:31.171589 containerd[1795]: time="2025-02-13T20:17:31.171587042Z" level=info msg="StopPodSandbox for \"26c4c6208b535369ade9ba2d1c9efe43b14de5e700df1bb94d808459209f5627\" returns successfully" Feb 13 20:17:31.171717 containerd[1795]: time="2025-02-13T20:17:31.171702560Z" level=info msg="StopPodSandbox for \"a6724c0edda64bd1462a71a9c7e44436716599495667250f4790209be0006577\"" Feb 13 20:17:31.171788 containerd[1795]: time="2025-02-13T20:17:31.171758475Z" level=info msg="TearDown network for sandbox \"a6724c0edda64bd1462a71a9c7e44436716599495667250f4790209be0006577\" successfully" Feb 13 20:17:31.171820 containerd[1795]: time="2025-02-13T20:17:31.171787968Z" level=info msg="StopPodSandbox for \"a6724c0edda64bd1462a71a9c7e44436716599495667250f4790209be0006577\" returns successfully" Feb 13 20:17:31.171887 kubelet[3267]: I0213 20:17:31.171874 3267 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d735a1491d84d0eaf4a12bd073967a73bb758324b20ddd2802535023b346c9b0" Feb 13 20:17:31.171943 containerd[1795]: time="2025-02-13T20:17:31.171905511Z" level=info msg="StopPodSandbox for 
\"e924f016fb6c20430a67bbab9156e89c69d4fea27ab974c6c2a114f4e60ddfaf\"" Feb 13 20:17:31.171994 containerd[1795]: time="2025-02-13T20:17:31.171963613Z" level=info msg="TearDown network for sandbox \"e924f016fb6c20430a67bbab9156e89c69d4fea27ab974c6c2a114f4e60ddfaf\" successfully" Feb 13 20:17:31.172028 containerd[1795]: time="2025-02-13T20:17:31.171995374Z" level=info msg="StopPodSandbox for \"e924f016fb6c20430a67bbab9156e89c69d4fea27ab974c6c2a114f4e60ddfaf\" returns successfully" Feb 13 20:17:31.172096 containerd[1795]: time="2025-02-13T20:17:31.172086843Z" level=info msg="StopPodSandbox for \"fd52d3f97e9a1405278690dbfe9773e4c5bf6ddadf750c07b522c7e0964feb47\"" Feb 13 20:17:31.172137 containerd[1795]: time="2025-02-13T20:17:31.172125894Z" level=info msg="StopPodSandbox for \"d735a1491d84d0eaf4a12bd073967a73bb758324b20ddd2802535023b346c9b0\"" Feb 13 20:17:31.172169 containerd[1795]: time="2025-02-13T20:17:31.172135492Z" level=info msg="TearDown network for sandbox \"fd52d3f97e9a1405278690dbfe9773e4c5bf6ddadf750c07b522c7e0964feb47\" successfully" Feb 13 20:17:31.172169 containerd[1795]: time="2025-02-13T20:17:31.172144561Z" level=info msg="StopPodSandbox for \"fd52d3f97e9a1405278690dbfe9773e4c5bf6ddadf750c07b522c7e0964feb47\" returns successfully" Feb 13 20:17:31.172263 containerd[1795]: time="2025-02-13T20:17:31.172249845Z" level=info msg="Ensure that sandbox d735a1491d84d0eaf4a12bd073967a73bb758324b20ddd2802535023b346c9b0 in task-service has been cleanup successfully" Feb 13 20:17:31.172338 containerd[1795]: time="2025-02-13T20:17:31.172325069Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-cnhpl,Uid:f56e4292-4136-4f41-a2d5-245b627504ec,Namespace:kube-system,Attempt:4,}" Feb 13 20:17:31.172376 containerd[1795]: time="2025-02-13T20:17:31.172357738Z" level=info msg="TearDown network for sandbox \"d735a1491d84d0eaf4a12bd073967a73bb758324b20ddd2802535023b346c9b0\" successfully" Feb 13 20:17:31.172376 containerd[1795]: 
time="2025-02-13T20:17:31.172369231Z" level=info msg="StopPodSandbox for \"d735a1491d84d0eaf4a12bd073967a73bb758324b20ddd2802535023b346c9b0\" returns successfully" Feb 13 20:17:31.172534 containerd[1795]: time="2025-02-13T20:17:31.172519622Z" level=info msg="StopPodSandbox for \"d7f28ca0ecacdf622e04d94b3cfec4244085c08948f732c8def05b038973d2ca\"" Feb 13 20:17:31.172585 containerd[1795]: time="2025-02-13T20:17:31.172574579Z" level=info msg="TearDown network for sandbox \"d7f28ca0ecacdf622e04d94b3cfec4244085c08948f732c8def05b038973d2ca\" successfully" Feb 13 20:17:31.172644 containerd[1795]: time="2025-02-13T20:17:31.172585226Z" level=info msg="StopPodSandbox for \"d7f28ca0ecacdf622e04d94b3cfec4244085c08948f732c8def05b038973d2ca\" returns successfully" Feb 13 20:17:31.172767 containerd[1795]: time="2025-02-13T20:17:31.172758015Z" level=info msg="StopPodSandbox for \"21e4bf6e21241b54d8bfddcb494e2519a92b48eb90676fa95896db0ca262525c\"" Feb 13 20:17:31.172806 containerd[1795]: time="2025-02-13T20:17:31.172798402Z" level=info msg="TearDown network for sandbox \"21e4bf6e21241b54d8bfddcb494e2519a92b48eb90676fa95896db0ca262525c\" successfully" Feb 13 20:17:31.172838 containerd[1795]: time="2025-02-13T20:17:31.172806333Z" level=info msg="StopPodSandbox for \"21e4bf6e21241b54d8bfddcb494e2519a92b48eb90676fa95896db0ca262525c\" returns successfully" Feb 13 20:17:31.172873 kubelet[3267]: I0213 20:17:31.172822 3267 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="59336c9984fec613f36c76eda4de6d6c6f327505190ae7d66c8b70b1d1898a5a" Feb 13 20:17:31.172957 containerd[1795]: time="2025-02-13T20:17:31.172945846Z" level=info msg="StopPodSandbox for \"7ed1a43b73d432d232df1df54c79d9006e1b5d16a86f107e55387abae9ed7d7f\"" Feb 13 20:17:31.173008 containerd[1795]: time="2025-02-13T20:17:31.172997524Z" level=info msg="TearDown network for sandbox \"7ed1a43b73d432d232df1df54c79d9006e1b5d16a86f107e55387abae9ed7d7f\" successfully" Feb 13 20:17:31.173040 
containerd[1795]: time="2025-02-13T20:17:31.173008459Z" level=info msg="StopPodSandbox for \"7ed1a43b73d432d232df1df54c79d9006e1b5d16a86f107e55387abae9ed7d7f\" returns successfully" Feb 13 20:17:31.173070 containerd[1795]: time="2025-02-13T20:17:31.173057388Z" level=info msg="StopPodSandbox for \"59336c9984fec613f36c76eda4de6d6c6f327505190ae7d66c8b70b1d1898a5a\"" Feb 13 20:17:31.173190 containerd[1795]: time="2025-02-13T20:17:31.173176895Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-55886c68fb-v8l5j,Uid:853990b6-ca4d-4293-b605-05f073fd0f8e,Namespace:calico-system,Attempt:4,}" Feb 13 20:17:31.173223 containerd[1795]: time="2025-02-13T20:17:31.173189794Z" level=info msg="Ensure that sandbox 59336c9984fec613f36c76eda4de6d6c6f327505190ae7d66c8b70b1d1898a5a in task-service has been cleanup successfully" Feb 13 20:17:31.173277 containerd[1795]: time="2025-02-13T20:17:31.173268740Z" level=info msg="TearDown network for sandbox \"59336c9984fec613f36c76eda4de6d6c6f327505190ae7d66c8b70b1d1898a5a\" successfully" Feb 13 20:17:31.173301 containerd[1795]: time="2025-02-13T20:17:31.173277379Z" level=info msg="StopPodSandbox for \"59336c9984fec613f36c76eda4de6d6c6f327505190ae7d66c8b70b1d1898a5a\" returns successfully" Feb 13 20:17:31.173375 containerd[1795]: time="2025-02-13T20:17:31.173367162Z" level=info msg="StopPodSandbox for \"742662f7025f6bf31a0897647c434208ed695dd3a3c685264cb7223a8679270d\"" Feb 13 20:17:31.173409 containerd[1795]: time="2025-02-13T20:17:31.173402417Z" level=info msg="TearDown network for sandbox \"742662f7025f6bf31a0897647c434208ed695dd3a3c685264cb7223a8679270d\" successfully" Feb 13 20:17:31.173438 containerd[1795]: time="2025-02-13T20:17:31.173408998Z" level=info msg="StopPodSandbox for \"742662f7025f6bf31a0897647c434208ed695dd3a3c685264cb7223a8679270d\" returns successfully" Feb 13 20:17:31.173561 containerd[1795]: time="2025-02-13T20:17:31.173550912Z" level=info msg="StopPodSandbox for 
\"d1b03cea096b09213f490a993ed65f8af74b79974ee36e7254a5f60edefd9c53\"" Feb 13 20:17:31.173616 containerd[1795]: time="2025-02-13T20:17:31.173592819Z" level=info msg="TearDown network for sandbox \"d1b03cea096b09213f490a993ed65f8af74b79974ee36e7254a5f60edefd9c53\" successfully" Feb 13 20:17:31.173640 containerd[1795]: time="2025-02-13T20:17:31.173616081Z" level=info msg="StopPodSandbox for \"d1b03cea096b09213f490a993ed65f8af74b79974ee36e7254a5f60edefd9c53\" returns successfully" Feb 13 20:17:31.173688 kubelet[3267]: I0213 20:17:31.173680 3267 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3baac9f550f08e67ebd65d0dc083ed7988985fc017ce08034e9dbf44d3c29e50" Feb 13 20:17:31.173721 containerd[1795]: time="2025-02-13T20:17:31.173713056Z" level=info msg="StopPodSandbox for \"4101e018441a5f170b331eaff8faff2caa5b81435959ed6c493ff174cf8464eb\"" Feb 13 20:17:31.173759 containerd[1795]: time="2025-02-13T20:17:31.173751955Z" level=info msg="TearDown network for sandbox \"4101e018441a5f170b331eaff8faff2caa5b81435959ed6c493ff174cf8464eb\" successfully" Feb 13 20:17:31.173781 containerd[1795]: time="2025-02-13T20:17:31.173758928Z" level=info msg="StopPodSandbox for \"4101e018441a5f170b331eaff8faff2caa5b81435959ed6c493ff174cf8464eb\" returns successfully" Feb 13 20:17:31.173890 containerd[1795]: time="2025-02-13T20:17:31.173878275Z" level=info msg="StopPodSandbox for \"3baac9f550f08e67ebd65d0dc083ed7988985fc017ce08034e9dbf44d3c29e50\"" Feb 13 20:17:31.173948 containerd[1795]: time="2025-02-13T20:17:31.173936177Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kcvvn,Uid:64b01e33-b95d-49cd-8822-d1aaf4457527,Namespace:calico-system,Attempt:4,}" Feb 13 20:17:31.173970 systemd[1]: run-netns-cni\x2d5500915c\x2d46a9\x2d3282\x2deeae\x2d53a3c9a03ac2.mount: Deactivated successfully. 
Feb 13 20:17:31.174129 containerd[1795]: time="2025-02-13T20:17:31.173975908Z" level=info msg="Ensure that sandbox 3baac9f550f08e67ebd65d0dc083ed7988985fc017ce08034e9dbf44d3c29e50 in task-service has been cleanup successfully" Feb 13 20:17:31.174129 containerd[1795]: time="2025-02-13T20:17:31.174070953Z" level=info msg="TearDown network for sandbox \"3baac9f550f08e67ebd65d0dc083ed7988985fc017ce08034e9dbf44d3c29e50\" successfully" Feb 13 20:17:31.174129 containerd[1795]: time="2025-02-13T20:17:31.174078623Z" level=info msg="StopPodSandbox for \"3baac9f550f08e67ebd65d0dc083ed7988985fc017ce08034e9dbf44d3c29e50\" returns successfully" Feb 13 20:17:31.174210 containerd[1795]: time="2025-02-13T20:17:31.174197263Z" level=info msg="StopPodSandbox for \"68b777db6f50b3f0cb4f0f1a6c46d29799a1daa82d37d65230cc719071ff323c\"" Feb 13 20:17:31.174263 containerd[1795]: time="2025-02-13T20:17:31.174253261Z" level=info msg="TearDown network for sandbox \"68b777db6f50b3f0cb4f0f1a6c46d29799a1daa82d37d65230cc719071ff323c\" successfully" Feb 13 20:17:31.174296 containerd[1795]: time="2025-02-13T20:17:31.174263057Z" level=info msg="StopPodSandbox for \"68b777db6f50b3f0cb4f0f1a6c46d29799a1daa82d37d65230cc719071ff323c\" returns successfully" Feb 13 20:17:31.174377 containerd[1795]: time="2025-02-13T20:17:31.174367479Z" level=info msg="StopPodSandbox for \"4d57310ec40dcb370935e6e9bdfcb6c64a98c34f917a4d8dbc684a1876240bd1\"" Feb 13 20:17:31.174415 containerd[1795]: time="2025-02-13T20:17:31.174407031Z" level=info msg="TearDown network for sandbox \"4d57310ec40dcb370935e6e9bdfcb6c64a98c34f917a4d8dbc684a1876240bd1\" successfully" Feb 13 20:17:31.174415 containerd[1795]: time="2025-02-13T20:17:31.174413740Z" level=info msg="StopPodSandbox for \"4d57310ec40dcb370935e6e9bdfcb6c64a98c34f917a4d8dbc684a1876240bd1\" returns successfully" Feb 13 20:17:31.174455 kubelet[3267]: I0213 20:17:31.174440 3267 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="58c1979e9a2a6e936d17dced152cdd4de3bc1944cc1ff89a55fe1e1b1bfbd636" Feb 13 20:17:31.174545 containerd[1795]: time="2025-02-13T20:17:31.174534401Z" level=info msg="StopPodSandbox for \"9ad3372cb116d6054d70b1eaa52144318f392d424263acd32de291b731176c1b\"" Feb 13 20:17:31.174589 containerd[1795]: time="2025-02-13T20:17:31.174580166Z" level=info msg="TearDown network for sandbox \"9ad3372cb116d6054d70b1eaa52144318f392d424263acd32de291b731176c1b\" successfully" Feb 13 20:17:31.174610 containerd[1795]: time="2025-02-13T20:17:31.174589415Z" level=info msg="StopPodSandbox for \"9ad3372cb116d6054d70b1eaa52144318f392d424263acd32de291b731176c1b\" returns successfully" Feb 13 20:17:31.174694 containerd[1795]: time="2025-02-13T20:17:31.174683209Z" level=info msg="StopPodSandbox for \"58c1979e9a2a6e936d17dced152cdd4de3bc1944cc1ff89a55fe1e1b1bfbd636\"" Feb 13 20:17:31.174743 containerd[1795]: time="2025-02-13T20:17:31.174731313Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-frmtf,Uid:542fc81f-a219-47bd-b6bf-841062c32db8,Namespace:kube-system,Attempt:4,}" Feb 13 20:17:31.174800 containerd[1795]: time="2025-02-13T20:17:31.174789936Z" level=info msg="Ensure that sandbox 58c1979e9a2a6e936d17dced152cdd4de3bc1944cc1ff89a55fe1e1b1bfbd636 in task-service has been cleanup successfully" Feb 13 20:17:31.174893 containerd[1795]: time="2025-02-13T20:17:31.174883156Z" level=info msg="TearDown network for sandbox \"58c1979e9a2a6e936d17dced152cdd4de3bc1944cc1ff89a55fe1e1b1bfbd636\" successfully" Feb 13 20:17:31.174932 containerd[1795]: time="2025-02-13T20:17:31.174893713Z" level=info msg="StopPodSandbox for \"58c1979e9a2a6e936d17dced152cdd4de3bc1944cc1ff89a55fe1e1b1bfbd636\" returns successfully" Feb 13 20:17:31.174997 containerd[1795]: time="2025-02-13T20:17:31.174988709Z" level=info msg="StopPodSandbox for \"2d9c5629e507dde38691e79e7bbfe3f190891c2fac1fe7d5ba94df8f3cc59e20\"" Feb 13 20:17:31.175034 containerd[1795]: time="2025-02-13T20:17:31.175025303Z" 
level=info msg="TearDown network for sandbox \"2d9c5629e507dde38691e79e7bbfe3f190891c2fac1fe7d5ba94df8f3cc59e20\" successfully" Feb 13 20:17:31.175034 containerd[1795]: time="2025-02-13T20:17:31.175032139Z" level=info msg="StopPodSandbox for \"2d9c5629e507dde38691e79e7bbfe3f190891c2fac1fe7d5ba94df8f3cc59e20\" returns successfully" Feb 13 20:17:31.175152 containerd[1795]: time="2025-02-13T20:17:31.175138149Z" level=info msg="StopPodSandbox for \"16ed88c7929a4c6125d9006188b80e4edc5622a36d52f3e519a53143f17d7ef7\"" Feb 13 20:17:31.175202 containerd[1795]: time="2025-02-13T20:17:31.175191661Z" level=info msg="TearDown network for sandbox \"16ed88c7929a4c6125d9006188b80e4edc5622a36d52f3e519a53143f17d7ef7\" successfully" Feb 13 20:17:31.175227 containerd[1795]: time="2025-02-13T20:17:31.175202495Z" level=info msg="StopPodSandbox for \"16ed88c7929a4c6125d9006188b80e4edc5622a36d52f3e519a53143f17d7ef7\" returns successfully" Feb 13 20:17:31.175267 kubelet[3267]: I0213 20:17:31.175260 3267 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4221ea701a39ec4dd3a0d137cab931e4952b6af05d86d5f2ccc74c4d565d63d6" Feb 13 20:17:31.175311 containerd[1795]: time="2025-02-13T20:17:31.175300499Z" level=info msg="StopPodSandbox for \"b3497dacf36ec8b56010ca6778e1a849107e29c7f04852168e3e16e0ca2c7348\"" Feb 13 20:17:31.175368 containerd[1795]: time="2025-02-13T20:17:31.175346002Z" level=info msg="TearDown network for sandbox \"b3497dacf36ec8b56010ca6778e1a849107e29c7f04852168e3e16e0ca2c7348\" successfully" Feb 13 20:17:31.175393 containerd[1795]: time="2025-02-13T20:17:31.175367898Z" level=info msg="StopPodSandbox for \"b3497dacf36ec8b56010ca6778e1a849107e29c7f04852168e3e16e0ca2c7348\" returns successfully" Feb 13 20:17:31.175495 containerd[1795]: time="2025-02-13T20:17:31.175484876Z" level=info msg="StopPodSandbox for \"4221ea701a39ec4dd3a0d137cab931e4952b6af05d86d5f2ccc74c4d565d63d6\"" Feb 13 20:17:31.175549 containerd[1795]: 
time="2025-02-13T20:17:31.175537810Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-ff9d88786-8wrbb,Uid:a01e5e0d-d13b-4c53-a211-8d274b915928,Namespace:calico-apiserver,Attempt:4,}" Feb 13 20:17:31.175600 containerd[1795]: time="2025-02-13T20:17:31.175590499Z" level=info msg="Ensure that sandbox 4221ea701a39ec4dd3a0d137cab931e4952b6af05d86d5f2ccc74c4d565d63d6 in task-service has been cleanup successfully" Feb 13 20:17:31.175679 containerd[1795]: time="2025-02-13T20:17:31.175670539Z" level=info msg="TearDown network for sandbox \"4221ea701a39ec4dd3a0d137cab931e4952b6af05d86d5f2ccc74c4d565d63d6\" successfully" Feb 13 20:17:31.175702 containerd[1795]: time="2025-02-13T20:17:31.175679039Z" level=info msg="StopPodSandbox for \"4221ea701a39ec4dd3a0d137cab931e4952b6af05d86d5f2ccc74c4d565d63d6\" returns successfully" Feb 13 20:17:31.175798 containerd[1795]: time="2025-02-13T20:17:31.175788322Z" level=info msg="StopPodSandbox for \"53ad3a4fafcda47a238d07836dd93b5693aae139440c6f77a6d1eb3fbc605fac\"" Feb 13 20:17:31.175837 containerd[1795]: time="2025-02-13T20:17:31.175830247Z" level=info msg="TearDown network for sandbox \"53ad3a4fafcda47a238d07836dd93b5693aae139440c6f77a6d1eb3fbc605fac\" successfully" Feb 13 20:17:31.175860 containerd[1795]: time="2025-02-13T20:17:31.175836784Z" level=info msg="StopPodSandbox for \"53ad3a4fafcda47a238d07836dd93b5693aae139440c6f77a6d1eb3fbc605fac\" returns successfully" Feb 13 20:17:31.175962 containerd[1795]: time="2025-02-13T20:17:31.175951177Z" level=info msg="StopPodSandbox for \"fedba8a051bccde5aefce8bb80d817a2869be0c7484736372697428c813b98ff\"" Feb 13 20:17:31.176012 containerd[1795]: time="2025-02-13T20:17:31.176003816Z" level=info msg="TearDown network for sandbox \"fedba8a051bccde5aefce8bb80d817a2869be0c7484736372697428c813b98ff\" successfully" Feb 13 20:17:31.176045 containerd[1795]: time="2025-02-13T20:17:31.176012322Z" level=info msg="StopPodSandbox for 
\"fedba8a051bccde5aefce8bb80d817a2869be0c7484736372697428c813b98ff\" returns successfully" Feb 13 20:17:31.176137 containerd[1795]: time="2025-02-13T20:17:31.176125572Z" level=info msg="StopPodSandbox for \"b4ce2a92348a0c26ae24d67b825325d18f8179c342c157b6519d8610f65f514b\"" Feb 13 20:17:31.176188 containerd[1795]: time="2025-02-13T20:17:31.176178028Z" level=info msg="TearDown network for sandbox \"b4ce2a92348a0c26ae24d67b825325d18f8179c342c157b6519d8610f65f514b\" successfully" Feb 13 20:17:31.176211 containerd[1795]: time="2025-02-13T20:17:31.176188038Z" level=info msg="StopPodSandbox for \"b4ce2a92348a0c26ae24d67b825325d18f8179c342c157b6519d8610f65f514b\" returns successfully" Feb 13 20:17:31.176354 containerd[1795]: time="2025-02-13T20:17:31.176342954Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-ff9d88786-v9j7t,Uid:36ab6480-30fc-4b9c-bca0-5bd3671f05dc,Namespace:calico-apiserver,Attempt:4,}" Feb 13 20:17:31.176905 systemd[1]: run-netns-cni\x2dca92144c\x2d8501\x2db6f8\x2d6a3a\x2d5eae92abc8ae.mount: Deactivated successfully. Feb 13 20:17:31.177015 systemd[1]: run-netns-cni\x2d6bcdc006\x2ddff8\x2d25a4\x2d2276\x2ddb7edd0b418a.mount: Deactivated successfully. Feb 13 20:17:31.177068 systemd[1]: run-netns-cni\x2db389bf2d\x2d86d1\x2d6475\x2d5327\x2d2fb3651e5df3.mount: Deactivated successfully. Feb 13 20:17:31.177122 systemd[1]: run-netns-cni\x2d121d3e8d\x2d82e8\x2d4bdc\x2de0db\x2d101e0940e931.mount: Deactivated successfully. Feb 13 20:17:31.179359 systemd[1]: run-netns-cni\x2de5f940ba\x2d1961\x2df534\x2da45f\x2dbe010b562dfe.mount: Deactivated successfully. 
Feb 13 20:17:31.225124 containerd[1795]: time="2025-02-13T20:17:31.225080899Z" level=error msg="Failed to destroy network for sandbox \"39518c1b4feff14618099d0a7fd9326c316b2f1535276072cfc9ea84f951f16b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:17:31.225278 containerd[1795]: time="2025-02-13T20:17:31.225257671Z" level=error msg="Failed to destroy network for sandbox \"50a08a9a1d45fee7025d64772410578495a19605755cc69a8c3e5ea906ec2904\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:17:31.225324 containerd[1795]: time="2025-02-13T20:17:31.225311355Z" level=error msg="encountered an error cleaning up failed sandbox \"39518c1b4feff14618099d0a7fd9326c316b2f1535276072cfc9ea84f951f16b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:17:31.225361 containerd[1795]: time="2025-02-13T20:17:31.225350320Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-cnhpl,Uid:f56e4292-4136-4f41-a2d5-245b627504ec,Namespace:kube-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"39518c1b4feff14618099d0a7fd9326c316b2f1535276072cfc9ea84f951f16b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:17:31.225523 kubelet[3267]: E0213 20:17:31.225501 3267 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"39518c1b4feff14618099d0a7fd9326c316b2f1535276072cfc9ea84f951f16b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:17:31.225582 containerd[1795]: time="2025-02-13T20:17:31.225506109Z" level=error msg="encountered an error cleaning up failed sandbox \"50a08a9a1d45fee7025d64772410578495a19605755cc69a8c3e5ea906ec2904\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:17:31.225582 containerd[1795]: time="2025-02-13T20:17:31.225540481Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-55886c68fb-v8l5j,Uid:853990b6-ca4d-4293-b605-05f073fd0f8e,Namespace:calico-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"50a08a9a1d45fee7025d64772410578495a19605755cc69a8c3e5ea906ec2904\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:17:31.225684 kubelet[3267]: E0213 20:17:31.225545 3267 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"39518c1b4feff14618099d0a7fd9326c316b2f1535276072cfc9ea84f951f16b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-cnhpl" Feb 13 20:17:31.225684 kubelet[3267]: E0213 20:17:31.225566 3267 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"39518c1b4feff14618099d0a7fd9326c316b2f1535276072cfc9ea84f951f16b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-cnhpl" Feb 13 20:17:31.225684 kubelet[3267]: E0213 20:17:31.225602 3267 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-cnhpl_kube-system(f56e4292-4136-4f41-a2d5-245b627504ec)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-cnhpl_kube-system(f56e4292-4136-4f41-a2d5-245b627504ec)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"39518c1b4feff14618099d0a7fd9326c316b2f1535276072cfc9ea84f951f16b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-cnhpl" podUID="f56e4292-4136-4f41-a2d5-245b627504ec" Feb 13 20:17:31.225792 kubelet[3267]: E0213 20:17:31.225628 3267 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"50a08a9a1d45fee7025d64772410578495a19605755cc69a8c3e5ea906ec2904\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:17:31.225792 kubelet[3267]: E0213 20:17:31.225658 3267 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"50a08a9a1d45fee7025d64772410578495a19605755cc69a8c3e5ea906ec2904\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/calico-kube-controllers-55886c68fb-v8l5j" Feb 13 20:17:31.225792 kubelet[3267]: E0213 20:17:31.225675 3267 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"50a08a9a1d45fee7025d64772410578495a19605755cc69a8c3e5ea906ec2904\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-55886c68fb-v8l5j" Feb 13 20:17:31.225856 kubelet[3267]: E0213 20:17:31.225705 3267 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-55886c68fb-v8l5j_calico-system(853990b6-ca4d-4293-b605-05f073fd0f8e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-55886c68fb-v8l5j_calico-system(853990b6-ca4d-4293-b605-05f073fd0f8e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"50a08a9a1d45fee7025d64772410578495a19605755cc69a8c3e5ea906ec2904\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-55886c68fb-v8l5j" podUID="853990b6-ca4d-4293-b605-05f073fd0f8e" Feb 13 20:17:31.226344 containerd[1795]: time="2025-02-13T20:17:31.226325947Z" level=error msg="Failed to destroy network for sandbox \"79fbcbd4b2d2256cf38904365a9ab3dc6f9cbbcd211a09e10d87bbad90d5214c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:17:31.226526 containerd[1795]: time="2025-02-13T20:17:31.226506347Z" level=error msg="encountered an error cleaning up failed sandbox 
\"79fbcbd4b2d2256cf38904365a9ab3dc6f9cbbcd211a09e10d87bbad90d5214c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:17:31.226582 containerd[1795]: time="2025-02-13T20:17:31.226543150Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kcvvn,Uid:64b01e33-b95d-49cd-8822-d1aaf4457527,Namespace:calico-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"79fbcbd4b2d2256cf38904365a9ab3dc6f9cbbcd211a09e10d87bbad90d5214c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:17:31.226671 kubelet[3267]: E0213 20:17:31.226657 3267 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"79fbcbd4b2d2256cf38904365a9ab3dc6f9cbbcd211a09e10d87bbad90d5214c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:17:31.226712 kubelet[3267]: E0213 20:17:31.226688 3267 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"79fbcbd4b2d2256cf38904365a9ab3dc6f9cbbcd211a09e10d87bbad90d5214c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-kcvvn" Feb 13 20:17:31.226712 kubelet[3267]: E0213 20:17:31.226705 3267 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"79fbcbd4b2d2256cf38904365a9ab3dc6f9cbbcd211a09e10d87bbad90d5214c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-kcvvn" Feb 13 20:17:31.226779 kubelet[3267]: E0213 20:17:31.226732 3267 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-kcvvn_calico-system(64b01e33-b95d-49cd-8822-d1aaf4457527)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-kcvvn_calico-system(64b01e33-b95d-49cd-8822-d1aaf4457527)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"79fbcbd4b2d2256cf38904365a9ab3dc6f9cbbcd211a09e10d87bbad90d5214c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-kcvvn" podUID="64b01e33-b95d-49cd-8822-d1aaf4457527" Feb 13 20:17:31.226874 containerd[1795]: time="2025-02-13T20:17:31.226856356Z" level=error msg="Failed to destroy network for sandbox \"0692a0f866719928d57cf8aacec44d023e05d2ab966d0b2ae83a9b3a5ef8d211\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:17:31.227061 containerd[1795]: time="2025-02-13T20:17:31.227044786Z" level=error msg="encountered an error cleaning up failed sandbox \"0692a0f866719928d57cf8aacec44d023e05d2ab966d0b2ae83a9b3a5ef8d211\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:17:31.227092 containerd[1795]: time="2025-02-13T20:17:31.227078374Z" 
level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-ff9d88786-8wrbb,Uid:a01e5e0d-d13b-4c53-a211-8d274b915928,Namespace:calico-apiserver,Attempt:4,} failed, error" error="failed to setup network for sandbox \"0692a0f866719928d57cf8aacec44d023e05d2ab966d0b2ae83a9b3a5ef8d211\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:17:31.227179 kubelet[3267]: E0213 20:17:31.227165 3267 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0692a0f866719928d57cf8aacec44d023e05d2ab966d0b2ae83a9b3a5ef8d211\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:17:31.227213 kubelet[3267]: E0213 20:17:31.227187 3267 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0692a0f866719928d57cf8aacec44d023e05d2ab966d0b2ae83a9b3a5ef8d211\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-ff9d88786-8wrbb" Feb 13 20:17:31.227213 kubelet[3267]: E0213 20:17:31.227199 3267 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0692a0f866719928d57cf8aacec44d023e05d2ab966d0b2ae83a9b3a5ef8d211\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-ff9d88786-8wrbb" Feb 13 20:17:31.227275 kubelet[3267]: E0213 20:17:31.227221 
3267 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-ff9d88786-8wrbb_calico-apiserver(a01e5e0d-d13b-4c53-a211-8d274b915928)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-ff9d88786-8wrbb_calico-apiserver(a01e5e0d-d13b-4c53-a211-8d274b915928)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0692a0f866719928d57cf8aacec44d023e05d2ab966d0b2ae83a9b3a5ef8d211\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-ff9d88786-8wrbb" podUID="a01e5e0d-d13b-4c53-a211-8d274b915928" Feb 13 20:17:31.227512 containerd[1795]: time="2025-02-13T20:17:31.227491622Z" level=error msg="Failed to destroy network for sandbox \"9eafbbcf7cbb6cba455fe6f2b8a333f2c00d360c0dc5136bb219532e1258d5bc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:17:31.227650 containerd[1795]: time="2025-02-13T20:17:31.227637200Z" level=error msg="encountered an error cleaning up failed sandbox \"9eafbbcf7cbb6cba455fe6f2b8a333f2c00d360c0dc5136bb219532e1258d5bc\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:17:31.227680 containerd[1795]: time="2025-02-13T20:17:31.227665173Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-frmtf,Uid:542fc81f-a219-47bd-b6bf-841062c32db8,Namespace:kube-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"9eafbbcf7cbb6cba455fe6f2b8a333f2c00d360c0dc5136bb219532e1258d5bc\": plugin type=\"calico\" 
failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:17:31.227702 containerd[1795]: time="2025-02-13T20:17:31.227682001Z" level=error msg="Failed to destroy network for sandbox \"17106c8bb18d62f653a47520e50170e8dd8bd8982475b8f783cc008aa8f335b4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:17:31.227745 kubelet[3267]: E0213 20:17:31.227732 3267 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9eafbbcf7cbb6cba455fe6f2b8a333f2c00d360c0dc5136bb219532e1258d5bc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:17:31.227769 kubelet[3267]: E0213 20:17:31.227752 3267 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9eafbbcf7cbb6cba455fe6f2b8a333f2c00d360c0dc5136bb219532e1258d5bc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-frmtf" Feb 13 20:17:31.227769 kubelet[3267]: E0213 20:17:31.227764 3267 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9eafbbcf7cbb6cba455fe6f2b8a333f2c00d360c0dc5136bb219532e1258d5bc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-frmtf" Feb 13 
20:17:31.227811 kubelet[3267]: E0213 20:17:31.227783 3267 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-frmtf_kube-system(542fc81f-a219-47bd-b6bf-841062c32db8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-frmtf_kube-system(542fc81f-a219-47bd-b6bf-841062c32db8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9eafbbcf7cbb6cba455fe6f2b8a333f2c00d360c0dc5136bb219532e1258d5bc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-frmtf" podUID="542fc81f-a219-47bd-b6bf-841062c32db8" Feb 13 20:17:31.227847 containerd[1795]: time="2025-02-13T20:17:31.227812996Z" level=error msg="encountered an error cleaning up failed sandbox \"17106c8bb18d62f653a47520e50170e8dd8bd8982475b8f783cc008aa8f335b4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:17:31.227847 containerd[1795]: time="2025-02-13T20:17:31.227838024Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-ff9d88786-v9j7t,Uid:36ab6480-30fc-4b9c-bca0-5bd3671f05dc,Namespace:calico-apiserver,Attempt:4,} failed, error" error="failed to setup network for sandbox \"17106c8bb18d62f653a47520e50170e8dd8bd8982475b8f783cc008aa8f335b4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:17:31.227915 kubelet[3267]: E0213 20:17:31.227902 3267 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"17106c8bb18d62f653a47520e50170e8dd8bd8982475b8f783cc008aa8f335b4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:17:31.227950 kubelet[3267]: E0213 20:17:31.227924 3267 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"17106c8bb18d62f653a47520e50170e8dd8bd8982475b8f783cc008aa8f335b4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-ff9d88786-v9j7t" Feb 13 20:17:31.227950 kubelet[3267]: E0213 20:17:31.227940 3267 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"17106c8bb18d62f653a47520e50170e8dd8bd8982475b8f783cc008aa8f335b4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-ff9d88786-v9j7t" Feb 13 20:17:31.228003 kubelet[3267]: E0213 20:17:31.227966 3267 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-ff9d88786-v9j7t_calico-apiserver(36ab6480-30fc-4b9c-bca0-5bd3671f05dc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-ff9d88786-v9j7t_calico-apiserver(36ab6480-30fc-4b9c-bca0-5bd3671f05dc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"17106c8bb18d62f653a47520e50170e8dd8bd8982475b8f783cc008aa8f335b4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-apiserver/calico-apiserver-ff9d88786-v9j7t" podUID="36ab6480-30fc-4b9c-bca0-5bd3671f05dc" Feb 13 20:17:31.521268 containerd[1795]: time="2025-02-13T20:17:31.521175477Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:17:31.521442 containerd[1795]: time="2025-02-13T20:17:31.521397777Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010" Feb 13 20:17:31.521746 containerd[1795]: time="2025-02-13T20:17:31.521703781Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:17:31.522561 containerd[1795]: time="2025-02-13T20:17:31.522547756Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:17:31.522987 containerd[1795]: time="2025-02-13T20:17:31.522974555Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 3.370295526s" Feb 13 20:17:31.523015 containerd[1795]: time="2025-02-13T20:17:31.522990688Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\"" Feb 13 20:17:31.526411 containerd[1795]: time="2025-02-13T20:17:31.526398119Z" level=info msg="CreateContainer within sandbox \"e4ac01215380ff3fbc397c88d560637cb936265063f048065866bd5bd584d2d5\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Feb 
13 20:17:31.537166 containerd[1795]: time="2025-02-13T20:17:31.537120216Z" level=info msg="CreateContainer within sandbox \"e4ac01215380ff3fbc397c88d560637cb936265063f048065866bd5bd584d2d5\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"5978cfdb7a568914399cf4c18a14c492fb468d7c1da2dfe819a9f7a84abc3276\"" Feb 13 20:17:31.537420 containerd[1795]: time="2025-02-13T20:17:31.537383111Z" level=info msg="StartContainer for \"5978cfdb7a568914399cf4c18a14c492fb468d7c1da2dfe819a9f7a84abc3276\"" Feb 13 20:17:31.561606 systemd[1]: Started cri-containerd-5978cfdb7a568914399cf4c18a14c492fb468d7c1da2dfe819a9f7a84abc3276.scope - libcontainer container 5978cfdb7a568914399cf4c18a14c492fb468d7c1da2dfe819a9f7a84abc3276. Feb 13 20:17:31.583249 containerd[1795]: time="2025-02-13T20:17:31.583217936Z" level=info msg="StartContainer for \"5978cfdb7a568914399cf4c18a14c492fb468d7c1da2dfe819a9f7a84abc3276\" returns successfully" Feb 13 20:17:31.651249 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Feb 13 20:17:31.651307 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Feb 13 20:17:32.144649 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-39518c1b4feff14618099d0a7fd9326c316b2f1535276072cfc9ea84f951f16b-shm.mount: Deactivated successfully. Feb 13 20:17:32.144721 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3558071734.mount: Deactivated successfully. 
Feb 13 20:17:32.177602 kubelet[3267]: I0213 20:17:32.177583 3267 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0692a0f866719928d57cf8aacec44d023e05d2ab966d0b2ae83a9b3a5ef8d211" Feb 13 20:17:32.177901 containerd[1795]: time="2025-02-13T20:17:32.177883997Z" level=info msg="StopPodSandbox for \"0692a0f866719928d57cf8aacec44d023e05d2ab966d0b2ae83a9b3a5ef8d211\"" Feb 13 20:17:32.178063 containerd[1795]: time="2025-02-13T20:17:32.178020477Z" level=info msg="Ensure that sandbox 0692a0f866719928d57cf8aacec44d023e05d2ab966d0b2ae83a9b3a5ef8d211 in task-service has been cleanup successfully" Feb 13 20:17:32.178155 containerd[1795]: time="2025-02-13T20:17:32.178140696Z" level=info msg="TearDown network for sandbox \"0692a0f866719928d57cf8aacec44d023e05d2ab966d0b2ae83a9b3a5ef8d211\" successfully" Feb 13 20:17:32.178185 containerd[1795]: time="2025-02-13T20:17:32.178155862Z" level=info msg="StopPodSandbox for \"0692a0f866719928d57cf8aacec44d023e05d2ab966d0b2ae83a9b3a5ef8d211\" returns successfully" Feb 13 20:17:32.178317 containerd[1795]: time="2025-02-13T20:17:32.178300296Z" level=info msg="StopPodSandbox for \"58c1979e9a2a6e936d17dced152cdd4de3bc1944cc1ff89a55fe1e1b1bfbd636\"" Feb 13 20:17:32.178416 containerd[1795]: time="2025-02-13T20:17:32.178373919Z" level=info msg="TearDown network for sandbox \"58c1979e9a2a6e936d17dced152cdd4de3bc1944cc1ff89a55fe1e1b1bfbd636\" successfully" Feb 13 20:17:32.178441 containerd[1795]: time="2025-02-13T20:17:32.178417484Z" level=info msg="StopPodSandbox for \"58c1979e9a2a6e936d17dced152cdd4de3bc1944cc1ff89a55fe1e1b1bfbd636\" returns successfully" Feb 13 20:17:32.178681 containerd[1795]: time="2025-02-13T20:17:32.178631124Z" level=info msg="StopPodSandbox for \"2d9c5629e507dde38691e79e7bbfe3f190891c2fac1fe7d5ba94df8f3cc59e20\"" Feb 13 20:17:32.178725 containerd[1795]: time="2025-02-13T20:17:32.178692660Z" level=info msg="TearDown network for sandbox 
\"2d9c5629e507dde38691e79e7bbfe3f190891c2fac1fe7d5ba94df8f3cc59e20\" successfully" Feb 13 20:17:32.178725 containerd[1795]: time="2025-02-13T20:17:32.178719473Z" level=info msg="StopPodSandbox for \"2d9c5629e507dde38691e79e7bbfe3f190891c2fac1fe7d5ba94df8f3cc59e20\" returns successfully" Feb 13 20:17:32.178836 kubelet[3267]: I0213 20:17:32.178823 3267 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="39518c1b4feff14618099d0a7fd9326c316b2f1535276072cfc9ea84f951f16b" Feb 13 20:17:32.178923 containerd[1795]: time="2025-02-13T20:17:32.178887522Z" level=info msg="StopPodSandbox for \"16ed88c7929a4c6125d9006188b80e4edc5622a36d52f3e519a53143f17d7ef7\"" Feb 13 20:17:32.178993 containerd[1795]: time="2025-02-13T20:17:32.178977935Z" level=info msg="TearDown network for sandbox \"16ed88c7929a4c6125d9006188b80e4edc5622a36d52f3e519a53143f17d7ef7\" successfully" Feb 13 20:17:32.179041 containerd[1795]: time="2025-02-13T20:17:32.178993933Z" level=info msg="StopPodSandbox for \"16ed88c7929a4c6125d9006188b80e4edc5622a36d52f3e519a53143f17d7ef7\" returns successfully" Feb 13 20:17:32.179162 containerd[1795]: time="2025-02-13T20:17:32.179150132Z" level=info msg="StopPodSandbox for \"b3497dacf36ec8b56010ca6778e1a849107e29c7f04852168e3e16e0ca2c7348\"" Feb 13 20:17:32.179193 containerd[1795]: time="2025-02-13T20:17:32.179171500Z" level=info msg="StopPodSandbox for \"39518c1b4feff14618099d0a7fd9326c316b2f1535276072cfc9ea84f951f16b\"" Feb 13 20:17:32.179218 containerd[1795]: time="2025-02-13T20:17:32.179200631Z" level=info msg="TearDown network for sandbox \"b3497dacf36ec8b56010ca6778e1a849107e29c7f04852168e3e16e0ca2c7348\" successfully" Feb 13 20:17:32.179218 containerd[1795]: time="2025-02-13T20:17:32.179208551Z" level=info msg="StopPodSandbox for \"b3497dacf36ec8b56010ca6778e1a849107e29c7f04852168e3e16e0ca2c7348\" returns successfully" Feb 13 20:17:32.179291 containerd[1795]: time="2025-02-13T20:17:32.179275826Z" level=info msg="Ensure that sandbox 
39518c1b4feff14618099d0a7fd9326c316b2f1535276072cfc9ea84f951f16b in task-service has been cleanup successfully" Feb 13 20:17:32.179388 containerd[1795]: time="2025-02-13T20:17:32.179374646Z" level=info msg="TearDown network for sandbox \"39518c1b4feff14618099d0a7fd9326c316b2f1535276072cfc9ea84f951f16b\" successfully" Feb 13 20:17:32.179435 containerd[1795]: time="2025-02-13T20:17:32.179387851Z" level=info msg="StopPodSandbox for \"39518c1b4feff14618099d0a7fd9326c316b2f1535276072cfc9ea84f951f16b\" returns successfully" Feb 13 20:17:32.179435 containerd[1795]: time="2025-02-13T20:17:32.179420384Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-ff9d88786-8wrbb,Uid:a01e5e0d-d13b-4c53-a211-8d274b915928,Namespace:calico-apiserver,Attempt:5,}" Feb 13 20:17:32.179584 containerd[1795]: time="2025-02-13T20:17:32.179565357Z" level=info msg="StopPodSandbox for \"26c4c6208b535369ade9ba2d1c9efe43b14de5e700df1bb94d808459209f5627\"" Feb 13 20:17:32.179678 containerd[1795]: time="2025-02-13T20:17:32.179643670Z" level=info msg="TearDown network for sandbox \"26c4c6208b535369ade9ba2d1c9efe43b14de5e700df1bb94d808459209f5627\" successfully" Feb 13 20:17:32.179715 containerd[1795]: time="2025-02-13T20:17:32.179679131Z" level=info msg="StopPodSandbox for \"26c4c6208b535369ade9ba2d1c9efe43b14de5e700df1bb94d808459209f5627\" returns successfully" Feb 13 20:17:32.179833 containerd[1795]: time="2025-02-13T20:17:32.179824015Z" level=info msg="StopPodSandbox for \"a6724c0edda64bd1462a71a9c7e44436716599495667250f4790209be0006577\"" Feb 13 20:17:32.179874 containerd[1795]: time="2025-02-13T20:17:32.179865346Z" level=info msg="TearDown network for sandbox \"a6724c0edda64bd1462a71a9c7e44436716599495667250f4790209be0006577\" successfully" Feb 13 20:17:32.179894 containerd[1795]: time="2025-02-13T20:17:32.179875440Z" level=info msg="StopPodSandbox for \"a6724c0edda64bd1462a71a9c7e44436716599495667250f4790209be0006577\" returns successfully" Feb 13 20:17:32.179974 
containerd[1795]: time="2025-02-13T20:17:32.179966788Z" level=info msg="StopPodSandbox for \"e924f016fb6c20430a67bbab9156e89c69d4fea27ab974c6c2a114f4e60ddfaf\"" Feb 13 20:17:32.179998 kubelet[3267]: I0213 20:17:32.179988 3267 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="17106c8bb18d62f653a47520e50170e8dd8bd8982475b8f783cc008aa8f335b4" Feb 13 20:17:32.180021 containerd[1795]: time="2025-02-13T20:17:32.180000191Z" level=info msg="TearDown network for sandbox \"e924f016fb6c20430a67bbab9156e89c69d4fea27ab974c6c2a114f4e60ddfaf\" successfully" Feb 13 20:17:32.180021 containerd[1795]: time="2025-02-13T20:17:32.180005717Z" level=info msg="StopPodSandbox for \"e924f016fb6c20430a67bbab9156e89c69d4fea27ab974c6c2a114f4e60ddfaf\" returns successfully" Feb 13 20:17:32.180114 containerd[1795]: time="2025-02-13T20:17:32.180104830Z" level=info msg="StopPodSandbox for \"fd52d3f97e9a1405278690dbfe9773e4c5bf6ddadf750c07b522c7e0964feb47\"" Feb 13 20:17:32.180150 containerd[1795]: time="2025-02-13T20:17:32.180142477Z" level=info msg="TearDown network for sandbox \"fd52d3f97e9a1405278690dbfe9773e4c5bf6ddadf750c07b522c7e0964feb47\" successfully" Feb 13 20:17:32.180184 containerd[1795]: time="2025-02-13T20:17:32.180148894Z" level=info msg="StopPodSandbox for \"fd52d3f97e9a1405278690dbfe9773e4c5bf6ddadf750c07b522c7e0964feb47\" returns successfully" Feb 13 20:17:32.180273 systemd[1]: run-netns-cni\x2d628e0b5a\x2d72e7\x2ddd0e\x2d82ed\x2d4cc6110bb5ec.mount: Deactivated successfully. 
Feb 13 20:17:32.180397 containerd[1795]: time="2025-02-13T20:17:32.180289534Z" level=info msg="StopPodSandbox for \"17106c8bb18d62f653a47520e50170e8dd8bd8982475b8f783cc008aa8f335b4\"" Feb 13 20:17:32.180397 containerd[1795]: time="2025-02-13T20:17:32.180317317Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-cnhpl,Uid:f56e4292-4136-4f41-a2d5-245b627504ec,Namespace:kube-system,Attempt:5,}" Feb 13 20:17:32.180397 containerd[1795]: time="2025-02-13T20:17:32.180375171Z" level=info msg="Ensure that sandbox 17106c8bb18d62f653a47520e50170e8dd8bd8982475b8f783cc008aa8f335b4 in task-service has been cleanup successfully" Feb 13 20:17:32.180478 containerd[1795]: time="2025-02-13T20:17:32.180452017Z" level=info msg="TearDown network for sandbox \"17106c8bb18d62f653a47520e50170e8dd8bd8982475b8f783cc008aa8f335b4\" successfully" Feb 13 20:17:32.180478 containerd[1795]: time="2025-02-13T20:17:32.180462569Z" level=info msg="StopPodSandbox for \"17106c8bb18d62f653a47520e50170e8dd8bd8982475b8f783cc008aa8f335b4\" returns successfully" Feb 13 20:17:32.180581 containerd[1795]: time="2025-02-13T20:17:32.180571341Z" level=info msg="StopPodSandbox for \"4221ea701a39ec4dd3a0d137cab931e4952b6af05d86d5f2ccc74c4d565d63d6\"" Feb 13 20:17:32.180625 containerd[1795]: time="2025-02-13T20:17:32.180617761Z" level=info msg="TearDown network for sandbox \"4221ea701a39ec4dd3a0d137cab931e4952b6af05d86d5f2ccc74c4d565d63d6\" successfully" Feb 13 20:17:32.180643 containerd[1795]: time="2025-02-13T20:17:32.180625726Z" level=info msg="StopPodSandbox for \"4221ea701a39ec4dd3a0d137cab931e4952b6af05d86d5f2ccc74c4d565d63d6\" returns successfully" Feb 13 20:17:32.180754 containerd[1795]: time="2025-02-13T20:17:32.180742787Z" level=info msg="StopPodSandbox for \"53ad3a4fafcda47a238d07836dd93b5693aae139440c6f77a6d1eb3fbc605fac\"" Feb 13 20:17:32.180799 containerd[1795]: time="2025-02-13T20:17:32.180782302Z" level=info msg="TearDown network for sandbox 
\"53ad3a4fafcda47a238d07836dd93b5693aae139440c6f77a6d1eb3fbc605fac\" successfully" Feb 13 20:17:32.180799 containerd[1795]: time="2025-02-13T20:17:32.180788513Z" level=info msg="StopPodSandbox for \"53ad3a4fafcda47a238d07836dd93b5693aae139440c6f77a6d1eb3fbc605fac\" returns successfully" Feb 13 20:17:32.180909 containerd[1795]: time="2025-02-13T20:17:32.180898956Z" level=info msg="StopPodSandbox for \"fedba8a051bccde5aefce8bb80d817a2869be0c7484736372697428c813b98ff\"" Feb 13 20:17:32.180946 containerd[1795]: time="2025-02-13T20:17:32.180938519Z" level=info msg="TearDown network for sandbox \"fedba8a051bccde5aefce8bb80d817a2869be0c7484736372697428c813b98ff\" successfully" Feb 13 20:17:32.180980 containerd[1795]: time="2025-02-13T20:17:32.180945823Z" level=info msg="StopPodSandbox for \"fedba8a051bccde5aefce8bb80d817a2869be0c7484736372697428c813b98ff\" returns successfully" Feb 13 20:17:32.181010 kubelet[3267]: I0213 20:17:32.180948 3267 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="50a08a9a1d45fee7025d64772410578495a19605755cc69a8c3e5ea906ec2904" Feb 13 20:17:32.181095 containerd[1795]: time="2025-02-13T20:17:32.181084310Z" level=info msg="StopPodSandbox for \"b4ce2a92348a0c26ae24d67b825325d18f8179c342c157b6519d8610f65f514b\"" Feb 13 20:17:32.181149 containerd[1795]: time="2025-02-13T20:17:32.181126977Z" level=info msg="TearDown network for sandbox \"b4ce2a92348a0c26ae24d67b825325d18f8179c342c157b6519d8610f65f514b\" successfully" Feb 13 20:17:32.181182 containerd[1795]: time="2025-02-13T20:17:32.181149237Z" level=info msg="StopPodSandbox for \"b4ce2a92348a0c26ae24d67b825325d18f8179c342c157b6519d8610f65f514b\" returns successfully" Feb 13 20:17:32.181182 containerd[1795]: time="2025-02-13T20:17:32.181133028Z" level=info msg="StopPodSandbox for \"50a08a9a1d45fee7025d64772410578495a19605755cc69a8c3e5ea906ec2904\"" Feb 13 20:17:32.181274 containerd[1795]: time="2025-02-13T20:17:32.181264508Z" level=info msg="Ensure that sandbox 
50a08a9a1d45fee7025d64772410578495a19605755cc69a8c3e5ea906ec2904 in task-service has been cleanup successfully" Feb 13 20:17:32.181318 containerd[1795]: time="2025-02-13T20:17:32.181307666Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-ff9d88786-v9j7t,Uid:36ab6480-30fc-4b9c-bca0-5bd3671f05dc,Namespace:calico-apiserver,Attempt:5,}" Feb 13 20:17:32.181350 containerd[1795]: time="2025-02-13T20:17:32.181339997Z" level=info msg="TearDown network for sandbox \"50a08a9a1d45fee7025d64772410578495a19605755cc69a8c3e5ea906ec2904\" successfully" Feb 13 20:17:32.181379 containerd[1795]: time="2025-02-13T20:17:32.181349484Z" level=info msg="StopPodSandbox for \"50a08a9a1d45fee7025d64772410578495a19605755cc69a8c3e5ea906ec2904\" returns successfully" Feb 13 20:17:32.181477 containerd[1795]: time="2025-02-13T20:17:32.181467096Z" level=info msg="StopPodSandbox for \"d735a1491d84d0eaf4a12bd073967a73bb758324b20ddd2802535023b346c9b0\"" Feb 13 20:17:32.181515 containerd[1795]: time="2025-02-13T20:17:32.181508549Z" level=info msg="TearDown network for sandbox \"d735a1491d84d0eaf4a12bd073967a73bb758324b20ddd2802535023b346c9b0\" successfully" Feb 13 20:17:32.181535 containerd[1795]: time="2025-02-13T20:17:32.181515762Z" level=info msg="StopPodSandbox for \"d735a1491d84d0eaf4a12bd073967a73bb758324b20ddd2802535023b346c9b0\" returns successfully" Feb 13 20:17:32.181672 containerd[1795]: time="2025-02-13T20:17:32.181663309Z" level=info msg="StopPodSandbox for \"d7f28ca0ecacdf622e04d94b3cfec4244085c08948f732c8def05b038973d2ca\"" Feb 13 20:17:32.181705 containerd[1795]: time="2025-02-13T20:17:32.181698261Z" level=info msg="TearDown network for sandbox \"d7f28ca0ecacdf622e04d94b3cfec4244085c08948f732c8def05b038973d2ca\" successfully" Feb 13 20:17:32.181705 containerd[1795]: time="2025-02-13T20:17:32.181704119Z" level=info msg="StopPodSandbox for \"d7f28ca0ecacdf622e04d94b3cfec4244085c08948f732c8def05b038973d2ca\" returns successfully" Feb 13 20:17:32.181824 
containerd[1795]: time="2025-02-13T20:17:32.181814710Z" level=info msg="StopPodSandbox for \"21e4bf6e21241b54d8bfddcb494e2519a92b48eb90676fa95896db0ca262525c\"" Feb 13 20:17:32.181847 kubelet[3267]: I0213 20:17:32.181822 3267 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="79fbcbd4b2d2256cf38904365a9ab3dc6f9cbbcd211a09e10d87bbad90d5214c" Feb 13 20:17:32.181875 containerd[1795]: time="2025-02-13T20:17:32.181855236Z" level=info msg="TearDown network for sandbox \"21e4bf6e21241b54d8bfddcb494e2519a92b48eb90676fa95896db0ca262525c\" successfully" Feb 13 20:17:32.181875 containerd[1795]: time="2025-02-13T20:17:32.181861702Z" level=info msg="StopPodSandbox for \"21e4bf6e21241b54d8bfddcb494e2519a92b48eb90676fa95896db0ca262525c\" returns successfully" Feb 13 20:17:32.181989 containerd[1795]: time="2025-02-13T20:17:32.181976952Z" level=info msg="StopPodSandbox for \"7ed1a43b73d432d232df1df54c79d9006e1b5d16a86f107e55387abae9ed7d7f\"" Feb 13 20:17:32.182026 containerd[1795]: time="2025-02-13T20:17:32.182018419Z" level=info msg="TearDown network for sandbox \"7ed1a43b73d432d232df1df54c79d9006e1b5d16a86f107e55387abae9ed7d7f\" successfully" Feb 13 20:17:32.182054 containerd[1795]: time="2025-02-13T20:17:32.182025731Z" level=info msg="StopPodSandbox for \"7ed1a43b73d432d232df1df54c79d9006e1b5d16a86f107e55387abae9ed7d7f\" returns successfully" Feb 13 20:17:32.182107 containerd[1795]: time="2025-02-13T20:17:32.182096575Z" level=info msg="StopPodSandbox for \"79fbcbd4b2d2256cf38904365a9ab3dc6f9cbbcd211a09e10d87bbad90d5214c\"" Feb 13 20:17:32.182195 containerd[1795]: time="2025-02-13T20:17:32.182182443Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-55886c68fb-v8l5j,Uid:853990b6-ca4d-4293-b605-05f073fd0f8e,Namespace:calico-system,Attempt:5,}" Feb 13 20:17:32.182228 containerd[1795]: time="2025-02-13T20:17:32.182200439Z" level=info msg="Ensure that sandbox 
79fbcbd4b2d2256cf38904365a9ab3dc6f9cbbcd211a09e10d87bbad90d5214c in task-service has been cleanup successfully" Feb 13 20:17:32.182305 containerd[1795]: time="2025-02-13T20:17:32.182295209Z" level=info msg="TearDown network for sandbox \"79fbcbd4b2d2256cf38904365a9ab3dc6f9cbbcd211a09e10d87bbad90d5214c\" successfully" Feb 13 20:17:32.182343 containerd[1795]: time="2025-02-13T20:17:32.182304419Z" level=info msg="StopPodSandbox for \"79fbcbd4b2d2256cf38904365a9ab3dc6f9cbbcd211a09e10d87bbad90d5214c\" returns successfully" Feb 13 20:17:32.182379 systemd[1]: run-netns-cni\x2d956e8e20\x2dcca7\x2d26ee\x2d5e77\x2db7edb37bdcad.mount: Deactivated successfully. Feb 13 20:17:32.182434 containerd[1795]: time="2025-02-13T20:17:32.182423658Z" level=info msg="StopPodSandbox for \"59336c9984fec613f36c76eda4de6d6c6f327505190ae7d66c8b70b1d1898a5a\"" Feb 13 20:17:32.182463 systemd[1]: run-netns-cni\x2db5c3f7b1\x2d8fbf\x2d74cc\x2dc823\x2d531955a84803.mount: Deactivated successfully. Feb 13 20:17:32.182512 containerd[1795]: time="2025-02-13T20:17:32.182478471Z" level=info msg="TearDown network for sandbox \"59336c9984fec613f36c76eda4de6d6c6f327505190ae7d66c8b70b1d1898a5a\" successfully" Feb 13 20:17:32.182512 containerd[1795]: time="2025-02-13T20:17:32.182488371Z" level=info msg="StopPodSandbox for \"59336c9984fec613f36c76eda4de6d6c6f327505190ae7d66c8b70b1d1898a5a\" returns successfully" Feb 13 20:17:32.183050 containerd[1795]: time="2025-02-13T20:17:32.182571831Z" level=info msg="StopPodSandbox for \"742662f7025f6bf31a0897647c434208ed695dd3a3c685264cb7223a8679270d\"" Feb 13 20:17:32.183050 containerd[1795]: time="2025-02-13T20:17:32.182609132Z" level=info msg="TearDown network for sandbox \"742662f7025f6bf31a0897647c434208ed695dd3a3c685264cb7223a8679270d\" successfully" Feb 13 20:17:32.183050 containerd[1795]: time="2025-02-13T20:17:32.182615035Z" level=info msg="StopPodSandbox for \"742662f7025f6bf31a0897647c434208ed695dd3a3c685264cb7223a8679270d\" returns successfully" Feb 13 
20:17:32.183050 containerd[1795]: time="2025-02-13T20:17:32.182715415Z" level=info msg="StopPodSandbox for \"d1b03cea096b09213f490a993ed65f8af74b79974ee36e7254a5f60edefd9c53\"" Feb 13 20:17:32.183050 containerd[1795]: time="2025-02-13T20:17:32.182756457Z" level=info msg="TearDown network for sandbox \"d1b03cea096b09213f490a993ed65f8af74b79974ee36e7254a5f60edefd9c53\" successfully" Feb 13 20:17:32.183050 containerd[1795]: time="2025-02-13T20:17:32.182762700Z" level=info msg="StopPodSandbox for \"d1b03cea096b09213f490a993ed65f8af74b79974ee36e7254a5f60edefd9c53\" returns successfully" Feb 13 20:17:32.183050 containerd[1795]: time="2025-02-13T20:17:32.182887107Z" level=info msg="StopPodSandbox for \"4101e018441a5f170b331eaff8faff2caa5b81435959ed6c493ff174cf8464eb\"" Feb 13 20:17:32.183050 containerd[1795]: time="2025-02-13T20:17:32.182928021Z" level=info msg="TearDown network for sandbox \"4101e018441a5f170b331eaff8faff2caa5b81435959ed6c493ff174cf8464eb\" successfully" Feb 13 20:17:32.183050 containerd[1795]: time="2025-02-13T20:17:32.182937333Z" level=info msg="StopPodSandbox for \"4101e018441a5f170b331eaff8faff2caa5b81435959ed6c493ff174cf8464eb\" returns successfully" Feb 13 20:17:32.183296 containerd[1795]: time="2025-02-13T20:17:32.183129965Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kcvvn,Uid:64b01e33-b95d-49cd-8822-d1aaf4457527,Namespace:calico-system,Attempt:5,}" Feb 13 20:17:32.183557 kubelet[3267]: I0213 20:17:32.183549 3267 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9eafbbcf7cbb6cba455fe6f2b8a333f2c00d360c0dc5136bb219532e1258d5bc" Feb 13 20:17:32.183773 containerd[1795]: time="2025-02-13T20:17:32.183762417Z" level=info msg="StopPodSandbox for \"9eafbbcf7cbb6cba455fe6f2b8a333f2c00d360c0dc5136bb219532e1258d5bc\"" Feb 13 20:17:32.183875 containerd[1795]: time="2025-02-13T20:17:32.183863942Z" level=info msg="Ensure that sandbox 
9eafbbcf7cbb6cba455fe6f2b8a333f2c00d360c0dc5136bb219532e1258d5bc in task-service has been cleanup successfully" Feb 13 20:17:32.183969 containerd[1795]: time="2025-02-13T20:17:32.183959103Z" level=info msg="TearDown network for sandbox \"9eafbbcf7cbb6cba455fe6f2b8a333f2c00d360c0dc5136bb219532e1258d5bc\" successfully" Feb 13 20:17:32.184003 containerd[1795]: time="2025-02-13T20:17:32.183968050Z" level=info msg="StopPodSandbox for \"9eafbbcf7cbb6cba455fe6f2b8a333f2c00d360c0dc5136bb219532e1258d5bc\" returns successfully" Feb 13 20:17:32.184102 containerd[1795]: time="2025-02-13T20:17:32.184091051Z" level=info msg="StopPodSandbox for \"3baac9f550f08e67ebd65d0dc083ed7988985fc017ce08034e9dbf44d3c29e50\"" Feb 13 20:17:32.184151 containerd[1795]: time="2025-02-13T20:17:32.184141995Z" level=info msg="TearDown network for sandbox \"3baac9f550f08e67ebd65d0dc083ed7988985fc017ce08034e9dbf44d3c29e50\" successfully" Feb 13 20:17:32.184187 containerd[1795]: time="2025-02-13T20:17:32.184151084Z" level=info msg="StopPodSandbox for \"3baac9f550f08e67ebd65d0dc083ed7988985fc017ce08034e9dbf44d3c29e50\" returns successfully" Feb 13 20:17:32.184286 containerd[1795]: time="2025-02-13T20:17:32.184276330Z" level=info msg="StopPodSandbox for \"68b777db6f50b3f0cb4f0f1a6c46d29799a1daa82d37d65230cc719071ff323c\"" Feb 13 20:17:32.184321 containerd[1795]: time="2025-02-13T20:17:32.184314925Z" level=info msg="TearDown network for sandbox \"68b777db6f50b3f0cb4f0f1a6c46d29799a1daa82d37d65230cc719071ff323c\" successfully" Feb 13 20:17:32.184340 containerd[1795]: time="2025-02-13T20:17:32.184321756Z" level=info msg="StopPodSandbox for \"68b777db6f50b3f0cb4f0f1a6c46d29799a1daa82d37d65230cc719071ff323c\" returns successfully" Feb 13 20:17:32.184435 containerd[1795]: time="2025-02-13T20:17:32.184424621Z" level=info msg="StopPodSandbox for \"4d57310ec40dcb370935e6e9bdfcb6c64a98c34f917a4d8dbc684a1876240bd1\"" Feb 13 20:17:32.184497 containerd[1795]: time="2025-02-13T20:17:32.184486939Z" level=info 
msg="TearDown network for sandbox \"4d57310ec40dcb370935e6e9bdfcb6c64a98c34f917a4d8dbc684a1876240bd1\" successfully" Feb 13 20:17:32.184515 containerd[1795]: time="2025-02-13T20:17:32.184497808Z" level=info msg="StopPodSandbox for \"4d57310ec40dcb370935e6e9bdfcb6c64a98c34f917a4d8dbc684a1876240bd1\" returns successfully" Feb 13 20:17:32.184619 containerd[1795]: time="2025-02-13T20:17:32.184609758Z" level=info msg="StopPodSandbox for \"9ad3372cb116d6054d70b1eaa52144318f392d424263acd32de291b731176c1b\"" Feb 13 20:17:32.184655 containerd[1795]: time="2025-02-13T20:17:32.184648403Z" level=info msg="TearDown network for sandbox \"9ad3372cb116d6054d70b1eaa52144318f392d424263acd32de291b731176c1b\" successfully" Feb 13 20:17:32.184684 containerd[1795]: time="2025-02-13T20:17:32.184655132Z" level=info msg="StopPodSandbox for \"9ad3372cb116d6054d70b1eaa52144318f392d424263acd32de291b731176c1b\" returns successfully" Feb 13 20:17:32.184742 systemd[1]: run-netns-cni\x2d4c784674\x2d991a\x2de1be\x2d3901\x2d90aaa6d76ad3.mount: Deactivated successfully. Feb 13 20:17:32.184827 systemd[1]: run-netns-cni\x2d1b64e754\x2d2b14\x2d2a61\x2d6fcf\x2de01904a459a7.mount: Deactivated successfully. Feb 13 20:17:32.184858 containerd[1795]: time="2025-02-13T20:17:32.184834998Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-frmtf,Uid:542fc81f-a219-47bd-b6bf-841062c32db8,Namespace:kube-system,Attempt:5,}" Feb 13 20:17:32.186972 systemd[1]: run-netns-cni\x2dbbe4693c\x2d5f2c\x2dbbff\x2d298f\x2d7be11e73421a.mount: Deactivated successfully. 
Feb 13 20:17:32.206250 kubelet[3267]: I0213 20:17:32.206136 3267 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-fc7qg" podStartSLOduration=1.206249772 podStartE2EDuration="13.206125321s" podCreationTimestamp="2025-02-13 20:17:19 +0000 UTC" firstStartedPulling="2025-02-13 20:17:19.52350133 +0000 UTC m=+19.513783248" lastFinishedPulling="2025-02-13 20:17:31.523376886 +0000 UTC m=+31.513658797" observedRunningTime="2025-02-13 20:17:32.205933278 +0000 UTC m=+32.196215193" watchObservedRunningTime="2025-02-13 20:17:32.206125321 +0000 UTC m=+32.196407229" Feb 13 20:17:32.253959 systemd-networkd[1706]: calidc7d898be18: Link UP Feb 13 20:17:32.254056 systemd-networkd[1706]: calidc7d898be18: Gained carrier Feb 13 20:17:32.258668 containerd[1795]: 2025-02-13 20:17:32.206 [INFO][5718] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 13 20:17:32.258668 containerd[1795]: 2025-02-13 20:17:32.212 [INFO][5718] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4152.2.1--a--5d3d77ba07-k8s-calico--apiserver--ff9d88786--v9j7t-eth0 calico-apiserver-ff9d88786- calico-apiserver 36ab6480-30fc-4b9c-bca0-5bd3671f05dc 665 0 2025-02-13 20:17:19 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:ff9d88786 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4152.2.1-a-5d3d77ba07 calico-apiserver-ff9d88786-v9j7t eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calidc7d898be18 [] []}} ContainerID="a92661161cb54178273a254eeafbfcf0f82aac18ce9dbd4b2ed4c408618fca84" Namespace="calico-apiserver" Pod="calico-apiserver-ff9d88786-v9j7t" WorkloadEndpoint="ci--4152.2.1--a--5d3d77ba07-k8s-calico--apiserver--ff9d88786--v9j7t-" Feb 13 20:17:32.258668 containerd[1795]: 2025-02-13 
20:17:32.212 [INFO][5718] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="a92661161cb54178273a254eeafbfcf0f82aac18ce9dbd4b2ed4c408618fca84" Namespace="calico-apiserver" Pod="calico-apiserver-ff9d88786-v9j7t" WorkloadEndpoint="ci--4152.2.1--a--5d3d77ba07-k8s-calico--apiserver--ff9d88786--v9j7t-eth0" Feb 13 20:17:32.258668 containerd[1795]: 2025-02-13 20:17:32.227 [INFO][5829] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a92661161cb54178273a254eeafbfcf0f82aac18ce9dbd4b2ed4c408618fca84" HandleID="k8s-pod-network.a92661161cb54178273a254eeafbfcf0f82aac18ce9dbd4b2ed4c408618fca84" Workload="ci--4152.2.1--a--5d3d77ba07-k8s-calico--apiserver--ff9d88786--v9j7t-eth0" Feb 13 20:17:32.258668 containerd[1795]: 2025-02-13 20:17:32.234 [INFO][5829] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a92661161cb54178273a254eeafbfcf0f82aac18ce9dbd4b2ed4c408618fca84" HandleID="k8s-pod-network.a92661161cb54178273a254eeafbfcf0f82aac18ce9dbd4b2ed4c408618fca84" Workload="ci--4152.2.1--a--5d3d77ba07-k8s-calico--apiserver--ff9d88786--v9j7t-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00029a400), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4152.2.1-a-5d3d77ba07", "pod":"calico-apiserver-ff9d88786-v9j7t", "timestamp":"2025-02-13 20:17:32.227293135 +0000 UTC"}, Hostname:"ci-4152.2.1-a-5d3d77ba07", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 20:17:32.258668 containerd[1795]: 2025-02-13 20:17:32.234 [INFO][5829] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:17:32.258668 containerd[1795]: 2025-02-13 20:17:32.234 [INFO][5829] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 20:17:32.258668 containerd[1795]: 2025-02-13 20:17:32.234 [INFO][5829] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4152.2.1-a-5d3d77ba07' Feb 13 20:17:32.258668 containerd[1795]: 2025-02-13 20:17:32.235 [INFO][5829] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.a92661161cb54178273a254eeafbfcf0f82aac18ce9dbd4b2ed4c408618fca84" host="ci-4152.2.1-a-5d3d77ba07" Feb 13 20:17:32.258668 containerd[1795]: 2025-02-13 20:17:32.237 [INFO][5829] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4152.2.1-a-5d3d77ba07" Feb 13 20:17:32.258668 containerd[1795]: 2025-02-13 20:17:32.240 [INFO][5829] ipam/ipam.go 489: Trying affinity for 192.168.99.0/26 host="ci-4152.2.1-a-5d3d77ba07" Feb 13 20:17:32.258668 containerd[1795]: 2025-02-13 20:17:32.241 [INFO][5829] ipam/ipam.go 155: Attempting to load block cidr=192.168.99.0/26 host="ci-4152.2.1-a-5d3d77ba07" Feb 13 20:17:32.258668 containerd[1795]: 2025-02-13 20:17:32.242 [INFO][5829] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.99.0/26 host="ci-4152.2.1-a-5d3d77ba07" Feb 13 20:17:32.258668 containerd[1795]: 2025-02-13 20:17:32.242 [INFO][5829] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.99.0/26 handle="k8s-pod-network.a92661161cb54178273a254eeafbfcf0f82aac18ce9dbd4b2ed4c408618fca84" host="ci-4152.2.1-a-5d3d77ba07" Feb 13 20:17:32.258668 containerd[1795]: 2025-02-13 20:17:32.243 [INFO][5829] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.a92661161cb54178273a254eeafbfcf0f82aac18ce9dbd4b2ed4c408618fca84 Feb 13 20:17:32.258668 containerd[1795]: 2025-02-13 20:17:32.245 [INFO][5829] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.99.0/26 handle="k8s-pod-network.a92661161cb54178273a254eeafbfcf0f82aac18ce9dbd4b2ed4c408618fca84" host="ci-4152.2.1-a-5d3d77ba07" Feb 13 20:17:32.258668 containerd[1795]: 2025-02-13 20:17:32.248 [INFO][5829] ipam/ipam.go 1216: Successfully 
claimed IPs: [192.168.99.1/26] block=192.168.99.0/26 handle="k8s-pod-network.a92661161cb54178273a254eeafbfcf0f82aac18ce9dbd4b2ed4c408618fca84" host="ci-4152.2.1-a-5d3d77ba07" Feb 13 20:17:32.258668 containerd[1795]: 2025-02-13 20:17:32.248 [INFO][5829] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.99.1/26] handle="k8s-pod-network.a92661161cb54178273a254eeafbfcf0f82aac18ce9dbd4b2ed4c408618fca84" host="ci-4152.2.1-a-5d3d77ba07" Feb 13 20:17:32.258668 containerd[1795]: 2025-02-13 20:17:32.248 [INFO][5829] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:17:32.258668 containerd[1795]: 2025-02-13 20:17:32.248 [INFO][5829] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.99.1/26] IPv6=[] ContainerID="a92661161cb54178273a254eeafbfcf0f82aac18ce9dbd4b2ed4c408618fca84" HandleID="k8s-pod-network.a92661161cb54178273a254eeafbfcf0f82aac18ce9dbd4b2ed4c408618fca84" Workload="ci--4152.2.1--a--5d3d77ba07-k8s-calico--apiserver--ff9d88786--v9j7t-eth0" Feb 13 20:17:32.259259 containerd[1795]: 2025-02-13 20:17:32.249 [INFO][5718] cni-plugin/k8s.go 386: Populated endpoint ContainerID="a92661161cb54178273a254eeafbfcf0f82aac18ce9dbd4b2ed4c408618fca84" Namespace="calico-apiserver" Pod="calico-apiserver-ff9d88786-v9j7t" WorkloadEndpoint="ci--4152.2.1--a--5d3d77ba07-k8s-calico--apiserver--ff9d88786--v9j7t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4152.2.1--a--5d3d77ba07-k8s-calico--apiserver--ff9d88786--v9j7t-eth0", GenerateName:"calico-apiserver-ff9d88786-", Namespace:"calico-apiserver", SelfLink:"", UID:"36ab6480-30fc-4b9c-bca0-5bd3671f05dc", ResourceVersion:"665", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 17, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", 
"k8s-app":"calico-apiserver", "pod-template-hash":"ff9d88786", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4152.2.1-a-5d3d77ba07", ContainerID:"", Pod:"calico-apiserver-ff9d88786-v9j7t", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.99.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calidc7d898be18", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:17:32.259259 containerd[1795]: 2025-02-13 20:17:32.249 [INFO][5718] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.99.1/32] ContainerID="a92661161cb54178273a254eeafbfcf0f82aac18ce9dbd4b2ed4c408618fca84" Namespace="calico-apiserver" Pod="calico-apiserver-ff9d88786-v9j7t" WorkloadEndpoint="ci--4152.2.1--a--5d3d77ba07-k8s-calico--apiserver--ff9d88786--v9j7t-eth0" Feb 13 20:17:32.259259 containerd[1795]: 2025-02-13 20:17:32.249 [INFO][5718] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidc7d898be18 ContainerID="a92661161cb54178273a254eeafbfcf0f82aac18ce9dbd4b2ed4c408618fca84" Namespace="calico-apiserver" Pod="calico-apiserver-ff9d88786-v9j7t" WorkloadEndpoint="ci--4152.2.1--a--5d3d77ba07-k8s-calico--apiserver--ff9d88786--v9j7t-eth0" Feb 13 20:17:32.259259 containerd[1795]: 2025-02-13 20:17:32.254 [INFO][5718] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a92661161cb54178273a254eeafbfcf0f82aac18ce9dbd4b2ed4c408618fca84" Namespace="calico-apiserver" Pod="calico-apiserver-ff9d88786-v9j7t" 
WorkloadEndpoint="ci--4152.2.1--a--5d3d77ba07-k8s-calico--apiserver--ff9d88786--v9j7t-eth0" Feb 13 20:17:32.259259 containerd[1795]: 2025-02-13 20:17:32.254 [INFO][5718] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="a92661161cb54178273a254eeafbfcf0f82aac18ce9dbd4b2ed4c408618fca84" Namespace="calico-apiserver" Pod="calico-apiserver-ff9d88786-v9j7t" WorkloadEndpoint="ci--4152.2.1--a--5d3d77ba07-k8s-calico--apiserver--ff9d88786--v9j7t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4152.2.1--a--5d3d77ba07-k8s-calico--apiserver--ff9d88786--v9j7t-eth0", GenerateName:"calico-apiserver-ff9d88786-", Namespace:"calico-apiserver", SelfLink:"", UID:"36ab6480-30fc-4b9c-bca0-5bd3671f05dc", ResourceVersion:"665", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 17, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"ff9d88786", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4152.2.1-a-5d3d77ba07", ContainerID:"a92661161cb54178273a254eeafbfcf0f82aac18ce9dbd4b2ed4c408618fca84", Pod:"calico-apiserver-ff9d88786-v9j7t", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.99.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calidc7d898be18", MAC:"7e:d7:12:7b:5a:2f", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:17:32.259259 containerd[1795]: 2025-02-13 20:17:32.257 [INFO][5718] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="a92661161cb54178273a254eeafbfcf0f82aac18ce9dbd4b2ed4c408618fca84" Namespace="calico-apiserver" Pod="calico-apiserver-ff9d88786-v9j7t" WorkloadEndpoint="ci--4152.2.1--a--5d3d77ba07-k8s-calico--apiserver--ff9d88786--v9j7t-eth0" Feb 13 20:17:32.264608 systemd-networkd[1706]: cali9ce1500d683: Link UP Feb 13 20:17:32.264765 systemd-networkd[1706]: cali9ce1500d683: Gained carrier Feb 13 20:17:32.269504 containerd[1795]: time="2025-02-13T20:17:32.269454917Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:17:32.269504 containerd[1795]: time="2025-02-13T20:17:32.269494045Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:17:32.269504 containerd[1795]: time="2025-02-13T20:17:32.269501876Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:17:32.269621 containerd[1795]: time="2025-02-13T20:17:32.269547789Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:17:32.269782 containerd[1795]: 2025-02-13 20:17:32.209 [INFO][5742] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 13 20:17:32.269782 containerd[1795]: 2025-02-13 20:17:32.214 [INFO][5742] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4152.2.1--a--5d3d77ba07-k8s-csi--node--driver--kcvvn-eth0 csi-node-driver- calico-system 64b01e33-b95d-49cd-8822-d1aaf4457527 594 0 2025-02-13 20:17:19 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:65bf684474 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4152.2.1-a-5d3d77ba07 csi-node-driver-kcvvn eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali9ce1500d683 [] []}} ContainerID="ff7bef863d1e8a62c7aff72ff536b0538831f3d8be50435f893cfa459de9555e" Namespace="calico-system" Pod="csi-node-driver-kcvvn" WorkloadEndpoint="ci--4152.2.1--a--5d3d77ba07-k8s-csi--node--driver--kcvvn-" Feb 13 20:17:32.269782 containerd[1795]: 2025-02-13 20:17:32.214 [INFO][5742] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="ff7bef863d1e8a62c7aff72ff536b0538831f3d8be50435f893cfa459de9555e" Namespace="calico-system" Pod="csi-node-driver-kcvvn" WorkloadEndpoint="ci--4152.2.1--a--5d3d77ba07-k8s-csi--node--driver--kcvvn-eth0" Feb 13 20:17:32.269782 containerd[1795]: 2025-02-13 20:17:32.230 [INFO][5836] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ff7bef863d1e8a62c7aff72ff536b0538831f3d8be50435f893cfa459de9555e" HandleID="k8s-pod-network.ff7bef863d1e8a62c7aff72ff536b0538831f3d8be50435f893cfa459de9555e" Workload="ci--4152.2.1--a--5d3d77ba07-k8s-csi--node--driver--kcvvn-eth0" Feb 13 20:17:32.269782 containerd[1795]: 
2025-02-13 20:17:32.236 [INFO][5836] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ff7bef863d1e8a62c7aff72ff536b0538831f3d8be50435f893cfa459de9555e" HandleID="k8s-pod-network.ff7bef863d1e8a62c7aff72ff536b0538831f3d8be50435f893cfa459de9555e" Workload="ci--4152.2.1--a--5d3d77ba07-k8s-csi--node--driver--kcvvn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000289910), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4152.2.1-a-5d3d77ba07", "pod":"csi-node-driver-kcvvn", "timestamp":"2025-02-13 20:17:32.230024395 +0000 UTC"}, Hostname:"ci-4152.2.1-a-5d3d77ba07", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 20:17:32.269782 containerd[1795]: 2025-02-13 20:17:32.236 [INFO][5836] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:17:32.269782 containerd[1795]: 2025-02-13 20:17:32.248 [INFO][5836] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 20:17:32.269782 containerd[1795]: 2025-02-13 20:17:32.248 [INFO][5836] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4152.2.1-a-5d3d77ba07' Feb 13 20:17:32.269782 containerd[1795]: 2025-02-13 20:17:32.249 [INFO][5836] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.ff7bef863d1e8a62c7aff72ff536b0538831f3d8be50435f893cfa459de9555e" host="ci-4152.2.1-a-5d3d77ba07" Feb 13 20:17:32.269782 containerd[1795]: 2025-02-13 20:17:32.251 [INFO][5836] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4152.2.1-a-5d3d77ba07" Feb 13 20:17:32.269782 containerd[1795]: 2025-02-13 20:17:32.253 [INFO][5836] ipam/ipam.go 489: Trying affinity for 192.168.99.0/26 host="ci-4152.2.1-a-5d3d77ba07" Feb 13 20:17:32.269782 containerd[1795]: 2025-02-13 20:17:32.255 [INFO][5836] ipam/ipam.go 155: Attempting to load block cidr=192.168.99.0/26 host="ci-4152.2.1-a-5d3d77ba07" Feb 13 20:17:32.269782 containerd[1795]: 2025-02-13 20:17:32.256 [INFO][5836] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.99.0/26 host="ci-4152.2.1-a-5d3d77ba07" Feb 13 20:17:32.269782 containerd[1795]: 2025-02-13 20:17:32.256 [INFO][5836] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.99.0/26 handle="k8s-pod-network.ff7bef863d1e8a62c7aff72ff536b0538831f3d8be50435f893cfa459de9555e" host="ci-4152.2.1-a-5d3d77ba07" Feb 13 20:17:32.269782 containerd[1795]: 2025-02-13 20:17:32.257 [INFO][5836] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.ff7bef863d1e8a62c7aff72ff536b0538831f3d8be50435f893cfa459de9555e Feb 13 20:17:32.269782 containerd[1795]: 2025-02-13 20:17:32.259 [INFO][5836] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.99.0/26 handle="k8s-pod-network.ff7bef863d1e8a62c7aff72ff536b0538831f3d8be50435f893cfa459de9555e" host="ci-4152.2.1-a-5d3d77ba07" Feb 13 20:17:32.269782 containerd[1795]: 2025-02-13 20:17:32.262 [INFO][5836] ipam/ipam.go 1216: Successfully 
claimed IPs: [192.168.99.2/26] block=192.168.99.0/26 handle="k8s-pod-network.ff7bef863d1e8a62c7aff72ff536b0538831f3d8be50435f893cfa459de9555e" host="ci-4152.2.1-a-5d3d77ba07" Feb 13 20:17:32.269782 containerd[1795]: 2025-02-13 20:17:32.262 [INFO][5836] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.99.2/26] handle="k8s-pod-network.ff7bef863d1e8a62c7aff72ff536b0538831f3d8be50435f893cfa459de9555e" host="ci-4152.2.1-a-5d3d77ba07" Feb 13 20:17:32.269782 containerd[1795]: 2025-02-13 20:17:32.262 [INFO][5836] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:17:32.269782 containerd[1795]: 2025-02-13 20:17:32.262 [INFO][5836] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.99.2/26] IPv6=[] ContainerID="ff7bef863d1e8a62c7aff72ff536b0538831f3d8be50435f893cfa459de9555e" HandleID="k8s-pod-network.ff7bef863d1e8a62c7aff72ff536b0538831f3d8be50435f893cfa459de9555e" Workload="ci--4152.2.1--a--5d3d77ba07-k8s-csi--node--driver--kcvvn-eth0" Feb 13 20:17:32.270147 containerd[1795]: 2025-02-13 20:17:32.263 [INFO][5742] cni-plugin/k8s.go 386: Populated endpoint ContainerID="ff7bef863d1e8a62c7aff72ff536b0538831f3d8be50435f893cfa459de9555e" Namespace="calico-system" Pod="csi-node-driver-kcvvn" WorkloadEndpoint="ci--4152.2.1--a--5d3d77ba07-k8s-csi--node--driver--kcvvn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4152.2.1--a--5d3d77ba07-k8s-csi--node--driver--kcvvn-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"64b01e33-b95d-49cd-8822-d1aaf4457527", ResourceVersion:"594", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 17, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", 
"pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4152.2.1-a-5d3d77ba07", ContainerID:"", Pod:"csi-node-driver-kcvvn", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.99.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali9ce1500d683", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:17:32.270147 containerd[1795]: 2025-02-13 20:17:32.263 [INFO][5742] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.99.2/32] ContainerID="ff7bef863d1e8a62c7aff72ff536b0538831f3d8be50435f893cfa459de9555e" Namespace="calico-system" Pod="csi-node-driver-kcvvn" WorkloadEndpoint="ci--4152.2.1--a--5d3d77ba07-k8s-csi--node--driver--kcvvn-eth0" Feb 13 20:17:32.270147 containerd[1795]: 2025-02-13 20:17:32.263 [INFO][5742] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9ce1500d683 ContainerID="ff7bef863d1e8a62c7aff72ff536b0538831f3d8be50435f893cfa459de9555e" Namespace="calico-system" Pod="csi-node-driver-kcvvn" WorkloadEndpoint="ci--4152.2.1--a--5d3d77ba07-k8s-csi--node--driver--kcvvn-eth0" Feb 13 20:17:32.270147 containerd[1795]: 2025-02-13 20:17:32.264 [INFO][5742] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ff7bef863d1e8a62c7aff72ff536b0538831f3d8be50435f893cfa459de9555e" Namespace="calico-system" Pod="csi-node-driver-kcvvn" WorkloadEndpoint="ci--4152.2.1--a--5d3d77ba07-k8s-csi--node--driver--kcvvn-eth0" Feb 13 20:17:32.270147 containerd[1795]: 2025-02-13 20:17:32.264 [INFO][5742] cni-plugin/k8s.go 414: 
Added Mac, interface name, and active container ID to endpoint ContainerID="ff7bef863d1e8a62c7aff72ff536b0538831f3d8be50435f893cfa459de9555e" Namespace="calico-system" Pod="csi-node-driver-kcvvn" WorkloadEndpoint="ci--4152.2.1--a--5d3d77ba07-k8s-csi--node--driver--kcvvn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4152.2.1--a--5d3d77ba07-k8s-csi--node--driver--kcvvn-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"64b01e33-b95d-49cd-8822-d1aaf4457527", ResourceVersion:"594", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 17, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4152.2.1-a-5d3d77ba07", ContainerID:"ff7bef863d1e8a62c7aff72ff536b0538831f3d8be50435f893cfa459de9555e", Pod:"csi-node-driver-kcvvn", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.99.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali9ce1500d683", MAC:"ea:8e:2d:91:4e:c2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:17:32.270147 containerd[1795]: 2025-02-13 20:17:32.269 [INFO][5742] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore 
ContainerID="ff7bef863d1e8a62c7aff72ff536b0538831f3d8be50435f893cfa459de9555e" Namespace="calico-system" Pod="csi-node-driver-kcvvn" WorkloadEndpoint="ci--4152.2.1--a--5d3d77ba07-k8s-csi--node--driver--kcvvn-eth0" Feb 13 20:17:32.278834 systemd-networkd[1706]: calia0a5192e377: Link UP Feb 13 20:17:32.278935 systemd-networkd[1706]: calia0a5192e377: Gained carrier Feb 13 20:17:32.279780 containerd[1795]: time="2025-02-13T20:17:32.279714796Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:17:32.279780 containerd[1795]: time="2025-02-13T20:17:32.279747836Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:17:32.279780 containerd[1795]: time="2025-02-13T20:17:32.279754744Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:17:32.279874 containerd[1795]: time="2025-02-13T20:17:32.279798072Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:17:32.282645 systemd[1]: Started cri-containerd-a92661161cb54178273a254eeafbfcf0f82aac18ce9dbd4b2ed4c408618fca84.scope - libcontainer container a92661161cb54178273a254eeafbfcf0f82aac18ce9dbd4b2ed4c408618fca84. 
Feb 13 20:17:32.283656 containerd[1795]: 2025-02-13 20:17:32.209 [INFO][5766] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 13 20:17:32.283656 containerd[1795]: 2025-02-13 20:17:32.215 [INFO][5766] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4152.2.1--a--5d3d77ba07-k8s-coredns--7db6d8ff4d--frmtf-eth0 coredns-7db6d8ff4d- kube-system 542fc81f-a219-47bd-b6bf-841062c32db8 670 0 2025-02-13 20:17:14 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4152.2.1-a-5d3d77ba07 coredns-7db6d8ff4d-frmtf eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calia0a5192e377 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="0ee2461d7c6a40851c4c991265720d318c5724bbd94012b59877e1ea86cb5c63" Namespace="kube-system" Pod="coredns-7db6d8ff4d-frmtf" WorkloadEndpoint="ci--4152.2.1--a--5d3d77ba07-k8s-coredns--7db6d8ff4d--frmtf-" Feb 13 20:17:32.283656 containerd[1795]: 2025-02-13 20:17:32.215 [INFO][5766] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="0ee2461d7c6a40851c4c991265720d318c5724bbd94012b59877e1ea86cb5c63" Namespace="kube-system" Pod="coredns-7db6d8ff4d-frmtf" WorkloadEndpoint="ci--4152.2.1--a--5d3d77ba07-k8s-coredns--7db6d8ff4d--frmtf-eth0" Feb 13 20:17:32.283656 containerd[1795]: 2025-02-13 20:17:32.232 [INFO][5837] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0ee2461d7c6a40851c4c991265720d318c5724bbd94012b59877e1ea86cb5c63" HandleID="k8s-pod-network.0ee2461d7c6a40851c4c991265720d318c5724bbd94012b59877e1ea86cb5c63" Workload="ci--4152.2.1--a--5d3d77ba07-k8s-coredns--7db6d8ff4d--frmtf-eth0" Feb 13 20:17:32.283656 containerd[1795]: 2025-02-13 20:17:32.237 [INFO][5837] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="0ee2461d7c6a40851c4c991265720d318c5724bbd94012b59877e1ea86cb5c63" HandleID="k8s-pod-network.0ee2461d7c6a40851c4c991265720d318c5724bbd94012b59877e1ea86cb5c63" Workload="ci--4152.2.1--a--5d3d77ba07-k8s-coredns--7db6d8ff4d--frmtf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000299800), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4152.2.1-a-5d3d77ba07", "pod":"coredns-7db6d8ff4d-frmtf", "timestamp":"2025-02-13 20:17:32.232368631 +0000 UTC"}, Hostname:"ci-4152.2.1-a-5d3d77ba07", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 20:17:32.283656 containerd[1795]: 2025-02-13 20:17:32.237 [INFO][5837] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:17:32.283656 containerd[1795]: 2025-02-13 20:17:32.262 [INFO][5837] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 20:17:32.283656 containerd[1795]: 2025-02-13 20:17:32.262 [INFO][5837] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4152.2.1-a-5d3d77ba07' Feb 13 20:17:32.283656 containerd[1795]: 2025-02-13 20:17:32.263 [INFO][5837] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.0ee2461d7c6a40851c4c991265720d318c5724bbd94012b59877e1ea86cb5c63" host="ci-4152.2.1-a-5d3d77ba07" Feb 13 20:17:32.283656 containerd[1795]: 2025-02-13 20:17:32.266 [INFO][5837] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4152.2.1-a-5d3d77ba07" Feb 13 20:17:32.283656 containerd[1795]: 2025-02-13 20:17:32.268 [INFO][5837] ipam/ipam.go 489: Trying affinity for 192.168.99.0/26 host="ci-4152.2.1-a-5d3d77ba07" Feb 13 20:17:32.283656 containerd[1795]: 2025-02-13 20:17:32.270 [INFO][5837] ipam/ipam.go 155: Attempting to load block cidr=192.168.99.0/26 host="ci-4152.2.1-a-5d3d77ba07" Feb 13 20:17:32.283656 containerd[1795]: 2025-02-13 20:17:32.271 [INFO][5837] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.99.0/26 host="ci-4152.2.1-a-5d3d77ba07" Feb 13 20:17:32.283656 containerd[1795]: 2025-02-13 20:17:32.271 [INFO][5837] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.99.0/26 handle="k8s-pod-network.0ee2461d7c6a40851c4c991265720d318c5724bbd94012b59877e1ea86cb5c63" host="ci-4152.2.1-a-5d3d77ba07" Feb 13 20:17:32.283656 containerd[1795]: 2025-02-13 20:17:32.272 [INFO][5837] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.0ee2461d7c6a40851c4c991265720d318c5724bbd94012b59877e1ea86cb5c63 Feb 13 20:17:32.283656 containerd[1795]: 2025-02-13 20:17:32.274 [INFO][5837] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.99.0/26 handle="k8s-pod-network.0ee2461d7c6a40851c4c991265720d318c5724bbd94012b59877e1ea86cb5c63" host="ci-4152.2.1-a-5d3d77ba07" Feb 13 20:17:32.283656 containerd[1795]: 2025-02-13 20:17:32.277 [INFO][5837] ipam/ipam.go 1216: Successfully 
claimed IPs: [192.168.99.3/26] block=192.168.99.0/26 handle="k8s-pod-network.0ee2461d7c6a40851c4c991265720d318c5724bbd94012b59877e1ea86cb5c63" host="ci-4152.2.1-a-5d3d77ba07" Feb 13 20:17:32.283656 containerd[1795]: 2025-02-13 20:17:32.277 [INFO][5837] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.99.3/26] handle="k8s-pod-network.0ee2461d7c6a40851c4c991265720d318c5724bbd94012b59877e1ea86cb5c63" host="ci-4152.2.1-a-5d3d77ba07" Feb 13 20:17:32.283656 containerd[1795]: 2025-02-13 20:17:32.277 [INFO][5837] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:17:32.283656 containerd[1795]: 2025-02-13 20:17:32.277 [INFO][5837] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.99.3/26] IPv6=[] ContainerID="0ee2461d7c6a40851c4c991265720d318c5724bbd94012b59877e1ea86cb5c63" HandleID="k8s-pod-network.0ee2461d7c6a40851c4c991265720d318c5724bbd94012b59877e1ea86cb5c63" Workload="ci--4152.2.1--a--5d3d77ba07-k8s-coredns--7db6d8ff4d--frmtf-eth0" Feb 13 20:17:32.284121 containerd[1795]: 2025-02-13 20:17:32.278 [INFO][5766] cni-plugin/k8s.go 386: Populated endpoint ContainerID="0ee2461d7c6a40851c4c991265720d318c5724bbd94012b59877e1ea86cb5c63" Namespace="kube-system" Pod="coredns-7db6d8ff4d-frmtf" WorkloadEndpoint="ci--4152.2.1--a--5d3d77ba07-k8s-coredns--7db6d8ff4d--frmtf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4152.2.1--a--5d3d77ba07-k8s-coredns--7db6d8ff4d--frmtf-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"542fc81f-a219-47bd-b6bf-841062c32db8", ResourceVersion:"670", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 17, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4152.2.1-a-5d3d77ba07", ContainerID:"", Pod:"coredns-7db6d8ff4d-frmtf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.99.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia0a5192e377", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:17:32.284121 containerd[1795]: 2025-02-13 20:17:32.278 [INFO][5766] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.99.3/32] ContainerID="0ee2461d7c6a40851c4c991265720d318c5724bbd94012b59877e1ea86cb5c63" Namespace="kube-system" Pod="coredns-7db6d8ff4d-frmtf" WorkloadEndpoint="ci--4152.2.1--a--5d3d77ba07-k8s-coredns--7db6d8ff4d--frmtf-eth0" Feb 13 20:17:32.284121 containerd[1795]: 2025-02-13 20:17:32.278 [INFO][5766] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia0a5192e377 ContainerID="0ee2461d7c6a40851c4c991265720d318c5724bbd94012b59877e1ea86cb5c63" Namespace="kube-system" Pod="coredns-7db6d8ff4d-frmtf" WorkloadEndpoint="ci--4152.2.1--a--5d3d77ba07-k8s-coredns--7db6d8ff4d--frmtf-eth0" Feb 13 20:17:32.284121 containerd[1795]: 2025-02-13 20:17:32.278 [INFO][5766] cni-plugin/dataplane_linux.go 508: Disabling IPv4 
forwarding ContainerID="0ee2461d7c6a40851c4c991265720d318c5724bbd94012b59877e1ea86cb5c63" Namespace="kube-system" Pod="coredns-7db6d8ff4d-frmtf" WorkloadEndpoint="ci--4152.2.1--a--5d3d77ba07-k8s-coredns--7db6d8ff4d--frmtf-eth0" Feb 13 20:17:32.284121 containerd[1795]: 2025-02-13 20:17:32.279 [INFO][5766] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="0ee2461d7c6a40851c4c991265720d318c5724bbd94012b59877e1ea86cb5c63" Namespace="kube-system" Pod="coredns-7db6d8ff4d-frmtf" WorkloadEndpoint="ci--4152.2.1--a--5d3d77ba07-k8s-coredns--7db6d8ff4d--frmtf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4152.2.1--a--5d3d77ba07-k8s-coredns--7db6d8ff4d--frmtf-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"542fc81f-a219-47bd-b6bf-841062c32db8", ResourceVersion:"670", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 17, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4152.2.1-a-5d3d77ba07", ContainerID:"0ee2461d7c6a40851c4c991265720d318c5724bbd94012b59877e1ea86cb5c63", Pod:"coredns-7db6d8ff4d-frmtf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.99.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia0a5192e377", MAC:"02:9d:78:26:9a:59", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:17:32.284121 containerd[1795]: 2025-02-13 20:17:32.282 [INFO][5766] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="0ee2461d7c6a40851c4c991265720d318c5724bbd94012b59877e1ea86cb5c63" Namespace="kube-system" Pod="coredns-7db6d8ff4d-frmtf" WorkloadEndpoint="ci--4152.2.1--a--5d3d77ba07-k8s-coredns--7db6d8ff4d--frmtf-eth0" Feb 13 20:17:32.286303 systemd[1]: Started cri-containerd-ff7bef863d1e8a62c7aff72ff536b0538831f3d8be50435f893cfa459de9555e.scope - libcontainer container ff7bef863d1e8a62c7aff72ff536b0538831f3d8be50435f893cfa459de9555e. Feb 13 20:17:32.293027 systemd-networkd[1706]: cali7439c22ced2: Link UP Feb 13 20:17:32.293230 systemd-networkd[1706]: cali7439c22ced2: Gained carrier Feb 13 20:17:32.294715 containerd[1795]: time="2025-02-13T20:17:32.294669161Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:17:32.294715 containerd[1795]: time="2025-02-13T20:17:32.294706228Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:17:32.294806 containerd[1795]: time="2025-02-13T20:17:32.294717318Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:17:32.294806 containerd[1795]: time="2025-02-13T20:17:32.294773199Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:17:32.298607 containerd[1795]: time="2025-02-13T20:17:32.298585067Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kcvvn,Uid:64b01e33-b95d-49cd-8822-d1aaf4457527,Namespace:calico-system,Attempt:5,} returns sandbox id \"ff7bef863d1e8a62c7aff72ff536b0538831f3d8be50435f893cfa459de9555e\"" Feb 13 20:17:32.298995 containerd[1795]: 2025-02-13 20:17:32.203 [INFO][5704] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 13 20:17:32.298995 containerd[1795]: 2025-02-13 20:17:32.210 [INFO][5704] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4152.2.1--a--5d3d77ba07-k8s-calico--apiserver--ff9d88786--8wrbb-eth0 calico-apiserver-ff9d88786- calico-apiserver a01e5e0d-d13b-4c53-a211-8d274b915928 669 0 2025-02-13 20:17:19 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:ff9d88786 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4152.2.1-a-5d3d77ba07 calico-apiserver-ff9d88786-8wrbb eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali7439c22ced2 [] []}} ContainerID="3e9eda64368ed76ff00011cc0d377c3c25f54081959544e8e9572b13b979123b" Namespace="calico-apiserver" Pod="calico-apiserver-ff9d88786-8wrbb" WorkloadEndpoint="ci--4152.2.1--a--5d3d77ba07-k8s-calico--apiserver--ff9d88786--8wrbb-" Feb 13 20:17:32.298995 containerd[1795]: 2025-02-13 20:17:32.210 [INFO][5704] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="3e9eda64368ed76ff00011cc0d377c3c25f54081959544e8e9572b13b979123b" Namespace="calico-apiserver" Pod="calico-apiserver-ff9d88786-8wrbb" WorkloadEndpoint="ci--4152.2.1--a--5d3d77ba07-k8s-calico--apiserver--ff9d88786--8wrbb-eth0" Feb 13 20:17:32.298995 containerd[1795]: 2025-02-13 
20:17:32.231 [INFO][5819] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3e9eda64368ed76ff00011cc0d377c3c25f54081959544e8e9572b13b979123b" HandleID="k8s-pod-network.3e9eda64368ed76ff00011cc0d377c3c25f54081959544e8e9572b13b979123b" Workload="ci--4152.2.1--a--5d3d77ba07-k8s-calico--apiserver--ff9d88786--8wrbb-eth0" Feb 13 20:17:32.298995 containerd[1795]: 2025-02-13 20:17:32.237 [INFO][5819] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3e9eda64368ed76ff00011cc0d377c3c25f54081959544e8e9572b13b979123b" HandleID="k8s-pod-network.3e9eda64368ed76ff00011cc0d377c3c25f54081959544e8e9572b13b979123b" Workload="ci--4152.2.1--a--5d3d77ba07-k8s-calico--apiserver--ff9d88786--8wrbb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000362930), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4152.2.1-a-5d3d77ba07", "pod":"calico-apiserver-ff9d88786-8wrbb", "timestamp":"2025-02-13 20:17:32.231690306 +0000 UTC"}, Hostname:"ci-4152.2.1-a-5d3d77ba07", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 20:17:32.298995 containerd[1795]: 2025-02-13 20:17:32.237 [INFO][5819] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:17:32.298995 containerd[1795]: 2025-02-13 20:17:32.277 [INFO][5819] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 20:17:32.298995 containerd[1795]: 2025-02-13 20:17:32.277 [INFO][5819] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4152.2.1-a-5d3d77ba07' Feb 13 20:17:32.298995 containerd[1795]: 2025-02-13 20:17:32.278 [INFO][5819] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.3e9eda64368ed76ff00011cc0d377c3c25f54081959544e8e9572b13b979123b" host="ci-4152.2.1-a-5d3d77ba07" Feb 13 20:17:32.298995 containerd[1795]: 2025-02-13 20:17:32.280 [INFO][5819] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4152.2.1-a-5d3d77ba07" Feb 13 20:17:32.298995 containerd[1795]: 2025-02-13 20:17:32.282 [INFO][5819] ipam/ipam.go 489: Trying affinity for 192.168.99.0/26 host="ci-4152.2.1-a-5d3d77ba07" Feb 13 20:17:32.298995 containerd[1795]: 2025-02-13 20:17:32.284 [INFO][5819] ipam/ipam.go 155: Attempting to load block cidr=192.168.99.0/26 host="ci-4152.2.1-a-5d3d77ba07" Feb 13 20:17:32.298995 containerd[1795]: 2025-02-13 20:17:32.285 [INFO][5819] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.99.0/26 host="ci-4152.2.1-a-5d3d77ba07" Feb 13 20:17:32.298995 containerd[1795]: 2025-02-13 20:17:32.285 [INFO][5819] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.99.0/26 handle="k8s-pod-network.3e9eda64368ed76ff00011cc0d377c3c25f54081959544e8e9572b13b979123b" host="ci-4152.2.1-a-5d3d77ba07" Feb 13 20:17:32.298995 containerd[1795]: 2025-02-13 20:17:32.286 [INFO][5819] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.3e9eda64368ed76ff00011cc0d377c3c25f54081959544e8e9572b13b979123b Feb 13 20:17:32.298995 containerd[1795]: 2025-02-13 20:17:32.288 [INFO][5819] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.99.0/26 handle="k8s-pod-network.3e9eda64368ed76ff00011cc0d377c3c25f54081959544e8e9572b13b979123b" host="ci-4152.2.1-a-5d3d77ba07" Feb 13 20:17:32.298995 containerd[1795]: 2025-02-13 20:17:32.290 [INFO][5819] ipam/ipam.go 1216: Successfully 
claimed IPs: [192.168.99.4/26] block=192.168.99.0/26 handle="k8s-pod-network.3e9eda64368ed76ff00011cc0d377c3c25f54081959544e8e9572b13b979123b" host="ci-4152.2.1-a-5d3d77ba07" Feb 13 20:17:32.298995 containerd[1795]: 2025-02-13 20:17:32.291 [INFO][5819] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.99.4/26] handle="k8s-pod-network.3e9eda64368ed76ff00011cc0d377c3c25f54081959544e8e9572b13b979123b" host="ci-4152.2.1-a-5d3d77ba07" Feb 13 20:17:32.298995 containerd[1795]: 2025-02-13 20:17:32.291 [INFO][5819] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:17:32.298995 containerd[1795]: 2025-02-13 20:17:32.291 [INFO][5819] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.99.4/26] IPv6=[] ContainerID="3e9eda64368ed76ff00011cc0d377c3c25f54081959544e8e9572b13b979123b" HandleID="k8s-pod-network.3e9eda64368ed76ff00011cc0d377c3c25f54081959544e8e9572b13b979123b" Workload="ci--4152.2.1--a--5d3d77ba07-k8s-calico--apiserver--ff9d88786--8wrbb-eth0" Feb 13 20:17:32.299484 containerd[1795]: 2025-02-13 20:17:32.291 [INFO][5704] cni-plugin/k8s.go 386: Populated endpoint ContainerID="3e9eda64368ed76ff00011cc0d377c3c25f54081959544e8e9572b13b979123b" Namespace="calico-apiserver" Pod="calico-apiserver-ff9d88786-8wrbb" WorkloadEndpoint="ci--4152.2.1--a--5d3d77ba07-k8s-calico--apiserver--ff9d88786--8wrbb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4152.2.1--a--5d3d77ba07-k8s-calico--apiserver--ff9d88786--8wrbb-eth0", GenerateName:"calico-apiserver-ff9d88786-", Namespace:"calico-apiserver", SelfLink:"", UID:"a01e5e0d-d13b-4c53-a211-8d274b915928", ResourceVersion:"669", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 17, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", 
"k8s-app":"calico-apiserver", "pod-template-hash":"ff9d88786", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4152.2.1-a-5d3d77ba07", ContainerID:"", Pod:"calico-apiserver-ff9d88786-8wrbb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.99.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7439c22ced2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:17:32.299484 containerd[1795]: 2025-02-13 20:17:32.292 [INFO][5704] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.99.4/32] ContainerID="3e9eda64368ed76ff00011cc0d377c3c25f54081959544e8e9572b13b979123b" Namespace="calico-apiserver" Pod="calico-apiserver-ff9d88786-8wrbb" WorkloadEndpoint="ci--4152.2.1--a--5d3d77ba07-k8s-calico--apiserver--ff9d88786--8wrbb-eth0" Feb 13 20:17:32.299484 containerd[1795]: 2025-02-13 20:17:32.292 [INFO][5704] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7439c22ced2 ContainerID="3e9eda64368ed76ff00011cc0d377c3c25f54081959544e8e9572b13b979123b" Namespace="calico-apiserver" Pod="calico-apiserver-ff9d88786-8wrbb" WorkloadEndpoint="ci--4152.2.1--a--5d3d77ba07-k8s-calico--apiserver--ff9d88786--8wrbb-eth0" Feb 13 20:17:32.299484 containerd[1795]: 2025-02-13 20:17:32.293 [INFO][5704] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3e9eda64368ed76ff00011cc0d377c3c25f54081959544e8e9572b13b979123b" Namespace="calico-apiserver" Pod="calico-apiserver-ff9d88786-8wrbb" 
WorkloadEndpoint="ci--4152.2.1--a--5d3d77ba07-k8s-calico--apiserver--ff9d88786--8wrbb-eth0" Feb 13 20:17:32.299484 containerd[1795]: 2025-02-13 20:17:32.293 [INFO][5704] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="3e9eda64368ed76ff00011cc0d377c3c25f54081959544e8e9572b13b979123b" Namespace="calico-apiserver" Pod="calico-apiserver-ff9d88786-8wrbb" WorkloadEndpoint="ci--4152.2.1--a--5d3d77ba07-k8s-calico--apiserver--ff9d88786--8wrbb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4152.2.1--a--5d3d77ba07-k8s-calico--apiserver--ff9d88786--8wrbb-eth0", GenerateName:"calico-apiserver-ff9d88786-", Namespace:"calico-apiserver", SelfLink:"", UID:"a01e5e0d-d13b-4c53-a211-8d274b915928", ResourceVersion:"669", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 17, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"ff9d88786", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4152.2.1-a-5d3d77ba07", ContainerID:"3e9eda64368ed76ff00011cc0d377c3c25f54081959544e8e9572b13b979123b", Pod:"calico-apiserver-ff9d88786-8wrbb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.99.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7439c22ced2", MAC:"86:58:f8:db:3e:18", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:17:32.299484 containerd[1795]: 2025-02-13 20:17:32.298 [INFO][5704] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="3e9eda64368ed76ff00011cc0d377c3c25f54081959544e8e9572b13b979123b" Namespace="calico-apiserver" Pod="calico-apiserver-ff9d88786-8wrbb" WorkloadEndpoint="ci--4152.2.1--a--5d3d77ba07-k8s-calico--apiserver--ff9d88786--8wrbb-eth0" Feb 13 20:17:32.299484 containerd[1795]: time="2025-02-13T20:17:32.299275160Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Feb 13 20:17:32.302263 systemd[1]: Started cri-containerd-0ee2461d7c6a40851c4c991265720d318c5724bbd94012b59877e1ea86cb5c63.scope - libcontainer container 0ee2461d7c6a40851c4c991265720d318c5724bbd94012b59877e1ea86cb5c63. Feb 13 20:17:32.307886 containerd[1795]: time="2025-02-13T20:17:32.307860822Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-ff9d88786-v9j7t,Uid:36ab6480-30fc-4b9c-bca0-5bd3671f05dc,Namespace:calico-apiserver,Attempt:5,} returns sandbox id \"a92661161cb54178273a254eeafbfcf0f82aac18ce9dbd4b2ed4c408618fca84\"" Feb 13 20:17:32.309153 systemd-networkd[1706]: cali5ef5728a34d: Link UP Feb 13 20:17:32.309311 systemd-networkd[1706]: cali5ef5728a34d: Gained carrier Feb 13 20:17:32.310289 containerd[1795]: time="2025-02-13T20:17:32.310059647Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:17:32.310289 containerd[1795]: time="2025-02-13T20:17:32.310281133Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:17:32.310365 containerd[1795]: time="2025-02-13T20:17:32.310289254Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:17:32.310365 containerd[1795]: time="2025-02-13T20:17:32.310332998Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:17:32.315510 containerd[1795]: 2025-02-13 20:17:32.203 [INFO][5695] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 13 20:17:32.315510 containerd[1795]: 2025-02-13 20:17:32.210 [INFO][5695] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4152.2.1--a--5d3d77ba07-k8s-coredns--7db6d8ff4d--cnhpl-eth0 coredns-7db6d8ff4d- kube-system f56e4292-4136-4f41-a2d5-245b627504ec 667 0 2025-02-13 20:17:14 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4152.2.1-a-5d3d77ba07 coredns-7db6d8ff4d-cnhpl eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali5ef5728a34d [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="8a806caebed6fd1f68375b366a8ef1e274a61a62d7697bb11a2f620bd9fc5dc1" Namespace="kube-system" Pod="coredns-7db6d8ff4d-cnhpl" WorkloadEndpoint="ci--4152.2.1--a--5d3d77ba07-k8s-coredns--7db6d8ff4d--cnhpl-" Feb 13 20:17:32.315510 containerd[1795]: 2025-02-13 20:17:32.210 [INFO][5695] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="8a806caebed6fd1f68375b366a8ef1e274a61a62d7697bb11a2f620bd9fc5dc1" Namespace="kube-system" Pod="coredns-7db6d8ff4d-cnhpl" WorkloadEndpoint="ci--4152.2.1--a--5d3d77ba07-k8s-coredns--7db6d8ff4d--cnhpl-eth0" Feb 13 20:17:32.315510 containerd[1795]: 2025-02-13 20:17:32.231 [INFO][5820] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8a806caebed6fd1f68375b366a8ef1e274a61a62d7697bb11a2f620bd9fc5dc1" 
HandleID="k8s-pod-network.8a806caebed6fd1f68375b366a8ef1e274a61a62d7697bb11a2f620bd9fc5dc1" Workload="ci--4152.2.1--a--5d3d77ba07-k8s-coredns--7db6d8ff4d--cnhpl-eth0" Feb 13 20:17:32.315510 containerd[1795]: 2025-02-13 20:17:32.237 [INFO][5820] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8a806caebed6fd1f68375b366a8ef1e274a61a62d7697bb11a2f620bd9fc5dc1" HandleID="k8s-pod-network.8a806caebed6fd1f68375b366a8ef1e274a61a62d7697bb11a2f620bd9fc5dc1" Workload="ci--4152.2.1--a--5d3d77ba07-k8s-coredns--7db6d8ff4d--cnhpl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0004254a0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4152.2.1-a-5d3d77ba07", "pod":"coredns-7db6d8ff4d-cnhpl", "timestamp":"2025-02-13 20:17:32.23187018 +0000 UTC"}, Hostname:"ci-4152.2.1-a-5d3d77ba07", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 20:17:32.315510 containerd[1795]: 2025-02-13 20:17:32.237 [INFO][5820] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:17:32.315510 containerd[1795]: 2025-02-13 20:17:32.291 [INFO][5820] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 20:17:32.315510 containerd[1795]: 2025-02-13 20:17:32.291 [INFO][5820] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4152.2.1-a-5d3d77ba07' Feb 13 20:17:32.315510 containerd[1795]: 2025-02-13 20:17:32.292 [INFO][5820] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.8a806caebed6fd1f68375b366a8ef1e274a61a62d7697bb11a2f620bd9fc5dc1" host="ci-4152.2.1-a-5d3d77ba07" Feb 13 20:17:32.315510 containerd[1795]: 2025-02-13 20:17:32.294 [INFO][5820] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4152.2.1-a-5d3d77ba07" Feb 13 20:17:32.315510 containerd[1795]: 2025-02-13 20:17:32.297 [INFO][5820] ipam/ipam.go 489: Trying affinity for 192.168.99.0/26 host="ci-4152.2.1-a-5d3d77ba07" Feb 13 20:17:32.315510 containerd[1795]: 2025-02-13 20:17:32.299 [INFO][5820] ipam/ipam.go 155: Attempting to load block cidr=192.168.99.0/26 host="ci-4152.2.1-a-5d3d77ba07" Feb 13 20:17:32.315510 containerd[1795]: 2025-02-13 20:17:32.300 [INFO][5820] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.99.0/26 host="ci-4152.2.1-a-5d3d77ba07" Feb 13 20:17:32.315510 containerd[1795]: 2025-02-13 20:17:32.300 [INFO][5820] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.99.0/26 handle="k8s-pod-network.8a806caebed6fd1f68375b366a8ef1e274a61a62d7697bb11a2f620bd9fc5dc1" host="ci-4152.2.1-a-5d3d77ba07" Feb 13 20:17:32.315510 containerd[1795]: 2025-02-13 20:17:32.301 [INFO][5820] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.8a806caebed6fd1f68375b366a8ef1e274a61a62d7697bb11a2f620bd9fc5dc1 Feb 13 20:17:32.315510 containerd[1795]: 2025-02-13 20:17:32.303 [INFO][5820] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.99.0/26 handle="k8s-pod-network.8a806caebed6fd1f68375b366a8ef1e274a61a62d7697bb11a2f620bd9fc5dc1" host="ci-4152.2.1-a-5d3d77ba07" Feb 13 20:17:32.315510 containerd[1795]: 2025-02-13 20:17:32.307 [INFO][5820] ipam/ipam.go 1216: Successfully 
claimed IPs: [192.168.99.5/26] block=192.168.99.0/26 handle="k8s-pod-network.8a806caebed6fd1f68375b366a8ef1e274a61a62d7697bb11a2f620bd9fc5dc1" host="ci-4152.2.1-a-5d3d77ba07" Feb 13 20:17:32.315510 containerd[1795]: 2025-02-13 20:17:32.307 [INFO][5820] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.99.5/26] handle="k8s-pod-network.8a806caebed6fd1f68375b366a8ef1e274a61a62d7697bb11a2f620bd9fc5dc1" host="ci-4152.2.1-a-5d3d77ba07" Feb 13 20:17:32.315510 containerd[1795]: 2025-02-13 20:17:32.307 [INFO][5820] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:17:32.315510 containerd[1795]: 2025-02-13 20:17:32.307 [INFO][5820] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.99.5/26] IPv6=[] ContainerID="8a806caebed6fd1f68375b366a8ef1e274a61a62d7697bb11a2f620bd9fc5dc1" HandleID="k8s-pod-network.8a806caebed6fd1f68375b366a8ef1e274a61a62d7697bb11a2f620bd9fc5dc1" Workload="ci--4152.2.1--a--5d3d77ba07-k8s-coredns--7db6d8ff4d--cnhpl-eth0" Feb 13 20:17:32.315935 containerd[1795]: 2025-02-13 20:17:32.308 [INFO][5695] cni-plugin/k8s.go 386: Populated endpoint ContainerID="8a806caebed6fd1f68375b366a8ef1e274a61a62d7697bb11a2f620bd9fc5dc1" Namespace="kube-system" Pod="coredns-7db6d8ff4d-cnhpl" WorkloadEndpoint="ci--4152.2.1--a--5d3d77ba07-k8s-coredns--7db6d8ff4d--cnhpl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4152.2.1--a--5d3d77ba07-k8s-coredns--7db6d8ff4d--cnhpl-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"f56e4292-4136-4f41-a2d5-245b627504ec", ResourceVersion:"667", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 17, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4152.2.1-a-5d3d77ba07", ContainerID:"", Pod:"coredns-7db6d8ff4d-cnhpl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.99.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5ef5728a34d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:17:32.315935 containerd[1795]: 2025-02-13 20:17:32.308 [INFO][5695] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.99.5/32] ContainerID="8a806caebed6fd1f68375b366a8ef1e274a61a62d7697bb11a2f620bd9fc5dc1" Namespace="kube-system" Pod="coredns-7db6d8ff4d-cnhpl" WorkloadEndpoint="ci--4152.2.1--a--5d3d77ba07-k8s-coredns--7db6d8ff4d--cnhpl-eth0" Feb 13 20:17:32.315935 containerd[1795]: 2025-02-13 20:17:32.308 [INFO][5695] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5ef5728a34d ContainerID="8a806caebed6fd1f68375b366a8ef1e274a61a62d7697bb11a2f620bd9fc5dc1" Namespace="kube-system" Pod="coredns-7db6d8ff4d-cnhpl" WorkloadEndpoint="ci--4152.2.1--a--5d3d77ba07-k8s-coredns--7db6d8ff4d--cnhpl-eth0" Feb 13 20:17:32.315935 containerd[1795]: 2025-02-13 20:17:32.309 [INFO][5695] cni-plugin/dataplane_linux.go 508: Disabling IPv4 
forwarding ContainerID="8a806caebed6fd1f68375b366a8ef1e274a61a62d7697bb11a2f620bd9fc5dc1" Namespace="kube-system" Pod="coredns-7db6d8ff4d-cnhpl" WorkloadEndpoint="ci--4152.2.1--a--5d3d77ba07-k8s-coredns--7db6d8ff4d--cnhpl-eth0" Feb 13 20:17:32.315935 containerd[1795]: 2025-02-13 20:17:32.309 [INFO][5695] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="8a806caebed6fd1f68375b366a8ef1e274a61a62d7697bb11a2f620bd9fc5dc1" Namespace="kube-system" Pod="coredns-7db6d8ff4d-cnhpl" WorkloadEndpoint="ci--4152.2.1--a--5d3d77ba07-k8s-coredns--7db6d8ff4d--cnhpl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4152.2.1--a--5d3d77ba07-k8s-coredns--7db6d8ff4d--cnhpl-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"f56e4292-4136-4f41-a2d5-245b627504ec", ResourceVersion:"667", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 17, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4152.2.1-a-5d3d77ba07", ContainerID:"8a806caebed6fd1f68375b366a8ef1e274a61a62d7697bb11a2f620bd9fc5dc1", Pod:"coredns-7db6d8ff4d-cnhpl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.99.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5ef5728a34d", MAC:"de:75:c5:1a:18:6f", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:17:32.315935 containerd[1795]: 2025-02-13 20:17:32.314 [INFO][5695] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="8a806caebed6fd1f68375b366a8ef1e274a61a62d7697bb11a2f620bd9fc5dc1" Namespace="kube-system" Pod="coredns-7db6d8ff4d-cnhpl" WorkloadEndpoint="ci--4152.2.1--a--5d3d77ba07-k8s-coredns--7db6d8ff4d--cnhpl-eth0" Feb 13 20:17:32.317558 systemd[1]: Started cri-containerd-3e9eda64368ed76ff00011cc0d377c3c25f54081959544e8e9572b13b979123b.scope - libcontainer container 3e9eda64368ed76ff00011cc0d377c3c25f54081959544e8e9572b13b979123b. Feb 13 20:17:32.324821 systemd-networkd[1706]: cali61c0d4c2899: Link UP Feb 13 20:17:32.324990 systemd-networkd[1706]: cali61c0d4c2899: Gained carrier Feb 13 20:17:32.328359 containerd[1795]: time="2025-02-13T20:17:32.328116189Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:17:32.328359 containerd[1795]: time="2025-02-13T20:17:32.328348168Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:17:32.328359 containerd[1795]: time="2025-02-13T20:17:32.328357724Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:17:32.328522 containerd[1795]: time="2025-02-13T20:17:32.328400487Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:17:32.330059 containerd[1795]: time="2025-02-13T20:17:32.330033980Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-frmtf,Uid:542fc81f-a219-47bd-b6bf-841062c32db8,Namespace:kube-system,Attempt:5,} returns sandbox id \"0ee2461d7c6a40851c4c991265720d318c5724bbd94012b59877e1ea86cb5c63\"" Feb 13 20:17:32.331234 containerd[1795]: 2025-02-13 20:17:32.209 [INFO][5733] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 13 20:17:32.331234 containerd[1795]: 2025-02-13 20:17:32.216 [INFO][5733] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4152.2.1--a--5d3d77ba07-k8s-calico--kube--controllers--55886c68fb--v8l5j-eth0 calico-kube-controllers-55886c68fb- calico-system 853990b6-ca4d-4293-b605-05f073fd0f8e 668 0 2025-02-13 20:17:19 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:55886c68fb projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4152.2.1-a-5d3d77ba07 calico-kube-controllers-55886c68fb-v8l5j eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali61c0d4c2899 [] []}} ContainerID="9711cfc3945e4fda85762e300afa56925a9259a7e0b4663feeafe48d595f8f63" Namespace="calico-system" Pod="calico-kube-controllers-55886c68fb-v8l5j" WorkloadEndpoint="ci--4152.2.1--a--5d3d77ba07-k8s-calico--kube--controllers--55886c68fb--v8l5j-" Feb 13 20:17:32.331234 containerd[1795]: 2025-02-13 20:17:32.216 [INFO][5733] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="9711cfc3945e4fda85762e300afa56925a9259a7e0b4663feeafe48d595f8f63" Namespace="calico-system" Pod="calico-kube-controllers-55886c68fb-v8l5j" 
WorkloadEndpoint="ci--4152.2.1--a--5d3d77ba07-k8s-calico--kube--controllers--55886c68fb--v8l5j-eth0" Feb 13 20:17:32.331234 containerd[1795]: 2025-02-13 20:17:32.232 [INFO][5846] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9711cfc3945e4fda85762e300afa56925a9259a7e0b4663feeafe48d595f8f63" HandleID="k8s-pod-network.9711cfc3945e4fda85762e300afa56925a9259a7e0b4663feeafe48d595f8f63" Workload="ci--4152.2.1--a--5d3d77ba07-k8s-calico--kube--controllers--55886c68fb--v8l5j-eth0" Feb 13 20:17:32.331234 containerd[1795]: 2025-02-13 20:17:32.238 [INFO][5846] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9711cfc3945e4fda85762e300afa56925a9259a7e0b4663feeafe48d595f8f63" HandleID="k8s-pod-network.9711cfc3945e4fda85762e300afa56925a9259a7e0b4663feeafe48d595f8f63" Workload="ci--4152.2.1--a--5d3d77ba07-k8s-calico--kube--controllers--55886c68fb--v8l5j-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002f4fb0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4152.2.1-a-5d3d77ba07", "pod":"calico-kube-controllers-55886c68fb-v8l5j", "timestamp":"2025-02-13 20:17:32.232910629 +0000 UTC"}, Hostname:"ci-4152.2.1-a-5d3d77ba07", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 20:17:32.331234 containerd[1795]: 2025-02-13 20:17:32.238 [INFO][5846] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:17:32.331234 containerd[1795]: 2025-02-13 20:17:32.307 [INFO][5846] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 20:17:32.331234 containerd[1795]: 2025-02-13 20:17:32.307 [INFO][5846] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4152.2.1-a-5d3d77ba07' Feb 13 20:17:32.331234 containerd[1795]: 2025-02-13 20:17:32.308 [INFO][5846] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.9711cfc3945e4fda85762e300afa56925a9259a7e0b4663feeafe48d595f8f63" host="ci-4152.2.1-a-5d3d77ba07" Feb 13 20:17:32.331234 containerd[1795]: 2025-02-13 20:17:32.311 [INFO][5846] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4152.2.1-a-5d3d77ba07" Feb 13 20:17:32.331234 containerd[1795]: 2025-02-13 20:17:32.314 [INFO][5846] ipam/ipam.go 489: Trying affinity for 192.168.99.0/26 host="ci-4152.2.1-a-5d3d77ba07" Feb 13 20:17:32.331234 containerd[1795]: 2025-02-13 20:17:32.315 [INFO][5846] ipam/ipam.go 155: Attempting to load block cidr=192.168.99.0/26 host="ci-4152.2.1-a-5d3d77ba07" Feb 13 20:17:32.331234 containerd[1795]: 2025-02-13 20:17:32.316 [INFO][5846] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.99.0/26 host="ci-4152.2.1-a-5d3d77ba07" Feb 13 20:17:32.331234 containerd[1795]: 2025-02-13 20:17:32.316 [INFO][5846] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.99.0/26 handle="k8s-pod-network.9711cfc3945e4fda85762e300afa56925a9259a7e0b4663feeafe48d595f8f63" host="ci-4152.2.1-a-5d3d77ba07" Feb 13 20:17:32.331234 containerd[1795]: 2025-02-13 20:17:32.317 [INFO][5846] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.9711cfc3945e4fda85762e300afa56925a9259a7e0b4663feeafe48d595f8f63 Feb 13 20:17:32.331234 containerd[1795]: 2025-02-13 20:17:32.319 [INFO][5846] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.99.0/26 handle="k8s-pod-network.9711cfc3945e4fda85762e300afa56925a9259a7e0b4663feeafe48d595f8f63" host="ci-4152.2.1-a-5d3d77ba07" Feb 13 20:17:32.331234 containerd[1795]: 2025-02-13 20:17:32.323 [INFO][5846] ipam/ipam.go 1216: Successfully 
claimed IPs: [192.168.99.6/26] block=192.168.99.0/26 handle="k8s-pod-network.9711cfc3945e4fda85762e300afa56925a9259a7e0b4663feeafe48d595f8f63" host="ci-4152.2.1-a-5d3d77ba07" Feb 13 20:17:32.331234 containerd[1795]: 2025-02-13 20:17:32.323 [INFO][5846] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.99.6/26] handle="k8s-pod-network.9711cfc3945e4fda85762e300afa56925a9259a7e0b4663feeafe48d595f8f63" host="ci-4152.2.1-a-5d3d77ba07" Feb 13 20:17:32.331234 containerd[1795]: 2025-02-13 20:17:32.323 [INFO][5846] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:17:32.331234 containerd[1795]: 2025-02-13 20:17:32.323 [INFO][5846] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.99.6/26] IPv6=[] ContainerID="9711cfc3945e4fda85762e300afa56925a9259a7e0b4663feeafe48d595f8f63" HandleID="k8s-pod-network.9711cfc3945e4fda85762e300afa56925a9259a7e0b4663feeafe48d595f8f63" Workload="ci--4152.2.1--a--5d3d77ba07-k8s-calico--kube--controllers--55886c68fb--v8l5j-eth0" Feb 13 20:17:32.331724 containerd[1795]: 2025-02-13 20:17:32.324 [INFO][5733] cni-plugin/k8s.go 386: Populated endpoint ContainerID="9711cfc3945e4fda85762e300afa56925a9259a7e0b4663feeafe48d595f8f63" Namespace="calico-system" Pod="calico-kube-controllers-55886c68fb-v8l5j" WorkloadEndpoint="ci--4152.2.1--a--5d3d77ba07-k8s-calico--kube--controllers--55886c68fb--v8l5j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4152.2.1--a--5d3d77ba07-k8s-calico--kube--controllers--55886c68fb--v8l5j-eth0", GenerateName:"calico-kube-controllers-55886c68fb-", Namespace:"calico-system", SelfLink:"", UID:"853990b6-ca4d-4293-b605-05f073fd0f8e", ResourceVersion:"668", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 17, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"55886c68fb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4152.2.1-a-5d3d77ba07", ContainerID:"", Pod:"calico-kube-controllers-55886c68fb-v8l5j", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.99.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali61c0d4c2899", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:17:32.331724 containerd[1795]: 2025-02-13 20:17:32.324 [INFO][5733] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.99.6/32] ContainerID="9711cfc3945e4fda85762e300afa56925a9259a7e0b4663feeafe48d595f8f63" Namespace="calico-system" Pod="calico-kube-controllers-55886c68fb-v8l5j" WorkloadEndpoint="ci--4152.2.1--a--5d3d77ba07-k8s-calico--kube--controllers--55886c68fb--v8l5j-eth0" Feb 13 20:17:32.331724 containerd[1795]: 2025-02-13 20:17:32.324 [INFO][5733] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali61c0d4c2899 ContainerID="9711cfc3945e4fda85762e300afa56925a9259a7e0b4663feeafe48d595f8f63" Namespace="calico-system" Pod="calico-kube-controllers-55886c68fb-v8l5j" WorkloadEndpoint="ci--4152.2.1--a--5d3d77ba07-k8s-calico--kube--controllers--55886c68fb--v8l5j-eth0" Feb 13 20:17:32.331724 containerd[1795]: 2025-02-13 20:17:32.324 [INFO][5733] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9711cfc3945e4fda85762e300afa56925a9259a7e0b4663feeafe48d595f8f63" 
Namespace="calico-system" Pod="calico-kube-controllers-55886c68fb-v8l5j" WorkloadEndpoint="ci--4152.2.1--a--5d3d77ba07-k8s-calico--kube--controllers--55886c68fb--v8l5j-eth0" Feb 13 20:17:32.331724 containerd[1795]: 2025-02-13 20:17:32.325 [INFO][5733] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="9711cfc3945e4fda85762e300afa56925a9259a7e0b4663feeafe48d595f8f63" Namespace="calico-system" Pod="calico-kube-controllers-55886c68fb-v8l5j" WorkloadEndpoint="ci--4152.2.1--a--5d3d77ba07-k8s-calico--kube--controllers--55886c68fb--v8l5j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4152.2.1--a--5d3d77ba07-k8s-calico--kube--controllers--55886c68fb--v8l5j-eth0", GenerateName:"calico-kube-controllers-55886c68fb-", Namespace:"calico-system", SelfLink:"", UID:"853990b6-ca4d-4293-b605-05f073fd0f8e", ResourceVersion:"668", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 17, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"55886c68fb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4152.2.1-a-5d3d77ba07", ContainerID:"9711cfc3945e4fda85762e300afa56925a9259a7e0b4663feeafe48d595f8f63", Pod:"calico-kube-controllers-55886c68fb-v8l5j", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.99.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali61c0d4c2899", MAC:"16:3f:ff:03:64:1b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:17:32.331724 containerd[1795]: 2025-02-13 20:17:32.330 [INFO][5733] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="9711cfc3945e4fda85762e300afa56925a9259a7e0b4663feeafe48d595f8f63" Namespace="calico-system" Pod="calico-kube-controllers-55886c68fb-v8l5j" WorkloadEndpoint="ci--4152.2.1--a--5d3d77ba07-k8s-calico--kube--controllers--55886c68fb--v8l5j-eth0" Feb 13 20:17:32.331724 containerd[1795]: time="2025-02-13T20:17:32.331358727Z" level=info msg="CreateContainer within sandbox \"0ee2461d7c6a40851c4c991265720d318c5724bbd94012b59877e1ea86cb5c63\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 20:17:32.336860 containerd[1795]: time="2025-02-13T20:17:32.336839036Z" level=info msg="CreateContainer within sandbox \"0ee2461d7c6a40851c4c991265720d318c5724bbd94012b59877e1ea86cb5c63\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e785262bdd49cbbe3ef002acdb901054462f47491da41db6a6fb527ffd635177\"" Feb 13 20:17:32.337129 containerd[1795]: time="2025-02-13T20:17:32.337097677Z" level=info msg="StartContainer for \"e785262bdd49cbbe3ef002acdb901054462f47491da41db6a6fb527ffd635177\"" Feb 13 20:17:32.341114 containerd[1795]: time="2025-02-13T20:17:32.341020005Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:17:32.341114 containerd[1795]: time="2025-02-13T20:17:32.341054055Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:17:32.341114 containerd[1795]: time="2025-02-13T20:17:32.341061211Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:17:32.341214 containerd[1795]: time="2025-02-13T20:17:32.341103703Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:17:32.343613 systemd[1]: Started cri-containerd-8a806caebed6fd1f68375b366a8ef1e274a61a62d7697bb11a2f620bd9fc5dc1.scope - libcontainer container 8a806caebed6fd1f68375b366a8ef1e274a61a62d7697bb11a2f620bd9fc5dc1. Feb 13 20:17:32.348033 containerd[1795]: time="2025-02-13T20:17:32.348010096Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-ff9d88786-8wrbb,Uid:a01e5e0d-d13b-4c53-a211-8d274b915928,Namespace:calico-apiserver,Attempt:5,} returns sandbox id \"3e9eda64368ed76ff00011cc0d377c3c25f54081959544e8e9572b13b979123b\"" Feb 13 20:17:32.348193 systemd[1]: Started cri-containerd-9711cfc3945e4fda85762e300afa56925a9259a7e0b4663feeafe48d595f8f63.scope - libcontainer container 9711cfc3945e4fda85762e300afa56925a9259a7e0b4663feeafe48d595f8f63. Feb 13 20:17:32.348840 systemd[1]: Started cri-containerd-e785262bdd49cbbe3ef002acdb901054462f47491da41db6a6fb527ffd635177.scope - libcontainer container e785262bdd49cbbe3ef002acdb901054462f47491da41db6a6fb527ffd635177. 
Feb 13 20:17:32.360965 containerd[1795]: time="2025-02-13T20:17:32.360940147Z" level=info msg="StartContainer for \"e785262bdd49cbbe3ef002acdb901054462f47491da41db6a6fb527ffd635177\" returns successfully" Feb 13 20:17:32.367926 containerd[1795]: time="2025-02-13T20:17:32.367899222Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-cnhpl,Uid:f56e4292-4136-4f41-a2d5-245b627504ec,Namespace:kube-system,Attempt:5,} returns sandbox id \"8a806caebed6fd1f68375b366a8ef1e274a61a62d7697bb11a2f620bd9fc5dc1\"" Feb 13 20:17:32.369291 containerd[1795]: time="2025-02-13T20:17:32.369268052Z" level=info msg="CreateContainer within sandbox \"8a806caebed6fd1f68375b366a8ef1e274a61a62d7697bb11a2f620bd9fc5dc1\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 20:17:32.372105 containerd[1795]: time="2025-02-13T20:17:32.372060997Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-55886c68fb-v8l5j,Uid:853990b6-ca4d-4293-b605-05f073fd0f8e,Namespace:calico-system,Attempt:5,} returns sandbox id \"9711cfc3945e4fda85762e300afa56925a9259a7e0b4663feeafe48d595f8f63\"" Feb 13 20:17:32.374348 containerd[1795]: time="2025-02-13T20:17:32.374326725Z" level=info msg="CreateContainer within sandbox \"8a806caebed6fd1f68375b366a8ef1e274a61a62d7697bb11a2f620bd9fc5dc1\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"085243f31dd66a0b33feb609ba6064c46214680052fbe1079fc7e188d775edc4\"" Feb 13 20:17:32.374507 containerd[1795]: time="2025-02-13T20:17:32.374495026Z" level=info msg="StartContainer for \"085243f31dd66a0b33feb609ba6064c46214680052fbe1079fc7e188d775edc4\"" Feb 13 20:17:32.397613 systemd[1]: Started cri-containerd-085243f31dd66a0b33feb609ba6064c46214680052fbe1079fc7e188d775edc4.scope - libcontainer container 085243f31dd66a0b33feb609ba6064c46214680052fbe1079fc7e188d775edc4. 
Feb 13 20:17:32.409491 containerd[1795]: time="2025-02-13T20:17:32.409466633Z" level=info msg="StartContainer for \"085243f31dd66a0b33feb609ba6064c46214680052fbe1079fc7e188d775edc4\" returns successfully" Feb 13 20:17:32.851517 kernel: bpftool[6447]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Feb 13 20:17:33.000370 systemd-networkd[1706]: vxlan.calico: Link UP Feb 13 20:17:33.000374 systemd-networkd[1706]: vxlan.calico: Gained carrier Feb 13 20:17:33.191099 kubelet[3267]: I0213 20:17:33.191082 3267 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 20:17:33.194385 kubelet[3267]: I0213 20:17:33.194342 3267 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-frmtf" podStartSLOduration=19.194329509 podStartE2EDuration="19.194329509s" podCreationTimestamp="2025-02-13 20:17:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 20:17:33.194064658 +0000 UTC m=+33.184346574" watchObservedRunningTime="2025-02-13 20:17:33.194329509 +0000 UTC m=+33.184611419" Feb 13 20:17:33.205272 kubelet[3267]: I0213 20:17:33.205236 3267 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-cnhpl" podStartSLOduration=19.205224258 podStartE2EDuration="19.205224258s" podCreationTimestamp="2025-02-13 20:17:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 20:17:33.204792086 +0000 UTC m=+33.195074002" watchObservedRunningTime="2025-02-13 20:17:33.205224258 +0000 UTC m=+33.195506168" Feb 13 20:17:33.550738 systemd-networkd[1706]: cali7439c22ced2: Gained IPv6LL Feb 13 20:17:33.551521 systemd-networkd[1706]: cali61c0d4c2899: Gained IPv6LL Feb 13 20:17:33.552192 systemd-networkd[1706]: cali9ce1500d683: Gained IPv6LL Feb 13 20:17:33.678870 
systemd-networkd[1706]: calia0a5192e377: Gained IPv6LL Feb 13 20:17:34.062589 systemd-networkd[1706]: cali5ef5728a34d: Gained IPv6LL Feb 13 20:17:34.077960 containerd[1795]: time="2025-02-13T20:17:34.077912358Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:17:34.078163 containerd[1795]: time="2025-02-13T20:17:34.078123212Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632" Feb 13 20:17:34.078493 containerd[1795]: time="2025-02-13T20:17:34.078469739Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:17:34.079397 containerd[1795]: time="2025-02-13T20:17:34.079356839Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:17:34.079799 containerd[1795]: time="2025-02-13T20:17:34.079754286Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 1.780461836s" Feb 13 20:17:34.079799 containerd[1795]: time="2025-02-13T20:17:34.079768509Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\"" Feb 13 20:17:34.080233 containerd[1795]: time="2025-02-13T20:17:34.080193168Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Feb 13 20:17:34.080760 containerd[1795]: time="2025-02-13T20:17:34.080719035Z" level=info 
msg="CreateContainer within sandbox \"ff7bef863d1e8a62c7aff72ff536b0538831f3d8be50435f893cfa459de9555e\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Feb 13 20:17:34.085816 containerd[1795]: time="2025-02-13T20:17:34.085800540Z" level=info msg="CreateContainer within sandbox \"ff7bef863d1e8a62c7aff72ff536b0538831f3d8be50435f893cfa459de9555e\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"9c0f915c1e58dca348d4f52e8961a30b0967ef1ca1a23b41258eaab88e2a08bf\"" Feb 13 20:17:34.086026 containerd[1795]: time="2025-02-13T20:17:34.086011570Z" level=info msg="StartContainer for \"9c0f915c1e58dca348d4f52e8961a30b0967ef1ca1a23b41258eaab88e2a08bf\"" Feb 13 20:17:34.109622 systemd[1]: Started cri-containerd-9c0f915c1e58dca348d4f52e8961a30b0967ef1ca1a23b41258eaab88e2a08bf.scope - libcontainer container 9c0f915c1e58dca348d4f52e8961a30b0967ef1ca1a23b41258eaab88e2a08bf. Feb 13 20:17:34.123496 containerd[1795]: time="2025-02-13T20:17:34.123438618Z" level=info msg="StartContainer for \"9c0f915c1e58dca348d4f52e8961a30b0967ef1ca1a23b41258eaab88e2a08bf\" returns successfully" Feb 13 20:17:34.254808 systemd-networkd[1706]: calidc7d898be18: Gained IPv6LL Feb 13 20:17:34.958885 systemd-networkd[1706]: vxlan.calico: Gained IPv6LL Feb 13 20:17:36.156513 containerd[1795]: time="2025-02-13T20:17:36.156488718Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:17:36.156783 containerd[1795]: time="2025-02-13T20:17:36.156726551Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=42001404" Feb 13 20:17:36.157006 containerd[1795]: time="2025-02-13T20:17:36.156993452Z" level=info msg="ImageCreate event name:\"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:17:36.158452 containerd[1795]: 
time="2025-02-13T20:17:36.158435824Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:17:36.158745 containerd[1795]: time="2025-02-13T20:17:36.158732401Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 2.07852497s" Feb 13 20:17:36.158792 containerd[1795]: time="2025-02-13T20:17:36.158745714Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Feb 13 20:17:36.159251 containerd[1795]: time="2025-02-13T20:17:36.159241717Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Feb 13 20:17:36.159865 containerd[1795]: time="2025-02-13T20:17:36.159853084Z" level=info msg="CreateContainer within sandbox \"a92661161cb54178273a254eeafbfcf0f82aac18ce9dbd4b2ed4c408618fca84\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Feb 13 20:17:36.163838 containerd[1795]: time="2025-02-13T20:17:36.163822553Z" level=info msg="CreateContainer within sandbox \"a92661161cb54178273a254eeafbfcf0f82aac18ce9dbd4b2ed4c408618fca84\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"5b91dae784e48ada99d16d20ccdea9e128b9ce69623292172a76d8b41eeb111f\"" Feb 13 20:17:36.164050 containerd[1795]: time="2025-02-13T20:17:36.164035757Z" level=info msg="StartContainer for \"5b91dae784e48ada99d16d20ccdea9e128b9ce69623292172a76d8b41eeb111f\"" Feb 13 20:17:36.184630 systemd[1]: Started 
cri-containerd-5b91dae784e48ada99d16d20ccdea9e128b9ce69623292172a76d8b41eeb111f.scope - libcontainer container 5b91dae784e48ada99d16d20ccdea9e128b9ce69623292172a76d8b41eeb111f. Feb 13 20:17:36.208636 containerd[1795]: time="2025-02-13T20:17:36.208613339Z" level=info msg="StartContainer for \"5b91dae784e48ada99d16d20ccdea9e128b9ce69623292172a76d8b41eeb111f\" returns successfully" Feb 13 20:17:36.601543 containerd[1795]: time="2025-02-13T20:17:36.601521184Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:17:36.601699 containerd[1795]: time="2025-02-13T20:17:36.601683988Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=77" Feb 13 20:17:36.603025 containerd[1795]: time="2025-02-13T20:17:36.603011764Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 443.755095ms" Feb 13 20:17:36.603059 containerd[1795]: time="2025-02-13T20:17:36.603028036Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Feb 13 20:17:36.603503 containerd[1795]: time="2025-02-13T20:17:36.603460904Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Feb 13 20:17:36.604085 containerd[1795]: time="2025-02-13T20:17:36.604072159Z" level=info msg="CreateContainer within sandbox \"3e9eda64368ed76ff00011cc0d377c3c25f54081959544e8e9572b13b979123b\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Feb 13 20:17:36.608102 containerd[1795]: 
time="2025-02-13T20:17:36.608058579Z" level=info msg="CreateContainer within sandbox \"3e9eda64368ed76ff00011cc0d377c3c25f54081959544e8e9572b13b979123b\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"e0450e19b398f51a4a4eef2111997e5a68ca3790634a0afd413a4fa9f6e3b722\"" Feb 13 20:17:36.608299 containerd[1795]: time="2025-02-13T20:17:36.608257045Z" level=info msg="StartContainer for \"e0450e19b398f51a4a4eef2111997e5a68ca3790634a0afd413a4fa9f6e3b722\"" Feb 13 20:17:36.637715 systemd[1]: Started cri-containerd-e0450e19b398f51a4a4eef2111997e5a68ca3790634a0afd413a4fa9f6e3b722.scope - libcontainer container e0450e19b398f51a4a4eef2111997e5a68ca3790634a0afd413a4fa9f6e3b722. Feb 13 20:17:36.667940 containerd[1795]: time="2025-02-13T20:17:36.667915970Z" level=info msg="StartContainer for \"e0450e19b398f51a4a4eef2111997e5a68ca3790634a0afd413a4fa9f6e3b722\" returns successfully" Feb 13 20:17:37.237952 kubelet[3267]: I0213 20:17:37.237828 3267 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-ff9d88786-8wrbb" podStartSLOduration=13.983043634 podStartE2EDuration="18.237791469s" podCreationTimestamp="2025-02-13 20:17:19 +0000 UTC" firstStartedPulling="2025-02-13 20:17:32.348659064 +0000 UTC m=+32.338940975" lastFinishedPulling="2025-02-13 20:17:36.603406899 +0000 UTC m=+36.593688810" observedRunningTime="2025-02-13 20:17:37.236484745 +0000 UTC m=+37.226766726" watchObservedRunningTime="2025-02-13 20:17:37.237791469 +0000 UTC m=+37.228073434" Feb 13 20:17:37.271189 kubelet[3267]: I0213 20:17:37.271034 3267 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-ff9d88786-v9j7t" podStartSLOduration=14.420311017 podStartE2EDuration="18.270980441s" podCreationTimestamp="2025-02-13 20:17:19 +0000 UTC" firstStartedPulling="2025-02-13 20:17:32.308521535 +0000 UTC m=+32.298803446" lastFinishedPulling="2025-02-13 20:17:36.159190959 +0000 UTC 
m=+36.149472870" observedRunningTime="2025-02-13 20:17:37.269199603 +0000 UTC m=+37.259481596" watchObservedRunningTime="2025-02-13 20:17:37.270980441 +0000 UTC m=+37.261262416" Feb 13 20:17:38.223919 kubelet[3267]: I0213 20:17:38.223899 3267 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 20:17:38.223919 kubelet[3267]: I0213 20:17:38.223918 3267 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 20:17:38.424954 containerd[1795]: time="2025-02-13T20:17:38.424898448Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:17:38.425169 containerd[1795]: time="2025-02-13T20:17:38.425117474Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=34141192" Feb 13 20:17:38.425563 containerd[1795]: time="2025-02-13T20:17:38.425520604Z" level=info msg="ImageCreate event name:\"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:17:38.426438 containerd[1795]: time="2025-02-13T20:17:38.426398982Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:17:38.426870 containerd[1795]: time="2025-02-13T20:17:38.426828098Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"35634244\" in 1.823350874s" Feb 13 20:17:38.426870 containerd[1795]: time="2025-02-13T20:17:38.426845261Z" 
level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\"" Feb 13 20:17:38.427323 containerd[1795]: time="2025-02-13T20:17:38.427308570Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Feb 13 20:17:38.430267 containerd[1795]: time="2025-02-13T20:17:38.430245269Z" level=info msg="CreateContainer within sandbox \"9711cfc3945e4fda85762e300afa56925a9259a7e0b4663feeafe48d595f8f63\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Feb 13 20:17:38.434399 containerd[1795]: time="2025-02-13T20:17:38.434358236Z" level=info msg="CreateContainer within sandbox \"9711cfc3945e4fda85762e300afa56925a9259a7e0b4663feeafe48d595f8f63\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"b8d9675e1484942f4de9baa18b837fa3fcaffba33c6c493f12fe0c96bbdb8eb4\"" Feb 13 20:17:38.434618 containerd[1795]: time="2025-02-13T20:17:38.434578257Z" level=info msg="StartContainer for \"b8d9675e1484942f4de9baa18b837fa3fcaffba33c6c493f12fe0c96bbdb8eb4\"" Feb 13 20:17:38.463943 systemd[1]: Started cri-containerd-b8d9675e1484942f4de9baa18b837fa3fcaffba33c6c493f12fe0c96bbdb8eb4.scope - libcontainer container b8d9675e1484942f4de9baa18b837fa3fcaffba33c6c493f12fe0c96bbdb8eb4. 
Feb 13 20:17:38.534098 containerd[1795]: time="2025-02-13T20:17:38.534006400Z" level=info msg="StartContainer for \"b8d9675e1484942f4de9baa18b837fa3fcaffba33c6c493f12fe0c96bbdb8eb4\" returns successfully" Feb 13 20:17:39.249802 kubelet[3267]: I0213 20:17:39.249763 3267 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-55886c68fb-v8l5j" podStartSLOduration=14.196130663 podStartE2EDuration="20.249747315s" podCreationTimestamp="2025-02-13 20:17:19 +0000 UTC" firstStartedPulling="2025-02-13 20:17:32.37363974 +0000 UTC m=+32.363921649" lastFinishedPulling="2025-02-13 20:17:38.42725639 +0000 UTC m=+38.417538301" observedRunningTime="2025-02-13 20:17:39.249498875 +0000 UTC m=+39.239780786" watchObservedRunningTime="2025-02-13 20:17:39.249747315 +0000 UTC m=+39.240029223" Feb 13 20:17:39.975908 containerd[1795]: time="2025-02-13T20:17:39.975882685Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:17:39.976186 containerd[1795]: time="2025-02-13T20:17:39.976048332Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081" Feb 13 20:17:39.976404 containerd[1795]: time="2025-02-13T20:17:39.976390099Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:17:39.977465 containerd[1795]: time="2025-02-13T20:17:39.977451999Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:17:39.977868 containerd[1795]: time="2025-02-13T20:17:39.977853798Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" 
with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 1.550525584s" Feb 13 20:17:39.977923 containerd[1795]: time="2025-02-13T20:17:39.977871674Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\"" Feb 13 20:17:39.978834 containerd[1795]: time="2025-02-13T20:17:39.978823208Z" level=info msg="CreateContainer within sandbox \"ff7bef863d1e8a62c7aff72ff536b0538831f3d8be50435f893cfa459de9555e\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Feb 13 20:17:39.983314 containerd[1795]: time="2025-02-13T20:17:39.983272803Z" level=info msg="CreateContainer within sandbox \"ff7bef863d1e8a62c7aff72ff536b0538831f3d8be50435f893cfa459de9555e\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"733bfb3171657e92f120c98c5fa4e1312ec1e1036fc027400b81114aae2f28b6\"" Feb 13 20:17:39.983531 containerd[1795]: time="2025-02-13T20:17:39.983462583Z" level=info msg="StartContainer for \"733bfb3171657e92f120c98c5fa4e1312ec1e1036fc027400b81114aae2f28b6\"" Feb 13 20:17:40.014638 systemd[1]: Started cri-containerd-733bfb3171657e92f120c98c5fa4e1312ec1e1036fc027400b81114aae2f28b6.scope - libcontainer container 733bfb3171657e92f120c98c5fa4e1312ec1e1036fc027400b81114aae2f28b6. 
Feb 13 20:17:40.029566 containerd[1795]: time="2025-02-13T20:17:40.029516356Z" level=info msg="StartContainer for \"733bfb3171657e92f120c98c5fa4e1312ec1e1036fc027400b81114aae2f28b6\" returns successfully" Feb 13 20:17:40.117275 kubelet[3267]: I0213 20:17:40.117220 3267 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Feb 13 20:17:40.117275 kubelet[3267]: I0213 20:17:40.117290 3267 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Feb 13 20:17:40.257693 kubelet[3267]: I0213 20:17:40.257512 3267 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-kcvvn" podStartSLOduration=13.578434724 podStartE2EDuration="21.257483918s" podCreationTimestamp="2025-02-13 20:17:19 +0000 UTC" firstStartedPulling="2025-02-13 20:17:32.299157538 +0000 UTC m=+32.289439452" lastFinishedPulling="2025-02-13 20:17:39.978206735 +0000 UTC m=+39.968488646" observedRunningTime="2025-02-13 20:17:40.256965211 +0000 UTC m=+40.247247167" watchObservedRunningTime="2025-02-13 20:17:40.257483918 +0000 UTC m=+40.247765858" Feb 13 20:17:43.538786 kubelet[3267]: I0213 20:17:43.538700 3267 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 20:17:56.344293 kubelet[3267]: I0213 20:17:56.344055 3267 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 20:18:00.065645 containerd[1795]: time="2025-02-13T20:18:00.065589702Z" level=info msg="StopPodSandbox for \"4101e018441a5f170b331eaff8faff2caa5b81435959ed6c493ff174cf8464eb\"" Feb 13 20:18:00.066107 containerd[1795]: time="2025-02-13T20:18:00.065721291Z" level=info msg="TearDown network for sandbox \"4101e018441a5f170b331eaff8faff2caa5b81435959ed6c493ff174cf8464eb\" successfully" Feb 13 20:18:00.066107 containerd[1795]: 
time="2025-02-13T20:18:00.065728102Z" level=info msg="StopPodSandbox for \"4101e018441a5f170b331eaff8faff2caa5b81435959ed6c493ff174cf8464eb\" returns successfully" Feb 13 20:18:00.066107 containerd[1795]: time="2025-02-13T20:18:00.066067258Z" level=info msg="RemovePodSandbox for \"4101e018441a5f170b331eaff8faff2caa5b81435959ed6c493ff174cf8464eb\"" Feb 13 20:18:00.066107 containerd[1795]: time="2025-02-13T20:18:00.066082662Z" level=info msg="Forcibly stopping sandbox \"4101e018441a5f170b331eaff8faff2caa5b81435959ed6c493ff174cf8464eb\"" Feb 13 20:18:00.066196 containerd[1795]: time="2025-02-13T20:18:00.066143463Z" level=info msg="TearDown network for sandbox \"4101e018441a5f170b331eaff8faff2caa5b81435959ed6c493ff174cf8464eb\" successfully" Feb 13 20:18:00.067586 containerd[1795]: time="2025-02-13T20:18:00.067575395Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4101e018441a5f170b331eaff8faff2caa5b81435959ed6c493ff174cf8464eb\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 20:18:00.067617 containerd[1795]: time="2025-02-13T20:18:00.067596542Z" level=info msg="RemovePodSandbox \"4101e018441a5f170b331eaff8faff2caa5b81435959ed6c493ff174cf8464eb\" returns successfully" Feb 13 20:18:00.067772 containerd[1795]: time="2025-02-13T20:18:00.067759384Z" level=info msg="StopPodSandbox for \"d1b03cea096b09213f490a993ed65f8af74b79974ee36e7254a5f60edefd9c53\"" Feb 13 20:18:00.067864 containerd[1795]: time="2025-02-13T20:18:00.067839899Z" level=info msg="TearDown network for sandbox \"d1b03cea096b09213f490a993ed65f8af74b79974ee36e7254a5f60edefd9c53\" successfully" Feb 13 20:18:00.067864 containerd[1795]: time="2025-02-13T20:18:00.067861623Z" level=info msg="StopPodSandbox for \"d1b03cea096b09213f490a993ed65f8af74b79974ee36e7254a5f60edefd9c53\" returns successfully" Feb 13 20:18:00.068084 containerd[1795]: time="2025-02-13T20:18:00.068073682Z" level=info msg="RemovePodSandbox for \"d1b03cea096b09213f490a993ed65f8af74b79974ee36e7254a5f60edefd9c53\"" Feb 13 20:18:00.068127 containerd[1795]: time="2025-02-13T20:18:00.068086383Z" level=info msg="Forcibly stopping sandbox \"d1b03cea096b09213f490a993ed65f8af74b79974ee36e7254a5f60edefd9c53\"" Feb 13 20:18:00.068209 containerd[1795]: time="2025-02-13T20:18:00.068158045Z" level=info msg="TearDown network for sandbox \"d1b03cea096b09213f490a993ed65f8af74b79974ee36e7254a5f60edefd9c53\" successfully" Feb 13 20:18:00.069381 containerd[1795]: time="2025-02-13T20:18:00.069369979Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d1b03cea096b09213f490a993ed65f8af74b79974ee36e7254a5f60edefd9c53\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 20:18:00.069412 containerd[1795]: time="2025-02-13T20:18:00.069388394Z" level=info msg="RemovePodSandbox \"d1b03cea096b09213f490a993ed65f8af74b79974ee36e7254a5f60edefd9c53\" returns successfully" Feb 13 20:18:00.069631 containerd[1795]: time="2025-02-13T20:18:00.069619524Z" level=info msg="StopPodSandbox for \"742662f7025f6bf31a0897647c434208ed695dd3a3c685264cb7223a8679270d\"" Feb 13 20:18:00.069693 containerd[1795]: time="2025-02-13T20:18:00.069667954Z" level=info msg="TearDown network for sandbox \"742662f7025f6bf31a0897647c434208ed695dd3a3c685264cb7223a8679270d\" successfully" Feb 13 20:18:00.069719 containerd[1795]: time="2025-02-13T20:18:00.069692362Z" level=info msg="StopPodSandbox for \"742662f7025f6bf31a0897647c434208ed695dd3a3c685264cb7223a8679270d\" returns successfully" Feb 13 20:18:00.069882 containerd[1795]: time="2025-02-13T20:18:00.069851461Z" level=info msg="RemovePodSandbox for \"742662f7025f6bf31a0897647c434208ed695dd3a3c685264cb7223a8679270d\"" Feb 13 20:18:00.069921 containerd[1795]: time="2025-02-13T20:18:00.069884614Z" level=info msg="Forcibly stopping sandbox \"742662f7025f6bf31a0897647c434208ed695dd3a3c685264cb7223a8679270d\"" Feb 13 20:18:00.069947 containerd[1795]: time="2025-02-13T20:18:00.069930309Z" level=info msg="TearDown network for sandbox \"742662f7025f6bf31a0897647c434208ed695dd3a3c685264cb7223a8679270d\" successfully" Feb 13 20:18:00.071100 containerd[1795]: time="2025-02-13T20:18:00.071089985Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"742662f7025f6bf31a0897647c434208ed695dd3a3c685264cb7223a8679270d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 20:18:00.071149 containerd[1795]: time="2025-02-13T20:18:00.071107219Z" level=info msg="RemovePodSandbox \"742662f7025f6bf31a0897647c434208ed695dd3a3c685264cb7223a8679270d\" returns successfully" Feb 13 20:18:00.071294 containerd[1795]: time="2025-02-13T20:18:00.071284472Z" level=info msg="StopPodSandbox for \"59336c9984fec613f36c76eda4de6d6c6f327505190ae7d66c8b70b1d1898a5a\"" Feb 13 20:18:00.071345 containerd[1795]: time="2025-02-13T20:18:00.071334981Z" level=info msg="TearDown network for sandbox \"59336c9984fec613f36c76eda4de6d6c6f327505190ae7d66c8b70b1d1898a5a\" successfully" Feb 13 20:18:00.071366 containerd[1795]: time="2025-02-13T20:18:00.071345878Z" level=info msg="StopPodSandbox for \"59336c9984fec613f36c76eda4de6d6c6f327505190ae7d66c8b70b1d1898a5a\" returns successfully" Feb 13 20:18:00.071474 containerd[1795]: time="2025-02-13T20:18:00.071462068Z" level=info msg="RemovePodSandbox for \"59336c9984fec613f36c76eda4de6d6c6f327505190ae7d66c8b70b1d1898a5a\"" Feb 13 20:18:00.071499 containerd[1795]: time="2025-02-13T20:18:00.071476526Z" level=info msg="Forcibly stopping sandbox \"59336c9984fec613f36c76eda4de6d6c6f327505190ae7d66c8b70b1d1898a5a\"" Feb 13 20:18:00.071581 containerd[1795]: time="2025-02-13T20:18:00.071533571Z" level=info msg="TearDown network for sandbox \"59336c9984fec613f36c76eda4de6d6c6f327505190ae7d66c8b70b1d1898a5a\" successfully" Feb 13 20:18:00.072904 containerd[1795]: time="2025-02-13T20:18:00.072892420Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"59336c9984fec613f36c76eda4de6d6c6f327505190ae7d66c8b70b1d1898a5a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 20:18:00.072948 containerd[1795]: time="2025-02-13T20:18:00.072911164Z" level=info msg="RemovePodSandbox \"59336c9984fec613f36c76eda4de6d6c6f327505190ae7d66c8b70b1d1898a5a\" returns successfully" Feb 13 20:18:00.073080 containerd[1795]: time="2025-02-13T20:18:00.073070586Z" level=info msg="StopPodSandbox for \"79fbcbd4b2d2256cf38904365a9ab3dc6f9cbbcd211a09e10d87bbad90d5214c\"" Feb 13 20:18:00.073149 containerd[1795]: time="2025-02-13T20:18:00.073142513Z" level=info msg="TearDown network for sandbox \"79fbcbd4b2d2256cf38904365a9ab3dc6f9cbbcd211a09e10d87bbad90d5214c\" successfully" Feb 13 20:18:00.073185 containerd[1795]: time="2025-02-13T20:18:00.073149638Z" level=info msg="StopPodSandbox for \"79fbcbd4b2d2256cf38904365a9ab3dc6f9cbbcd211a09e10d87bbad90d5214c\" returns successfully" Feb 13 20:18:00.073302 containerd[1795]: time="2025-02-13T20:18:00.073276512Z" level=info msg="RemovePodSandbox for \"79fbcbd4b2d2256cf38904365a9ab3dc6f9cbbcd211a09e10d87bbad90d5214c\"" Feb 13 20:18:00.073330 containerd[1795]: time="2025-02-13T20:18:00.073304907Z" level=info msg="Forcibly stopping sandbox \"79fbcbd4b2d2256cf38904365a9ab3dc6f9cbbcd211a09e10d87bbad90d5214c\"" Feb 13 20:18:00.073349 containerd[1795]: time="2025-02-13T20:18:00.073336166Z" level=info msg="TearDown network for sandbox \"79fbcbd4b2d2256cf38904365a9ab3dc6f9cbbcd211a09e10d87bbad90d5214c\" successfully" Feb 13 20:18:00.074446 containerd[1795]: time="2025-02-13T20:18:00.074435906Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"79fbcbd4b2d2256cf38904365a9ab3dc6f9cbbcd211a09e10d87bbad90d5214c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 20:18:00.074502 containerd[1795]: time="2025-02-13T20:18:00.074460105Z" level=info msg="RemovePodSandbox \"79fbcbd4b2d2256cf38904365a9ab3dc6f9cbbcd211a09e10d87bbad90d5214c\" returns successfully" Feb 13 20:18:00.074615 containerd[1795]: time="2025-02-13T20:18:00.074605403Z" level=info msg="StopPodSandbox for \"7ed1a43b73d432d232df1df54c79d9006e1b5d16a86f107e55387abae9ed7d7f\"" Feb 13 20:18:00.074653 containerd[1795]: time="2025-02-13T20:18:00.074645688Z" level=info msg="TearDown network for sandbox \"7ed1a43b73d432d232df1df54c79d9006e1b5d16a86f107e55387abae9ed7d7f\" successfully" Feb 13 20:18:00.074678 containerd[1795]: time="2025-02-13T20:18:00.074652285Z" level=info msg="StopPodSandbox for \"7ed1a43b73d432d232df1df54c79d9006e1b5d16a86f107e55387abae9ed7d7f\" returns successfully" Feb 13 20:18:00.074770 containerd[1795]: time="2025-02-13T20:18:00.074758941Z" level=info msg="RemovePodSandbox for \"7ed1a43b73d432d232df1df54c79d9006e1b5d16a86f107e55387abae9ed7d7f\"" Feb 13 20:18:00.074808 containerd[1795]: time="2025-02-13T20:18:00.074788778Z" level=info msg="Forcibly stopping sandbox \"7ed1a43b73d432d232df1df54c79d9006e1b5d16a86f107e55387abae9ed7d7f\"" Feb 13 20:18:00.074855 containerd[1795]: time="2025-02-13T20:18:00.074839054Z" level=info msg="TearDown network for sandbox \"7ed1a43b73d432d232df1df54c79d9006e1b5d16a86f107e55387abae9ed7d7f\" successfully" Feb 13 20:18:00.075919 containerd[1795]: time="2025-02-13T20:18:00.075907724Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7ed1a43b73d432d232df1df54c79d9006e1b5d16a86f107e55387abae9ed7d7f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 20:18:00.075945 containerd[1795]: time="2025-02-13T20:18:00.075927677Z" level=info msg="RemovePodSandbox \"7ed1a43b73d432d232df1df54c79d9006e1b5d16a86f107e55387abae9ed7d7f\" returns successfully" Feb 13 20:18:00.076092 containerd[1795]: time="2025-02-13T20:18:00.076082050Z" level=info msg="StopPodSandbox for \"21e4bf6e21241b54d8bfddcb494e2519a92b48eb90676fa95896db0ca262525c\"" Feb 13 20:18:00.076127 containerd[1795]: time="2025-02-13T20:18:00.076120334Z" level=info msg="TearDown network for sandbox \"21e4bf6e21241b54d8bfddcb494e2519a92b48eb90676fa95896db0ca262525c\" successfully" Feb 13 20:18:00.076127 containerd[1795]: time="2025-02-13T20:18:00.076126729Z" level=info msg="StopPodSandbox for \"21e4bf6e21241b54d8bfddcb494e2519a92b48eb90676fa95896db0ca262525c\" returns successfully" Feb 13 20:18:00.076238 containerd[1795]: time="2025-02-13T20:18:00.076229931Z" level=info msg="RemovePodSandbox for \"21e4bf6e21241b54d8bfddcb494e2519a92b48eb90676fa95896db0ca262525c\"" Feb 13 20:18:00.076262 containerd[1795]: time="2025-02-13T20:18:00.076240898Z" level=info msg="Forcibly stopping sandbox \"21e4bf6e21241b54d8bfddcb494e2519a92b48eb90676fa95896db0ca262525c\"" Feb 13 20:18:00.076287 containerd[1795]: time="2025-02-13T20:18:00.076272902Z" level=info msg="TearDown network for sandbox \"21e4bf6e21241b54d8bfddcb494e2519a92b48eb90676fa95896db0ca262525c\" successfully" Feb 13 20:18:00.077352 containerd[1795]: time="2025-02-13T20:18:00.077341590Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"21e4bf6e21241b54d8bfddcb494e2519a92b48eb90676fa95896db0ca262525c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 20:18:00.077381 containerd[1795]: time="2025-02-13T20:18:00.077359160Z" level=info msg="RemovePodSandbox \"21e4bf6e21241b54d8bfddcb494e2519a92b48eb90676fa95896db0ca262525c\" returns successfully" Feb 13 20:18:00.077530 containerd[1795]: time="2025-02-13T20:18:00.077520041Z" level=info msg="StopPodSandbox for \"d7f28ca0ecacdf622e04d94b3cfec4244085c08948f732c8def05b038973d2ca\"" Feb 13 20:18:00.077567 containerd[1795]: time="2025-02-13T20:18:00.077559555Z" level=info msg="TearDown network for sandbox \"d7f28ca0ecacdf622e04d94b3cfec4244085c08948f732c8def05b038973d2ca\" successfully" Feb 13 20:18:00.077588 containerd[1795]: time="2025-02-13T20:18:00.077566061Z" level=info msg="StopPodSandbox for \"d7f28ca0ecacdf622e04d94b3cfec4244085c08948f732c8def05b038973d2ca\" returns successfully" Feb 13 20:18:00.077739 containerd[1795]: time="2025-02-13T20:18:00.077729130Z" level=info msg="RemovePodSandbox for \"d7f28ca0ecacdf622e04d94b3cfec4244085c08948f732c8def05b038973d2ca\"" Feb 13 20:18:00.077778 containerd[1795]: time="2025-02-13T20:18:00.077741696Z" level=info msg="Forcibly stopping sandbox \"d7f28ca0ecacdf622e04d94b3cfec4244085c08948f732c8def05b038973d2ca\"" Feb 13 20:18:00.077801 containerd[1795]: time="2025-02-13T20:18:00.077786302Z" level=info msg="TearDown network for sandbox \"d7f28ca0ecacdf622e04d94b3cfec4244085c08948f732c8def05b038973d2ca\" successfully" Feb 13 20:18:00.078794 containerd[1795]: time="2025-02-13T20:18:00.078784327Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d7f28ca0ecacdf622e04d94b3cfec4244085c08948f732c8def05b038973d2ca\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 20:18:00.078823 containerd[1795]: time="2025-02-13T20:18:00.078801153Z" level=info msg="RemovePodSandbox \"d7f28ca0ecacdf622e04d94b3cfec4244085c08948f732c8def05b038973d2ca\" returns successfully" Feb 13 20:18:00.078941 containerd[1795]: time="2025-02-13T20:18:00.078932007Z" level=info msg="StopPodSandbox for \"d735a1491d84d0eaf4a12bd073967a73bb758324b20ddd2802535023b346c9b0\"" Feb 13 20:18:00.078992 containerd[1795]: time="2025-02-13T20:18:00.078985480Z" level=info msg="TearDown network for sandbox \"d735a1491d84d0eaf4a12bd073967a73bb758324b20ddd2802535023b346c9b0\" successfully" Feb 13 20:18:00.079030 containerd[1795]: time="2025-02-13T20:18:00.078991907Z" level=info msg="StopPodSandbox for \"d735a1491d84d0eaf4a12bd073967a73bb758324b20ddd2802535023b346c9b0\" returns successfully" Feb 13 20:18:00.079104 containerd[1795]: time="2025-02-13T20:18:00.079095555Z" level=info msg="RemovePodSandbox for \"d735a1491d84d0eaf4a12bd073967a73bb758324b20ddd2802535023b346c9b0\"" Feb 13 20:18:00.079125 containerd[1795]: time="2025-02-13T20:18:00.079104979Z" level=info msg="Forcibly stopping sandbox \"d735a1491d84d0eaf4a12bd073967a73bb758324b20ddd2802535023b346c9b0\"" Feb 13 20:18:00.079146 containerd[1795]: time="2025-02-13T20:18:00.079132521Z" level=info msg="TearDown network for sandbox \"d735a1491d84d0eaf4a12bd073967a73bb758324b20ddd2802535023b346c9b0\" successfully" Feb 13 20:18:00.080275 containerd[1795]: time="2025-02-13T20:18:00.080264693Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d735a1491d84d0eaf4a12bd073967a73bb758324b20ddd2802535023b346c9b0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 20:18:00.080297 containerd[1795]: time="2025-02-13T20:18:00.080289361Z" level=info msg="RemovePodSandbox \"d735a1491d84d0eaf4a12bd073967a73bb758324b20ddd2802535023b346c9b0\" returns successfully" Feb 13 20:18:00.080389 containerd[1795]: time="2025-02-13T20:18:00.080380702Z" level=info msg="StopPodSandbox for \"50a08a9a1d45fee7025d64772410578495a19605755cc69a8c3e5ea906ec2904\"" Feb 13 20:18:00.080423 containerd[1795]: time="2025-02-13T20:18:00.080416314Z" level=info msg="TearDown network for sandbox \"50a08a9a1d45fee7025d64772410578495a19605755cc69a8c3e5ea906ec2904\" successfully" Feb 13 20:18:00.080443 containerd[1795]: time="2025-02-13T20:18:00.080422530Z" level=info msg="StopPodSandbox for \"50a08a9a1d45fee7025d64772410578495a19605755cc69a8c3e5ea906ec2904\" returns successfully" Feb 13 20:18:00.080577 containerd[1795]: time="2025-02-13T20:18:00.080567828Z" level=info msg="RemovePodSandbox for \"50a08a9a1d45fee7025d64772410578495a19605755cc69a8c3e5ea906ec2904\"" Feb 13 20:18:00.080598 containerd[1795]: time="2025-02-13T20:18:00.080578107Z" level=info msg="Forcibly stopping sandbox \"50a08a9a1d45fee7025d64772410578495a19605755cc69a8c3e5ea906ec2904\"" Feb 13 20:18:00.080643 containerd[1795]: time="2025-02-13T20:18:00.080626919Z" level=info msg="TearDown network for sandbox \"50a08a9a1d45fee7025d64772410578495a19605755cc69a8c3e5ea906ec2904\" successfully" Feb 13 20:18:00.081703 containerd[1795]: time="2025-02-13T20:18:00.081679233Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"50a08a9a1d45fee7025d64772410578495a19605755cc69a8c3e5ea906ec2904\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 20:18:00.081752 containerd[1795]: time="2025-02-13T20:18:00.081728451Z" level=info msg="RemovePodSandbox \"50a08a9a1d45fee7025d64772410578495a19605755cc69a8c3e5ea906ec2904\" returns successfully" Feb 13 20:18:00.081907 containerd[1795]: time="2025-02-13T20:18:00.081898323Z" level=info msg="StopPodSandbox for \"b4ce2a92348a0c26ae24d67b825325d18f8179c342c157b6519d8610f65f514b\"" Feb 13 20:18:00.081961 containerd[1795]: time="2025-02-13T20:18:00.081952672Z" level=info msg="TearDown network for sandbox \"b4ce2a92348a0c26ae24d67b825325d18f8179c342c157b6519d8610f65f514b\" successfully" Feb 13 20:18:00.081961 containerd[1795]: time="2025-02-13T20:18:00.081960093Z" level=info msg="StopPodSandbox for \"b4ce2a92348a0c26ae24d67b825325d18f8179c342c157b6519d8610f65f514b\" returns successfully" Feb 13 20:18:00.082070 containerd[1795]: time="2025-02-13T20:18:00.082060520Z" level=info msg="RemovePodSandbox for \"b4ce2a92348a0c26ae24d67b825325d18f8179c342c157b6519d8610f65f514b\"" Feb 13 20:18:00.082093 containerd[1795]: time="2025-02-13T20:18:00.082072160Z" level=info msg="Forcibly stopping sandbox \"b4ce2a92348a0c26ae24d67b825325d18f8179c342c157b6519d8610f65f514b\"" Feb 13 20:18:00.082121 containerd[1795]: time="2025-02-13T20:18:00.082105068Z" level=info msg="TearDown network for sandbox \"b4ce2a92348a0c26ae24d67b825325d18f8179c342c157b6519d8610f65f514b\" successfully" Feb 13 20:18:00.083235 containerd[1795]: time="2025-02-13T20:18:00.083224884Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b4ce2a92348a0c26ae24d67b825325d18f8179c342c157b6519d8610f65f514b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 20:18:00.083262 containerd[1795]: time="2025-02-13T20:18:00.083241626Z" level=info msg="RemovePodSandbox \"b4ce2a92348a0c26ae24d67b825325d18f8179c342c157b6519d8610f65f514b\" returns successfully" Feb 13 20:18:00.083384 containerd[1795]: time="2025-02-13T20:18:00.083374277Z" level=info msg="StopPodSandbox for \"fedba8a051bccde5aefce8bb80d817a2869be0c7484736372697428c813b98ff\"" Feb 13 20:18:00.083418 containerd[1795]: time="2025-02-13T20:18:00.083411412Z" level=info msg="TearDown network for sandbox \"fedba8a051bccde5aefce8bb80d817a2869be0c7484736372697428c813b98ff\" successfully" Feb 13 20:18:00.083437 containerd[1795]: time="2025-02-13T20:18:00.083417809Z" level=info msg="StopPodSandbox for \"fedba8a051bccde5aefce8bb80d817a2869be0c7484736372697428c813b98ff\" returns successfully" Feb 13 20:18:00.083637 containerd[1795]: time="2025-02-13T20:18:00.083560121Z" level=info msg="RemovePodSandbox for \"fedba8a051bccde5aefce8bb80d817a2869be0c7484736372697428c813b98ff\"" Feb 13 20:18:00.083637 containerd[1795]: time="2025-02-13T20:18:00.083587308Z" level=info msg="Forcibly stopping sandbox \"fedba8a051bccde5aefce8bb80d817a2869be0c7484736372697428c813b98ff\"" Feb 13 20:18:00.083740 containerd[1795]: time="2025-02-13T20:18:00.083680768Z" level=info msg="TearDown network for sandbox \"fedba8a051bccde5aefce8bb80d817a2869be0c7484736372697428c813b98ff\" successfully" Feb 13 20:18:00.084765 containerd[1795]: time="2025-02-13T20:18:00.084724435Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fedba8a051bccde5aefce8bb80d817a2869be0c7484736372697428c813b98ff\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 20:18:00.084765 containerd[1795]: time="2025-02-13T20:18:00.084743952Z" level=info msg="RemovePodSandbox \"fedba8a051bccde5aefce8bb80d817a2869be0c7484736372697428c813b98ff\" returns successfully" Feb 13 20:18:00.084928 containerd[1795]: time="2025-02-13T20:18:00.084888540Z" level=info msg="StopPodSandbox for \"53ad3a4fafcda47a238d07836dd93b5693aae139440c6f77a6d1eb3fbc605fac\"" Feb 13 20:18:00.084960 containerd[1795]: time="2025-02-13T20:18:00.084930069Z" level=info msg="TearDown network for sandbox \"53ad3a4fafcda47a238d07836dd93b5693aae139440c6f77a6d1eb3fbc605fac\" successfully" Feb 13 20:18:00.084960 containerd[1795]: time="2025-02-13T20:18:00.084936407Z" level=info msg="StopPodSandbox for \"53ad3a4fafcda47a238d07836dd93b5693aae139440c6f77a6d1eb3fbc605fac\" returns successfully" Feb 13 20:18:00.085099 containerd[1795]: time="2025-02-13T20:18:00.085061216Z" level=info msg="RemovePodSandbox for \"53ad3a4fafcda47a238d07836dd93b5693aae139440c6f77a6d1eb3fbc605fac\"" Feb 13 20:18:00.085099 containerd[1795]: time="2025-02-13T20:18:00.085071207Z" level=info msg="Forcibly stopping sandbox \"53ad3a4fafcda47a238d07836dd93b5693aae139440c6f77a6d1eb3fbc605fac\"" Feb 13 20:18:00.085197 containerd[1795]: time="2025-02-13T20:18:00.085101134Z" level=info msg="TearDown network for sandbox \"53ad3a4fafcda47a238d07836dd93b5693aae139440c6f77a6d1eb3fbc605fac\" successfully" Feb 13 20:18:00.086825 containerd[1795]: time="2025-02-13T20:18:00.086779971Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"53ad3a4fafcda47a238d07836dd93b5693aae139440c6f77a6d1eb3fbc605fac\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 20:18:00.086825 containerd[1795]: time="2025-02-13T20:18:00.086801329Z" level=info msg="RemovePodSandbox \"53ad3a4fafcda47a238d07836dd93b5693aae139440c6f77a6d1eb3fbc605fac\" returns successfully" Feb 13 20:18:00.087115 containerd[1795]: time="2025-02-13T20:18:00.087056908Z" level=info msg="StopPodSandbox for \"4221ea701a39ec4dd3a0d137cab931e4952b6af05d86d5f2ccc74c4d565d63d6\"" Feb 13 20:18:00.087162 containerd[1795]: time="2025-02-13T20:18:00.087137098Z" level=info msg="TearDown network for sandbox \"4221ea701a39ec4dd3a0d137cab931e4952b6af05d86d5f2ccc74c4d565d63d6\" successfully" Feb 13 20:18:00.087162 containerd[1795]: time="2025-02-13T20:18:00.087157993Z" level=info msg="StopPodSandbox for \"4221ea701a39ec4dd3a0d137cab931e4952b6af05d86d5f2ccc74c4d565d63d6\" returns successfully" Feb 13 20:18:00.087350 containerd[1795]: time="2025-02-13T20:18:00.087314924Z" level=info msg="RemovePodSandbox for \"4221ea701a39ec4dd3a0d137cab931e4952b6af05d86d5f2ccc74c4d565d63d6\"" Feb 13 20:18:00.087350 containerd[1795]: time="2025-02-13T20:18:00.087344692Z" level=info msg="Forcibly stopping sandbox \"4221ea701a39ec4dd3a0d137cab931e4952b6af05d86d5f2ccc74c4d565d63d6\"" Feb 13 20:18:00.087396 containerd[1795]: time="2025-02-13T20:18:00.087378441Z" level=info msg="TearDown network for sandbox \"4221ea701a39ec4dd3a0d137cab931e4952b6af05d86d5f2ccc74c4d565d63d6\" successfully" Feb 13 20:18:00.088434 containerd[1795]: time="2025-02-13T20:18:00.088394675Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4221ea701a39ec4dd3a0d137cab931e4952b6af05d86d5f2ccc74c4d565d63d6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 20:18:00.088434 containerd[1795]: time="2025-02-13T20:18:00.088431925Z" level=info msg="RemovePodSandbox \"4221ea701a39ec4dd3a0d137cab931e4952b6af05d86d5f2ccc74c4d565d63d6\" returns successfully" Feb 13 20:18:00.088669 containerd[1795]: time="2025-02-13T20:18:00.088607230Z" level=info msg="StopPodSandbox for \"17106c8bb18d62f653a47520e50170e8dd8bd8982475b8f783cc008aa8f335b4\"" Feb 13 20:18:00.088743 containerd[1795]: time="2025-02-13T20:18:00.088703060Z" level=info msg="TearDown network for sandbox \"17106c8bb18d62f653a47520e50170e8dd8bd8982475b8f783cc008aa8f335b4\" successfully" Feb 13 20:18:00.088743 containerd[1795]: time="2025-02-13T20:18:00.088736847Z" level=info msg="StopPodSandbox for \"17106c8bb18d62f653a47520e50170e8dd8bd8982475b8f783cc008aa8f335b4\" returns successfully" Feb 13 20:18:00.088933 containerd[1795]: time="2025-02-13T20:18:00.088892790Z" level=info msg="RemovePodSandbox for \"17106c8bb18d62f653a47520e50170e8dd8bd8982475b8f783cc008aa8f335b4\"" Feb 13 20:18:00.088933 containerd[1795]: time="2025-02-13T20:18:00.088902726Z" level=info msg="Forcibly stopping sandbox \"17106c8bb18d62f653a47520e50170e8dd8bd8982475b8f783cc008aa8f335b4\"" Feb 13 20:18:00.089051 containerd[1795]: time="2025-02-13T20:18:00.088974760Z" level=info msg="TearDown network for sandbox \"17106c8bb18d62f653a47520e50170e8dd8bd8982475b8f783cc008aa8f335b4\" successfully" Feb 13 20:18:00.090168 containerd[1795]: time="2025-02-13T20:18:00.090113407Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"17106c8bb18d62f653a47520e50170e8dd8bd8982475b8f783cc008aa8f335b4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 20:18:00.090168 containerd[1795]: time="2025-02-13T20:18:00.090130881Z" level=info msg="RemovePodSandbox \"17106c8bb18d62f653a47520e50170e8dd8bd8982475b8f783cc008aa8f335b4\" returns successfully" Feb 13 20:18:00.090319 containerd[1795]: time="2025-02-13T20:18:00.090280577Z" level=info msg="StopPodSandbox for \"fd52d3f97e9a1405278690dbfe9773e4c5bf6ddadf750c07b522c7e0964feb47\"" Feb 13 20:18:00.090352 containerd[1795]: time="2025-02-13T20:18:00.090321502Z" level=info msg="TearDown network for sandbox \"fd52d3f97e9a1405278690dbfe9773e4c5bf6ddadf750c07b522c7e0964feb47\" successfully" Feb 13 20:18:00.090352 containerd[1795]: time="2025-02-13T20:18:00.090327803Z" level=info msg="StopPodSandbox for \"fd52d3f97e9a1405278690dbfe9773e4c5bf6ddadf750c07b522c7e0964feb47\" returns successfully" Feb 13 20:18:00.090510 containerd[1795]: time="2025-02-13T20:18:00.090456643Z" level=info msg="RemovePodSandbox for \"fd52d3f97e9a1405278690dbfe9773e4c5bf6ddadf750c07b522c7e0964feb47\"" Feb 13 20:18:00.090510 containerd[1795]: time="2025-02-13T20:18:00.090484277Z" level=info msg="Forcibly stopping sandbox \"fd52d3f97e9a1405278690dbfe9773e4c5bf6ddadf750c07b522c7e0964feb47\"" Feb 13 20:18:00.090587 containerd[1795]: time="2025-02-13T20:18:00.090516177Z" level=info msg="TearDown network for sandbox \"fd52d3f97e9a1405278690dbfe9773e4c5bf6ddadf750c07b522c7e0964feb47\" successfully" Feb 13 20:18:00.091570 containerd[1795]: time="2025-02-13T20:18:00.091531180Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fd52d3f97e9a1405278690dbfe9773e4c5bf6ddadf750c07b522c7e0964feb47\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 20:18:00.091570 containerd[1795]: time="2025-02-13T20:18:00.091547969Z" level=info msg="RemovePodSandbox \"fd52d3f97e9a1405278690dbfe9773e4c5bf6ddadf750c07b522c7e0964feb47\" returns successfully" Feb 13 20:18:00.091737 containerd[1795]: time="2025-02-13T20:18:00.091704324Z" level=info msg="StopPodSandbox for \"e924f016fb6c20430a67bbab9156e89c69d4fea27ab974c6c2a114f4e60ddfaf\"" Feb 13 20:18:00.091770 containerd[1795]: time="2025-02-13T20:18:00.091760439Z" level=info msg="TearDown network for sandbox \"e924f016fb6c20430a67bbab9156e89c69d4fea27ab974c6c2a114f4e60ddfaf\" successfully" Feb 13 20:18:00.091770 containerd[1795]: time="2025-02-13T20:18:00.091766686Z" level=info msg="StopPodSandbox for \"e924f016fb6c20430a67bbab9156e89c69d4fea27ab974c6c2a114f4e60ddfaf\" returns successfully" Feb 13 20:18:00.091899 containerd[1795]: time="2025-02-13T20:18:00.091872830Z" level=info msg="RemovePodSandbox for \"e924f016fb6c20430a67bbab9156e89c69d4fea27ab974c6c2a114f4e60ddfaf\"" Feb 13 20:18:00.091899 containerd[1795]: time="2025-02-13T20:18:00.091883038Z" level=info msg="Forcibly stopping sandbox \"e924f016fb6c20430a67bbab9156e89c69d4fea27ab974c6c2a114f4e60ddfaf\"" Feb 13 20:18:00.091993 containerd[1795]: time="2025-02-13T20:18:00.091939310Z" level=info msg="TearDown network for sandbox \"e924f016fb6c20430a67bbab9156e89c69d4fea27ab974c6c2a114f4e60ddfaf\" successfully" Feb 13 20:18:00.093049 containerd[1795]: time="2025-02-13T20:18:00.093009749Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e924f016fb6c20430a67bbab9156e89c69d4fea27ab974c6c2a114f4e60ddfaf\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 20:18:00.093049 containerd[1795]: time="2025-02-13T20:18:00.093026844Z" level=info msg="RemovePodSandbox \"e924f016fb6c20430a67bbab9156e89c69d4fea27ab974c6c2a114f4e60ddfaf\" returns successfully" Feb 13 20:18:00.093242 containerd[1795]: time="2025-02-13T20:18:00.093188738Z" level=info msg="StopPodSandbox for \"a6724c0edda64bd1462a71a9c7e44436716599495667250f4790209be0006577\"" Feb 13 20:18:00.093310 containerd[1795]: time="2025-02-13T20:18:00.093298129Z" level=info msg="TearDown network for sandbox \"a6724c0edda64bd1462a71a9c7e44436716599495667250f4790209be0006577\" successfully" Feb 13 20:18:00.093310 containerd[1795]: time="2025-02-13T20:18:00.093304934Z" level=info msg="StopPodSandbox for \"a6724c0edda64bd1462a71a9c7e44436716599495667250f4790209be0006577\" returns successfully" Feb 13 20:18:00.093402 containerd[1795]: time="2025-02-13T20:18:00.093393269Z" level=info msg="RemovePodSandbox for \"a6724c0edda64bd1462a71a9c7e44436716599495667250f4790209be0006577\"" Feb 13 20:18:00.093423 containerd[1795]: time="2025-02-13T20:18:00.093404032Z" level=info msg="Forcibly stopping sandbox \"a6724c0edda64bd1462a71a9c7e44436716599495667250f4790209be0006577\"" Feb 13 20:18:00.093445 containerd[1795]: time="2025-02-13T20:18:00.093432418Z" level=info msg="TearDown network for sandbox \"a6724c0edda64bd1462a71a9c7e44436716599495667250f4790209be0006577\" successfully" Feb 13 20:18:00.094523 containerd[1795]: time="2025-02-13T20:18:00.094478904Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a6724c0edda64bd1462a71a9c7e44436716599495667250f4790209be0006577\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 20:18:00.094523 containerd[1795]: time="2025-02-13T20:18:00.094512788Z" level=info msg="RemovePodSandbox \"a6724c0edda64bd1462a71a9c7e44436716599495667250f4790209be0006577\" returns successfully" Feb 13 20:18:00.094742 containerd[1795]: time="2025-02-13T20:18:00.094695430Z" level=info msg="StopPodSandbox for \"26c4c6208b535369ade9ba2d1c9efe43b14de5e700df1bb94d808459209f5627\"" Feb 13 20:18:00.094797 containerd[1795]: time="2025-02-13T20:18:00.094765562Z" level=info msg="TearDown network for sandbox \"26c4c6208b535369ade9ba2d1c9efe43b14de5e700df1bb94d808459209f5627\" successfully" Feb 13 20:18:00.094797 containerd[1795]: time="2025-02-13T20:18:00.094794320Z" level=info msg="StopPodSandbox for \"26c4c6208b535369ade9ba2d1c9efe43b14de5e700df1bb94d808459209f5627\" returns successfully" Feb 13 20:18:00.095070 containerd[1795]: time="2025-02-13T20:18:00.095019712Z" level=info msg="RemovePodSandbox for \"26c4c6208b535369ade9ba2d1c9efe43b14de5e700df1bb94d808459209f5627\"" Feb 13 20:18:00.095070 containerd[1795]: time="2025-02-13T20:18:00.095047517Z" level=info msg="Forcibly stopping sandbox \"26c4c6208b535369ade9ba2d1c9efe43b14de5e700df1bb94d808459209f5627\"" Feb 13 20:18:00.095164 containerd[1795]: time="2025-02-13T20:18:00.095096481Z" level=info msg="TearDown network for sandbox \"26c4c6208b535369ade9ba2d1c9efe43b14de5e700df1bb94d808459209f5627\" successfully" Feb 13 20:18:00.096353 containerd[1795]: time="2025-02-13T20:18:00.096316129Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"26c4c6208b535369ade9ba2d1c9efe43b14de5e700df1bb94d808459209f5627\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 20:18:00.096389 containerd[1795]: time="2025-02-13T20:18:00.096352274Z" level=info msg="RemovePodSandbox \"26c4c6208b535369ade9ba2d1c9efe43b14de5e700df1bb94d808459209f5627\" returns successfully" Feb 13 20:18:00.096528 containerd[1795]: time="2025-02-13T20:18:00.096449902Z" level=info msg="StopPodSandbox for \"39518c1b4feff14618099d0a7fd9326c316b2f1535276072cfc9ea84f951f16b\"" Feb 13 20:18:00.096528 containerd[1795]: time="2025-02-13T20:18:00.096525211Z" level=info msg="TearDown network for sandbox \"39518c1b4feff14618099d0a7fd9326c316b2f1535276072cfc9ea84f951f16b\" successfully" Feb 13 20:18:00.096528 containerd[1795]: time="2025-02-13T20:18:00.096546184Z" level=info msg="StopPodSandbox for \"39518c1b4feff14618099d0a7fd9326c316b2f1535276072cfc9ea84f951f16b\" returns successfully" Feb 13 20:18:00.096627 containerd[1795]: time="2025-02-13T20:18:00.096614744Z" level=info msg="RemovePodSandbox for \"39518c1b4feff14618099d0a7fd9326c316b2f1535276072cfc9ea84f951f16b\"" Feb 13 20:18:00.096649 containerd[1795]: time="2025-02-13T20:18:00.096623553Z" level=info msg="Forcibly stopping sandbox \"39518c1b4feff14618099d0a7fd9326c316b2f1535276072cfc9ea84f951f16b\"" Feb 13 20:18:00.096671 containerd[1795]: time="2025-02-13T20:18:00.096665023Z" level=info msg="TearDown network for sandbox \"39518c1b4feff14618099d0a7fd9326c316b2f1535276072cfc9ea84f951f16b\" successfully" Feb 13 20:18:00.097846 containerd[1795]: time="2025-02-13T20:18:00.097811025Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"39518c1b4feff14618099d0a7fd9326c316b2f1535276072cfc9ea84f951f16b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 20:18:00.097846 containerd[1795]: time="2025-02-13T20:18:00.097845552Z" level=info msg="RemovePodSandbox \"39518c1b4feff14618099d0a7fd9326c316b2f1535276072cfc9ea84f951f16b\" returns successfully" Feb 13 20:18:00.098014 containerd[1795]: time="2025-02-13T20:18:00.097962679Z" level=info msg="StopPodSandbox for \"9ad3372cb116d6054d70b1eaa52144318f392d424263acd32de291b731176c1b\"" Feb 13 20:18:00.098045 containerd[1795]: time="2025-02-13T20:18:00.098035449Z" level=info msg="TearDown network for sandbox \"9ad3372cb116d6054d70b1eaa52144318f392d424263acd32de291b731176c1b\" successfully" Feb 13 20:18:00.098045 containerd[1795]: time="2025-02-13T20:18:00.098041869Z" level=info msg="StopPodSandbox for \"9ad3372cb116d6054d70b1eaa52144318f392d424263acd32de291b731176c1b\" returns successfully" Feb 13 20:18:00.098171 containerd[1795]: time="2025-02-13T20:18:00.098135627Z" level=info msg="RemovePodSandbox for \"9ad3372cb116d6054d70b1eaa52144318f392d424263acd32de291b731176c1b\"" Feb 13 20:18:00.098171 containerd[1795]: time="2025-02-13T20:18:00.098144778Z" level=info msg="Forcibly stopping sandbox \"9ad3372cb116d6054d70b1eaa52144318f392d424263acd32de291b731176c1b\"" Feb 13 20:18:00.098224 containerd[1795]: time="2025-02-13T20:18:00.098190823Z" level=info msg="TearDown network for sandbox \"9ad3372cb116d6054d70b1eaa52144318f392d424263acd32de291b731176c1b\" successfully" Feb 13 20:18:00.099259 containerd[1795]: time="2025-02-13T20:18:00.099220240Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9ad3372cb116d6054d70b1eaa52144318f392d424263acd32de291b731176c1b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 20:18:00.099259 containerd[1795]: time="2025-02-13T20:18:00.099236797Z" level=info msg="RemovePodSandbox \"9ad3372cb116d6054d70b1eaa52144318f392d424263acd32de291b731176c1b\" returns successfully" Feb 13 20:18:00.099391 containerd[1795]: time="2025-02-13T20:18:00.099381917Z" level=info msg="StopPodSandbox for \"4d57310ec40dcb370935e6e9bdfcb6c64a98c34f917a4d8dbc684a1876240bd1\"" Feb 13 20:18:00.099426 containerd[1795]: time="2025-02-13T20:18:00.099419673Z" level=info msg="TearDown network for sandbox \"4d57310ec40dcb370935e6e9bdfcb6c64a98c34f917a4d8dbc684a1876240bd1\" successfully" Feb 13 20:18:00.099445 containerd[1795]: time="2025-02-13T20:18:00.099426157Z" level=info msg="StopPodSandbox for \"4d57310ec40dcb370935e6e9bdfcb6c64a98c34f917a4d8dbc684a1876240bd1\" returns successfully" Feb 13 20:18:00.099592 containerd[1795]: time="2025-02-13T20:18:00.099555107Z" level=info msg="RemovePodSandbox for \"4d57310ec40dcb370935e6e9bdfcb6c64a98c34f917a4d8dbc684a1876240bd1\"" Feb 13 20:18:00.099592 containerd[1795]: time="2025-02-13T20:18:00.099587761Z" level=info msg="Forcibly stopping sandbox \"4d57310ec40dcb370935e6e9bdfcb6c64a98c34f917a4d8dbc684a1876240bd1\"" Feb 13 20:18:00.099682 containerd[1795]: time="2025-02-13T20:18:00.099621766Z" level=info msg="TearDown network for sandbox \"4d57310ec40dcb370935e6e9bdfcb6c64a98c34f917a4d8dbc684a1876240bd1\" successfully" Feb 13 20:18:00.100730 containerd[1795]: time="2025-02-13T20:18:00.100691043Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4d57310ec40dcb370935e6e9bdfcb6c64a98c34f917a4d8dbc684a1876240bd1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 20:18:00.100730 containerd[1795]: time="2025-02-13T20:18:00.100708777Z" level=info msg="RemovePodSandbox \"4d57310ec40dcb370935e6e9bdfcb6c64a98c34f917a4d8dbc684a1876240bd1\" returns successfully" Feb 13 20:18:00.100909 containerd[1795]: time="2025-02-13T20:18:00.100866937Z" level=info msg="StopPodSandbox for \"68b777db6f50b3f0cb4f0f1a6c46d29799a1daa82d37d65230cc719071ff323c\"" Feb 13 20:18:00.100938 containerd[1795]: time="2025-02-13T20:18:00.100925919Z" level=info msg="TearDown network for sandbox \"68b777db6f50b3f0cb4f0f1a6c46d29799a1daa82d37d65230cc719071ff323c\" successfully" Feb 13 20:18:00.100938 containerd[1795]: time="2025-02-13T20:18:00.100932679Z" level=info msg="StopPodSandbox for \"68b777db6f50b3f0cb4f0f1a6c46d29799a1daa82d37d65230cc719071ff323c\" returns successfully" Feb 13 20:18:00.101089 containerd[1795]: time="2025-02-13T20:18:00.101053919Z" level=info msg="RemovePodSandbox for \"68b777db6f50b3f0cb4f0f1a6c46d29799a1daa82d37d65230cc719071ff323c\"" Feb 13 20:18:00.101089 containerd[1795]: time="2025-02-13T20:18:00.101082580Z" level=info msg="Forcibly stopping sandbox \"68b777db6f50b3f0cb4f0f1a6c46d29799a1daa82d37d65230cc719071ff323c\"" Feb 13 20:18:00.101140 containerd[1795]: time="2025-02-13T20:18:00.101113375Z" level=info msg="TearDown network for sandbox \"68b777db6f50b3f0cb4f0f1a6c46d29799a1daa82d37d65230cc719071ff323c\" successfully" Feb 13 20:18:00.102197 containerd[1795]: time="2025-02-13T20:18:00.102156986Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"68b777db6f50b3f0cb4f0f1a6c46d29799a1daa82d37d65230cc719071ff323c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 20:18:00.102197 containerd[1795]: time="2025-02-13T20:18:00.102174350Z" level=info msg="RemovePodSandbox \"68b777db6f50b3f0cb4f0f1a6c46d29799a1daa82d37d65230cc719071ff323c\" returns successfully" Feb 13 20:18:00.102322 containerd[1795]: time="2025-02-13T20:18:00.102311385Z" level=info msg="StopPodSandbox for \"3baac9f550f08e67ebd65d0dc083ed7988985fc017ce08034e9dbf44d3c29e50\"" Feb 13 20:18:00.102366 containerd[1795]: time="2025-02-13T20:18:00.102358774Z" level=info msg="TearDown network for sandbox \"3baac9f550f08e67ebd65d0dc083ed7988985fc017ce08034e9dbf44d3c29e50\" successfully" Feb 13 20:18:00.102385 containerd[1795]: time="2025-02-13T20:18:00.102366254Z" level=info msg="StopPodSandbox for \"3baac9f550f08e67ebd65d0dc083ed7988985fc017ce08034e9dbf44d3c29e50\" returns successfully" Feb 13 20:18:00.102512 containerd[1795]: time="2025-02-13T20:18:00.102472326Z" level=info msg="RemovePodSandbox for \"3baac9f550f08e67ebd65d0dc083ed7988985fc017ce08034e9dbf44d3c29e50\"" Feb 13 20:18:00.102512 containerd[1795]: time="2025-02-13T20:18:00.102483826Z" level=info msg="Forcibly stopping sandbox \"3baac9f550f08e67ebd65d0dc083ed7988985fc017ce08034e9dbf44d3c29e50\"" Feb 13 20:18:00.102565 containerd[1795]: time="2025-02-13T20:18:00.102516975Z" level=info msg="TearDown network for sandbox \"3baac9f550f08e67ebd65d0dc083ed7988985fc017ce08034e9dbf44d3c29e50\" successfully" Feb 13 20:18:00.103587 containerd[1795]: time="2025-02-13T20:18:00.103552199Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3baac9f550f08e67ebd65d0dc083ed7988985fc017ce08034e9dbf44d3c29e50\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 20:18:00.103587 containerd[1795]: time="2025-02-13T20:18:00.103587494Z" level=info msg="RemovePodSandbox \"3baac9f550f08e67ebd65d0dc083ed7988985fc017ce08034e9dbf44d3c29e50\" returns successfully" Feb 13 20:18:00.103796 containerd[1795]: time="2025-02-13T20:18:00.103747410Z" level=info msg="StopPodSandbox for \"9eafbbcf7cbb6cba455fe6f2b8a333f2c00d360c0dc5136bb219532e1258d5bc\"" Feb 13 20:18:00.103825 containerd[1795]: time="2025-02-13T20:18:00.103797972Z" level=info msg="TearDown network for sandbox \"9eafbbcf7cbb6cba455fe6f2b8a333f2c00d360c0dc5136bb219532e1258d5bc\" successfully" Feb 13 20:18:00.103825 containerd[1795]: time="2025-02-13T20:18:00.103803617Z" level=info msg="StopPodSandbox for \"9eafbbcf7cbb6cba455fe6f2b8a333f2c00d360c0dc5136bb219532e1258d5bc\" returns successfully" Feb 13 20:18:00.104017 containerd[1795]: time="2025-02-13T20:18:00.103967318Z" level=info msg="RemovePodSandbox for \"9eafbbcf7cbb6cba455fe6f2b8a333f2c00d360c0dc5136bb219532e1258d5bc\"" Feb 13 20:18:00.104017 containerd[1795]: time="2025-02-13T20:18:00.104012625Z" level=info msg="Forcibly stopping sandbox \"9eafbbcf7cbb6cba455fe6f2b8a333f2c00d360c0dc5136bb219532e1258d5bc\"" Feb 13 20:18:00.104060 containerd[1795]: time="2025-02-13T20:18:00.104042897Z" level=info msg="TearDown network for sandbox \"9eafbbcf7cbb6cba455fe6f2b8a333f2c00d360c0dc5136bb219532e1258d5bc\" successfully" Feb 13 20:18:00.105231 containerd[1795]: time="2025-02-13T20:18:00.105176717Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9eafbbcf7cbb6cba455fe6f2b8a333f2c00d360c0dc5136bb219532e1258d5bc\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 20:18:00.105231 containerd[1795]: time="2025-02-13T20:18:00.105225219Z" level=info msg="RemovePodSandbox \"9eafbbcf7cbb6cba455fe6f2b8a333f2c00d360c0dc5136bb219532e1258d5bc\" returns successfully" Feb 13 20:18:00.105346 containerd[1795]: time="2025-02-13T20:18:00.105336730Z" level=info msg="StopPodSandbox for \"b3497dacf36ec8b56010ca6778e1a849107e29c7f04852168e3e16e0ca2c7348\"" Feb 13 20:18:00.105385 containerd[1795]: time="2025-02-13T20:18:00.105377944Z" level=info msg="TearDown network for sandbox \"b3497dacf36ec8b56010ca6778e1a849107e29c7f04852168e3e16e0ca2c7348\" successfully" Feb 13 20:18:00.105409 containerd[1795]: time="2025-02-13T20:18:00.105385472Z" level=info msg="StopPodSandbox for \"b3497dacf36ec8b56010ca6778e1a849107e29c7f04852168e3e16e0ca2c7348\" returns successfully" Feb 13 20:18:00.105576 containerd[1795]: time="2025-02-13T20:18:00.105536250Z" level=info msg="RemovePodSandbox for \"b3497dacf36ec8b56010ca6778e1a849107e29c7f04852168e3e16e0ca2c7348\"" Feb 13 20:18:00.105576 containerd[1795]: time="2025-02-13T20:18:00.105546977Z" level=info msg="Forcibly stopping sandbox \"b3497dacf36ec8b56010ca6778e1a849107e29c7f04852168e3e16e0ca2c7348\"" Feb 13 20:18:00.105627 containerd[1795]: time="2025-02-13T20:18:00.105595869Z" level=info msg="TearDown network for sandbox \"b3497dacf36ec8b56010ca6778e1a849107e29c7f04852168e3e16e0ca2c7348\" successfully" Feb 13 20:18:00.106657 containerd[1795]: time="2025-02-13T20:18:00.106623900Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b3497dacf36ec8b56010ca6778e1a849107e29c7f04852168e3e16e0ca2c7348\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 20:18:00.106688 containerd[1795]: time="2025-02-13T20:18:00.106662001Z" level=info msg="RemovePodSandbox \"b3497dacf36ec8b56010ca6778e1a849107e29c7f04852168e3e16e0ca2c7348\" returns successfully" Feb 13 20:18:00.106827 containerd[1795]: time="2025-02-13T20:18:00.106780225Z" level=info msg="StopPodSandbox for \"16ed88c7929a4c6125d9006188b80e4edc5622a36d52f3e519a53143f17d7ef7\"" Feb 13 20:18:00.106890 containerd[1795]: time="2025-02-13T20:18:00.106857007Z" level=info msg="TearDown network for sandbox \"16ed88c7929a4c6125d9006188b80e4edc5622a36d52f3e519a53143f17d7ef7\" successfully" Feb 13 20:18:00.106890 containerd[1795]: time="2025-02-13T20:18:00.106882155Z" level=info msg="StopPodSandbox for \"16ed88c7929a4c6125d9006188b80e4edc5622a36d52f3e519a53143f17d7ef7\" returns successfully" Feb 13 20:18:00.107145 containerd[1795]: time="2025-02-13T20:18:00.107091962Z" level=info msg="RemovePodSandbox for \"16ed88c7929a4c6125d9006188b80e4edc5622a36d52f3e519a53143f17d7ef7\"" Feb 13 20:18:00.107145 containerd[1795]: time="2025-02-13T20:18:00.107118195Z" level=info msg="Forcibly stopping sandbox \"16ed88c7929a4c6125d9006188b80e4edc5622a36d52f3e519a53143f17d7ef7\"" Feb 13 20:18:00.107207 containerd[1795]: time="2025-02-13T20:18:00.107164722Z" level=info msg="TearDown network for sandbox \"16ed88c7929a4c6125d9006188b80e4edc5622a36d52f3e519a53143f17d7ef7\" successfully" Feb 13 20:18:00.108284 containerd[1795]: time="2025-02-13T20:18:00.108245229Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"16ed88c7929a4c6125d9006188b80e4edc5622a36d52f3e519a53143f17d7ef7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 20:18:00.108284 containerd[1795]: time="2025-02-13T20:18:00.108262315Z" level=info msg="RemovePodSandbox \"16ed88c7929a4c6125d9006188b80e4edc5622a36d52f3e519a53143f17d7ef7\" returns successfully" Feb 13 20:18:00.108425 containerd[1795]: time="2025-02-13T20:18:00.108416083Z" level=info msg="StopPodSandbox for \"2d9c5629e507dde38691e79e7bbfe3f190891c2fac1fe7d5ba94df8f3cc59e20\"" Feb 13 20:18:00.108528 containerd[1795]: time="2025-02-13T20:18:00.108463897Z" level=info msg="TearDown network for sandbox \"2d9c5629e507dde38691e79e7bbfe3f190891c2fac1fe7d5ba94df8f3cc59e20\" successfully" Feb 13 20:18:00.108528 containerd[1795]: time="2025-02-13T20:18:00.108470786Z" level=info msg="StopPodSandbox for \"2d9c5629e507dde38691e79e7bbfe3f190891c2fac1fe7d5ba94df8f3cc59e20\" returns successfully" Feb 13 20:18:00.108684 containerd[1795]: time="2025-02-13T20:18:00.108646303Z" level=info msg="RemovePodSandbox for \"2d9c5629e507dde38691e79e7bbfe3f190891c2fac1fe7d5ba94df8f3cc59e20\"" Feb 13 20:18:00.108684 containerd[1795]: time="2025-02-13T20:18:00.108656188Z" level=info msg="Forcibly stopping sandbox \"2d9c5629e507dde38691e79e7bbfe3f190891c2fac1fe7d5ba94df8f3cc59e20\"" Feb 13 20:18:00.108787 containerd[1795]: time="2025-02-13T20:18:00.108684330Z" level=info msg="TearDown network for sandbox \"2d9c5629e507dde38691e79e7bbfe3f190891c2fac1fe7d5ba94df8f3cc59e20\" successfully" Feb 13 20:18:00.109775 containerd[1795]: time="2025-02-13T20:18:00.109735115Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2d9c5629e507dde38691e79e7bbfe3f190891c2fac1fe7d5ba94df8f3cc59e20\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 20:18:00.109775 containerd[1795]: time="2025-02-13T20:18:00.109752418Z" level=info msg="RemovePodSandbox \"2d9c5629e507dde38691e79e7bbfe3f190891c2fac1fe7d5ba94df8f3cc59e20\" returns successfully" Feb 13 20:18:00.110047 containerd[1795]: time="2025-02-13T20:18:00.109991131Z" level=info msg="StopPodSandbox for \"58c1979e9a2a6e936d17dced152cdd4de3bc1944cc1ff89a55fe1e1b1bfbd636\"" Feb 13 20:18:00.110110 containerd[1795]: time="2025-02-13T20:18:00.110080924Z" level=info msg="TearDown network for sandbox \"58c1979e9a2a6e936d17dced152cdd4de3bc1944cc1ff89a55fe1e1b1bfbd636\" successfully" Feb 13 20:18:00.110110 containerd[1795]: time="2025-02-13T20:18:00.110087029Z" level=info msg="StopPodSandbox for \"58c1979e9a2a6e936d17dced152cdd4de3bc1944cc1ff89a55fe1e1b1bfbd636\" returns successfully" Feb 13 20:18:00.110283 containerd[1795]: time="2025-02-13T20:18:00.110247246Z" level=info msg="RemovePodSandbox for \"58c1979e9a2a6e936d17dced152cdd4de3bc1944cc1ff89a55fe1e1b1bfbd636\"" Feb 13 20:18:00.110283 containerd[1795]: time="2025-02-13T20:18:00.110276727Z" level=info msg="Forcibly stopping sandbox \"58c1979e9a2a6e936d17dced152cdd4de3bc1944cc1ff89a55fe1e1b1bfbd636\"" Feb 13 20:18:00.110329 containerd[1795]: time="2025-02-13T20:18:00.110306665Z" level=info msg="TearDown network for sandbox \"58c1979e9a2a6e936d17dced152cdd4de3bc1944cc1ff89a55fe1e1b1bfbd636\" successfully" Feb 13 20:18:00.111377 containerd[1795]: time="2025-02-13T20:18:00.111337623Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"58c1979e9a2a6e936d17dced152cdd4de3bc1944cc1ff89a55fe1e1b1bfbd636\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 20:18:00.111377 containerd[1795]: time="2025-02-13T20:18:00.111354583Z" level=info msg="RemovePodSandbox \"58c1979e9a2a6e936d17dced152cdd4de3bc1944cc1ff89a55fe1e1b1bfbd636\" returns successfully" Feb 13 20:18:00.111556 containerd[1795]: time="2025-02-13T20:18:00.111488821Z" level=info msg="StopPodSandbox for \"0692a0f866719928d57cf8aacec44d023e05d2ab966d0b2ae83a9b3a5ef8d211\"" Feb 13 20:18:00.111640 containerd[1795]: time="2025-02-13T20:18:00.111581029Z" level=info msg="TearDown network for sandbox \"0692a0f866719928d57cf8aacec44d023e05d2ab966d0b2ae83a9b3a5ef8d211\" successfully" Feb 13 20:18:00.111640 containerd[1795]: time="2025-02-13T20:18:00.111587295Z" level=info msg="StopPodSandbox for \"0692a0f866719928d57cf8aacec44d023e05d2ab966d0b2ae83a9b3a5ef8d211\" returns successfully" Feb 13 20:18:00.111808 containerd[1795]: time="2025-02-13T20:18:00.111770215Z" level=info msg="RemovePodSandbox for \"0692a0f866719928d57cf8aacec44d023e05d2ab966d0b2ae83a9b3a5ef8d211\"" Feb 13 20:18:00.111808 containerd[1795]: time="2025-02-13T20:18:00.111781435Z" level=info msg="Forcibly stopping sandbox \"0692a0f866719928d57cf8aacec44d023e05d2ab966d0b2ae83a9b3a5ef8d211\"" Feb 13 20:18:00.111858 containerd[1795]: time="2025-02-13T20:18:00.111817586Z" level=info msg="TearDown network for sandbox \"0692a0f866719928d57cf8aacec44d023e05d2ab966d0b2ae83a9b3a5ef8d211\" successfully" Feb 13 20:18:00.112860 containerd[1795]: time="2025-02-13T20:18:00.112821393Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0692a0f866719928d57cf8aacec44d023e05d2ab966d0b2ae83a9b3a5ef8d211\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 20:18:00.112860 containerd[1795]: time="2025-02-13T20:18:00.112838532Z" level=info msg="RemovePodSandbox \"0692a0f866719928d57cf8aacec44d023e05d2ab966d0b2ae83a9b3a5ef8d211\" returns successfully" Feb 13 20:18:17.756083 kubelet[3267]: I0213 20:18:17.755968 3267 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 20:19:48.189949 update_engine[1782]: I20250213 20:19:48.189698 1782 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Feb 13 20:19:48.189949 update_engine[1782]: I20250213 20:19:48.189800 1782 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Feb 13 20:19:48.191061 update_engine[1782]: I20250213 20:19:48.190192 1782 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Feb 13 20:19:48.191383 update_engine[1782]: I20250213 20:19:48.191296 1782 omaha_request_params.cc:62] Current group set to stable Feb 13 20:19:48.191683 update_engine[1782]: I20250213 20:19:48.191581 1782 update_attempter.cc:499] Already updated boot flags. Skipping. Feb 13 20:19:48.191683 update_engine[1782]: I20250213 20:19:48.191617 1782 update_attempter.cc:643] Scheduling an action processor start. 
Feb 13 20:19:48.191683 update_engine[1782]: I20250213 20:19:48.191655 1782 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Feb 13 20:19:48.192023 update_engine[1782]: I20250213 20:19:48.191726 1782 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Feb 13 20:19:48.192023 update_engine[1782]: I20250213 20:19:48.191887 1782 omaha_request_action.cc:271] Posting an Omaha request to disabled Feb 13 20:19:48.192023 update_engine[1782]: I20250213 20:19:48.191917 1782 omaha_request_action.cc:272] Request: Feb 13 20:19:48.192023 update_engine[1782]: Feb 13 20:19:48.192023 update_engine[1782]: Feb 13 20:19:48.192023 update_engine[1782]: Feb 13 20:19:48.192023 update_engine[1782]: Feb 13 20:19:48.192023 update_engine[1782]: Feb 13 20:19:48.192023 update_engine[1782]: Feb 13 20:19:48.192023 update_engine[1782]: Feb 13 20:19:48.192023 update_engine[1782]: Feb 13 20:19:48.192023 update_engine[1782]: I20250213 20:19:48.191934 1782 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Feb 13 20:19:48.193043 locksmithd[1831]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Feb 13 20:19:48.195748 update_engine[1782]: I20250213 20:19:48.195657 1782 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Feb 13 20:19:48.196544 update_engine[1782]: I20250213 20:19:48.196406 1782 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Feb 13 20:19:48.197021 update_engine[1782]: E20250213 20:19:48.196908 1782 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Feb 13 20:19:48.197201 update_engine[1782]: I20250213 20:19:48.197078 1782 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Feb 13 20:19:58.197376 update_engine[1782]: I20250213 20:19:58.197214 1782 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Feb 13 20:19:58.198371 update_engine[1782]: I20250213 20:19:58.197849 1782 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Feb 13 20:19:58.198509 update_engine[1782]: I20250213 20:19:58.198415 1782 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Feb 13 20:19:58.199110 update_engine[1782]: E20250213 20:19:58.198998 1782 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Feb 13 20:19:58.199291 update_engine[1782]: I20250213 20:19:58.199138 1782 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Feb 13 20:20:08.197472 update_engine[1782]: I20250213 20:20:08.197288 1782 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Feb 13 20:20:08.198431 update_engine[1782]: I20250213 20:20:08.197928 1782 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Feb 13 20:20:08.198570 update_engine[1782]: I20250213 20:20:08.198495 1782 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Feb 13 20:20:08.199025 update_engine[1782]: E20250213 20:20:08.198913 1782 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Feb 13 20:20:08.199211 update_engine[1782]: I20250213 20:20:08.199041 1782 libcurl_http_fetcher.cc:283] No HTTP response, retry 3
Feb 13 20:20:18.197352 update_engine[1782]: I20250213 20:20:18.197185 1782 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Feb 13 20:20:18.198394 update_engine[1782]: I20250213 20:20:18.197840 1782 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Feb 13 20:20:18.198550 update_engine[1782]: I20250213 20:20:18.198400 1782 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Feb 13 20:20:18.198936 update_engine[1782]: E20250213 20:20:18.198825 1782 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Feb 13 20:20:18.199148 update_engine[1782]: I20250213 20:20:18.198949 1782 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Feb 13 20:20:18.199148 update_engine[1782]: I20250213 20:20:18.198978 1782 omaha_request_action.cc:617] Omaha request response:
Feb 13 20:20:18.199363 update_engine[1782]: E20250213 20:20:18.199141 1782 omaha_request_action.cc:636] Omaha request network transfer failed.
Feb 13 20:20:18.199363 update_engine[1782]: I20250213 20:20:18.199194 1782 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing.
Feb 13 20:20:18.199363 update_engine[1782]: I20250213 20:20:18.199213 1782 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Feb 13 20:20:18.199363 update_engine[1782]: I20250213 20:20:18.199226 1782 update_attempter.cc:306] Processing Done.
Feb 13 20:20:18.199363 update_engine[1782]: E20250213 20:20:18.199260 1782 update_attempter.cc:619] Update failed.
Feb 13 20:20:18.199363 update_engine[1782]: I20250213 20:20:18.199276 1782 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse
Feb 13 20:20:18.199363 update_engine[1782]: I20250213 20:20:18.199292 1782 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse)
Feb 13 20:20:18.199363 update_engine[1782]: I20250213 20:20:18.199307 1782 payload_state.cc:103] Ignoring failures until we get a valid Omaha response.
Feb 13 20:20:18.200096 update_engine[1782]: I20250213 20:20:18.199486 1782 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Feb 13 20:20:18.200096 update_engine[1782]: I20250213 20:20:18.199551 1782 omaha_request_action.cc:271] Posting an Omaha request to disabled
Feb 13 20:20:18.200096 update_engine[1782]: I20250213 20:20:18.199571 1782 omaha_request_action.cc:272] Request:
Feb 13 20:20:18.200096 update_engine[1782]:
Feb 13 20:20:18.200096 update_engine[1782]:
Feb 13 20:20:18.200096 update_engine[1782]:
Feb 13 20:20:18.200096 update_engine[1782]:
Feb 13 20:20:18.200096 update_engine[1782]:
Feb 13 20:20:18.200096 update_engine[1782]:
Feb 13 20:20:18.200096 update_engine[1782]: I20250213 20:20:18.199589 1782 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Feb 13 20:20:18.200096 update_engine[1782]: I20250213 20:20:18.200008 1782 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Feb 13 20:20:18.201109 update_engine[1782]: I20250213 20:20:18.200497 1782 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Feb 13 20:20:18.201109 update_engine[1782]: E20250213 20:20:18.200878 1782 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Feb 13 20:20:18.201109 update_engine[1782]: I20250213 20:20:18.200994 1782 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Feb 13 20:20:18.201109 update_engine[1782]: I20250213 20:20:18.201020 1782 omaha_request_action.cc:617] Omaha request response:
Feb 13 20:20:18.201109 update_engine[1782]: I20250213 20:20:18.201038 1782 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Feb 13 20:20:18.201109 update_engine[1782]: I20250213 20:20:18.201054 1782 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Feb 13 20:20:18.201109 update_engine[1782]: I20250213 20:20:18.201069 1782 update_attempter.cc:306] Processing Done.
Feb 13 20:20:18.201109 update_engine[1782]: I20250213 20:20:18.201086 1782 update_attempter.cc:310] Error event sent.
Feb 13 20:20:18.201109 update_engine[1782]: I20250213 20:20:18.201108 1782 update_check_scheduler.cc:74] Next update check in 41m31s
Feb 13 20:20:18.201924 locksmithd[1831]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0
Feb 13 20:20:18.201924 locksmithd[1831]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0
Feb 13 20:21:52.505188 systemd[1]: Started sshd@9-147.75.90.163:22-218.92.0.209:59236.service - OpenSSH per-connection server daemon (218.92.0.209:59236).
Feb 13 20:21:52.651441 sshd[7479]: Unable to negotiate with 218.92.0.209 port 59236: no matching key exchange method found. Their offer: diffie-hellman-group1-sha1,diffie-hellman-group14-sha1,diffie-hellman-group-exchange-sha1 [preauth]
Feb 13 20:21:52.653534 systemd[1]: sshd@9-147.75.90.163:22-218.92.0.209:59236.service: Deactivated successfully.
Feb 13 20:22:43.342744 systemd[1]: Started sshd@10-147.75.90.163:22-139.178.68.195:37892.service - OpenSSH per-connection server daemon (139.178.68.195:37892).
Feb 13 20:22:43.371676 sshd[7599]: Accepted publickey for core from 139.178.68.195 port 37892 ssh2: RSA SHA256:oUDdG+WEMOtgWcJIqrYZLULMXB2a3NPP3tsueyJY4Nc
Feb 13 20:22:43.372479 sshd-session[7599]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:22:43.375962 systemd-logind[1777]: New session 12 of user core.
Feb 13 20:22:43.393675 systemd[1]: Started session-12.scope - Session 12 of User core.
Feb 13 20:22:43.486869 sshd[7601]: Connection closed by 139.178.68.195 port 37892
Feb 13 20:22:43.487045 sshd-session[7599]: pam_unix(sshd:session): session closed for user core
Feb 13 20:22:43.488980 systemd[1]: sshd@10-147.75.90.163:22-139.178.68.195:37892.service: Deactivated successfully.
Feb 13 20:22:43.489901 systemd[1]: session-12.scope: Deactivated successfully.
Feb 13 20:22:43.490232 systemd-logind[1777]: Session 12 logged out. Waiting for processes to exit.
Feb 13 20:22:43.490852 systemd-logind[1777]: Removed session 12.
Feb 13 20:22:48.530729 systemd[1]: Started sshd@11-147.75.90.163:22-139.178.68.195:40754.service - OpenSSH per-connection server daemon (139.178.68.195:40754).
Feb 13 20:22:48.557503 sshd[7655]: Accepted publickey for core from 139.178.68.195 port 40754 ssh2: RSA SHA256:oUDdG+WEMOtgWcJIqrYZLULMXB2a3NPP3tsueyJY4Nc
Feb 13 20:22:48.558312 sshd-session[7655]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:22:48.561281 systemd-logind[1777]: New session 13 of user core.
Feb 13 20:22:48.561974 systemd[1]: Started session-13.scope - Session 13 of User core.
Feb 13 20:22:48.652678 sshd[7657]: Connection closed by 139.178.68.195 port 40754
Feb 13 20:22:48.652892 sshd-session[7655]: pam_unix(sshd:session): session closed for user core
Feb 13 20:22:48.654378 systemd[1]: sshd@11-147.75.90.163:22-139.178.68.195:40754.service: Deactivated successfully.
Feb 13 20:22:48.655317 systemd[1]: session-13.scope: Deactivated successfully.
Feb 13 20:22:48.656060 systemd-logind[1777]: Session 13 logged out. Waiting for processes to exit.
Feb 13 20:22:48.656692 systemd-logind[1777]: Removed session 13.
Feb 13 20:22:53.670722 systemd[1]: Started sshd@12-147.75.90.163:22-139.178.68.195:40758.service - OpenSSH per-connection server daemon (139.178.68.195:40758).
Feb 13 20:22:53.699120 sshd[7707]: Accepted publickey for core from 139.178.68.195 port 40758 ssh2: RSA SHA256:oUDdG+WEMOtgWcJIqrYZLULMXB2a3NPP3tsueyJY4Nc
Feb 13 20:22:53.699800 sshd-session[7707]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:22:53.702891 systemd-logind[1777]: New session 14 of user core.
Feb 13 20:22:53.721712 systemd[1]: Started session-14.scope - Session 14 of User core.
Feb 13 20:22:53.810408 sshd[7709]: Connection closed by 139.178.68.195 port 40758
Feb 13 20:22:53.810633 sshd-session[7707]: pam_unix(sshd:session): session closed for user core
Feb 13 20:22:53.835379 systemd[1]: sshd@12-147.75.90.163:22-139.178.68.195:40758.service: Deactivated successfully.
Feb 13 20:22:53.836362 systemd[1]: session-14.scope: Deactivated successfully.
Feb 13 20:22:53.837246 systemd-logind[1777]: Session 14 logged out. Waiting for processes to exit.
Feb 13 20:22:53.838023 systemd[1]: Started sshd@13-147.75.90.163:22-139.178.68.195:40762.service - OpenSSH per-connection server daemon (139.178.68.195:40762).
Feb 13 20:22:53.838570 systemd-logind[1777]: Removed session 14.
Feb 13 20:22:53.873253 sshd[7735]: Accepted publickey for core from 139.178.68.195 port 40762 ssh2: RSA SHA256:oUDdG+WEMOtgWcJIqrYZLULMXB2a3NPP3tsueyJY4Nc
Feb 13 20:22:53.874211 sshd-session[7735]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:22:53.877726 systemd-logind[1777]: New session 15 of user core.
Feb 13 20:22:53.888701 systemd[1]: Started session-15.scope - Session 15 of User core.
Feb 13 20:22:53.988490 sshd[7738]: Connection closed by 139.178.68.195 port 40762
Feb 13 20:22:53.988620 sshd-session[7735]: pam_unix(sshd:session): session closed for user core
Feb 13 20:22:54.001133 systemd[1]: sshd@13-147.75.90.163:22-139.178.68.195:40762.service: Deactivated successfully.
Feb 13 20:22:54.001973 systemd[1]: session-15.scope: Deactivated successfully.
Feb 13 20:22:54.002709 systemd-logind[1777]: Session 15 logged out. Waiting for processes to exit.
Feb 13 20:22:54.003295 systemd[1]: Started sshd@14-147.75.90.163:22-139.178.68.195:40766.service - OpenSSH per-connection server daemon (139.178.68.195:40766).
Feb 13 20:22:54.003808 systemd-logind[1777]: Removed session 15.
Feb 13 20:22:54.031387 sshd[7760]: Accepted publickey for core from 139.178.68.195 port 40766 ssh2: RSA SHA256:oUDdG+WEMOtgWcJIqrYZLULMXB2a3NPP3tsueyJY4Nc
Feb 13 20:22:54.032058 sshd-session[7760]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:22:54.034702 systemd-logind[1777]: New session 16 of user core.
Feb 13 20:22:54.063961 systemd[1]: Started session-16.scope - Session 16 of User core.
Feb 13 20:22:54.160307 sshd[7763]: Connection closed by 139.178.68.195 port 40766
Feb 13 20:22:54.160482 sshd-session[7760]: pam_unix(sshd:session): session closed for user core
Feb 13 20:22:54.162088 systemd[1]: sshd@14-147.75.90.163:22-139.178.68.195:40766.service: Deactivated successfully.
Feb 13 20:22:54.162989 systemd[1]: session-16.scope: Deactivated successfully.
Feb 13 20:22:54.163732 systemd-logind[1777]: Session 16 logged out. Waiting for processes to exit.
Feb 13 20:22:54.164339 systemd-logind[1777]: Removed session 16.
Feb 13 20:22:59.181786 systemd[1]: Started sshd@15-147.75.90.163:22-139.178.68.195:59980.service - OpenSSH per-connection server daemon (139.178.68.195:59980).
Feb 13 20:22:59.211174 sshd[7794]: Accepted publickey for core from 139.178.68.195 port 59980 ssh2: RSA SHA256:oUDdG+WEMOtgWcJIqrYZLULMXB2a3NPP3tsueyJY4Nc
Feb 13 20:22:59.214509 sshd-session[7794]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:22:59.225835 systemd-logind[1777]: New session 17 of user core.
Feb 13 20:22:59.241871 systemd[1]: Started session-17.scope - Session 17 of User core.
Feb 13 20:22:59.337305 sshd[7796]: Connection closed by 139.178.68.195 port 59980
Feb 13 20:22:59.337512 sshd-session[7794]: pam_unix(sshd:session): session closed for user core
Feb 13 20:22:59.339235 systemd[1]: sshd@15-147.75.90.163:22-139.178.68.195:59980.service: Deactivated successfully.
Feb 13 20:22:59.340213 systemd[1]: session-17.scope: Deactivated successfully.
Feb 13 20:22:59.340971 systemd-logind[1777]: Session 17 logged out. Waiting for processes to exit.
Feb 13 20:22:59.341581 systemd-logind[1777]: Removed session 17.
Feb 13 20:23:04.353861 systemd[1]: Started sshd@16-147.75.90.163:22-139.178.68.195:59982.service - OpenSSH per-connection server daemon (139.178.68.195:59982).
Feb 13 20:23:04.383564 sshd[7823]: Accepted publickey for core from 139.178.68.195 port 59982 ssh2: RSA SHA256:oUDdG+WEMOtgWcJIqrYZLULMXB2a3NPP3tsueyJY4Nc
Feb 13 20:23:04.386923 sshd-session[7823]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:23:04.398544 systemd-logind[1777]: New session 18 of user core.
Feb 13 20:23:04.416966 systemd[1]: Started session-18.scope - Session 18 of User core.
Feb 13 20:23:04.514337 sshd[7825]: Connection closed by 139.178.68.195 port 59982
Feb 13 20:23:04.514530 sshd-session[7823]: pam_unix(sshd:session): session closed for user core
Feb 13 20:23:04.534733 systemd[1]: sshd@16-147.75.90.163:22-139.178.68.195:59982.service: Deactivated successfully.
Feb 13 20:23:04.538645 systemd[1]: session-18.scope: Deactivated successfully.
Feb 13 20:23:04.542203 systemd-logind[1777]: Session 18 logged out. Waiting for processes to exit.
Feb 13 20:23:04.553272 systemd[1]: Started sshd@17-147.75.90.163:22-139.178.68.195:59994.service - OpenSSH per-connection server daemon (139.178.68.195:59994).
Feb 13 20:23:04.555765 systemd-logind[1777]: Removed session 18.
Feb 13 20:23:04.622774 sshd[7849]: Accepted publickey for core from 139.178.68.195 port 59994 ssh2: RSA SHA256:oUDdG+WEMOtgWcJIqrYZLULMXB2a3NPP3tsueyJY4Nc
Feb 13 20:23:04.624241 sshd-session[7849]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:23:04.629250 systemd-logind[1777]: New session 19 of user core.
Feb 13 20:23:04.642644 systemd[1]: Started session-19.scope - Session 19 of User core.
Feb 13 20:23:04.738953 sshd[7853]: Connection closed by 139.178.68.195 port 59994
Feb 13 20:23:04.739146 sshd-session[7849]: pam_unix(sshd:session): session closed for user core
Feb 13 20:23:04.762991 systemd[1]: sshd@17-147.75.90.163:22-139.178.68.195:59994.service: Deactivated successfully.
Feb 13 20:23:04.764257 systemd[1]: session-19.scope: Deactivated successfully.
Feb 13 20:23:04.765397 systemd-logind[1777]: Session 19 logged out. Waiting for processes to exit.
Feb 13 20:23:04.766543 systemd[1]: Started sshd@18-147.75.90.163:22-139.178.68.195:60010.service - OpenSSH per-connection server daemon (139.178.68.195:60010).
Feb 13 20:23:04.767356 systemd-logind[1777]: Removed session 19.
Feb 13 20:23:04.812824 sshd[7874]: Accepted publickey for core from 139.178.68.195 port 60010 ssh2: RSA SHA256:oUDdG+WEMOtgWcJIqrYZLULMXB2a3NPP3tsueyJY4Nc
Feb 13 20:23:04.814186 sshd-session[7874]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:23:04.819111 systemd-logind[1777]: New session 20 of user core.
Feb 13 20:23:04.829925 systemd[1]: Started session-20.scope - Session 20 of User core.
Feb 13 20:23:06.108750 sshd[7876]: Connection closed by 139.178.68.195 port 60010
Feb 13 20:23:06.108967 sshd-session[7874]: pam_unix(sshd:session): session closed for user core
Feb 13 20:23:06.126296 systemd[1]: sshd@18-147.75.90.163:22-139.178.68.195:60010.service: Deactivated successfully.
Feb 13 20:23:06.127259 systemd[1]: session-20.scope: Deactivated successfully.
Feb 13 20:23:06.128016 systemd-logind[1777]: Session 20 logged out. Waiting for processes to exit.
Feb 13 20:23:06.128807 systemd[1]: Started sshd@19-147.75.90.163:22-139.178.68.195:60022.service - OpenSSH per-connection server daemon (139.178.68.195:60022).
Feb 13 20:23:06.129371 systemd-logind[1777]: Removed session 20.
Feb 13 20:23:06.160308 sshd[7923]: Accepted publickey for core from 139.178.68.195 port 60022 ssh2: RSA SHA256:oUDdG+WEMOtgWcJIqrYZLULMXB2a3NPP3tsueyJY4Nc
Feb 13 20:23:06.161072 sshd-session[7923]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:23:06.164186 systemd-logind[1777]: New session 21 of user core.
Feb 13 20:23:06.184887 systemd[1]: Started session-21.scope - Session 21 of User core.
Feb 13 20:23:06.392191 sshd[7928]: Connection closed by 139.178.68.195 port 60022
Feb 13 20:23:06.392392 sshd-session[7923]: pam_unix(sshd:session): session closed for user core
Feb 13 20:23:06.406183 systemd[1]: sshd@19-147.75.90.163:22-139.178.68.195:60022.service: Deactivated successfully.
Feb 13 20:23:06.407104 systemd[1]: session-21.scope: Deactivated successfully.
Feb 13 20:23:06.407780 systemd-logind[1777]: Session 21 logged out. Waiting for processes to exit.
Feb 13 20:23:06.408404 systemd[1]: Started sshd@20-147.75.90.163:22-139.178.68.195:32866.service - OpenSSH per-connection server daemon (139.178.68.195:32866).
Feb 13 20:23:06.408851 systemd-logind[1777]: Removed session 21.
Feb 13 20:23:06.438223 sshd[7951]: Accepted publickey for core from 139.178.68.195 port 32866 ssh2: RSA SHA256:oUDdG+WEMOtgWcJIqrYZLULMXB2a3NPP3tsueyJY4Nc
Feb 13 20:23:06.439013 sshd-session[7951]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:23:06.441715 systemd-logind[1777]: New session 22 of user core.
Feb 13 20:23:06.451626 systemd[1]: Started session-22.scope - Session 22 of User core.
Feb 13 20:23:06.579754 sshd[7953]: Connection closed by 139.178.68.195 port 32866
Feb 13 20:23:06.579925 sshd-session[7951]: pam_unix(sshd:session): session closed for user core
Feb 13 20:23:06.581428 systemd[1]: sshd@20-147.75.90.163:22-139.178.68.195:32866.service: Deactivated successfully.
Feb 13 20:23:06.582327 systemd[1]: session-22.scope: Deactivated successfully.
Feb 13 20:23:06.582998 systemd-logind[1777]: Session 22 logged out. Waiting for processes to exit.
Feb 13 20:23:06.583551 systemd-logind[1777]: Removed session 22.
Feb 13 20:23:11.597395 systemd[1]: Started sshd@21-147.75.90.163:22-139.178.68.195:32882.service - OpenSSH per-connection server daemon (139.178.68.195:32882).
Feb 13 20:23:11.625893 sshd[7979]: Accepted publickey for core from 139.178.68.195 port 32882 ssh2: RSA SHA256:oUDdG+WEMOtgWcJIqrYZLULMXB2a3NPP3tsueyJY4Nc
Feb 13 20:23:11.626776 sshd-session[7979]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:23:11.629716 systemd-logind[1777]: New session 23 of user core.
Feb 13 20:23:11.646714 systemd[1]: Started session-23.scope - Session 23 of User core.
Feb 13 20:23:11.752163 sshd[7981]: Connection closed by 139.178.68.195 port 32882
Feb 13 20:23:11.752375 sshd-session[7979]: pam_unix(sshd:session): session closed for user core
Feb 13 20:23:11.754265 systemd[1]: sshd@21-147.75.90.163:22-139.178.68.195:32882.service: Deactivated successfully.
Feb 13 20:23:11.755377 systemd[1]: session-23.scope: Deactivated successfully.
Feb 13 20:23:11.756269 systemd-logind[1777]: Session 23 logged out. Waiting for processes to exit.
Feb 13 20:23:11.757114 systemd-logind[1777]: Removed session 23.
Feb 13 20:23:16.775766 systemd[1]: Started sshd@22-147.75.90.163:22-139.178.68.195:55410.service - OpenSSH per-connection server daemon (139.178.68.195:55410).
Feb 13 20:23:16.804723 sshd[8035]: Accepted publickey for core from 139.178.68.195 port 55410 ssh2: RSA SHA256:oUDdG+WEMOtgWcJIqrYZLULMXB2a3NPP3tsueyJY4Nc
Feb 13 20:23:16.805476 sshd-session[8035]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:23:16.808336 systemd-logind[1777]: New session 24 of user core.
Feb 13 20:23:16.826586 systemd[1]: Started session-24.scope - Session 24 of User core.
Feb 13 20:23:16.917093 sshd[8037]: Connection closed by 139.178.68.195 port 55410
Feb 13 20:23:16.917272 sshd-session[8035]: pam_unix(sshd:session): session closed for user core
Feb 13 20:23:16.918987 systemd[1]: sshd@22-147.75.90.163:22-139.178.68.195:55410.service: Deactivated successfully.
Feb 13 20:23:16.919968 systemd[1]: session-24.scope: Deactivated successfully.
Feb 13 20:23:16.920747 systemd-logind[1777]: Session 24 logged out. Waiting for processes to exit.
Feb 13 20:23:16.921372 systemd-logind[1777]: Removed session 24.
Feb 13 20:23:21.947736 systemd[1]: Started sshd@23-147.75.90.163:22-139.178.68.195:55426.service - OpenSSH per-connection server daemon (139.178.68.195:55426).
Feb 13 20:23:21.974739 sshd[8061]: Accepted publickey for core from 139.178.68.195 port 55426 ssh2: RSA SHA256:oUDdG+WEMOtgWcJIqrYZLULMXB2a3NPP3tsueyJY4Nc
Feb 13 20:23:21.978081 sshd-session[8061]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:23:21.989415 systemd-logind[1777]: New session 25 of user core.
Feb 13 20:23:22.000962 systemd[1]: Started session-25.scope - Session 25 of User core.
Feb 13 20:23:22.093441 sshd[8063]: Connection closed by 139.178.68.195 port 55426
Feb 13 20:23:22.093654 sshd-session[8061]: pam_unix(sshd:session): session closed for user core
Feb 13 20:23:22.095230 systemd[1]: sshd@23-147.75.90.163:22-139.178.68.195:55426.service: Deactivated successfully.
Feb 13 20:23:22.096145 systemd[1]: session-25.scope: Deactivated successfully.
Feb 13 20:23:22.096850 systemd-logind[1777]: Session 25 logged out. Waiting for processes to exit.
Feb 13 20:23:22.097383 systemd-logind[1777]: Removed session 25.