Feb 13 21:05:29.452756 kernel: microcode: updated early: 0xde -> 0x100, date = 2024-02-05 Feb 13 21:05:29.452771 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241116 p3) 14.2.1 20241116, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Thu Feb 13 17:41:03 -00 2025 Feb 13 21:05:29.452777 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=015d1d9e5e601f6a4e226c935072d3d0819e7eb2da20e68715973498f21aa3fe Feb 13 21:05:29.452783 kernel: BIOS-provided physical RAM map: Feb 13 21:05:29.452787 kernel: BIOS-e820: [mem 0x0000000000000000-0x00000000000997ff] usable Feb 13 21:05:29.452791 kernel: BIOS-e820: [mem 0x0000000000099800-0x000000000009ffff] reserved Feb 13 21:05:29.452796 kernel: BIOS-e820: [mem 0x00000000000e0000-0x00000000000fffff] reserved Feb 13 21:05:29.452800 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003fffffff] usable Feb 13 21:05:29.452804 kernel: BIOS-e820: [mem 0x0000000040000000-0x00000000403fffff] reserved Feb 13 21:05:29.452808 kernel: BIOS-e820: [mem 0x0000000040400000-0x00000000620bafff] usable Feb 13 21:05:29.452812 kernel: BIOS-e820: [mem 0x00000000620bb000-0x00000000620bbfff] ACPI NVS Feb 13 21:05:29.452817 kernel: BIOS-e820: [mem 0x00000000620bc000-0x00000000620bcfff] reserved Feb 13 21:05:29.452822 kernel: BIOS-e820: [mem 0x00000000620bd000-0x000000006c0c4fff] usable Feb 13 21:05:29.452826 kernel: BIOS-e820: [mem 0x000000006c0c5000-0x000000006d1a7fff] reserved Feb 13 21:05:29.452832 kernel: BIOS-e820: [mem 0x000000006d1a8000-0x000000006d330fff] usable Feb 13 21:05:29.452836 kernel: BIOS-e820: [mem 0x000000006d331000-0x000000006d762fff] ACPI NVS Feb 13 21:05:29.452842 kernel: BIOS-e820: [mem 0x000000006d763000-0x000000006fffefff] reserved Feb 13 21:05:29.452846 kernel: BIOS-e820: [mem 0x000000006ffff000-0x000000006fffffff] usable Feb 13 21:05:29.452851 kernel: BIOS-e820: [mem 0x0000000070000000-0x000000007b7fffff] reserved Feb 13 21:05:29.452855 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved Feb 13 21:05:29.452860 kernel: BIOS-e820: [mem 0x00000000fe000000-0x00000000fe010fff] reserved Feb 13 21:05:29.452864 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec00fff] reserved Feb 13 21:05:29.452869 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved Feb 13 21:05:29.452873 kernel: BIOS-e820: [mem 0x00000000ff000000-0x00000000ffffffff] reserved Feb 13 21:05:29.452878 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000008837fffff] usable Feb 13 21:05:29.452882 kernel: NX (Execute Disable) protection: active Feb 13 21:05:29.452887 kernel: APIC: Static calls initialized Feb 13 21:05:29.452893 kernel: SMBIOS 3.2.1 present. 
Feb 13 21:05:29.452897 kernel: DMI: Supermicro PIO-519C-MR-PH004/X11SCH-F, BIOS 1.5 11/17/2020 Feb 13 21:05:29.452902 kernel: tsc: Detected 3400.000 MHz processor Feb 13 21:05:29.452907 kernel: tsc: Detected 3399.906 MHz TSC Feb 13 21:05:29.452911 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Feb 13 21:05:29.452916 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Feb 13 21:05:29.452921 kernel: last_pfn = 0x883800 max_arch_pfn = 0x400000000 Feb 13 21:05:29.452926 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 23), built from 10 variable MTRRs Feb 13 21:05:29.452931 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Feb 13 21:05:29.452935 kernel: last_pfn = 0x70000 max_arch_pfn = 0x400000000 Feb 13 21:05:29.452941 kernel: Using GB pages for direct mapping Feb 13 21:05:29.452946 kernel: ACPI: Early table checksum verification disabled Feb 13 21:05:29.452951 kernel: ACPI: RSDP 0x00000000000F05B0 000024 (v02 SUPERM) Feb 13 21:05:29.452957 kernel: ACPI: XSDT 0x000000006D6440C8 00010C (v01 SUPERM SUPERM 01072009 AMI 00010013) Feb 13 21:05:29.452962 kernel: ACPI: FACP 0x000000006D680620 000114 (v06 01072009 AMI 00010013) Feb 13 21:05:29.452967 kernel: ACPI: DSDT 0x000000006D644268 03C3B7 (v02 SUPERM SMCI--MB 01072009 INTL 20160527) Feb 13 21:05:29.452972 kernel: ACPI: FACS 0x000000006D762F80 000040 Feb 13 21:05:29.452979 kernel: ACPI: APIC 0x000000006D680738 00012C (v04 01072009 AMI 00010013) Feb 13 21:05:29.452984 kernel: ACPI: FPDT 0x000000006D680868 000044 (v01 01072009 AMI 00010013) Feb 13 21:05:29.452989 kernel: ACPI: FIDT 0x000000006D6808B0 00009C (v01 SUPERM SMCI--MB 01072009 AMI 00010013) Feb 13 21:05:29.452994 kernel: ACPI: MCFG 0x000000006D680950 00003C (v01 SUPERM SMCI--MB 01072009 MSFT 00000097) Feb 13 21:05:29.452999 kernel: ACPI: SPMI 0x000000006D680990 000041 (v05 SUPERM SMCI--MB 00000000 AMI. 
00000000) Feb 13 21:05:29.453004 kernel: ACPI: SSDT 0x000000006D6809D8 001B1C (v02 CpuRef CpuSsdt 00003000 INTL 20160527) Feb 13 21:05:29.453009 kernel: ACPI: SSDT 0x000000006D6824F8 0031C6 (v02 SaSsdt SaSsdt 00003000 INTL 20160527) Feb 13 21:05:29.453015 kernel: ACPI: SSDT 0x000000006D6856C0 00232B (v02 PegSsd PegSsdt 00001000 INTL 20160527) Feb 13 21:05:29.453020 kernel: ACPI: HPET 0x000000006D6879F0 000038 (v01 SUPERM SMCI--MB 00000002 01000013) Feb 13 21:05:29.453025 kernel: ACPI: SSDT 0x000000006D687A28 000FAE (v02 SUPERM Ther_Rvp 00001000 INTL 20160527) Feb 13 21:05:29.453030 kernel: ACPI: SSDT 0x000000006D6889D8 0008F7 (v02 INTEL xh_mossb 00000000 INTL 20160527) Feb 13 21:05:29.453035 kernel: ACPI: UEFI 0x000000006D6892D0 000042 (v01 SUPERM SMCI--MB 00000002 01000013) Feb 13 21:05:29.453040 kernel: ACPI: LPIT 0x000000006D689318 000094 (v01 SUPERM SMCI--MB 00000002 01000013) Feb 13 21:05:29.453045 kernel: ACPI: SSDT 0x000000006D6893B0 0027DE (v02 SUPERM PtidDevc 00001000 INTL 20160527) Feb 13 21:05:29.453050 kernel: ACPI: SSDT 0x000000006D68BB90 0014E2 (v02 SUPERM TbtTypeC 00000000 INTL 20160527) Feb 13 21:05:29.453055 kernel: ACPI: DBGP 0x000000006D68D078 000034 (v01 SUPERM SMCI--MB 00000002 01000013) Feb 13 21:05:29.453061 kernel: ACPI: DBG2 0x000000006D68D0B0 000054 (v00 SUPERM SMCI--MB 00000002 01000013) Feb 13 21:05:29.453066 kernel: ACPI: SSDT 0x000000006D68D108 001B67 (v02 SUPERM UsbCTabl 00001000 INTL 20160527) Feb 13 21:05:29.453071 kernel: ACPI: DMAR 0x000000006D68EC70 0000A8 (v01 INTEL EDK2 00000002 01000013) Feb 13 21:05:29.453076 kernel: ACPI: SSDT 0x000000006D68ED18 000144 (v02 Intel ADebTabl 00001000 INTL 20160527) Feb 13 21:05:29.453081 kernel: ACPI: TPM2 0x000000006D68EE60 000034 (v04 SUPERM SMCI--MB 00000001 AMI 00000000) Feb 13 21:05:29.453086 kernel: ACPI: SSDT 0x000000006D68EE98 000D8F (v02 INTEL SpsNm 00000002 INTL 20160527) Feb 13 21:05:29.453091 kernel: ACPI: WSMT 0x000000006D68FC28 000028 (v01 ?b 01072009 AMI 00010013) Feb 13 21:05:29.453096 kernel: ACPI: EINJ 0x000000006D68FC50 000130 (v01 AMI AMI.EINJ 00000000 AMI. 00000000) Feb 13 21:05:29.453101 kernel: ACPI: ERST 0x000000006D68FD80 000230 (v01 AMIER AMI.ERST 00000000 AMI. 00000000) Feb 13 21:05:29.453107 kernel: ACPI: BERT 0x000000006D68FFB0 000030 (v01 AMI AMI.BERT 00000000 AMI. 00000000) Feb 13 21:05:29.453112 kernel: ACPI: HEST 0x000000006D68FFE0 00027C (v01 AMI AMI.HEST 00000000 AMI. 
00000000) Feb 13 21:05:29.453117 kernel: ACPI: SSDT 0x000000006D690260 000162 (v01 SUPERM SMCCDN 00000000 INTL 20181221) Feb 13 21:05:29.453122 kernel: ACPI: Reserving FACP table memory at [mem 0x6d680620-0x6d680733] Feb 13 21:05:29.453127 kernel: ACPI: Reserving DSDT table memory at [mem 0x6d644268-0x6d68061e] Feb 13 21:05:29.453132 kernel: ACPI: Reserving FACS table memory at [mem 0x6d762f80-0x6d762fbf] Feb 13 21:05:29.453137 kernel: ACPI: Reserving APIC table memory at [mem 0x6d680738-0x6d680863] Feb 13 21:05:29.453142 kernel: ACPI: Reserving FPDT table memory at [mem 0x6d680868-0x6d6808ab] Feb 13 21:05:29.453148 kernel: ACPI: Reserving FIDT table memory at [mem 0x6d6808b0-0x6d68094b] Feb 13 21:05:29.453153 kernel: ACPI: Reserving MCFG table memory at [mem 0x6d680950-0x6d68098b] Feb 13 21:05:29.453158 kernel: ACPI: Reserving SPMI table memory at [mem 0x6d680990-0x6d6809d0] Feb 13 21:05:29.453163 kernel: ACPI: Reserving SSDT table memory at [mem 0x6d6809d8-0x6d6824f3] Feb 13 21:05:29.453168 kernel: ACPI: Reserving SSDT table memory at [mem 0x6d6824f8-0x6d6856bd] Feb 13 21:05:29.453173 kernel: ACPI: Reserving SSDT table memory at [mem 0x6d6856c0-0x6d6879ea] Feb 13 21:05:29.453178 kernel: ACPI: Reserving HPET table memory at [mem 0x6d6879f0-0x6d687a27] Feb 13 21:05:29.453183 kernel: ACPI: Reserving SSDT table memory at [mem 0x6d687a28-0x6d6889d5] Feb 13 21:05:29.453188 kernel: ACPI: Reserving SSDT table memory at [mem 0x6d6889d8-0x6d6892ce] Feb 13 21:05:29.453194 kernel: ACPI: Reserving UEFI table memory at [mem 0x6d6892d0-0x6d689311] Feb 13 21:05:29.453198 kernel: ACPI: Reserving LPIT table memory at [mem 0x6d689318-0x6d6893ab] Feb 13 21:05:29.453203 kernel: ACPI: Reserving SSDT table memory at [mem 0x6d6893b0-0x6d68bb8d] Feb 13 21:05:29.453208 kernel: ACPI: Reserving SSDT table memory at [mem 0x6d68bb90-0x6d68d071] Feb 13 21:05:29.453213 kernel: ACPI: Reserving DBGP table memory at [mem 0x6d68d078-0x6d68d0ab] Feb 13 21:05:29.453218 kernel: ACPI: Reserving DBG2 table memory at [mem 0x6d68d0b0-0x6d68d103] Feb 13 21:05:29.453223 kernel: ACPI: Reserving SSDT table memory at [mem 0x6d68d108-0x6d68ec6e] Feb 13 21:05:29.453228 kernel: ACPI: Reserving DMAR table memory at [mem 0x6d68ec70-0x6d68ed17] Feb 13 21:05:29.453233 kernel: ACPI: Reserving SSDT table memory at [mem 0x6d68ed18-0x6d68ee5b] Feb 13 21:05:29.453239 kernel: ACPI: Reserving TPM2 table memory at [mem 0x6d68ee60-0x6d68ee93] Feb 13 21:05:29.453244 kernel: ACPI: Reserving SSDT table memory at [mem 0x6d68ee98-0x6d68fc26] Feb 13 21:05:29.453249 kernel: ACPI: Reserving WSMT table memory at [mem 0x6d68fc28-0x6d68fc4f] Feb 13 21:05:29.453254 kernel: ACPI: Reserving EINJ table memory at [mem 0x6d68fc50-0x6d68fd7f] Feb 13 21:05:29.453259 kernel: ACPI: Reserving ERST table memory at [mem 0x6d68fd80-0x6d68ffaf] Feb 13 21:05:29.453264 kernel: ACPI: Reserving BERT table memory at [mem 0x6d68ffb0-0x6d68ffdf] Feb 13 21:05:29.453269 kernel: ACPI: Reserving HEST table memory at [mem 0x6d68ffe0-0x6d69025b] Feb 13 21:05:29.453274 kernel: ACPI: Reserving SSDT table memory at [mem 0x6d690260-0x6d6903c1] Feb 13 21:05:29.453279 kernel: No NUMA configuration found Feb 13 21:05:29.453285 kernel: Faking a node at [mem 0x0000000000000000-0x00000008837fffff] Feb 13 21:05:29.453290 kernel: NODE_DATA(0) allocated [mem 0x8837fa000-0x8837fffff] Feb 13 21:05:29.453295 kernel: Zone ranges: Feb 13 21:05:29.453300 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Feb 13 21:05:29.453305 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Feb 13 
21:05:29.453310 kernel: Normal [mem 0x0000000100000000-0x00000008837fffff] Feb 13 21:05:29.453315 kernel: Movable zone start for each node Feb 13 21:05:29.453320 kernel: Early memory node ranges Feb 13 21:05:29.453325 kernel: node 0: [mem 0x0000000000001000-0x0000000000098fff] Feb 13 21:05:29.453330 kernel: node 0: [mem 0x0000000000100000-0x000000003fffffff] Feb 13 21:05:29.453336 kernel: node 0: [mem 0x0000000040400000-0x00000000620bafff] Feb 13 21:05:29.453341 kernel: node 0: [mem 0x00000000620bd000-0x000000006c0c4fff] Feb 13 21:05:29.453345 kernel: node 0: [mem 0x000000006d1a8000-0x000000006d330fff] Feb 13 21:05:29.453351 kernel: node 0: [mem 0x000000006ffff000-0x000000006fffffff] Feb 13 21:05:29.453359 kernel: node 0: [mem 0x0000000100000000-0x00000008837fffff] Feb 13 21:05:29.453365 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000008837fffff] Feb 13 21:05:29.453370 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Feb 13 21:05:29.453376 kernel: On node 0, zone DMA: 103 pages in unavailable ranges Feb 13 21:05:29.453382 kernel: On node 0, zone DMA32: 1024 pages in unavailable ranges Feb 13 21:05:29.453388 kernel: On node 0, zone DMA32: 2 pages in unavailable ranges Feb 13 21:05:29.453393 kernel: On node 0, zone DMA32: 4323 pages in unavailable ranges Feb 13 21:05:29.453398 kernel: On node 0, zone DMA32: 11470 pages in unavailable ranges Feb 13 21:05:29.453404 kernel: On node 0, zone Normal: 18432 pages in unavailable ranges Feb 13 21:05:29.453409 kernel: ACPI: PM-Timer IO Port: 0x1808 Feb 13 21:05:29.453414 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1]) Feb 13 21:05:29.453420 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1]) Feb 13 21:05:29.453425 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1]) Feb 13 21:05:29.453431 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1]) Feb 13 21:05:29.453436 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1]) Feb 13 21:05:29.453442 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1]) Feb 13 21:05:29.453447 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1]) Feb 13 21:05:29.453452 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1]) Feb 13 21:05:29.453458 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1]) Feb 13 21:05:29.453463 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1]) Feb 13 21:05:29.453468 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1]) Feb 13 21:05:29.453474 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1]) Feb 13 21:05:29.453480 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1]) Feb 13 21:05:29.453485 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1]) Feb 13 21:05:29.453490 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1]) Feb 13 21:05:29.453496 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1]) Feb 13 21:05:29.453501 kernel: IOAPIC[0]: apic_id 2, version 32, address 0xfec00000, GSI 0-119 Feb 13 21:05:29.453506 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Feb 13 21:05:29.453512 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Feb 13 21:05:29.453517 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Feb 13 21:05:29.453522 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Feb 13 21:05:29.453529 kernel: TSC deadline timer available Feb 13 21:05:29.453534 kernel: smpboot: Allowing 16 CPUs, 0 hotplug CPUs Feb 13 21:05:29.453539 kernel: [mem 0x7b800000-0xdfffffff] available for PCI devices Feb 13 21:05:29.453545 kernel: 
Booting paravirtualized kernel on bare hardware Feb 13 21:05:29.453550 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Feb 13 21:05:29.453556 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:16 nr_cpu_ids:16 nr_node_ids:1 Feb 13 21:05:29.453561 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u262144 Feb 13 21:05:29.453567 kernel: pcpu-alloc: s197032 r8192 d32344 u262144 alloc=1*2097152 Feb 13 21:05:29.453572 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15 Feb 13 21:05:29.453579 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=015d1d9e5e601f6a4e226c935072d3d0819e7eb2da20e68715973498f21aa3fe Feb 13 21:05:29.453585 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Feb 13 21:05:29.453590 kernel: random: crng init done Feb 13 21:05:29.453595 kernel: Dentry cache hash table entries: 4194304 (order: 13, 33554432 bytes, linear) Feb 13 21:05:29.453600 kernel: Inode-cache hash table entries: 2097152 (order: 12, 16777216 bytes, linear) Feb 13 21:05:29.453606 kernel: Fallback order for Node 0: 0 Feb 13 21:05:29.453611 kernel: Built 1 zonelists, mobility grouping on. Total pages: 8190323 Feb 13 21:05:29.453616 kernel: Policy zone: Normal Feb 13 21:05:29.453625 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Feb 13 21:05:29.453630 kernel: software IO TLB: area num 16. Feb 13 21:05:29.453636 kernel: Memory: 32549268K/33281940K available (14336K kernel code, 2301K rwdata, 22800K rodata, 43320K init, 1752K bss, 732412K reserved, 0K cma-reserved) Feb 13 21:05:29.453641 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1 Feb 13 21:05:29.453647 kernel: ftrace: allocating 37893 entries in 149 pages Feb 13 21:05:29.453652 kernel: ftrace: allocated 149 pages with 4 groups Feb 13 21:05:29.453657 kernel: Dynamic Preempt: voluntary Feb 13 21:05:29.453663 kernel: rcu: Preemptible hierarchical RCU implementation. Feb 13 21:05:29.453668 kernel: rcu: RCU event tracing is enabled. Feb 13 21:05:29.453675 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16. Feb 13 21:05:29.453680 kernel: Trampoline variant of Tasks RCU enabled. Feb 13 21:05:29.453686 kernel: Rude variant of Tasks RCU enabled. Feb 13 21:05:29.453691 kernel: Tracing variant of Tasks RCU enabled. Feb 13 21:05:29.453697 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Feb 13 21:05:29.453702 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16 Feb 13 21:05:29.453707 kernel: NR_IRQS: 33024, nr_irqs: 2184, preallocated irqs: 16 Feb 13 21:05:29.453713 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Feb 13 21:05:29.453718 kernel: Console: colour VGA+ 80x25 Feb 13 21:05:29.453724 kernel: printk: console [tty0] enabled Feb 13 21:05:29.453729 kernel: printk: console [ttyS1] enabled Feb 13 21:05:29.453735 kernel: ACPI: Core revision 20230628 Feb 13 21:05:29.453740 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 79635855245 ns Feb 13 21:05:29.453745 kernel: APIC: Switch to symmetric I/O mode setup Feb 13 21:05:29.453751 kernel: DMAR: Host address width 39 Feb 13 21:05:29.453756 kernel: DMAR: DRHD base: 0x000000fed90000 flags: 0x0 Feb 13 21:05:29.453762 kernel: DMAR: dmar0: reg_base_addr fed90000 ver 1:0 cap 1c0000c40660462 ecap 19e2ff0505e Feb 13 21:05:29.453767 kernel: DMAR: DRHD base: 0x000000fed91000 flags: 0x1 Feb 13 21:05:29.453773 kernel: DMAR: dmar1: reg_base_addr fed91000 ver 1:0 cap d2008c40660462 ecap f050da Feb 13 21:05:29.453779 kernel: DMAR: RMRR base: 0x0000006e011000 end: 0x0000006e25afff Feb 13 21:05:29.453784 kernel: DMAR: RMRR base: 0x00000079000000 end: 0x0000007b7fffff Feb 13 21:05:29.453789 kernel: DMAR-IR: IOAPIC id 2 under DRHD base 0xfed91000 IOMMU 1 Feb 13 21:05:29.453795 kernel: DMAR-IR: HPET id 0 under DRHD base 0xfed91000 Feb 13 21:05:29.453800 kernel: DMAR-IR: Queued invalidation will be enabled to support x2apic and Intr-remapping. Feb 13 21:05:29.453805 kernel: DMAR-IR: Enabled IRQ remapping in x2apic mode Feb 13 21:05:29.453811 kernel: x2apic enabled Feb 13 21:05:29.453816 kernel: APIC: Switched APIC routing to: cluster x2apic Feb 13 21:05:29.453822 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Feb 13 21:05:29.453828 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x3101f59f5e6, max_idle_ns: 440795259996 ns Feb 13 21:05:29.453833 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 
6799.81 BogoMIPS (lpj=3399906) Feb 13 21:05:29.453839 kernel: CPU0: Thermal monitoring enabled (TM1) Feb 13 21:05:29.453844 kernel: process: using mwait in idle threads Feb 13 21:05:29.453849 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Feb 13 21:05:29.453854 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4 Feb 13 21:05:29.453860 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Feb 13 21:05:29.453865 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit Feb 13 21:05:29.453871 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall Feb 13 21:05:29.453877 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS Feb 13 21:05:29.453882 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Feb 13 21:05:29.453888 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT Feb 13 21:05:29.453893 kernel: RETBleed: Mitigation: Enhanced IBRS Feb 13 21:05:29.453898 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Feb 13 21:05:29.453904 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Feb 13 21:05:29.453909 kernel: TAA: Mitigation: TSX disabled Feb 13 21:05:29.453914 kernel: MMIO Stale Data: Mitigation: Clear CPU buffers Feb 13 21:05:29.453920 kernel: SRBDS: Mitigation: Microcode Feb 13 21:05:29.453926 kernel: GDS: Mitigation: Microcode Feb 13 21:05:29.453931 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Feb 13 21:05:29.453936 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Feb 13 21:05:29.453942 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Feb 13 21:05:29.453947 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers' Feb 13 21:05:29.453952 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR' Feb 13 21:05:29.453958 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Feb 13 21:05:29.453963 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64 Feb 13 21:05:29.453969 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64 Feb 13 21:05:29.453975 kernel: x86/fpu: Enabled xstate features 0x1f, context size is 960 bytes, using 'compacted' format. Feb 13 21:05:29.453980 kernel: Freeing SMP alternatives memory: 32K Feb 13 21:05:29.453986 kernel: pid_max: default: 32768 minimum: 301 Feb 13 21:05:29.453991 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Feb 13 21:05:29.453996 kernel: landlock: Up and running. Feb 13 21:05:29.454001 kernel: SELinux: Initializing. Feb 13 21:05:29.454007 kernel: Mount-cache hash table entries: 65536 (order: 7, 524288 bytes, linear) Feb 13 21:05:29.454012 kernel: Mountpoint-cache hash table entries: 65536 (order: 7, 524288 bytes, linear) Feb 13 21:05:29.454019 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd) Feb 13 21:05:29.454024 kernel: RCU Tasks: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Feb 13 21:05:29.454029 kernel: RCU Tasks Rude: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Feb 13 21:05:29.454035 kernel: RCU Tasks Trace: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Feb 13 21:05:29.454040 kernel: Performance Events: PEBS fmt3+, Skylake events, 32-deep LBR, full-width counters, Intel PMU driver. Feb 13 21:05:29.454045 kernel: ... 
version: 4 Feb 13 21:05:29.454051 kernel: ... bit width: 48 Feb 13 21:05:29.454056 kernel: ... generic registers: 4 Feb 13 21:05:29.454061 kernel: ... value mask: 0000ffffffffffff Feb 13 21:05:29.454068 kernel: ... max period: 00007fffffffffff Feb 13 21:05:29.454073 kernel: ... fixed-purpose events: 3 Feb 13 21:05:29.454078 kernel: ... event mask: 000000070000000f Feb 13 21:05:29.454083 kernel: signal: max sigframe size: 2032 Feb 13 21:05:29.454089 kernel: Estimated ratio of average max frequency by base frequency (times 1024): 1445 Feb 13 21:05:29.454094 kernel: rcu: Hierarchical SRCU implementation. Feb 13 21:05:29.454099 kernel: rcu: Max phase no-delay instances is 400. Feb 13 21:05:29.454105 kernel: NMI watchdog: Enabled. Permanently consumes one hw-PMU counter. Feb 13 21:05:29.454110 kernel: smp: Bringing up secondary CPUs ... Feb 13 21:05:29.454116 kernel: smpboot: x86: Booting SMP configuration: Feb 13 21:05:29.454122 kernel: .... node #0, CPUs: #1 #2 #3 #4 #5 #6 #7 #8 #9 #10 #11 #12 #13 #14 #15 Feb 13 21:05:29.454127 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. Feb 13 21:05:29.454133 kernel: smp: Brought up 1 node, 16 CPUs Feb 13 21:05:29.454138 kernel: smpboot: Max logical packages: 1 Feb 13 21:05:29.454143 kernel: smpboot: Total of 16 processors activated (108796.99 BogoMIPS) Feb 13 21:05:29.454149 kernel: devtmpfs: initialized Feb 13 21:05:29.454154 kernel: x86/mm: Memory block size: 128MB Feb 13 21:05:29.454159 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x620bb000-0x620bbfff] (4096 bytes) Feb 13 21:05:29.454166 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x6d331000-0x6d762fff] (4399104 bytes) Feb 13 21:05:29.454171 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Feb 13 21:05:29.454176 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear) Feb 13 21:05:29.454182 kernel: pinctrl core: initialized pinctrl subsystem Feb 13 21:05:29.454187 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Feb 13 21:05:29.454192 kernel: audit: initializing netlink subsys (disabled) Feb 13 21:05:29.454198 kernel: audit: type=2000 audit(1739480724.130:1): state=initialized audit_enabled=0 res=1 Feb 13 21:05:29.454203 kernel: thermal_sys: Registered thermal governor 'step_wise' Feb 13 21:05:29.454208 kernel: thermal_sys: Registered thermal governor 'user_space' Feb 13 21:05:29.454214 kernel: cpuidle: using governor menu Feb 13 21:05:29.454220 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Feb 13 21:05:29.454225 kernel: dca service started, version 1.12.1 Feb 13 21:05:29.454230 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000) Feb 13 21:05:29.454236 kernel: PCI: Using configuration type 1 for base access Feb 13 21:05:29.454241 kernel: ENERGY_PERF_BIAS: Set to 'normal', was 'performance' Feb 13 21:05:29.454246 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Feb 13 21:05:29.454252 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Feb 13 21:05:29.454258 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Feb 13 21:05:29.454264 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Feb 13 21:05:29.454269 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Feb 13 21:05:29.454274 kernel: ACPI: Added _OSI(Module Device) Feb 13 21:05:29.454279 kernel: ACPI: Added _OSI(Processor Device) Feb 13 21:05:29.454285 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Feb 13 21:05:29.454290 kernel: ACPI: Added _OSI(Processor Aggregator Device) Feb 13 21:05:29.454295 kernel: ACPI: 12 ACPI AML tables successfully acquired and loaded Feb 13 21:05:29.454301 kernel: ACPI: Dynamic OEM Table Load: Feb 13 21:05:29.454307 kernel: ACPI: SSDT 0xFFFF9810010EE800 000400 (v02 PmRef Cpu0Cst 00003001 INTL 20160527) Feb 13 21:05:29.454313 kernel: ACPI: Dynamic OEM Table Load: Feb 13 21:05:29.454318 kernel: ACPI: SSDT 0xFFFF9810010E2000 000683 (v02 PmRef Cpu0Ist 00003000 INTL 20160527) Feb 13 21:05:29.454323 kernel: ACPI: Dynamic OEM Table Load: Feb 13 21:05:29.454329 kernel: ACPI: SSDT 0xFFFF9810017E5B00 0000F4 (v02 PmRef Cpu0Psd 00003000 INTL 20160527) Feb 13 21:05:29.454334 kernel: ACPI: Dynamic OEM Table Load: Feb 13 21:05:29.454339 kernel: ACPI: SSDT 0xFFFF9810017FE800 0005FC (v02 PmRef ApIst 00003000 INTL 20160527) Feb 13 21:05:29.454344 kernel: ACPI: Dynamic OEM Table Load: Feb 13 21:05:29.454349 kernel: ACPI: SSDT 0xFFFF9810010F4000 000AB0 (v02 PmRef ApPsd 00003000 INTL 20160527) Feb 13 21:05:29.454355 kernel: ACPI: Dynamic OEM Table Load: Feb 13 21:05:29.454361 kernel: ACPI: SSDT 0xFFFF9810010EAC00 00030A (v02 PmRef ApCst 00003000 INTL 20160527) Feb 13 21:05:29.454367 kernel: ACPI: _OSC evaluated successfully for all CPUs Feb 13 21:05:29.454372 kernel: ACPI: Interpreter enabled Feb 13 21:05:29.454377 kernel: ACPI: PM: (supports S0 S5) Feb 13 21:05:29.454383 kernel: ACPI: Using IOAPIC for interrupt routing Feb 13 21:05:29.454388 kernel: HEST: Enabling Firmware First mode for corrected errors. Feb 13 21:05:29.454393 kernel: mce: [Firmware Bug]: Ignoring request to disable invalid MCA bank 14. Feb 13 21:05:29.454398 kernel: HEST: Table parsing has been initialized. Feb 13 21:05:29.454404 kernel: GHES: APEI firmware first mode is enabled by APEI bit and WHEA _OSC. 
Feb 13 21:05:29.454410 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Feb 13 21:05:29.454416 kernel: PCI: Using E820 reservations for host bridge windows Feb 13 21:05:29.454421 kernel: ACPI: Enabled 9 GPEs in block 00 to 7F Feb 13 21:05:29.454427 kernel: ACPI: \_SB_.PCI0.XDCI.USBC: New power resource Feb 13 21:05:29.454432 kernel: ACPI: \_SB_.PCI0.SAT0.VOL0.V0PR: New power resource Feb 13 21:05:29.454437 kernel: ACPI: \_SB_.PCI0.SAT0.VOL1.V1PR: New power resource Feb 13 21:05:29.454443 kernel: ACPI: \_SB_.PCI0.SAT0.VOL2.V2PR: New power resource Feb 13 21:05:29.454448 kernel: ACPI: \_SB_.PCI0.CNVW.WRST: New power resource Feb 13 21:05:29.454454 kernel: ACPI: [Firmware Bug]: BIOS _OSI(Linux) query ignored Feb 13 21:05:29.454460 kernel: ACPI: \_TZ_.FN00: New power resource Feb 13 21:05:29.454465 kernel: ACPI: \_TZ_.FN01: New power resource Feb 13 21:05:29.454471 kernel: ACPI: \_TZ_.FN02: New power resource Feb 13 21:05:29.454476 kernel: ACPI: \_TZ_.FN03: New power resource Feb 13 21:05:29.454481 kernel: ACPI: \_TZ_.FN04: New power resource Feb 13 21:05:29.454487 kernel: ACPI: \PIN_: New power resource Feb 13 21:05:29.454492 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-fe]) Feb 13 21:05:29.454565 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Feb 13 21:05:29.454619 kernel: acpi PNP0A08:00: _OSC: platform does not support [AER] Feb 13 21:05:29.454702 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability LTR] Feb 13 21:05:29.454710 kernel: PCI host bridge to bus 0000:00 Feb 13 21:05:29.454758 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Feb 13 21:05:29.454800 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Feb 13 21:05:29.454842 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Feb 13 21:05:29.454883 kernel: pci_bus 0000:00: root bus resource [mem 0x7b800000-0xdfffffff window] Feb 13 21:05:29.454925 kernel: pci_bus 0000:00: root bus resource [mem 0xfc800000-0xfe7fffff window] Feb 13 21:05:29.454966 kernel: pci_bus 0000:00: root bus resource [bus 00-fe] Feb 13 21:05:29.455022 kernel: pci 0000:00:00.0: [8086:3e31] type 00 class 0x060000 Feb 13 21:05:29.455076 kernel: pci 0000:00:01.0: [8086:1901] type 01 class 0x060400 Feb 13 21:05:29.455126 kernel: pci 0000:00:01.0: PME# supported from D0 D3hot D3cold Feb 13 21:05:29.455179 kernel: pci 0000:00:01.1: [8086:1905] type 01 class 0x060400 Feb 13 21:05:29.455229 kernel: pci 0000:00:01.1: PME# supported from D0 D3hot D3cold Feb 13 21:05:29.455282 kernel: pci 0000:00:02.0: [8086:3e9a] type 00 class 0x038000 Feb 13 21:05:29.455330 kernel: pci 0000:00:02.0: reg 0x10: [mem 0x7c000000-0x7cffffff 64bit] Feb 13 21:05:29.455377 kernel: pci 0000:00:02.0: reg 0x18: [mem 0x80000000-0x8fffffff 64bit pref] Feb 13 21:05:29.455424 kernel: pci 0000:00:02.0: reg 0x20: [io 0x6000-0x603f] Feb 13 21:05:29.455476 kernel: pci 0000:00:08.0: [8086:1911] type 00 class 0x088000 Feb 13 21:05:29.455524 kernel: pci 0000:00:08.0: reg 0x10: [mem 0x7e51f000-0x7e51ffff 64bit] Feb 13 21:05:29.455577 kernel: pci 0000:00:12.0: [8086:a379] type 00 class 0x118000 Feb 13 21:05:29.455628 kernel: pci 0000:00:12.0: reg 0x10: [mem 0x7e51e000-0x7e51efff 64bit] Feb 13 21:05:29.455715 kernel: pci 0000:00:14.0: [8086:a36d] type 00 class 0x0c0330 Feb 13 21:05:29.455763 kernel: pci 0000:00:14.0: reg 0x10: [mem 0x7e500000-0x7e50ffff 64bit] Feb 13 21:05:29.455812 kernel: pci 0000:00:14.0: PME# 
supported from D3hot D3cold Feb 13 21:05:29.455870 kernel: pci 0000:00:14.2: [8086:a36f] type 00 class 0x050000 Feb 13 21:05:29.455919 kernel: pci 0000:00:14.2: reg 0x10: [mem 0x7e512000-0x7e513fff 64bit] Feb 13 21:05:29.455967 kernel: pci 0000:00:14.2: reg 0x18: [mem 0x7e51d000-0x7e51dfff 64bit] Feb 13 21:05:29.456019 kernel: pci 0000:00:15.0: [8086:a368] type 00 class 0x0c8000 Feb 13 21:05:29.456065 kernel: pci 0000:00:15.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit] Feb 13 21:05:29.456118 kernel: pci 0000:00:15.1: [8086:a369] type 00 class 0x0c8000 Feb 13 21:05:29.456165 kernel: pci 0000:00:15.1: reg 0x10: [mem 0x00000000-0x00000fff 64bit] Feb 13 21:05:29.456218 kernel: pci 0000:00:16.0: [8086:a360] type 00 class 0x078000 Feb 13 21:05:29.456265 kernel: pci 0000:00:16.0: reg 0x10: [mem 0x7e51a000-0x7e51afff 64bit] Feb 13 21:05:29.456312 kernel: pci 0000:00:16.0: PME# supported from D3hot Feb 13 21:05:29.456363 kernel: pci 0000:00:16.1: [8086:a361] type 00 class 0x078000 Feb 13 21:05:29.456410 kernel: pci 0000:00:16.1: reg 0x10: [mem 0x7e519000-0x7e519fff 64bit] Feb 13 21:05:29.456456 kernel: pci 0000:00:16.1: PME# supported from D3hot Feb 13 21:05:29.456506 kernel: pci 0000:00:16.4: [8086:a364] type 00 class 0x078000 Feb 13 21:05:29.456555 kernel: pci 0000:00:16.4: reg 0x10: [mem 0x7e518000-0x7e518fff 64bit] Feb 13 21:05:29.456601 kernel: pci 0000:00:16.4: PME# supported from D3hot Feb 13 21:05:29.456717 kernel: pci 0000:00:17.0: [8086:a352] type 00 class 0x010601 Feb 13 21:05:29.456765 kernel: pci 0000:00:17.0: reg 0x10: [mem 0x7e510000-0x7e511fff] Feb 13 21:05:29.456813 kernel: pci 0000:00:17.0: reg 0x14: [mem 0x7e517000-0x7e5170ff] Feb 13 21:05:29.456860 kernel: pci 0000:00:17.0: reg 0x18: [io 0x6090-0x6097] Feb 13 21:05:29.456906 kernel: pci 0000:00:17.0: reg 0x1c: [io 0x6080-0x6083] Feb 13 21:05:29.456952 kernel: pci 0000:00:17.0: reg 0x20: [io 0x6060-0x607f] Feb 13 21:05:29.456999 kernel: pci 0000:00:17.0: reg 0x24: [mem 0x7e516000-0x7e5167ff] Feb 13 21:05:29.457044 kernel: pci 0000:00:17.0: PME# supported from D3hot Feb 13 21:05:29.457095 kernel: pci 0000:00:1b.0: [8086:a340] type 01 class 0x060400 Feb 13 21:05:29.457145 kernel: pci 0000:00:1b.0: PME# supported from D0 D3hot D3cold Feb 13 21:05:29.457197 kernel: pci 0000:00:1b.4: [8086:a32c] type 01 class 0x060400 Feb 13 21:05:29.457247 kernel: pci 0000:00:1b.4: PME# supported from D0 D3hot D3cold Feb 13 21:05:29.457301 kernel: pci 0000:00:1b.5: [8086:a32d] type 01 class 0x060400 Feb 13 21:05:29.457348 kernel: pci 0000:00:1b.5: PME# supported from D0 D3hot D3cold Feb 13 21:05:29.457400 kernel: pci 0000:00:1c.0: [8086:a338] type 01 class 0x060400 Feb 13 21:05:29.457448 kernel: pci 0000:00:1c.0: PME# supported from D0 D3hot D3cold Feb 13 21:05:29.457502 kernel: pci 0000:00:1c.1: [8086:a339] type 01 class 0x060400 Feb 13 21:05:29.457549 kernel: pci 0000:00:1c.1: PME# supported from D0 D3hot D3cold Feb 13 21:05:29.457600 kernel: pci 0000:00:1e.0: [8086:a328] type 00 class 0x078000 Feb 13 21:05:29.457668 kernel: pci 0000:00:1e.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit] Feb 13 21:05:29.457735 kernel: pci 0000:00:1f.0: [8086:a309] type 00 class 0x060100 Feb 13 21:05:29.457790 kernel: pci 0000:00:1f.4: [8086:a323] type 00 class 0x0c0500 Feb 13 21:05:29.457837 kernel: pci 0000:00:1f.4: reg 0x10: [mem 0x7e514000-0x7e5140ff 64bit] Feb 13 21:05:29.457884 kernel: pci 0000:00:1f.4: reg 0x20: [io 0xefa0-0xefbf] Feb 13 21:05:29.457936 kernel: pci 0000:00:1f.5: [8086:a324] type 00 class 0x0c8000 Feb 13 21:05:29.457984 kernel: pci 
0000:00:1f.5: reg 0x10: [mem 0xfe010000-0xfe010fff] Feb 13 21:05:29.458031 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Feb 13 21:05:29.458087 kernel: pci 0000:02:00.0: [15b3:1015] type 00 class 0x020000 Feb 13 21:05:29.458136 kernel: pci 0000:02:00.0: reg 0x10: [mem 0x92000000-0x93ffffff 64bit pref] Feb 13 21:05:29.458185 kernel: pci 0000:02:00.0: reg 0x30: [mem 0x7e200000-0x7e2fffff pref] Feb 13 21:05:29.458232 kernel: pci 0000:02:00.0: PME# supported from D3cold Feb 13 21:05:29.458280 kernel: pci 0000:02:00.0: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref] Feb 13 21:05:29.458328 kernel: pci 0000:02:00.0: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs) Feb 13 21:05:29.458382 kernel: pci 0000:02:00.1: [15b3:1015] type 00 class 0x020000 Feb 13 21:05:29.458433 kernel: pci 0000:02:00.1: reg 0x10: [mem 0x90000000-0x91ffffff 64bit pref] Feb 13 21:05:29.458481 kernel: pci 0000:02:00.1: reg 0x30: [mem 0x7e100000-0x7e1fffff pref] Feb 13 21:05:29.458530 kernel: pci 0000:02:00.1: PME# supported from D3cold Feb 13 21:05:29.458577 kernel: pci 0000:02:00.1: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref] Feb 13 21:05:29.458627 kernel: pci 0000:02:00.1: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs) Feb 13 21:05:29.458710 kernel: pci 0000:00:01.1: PCI bridge to [bus 02] Feb 13 21:05:29.458758 kernel: pci 0000:00:01.1: bridge window [mem 0x7e100000-0x7e2fffff] Feb 13 21:05:29.458808 kernel: pci 0000:00:01.1: bridge window [mem 0x90000000-0x93ffffff 64bit pref] Feb 13 21:05:29.458854 kernel: pci 0000:00:1b.0: PCI bridge to [bus 03] Feb 13 21:05:29.458907 kernel: pci 0000:04:00.0: working around ROM BAR overlap defect Feb 13 21:05:29.458955 kernel: pci 0000:04:00.0: [8086:1533] type 00 class 0x020000 Feb 13 21:05:29.459003 kernel: pci 0000:04:00.0: reg 0x10: [mem 0x7e400000-0x7e47ffff] Feb 13 21:05:29.459050 kernel: pci 0000:04:00.0: reg 0x18: [io 0x5000-0x501f] Feb 13 21:05:29.459098 kernel: pci 0000:04:00.0: reg 0x1c: [mem 0x7e480000-0x7e483fff] Feb 13 21:05:29.459145 kernel: pci 0000:04:00.0: PME# supported from D0 D3hot D3cold Feb 13 21:05:29.459195 kernel: pci 0000:00:1b.4: PCI bridge to [bus 04] Feb 13 21:05:29.459242 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff] Feb 13 21:05:29.459288 kernel: pci 0000:00:1b.4: bridge window [mem 0x7e400000-0x7e4fffff] Feb 13 21:05:29.459341 kernel: pci 0000:05:00.0: working around ROM BAR overlap defect Feb 13 21:05:29.459389 kernel: pci 0000:05:00.0: [8086:1533] type 00 class 0x020000 Feb 13 21:05:29.459439 kernel: pci 0000:05:00.0: reg 0x10: [mem 0x7e300000-0x7e37ffff] Feb 13 21:05:29.459487 kernel: pci 0000:05:00.0: reg 0x18: [io 0x4000-0x401f] Feb 13 21:05:29.459537 kernel: pci 0000:05:00.0: reg 0x1c: [mem 0x7e380000-0x7e383fff] Feb 13 21:05:29.459584 kernel: pci 0000:05:00.0: PME# supported from D0 D3hot D3cold Feb 13 21:05:29.459635 kernel: pci 0000:00:1b.5: PCI bridge to [bus 05] Feb 13 21:05:29.459726 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff] Feb 13 21:05:29.459773 kernel: pci 0000:00:1b.5: bridge window [mem 0x7e300000-0x7e3fffff] Feb 13 21:05:29.459820 kernel: pci 0000:00:1c.0: PCI bridge to [bus 06] Feb 13 21:05:29.459905 kernel: pci 0000:07:00.0: [1a03:1150] type 01 class 0x060400 Feb 13 21:05:29.459955 kernel: pci 0000:07:00.0: enabling Extended Tags Feb 13 21:05:29.460005 kernel: pci 0000:07:00.0: supports D1 D2 Feb 13 21:05:29.460054 kernel: pci 0000:07:00.0: PME# supported from D0 D1 D2 D3hot D3cold Feb 13 21:05:29.460101 kernel: pci 
0000:00:1c.1: PCI bridge to [bus 07-08] Feb 13 21:05:29.460159 kernel: pci 0000:00:1c.1: bridge window [io 0x3000-0x3fff] Feb 13 21:05:29.460207 kernel: pci 0000:00:1c.1: bridge window [mem 0x7d000000-0x7e0fffff] Feb 13 21:05:29.460261 kernel: pci_bus 0000:08: extended config space not accessible Feb 13 21:05:29.460317 kernel: pci 0000:08:00.0: [1a03:2000] type 00 class 0x030000 Feb 13 21:05:29.460371 kernel: pci 0000:08:00.0: reg 0x10: [mem 0x7d000000-0x7dffffff] Feb 13 21:05:29.460422 kernel: pci 0000:08:00.0: reg 0x14: [mem 0x7e000000-0x7e01ffff] Feb 13 21:05:29.460472 kernel: pci 0000:08:00.0: reg 0x18: [io 0x3000-0x307f] Feb 13 21:05:29.460523 kernel: pci 0000:08:00.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Feb 13 21:05:29.460573 kernel: pci 0000:08:00.0: supports D1 D2 Feb 13 21:05:29.460626 kernel: pci 0000:08:00.0: PME# supported from D0 D1 D2 D3hot D3cold Feb 13 21:05:29.460711 kernel: pci 0000:07:00.0: PCI bridge to [bus 08] Feb 13 21:05:29.460762 kernel: pci 0000:07:00.0: bridge window [io 0x3000-0x3fff] Feb 13 21:05:29.460811 kernel: pci 0000:07:00.0: bridge window [mem 0x7d000000-0x7e0fffff] Feb 13 21:05:29.460820 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 0 Feb 13 21:05:29.460826 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 1 Feb 13 21:05:29.460832 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 0 Feb 13 21:05:29.460837 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 0 Feb 13 21:05:29.460843 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 0 Feb 13 21:05:29.460849 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 0 Feb 13 21:05:29.460854 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 0 Feb 13 21:05:29.460862 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 0 Feb 13 21:05:29.460868 kernel: iommu: Default domain type: Translated Feb 13 21:05:29.460873 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Feb 13 21:05:29.460879 kernel: PCI: Using ACPI for IRQ routing Feb 13 21:05:29.460885 kernel: PCI: pci_cache_line_size set to 64 bytes Feb 13 21:05:29.460890 kernel: e820: reserve RAM buffer [mem 0x00099800-0x0009ffff] Feb 13 21:05:29.460896 kernel: e820: reserve RAM buffer [mem 0x620bb000-0x63ffffff] Feb 13 21:05:29.460902 kernel: e820: reserve RAM buffer [mem 0x6c0c5000-0x6fffffff] Feb 13 21:05:29.460907 kernel: e820: reserve RAM buffer [mem 0x6d331000-0x6fffffff] Feb 13 21:05:29.460914 kernel: e820: reserve RAM buffer [mem 0x883800000-0x883ffffff] Feb 13 21:05:29.460963 kernel: pci 0000:08:00.0: vgaarb: setting as boot VGA device Feb 13 21:05:29.461013 kernel: pci 0000:08:00.0: vgaarb: bridge control possible Feb 13 21:05:29.461065 kernel: pci 0000:08:00.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Feb 13 21:05:29.461073 kernel: vgaarb: loaded Feb 13 21:05:29.461079 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0 Feb 13 21:05:29.461085 kernel: hpet0: 8 comparators, 64-bit 24.000000 MHz counter Feb 13 21:05:29.461091 kernel: clocksource: Switched to clocksource tsc-early Feb 13 21:05:29.461096 kernel: VFS: Disk quotas dquot_6.6.0 Feb 13 21:05:29.461104 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Feb 13 21:05:29.461109 kernel: pnp: PnP ACPI init Feb 13 21:05:29.461158 kernel: system 00:00: [mem 0x40000000-0x403fffff] has been reserved Feb 13 21:05:29.461206 kernel: pnp 00:02: [dma 0 disabled] Feb 13 21:05:29.461252 kernel: pnp 00:03: [dma 0 disabled] Feb 13 21:05:29.461302 kernel: system 00:04: [io 
0x0680-0x069f] has been reserved Feb 13 21:05:29.461347 kernel: system 00:04: [io 0x164e-0x164f] has been reserved Feb 13 21:05:29.461393 kernel: system 00:05: [io 0x1854-0x1857] has been reserved Feb 13 21:05:29.461439 kernel: system 00:06: [mem 0xfed10000-0xfed17fff] has been reserved Feb 13 21:05:29.461482 kernel: system 00:06: [mem 0xfed18000-0xfed18fff] has been reserved Feb 13 21:05:29.461524 kernel: system 00:06: [mem 0xfed19000-0xfed19fff] has been reserved Feb 13 21:05:29.461568 kernel: system 00:06: [mem 0xe0000000-0xefffffff] has been reserved Feb 13 21:05:29.461610 kernel: system 00:06: [mem 0xfed20000-0xfed3ffff] has been reserved Feb 13 21:05:29.461698 kernel: system 00:06: [mem 0xfed90000-0xfed93fff] could not be reserved Feb 13 21:05:29.461742 kernel: system 00:06: [mem 0xfed45000-0xfed8ffff] has been reserved Feb 13 21:05:29.461785 kernel: system 00:06: [mem 0xfee00000-0xfeefffff] could not be reserved Feb 13 21:05:29.461835 kernel: system 00:07: [io 0x1800-0x18fe] could not be reserved Feb 13 21:05:29.461877 kernel: system 00:07: [mem 0xfd000000-0xfd69ffff] has been reserved Feb 13 21:05:29.461920 kernel: system 00:07: [mem 0xfd6c0000-0xfd6cffff] has been reserved Feb 13 21:05:29.461961 kernel: system 00:07: [mem 0xfd6f0000-0xfdffffff] has been reserved Feb 13 21:05:29.462006 kernel: system 00:07: [mem 0xfe000000-0xfe01ffff] could not be reserved Feb 13 21:05:29.462048 kernel: system 00:07: [mem 0xfe200000-0xfe7fffff] has been reserved Feb 13 21:05:29.462090 kernel: system 00:07: [mem 0xff000000-0xffffffff] has been reserved Feb 13 21:05:29.462137 kernel: system 00:08: [io 0x2000-0x20fe] has been reserved Feb 13 21:05:29.462146 kernel: pnp: PnP ACPI: found 10 devices Feb 13 21:05:29.462152 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Feb 13 21:05:29.462158 kernel: NET: Registered PF_INET protocol family Feb 13 21:05:29.462165 kernel: IP idents hash table entries: 262144 (order: 9, 2097152 bytes, linear) Feb 13 21:05:29.462171 kernel: tcp_listen_portaddr_hash hash table entries: 16384 (order: 6, 262144 bytes, linear) Feb 13 21:05:29.462177 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Feb 13 21:05:29.462182 kernel: TCP established hash table entries: 262144 (order: 9, 2097152 bytes, linear) Feb 13 21:05:29.462188 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Feb 13 21:05:29.462194 kernel: TCP: Hash tables configured (established 262144 bind 65536) Feb 13 21:05:29.462199 kernel: UDP hash table entries: 16384 (order: 7, 524288 bytes, linear) Feb 13 21:05:29.462206 kernel: UDP-Lite hash table entries: 16384 (order: 7, 524288 bytes, linear) Feb 13 21:05:29.462211 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Feb 13 21:05:29.462218 kernel: NET: Registered PF_XDP protocol family Feb 13 21:05:29.462266 kernel: pci 0000:00:15.0: BAR 0: assigned [mem 0x7b800000-0x7b800fff 64bit] Feb 13 21:05:29.462314 kernel: pci 0000:00:15.1: BAR 0: assigned [mem 0x7b801000-0x7b801fff 64bit] Feb 13 21:05:29.462362 kernel: pci 0000:00:1e.0: BAR 0: assigned [mem 0x7b802000-0x7b802fff 64bit] Feb 13 21:05:29.462409 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Feb 13 21:05:29.462461 kernel: pci 0000:02:00.0: BAR 7: no space for [mem size 0x00800000 64bit pref] Feb 13 21:05:29.462510 kernel: pci 0000:02:00.0: BAR 7: failed to assign [mem size 0x00800000 64bit pref] Feb 13 21:05:29.462558 kernel: pci 0000:02:00.1: BAR 7: no space for [mem size 0x00800000 64bit pref] Feb 13 
21:05:29.462607 kernel: pci 0000:02:00.1: BAR 7: failed to assign [mem size 0x00800000 64bit pref] Feb 13 21:05:29.462699 kernel: pci 0000:00:01.1: PCI bridge to [bus 02] Feb 13 21:05:29.462747 kernel: pci 0000:00:01.1: bridge window [mem 0x7e100000-0x7e2fffff] Feb 13 21:05:29.462794 kernel: pci 0000:00:01.1: bridge window [mem 0x90000000-0x93ffffff 64bit pref] Feb 13 21:05:29.462842 kernel: pci 0000:00:1b.0: PCI bridge to [bus 03] Feb 13 21:05:29.462891 kernel: pci 0000:00:1b.4: PCI bridge to [bus 04] Feb 13 21:05:29.462939 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff] Feb 13 21:05:29.462986 kernel: pci 0000:00:1b.4: bridge window [mem 0x7e400000-0x7e4fffff] Feb 13 21:05:29.463034 kernel: pci 0000:00:1b.5: PCI bridge to [bus 05] Feb 13 21:05:29.463081 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff] Feb 13 21:05:29.463128 kernel: pci 0000:00:1b.5: bridge window [mem 0x7e300000-0x7e3fffff] Feb 13 21:05:29.463174 kernel: pci 0000:00:1c.0: PCI bridge to [bus 06] Feb 13 21:05:29.463222 kernel: pci 0000:07:00.0: PCI bridge to [bus 08] Feb 13 21:05:29.463274 kernel: pci 0000:07:00.0: bridge window [io 0x3000-0x3fff] Feb 13 21:05:29.463321 kernel: pci 0000:07:00.0: bridge window [mem 0x7d000000-0x7e0fffff] Feb 13 21:05:29.463368 kernel: pci 0000:00:1c.1: PCI bridge to [bus 07-08] Feb 13 21:05:29.463415 kernel: pci 0000:00:1c.1: bridge window [io 0x3000-0x3fff] Feb 13 21:05:29.463462 kernel: pci 0000:00:1c.1: bridge window [mem 0x7d000000-0x7e0fffff] Feb 13 21:05:29.463505 kernel: pci_bus 0000:00: Some PCI device resources are unassigned, try booting with pci=realloc Feb 13 21:05:29.463547 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Feb 13 21:05:29.463589 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Feb 13 21:05:29.463633 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Feb 13 21:05:29.463709 kernel: pci_bus 0000:00: resource 7 [mem 0x7b800000-0xdfffffff window] Feb 13 21:05:29.463751 kernel: pci_bus 0000:00: resource 8 [mem 0xfc800000-0xfe7fffff window] Feb 13 21:05:29.463798 kernel: pci_bus 0000:02: resource 1 [mem 0x7e100000-0x7e2fffff] Feb 13 21:05:29.463843 kernel: pci_bus 0000:02: resource 2 [mem 0x90000000-0x93ffffff 64bit pref] Feb 13 21:05:29.463892 kernel: pci_bus 0000:04: resource 0 [io 0x5000-0x5fff] Feb 13 21:05:29.463936 kernel: pci_bus 0000:04: resource 1 [mem 0x7e400000-0x7e4fffff] Feb 13 21:05:29.463985 kernel: pci_bus 0000:05: resource 0 [io 0x4000-0x4fff] Feb 13 21:05:29.464030 kernel: pci_bus 0000:05: resource 1 [mem 0x7e300000-0x7e3fffff] Feb 13 21:05:29.464076 kernel: pci_bus 0000:07: resource 0 [io 0x3000-0x3fff] Feb 13 21:05:29.464120 kernel: pci_bus 0000:07: resource 1 [mem 0x7d000000-0x7e0fffff] Feb 13 21:05:29.464166 kernel: pci_bus 0000:08: resource 0 [io 0x3000-0x3fff] Feb 13 21:05:29.464210 kernel: pci_bus 0000:08: resource 1 [mem 0x7d000000-0x7e0fffff] Feb 13 21:05:29.464218 kernel: PCI: CLS 64 bytes, default 64 Feb 13 21:05:29.464226 kernel: DMAR: No ATSR found Feb 13 21:05:29.464232 kernel: DMAR: No SATC found Feb 13 21:05:29.464238 kernel: DMAR: IOMMU feature fl1gp_support inconsistent Feb 13 21:05:29.464244 kernel: DMAR: IOMMU feature pgsel_inv inconsistent Feb 13 21:05:29.464249 kernel: DMAR: IOMMU feature nwfs inconsistent Feb 13 21:05:29.464255 kernel: DMAR: IOMMU feature pasid inconsistent Feb 13 21:05:29.464261 kernel: DMAR: IOMMU feature eafs inconsistent Feb 13 21:05:29.464266 kernel: DMAR: IOMMU feature prs inconsistent Feb 13 21:05:29.464272 kernel: DMAR: IOMMU feature nest 
inconsistent Feb 13 21:05:29.464279 kernel: DMAR: IOMMU feature mts inconsistent Feb 13 21:05:29.464284 kernel: DMAR: IOMMU feature sc_support inconsistent Feb 13 21:05:29.464290 kernel: DMAR: IOMMU feature dev_iotlb_support inconsistent Feb 13 21:05:29.464296 kernel: DMAR: dmar0: Using Queued invalidation Feb 13 21:05:29.464301 kernel: DMAR: dmar1: Using Queued invalidation Feb 13 21:05:29.464349 kernel: pci 0000:00:02.0: Adding to iommu group 0 Feb 13 21:05:29.464399 kernel: pci 0000:00:00.0: Adding to iommu group 1 Feb 13 21:05:29.464447 kernel: pci 0000:00:01.0: Adding to iommu group 2 Feb 13 21:05:29.464494 kernel: pci 0000:00:01.1: Adding to iommu group 2 Feb 13 21:05:29.464544 kernel: pci 0000:00:08.0: Adding to iommu group 3 Feb 13 21:05:29.464591 kernel: pci 0000:00:12.0: Adding to iommu group 4 Feb 13 21:05:29.464659 kernel: pci 0000:00:14.0: Adding to iommu group 5 Feb 13 21:05:29.464720 kernel: pci 0000:00:14.2: Adding to iommu group 5 Feb 13 21:05:29.464767 kernel: pci 0000:00:15.0: Adding to iommu group 6 Feb 13 21:05:29.464812 kernel: pci 0000:00:15.1: Adding to iommu group 6 Feb 13 21:05:29.464859 kernel: pci 0000:00:16.0: Adding to iommu group 7 Feb 13 21:05:29.464906 kernel: pci 0000:00:16.1: Adding to iommu group 7 Feb 13 21:05:29.464955 kernel: pci 0000:00:16.4: Adding to iommu group 7 Feb 13 21:05:29.465002 kernel: pci 0000:00:17.0: Adding to iommu group 8 Feb 13 21:05:29.465049 kernel: pci 0000:00:1b.0: Adding to iommu group 9 Feb 13 21:05:29.465096 kernel: pci 0000:00:1b.4: Adding to iommu group 10 Feb 13 21:05:29.465143 kernel: pci 0000:00:1b.5: Adding to iommu group 11 Feb 13 21:05:29.465190 kernel: pci 0000:00:1c.0: Adding to iommu group 12 Feb 13 21:05:29.465238 kernel: pci 0000:00:1c.1: Adding to iommu group 13 Feb 13 21:05:29.465285 kernel: pci 0000:00:1e.0: Adding to iommu group 14 Feb 13 21:05:29.465332 kernel: pci 0000:00:1f.0: Adding to iommu group 15 Feb 13 21:05:29.465382 kernel: pci 0000:00:1f.4: Adding to iommu group 15 Feb 13 21:05:29.465428 kernel: pci 0000:00:1f.5: Adding to iommu group 15 Feb 13 21:05:29.465476 kernel: pci 0000:02:00.0: Adding to iommu group 2 Feb 13 21:05:29.465524 kernel: pci 0000:02:00.1: Adding to iommu group 2 Feb 13 21:05:29.465573 kernel: pci 0000:04:00.0: Adding to iommu group 16 Feb 13 21:05:29.465621 kernel: pci 0000:05:00.0: Adding to iommu group 17 Feb 13 21:05:29.465704 kernel: pci 0000:07:00.0: Adding to iommu group 18 Feb 13 21:05:29.465754 kernel: pci 0000:08:00.0: Adding to iommu group 18 Feb 13 21:05:29.465764 kernel: DMAR: Intel(R) Virtualization Technology for Directed I/O Feb 13 21:05:29.465770 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Feb 13 21:05:29.465776 kernel: software IO TLB: mapped [mem 0x00000000680c5000-0x000000006c0c5000] (64MB) Feb 13 21:05:29.465781 kernel: RAPL PMU: API unit is 2^-32 Joules, 4 fixed counters, 655360 ms ovfl timer Feb 13 21:05:29.465787 kernel: RAPL PMU: hw unit of domain pp0-core 2^-14 Joules Feb 13 21:05:29.465793 kernel: RAPL PMU: hw unit of domain package 2^-14 Joules Feb 13 21:05:29.465799 kernel: RAPL PMU: hw unit of domain dram 2^-14 Joules Feb 13 21:05:29.465804 kernel: RAPL PMU: hw unit of domain pp1-gpu 2^-14 Joules Feb 13 21:05:29.465855 kernel: platform rtc_cmos: registered platform RTC device (no PNP device found) Feb 13 21:05:29.465864 kernel: Initialise system trusted keyrings Feb 13 21:05:29.465869 kernel: workingset: timestamp_bits=39 max_order=23 bucket_order=0 Feb 13 21:05:29.465875 kernel: Key type asymmetric registered Feb 13 
21:05:29.465880 kernel: Asymmetric key parser 'x509' registered Feb 13 21:05:29.465886 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Feb 13 21:05:29.465892 kernel: io scheduler mq-deadline registered Feb 13 21:05:29.465897 kernel: io scheduler kyber registered Feb 13 21:05:29.465903 kernel: io scheduler bfq registered Feb 13 21:05:29.465951 kernel: pcieport 0000:00:01.0: PME: Signaling with IRQ 122 Feb 13 21:05:29.465998 kernel: pcieport 0000:00:01.1: PME: Signaling with IRQ 123 Feb 13 21:05:29.466046 kernel: pcieport 0000:00:1b.0: PME: Signaling with IRQ 124 Feb 13 21:05:29.466093 kernel: pcieport 0000:00:1b.4: PME: Signaling with IRQ 125 Feb 13 21:05:29.466140 kernel: pcieport 0000:00:1b.5: PME: Signaling with IRQ 126 Feb 13 21:05:29.466187 kernel: pcieport 0000:00:1c.0: PME: Signaling with IRQ 127 Feb 13 21:05:29.466234 kernel: pcieport 0000:00:1c.1: PME: Signaling with IRQ 128 Feb 13 21:05:29.466288 kernel: thermal LNXTHERM:00: registered as thermal_zone0 Feb 13 21:05:29.466297 kernel: ACPI: thermal: Thermal Zone [TZ00] (28 C) Feb 13 21:05:29.466303 kernel: ERST: Error Record Serialization Table (ERST) support is initialized. Feb 13 21:05:29.466309 kernel: pstore: Using crash dump compression: deflate Feb 13 21:05:29.466314 kernel: pstore: Registered erst as persistent store backend Feb 13 21:05:29.466320 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Feb 13 21:05:29.466326 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Feb 13 21:05:29.466332 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Feb 13 21:05:29.466339 kernel: 00:03: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Feb 13 21:05:29.466389 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x1B, rev-id 16) Feb 13 21:05:29.466397 kernel: i8042: PNP: No PS/2 controller found. Feb 13 21:05:29.466440 kernel: rtc_cmos rtc_cmos: RTC can wake from S4 Feb 13 21:05:29.466483 kernel: rtc_cmos rtc_cmos: registered as rtc0 Feb 13 21:05:29.466526 kernel: rtc_cmos rtc_cmos: setting system clock to 2025-02-13T21:05:28 UTC (1739480728) Feb 13 21:05:29.466569 kernel: rtc_cmos rtc_cmos: alarms up to one month, y3k, 114 bytes nvram Feb 13 21:05:29.466577 kernel: intel_pstate: Intel P-state driver initializing Feb 13 21:05:29.466584 kernel: intel_pstate: Disabling energy efficiency optimization Feb 13 21:05:29.466590 kernel: intel_pstate: HWP enabled Feb 13 21:05:29.466596 kernel: NET: Registered PF_INET6 protocol family Feb 13 21:05:29.466601 kernel: Segment Routing with IPv6 Feb 13 21:05:29.466607 kernel: In-situ OAM (IOAM) with IPv6 Feb 13 21:05:29.466613 kernel: NET: Registered PF_PACKET protocol family Feb 13 21:05:29.466618 kernel: Key type dns_resolver registered Feb 13 21:05:29.466658 kernel: microcode: Microcode Update Driver: v2.2. 
Feb 13 21:05:29.466664 kernel: IPI shorthand broadcast: enabled Feb 13 21:05:29.466686 kernel: sched_clock: Marking stable (2758001165, 1450386165)->(4679400914, -471013584) Feb 13 21:05:29.466691 kernel: registered taskstats version 1 Feb 13 21:05:29.466697 kernel: Loading compiled-in X.509 certificates Feb 13 21:05:29.466703 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: b3acedbed401b3cd9632ee9302ddcce254d8924d' Feb 13 21:05:29.466709 kernel: Key type .fscrypt registered Feb 13 21:05:29.466714 kernel: Key type fscrypt-provisioning registered Feb 13 21:05:29.466720 kernel: ima: Allocated hash algorithm: sha1 Feb 13 21:05:29.466725 kernel: ima: No architecture policies found Feb 13 21:05:29.466731 kernel: clk: Disabling unused clocks Feb 13 21:05:29.466738 kernel: Freeing unused kernel image (initmem) memory: 43320K Feb 13 21:05:29.466744 kernel: Write protecting the kernel read-only data: 38912k Feb 13 21:05:29.466750 kernel: Freeing unused kernel image (rodata/data gap) memory: 1776K Feb 13 21:05:29.466755 kernel: Run /init as init process Feb 13 21:05:29.466761 kernel: with arguments: Feb 13 21:05:29.466766 kernel: /init Feb 13 21:05:29.466772 kernel: with environment: Feb 13 21:05:29.466778 kernel: HOME=/ Feb 13 21:05:29.466783 kernel: TERM=linux Feb 13 21:05:29.466790 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Feb 13 21:05:29.466797 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Feb 13 21:05:29.466804 systemd[1]: Detected architecture x86-64. Feb 13 21:05:29.466810 systemd[1]: Running in initrd. Feb 13 21:05:29.466816 systemd[1]: No hostname configured, using default hostname. Feb 13 21:05:29.466822 systemd[1]: Hostname set to . Feb 13 21:05:29.466827 systemd[1]: Initializing machine ID from random generator. Feb 13 21:05:29.466835 systemd[1]: Queued start job for default target initrd.target. Feb 13 21:05:29.466841 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 21:05:29.466847 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 21:05:29.466853 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Feb 13 21:05:29.466859 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Feb 13 21:05:29.466865 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Feb 13 21:05:29.466871 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Feb 13 21:05:29.466877 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Feb 13 21:05:29.466885 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Feb 13 21:05:29.466891 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 21:05:29.466897 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 13 21:05:29.466903 systemd[1]: Reached target paths.target - Path Units. Feb 13 21:05:29.466909 systemd[1]: Reached target slices.target - Slice Units. 
Feb 13 21:05:29.466915 systemd[1]: Reached target swap.target - Swaps. Feb 13 21:05:29.466921 systemd[1]: Reached target timers.target - Timer Units. Feb 13 21:05:29.466928 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Feb 13 21:05:29.466934 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 13 21:05:29.466940 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Feb 13 21:05:29.466946 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Feb 13 21:05:29.466953 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Feb 13 21:05:29.466959 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Feb 13 21:05:29.466965 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 21:05:29.466971 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 21:05:29.466978 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Feb 13 21:05:29.466983 kernel: tsc: Refined TSC clocksource calibration: 3407.987 MHz Feb 13 21:05:29.466989 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fc76df96, max_idle_ns: 440795240193 ns Feb 13 21:05:29.466995 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Feb 13 21:05:29.467001 kernel: clocksource: Switched to clocksource tsc Feb 13 21:05:29.467007 systemd[1]: Finished network-cleanup.service - Network Cleanup. Feb 13 21:05:29.467013 systemd[1]: Starting systemd-fsck-usr.service... Feb 13 21:05:29.467019 systemd[1]: Starting systemd-journald.service - Journal Service... Feb 13 21:05:29.467036 systemd-journald[268]: Collecting audit messages is disabled. Feb 13 21:05:29.467052 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Feb 13 21:05:29.467059 systemd-journald[268]: Journal started Feb 13 21:05:29.467073 systemd-journald[268]: Runtime Journal (/run/log/journal/c2b32003106344f48c7b5a666f581e12) is 8.0M, max 636.6M, 628.6M free. Feb 13 21:05:29.469427 systemd-modules-load[269]: Inserted module 'overlay' Feb 13 21:05:29.498290 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 21:05:29.498302 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Feb 13 21:05:29.506098 systemd-modules-load[269]: Inserted module 'br_netfilter' Feb 13 21:05:29.514135 kernel: Bridge firewalling registered Feb 13 21:05:29.514223 systemd[1]: Started systemd-journald.service - Journal Service. Feb 13 21:05:29.523036 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Feb 13 21:05:29.523130 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 21:05:29.523216 systemd[1]: Finished systemd-fsck-usr.service. Feb 13 21:05:29.523299 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Feb 13 21:05:29.544837 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 21:05:29.606861 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Feb 13 21:05:29.620420 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Feb 13 21:05:29.642599 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 21:05:29.674478 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Feb 13 21:05:29.694368 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 21:05:29.715355 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 21:05:29.754064 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 21:05:29.780957 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Feb 13 21:05:29.782639 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Feb 13 21:05:29.802602 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 21:05:29.812011 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 21:05:29.823727 systemd-resolved[295]: Positive Trust Anchors: Feb 13 21:05:29.823735 systemd-resolved[295]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 21:05:29.823771 systemd-resolved[295]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 21:05:29.954737 kernel: SCSI subsystem initialized Feb 13 21:05:29.954755 kernel: Loading iSCSI transport class v2.0-870. Feb 13 21:05:29.954767 kernel: iscsi: registered transport (tcp) Feb 13 21:05:29.954788 dracut-cmdline[309]: dracut-dracut-053 Feb 13 21:05:29.954788 dracut-cmdline[309]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=015d1d9e5e601f6a4e226c935072d3d0819e7eb2da20e68715973498f21aa3fe Feb 13 21:05:30.015846 kernel: iscsi: registered transport (qla4xxx) Feb 13 21:05:30.015877 kernel: QLogic iSCSI HBA Driver Feb 13 21:05:29.826021 systemd-resolved[295]: Defaulting to hostname 'linux'. Feb 13 21:05:29.839872 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Feb 13 21:05:29.850782 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 21:05:29.857821 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 21:05:30.001189 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Feb 13 21:05:30.041911 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Feb 13 21:05:30.135962 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Feb 13 21:05:30.135980 kernel: device-mapper: uevent: version 1.0.3 Feb 13 21:05:30.144774 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Feb 13 21:05:30.180656 kernel: raid6: avx2x4 gen() 46501 MB/s Feb 13 21:05:30.201656 kernel: raid6: avx2x2 gen() 53143 MB/s Feb 13 21:05:30.227779 kernel: raid6: avx2x1 gen() 44720 MB/s Feb 13 21:05:30.227797 kernel: raid6: using algorithm avx2x2 gen() 53143 MB/s Feb 13 21:05:30.254859 kernel: raid6: .... xor() 32460 MB/s, rmw enabled Feb 13 21:05:30.254877 kernel: raid6: using avx2x2 recovery algorithm Feb 13 21:05:30.275679 kernel: xor: automatically using best checksumming function avx Feb 13 21:05:30.372669 kernel: Btrfs loaded, zoned=no, fsverity=no Feb 13 21:05:30.378787 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Feb 13 21:05:30.394974 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 21:05:30.405076 systemd-udevd[495]: Using default interface naming scheme 'v255'. Feb 13 21:05:30.417007 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 21:05:30.432716 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Feb 13 21:05:30.487145 dracut-pre-trigger[506]: rd.md=0: removing MD RAID activation Feb 13 21:05:30.505000 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Feb 13 21:05:30.527889 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 13 21:05:30.613137 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 21:05:30.650717 kernel: pps_core: LinuxPPS API ver. 1 registered Feb 13 21:05:30.650734 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Feb 13 21:05:30.650748 kernel: cryptd: max_cpu_qlen set to 1000 Feb 13 21:05:30.623796 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Feb 13 21:05:30.682447 kernel: libata version 3.00 loaded. Feb 13 21:05:30.682465 kernel: PTP clock support registered Feb 13 21:05:30.682478 kernel: ACPI: bus type USB registered Feb 13 21:05:30.682491 kernel: usbcore: registered new interface driver usbfs Feb 13 21:05:30.682504 kernel: usbcore: registered new interface driver hub Feb 13 21:05:30.682516 kernel: usbcore: registered new device driver usb Feb 13 21:05:30.682533 kernel: AVX2 version of gcm_enc/dec engaged. Feb 13 21:05:30.652451 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. 
Feb 13 21:05:30.860180 kernel: AES CTR mode by8 optimization enabled Feb 13 21:05:30.860270 kernel: ahci 0000:00:17.0: version 3.0 Feb 13 21:05:30.860608 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller Feb 13 21:05:30.860798 kernel: ahci 0000:00:17.0: AHCI 0001.0301 32 slots 8 ports 6 Gbps 0xff impl SATA mode Feb 13 21:05:30.860955 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 1 Feb 13 21:05:30.861123 kernel: ahci 0000:00:17.0: flags: 64bit ncq sntf clo only pio slum part ems deso sadm sds apst Feb 13 21:05:30.861282 kernel: xhci_hcd 0000:00:14.0: hcc params 0x200077c1 hci version 0x110 quirks 0x0000000000009810 Feb 13 21:05:30.861454 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller Feb 13 21:05:30.861608 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 2 Feb 13 21:05:30.861817 kernel: scsi host0: ahci Feb 13 21:05:30.861975 kernel: xhci_hcd 0000:00:14.0: Host supports USB 3.1 Enhanced SuperSpeed Feb 13 21:05:30.862126 kernel: scsi host1: ahci Feb 13 21:05:30.862288 kernel: hub 1-0:1.0: USB hub found Feb 13 21:05:30.862469 kernel: scsi host2: ahci Feb 13 21:05:30.862619 kernel: hub 1-0:1.0: 16 ports detected Feb 13 21:05:30.862868 kernel: scsi host3: ahci Feb 13 21:05:30.863033 kernel: hub 2-0:1.0: USB hub found Feb 13 21:05:30.863202 kernel: scsi host4: ahci Feb 13 21:05:30.863359 kernel: hub 2-0:1.0: 10 ports detected Feb 13 21:05:30.863523 kernel: scsi host5: ahci Feb 13 21:05:30.863732 kernel: igb: Intel(R) Gigabit Ethernet Network Driver Feb 13 21:05:30.863750 kernel: scsi host6: ahci Feb 13 21:05:30.863896 kernel: igb: Copyright (c) 2007-2014 Intel Corporation. Feb 13 21:05:30.863912 kernel: scsi host7: ahci Feb 13 21:05:30.864061 kernel: ata1: SATA max UDMA/133 abar m2048@0x7e516000 port 0x7e516100 irq 129 Feb 13 21:05:30.864078 kernel: ata2: SATA max UDMA/133 abar m2048@0x7e516000 port 0x7e516180 irq 129 Feb 13 21:05:30.864098 kernel: ata3: SATA max UDMA/133 abar m2048@0x7e516000 port 0x7e516200 irq 129 Feb 13 21:05:30.864115 kernel: ata4: SATA max UDMA/133 abar m2048@0x7e516000 port 0x7e516280 irq 129 Feb 13 21:05:30.864128 kernel: ata5: SATA max UDMA/133 abar m2048@0x7e516000 port 0x7e516300 irq 129 Feb 13 21:05:30.864141 kernel: ata6: SATA max UDMA/133 abar m2048@0x7e516000 port 0x7e516380 irq 129 Feb 13 21:05:30.864155 kernel: ata7: SATA max UDMA/133 abar m2048@0x7e516000 port 0x7e516400 irq 129 Feb 13 21:05:30.864168 kernel: ata8: SATA max UDMA/133 abar m2048@0x7e516000 port 0x7e516480 irq 129 Feb 13 21:05:30.864181 kernel: pps pps0: new PPS source ptp0 Feb 13 21:05:30.652516 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 21:05:30.898402 kernel: igb 0000:04:00.0: added PHC on eth0 Feb 13 21:05:30.931525 kernel: igb 0000:04:00.0: Intel(R) Gigabit Ethernet Network Connection Feb 13 21:05:30.931824 kernel: igb 0000:04:00.0: eth0: (PCIe:2.5Gb/s:Width x1) 3c:ec:ef:73:1d:d2 Feb 13 21:05:30.931999 kernel: igb 0000:04:00.0: eth0: PBA No: 010000-000 Feb 13 21:05:30.932233 kernel: igb 0000:04:00.0: Using MSI-X interrupts. 
4 rx queue(s), 4 tx queue(s) Feb 13 21:05:30.941656 kernel: mlx5_core 0000:02:00.0: firmware version: 14.28.2006 Feb 13 21:05:31.431764 kernel: mlx5_core 0000:02:00.0: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link) Feb 13 21:05:31.431846 kernel: pps pps1: new PPS source ptp1 Feb 13 21:05:31.431912 kernel: igb 0000:05:00.0: added PHC on eth1 Feb 13 21:05:31.431985 kernel: igb 0000:05:00.0: Intel(R) Gigabit Ethernet Network Connection Feb 13 21:05:31.432051 kernel: igb 0000:05:00.0: eth1: (PCIe:2.5Gb/s:Width x1) 3c:ec:ef:73:1d:d3 Feb 13 21:05:31.432118 kernel: igb 0000:05:00.0: eth1: PBA No: 010000-000 Feb 13 21:05:31.432182 kernel: igb 0000:05:00.0: Using MSI-X interrupts. 4 rx queue(s), 4 tx queue(s) Feb 13 21:05:31.432245 kernel: usb 1-14: new high-speed USB device number 2 using xhci_hcd Feb 13 21:05:31.591994 kernel: ata7: SATA link down (SStatus 0 SControl 300) Feb 13 21:05:31.592005 kernel: ata3: SATA link down (SStatus 0 SControl 300) Feb 13 21:05:31.592015 kernel: ata1: SATA link up 6.0 Gbps (SStatus 133 SControl 300) Feb 13 21:05:31.592026 kernel: ata5: SATA link down (SStatus 0 SControl 300) Feb 13 21:05:31.592034 kernel: hub 1-14:1.0: USB hub found Feb 13 21:05:31.592128 kernel: ata2: SATA link up 6.0 Gbps (SStatus 133 SControl 300) Feb 13 21:05:31.592137 kernel: hub 1-14:1.0: 4 ports detected Feb 13 21:05:31.592211 kernel: ata4: SATA link down (SStatus 0 SControl 300) Feb 13 21:05:31.592220 kernel: ata6: SATA link down (SStatus 0 SControl 300) Feb 13 21:05:31.592227 kernel: ata8: SATA link down (SStatus 0 SControl 300) Feb 13 21:05:31.592234 kernel: ata1.00: ATA-11: Micron_5300_MTFDDAK480TDT, D3MU001, max UDMA/133 Feb 13 21:05:31.592242 kernel: mlx5_core 0000:02:00.0: E-Switch: Total vports 10, per vport: max uc(1024) max mc(16384) Feb 13 21:05:31.592313 kernel: ata2.00: ATA-11: Micron_5300_MTFDDAK480TDT, D3MU001, max UDMA/133 Feb 13 21:05:31.592323 kernel: mlx5_core 0000:02:00.0: Port module event: module 0, Cable plugged Feb 13 21:05:31.592390 kernel: ata1.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA Feb 13 21:05:31.592398 kernel: ata2.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA Feb 13 21:05:31.592406 kernel: ata1.00: Features: NCQ-prio Feb 13 21:05:31.592413 kernel: ata2.00: Features: NCQ-prio Feb 13 21:05:31.592421 kernel: ata1.00: configured for UDMA/133 Feb 13 21:05:31.592428 kernel: scsi 0:0:0:0: Direct-Access ATA Micron_5300_MTFD U001 PQ: 0 ANSI: 5 Feb 13 21:05:31.592498 kernel: ata2.00: configured for UDMA/133 Feb 13 21:05:31.592508 kernel: scsi 1:0:0:0: Direct-Access ATA Micron_5300_MTFD U001 PQ: 0 ANSI: 5 Feb 13 21:05:31.592572 kernel: igb 0000:05:00.0 eno2: renamed from eth1 Feb 13 21:05:31.592645 kernel: ata2.00: Enabling discard_zeroes_data Feb 13 21:05:31.592654 kernel: ata1.00: Enabling discard_zeroes_data Feb 13 21:05:31.592661 kernel: sd 1:0:0:0: [sda] 937703088 512-byte logical blocks: (480 GB/447 GiB) Feb 13 21:05:31.592727 kernel: sd 0:0:0:0: [sdb] 937703088 512-byte logical blocks: (480 GB/447 GiB) Feb 13 21:05:31.592789 kernel: igb 0000:04:00.0 eno1: renamed from eth0 Feb 13 21:05:31.592856 kernel: sd 1:0:0:0: [sda] 4096-byte physical blocks Feb 13 21:05:31.592919 kernel: sd 0:0:0:0: [sdb] 4096-byte physical blocks Feb 13 21:05:31.592986 kernel: sd 1:0:0:0: [sda] Write Protect is off Feb 13 21:05:31.593045 kernel: sd 0:0:0:0: [sdb] Write Protect is off Feb 13 21:05:31.593104 kernel: sd 1:0:0:0: [sda] Mode Sense: 00 3a 00 00 Feb 13 21:05:31.593162 kernel: sd 0:0:0:0: [sdb] Mode Sense: 00 3a 00 00 Feb 13 
21:05:31.593220 kernel: sd 1:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Feb 13 21:05:31.593278 kernel: sd 0:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Feb 13 21:05:31.593338 kernel: sd 1:0:0:0: [sda] Preferred minimum I/O size 4096 bytes Feb 13 21:05:31.593396 kernel: sd 0:0:0:0: [sdb] Preferred minimum I/O size 4096 bytes Feb 13 21:05:31.593454 kernel: ata2.00: Enabling discard_zeroes_data Feb 13 21:05:31.593463 kernel: ata1.00: Enabling discard_zeroes_data Feb 13 21:05:31.593470 kernel: sd 0:0:0:0: [sdb] Attached SCSI disk Feb 13 21:05:31.593527 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Feb 13 21:05:31.593535 kernel: mlx5_core 0000:02:00.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Feb 13 21:05:31.593599 kernel: GPT:9289727 != 937703087 Feb 13 21:05:31.593609 kernel: mlx5_core 0000:02:00.1: firmware version: 14.28.2006 Feb 13 21:05:31.999093 kernel: GPT:Alternate GPT header not at the end of the disk. Feb 13 21:05:31.999167 kernel: GPT:9289727 != 937703087 Feb 13 21:05:31.999204 kernel: GPT: Use GNU Parted to correct GPT errors. Feb 13 21:05:31.999257 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 13 21:05:31.999306 kernel: sd 1:0:0:0: [sda] Attached SCSI disk Feb 13 21:05:31.999781 kernel: mlx5_core 0000:02:00.1: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link) Feb 13 21:05:32.000168 kernel: usb 1-14.1: new low-speed USB device number 3 using xhci_hcd Feb 13 21:05:32.000811 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by (udev-worker) (559) Feb 13 21:05:32.000864 kernel: BTRFS: device fsid c7adc9b8-df7f-4a5f-93bf-204def2767a9 devid 1 transid 39 /dev/sda3 scanned by (udev-worker) (550) Feb 13 21:05:32.000903 kernel: hid: raw HID events driver (C) Jiri Kosina Feb 13 21:05:32.000941 kernel: ata2.00: Enabling discard_zeroes_data Feb 13 21:05:32.000977 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 13 21:05:32.001014 kernel: usbcore: registered new interface driver usbhid Feb 13 21:05:32.001051 kernel: ata2.00: Enabling discard_zeroes_data Feb 13 21:05:32.001086 kernel: usbhid: USB HID core driver Feb 13 21:05:32.001123 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 13 21:05:32.001166 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.0/0003:0557:2419.0001/input/input0 Feb 13 21:05:32.001204 kernel: hid-generic 0003:0557:2419.0001: input,hidraw0: USB HID v1.00 Keyboard [HID 0557:2419] on usb-0000:00:14.0-14.1/input0 Feb 13 21:05:32.001647 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.1/0003:0557:2419.0002/input/input1 Feb 13 21:05:32.001705 kernel: hid-generic 0003:0557:2419.0002: input,hidraw1: USB HID v1.00 Mouse [HID 0557:2419] on usb-0000:00:14.0-14.1/input1 Feb 13 21:05:32.002101 kernel: mlx5_core 0000:02:00.1: E-Switch: Total vports 10, per vport: max uc(1024) max mc(16384) Feb 13 21:05:32.002465 kernel: mlx5_core 0000:02:00.1: Port module event: module 1, Cable plugged Feb 13 21:05:32.002838 kernel: mlx5_core 0000:02:00.1: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Feb 13 21:05:32.003211 kernel: mlx5_core 0000:02:00.0 enp2s0f0np0: renamed from eth0 Feb 13 21:05:30.950451 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Feb 13 21:05:32.029853 kernel: mlx5_core 0000:02:00.1 enp2s0f1np1: renamed from eth1 Feb 13 21:05:30.965193 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 21:05:30.965291 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 21:05:31.007744 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 21:05:31.028788 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 21:05:32.071788 disk-uuid[709]: Primary Header is updated. Feb 13 21:05:32.071788 disk-uuid[709]: Secondary Entries is updated. Feb 13 21:05:32.071788 disk-uuid[709]: Secondary Header is updated. Feb 13 21:05:31.040949 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Feb 13 21:05:31.042213 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Feb 13 21:05:31.042237 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 21:05:31.042262 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 13 21:05:31.042765 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Feb 13 21:05:31.093768 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 21:05:31.104840 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Feb 13 21:05:31.121799 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 21:05:31.139361 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 21:05:31.497940 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Micron_5300_MTFDDAK480TDT ROOT. Feb 13 21:05:31.529486 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Micron_5300_MTFDDAK480TDT EFI-SYSTEM. Feb 13 21:05:31.550780 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Micron_5300_MTFDDAK480TDT OEM. Feb 13 21:05:31.564966 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Micron_5300_MTFDDAK480TDT USR-A. Feb 13 21:05:31.575696 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Micron_5300_MTFDDAK480TDT USR-A. Feb 13 21:05:31.593751 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Feb 13 21:05:32.624050 kernel: ata2.00: Enabling discard_zeroes_data Feb 13 21:05:32.631569 disk-uuid[710]: The operation has completed successfully. Feb 13 21:05:32.639915 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 13 21:05:32.665977 systemd[1]: disk-uuid.service: Deactivated successfully. Feb 13 21:05:32.666024 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Feb 13 21:05:32.753777 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Feb 13 21:05:32.779688 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Feb 13 21:05:32.779749 sh[737]: Success Feb 13 21:05:32.814148 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Feb 13 21:05:32.843903 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Feb 13 21:05:32.852873 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Feb 13 21:05:32.897691 kernel: BTRFS info (device dm-0): first mount of filesystem c7adc9b8-df7f-4a5f-93bf-204def2767a9 Feb 13 21:05:32.897712 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Feb 13 21:05:32.914184 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Feb 13 21:05:32.921203 kernel: BTRFS info (device dm-0): disabling log replay at mount time Feb 13 21:05:32.927053 kernel: BTRFS info (device dm-0): using free space tree Feb 13 21:05:32.940671 kernel: BTRFS info (device dm-0): enabling ssd optimizations Feb 13 21:05:32.942225 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Feb 13 21:05:32.952098 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Feb 13 21:05:32.965837 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Feb 13 21:05:33.037880 kernel: BTRFS info (device sda6): first mount of filesystem 60a376b4-1193-4e0b-af89-a0e6d698bf0f Feb 13 21:05:33.037893 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Feb 13 21:05:33.037901 kernel: BTRFS info (device sda6): using free space tree Feb 13 21:05:33.037912 kernel: BTRFS info (device sda6): enabling ssd optimizations Feb 13 21:05:33.037920 kernel: BTRFS info (device sda6): auto enabling async discard Feb 13 21:05:33.037927 kernel: BTRFS info (device sda6): last unmount of filesystem 60a376b4-1193-4e0b-af89-a0e6d698bf0f Feb 13 21:05:33.037891 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Feb 13 21:05:33.050075 systemd[1]: Finished ignition-setup.service - Ignition (setup). Feb 13 21:05:33.084794 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Feb 13 21:05:33.094956 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 21:05:33.127800 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 21:05:33.140067 systemd-networkd[921]: lo: Link UP Feb 13 21:05:33.148147 ignition[890]: Ignition 2.20.0 Feb 13 21:05:33.140069 systemd-networkd[921]: lo: Gained carrier Feb 13 21:05:33.148151 ignition[890]: Stage: fetch-offline Feb 13 21:05:33.142391 systemd-networkd[921]: Enumeration completed Feb 13 21:05:33.148169 ignition[890]: no configs at "/usr/lib/ignition/base.d" Feb 13 21:05:33.142471 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 21:05:33.148174 ignition[890]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Feb 13 21:05:33.143071 systemd-networkd[921]: eno1: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 21:05:33.148225 ignition[890]: parsed url from cmdline: "" Feb 13 21:05:33.150348 unknown[890]: fetched base config from "system" Feb 13 21:05:33.148227 ignition[890]: no config URL provided Feb 13 21:05:33.150352 unknown[890]: fetched user config from "system" Feb 13 21:05:33.148230 ignition[890]: reading system config file "/usr/lib/ignition/user.ign" Feb 13 21:05:33.159818 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 21:05:33.148252 ignition[890]: parsing config with SHA512: ecd6ef01da5249ff9c7217eaa7dc8aec3db6e78a10d9b3f90531a89c0510e468f617a438aa3ce586ce9da2bced074b70497daa552f0c7cfecebcf3f31a2a4e12 Feb 13 21:05:33.170768 systemd-networkd[921]: eno2: Configuring with /usr/lib/systemd/network/zz-default.network. 
Feb 13 21:05:33.150543 ignition[890]: fetch-offline: fetch-offline passed Feb 13 21:05:33.178962 systemd[1]: Reached target network.target - Network. Feb 13 21:05:33.150546 ignition[890]: POST message to Packet Timeline Feb 13 21:05:33.191732 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Feb 13 21:05:33.367825 kernel: mlx5_core 0000:02:00.0 enp2s0f0np0: Link up Feb 13 21:05:33.150549 ignition[890]: POST Status error: resource requires networking Feb 13 21:05:33.199029 systemd-networkd[921]: enp2s0f0np0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 21:05:33.150586 ignition[890]: Ignition finished successfully Feb 13 21:05:33.202835 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Feb 13 21:05:33.217611 ignition[933]: Ignition 2.20.0 Feb 13 21:05:33.366548 systemd-networkd[921]: enp2s0f1np1: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 21:05:33.217621 ignition[933]: Stage: kargs Feb 13 21:05:33.217930 ignition[933]: no configs at "/usr/lib/ignition/base.d" Feb 13 21:05:33.217951 ignition[933]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Feb 13 21:05:33.219289 ignition[933]: kargs: kargs passed Feb 13 21:05:33.219295 ignition[933]: POST message to Packet Timeline Feb 13 21:05:33.219319 ignition[933]: GET https://metadata.packet.net/metadata: attempt #1 Feb 13 21:05:33.220117 ignition[933]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:57981->[::1]:53: read: connection refused Feb 13 21:05:33.421252 ignition[933]: GET https://metadata.packet.net/metadata: attempt #2 Feb 13 21:05:33.422331 ignition[933]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:38994->[::1]:53: read: connection refused Feb 13 21:05:33.575743 kernel: mlx5_core 0000:02:00.1 enp2s0f1np1: Link up Feb 13 21:05:33.576376 systemd-networkd[921]: eno1: Link UP Feb 13 21:05:33.576579 systemd-networkd[921]: eno2: Link UP Feb 13 21:05:33.576736 systemd-networkd[921]: enp2s0f0np0: Link UP Feb 13 21:05:33.576908 systemd-networkd[921]: enp2s0f0np0: Gained carrier Feb 13 21:05:33.586941 systemd-networkd[921]: enp2s0f1np1: Link UP Feb 13 21:05:33.615820 systemd-networkd[921]: enp2s0f0np0: DHCPv4 address 147.28.180.173/31, gateway 147.28.180.172 acquired from 145.40.83.140 Feb 13 21:05:33.822719 ignition[933]: GET https://metadata.packet.net/metadata: attempt #3 Feb 13 21:05:33.823916 ignition[933]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:42473->[::1]:53: read: connection refused Feb 13 21:05:34.406308 systemd-networkd[921]: enp2s0f1np1: Gained carrier Feb 13 21:05:34.624213 ignition[933]: GET https://metadata.packet.net/metadata: attempt #4 Feb 13 21:05:34.625399 ignition[933]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:36440->[::1]:53: read: connection refused Feb 13 21:05:35.430128 systemd-networkd[921]: enp2s0f0np0: Gained IPv6LL Feb 13 21:05:36.226922 ignition[933]: GET https://metadata.packet.net/metadata: attempt #5 Feb 13 21:05:36.228125 ignition[933]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:54546->[::1]:53: read: connection refused Feb 13 21:05:36.390125 systemd-networkd[921]: 
enp2s0f1np1: Gained IPv6LL Feb 13 21:05:39.431601 ignition[933]: GET https://metadata.packet.net/metadata: attempt #6 Feb 13 21:05:40.010304 ignition[933]: GET result: OK Feb 13 21:05:40.385763 ignition[933]: Ignition finished successfully Feb 13 21:05:40.390922 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Feb 13 21:05:40.416907 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Feb 13 21:05:40.423011 ignition[948]: Ignition 2.20.0 Feb 13 21:05:40.423015 ignition[948]: Stage: disks Feb 13 21:05:40.423122 ignition[948]: no configs at "/usr/lib/ignition/base.d" Feb 13 21:05:40.423128 ignition[948]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Feb 13 21:05:40.423640 ignition[948]: disks: disks passed Feb 13 21:05:40.423643 ignition[948]: POST message to Packet Timeline Feb 13 21:05:40.423654 ignition[948]: GET https://metadata.packet.net/metadata: attempt #1 Feb 13 21:05:40.814988 ignition[948]: GET result: OK Feb 13 21:05:41.236676 ignition[948]: Ignition finished successfully Feb 13 21:05:41.238852 systemd[1]: Finished ignition-disks.service - Ignition (disks). Feb 13 21:05:41.255084 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Feb 13 21:05:41.272978 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Feb 13 21:05:41.294031 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 13 21:05:41.315011 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 21:05:41.334932 systemd[1]: Reached target basic.target - Basic System. Feb 13 21:05:41.354887 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Feb 13 21:05:41.395101 systemd-fsck[965]: ROOT: clean, 14/553520 files, 52654/553472 blocks Feb 13 21:05:41.405981 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Feb 13 21:05:41.426816 systemd[1]: Mounting sysroot.mount - /sysroot... Feb 13 21:05:41.523375 systemd[1]: Mounted sysroot.mount - /sysroot. Feb 13 21:05:41.540829 kernel: EXT4-fs (sda9): mounted filesystem 7d46b70d-4c30-46e6-9935-e1f7fb523560 r/w with ordered data mode. Quota mode: none. Feb 13 21:05:41.533037 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Feb 13 21:05:41.561774 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Feb 13 21:05:41.600209 kernel: BTRFS: device label OEM devid 1 transid 18 /dev/sda6 scanned by mount (976) Feb 13 21:05:41.600223 kernel: BTRFS info (device sda6): first mount of filesystem 60a376b4-1193-4e0b-af89-a0e6d698bf0f Feb 13 21:05:41.600231 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Feb 13 21:05:41.563653 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Feb 13 21:05:41.636856 kernel: BTRFS info (device sda6): using free space tree Feb 13 21:05:41.636875 kernel: BTRFS info (device sda6): enabling ssd optimizations Feb 13 21:05:41.636889 kernel: BTRFS info (device sda6): auto enabling async discard Feb 13 21:05:41.641896 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Feb 13 21:05:41.653178 systemd[1]: Starting flatcar-static-network.service - Flatcar Static Network Agent... Feb 13 21:05:41.674771 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). 
Feb 13 21:05:41.719810 coreos-metadata[993]: Feb 13 21:05:41.695 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Feb 13 21:05:41.674789 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Feb 13 21:05:41.760703 coreos-metadata[994]: Feb 13 21:05:41.695 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Feb 13 21:05:41.675708 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Feb 13 21:05:41.694841 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Feb 13 21:05:41.788762 initrd-setup-root[1008]: cut: /sysroot/etc/passwd: No such file or directory Feb 13 21:05:41.738759 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Feb 13 21:05:41.809757 initrd-setup-root[1015]: cut: /sysroot/etc/group: No such file or directory Feb 13 21:05:41.819738 initrd-setup-root[1022]: cut: /sysroot/etc/shadow: No such file or directory Feb 13 21:05:41.830814 initrd-setup-root[1029]: cut: /sysroot/etc/gshadow: No such file or directory Feb 13 21:05:41.830217 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Feb 13 21:05:41.839861 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Feb 13 21:05:41.887831 kernel: BTRFS info (device sda6): last unmount of filesystem 60a376b4-1193-4e0b-af89-a0e6d698bf0f Feb 13 21:05:41.885866 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Feb 13 21:05:41.896221 systemd[1]: sysroot-oem.mount: Deactivated successfully. Feb 13 21:05:41.907572 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Feb 13 21:05:41.930850 ignition[1096]: INFO : Ignition 2.20.0 Feb 13 21:05:41.930850 ignition[1096]: INFO : Stage: mount Feb 13 21:05:41.930850 ignition[1096]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 21:05:41.930850 ignition[1096]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Feb 13 21:05:41.930850 ignition[1096]: INFO : mount: mount passed Feb 13 21:05:41.930850 ignition[1096]: INFO : POST message to Packet Timeline Feb 13 21:05:41.930850 ignition[1096]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Feb 13 21:05:42.092922 coreos-metadata[993]: Feb 13 21:05:42.092 INFO Fetch successful Feb 13 21:05:42.169869 coreos-metadata[993]: Feb 13 21:05:42.169 INFO wrote hostname ci-4186.1.1-a-9675b630d5 to /sysroot/etc/hostname Feb 13 21:05:42.171097 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Feb 13 21:05:42.383122 coreos-metadata[994]: Feb 13 21:05:42.382 INFO Fetch successful Feb 13 21:05:42.462502 systemd[1]: flatcar-static-network.service: Deactivated successfully. Feb 13 21:05:42.462560 systemd[1]: Finished flatcar-static-network.service - Flatcar Static Network Agent. Feb 13 21:05:42.516046 ignition[1096]: INFO : GET result: OK Feb 13 21:05:42.860890 ignition[1096]: INFO : Ignition finished successfully Feb 13 21:05:42.863385 systemd[1]: Finished ignition-mount.service - Ignition (mount). Feb 13 21:05:42.896821 systemd[1]: Starting ignition-files.service - Ignition (files)... Feb 13 21:05:42.907300 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Feb 13 21:05:42.962359 kernel: BTRFS: device label OEM devid 1 transid 19 /dev/sda6 scanned by mount (1119) Feb 13 21:05:42.962384 kernel: BTRFS info (device sda6): first mount of filesystem 60a376b4-1193-4e0b-af89-a0e6d698bf0f Feb 13 21:05:42.970454 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Feb 13 21:05:42.976351 kernel: BTRFS info (device sda6): using free space tree Feb 13 21:05:42.991346 kernel: BTRFS info (device sda6): enabling ssd optimizations Feb 13 21:05:42.991362 kernel: BTRFS info (device sda6): auto enabling async discard Feb 13 21:05:42.993383 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Feb 13 21:05:43.018548 ignition[1136]: INFO : Ignition 2.20.0 Feb 13 21:05:43.018548 ignition[1136]: INFO : Stage: files Feb 13 21:05:43.032847 ignition[1136]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 21:05:43.032847 ignition[1136]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Feb 13 21:05:43.032847 ignition[1136]: DEBUG : files: compiled without relabeling support, skipping Feb 13 21:05:43.032847 ignition[1136]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Feb 13 21:05:43.032847 ignition[1136]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Feb 13 21:05:43.032847 ignition[1136]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Feb 13 21:05:43.032847 ignition[1136]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Feb 13 21:05:43.032847 ignition[1136]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Feb 13 21:05:43.032847 ignition[1136]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Feb 13 21:05:43.032847 ignition[1136]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Feb 13 21:05:43.023027 unknown[1136]: wrote ssh authorized keys file for user: core Feb 13 21:05:43.165714 ignition[1136]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Feb 13 21:05:43.165714 ignition[1136]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Feb 13 21:05:43.165714 ignition[1136]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Feb 13 21:05:43.165714 ignition[1136]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Feb 13 21:05:43.165714 ignition[1136]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Feb 13 21:05:43.165714 ignition[1136]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Feb 13 21:05:43.165714 ignition[1136]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 13 21:05:43.165714 ignition[1136]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 13 21:05:43.165714 ignition[1136]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 13 21:05:43.165714 ignition[1136]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file 
"/sysroot/home/core/nfs-pvc.yaml" Feb 13 21:05:43.165714 ignition[1136]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Feb 13 21:05:43.165714 ignition[1136]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Feb 13 21:05:43.165714 ignition[1136]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" Feb 13 21:05:43.165714 ignition[1136]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" Feb 13 21:05:43.165714 ignition[1136]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" Feb 13 21:05:43.413969 ignition[1136]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-x86-64.raw: attempt #1 Feb 13 21:05:43.658720 ignition[1136]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Feb 13 21:05:43.847137 ignition[1136]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" Feb 13 21:05:43.847137 ignition[1136]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Feb 13 21:05:43.877938 ignition[1136]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 13 21:05:43.877938 ignition[1136]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 13 21:05:43.877938 ignition[1136]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Feb 13 21:05:43.877938 ignition[1136]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Feb 13 21:05:43.877938 ignition[1136]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Feb 13 21:05:43.877938 ignition[1136]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Feb 13 21:05:43.877938 ignition[1136]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Feb 13 21:05:43.877938 ignition[1136]: INFO : files: files passed Feb 13 21:05:43.877938 ignition[1136]: INFO : POST message to Packet Timeline Feb 13 21:05:43.877938 ignition[1136]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Feb 13 21:05:44.439221 ignition[1136]: INFO : GET result: OK Feb 13 21:05:44.795240 ignition[1136]: INFO : Ignition finished successfully Feb 13 21:05:44.798522 systemd[1]: Finished ignition-files.service - Ignition (files). Feb 13 21:05:44.830883 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Feb 13 21:05:44.831349 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Feb 13 21:05:44.850200 systemd[1]: ignition-quench.service: Deactivated successfully. Feb 13 21:05:44.850275 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
Feb 13 21:05:44.912930 initrd-setup-root-after-ignition[1176]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 21:05:44.912930 initrd-setup-root-after-ignition[1176]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Feb 13 21:05:44.889976 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 21:05:44.950866 initrd-setup-root-after-ignition[1181]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 21:05:44.903915 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Feb 13 21:05:44.936970 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Feb 13 21:05:45.008041 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Feb 13 21:05:45.008096 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Feb 13 21:05:45.026099 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Feb 13 21:05:45.036958 systemd[1]: Reached target initrd.target - Initrd Default Target. Feb 13 21:05:45.064102 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Feb 13 21:05:45.073971 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Feb 13 21:05:45.150742 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 21:05:45.183132 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Feb 13 21:05:45.211225 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Feb 13 21:05:45.223197 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 21:05:45.244333 systemd[1]: Stopped target timers.target - Timer Units. Feb 13 21:05:45.262254 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 13 21:05:45.262674 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 21:05:45.292484 systemd[1]: Stopped target initrd.target - Initrd Default Target. Feb 13 21:05:45.314240 systemd[1]: Stopped target basic.target - Basic System. Feb 13 21:05:45.333369 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Feb 13 21:05:45.352231 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Feb 13 21:05:45.374359 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Feb 13 21:05:45.395243 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Feb 13 21:05:45.415231 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Feb 13 21:05:45.436279 systemd[1]: Stopped target sysinit.target - System Initialization. Feb 13 21:05:45.458404 systemd[1]: Stopped target local-fs.target - Local File Systems. Feb 13 21:05:45.478214 systemd[1]: Stopped target swap.target - Swaps. Feb 13 21:05:45.497264 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 13 21:05:45.497686 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Feb 13 21:05:45.522356 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Feb 13 21:05:45.542387 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 21:05:45.563123 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. 
Feb 13 21:05:45.563584 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 21:05:45.586129 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 13 21:05:45.586522 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Feb 13 21:05:45.618236 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 13 21:05:45.618704 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 21:05:45.638416 systemd[1]: Stopped target paths.target - Path Units. Feb 13 21:05:45.656112 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 13 21:05:45.659933 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 21:05:45.677249 systemd[1]: Stopped target slices.target - Slice Units. Feb 13 21:05:45.696384 systemd[1]: Stopped target sockets.target - Socket Units. Feb 13 21:05:45.715222 systemd[1]: iscsid.socket: Deactivated successfully. Feb 13 21:05:45.715522 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Feb 13 21:05:45.735261 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 13 21:05:45.735549 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 13 21:05:45.758478 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 13 21:05:45.758901 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 21:05:45.779326 systemd[1]: ignition-files.service: Deactivated successfully. Feb 13 21:05:45.891849 ignition[1201]: INFO : Ignition 2.20.0 Feb 13 21:05:45.891849 ignition[1201]: INFO : Stage: umount Feb 13 21:05:45.891849 ignition[1201]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 21:05:45.891849 ignition[1201]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Feb 13 21:05:45.891849 ignition[1201]: INFO : umount: umount passed Feb 13 21:05:45.891849 ignition[1201]: INFO : POST message to Packet Timeline Feb 13 21:05:45.891849 ignition[1201]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Feb 13 21:05:45.779721 systemd[1]: Stopped ignition-files.service - Ignition (files). Feb 13 21:05:45.798321 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Feb 13 21:05:45.798727 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Feb 13 21:05:45.829891 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Feb 13 21:05:45.848730 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 13 21:05:45.848863 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 21:05:45.882974 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Feb 13 21:05:45.899868 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 13 21:05:45.900243 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 21:05:45.918160 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 13 21:05:45.918476 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Feb 13 21:05:45.952312 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 13 21:05:45.957366 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 13 21:05:45.957610 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Feb 13 21:05:46.068699 systemd[1]: sysroot-boot.service: Deactivated successfully. 
Feb 13 21:05:46.068989 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Feb 13 21:05:46.382283 ignition[1201]: INFO : GET result: OK Feb 13 21:05:46.931198 ignition[1201]: INFO : Ignition finished successfully Feb 13 21:05:46.932253 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 13 21:05:46.932365 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Feb 13 21:05:46.950317 systemd[1]: Stopped target network.target - Network. Feb 13 21:05:46.965830 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 13 21:05:46.965950 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Feb 13 21:05:46.984023 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 13 21:05:46.984183 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Feb 13 21:05:47.002055 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 13 21:05:47.002210 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Feb 13 21:05:47.020033 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Feb 13 21:05:47.020201 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Feb 13 21:05:47.039025 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 13 21:05:47.039195 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Feb 13 21:05:47.058410 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Feb 13 21:05:47.067784 systemd-networkd[921]: enp2s0f1np1: DHCPv6 lease lost Feb 13 21:05:47.076866 systemd-networkd[921]: enp2s0f0np0: DHCPv6 lease lost Feb 13 21:05:47.077101 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Feb 13 21:05:47.096693 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 13 21:05:47.096972 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Feb 13 21:05:47.116988 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 13 21:05:47.117342 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Feb 13 21:05:47.137874 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 13 21:05:47.137994 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Feb 13 21:05:47.168816 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Feb 13 21:05:47.196780 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 13 21:05:47.196827 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 21:05:47.215963 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 13 21:05:47.216055 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Feb 13 21:05:47.234043 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 13 21:05:47.234202 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Feb 13 21:05:47.254033 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Feb 13 21:05:47.254203 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 21:05:47.273262 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 21:05:47.296014 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 13 21:05:47.296415 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 21:05:47.331892 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. 
Feb 13 21:05:47.332040 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Feb 13 21:05:47.336232 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 13 21:05:47.336336 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 21:05:47.365893 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 13 21:05:47.366020 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Feb 13 21:05:47.396174 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 13 21:05:47.396330 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Feb 13 21:05:47.426048 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 21:05:47.426219 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 21:05:47.471977 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Feb 13 21:05:47.496738 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Feb 13 21:05:47.496881 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 21:05:47.519929 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 21:05:47.520052 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 21:05:47.539793 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 13 21:05:47.759752 systemd-journald[268]: Received SIGTERM from PID 1 (systemd). Feb 13 21:05:47.540012 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Feb 13 21:05:47.618785 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 13 21:05:47.619042 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Feb 13 21:05:47.636050 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Feb 13 21:05:47.672046 systemd[1]: Starting initrd-switch-root.service - Switch Root... Feb 13 21:05:47.694674 systemd[1]: Switching root. 
Feb 13 21:05:47.812804 systemd-journald[268]: Journal stopped
00000000) Feb 13 21:05:29.453117 kernel: ACPI: SSDT 0x000000006D690260 000162 (v01 SUPERM SMCCDN 00000000 INTL 20181221) Feb 13 21:05:29.453122 kernel: ACPI: Reserving FACP table memory at [mem 0x6d680620-0x6d680733] Feb 13 21:05:29.453127 kernel: ACPI: Reserving DSDT table memory at [mem 0x6d644268-0x6d68061e] Feb 13 21:05:29.453132 kernel: ACPI: Reserving FACS table memory at [mem 0x6d762f80-0x6d762fbf] Feb 13 21:05:29.453137 kernel: ACPI: Reserving APIC table memory at [mem 0x6d680738-0x6d680863] Feb 13 21:05:29.453142 kernel: ACPI: Reserving FPDT table memory at [mem 0x6d680868-0x6d6808ab] Feb 13 21:05:29.453148 kernel: ACPI: Reserving FIDT table memory at [mem 0x6d6808b0-0x6d68094b] Feb 13 21:05:29.453153 kernel: ACPI: Reserving MCFG table memory at [mem 0x6d680950-0x6d68098b] Feb 13 21:05:29.453158 kernel: ACPI: Reserving SPMI table memory at [mem 0x6d680990-0x6d6809d0] Feb 13 21:05:29.453163 kernel: ACPI: Reserving SSDT table memory at [mem 0x6d6809d8-0x6d6824f3] Feb 13 21:05:29.453168 kernel: ACPI: Reserving SSDT table memory at [mem 0x6d6824f8-0x6d6856bd] Feb 13 21:05:29.453173 kernel: ACPI: Reserving SSDT table memory at [mem 0x6d6856c0-0x6d6879ea] Feb 13 21:05:29.453178 kernel: ACPI: Reserving HPET table memory at [mem 0x6d6879f0-0x6d687a27] Feb 13 21:05:29.453183 kernel: ACPI: Reserving SSDT table memory at [mem 0x6d687a28-0x6d6889d5] Feb 13 21:05:29.453188 kernel: ACPI: Reserving SSDT table memory at [mem 0x6d6889d8-0x6d6892ce] Feb 13 21:05:29.453194 kernel: ACPI: Reserving UEFI table memory at [mem 0x6d6892d0-0x6d689311] Feb 13 21:05:29.453198 kernel: ACPI: Reserving LPIT table memory at [mem 0x6d689318-0x6d6893ab] Feb 13 21:05:29.453203 kernel: ACPI: Reserving SSDT table memory at [mem 0x6d6893b0-0x6d68bb8d] Feb 13 21:05:29.453208 kernel: ACPI: Reserving SSDT table memory at [mem 0x6d68bb90-0x6d68d071] Feb 13 21:05:29.453213 kernel: ACPI: Reserving DBGP table memory at [mem 0x6d68d078-0x6d68d0ab] Feb 13 21:05:29.453218 kernel: ACPI: Reserving DBG2 table memory at [mem 0x6d68d0b0-0x6d68d103] Feb 13 21:05:29.453223 kernel: ACPI: Reserving SSDT table memory at [mem 0x6d68d108-0x6d68ec6e] Feb 13 21:05:29.453228 kernel: ACPI: Reserving DMAR table memory at [mem 0x6d68ec70-0x6d68ed17] Feb 13 21:05:29.453233 kernel: ACPI: Reserving SSDT table memory at [mem 0x6d68ed18-0x6d68ee5b] Feb 13 21:05:29.453239 kernel: ACPI: Reserving TPM2 table memory at [mem 0x6d68ee60-0x6d68ee93] Feb 13 21:05:29.453244 kernel: ACPI: Reserving SSDT table memory at [mem 0x6d68ee98-0x6d68fc26] Feb 13 21:05:29.453249 kernel: ACPI: Reserving WSMT table memory at [mem 0x6d68fc28-0x6d68fc4f] Feb 13 21:05:29.453254 kernel: ACPI: Reserving EINJ table memory at [mem 0x6d68fc50-0x6d68fd7f] Feb 13 21:05:29.453259 kernel: ACPI: Reserving ERST table memory at [mem 0x6d68fd80-0x6d68ffaf] Feb 13 21:05:29.453264 kernel: ACPI: Reserving BERT table memory at [mem 0x6d68ffb0-0x6d68ffdf] Feb 13 21:05:29.453269 kernel: ACPI: Reserving HEST table memory at [mem 0x6d68ffe0-0x6d69025b] Feb 13 21:05:29.453274 kernel: ACPI: Reserving SSDT table memory at [mem 0x6d690260-0x6d6903c1] Feb 13 21:05:29.453279 kernel: No NUMA configuration found Feb 13 21:05:29.453285 kernel: Faking a node at [mem 0x0000000000000000-0x00000008837fffff] Feb 13 21:05:29.453290 kernel: NODE_DATA(0) allocated [mem 0x8837fa000-0x8837fffff] Feb 13 21:05:29.453295 kernel: Zone ranges: Feb 13 21:05:29.453300 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Feb 13 21:05:29.453305 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Feb 13 
21:05:29.453310 kernel: Normal [mem 0x0000000100000000-0x00000008837fffff] Feb 13 21:05:29.453315 kernel: Movable zone start for each node Feb 13 21:05:29.453320 kernel: Early memory node ranges Feb 13 21:05:29.453325 kernel: node 0: [mem 0x0000000000001000-0x0000000000098fff] Feb 13 21:05:29.453330 kernel: node 0: [mem 0x0000000000100000-0x000000003fffffff] Feb 13 21:05:29.453336 kernel: node 0: [mem 0x0000000040400000-0x00000000620bafff] Feb 13 21:05:29.453341 kernel: node 0: [mem 0x00000000620bd000-0x000000006c0c4fff] Feb 13 21:05:29.453345 kernel: node 0: [mem 0x000000006d1a8000-0x000000006d330fff] Feb 13 21:05:29.453351 kernel: node 0: [mem 0x000000006ffff000-0x000000006fffffff] Feb 13 21:05:29.453359 kernel: node 0: [mem 0x0000000100000000-0x00000008837fffff] Feb 13 21:05:29.453365 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000008837fffff] Feb 13 21:05:29.453370 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Feb 13 21:05:29.453376 kernel: On node 0, zone DMA: 103 pages in unavailable ranges Feb 13 21:05:29.453382 kernel: On node 0, zone DMA32: 1024 pages in unavailable ranges Feb 13 21:05:29.453388 kernel: On node 0, zone DMA32: 2 pages in unavailable ranges Feb 13 21:05:29.453393 kernel: On node 0, zone DMA32: 4323 pages in unavailable ranges Feb 13 21:05:29.453398 kernel: On node 0, zone DMA32: 11470 pages in unavailable ranges Feb 13 21:05:29.453404 kernel: On node 0, zone Normal: 18432 pages in unavailable ranges Feb 13 21:05:29.453409 kernel: ACPI: PM-Timer IO Port: 0x1808 Feb 13 21:05:29.453414 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1]) Feb 13 21:05:29.453420 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1]) Feb 13 21:05:29.453425 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1]) Feb 13 21:05:29.453431 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1]) Feb 13 21:05:29.453436 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1]) Feb 13 21:05:29.453442 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1]) Feb 13 21:05:29.453447 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1]) Feb 13 21:05:29.453452 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1]) Feb 13 21:05:29.453458 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1]) Feb 13 21:05:29.453463 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1]) Feb 13 21:05:29.453468 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1]) Feb 13 21:05:29.453474 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1]) Feb 13 21:05:29.453480 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1]) Feb 13 21:05:29.453485 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1]) Feb 13 21:05:29.453490 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1]) Feb 13 21:05:29.453496 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1]) Feb 13 21:05:29.453501 kernel: IOAPIC[0]: apic_id 2, version 32, address 0xfec00000, GSI 0-119 Feb 13 21:05:29.453506 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Feb 13 21:05:29.453512 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Feb 13 21:05:29.453517 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Feb 13 21:05:29.453522 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Feb 13 21:05:29.453529 kernel: TSC deadline timer available Feb 13 21:05:29.453534 kernel: smpboot: Allowing 16 CPUs, 0 hotplug CPUs Feb 13 21:05:29.453539 kernel: [mem 0x7b800000-0xdfffffff] available for PCI devices Feb 13 21:05:29.453545 kernel: 
Booting paravirtualized kernel on bare hardware Feb 13 21:05:29.453550 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Feb 13 21:05:29.453556 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:16 nr_cpu_ids:16 nr_node_ids:1 Feb 13 21:05:29.453561 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u262144 Feb 13 21:05:29.453567 kernel: pcpu-alloc: s197032 r8192 d32344 u262144 alloc=1*2097152 Feb 13 21:05:29.453572 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15 Feb 13 21:05:29.453579 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=015d1d9e5e601f6a4e226c935072d3d0819e7eb2da20e68715973498f21aa3fe Feb 13 21:05:29.453585 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Feb 13 21:05:29.453590 kernel: random: crng init done Feb 13 21:05:29.453595 kernel: Dentry cache hash table entries: 4194304 (order: 13, 33554432 bytes, linear) Feb 13 21:05:29.453600 kernel: Inode-cache hash table entries: 2097152 (order: 12, 16777216 bytes, linear) Feb 13 21:05:29.453606 kernel: Fallback order for Node 0: 0 Feb 13 21:05:29.453611 kernel: Built 1 zonelists, mobility grouping on. Total pages: 8190323 Feb 13 21:05:29.453616 kernel: Policy zone: Normal Feb 13 21:05:29.453625 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Feb 13 21:05:29.453630 kernel: software IO TLB: area num 16. Feb 13 21:05:29.453636 kernel: Memory: 32549268K/33281940K available (14336K kernel code, 2301K rwdata, 22800K rodata, 43320K init, 1752K bss, 732412K reserved, 0K cma-reserved) Feb 13 21:05:29.453641 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1 Feb 13 21:05:29.453647 kernel: ftrace: allocating 37893 entries in 149 pages Feb 13 21:05:29.453652 kernel: ftrace: allocated 149 pages with 4 groups Feb 13 21:05:29.453657 kernel: Dynamic Preempt: voluntary Feb 13 21:05:29.453663 kernel: rcu: Preemptible hierarchical RCU implementation. Feb 13 21:05:29.453668 kernel: rcu: RCU event tracing is enabled. Feb 13 21:05:29.453675 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16. Feb 13 21:05:29.453680 kernel: Trampoline variant of Tasks RCU enabled. Feb 13 21:05:29.453686 kernel: Rude variant of Tasks RCU enabled. Feb 13 21:05:29.453691 kernel: Tracing variant of Tasks RCU enabled. Feb 13 21:05:29.453697 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Feb 13 21:05:29.453702 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16 Feb 13 21:05:29.453707 kernel: NR_IRQS: 33024, nr_irqs: 2184, preallocated irqs: 16 Feb 13 21:05:29.453713 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Feb 13 21:05:29.453718 kernel: Console: colour VGA+ 80x25 Feb 13 21:05:29.453724 kernel: printk: console [tty0] enabled Feb 13 21:05:29.453729 kernel: printk: console [ttyS1] enabled Feb 13 21:05:29.453735 kernel: ACPI: Core revision 20230628 Feb 13 21:05:29.453740 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 79635855245 ns Feb 13 21:05:29.453745 kernel: APIC: Switch to symmetric I/O mode setup Feb 13 21:05:29.453751 kernel: DMAR: Host address width 39 Feb 13 21:05:29.453756 kernel: DMAR: DRHD base: 0x000000fed90000 flags: 0x0 Feb 13 21:05:29.453762 kernel: DMAR: dmar0: reg_base_addr fed90000 ver 1:0 cap 1c0000c40660462 ecap 19e2ff0505e Feb 13 21:05:29.453767 kernel: DMAR: DRHD base: 0x000000fed91000 flags: 0x1 Feb 13 21:05:29.453773 kernel: DMAR: dmar1: reg_base_addr fed91000 ver 1:0 cap d2008c40660462 ecap f050da Feb 13 21:05:29.453779 kernel: DMAR: RMRR base: 0x0000006e011000 end: 0x0000006e25afff Feb 13 21:05:29.453784 kernel: DMAR: RMRR base: 0x00000079000000 end: 0x0000007b7fffff Feb 13 21:05:29.453789 kernel: DMAR-IR: IOAPIC id 2 under DRHD base 0xfed91000 IOMMU 1 Feb 13 21:05:29.453795 kernel: DMAR-IR: HPET id 0 under DRHD base 0xfed91000 Feb 13 21:05:29.453800 kernel: DMAR-IR: Queued invalidation will be enabled to support x2apic and Intr-remapping. Feb 13 21:05:29.453805 kernel: DMAR-IR: Enabled IRQ remapping in x2apic mode Feb 13 21:05:29.453811 kernel: x2apic enabled Feb 13 21:05:29.453816 kernel: APIC: Switched APIC routing to: cluster x2apic Feb 13 21:05:29.453822 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Feb 13 21:05:29.453828 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x3101f59f5e6, max_idle_ns: 440795259996 ns Feb 13 21:05:29.453833 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 
6799.81 BogoMIPS (lpj=3399906) Feb 13 21:05:29.453839 kernel: CPU0: Thermal monitoring enabled (TM1) Feb 13 21:05:29.453844 kernel: process: using mwait in idle threads Feb 13 21:05:29.453849 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Feb 13 21:05:29.453854 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4 Feb 13 21:05:29.453860 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Feb 13 21:05:29.453865 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit Feb 13 21:05:29.453871 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall Feb 13 21:05:29.453877 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS Feb 13 21:05:29.453882 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Feb 13 21:05:29.453888 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT Feb 13 21:05:29.453893 kernel: RETBleed: Mitigation: Enhanced IBRS Feb 13 21:05:29.453898 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Feb 13 21:05:29.453904 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Feb 13 21:05:29.453909 kernel: TAA: Mitigation: TSX disabled Feb 13 21:05:29.453914 kernel: MMIO Stale Data: Mitigation: Clear CPU buffers Feb 13 21:05:29.453920 kernel: SRBDS: Mitigation: Microcode Feb 13 21:05:29.453926 kernel: GDS: Mitigation: Microcode Feb 13 21:05:29.453931 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Feb 13 21:05:29.453936 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Feb 13 21:05:29.453942 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Feb 13 21:05:29.453947 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers' Feb 13 21:05:29.453952 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR' Feb 13 21:05:29.453958 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Feb 13 21:05:29.453963 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64 Feb 13 21:05:29.453969 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64 Feb 13 21:05:29.453975 kernel: x86/fpu: Enabled xstate features 0x1f, context size is 960 bytes, using 'compacted' format. Feb 13 21:05:29.453980 kernel: Freeing SMP alternatives memory: 32K Feb 13 21:05:29.453986 kernel: pid_max: default: 32768 minimum: 301 Feb 13 21:05:29.453991 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Feb 13 21:05:29.453996 kernel: landlock: Up and running. Feb 13 21:05:29.454001 kernel: SELinux: Initializing. Feb 13 21:05:29.454007 kernel: Mount-cache hash table entries: 65536 (order: 7, 524288 bytes, linear) Feb 13 21:05:29.454012 kernel: Mountpoint-cache hash table entries: 65536 (order: 7, 524288 bytes, linear) Feb 13 21:05:29.454019 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd) Feb 13 21:05:29.454024 kernel: RCU Tasks: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Feb 13 21:05:29.454029 kernel: RCU Tasks Rude: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Feb 13 21:05:29.454035 kernel: RCU Tasks Trace: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Feb 13 21:05:29.454040 kernel: Performance Events: PEBS fmt3+, Skylake events, 32-deep LBR, full-width counters, Intel PMU driver. Feb 13 21:05:29.454045 kernel: ... 
version: 4 Feb 13 21:05:29.454051 kernel: ... bit width: 48 Feb 13 21:05:29.454056 kernel: ... generic registers: 4 Feb 13 21:05:29.454061 kernel: ... value mask: 0000ffffffffffff Feb 13 21:05:29.454068 kernel: ... max period: 00007fffffffffff Feb 13 21:05:29.454073 kernel: ... fixed-purpose events: 3 Feb 13 21:05:29.454078 kernel: ... event mask: 000000070000000f Feb 13 21:05:29.454083 kernel: signal: max sigframe size: 2032 Feb 13 21:05:29.454089 kernel: Estimated ratio of average max frequency by base frequency (times 1024): 1445 Feb 13 21:05:29.454094 kernel: rcu: Hierarchical SRCU implementation. Feb 13 21:05:29.454099 kernel: rcu: Max phase no-delay instances is 400. Feb 13 21:05:29.454105 kernel: NMI watchdog: Enabled. Permanently consumes one hw-PMU counter. Feb 13 21:05:29.454110 kernel: smp: Bringing up secondary CPUs ... Feb 13 21:05:29.454116 kernel: smpboot: x86: Booting SMP configuration: Feb 13 21:05:29.454122 kernel: .... node #0, CPUs: #1 #2 #3 #4 #5 #6 #7 #8 #9 #10 #11 #12 #13 #14 #15 Feb 13 21:05:29.454127 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. Feb 13 21:05:29.454133 kernel: smp: Brought up 1 node, 16 CPUs Feb 13 21:05:29.454138 kernel: smpboot: Max logical packages: 1 Feb 13 21:05:29.454143 kernel: smpboot: Total of 16 processors activated (108796.99 BogoMIPS) Feb 13 21:05:29.454149 kernel: devtmpfs: initialized Feb 13 21:05:29.454154 kernel: x86/mm: Memory block size: 128MB Feb 13 21:05:29.454159 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x620bb000-0x620bbfff] (4096 bytes) Feb 13 21:05:29.454166 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x6d331000-0x6d762fff] (4399104 bytes) Feb 13 21:05:29.454171 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Feb 13 21:05:29.454176 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear) Feb 13 21:05:29.454182 kernel: pinctrl core: initialized pinctrl subsystem Feb 13 21:05:29.454187 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Feb 13 21:05:29.454192 kernel: audit: initializing netlink subsys (disabled) Feb 13 21:05:29.454198 kernel: audit: type=2000 audit(1739480724.130:1): state=initialized audit_enabled=0 res=1 Feb 13 21:05:29.454203 kernel: thermal_sys: Registered thermal governor 'step_wise' Feb 13 21:05:29.454208 kernel: thermal_sys: Registered thermal governor 'user_space' Feb 13 21:05:29.454214 kernel: cpuidle: using governor menu Feb 13 21:05:29.454220 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Feb 13 21:05:29.454225 kernel: dca service started, version 1.12.1 Feb 13 21:05:29.454230 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000) Feb 13 21:05:29.454236 kernel: PCI: Using configuration type 1 for base access Feb 13 21:05:29.454241 kernel: ENERGY_PERF_BIAS: Set to 'normal', was 'performance' Feb 13 21:05:29.454246 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Feb 13 21:05:29.454252 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Feb 13 21:05:29.454258 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Feb 13 21:05:29.454264 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Feb 13 21:05:29.454269 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Feb 13 21:05:29.454274 kernel: ACPI: Added _OSI(Module Device) Feb 13 21:05:29.454279 kernel: ACPI: Added _OSI(Processor Device) Feb 13 21:05:29.454285 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Feb 13 21:05:29.454290 kernel: ACPI: Added _OSI(Processor Aggregator Device) Feb 13 21:05:29.454295 kernel: ACPI: 12 ACPI AML tables successfully acquired and loaded Feb 13 21:05:29.454301 kernel: ACPI: Dynamic OEM Table Load: Feb 13 21:05:29.454307 kernel: ACPI: SSDT 0xFFFF9810010EE800 000400 (v02 PmRef Cpu0Cst 00003001 INTL 20160527) Feb 13 21:05:29.454313 kernel: ACPI: Dynamic OEM Table Load: Feb 13 21:05:29.454318 kernel: ACPI: SSDT 0xFFFF9810010E2000 000683 (v02 PmRef Cpu0Ist 00003000 INTL 20160527) Feb 13 21:05:29.454323 kernel: ACPI: Dynamic OEM Table Load: Feb 13 21:05:29.454329 kernel: ACPI: SSDT 0xFFFF9810017E5B00 0000F4 (v02 PmRef Cpu0Psd 00003000 INTL 20160527) Feb 13 21:05:29.454334 kernel: ACPI: Dynamic OEM Table Load: Feb 13 21:05:29.454339 kernel: ACPI: SSDT 0xFFFF9810017FE800 0005FC (v02 PmRef ApIst 00003000 INTL 20160527) Feb 13 21:05:29.454344 kernel: ACPI: Dynamic OEM Table Load: Feb 13 21:05:29.454349 kernel: ACPI: SSDT 0xFFFF9810010F4000 000AB0 (v02 PmRef ApPsd 00003000 INTL 20160527) Feb 13 21:05:29.454355 kernel: ACPI: Dynamic OEM Table Load: Feb 13 21:05:29.454361 kernel: ACPI: SSDT 0xFFFF9810010EAC00 00030A (v02 PmRef ApCst 00003000 INTL 20160527) Feb 13 21:05:29.454367 kernel: ACPI: _OSC evaluated successfully for all CPUs Feb 13 21:05:29.454372 kernel: ACPI: Interpreter enabled Feb 13 21:05:29.454377 kernel: ACPI: PM: (supports S0 S5) Feb 13 21:05:29.454383 kernel: ACPI: Using IOAPIC for interrupt routing Feb 13 21:05:29.454388 kernel: HEST: Enabling Firmware First mode for corrected errors. Feb 13 21:05:29.454393 kernel: mce: [Firmware Bug]: Ignoring request to disable invalid MCA bank 14. Feb 13 21:05:29.454398 kernel: HEST: Table parsing has been initialized. Feb 13 21:05:29.454404 kernel: GHES: APEI firmware first mode is enabled by APEI bit and WHEA _OSC. 
Feb 13 21:05:29.454410 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Feb 13 21:05:29.454416 kernel: PCI: Using E820 reservations for host bridge windows Feb 13 21:05:29.454421 kernel: ACPI: Enabled 9 GPEs in block 00 to 7F Feb 13 21:05:29.454427 kernel: ACPI: \_SB_.PCI0.XDCI.USBC: New power resource Feb 13 21:05:29.454432 kernel: ACPI: \_SB_.PCI0.SAT0.VOL0.V0PR: New power resource Feb 13 21:05:29.454437 kernel: ACPI: \_SB_.PCI0.SAT0.VOL1.V1PR: New power resource Feb 13 21:05:29.454443 kernel: ACPI: \_SB_.PCI0.SAT0.VOL2.V2PR: New power resource Feb 13 21:05:29.454448 kernel: ACPI: \_SB_.PCI0.CNVW.WRST: New power resource Feb 13 21:05:29.454454 kernel: ACPI: [Firmware Bug]: BIOS _OSI(Linux) query ignored Feb 13 21:05:29.454460 kernel: ACPI: \_TZ_.FN00: New power resource Feb 13 21:05:29.454465 kernel: ACPI: \_TZ_.FN01: New power resource Feb 13 21:05:29.454471 kernel: ACPI: \_TZ_.FN02: New power resource Feb 13 21:05:29.454476 kernel: ACPI: \_TZ_.FN03: New power resource Feb 13 21:05:29.454481 kernel: ACPI: \_TZ_.FN04: New power resource Feb 13 21:05:29.454487 kernel: ACPI: \PIN_: New power resource Feb 13 21:05:29.454492 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-fe]) Feb 13 21:05:29.454565 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Feb 13 21:05:29.454619 kernel: acpi PNP0A08:00: _OSC: platform does not support [AER] Feb 13 21:05:29.454702 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability LTR] Feb 13 21:05:29.454710 kernel: PCI host bridge to bus 0000:00 Feb 13 21:05:29.454758 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Feb 13 21:05:29.454800 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Feb 13 21:05:29.454842 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Feb 13 21:05:29.454883 kernel: pci_bus 0000:00: root bus resource [mem 0x7b800000-0xdfffffff window] Feb 13 21:05:29.454925 kernel: pci_bus 0000:00: root bus resource [mem 0xfc800000-0xfe7fffff window] Feb 13 21:05:29.454966 kernel: pci_bus 0000:00: root bus resource [bus 00-fe] Feb 13 21:05:29.455022 kernel: pci 0000:00:00.0: [8086:3e31] type 00 class 0x060000 Feb 13 21:05:29.455076 kernel: pci 0000:00:01.0: [8086:1901] type 01 class 0x060400 Feb 13 21:05:29.455126 kernel: pci 0000:00:01.0: PME# supported from D0 D3hot D3cold Feb 13 21:05:29.455179 kernel: pci 0000:00:01.1: [8086:1905] type 01 class 0x060400 Feb 13 21:05:29.455229 kernel: pci 0000:00:01.1: PME# supported from D0 D3hot D3cold Feb 13 21:05:29.455282 kernel: pci 0000:00:02.0: [8086:3e9a] type 00 class 0x038000 Feb 13 21:05:29.455330 kernel: pci 0000:00:02.0: reg 0x10: [mem 0x7c000000-0x7cffffff 64bit] Feb 13 21:05:29.455377 kernel: pci 0000:00:02.0: reg 0x18: [mem 0x80000000-0x8fffffff 64bit pref] Feb 13 21:05:29.455424 kernel: pci 0000:00:02.0: reg 0x20: [io 0x6000-0x603f] Feb 13 21:05:29.455476 kernel: pci 0000:00:08.0: [8086:1911] type 00 class 0x088000 Feb 13 21:05:29.455524 kernel: pci 0000:00:08.0: reg 0x10: [mem 0x7e51f000-0x7e51ffff 64bit] Feb 13 21:05:29.455577 kernel: pci 0000:00:12.0: [8086:a379] type 00 class 0x118000 Feb 13 21:05:29.455628 kernel: pci 0000:00:12.0: reg 0x10: [mem 0x7e51e000-0x7e51efff 64bit] Feb 13 21:05:29.455715 kernel: pci 0000:00:14.0: [8086:a36d] type 00 class 0x0c0330 Feb 13 21:05:29.455763 kernel: pci 0000:00:14.0: reg 0x10: [mem 0x7e500000-0x7e50ffff 64bit] Feb 13 21:05:29.455812 kernel: pci 0000:00:14.0: PME# 
supported from D3hot D3cold Feb 13 21:05:29.455870 kernel: pci 0000:00:14.2: [8086:a36f] type 00 class 0x050000 Feb 13 21:05:29.455919 kernel: pci 0000:00:14.2: reg 0x10: [mem 0x7e512000-0x7e513fff 64bit] Feb 13 21:05:29.455967 kernel: pci 0000:00:14.2: reg 0x18: [mem 0x7e51d000-0x7e51dfff 64bit] Feb 13 21:05:29.456019 kernel: pci 0000:00:15.0: [8086:a368] type 00 class 0x0c8000 Feb 13 21:05:29.456065 kernel: pci 0000:00:15.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit] Feb 13 21:05:29.456118 kernel: pci 0000:00:15.1: [8086:a369] type 00 class 0x0c8000 Feb 13 21:05:29.456165 kernel: pci 0000:00:15.1: reg 0x10: [mem 0x00000000-0x00000fff 64bit] Feb 13 21:05:29.456218 kernel: pci 0000:00:16.0: [8086:a360] type 00 class 0x078000 Feb 13 21:05:29.456265 kernel: pci 0000:00:16.0: reg 0x10: [mem 0x7e51a000-0x7e51afff 64bit] Feb 13 21:05:29.456312 kernel: pci 0000:00:16.0: PME# supported from D3hot Feb 13 21:05:29.456363 kernel: pci 0000:00:16.1: [8086:a361] type 00 class 0x078000 Feb 13 21:05:29.456410 kernel: pci 0000:00:16.1: reg 0x10: [mem 0x7e519000-0x7e519fff 64bit] Feb 13 21:05:29.456456 kernel: pci 0000:00:16.1: PME# supported from D3hot Feb 13 21:05:29.456506 kernel: pci 0000:00:16.4: [8086:a364] type 00 class 0x078000 Feb 13 21:05:29.456555 kernel: pci 0000:00:16.4: reg 0x10: [mem 0x7e518000-0x7e518fff 64bit] Feb 13 21:05:29.456601 kernel: pci 0000:00:16.4: PME# supported from D3hot Feb 13 21:05:29.456717 kernel: pci 0000:00:17.0: [8086:a352] type 00 class 0x010601 Feb 13 21:05:29.456765 kernel: pci 0000:00:17.0: reg 0x10: [mem 0x7e510000-0x7e511fff] Feb 13 21:05:29.456813 kernel: pci 0000:00:17.0: reg 0x14: [mem 0x7e517000-0x7e5170ff] Feb 13 21:05:29.456860 kernel: pci 0000:00:17.0: reg 0x18: [io 0x6090-0x6097] Feb 13 21:05:29.456906 kernel: pci 0000:00:17.0: reg 0x1c: [io 0x6080-0x6083] Feb 13 21:05:29.456952 kernel: pci 0000:00:17.0: reg 0x20: [io 0x6060-0x607f] Feb 13 21:05:29.456999 kernel: pci 0000:00:17.0: reg 0x24: [mem 0x7e516000-0x7e5167ff] Feb 13 21:05:29.457044 kernel: pci 0000:00:17.0: PME# supported from D3hot Feb 13 21:05:29.457095 kernel: pci 0000:00:1b.0: [8086:a340] type 01 class 0x060400 Feb 13 21:05:29.457145 kernel: pci 0000:00:1b.0: PME# supported from D0 D3hot D3cold Feb 13 21:05:29.457197 kernel: pci 0000:00:1b.4: [8086:a32c] type 01 class 0x060400 Feb 13 21:05:29.457247 kernel: pci 0000:00:1b.4: PME# supported from D0 D3hot D3cold Feb 13 21:05:29.457301 kernel: pci 0000:00:1b.5: [8086:a32d] type 01 class 0x060400 Feb 13 21:05:29.457348 kernel: pci 0000:00:1b.5: PME# supported from D0 D3hot D3cold Feb 13 21:05:29.457400 kernel: pci 0000:00:1c.0: [8086:a338] type 01 class 0x060400 Feb 13 21:05:29.457448 kernel: pci 0000:00:1c.0: PME# supported from D0 D3hot D3cold Feb 13 21:05:29.457502 kernel: pci 0000:00:1c.1: [8086:a339] type 01 class 0x060400 Feb 13 21:05:29.457549 kernel: pci 0000:00:1c.1: PME# supported from D0 D3hot D3cold Feb 13 21:05:29.457600 kernel: pci 0000:00:1e.0: [8086:a328] type 00 class 0x078000 Feb 13 21:05:29.457668 kernel: pci 0000:00:1e.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit] Feb 13 21:05:29.457735 kernel: pci 0000:00:1f.0: [8086:a309] type 00 class 0x060100 Feb 13 21:05:29.457790 kernel: pci 0000:00:1f.4: [8086:a323] type 00 class 0x0c0500 Feb 13 21:05:29.457837 kernel: pci 0000:00:1f.4: reg 0x10: [mem 0x7e514000-0x7e5140ff 64bit] Feb 13 21:05:29.457884 kernel: pci 0000:00:1f.4: reg 0x20: [io 0xefa0-0xefbf] Feb 13 21:05:29.457936 kernel: pci 0000:00:1f.5: [8086:a324] type 00 class 0x0c8000 Feb 13 21:05:29.457984 kernel: pci 
0000:00:1f.5: reg 0x10: [mem 0xfe010000-0xfe010fff] Feb 13 21:05:29.458031 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Feb 13 21:05:29.458087 kernel: pci 0000:02:00.0: [15b3:1015] type 00 class 0x020000 Feb 13 21:05:29.458136 kernel: pci 0000:02:00.0: reg 0x10: [mem 0x92000000-0x93ffffff 64bit pref] Feb 13 21:05:29.458185 kernel: pci 0000:02:00.0: reg 0x30: [mem 0x7e200000-0x7e2fffff pref] Feb 13 21:05:29.458232 kernel: pci 0000:02:00.0: PME# supported from D3cold Feb 13 21:05:29.458280 kernel: pci 0000:02:00.0: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref] Feb 13 21:05:29.458328 kernel: pci 0000:02:00.0: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs) Feb 13 21:05:29.458382 kernel: pci 0000:02:00.1: [15b3:1015] type 00 class 0x020000 Feb 13 21:05:29.458433 kernel: pci 0000:02:00.1: reg 0x10: [mem 0x90000000-0x91ffffff 64bit pref] Feb 13 21:05:29.458481 kernel: pci 0000:02:00.1: reg 0x30: [mem 0x7e100000-0x7e1fffff pref] Feb 13 21:05:29.458530 kernel: pci 0000:02:00.1: PME# supported from D3cold Feb 13 21:05:29.458577 kernel: pci 0000:02:00.1: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref] Feb 13 21:05:29.458627 kernel: pci 0000:02:00.1: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs) Feb 13 21:05:29.458710 kernel: pci 0000:00:01.1: PCI bridge to [bus 02] Feb 13 21:05:29.458758 kernel: pci 0000:00:01.1: bridge window [mem 0x7e100000-0x7e2fffff] Feb 13 21:05:29.458808 kernel: pci 0000:00:01.1: bridge window [mem 0x90000000-0x93ffffff 64bit pref] Feb 13 21:05:29.458854 kernel: pci 0000:00:1b.0: PCI bridge to [bus 03] Feb 13 21:05:29.458907 kernel: pci 0000:04:00.0: working around ROM BAR overlap defect Feb 13 21:05:29.458955 kernel: pci 0000:04:00.0: [8086:1533] type 00 class 0x020000 Feb 13 21:05:29.459003 kernel: pci 0000:04:00.0: reg 0x10: [mem 0x7e400000-0x7e47ffff] Feb 13 21:05:29.459050 kernel: pci 0000:04:00.0: reg 0x18: [io 0x5000-0x501f] Feb 13 21:05:29.459098 kernel: pci 0000:04:00.0: reg 0x1c: [mem 0x7e480000-0x7e483fff] Feb 13 21:05:29.459145 kernel: pci 0000:04:00.0: PME# supported from D0 D3hot D3cold Feb 13 21:05:29.459195 kernel: pci 0000:00:1b.4: PCI bridge to [bus 04] Feb 13 21:05:29.459242 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff] Feb 13 21:05:29.459288 kernel: pci 0000:00:1b.4: bridge window [mem 0x7e400000-0x7e4fffff] Feb 13 21:05:29.459341 kernel: pci 0000:05:00.0: working around ROM BAR overlap defect Feb 13 21:05:29.459389 kernel: pci 0000:05:00.0: [8086:1533] type 00 class 0x020000 Feb 13 21:05:29.459439 kernel: pci 0000:05:00.0: reg 0x10: [mem 0x7e300000-0x7e37ffff] Feb 13 21:05:29.459487 kernel: pci 0000:05:00.0: reg 0x18: [io 0x4000-0x401f] Feb 13 21:05:29.459537 kernel: pci 0000:05:00.0: reg 0x1c: [mem 0x7e380000-0x7e383fff] Feb 13 21:05:29.459584 kernel: pci 0000:05:00.0: PME# supported from D0 D3hot D3cold Feb 13 21:05:29.459635 kernel: pci 0000:00:1b.5: PCI bridge to [bus 05] Feb 13 21:05:29.459726 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff] Feb 13 21:05:29.459773 kernel: pci 0000:00:1b.5: bridge window [mem 0x7e300000-0x7e3fffff] Feb 13 21:05:29.459820 kernel: pci 0000:00:1c.0: PCI bridge to [bus 06] Feb 13 21:05:29.459905 kernel: pci 0000:07:00.0: [1a03:1150] type 01 class 0x060400 Feb 13 21:05:29.459955 kernel: pci 0000:07:00.0: enabling Extended Tags Feb 13 21:05:29.460005 kernel: pci 0000:07:00.0: supports D1 D2 Feb 13 21:05:29.460054 kernel: pci 0000:07:00.0: PME# supported from D0 D1 D2 D3hot D3cold Feb 13 21:05:29.460101 kernel: pci 
0000:00:1c.1: PCI bridge to [bus 07-08] Feb 13 21:05:29.460159 kernel: pci 0000:00:1c.1: bridge window [io 0x3000-0x3fff] Feb 13 21:05:29.460207 kernel: pci 0000:00:1c.1: bridge window [mem 0x7d000000-0x7e0fffff] Feb 13 21:05:29.460261 kernel: pci_bus 0000:08: extended config space not accessible Feb 13 21:05:29.460317 kernel: pci 0000:08:00.0: [1a03:2000] type 00 class 0x030000 Feb 13 21:05:29.460371 kernel: pci 0000:08:00.0: reg 0x10: [mem 0x7d000000-0x7dffffff] Feb 13 21:05:29.460422 kernel: pci 0000:08:00.0: reg 0x14: [mem 0x7e000000-0x7e01ffff] Feb 13 21:05:29.460472 kernel: pci 0000:08:00.0: reg 0x18: [io 0x3000-0x307f] Feb 13 21:05:29.460523 kernel: pci 0000:08:00.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Feb 13 21:05:29.460573 kernel: pci 0000:08:00.0: supports D1 D2 Feb 13 21:05:29.460626 kernel: pci 0000:08:00.0: PME# supported from D0 D1 D2 D3hot D3cold Feb 13 21:05:29.460711 kernel: pci 0000:07:00.0: PCI bridge to [bus 08] Feb 13 21:05:29.460762 kernel: pci 0000:07:00.0: bridge window [io 0x3000-0x3fff] Feb 13 21:05:29.460811 kernel: pci 0000:07:00.0: bridge window [mem 0x7d000000-0x7e0fffff] Feb 13 21:05:29.460820 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 0 Feb 13 21:05:29.460826 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 1 Feb 13 21:05:29.460832 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 0 Feb 13 21:05:29.460837 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 0 Feb 13 21:05:29.460843 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 0 Feb 13 21:05:29.460849 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 0 Feb 13 21:05:29.460854 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 0 Feb 13 21:05:29.460862 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 0 Feb 13 21:05:29.460868 kernel: iommu: Default domain type: Translated Feb 13 21:05:29.460873 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Feb 13 21:05:29.460879 kernel: PCI: Using ACPI for IRQ routing Feb 13 21:05:29.460885 kernel: PCI: pci_cache_line_size set to 64 bytes Feb 13 21:05:29.460890 kernel: e820: reserve RAM buffer [mem 0x00099800-0x0009ffff] Feb 13 21:05:29.460896 kernel: e820: reserve RAM buffer [mem 0x620bb000-0x63ffffff] Feb 13 21:05:29.460902 kernel: e820: reserve RAM buffer [mem 0x6c0c5000-0x6fffffff] Feb 13 21:05:29.460907 kernel: e820: reserve RAM buffer [mem 0x6d331000-0x6fffffff] Feb 13 21:05:29.460914 kernel: e820: reserve RAM buffer [mem 0x883800000-0x883ffffff] Feb 13 21:05:29.460963 kernel: pci 0000:08:00.0: vgaarb: setting as boot VGA device Feb 13 21:05:29.461013 kernel: pci 0000:08:00.0: vgaarb: bridge control possible Feb 13 21:05:29.461065 kernel: pci 0000:08:00.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Feb 13 21:05:29.461073 kernel: vgaarb: loaded Feb 13 21:05:29.461079 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0 Feb 13 21:05:29.461085 kernel: hpet0: 8 comparators, 64-bit 24.000000 MHz counter Feb 13 21:05:29.461091 kernel: clocksource: Switched to clocksource tsc-early Feb 13 21:05:29.461096 kernel: VFS: Disk quotas dquot_6.6.0 Feb 13 21:05:29.461104 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Feb 13 21:05:29.461109 kernel: pnp: PnP ACPI init Feb 13 21:05:29.461158 kernel: system 00:00: [mem 0x40000000-0x403fffff] has been reserved Feb 13 21:05:29.461206 kernel: pnp 00:02: [dma 0 disabled] Feb 13 21:05:29.461252 kernel: pnp 00:03: [dma 0 disabled] Feb 13 21:05:29.461302 kernel: system 00:04: [io 
0x0680-0x069f] has been reserved Feb 13 21:05:29.461347 kernel: system 00:04: [io 0x164e-0x164f] has been reserved Feb 13 21:05:29.461393 kernel: system 00:05: [io 0x1854-0x1857] has been reserved Feb 13 21:05:29.461439 kernel: system 00:06: [mem 0xfed10000-0xfed17fff] has been reserved Feb 13 21:05:29.461482 kernel: system 00:06: [mem 0xfed18000-0xfed18fff] has been reserved Feb 13 21:05:29.461524 kernel: system 00:06: [mem 0xfed19000-0xfed19fff] has been reserved Feb 13 21:05:29.461568 kernel: system 00:06: [mem 0xe0000000-0xefffffff] has been reserved Feb 13 21:05:29.461610 kernel: system 00:06: [mem 0xfed20000-0xfed3ffff] has been reserved Feb 13 21:05:29.461698 kernel: system 00:06: [mem 0xfed90000-0xfed93fff] could not be reserved Feb 13 21:05:29.461742 kernel: system 00:06: [mem 0xfed45000-0xfed8ffff] has been reserved Feb 13 21:05:29.461785 kernel: system 00:06: [mem 0xfee00000-0xfeefffff] could not be reserved Feb 13 21:05:29.461835 kernel: system 00:07: [io 0x1800-0x18fe] could not be reserved Feb 13 21:05:29.461877 kernel: system 00:07: [mem 0xfd000000-0xfd69ffff] has been reserved Feb 13 21:05:29.461920 kernel: system 00:07: [mem 0xfd6c0000-0xfd6cffff] has been reserved Feb 13 21:05:29.461961 kernel: system 00:07: [mem 0xfd6f0000-0xfdffffff] has been reserved Feb 13 21:05:29.462006 kernel: system 00:07: [mem 0xfe000000-0xfe01ffff] could not be reserved Feb 13 21:05:29.462048 kernel: system 00:07: [mem 0xfe200000-0xfe7fffff] has been reserved Feb 13 21:05:29.462090 kernel: system 00:07: [mem 0xff000000-0xffffffff] has been reserved Feb 13 21:05:29.462137 kernel: system 00:08: [io 0x2000-0x20fe] has been reserved Feb 13 21:05:29.462146 kernel: pnp: PnP ACPI: found 10 devices Feb 13 21:05:29.462152 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Feb 13 21:05:29.462158 kernel: NET: Registered PF_INET protocol family Feb 13 21:05:29.462165 kernel: IP idents hash table entries: 262144 (order: 9, 2097152 bytes, linear) Feb 13 21:05:29.462171 kernel: tcp_listen_portaddr_hash hash table entries: 16384 (order: 6, 262144 bytes, linear) Feb 13 21:05:29.462177 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Feb 13 21:05:29.462182 kernel: TCP established hash table entries: 262144 (order: 9, 2097152 bytes, linear) Feb 13 21:05:29.462188 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Feb 13 21:05:29.462194 kernel: TCP: Hash tables configured (established 262144 bind 65536) Feb 13 21:05:29.462199 kernel: UDP hash table entries: 16384 (order: 7, 524288 bytes, linear) Feb 13 21:05:29.462206 kernel: UDP-Lite hash table entries: 16384 (order: 7, 524288 bytes, linear) Feb 13 21:05:29.462211 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Feb 13 21:05:29.462218 kernel: NET: Registered PF_XDP protocol family Feb 13 21:05:29.462266 kernel: pci 0000:00:15.0: BAR 0: assigned [mem 0x7b800000-0x7b800fff 64bit] Feb 13 21:05:29.462314 kernel: pci 0000:00:15.1: BAR 0: assigned [mem 0x7b801000-0x7b801fff 64bit] Feb 13 21:05:29.462362 kernel: pci 0000:00:1e.0: BAR 0: assigned [mem 0x7b802000-0x7b802fff 64bit] Feb 13 21:05:29.462409 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Feb 13 21:05:29.462461 kernel: pci 0000:02:00.0: BAR 7: no space for [mem size 0x00800000 64bit pref] Feb 13 21:05:29.462510 kernel: pci 0000:02:00.0: BAR 7: failed to assign [mem size 0x00800000 64bit pref] Feb 13 21:05:29.462558 kernel: pci 0000:02:00.1: BAR 7: no space for [mem size 0x00800000 64bit pref] Feb 13 
21:05:29.462607 kernel: pci 0000:02:00.1: BAR 7: failed to assign [mem size 0x00800000 64bit pref] Feb 13 21:05:29.462699 kernel: pci 0000:00:01.1: PCI bridge to [bus 02] Feb 13 21:05:29.462747 kernel: pci 0000:00:01.1: bridge window [mem 0x7e100000-0x7e2fffff] Feb 13 21:05:29.462794 kernel: pci 0000:00:01.1: bridge window [mem 0x90000000-0x93ffffff 64bit pref] Feb 13 21:05:29.462842 kernel: pci 0000:00:1b.0: PCI bridge to [bus 03] Feb 13 21:05:29.462891 kernel: pci 0000:00:1b.4: PCI bridge to [bus 04] Feb 13 21:05:29.462939 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff] Feb 13 21:05:29.462986 kernel: pci 0000:00:1b.4: bridge window [mem 0x7e400000-0x7e4fffff] Feb 13 21:05:29.463034 kernel: pci 0000:00:1b.5: PCI bridge to [bus 05] Feb 13 21:05:29.463081 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff] Feb 13 21:05:29.463128 kernel: pci 0000:00:1b.5: bridge window [mem 0x7e300000-0x7e3fffff] Feb 13 21:05:29.463174 kernel: pci 0000:00:1c.0: PCI bridge to [bus 06] Feb 13 21:05:29.463222 kernel: pci 0000:07:00.0: PCI bridge to [bus 08] Feb 13 21:05:29.463274 kernel: pci 0000:07:00.0: bridge window [io 0x3000-0x3fff] Feb 13 21:05:29.463321 kernel: pci 0000:07:00.0: bridge window [mem 0x7d000000-0x7e0fffff] Feb 13 21:05:29.463368 kernel: pci 0000:00:1c.1: PCI bridge to [bus 07-08] Feb 13 21:05:29.463415 kernel: pci 0000:00:1c.1: bridge window [io 0x3000-0x3fff] Feb 13 21:05:29.463462 kernel: pci 0000:00:1c.1: bridge window [mem 0x7d000000-0x7e0fffff] Feb 13 21:05:29.463505 kernel: pci_bus 0000:00: Some PCI device resources are unassigned, try booting with pci=realloc Feb 13 21:05:29.463547 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Feb 13 21:05:29.463589 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Feb 13 21:05:29.463633 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Feb 13 21:05:29.463709 kernel: pci_bus 0000:00: resource 7 [mem 0x7b800000-0xdfffffff window] Feb 13 21:05:29.463751 kernel: pci_bus 0000:00: resource 8 [mem 0xfc800000-0xfe7fffff window] Feb 13 21:05:29.463798 kernel: pci_bus 0000:02: resource 1 [mem 0x7e100000-0x7e2fffff] Feb 13 21:05:29.463843 kernel: pci_bus 0000:02: resource 2 [mem 0x90000000-0x93ffffff 64bit pref] Feb 13 21:05:29.463892 kernel: pci_bus 0000:04: resource 0 [io 0x5000-0x5fff] Feb 13 21:05:29.463936 kernel: pci_bus 0000:04: resource 1 [mem 0x7e400000-0x7e4fffff] Feb 13 21:05:29.463985 kernel: pci_bus 0000:05: resource 0 [io 0x4000-0x4fff] Feb 13 21:05:29.464030 kernel: pci_bus 0000:05: resource 1 [mem 0x7e300000-0x7e3fffff] Feb 13 21:05:29.464076 kernel: pci_bus 0000:07: resource 0 [io 0x3000-0x3fff] Feb 13 21:05:29.464120 kernel: pci_bus 0000:07: resource 1 [mem 0x7d000000-0x7e0fffff] Feb 13 21:05:29.464166 kernel: pci_bus 0000:08: resource 0 [io 0x3000-0x3fff] Feb 13 21:05:29.464210 kernel: pci_bus 0000:08: resource 1 [mem 0x7d000000-0x7e0fffff] Feb 13 21:05:29.464218 kernel: PCI: CLS 64 bytes, default 64 Feb 13 21:05:29.464226 kernel: DMAR: No ATSR found Feb 13 21:05:29.464232 kernel: DMAR: No SATC found Feb 13 21:05:29.464238 kernel: DMAR: IOMMU feature fl1gp_support inconsistent Feb 13 21:05:29.464244 kernel: DMAR: IOMMU feature pgsel_inv inconsistent Feb 13 21:05:29.464249 kernel: DMAR: IOMMU feature nwfs inconsistent Feb 13 21:05:29.464255 kernel: DMAR: IOMMU feature pasid inconsistent Feb 13 21:05:29.464261 kernel: DMAR: IOMMU feature eafs inconsistent Feb 13 21:05:29.464266 kernel: DMAR: IOMMU feature prs inconsistent Feb 13 21:05:29.464272 kernel: DMAR: IOMMU feature nest 
inconsistent Feb 13 21:05:29.464279 kernel: DMAR: IOMMU feature mts inconsistent Feb 13 21:05:29.464284 kernel: DMAR: IOMMU feature sc_support inconsistent Feb 13 21:05:29.464290 kernel: DMAR: IOMMU feature dev_iotlb_support inconsistent Feb 13 21:05:29.464296 kernel: DMAR: dmar0: Using Queued invalidation Feb 13 21:05:29.464301 kernel: DMAR: dmar1: Using Queued invalidation Feb 13 21:05:29.464349 kernel: pci 0000:00:02.0: Adding to iommu group 0 Feb 13 21:05:29.464399 kernel: pci 0000:00:00.0: Adding to iommu group 1 Feb 13 21:05:29.464447 kernel: pci 0000:00:01.0: Adding to iommu group 2 Feb 13 21:05:29.464494 kernel: pci 0000:00:01.1: Adding to iommu group 2 Feb 13 21:05:29.464544 kernel: pci 0000:00:08.0: Adding to iommu group 3 Feb 13 21:05:29.464591 kernel: pci 0000:00:12.0: Adding to iommu group 4 Feb 13 21:05:29.464659 kernel: pci 0000:00:14.0: Adding to iommu group 5 Feb 13 21:05:29.464720 kernel: pci 0000:00:14.2: Adding to iommu group 5 Feb 13 21:05:29.464767 kernel: pci 0000:00:15.0: Adding to iommu group 6 Feb 13 21:05:29.464812 kernel: pci 0000:00:15.1: Adding to iommu group 6 Feb 13 21:05:29.464859 kernel: pci 0000:00:16.0: Adding to iommu group 7 Feb 13 21:05:29.464906 kernel: pci 0000:00:16.1: Adding to iommu group 7 Feb 13 21:05:29.464955 kernel: pci 0000:00:16.4: Adding to iommu group 7 Feb 13 21:05:29.465002 kernel: pci 0000:00:17.0: Adding to iommu group 8 Feb 13 21:05:29.465049 kernel: pci 0000:00:1b.0: Adding to iommu group 9 Feb 13 21:05:29.465096 kernel: pci 0000:00:1b.4: Adding to iommu group 10 Feb 13 21:05:29.465143 kernel: pci 0000:00:1b.5: Adding to iommu group 11 Feb 13 21:05:29.465190 kernel: pci 0000:00:1c.0: Adding to iommu group 12 Feb 13 21:05:29.465238 kernel: pci 0000:00:1c.1: Adding to iommu group 13 Feb 13 21:05:29.465285 kernel: pci 0000:00:1e.0: Adding to iommu group 14 Feb 13 21:05:29.465332 kernel: pci 0000:00:1f.0: Adding to iommu group 15 Feb 13 21:05:29.465382 kernel: pci 0000:00:1f.4: Adding to iommu group 15 Feb 13 21:05:29.465428 kernel: pci 0000:00:1f.5: Adding to iommu group 15 Feb 13 21:05:29.465476 kernel: pci 0000:02:00.0: Adding to iommu group 2 Feb 13 21:05:29.465524 kernel: pci 0000:02:00.1: Adding to iommu group 2 Feb 13 21:05:29.465573 kernel: pci 0000:04:00.0: Adding to iommu group 16 Feb 13 21:05:29.465621 kernel: pci 0000:05:00.0: Adding to iommu group 17 Feb 13 21:05:29.465704 kernel: pci 0000:07:00.0: Adding to iommu group 18 Feb 13 21:05:29.465754 kernel: pci 0000:08:00.0: Adding to iommu group 18 Feb 13 21:05:29.465764 kernel: DMAR: Intel(R) Virtualization Technology for Directed I/O Feb 13 21:05:29.465770 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Feb 13 21:05:29.465776 kernel: software IO TLB: mapped [mem 0x00000000680c5000-0x000000006c0c5000] (64MB) Feb 13 21:05:29.465781 kernel: RAPL PMU: API unit is 2^-32 Joules, 4 fixed counters, 655360 ms ovfl timer Feb 13 21:05:29.465787 kernel: RAPL PMU: hw unit of domain pp0-core 2^-14 Joules Feb 13 21:05:29.465793 kernel: RAPL PMU: hw unit of domain package 2^-14 Joules Feb 13 21:05:29.465799 kernel: RAPL PMU: hw unit of domain dram 2^-14 Joules Feb 13 21:05:29.465804 kernel: RAPL PMU: hw unit of domain pp1-gpu 2^-14 Joules Feb 13 21:05:29.465855 kernel: platform rtc_cmos: registered platform RTC device (no PNP device found) Feb 13 21:05:29.465864 kernel: Initialise system trusted keyrings Feb 13 21:05:29.465869 kernel: workingset: timestamp_bits=39 max_order=23 bucket_order=0 Feb 13 21:05:29.465875 kernel: Key type asymmetric registered Feb 13 
21:05:29.465880 kernel: Asymmetric key parser 'x509' registered Feb 13 21:05:29.465886 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Feb 13 21:05:29.465892 kernel: io scheduler mq-deadline registered Feb 13 21:05:29.465897 kernel: io scheduler kyber registered Feb 13 21:05:29.465903 kernel: io scheduler bfq registered Feb 13 21:05:29.465951 kernel: pcieport 0000:00:01.0: PME: Signaling with IRQ 122 Feb 13 21:05:29.465998 kernel: pcieport 0000:00:01.1: PME: Signaling with IRQ 123 Feb 13 21:05:29.466046 kernel: pcieport 0000:00:1b.0: PME: Signaling with IRQ 124 Feb 13 21:05:29.466093 kernel: pcieport 0000:00:1b.4: PME: Signaling with IRQ 125 Feb 13 21:05:29.466140 kernel: pcieport 0000:00:1b.5: PME: Signaling with IRQ 126 Feb 13 21:05:29.466187 kernel: pcieport 0000:00:1c.0: PME: Signaling with IRQ 127 Feb 13 21:05:29.466234 kernel: pcieport 0000:00:1c.1: PME: Signaling with IRQ 128 Feb 13 21:05:29.466288 kernel: thermal LNXTHERM:00: registered as thermal_zone0 Feb 13 21:05:29.466297 kernel: ACPI: thermal: Thermal Zone [TZ00] (28 C) Feb 13 21:05:29.466303 kernel: ERST: Error Record Serialization Table (ERST) support is initialized. Feb 13 21:05:29.466309 kernel: pstore: Using crash dump compression: deflate Feb 13 21:05:29.466314 kernel: pstore: Registered erst as persistent store backend Feb 13 21:05:29.466320 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Feb 13 21:05:29.466326 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Feb 13 21:05:29.466332 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Feb 13 21:05:29.466339 kernel: 00:03: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Feb 13 21:05:29.466389 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x1B, rev-id 16) Feb 13 21:05:29.466397 kernel: i8042: PNP: No PS/2 controller found. Feb 13 21:05:29.466440 kernel: rtc_cmos rtc_cmos: RTC can wake from S4 Feb 13 21:05:29.466483 kernel: rtc_cmos rtc_cmos: registered as rtc0 Feb 13 21:05:29.466526 kernel: rtc_cmos rtc_cmos: setting system clock to 2025-02-13T21:05:28 UTC (1739480728) Feb 13 21:05:29.466569 kernel: rtc_cmos rtc_cmos: alarms up to one month, y3k, 114 bytes nvram Feb 13 21:05:29.466577 kernel: intel_pstate: Intel P-state driver initializing Feb 13 21:05:29.466584 kernel: intel_pstate: Disabling energy efficiency optimization Feb 13 21:05:29.466590 kernel: intel_pstate: HWP enabled Feb 13 21:05:29.466596 kernel: NET: Registered PF_INET6 protocol family Feb 13 21:05:29.466601 kernel: Segment Routing with IPv6 Feb 13 21:05:29.466607 kernel: In-situ OAM (IOAM) with IPv6 Feb 13 21:05:29.466613 kernel: NET: Registered PF_PACKET protocol family Feb 13 21:05:29.466618 kernel: Key type dns_resolver registered Feb 13 21:05:29.466658 kernel: microcode: Microcode Update Driver: v2.2. 
Feb 13 21:05:29.466664 kernel: IPI shorthand broadcast: enabled Feb 13 21:05:29.466686 kernel: sched_clock: Marking stable (2758001165, 1450386165)->(4679400914, -471013584) Feb 13 21:05:29.466691 kernel: registered taskstats version 1 Feb 13 21:05:29.466697 kernel: Loading compiled-in X.509 certificates Feb 13 21:05:29.466703 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: b3acedbed401b3cd9632ee9302ddcce254d8924d' Feb 13 21:05:29.466709 kernel: Key type .fscrypt registered Feb 13 21:05:29.466714 kernel: Key type fscrypt-provisioning registered Feb 13 21:05:29.466720 kernel: ima: Allocated hash algorithm: sha1 Feb 13 21:05:29.466725 kernel: ima: No architecture policies found Feb 13 21:05:29.466731 kernel: clk: Disabling unused clocks Feb 13 21:05:29.466738 kernel: Freeing unused kernel image (initmem) memory: 43320K Feb 13 21:05:29.466744 kernel: Write protecting the kernel read-only data: 38912k Feb 13 21:05:29.466750 kernel: Freeing unused kernel image (rodata/data gap) memory: 1776K Feb 13 21:05:29.466755 kernel: Run /init as init process Feb 13 21:05:29.466761 kernel: with arguments: Feb 13 21:05:29.466766 kernel: /init Feb 13 21:05:29.466772 kernel: with environment: Feb 13 21:05:29.466778 kernel: HOME=/ Feb 13 21:05:29.466783 kernel: TERM=linux Feb 13 21:05:29.466790 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Feb 13 21:05:29.466797 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Feb 13 21:05:29.466804 systemd[1]: Detected architecture x86-64. Feb 13 21:05:29.466810 systemd[1]: Running in initrd. Feb 13 21:05:29.466816 systemd[1]: No hostname configured, using default hostname. Feb 13 21:05:29.466822 systemd[1]: Hostname set to . Feb 13 21:05:29.466827 systemd[1]: Initializing machine ID from random generator. Feb 13 21:05:29.466835 systemd[1]: Queued start job for default target initrd.target. Feb 13 21:05:29.466841 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 21:05:29.466847 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 21:05:29.466853 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Feb 13 21:05:29.466859 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Feb 13 21:05:29.466865 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Feb 13 21:05:29.466871 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Feb 13 21:05:29.466877 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Feb 13 21:05:29.466885 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Feb 13 21:05:29.466891 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 21:05:29.466897 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 13 21:05:29.466903 systemd[1]: Reached target paths.target - Path Units. Feb 13 21:05:29.466909 systemd[1]: Reached target slices.target - Slice Units. 
Feb 13 21:05:29.466915 systemd[1]: Reached target swap.target - Swaps. Feb 13 21:05:29.466921 systemd[1]: Reached target timers.target - Timer Units. Feb 13 21:05:29.466928 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Feb 13 21:05:29.466934 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 13 21:05:29.466940 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Feb 13 21:05:29.466946 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Feb 13 21:05:29.466953 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Feb 13 21:05:29.466959 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Feb 13 21:05:29.466965 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 21:05:29.466971 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 21:05:29.466978 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Feb 13 21:05:29.466983 kernel: tsc: Refined TSC clocksource calibration: 3407.987 MHz Feb 13 21:05:29.466989 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fc76df96, max_idle_ns: 440795240193 ns Feb 13 21:05:29.466995 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Feb 13 21:05:29.467001 kernel: clocksource: Switched to clocksource tsc Feb 13 21:05:29.467007 systemd[1]: Finished network-cleanup.service - Network Cleanup. Feb 13 21:05:29.467013 systemd[1]: Starting systemd-fsck-usr.service... Feb 13 21:05:29.467019 systemd[1]: Starting systemd-journald.service - Journal Service... Feb 13 21:05:29.467036 systemd-journald[268]: Collecting audit messages is disabled. Feb 13 21:05:29.467052 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Feb 13 21:05:29.467059 systemd-journald[268]: Journal started Feb 13 21:05:29.467073 systemd-journald[268]: Runtime Journal (/run/log/journal/c2b32003106344f48c7b5a666f581e12) is 8.0M, max 636.6M, 628.6M free. Feb 13 21:05:29.469427 systemd-modules-load[269]: Inserted module 'overlay' Feb 13 21:05:29.498290 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 21:05:29.498302 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Feb 13 21:05:29.506098 systemd-modules-load[269]: Inserted module 'br_netfilter' Feb 13 21:05:29.514135 kernel: Bridge firewalling registered Feb 13 21:05:29.514223 systemd[1]: Started systemd-journald.service - Journal Service. Feb 13 21:05:29.523036 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Feb 13 21:05:29.523130 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 21:05:29.523216 systemd[1]: Finished systemd-fsck-usr.service. Feb 13 21:05:29.523299 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Feb 13 21:05:29.544837 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 21:05:29.606861 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Feb 13 21:05:29.620420 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Feb 13 21:05:29.642599 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 21:05:29.674478 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Feb 13 21:05:29.694368 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 21:05:29.715355 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 21:05:29.754064 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 21:05:29.780957 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Feb 13 21:05:29.782639 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Feb 13 21:05:29.802602 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 21:05:29.812011 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 21:05:29.823727 systemd-resolved[295]: Positive Trust Anchors: Feb 13 21:05:29.823735 systemd-resolved[295]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 21:05:29.823771 systemd-resolved[295]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 21:05:29.954737 kernel: SCSI subsystem initialized Feb 13 21:05:29.954755 kernel: Loading iSCSI transport class v2.0-870. Feb 13 21:05:29.954767 kernel: iscsi: registered transport (tcp) Feb 13 21:05:29.954788 dracut-cmdline[309]: dracut-dracut-053 Feb 13 21:05:29.954788 dracut-cmdline[309]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=015d1d9e5e601f6a4e226c935072d3d0819e7eb2da20e68715973498f21aa3fe Feb 13 21:05:30.015846 kernel: iscsi: registered transport (qla4xxx) Feb 13 21:05:30.015877 kernel: QLogic iSCSI HBA Driver Feb 13 21:05:29.826021 systemd-resolved[295]: Defaulting to hostname 'linux'. Feb 13 21:05:29.839872 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Feb 13 21:05:29.850782 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 21:05:29.857821 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 21:05:30.001189 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Feb 13 21:05:30.041911 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Feb 13 21:05:30.135962 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Feb 13 21:05:30.135980 kernel: device-mapper: uevent: version 1.0.3 Feb 13 21:05:30.144774 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Feb 13 21:05:30.180656 kernel: raid6: avx2x4 gen() 46501 MB/s Feb 13 21:05:30.201656 kernel: raid6: avx2x2 gen() 53143 MB/s Feb 13 21:05:30.227779 kernel: raid6: avx2x1 gen() 44720 MB/s Feb 13 21:05:30.227797 kernel: raid6: using algorithm avx2x2 gen() 53143 MB/s Feb 13 21:05:30.254859 kernel: raid6: .... xor() 32460 MB/s, rmw enabled Feb 13 21:05:30.254877 kernel: raid6: using avx2x2 recovery algorithm Feb 13 21:05:30.275679 kernel: xor: automatically using best checksumming function avx Feb 13 21:05:30.372669 kernel: Btrfs loaded, zoned=no, fsverity=no Feb 13 21:05:30.378787 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Feb 13 21:05:30.394974 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 21:05:30.405076 systemd-udevd[495]: Using default interface naming scheme 'v255'. Feb 13 21:05:30.417007 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 21:05:30.432716 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Feb 13 21:05:30.487145 dracut-pre-trigger[506]: rd.md=0: removing MD RAID activation Feb 13 21:05:30.505000 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Feb 13 21:05:30.527889 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 13 21:05:30.613137 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 21:05:30.650717 kernel: pps_core: LinuxPPS API ver. 1 registered Feb 13 21:05:30.650734 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Feb 13 21:05:30.650748 kernel: cryptd: max_cpu_qlen set to 1000 Feb 13 21:05:30.623796 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Feb 13 21:05:30.682447 kernel: libata version 3.00 loaded. Feb 13 21:05:30.682465 kernel: PTP clock support registered Feb 13 21:05:30.682478 kernel: ACPI: bus type USB registered Feb 13 21:05:30.682491 kernel: usbcore: registered new interface driver usbfs Feb 13 21:05:30.682504 kernel: usbcore: registered new interface driver hub Feb 13 21:05:30.682516 kernel: usbcore: registered new device driver usb Feb 13 21:05:30.682533 kernel: AVX2 version of gcm_enc/dec engaged. Feb 13 21:05:30.652451 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. 
Feb 13 21:05:30.860180 kernel: AES CTR mode by8 optimization enabled Feb 13 21:05:30.860270 kernel: ahci 0000:00:17.0: version 3.0 Feb 13 21:05:30.860608 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller Feb 13 21:05:30.860798 kernel: ahci 0000:00:17.0: AHCI 0001.0301 32 slots 8 ports 6 Gbps 0xff impl SATA mode Feb 13 21:05:30.860955 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 1 Feb 13 21:05:30.861123 kernel: ahci 0000:00:17.0: flags: 64bit ncq sntf clo only pio slum part ems deso sadm sds apst Feb 13 21:05:30.861282 kernel: xhci_hcd 0000:00:14.0: hcc params 0x200077c1 hci version 0x110 quirks 0x0000000000009810 Feb 13 21:05:30.861454 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller Feb 13 21:05:30.861608 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 2 Feb 13 21:05:30.861817 kernel: scsi host0: ahci Feb 13 21:05:30.861975 kernel: xhci_hcd 0000:00:14.0: Host supports USB 3.1 Enhanced SuperSpeed Feb 13 21:05:30.862126 kernel: scsi host1: ahci Feb 13 21:05:30.862288 kernel: hub 1-0:1.0: USB hub found Feb 13 21:05:30.862469 kernel: scsi host2: ahci Feb 13 21:05:30.862619 kernel: hub 1-0:1.0: 16 ports detected Feb 13 21:05:30.862868 kernel: scsi host3: ahci Feb 13 21:05:30.863033 kernel: hub 2-0:1.0: USB hub found Feb 13 21:05:30.863202 kernel: scsi host4: ahci Feb 13 21:05:30.863359 kernel: hub 2-0:1.0: 10 ports detected Feb 13 21:05:30.863523 kernel: scsi host5: ahci Feb 13 21:05:30.863732 kernel: igb: Intel(R) Gigabit Ethernet Network Driver Feb 13 21:05:30.863750 kernel: scsi host6: ahci Feb 13 21:05:30.863896 kernel: igb: Copyright (c) 2007-2014 Intel Corporation. Feb 13 21:05:30.863912 kernel: scsi host7: ahci Feb 13 21:05:30.864061 kernel: ata1: SATA max UDMA/133 abar m2048@0x7e516000 port 0x7e516100 irq 129 Feb 13 21:05:30.864078 kernel: ata2: SATA max UDMA/133 abar m2048@0x7e516000 port 0x7e516180 irq 129 Feb 13 21:05:30.864098 kernel: ata3: SATA max UDMA/133 abar m2048@0x7e516000 port 0x7e516200 irq 129 Feb 13 21:05:30.864115 kernel: ata4: SATA max UDMA/133 abar m2048@0x7e516000 port 0x7e516280 irq 129 Feb 13 21:05:30.864128 kernel: ata5: SATA max UDMA/133 abar m2048@0x7e516000 port 0x7e516300 irq 129 Feb 13 21:05:30.864141 kernel: ata6: SATA max UDMA/133 abar m2048@0x7e516000 port 0x7e516380 irq 129 Feb 13 21:05:30.864155 kernel: ata7: SATA max UDMA/133 abar m2048@0x7e516000 port 0x7e516400 irq 129 Feb 13 21:05:30.864168 kernel: ata8: SATA max UDMA/133 abar m2048@0x7e516000 port 0x7e516480 irq 129 Feb 13 21:05:30.864181 kernel: pps pps0: new PPS source ptp0 Feb 13 21:05:30.652516 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 21:05:30.898402 kernel: igb 0000:04:00.0: added PHC on eth0 Feb 13 21:05:30.931525 kernel: igb 0000:04:00.0: Intel(R) Gigabit Ethernet Network Connection Feb 13 21:05:30.931824 kernel: igb 0000:04:00.0: eth0: (PCIe:2.5Gb/s:Width x1) 3c:ec:ef:73:1d:d2 Feb 13 21:05:30.931999 kernel: igb 0000:04:00.0: eth0: PBA No: 010000-000 Feb 13 21:05:30.932233 kernel: igb 0000:04:00.0: Using MSI-X interrupts. 
4 rx queue(s), 4 tx queue(s) Feb 13 21:05:30.941656 kernel: mlx5_core 0000:02:00.0: firmware version: 14.28.2006 Feb 13 21:05:31.431764 kernel: mlx5_core 0000:02:00.0: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link) Feb 13 21:05:31.431846 kernel: pps pps1: new PPS source ptp1 Feb 13 21:05:31.431912 kernel: igb 0000:05:00.0: added PHC on eth1 Feb 13 21:05:31.431985 kernel: igb 0000:05:00.0: Intel(R) Gigabit Ethernet Network Connection Feb 13 21:05:31.432051 kernel: igb 0000:05:00.0: eth1: (PCIe:2.5Gb/s:Width x1) 3c:ec:ef:73:1d:d3 Feb 13 21:05:31.432118 kernel: igb 0000:05:00.0: eth1: PBA No: 010000-000 Feb 13 21:05:31.432182 kernel: igb 0000:05:00.0: Using MSI-X interrupts. 4 rx queue(s), 4 tx queue(s) Feb 13 21:05:31.432245 kernel: usb 1-14: new high-speed USB device number 2 using xhci_hcd Feb 13 21:05:31.591994 kernel: ata7: SATA link down (SStatus 0 SControl 300) Feb 13 21:05:31.592005 kernel: ata3: SATA link down (SStatus 0 SControl 300) Feb 13 21:05:31.592015 kernel: ata1: SATA link up 6.0 Gbps (SStatus 133 SControl 300) Feb 13 21:05:31.592026 kernel: ata5: SATA link down (SStatus 0 SControl 300) Feb 13 21:05:31.592034 kernel: hub 1-14:1.0: USB hub found Feb 13 21:05:31.592128 kernel: ata2: SATA link up 6.0 Gbps (SStatus 133 SControl 300) Feb 13 21:05:31.592137 kernel: hub 1-14:1.0: 4 ports detected Feb 13 21:05:31.592211 kernel: ata4: SATA link down (SStatus 0 SControl 300) Feb 13 21:05:31.592220 kernel: ata6: SATA link down (SStatus 0 SControl 300) Feb 13 21:05:31.592227 kernel: ata8: SATA link down (SStatus 0 SControl 300) Feb 13 21:05:31.592234 kernel: ata1.00: ATA-11: Micron_5300_MTFDDAK480TDT, D3MU001, max UDMA/133 Feb 13 21:05:31.592242 kernel: mlx5_core 0000:02:00.0: E-Switch: Total vports 10, per vport: max uc(1024) max mc(16384) Feb 13 21:05:31.592313 kernel: ata2.00: ATA-11: Micron_5300_MTFDDAK480TDT, D3MU001, max UDMA/133 Feb 13 21:05:31.592323 kernel: mlx5_core 0000:02:00.0: Port module event: module 0, Cable plugged Feb 13 21:05:31.592390 kernel: ata1.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA Feb 13 21:05:31.592398 kernel: ata2.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA Feb 13 21:05:31.592406 kernel: ata1.00: Features: NCQ-prio Feb 13 21:05:31.592413 kernel: ata2.00: Features: NCQ-prio Feb 13 21:05:31.592421 kernel: ata1.00: configured for UDMA/133 Feb 13 21:05:31.592428 kernel: scsi 0:0:0:0: Direct-Access ATA Micron_5300_MTFD U001 PQ: 0 ANSI: 5 Feb 13 21:05:31.592498 kernel: ata2.00: configured for UDMA/133 Feb 13 21:05:31.592508 kernel: scsi 1:0:0:0: Direct-Access ATA Micron_5300_MTFD U001 PQ: 0 ANSI: 5 Feb 13 21:05:31.592572 kernel: igb 0000:05:00.0 eno2: renamed from eth1 Feb 13 21:05:31.592645 kernel: ata2.00: Enabling discard_zeroes_data Feb 13 21:05:31.592654 kernel: ata1.00: Enabling discard_zeroes_data Feb 13 21:05:31.592661 kernel: sd 1:0:0:0: [sda] 937703088 512-byte logical blocks: (480 GB/447 GiB) Feb 13 21:05:31.592727 kernel: sd 0:0:0:0: [sdb] 937703088 512-byte logical blocks: (480 GB/447 GiB) Feb 13 21:05:31.592789 kernel: igb 0000:04:00.0 eno1: renamed from eth0 Feb 13 21:05:31.592856 kernel: sd 1:0:0:0: [sda] 4096-byte physical blocks Feb 13 21:05:31.592919 kernel: sd 0:0:0:0: [sdb] 4096-byte physical blocks Feb 13 21:05:31.592986 kernel: sd 1:0:0:0: [sda] Write Protect is off Feb 13 21:05:31.593045 kernel: sd 0:0:0:0: [sdb] Write Protect is off Feb 13 21:05:31.593104 kernel: sd 1:0:0:0: [sda] Mode Sense: 00 3a 00 00 Feb 13 21:05:31.593162 kernel: sd 0:0:0:0: [sdb] Mode Sense: 00 3a 00 00 Feb 13 
21:05:31.593220 kernel: sd 1:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Feb 13 21:05:31.593278 kernel: sd 0:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Feb 13 21:05:31.593338 kernel: sd 1:0:0:0: [sda] Preferred minimum I/O size 4096 bytes Feb 13 21:05:31.593396 kernel: sd 0:0:0:0: [sdb] Preferred minimum I/O size 4096 bytes Feb 13 21:05:31.593454 kernel: ata2.00: Enabling discard_zeroes_data Feb 13 21:05:31.593463 kernel: ata1.00: Enabling discard_zeroes_data Feb 13 21:05:31.593470 kernel: sd 0:0:0:0: [sdb] Attached SCSI disk Feb 13 21:05:31.593527 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Feb 13 21:05:31.593535 kernel: mlx5_core 0000:02:00.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Feb 13 21:05:31.593599 kernel: GPT:9289727 != 937703087 Feb 13 21:05:31.593609 kernel: mlx5_core 0000:02:00.1: firmware version: 14.28.2006 Feb 13 21:05:31.999093 kernel: GPT:Alternate GPT header not at the end of the disk. Feb 13 21:05:31.999167 kernel: GPT:9289727 != 937703087 Feb 13 21:05:31.999204 kernel: GPT: Use GNU Parted to correct GPT errors. Feb 13 21:05:31.999257 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 13 21:05:31.999306 kernel: sd 1:0:0:0: [sda] Attached SCSI disk Feb 13 21:05:31.999781 kernel: mlx5_core 0000:02:00.1: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link) Feb 13 21:05:32.000168 kernel: usb 1-14.1: new low-speed USB device number 3 using xhci_hcd Feb 13 21:05:32.000811 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by (udev-worker) (559) Feb 13 21:05:32.000864 kernel: BTRFS: device fsid c7adc9b8-df7f-4a5f-93bf-204def2767a9 devid 1 transid 39 /dev/sda3 scanned by (udev-worker) (550) Feb 13 21:05:32.000903 kernel: hid: raw HID events driver (C) Jiri Kosina Feb 13 21:05:32.000941 kernel: ata2.00: Enabling discard_zeroes_data Feb 13 21:05:32.000977 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 13 21:05:32.001014 kernel: usbcore: registered new interface driver usbhid Feb 13 21:05:32.001051 kernel: ata2.00: Enabling discard_zeroes_data Feb 13 21:05:32.001086 kernel: usbhid: USB HID core driver Feb 13 21:05:32.001123 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 13 21:05:32.001166 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.0/0003:0557:2419.0001/input/input0 Feb 13 21:05:32.001204 kernel: hid-generic 0003:0557:2419.0001: input,hidraw0: USB HID v1.00 Keyboard [HID 0557:2419] on usb-0000:00:14.0-14.1/input0 Feb 13 21:05:32.001647 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.1/0003:0557:2419.0002/input/input1 Feb 13 21:05:32.001705 kernel: hid-generic 0003:0557:2419.0002: input,hidraw1: USB HID v1.00 Mouse [HID 0557:2419] on usb-0000:00:14.0-14.1/input1 Feb 13 21:05:32.002101 kernel: mlx5_core 0000:02:00.1: E-Switch: Total vports 10, per vport: max uc(1024) max mc(16384) Feb 13 21:05:32.002465 kernel: mlx5_core 0000:02:00.1: Port module event: module 1, Cable plugged Feb 13 21:05:32.002838 kernel: mlx5_core 0000:02:00.1: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Feb 13 21:05:32.003211 kernel: mlx5_core 0000:02:00.0 enp2s0f0np0: renamed from eth0 Feb 13 21:05:30.950451 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Feb 13 21:05:32.029853 kernel: mlx5_core 0000:02:00.1 enp2s0f1np1: renamed from eth1 Feb 13 21:05:30.965193 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 21:05:30.965291 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 21:05:31.007744 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 21:05:31.028788 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 21:05:32.071788 disk-uuid[709]: Primary Header is updated. Feb 13 21:05:32.071788 disk-uuid[709]: Secondary Entries is updated. Feb 13 21:05:32.071788 disk-uuid[709]: Secondary Header is updated. Feb 13 21:05:31.040949 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Feb 13 21:05:31.042213 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Feb 13 21:05:31.042237 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 21:05:31.042262 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 13 21:05:31.042765 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Feb 13 21:05:31.093768 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 21:05:31.104840 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Feb 13 21:05:31.121799 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 21:05:31.139361 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 21:05:31.497940 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Micron_5300_MTFDDAK480TDT ROOT. Feb 13 21:05:31.529486 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Micron_5300_MTFDDAK480TDT EFI-SYSTEM. Feb 13 21:05:31.550780 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Micron_5300_MTFDDAK480TDT OEM. Feb 13 21:05:31.564966 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Micron_5300_MTFDDAK480TDT USR-A. Feb 13 21:05:31.575696 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Micron_5300_MTFDDAK480TDT USR-A. Feb 13 21:05:31.593751 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Feb 13 21:05:32.624050 kernel: ata2.00: Enabling discard_zeroes_data Feb 13 21:05:32.631569 disk-uuid[710]: The operation has completed successfully. Feb 13 21:05:32.639915 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 13 21:05:32.665977 systemd[1]: disk-uuid.service: Deactivated successfully. Feb 13 21:05:32.666024 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Feb 13 21:05:32.753777 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Feb 13 21:05:32.779688 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Feb 13 21:05:32.779749 sh[737]: Success Feb 13 21:05:32.814148 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Feb 13 21:05:32.843903 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Feb 13 21:05:32.852873 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Feb 13 21:05:32.897691 kernel: BTRFS info (device dm-0): first mount of filesystem c7adc9b8-df7f-4a5f-93bf-204def2767a9 Feb 13 21:05:32.897712 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Feb 13 21:05:32.914184 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Feb 13 21:05:32.921203 kernel: BTRFS info (device dm-0): disabling log replay at mount time Feb 13 21:05:32.927053 kernel: BTRFS info (device dm-0): using free space tree Feb 13 21:05:32.940671 kernel: BTRFS info (device dm-0): enabling ssd optimizations Feb 13 21:05:32.942225 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Feb 13 21:05:32.952098 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Feb 13 21:05:32.965837 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Feb 13 21:05:33.037880 kernel: BTRFS info (device sda6): first mount of filesystem 60a376b4-1193-4e0b-af89-a0e6d698bf0f Feb 13 21:05:33.037893 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Feb 13 21:05:33.037901 kernel: BTRFS info (device sda6): using free space tree Feb 13 21:05:33.037912 kernel: BTRFS info (device sda6): enabling ssd optimizations Feb 13 21:05:33.037920 kernel: BTRFS info (device sda6): auto enabling async discard Feb 13 21:05:33.037927 kernel: BTRFS info (device sda6): last unmount of filesystem 60a376b4-1193-4e0b-af89-a0e6d698bf0f Feb 13 21:05:33.037891 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Feb 13 21:05:33.050075 systemd[1]: Finished ignition-setup.service - Ignition (setup). Feb 13 21:05:33.084794 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Feb 13 21:05:33.094956 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 21:05:33.127800 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 21:05:33.140067 systemd-networkd[921]: lo: Link UP Feb 13 21:05:33.148147 ignition[890]: Ignition 2.20.0 Feb 13 21:05:33.140069 systemd-networkd[921]: lo: Gained carrier Feb 13 21:05:33.148151 ignition[890]: Stage: fetch-offline Feb 13 21:05:33.142391 systemd-networkd[921]: Enumeration completed Feb 13 21:05:33.148169 ignition[890]: no configs at "/usr/lib/ignition/base.d" Feb 13 21:05:33.142471 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 21:05:33.148174 ignition[890]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Feb 13 21:05:33.143071 systemd-networkd[921]: eno1: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 21:05:33.148225 ignition[890]: parsed url from cmdline: "" Feb 13 21:05:33.150348 unknown[890]: fetched base config from "system" Feb 13 21:05:33.148227 ignition[890]: no config URL provided Feb 13 21:05:33.150352 unknown[890]: fetched user config from "system" Feb 13 21:05:33.148230 ignition[890]: reading system config file "/usr/lib/ignition/user.ign" Feb 13 21:05:33.159818 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 21:05:33.148252 ignition[890]: parsing config with SHA512: ecd6ef01da5249ff9c7217eaa7dc8aec3db6e78a10d9b3f90531a89c0510e468f617a438aa3ce586ce9da2bced074b70497daa552f0c7cfecebcf3f31a2a4e12 Feb 13 21:05:33.170768 systemd-networkd[921]: eno2: Configuring with /usr/lib/systemd/network/zz-default.network. 
Feb 13 21:05:33.150543 ignition[890]: fetch-offline: fetch-offline passed Feb 13 21:05:33.178962 systemd[1]: Reached target network.target - Network. Feb 13 21:05:33.150546 ignition[890]: POST message to Packet Timeline Feb 13 21:05:33.191732 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Feb 13 21:05:33.367825 kernel: mlx5_core 0000:02:00.0 enp2s0f0np0: Link up Feb 13 21:05:33.150549 ignition[890]: POST Status error: resource requires networking Feb 13 21:05:33.199029 systemd-networkd[921]: enp2s0f0np0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 21:05:33.150586 ignition[890]: Ignition finished successfully Feb 13 21:05:33.202835 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Feb 13 21:05:33.217611 ignition[933]: Ignition 2.20.0 Feb 13 21:05:33.366548 systemd-networkd[921]: enp2s0f1np1: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 21:05:33.217621 ignition[933]: Stage: kargs Feb 13 21:05:33.217930 ignition[933]: no configs at "/usr/lib/ignition/base.d" Feb 13 21:05:33.217951 ignition[933]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Feb 13 21:05:33.219289 ignition[933]: kargs: kargs passed Feb 13 21:05:33.219295 ignition[933]: POST message to Packet Timeline Feb 13 21:05:33.219319 ignition[933]: GET https://metadata.packet.net/metadata: attempt #1 Feb 13 21:05:33.220117 ignition[933]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:57981->[::1]:53: read: connection refused Feb 13 21:05:33.421252 ignition[933]: GET https://metadata.packet.net/metadata: attempt #2 Feb 13 21:05:33.422331 ignition[933]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:38994->[::1]:53: read: connection refused Feb 13 21:05:33.575743 kernel: mlx5_core 0000:02:00.1 enp2s0f1np1: Link up Feb 13 21:05:33.576376 systemd-networkd[921]: eno1: Link UP Feb 13 21:05:33.576579 systemd-networkd[921]: eno2: Link UP Feb 13 21:05:33.576736 systemd-networkd[921]: enp2s0f0np0: Link UP Feb 13 21:05:33.576908 systemd-networkd[921]: enp2s0f0np0: Gained carrier Feb 13 21:05:33.586941 systemd-networkd[921]: enp2s0f1np1: Link UP Feb 13 21:05:33.615820 systemd-networkd[921]: enp2s0f0np0: DHCPv4 address 147.28.180.173/31, gateway 147.28.180.172 acquired from 145.40.83.140 Feb 13 21:05:33.822719 ignition[933]: GET https://metadata.packet.net/metadata: attempt #3 Feb 13 21:05:33.823916 ignition[933]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:42473->[::1]:53: read: connection refused Feb 13 21:05:34.406308 systemd-networkd[921]: enp2s0f1np1: Gained carrier Feb 13 21:05:34.624213 ignition[933]: GET https://metadata.packet.net/metadata: attempt #4 Feb 13 21:05:34.625399 ignition[933]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:36440->[::1]:53: read: connection refused Feb 13 21:05:35.430128 systemd-networkd[921]: enp2s0f0np0: Gained IPv6LL Feb 13 21:05:36.226922 ignition[933]: GET https://metadata.packet.net/metadata: attempt #5 Feb 13 21:05:36.228125 ignition[933]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:54546->[::1]:53: read: connection refused Feb 13 21:05:36.390125 systemd-networkd[921]: 
enp2s0f1np1: Gained IPv6LL Feb 13 21:05:39.431601 ignition[933]: GET https://metadata.packet.net/metadata: attempt #6 Feb 13 21:05:40.010304 ignition[933]: GET result: OK Feb 13 21:05:40.385763 ignition[933]: Ignition finished successfully Feb 13 21:05:40.390922 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Feb 13 21:05:40.416907 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Feb 13 21:05:40.423011 ignition[948]: Ignition 2.20.0 Feb 13 21:05:40.423015 ignition[948]: Stage: disks Feb 13 21:05:40.423122 ignition[948]: no configs at "/usr/lib/ignition/base.d" Feb 13 21:05:40.423128 ignition[948]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Feb 13 21:05:40.423640 ignition[948]: disks: disks passed Feb 13 21:05:40.423643 ignition[948]: POST message to Packet Timeline Feb 13 21:05:40.423654 ignition[948]: GET https://metadata.packet.net/metadata: attempt #1 Feb 13 21:05:40.814988 ignition[948]: GET result: OK Feb 13 21:05:41.236676 ignition[948]: Ignition finished successfully Feb 13 21:05:41.238852 systemd[1]: Finished ignition-disks.service - Ignition (disks). Feb 13 21:05:41.255084 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Feb 13 21:05:41.272978 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Feb 13 21:05:41.294031 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 13 21:05:41.315011 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 21:05:41.334932 systemd[1]: Reached target basic.target - Basic System. Feb 13 21:05:41.354887 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Feb 13 21:05:41.395101 systemd-fsck[965]: ROOT: clean, 14/553520 files, 52654/553472 blocks Feb 13 21:05:41.405981 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Feb 13 21:05:41.426816 systemd[1]: Mounting sysroot.mount - /sysroot... Feb 13 21:05:41.523375 systemd[1]: Mounted sysroot.mount - /sysroot. Feb 13 21:05:41.540829 kernel: EXT4-fs (sda9): mounted filesystem 7d46b70d-4c30-46e6-9935-e1f7fb523560 r/w with ordered data mode. Quota mode: none. Feb 13 21:05:41.533037 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Feb 13 21:05:41.561774 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Feb 13 21:05:41.600209 kernel: BTRFS: device label OEM devid 1 transid 18 /dev/sda6 scanned by mount (976) Feb 13 21:05:41.600223 kernel: BTRFS info (device sda6): first mount of filesystem 60a376b4-1193-4e0b-af89-a0e6d698bf0f Feb 13 21:05:41.600231 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Feb 13 21:05:41.563653 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Feb 13 21:05:41.636856 kernel: BTRFS info (device sda6): using free space tree Feb 13 21:05:41.636875 kernel: BTRFS info (device sda6): enabling ssd optimizations Feb 13 21:05:41.636889 kernel: BTRFS info (device sda6): auto enabling async discard Feb 13 21:05:41.641896 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Feb 13 21:05:41.653178 systemd[1]: Starting flatcar-static-network.service - Flatcar Static Network Agent... Feb 13 21:05:41.674771 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). 
Feb 13 21:05:41.719810 coreos-metadata[993]: Feb 13 21:05:41.695 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Feb 13 21:05:41.674789 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Feb 13 21:05:41.760703 coreos-metadata[994]: Feb 13 21:05:41.695 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Feb 13 21:05:41.675708 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Feb 13 21:05:41.694841 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Feb 13 21:05:41.788762 initrd-setup-root[1008]: cut: /sysroot/etc/passwd: No such file or directory Feb 13 21:05:41.738759 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Feb 13 21:05:41.809757 initrd-setup-root[1015]: cut: /sysroot/etc/group: No such file or directory Feb 13 21:05:41.819738 initrd-setup-root[1022]: cut: /sysroot/etc/shadow: No such file or directory Feb 13 21:05:41.830814 initrd-setup-root[1029]: cut: /sysroot/etc/gshadow: No such file or directory Feb 13 21:05:41.830217 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Feb 13 21:05:41.839861 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Feb 13 21:05:41.887831 kernel: BTRFS info (device sda6): last unmount of filesystem 60a376b4-1193-4e0b-af89-a0e6d698bf0f Feb 13 21:05:41.885866 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Feb 13 21:05:41.896221 systemd[1]: sysroot-oem.mount: Deactivated successfully. Feb 13 21:05:41.907572 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Feb 13 21:05:41.930850 ignition[1096]: INFO : Ignition 2.20.0 Feb 13 21:05:41.930850 ignition[1096]: INFO : Stage: mount Feb 13 21:05:41.930850 ignition[1096]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 21:05:41.930850 ignition[1096]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Feb 13 21:05:41.930850 ignition[1096]: INFO : mount: mount passed Feb 13 21:05:41.930850 ignition[1096]: INFO : POST message to Packet Timeline Feb 13 21:05:41.930850 ignition[1096]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Feb 13 21:05:42.092922 coreos-metadata[993]: Feb 13 21:05:42.092 INFO Fetch successful Feb 13 21:05:42.169869 coreos-metadata[993]: Feb 13 21:05:42.169 INFO wrote hostname ci-4186.1.1-a-9675b630d5 to /sysroot/etc/hostname Feb 13 21:05:42.171097 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Feb 13 21:05:42.383122 coreos-metadata[994]: Feb 13 21:05:42.382 INFO Fetch successful Feb 13 21:05:42.462502 systemd[1]: flatcar-static-network.service: Deactivated successfully. Feb 13 21:05:42.462560 systemd[1]: Finished flatcar-static-network.service - Flatcar Static Network Agent. Feb 13 21:05:42.516046 ignition[1096]: INFO : GET result: OK Feb 13 21:05:42.860890 ignition[1096]: INFO : Ignition finished successfully Feb 13 21:05:42.863385 systemd[1]: Finished ignition-mount.service - Ignition (mount). Feb 13 21:05:42.896821 systemd[1]: Starting ignition-files.service - Ignition (files)... Feb 13 21:05:42.907300 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Feb 13 21:05:42.962359 kernel: BTRFS: device label OEM devid 1 transid 19 /dev/sda6 scanned by mount (1119) Feb 13 21:05:42.962384 kernel: BTRFS info (device sda6): first mount of filesystem 60a376b4-1193-4e0b-af89-a0e6d698bf0f Feb 13 21:05:42.970454 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Feb 13 21:05:42.976351 kernel: BTRFS info (device sda6): using free space tree Feb 13 21:05:42.991346 kernel: BTRFS info (device sda6): enabling ssd optimizations Feb 13 21:05:42.991362 kernel: BTRFS info (device sda6): auto enabling async discard Feb 13 21:05:42.993383 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Feb 13 21:05:43.018548 ignition[1136]: INFO : Ignition 2.20.0 Feb 13 21:05:43.018548 ignition[1136]: INFO : Stage: files Feb 13 21:05:43.032847 ignition[1136]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 21:05:43.032847 ignition[1136]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Feb 13 21:05:43.032847 ignition[1136]: DEBUG : files: compiled without relabeling support, skipping Feb 13 21:05:43.032847 ignition[1136]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Feb 13 21:05:43.032847 ignition[1136]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Feb 13 21:05:43.032847 ignition[1136]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Feb 13 21:05:43.032847 ignition[1136]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Feb 13 21:05:43.032847 ignition[1136]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Feb 13 21:05:43.032847 ignition[1136]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Feb 13 21:05:43.032847 ignition[1136]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Feb 13 21:05:43.023027 unknown[1136]: wrote ssh authorized keys file for user: core Feb 13 21:05:43.165714 ignition[1136]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Feb 13 21:05:43.165714 ignition[1136]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Feb 13 21:05:43.165714 ignition[1136]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Feb 13 21:05:43.165714 ignition[1136]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Feb 13 21:05:43.165714 ignition[1136]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Feb 13 21:05:43.165714 ignition[1136]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Feb 13 21:05:43.165714 ignition[1136]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 13 21:05:43.165714 ignition[1136]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 13 21:05:43.165714 ignition[1136]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 13 21:05:43.165714 ignition[1136]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file 
"/sysroot/home/core/nfs-pvc.yaml" Feb 13 21:05:43.165714 ignition[1136]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Feb 13 21:05:43.165714 ignition[1136]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Feb 13 21:05:43.165714 ignition[1136]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" Feb 13 21:05:43.165714 ignition[1136]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" Feb 13 21:05:43.165714 ignition[1136]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" Feb 13 21:05:43.413969 ignition[1136]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-x86-64.raw: attempt #1 Feb 13 21:05:43.658720 ignition[1136]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Feb 13 21:05:43.847137 ignition[1136]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" Feb 13 21:05:43.847137 ignition[1136]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Feb 13 21:05:43.877938 ignition[1136]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 13 21:05:43.877938 ignition[1136]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 13 21:05:43.877938 ignition[1136]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Feb 13 21:05:43.877938 ignition[1136]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Feb 13 21:05:43.877938 ignition[1136]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Feb 13 21:05:43.877938 ignition[1136]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Feb 13 21:05:43.877938 ignition[1136]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Feb 13 21:05:43.877938 ignition[1136]: INFO : files: files passed Feb 13 21:05:43.877938 ignition[1136]: INFO : POST message to Packet Timeline Feb 13 21:05:43.877938 ignition[1136]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Feb 13 21:05:44.439221 ignition[1136]: INFO : GET result: OK Feb 13 21:05:44.795240 ignition[1136]: INFO : Ignition finished successfully Feb 13 21:05:44.798522 systemd[1]: Finished ignition-files.service - Ignition (files). Feb 13 21:05:44.830883 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Feb 13 21:05:44.831349 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Feb 13 21:05:44.850200 systemd[1]: ignition-quench.service: Deactivated successfully. Feb 13 21:05:44.850275 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
Feb 13 21:05:44.912930 initrd-setup-root-after-ignition[1176]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 21:05:44.912930 initrd-setup-root-after-ignition[1176]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Feb 13 21:05:44.889976 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 21:05:44.950866 initrd-setup-root-after-ignition[1181]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 21:05:44.903915 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Feb 13 21:05:44.936970 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Feb 13 21:05:45.008041 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Feb 13 21:05:45.008096 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Feb 13 21:05:45.026099 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Feb 13 21:05:45.036958 systemd[1]: Reached target initrd.target - Initrd Default Target. Feb 13 21:05:45.064102 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Feb 13 21:05:45.073971 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Feb 13 21:05:45.150742 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 21:05:45.183132 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Feb 13 21:05:45.211225 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Feb 13 21:05:45.223197 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 21:05:45.244333 systemd[1]: Stopped target timers.target - Timer Units. Feb 13 21:05:45.262254 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 13 21:05:45.262674 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 21:05:45.292484 systemd[1]: Stopped target initrd.target - Initrd Default Target. Feb 13 21:05:45.314240 systemd[1]: Stopped target basic.target - Basic System. Feb 13 21:05:45.333369 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Feb 13 21:05:45.352231 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Feb 13 21:05:45.374359 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Feb 13 21:05:45.395243 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Feb 13 21:05:45.415231 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Feb 13 21:05:45.436279 systemd[1]: Stopped target sysinit.target - System Initialization. Feb 13 21:05:45.458404 systemd[1]: Stopped target local-fs.target - Local File Systems. Feb 13 21:05:45.478214 systemd[1]: Stopped target swap.target - Swaps. Feb 13 21:05:45.497264 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 13 21:05:45.497686 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Feb 13 21:05:45.522356 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Feb 13 21:05:45.542387 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 21:05:45.563123 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. 
Feb 13 21:05:45.563584 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 21:05:45.586129 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 13 21:05:45.586522 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Feb 13 21:05:45.618236 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 13 21:05:45.618704 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 21:05:45.638416 systemd[1]: Stopped target paths.target - Path Units. Feb 13 21:05:45.656112 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 13 21:05:45.659933 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 21:05:45.677249 systemd[1]: Stopped target slices.target - Slice Units. Feb 13 21:05:45.696384 systemd[1]: Stopped target sockets.target - Socket Units. Feb 13 21:05:45.715222 systemd[1]: iscsid.socket: Deactivated successfully. Feb 13 21:05:45.715522 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Feb 13 21:05:45.735261 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 13 21:05:45.735549 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 13 21:05:45.758478 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 13 21:05:45.758901 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 21:05:45.779326 systemd[1]: ignition-files.service: Deactivated successfully. Feb 13 21:05:45.891849 ignition[1201]: INFO : Ignition 2.20.0 Feb 13 21:05:45.891849 ignition[1201]: INFO : Stage: umount Feb 13 21:05:45.891849 ignition[1201]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 21:05:45.891849 ignition[1201]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Feb 13 21:05:45.891849 ignition[1201]: INFO : umount: umount passed Feb 13 21:05:45.891849 ignition[1201]: INFO : POST message to Packet Timeline Feb 13 21:05:45.891849 ignition[1201]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Feb 13 21:05:45.779721 systemd[1]: Stopped ignition-files.service - Ignition (files). Feb 13 21:05:45.798321 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Feb 13 21:05:45.798727 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Feb 13 21:05:45.829891 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Feb 13 21:05:45.848730 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 13 21:05:45.848863 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 21:05:45.882974 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Feb 13 21:05:45.899868 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 13 21:05:45.900243 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 21:05:45.918160 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 13 21:05:45.918476 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Feb 13 21:05:45.952312 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 13 21:05:45.957366 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 13 21:05:45.957610 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Feb 13 21:05:46.068699 systemd[1]: sysroot-boot.service: Deactivated successfully. 
Feb 13 21:05:46.068989 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Feb 13 21:05:46.382283 ignition[1201]: INFO : GET result: OK Feb 13 21:05:46.931198 ignition[1201]: INFO : Ignition finished successfully Feb 13 21:05:46.932253 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 13 21:05:46.932365 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Feb 13 21:05:46.950317 systemd[1]: Stopped target network.target - Network. Feb 13 21:05:46.965830 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 13 21:05:46.965950 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Feb 13 21:05:46.984023 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 13 21:05:46.984183 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Feb 13 21:05:47.002055 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 13 21:05:47.002210 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Feb 13 21:05:47.020033 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Feb 13 21:05:47.020201 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Feb 13 21:05:47.039025 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 13 21:05:47.039195 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Feb 13 21:05:47.058410 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Feb 13 21:05:47.067784 systemd-networkd[921]: enp2s0f1np1: DHCPv6 lease lost Feb 13 21:05:47.076866 systemd-networkd[921]: enp2s0f0np0: DHCPv6 lease lost Feb 13 21:05:47.077101 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Feb 13 21:05:47.096693 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 13 21:05:47.096972 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Feb 13 21:05:47.116988 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 13 21:05:47.117342 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Feb 13 21:05:47.137874 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 13 21:05:47.137994 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Feb 13 21:05:47.168816 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Feb 13 21:05:47.196780 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 13 21:05:47.196827 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 21:05:47.215963 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 13 21:05:47.216055 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Feb 13 21:05:47.234043 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 13 21:05:47.234202 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Feb 13 21:05:47.254033 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Feb 13 21:05:47.254203 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 21:05:47.273262 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 21:05:47.296014 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 13 21:05:47.296415 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 21:05:47.331892 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. 
Feb 13 21:05:47.332040 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Feb 13 21:05:47.336232 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 13 21:05:47.336336 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 21:05:47.365893 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 13 21:05:47.366020 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Feb 13 21:05:47.396174 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 13 21:05:47.396330 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Feb 13 21:05:47.426048 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 21:05:47.426219 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 21:05:47.471977 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Feb 13 21:05:47.496738 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Feb 13 21:05:47.496881 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 21:05:47.519929 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 21:05:47.520052 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 21:05:47.539793 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 13 21:05:47.759752 systemd-journald[268]: Received SIGTERM from PID 1 (systemd). Feb 13 21:05:47.540012 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Feb 13 21:05:47.618785 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 13 21:05:47.619042 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Feb 13 21:05:47.636050 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Feb 13 21:05:47.672046 systemd[1]: Starting initrd-switch-root.service - Switch Root... Feb 13 21:05:47.694674 systemd[1]: Switching root. Feb 13 21:05:47.812804 systemd-journald[268]: Journal stopped Feb 13 21:05:49.547418 kernel: SELinux: policy capability network_peer_controls=1 Feb 13 21:05:49.547433 kernel: SELinux: policy capability open_perms=1 Feb 13 21:05:49.547441 kernel: SELinux: policy capability extended_socket_class=1 Feb 13 21:05:49.547447 kernel: SELinux: policy capability always_check_network=0 Feb 13 21:05:49.547452 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 13 21:05:49.547458 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 13 21:05:49.547464 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 13 21:05:49.547469 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 13 21:05:49.547475 kernel: audit: type=1403 audit(1739480747.993:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Feb 13 21:05:49.547482 systemd[1]: Successfully loaded SELinux policy in 75.042ms. Feb 13 21:05:49.547490 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 6.971ms. Feb 13 21:05:49.547497 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Feb 13 21:05:49.547503 systemd[1]: Detected architecture x86-64. Feb 13 21:05:49.547509 systemd[1]: Detected first boot. 
Feb 13 21:05:49.547516 systemd[1]: Hostname set to . Feb 13 21:05:49.547524 systemd[1]: Initializing machine ID from random generator. Feb 13 21:05:49.547530 zram_generator::config[1252]: No configuration found. Feb 13 21:05:49.547537 systemd[1]: Populated /etc with preset unit settings. Feb 13 21:05:49.547543 systemd[1]: initrd-switch-root.service: Deactivated successfully. Feb 13 21:05:49.547550 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Feb 13 21:05:49.547556 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Feb 13 21:05:49.547564 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Feb 13 21:05:49.547570 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Feb 13 21:05:49.547577 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Feb 13 21:05:49.547583 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Feb 13 21:05:49.547590 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Feb 13 21:05:49.547596 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Feb 13 21:05:49.547603 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Feb 13 21:05:49.547610 systemd[1]: Created slice user.slice - User and Session Slice. Feb 13 21:05:49.547617 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 21:05:49.547626 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 21:05:49.547633 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Feb 13 21:05:49.547641 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Feb 13 21:05:49.547647 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Feb 13 21:05:49.547654 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Feb 13 21:05:49.547660 systemd[1]: Expecting device dev-ttyS1.device - /dev/ttyS1... Feb 13 21:05:49.547667 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 21:05:49.547674 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Feb 13 21:05:49.547681 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Feb 13 21:05:49.547688 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Feb 13 21:05:49.547696 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Feb 13 21:05:49.547703 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 21:05:49.547709 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 13 21:05:49.547716 systemd[1]: Reached target slices.target - Slice Units. Feb 13 21:05:49.547724 systemd[1]: Reached target swap.target - Swaps. Feb 13 21:05:49.547730 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Feb 13 21:05:49.547737 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Feb 13 21:05:49.547744 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Feb 13 21:05:49.547750 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. 
Feb 13 21:05:49.547757 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 21:05:49.547765 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Feb 13 21:05:49.547772 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Feb 13 21:05:49.547779 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Feb 13 21:05:49.547786 systemd[1]: Mounting media.mount - External Media Directory... Feb 13 21:05:49.547793 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 21:05:49.547803 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Feb 13 21:05:49.547821 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Feb 13 21:05:49.547838 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Feb 13 21:05:49.547855 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 13 21:05:49.547870 systemd[1]: Reached target machines.target - Containers. Feb 13 21:05:49.547878 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Feb 13 21:05:49.547885 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 21:05:49.547892 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Feb 13 21:05:49.547898 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Feb 13 21:05:49.547905 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 21:05:49.547913 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 21:05:49.547920 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 21:05:49.547927 kernel: ACPI: bus type drm_connector registered Feb 13 21:05:49.547933 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Feb 13 21:05:49.547940 kernel: fuse: init (API version 7.39) Feb 13 21:05:49.547946 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 21:05:49.547952 kernel: loop: module loaded Feb 13 21:05:49.547959 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 13 21:05:49.547966 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Feb 13 21:05:49.547975 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Feb 13 21:05:49.547981 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Feb 13 21:05:49.547988 systemd[1]: Stopped systemd-fsck-usr.service. Feb 13 21:05:49.547995 systemd[1]: Starting systemd-journald.service - Journal Service... Feb 13 21:05:49.548010 systemd-journald[1356]: Collecting audit messages is disabled. Feb 13 21:05:49.548026 systemd-journald[1356]: Journal started Feb 13 21:05:49.548040 systemd-journald[1356]: Runtime Journal (/run/log/journal/71b81fecee81441e95b77edb896193ad) is 8.0M, max 636.6M, 628.6M free. Feb 13 21:05:48.402475 systemd[1]: Queued start job for default target multi-user.target. Feb 13 21:05:48.417547 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Feb 13 21:05:48.417842 systemd[1]: systemd-journald.service: Deactivated successfully. 
Feb 13 21:05:49.560627 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Feb 13 21:05:49.581653 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Feb 13 21:05:49.592630 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Feb 13 21:05:49.622707 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 13 21:05:49.644822 systemd[1]: verity-setup.service: Deactivated successfully. Feb 13 21:05:49.644978 systemd[1]: Stopped verity-setup.service. Feb 13 21:05:49.677564 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 21:05:49.677730 systemd[1]: Started systemd-journald.service - Journal Service. Feb 13 21:05:49.688048 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Feb 13 21:05:49.697921 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Feb 13 21:05:49.707894 systemd[1]: Mounted media.mount - External Media Directory. Feb 13 21:05:49.718896 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Feb 13 21:05:49.728875 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Feb 13 21:05:49.738852 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Feb 13 21:05:49.748942 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Feb 13 21:05:49.759970 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 21:05:49.771019 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 13 21:05:49.771163 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Feb 13 21:05:49.782108 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 21:05:49.782284 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 21:05:49.795484 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 21:05:49.795868 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 21:05:49.807507 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 21:05:49.807927 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 21:05:49.819487 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 13 21:05:49.819898 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Feb 13 21:05:49.831489 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 21:05:49.831857 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 21:05:49.843506 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Feb 13 21:05:49.854465 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Feb 13 21:05:49.866484 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Feb 13 21:05:49.878479 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 21:05:49.898619 systemd[1]: Reached target network-pre.target - Preparation for Network. Feb 13 21:05:49.914843 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Feb 13 21:05:49.926505 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... 
Feb 13 21:05:49.935788 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 13 21:05:49.935815 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 13 21:05:49.948105 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Feb 13 21:05:49.967877 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Feb 13 21:05:49.979473 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Feb 13 21:05:49.988881 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 21:05:49.990283 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Feb 13 21:05:50.001352 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Feb 13 21:05:50.013814 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 21:05:50.014583 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Feb 13 21:05:50.019768 systemd-journald[1356]: Time spent on flushing to /var/log/journal/71b81fecee81441e95b77edb896193ad is 13.271ms for 1391 entries. Feb 13 21:05:50.019768 systemd-journald[1356]: System Journal (/var/log/journal/71b81fecee81441e95b77edb896193ad) is 8.0M, max 195.6M, 187.6M free. Feb 13 21:05:50.043316 systemd-journald[1356]: Received client request to flush runtime journal. Feb 13 21:05:50.032747 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 21:05:50.041167 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 21:05:50.051361 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Feb 13 21:05:50.071034 systemd[1]: Starting systemd-sysusers.service - Create System Users... Feb 13 21:05:50.081670 kernel: loop0: detected capacity change from 0 to 8 Feb 13 21:05:50.086304 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Feb 13 21:05:50.092633 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Feb 13 21:05:50.103829 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Feb 13 21:05:50.114785 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Feb 13 21:05:50.125804 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Feb 13 21:05:50.133655 kernel: loop1: detected capacity change from 0 to 218376 Feb 13 21:05:50.143019 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Feb 13 21:05:50.153794 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Feb 13 21:05:50.164792 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 21:05:50.174821 systemd[1]: Finished systemd-sysusers.service - Create System Users. Feb 13 21:05:50.189668 kernel: loop2: detected capacity change from 0 to 141000 Feb 13 21:05:50.192907 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Feb 13 21:05:50.213827 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... 
Feb 13 21:05:50.225555 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Feb 13 21:05:50.237511 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 13 21:05:50.238208 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Feb 13 21:05:50.248512 systemd-tmpfiles[1406]: ACLs are not supported, ignoring. Feb 13 21:05:50.248522 systemd-tmpfiles[1406]: ACLs are not supported, ignoring. Feb 13 21:05:50.250390 udevadm[1394]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Feb 13 21:05:50.252053 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 21:05:50.262629 kernel: loop3: detected capacity change from 0 to 138184 Feb 13 21:05:50.324668 kernel: loop4: detected capacity change from 0 to 8 Feb 13 21:05:50.331629 kernel: loop5: detected capacity change from 0 to 218376 Feb 13 21:05:50.352145 ldconfig[1382]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 13 21:05:50.352629 kernel: loop6: detected capacity change from 0 to 141000 Feb 13 21:05:50.353773 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Feb 13 21:05:50.371678 kernel: loop7: detected capacity change from 0 to 138184 Feb 13 21:05:50.384688 (sd-merge)[1411]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-packet'. Feb 13 21:05:50.384940 (sd-merge)[1411]: Merged extensions into '/usr'. Feb 13 21:05:50.388301 systemd[1]: Reloading requested from client PID 1388 ('systemd-sysext') (unit systemd-sysext.service)... Feb 13 21:05:50.388311 systemd[1]: Reloading... Feb 13 21:05:50.411635 zram_generator::config[1436]: No configuration found. Feb 13 21:05:50.483945 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 21:05:50.522458 systemd[1]: Reloading finished in 133 ms. Feb 13 21:05:50.548553 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Feb 13 21:05:50.559986 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Feb 13 21:05:50.586960 systemd[1]: Starting ensure-sysext.service... Feb 13 21:05:50.594709 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Feb 13 21:05:50.607970 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 21:05:50.622472 systemd[1]: Reloading requested from client PID 1493 ('systemctl') (unit ensure-sysext.service)... Feb 13 21:05:50.622495 systemd[1]: Reloading... Feb 13 21:05:50.622938 systemd-tmpfiles[1494]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 13 21:05:50.623295 systemd-tmpfiles[1494]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Feb 13 21:05:50.624393 systemd-tmpfiles[1494]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 13 21:05:50.624794 systemd-tmpfiles[1494]: ACLs are not supported, ignoring. Feb 13 21:05:50.624870 systemd-tmpfiles[1494]: ACLs are not supported, ignoring. Feb 13 21:05:50.628872 systemd-tmpfiles[1494]: Detected autofs mount point /boot during canonicalization of boot. 
Feb 13 21:05:50.628877 systemd-tmpfiles[1494]: Skipping /boot Feb 13 21:05:50.634826 systemd-tmpfiles[1494]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 21:05:50.634830 systemd-tmpfiles[1494]: Skipping /boot Feb 13 21:05:50.639572 systemd-udevd[1495]: Using default interface naming scheme 'v255'. Feb 13 21:05:50.656649 zram_generator::config[1522]: No configuration found. Feb 13 21:05:50.682634 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input2 Feb 13 21:05:50.682697 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1565) Feb 13 21:05:50.692635 kernel: ACPI: button: Sleep Button [SLPB] Feb 13 21:05:50.710536 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Feb 13 21:05:50.710636 kernel: IPMI message handler: version 39.2 Feb 13 21:05:50.710672 kernel: mousedev: PS/2 mouse device common for all mice Feb 13 21:05:50.716692 kernel: ACPI: button: Power Button [PWRF] Feb 13 21:05:50.731635 kernel: ipmi device interface Feb 13 21:05:50.731693 kernel: i801_smbus 0000:00:1f.4: SPD Write Disable is set Feb 13 21:05:50.762258 kernel: i801_smbus 0000:00:1f.4: SMBus using PCI interrupt Feb 13 21:05:50.762399 kernel: mei_me 0000:00:16.0: Device doesn't have valid ME Interface Feb 13 21:05:50.762618 kernel: i2c i2c-0: 2/4 memory slots populated (from DMI) Feb 13 21:05:50.762743 kernel: mei_me 0000:00:16.4: Device doesn't have valid ME Interface Feb 13 21:05:50.776267 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 21:05:50.824789 kernel: ipmi_si: IPMI System Interface driver Feb 13 21:05:50.824858 kernel: ipmi_si dmi-ipmi-si.0: ipmi_platform: probing via SMBIOS Feb 13 21:05:50.838894 kernel: ipmi_platform: ipmi_si: SMBIOS: io 0xca2 regsize 1 spacing 1 irq 0 Feb 13 21:05:50.838914 kernel: ipmi_si: Adding SMBIOS-specified kcs state machine Feb 13 21:05:50.838926 kernel: ipmi_si IPI0001:00: ipmi_platform: probing via ACPI Feb 13 21:05:50.875341 kernel: ipmi_si IPI0001:00: ipmi_platform: [io 0x0ca2] regsize 1 spacing 1 irq 0 Feb 13 21:05:50.875431 kernel: iTCO_vendor_support: vendor-support=0 Feb 13 21:05:50.875443 kernel: ipmi_si dmi-ipmi-si.0: Removing SMBIOS-specified kcs state machine in favor of ACPI Feb 13 21:05:50.875524 kernel: ipmi_si: Adding ACPI-specified kcs state machine Feb 13 21:05:50.875536 kernel: ipmi_si: Trying ACPI-specified kcs state machine at i/o address 0xca2, slave address 0x20, irq 0 Feb 13 21:05:50.840928 systemd[1]: Condition check resulted in dev-ttyS1.device - /dev/ttyS1 being skipped. Feb 13 21:05:50.841000 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Micron_5300_MTFDDAK480TDT OEM. Feb 13 21:05:50.892946 systemd[1]: Reloading finished in 270 ms. Feb 13 21:05:50.905631 kernel: iTCO_wdt iTCO_wdt: unable to reset NO_REBOOT flag, device disabled by hardware/BIOS Feb 13 21:05:50.919976 kernel: intel_rapl_common: Found RAPL domain package Feb 13 21:05:50.920016 kernel: intel_rapl_common: Found RAPL domain core Feb 13 21:05:50.924172 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. 
Feb 13 21:05:50.925515 kernel: intel_rapl_common: Found RAPL domain uncore Feb 13 21:05:50.926633 kernel: intel_rapl_common: Found RAPL domain dram Feb 13 21:05:50.943719 kernel: ipmi_si IPI0001:00: The BMC does not support clearing the recv irq bit, compensating, but the BMC needs to be fixed. Feb 13 21:05:50.959918 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 21:05:50.980023 systemd[1]: Finished ensure-sysext.service. Feb 13 21:05:50.982682 kernel: ipmi_si IPI0001:00: IPMI message handler: Found new BMC (man_id: 0x002a7c, prod_id: 0x1b11, dev_id: 0x20) Feb 13 21:05:51.001352 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 21:05:51.014665 kernel: ipmi_si IPI0001:00: IPMI kcs interface initialized Feb 13 21:05:51.015784 systemd[1]: Starting audit-rules.service - Load Audit Rules... Feb 13 21:05:51.022630 kernel: ipmi_ssif: IPMI SSIF Interface driver Feb 13 21:05:51.029594 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Feb 13 21:05:51.034180 augenrules[1697]: No rules Feb 13 21:05:51.040827 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 21:05:51.041485 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 21:05:51.051243 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 21:05:51.061255 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 21:05:51.072242 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 21:05:51.081784 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 21:05:51.082357 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Feb 13 21:05:51.094346 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Feb 13 21:05:51.106797 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 21:05:51.108000 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Feb 13 21:05:51.109085 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Feb 13 21:05:51.135333 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Feb 13 21:05:51.147340 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 21:05:51.156723 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 21:05:51.157258 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Feb 13 21:05:51.167883 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 21:05:51.167992 systemd[1]: Finished audit-rules.service - Load Audit Rules. Feb 13 21:05:51.168172 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Feb 13 21:05:51.168322 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 21:05:51.168407 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 21:05:51.168561 systemd[1]: modprobe@drm.service: Deactivated successfully. 
Feb 13 21:05:51.168648 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 21:05:51.168797 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 21:05:51.168880 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 21:05:51.169029 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 21:05:51.169110 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 21:05:51.169257 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Feb 13 21:05:51.169407 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Feb 13 21:05:51.174203 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Feb 13 21:05:51.175281 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Feb 13 21:05:51.175324 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 21:05:51.175369 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 21:05:51.176030 systemd[1]: Starting systemd-update-done.service - Update is Completed... Feb 13 21:05:51.176891 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Feb 13 21:05:51.176923 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 13 21:05:51.182459 systemd[1]: Finished systemd-update-done.service - Update is Completed. Feb 13 21:05:51.184117 lvm[1726]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 21:05:51.198082 systemd[1]: Started systemd-userdbd.service - User Database Manager. Feb 13 21:05:51.226576 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Feb 13 21:05:51.240700 systemd-resolved[1710]: Positive Trust Anchors: Feb 13 21:05:51.240706 systemd-resolved[1710]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 21:05:51.240730 systemd-resolved[1710]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 21:05:51.243080 systemd-resolved[1710]: Using system hostname 'ci-4186.1.1-a-9675b630d5'. Feb 13 21:05:51.246026 systemd-networkd[1709]: lo: Link UP Feb 13 21:05:51.246029 systemd-networkd[1709]: lo: Gained carrier Feb 13 21:05:51.248473 systemd-networkd[1709]: bond0: netdev ready Feb 13 21:05:51.249466 systemd-networkd[1709]: Enumeration completed Feb 13 21:05:51.252957 systemd-networkd[1709]: enp2s0f0np0: Configuring with /etc/systemd/network/10-04:3f:72:d9:a2:80.network. Feb 13 21:05:51.297815 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. 
Feb 13 21:05:51.308901 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 21:05:51.318689 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 21:05:51.327793 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 21:05:51.339748 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 13 21:05:51.349658 systemd[1]: Reached target network.target - Network. Feb 13 21:05:51.357655 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 21:05:51.368660 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 21:05:51.377712 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Feb 13 21:05:51.388675 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Feb 13 21:05:51.399662 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Feb 13 21:05:51.411653 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 13 21:05:51.411668 systemd[1]: Reached target paths.target - Path Units. Feb 13 21:05:51.419654 systemd[1]: Reached target time-set.target - System Time Set. Feb 13 21:05:51.428734 systemd[1]: Started logrotate.timer - Daily rotation of log files. Feb 13 21:05:51.438706 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Feb 13 21:05:51.449673 systemd[1]: Reached target timers.target - Timer Units. Feb 13 21:05:51.458299 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Feb 13 21:05:51.468352 systemd[1]: Starting docker.socket - Docker Socket for the API... Feb 13 21:05:51.478141 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Feb 13 21:05:51.500830 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Feb 13 21:05:51.502604 lvm[1747]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 21:05:51.512334 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Feb 13 21:05:51.522960 systemd[1]: Listening on docker.socket - Docker Socket for the API. Feb 13 21:05:51.532887 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 21:05:51.540869 systemd[1]: Reached target basic.target - Basic System. Feb 13 21:05:51.548827 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Feb 13 21:05:51.548850 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Feb 13 21:05:51.557778 systemd[1]: Starting containerd.service - containerd container runtime... Feb 13 21:05:51.569281 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Feb 13 21:05:51.582684 kernel: mlx5_core 0000:02:00.0 enp2s0f0np0: Link up Feb 13 21:05:51.582673 systemd[1]: Starting dbus.service - D-Bus System Message Bus... 
Feb 13 21:05:51.595920 coreos-metadata[1750]: Feb 13 21:05:51.595 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Feb 13 21:05:51.596832 coreos-metadata[1750]: Feb 13 21:05:51.596 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata) Feb 13 21:05:51.597679 kernel: bond0: (slave enp2s0f0np0): Enslaving as a backup interface with an up link Feb 13 21:05:51.598426 systemd-networkd[1709]: enp2s0f1np1: Configuring with /etc/systemd/network/10-04:3f:72:d9:a2:81.network. Feb 13 21:05:51.613082 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Feb 13 21:05:51.623388 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Feb 13 21:05:51.623937 dbus-daemon[1751]: [system] SELinux support is enabled Feb 13 21:05:51.625005 jq[1755]: false Feb 13 21:05:51.632745 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Feb 13 21:05:51.633425 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Feb 13 21:05:51.640719 extend-filesystems[1756]: Found loop4 Feb 13 21:05:51.640719 extend-filesystems[1756]: Found loop5 Feb 13 21:05:51.640719 extend-filesystems[1756]: Found loop6 Feb 13 21:05:51.696862 kernel: EXT4-fs (sda9): resizing filesystem from 553472 to 116605649 blocks Feb 13 21:05:51.696879 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1565) Feb 13 21:05:51.696889 extend-filesystems[1756]: Found loop7 Feb 13 21:05:51.696889 extend-filesystems[1756]: Found sda Feb 13 21:05:51.696889 extend-filesystems[1756]: Found sda1 Feb 13 21:05:51.696889 extend-filesystems[1756]: Found sda2 Feb 13 21:05:51.696889 extend-filesystems[1756]: Found sda3 Feb 13 21:05:51.696889 extend-filesystems[1756]: Found usr Feb 13 21:05:51.696889 extend-filesystems[1756]: Found sda4 Feb 13 21:05:51.696889 extend-filesystems[1756]: Found sda6 Feb 13 21:05:51.696889 extend-filesystems[1756]: Found sda7 Feb 13 21:05:51.696889 extend-filesystems[1756]: Found sda9 Feb 13 21:05:51.696889 extend-filesystems[1756]: Checking size of /dev/sda9 Feb 13 21:05:51.696889 extend-filesystems[1756]: Resized partition /dev/sda9 Feb 13 21:05:51.879820 kernel: mlx5_core 0000:02:00.1 enp2s0f1np1: Link up Feb 13 21:05:51.879933 kernel: bond0: (slave enp2s0f1np1): Enslaving as a backup interface with an up link Feb 13 21:05:51.879944 kernel: bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond Feb 13 21:05:51.879955 kernel: bond0: (slave enp2s0f0np0): link status definitely up, 10000 Mbps full duplex Feb 13 21:05:51.879964 kernel: bond0: active interface up! Feb 13 21:05:51.643412 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Feb 13 21:05:51.880018 extend-filesystems[1766]: resize2fs 1.47.1 (20-May-2024) Feb 13 21:05:51.667697 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Feb 13 21:05:51.685413 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Feb 13 21:05:51.712223 systemd[1]: Starting systemd-logind.service - User Login Management... Feb 13 21:05:51.731555 systemd[1]: Starting tcsd.service - TCG Core Services Daemon... Feb 13 21:05:51.896098 sshd_keygen[1779]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 13 21:05:51.743496 systemd-networkd[1709]: bond0: Configuring with /etc/systemd/network/05-bond0.network. 
Feb 13 21:05:51.896219 update_engine[1788]: I20250213 21:05:51.784413 1788 main.cc:92] Flatcar Update Engine starting Feb 13 21:05:51.896219 update_engine[1788]: I20250213 21:05:51.785152 1788 update_check_scheduler.cc:74] Next update check in 6m10s Feb 13 21:05:51.744799 systemd-networkd[1709]: enp2s0f0np0: Link UP Feb 13 21:05:51.896380 jq[1789]: true Feb 13 21:05:51.745077 systemd-networkd[1709]: enp2s0f0np0: Gained carrier Feb 13 21:05:51.762081 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 13 21:05:51.762764 systemd-networkd[1709]: enp2s0f1np1: Reconfiguring with /etc/systemd/network/10-04:3f:72:d9:a2:80.network. Feb 13 21:05:51.762946 systemd-networkd[1709]: enp2s0f1np1: Link UP Feb 13 21:05:51.763115 systemd-networkd[1709]: enp2s0f1np1: Gained carrier Feb 13 21:05:51.775831 systemd[1]: Starting update-engine.service - Update Engine... Feb 13 21:05:51.776774 systemd-logind[1776]: Watching system buttons on /dev/input/event3 (Power Button) Feb 13 21:05:51.776786 systemd-logind[1776]: Watching system buttons on /dev/input/event2 (Sleep Button) Feb 13 21:05:51.776795 systemd-logind[1776]: Watching system buttons on /dev/input/event0 (HID 0557:2419) Feb 13 21:05:51.777099 systemd-logind[1776]: New seat seat0. Feb 13 21:05:51.781796 systemd-networkd[1709]: bond0: Link UP Feb 13 21:05:51.782001 systemd-networkd[1709]: bond0: Gained carrier Feb 13 21:05:51.782152 systemd-timesyncd[1711]: Network configuration changed, trying to establish connection. Feb 13 21:05:51.782464 systemd-timesyncd[1711]: Network configuration changed, trying to establish connection. Feb 13 21:05:51.782621 systemd-timesyncd[1711]: Network configuration changed, trying to establish connection. Feb 13 21:05:51.782731 systemd-timesyncd[1711]: Network configuration changed, trying to establish connection. Feb 13 21:05:51.784494 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Feb 13 21:05:51.807955 systemd[1]: Started dbus.service - D-Bus System Message Bus. Feb 13 21:05:51.816346 systemd[1]: Started systemd-logind.service - User Login Management. Feb 13 21:05:51.843894 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Feb 13 21:05:51.887830 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 13 21:05:51.887925 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Feb 13 21:05:51.888070 systemd[1]: motdgen.service: Deactivated successfully. Feb 13 21:05:51.888158 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Feb 13 21:05:51.906322 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 13 21:05:51.906411 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Feb 13 21:05:51.917844 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Feb 13 21:05:51.941569 (ntainerd)[1793]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Feb 13 21:05:51.943149 jq[1792]: true Feb 13 21:05:51.945759 dbus-daemon[1751]: [system] Successfully activated service 'org.freedesktop.systemd1' Feb 13 21:05:51.950884 systemd[1]: tcsd.service: Skipped due to 'exec-condition'. Feb 13 21:05:51.950986 systemd[1]: Condition check resulted in tcsd.service - TCG Core Services Daemon being skipped. 
Feb 13 21:05:51.951205 tar[1791]: linux-amd64/LICENSE Feb 13 21:05:51.951340 tar[1791]: linux-amd64/helm Feb 13 21:05:51.954176 systemd[1]: Started update-engine.service - Update Engine. Feb 13 21:05:51.975666 kernel: bond0: (slave enp2s0f1np1): link status definitely up, 10000 Mbps full duplex Feb 13 21:05:51.982902 systemd[1]: Starting issuegen.service - Generate /run/issue... Feb 13 21:05:51.990711 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 13 21:05:51.990829 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Feb 13 21:05:51.995197 bash[1823]: Updated "/home/core/.ssh/authorized_keys" Feb 13 21:05:52.001727 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 13 21:05:52.001809 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Feb 13 21:05:52.027797 systemd[1]: Started locksmithd.service - Cluster reboot manager. Feb 13 21:05:52.039712 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Feb 13 21:05:52.049944 locksmithd[1830]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 13 21:05:52.050946 systemd[1]: issuegen.service: Deactivated successfully. Feb 13 21:05:52.051037 systemd[1]: Finished issuegen.service - Generate /run/issue. Feb 13 21:05:52.070894 systemd[1]: Starting sshkeys.service... Feb 13 21:05:52.078455 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Feb 13 21:05:52.090930 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Feb 13 21:05:52.102564 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Feb 13 21:05:52.112983 containerd[1793]: time="2025-02-13T21:05:52.112941411Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Feb 13 21:05:52.114034 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Feb 13 21:05:52.125378 coreos-metadata[1844]: Feb 13 21:05:52.125 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Feb 13 21:05:52.125815 containerd[1793]: time="2025-02-13T21:05:52.125789436Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 13 21:05:52.127018 containerd[1793]: time="2025-02-13T21:05:52.126933587Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 13 21:05:52.127056 containerd[1793]: time="2025-02-13T21:05:52.127019589Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 13 21:05:52.127056 containerd[1793]: time="2025-02-13T21:05:52.127038021Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 13 21:05:52.127157 containerd[1793]: time="2025-02-13T21:05:52.127147049Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." 
type=io.containerd.warning.v1 Feb 13 21:05:52.127187 containerd[1793]: time="2025-02-13T21:05:52.127160944Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Feb 13 21:05:52.127224 containerd[1793]: time="2025-02-13T21:05:52.127213053Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 21:05:52.127258 containerd[1793]: time="2025-02-13T21:05:52.127224608Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 13 21:05:52.127348 containerd[1793]: time="2025-02-13T21:05:52.127337479Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 21:05:52.127378 containerd[1793]: time="2025-02-13T21:05:52.127348118Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 13 21:05:52.127378 containerd[1793]: time="2025-02-13T21:05:52.127361077Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 21:05:52.127378 containerd[1793]: time="2025-02-13T21:05:52.127371266Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 13 21:05:52.127478 containerd[1793]: time="2025-02-13T21:05:52.127438163Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 13 21:05:52.127589 containerd[1793]: time="2025-02-13T21:05:52.127579408Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 13 21:05:52.127620 systemd[1]: Started getty@tty1.service - Getty on tty1. Feb 13 21:05:52.127679 containerd[1793]: time="2025-02-13T21:05:52.127668952Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 21:05:52.127715 containerd[1793]: time="2025-02-13T21:05:52.127681294Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 13 21:05:52.127756 containerd[1793]: time="2025-02-13T21:05:52.127747075Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 13 21:05:52.127794 containerd[1793]: time="2025-02-13T21:05:52.127785330Z" level=info msg="metadata content store policy set" policy=shared Feb 13 21:05:52.137469 systemd[1]: Started serial-getty@ttyS1.service - Serial Getty on ttyS1. Feb 13 21:05:52.139310 containerd[1793]: time="2025-02-13T21:05:52.139296623Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 13 21:05:52.139342 containerd[1793]: time="2025-02-13T21:05:52.139331216Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 13 21:05:52.139377 containerd[1793]: time="2025-02-13T21:05:52.139346551Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." 
type=io.containerd.lease.v1 Feb 13 21:05:52.139377 containerd[1793]: time="2025-02-13T21:05:52.139361697Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Feb 13 21:05:52.139422 containerd[1793]: time="2025-02-13T21:05:52.139375487Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 13 21:05:52.139479 containerd[1793]: time="2025-02-13T21:05:52.139468699Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 13 21:05:52.139632 containerd[1793]: time="2025-02-13T21:05:52.139619108Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 13 21:05:52.139697 containerd[1793]: time="2025-02-13T21:05:52.139687424Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Feb 13 21:05:52.139734 containerd[1793]: time="2025-02-13T21:05:52.139699296Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Feb 13 21:05:52.139734 containerd[1793]: time="2025-02-13T21:05:52.139713241Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Feb 13 21:05:52.139734 containerd[1793]: time="2025-02-13T21:05:52.139726667Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 13 21:05:52.139804 containerd[1793]: time="2025-02-13T21:05:52.139740133Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 13 21:05:52.139804 containerd[1793]: time="2025-02-13T21:05:52.139752443Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 13 21:05:52.139804 containerd[1793]: time="2025-02-13T21:05:52.139765365Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 13 21:05:52.139804 containerd[1793]: time="2025-02-13T21:05:52.139781659Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 13 21:05:52.139804 containerd[1793]: time="2025-02-13T21:05:52.139794149Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 13 21:05:52.139918 containerd[1793]: time="2025-02-13T21:05:52.139805888Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 13 21:05:52.139918 containerd[1793]: time="2025-02-13T21:05:52.139817458Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 13 21:05:52.139918 containerd[1793]: time="2025-02-13T21:05:52.139836560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 13 21:05:52.139918 containerd[1793]: time="2025-02-13T21:05:52.139850773Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 13 21:05:52.139918 containerd[1793]: time="2025-02-13T21:05:52.139862874Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 13 21:05:52.139918 containerd[1793]: time="2025-02-13T21:05:52.139875172Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." 
type=io.containerd.grpc.v1 Feb 13 21:05:52.139918 containerd[1793]: time="2025-02-13T21:05:52.139887457Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 13 21:05:52.139918 containerd[1793]: time="2025-02-13T21:05:52.139899082Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 13 21:05:52.139918 containerd[1793]: time="2025-02-13T21:05:52.139910385Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 13 21:05:52.140130 containerd[1793]: time="2025-02-13T21:05:52.139924389Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 13 21:05:52.140130 containerd[1793]: time="2025-02-13T21:05:52.139940026Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Feb 13 21:05:52.140130 containerd[1793]: time="2025-02-13T21:05:52.139953844Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Feb 13 21:05:52.140130 containerd[1793]: time="2025-02-13T21:05:52.139965930Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 13 21:05:52.140130 containerd[1793]: time="2025-02-13T21:05:52.139977103Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Feb 13 21:05:52.140130 containerd[1793]: time="2025-02-13T21:05:52.139988398Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 13 21:05:52.140130 containerd[1793]: time="2025-02-13T21:05:52.140001956Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Feb 13 21:05:52.140130 containerd[1793]: time="2025-02-13T21:05:52.140019978Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Feb 13 21:05:52.140130 containerd[1793]: time="2025-02-13T21:05:52.140035851Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 13 21:05:52.140130 containerd[1793]: time="2025-02-13T21:05:52.140046362Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 13 21:05:52.140526 containerd[1793]: time="2025-02-13T21:05:52.140515174Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 13 21:05:52.140556 containerd[1793]: time="2025-02-13T21:05:52.140532061Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Feb 13 21:05:52.140556 containerd[1793]: time="2025-02-13T21:05:52.140542966Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 13 21:05:52.140611 containerd[1793]: time="2025-02-13T21:05:52.140556359Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Feb 13 21:05:52.140611 containerd[1793]: time="2025-02-13T21:05:52.140566682Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 13 21:05:52.140611 containerd[1793]: time="2025-02-13T21:05:52.140579763Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." 
type=io.containerd.nri.v1 Feb 13 21:05:52.140611 containerd[1793]: time="2025-02-13T21:05:52.140589835Z" level=info msg="NRI interface is disabled by configuration." Feb 13 21:05:52.140611 containerd[1793]: time="2025-02-13T21:05:52.140602383Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Feb 13 21:05:52.140849 containerd[1793]: time="2025-02-13T21:05:52.140817236Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 13 21:05:52.140964 containerd[1793]: time="2025-02-13T21:05:52.140853307Z" level=info msg="Connect containerd service" Feb 13 21:05:52.140964 containerd[1793]: time="2025-02-13T21:05:52.140877341Z" level=info msg="using legacy CRI server" Feb 13 21:05:52.140964 containerd[1793]: time="2025-02-13T21:05:52.140883915Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Feb 13 21:05:52.141041 containerd[1793]: time="2025-02-13T21:05:52.140983145Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 13 21:05:52.141317 
containerd[1793]: time="2025-02-13T21:05:52.141304995Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 21:05:52.141424 containerd[1793]: time="2025-02-13T21:05:52.141404111Z" level=info msg="Start subscribing containerd event" Feb 13 21:05:52.141455 containerd[1793]: time="2025-02-13T21:05:52.141431445Z" level=info msg="Start recovering state" Feb 13 21:05:52.141502 containerd[1793]: time="2025-02-13T21:05:52.141494121Z" level=info msg="Start event monitor" Feb 13 21:05:52.141532 containerd[1793]: time="2025-02-13T21:05:52.141497703Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 13 21:05:52.141532 containerd[1793]: time="2025-02-13T21:05:52.141501817Z" level=info msg="Start snapshots syncer" Feb 13 21:05:52.141532 containerd[1793]: time="2025-02-13T21:05:52.141522171Z" level=info msg="Start cni network conf syncer for default" Feb 13 21:05:52.141532 containerd[1793]: time="2025-02-13T21:05:52.141528217Z" level=info msg="Start streaming server" Feb 13 21:05:52.141615 containerd[1793]: time="2025-02-13T21:05:52.141537877Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 13 21:05:52.141615 containerd[1793]: time="2025-02-13T21:05:52.141576681Z" level=info msg="containerd successfully booted in 0.029392s" Feb 13 21:05:52.146883 systemd[1]: Reached target getty.target - Login Prompts. Feb 13 21:05:52.155041 systemd[1]: Started containerd.service - containerd container runtime. Feb 13 21:05:52.162628 kernel: EXT4-fs (sda9): resized filesystem to 116605649 Feb 13 21:05:52.183531 extend-filesystems[1766]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Feb 13 21:05:52.183531 extend-filesystems[1766]: old_desc_blocks = 1, new_desc_blocks = 56 Feb 13 21:05:52.183531 extend-filesystems[1766]: The filesystem on /dev/sda9 is now 116605649 (4k) blocks long. Feb 13 21:05:52.224661 extend-filesystems[1756]: Resized filesystem in /dev/sda9 Feb 13 21:05:52.224661 extend-filesystems[1756]: Found sdb Feb 13 21:05:52.184081 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 13 21:05:52.184177 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Feb 13 21:05:52.255531 tar[1791]: linux-amd64/README.md Feb 13 21:05:52.270793 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Feb 13 21:05:52.596980 coreos-metadata[1750]: Feb 13 21:05:52.596 INFO Fetching https://metadata.packet.net/metadata: Attempt #2 Feb 13 21:05:53.605756 systemd-networkd[1709]: bond0: Gained IPv6LL Feb 13 21:05:53.606085 systemd-timesyncd[1711]: Network configuration changed, trying to establish connection. Feb 13 21:05:53.606217 systemd-timesyncd[1711]: Network configuration changed, trying to establish connection. Feb 13 21:05:53.606447 systemd-timesyncd[1711]: Network configuration changed, trying to establish connection. Feb 13 21:05:53.608121 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Feb 13 21:05:53.619146 systemd[1]: Reached target network-online.target - Network is Online. Feb 13 21:05:53.646760 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 21:05:53.657396 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Feb 13 21:05:53.676063 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. 
Feb 13 21:05:54.358213 kernel: mlx5_core 0000:02:00.0: lag map: port 1:1 port 2:2 Feb 13 21:05:54.358584 kernel: mlx5_core 0000:02:00.0: shared_fdb:0 mode:queue_affinity Feb 13 21:05:54.437591 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 21:05:54.443629 kernel: mlx5_core 0000:02:00.0: lag map: port 1:2 port 2:2 Feb 13 21:05:54.465836 (kubelet)[1888]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 21:05:54.470630 kernel: mlx5_core 0000:02:00.0: lag map: port 1:1 port 2:2 Feb 13 21:05:54.985503 kubelet[1888]: E0213 21:05:54.985416 1888 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 21:05:54.986395 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 21:05:54.986469 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 21:05:55.599916 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Feb 13 21:05:55.616974 systemd[1]: Started sshd@0-147.28.180.173:22-139.178.89.65:59386.service - OpenSSH per-connection server daemon (139.178.89.65:59386). Feb 13 21:05:55.659453 sshd[1905]: Accepted publickey for core from 139.178.89.65 port 59386 ssh2: RSA SHA256:oqFzLKltKjIjWJ2xRNYaxZupDvhRtAv9cyn8jloyOa0 Feb 13 21:05:55.660704 sshd-session[1905]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 21:05:55.666812 systemd-logind[1776]: New session 1 of user core. Feb 13 21:05:55.667983 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Feb 13 21:05:55.694336 coreos-metadata[1844]: Feb 13 21:05:55.694 INFO Fetch successful Feb 13 21:05:55.694870 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Feb 13 21:05:55.708495 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Feb 13 21:05:55.735893 systemd[1]: Starting user@500.service - User Manager for UID 500... Feb 13 21:05:55.744568 unknown[1844]: wrote ssh authorized keys file for user: core Feb 13 21:05:55.761095 (systemd)[1909]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 13 21:05:55.775242 update-ssh-keys[1910]: Updated "/home/core/.ssh/authorized_keys" Feb 13 21:05:55.775572 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Feb 13 21:05:55.787561 systemd[1]: Finished sshkeys.service. Feb 13 21:05:55.839224 systemd[1909]: Queued start job for default target default.target. Feb 13 21:05:55.858484 systemd[1909]: Created slice app.slice - User Application Slice. Feb 13 21:05:55.858503 systemd[1909]: Reached target paths.target - Paths. Feb 13 21:05:55.858513 systemd[1909]: Reached target timers.target - Timers. Feb 13 21:05:55.859298 systemd[1909]: Starting dbus.socket - D-Bus User Message Bus Socket... Feb 13 21:05:55.865510 systemd[1909]: Listening on dbus.socket - D-Bus User Message Bus Socket. Feb 13 21:05:55.865541 systemd[1909]: Reached target sockets.target - Sockets. Feb 13 21:05:55.865551 systemd[1909]: Reached target basic.target - Basic System. Feb 13 21:05:55.865573 systemd[1909]: Reached target default.target - Main User Target. 
Feb 13 21:05:55.865589 systemd[1909]: Startup finished in 101ms. Feb 13 21:05:55.865643 systemd[1]: Started user@500.service - User Manager for UID 500. Feb 13 21:05:55.876709 systemd[1]: Started session-1.scope - Session 1 of User core. Feb 13 21:05:55.889581 coreos-metadata[1750]: Feb 13 21:05:55.889 INFO Fetch successful Feb 13 21:05:55.950626 systemd[1]: Started sshd@1-147.28.180.173:22-139.178.89.65:46012.service - OpenSSH per-connection server daemon (139.178.89.65:46012). Feb 13 21:05:55.962770 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Feb 13 21:05:55.973955 systemd[1]: Starting packet-phone-home.service - Report Success to Packet... Feb 13 21:05:56.024540 sshd[1928]: Accepted publickey for core from 139.178.89.65 port 46012 ssh2: RSA SHA256:oqFzLKltKjIjWJ2xRNYaxZupDvhRtAv9cyn8jloyOa0 Feb 13 21:05:56.025163 sshd-session[1928]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 21:05:56.027587 systemd-logind[1776]: New session 2 of user core. Feb 13 21:05:56.039751 systemd[1]: Started session-2.scope - Session 2 of User core. Feb 13 21:05:56.097764 sshd[1932]: Connection closed by 139.178.89.65 port 46012 Feb 13 21:05:56.097956 sshd-session[1928]: pam_unix(sshd:session): session closed for user core Feb 13 21:05:56.112659 systemd[1]: sshd@1-147.28.180.173:22-139.178.89.65:46012.service: Deactivated successfully. Feb 13 21:05:56.113762 systemd[1]: session-2.scope: Deactivated successfully. Feb 13 21:05:56.114699 systemd-logind[1776]: Session 2 logged out. Waiting for processes to exit. Feb 13 21:05:56.115597 systemd[1]: Started sshd@2-147.28.180.173:22-139.178.89.65:46018.service - OpenSSH per-connection server daemon (139.178.89.65:46018). Feb 13 21:05:56.127880 systemd-logind[1776]: Removed session 2. Feb 13 21:05:56.162220 sshd[1937]: Accepted publickey for core from 139.178.89.65 port 46018 ssh2: RSA SHA256:oqFzLKltKjIjWJ2xRNYaxZupDvhRtAv9cyn8jloyOa0 Feb 13 21:05:56.162817 sshd-session[1937]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 21:05:56.165397 systemd-logind[1776]: New session 3 of user core. Feb 13 21:05:56.182899 systemd[1]: Started session-3.scope - Session 3 of User core. Feb 13 21:05:56.255974 sshd[1939]: Connection closed by 139.178.89.65 port 46018 Feb 13 21:05:56.256747 sshd-session[1937]: pam_unix(sshd:session): session closed for user core Feb 13 21:05:56.263142 systemd[1]: sshd@2-147.28.180.173:22-139.178.89.65:46018.service: Deactivated successfully. Feb 13 21:05:56.267127 systemd[1]: session-3.scope: Deactivated successfully. Feb 13 21:05:56.270482 systemd-logind[1776]: Session 3 logged out. Waiting for processes to exit. Feb 13 21:05:56.273254 systemd-logind[1776]: Removed session 3. Feb 13 21:05:56.320973 systemd[1]: Finished packet-phone-home.service - Report Success to Packet. Feb 13 21:05:56.334499 systemd[1]: Reached target multi-user.target - Multi-User System. Feb 13 21:05:56.344458 systemd[1]: Startup finished in 2.941s (kernel) + 19.161s (initrd) + 8.425s (userspace) = 30.527s. Feb 13 21:05:56.386547 agetty[1859]: failed to open credentials directory Feb 13 21:05:56.386686 agetty[1860]: failed to open credentials directory Feb 13 21:05:56.396368 login[1860]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Feb 13 21:05:56.399611 login[1859]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Feb 13 21:05:56.399637 systemd-logind[1776]: New session 4 of user core. 
Feb 13 21:05:56.416846 systemd[1]: Started session-4.scope - Session 4 of User core. Feb 13 21:05:56.418630 systemd-logind[1776]: New session 5 of user core. Feb 13 21:05:56.419634 systemd[1]: Started session-5.scope - Session 5 of User core. Feb 13 21:05:59.410119 systemd[1]: Started sshd@3-147.28.180.173:22-109.206.236.167:59988.service - OpenSSH per-connection server daemon (109.206.236.167:59988). Feb 13 21:06:00.111022 systemd[1]: Started sshd@4-147.28.180.173:22-109.206.236.167:40930.service - OpenSSH per-connection server daemon (109.206.236.167:40930). Feb 13 21:06:00.448950 sshd[1971]: Invalid user g from 109.206.236.167 port 59988 Feb 13 21:06:00.644991 sshd[1971]: Connection closed by invalid user g 109.206.236.167 port 59988 [preauth] Feb 13 21:06:00.647048 systemd[1]: sshd@3-147.28.180.173:22-109.206.236.167:59988.service: Deactivated successfully. Feb 13 21:06:00.940097 sshd[1974]: Invalid user testuser from 109.206.236.167 port 40930 Feb 13 21:06:01.198914 sshd[1974]: Connection closed by invalid user testuser 109.206.236.167 port 40930 [preauth] Feb 13 21:06:01.202103 systemd[1]: sshd@4-147.28.180.173:22-109.206.236.167:40930.service: Deactivated successfully. Feb 13 21:06:05.238748 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 13 21:06:05.250878 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 21:06:05.500720 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 21:06:05.504986 (kubelet)[1988]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 21:06:05.525646 kubelet[1988]: E0213 21:06:05.525549 1988 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 21:06:05.527491 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 21:06:05.527567 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 21:06:06.282920 systemd[1]: Started sshd@5-147.28.180.173:22-139.178.89.65:51840.service - OpenSSH per-connection server daemon (139.178.89.65:51840). Feb 13 21:06:06.317826 sshd[2005]: Accepted publickey for core from 139.178.89.65 port 51840 ssh2: RSA SHA256:oqFzLKltKjIjWJ2xRNYaxZupDvhRtAv9cyn8jloyOa0 Feb 13 21:06:06.318448 sshd-session[2005]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 21:06:06.321161 systemd-logind[1776]: New session 6 of user core. Feb 13 21:06:06.333878 systemd[1]: Started session-6.scope - Session 6 of User core. Feb 13 21:06:06.384321 sshd[2007]: Connection closed by 139.178.89.65 port 51840 Feb 13 21:06:06.384477 sshd-session[2005]: pam_unix(sshd:session): session closed for user core Feb 13 21:06:06.394347 systemd[1]: sshd@5-147.28.180.173:22-139.178.89.65:51840.service: Deactivated successfully. Feb 13 21:06:06.395162 systemd[1]: session-6.scope: Deactivated successfully. Feb 13 21:06:06.395933 systemd-logind[1776]: Session 6 logged out. Waiting for processes to exit. Feb 13 21:06:06.396760 systemd[1]: Started sshd@6-147.28.180.173:22-139.178.89.65:51844.service - OpenSSH per-connection server daemon (139.178.89.65:51844). Feb 13 21:06:06.397267 systemd-logind[1776]: Removed session 6. 
Feb 13 21:06:06.440229 sshd[2012]: Accepted publickey for core from 139.178.89.65 port 51844 ssh2: RSA SHA256:oqFzLKltKjIjWJ2xRNYaxZupDvhRtAv9cyn8jloyOa0 Feb 13 21:06:06.441055 sshd-session[2012]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 21:06:06.444403 systemd-logind[1776]: New session 7 of user core. Feb 13 21:06:06.458047 systemd[1]: Started session-7.scope - Session 7 of User core. Feb 13 21:06:06.511650 sshd[2014]: Connection closed by 139.178.89.65 port 51844 Feb 13 21:06:06.511802 sshd-session[2012]: pam_unix(sshd:session): session closed for user core Feb 13 21:06:06.525426 systemd[1]: sshd@6-147.28.180.173:22-139.178.89.65:51844.service: Deactivated successfully. Feb 13 21:06:06.526264 systemd[1]: session-7.scope: Deactivated successfully. Feb 13 21:06:06.527058 systemd-logind[1776]: Session 7 logged out. Waiting for processes to exit. Feb 13 21:06:06.527930 systemd[1]: Started sshd@7-147.28.180.173:22-139.178.89.65:51852.service - OpenSSH per-connection server daemon (139.178.89.65:51852). Feb 13 21:06:06.528537 systemd-logind[1776]: Removed session 7. Feb 13 21:06:06.564082 sshd[2019]: Accepted publickey for core from 139.178.89.65 port 51852 ssh2: RSA SHA256:oqFzLKltKjIjWJ2xRNYaxZupDvhRtAv9cyn8jloyOa0 Feb 13 21:06:06.564746 sshd-session[2019]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 21:06:06.567256 systemd-logind[1776]: New session 8 of user core. Feb 13 21:06:06.579881 systemd[1]: Started session-8.scope - Session 8 of User core. Feb 13 21:06:06.637670 sshd[2021]: Connection closed by 139.178.89.65 port 51852 Feb 13 21:06:06.638354 sshd-session[2019]: pam_unix(sshd:session): session closed for user core Feb 13 21:06:06.668151 systemd[1]: sshd@7-147.28.180.173:22-139.178.89.65:51852.service: Deactivated successfully. Feb 13 21:06:06.671644 systemd[1]: session-8.scope: Deactivated successfully. Feb 13 21:06:06.674827 systemd-logind[1776]: Session 8 logged out. Waiting for processes to exit. Feb 13 21:06:06.695442 systemd[1]: Started sshd@8-147.28.180.173:22-139.178.89.65:51864.service - OpenSSH per-connection server daemon (139.178.89.65:51864). Feb 13 21:06:06.697821 systemd-logind[1776]: Removed session 8. Feb 13 21:06:06.762377 sshd[2026]: Accepted publickey for core from 139.178.89.65 port 51864 ssh2: RSA SHA256:oqFzLKltKjIjWJ2xRNYaxZupDvhRtAv9cyn8jloyOa0 Feb 13 21:06:06.763056 sshd-session[2026]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 21:06:06.765876 systemd-logind[1776]: New session 9 of user core. Feb 13 21:06:06.776882 systemd[1]: Started session-9.scope - Session 9 of User core. Feb 13 21:06:06.832268 sudo[2029]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Feb 13 21:06:06.832412 sudo[2029]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 21:06:06.852381 sudo[2029]: pam_unix(sudo:session): session closed for user root Feb 13 21:06:06.856549 sshd[2028]: Connection closed by 139.178.89.65 port 51864 Feb 13 21:06:06.856825 sshd-session[2026]: pam_unix(sshd:session): session closed for user core Feb 13 21:06:06.867859 systemd[1]: sshd@8-147.28.180.173:22-139.178.89.65:51864.service: Deactivated successfully. Feb 13 21:06:06.868917 systemd[1]: session-9.scope: Deactivated successfully. Feb 13 21:06:06.869947 systemd-logind[1776]: Session 9 logged out. Waiting for processes to exit. 
Feb 13 21:06:06.870987 systemd[1]: Started sshd@9-147.28.180.173:22-139.178.89.65:51870.service - OpenSSH per-connection server daemon (139.178.89.65:51870). Feb 13 21:06:06.871844 systemd-logind[1776]: Removed session 9. Feb 13 21:06:06.907274 sshd[2034]: Accepted publickey for core from 139.178.89.65 port 51870 ssh2: RSA SHA256:oqFzLKltKjIjWJ2xRNYaxZupDvhRtAv9cyn8jloyOa0 Feb 13 21:06:06.907876 sshd-session[2034]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 21:06:06.910392 systemd-logind[1776]: New session 10 of user core. Feb 13 21:06:06.924956 systemd[1]: Started session-10.scope - Session 10 of User core. Feb 13 21:06:06.979901 sudo[2038]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Feb 13 21:06:06.980504 sudo[2038]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 21:06:06.989156 sudo[2038]: pam_unix(sudo:session): session closed for user root Feb 13 21:06:07.003550 sudo[2037]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Feb 13 21:06:07.004354 sudo[2037]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 21:06:07.031938 systemd[1]: Starting audit-rules.service - Load Audit Rules... Feb 13 21:06:07.047326 augenrules[2060]: No rules Feb 13 21:06:07.047749 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 21:06:07.047867 systemd[1]: Finished audit-rules.service - Load Audit Rules. Feb 13 21:06:07.048505 sudo[2037]: pam_unix(sudo:session): session closed for user root Feb 13 21:06:07.049460 sshd[2036]: Connection closed by 139.178.89.65 port 51870 Feb 13 21:06:07.049660 sshd-session[2034]: pam_unix(sshd:session): session closed for user core Feb 13 21:06:07.052290 systemd[1]: sshd@9-147.28.180.173:22-139.178.89.65:51870.service: Deactivated successfully. Feb 13 21:06:07.053240 systemd[1]: session-10.scope: Deactivated successfully. Feb 13 21:06:07.053689 systemd-logind[1776]: Session 10 logged out. Waiting for processes to exit. Feb 13 21:06:07.054971 systemd[1]: Started sshd@10-147.28.180.173:22-139.178.89.65:51884.service - OpenSSH per-connection server daemon (139.178.89.65:51884). Feb 13 21:06:07.055565 systemd-logind[1776]: Removed session 10. Feb 13 21:06:07.102430 sshd[2068]: Accepted publickey for core from 139.178.89.65 port 51884 ssh2: RSA SHA256:oqFzLKltKjIjWJ2xRNYaxZupDvhRtAv9cyn8jloyOa0 Feb 13 21:06:07.103376 sshd-session[2068]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 21:06:07.106729 systemd-logind[1776]: New session 11 of user core. Feb 13 21:06:07.120848 systemd[1]: Started session-11.scope - Session 11 of User core. Feb 13 21:06:07.174948 sudo[2071]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 13 21:06:07.175093 sudo[2071]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 21:06:07.468991 systemd[1]: Starting docker.service - Docker Application Container Engine... Feb 13 21:06:07.469044 (dockerd)[2097]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Feb 13 21:06:07.719245 dockerd[2097]: time="2025-02-13T21:06:07.719155866Z" level=info msg="Starting up" Feb 13 21:06:07.819387 dockerd[2097]: time="2025-02-13T21:06:07.819337712Z" level=info msg="Loading containers: start." 
Feb 13 21:06:07.949706 kernel: Initializing XFRM netlink socket Feb 13 21:06:07.964384 systemd-timesyncd[1711]: Network configuration changed, trying to establish connection. Feb 13 21:06:08.008224 systemd-networkd[1709]: docker0: Link UP Feb 13 21:06:08.033702 dockerd[2097]: time="2025-02-13T21:06:08.033618054Z" level=info msg="Loading containers: done." Feb 13 21:06:08.045094 dockerd[2097]: time="2025-02-13T21:06:08.045049095Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 13 21:06:08.045163 dockerd[2097]: time="2025-02-13T21:06:08.045094122Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Feb 13 21:06:08.045163 dockerd[2097]: time="2025-02-13T21:06:08.045147995Z" level=info msg="Daemon has completed initialization" Feb 13 21:06:08.045161 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck523595468-merged.mount: Deactivated successfully. Feb 13 21:06:08.060148 dockerd[2097]: time="2025-02-13T21:06:08.060098787Z" level=info msg="API listen on /run/docker.sock" Feb 13 21:06:08.060182 systemd[1]: Started docker.service - Docker Application Container Engine. Feb 13 21:06:08.223705 systemd-timesyncd[1711]: Contacted time server [2607:f298:5:101d:f816:3eff:fefd:8817]:123 (2.flatcar.pool.ntp.org). Feb 13 21:06:08.223757 systemd-timesyncd[1711]: Initial clock synchronization to Thu 2025-02-13 21:06:08.455889 UTC. Feb 13 21:06:08.652372 containerd[1793]: time="2025-02-13T21:06:08.652239603Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.2\"" Feb 13 21:06:09.276583 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2428913843.mount: Deactivated successfully. 
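The PullImage/ImageCreate messages around here show containerd fetching the v1.32.2 control-plane images over its CRI socket. An equivalent pull can be done by hand with crictl (assuming crictl is installed; the socket path matches the one containerd announced earlier in this log):

    crictl --runtime-endpoint unix:///run/containerd/containerd.sock pull registry.k8s.io/kube-apiserver:v1.32.2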
Feb 13 21:06:10.211860 containerd[1793]: time="2025-02-13T21:06:10.211804083Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 21:06:10.212072 containerd[1793]: time="2025-02-13T21:06:10.211963930Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.2: active requests=0, bytes read=28673931" Feb 13 21:06:10.212425 containerd[1793]: time="2025-02-13T21:06:10.212384881Z" level=info msg="ImageCreate event name:\"sha256:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 21:06:10.214020 containerd[1793]: time="2025-02-13T21:06:10.213978872Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:c47449f3e751588ea0cb74e325e0f83db335a415f4f4c7fb147375dd6c84757f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 21:06:10.215143 containerd[1793]: time="2025-02-13T21:06:10.215103076Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.2\" with image id \"sha256:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.2\", repo digest \"registry.k8s.io/kube-apiserver@sha256:c47449f3e751588ea0cb74e325e0f83db335a415f4f4c7fb147375dd6c84757f\", size \"28670731\" in 1.562789603s" Feb 13 21:06:10.215143 containerd[1793]: time="2025-02-13T21:06:10.215119393Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.2\" returns image reference \"sha256:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef\"" Feb 13 21:06:10.215476 containerd[1793]: time="2025-02-13T21:06:10.215430559Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.2\"" Feb 13 21:06:11.599547 containerd[1793]: time="2025-02-13T21:06:11.599493766Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 21:06:11.599771 containerd[1793]: time="2025-02-13T21:06:11.599707936Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.2: active requests=0, bytes read=24771784" Feb 13 21:06:11.600118 containerd[1793]: time="2025-02-13T21:06:11.600078232Z" level=info msg="ImageCreate event name:\"sha256:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 21:06:11.602380 containerd[1793]: time="2025-02-13T21:06:11.602337137Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:399aa50f4d1361c59dc458e634506d02de32613d03a9a614a21058741162ef90\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 21:06:11.602952 containerd[1793]: time="2025-02-13T21:06:11.602910490Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.2\" with image id \"sha256:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.2\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:399aa50f4d1361c59dc458e634506d02de32613d03a9a614a21058741162ef90\", size \"26259392\" in 1.387462966s" Feb 13 21:06:11.602952 containerd[1793]: time="2025-02-13T21:06:11.602926379Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.2\" returns image reference \"sha256:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389\"" Feb 13 21:06:11.603237 
containerd[1793]: time="2025-02-13T21:06:11.603181922Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.2\"" Feb 13 21:06:12.730614 containerd[1793]: time="2025-02-13T21:06:12.730559867Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 21:06:12.730871 containerd[1793]: time="2025-02-13T21:06:12.730742593Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.2: active requests=0, bytes read=19170276" Feb 13 21:06:12.731256 containerd[1793]: time="2025-02-13T21:06:12.731215679Z" level=info msg="ImageCreate event name:\"sha256:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 21:06:12.732910 containerd[1793]: time="2025-02-13T21:06:12.732871227Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:45710d74cfd5aa10a001d0cf81747b77c28617444ffee0503d12f1dcd7450f76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 21:06:12.734062 containerd[1793]: time="2025-02-13T21:06:12.734018662Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.2\" with image id \"sha256:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.2\", repo digest \"registry.k8s.io/kube-scheduler@sha256:45710d74cfd5aa10a001d0cf81747b77c28617444ffee0503d12f1dcd7450f76\", size \"20657902\" in 1.13081678s" Feb 13 21:06:12.734062 containerd[1793]: time="2025-02-13T21:06:12.734033975Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.2\" returns image reference \"sha256:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d\"" Feb 13 21:06:12.734323 containerd[1793]: time="2025-02-13T21:06:12.734281911Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.2\"" Feb 13 21:06:13.469668 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2219319228.mount: Deactivated successfully. 
Feb 13 21:06:13.665266 containerd[1793]: time="2025-02-13T21:06:13.665211767Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 21:06:13.665466 containerd[1793]: time="2025-02-13T21:06:13.665422817Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.2: active requests=0, bytes read=30908839" Feb 13 21:06:13.665865 containerd[1793]: time="2025-02-13T21:06:13.665825189Z" level=info msg="ImageCreate event name:\"sha256:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 21:06:13.666765 containerd[1793]: time="2025-02-13T21:06:13.666724105Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:83c025f0faa6799fab6645102a98138e39a9a7db2be3bc792c79d72659b1805d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 21:06:13.667198 containerd[1793]: time="2025-02-13T21:06:13.667156086Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.2\" with image id \"sha256:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5\", repo tag \"registry.k8s.io/kube-proxy:v1.32.2\", repo digest \"registry.k8s.io/kube-proxy@sha256:83c025f0faa6799fab6645102a98138e39a9a7db2be3bc792c79d72659b1805d\", size \"30907858\" in 932.858951ms" Feb 13 21:06:13.667198 containerd[1793]: time="2025-02-13T21:06:13.667168563Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.2\" returns image reference \"sha256:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5\"" Feb 13 21:06:13.667443 containerd[1793]: time="2025-02-13T21:06:13.667409709Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Feb 13 21:06:14.182058 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2855421864.mount: Deactivated successfully. 
Feb 13 21:06:14.732232 containerd[1793]: time="2025-02-13T21:06:14.732175596Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 21:06:14.732442 containerd[1793]: time="2025-02-13T21:06:14.732312036Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Feb 13 21:06:14.732814 containerd[1793]: time="2025-02-13T21:06:14.732793161Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 21:06:14.734681 containerd[1793]: time="2025-02-13T21:06:14.734666261Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 21:06:14.735282 containerd[1793]: time="2025-02-13T21:06:14.735268949Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.067845679s" Feb 13 21:06:14.735315 containerd[1793]: time="2025-02-13T21:06:14.735284415Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Feb 13 21:06:14.735576 containerd[1793]: time="2025-02-13T21:06:14.735565952Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Feb 13 21:06:15.186229 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3466426458.mount: Deactivated successfully. 
Feb 13 21:06:15.187097 containerd[1793]: time="2025-02-13T21:06:15.187080198Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 21:06:15.187304 containerd[1793]: time="2025-02-13T21:06:15.187283259Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Feb 13 21:06:15.187532 containerd[1793]: time="2025-02-13T21:06:15.187520592Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 21:06:15.188685 containerd[1793]: time="2025-02-13T21:06:15.188671360Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 21:06:15.189155 containerd[1793]: time="2025-02-13T21:06:15.189143409Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 453.562423ms" Feb 13 21:06:15.189201 containerd[1793]: time="2025-02-13T21:06:15.189157883Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Feb 13 21:06:15.189500 containerd[1793]: time="2025-02-13T21:06:15.189489266Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Feb 13 21:06:15.720209 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Feb 13 21:06:15.739888 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 21:06:15.740806 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount998427540.mount: Deactivated successfully. Feb 13 21:06:15.974634 systemd[1]: Started sshd@11-147.28.180.173:22-109.206.236.167:50098.service - OpenSSH per-connection server daemon (109.206.236.167:50098). Feb 13 21:06:15.980047 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 21:06:15.983302 (kubelet)[2468]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 21:06:16.005184 kubelet[2468]: E0213 21:06:16.005164 2468 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 21:06:16.007346 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 21:06:16.007472 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
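kubelet.service keeps being restarted by systemd between these failures ("Scheduled restart job, restart counter is at 2" above); the retry cadence comes from the unit's Restart=/RestartSec= settings. On a live system they can be inspected with standard systemctl commands, for example (illustrative; the values reported depend on the unit Flatcar ships):

    systemctl cat kubelet.service
    systemctl show kubelet.service -p Restart -p RestartUSec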
Feb 13 21:06:16.740754 sshd[2460]: Invalid user g from 109.206.236.167 port 50098 Feb 13 21:06:16.784997 containerd[1793]: time="2025-02-13T21:06:16.784974413Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 21:06:16.785243 containerd[1793]: time="2025-02-13T21:06:16.785221725Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57551320" Feb 13 21:06:16.785649 containerd[1793]: time="2025-02-13T21:06:16.785636874Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 21:06:16.787371 containerd[1793]: time="2025-02-13T21:06:16.787359893Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 21:06:16.788132 containerd[1793]: time="2025-02-13T21:06:16.788088776Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 1.5985847s" Feb 13 21:06:16.788132 containerd[1793]: time="2025-02-13T21:06:16.788105516Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Feb 13 21:06:16.930831 sshd[2460]: Connection closed by invalid user g 109.206.236.167 port 50098 [preauth] Feb 13 21:06:16.934138 systemd[1]: sshd@11-147.28.180.173:22-109.206.236.167:50098.service: Deactivated successfully. Feb 13 21:06:18.380032 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 21:06:18.392972 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 21:06:18.408672 systemd[1]: Reloading requested from client PID 2575 ('systemctl') (unit session-11.scope)... Feb 13 21:06:18.408694 systemd[1]: Reloading... Feb 13 21:06:18.444697 zram_generator::config[2614]: No configuration found. Feb 13 21:06:18.511723 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 21:06:18.573707 systemd[1]: Reloading finished in 164 ms. Feb 13 21:06:18.626981 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Feb 13 21:06:18.627161 systemd[1]: kubelet.service: Failed with result 'signal'. Feb 13 21:06:18.627677 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 21:06:18.632674 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 21:06:18.861627 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 21:06:18.864260 (kubelet)[2680]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 21:06:18.885622 kubelet[2680]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
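The deprecation warnings that begin here (--container-runtime-endpoint above, --pod-infra-container-image and --volume-plugin-dir just below) point at moving kubelet flags into the file passed via --config. Two of them have direct KubeletConfiguration equivalents; a sketch of those fields follows, reusing the socket and flexvolume paths that appear elsewhere in this log, though it remains an illustration rather than this node's actual configuration. --pod-infra-container-image has no config-file equivalent: per the warning, the sandbox image will instead come from the CRI runtime.

    # KubeletConfiguration fields replacing two of the deprecated flags (illustrative)
    containerRuntimeEndpoint: unix:///run/containerd/containerd.sock       # replaces --container-runtime-endpoint
    volumePluginDir: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/  # replaces --volume-plugin-dir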
Feb 13 21:06:18.885622 kubelet[2680]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Feb 13 21:06:18.885622 kubelet[2680]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 21:06:18.885865 kubelet[2680]: I0213 21:06:18.885620 2680 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 21:06:19.068663 kubelet[2680]: I0213 21:06:19.068616 2680 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" Feb 13 21:06:19.068663 kubelet[2680]: I0213 21:06:19.068632 2680 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 21:06:19.068855 kubelet[2680]: I0213 21:06:19.068820 2680 server.go:954] "Client rotation is on, will bootstrap in background" Feb 13 21:06:19.092687 kubelet[2680]: E0213 21:06:19.092664 2680 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://147.28.180.173:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 147.28.180.173:6443: connect: connection refused" logger="UnhandledError" Feb 13 21:06:19.093354 kubelet[2680]: I0213 21:06:19.093345 2680 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 21:06:19.101376 kubelet[2680]: E0213 21:06:19.101325 2680 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Feb 13 21:06:19.101376 kubelet[2680]: I0213 21:06:19.101355 2680 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Feb 13 21:06:19.112001 kubelet[2680]: I0213 21:06:19.111956 2680 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 13 21:06:19.112114 kubelet[2680]: I0213 21:06:19.112068 2680 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 21:06:19.112207 kubelet[2680]: I0213 21:06:19.112083 2680 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4186.1.1-a-9675b630d5","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Feb 13 21:06:19.112207 kubelet[2680]: I0213 21:06:19.112180 2680 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 21:06:19.112207 kubelet[2680]: I0213 21:06:19.112186 2680 container_manager_linux.go:304] "Creating device plugin manager" Feb 13 21:06:19.112796 kubelet[2680]: I0213 21:06:19.112761 2680 state_mem.go:36] "Initialized new in-memory state store" Feb 13 21:06:19.115965 kubelet[2680]: I0213 21:06:19.115916 2680 kubelet.go:446] "Attempting to sync node with API server" Feb 13 21:06:19.115965 kubelet[2680]: I0213 21:06:19.115924 2680 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 21:06:19.115965 kubelet[2680]: I0213 21:06:19.115934 2680 kubelet.go:352] "Adding apiserver pod source" Feb 13 21:06:19.115965 kubelet[2680]: I0213 21:06:19.115939 2680 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 21:06:19.117884 kubelet[2680]: W0213 21:06:19.117833 2680 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://147.28.180.173:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4186.1.1-a-9675b630d5&limit=500&resourceVersion=0": dial tcp 147.28.180.173:6443: connect: connection refused Feb 13 21:06:19.117884 kubelet[2680]: W0213 21:06:19.117836 2680 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://147.28.180.173:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 147.28.180.173:6443: connect: connection refused Feb 13 21:06:19.117884 kubelet[2680]: E0213 21:06:19.117861 2680 reflector.go:166] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://147.28.180.173:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4186.1.1-a-9675b630d5&limit=500&resourceVersion=0\": dial tcp 147.28.180.173:6443: connect: connection refused" logger="UnhandledError" Feb 13 21:06:19.117884 kubelet[2680]: E0213 21:06:19.117871 2680 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://147.28.180.173:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 147.28.180.173:6443: connect: connection refused" logger="UnhandledError" Feb 13 21:06:19.118497 kubelet[2680]: I0213 21:06:19.118464 2680 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Feb 13 21:06:19.118798 kubelet[2680]: I0213 21:06:19.118762 2680 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 21:06:19.118825 kubelet[2680]: W0213 21:06:19.118804 2680 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Feb 13 21:06:19.120399 kubelet[2680]: I0213 21:06:19.120364 2680 watchdog_linux.go:99] "Systemd watchdog is not enabled" Feb 13 21:06:19.120399 kubelet[2680]: I0213 21:06:19.120380 2680 server.go:1287] "Started kubelet" Feb 13 21:06:19.120518 kubelet[2680]: I0213 21:06:19.120495 2680 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 21:06:19.120558 kubelet[2680]: I0213 21:06:19.120519 2680 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 21:06:19.120855 kubelet[2680]: I0213 21:06:19.120810 2680 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 21:06:19.121234 kubelet[2680]: I0213 21:06:19.121228 2680 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 21:06:19.121290 kubelet[2680]: I0213 21:06:19.121281 2680 volume_manager.go:297] "Starting Kubelet Volume Manager" Feb 13 21:06:19.121290 kubelet[2680]: E0213 21:06:19.121283 2680 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4186.1.1-a-9675b630d5\" not found" Feb 13 21:06:19.121346 kubelet[2680]: I0213 21:06:19.121306 2680 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Feb 13 21:06:19.121346 kubelet[2680]: I0213 21:06:19.121311 2680 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Feb 13 21:06:19.124425 kubelet[2680]: I0213 21:06:19.124302 2680 server.go:490] "Adding debug handlers to kubelet server" Feb 13 21:06:19.124425 kubelet[2680]: I0213 21:06:19.124406 2680 reconciler.go:26] "Reconciler: start to sync state" Feb 13 21:06:19.124555 kubelet[2680]: E0213 21:06:19.124505 2680 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://147.28.180.173:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4186.1.1-a-9675b630d5?timeout=10s\": dial tcp 147.28.180.173:6443: connect: connection refused" interval="200ms" Feb 13 21:06:19.124723 kubelet[2680]: W0213 21:06:19.124690 2680 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get 
"https://147.28.180.173:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 147.28.180.173:6443: connect: connection refused Feb 13 21:06:19.124770 kubelet[2680]: E0213 21:06:19.124733 2680 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://147.28.180.173:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 147.28.180.173:6443: connect: connection refused" logger="UnhandledError" Feb 13 21:06:19.124912 kubelet[2680]: E0213 21:06:19.124814 2680 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 21:06:19.126832 kubelet[2680]: I0213 21:06:19.126824 2680 factory.go:221] Registration of the containerd container factory successfully Feb 13 21:06:19.126832 kubelet[2680]: I0213 21:06:19.126831 2680 factory.go:221] Registration of the systemd container factory successfully Feb 13 21:06:19.126883 kubelet[2680]: I0213 21:06:19.126868 2680 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 21:06:19.127529 kubelet[2680]: E0213 21:06:19.126574 2680 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://147.28.180.173:6443/api/v1/namespaces/default/events\": dial tcp 147.28.180.173:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4186.1.1-a-9675b630d5.1823e08d6d16df92 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4186.1.1-a-9675b630d5,UID:ci-4186.1.1-a-9675b630d5,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4186.1.1-a-9675b630d5,},FirstTimestamp:2025-02-13 21:06:19.120369554 +0000 UTC m=+0.254059669,LastTimestamp:2025-02-13 21:06:19.120369554 +0000 UTC m=+0.254059669,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4186.1.1-a-9675b630d5,}" Feb 13 21:06:19.133015 kubelet[2680]: I0213 21:06:19.132965 2680 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 21:06:19.133643 kubelet[2680]: I0213 21:06:19.133605 2680 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 13 21:06:19.133643 kubelet[2680]: I0213 21:06:19.133620 2680 status_manager.go:227] "Starting to sync pod status with apiserver" Feb 13 21:06:19.133643 kubelet[2680]: I0213 21:06:19.133639 2680 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Feb 13 21:06:19.133797 kubelet[2680]: I0213 21:06:19.133666 2680 kubelet.go:2388] "Starting kubelet main sync loop" Feb 13 21:06:19.133797 kubelet[2680]: E0213 21:06:19.133744 2680 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 21:06:19.134201 kubelet[2680]: W0213 21:06:19.134157 2680 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://147.28.180.173:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 147.28.180.173:6443: connect: connection refused Feb 13 21:06:19.134267 kubelet[2680]: E0213 21:06:19.134212 2680 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://147.28.180.173:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 147.28.180.173:6443: connect: connection refused" logger="UnhandledError" Feb 13 21:06:19.145016 kubelet[2680]: I0213 21:06:19.144982 2680 cpu_manager.go:221] "Starting CPU manager" policy="none" Feb 13 21:06:19.145016 kubelet[2680]: I0213 21:06:19.144989 2680 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Feb 13 21:06:19.145016 kubelet[2680]: I0213 21:06:19.144999 2680 state_mem.go:36] "Initialized new in-memory state store" Feb 13 21:06:19.146191 kubelet[2680]: I0213 21:06:19.146182 2680 policy_none.go:49] "None policy: Start" Feb 13 21:06:19.146223 kubelet[2680]: I0213 21:06:19.146193 2680 memory_manager.go:186] "Starting memorymanager" policy="None" Feb 13 21:06:19.146223 kubelet[2680]: I0213 21:06:19.146201 2680 state_mem.go:35] "Initializing new in-memory state store" Feb 13 21:06:19.149383 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Feb 13 21:06:19.172424 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Feb 13 21:06:19.183211 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Feb 13 21:06:19.184126 kubelet[2680]: I0213 21:06:19.184088 2680 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 21:06:19.184221 kubelet[2680]: I0213 21:06:19.184208 2680 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 13 21:06:19.184260 kubelet[2680]: I0213 21:06:19.184215 2680 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 21:06:19.184321 kubelet[2680]: I0213 21:06:19.184309 2680 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 21:06:19.184595 kubelet[2680]: E0213 21:06:19.184585 2680 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Feb 13 21:06:19.184619 kubelet[2680]: E0213 21:06:19.184609 2680 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4186.1.1-a-9675b630d5\" not found" Feb 13 21:06:19.258375 systemd[1]: Created slice kubepods-burstable-podda3cd39f4e3cb3a1e4698ab4216fbbda.slice - libcontainer container kubepods-burstable-podda3cd39f4e3cb3a1e4698ab4216fbbda.slice. 
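The kubepods-burstable-pod<UID>.slice units created here and just below belong to the control-plane static pods the kubelet picked up from its staticPodPath (/etc/kubernetes/manifests, per the "Adding static pod path" line above). Each manifest there is an ordinary Pod object; a heavily abridged skeleton of the kube-apiserver one is sketched below, reusing the image tag, node IP, and k8s-certs volume name visible in this log while everything else is an assumption:

    # /etc/kubernetes/manifests/kube-apiserver.yaml -- abridged illustrative skeleton, not this node's file
    apiVersion: v1
    kind: Pod
    metadata:
      name: kube-apiserver
      namespace: kube-system
    spec:
      hostNetwork: true
      priorityClassName: system-node-critical
      containers:
      - name: kube-apiserver
        image: registry.k8s.io/kube-apiserver:v1.32.2
        command:
        - kube-apiserver
        - --advertise-address=147.28.180.173
        volumeMounts:
        - name: k8s-certs
          mountPath: /etc/kubernetes/pki
          readOnly: true
      volumes:
      - name: k8s-certs
        hostPath:
          path: /etc/kubernetes/pki
          type: DirectoryOrCreate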
Feb 13 21:06:19.277512 kubelet[2680]: E0213 21:06:19.277432 2680 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4186.1.1-a-9675b630d5\" not found" node="ci-4186.1.1-a-9675b630d5" Feb 13 21:06:19.285047 systemd[1]: Created slice kubepods-burstable-pod60d9b67da2345274ea30dcca720e1870.slice - libcontainer container kubepods-burstable-pod60d9b67da2345274ea30dcca720e1870.slice. Feb 13 21:06:19.287950 kubelet[2680]: I0213 21:06:19.287903 2680 kubelet_node_status.go:76] "Attempting to register node" node="ci-4186.1.1-a-9675b630d5" Feb 13 21:06:19.288647 kubelet[2680]: E0213 21:06:19.288573 2680 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://147.28.180.173:6443/api/v1/nodes\": dial tcp 147.28.180.173:6443: connect: connection refused" node="ci-4186.1.1-a-9675b630d5" Feb 13 21:06:19.289380 kubelet[2680]: E0213 21:06:19.289339 2680 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4186.1.1-a-9675b630d5\" not found" node="ci-4186.1.1-a-9675b630d5" Feb 13 21:06:19.295760 systemd[1]: Created slice kubepods-burstable-podb2fedec72f1d1b5bfed47789f111e42d.slice - libcontainer container kubepods-burstable-podb2fedec72f1d1b5bfed47789f111e42d.slice. Feb 13 21:06:19.299822 kubelet[2680]: E0213 21:06:19.299720 2680 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4186.1.1-a-9675b630d5\" not found" node="ci-4186.1.1-a-9675b630d5" Feb 13 21:06:19.325829 kubelet[2680]: I0213 21:06:19.325704 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b2fedec72f1d1b5bfed47789f111e42d-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4186.1.1-a-9675b630d5\" (UID: \"b2fedec72f1d1b5bfed47789f111e42d\") " pod="kube-system/kube-apiserver-ci-4186.1.1-a-9675b630d5" Feb 13 21:06:19.326040 kubelet[2680]: I0213 21:06:19.325859 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/60d9b67da2345274ea30dcca720e1870-kubeconfig\") pod \"kube-scheduler-ci-4186.1.1-a-9675b630d5\" (UID: \"60d9b67da2345274ea30dcca720e1870\") " pod="kube-system/kube-scheduler-ci-4186.1.1-a-9675b630d5" Feb 13 21:06:19.326040 kubelet[2680]: I0213 21:06:19.325925 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b2fedec72f1d1b5bfed47789f111e42d-ca-certs\") pod \"kube-apiserver-ci-4186.1.1-a-9675b630d5\" (UID: \"b2fedec72f1d1b5bfed47789f111e42d\") " pod="kube-system/kube-apiserver-ci-4186.1.1-a-9675b630d5" Feb 13 21:06:19.326040 kubelet[2680]: I0213 21:06:19.325977 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b2fedec72f1d1b5bfed47789f111e42d-k8s-certs\") pod \"kube-apiserver-ci-4186.1.1-a-9675b630d5\" (UID: \"b2fedec72f1d1b5bfed47789f111e42d\") " pod="kube-system/kube-apiserver-ci-4186.1.1-a-9675b630d5" Feb 13 21:06:19.326040 kubelet[2680]: I0213 21:06:19.326023 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/da3cd39f4e3cb3a1e4698ab4216fbbda-ca-certs\") pod \"kube-controller-manager-ci-4186.1.1-a-9675b630d5\" 
(UID: \"da3cd39f4e3cb3a1e4698ab4216fbbda\") " pod="kube-system/kube-controller-manager-ci-4186.1.1-a-9675b630d5" Feb 13 21:06:19.326420 kubelet[2680]: E0213 21:06:19.326021 2680 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://147.28.180.173:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4186.1.1-a-9675b630d5?timeout=10s\": dial tcp 147.28.180.173:6443: connect: connection refused" interval="400ms" Feb 13 21:06:19.326420 kubelet[2680]: I0213 21:06:19.326074 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/da3cd39f4e3cb3a1e4698ab4216fbbda-flexvolume-dir\") pod \"kube-controller-manager-ci-4186.1.1-a-9675b630d5\" (UID: \"da3cd39f4e3cb3a1e4698ab4216fbbda\") " pod="kube-system/kube-controller-manager-ci-4186.1.1-a-9675b630d5" Feb 13 21:06:19.326420 kubelet[2680]: I0213 21:06:19.326121 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/da3cd39f4e3cb3a1e4698ab4216fbbda-k8s-certs\") pod \"kube-controller-manager-ci-4186.1.1-a-9675b630d5\" (UID: \"da3cd39f4e3cb3a1e4698ab4216fbbda\") " pod="kube-system/kube-controller-manager-ci-4186.1.1-a-9675b630d5" Feb 13 21:06:19.326420 kubelet[2680]: I0213 21:06:19.326166 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/da3cd39f4e3cb3a1e4698ab4216fbbda-kubeconfig\") pod \"kube-controller-manager-ci-4186.1.1-a-9675b630d5\" (UID: \"da3cd39f4e3cb3a1e4698ab4216fbbda\") " pod="kube-system/kube-controller-manager-ci-4186.1.1-a-9675b630d5" Feb 13 21:06:19.326420 kubelet[2680]: I0213 21:06:19.326210 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/da3cd39f4e3cb3a1e4698ab4216fbbda-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4186.1.1-a-9675b630d5\" (UID: \"da3cd39f4e3cb3a1e4698ab4216fbbda\") " pod="kube-system/kube-controller-manager-ci-4186.1.1-a-9675b630d5" Feb 13 21:06:19.492575 kubelet[2680]: I0213 21:06:19.492508 2680 kubelet_node_status.go:76] "Attempting to register node" node="ci-4186.1.1-a-9675b630d5" Feb 13 21:06:19.492837 kubelet[2680]: E0213 21:06:19.492795 2680 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://147.28.180.173:6443/api/v1/nodes\": dial tcp 147.28.180.173:6443: connect: connection refused" node="ci-4186.1.1-a-9675b630d5" Feb 13 21:06:19.579328 containerd[1793]: time="2025-02-13T21:06:19.579260939Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4186.1.1-a-9675b630d5,Uid:da3cd39f4e3cb3a1e4698ab4216fbbda,Namespace:kube-system,Attempt:0,}" Feb 13 21:06:19.591059 containerd[1793]: time="2025-02-13T21:06:19.591038519Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4186.1.1-a-9675b630d5,Uid:60d9b67da2345274ea30dcca720e1870,Namespace:kube-system,Attempt:0,}" Feb 13 21:06:19.601569 containerd[1793]: time="2025-02-13T21:06:19.601490504Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4186.1.1-a-9675b630d5,Uid:b2fedec72f1d1b5bfed47789f111e42d,Namespace:kube-system,Attempt:0,}" Feb 13 21:06:19.728093 kubelet[2680]: E0213 21:06:19.727968 2680 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://147.28.180.173:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4186.1.1-a-9675b630d5?timeout=10s\": dial tcp 147.28.180.173:6443: connect: connection refused" interval="800ms" Feb 13 21:06:19.894286 kubelet[2680]: I0213 21:06:19.894192 2680 kubelet_node_status.go:76] "Attempting to register node" node="ci-4186.1.1-a-9675b630d5" Feb 13 21:06:19.894503 kubelet[2680]: E0213 21:06:19.894429 2680 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://147.28.180.173:6443/api/v1/nodes\": dial tcp 147.28.180.173:6443: connect: connection refused" node="ci-4186.1.1-a-9675b630d5" Feb 13 21:06:19.954467 kubelet[2680]: W0213 21:06:19.954398 2680 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://147.28.180.173:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4186.1.1-a-9675b630d5&limit=500&resourceVersion=0": dial tcp 147.28.180.173:6443: connect: connection refused Feb 13 21:06:19.954467 kubelet[2680]: E0213 21:06:19.954453 2680 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://147.28.180.173:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4186.1.1-a-9675b630d5&limit=500&resourceVersion=0\": dial tcp 147.28.180.173:6443: connect: connection refused" logger="UnhandledError" Feb 13 21:06:20.083661 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount758485246.mount: Deactivated successfully. Feb 13 21:06:20.084830 containerd[1793]: time="2025-02-13T21:06:20.084812618Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 21:06:20.085046 containerd[1793]: time="2025-02-13T21:06:20.085027211Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Feb 13 21:06:20.085930 containerd[1793]: time="2025-02-13T21:06:20.085915333Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 21:06:20.086166 containerd[1793]: time="2025-02-13T21:06:20.086150152Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 21:06:20.086473 containerd[1793]: time="2025-02-13T21:06:20.086460467Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 21:06:20.104551 containerd[1793]: time="2025-02-13T21:06:20.104525714Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 21:06:20.104774 containerd[1793]: time="2025-02-13T21:06:20.104738949Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 21:06:20.106338 containerd[1793]: time="2025-02-13T21:06:20.106310730Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size 
\"311286\" in 504.592107ms" Feb 13 21:06:20.106601 containerd[1793]: time="2025-02-13T21:06:20.106589175Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 21:06:20.107454 containerd[1793]: time="2025-02-13T21:06:20.107439832Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 528.090376ms" Feb 13 21:06:20.108304 containerd[1793]: time="2025-02-13T21:06:20.108269705Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 517.178452ms" Feb 13 21:06:20.202680 containerd[1793]: time="2025-02-13T21:06:20.202593815Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 21:06:20.202680 containerd[1793]: time="2025-02-13T21:06:20.202620326Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 21:06:20.202680 containerd[1793]: time="2025-02-13T21:06:20.202632013Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 21:06:20.202680 containerd[1793]: time="2025-02-13T21:06:20.202595246Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 21:06:20.202680 containerd[1793]: time="2025-02-13T21:06:20.202622180Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 21:06:20.202680 containerd[1793]: time="2025-02-13T21:06:20.202637110Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 21:06:20.202680 containerd[1793]: time="2025-02-13T21:06:20.202673452Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 21:06:20.202861 containerd[1793]: time="2025-02-13T21:06:20.202680058Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 21:06:20.203100 containerd[1793]: time="2025-02-13T21:06:20.203010743Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 21:06:20.203100 containerd[1793]: time="2025-02-13T21:06:20.203034550Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 21:06:20.203100 containerd[1793]: time="2025-02-13T21:06:20.203044798Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 21:06:20.203358 containerd[1793]: time="2025-02-13T21:06:20.203337680Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 21:06:20.230833 systemd[1]: Started cri-containerd-1be1a2c2cee49cc788b0e82f0e49f2607f79244a0d86ebb545b1652d489ae956.scope - libcontainer container 1be1a2c2cee49cc788b0e82f0e49f2607f79244a0d86ebb545b1652d489ae956. Feb 13 21:06:20.231739 systemd[1]: Started cri-containerd-cd11a3e88a2f87d2e115a5f3e7e3b98622090e2ba46171c46d2bde42eae9ab72.scope - libcontainer container cd11a3e88a2f87d2e115a5f3e7e3b98622090e2ba46171c46d2bde42eae9ab72. Feb 13 21:06:20.232646 systemd[1]: Started cri-containerd-f4b90eacd8538ab85717aa8ec9ecdd5a94b35cab609a320b8244766bbdea4fb9.scope - libcontainer container f4b90eacd8538ab85717aa8ec9ecdd5a94b35cab609a320b8244766bbdea4fb9. Feb 13 21:06:20.258530 containerd[1793]: time="2025-02-13T21:06:20.258503336Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4186.1.1-a-9675b630d5,Uid:b2fedec72f1d1b5bfed47789f111e42d,Namespace:kube-system,Attempt:0,} returns sandbox id \"1be1a2c2cee49cc788b0e82f0e49f2607f79244a0d86ebb545b1652d489ae956\"" Feb 13 21:06:20.260133 containerd[1793]: time="2025-02-13T21:06:20.260116212Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4186.1.1-a-9675b630d5,Uid:da3cd39f4e3cb3a1e4698ab4216fbbda,Namespace:kube-system,Attempt:0,} returns sandbox id \"cd11a3e88a2f87d2e115a5f3e7e3b98622090e2ba46171c46d2bde42eae9ab72\"" Feb 13 21:06:20.260433 containerd[1793]: time="2025-02-13T21:06:20.260417778Z" level=info msg="CreateContainer within sandbox \"1be1a2c2cee49cc788b0e82f0e49f2607f79244a0d86ebb545b1652d489ae956\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 13 21:06:20.260978 containerd[1793]: time="2025-02-13T21:06:20.260962878Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4186.1.1-a-9675b630d5,Uid:60d9b67da2345274ea30dcca720e1870,Namespace:kube-system,Attempt:0,} returns sandbox id \"f4b90eacd8538ab85717aa8ec9ecdd5a94b35cab609a320b8244766bbdea4fb9\"" Feb 13 21:06:20.261215 containerd[1793]: time="2025-02-13T21:06:20.261201609Z" level=info msg="CreateContainer within sandbox \"cd11a3e88a2f87d2e115a5f3e7e3b98622090e2ba46171c46d2bde42eae9ab72\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 13 21:06:20.261884 containerd[1793]: time="2025-02-13T21:06:20.261873172Z" level=info msg="CreateContainer within sandbox \"f4b90eacd8538ab85717aa8ec9ecdd5a94b35cab609a320b8244766bbdea4fb9\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 13 21:06:20.266546 containerd[1793]: time="2025-02-13T21:06:20.266507454Z" level=info msg="CreateContainer within sandbox \"1be1a2c2cee49cc788b0e82f0e49f2607f79244a0d86ebb545b1652d489ae956\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"cb77bb52add41a0194092ca1f5b33d07de97c4f91813f626e795c312a830066f\"" Feb 13 21:06:20.266890 containerd[1793]: time="2025-02-13T21:06:20.266878299Z" level=info msg="StartContainer for \"cb77bb52add41a0194092ca1f5b33d07de97c4f91813f626e795c312a830066f\"" Feb 13 21:06:20.267951 containerd[1793]: time="2025-02-13T21:06:20.267937889Z" level=info msg="CreateContainer within sandbox \"cd11a3e88a2f87d2e115a5f3e7e3b98622090e2ba46171c46d2bde42eae9ab72\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id 
\"8de322ada0f03d5cdb35116b46cd585fcc253a3853e0d2290213aa19d0f0856f\"" Feb 13 21:06:20.268177 containerd[1793]: time="2025-02-13T21:06:20.268150881Z" level=info msg="StartContainer for \"8de322ada0f03d5cdb35116b46cd585fcc253a3853e0d2290213aa19d0f0856f\"" Feb 13 21:06:20.269192 containerd[1793]: time="2025-02-13T21:06:20.269180963Z" level=info msg="CreateContainer within sandbox \"f4b90eacd8538ab85717aa8ec9ecdd5a94b35cab609a320b8244766bbdea4fb9\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"1c446452f85d4b77b84dc1ac3faeecb3e34132a79023fb1c720dc348975de3aa\"" Feb 13 21:06:20.269369 containerd[1793]: time="2025-02-13T21:06:20.269356135Z" level=info msg="StartContainer for \"1c446452f85d4b77b84dc1ac3faeecb3e34132a79023fb1c720dc348975de3aa\"" Feb 13 21:06:20.290969 systemd[1]: Started cri-containerd-cb77bb52add41a0194092ca1f5b33d07de97c4f91813f626e795c312a830066f.scope - libcontainer container cb77bb52add41a0194092ca1f5b33d07de97c4f91813f626e795c312a830066f. Feb 13 21:06:20.293138 systemd[1]: Started cri-containerd-1c446452f85d4b77b84dc1ac3faeecb3e34132a79023fb1c720dc348975de3aa.scope - libcontainer container 1c446452f85d4b77b84dc1ac3faeecb3e34132a79023fb1c720dc348975de3aa. Feb 13 21:06:20.293841 systemd[1]: Started cri-containerd-8de322ada0f03d5cdb35116b46cd585fcc253a3853e0d2290213aa19d0f0856f.scope - libcontainer container 8de322ada0f03d5cdb35116b46cd585fcc253a3853e0d2290213aa19d0f0856f. Feb 13 21:06:20.314422 containerd[1793]: time="2025-02-13T21:06:20.314391737Z" level=info msg="StartContainer for \"cb77bb52add41a0194092ca1f5b33d07de97c4f91813f626e795c312a830066f\" returns successfully" Feb 13 21:06:20.314940 containerd[1793]: time="2025-02-13T21:06:20.314926526Z" level=info msg="StartContainer for \"1c446452f85d4b77b84dc1ac3faeecb3e34132a79023fb1c720dc348975de3aa\" returns successfully" Feb 13 21:06:20.316855 containerd[1793]: time="2025-02-13T21:06:20.316835922Z" level=info msg="StartContainer for \"8de322ada0f03d5cdb35116b46cd585fcc253a3853e0d2290213aa19d0f0856f\" returns successfully" Feb 13 21:06:20.696113 kubelet[2680]: I0213 21:06:20.695983 2680 kubelet_node_status.go:76] "Attempting to register node" node="ci-4186.1.1-a-9675b630d5" Feb 13 21:06:21.051418 kubelet[2680]: E0213 21:06:21.051363 2680 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4186.1.1-a-9675b630d5\" not found" node="ci-4186.1.1-a-9675b630d5" Feb 13 21:06:21.155254 kubelet[2680]: E0213 21:06:21.154720 2680 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4186.1.1-a-9675b630d5\" not found" node="ci-4186.1.1-a-9675b630d5" Feb 13 21:06:21.168552 kubelet[2680]: E0213 21:06:21.168509 2680 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4186.1.1-a-9675b630d5\" not found" node="ci-4186.1.1-a-9675b630d5" Feb 13 21:06:21.170816 kubelet[2680]: E0213 21:06:21.170770 2680 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4186.1.1-a-9675b630d5\" not found" node="ci-4186.1.1-a-9675b630d5" Feb 13 21:06:21.172190 kubelet[2680]: I0213 21:06:21.172149 2680 kubelet_node_status.go:79] "Successfully registered node" node="ci-4186.1.1-a-9675b630d5" Feb 13 21:06:21.172190 kubelet[2680]: E0213 21:06:21.172163 2680 kubelet_node_status.go:549] "Error updating node status, will retry" err="error getting node \"ci-4186.1.1-a-9675b630d5\": node \"ci-4186.1.1-a-9675b630d5\" 
not found" Feb 13 21:06:21.183231 kubelet[2680]: E0213 21:06:21.183218 2680 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4186.1.1-a-9675b630d5\" not found" Feb 13 21:06:21.283882 kubelet[2680]: E0213 21:06:21.283786 2680 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4186.1.1-a-9675b630d5\" not found" Feb 13 21:06:21.384684 kubelet[2680]: E0213 21:06:21.384446 2680 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4186.1.1-a-9675b630d5\" not found" Feb 13 21:06:21.485461 kubelet[2680]: E0213 21:06:21.485393 2680 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4186.1.1-a-9675b630d5\" not found" Feb 13 21:06:21.586615 kubelet[2680]: E0213 21:06:21.586500 2680 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4186.1.1-a-9675b630d5\" not found" Feb 13 21:06:21.687591 kubelet[2680]: E0213 21:06:21.687362 2680 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4186.1.1-a-9675b630d5\" not found" Feb 13 21:06:21.787621 kubelet[2680]: E0213 21:06:21.787527 2680 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4186.1.1-a-9675b630d5\" not found" Feb 13 21:06:21.887764 kubelet[2680]: E0213 21:06:21.887673 2680 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4186.1.1-a-9675b630d5\" not found" Feb 13 21:06:21.988270 kubelet[2680]: E0213 21:06:21.988030 2680 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4186.1.1-a-9675b630d5\" not found" Feb 13 21:06:22.088622 kubelet[2680]: E0213 21:06:22.088523 2680 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4186.1.1-a-9675b630d5\" not found" Feb 13 21:06:22.175826 kubelet[2680]: E0213 21:06:22.175768 2680 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4186.1.1-a-9675b630d5\" not found" node="ci-4186.1.1-a-9675b630d5" Feb 13 21:06:22.176061 kubelet[2680]: E0213 21:06:22.175943 2680 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4186.1.1-a-9675b630d5\" not found" node="ci-4186.1.1-a-9675b630d5" Feb 13 21:06:22.176343 kubelet[2680]: E0213 21:06:22.176286 2680 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4186.1.1-a-9675b630d5\" not found" node="ci-4186.1.1-a-9675b630d5" Feb 13 21:06:22.188894 kubelet[2680]: E0213 21:06:22.188842 2680 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4186.1.1-a-9675b630d5\" not found" Feb 13 21:06:22.290015 kubelet[2680]: E0213 21:06:22.289867 2680 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4186.1.1-a-9675b630d5\" not found" Feb 13 21:06:22.390472 kubelet[2680]: E0213 21:06:22.390368 2680 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4186.1.1-a-9675b630d5\" not found" Feb 13 21:06:22.491328 kubelet[2680]: E0213 21:06:22.491237 2680 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4186.1.1-a-9675b630d5\" not found" Feb 13 21:06:22.592150 kubelet[2680]: E0213 21:06:22.591959 2680 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4186.1.1-a-9675b630d5\" 
not found" Feb 13 21:06:22.721612 kubelet[2680]: I0213 21:06:22.721504 2680 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4186.1.1-a-9675b630d5" Feb 13 21:06:22.736683 kubelet[2680]: W0213 21:06:22.736617 2680 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 13 21:06:22.737020 kubelet[2680]: I0213 21:06:22.736856 2680 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4186.1.1-a-9675b630d5" Feb 13 21:06:22.742742 kubelet[2680]: W0213 21:06:22.742664 2680 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 13 21:06:22.742924 kubelet[2680]: I0213 21:06:22.742861 2680 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4186.1.1-a-9675b630d5" Feb 13 21:06:22.748621 kubelet[2680]: W0213 21:06:22.748530 2680 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 13 21:06:23.118282 kubelet[2680]: I0213 21:06:23.118225 2680 apiserver.go:52] "Watching apiserver" Feb 13 21:06:23.222752 kubelet[2680]: I0213 21:06:23.222656 2680 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Feb 13 21:06:23.322513 systemd[1]: Reloading requested from client PID 2998 ('systemctl') (unit session-11.scope)... Feb 13 21:06:23.322521 systemd[1]: Reloading... Feb 13 21:06:23.366740 zram_generator::config[3037]: No configuration found. Feb 13 21:06:23.436547 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 21:06:23.506318 systemd[1]: Reloading finished in 183 ms. Feb 13 21:06:23.528816 kubelet[2680]: I0213 21:06:23.528755 2680 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 21:06:23.528871 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 21:06:23.553388 systemd[1]: kubelet.service: Deactivated successfully. Feb 13 21:06:23.553904 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 21:06:23.570967 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 21:06:23.833197 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 21:06:23.843994 (kubelet)[3104]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 21:06:23.876747 kubelet[3104]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 21:06:23.876747 kubelet[3104]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Feb 13 21:06:23.876747 kubelet[3104]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 21:06:23.877010 kubelet[3104]: I0213 21:06:23.876760 3104 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 21:06:23.882003 kubelet[3104]: I0213 21:06:23.881958 3104 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" Feb 13 21:06:23.882003 kubelet[3104]: I0213 21:06:23.881973 3104 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 21:06:23.882195 kubelet[3104]: I0213 21:06:23.882156 3104 server.go:954] "Client rotation is on, will bootstrap in background" Feb 13 21:06:23.883086 kubelet[3104]: I0213 21:06:23.883046 3104 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 13 21:06:23.884645 kubelet[3104]: I0213 21:06:23.884598 3104 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 21:06:23.886526 kubelet[3104]: E0213 21:06:23.886481 3104 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Feb 13 21:06:23.886526 kubelet[3104]: I0213 21:06:23.886496 3104 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Feb 13 21:06:23.895415 kubelet[3104]: I0213 21:06:23.895398 3104 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Feb 13 21:06:23.895558 kubelet[3104]: I0213 21:06:23.895537 3104 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 21:06:23.895695 kubelet[3104]: I0213 21:06:23.895561 3104 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"ci-4186.1.1-a-9675b630d5","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Feb 13 21:06:23.895757 kubelet[3104]: I0213 21:06:23.895701 3104 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 21:06:23.895757 kubelet[3104]: I0213 21:06:23.895710 3104 container_manager_linux.go:304] "Creating device plugin manager" Feb 13 21:06:23.895757 kubelet[3104]: I0213 21:06:23.895743 3104 state_mem.go:36] "Initialized new in-memory state store" Feb 13 21:06:23.895891 kubelet[3104]: I0213 21:06:23.895885 3104 kubelet.go:446] "Attempting to sync node with API server" Feb 13 21:06:23.895920 kubelet[3104]: I0213 21:06:23.895895 3104 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 21:06:23.895920 kubelet[3104]: I0213 21:06:23.895907 3104 kubelet.go:352] "Adding apiserver pod source" Feb 13 21:06:23.895920 kubelet[3104]: I0213 21:06:23.895915 3104 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 21:06:23.896415 kubelet[3104]: I0213 21:06:23.896397 3104 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Feb 13 21:06:23.896737 kubelet[3104]: I0213 21:06:23.896725 3104 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 21:06:23.897095 kubelet[3104]: I0213 21:06:23.897085 3104 watchdog_linux.go:99] "Systemd watchdog is not enabled" Feb 13 21:06:23.897140 kubelet[3104]: I0213 21:06:23.897113 3104 server.go:1287] "Started kubelet" Feb 13 21:06:23.897178 kubelet[3104]: I0213 21:06:23.897160 3104 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 21:06:23.897308 kubelet[3104]: I0213 21:06:23.897253 3104 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 21:06:23.897497 kubelet[3104]: I0213 21:06:23.897476 3104 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 21:06:23.898179 kubelet[3104]: I0213 21:06:23.898166 3104 
fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 21:06:23.898250 kubelet[3104]: I0213 21:06:23.898229 3104 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Feb 13 21:06:23.898293 kubelet[3104]: I0213 21:06:23.898262 3104 volume_manager.go:297] "Starting Kubelet Volume Manager" Feb 13 21:06:23.898293 kubelet[3104]: E0213 21:06:23.898251 3104 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4186.1.1-a-9675b630d5\" not found" Feb 13 21:06:23.898361 kubelet[3104]: I0213 21:06:23.898294 3104 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Feb 13 21:06:23.899789 kubelet[3104]: I0213 21:06:23.899766 3104 reconciler.go:26] "Reconciler: start to sync state" Feb 13 21:06:23.900710 kubelet[3104]: I0213 21:06:23.900691 3104 factory.go:221] Registration of the systemd container factory successfully Feb 13 21:06:23.900805 kubelet[3104]: I0213 21:06:23.900787 3104 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 21:06:23.900993 kubelet[3104]: E0213 21:06:23.900976 3104 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 21:06:23.901372 kubelet[3104]: I0213 21:06:23.901357 3104 server.go:490] "Adding debug handlers to kubelet server" Feb 13 21:06:23.902182 kubelet[3104]: I0213 21:06:23.902171 3104 factory.go:221] Registration of the containerd container factory successfully Feb 13 21:06:23.906264 kubelet[3104]: I0213 21:06:23.906235 3104 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 21:06:23.907337 kubelet[3104]: I0213 21:06:23.907302 3104 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 13 21:06:23.907511 kubelet[3104]: I0213 21:06:23.907492 3104 status_manager.go:227] "Starting to sync pod status with apiserver" Feb 13 21:06:23.907575 kubelet[3104]: I0213 21:06:23.907526 3104 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Feb 13 21:06:23.907575 kubelet[3104]: I0213 21:06:23.907550 3104 kubelet.go:2388] "Starting kubelet main sync loop" Feb 13 21:06:23.907656 kubelet[3104]: E0213 21:06:23.907597 3104 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 21:06:23.920804 kubelet[3104]: I0213 21:06:23.920763 3104 cpu_manager.go:221] "Starting CPU manager" policy="none" Feb 13 21:06:23.920804 kubelet[3104]: I0213 21:06:23.920776 3104 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Feb 13 21:06:23.920804 kubelet[3104]: I0213 21:06:23.920791 3104 state_mem.go:36] "Initialized new in-memory state store" Feb 13 21:06:23.920925 kubelet[3104]: I0213 21:06:23.920911 3104 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 13 21:06:23.920948 kubelet[3104]: I0213 21:06:23.920919 3104 state_mem.go:96] "Updated CPUSet assignments" assignments={} Feb 13 21:06:23.920948 kubelet[3104]: I0213 21:06:23.920934 3104 policy_none.go:49] "None policy: Start" Feb 13 21:06:23.920948 kubelet[3104]: I0213 21:06:23.920941 3104 memory_manager.go:186] "Starting memorymanager" policy="None" Feb 13 21:06:23.920948 kubelet[3104]: I0213 21:06:23.920948 3104 state_mem.go:35] "Initializing new in-memory state store" Feb 13 21:06:23.921032 kubelet[3104]: I0213 21:06:23.921024 3104 state_mem.go:75] "Updated machine memory state" Feb 13 21:06:23.923673 kubelet[3104]: I0213 21:06:23.923624 3104 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 21:06:23.923793 kubelet[3104]: I0213 21:06:23.923754 3104 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 13 21:06:23.923793 kubelet[3104]: I0213 21:06:23.923763 3104 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 21:06:23.923883 kubelet[3104]: I0213 21:06:23.923872 3104 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 21:06:23.924383 kubelet[3104]: E0213 21:06:23.924371 3104 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Feb 13 21:06:24.008157 kubelet[3104]: I0213 21:06:24.008105 3104 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4186.1.1-a-9675b630d5" Feb 13 21:06:24.008157 kubelet[3104]: I0213 21:06:24.008110 3104 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4186.1.1-a-9675b630d5" Feb 13 21:06:24.008157 kubelet[3104]: I0213 21:06:24.008144 3104 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4186.1.1-a-9675b630d5" Feb 13 21:06:24.012414 kubelet[3104]: W0213 21:06:24.012393 3104 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 13 21:06:24.012502 kubelet[3104]: E0213 21:06:24.012441 3104 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4186.1.1-a-9675b630d5\" already exists" pod="kube-system/kube-apiserver-ci-4186.1.1-a-9675b630d5" Feb 13 21:06:24.012711 kubelet[3104]: W0213 21:06:24.012702 3104 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 13 21:06:24.012758 kubelet[3104]: W0213 21:06:24.012719 3104 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 13 21:06:24.012758 kubelet[3104]: E0213 21:06:24.012732 3104 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4186.1.1-a-9675b630d5\" already exists" pod="kube-system/kube-scheduler-ci-4186.1.1-a-9675b630d5" Feb 13 21:06:24.012758 kubelet[3104]: E0213 21:06:24.012750 3104 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4186.1.1-a-9675b630d5\" already exists" pod="kube-system/kube-controller-manager-ci-4186.1.1-a-9675b630d5" Feb 13 21:06:24.026772 kubelet[3104]: I0213 21:06:24.026711 3104 kubelet_node_status.go:76] "Attempting to register node" node="ci-4186.1.1-a-9675b630d5" Feb 13 21:06:24.031821 kubelet[3104]: I0213 21:06:24.031799 3104 kubelet_node_status.go:125] "Node was previously registered" node="ci-4186.1.1-a-9675b630d5" Feb 13 21:06:24.031918 kubelet[3104]: I0213 21:06:24.031887 3104 kubelet_node_status.go:79] "Successfully registered node" node="ci-4186.1.1-a-9675b630d5" Feb 13 21:06:24.101883 kubelet[3104]: I0213 21:06:24.101615 3104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/60d9b67da2345274ea30dcca720e1870-kubeconfig\") pod \"kube-scheduler-ci-4186.1.1-a-9675b630d5\" (UID: \"60d9b67da2345274ea30dcca720e1870\") " pod="kube-system/kube-scheduler-ci-4186.1.1-a-9675b630d5" Feb 13 21:06:24.101883 kubelet[3104]: I0213 21:06:24.101764 3104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b2fedec72f1d1b5bfed47789f111e42d-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4186.1.1-a-9675b630d5\" (UID: \"b2fedec72f1d1b5bfed47789f111e42d\") " pod="kube-system/kube-apiserver-ci-4186.1.1-a-9675b630d5" Feb 13 21:06:24.101883 kubelet[3104]: I0213 21:06:24.101861 3104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/da3cd39f4e3cb3a1e4698ab4216fbbda-ca-certs\") pod \"kube-controller-manager-ci-4186.1.1-a-9675b630d5\" (UID: \"da3cd39f4e3cb3a1e4698ab4216fbbda\") " pod="kube-system/kube-controller-manager-ci-4186.1.1-a-9675b630d5" Feb 13 21:06:24.102260 kubelet[3104]: I0213 21:06:24.101915 3104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/da3cd39f4e3cb3a1e4698ab4216fbbda-flexvolume-dir\") pod \"kube-controller-manager-ci-4186.1.1-a-9675b630d5\" (UID: \"da3cd39f4e3cb3a1e4698ab4216fbbda\") " pod="kube-system/kube-controller-manager-ci-4186.1.1-a-9675b630d5" Feb 13 21:06:24.102260 kubelet[3104]: I0213 21:06:24.101967 3104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/da3cd39f4e3cb3a1e4698ab4216fbbda-k8s-certs\") pod \"kube-controller-manager-ci-4186.1.1-a-9675b630d5\" (UID: \"da3cd39f4e3cb3a1e4698ab4216fbbda\") " pod="kube-system/kube-controller-manager-ci-4186.1.1-a-9675b630d5" Feb 13 21:06:24.102260 kubelet[3104]: I0213 21:06:24.102020 3104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/da3cd39f4e3cb3a1e4698ab4216fbbda-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4186.1.1-a-9675b630d5\" (UID: \"da3cd39f4e3cb3a1e4698ab4216fbbda\") " pod="kube-system/kube-controller-manager-ci-4186.1.1-a-9675b630d5" Feb 13 21:06:24.102260 kubelet[3104]: I0213 21:06:24.102068 3104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b2fedec72f1d1b5bfed47789f111e42d-ca-certs\") pod \"kube-apiserver-ci-4186.1.1-a-9675b630d5\" (UID: \"b2fedec72f1d1b5bfed47789f111e42d\") " pod="kube-system/kube-apiserver-ci-4186.1.1-a-9675b630d5" Feb 13 21:06:24.102260 kubelet[3104]: I0213 21:06:24.102113 3104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b2fedec72f1d1b5bfed47789f111e42d-k8s-certs\") pod \"kube-apiserver-ci-4186.1.1-a-9675b630d5\" (UID: \"b2fedec72f1d1b5bfed47789f111e42d\") " pod="kube-system/kube-apiserver-ci-4186.1.1-a-9675b630d5" Feb 13 21:06:24.102753 kubelet[3104]: I0213 21:06:24.102162 3104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/da3cd39f4e3cb3a1e4698ab4216fbbda-kubeconfig\") pod \"kube-controller-manager-ci-4186.1.1-a-9675b630d5\" (UID: \"da3cd39f4e3cb3a1e4698ab4216fbbda\") " pod="kube-system/kube-controller-manager-ci-4186.1.1-a-9675b630d5" Feb 13 21:06:24.897012 kubelet[3104]: I0213 21:06:24.896983 3104 apiserver.go:52] "Watching apiserver" Feb 13 21:06:24.899353 kubelet[3104]: I0213 21:06:24.899335 3104 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Feb 13 21:06:24.911330 kubelet[3104]: I0213 21:06:24.911313 3104 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4186.1.1-a-9675b630d5" Feb 13 21:06:24.914147 kubelet[3104]: W0213 21:06:24.914133 3104 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 13 21:06:24.914214 kubelet[3104]: E0213 21:06:24.914175 3104 
kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4186.1.1-a-9675b630d5\" already exists" pod="kube-system/kube-apiserver-ci-4186.1.1-a-9675b630d5" Feb 13 21:06:24.922761 kubelet[3104]: I0213 21:06:24.922725 3104 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4186.1.1-a-9675b630d5" podStartSLOduration=2.922714452 podStartE2EDuration="2.922714452s" podCreationTimestamp="2025-02-13 21:06:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 21:06:24.922713066 +0000 UTC m=+1.073896860" watchObservedRunningTime="2025-02-13 21:06:24.922714452 +0000 UTC m=+1.073898243" Feb 13 21:06:24.931022 kubelet[3104]: I0213 21:06:24.930993 3104 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4186.1.1-a-9675b630d5" podStartSLOduration=2.9309817110000003 podStartE2EDuration="2.930981711s" podCreationTimestamp="2025-02-13 21:06:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 21:06:24.930886191 +0000 UTC m=+1.082069986" watchObservedRunningTime="2025-02-13 21:06:24.930981711 +0000 UTC m=+1.082165505" Feb 13 21:06:24.931136 kubelet[3104]: I0213 21:06:24.931047 3104 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4186.1.1-a-9675b630d5" podStartSLOduration=2.931043601 podStartE2EDuration="2.931043601s" podCreationTimestamp="2025-02-13 21:06:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 21:06:24.926661731 +0000 UTC m=+1.077845532" watchObservedRunningTime="2025-02-13 21:06:24.931043601 +0000 UTC m=+1.082227393" Feb 13 21:06:27.899077 sudo[2071]: pam_unix(sudo:session): session closed for user root Feb 13 21:06:27.899866 sshd[2070]: Connection closed by 139.178.89.65 port 51884 Feb 13 21:06:27.900016 sshd-session[2068]: pam_unix(sshd:session): session closed for user core Feb 13 21:06:27.901472 systemd[1]: sshd@10-147.28.180.173:22-139.178.89.65:51884.service: Deactivated successfully. Feb 13 21:06:27.902388 systemd[1]: session-11.scope: Deactivated successfully. Feb 13 21:06:27.902472 systemd[1]: session-11.scope: Consumed 3.102s CPU time, 161.6M memory peak, 0B memory swap peak. Feb 13 21:06:27.903177 systemd-logind[1776]: Session 11 logged out. Waiting for processes to exit. Feb 13 21:06:27.903898 systemd-logind[1776]: Removed session 11. Feb 13 21:06:28.231074 kubelet[3104]: I0213 21:06:28.230983 3104 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 13 21:06:28.231277 kubelet[3104]: I0213 21:06:28.231269 3104 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 13 21:06:28.231299 containerd[1793]: time="2025-02-13T21:06:28.231148308Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Feb 13 21:06:28.271817 systemd[1]: Created slice kubepods-besteffort-pod8ba37665_8627_4be6_942b_1021ca2a54b1.slice - libcontainer container kubepods-besteffort-pod8ba37665_8627_4be6_942b_1021ca2a54b1.slice. 
Feb 13 21:06:28.332451 kubelet[3104]: I0213 21:06:28.332330 3104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/8ba37665-8627-4be6-942b-1021ca2a54b1-kube-proxy\") pod \"kube-proxy-n56h7\" (UID: \"8ba37665-8627-4be6-942b-1021ca2a54b1\") " pod="kube-system/kube-proxy-n56h7" Feb 13 21:06:28.332451 kubelet[3104]: I0213 21:06:28.332430 3104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-82wkw\" (UniqueName: \"kubernetes.io/projected/8ba37665-8627-4be6-942b-1021ca2a54b1-kube-api-access-82wkw\") pod \"kube-proxy-n56h7\" (UID: \"8ba37665-8627-4be6-942b-1021ca2a54b1\") " pod="kube-system/kube-proxy-n56h7" Feb 13 21:06:28.332860 kubelet[3104]: I0213 21:06:28.332516 3104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8ba37665-8627-4be6-942b-1021ca2a54b1-xtables-lock\") pod \"kube-proxy-n56h7\" (UID: \"8ba37665-8627-4be6-942b-1021ca2a54b1\") " pod="kube-system/kube-proxy-n56h7" Feb 13 21:06:28.332860 kubelet[3104]: I0213 21:06:28.332569 3104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8ba37665-8627-4be6-942b-1021ca2a54b1-lib-modules\") pod \"kube-proxy-n56h7\" (UID: \"8ba37665-8627-4be6-942b-1021ca2a54b1\") " pod="kube-system/kube-proxy-n56h7" Feb 13 21:06:28.446166 kubelet[3104]: E0213 21:06:28.446071 3104 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Feb 13 21:06:28.446166 kubelet[3104]: E0213 21:06:28.446136 3104 projected.go:194] Error preparing data for projected volume kube-api-access-82wkw for pod kube-system/kube-proxy-n56h7: configmap "kube-root-ca.crt" not found Feb 13 21:06:28.446489 kubelet[3104]: E0213 21:06:28.446281 3104 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8ba37665-8627-4be6-942b-1021ca2a54b1-kube-api-access-82wkw podName:8ba37665-8627-4be6-942b-1021ca2a54b1 nodeName:}" failed. No retries permitted until 2025-02-13 21:06:28.946218088 +0000 UTC m=+5.097401951 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-82wkw" (UniqueName: "kubernetes.io/projected/8ba37665-8627-4be6-942b-1021ca2a54b1-kube-api-access-82wkw") pod "kube-proxy-n56h7" (UID: "8ba37665-8627-4be6-942b-1021ca2a54b1") : configmap "kube-root-ca.crt" not found Feb 13 21:06:29.188585 containerd[1793]: time="2025-02-13T21:06:29.188466667Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-n56h7,Uid:8ba37665-8627-4be6-942b-1021ca2a54b1,Namespace:kube-system,Attempt:0,}" Feb 13 21:06:29.199752 containerd[1793]: time="2025-02-13T21:06:29.199665909Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 21:06:29.199752 containerd[1793]: time="2025-02-13T21:06:29.199706239Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 21:06:29.199752 containerd[1793]: time="2025-02-13T21:06:29.199727434Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 21:06:29.199992 containerd[1793]: time="2025-02-13T21:06:29.199972572Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 21:06:29.216835 systemd[1]: Started cri-containerd-45ec649e85d0ce405cd9ce825d6efa0f06a2b2e24712d3ac01ac3fd34605c06a.scope - libcontainer container 45ec649e85d0ce405cd9ce825d6efa0f06a2b2e24712d3ac01ac3fd34605c06a. Feb 13 21:06:29.226261 containerd[1793]: time="2025-02-13T21:06:29.226237156Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-n56h7,Uid:8ba37665-8627-4be6-942b-1021ca2a54b1,Namespace:kube-system,Attempt:0,} returns sandbox id \"45ec649e85d0ce405cd9ce825d6efa0f06a2b2e24712d3ac01ac3fd34605c06a\"" Feb 13 21:06:29.227388 containerd[1793]: time="2025-02-13T21:06:29.227375421Z" level=info msg="CreateContainer within sandbox \"45ec649e85d0ce405cd9ce825d6efa0f06a2b2e24712d3ac01ac3fd34605c06a\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 13 21:06:29.232659 containerd[1793]: time="2025-02-13T21:06:29.232637173Z" level=info msg="CreateContainer within sandbox \"45ec649e85d0ce405cd9ce825d6efa0f06a2b2e24712d3ac01ac3fd34605c06a\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"c6a65c0c0b313a52672b6ab574d34179d8623338e45871f324ec9da9d35bdaa8\"" Feb 13 21:06:29.232919 containerd[1793]: time="2025-02-13T21:06:29.232869942Z" level=info msg="StartContainer for \"c6a65c0c0b313a52672b6ab574d34179d8623338e45871f324ec9da9d35bdaa8\"" Feb 13 21:06:29.253781 systemd[1]: Started cri-containerd-c6a65c0c0b313a52672b6ab574d34179d8623338e45871f324ec9da9d35bdaa8.scope - libcontainer container c6a65c0c0b313a52672b6ab574d34179d8623338e45871f324ec9da9d35bdaa8. Feb 13 21:06:29.272868 containerd[1793]: time="2025-02-13T21:06:29.272842421Z" level=info msg="StartContainer for \"c6a65c0c0b313a52672b6ab574d34179d8623338e45871f324ec9da9d35bdaa8\" returns successfully" Feb 13 21:06:29.380532 systemd[1]: Created slice kubepods-besteffort-pod159122a3_beaf_4423_b0fc_9eb388e9c35c.slice - libcontainer container kubepods-besteffort-pod159122a3_beaf_4423_b0fc_9eb388e9c35c.slice. Feb 13 21:06:29.442310 kubelet[3104]: I0213 21:06:29.442188 3104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/159122a3-beaf-4423-b0fc-9eb388e9c35c-var-lib-calico\") pod \"tigera-operator-7d68577dc5-zbfpv\" (UID: \"159122a3-beaf-4423-b0fc-9eb388e9c35c\") " pod="tigera-operator/tigera-operator-7d68577dc5-zbfpv" Feb 13 21:06:29.442310 kubelet[3104]: I0213 21:06:29.442248 3104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bkm22\" (UniqueName: \"kubernetes.io/projected/159122a3-beaf-4423-b0fc-9eb388e9c35c-kube-api-access-bkm22\") pod \"tigera-operator-7d68577dc5-zbfpv\" (UID: \"159122a3-beaf-4423-b0fc-9eb388e9c35c\") " pod="tigera-operator/tigera-operator-7d68577dc5-zbfpv" Feb 13 21:06:29.684059 containerd[1793]: time="2025-02-13T21:06:29.683973825Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7d68577dc5-zbfpv,Uid:159122a3-beaf-4423-b0fc-9eb388e9c35c,Namespace:tigera-operator,Attempt:0,}" Feb 13 21:06:29.698656 containerd[1793]: time="2025-02-13T21:06:29.698564470Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 21:06:29.698656 containerd[1793]: time="2025-02-13T21:06:29.698596309Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 21:06:29.698656 containerd[1793]: time="2025-02-13T21:06:29.698604132Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 21:06:29.698754 containerd[1793]: time="2025-02-13T21:06:29.698667490Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 21:06:29.715781 systemd[1]: Started cri-containerd-af86002eed5f9e85cfc6f3dbef42daaf4e8e544f41966080326b82ed2476dbdc.scope - libcontainer container af86002eed5f9e85cfc6f3dbef42daaf4e8e544f41966080326b82ed2476dbdc. Feb 13 21:06:29.737779 containerd[1793]: time="2025-02-13T21:06:29.737756710Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7d68577dc5-zbfpv,Uid:159122a3-beaf-4423-b0fc-9eb388e9c35c,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"af86002eed5f9e85cfc6f3dbef42daaf4e8e544f41966080326b82ed2476dbdc\"" Feb 13 21:06:29.738532 containerd[1793]: time="2025-02-13T21:06:29.738519061Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\"" Feb 13 21:06:30.101275 kubelet[3104]: I0213 21:06:30.101140 3104 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-n56h7" podStartSLOduration=2.101090524 podStartE2EDuration="2.101090524s" podCreationTimestamp="2025-02-13 21:06:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 21:06:29.946308387 +0000 UTC m=+6.097492246" watchObservedRunningTime="2025-02-13 21:06:30.101090524 +0000 UTC m=+6.252274366" Feb 13 21:06:31.947645 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1988609159.mount: Deactivated successfully. 
Feb 13 21:06:33.153442 containerd[1793]: time="2025-02-13T21:06:33.153416804Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 21:06:33.153654 containerd[1793]: time="2025-02-13T21:06:33.153606791Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=21762497" Feb 13 21:06:33.154001 containerd[1793]: time="2025-02-13T21:06:33.153991370Z" level=info msg="ImageCreate event name:\"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 21:06:33.154993 containerd[1793]: time="2025-02-13T21:06:33.154982628Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 21:06:33.155827 containerd[1793]: time="2025-02-13T21:06:33.155785948Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"21758492\" in 3.417250444s" Feb 13 21:06:33.155827 containerd[1793]: time="2025-02-13T21:06:33.155801468Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\"" Feb 13 21:06:33.156749 containerd[1793]: time="2025-02-13T21:06:33.156734586Z" level=info msg="CreateContainer within sandbox \"af86002eed5f9e85cfc6f3dbef42daaf4e8e544f41966080326b82ed2476dbdc\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Feb 13 21:06:33.177938 containerd[1793]: time="2025-02-13T21:06:33.177922202Z" level=info msg="CreateContainer within sandbox \"af86002eed5f9e85cfc6f3dbef42daaf4e8e544f41966080326b82ed2476dbdc\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"db0223efc8a27b4bb3cb7c58b366243aea61bbee472e7e06ea99dc7ffd9d6fa3\"" Feb 13 21:06:33.178113 containerd[1793]: time="2025-02-13T21:06:33.178103627Z" level=info msg="StartContainer for \"db0223efc8a27b4bb3cb7c58b366243aea61bbee472e7e06ea99dc7ffd9d6fa3\"" Feb 13 21:06:33.197931 systemd[1]: Started cri-containerd-db0223efc8a27b4bb3cb7c58b366243aea61bbee472e7e06ea99dc7ffd9d6fa3.scope - libcontainer container db0223efc8a27b4bb3cb7c58b366243aea61bbee472e7e06ea99dc7ffd9d6fa3. 
Feb 13 21:06:33.208749 containerd[1793]: time="2025-02-13T21:06:33.208695904Z" level=info msg="StartContainer for \"db0223efc8a27b4bb3cb7c58b366243aea61bbee472e7e06ea99dc7ffd9d6fa3\" returns successfully" Feb 13 21:06:33.951744 kubelet[3104]: I0213 21:06:33.951713 3104 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7d68577dc5-zbfpv" podStartSLOduration=1.533819965 podStartE2EDuration="4.951701781s" podCreationTimestamp="2025-02-13 21:06:29 +0000 UTC" firstStartedPulling="2025-02-13 21:06:29.738301359 +0000 UTC m=+5.889485151" lastFinishedPulling="2025-02-13 21:06:33.156183164 +0000 UTC m=+9.307366967" observedRunningTime="2025-02-13 21:06:33.951591688 +0000 UTC m=+10.102775485" watchObservedRunningTime="2025-02-13 21:06:33.951701781 +0000 UTC m=+10.102885575" Feb 13 21:06:34.714912 systemd[1]: Started sshd@12-147.28.180.173:22-109.206.236.167:46998.service - OpenSSH per-connection server daemon (109.206.236.167:46998). Feb 13 21:06:35.574466 sshd[3577]: Invalid user ec2-user from 109.206.236.167 port 46998 Feb 13 21:06:35.768634 sshd[3577]: Connection closed by invalid user ec2-user 109.206.236.167 port 46998 [preauth] Feb 13 21:06:35.769464 systemd[1]: sshd@12-147.28.180.173:22-109.206.236.167:46998.service: Deactivated successfully. Feb 13 21:06:36.004706 systemd[1]: Created slice kubepods-besteffort-pod791ddaf9_79c0_419b_8118_21d7d4c1a96d.slice - libcontainer container kubepods-besteffort-pod791ddaf9_79c0_419b_8118_21d7d4c1a96d.slice. Feb 13 21:06:36.049961 systemd[1]: Created slice kubepods-besteffort-pod7fb0f7b1_61f2_47e3_8864_bf019aae3bc7.slice - libcontainer container kubepods-besteffort-pod7fb0f7b1_61f2_47e3_8864_bf019aae3bc7.slice. Feb 13 21:06:36.094865 kubelet[3104]: I0213 21:06:36.094756 3104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/7fb0f7b1-61f2-47e3-8864-bf019aae3bc7-cni-log-dir\") pod \"calico-node-clkg8\" (UID: \"7fb0f7b1-61f2-47e3-8864-bf019aae3bc7\") " pod="calico-system/calico-node-clkg8" Feb 13 21:06:36.094865 kubelet[3104]: I0213 21:06:36.094846 3104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/791ddaf9-79c0-419b-8118-21d7d4c1a96d-typha-certs\") pod \"calico-typha-7db8b79dd5-5fhnq\" (UID: \"791ddaf9-79c0-419b-8118-21d7d4c1a96d\") " pod="calico-system/calico-typha-7db8b79dd5-5fhnq" Feb 13 21:06:36.096148 kubelet[3104]: I0213 21:06:36.094900 3104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7fb0f7b1-61f2-47e3-8864-bf019aae3bc7-lib-modules\") pod \"calico-node-clkg8\" (UID: \"7fb0f7b1-61f2-47e3-8864-bf019aae3bc7\") " pod="calico-system/calico-node-clkg8" Feb 13 21:06:36.096148 kubelet[3104]: I0213 21:06:36.094951 3104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/7fb0f7b1-61f2-47e3-8864-bf019aae3bc7-node-certs\") pod \"calico-node-clkg8\" (UID: \"7fb0f7b1-61f2-47e3-8864-bf019aae3bc7\") " pod="calico-system/calico-node-clkg8" Feb 13 21:06:36.096148 kubelet[3104]: I0213 21:06:36.094995 3104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/7fb0f7b1-61f2-47e3-8864-bf019aae3bc7-policysync\") pod 
\"calico-node-clkg8\" (UID: \"7fb0f7b1-61f2-47e3-8864-bf019aae3bc7\") " pod="calico-system/calico-node-clkg8" Feb 13 21:06:36.096148 kubelet[3104]: I0213 21:06:36.095036 3104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/7fb0f7b1-61f2-47e3-8864-bf019aae3bc7-cni-net-dir\") pod \"calico-node-clkg8\" (UID: \"7fb0f7b1-61f2-47e3-8864-bf019aae3bc7\") " pod="calico-system/calico-node-clkg8" Feb 13 21:06:36.096148 kubelet[3104]: I0213 21:06:36.095079 3104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s58pd\" (UniqueName: \"kubernetes.io/projected/7fb0f7b1-61f2-47e3-8864-bf019aae3bc7-kube-api-access-s58pd\") pod \"calico-node-clkg8\" (UID: \"7fb0f7b1-61f2-47e3-8864-bf019aae3bc7\") " pod="calico-system/calico-node-clkg8" Feb 13 21:06:36.096909 kubelet[3104]: I0213 21:06:36.095273 3104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/7fb0f7b1-61f2-47e3-8864-bf019aae3bc7-var-lib-calico\") pod \"calico-node-clkg8\" (UID: \"7fb0f7b1-61f2-47e3-8864-bf019aae3bc7\") " pod="calico-system/calico-node-clkg8" Feb 13 21:06:36.096909 kubelet[3104]: I0213 21:06:36.095382 3104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/7fb0f7b1-61f2-47e3-8864-bf019aae3bc7-flexvol-driver-host\") pod \"calico-node-clkg8\" (UID: \"7fb0f7b1-61f2-47e3-8864-bf019aae3bc7\") " pod="calico-system/calico-node-clkg8" Feb 13 21:06:36.096909 kubelet[3104]: I0213 21:06:36.095445 3104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7fb0f7b1-61f2-47e3-8864-bf019aae3bc7-tigera-ca-bundle\") pod \"calico-node-clkg8\" (UID: \"7fb0f7b1-61f2-47e3-8864-bf019aae3bc7\") " pod="calico-system/calico-node-clkg8" Feb 13 21:06:36.096909 kubelet[3104]: I0213 21:06:36.095498 3104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/791ddaf9-79c0-419b-8118-21d7d4c1a96d-tigera-ca-bundle\") pod \"calico-typha-7db8b79dd5-5fhnq\" (UID: \"791ddaf9-79c0-419b-8118-21d7d4c1a96d\") " pod="calico-system/calico-typha-7db8b79dd5-5fhnq" Feb 13 21:06:36.096909 kubelet[3104]: I0213 21:06:36.095565 3104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qpdk2\" (UniqueName: \"kubernetes.io/projected/791ddaf9-79c0-419b-8118-21d7d4c1a96d-kube-api-access-qpdk2\") pod \"calico-typha-7db8b79dd5-5fhnq\" (UID: \"791ddaf9-79c0-419b-8118-21d7d4c1a96d\") " pod="calico-system/calico-typha-7db8b79dd5-5fhnq" Feb 13 21:06:36.097492 kubelet[3104]: I0213 21:06:36.095657 3104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/7fb0f7b1-61f2-47e3-8864-bf019aae3bc7-var-run-calico\") pod \"calico-node-clkg8\" (UID: \"7fb0f7b1-61f2-47e3-8864-bf019aae3bc7\") " pod="calico-system/calico-node-clkg8" Feb 13 21:06:36.097492 kubelet[3104]: I0213 21:06:36.095720 3104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: 
\"kubernetes.io/host-path/7fb0f7b1-61f2-47e3-8864-bf019aae3bc7-cni-bin-dir\") pod \"calico-node-clkg8\" (UID: \"7fb0f7b1-61f2-47e3-8864-bf019aae3bc7\") " pod="calico-system/calico-node-clkg8" Feb 13 21:06:36.097492 kubelet[3104]: I0213 21:06:36.095797 3104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7fb0f7b1-61f2-47e3-8864-bf019aae3bc7-xtables-lock\") pod \"calico-node-clkg8\" (UID: \"7fb0f7b1-61f2-47e3-8864-bf019aae3bc7\") " pod="calico-system/calico-node-clkg8" Feb 13 21:06:36.170038 kubelet[3104]: E0213 21:06:36.169948 3104 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dv5v9" podUID="22f50a6b-4846-46c1-8c41-99e176056718" Feb 13 21:06:36.197377 kubelet[3104]: I0213 21:06:36.197268 3104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/22f50a6b-4846-46c1-8c41-99e176056718-registration-dir\") pod \"csi-node-driver-dv5v9\" (UID: \"22f50a6b-4846-46c1-8c41-99e176056718\") " pod="calico-system/csi-node-driver-dv5v9" Feb 13 21:06:36.197377 kubelet[3104]: I0213 21:06:36.197359 3104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kmqq2\" (UniqueName: \"kubernetes.io/projected/22f50a6b-4846-46c1-8c41-99e176056718-kube-api-access-kmqq2\") pod \"csi-node-driver-dv5v9\" (UID: \"22f50a6b-4846-46c1-8c41-99e176056718\") " pod="calico-system/csi-node-driver-dv5v9" Feb 13 21:06:36.197848 kubelet[3104]: I0213 21:06:36.197707 3104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/22f50a6b-4846-46c1-8c41-99e176056718-socket-dir\") pod \"csi-node-driver-dv5v9\" (UID: \"22f50a6b-4846-46c1-8c41-99e176056718\") " pod="calico-system/csi-node-driver-dv5v9" Feb 13 21:06:36.198057 kubelet[3104]: I0213 21:06:36.197994 3104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/22f50a6b-4846-46c1-8c41-99e176056718-kubelet-dir\") pod \"csi-node-driver-dv5v9\" (UID: \"22f50a6b-4846-46c1-8c41-99e176056718\") " pod="calico-system/csi-node-driver-dv5v9" Feb 13 21:06:36.198269 kubelet[3104]: I0213 21:06:36.198197 3104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/22f50a6b-4846-46c1-8c41-99e176056718-varrun\") pod \"csi-node-driver-dv5v9\" (UID: \"22f50a6b-4846-46c1-8c41-99e176056718\") " pod="calico-system/csi-node-driver-dv5v9" Feb 13 21:06:36.199278 kubelet[3104]: E0213 21:06:36.199222 3104 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 21:06:36.199278 kubelet[3104]: W0213 21:06:36.199260 3104 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 21:06:36.199671 kubelet[3104]: E0213 21:06:36.199313 3104 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, 
skipping. Error: unexpected end of JSON input" Feb 13 21:06:36.199914 kubelet[3104]: E0213 21:06:36.199868 3104 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 21:06:36.199914 kubelet[3104]: W0213 21:06:36.199904 3104 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 21:06:36.200279 kubelet[3104]: E0213 21:06:36.199948 3104 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 21:06:36.200509 kubelet[3104]: E0213 21:06:36.200461 3104 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 21:06:36.200509 kubelet[3104]: W0213 21:06:36.200498 3104 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 21:06:36.200865 kubelet[3104]: E0213 21:06:36.200530 3104 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 21:06:36.201140 kubelet[3104]: E0213 21:06:36.201090 3104 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 21:06:36.201282 kubelet[3104]: W0213 21:06:36.201144 3104 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 21:06:36.201282 kubelet[3104]: E0213 21:06:36.201198 3104 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 21:06:36.201760 kubelet[3104]: E0213 21:06:36.201726 3104 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 21:06:36.201760 kubelet[3104]: W0213 21:06:36.201751 3104 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 21:06:36.202007 kubelet[3104]: E0213 21:06:36.201782 3104 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 21:06:36.202428 kubelet[3104]: E0213 21:06:36.202376 3104 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 21:06:36.202428 kubelet[3104]: W0213 21:06:36.202418 3104 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 21:06:36.202825 kubelet[3104]: E0213 21:06:36.202527 3104 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 21:06:36.203237 kubelet[3104]: E0213 21:06:36.203187 3104 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 21:06:36.203435 kubelet[3104]: W0213 21:06:36.203237 3104 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 21:06:36.203832 kubelet[3104]: E0213 21:06:36.203714 3104 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 21:06:36.204573 kubelet[3104]: E0213 21:06:36.204522 3104 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 21:06:36.204829 kubelet[3104]: W0213 21:06:36.204575 3104 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 21:06:36.204829 kubelet[3104]: E0213 21:06:36.204682 3104 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 21:06:36.205792 kubelet[3104]: E0213 21:06:36.205433 3104 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 21:06:36.205792 kubelet[3104]: W0213 21:06:36.205495 3104 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 21:06:36.205792 kubelet[3104]: E0213 21:06:36.205548 3104 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 21:06:36.208725 kubelet[3104]: E0213 21:06:36.208669 3104 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 21:06:36.208984 kubelet[3104]: W0213 21:06:36.208727 3104 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 21:06:36.208984 kubelet[3104]: E0213 21:06:36.208799 3104 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 21:06:36.209561 kubelet[3104]: E0213 21:06:36.209510 3104 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 21:06:36.209832 kubelet[3104]: W0213 21:06:36.209558 3104 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 21:06:36.209832 kubelet[3104]: E0213 21:06:36.209622 3104 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 21:06:36.210464 kubelet[3104]: E0213 21:06:36.210422 3104 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 21:06:36.210695 kubelet[3104]: W0213 21:06:36.210469 3104 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 21:06:36.210695 kubelet[3104]: E0213 21:06:36.210519 3104 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 21:06:36.217795 kubelet[3104]: E0213 21:06:36.217769 3104 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 21:06:36.217795 kubelet[3104]: W0213 21:06:36.217792 3104 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 21:06:36.217972 kubelet[3104]: E0213 21:06:36.217815 3104 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 21:06:36.219968 kubelet[3104]: E0213 21:06:36.219935 3104 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 21:06:36.220128 kubelet[3104]: W0213 21:06:36.219963 3104 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 21:06:36.220128 kubelet[3104]: E0213 21:06:36.220004 3104 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 21:06:36.300081 kubelet[3104]: E0213 21:06:36.300029 3104 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 21:06:36.300081 kubelet[3104]: W0213 21:06:36.300047 3104 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 21:06:36.300081 kubelet[3104]: E0213 21:06:36.300064 3104 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 21:06:36.300326 kubelet[3104]: E0213 21:06:36.300311 3104 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 21:06:36.300326 kubelet[3104]: W0213 21:06:36.300324 3104 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 21:06:36.300471 kubelet[3104]: E0213 21:06:36.300356 3104 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 21:06:36.300598 kubelet[3104]: E0213 21:06:36.300588 3104 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 21:06:36.300655 kubelet[3104]: W0213 21:06:36.300598 3104 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 21:06:36.300655 kubelet[3104]: E0213 21:06:36.300612 3104 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 21:06:36.300927 kubelet[3104]: E0213 21:06:36.300883 3104 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 21:06:36.300927 kubelet[3104]: W0213 21:06:36.300898 3104 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 21:06:36.300927 kubelet[3104]: E0213 21:06:36.300914 3104 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 21:06:36.301170 kubelet[3104]: E0213 21:06:36.301129 3104 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 21:06:36.301170 kubelet[3104]: W0213 21:06:36.301140 3104 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 21:06:36.301170 kubelet[3104]: E0213 21:06:36.301155 3104 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 21:06:36.301383 kubelet[3104]: E0213 21:06:36.301332 3104 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 21:06:36.301383 kubelet[3104]: W0213 21:06:36.301342 3104 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 21:06:36.301383 kubelet[3104]: E0213 21:06:36.301354 3104 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 21:06:36.301734 kubelet[3104]: E0213 21:06:36.301659 3104 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 21:06:36.301734 kubelet[3104]: W0213 21:06:36.301683 3104 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 21:06:36.301734 kubelet[3104]: E0213 21:06:36.301713 3104 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 21:06:36.301942 kubelet[3104]: E0213 21:06:36.301924 3104 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 21:06:36.301942 kubelet[3104]: W0213 21:06:36.301938 3104 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 21:06:36.302081 kubelet[3104]: E0213 21:06:36.301955 3104 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 21:06:36.302162 kubelet[3104]: E0213 21:06:36.302144 3104 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 21:06:36.302162 kubelet[3104]: W0213 21:06:36.302160 3104 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 21:06:36.302286 kubelet[3104]: E0213 21:06:36.302209 3104 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 21:06:36.302353 kubelet[3104]: E0213 21:06:36.302314 3104 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 21:06:36.302353 kubelet[3104]: W0213 21:06:36.302323 3104 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 21:06:36.302353 kubelet[3104]: E0213 21:06:36.302344 3104 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 21:06:36.302555 kubelet[3104]: E0213 21:06:36.302542 3104 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 21:06:36.302555 kubelet[3104]: W0213 21:06:36.302552 3104 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 21:06:36.302700 kubelet[3104]: E0213 21:06:36.302565 3104 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 21:06:36.302804 kubelet[3104]: E0213 21:06:36.302789 3104 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 21:06:36.302804 kubelet[3104]: W0213 21:06:36.302799 3104 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 21:06:36.302944 kubelet[3104]: E0213 21:06:36.302812 3104 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 21:06:36.303034 kubelet[3104]: E0213 21:06:36.303020 3104 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 21:06:36.303034 kubelet[3104]: W0213 21:06:36.303030 3104 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 21:06:36.303122 kubelet[3104]: E0213 21:06:36.303041 3104 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 21:06:36.303276 kubelet[3104]: E0213 21:06:36.303266 3104 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 21:06:36.303276 kubelet[3104]: W0213 21:06:36.303275 3104 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 21:06:36.303354 kubelet[3104]: E0213 21:06:36.303322 3104 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 21:06:36.303467 kubelet[3104]: E0213 21:06:36.303457 3104 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 21:06:36.303503 kubelet[3104]: W0213 21:06:36.303467 3104 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 21:06:36.303503 kubelet[3104]: E0213 21:06:36.303486 3104 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 21:06:36.303678 kubelet[3104]: E0213 21:06:36.303635 3104 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 21:06:36.303678 kubelet[3104]: W0213 21:06:36.303650 3104 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 21:06:36.303793 kubelet[3104]: E0213 21:06:36.303712 3104 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 21:06:36.303832 kubelet[3104]: E0213 21:06:36.303809 3104 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 21:06:36.303832 kubelet[3104]: W0213 21:06:36.303817 3104 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 21:06:36.303895 kubelet[3104]: E0213 21:06:36.303847 3104 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 21:06:36.304050 kubelet[3104]: E0213 21:06:36.304009 3104 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 21:06:36.304050 kubelet[3104]: W0213 21:06:36.304019 3104 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 21:06:36.304050 kubelet[3104]: E0213 21:06:36.304031 3104 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 21:06:36.304256 kubelet[3104]: E0213 21:06:36.304214 3104 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 21:06:36.304256 kubelet[3104]: W0213 21:06:36.304223 3104 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 21:06:36.304256 kubelet[3104]: E0213 21:06:36.304237 3104 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 21:06:36.304460 kubelet[3104]: E0213 21:06:36.304414 3104 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 21:06:36.304460 kubelet[3104]: W0213 21:06:36.304424 3104 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 21:06:36.304460 kubelet[3104]: E0213 21:06:36.304435 3104 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 21:06:36.304721 kubelet[3104]: E0213 21:06:36.304706 3104 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 21:06:36.304778 kubelet[3104]: W0213 21:06:36.304720 3104 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 21:06:36.304778 kubelet[3104]: E0213 21:06:36.304737 3104 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 21:06:36.304940 kubelet[3104]: E0213 21:06:36.304899 3104 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 21:06:36.304940 kubelet[3104]: W0213 21:06:36.304909 3104 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 21:06:36.304940 kubelet[3104]: E0213 21:06:36.304923 3104 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 21:06:36.305187 kubelet[3104]: E0213 21:06:36.305146 3104 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 21:06:36.305187 kubelet[3104]: W0213 21:06:36.305158 3104 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 21:06:36.305187 kubelet[3104]: E0213 21:06:36.305185 3104 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 21:06:36.305337 kubelet[3104]: E0213 21:06:36.305326 3104 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 21:06:36.305337 kubelet[3104]: W0213 21:06:36.305336 3104 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 21:06:36.305409 kubelet[3104]: E0213 21:06:36.305347 3104 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 21:06:36.305527 kubelet[3104]: E0213 21:06:36.305516 3104 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 21:06:36.305527 kubelet[3104]: W0213 21:06:36.305526 3104 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 21:06:36.305599 kubelet[3104]: E0213 21:06:36.305536 3104 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 21:06:36.308113 containerd[1793]: time="2025-02-13T21:06:36.308043748Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7db8b79dd5-5fhnq,Uid:791ddaf9-79c0-419b-8118-21d7d4c1a96d,Namespace:calico-system,Attempt:0,}" Feb 13 21:06:36.310655 kubelet[3104]: E0213 21:06:36.310642 3104 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 21:06:36.310655 kubelet[3104]: W0213 21:06:36.310651 3104 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 21:06:36.310754 kubelet[3104]: E0213 21:06:36.310661 3104 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 21:06:36.338331 containerd[1793]: time="2025-02-13T21:06:36.338275048Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 21:06:36.338541 containerd[1793]: time="2025-02-13T21:06:36.338523563Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 21:06:36.338541 containerd[1793]: time="2025-02-13T21:06:36.338534594Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 21:06:36.338604 containerd[1793]: time="2025-02-13T21:06:36.338585700Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 21:06:36.351715 containerd[1793]: time="2025-02-13T21:06:36.351679669Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-clkg8,Uid:7fb0f7b1-61f2-47e3-8864-bf019aae3bc7,Namespace:calico-system,Attempt:0,}" Feb 13 21:06:36.359810 systemd[1]: Started cri-containerd-e621a8f319ac07a3b0274cf3399b82c4f5640de97570a8329b52c210d6a40264.scope - libcontainer container e621a8f319ac07a3b0274cf3399b82c4f5640de97570a8329b52c210d6a40264. Feb 13 21:06:36.361024 containerd[1793]: time="2025-02-13T21:06:36.360658449Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 21:06:36.361024 containerd[1793]: time="2025-02-13T21:06:36.360931779Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 21:06:36.361024 containerd[1793]: time="2025-02-13T21:06:36.360939915Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 21:06:36.361024 containerd[1793]: time="2025-02-13T21:06:36.360979872Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 21:06:36.366427 systemd[1]: Started cri-containerd-6251dd22da209b1fb738305d15612bb87206b63206437f6a57664cfbd8c2482e.scope - libcontainer container 6251dd22da209b1fb738305d15612bb87206b63206437f6a57664cfbd8c2482e. Feb 13 21:06:36.376069 containerd[1793]: time="2025-02-13T21:06:36.376043486Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-clkg8,Uid:7fb0f7b1-61f2-47e3-8864-bf019aae3bc7,Namespace:calico-system,Attempt:0,} returns sandbox id \"6251dd22da209b1fb738305d15612bb87206b63206437f6a57664cfbd8c2482e\"" Feb 13 21:06:36.376730 containerd[1793]: time="2025-02-13T21:06:36.376719477Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Feb 13 21:06:36.381588 containerd[1793]: time="2025-02-13T21:06:36.381571563Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7db8b79dd5-5fhnq,Uid:791ddaf9-79c0-419b-8118-21d7d4c1a96d,Namespace:calico-system,Attempt:0,} returns sandbox id \"e621a8f319ac07a3b0274cf3399b82c4f5640de97570a8329b52c210d6a40264\"" Feb 13 21:06:36.817062 update_engine[1788]: I20250213 21:06:36.816905 1788 update_attempter.cc:509] Updating boot flags... 
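
The repeated kubelet errors above (driver-call.go and plugins.go) all describe a single failure mode: on each volume-plugin probe the kubelet execs /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the argument init, the binary is not present on this host, the call therefore produces empty output, and unmarshalling that empty output as JSON fails with "unexpected end of JSON input". The Go sketch below is not the kubelet's actual driver-call code; it is a minimal, self-contained reproduction of that pattern, under the assumption that a FlexVolume init call is expected to print a JSON status object (the driverStatus fields here are illustrative).

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// driverStatus mirrors the minimal shape a FlexVolume "init" call is expected
// to report; the field names are illustrative, not the kubelet's own types.
type driverStatus struct {
	Status  string `json:"status"`
	Message string `json:"message"`
}

// probeDriver execs "<driver> init" and tries to decode its stdout as JSON.
// When the binary is missing, CombinedOutput returns an exec error and empty
// output, and json.Unmarshal of that empty output fails with
// "unexpected end of JSON input" -- the paired W/E lines seen in the log.
func probeDriver(path string) error {
	out, execErr := exec.Command(path, "init").CombinedOutput()
	var st driverStatus
	if err := json.Unmarshal(out, &st); err != nil {
		return fmt.Errorf("failed to unmarshal output %q: %w (exec error: %v)", out, err, execErr)
	}
	return nil
}

func main() {
	// Path taken verbatim from the kubelet errors above.
	fmt.Println(probeDriver("/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds"))
}

Run on a host without the uds binary, this prints both the exec failure ("executable file not found in $PATH") and the JSON error, matching each warning/error pair repeated in the log.
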
Feb 13 21:06:36.852638 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (3728) Feb 13 21:06:36.878638 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (3728) Feb 13 21:06:36.905639 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (3728) Feb 13 21:06:37.913153 kubelet[3104]: E0213 21:06:37.913081 3104 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dv5v9" podUID="22f50a6b-4846-46c1-8c41-99e176056718" Feb 13 21:06:38.357816 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4238970833.mount: Deactivated successfully. Feb 13 21:06:38.398967 containerd[1793]: time="2025-02-13T21:06:38.398943443Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 21:06:38.399254 containerd[1793]: time="2025-02-13T21:06:38.399107172Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=6855343" Feb 13 21:06:38.399461 containerd[1793]: time="2025-02-13T21:06:38.399449871Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 21:06:38.401424 containerd[1793]: time="2025-02-13T21:06:38.401407568Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 21:06:38.401835 containerd[1793]: time="2025-02-13T21:06:38.401820208Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 2.025084282s" Feb 13 21:06:38.401864 containerd[1793]: time="2025-02-13T21:06:38.401836428Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\"" Feb 13 21:06:38.402338 containerd[1793]: time="2025-02-13T21:06:38.402329203Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\"" Feb 13 21:06:38.402915 containerd[1793]: time="2025-02-13T21:06:38.402901709Z" level=info msg="CreateContainer within sandbox \"6251dd22da209b1fb738305d15612bb87206b63206437f6a57664cfbd8c2482e\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Feb 13 21:06:38.408077 containerd[1793]: time="2025-02-13T21:06:38.408027541Z" level=info msg="CreateContainer within sandbox \"6251dd22da209b1fb738305d15612bb87206b63206437f6a57664cfbd8c2482e\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"bf4b08509ee7c85fb71485b7125c76260040f0d163b67f9cf6f09181fb3247a2\"" Feb 13 21:06:38.408327 containerd[1793]: time="2025-02-13T21:06:38.408283285Z" level=info msg="StartContainer for \"bf4b08509ee7c85fb71485b7125c76260040f0d163b67f9cf6f09181fb3247a2\"" Feb 
13 21:06:38.431835 systemd[1]: Started cri-containerd-bf4b08509ee7c85fb71485b7125c76260040f0d163b67f9cf6f09181fb3247a2.scope - libcontainer container bf4b08509ee7c85fb71485b7125c76260040f0d163b67f9cf6f09181fb3247a2. Feb 13 21:06:38.445390 containerd[1793]: time="2025-02-13T21:06:38.445366650Z" level=info msg="StartContainer for \"bf4b08509ee7c85fb71485b7125c76260040f0d163b67f9cf6f09181fb3247a2\" returns successfully" Feb 13 21:06:38.450931 systemd[1]: cri-containerd-bf4b08509ee7c85fb71485b7125c76260040f0d163b67f9cf6f09181fb3247a2.scope: Deactivated successfully. Feb 13 21:06:38.686867 containerd[1793]: time="2025-02-13T21:06:38.686761916Z" level=info msg="shim disconnected" id=bf4b08509ee7c85fb71485b7125c76260040f0d163b67f9cf6f09181fb3247a2 namespace=k8s.io Feb 13 21:06:38.686867 containerd[1793]: time="2025-02-13T21:06:38.686828004Z" level=warning msg="cleaning up after shim disconnected" id=bf4b08509ee7c85fb71485b7125c76260040f0d163b67f9cf6f09181fb3247a2 namespace=k8s.io Feb 13 21:06:38.686867 containerd[1793]: time="2025-02-13T21:06:38.686836925Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 21:06:39.342757 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bf4b08509ee7c85fb71485b7125c76260040f0d163b67f9cf6f09181fb3247a2-rootfs.mount: Deactivated successfully. Feb 13 21:06:39.908329 kubelet[3104]: E0213 21:06:39.908200 3104 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dv5v9" podUID="22f50a6b-4846-46c1-8c41-99e176056718" Feb 13 21:06:40.760471 containerd[1793]: time="2025-02-13T21:06:40.760447506Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 21:06:40.760688 containerd[1793]: time="2025-02-13T21:06:40.760599615Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=29850141" Feb 13 21:06:40.760917 containerd[1793]: time="2025-02-13T21:06:40.760874712Z" level=info msg="ImageCreate event name:\"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 21:06:40.762000 containerd[1793]: time="2025-02-13T21:06:40.761960617Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 21:06:40.762695 containerd[1793]: time="2025-02-13T21:06:40.762664527Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"31343217\" in 2.360321335s" Feb 13 21:06:40.762695 containerd[1793]: time="2025-02-13T21:06:40.762679146Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\"" Feb 13 21:06:40.763153 containerd[1793]: time="2025-02-13T21:06:40.763108646Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Feb 13 21:06:40.765973 containerd[1793]: 
time="2025-02-13T21:06:40.765923047Z" level=info msg="CreateContainer within sandbox \"e621a8f319ac07a3b0274cf3399b82c4f5640de97570a8329b52c210d6a40264\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Feb 13 21:06:40.770271 containerd[1793]: time="2025-02-13T21:06:40.770228373Z" level=info msg="CreateContainer within sandbox \"e621a8f319ac07a3b0274cf3399b82c4f5640de97570a8329b52c210d6a40264\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"9176ea9592dd100d655df372eb9c2e6a253220bbaea74fa20fbd5f8ef0cea215\"" Feb 13 21:06:40.770450 containerd[1793]: time="2025-02-13T21:06:40.770410854Z" level=info msg="StartContainer for \"9176ea9592dd100d655df372eb9c2e6a253220bbaea74fa20fbd5f8ef0cea215\"" Feb 13 21:06:40.797900 systemd[1]: Started cri-containerd-9176ea9592dd100d655df372eb9c2e6a253220bbaea74fa20fbd5f8ef0cea215.scope - libcontainer container 9176ea9592dd100d655df372eb9c2e6a253220bbaea74fa20fbd5f8ef0cea215. Feb 13 21:06:40.821256 containerd[1793]: time="2025-02-13T21:06:40.821207985Z" level=info msg="StartContainer for \"9176ea9592dd100d655df372eb9c2e6a253220bbaea74fa20fbd5f8ef0cea215\" returns successfully" Feb 13 21:06:40.988057 kubelet[3104]: I0213 21:06:40.987950 3104 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-7db8b79dd5-5fhnq" podStartSLOduration=1.606876335 podStartE2EDuration="5.987913016s" podCreationTimestamp="2025-02-13 21:06:35 +0000 UTC" firstStartedPulling="2025-02-13 21:06:36.382024266 +0000 UTC m=+12.533208058" lastFinishedPulling="2025-02-13 21:06:40.763060944 +0000 UTC m=+16.914244739" observedRunningTime="2025-02-13 21:06:40.987477006 +0000 UTC m=+17.138660890" watchObservedRunningTime="2025-02-13 21:06:40.987913016 +0000 UTC m=+17.139096855" Feb 13 21:06:41.910033 kubelet[3104]: E0213 21:06:41.910009 3104 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dv5v9" podUID="22f50a6b-4846-46c1-8c41-99e176056718" Feb 13 21:06:41.969186 kubelet[3104]: I0213 21:06:41.969128 3104 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 21:06:43.908224 kubelet[3104]: E0213 21:06:43.908171 3104 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dv5v9" podUID="22f50a6b-4846-46c1-8c41-99e176056718" Feb 13 21:06:44.574974 containerd[1793]: time="2025-02-13T21:06:44.574921212Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 21:06:44.575233 containerd[1793]: time="2025-02-13T21:06:44.575179342Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154" Feb 13 21:06:44.575499 containerd[1793]: time="2025-02-13T21:06:44.575464308Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 21:06:44.576493 containerd[1793]: time="2025-02-13T21:06:44.576454631Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 21:06:44.576916 containerd[1793]: time="2025-02-13T21:06:44.576867067Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 3.813745238s" Feb 13 21:06:44.576916 containerd[1793]: time="2025-02-13T21:06:44.576882715Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\"" Feb 13 21:06:44.577767 containerd[1793]: time="2025-02-13T21:06:44.577755239Z" level=info msg="CreateContainer within sandbox \"6251dd22da209b1fb738305d15612bb87206b63206437f6a57664cfbd8c2482e\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Feb 13 21:06:44.582356 containerd[1793]: time="2025-02-13T21:06:44.582313109Z" level=info msg="CreateContainer within sandbox \"6251dd22da209b1fb738305d15612bb87206b63206437f6a57664cfbd8c2482e\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"e90c9168db07823cda5686d08470c594f2ae6ccbafdbdba0e92cc6505931d8ec\"" Feb 13 21:06:44.582549 containerd[1793]: time="2025-02-13T21:06:44.582514390Z" level=info msg="StartContainer for \"e90c9168db07823cda5686d08470c594f2ae6ccbafdbdba0e92cc6505931d8ec\"" Feb 13 21:06:44.612882 systemd[1]: Started cri-containerd-e90c9168db07823cda5686d08470c594f2ae6ccbafdbdba0e92cc6505931d8ec.scope - libcontainer container e90c9168db07823cda5686d08470c594f2ae6ccbafdbdba0e92cc6505931d8ec. Feb 13 21:06:44.674385 containerd[1793]: time="2025-02-13T21:06:44.674320537Z" level=info msg="StartContainer for \"e90c9168db07823cda5686d08470c594f2ae6ccbafdbdba0e92cc6505931d8ec\" returns successfully" Feb 13 21:06:45.204494 containerd[1793]: time="2025-02-13T21:06:45.204472021Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 21:06:45.205403 systemd[1]: cri-containerd-e90c9168db07823cda5686d08470c594f2ae6ccbafdbdba0e92cc6505931d8ec.scope: Deactivated successfully. Feb 13 21:06:45.217236 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e90c9168db07823cda5686d08470c594f2ae6ccbafdbdba0e92cc6505931d8ec-rootfs.mount: Deactivated successfully. Feb 13 21:06:45.311171 kubelet[3104]: I0213 21:06:45.311116 3104 kubelet_node_status.go:502] "Fast updating node status as it just became ready" Feb 13 21:06:45.336030 systemd[1]: Created slice kubepods-burstable-pod322cd9eb_1852_4c5c_aae9_cdfcee960ce3.slice - libcontainer container kubepods-burstable-pod322cd9eb_1852_4c5c_aae9_cdfcee960ce3.slice. Feb 13 21:06:45.338726 systemd[1]: Created slice kubepods-burstable-poda02339f1_24d3_4002_b90e_3dd190621c61.slice - libcontainer container kubepods-burstable-poda02339f1_24d3_4002_b90e_3dd190621c61.slice. Feb 13 21:06:45.341010 systemd[1]: Created slice kubepods-besteffort-pod37d2fc1a_4796_4ba1_89bd_a5b0ae45374e.slice - libcontainer container kubepods-besteffort-pod37d2fc1a_4796_4ba1_89bd_a5b0ae45374e.slice. 
Feb 13 21:06:45.343744 systemd[1]: Created slice kubepods-besteffort-pod0b74a159_92ea_4d91_b1cc_e328f7cf493e.slice - libcontainer container kubepods-besteffort-pod0b74a159_92ea_4d91_b1cc_e328f7cf493e.slice. Feb 13 21:06:45.346158 systemd[1]: Created slice kubepods-besteffort-pod4552908e_a949_4d18_be80_ec54e1c8e06d.slice - libcontainer container kubepods-besteffort-pod4552908e_a949_4d18_be80_ec54e1c8e06d.slice. Feb 13 21:06:45.371264 kubelet[3104]: I0213 21:06:45.371237 3104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4nncq\" (UniqueName: \"kubernetes.io/projected/37d2fc1a-4796-4ba1-89bd-a5b0ae45374e-kube-api-access-4nncq\") pod \"calico-apiserver-dfcf67d5f-gwq2t\" (UID: \"37d2fc1a-4796-4ba1-89bd-a5b0ae45374e\") " pod="calico-apiserver/calico-apiserver-dfcf67d5f-gwq2t" Feb 13 21:06:45.371264 kubelet[3104]: I0213 21:06:45.371267 3104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fr44d\" (UniqueName: \"kubernetes.io/projected/4552908e-a949-4d18-be80-ec54e1c8e06d-kube-api-access-fr44d\") pod \"calico-kube-controllers-76848bdf96-56fb4\" (UID: \"4552908e-a949-4d18-be80-ec54e1c8e06d\") " pod="calico-system/calico-kube-controllers-76848bdf96-56fb4" Feb 13 21:06:45.371372 kubelet[3104]: I0213 21:06:45.371280 3104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5cd2z\" (UniqueName: \"kubernetes.io/projected/0b74a159-92ea-4d91-b1cc-e328f7cf493e-kube-api-access-5cd2z\") pod \"calico-apiserver-dfcf67d5f-gh88q\" (UID: \"0b74a159-92ea-4d91-b1cc-e328f7cf493e\") " pod="calico-apiserver/calico-apiserver-dfcf67d5f-gh88q" Feb 13 21:06:45.371372 kubelet[3104]: I0213 21:06:45.371292 3104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/322cd9eb-1852-4c5c-aae9-cdfcee960ce3-config-volume\") pod \"coredns-668d6bf9bc-pfzwq\" (UID: \"322cd9eb-1852-4c5c-aae9-cdfcee960ce3\") " pod="kube-system/coredns-668d6bf9bc-pfzwq" Feb 13 21:06:45.371372 kubelet[3104]: I0213 21:06:45.371301 3104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mhjfp\" (UniqueName: \"kubernetes.io/projected/a02339f1-24d3-4002-b90e-3dd190621c61-kube-api-access-mhjfp\") pod \"coredns-668d6bf9bc-wr725\" (UID: \"a02339f1-24d3-4002-b90e-3dd190621c61\") " pod="kube-system/coredns-668d6bf9bc-wr725" Feb 13 21:06:45.371372 kubelet[3104]: I0213 21:06:45.371328 3104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a02339f1-24d3-4002-b90e-3dd190621c61-config-volume\") pod \"coredns-668d6bf9bc-wr725\" (UID: \"a02339f1-24d3-4002-b90e-3dd190621c61\") " pod="kube-system/coredns-668d6bf9bc-wr725" Feb 13 21:06:45.371372 kubelet[3104]: I0213 21:06:45.371338 3104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/37d2fc1a-4796-4ba1-89bd-a5b0ae45374e-calico-apiserver-certs\") pod \"calico-apiserver-dfcf67d5f-gwq2t\" (UID: \"37d2fc1a-4796-4ba1-89bd-a5b0ae45374e\") " pod="calico-apiserver/calico-apiserver-dfcf67d5f-gwq2t" Feb 13 21:06:45.371471 kubelet[3104]: I0213 21:06:45.371347 3104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/0b74a159-92ea-4d91-b1cc-e328f7cf493e-calico-apiserver-certs\") pod \"calico-apiserver-dfcf67d5f-gh88q\" (UID: \"0b74a159-92ea-4d91-b1cc-e328f7cf493e\") " pod="calico-apiserver/calico-apiserver-dfcf67d5f-gh88q" Feb 13 21:06:45.371471 kubelet[3104]: I0213 21:06:45.371356 3104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4552908e-a949-4d18-be80-ec54e1c8e06d-tigera-ca-bundle\") pod \"calico-kube-controllers-76848bdf96-56fb4\" (UID: \"4552908e-a949-4d18-be80-ec54e1c8e06d\") " pod="calico-system/calico-kube-controllers-76848bdf96-56fb4" Feb 13 21:06:45.371471 kubelet[3104]: I0213 21:06:45.371367 3104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bmfkk\" (UniqueName: \"kubernetes.io/projected/322cd9eb-1852-4c5c-aae9-cdfcee960ce3-kube-api-access-bmfkk\") pod \"coredns-668d6bf9bc-pfzwq\" (UID: \"322cd9eb-1852-4c5c-aae9-cdfcee960ce3\") " pod="kube-system/coredns-668d6bf9bc-pfzwq" Feb 13 21:06:45.638800 containerd[1793]: time="2025-02-13T21:06:45.638740025Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-pfzwq,Uid:322cd9eb-1852-4c5c-aae9-cdfcee960ce3,Namespace:kube-system,Attempt:0,}" Feb 13 21:06:45.640764 containerd[1793]: time="2025-02-13T21:06:45.640711956Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-wr725,Uid:a02339f1-24d3-4002-b90e-3dd190621c61,Namespace:kube-system,Attempt:0,}" Feb 13 21:06:45.643705 containerd[1793]: time="2025-02-13T21:06:45.643662863Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-dfcf67d5f-gwq2t,Uid:37d2fc1a-4796-4ba1-89bd-a5b0ae45374e,Namespace:calico-apiserver,Attempt:0,}" Feb 13 21:06:45.645429 containerd[1793]: time="2025-02-13T21:06:45.645387394Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-dfcf67d5f-gh88q,Uid:0b74a159-92ea-4d91-b1cc-e328f7cf493e,Namespace:calico-apiserver,Attempt:0,}" Feb 13 21:06:45.648386 containerd[1793]: time="2025-02-13T21:06:45.648347084Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-76848bdf96-56fb4,Uid:4552908e-a949-4d18-be80-ec54e1c8e06d,Namespace:calico-system,Attempt:0,}" Feb 13 21:06:45.871545 containerd[1793]: time="2025-02-13T21:06:45.871510406Z" level=info msg="shim disconnected" id=e90c9168db07823cda5686d08470c594f2ae6ccbafdbdba0e92cc6505931d8ec namespace=k8s.io Feb 13 21:06:45.871545 containerd[1793]: time="2025-02-13T21:06:45.871541666Z" level=warning msg="cleaning up after shim disconnected" id=e90c9168db07823cda5686d08470c594f2ae6ccbafdbdba0e92cc6505931d8ec namespace=k8s.io Feb 13 21:06:45.871545 containerd[1793]: time="2025-02-13T21:06:45.871546928Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 21:06:45.909818 containerd[1793]: time="2025-02-13T21:06:45.909748577Z" level=error msg="Failed to destroy network for sandbox \"806d7e387453f8d8aa5dc4336096cd465cb7990f839cd53aea28bc6a97bca77d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:06:45.909892 containerd[1793]: time="2025-02-13T21:06:45.909830915Z" level=error msg="Failed to destroy network for sandbox \"abca32da56d9f98e0f2a3235028b1bc2f3f48389f22bae6604fcddef0f674739\"" error="plugin type=\"calico\" failed 
(delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:06:45.910021 containerd[1793]: time="2025-02-13T21:06:45.910001923Z" level=error msg="encountered an error cleaning up failed sandbox \"806d7e387453f8d8aa5dc4336096cd465cb7990f839cd53aea28bc6a97bca77d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:06:45.910061 containerd[1793]: time="2025-02-13T21:06:45.910034424Z" level=error msg="encountered an error cleaning up failed sandbox \"abca32da56d9f98e0f2a3235028b1bc2f3f48389f22bae6604fcddef0f674739\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:06:45.910087 containerd[1793]: time="2025-02-13T21:06:45.910055377Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-dfcf67d5f-gh88q,Uid:0b74a159-92ea-4d91-b1cc-e328f7cf493e,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"806d7e387453f8d8aa5dc4336096cd465cb7990f839cd53aea28bc6a97bca77d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:06:45.910087 containerd[1793]: time="2025-02-13T21:06:45.910066625Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-76848bdf96-56fb4,Uid:4552908e-a949-4d18-be80-ec54e1c8e06d,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"abca32da56d9f98e0f2a3235028b1bc2f3f48389f22bae6604fcddef0f674739\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:06:45.910176 kubelet[3104]: E0213 21:06:45.910147 3104 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"abca32da56d9f98e0f2a3235028b1bc2f3f48389f22bae6604fcddef0f674739\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:06:45.910224 kubelet[3104]: E0213 21:06:45.910185 3104 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"abca32da56d9f98e0f2a3235028b1bc2f3f48389f22bae6604fcddef0f674739\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-76848bdf96-56fb4" Feb 13 21:06:45.910224 kubelet[3104]: E0213 21:06:45.910198 3104 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"abca32da56d9f98e0f2a3235028b1bc2f3f48389f22bae6604fcddef0f674739\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/calico-kube-controllers-76848bdf96-56fb4" Feb 13 21:06:45.910224 kubelet[3104]: E0213 21:06:45.910147 3104 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"806d7e387453f8d8aa5dc4336096cd465cb7990f839cd53aea28bc6a97bca77d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:06:45.910311 kubelet[3104]: E0213 21:06:45.910230 3104 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-76848bdf96-56fb4_calico-system(4552908e-a949-4d18-be80-ec54e1c8e06d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-76848bdf96-56fb4_calico-system(4552908e-a949-4d18-be80-ec54e1c8e06d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"abca32da56d9f98e0f2a3235028b1bc2f3f48389f22bae6604fcddef0f674739\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-76848bdf96-56fb4" podUID="4552908e-a949-4d18-be80-ec54e1c8e06d" Feb 13 21:06:45.910311 kubelet[3104]: E0213 21:06:45.910242 3104 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"806d7e387453f8d8aa5dc4336096cd465cb7990f839cd53aea28bc6a97bca77d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-dfcf67d5f-gh88q" Feb 13 21:06:45.910311 kubelet[3104]: E0213 21:06:45.910258 3104 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"806d7e387453f8d8aa5dc4336096cd465cb7990f839cd53aea28bc6a97bca77d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-dfcf67d5f-gh88q" Feb 13 21:06:45.910413 kubelet[3104]: E0213 21:06:45.910285 3104 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-dfcf67d5f-gh88q_calico-apiserver(0b74a159-92ea-4d91-b1cc-e328f7cf493e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-dfcf67d5f-gh88q_calico-apiserver(0b74a159-92ea-4d91-b1cc-e328f7cf493e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"806d7e387453f8d8aa5dc4336096cd465cb7990f839cd53aea28bc6a97bca77d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-dfcf67d5f-gh88q" podUID="0b74a159-92ea-4d91-b1cc-e328f7cf493e" Feb 13 21:06:45.910579 containerd[1793]: time="2025-02-13T21:06:45.910564221Z" level=error msg="Failed to destroy network for sandbox \"d8a3a4b346869a0c10429ac8070838b577439133613a8cd388036c740d6d9069\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running 
and has mounted /var/lib/calico/" Feb 13 21:06:45.910757 containerd[1793]: time="2025-02-13T21:06:45.910745145Z" level=error msg="encountered an error cleaning up failed sandbox \"d8a3a4b346869a0c10429ac8070838b577439133613a8cd388036c740d6d9069\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:06:45.910789 containerd[1793]: time="2025-02-13T21:06:45.910772605Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-pfzwq,Uid:322cd9eb-1852-4c5c-aae9-cdfcee960ce3,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d8a3a4b346869a0c10429ac8070838b577439133613a8cd388036c740d6d9069\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:06:45.910863 kubelet[3104]: E0213 21:06:45.910850 3104 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d8a3a4b346869a0c10429ac8070838b577439133613a8cd388036c740d6d9069\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:06:45.910895 kubelet[3104]: E0213 21:06:45.910870 3104 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d8a3a4b346869a0c10429ac8070838b577439133613a8cd388036c740d6d9069\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-pfzwq" Feb 13 21:06:45.910895 kubelet[3104]: E0213 21:06:45.910881 3104 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d8a3a4b346869a0c10429ac8070838b577439133613a8cd388036c740d6d9069\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-pfzwq" Feb 13 21:06:45.910933 kubelet[3104]: E0213 21:06:45.910899 3104 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-pfzwq_kube-system(322cd9eb-1852-4c5c-aae9-cdfcee960ce3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-pfzwq_kube-system(322cd9eb-1852-4c5c-aae9-cdfcee960ce3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d8a3a4b346869a0c10429ac8070838b577439133613a8cd388036c740d6d9069\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-pfzwq" podUID="322cd9eb-1852-4c5c-aae9-cdfcee960ce3" Feb 13 21:06:45.910969 containerd[1793]: time="2025-02-13T21:06:45.910951496Z" level=error msg="Failed to destroy network for sandbox \"e9526415ddf2b5def2736c83dbde440214ca6bbca341b6a7e7eede781d227057\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:06:45.911118 containerd[1793]: time="2025-02-13T21:06:45.911105618Z" level=error msg="encountered an error cleaning up failed sandbox \"e9526415ddf2b5def2736c83dbde440214ca6bbca341b6a7e7eede781d227057\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:06:45.911143 containerd[1793]: time="2025-02-13T21:06:45.911131096Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-dfcf67d5f-gwq2t,Uid:37d2fc1a-4796-4ba1-89bd-a5b0ae45374e,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e9526415ddf2b5def2736c83dbde440214ca6bbca341b6a7e7eede781d227057\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:06:45.911210 kubelet[3104]: E0213 21:06:45.911197 3104 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e9526415ddf2b5def2736c83dbde440214ca6bbca341b6a7e7eede781d227057\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:06:45.911238 kubelet[3104]: E0213 21:06:45.911217 3104 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e9526415ddf2b5def2736c83dbde440214ca6bbca341b6a7e7eede781d227057\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-dfcf67d5f-gwq2t" Feb 13 21:06:45.911238 kubelet[3104]: E0213 21:06:45.911228 3104 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e9526415ddf2b5def2736c83dbde440214ca6bbca341b6a7e7eede781d227057\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-dfcf67d5f-gwq2t" Feb 13 21:06:45.911273 kubelet[3104]: E0213 21:06:45.911245 3104 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-dfcf67d5f-gwq2t_calico-apiserver(37d2fc1a-4796-4ba1-89bd-a5b0ae45374e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-dfcf67d5f-gwq2t_calico-apiserver(37d2fc1a-4796-4ba1-89bd-a5b0ae45374e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e9526415ddf2b5def2736c83dbde440214ca6bbca341b6a7e7eede781d227057\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-dfcf67d5f-gwq2t" podUID="37d2fc1a-4796-4ba1-89bd-a5b0ae45374e" Feb 13 21:06:45.911508 containerd[1793]: time="2025-02-13T21:06:45.911495281Z" level=error msg="Failed to destroy network for sandbox 
\"1a9edd12dcaee4b4406dd52975b192e9f91b2c28c5929cf19d4f43b12760a3ad\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:06:45.911645 containerd[1793]: time="2025-02-13T21:06:45.911632847Z" level=error msg="encountered an error cleaning up failed sandbox \"1a9edd12dcaee4b4406dd52975b192e9f91b2c28c5929cf19d4f43b12760a3ad\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:06:45.911700 containerd[1793]: time="2025-02-13T21:06:45.911672176Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-wr725,Uid:a02339f1-24d3-4002-b90e-3dd190621c61,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1a9edd12dcaee4b4406dd52975b192e9f91b2c28c5929cf19d4f43b12760a3ad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:06:45.911773 kubelet[3104]: E0213 21:06:45.911759 3104 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1a9edd12dcaee4b4406dd52975b192e9f91b2c28c5929cf19d4f43b12760a3ad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:06:45.911826 kubelet[3104]: E0213 21:06:45.911797 3104 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1a9edd12dcaee4b4406dd52975b192e9f91b2c28c5929cf19d4f43b12760a3ad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-wr725" Feb 13 21:06:45.911826 kubelet[3104]: E0213 21:06:45.911815 3104 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1a9edd12dcaee4b4406dd52975b192e9f91b2c28c5929cf19d4f43b12760a3ad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-wr725" Feb 13 21:06:45.911883 kubelet[3104]: E0213 21:06:45.911840 3104 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-wr725_kube-system(a02339f1-24d3-4002-b90e-3dd190621c61)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-wr725_kube-system(a02339f1-24d3-4002-b90e-3dd190621c61)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1a9edd12dcaee4b4406dd52975b192e9f91b2c28c5929cf19d4f43b12760a3ad\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-wr725" podUID="a02339f1-24d3-4002-b90e-3dd190621c61" Feb 13 21:06:45.912038 systemd[1]: 
run-containerd-io.containerd.grpc.v1.cri-sandboxes-abca32da56d9f98e0f2a3235028b1bc2f3f48389f22bae6604fcddef0f674739-shm.mount: Deactivated successfully. Feb 13 21:06:45.912114 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-806d7e387453f8d8aa5dc4336096cd465cb7990f839cd53aea28bc6a97bca77d-shm.mount: Deactivated successfully. Feb 13 21:06:45.915815 systemd[1]: Created slice kubepods-besteffort-pod22f50a6b_4846_46c1_8c41_99e176056718.slice - libcontainer container kubepods-besteffort-pod22f50a6b_4846_46c1_8c41_99e176056718.slice. Feb 13 21:06:45.916998 containerd[1793]: time="2025-02-13T21:06:45.916985106Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dv5v9,Uid:22f50a6b-4846-46c1-8c41-99e176056718,Namespace:calico-system,Attempt:0,}" Feb 13 21:06:45.948765 containerd[1793]: time="2025-02-13T21:06:45.948709012Z" level=error msg="Failed to destroy network for sandbox \"607ba46d10ad96974b57e545cd12cdc1dd7b6138131f3ab6e1bc63407d40518f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:06:45.948917 containerd[1793]: time="2025-02-13T21:06:45.948876910Z" level=error msg="encountered an error cleaning up failed sandbox \"607ba46d10ad96974b57e545cd12cdc1dd7b6138131f3ab6e1bc63407d40518f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:06:45.948917 containerd[1793]: time="2025-02-13T21:06:45.948912141Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dv5v9,Uid:22f50a6b-4846-46c1-8c41-99e176056718,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"607ba46d10ad96974b57e545cd12cdc1dd7b6138131f3ab6e1bc63407d40518f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:06:45.949032 kubelet[3104]: E0213 21:06:45.949011 3104 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"607ba46d10ad96974b57e545cd12cdc1dd7b6138131f3ab6e1bc63407d40518f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:06:45.949072 kubelet[3104]: E0213 21:06:45.949046 3104 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"607ba46d10ad96974b57e545cd12cdc1dd7b6138131f3ab6e1bc63407d40518f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-dv5v9" Feb 13 21:06:45.949072 kubelet[3104]: E0213 21:06:45.949060 3104 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"607ba46d10ad96974b57e545cd12cdc1dd7b6138131f3ab6e1bc63407d40518f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/csi-node-driver-dv5v9" Feb 13 21:06:45.949111 kubelet[3104]: E0213 21:06:45.949084 3104 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-dv5v9_calico-system(22f50a6b-4846-46c1-8c41-99e176056718)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-dv5v9_calico-system(22f50a6b-4846-46c1-8c41-99e176056718)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"607ba46d10ad96974b57e545cd12cdc1dd7b6138131f3ab6e1bc63407d40518f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-dv5v9" podUID="22f50a6b-4846-46c1-8c41-99e176056718" Feb 13 21:06:45.985334 containerd[1793]: time="2025-02-13T21:06:45.985239771Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Feb 13 21:06:45.985592 kubelet[3104]: I0213 21:06:45.985540 3104 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="607ba46d10ad96974b57e545cd12cdc1dd7b6138131f3ab6e1bc63407d40518f" Feb 13 21:06:45.986453 containerd[1793]: time="2025-02-13T21:06:45.986439071Z" level=info msg="StopPodSandbox for \"607ba46d10ad96974b57e545cd12cdc1dd7b6138131f3ab6e1bc63407d40518f\"" Feb 13 21:06:45.986604 kubelet[3104]: I0213 21:06:45.986591 3104 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="abca32da56d9f98e0f2a3235028b1bc2f3f48389f22bae6604fcddef0f674739" Feb 13 21:06:45.986647 containerd[1793]: time="2025-02-13T21:06:45.986616882Z" level=info msg="Ensure that sandbox 607ba46d10ad96974b57e545cd12cdc1dd7b6138131f3ab6e1bc63407d40518f in task-service has been cleanup successfully" Feb 13 21:06:45.986789 containerd[1793]: time="2025-02-13T21:06:45.986773908Z" level=info msg="TearDown network for sandbox \"607ba46d10ad96974b57e545cd12cdc1dd7b6138131f3ab6e1bc63407d40518f\" successfully" Feb 13 21:06:45.986813 containerd[1793]: time="2025-02-13T21:06:45.986790028Z" level=info msg="StopPodSandbox for \"607ba46d10ad96974b57e545cd12cdc1dd7b6138131f3ab6e1bc63407d40518f\" returns successfully" Feb 13 21:06:45.986909 containerd[1793]: time="2025-02-13T21:06:45.986891600Z" level=info msg="StopPodSandbox for \"abca32da56d9f98e0f2a3235028b1bc2f3f48389f22bae6604fcddef0f674739\"" Feb 13 21:06:45.987060 containerd[1793]: time="2025-02-13T21:06:45.987044219Z" level=info msg="Ensure that sandbox abca32da56d9f98e0f2a3235028b1bc2f3f48389f22bae6604fcddef0f674739 in task-service has been cleanup successfully" Feb 13 21:06:45.987116 containerd[1793]: time="2025-02-13T21:06:45.987065817Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dv5v9,Uid:22f50a6b-4846-46c1-8c41-99e176056718,Namespace:calico-system,Attempt:1,}" Feb 13 21:06:45.987161 containerd[1793]: time="2025-02-13T21:06:45.987146751Z" level=info msg="TearDown network for sandbox \"abca32da56d9f98e0f2a3235028b1bc2f3f48389f22bae6604fcddef0f674739\" successfully" Feb 13 21:06:45.987196 containerd[1793]: time="2025-02-13T21:06:45.987158962Z" level=info msg="StopPodSandbox for \"abca32da56d9f98e0f2a3235028b1bc2f3f48389f22bae6604fcddef0f674739\" returns successfully" Feb 13 21:06:45.987260 kubelet[3104]: I0213 21:06:45.987248 3104 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d8a3a4b346869a0c10429ac8070838b577439133613a8cd388036c740d6d9069" Feb 13 21:06:45.987359 containerd[1793]: 
time="2025-02-13T21:06:45.987345358Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-76848bdf96-56fb4,Uid:4552908e-a949-4d18-be80-ec54e1c8e06d,Namespace:calico-system,Attempt:1,}" Feb 13 21:06:45.987569 containerd[1793]: time="2025-02-13T21:06:45.987559645Z" level=info msg="StopPodSandbox for \"d8a3a4b346869a0c10429ac8070838b577439133613a8cd388036c740d6d9069\"" Feb 13 21:06:45.987692 containerd[1793]: time="2025-02-13T21:06:45.987681878Z" level=info msg="Ensure that sandbox d8a3a4b346869a0c10429ac8070838b577439133613a8cd388036c740d6d9069 in task-service has been cleanup successfully" Feb 13 21:06:45.987744 kubelet[3104]: I0213 21:06:45.987732 3104 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="806d7e387453f8d8aa5dc4336096cd465cb7990f839cd53aea28bc6a97bca77d" Feb 13 21:06:45.987783 containerd[1793]: time="2025-02-13T21:06:45.987758929Z" level=info msg="TearDown network for sandbox \"d8a3a4b346869a0c10429ac8070838b577439133613a8cd388036c740d6d9069\" successfully" Feb 13 21:06:45.987783 containerd[1793]: time="2025-02-13T21:06:45.987766294Z" level=info msg="StopPodSandbox for \"d8a3a4b346869a0c10429ac8070838b577439133613a8cd388036c740d6d9069\" returns successfully" Feb 13 21:06:45.987980 containerd[1793]: time="2025-02-13T21:06:45.987969957Z" level=info msg="StopPodSandbox for \"806d7e387453f8d8aa5dc4336096cd465cb7990f839cd53aea28bc6a97bca77d\"" Feb 13 21:06:45.988015 containerd[1793]: time="2025-02-13T21:06:45.987979842Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-pfzwq,Uid:322cd9eb-1852-4c5c-aae9-cdfcee960ce3,Namespace:kube-system,Attempt:1,}" Feb 13 21:06:45.988066 containerd[1793]: time="2025-02-13T21:06:45.988057183Z" level=info msg="Ensure that sandbox 806d7e387453f8d8aa5dc4336096cd465cb7990f839cd53aea28bc6a97bca77d in task-service has been cleanup successfully" Feb 13 21:06:45.988133 containerd[1793]: time="2025-02-13T21:06:45.988125434Z" level=info msg="TearDown network for sandbox \"806d7e387453f8d8aa5dc4336096cd465cb7990f839cd53aea28bc6a97bca77d\" successfully" Feb 13 21:06:45.988158 containerd[1793]: time="2025-02-13T21:06:45.988133037Z" level=info msg="StopPodSandbox for \"806d7e387453f8d8aa5dc4336096cd465cb7990f839cd53aea28bc6a97bca77d\" returns successfully" Feb 13 21:06:45.988181 kubelet[3104]: I0213 21:06:45.988172 3104 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e9526415ddf2b5def2736c83dbde440214ca6bbca341b6a7e7eede781d227057" Feb 13 21:06:45.988412 containerd[1793]: time="2025-02-13T21:06:45.988403517Z" level=info msg="StopPodSandbox for \"e9526415ddf2b5def2736c83dbde440214ca6bbca341b6a7e7eede781d227057\"" Feb 13 21:06:45.988443 containerd[1793]: time="2025-02-13T21:06:45.988413699Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-dfcf67d5f-gh88q,Uid:0b74a159-92ea-4d91-b1cc-e328f7cf493e,Namespace:calico-apiserver,Attempt:1,}" Feb 13 21:06:45.988494 containerd[1793]: time="2025-02-13T21:06:45.988486285Z" level=info msg="Ensure that sandbox e9526415ddf2b5def2736c83dbde440214ca6bbca341b6a7e7eede781d227057 in task-service has been cleanup successfully" Feb 13 21:06:45.988573 containerd[1793]: time="2025-02-13T21:06:45.988565044Z" level=info msg="TearDown network for sandbox \"e9526415ddf2b5def2736c83dbde440214ca6bbca341b6a7e7eede781d227057\" successfully" Feb 13 21:06:45.988595 containerd[1793]: time="2025-02-13T21:06:45.988573135Z" level=info msg="StopPodSandbox for 
\"e9526415ddf2b5def2736c83dbde440214ca6bbca341b6a7e7eede781d227057\" returns successfully" Feb 13 21:06:45.988614 kubelet[3104]: I0213 21:06:45.988588 3104 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1a9edd12dcaee4b4406dd52975b192e9f91b2c28c5929cf19d4f43b12760a3ad" Feb 13 21:06:45.988772 containerd[1793]: time="2025-02-13T21:06:45.988761049Z" level=info msg="StopPodSandbox for \"1a9edd12dcaee4b4406dd52975b192e9f91b2c28c5929cf19d4f43b12760a3ad\"" Feb 13 21:06:45.988797 containerd[1793]: time="2025-02-13T21:06:45.988781149Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-dfcf67d5f-gwq2t,Uid:37d2fc1a-4796-4ba1-89bd-a5b0ae45374e,Namespace:calico-apiserver,Attempt:1,}" Feb 13 21:06:45.988860 containerd[1793]: time="2025-02-13T21:06:45.988852196Z" level=info msg="Ensure that sandbox 1a9edd12dcaee4b4406dd52975b192e9f91b2c28c5929cf19d4f43b12760a3ad in task-service has been cleanup successfully" Feb 13 21:06:45.988937 containerd[1793]: time="2025-02-13T21:06:45.988929085Z" level=info msg="TearDown network for sandbox \"1a9edd12dcaee4b4406dd52975b192e9f91b2c28c5929cf19d4f43b12760a3ad\" successfully" Feb 13 21:06:45.988953 containerd[1793]: time="2025-02-13T21:06:45.988936963Z" level=info msg="StopPodSandbox for \"1a9edd12dcaee4b4406dd52975b192e9f91b2c28c5929cf19d4f43b12760a3ad\" returns successfully" Feb 13 21:06:45.989100 containerd[1793]: time="2025-02-13T21:06:45.989087349Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-wr725,Uid:a02339f1-24d3-4002-b90e-3dd190621c61,Namespace:kube-system,Attempt:1,}" Feb 13 21:06:46.029888 containerd[1793]: time="2025-02-13T21:06:46.029852157Z" level=error msg="Failed to destroy network for sandbox \"e2d2a63cc355082e3d667838ab36d2b64fe4355a317ff4fee15a852a46dfa7fd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:06:46.030033 containerd[1793]: time="2025-02-13T21:06:46.030018469Z" level=error msg="Failed to destroy network for sandbox \"d743908868c106c2ba60beb013f0f1908f95ec56678e1aea1e2666786a2cc1a7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:06:46.030273 containerd[1793]: time="2025-02-13T21:06:46.030177194Z" level=error msg="encountered an error cleaning up failed sandbox \"d743908868c106c2ba60beb013f0f1908f95ec56678e1aea1e2666786a2cc1a7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:06:46.030273 containerd[1793]: time="2025-02-13T21:06:46.030195185Z" level=error msg="encountered an error cleaning up failed sandbox \"e2d2a63cc355082e3d667838ab36d2b64fe4355a317ff4fee15a852a46dfa7fd\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:06:46.030273 containerd[1793]: time="2025-02-13T21:06:46.030217628Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-dfcf67d5f-gh88q,Uid:0b74a159-92ea-4d91-b1cc-e328f7cf493e,Namespace:calico-apiserver,Attempt:1,} failed, error" 
error="failed to setup network for sandbox \"d743908868c106c2ba60beb013f0f1908f95ec56678e1aea1e2666786a2cc1a7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:06:46.030273 containerd[1793]: time="2025-02-13T21:06:46.030230849Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-pfzwq,Uid:322cd9eb-1852-4c5c-aae9-cdfcee960ce3,Namespace:kube-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"e2d2a63cc355082e3d667838ab36d2b64fe4355a317ff4fee15a852a46dfa7fd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:06:46.030548 kubelet[3104]: E0213 21:06:46.030357 3104 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e2d2a63cc355082e3d667838ab36d2b64fe4355a317ff4fee15a852a46dfa7fd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:06:46.030548 kubelet[3104]: E0213 21:06:46.030408 3104 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e2d2a63cc355082e3d667838ab36d2b64fe4355a317ff4fee15a852a46dfa7fd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-pfzwq" Feb 13 21:06:46.030548 kubelet[3104]: E0213 21:06:46.030431 3104 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e2d2a63cc355082e3d667838ab36d2b64fe4355a317ff4fee15a852a46dfa7fd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-pfzwq" Feb 13 21:06:46.030548 kubelet[3104]: E0213 21:06:46.030445 3104 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d743908868c106c2ba60beb013f0f1908f95ec56678e1aea1e2666786a2cc1a7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:06:46.030694 kubelet[3104]: E0213 21:06:46.030473 3104 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-pfzwq_kube-system(322cd9eb-1852-4c5c-aae9-cdfcee960ce3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-pfzwq_kube-system(322cd9eb-1852-4c5c-aae9-cdfcee960ce3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e2d2a63cc355082e3d667838ab36d2b64fe4355a317ff4fee15a852a46dfa7fd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-pfzwq" podUID="322cd9eb-1852-4c5c-aae9-cdfcee960ce3" Feb 13 21:06:46.030694 kubelet[3104]: E0213 
21:06:46.030500 3104 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d743908868c106c2ba60beb013f0f1908f95ec56678e1aea1e2666786a2cc1a7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-dfcf67d5f-gh88q" Feb 13 21:06:46.030694 kubelet[3104]: E0213 21:06:46.030533 3104 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d743908868c106c2ba60beb013f0f1908f95ec56678e1aea1e2666786a2cc1a7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-dfcf67d5f-gh88q" Feb 13 21:06:46.030792 kubelet[3104]: E0213 21:06:46.030561 3104 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-dfcf67d5f-gh88q_calico-apiserver(0b74a159-92ea-4d91-b1cc-e328f7cf493e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-dfcf67d5f-gh88q_calico-apiserver(0b74a159-92ea-4d91-b1cc-e328f7cf493e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d743908868c106c2ba60beb013f0f1908f95ec56678e1aea1e2666786a2cc1a7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-dfcf67d5f-gh88q" podUID="0b74a159-92ea-4d91-b1cc-e328f7cf493e" Feb 13 21:06:46.030863 containerd[1793]: time="2025-02-13T21:06:46.030841943Z" level=error msg="Failed to destroy network for sandbox \"7501b1f0bebccf8c96d599d12b518a8ac89e39145380dcd10318dd4669755ac7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:06:46.031099 containerd[1793]: time="2025-02-13T21:06:46.031043485Z" level=error msg="encountered an error cleaning up failed sandbox \"7501b1f0bebccf8c96d599d12b518a8ac89e39145380dcd10318dd4669755ac7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:06:46.031099 containerd[1793]: time="2025-02-13T21:06:46.031078576Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-wr725,Uid:a02339f1-24d3-4002-b90e-3dd190621c61,Namespace:kube-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"7501b1f0bebccf8c96d599d12b518a8ac89e39145380dcd10318dd4669755ac7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:06:46.031169 containerd[1793]: time="2025-02-13T21:06:46.031155051Z" level=error msg="Failed to destroy network for sandbox \"7d5ec4c5cb492ab7ba5ec64c093f408197ef657dfbb781c600cfe63e505b2831\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Feb 13 21:06:46.031195 kubelet[3104]: E0213 21:06:46.031159 3104 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7501b1f0bebccf8c96d599d12b518a8ac89e39145380dcd10318dd4669755ac7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:06:46.031195 kubelet[3104]: E0213 21:06:46.031180 3104 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7501b1f0bebccf8c96d599d12b518a8ac89e39145380dcd10318dd4669755ac7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-wr725" Feb 13 21:06:46.031195 kubelet[3104]: E0213 21:06:46.031190 3104 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7501b1f0bebccf8c96d599d12b518a8ac89e39145380dcd10318dd4669755ac7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-wr725" Feb 13 21:06:46.031249 kubelet[3104]: E0213 21:06:46.031209 3104 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-wr725_kube-system(a02339f1-24d3-4002-b90e-3dd190621c61)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-wr725_kube-system(a02339f1-24d3-4002-b90e-3dd190621c61)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7501b1f0bebccf8c96d599d12b518a8ac89e39145380dcd10318dd4669755ac7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-wr725" podUID="a02339f1-24d3-4002-b90e-3dd190621c61" Feb 13 21:06:46.031280 containerd[1793]: time="2025-02-13T21:06:46.031248315Z" level=error msg="Failed to destroy network for sandbox \"492d54f71e9ddb34de33c29601b5531497a6eb52fef88332daf738ef9a4555fd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:06:46.031341 containerd[1793]: time="2025-02-13T21:06:46.031313489Z" level=error msg="encountered an error cleaning up failed sandbox \"7d5ec4c5cb492ab7ba5ec64c093f408197ef657dfbb781c600cfe63e505b2831\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:06:46.031386 containerd[1793]: time="2025-02-13T21:06:46.031348095Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dv5v9,Uid:22f50a6b-4846-46c1-8c41-99e176056718,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"7d5ec4c5cb492ab7ba5ec64c093f408197ef657dfbb781c600cfe63e505b2831\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:06:46.031440 kubelet[3104]: E0213 21:06:46.031427 3104 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7d5ec4c5cb492ab7ba5ec64c093f408197ef657dfbb781c600cfe63e505b2831\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:06:46.031463 kubelet[3104]: E0213 21:06:46.031447 3104 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7d5ec4c5cb492ab7ba5ec64c093f408197ef657dfbb781c600cfe63e505b2831\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-dv5v9" Feb 13 21:06:46.031463 kubelet[3104]: E0213 21:06:46.031458 3104 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7d5ec4c5cb492ab7ba5ec64c093f408197ef657dfbb781c600cfe63e505b2831\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-dv5v9" Feb 13 21:06:46.031509 containerd[1793]: time="2025-02-13T21:06:46.031431020Z" level=error msg="encountered an error cleaning up failed sandbox \"492d54f71e9ddb34de33c29601b5531497a6eb52fef88332daf738ef9a4555fd\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:06:46.031509 containerd[1793]: time="2025-02-13T21:06:46.031460278Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-dfcf67d5f-gwq2t,Uid:37d2fc1a-4796-4ba1-89bd-a5b0ae45374e,Namespace:calico-apiserver,Attempt:1,} failed, error" error="failed to setup network for sandbox \"492d54f71e9ddb34de33c29601b5531497a6eb52fef88332daf738ef9a4555fd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:06:46.031567 kubelet[3104]: E0213 21:06:46.031474 3104 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-dv5v9_calico-system(22f50a6b-4846-46c1-8c41-99e176056718)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-dv5v9_calico-system(22f50a6b-4846-46c1-8c41-99e176056718)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7d5ec4c5cb492ab7ba5ec64c093f408197ef657dfbb781c600cfe63e505b2831\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-dv5v9" podUID="22f50a6b-4846-46c1-8c41-99e176056718" Feb 13 21:06:46.031567 kubelet[3104]: E0213 21:06:46.031511 3104 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"492d54f71e9ddb34de33c29601b5531497a6eb52fef88332daf738ef9a4555fd\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:06:46.031567 kubelet[3104]: E0213 21:06:46.031530 3104 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"492d54f71e9ddb34de33c29601b5531497a6eb52fef88332daf738ef9a4555fd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-dfcf67d5f-gwq2t" Feb 13 21:06:46.031655 kubelet[3104]: E0213 21:06:46.031540 3104 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"492d54f71e9ddb34de33c29601b5531497a6eb52fef88332daf738ef9a4555fd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-dfcf67d5f-gwq2t" Feb 13 21:06:46.031655 kubelet[3104]: E0213 21:06:46.031559 3104 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-dfcf67d5f-gwq2t_calico-apiserver(37d2fc1a-4796-4ba1-89bd-a5b0ae45374e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-dfcf67d5f-gwq2t_calico-apiserver(37d2fc1a-4796-4ba1-89bd-a5b0ae45374e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"492d54f71e9ddb34de33c29601b5531497a6eb52fef88332daf738ef9a4555fd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-dfcf67d5f-gwq2t" podUID="37d2fc1a-4796-4ba1-89bd-a5b0ae45374e" Feb 13 21:06:46.031706 containerd[1793]: time="2025-02-13T21:06:46.031573737Z" level=error msg="Failed to destroy network for sandbox \"466ce85604a81ffcf0adb514f74df036223f30285756dd3e79f40e1ea6c59de1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:06:46.031727 containerd[1793]: time="2025-02-13T21:06:46.031708639Z" level=error msg="encountered an error cleaning up failed sandbox \"466ce85604a81ffcf0adb514f74df036223f30285756dd3e79f40e1ea6c59de1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:06:46.031745 containerd[1793]: time="2025-02-13T21:06:46.031729932Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-76848bdf96-56fb4,Uid:4552908e-a949-4d18-be80-ec54e1c8e06d,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"466ce85604a81ffcf0adb514f74df036223f30285756dd3e79f40e1ea6c59de1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:06:46.031792 kubelet[3104]: E0213 21:06:46.031784 3104 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = 
Unknown desc = failed to setup network for sandbox \"466ce85604a81ffcf0adb514f74df036223f30285756dd3e79f40e1ea6c59de1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:06:46.031812 kubelet[3104]: E0213 21:06:46.031796 3104 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"466ce85604a81ffcf0adb514f74df036223f30285756dd3e79f40e1ea6c59de1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-76848bdf96-56fb4" Feb 13 21:06:46.031812 kubelet[3104]: E0213 21:06:46.031805 3104 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"466ce85604a81ffcf0adb514f74df036223f30285756dd3e79f40e1ea6c59de1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-76848bdf96-56fb4" Feb 13 21:06:46.031850 kubelet[3104]: E0213 21:06:46.031820 3104 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-76848bdf96-56fb4_calico-system(4552908e-a949-4d18-be80-ec54e1c8e06d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-76848bdf96-56fb4_calico-system(4552908e-a949-4d18-be80-ec54e1c8e06d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"466ce85604a81ffcf0adb514f74df036223f30285756dd3e79f40e1ea6c59de1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-76848bdf96-56fb4" podUID="4552908e-a949-4d18-be80-ec54e1c8e06d" Feb 13 21:06:46.586393 systemd[1]: run-netns-cni\x2d67657e0b\x2d26d5\x2d1764\x2da47d\x2d92ed632ebb45.mount: Deactivated successfully. Feb 13 21:06:46.586463 systemd[1]: run-netns-cni\x2d3bf17881\x2ddbb0\x2d1b65\x2d6ae0\x2da471f4137da8.mount: Deactivated successfully. Feb 13 21:06:46.586511 systemd[1]: run-netns-cni\x2dfbfc3f8c\x2d0186\x2d03cd\x2ddec9\x2d3c1c537a1281.mount: Deactivated successfully. Feb 13 21:06:46.586559 systemd[1]: run-netns-cni\x2d187fc96c\x2d6f25\x2de65c\x2d20dd\x2d13234bc52857.mount: Deactivated successfully. Feb 13 21:06:46.586612 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e9526415ddf2b5def2736c83dbde440214ca6bbca341b6a7e7eede781d227057-shm.mount: Deactivated successfully. Feb 13 21:06:46.586672 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d8a3a4b346869a0c10429ac8070838b577439133613a8cd388036c740d6d9069-shm.mount: Deactivated successfully. Feb 13 21:06:46.586728 systemd[1]: run-netns-cni\x2d75436ea0\x2d6654\x2d54dc\x2da17d\x2d34b06621c6a5.mount: Deactivated successfully. Feb 13 21:06:46.586777 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1a9edd12dcaee4b4406dd52975b192e9f91b2c28c5929cf19d4f43b12760a3ad-shm.mount: Deactivated successfully. 
Feb 13 21:06:46.990956 kubelet[3104]: I0213 21:06:46.990875 3104 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d743908868c106c2ba60beb013f0f1908f95ec56678e1aea1e2666786a2cc1a7" Feb 13 21:06:46.991229 containerd[1793]: time="2025-02-13T21:06:46.991115519Z" level=info msg="StopPodSandbox for \"d743908868c106c2ba60beb013f0f1908f95ec56678e1aea1e2666786a2cc1a7\"" Feb 13 21:06:46.991396 containerd[1793]: time="2025-02-13T21:06:46.991241636Z" level=info msg="Ensure that sandbox d743908868c106c2ba60beb013f0f1908f95ec56678e1aea1e2666786a2cc1a7 in task-service has been cleanup successfully" Feb 13 21:06:46.991396 containerd[1793]: time="2025-02-13T21:06:46.991347933Z" level=info msg="TearDown network for sandbox \"d743908868c106c2ba60beb013f0f1908f95ec56678e1aea1e2666786a2cc1a7\" successfully" Feb 13 21:06:46.991396 containerd[1793]: time="2025-02-13T21:06:46.991356459Z" level=info msg="StopPodSandbox for \"d743908868c106c2ba60beb013f0f1908f95ec56678e1aea1e2666786a2cc1a7\" returns successfully" Feb 13 21:06:46.991455 kubelet[3104]: I0213 21:06:46.991446 3104 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="492d54f71e9ddb34de33c29601b5531497a6eb52fef88332daf738ef9a4555fd" Feb 13 21:06:46.991515 containerd[1793]: time="2025-02-13T21:06:46.991501185Z" level=info msg="StopPodSandbox for \"806d7e387453f8d8aa5dc4336096cd465cb7990f839cd53aea28bc6a97bca77d\"" Feb 13 21:06:46.991560 containerd[1793]: time="2025-02-13T21:06:46.991550173Z" level=info msg="TearDown network for sandbox \"806d7e387453f8d8aa5dc4336096cd465cb7990f839cd53aea28bc6a97bca77d\" successfully" Feb 13 21:06:46.991580 containerd[1793]: time="2025-02-13T21:06:46.991561715Z" level=info msg="StopPodSandbox for \"806d7e387453f8d8aa5dc4336096cd465cb7990f839cd53aea28bc6a97bca77d\" returns successfully" Feb 13 21:06:46.991750 containerd[1793]: time="2025-02-13T21:06:46.991709350Z" level=info msg="StopPodSandbox for \"492d54f71e9ddb34de33c29601b5531497a6eb52fef88332daf738ef9a4555fd\"" Feb 13 21:06:46.991807 containerd[1793]: time="2025-02-13T21:06:46.991798166Z" level=info msg="Ensure that sandbox 492d54f71e9ddb34de33c29601b5531497a6eb52fef88332daf738ef9a4555fd in task-service has been cleanup successfully" Feb 13 21:06:46.991849 containerd[1793]: time="2025-02-13T21:06:46.991838580Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-dfcf67d5f-gh88q,Uid:0b74a159-92ea-4d91-b1cc-e328f7cf493e,Namespace:calico-apiserver,Attempt:2,}" Feb 13 21:06:46.991892 containerd[1793]: time="2025-02-13T21:06:46.991883870Z" level=info msg="TearDown network for sandbox \"492d54f71e9ddb34de33c29601b5531497a6eb52fef88332daf738ef9a4555fd\" successfully" Feb 13 21:06:46.991925 containerd[1793]: time="2025-02-13T21:06:46.991891724Z" level=info msg="StopPodSandbox for \"492d54f71e9ddb34de33c29601b5531497a6eb52fef88332daf738ef9a4555fd\" returns successfully" Feb 13 21:06:46.992033 containerd[1793]: time="2025-02-13T21:06:46.992019548Z" level=info msg="StopPodSandbox for \"e9526415ddf2b5def2736c83dbde440214ca6bbca341b6a7e7eede781d227057\"" Feb 13 21:06:46.992055 kubelet[3104]: I0213 21:06:46.992029 3104 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7501b1f0bebccf8c96d599d12b518a8ac89e39145380dcd10318dd4669755ac7" Feb 13 21:06:46.992092 containerd[1793]: time="2025-02-13T21:06:46.992079161Z" level=info msg="TearDown network for sandbox \"e9526415ddf2b5def2736c83dbde440214ca6bbca341b6a7e7eede781d227057\" successfully" Feb 13 21:06:46.992118 
containerd[1793]: time="2025-02-13T21:06:46.992092547Z" level=info msg="StopPodSandbox for \"e9526415ddf2b5def2736c83dbde440214ca6bbca341b6a7e7eede781d227057\" returns successfully" Feb 13 21:06:46.992320 containerd[1793]: time="2025-02-13T21:06:46.992307633Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-dfcf67d5f-gwq2t,Uid:37d2fc1a-4796-4ba1-89bd-a5b0ae45374e,Namespace:calico-apiserver,Attempt:2,}" Feb 13 21:06:46.992345 containerd[1793]: time="2025-02-13T21:06:46.992318557Z" level=info msg="StopPodSandbox for \"7501b1f0bebccf8c96d599d12b518a8ac89e39145380dcd10318dd4669755ac7\"" Feb 13 21:06:46.992422 containerd[1793]: time="2025-02-13T21:06:46.992413131Z" level=info msg="Ensure that sandbox 7501b1f0bebccf8c96d599d12b518a8ac89e39145380dcd10318dd4669755ac7 in task-service has been cleanup successfully" Feb 13 21:06:46.992495 containerd[1793]: time="2025-02-13T21:06:46.992486308Z" level=info msg="TearDown network for sandbox \"7501b1f0bebccf8c96d599d12b518a8ac89e39145380dcd10318dd4669755ac7\" successfully" Feb 13 21:06:46.992513 containerd[1793]: time="2025-02-13T21:06:46.992494962Z" level=info msg="StopPodSandbox for \"7501b1f0bebccf8c96d599d12b518a8ac89e39145380dcd10318dd4669755ac7\" returns successfully" Feb 13 21:06:46.992613 containerd[1793]: time="2025-02-13T21:06:46.992602167Z" level=info msg="StopPodSandbox for \"1a9edd12dcaee4b4406dd52975b192e9f91b2c28c5929cf19d4f43b12760a3ad\"" Feb 13 21:06:46.992689 kubelet[3104]: I0213 21:06:46.992681 3104 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7d5ec4c5cb492ab7ba5ec64c093f408197ef657dfbb781c600cfe63e505b2831" Feb 13 21:06:46.992718 containerd[1793]: time="2025-02-13T21:06:46.992659355Z" level=info msg="TearDown network for sandbox \"1a9edd12dcaee4b4406dd52975b192e9f91b2c28c5929cf19d4f43b12760a3ad\" successfully" Feb 13 21:06:46.992718 containerd[1793]: time="2025-02-13T21:06:46.992690449Z" level=info msg="StopPodSandbox for \"1a9edd12dcaee4b4406dd52975b192e9f91b2c28c5929cf19d4f43b12760a3ad\" returns successfully" Feb 13 21:06:46.992764 systemd[1]: run-netns-cni\x2d6c3f81ce\x2d4215\x2d68f8\x2d8d2f\x2dabbe06982cb2.mount: Deactivated successfully. 
Feb 13 21:06:46.992901 containerd[1793]: time="2025-02-13T21:06:46.992835363Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-wr725,Uid:a02339f1-24d3-4002-b90e-3dd190621c61,Namespace:kube-system,Attempt:2,}" Feb 13 21:06:46.992930 containerd[1793]: time="2025-02-13T21:06:46.992919325Z" level=info msg="StopPodSandbox for \"7d5ec4c5cb492ab7ba5ec64c093f408197ef657dfbb781c600cfe63e505b2831\"" Feb 13 21:06:46.993043 containerd[1793]: time="2025-02-13T21:06:46.993032453Z" level=info msg="Ensure that sandbox 7d5ec4c5cb492ab7ba5ec64c093f408197ef657dfbb781c600cfe63e505b2831 in task-service has been cleanup successfully" Feb 13 21:06:46.993130 containerd[1793]: time="2025-02-13T21:06:46.993118884Z" level=info msg="TearDown network for sandbox \"7d5ec4c5cb492ab7ba5ec64c093f408197ef657dfbb781c600cfe63e505b2831\" successfully" Feb 13 21:06:46.993157 containerd[1793]: time="2025-02-13T21:06:46.993129606Z" level=info msg="StopPodSandbox for \"7d5ec4c5cb492ab7ba5ec64c093f408197ef657dfbb781c600cfe63e505b2831\" returns successfully" Feb 13 21:06:46.993222 kubelet[3104]: I0213 21:06:46.993214 3104 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="466ce85604a81ffcf0adb514f74df036223f30285756dd3e79f40e1ea6c59de1" Feb 13 21:06:46.993263 containerd[1793]: time="2025-02-13T21:06:46.993253267Z" level=info msg="StopPodSandbox for \"607ba46d10ad96974b57e545cd12cdc1dd7b6138131f3ab6e1bc63407d40518f\"" Feb 13 21:06:46.993299 containerd[1793]: time="2025-02-13T21:06:46.993290675Z" level=info msg="TearDown network for sandbox \"607ba46d10ad96974b57e545cd12cdc1dd7b6138131f3ab6e1bc63407d40518f\" successfully" Feb 13 21:06:46.993320 containerd[1793]: time="2025-02-13T21:06:46.993299921Z" level=info msg="StopPodSandbox for \"607ba46d10ad96974b57e545cd12cdc1dd7b6138131f3ab6e1bc63407d40518f\" returns successfully" Feb 13 21:06:46.993456 containerd[1793]: time="2025-02-13T21:06:46.993447096Z" level=info msg="StopPodSandbox for \"466ce85604a81ffcf0adb514f74df036223f30285756dd3e79f40e1ea6c59de1\"" Feb 13 21:06:46.993502 containerd[1793]: time="2025-02-13T21:06:46.993458870Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dv5v9,Uid:22f50a6b-4846-46c1-8c41-99e176056718,Namespace:calico-system,Attempt:2,}" Feb 13 21:06:46.993545 containerd[1793]: time="2025-02-13T21:06:46.993536759Z" level=info msg="Ensure that sandbox 466ce85604a81ffcf0adb514f74df036223f30285756dd3e79f40e1ea6c59de1 in task-service has been cleanup successfully" Feb 13 21:06:46.993612 containerd[1793]: time="2025-02-13T21:06:46.993604151Z" level=info msg="TearDown network for sandbox \"466ce85604a81ffcf0adb514f74df036223f30285756dd3e79f40e1ea6c59de1\" successfully" Feb 13 21:06:46.993639 containerd[1793]: time="2025-02-13T21:06:46.993611503Z" level=info msg="StopPodSandbox for \"466ce85604a81ffcf0adb514f74df036223f30285756dd3e79f40e1ea6c59de1\" returns successfully" Feb 13 21:06:46.993716 kubelet[3104]: I0213 21:06:46.993707 3104 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e2d2a63cc355082e3d667838ab36d2b64fe4355a317ff4fee15a852a46dfa7fd" Feb 13 21:06:46.993772 containerd[1793]: time="2025-02-13T21:06:46.993763192Z" level=info msg="StopPodSandbox for \"abca32da56d9f98e0f2a3235028b1bc2f3f48389f22bae6604fcddef0f674739\"" Feb 13 21:06:46.993815 containerd[1793]: time="2025-02-13T21:06:46.993808181Z" level=info msg="TearDown network for sandbox \"abca32da56d9f98e0f2a3235028b1bc2f3f48389f22bae6604fcddef0f674739\" successfully" Feb 13 
21:06:46.993838 containerd[1793]: time="2025-02-13T21:06:46.993814726Z" level=info msg="StopPodSandbox for \"abca32da56d9f98e0f2a3235028b1bc2f3f48389f22bae6604fcddef0f674739\" returns successfully" Feb 13 21:06:46.993919 containerd[1793]: time="2025-02-13T21:06:46.993907966Z" level=info msg="StopPodSandbox for \"e2d2a63cc355082e3d667838ab36d2b64fe4355a317ff4fee15a852a46dfa7fd\"" Feb 13 21:06:46.994012 containerd[1793]: time="2025-02-13T21:06:46.994001684Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-76848bdf96-56fb4,Uid:4552908e-a949-4d18-be80-ec54e1c8e06d,Namespace:calico-system,Attempt:2,}" Feb 13 21:06:46.994082 containerd[1793]: time="2025-02-13T21:06:46.994002622Z" level=info msg="Ensure that sandbox e2d2a63cc355082e3d667838ab36d2b64fe4355a317ff4fee15a852a46dfa7fd in task-service has been cleanup successfully" Feb 13 21:06:46.994162 containerd[1793]: time="2025-02-13T21:06:46.994153185Z" level=info msg="TearDown network for sandbox \"e2d2a63cc355082e3d667838ab36d2b64fe4355a317ff4fee15a852a46dfa7fd\" successfully" Feb 13 21:06:46.994194 containerd[1793]: time="2025-02-13T21:06:46.994161778Z" level=info msg="StopPodSandbox for \"e2d2a63cc355082e3d667838ab36d2b64fe4355a317ff4fee15a852a46dfa7fd\" returns successfully" Feb 13 21:06:46.994276 containerd[1793]: time="2025-02-13T21:06:46.994267641Z" level=info msg="StopPodSandbox for \"d8a3a4b346869a0c10429ac8070838b577439133613a8cd388036c740d6d9069\"" Feb 13 21:06:46.994308 containerd[1793]: time="2025-02-13T21:06:46.994301723Z" level=info msg="TearDown network for sandbox \"d8a3a4b346869a0c10429ac8070838b577439133613a8cd388036c740d6d9069\" successfully" Feb 13 21:06:46.994354 containerd[1793]: time="2025-02-13T21:06:46.994307585Z" level=info msg="StopPodSandbox for \"d8a3a4b346869a0c10429ac8070838b577439133613a8cd388036c740d6d9069\" returns successfully" Feb 13 21:06:46.994448 containerd[1793]: time="2025-02-13T21:06:46.994439880Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-pfzwq,Uid:322cd9eb-1852-4c5c-aae9-cdfcee960ce3,Namespace:kube-system,Attempt:2,}" Feb 13 21:06:46.994526 systemd[1]: run-netns-cni\x2d11eee7df\x2dd4ec\x2d0b89\x2d96b7\x2d3ee0d57551ce.mount: Deactivated successfully. Feb 13 21:06:46.994577 systemd[1]: run-netns-cni\x2db9d53058\x2d8138\x2de5bb\x2dac2c\x2d601766ce14ce.mount: Deactivated successfully. Feb 13 21:06:46.994616 systemd[1]: run-netns-cni\x2d87f28edc\x2da4ce\x2da64c\x2dcdc0\x2d7a5a8afa5cb3.mount: Deactivated successfully. Feb 13 21:06:46.996455 systemd[1]: run-netns-cni\x2daccec48e\x2d1854\x2d6cf4\x2d5bf0\x2d35b5ae8f4948.mount: Deactivated successfully. Feb 13 21:06:46.996499 systemd[1]: run-netns-cni\x2d29c4f08a\x2da41f\x2df5ae\x2d2a16\x2d17b4f97ade55.mount: Deactivated successfully. 
Feb 13 21:06:47.028857 containerd[1793]: time="2025-02-13T21:06:47.028796702Z" level=error msg="Failed to destroy network for sandbox \"bc71ef1d5f9c21449e07e38c85ffe1beac4f9a807ff5e0b31763576b20c05628\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:06:47.029102 containerd[1793]: time="2025-02-13T21:06:47.029082410Z" level=error msg="encountered an error cleaning up failed sandbox \"bc71ef1d5f9c21449e07e38c85ffe1beac4f9a807ff5e0b31763576b20c05628\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:06:47.029204 containerd[1793]: time="2025-02-13T21:06:47.029189240Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-dfcf67d5f-gh88q,Uid:0b74a159-92ea-4d91-b1cc-e328f7cf493e,Namespace:calico-apiserver,Attempt:2,} failed, error" error="failed to setup network for sandbox \"bc71ef1d5f9c21449e07e38c85ffe1beac4f9a807ff5e0b31763576b20c05628\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:06:47.029384 kubelet[3104]: E0213 21:06:47.029356 3104 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bc71ef1d5f9c21449e07e38c85ffe1beac4f9a807ff5e0b31763576b20c05628\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:06:47.029428 kubelet[3104]: E0213 21:06:47.029412 3104 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bc71ef1d5f9c21449e07e38c85ffe1beac4f9a807ff5e0b31763576b20c05628\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-dfcf67d5f-gh88q" Feb 13 21:06:47.029577 kubelet[3104]: E0213 21:06:47.029437 3104 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bc71ef1d5f9c21449e07e38c85ffe1beac4f9a807ff5e0b31763576b20c05628\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-dfcf67d5f-gh88q" Feb 13 21:06:47.029577 kubelet[3104]: E0213 21:06:47.029480 3104 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-dfcf67d5f-gh88q_calico-apiserver(0b74a159-92ea-4d91-b1cc-e328f7cf493e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-dfcf67d5f-gh88q_calico-apiserver(0b74a159-92ea-4d91-b1cc-e328f7cf493e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bc71ef1d5f9c21449e07e38c85ffe1beac4f9a807ff5e0b31763576b20c05628\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-dfcf67d5f-gh88q" podUID="0b74a159-92ea-4d91-b1cc-e328f7cf493e" Feb 13 21:06:47.033782 containerd[1793]: time="2025-02-13T21:06:47.033748136Z" level=error msg="Failed to destroy network for sandbox \"28598cb5bb4f19ef2be3d2eeb2993826fb614f87c197bbbec9e1ef75161630b9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:06:47.033967 containerd[1793]: time="2025-02-13T21:06:47.033953750Z" level=error msg="encountered an error cleaning up failed sandbox \"28598cb5bb4f19ef2be3d2eeb2993826fb614f87c197bbbec9e1ef75161630b9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:06:47.034006 containerd[1793]: time="2025-02-13T21:06:47.033992081Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-dfcf67d5f-gwq2t,Uid:37d2fc1a-4796-4ba1-89bd-a5b0ae45374e,Namespace:calico-apiserver,Attempt:2,} failed, error" error="failed to setup network for sandbox \"28598cb5bb4f19ef2be3d2eeb2993826fb614f87c197bbbec9e1ef75161630b9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:06:47.034060 containerd[1793]: time="2025-02-13T21:06:47.033998299Z" level=error msg="Failed to destroy network for sandbox \"0b3206681777142bb4f39eddf842578db1abd2a1ba3d164b0fef9121f983646c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:06:47.034097 containerd[1793]: time="2025-02-13T21:06:47.034057694Z" level=error msg="Failed to destroy network for sandbox \"7d0ffa7070cf221d6b53fde8753c80adaebce4b303256fec539795ff6b8d0673\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:06:47.034165 kubelet[3104]: E0213 21:06:47.034135 3104 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"28598cb5bb4f19ef2be3d2eeb2993826fb614f87c197bbbec9e1ef75161630b9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:06:47.034214 kubelet[3104]: E0213 21:06:47.034187 3104 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"28598cb5bb4f19ef2be3d2eeb2993826fb614f87c197bbbec9e1ef75161630b9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-dfcf67d5f-gwq2t" Feb 13 21:06:47.034214 kubelet[3104]: E0213 21:06:47.034208 3104 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"28598cb5bb4f19ef2be3d2eeb2993826fb614f87c197bbbec9e1ef75161630b9\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-dfcf67d5f-gwq2t" Feb 13 21:06:47.034281 containerd[1793]: time="2025-02-13T21:06:47.034196789Z" level=error msg="encountered an error cleaning up failed sandbox \"0b3206681777142bb4f39eddf842578db1abd2a1ba3d164b0fef9121f983646c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:06:47.034281 containerd[1793]: time="2025-02-13T21:06:47.034220850Z" level=error msg="Failed to destroy network for sandbox \"f39bdc1c54e67583094579cb0143089f0fee0e796689505e74340aa0877e958a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:06:47.034281 containerd[1793]: time="2025-02-13T21:06:47.034244691Z" level=error msg="encountered an error cleaning up failed sandbox \"7d0ffa7070cf221d6b53fde8753c80adaebce4b303256fec539795ff6b8d0673\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:06:47.034281 containerd[1793]: time="2025-02-13T21:06:47.034231503Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-wr725,Uid:a02339f1-24d3-4002-b90e-3dd190621c61,Namespace:kube-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"0b3206681777142bb4f39eddf842578db1abd2a1ba3d164b0fef9121f983646c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:06:47.034402 kubelet[3104]: E0213 21:06:47.034248 3104 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-dfcf67d5f-gwq2t_calico-apiserver(37d2fc1a-4796-4ba1-89bd-a5b0ae45374e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-dfcf67d5f-gwq2t_calico-apiserver(37d2fc1a-4796-4ba1-89bd-a5b0ae45374e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"28598cb5bb4f19ef2be3d2eeb2993826fb614f87c197bbbec9e1ef75161630b9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-dfcf67d5f-gwq2t" podUID="37d2fc1a-4796-4ba1-89bd-a5b0ae45374e" Feb 13 21:06:47.034443 containerd[1793]: time="2025-02-13T21:06:47.034279207Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dv5v9,Uid:22f50a6b-4846-46c1-8c41-99e176056718,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"7d0ffa7070cf221d6b53fde8753c80adaebce4b303256fec539795ff6b8d0673\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:06:47.034443 containerd[1793]: time="2025-02-13T21:06:47.034419440Z" level=error msg="encountered an error cleaning up failed sandbox 
\"f39bdc1c54e67583094579cb0143089f0fee0e796689505e74340aa0877e958a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:06:47.034482 kubelet[3104]: E0213 21:06:47.034403 3104 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7d0ffa7070cf221d6b53fde8753c80adaebce4b303256fec539795ff6b8d0673\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:06:47.034482 kubelet[3104]: E0213 21:06:47.034418 3104 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0b3206681777142bb4f39eddf842578db1abd2a1ba3d164b0fef9121f983646c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:06:47.034482 kubelet[3104]: E0213 21:06:47.034429 3104 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7d0ffa7070cf221d6b53fde8753c80adaebce4b303256fec539795ff6b8d0673\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-dv5v9" Feb 13 21:06:47.034482 kubelet[3104]: E0213 21:06:47.034438 3104 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0b3206681777142bb4f39eddf842578db1abd2a1ba3d164b0fef9121f983646c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-wr725" Feb 13 21:06:47.034581 containerd[1793]: time="2025-02-13T21:06:47.034445029Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-76848bdf96-56fb4,Uid:4552908e-a949-4d18-be80-ec54e1c8e06d,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"f39bdc1c54e67583094579cb0143089f0fee0e796689505e74340aa0877e958a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:06:47.034646 kubelet[3104]: E0213 21:06:47.034442 3104 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7d0ffa7070cf221d6b53fde8753c80adaebce4b303256fec539795ff6b8d0673\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-dv5v9" Feb 13 21:06:47.034646 kubelet[3104]: E0213 21:06:47.034451 3104 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0b3206681777142bb4f39eddf842578db1abd2a1ba3d164b0fef9121f983646c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: 
no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-wr725" Feb 13 21:06:47.034646 kubelet[3104]: E0213 21:06:47.034466 3104 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-dv5v9_calico-system(22f50a6b-4846-46c1-8c41-99e176056718)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-dv5v9_calico-system(22f50a6b-4846-46c1-8c41-99e176056718)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7d0ffa7070cf221d6b53fde8753c80adaebce4b303256fec539795ff6b8d0673\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-dv5v9" podUID="22f50a6b-4846-46c1-8c41-99e176056718" Feb 13 21:06:47.034719 kubelet[3104]: E0213 21:06:47.034472 3104 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-wr725_kube-system(a02339f1-24d3-4002-b90e-3dd190621c61)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-wr725_kube-system(a02339f1-24d3-4002-b90e-3dd190621c61)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0b3206681777142bb4f39eddf842578db1abd2a1ba3d164b0fef9121f983646c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-wr725" podUID="a02339f1-24d3-4002-b90e-3dd190621c61" Feb 13 21:06:47.034719 kubelet[3104]: E0213 21:06:47.034511 3104 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f39bdc1c54e67583094579cb0143089f0fee0e796689505e74340aa0877e958a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:06:47.034719 kubelet[3104]: E0213 21:06:47.034532 3104 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f39bdc1c54e67583094579cb0143089f0fee0e796689505e74340aa0877e958a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-76848bdf96-56fb4" Feb 13 21:06:47.034781 kubelet[3104]: E0213 21:06:47.034550 3104 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f39bdc1c54e67583094579cb0143089f0fee0e796689505e74340aa0877e958a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-76848bdf96-56fb4" Feb 13 21:06:47.034781 kubelet[3104]: E0213 21:06:47.034579 3104 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-76848bdf96-56fb4_calico-system(4552908e-a949-4d18-be80-ec54e1c8e06d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"calico-kube-controllers-76848bdf96-56fb4_calico-system(4552908e-a949-4d18-be80-ec54e1c8e06d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f39bdc1c54e67583094579cb0143089f0fee0e796689505e74340aa0877e958a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-76848bdf96-56fb4" podUID="4552908e-a949-4d18-be80-ec54e1c8e06d" Feb 13 21:06:47.035855 containerd[1793]: time="2025-02-13T21:06:47.035838580Z" level=error msg="Failed to destroy network for sandbox \"d709f55a62a0ee266e179ae3b4078db4e87cc555d02412f211cd24e553d91a3d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:06:47.036004 containerd[1793]: time="2025-02-13T21:06:47.035968043Z" level=error msg="encountered an error cleaning up failed sandbox \"d709f55a62a0ee266e179ae3b4078db4e87cc555d02412f211cd24e553d91a3d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:06:47.036004 containerd[1793]: time="2025-02-13T21:06:47.035991395Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-pfzwq,Uid:322cd9eb-1852-4c5c-aae9-cdfcee960ce3,Namespace:kube-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"d709f55a62a0ee266e179ae3b4078db4e87cc555d02412f211cd24e553d91a3d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:06:47.036141 kubelet[3104]: E0213 21:06:47.036103 3104 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d709f55a62a0ee266e179ae3b4078db4e87cc555d02412f211cd24e553d91a3d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:06:47.036141 kubelet[3104]: E0213 21:06:47.036124 3104 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d709f55a62a0ee266e179ae3b4078db4e87cc555d02412f211cd24e553d91a3d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-pfzwq" Feb 13 21:06:47.036221 kubelet[3104]: E0213 21:06:47.036134 3104 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d709f55a62a0ee266e179ae3b4078db4e87cc555d02412f211cd24e553d91a3d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-pfzwq" Feb 13 21:06:47.036221 kubelet[3104]: E0213 21:06:47.036194 3104 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"coredns-668d6bf9bc-pfzwq_kube-system(322cd9eb-1852-4c5c-aae9-cdfcee960ce3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-pfzwq_kube-system(322cd9eb-1852-4c5c-aae9-cdfcee960ce3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d709f55a62a0ee266e179ae3b4078db4e87cc555d02412f211cd24e553d91a3d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-pfzwq" podUID="322cd9eb-1852-4c5c-aae9-cdfcee960ce3" Feb 13 21:06:47.587818 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-bc71ef1d5f9c21449e07e38c85ffe1beac4f9a807ff5e0b31763576b20c05628-shm.mount: Deactivated successfully. Feb 13 21:06:48.002395 kubelet[3104]: I0213 21:06:48.002314 3104 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bc71ef1d5f9c21449e07e38c85ffe1beac4f9a807ff5e0b31763576b20c05628" Feb 13 21:06:48.002671 containerd[1793]: time="2025-02-13T21:06:48.002629269Z" level=info msg="StopPodSandbox for \"bc71ef1d5f9c21449e07e38c85ffe1beac4f9a807ff5e0b31763576b20c05628\"" Feb 13 21:06:48.002890 containerd[1793]: time="2025-02-13T21:06:48.002880808Z" level=info msg="Ensure that sandbox bc71ef1d5f9c21449e07e38c85ffe1beac4f9a807ff5e0b31763576b20c05628 in task-service has been cleanup successfully" Feb 13 21:06:48.002984 containerd[1793]: time="2025-02-13T21:06:48.002973598Z" level=info msg="TearDown network for sandbox \"bc71ef1d5f9c21449e07e38c85ffe1beac4f9a807ff5e0b31763576b20c05628\" successfully" Feb 13 21:06:48.003004 containerd[1793]: time="2025-02-13T21:06:48.002984452Z" level=info msg="StopPodSandbox for \"bc71ef1d5f9c21449e07e38c85ffe1beac4f9a807ff5e0b31763576b20c05628\" returns successfully" Feb 13 21:06:48.003102 containerd[1793]: time="2025-02-13T21:06:48.003092139Z" level=info msg="StopPodSandbox for \"d743908868c106c2ba60beb013f0f1908f95ec56678e1aea1e2666786a2cc1a7\"" Feb 13 21:06:48.003165 kubelet[3104]: I0213 21:06:48.003157 3104 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="28598cb5bb4f19ef2be3d2eeb2993826fb614f87c197bbbec9e1ef75161630b9" Feb 13 21:06:48.003194 containerd[1793]: time="2025-02-13T21:06:48.003148045Z" level=info msg="TearDown network for sandbox \"d743908868c106c2ba60beb013f0f1908f95ec56678e1aea1e2666786a2cc1a7\" successfully" Feb 13 21:06:48.003194 containerd[1793]: time="2025-02-13T21:06:48.003179808Z" level=info msg="StopPodSandbox for \"d743908868c106c2ba60beb013f0f1908f95ec56678e1aea1e2666786a2cc1a7\" returns successfully" Feb 13 21:06:48.003319 containerd[1793]: time="2025-02-13T21:06:48.003307311Z" level=info msg="StopPodSandbox for \"806d7e387453f8d8aa5dc4336096cd465cb7990f839cd53aea28bc6a97bca77d\"" Feb 13 21:06:48.003381 containerd[1793]: time="2025-02-13T21:06:48.003359053Z" level=info msg="TearDown network for sandbox \"806d7e387453f8d8aa5dc4336096cd465cb7990f839cd53aea28bc6a97bca77d\" successfully" Feb 13 21:06:48.003413 containerd[1793]: time="2025-02-13T21:06:48.003381546Z" level=info msg="StopPodSandbox for \"806d7e387453f8d8aa5dc4336096cd465cb7990f839cd53aea28bc6a97bca77d\" returns successfully" Feb 13 21:06:48.003413 containerd[1793]: time="2025-02-13T21:06:48.003370913Z" level=info msg="StopPodSandbox for \"28598cb5bb4f19ef2be3d2eeb2993826fb614f87c197bbbec9e1ef75161630b9\"" Feb 13 21:06:48.003500 containerd[1793]: time="2025-02-13T21:06:48.003490617Z" level=info 
msg="Ensure that sandbox 28598cb5bb4f19ef2be3d2eeb2993826fb614f87c197bbbec9e1ef75161630b9 in task-service has been cleanup successfully" Feb 13 21:06:48.003558 containerd[1793]: time="2025-02-13T21:06:48.003544928Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-dfcf67d5f-gh88q,Uid:0b74a159-92ea-4d91-b1cc-e328f7cf493e,Namespace:calico-apiserver,Attempt:3,}" Feb 13 21:06:48.003595 containerd[1793]: time="2025-02-13T21:06:48.003577245Z" level=info msg="TearDown network for sandbox \"28598cb5bb4f19ef2be3d2eeb2993826fb614f87c197bbbec9e1ef75161630b9\" successfully" Feb 13 21:06:48.003595 containerd[1793]: time="2025-02-13T21:06:48.003587618Z" level=info msg="StopPodSandbox for \"28598cb5bb4f19ef2be3d2eeb2993826fb614f87c197bbbec9e1ef75161630b9\" returns successfully" Feb 13 21:06:48.003693 containerd[1793]: time="2025-02-13T21:06:48.003683866Z" level=info msg="StopPodSandbox for \"492d54f71e9ddb34de33c29601b5531497a6eb52fef88332daf738ef9a4555fd\"" Feb 13 21:06:48.003738 containerd[1793]: time="2025-02-13T21:06:48.003730391Z" level=info msg="TearDown network for sandbox \"492d54f71e9ddb34de33c29601b5531497a6eb52fef88332daf738ef9a4555fd\" successfully" Feb 13 21:06:48.003774 containerd[1793]: time="2025-02-13T21:06:48.003738313Z" level=info msg="StopPodSandbox for \"492d54f71e9ddb34de33c29601b5531497a6eb52fef88332daf738ef9a4555fd\" returns successfully" Feb 13 21:06:48.003815 kubelet[3104]: I0213 21:06:48.003803 3104 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0b3206681777142bb4f39eddf842578db1abd2a1ba3d164b0fef9121f983646c" Feb 13 21:06:48.003932 containerd[1793]: time="2025-02-13T21:06:48.003917554Z" level=info msg="StopPodSandbox for \"e9526415ddf2b5def2736c83dbde440214ca6bbca341b6a7e7eede781d227057\"" Feb 13 21:06:48.003985 containerd[1793]: time="2025-02-13T21:06:48.003963226Z" level=info msg="TearDown network for sandbox \"e9526415ddf2b5def2736c83dbde440214ca6bbca341b6a7e7eede781d227057\" successfully" Feb 13 21:06:48.004008 containerd[1793]: time="2025-02-13T21:06:48.003980719Z" level=info msg="StopPodSandbox for \"0b3206681777142bb4f39eddf842578db1abd2a1ba3d164b0fef9121f983646c\"" Feb 13 21:06:48.004066 containerd[1793]: time="2025-02-13T21:06:48.003985935Z" level=info msg="StopPodSandbox for \"e9526415ddf2b5def2736c83dbde440214ca6bbca341b6a7e7eede781d227057\" returns successfully" Feb 13 21:06:48.004102 containerd[1793]: time="2025-02-13T21:06:48.004093783Z" level=info msg="Ensure that sandbox 0b3206681777142bb4f39eddf842578db1abd2a1ba3d164b0fef9121f983646c in task-service has been cleanup successfully" Feb 13 21:06:48.004176 containerd[1793]: time="2025-02-13T21:06:48.004166611Z" level=info msg="TearDown network for sandbox \"0b3206681777142bb4f39eddf842578db1abd2a1ba3d164b0fef9121f983646c\" successfully" Feb 13 21:06:48.004176 containerd[1793]: time="2025-02-13T21:06:48.004174706Z" level=info msg="StopPodSandbox for \"0b3206681777142bb4f39eddf842578db1abd2a1ba3d164b0fef9121f983646c\" returns successfully" Feb 13 21:06:48.004289 containerd[1793]: time="2025-02-13T21:06:48.004277111Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-dfcf67d5f-gwq2t,Uid:37d2fc1a-4796-4ba1-89bd-a5b0ae45374e,Namespace:calico-apiserver,Attempt:3,}" Feb 13 21:06:48.004316 containerd[1793]: time="2025-02-13T21:06:48.004284308Z" level=info msg="StopPodSandbox for \"7501b1f0bebccf8c96d599d12b518a8ac89e39145380dcd10318dd4669755ac7\"" Feb 13 21:06:48.004347 containerd[1793]: time="2025-02-13T21:06:48.004328785Z" 
level=info msg="TearDown network for sandbox \"7501b1f0bebccf8c96d599d12b518a8ac89e39145380dcd10318dd4669755ac7\" successfully" Feb 13 21:06:48.004371 containerd[1793]: time="2025-02-13T21:06:48.004347125Z" level=info msg="StopPodSandbox for \"7501b1f0bebccf8c96d599d12b518a8ac89e39145380dcd10318dd4669755ac7\" returns successfully" Feb 13 21:06:48.004427 kubelet[3104]: I0213 21:06:48.004419 3104 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7d0ffa7070cf221d6b53fde8753c80adaebce4b303256fec539795ff6b8d0673" Feb 13 21:06:48.004473 containerd[1793]: time="2025-02-13T21:06:48.004464595Z" level=info msg="StopPodSandbox for \"1a9edd12dcaee4b4406dd52975b192e9f91b2c28c5929cf19d4f43b12760a3ad\"" Feb 13 21:06:48.004517 containerd[1793]: time="2025-02-13T21:06:48.004506717Z" level=info msg="TearDown network for sandbox \"1a9edd12dcaee4b4406dd52975b192e9f91b2c28c5929cf19d4f43b12760a3ad\" successfully" Feb 13 21:06:48.004545 containerd[1793]: time="2025-02-13T21:06:48.004517142Z" level=info msg="StopPodSandbox for \"1a9edd12dcaee4b4406dd52975b192e9f91b2c28c5929cf19d4f43b12760a3ad\" returns successfully" Feb 13 21:06:48.004619 containerd[1793]: time="2025-02-13T21:06:48.004606472Z" level=info msg="StopPodSandbox for \"7d0ffa7070cf221d6b53fde8753c80adaebce4b303256fec539795ff6b8d0673\"" Feb 13 21:06:48.004702 containerd[1793]: time="2025-02-13T21:06:48.004692221Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-wr725,Uid:a02339f1-24d3-4002-b90e-3dd190621c61,Namespace:kube-system,Attempt:3,}" Feb 13 21:06:48.004734 containerd[1793]: time="2025-02-13T21:06:48.004717734Z" level=info msg="Ensure that sandbox 7d0ffa7070cf221d6b53fde8753c80adaebce4b303256fec539795ff6b8d0673 in task-service has been cleanup successfully" Feb 13 21:06:48.004805 containerd[1793]: time="2025-02-13T21:06:48.004794383Z" level=info msg="TearDown network for sandbox \"7d0ffa7070cf221d6b53fde8753c80adaebce4b303256fec539795ff6b8d0673\" successfully" Feb 13 21:06:48.004827 containerd[1793]: time="2025-02-13T21:06:48.004804766Z" level=info msg="StopPodSandbox for \"7d0ffa7070cf221d6b53fde8753c80adaebce4b303256fec539795ff6b8d0673\" returns successfully" Feb 13 21:06:48.004915 containerd[1793]: time="2025-02-13T21:06:48.004905579Z" level=info msg="StopPodSandbox for \"7d5ec4c5cb492ab7ba5ec64c093f408197ef657dfbb781c600cfe63e505b2831\"" Feb 13 21:06:48.004954 containerd[1793]: time="2025-02-13T21:06:48.004946737Z" level=info msg="TearDown network for sandbox \"7d5ec4c5cb492ab7ba5ec64c093f408197ef657dfbb781c600cfe63e505b2831\" successfully" Feb 13 21:06:48.004973 containerd[1793]: time="2025-02-13T21:06:48.004954178Z" level=info msg="StopPodSandbox for \"7d5ec4c5cb492ab7ba5ec64c093f408197ef657dfbb781c600cfe63e505b2831\" returns successfully" Feb 13 21:06:48.004991 systemd[1]: run-netns-cni\x2d0ed3c57f\x2d0187\x2d2c62\x2df30c\x2d7c217defd9ae.mount: Deactivated successfully. 
Feb 13 21:06:48.005110 kubelet[3104]: I0213 21:06:48.005040 3104 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f39bdc1c54e67583094579cb0143089f0fee0e796689505e74340aa0877e958a" Feb 13 21:06:48.005138 containerd[1793]: time="2025-02-13T21:06:48.005047767Z" level=info msg="StopPodSandbox for \"607ba46d10ad96974b57e545cd12cdc1dd7b6138131f3ab6e1bc63407d40518f\"" Feb 13 21:06:48.005138 containerd[1793]: time="2025-02-13T21:06:48.005086492Z" level=info msg="TearDown network for sandbox \"607ba46d10ad96974b57e545cd12cdc1dd7b6138131f3ab6e1bc63407d40518f\" successfully" Feb 13 21:06:48.005138 containerd[1793]: time="2025-02-13T21:06:48.005108235Z" level=info msg="StopPodSandbox for \"607ba46d10ad96974b57e545cd12cdc1dd7b6138131f3ab6e1bc63407d40518f\" returns successfully" Feb 13 21:06:48.005287 containerd[1793]: time="2025-02-13T21:06:48.005272708Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dv5v9,Uid:22f50a6b-4846-46c1-8c41-99e176056718,Namespace:calico-system,Attempt:3,}" Feb 13 21:06:48.005326 containerd[1793]: time="2025-02-13T21:06:48.005285510Z" level=info msg="StopPodSandbox for \"f39bdc1c54e67583094579cb0143089f0fee0e796689505e74340aa0877e958a\"" Feb 13 21:06:48.005379 containerd[1793]: time="2025-02-13T21:06:48.005369909Z" level=info msg="Ensure that sandbox f39bdc1c54e67583094579cb0143089f0fee0e796689505e74340aa0877e958a in task-service has been cleanup successfully" Feb 13 21:06:48.005448 containerd[1793]: time="2025-02-13T21:06:48.005440436Z" level=info msg="TearDown network for sandbox \"f39bdc1c54e67583094579cb0143089f0fee0e796689505e74340aa0877e958a\" successfully" Feb 13 21:06:48.005468 containerd[1793]: time="2025-02-13T21:06:48.005447768Z" level=info msg="StopPodSandbox for \"f39bdc1c54e67583094579cb0143089f0fee0e796689505e74340aa0877e958a\" returns successfully" Feb 13 21:06:48.005581 containerd[1793]: time="2025-02-13T21:06:48.005571076Z" level=info msg="StopPodSandbox for \"466ce85604a81ffcf0adb514f74df036223f30285756dd3e79f40e1ea6c59de1\"" Feb 13 21:06:48.005645 kubelet[3104]: I0213 21:06:48.005635 3104 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d709f55a62a0ee266e179ae3b4078db4e87cc555d02412f211cd24e553d91a3d" Feb 13 21:06:48.005684 containerd[1793]: time="2025-02-13T21:06:48.005617188Z" level=info msg="TearDown network for sandbox \"466ce85604a81ffcf0adb514f74df036223f30285756dd3e79f40e1ea6c59de1\" successfully" Feb 13 21:06:48.005684 containerd[1793]: time="2025-02-13T21:06:48.005651539Z" level=info msg="StopPodSandbox for \"466ce85604a81ffcf0adb514f74df036223f30285756dd3e79f40e1ea6c59de1\" returns successfully" Feb 13 21:06:48.005803 containerd[1793]: time="2025-02-13T21:06:48.005790633Z" level=info msg="StopPodSandbox for \"abca32da56d9f98e0f2a3235028b1bc2f3f48389f22bae6604fcddef0f674739\"" Feb 13 21:06:48.005829 containerd[1793]: time="2025-02-13T21:06:48.005821751Z" level=info msg="StopPodSandbox for \"d709f55a62a0ee266e179ae3b4078db4e87cc555d02412f211cd24e553d91a3d\"" Feb 13 21:06:48.005852 containerd[1793]: time="2025-02-13T21:06:48.005842148Z" level=info msg="TearDown network for sandbox \"abca32da56d9f98e0f2a3235028b1bc2f3f48389f22bae6604fcddef0f674739\" successfully" Feb 13 21:06:48.005869 containerd[1793]: time="2025-02-13T21:06:48.005852866Z" level=info msg="StopPodSandbox for \"abca32da56d9f98e0f2a3235028b1bc2f3f48389f22bae6604fcddef0f674739\" returns successfully" Feb 13 21:06:48.005939 containerd[1793]: time="2025-02-13T21:06:48.005928106Z" level=info 
msg="Ensure that sandbox d709f55a62a0ee266e179ae3b4078db4e87cc555d02412f211cd24e553d91a3d in task-service has been cleanup successfully" Feb 13 21:06:48.006022 containerd[1793]: time="2025-02-13T21:06:48.006013905Z" level=info msg="TearDown network for sandbox \"d709f55a62a0ee266e179ae3b4078db4e87cc555d02412f211cd24e553d91a3d\" successfully" Feb 13 21:06:48.006042 containerd[1793]: time="2025-02-13T21:06:48.006022048Z" level=info msg="StopPodSandbox for \"d709f55a62a0ee266e179ae3b4078db4e87cc555d02412f211cd24e553d91a3d\" returns successfully" Feb 13 21:06:48.006042 containerd[1793]: time="2025-02-13T21:06:48.006025729Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-76848bdf96-56fb4,Uid:4552908e-a949-4d18-be80-ec54e1c8e06d,Namespace:calico-system,Attempt:3,}" Feb 13 21:06:48.006139 containerd[1793]: time="2025-02-13T21:06:48.006130038Z" level=info msg="StopPodSandbox for \"e2d2a63cc355082e3d667838ab36d2b64fe4355a317ff4fee15a852a46dfa7fd\"" Feb 13 21:06:48.006181 containerd[1793]: time="2025-02-13T21:06:48.006173831Z" level=info msg="TearDown network for sandbox \"e2d2a63cc355082e3d667838ab36d2b64fe4355a317ff4fee15a852a46dfa7fd\" successfully" Feb 13 21:06:48.006198 containerd[1793]: time="2025-02-13T21:06:48.006180538Z" level=info msg="StopPodSandbox for \"e2d2a63cc355082e3d667838ab36d2b64fe4355a317ff4fee15a852a46dfa7fd\" returns successfully" Feb 13 21:06:48.006301 containerd[1793]: time="2025-02-13T21:06:48.006292034Z" level=info msg="StopPodSandbox for \"d8a3a4b346869a0c10429ac8070838b577439133613a8cd388036c740d6d9069\"" Feb 13 21:06:48.006341 containerd[1793]: time="2025-02-13T21:06:48.006334542Z" level=info msg="TearDown network for sandbox \"d8a3a4b346869a0c10429ac8070838b577439133613a8cd388036c740d6d9069\" successfully" Feb 13 21:06:48.006369 containerd[1793]: time="2025-02-13T21:06:48.006341573Z" level=info msg="StopPodSandbox for \"d8a3a4b346869a0c10429ac8070838b577439133613a8cd388036c740d6d9069\" returns successfully" Feb 13 21:06:48.006510 containerd[1793]: time="2025-02-13T21:06:48.006500661Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-pfzwq,Uid:322cd9eb-1852-4c5c-aae9-cdfcee960ce3,Namespace:kube-system,Attempt:3,}" Feb 13 21:06:48.007533 systemd[1]: run-netns-cni\x2d18b19bda\x2df73d\x2dadc6\x2d1873\x2d65d515fdec16.mount: Deactivated successfully. Feb 13 21:06:48.007604 systemd[1]: run-netns-cni\x2d0ff9b556\x2d49de\x2da9e6\x2dc453\x2d7d9b14883f11.mount: Deactivated successfully. Feb 13 21:06:48.007698 systemd[1]: run-netns-cni\x2d01d4e899\x2d0df3\x2dd14e\x2d51c8\x2dddec37b2350e.mount: Deactivated successfully. Feb 13 21:06:48.007750 systemd[1]: run-netns-cni\x2db6c1306c\x2d988f\x2da6b1\x2d1ac7\x2d1b298167f720.mount: Deactivated successfully. Feb 13 21:06:48.010433 systemd[1]: run-netns-cni\x2d48b02de8\x2de131\x2dd7f3\x2de589\x2db0b4c2b92a7f.mount: Deactivated successfully. 
Feb 13 21:06:48.047389 containerd[1793]: time="2025-02-13T21:06:48.047362158Z" level=error msg="Failed to destroy network for sandbox \"016f2909cab214152679a0158a239350ed8254b0aa06124f243e8bc5668cf1a1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:06:48.047475 containerd[1793]: time="2025-02-13T21:06:48.047362657Z" level=error msg="Failed to destroy network for sandbox \"8796e4215c03d4346b528f4c0fa80de9271293edb2beaab0ba58791569cb1af7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:06:48.047617 containerd[1793]: time="2025-02-13T21:06:48.047599445Z" level=error msg="encountered an error cleaning up failed sandbox \"016f2909cab214152679a0158a239350ed8254b0aa06124f243e8bc5668cf1a1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:06:48.047666 containerd[1793]: time="2025-02-13T21:06:48.047652406Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-wr725,Uid:a02339f1-24d3-4002-b90e-3dd190621c61,Namespace:kube-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"016f2909cab214152679a0158a239350ed8254b0aa06124f243e8bc5668cf1a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:06:48.047726 containerd[1793]: time="2025-02-13T21:06:48.047666180Z" level=error msg="encountered an error cleaning up failed sandbox \"8796e4215c03d4346b528f4c0fa80de9271293edb2beaab0ba58791569cb1af7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:06:48.047759 containerd[1793]: time="2025-02-13T21:06:48.047742138Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-dfcf67d5f-gh88q,Uid:0b74a159-92ea-4d91-b1cc-e328f7cf493e,Namespace:calico-apiserver,Attempt:3,} failed, error" error="failed to setup network for sandbox \"8796e4215c03d4346b528f4c0fa80de9271293edb2beaab0ba58791569cb1af7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:06:48.047827 kubelet[3104]: E0213 21:06:48.047799 3104 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"016f2909cab214152679a0158a239350ed8254b0aa06124f243e8bc5668cf1a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:06:48.047872 kubelet[3104]: E0213 21:06:48.047819 3104 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8796e4215c03d4346b528f4c0fa80de9271293edb2beaab0ba58791569cb1af7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file 
or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:06:48.047872 kubelet[3104]: E0213 21:06:48.047850 3104 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8796e4215c03d4346b528f4c0fa80de9271293edb2beaab0ba58791569cb1af7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-dfcf67d5f-gh88q" Feb 13 21:06:48.047943 kubelet[3104]: E0213 21:06:48.047872 3104 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8796e4215c03d4346b528f4c0fa80de9271293edb2beaab0ba58791569cb1af7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-dfcf67d5f-gh88q" Feb 13 21:06:48.047943 kubelet[3104]: E0213 21:06:48.047909 3104 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-dfcf67d5f-gh88q_calico-apiserver(0b74a159-92ea-4d91-b1cc-e328f7cf493e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-dfcf67d5f-gh88q_calico-apiserver(0b74a159-92ea-4d91-b1cc-e328f7cf493e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8796e4215c03d4346b528f4c0fa80de9271293edb2beaab0ba58791569cb1af7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-dfcf67d5f-gh88q" podUID="0b74a159-92ea-4d91-b1cc-e328f7cf493e" Feb 13 21:06:48.047943 kubelet[3104]: E0213 21:06:48.047849 3104 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"016f2909cab214152679a0158a239350ed8254b0aa06124f243e8bc5668cf1a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-wr725" Feb 13 21:06:48.048050 kubelet[3104]: E0213 21:06:48.047942 3104 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"016f2909cab214152679a0158a239350ed8254b0aa06124f243e8bc5668cf1a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-wr725" Feb 13 21:06:48.048050 kubelet[3104]: E0213 21:06:48.047980 3104 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-wr725_kube-system(a02339f1-24d3-4002-b90e-3dd190621c61)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-wr725_kube-system(a02339f1-24d3-4002-b90e-3dd190621c61)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"016f2909cab214152679a0158a239350ed8254b0aa06124f243e8bc5668cf1a1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-wr725" podUID="a02339f1-24d3-4002-b90e-3dd190621c61" Feb 13 21:06:48.048702 containerd[1793]: time="2025-02-13T21:06:48.048684413Z" level=error msg="Failed to destroy network for sandbox \"df9c2dd26c1795b054bbe7a16242fde8712a142b2ac8cc97d56afbb8e3e869cd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:06:48.048900 containerd[1793]: time="2025-02-13T21:06:48.048883238Z" level=error msg="encountered an error cleaning up failed sandbox \"df9c2dd26c1795b054bbe7a16242fde8712a142b2ac8cc97d56afbb8e3e869cd\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:06:48.048939 containerd[1793]: time="2025-02-13T21:06:48.048925392Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-dfcf67d5f-gwq2t,Uid:37d2fc1a-4796-4ba1-89bd-a5b0ae45374e,Namespace:calico-apiserver,Attempt:3,} failed, error" error="failed to setup network for sandbox \"df9c2dd26c1795b054bbe7a16242fde8712a142b2ac8cc97d56afbb8e3e869cd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:06:48.049041 kubelet[3104]: E0213 21:06:48.049026 3104 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"df9c2dd26c1795b054bbe7a16242fde8712a142b2ac8cc97d56afbb8e3e869cd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:06:48.049070 kubelet[3104]: E0213 21:06:48.049051 3104 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"df9c2dd26c1795b054bbe7a16242fde8712a142b2ac8cc97d56afbb8e3e869cd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-dfcf67d5f-gwq2t" Feb 13 21:06:48.049070 kubelet[3104]: E0213 21:06:48.049065 3104 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"df9c2dd26c1795b054bbe7a16242fde8712a142b2ac8cc97d56afbb8e3e869cd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-dfcf67d5f-gwq2t" Feb 13 21:06:48.049107 kubelet[3104]: E0213 21:06:48.049089 3104 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-dfcf67d5f-gwq2t_calico-apiserver(37d2fc1a-4796-4ba1-89bd-a5b0ae45374e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-dfcf67d5f-gwq2t_calico-apiserver(37d2fc1a-4796-4ba1-89bd-a5b0ae45374e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"df9c2dd26c1795b054bbe7a16242fde8712a142b2ac8cc97d56afbb8e3e869cd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-dfcf67d5f-gwq2t" podUID="37d2fc1a-4796-4ba1-89bd-a5b0ae45374e" Feb 13 21:06:48.050535 containerd[1793]: time="2025-02-13T21:06:48.050518354Z" level=error msg="Failed to destroy network for sandbox \"8c7d15cdb7c0b8ba3564fbb9684d7b6608656288a9e470c2a4ccd3901de736b4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:06:48.050695 containerd[1793]: time="2025-02-13T21:06:48.050682523Z" level=error msg="encountered an error cleaning up failed sandbox \"8c7d15cdb7c0b8ba3564fbb9684d7b6608656288a9e470c2a4ccd3901de736b4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:06:48.050724 containerd[1793]: time="2025-02-13T21:06:48.050714310Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dv5v9,Uid:22f50a6b-4846-46c1-8c41-99e176056718,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"8c7d15cdb7c0b8ba3564fbb9684d7b6608656288a9e470c2a4ccd3901de736b4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:06:48.050775 containerd[1793]: time="2025-02-13T21:06:48.050760799Z" level=error msg="Failed to destroy network for sandbox \"3b4c1aa1158410586e55dcad205753294e18ec4c0995d0e460c83ccd690b03b0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:06:48.050845 kubelet[3104]: E0213 21:06:48.050830 3104 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8c7d15cdb7c0b8ba3564fbb9684d7b6608656288a9e470c2a4ccd3901de736b4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:06:48.050884 kubelet[3104]: E0213 21:06:48.050859 3104 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8c7d15cdb7c0b8ba3564fbb9684d7b6608656288a9e470c2a4ccd3901de736b4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-dv5v9" Feb 13 21:06:48.050884 kubelet[3104]: E0213 21:06:48.050877 3104 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8c7d15cdb7c0b8ba3564fbb9684d7b6608656288a9e470c2a4ccd3901de736b4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-dv5v9" Feb 
13 21:06:48.050937 kubelet[3104]: E0213 21:06:48.050907 3104 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-dv5v9_calico-system(22f50a6b-4846-46c1-8c41-99e176056718)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-dv5v9_calico-system(22f50a6b-4846-46c1-8c41-99e176056718)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8c7d15cdb7c0b8ba3564fbb9684d7b6608656288a9e470c2a4ccd3901de736b4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-dv5v9" podUID="22f50a6b-4846-46c1-8c41-99e176056718" Feb 13 21:06:48.050980 containerd[1793]: time="2025-02-13T21:06:48.050896913Z" level=error msg="encountered an error cleaning up failed sandbox \"3b4c1aa1158410586e55dcad205753294e18ec4c0995d0e460c83ccd690b03b0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:06:48.050980 containerd[1793]: time="2025-02-13T21:06:48.050915198Z" level=error msg="Failed to destroy network for sandbox \"2fd9e12b34be1cf8d642dfe4f2e60efafadedcf4354053583974145aabb5db8f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:06:48.050980 containerd[1793]: time="2025-02-13T21:06:48.050921997Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-pfzwq,Uid:322cd9eb-1852-4c5c-aae9-cdfcee960ce3,Namespace:kube-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"3b4c1aa1158410586e55dcad205753294e18ec4c0995d0e460c83ccd690b03b0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:06:48.051055 kubelet[3104]: E0213 21:06:48.051043 3104 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3b4c1aa1158410586e55dcad205753294e18ec4c0995d0e460c83ccd690b03b0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:06:48.051078 kubelet[3104]: E0213 21:06:48.051061 3104 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3b4c1aa1158410586e55dcad205753294e18ec4c0995d0e460c83ccd690b03b0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-pfzwq" Feb 13 21:06:48.051078 kubelet[3104]: E0213 21:06:48.051072 3104 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3b4c1aa1158410586e55dcad205753294e18ec4c0995d0e460c83ccd690b03b0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-668d6bf9bc-pfzwq" Feb 13 21:06:48.051119 containerd[1793]: time="2025-02-13T21:06:48.051051204Z" level=error msg="encountered an error cleaning up failed sandbox \"2fd9e12b34be1cf8d642dfe4f2e60efafadedcf4354053583974145aabb5db8f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:06:48.051119 containerd[1793]: time="2025-02-13T21:06:48.051074959Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-76848bdf96-56fb4,Uid:4552908e-a949-4d18-be80-ec54e1c8e06d,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"2fd9e12b34be1cf8d642dfe4f2e60efafadedcf4354053583974145aabb5db8f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:06:48.051172 kubelet[3104]: E0213 21:06:48.051088 3104 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-pfzwq_kube-system(322cd9eb-1852-4c5c-aae9-cdfcee960ce3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-pfzwq_kube-system(322cd9eb-1852-4c5c-aae9-cdfcee960ce3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3b4c1aa1158410586e55dcad205753294e18ec4c0995d0e460c83ccd690b03b0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-pfzwq" podUID="322cd9eb-1852-4c5c-aae9-cdfcee960ce3" Feb 13 21:06:48.051172 kubelet[3104]: E0213 21:06:48.051134 3104 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2fd9e12b34be1cf8d642dfe4f2e60efafadedcf4354053583974145aabb5db8f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:06:48.051172 kubelet[3104]: E0213 21:06:48.051155 3104 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2fd9e12b34be1cf8d642dfe4f2e60efafadedcf4354053583974145aabb5db8f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-76848bdf96-56fb4" Feb 13 21:06:48.051237 kubelet[3104]: E0213 21:06:48.051166 3104 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2fd9e12b34be1cf8d642dfe4f2e60efafadedcf4354053583974145aabb5db8f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-76848bdf96-56fb4" Feb 13 21:06:48.051237 kubelet[3104]: E0213 21:06:48.051183 3104 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"calico-kube-controllers-76848bdf96-56fb4_calico-system(4552908e-a949-4d18-be80-ec54e1c8e06d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-76848bdf96-56fb4_calico-system(4552908e-a949-4d18-be80-ec54e1c8e06d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2fd9e12b34be1cf8d642dfe4f2e60efafadedcf4354053583974145aabb5db8f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-76848bdf96-56fb4" podUID="4552908e-a949-4d18-be80-ec54e1c8e06d" Feb 13 21:06:48.588543 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8796e4215c03d4346b528f4c0fa80de9271293edb2beaab0ba58791569cb1af7-shm.mount: Deactivated successfully. Feb 13 21:06:49.009446 kubelet[3104]: I0213 21:06:49.009332 3104 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8796e4215c03d4346b528f4c0fa80de9271293edb2beaab0ba58791569cb1af7" Feb 13 21:06:49.009809 containerd[1793]: time="2025-02-13T21:06:49.009758650Z" level=info msg="StopPodSandbox for \"8796e4215c03d4346b528f4c0fa80de9271293edb2beaab0ba58791569cb1af7\"" Feb 13 21:06:49.010072 containerd[1793]: time="2025-02-13T21:06:49.010007001Z" level=info msg="Ensure that sandbox 8796e4215c03d4346b528f4c0fa80de9271293edb2beaab0ba58791569cb1af7 in task-service has been cleanup successfully" Feb 13 21:06:49.010254 containerd[1793]: time="2025-02-13T21:06:49.010206515Z" level=info msg="TearDown network for sandbox \"8796e4215c03d4346b528f4c0fa80de9271293edb2beaab0ba58791569cb1af7\" successfully" Feb 13 21:06:49.010254 containerd[1793]: time="2025-02-13T21:06:49.010223293Z" level=info msg="StopPodSandbox for \"8796e4215c03d4346b528f4c0fa80de9271293edb2beaab0ba58791569cb1af7\" returns successfully" Feb 13 21:06:49.010479 containerd[1793]: time="2025-02-13T21:06:49.010439593Z" level=info msg="StopPodSandbox for \"bc71ef1d5f9c21449e07e38c85ffe1beac4f9a807ff5e0b31763576b20c05628\"" Feb 13 21:06:49.010541 containerd[1793]: time="2025-02-13T21:06:49.010527489Z" level=info msg="TearDown network for sandbox \"bc71ef1d5f9c21449e07e38c85ffe1beac4f9a807ff5e0b31763576b20c05628\" successfully" Feb 13 21:06:49.010577 containerd[1793]: time="2025-02-13T21:06:49.010545824Z" level=info msg="StopPodSandbox for \"bc71ef1d5f9c21449e07e38c85ffe1beac4f9a807ff5e0b31763576b20c05628\" returns successfully" Feb 13 21:06:49.010864 containerd[1793]: time="2025-02-13T21:06:49.010811359Z" level=info msg="StopPodSandbox for \"d743908868c106c2ba60beb013f0f1908f95ec56678e1aea1e2666786a2cc1a7\"" Feb 13 21:06:49.010947 containerd[1793]: time="2025-02-13T21:06:49.010895530Z" level=info msg="TearDown network for sandbox \"d743908868c106c2ba60beb013f0f1908f95ec56678e1aea1e2666786a2cc1a7\" successfully" Feb 13 21:06:49.010992 containerd[1793]: time="2025-02-13T21:06:49.010945072Z" level=info msg="StopPodSandbox for \"d743908868c106c2ba60beb013f0f1908f95ec56678e1aea1e2666786a2cc1a7\" returns successfully" Feb 13 21:06:49.011075 kubelet[3104]: I0213 21:06:49.011027 3104 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="df9c2dd26c1795b054bbe7a16242fde8712a142b2ac8cc97d56afbb8e3e869cd" Feb 13 21:06:49.011235 containerd[1793]: time="2025-02-13T21:06:49.011194768Z" level=info msg="StopPodSandbox for \"806d7e387453f8d8aa5dc4336096cd465cb7990f839cd53aea28bc6a97bca77d\"" Feb 13 21:06:49.011284 containerd[1793]: 
time="2025-02-13T21:06:49.011264944Z" level=info msg="TearDown network for sandbox \"806d7e387453f8d8aa5dc4336096cd465cb7990f839cd53aea28bc6a97bca77d\" successfully" Feb 13 21:06:49.011284 containerd[1793]: time="2025-02-13T21:06:49.011277269Z" level=info msg="StopPodSandbox for \"806d7e387453f8d8aa5dc4336096cd465cb7990f839cd53aea28bc6a97bca77d\" returns successfully" Feb 13 21:06:49.011387 containerd[1793]: time="2025-02-13T21:06:49.011365548Z" level=info msg="StopPodSandbox for \"df9c2dd26c1795b054bbe7a16242fde8712a142b2ac8cc97d56afbb8e3e869cd\"" Feb 13 21:06:49.011600 containerd[1793]: time="2025-02-13T21:06:49.011577940Z" level=info msg="Ensure that sandbox df9c2dd26c1795b054bbe7a16242fde8712a142b2ac8cc97d56afbb8e3e869cd in task-service has been cleanup successfully" Feb 13 21:06:49.011734 containerd[1793]: time="2025-02-13T21:06:49.011709064Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-dfcf67d5f-gh88q,Uid:0b74a159-92ea-4d91-b1cc-e328f7cf493e,Namespace:calico-apiserver,Attempt:4,}" Feb 13 21:06:49.011786 containerd[1793]: time="2025-02-13T21:06:49.011769613Z" level=info msg="TearDown network for sandbox \"df9c2dd26c1795b054bbe7a16242fde8712a142b2ac8cc97d56afbb8e3e869cd\" successfully" Feb 13 21:06:49.011821 containerd[1793]: time="2025-02-13T21:06:49.011790296Z" level=info msg="StopPodSandbox for \"df9c2dd26c1795b054bbe7a16242fde8712a142b2ac8cc97d56afbb8e3e869cd\" returns successfully" Feb 13 21:06:49.012039 containerd[1793]: time="2025-02-13T21:06:49.011987847Z" level=info msg="StopPodSandbox for \"28598cb5bb4f19ef2be3d2eeb2993826fb614f87c197bbbec9e1ef75161630b9\"" Feb 13 21:06:49.012107 containerd[1793]: time="2025-02-13T21:06:49.012087181Z" level=info msg="TearDown network for sandbox \"28598cb5bb4f19ef2be3d2eeb2993826fb614f87c197bbbec9e1ef75161630b9\" successfully" Feb 13 21:06:49.012145 containerd[1793]: time="2025-02-13T21:06:49.012106576Z" level=info msg="StopPodSandbox for \"28598cb5bb4f19ef2be3d2eeb2993826fb614f87c197bbbec9e1ef75161630b9\" returns successfully" Feb 13 21:06:49.012351 containerd[1793]: time="2025-02-13T21:06:49.012314890Z" level=info msg="StopPodSandbox for \"492d54f71e9ddb34de33c29601b5531497a6eb52fef88332daf738ef9a4555fd\"" Feb 13 21:06:49.012371 containerd[1793]: time="2025-02-13T21:06:49.012357958Z" level=info msg="TearDown network for sandbox \"492d54f71e9ddb34de33c29601b5531497a6eb52fef88332daf738ef9a4555fd\" successfully" Feb 13 21:06:49.012371 containerd[1793]: time="2025-02-13T21:06:49.012364722Z" level=info msg="StopPodSandbox for \"492d54f71e9ddb34de33c29601b5531497a6eb52fef88332daf738ef9a4555fd\" returns successfully" Feb 13 21:06:49.012407 kubelet[3104]: I0213 21:06:49.012388 3104 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="016f2909cab214152679a0158a239350ed8254b0aa06124f243e8bc5668cf1a1" Feb 13 21:06:49.012500 containerd[1793]: time="2025-02-13T21:06:49.012490912Z" level=info msg="StopPodSandbox for \"e9526415ddf2b5def2736c83dbde440214ca6bbca341b6a7e7eede781d227057\"" Feb 13 21:06:49.012532 containerd[1793]: time="2025-02-13T21:06:49.012526039Z" level=info msg="TearDown network for sandbox \"e9526415ddf2b5def2736c83dbde440214ca6bbca341b6a7e7eede781d227057\" successfully" Feb 13 21:06:49.012549 containerd[1793]: time="2025-02-13T21:06:49.012531846Z" level=info msg="StopPodSandbox for \"e9526415ddf2b5def2736c83dbde440214ca6bbca341b6a7e7eede781d227057\" returns successfully" Feb 13 21:06:49.012571 containerd[1793]: time="2025-02-13T21:06:49.012560963Z" level=info 
msg="StopPodSandbox for \"016f2909cab214152679a0158a239350ed8254b0aa06124f243e8bc5668cf1a1\"" Feb 13 21:06:49.012870 containerd[1793]: time="2025-02-13T21:06:49.012856040Z" level=info msg="Ensure that sandbox 016f2909cab214152679a0158a239350ed8254b0aa06124f243e8bc5668cf1a1 in task-service has been cleanup successfully" Feb 13 21:06:49.012993 containerd[1793]: time="2025-02-13T21:06:49.012866940Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-dfcf67d5f-gwq2t,Uid:37d2fc1a-4796-4ba1-89bd-a5b0ae45374e,Namespace:calico-apiserver,Attempt:4,}" Feb 13 21:06:49.013038 containerd[1793]: time="2025-02-13T21:06:49.012991402Z" level=info msg="TearDown network for sandbox \"016f2909cab214152679a0158a239350ed8254b0aa06124f243e8bc5668cf1a1\" successfully" Feb 13 21:06:49.013038 containerd[1793]: time="2025-02-13T21:06:49.013003205Z" level=info msg="StopPodSandbox for \"016f2909cab214152679a0158a239350ed8254b0aa06124f243e8bc5668cf1a1\" returns successfully" Feb 13 21:06:49.013244 systemd[1]: run-netns-cni\x2dc69ee781\x2d23b1\x2d25be\x2da72b\x2de780fd3010a0.mount: Deactivated successfully. Feb 13 21:06:49.013965 containerd[1793]: time="2025-02-13T21:06:49.013952070Z" level=info msg="StopPodSandbox for \"0b3206681777142bb4f39eddf842578db1abd2a1ba3d164b0fef9121f983646c\"" Feb 13 21:06:49.014031 containerd[1793]: time="2025-02-13T21:06:49.014006736Z" level=info msg="TearDown network for sandbox \"0b3206681777142bb4f39eddf842578db1abd2a1ba3d164b0fef9121f983646c\" successfully" Feb 13 21:06:49.014064 containerd[1793]: time="2025-02-13T21:06:49.014031459Z" level=info msg="StopPodSandbox for \"0b3206681777142bb4f39eddf842578db1abd2a1ba3d164b0fef9121f983646c\" returns successfully" Feb 13 21:06:49.014146 containerd[1793]: time="2025-02-13T21:06:49.014137382Z" level=info msg="StopPodSandbox for \"7501b1f0bebccf8c96d599d12b518a8ac89e39145380dcd10318dd4669755ac7\"" Feb 13 21:06:49.014179 containerd[1793]: time="2025-02-13T21:06:49.014172756Z" level=info msg="TearDown network for sandbox \"7501b1f0bebccf8c96d599d12b518a8ac89e39145380dcd10318dd4669755ac7\" successfully" Feb 13 21:06:49.014179 containerd[1793]: time="2025-02-13T21:06:49.014178581Z" level=info msg="StopPodSandbox for \"7501b1f0bebccf8c96d599d12b518a8ac89e39145380dcd10318dd4669755ac7\" returns successfully" Feb 13 21:06:49.014291 containerd[1793]: time="2025-02-13T21:06:49.014278529Z" level=info msg="StopPodSandbox for \"1a9edd12dcaee4b4406dd52975b192e9f91b2c28c5929cf19d4f43b12760a3ad\"" Feb 13 21:06:49.014360 containerd[1793]: time="2025-02-13T21:06:49.014330575Z" level=info msg="TearDown network for sandbox \"1a9edd12dcaee4b4406dd52975b192e9f91b2c28c5929cf19d4f43b12760a3ad\" successfully" Feb 13 21:06:49.014395 containerd[1793]: time="2025-02-13T21:06:49.014359329Z" level=info msg="StopPodSandbox for \"1a9edd12dcaee4b4406dd52975b192e9f91b2c28c5929cf19d4f43b12760a3ad\" returns successfully" Feb 13 21:06:49.014423 kubelet[3104]: I0213 21:06:49.014370 3104 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3b4c1aa1158410586e55dcad205753294e18ec4c0995d0e460c83ccd690b03b0" Feb 13 21:06:49.014602 containerd[1793]: time="2025-02-13T21:06:49.014591557Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-wr725,Uid:a02339f1-24d3-4002-b90e-3dd190621c61,Namespace:kube-system,Attempt:4,}" Feb 13 21:06:49.014674 containerd[1793]: time="2025-02-13T21:06:49.014663024Z" level=info msg="StopPodSandbox for \"3b4c1aa1158410586e55dcad205753294e18ec4c0995d0e460c83ccd690b03b0\"" Feb 13 
21:06:49.014789 containerd[1793]: time="2025-02-13T21:06:49.014769632Z" level=info msg="Ensure that sandbox 3b4c1aa1158410586e55dcad205753294e18ec4c0995d0e460c83ccd690b03b0 in task-service has been cleanup successfully" Feb 13 21:06:49.014994 containerd[1793]: time="2025-02-13T21:06:49.014870650Z" level=info msg="TearDown network for sandbox \"3b4c1aa1158410586e55dcad205753294e18ec4c0995d0e460c83ccd690b03b0\" successfully" Feb 13 21:06:49.014994 containerd[1793]: time="2025-02-13T21:06:49.014881101Z" level=info msg="StopPodSandbox for \"3b4c1aa1158410586e55dcad205753294e18ec4c0995d0e460c83ccd690b03b0\" returns successfully" Feb 13 21:06:49.014994 containerd[1793]: time="2025-02-13T21:06:49.014980944Z" level=info msg="StopPodSandbox for \"d709f55a62a0ee266e179ae3b4078db4e87cc555d02412f211cd24e553d91a3d\"" Feb 13 21:06:49.015056 containerd[1793]: time="2025-02-13T21:06:49.015017381Z" level=info msg="TearDown network for sandbox \"d709f55a62a0ee266e179ae3b4078db4e87cc555d02412f211cd24e553d91a3d\" successfully" Feb 13 21:06:49.015056 containerd[1793]: time="2025-02-13T21:06:49.015039771Z" level=info msg="StopPodSandbox for \"d709f55a62a0ee266e179ae3b4078db4e87cc555d02412f211cd24e553d91a3d\" returns successfully" Feb 13 21:06:49.015145 containerd[1793]: time="2025-02-13T21:06:49.015137206Z" level=info msg="StopPodSandbox for \"e2d2a63cc355082e3d667838ab36d2b64fe4355a317ff4fee15a852a46dfa7fd\"" Feb 13 21:06:49.015181 containerd[1793]: time="2025-02-13T21:06:49.015174591Z" level=info msg="TearDown network for sandbox \"e2d2a63cc355082e3d667838ab36d2b64fe4355a317ff4fee15a852a46dfa7fd\" successfully" Feb 13 21:06:49.015203 containerd[1793]: time="2025-02-13T21:06:49.015181374Z" level=info msg="StopPodSandbox for \"e2d2a63cc355082e3d667838ab36d2b64fe4355a317ff4fee15a852a46dfa7fd\" returns successfully" Feb 13 21:06:49.015262 kubelet[3104]: I0213 21:06:49.015254 3104 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8c7d15cdb7c0b8ba3564fbb9684d7b6608656288a9e470c2a4ccd3901de736b4" Feb 13 21:06:49.015314 containerd[1793]: time="2025-02-13T21:06:49.015302270Z" level=info msg="StopPodSandbox for \"d8a3a4b346869a0c10429ac8070838b577439133613a8cd388036c740d6d9069\"" Feb 13 21:06:49.015365 containerd[1793]: time="2025-02-13T21:06:49.015354920Z" level=info msg="TearDown network for sandbox \"d8a3a4b346869a0c10429ac8070838b577439133613a8cd388036c740d6d9069\" successfully" Feb 13 21:06:49.015391 containerd[1793]: time="2025-02-13T21:06:49.015364571Z" level=info msg="StopPodSandbox for \"d8a3a4b346869a0c10429ac8070838b577439133613a8cd388036c740d6d9069\" returns successfully" Feb 13 21:06:49.015472 containerd[1793]: time="2025-02-13T21:06:49.015459699Z" level=info msg="StopPodSandbox for \"8c7d15cdb7c0b8ba3564fbb9684d7b6608656288a9e470c2a4ccd3901de736b4\"" Feb 13 21:06:49.015515 containerd[1793]: time="2025-02-13T21:06:49.015506352Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-pfzwq,Uid:322cd9eb-1852-4c5c-aae9-cdfcee960ce3,Namespace:kube-system,Attempt:4,}" Feb 13 21:06:49.015563 containerd[1793]: time="2025-02-13T21:06:49.015554782Z" level=info msg="Ensure that sandbox 8c7d15cdb7c0b8ba3564fbb9684d7b6608656288a9e470c2a4ccd3901de736b4 in task-service has been cleanup successfully" Feb 13 21:06:49.015641 containerd[1793]: time="2025-02-13T21:06:49.015632646Z" level=info msg="TearDown network for sandbox \"8c7d15cdb7c0b8ba3564fbb9684d7b6608656288a9e470c2a4ccd3901de736b4\" successfully" Feb 13 21:06:49.015667 containerd[1793]: 
time="2025-02-13T21:06:49.015641598Z" level=info msg="StopPodSandbox for \"8c7d15cdb7c0b8ba3564fbb9684d7b6608656288a9e470c2a4ccd3901de736b4\" returns successfully" Feb 13 21:06:49.015744 systemd[1]: run-netns-cni\x2d046f5e7d\x2d8354\x2d47f1\x2d395a\x2da11d5f2a1b30.mount: Deactivated successfully. Feb 13 21:06:49.015803 containerd[1793]: time="2025-02-13T21:06:49.015746287Z" level=info msg="StopPodSandbox for \"7d0ffa7070cf221d6b53fde8753c80adaebce4b303256fec539795ff6b8d0673\"" Feb 13 21:06:49.015827 containerd[1793]: time="2025-02-13T21:06:49.015789881Z" level=info msg="TearDown network for sandbox \"7d0ffa7070cf221d6b53fde8753c80adaebce4b303256fec539795ff6b8d0673\" successfully" Feb 13 21:06:49.015827 containerd[1793]: time="2025-02-13T21:06:49.015810120Z" level=info msg="StopPodSandbox for \"7d0ffa7070cf221d6b53fde8753c80adaebce4b303256fec539795ff6b8d0673\" returns successfully" Feb 13 21:06:49.015815 systemd[1]: run-netns-cni\x2dc02dc2bf\x2de3a3\x2dbd5c\x2d3904\x2d715517723bdd.mount: Deactivated successfully. Feb 13 21:06:49.015922 containerd[1793]: time="2025-02-13T21:06:49.015910820Z" level=info msg="StopPodSandbox for \"7d5ec4c5cb492ab7ba5ec64c093f408197ef657dfbb781c600cfe63e505b2831\"" Feb 13 21:06:49.015981 containerd[1793]: time="2025-02-13T21:06:49.015954729Z" level=info msg="TearDown network for sandbox \"7d5ec4c5cb492ab7ba5ec64c093f408197ef657dfbb781c600cfe63e505b2831\" successfully" Feb 13 21:06:49.016009 containerd[1793]: time="2025-02-13T21:06:49.015981742Z" level=info msg="StopPodSandbox for \"7d5ec4c5cb492ab7ba5ec64c093f408197ef657dfbb781c600cfe63e505b2831\" returns successfully" Feb 13 21:06:49.016063 kubelet[3104]: I0213 21:06:49.016054 3104 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2fd9e12b34be1cf8d642dfe4f2e60efafadedcf4354053583974145aabb5db8f" Feb 13 21:06:49.016106 containerd[1793]: time="2025-02-13T21:06:49.016089176Z" level=info msg="StopPodSandbox for \"607ba46d10ad96974b57e545cd12cdc1dd7b6138131f3ab6e1bc63407d40518f\"" Feb 13 21:06:49.016146 containerd[1793]: time="2025-02-13T21:06:49.016137904Z" level=info msg="TearDown network for sandbox \"607ba46d10ad96974b57e545cd12cdc1dd7b6138131f3ab6e1bc63407d40518f\" successfully" Feb 13 21:06:49.016165 containerd[1793]: time="2025-02-13T21:06:49.016144943Z" level=info msg="StopPodSandbox for \"607ba46d10ad96974b57e545cd12cdc1dd7b6138131f3ab6e1bc63407d40518f\" returns successfully" Feb 13 21:06:49.016264 containerd[1793]: time="2025-02-13T21:06:49.016252827Z" level=info msg="StopPodSandbox for \"2fd9e12b34be1cf8d642dfe4f2e60efafadedcf4354053583974145aabb5db8f\"" Feb 13 21:06:49.016332 containerd[1793]: time="2025-02-13T21:06:49.016322231Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dv5v9,Uid:22f50a6b-4846-46c1-8c41-99e176056718,Namespace:calico-system,Attempt:4,}" Feb 13 21:06:49.016381 containerd[1793]: time="2025-02-13T21:06:49.016370974Z" level=info msg="Ensure that sandbox 2fd9e12b34be1cf8d642dfe4f2e60efafadedcf4354053583974145aabb5db8f in task-service has been cleanup successfully" Feb 13 21:06:49.016465 containerd[1793]: time="2025-02-13T21:06:49.016455328Z" level=info msg="TearDown network for sandbox \"2fd9e12b34be1cf8d642dfe4f2e60efafadedcf4354053583974145aabb5db8f\" successfully" Feb 13 21:06:49.016496 containerd[1793]: time="2025-02-13T21:06:49.016464300Z" level=info msg="StopPodSandbox for \"2fd9e12b34be1cf8d642dfe4f2e60efafadedcf4354053583974145aabb5db8f\" returns successfully" Feb 13 21:06:49.016594 containerd[1793]: 
time="2025-02-13T21:06:49.016584474Z" level=info msg="StopPodSandbox for \"f39bdc1c54e67583094579cb0143089f0fee0e796689505e74340aa0877e958a\"" Feb 13 21:06:49.016633 containerd[1793]: time="2025-02-13T21:06:49.016620826Z" level=info msg="TearDown network for sandbox \"f39bdc1c54e67583094579cb0143089f0fee0e796689505e74340aa0877e958a\" successfully" Feb 13 21:06:49.016662 containerd[1793]: time="2025-02-13T21:06:49.016632800Z" level=info msg="StopPodSandbox for \"f39bdc1c54e67583094579cb0143089f0fee0e796689505e74340aa0877e958a\" returns successfully" Feb 13 21:06:49.016733 containerd[1793]: time="2025-02-13T21:06:49.016723616Z" level=info msg="StopPodSandbox for \"466ce85604a81ffcf0adb514f74df036223f30285756dd3e79f40e1ea6c59de1\"" Feb 13 21:06:49.016768 containerd[1793]: time="2025-02-13T21:06:49.016761594Z" level=info msg="TearDown network for sandbox \"466ce85604a81ffcf0adb514f74df036223f30285756dd3e79f40e1ea6c59de1\" successfully" Feb 13 21:06:49.016793 containerd[1793]: time="2025-02-13T21:06:49.016768215Z" level=info msg="StopPodSandbox for \"466ce85604a81ffcf0adb514f74df036223f30285756dd3e79f40e1ea6c59de1\" returns successfully" Feb 13 21:06:49.016873 containerd[1793]: time="2025-02-13T21:06:49.016864500Z" level=info msg="StopPodSandbox for \"abca32da56d9f98e0f2a3235028b1bc2f3f48389f22bae6604fcddef0f674739\"" Feb 13 21:06:49.016912 containerd[1793]: time="2025-02-13T21:06:49.016905380Z" level=info msg="TearDown network for sandbox \"abca32da56d9f98e0f2a3235028b1bc2f3f48389f22bae6604fcddef0f674739\" successfully" Feb 13 21:06:49.016940 containerd[1793]: time="2025-02-13T21:06:49.016912424Z" level=info msg="StopPodSandbox for \"abca32da56d9f98e0f2a3235028b1bc2f3f48389f22bae6604fcddef0f674739\" returns successfully" Feb 13 21:06:49.017089 containerd[1793]: time="2025-02-13T21:06:49.017078400Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-76848bdf96-56fb4,Uid:4552908e-a949-4d18-be80-ec54e1c8e06d,Namespace:calico-system,Attempt:4,}" Feb 13 21:06:49.018567 systemd[1]: run-netns-cni\x2da9f83e37\x2d620d\x2dc11d\x2da22a\x2d811808723b2e.mount: Deactivated successfully. Feb 13 21:06:49.018634 systemd[1]: run-netns-cni\x2dff3cbb5e\x2dafbb\x2dfddc\x2d05b2\x2dad7b99bbc0d0.mount: Deactivated successfully. Feb 13 21:06:49.018763 systemd[1]: run-netns-cni\x2d0d35ed97\x2d40ef\x2d990a\x2da02f\x2d5f70ea64d558.mount: Deactivated successfully. 
Feb 13 21:06:49.047204 containerd[1793]: time="2025-02-13T21:06:49.047177210Z" level=error msg="Failed to destroy network for sandbox \"a6223b2daac34ac9233c865bdf29f5ecf313eee0267c57a63761c44e27592fe3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:06:49.047349 containerd[1793]: time="2025-02-13T21:06:49.047334840Z" level=error msg="encountered an error cleaning up failed sandbox \"a6223b2daac34ac9233c865bdf29f5ecf313eee0267c57a63761c44e27592fe3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:06:49.047389 containerd[1793]: time="2025-02-13T21:06:49.047368520Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-dfcf67d5f-gh88q,Uid:0b74a159-92ea-4d91-b1cc-e328f7cf493e,Namespace:calico-apiserver,Attempt:4,} failed, error" error="failed to setup network for sandbox \"a6223b2daac34ac9233c865bdf29f5ecf313eee0267c57a63761c44e27592fe3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:06:49.047509 kubelet[3104]: E0213 21:06:49.047486 3104 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a6223b2daac34ac9233c865bdf29f5ecf313eee0267c57a63761c44e27592fe3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:06:49.047552 kubelet[3104]: E0213 21:06:49.047527 3104 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a6223b2daac34ac9233c865bdf29f5ecf313eee0267c57a63761c44e27592fe3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-dfcf67d5f-gh88q" Feb 13 21:06:49.047552 kubelet[3104]: E0213 21:06:49.047545 3104 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a6223b2daac34ac9233c865bdf29f5ecf313eee0267c57a63761c44e27592fe3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-dfcf67d5f-gh88q" Feb 13 21:06:49.047631 kubelet[3104]: E0213 21:06:49.047569 3104 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-dfcf67d5f-gh88q_calico-apiserver(0b74a159-92ea-4d91-b1cc-e328f7cf493e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-dfcf67d5f-gh88q_calico-apiserver(0b74a159-92ea-4d91-b1cc-e328f7cf493e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a6223b2daac34ac9233c865bdf29f5ecf313eee0267c57a63761c44e27592fe3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-dfcf67d5f-gh88q" podUID="0b74a159-92ea-4d91-b1cc-e328f7cf493e" Feb 13 21:06:49.071751 containerd[1793]: time="2025-02-13T21:06:49.071704764Z" level=error msg="Failed to destroy network for sandbox \"9903ccace9462852d1b468d91593d78c0bfdcd616759437c7c9178ce919b932e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:06:49.071947 containerd[1793]: time="2025-02-13T21:06:49.071932379Z" level=error msg="encountered an error cleaning up failed sandbox \"9903ccace9462852d1b468d91593d78c0bfdcd616759437c7c9178ce919b932e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:06:49.071984 containerd[1793]: time="2025-02-13T21:06:49.071971555Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-wr725,Uid:a02339f1-24d3-4002-b90e-3dd190621c61,Namespace:kube-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"9903ccace9462852d1b468d91593d78c0bfdcd616759437c7c9178ce919b932e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:06:49.072122 kubelet[3104]: E0213 21:06:49.072098 3104 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9903ccace9462852d1b468d91593d78c0bfdcd616759437c7c9178ce919b932e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:06:49.072152 kubelet[3104]: E0213 21:06:49.072142 3104 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9903ccace9462852d1b468d91593d78c0bfdcd616759437c7c9178ce919b932e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-wr725" Feb 13 21:06:49.072170 kubelet[3104]: E0213 21:06:49.072158 3104 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9903ccace9462852d1b468d91593d78c0bfdcd616759437c7c9178ce919b932e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-wr725" Feb 13 21:06:49.072200 kubelet[3104]: E0213 21:06:49.072187 3104 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-wr725_kube-system(a02339f1-24d3-4002-b90e-3dd190621c61)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-wr725_kube-system(a02339f1-24d3-4002-b90e-3dd190621c61)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9903ccace9462852d1b468d91593d78c0bfdcd616759437c7c9178ce919b932e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-wr725" podUID="a02339f1-24d3-4002-b90e-3dd190621c61" Feb 13 21:06:49.074243 containerd[1793]: time="2025-02-13T21:06:49.074215818Z" level=error msg="Failed to destroy network for sandbox \"d089ce282490e797583b2adbc06fdf6162be40bd25ae6dda0763da66c066d46a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:06:49.074460 containerd[1793]: time="2025-02-13T21:06:49.074445218Z" level=error msg="encountered an error cleaning up failed sandbox \"d089ce282490e797583b2adbc06fdf6162be40bd25ae6dda0763da66c066d46a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:06:49.074503 containerd[1793]: time="2025-02-13T21:06:49.074489133Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-dfcf67d5f-gwq2t,Uid:37d2fc1a-4796-4ba1-89bd-a5b0ae45374e,Namespace:calico-apiserver,Attempt:4,} failed, error" error="failed to setup network for sandbox \"d089ce282490e797583b2adbc06fdf6162be40bd25ae6dda0763da66c066d46a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:06:49.074614 kubelet[3104]: E0213 21:06:49.074598 3104 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d089ce282490e797583b2adbc06fdf6162be40bd25ae6dda0763da66c066d46a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:06:49.074665 kubelet[3104]: E0213 21:06:49.074633 3104 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d089ce282490e797583b2adbc06fdf6162be40bd25ae6dda0763da66c066d46a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-dfcf67d5f-gwq2t" Feb 13 21:06:49.074665 kubelet[3104]: E0213 21:06:49.074646 3104 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d089ce282490e797583b2adbc06fdf6162be40bd25ae6dda0763da66c066d46a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-dfcf67d5f-gwq2t" Feb 13 21:06:49.074702 kubelet[3104]: E0213 21:06:49.074670 3104 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-dfcf67d5f-gwq2t_calico-apiserver(37d2fc1a-4796-4ba1-89bd-a5b0ae45374e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-dfcf67d5f-gwq2t_calico-apiserver(37d2fc1a-4796-4ba1-89bd-a5b0ae45374e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"d089ce282490e797583b2adbc06fdf6162be40bd25ae6dda0763da66c066d46a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-dfcf67d5f-gwq2t" podUID="37d2fc1a-4796-4ba1-89bd-a5b0ae45374e" Feb 13 21:06:49.075359 containerd[1793]: time="2025-02-13T21:06:49.075343114Z" level=error msg="Failed to destroy network for sandbox \"cba59620c311eb0d1d072de67ea3d820401cc8b18c8f0a6943d666f228aa7a77\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:06:49.075501 containerd[1793]: time="2025-02-13T21:06:49.075491029Z" level=error msg="encountered an error cleaning up failed sandbox \"cba59620c311eb0d1d072de67ea3d820401cc8b18c8f0a6943d666f228aa7a77\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:06:49.075528 containerd[1793]: time="2025-02-13T21:06:49.075518375Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-pfzwq,Uid:322cd9eb-1852-4c5c-aae9-cdfcee960ce3,Namespace:kube-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"cba59620c311eb0d1d072de67ea3d820401cc8b18c8f0a6943d666f228aa7a77\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:06:49.075607 kubelet[3104]: E0213 21:06:49.075590 3104 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cba59620c311eb0d1d072de67ea3d820401cc8b18c8f0a6943d666f228aa7a77\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:06:49.075653 kubelet[3104]: E0213 21:06:49.075617 3104 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cba59620c311eb0d1d072de67ea3d820401cc8b18c8f0a6943d666f228aa7a77\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-pfzwq" Feb 13 21:06:49.075653 kubelet[3104]: E0213 21:06:49.075640 3104 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cba59620c311eb0d1d072de67ea3d820401cc8b18c8f0a6943d666f228aa7a77\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-pfzwq" Feb 13 21:06:49.075714 kubelet[3104]: E0213 21:06:49.075669 3104 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-pfzwq_kube-system(322cd9eb-1852-4c5c-aae9-cdfcee960ce3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-pfzwq_kube-system(322cd9eb-1852-4c5c-aae9-cdfcee960ce3)\\\": rpc error: 
code = Unknown desc = failed to setup network for sandbox \\\"cba59620c311eb0d1d072de67ea3d820401cc8b18c8f0a6943d666f228aa7a77\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-pfzwq" podUID="322cd9eb-1852-4c5c-aae9-cdfcee960ce3" Feb 13 21:06:49.076203 containerd[1793]: time="2025-02-13T21:06:49.076189256Z" level=error msg="Failed to destroy network for sandbox \"f950a49db2646939f834ce6a431489707af8828f553b5b14fda9dd5bca608104\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:06:49.076342 containerd[1793]: time="2025-02-13T21:06:49.076331544Z" level=error msg="encountered an error cleaning up failed sandbox \"f950a49db2646939f834ce6a431489707af8828f553b5b14fda9dd5bca608104\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:06:49.076366 containerd[1793]: time="2025-02-13T21:06:49.076355581Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-76848bdf96-56fb4,Uid:4552908e-a949-4d18-be80-ec54e1c8e06d,Namespace:calico-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"f950a49db2646939f834ce6a431489707af8828f553b5b14fda9dd5bca608104\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:06:49.076427 kubelet[3104]: E0213 21:06:49.076415 3104 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f950a49db2646939f834ce6a431489707af8828f553b5b14fda9dd5bca608104\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:06:49.076459 kubelet[3104]: E0213 21:06:49.076435 3104 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f950a49db2646939f834ce6a431489707af8828f553b5b14fda9dd5bca608104\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-76848bdf96-56fb4" Feb 13 21:06:49.076459 kubelet[3104]: E0213 21:06:49.076448 3104 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f950a49db2646939f834ce6a431489707af8828f553b5b14fda9dd5bca608104\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-76848bdf96-56fb4" Feb 13 21:06:49.076497 kubelet[3104]: E0213 21:06:49.076473 3104 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-76848bdf96-56fb4_calico-system(4552908e-a949-4d18-be80-ec54e1c8e06d)\" with CreatePodSandboxError: \"Failed to 
create sandbox for pod \\\"calico-kube-controllers-76848bdf96-56fb4_calico-system(4552908e-a949-4d18-be80-ec54e1c8e06d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f950a49db2646939f834ce6a431489707af8828f553b5b14fda9dd5bca608104\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-76848bdf96-56fb4" podUID="4552908e-a949-4d18-be80-ec54e1c8e06d" Feb 13 21:06:49.076902 containerd[1793]: time="2025-02-13T21:06:49.076861808Z" level=error msg="Failed to destroy network for sandbox \"114fc3f722a39799d0f248d314bf5420657b9477dbaaf3f6fd71c3483ba645d7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:06:49.077032 containerd[1793]: time="2025-02-13T21:06:49.076996181Z" level=error msg="encountered an error cleaning up failed sandbox \"114fc3f722a39799d0f248d314bf5420657b9477dbaaf3f6fd71c3483ba645d7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:06:49.077032 containerd[1793]: time="2025-02-13T21:06:49.077019731Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dv5v9,Uid:22f50a6b-4846-46c1-8c41-99e176056718,Namespace:calico-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"114fc3f722a39799d0f248d314bf5420657b9477dbaaf3f6fd71c3483ba645d7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:06:49.077092 kubelet[3104]: E0213 21:06:49.077080 3104 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"114fc3f722a39799d0f248d314bf5420657b9477dbaaf3f6fd71c3483ba645d7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:06:49.077110 kubelet[3104]: E0213 21:06:49.077099 3104 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"114fc3f722a39799d0f248d314bf5420657b9477dbaaf3f6fd71c3483ba645d7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-dv5v9" Feb 13 21:06:49.077127 kubelet[3104]: E0213 21:06:49.077110 3104 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"114fc3f722a39799d0f248d314bf5420657b9477dbaaf3f6fd71c3483ba645d7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-dv5v9" Feb 13 21:06:49.077147 kubelet[3104]: E0213 21:06:49.077127 3104 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"csi-node-driver-dv5v9_calico-system(22f50a6b-4846-46c1-8c41-99e176056718)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-dv5v9_calico-system(22f50a6b-4846-46c1-8c41-99e176056718)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"114fc3f722a39799d0f248d314bf5420657b9477dbaaf3f6fd71c3483ba645d7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-dv5v9" podUID="22f50a6b-4846-46c1-8c41-99e176056718" Feb 13 21:06:49.587954 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a6223b2daac34ac9233c865bdf29f5ecf313eee0267c57a63761c44e27592fe3-shm.mount: Deactivated successfully. Feb 13 21:06:50.022971 kubelet[3104]: I0213 21:06:50.022867 3104 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="114fc3f722a39799d0f248d314bf5420657b9477dbaaf3f6fd71c3483ba645d7" Feb 13 21:06:50.023342 containerd[1793]: time="2025-02-13T21:06:50.023234480Z" level=info msg="StopPodSandbox for \"114fc3f722a39799d0f248d314bf5420657b9477dbaaf3f6fd71c3483ba645d7\"" Feb 13 21:06:50.023648 containerd[1793]: time="2025-02-13T21:06:50.023412610Z" level=info msg="Ensure that sandbox 114fc3f722a39799d0f248d314bf5420657b9477dbaaf3f6fd71c3483ba645d7 in task-service has been cleanup successfully" Feb 13 21:06:50.023648 containerd[1793]: time="2025-02-13T21:06:50.023552639Z" level=info msg="TearDown network for sandbox \"114fc3f722a39799d0f248d314bf5420657b9477dbaaf3f6fd71c3483ba645d7\" successfully" Feb 13 21:06:50.023648 containerd[1793]: time="2025-02-13T21:06:50.023565424Z" level=info msg="StopPodSandbox for \"114fc3f722a39799d0f248d314bf5420657b9477dbaaf3f6fd71c3483ba645d7\" returns successfully" Feb 13 21:06:50.023749 containerd[1793]: time="2025-02-13T21:06:50.023730534Z" level=info msg="StopPodSandbox for \"8c7d15cdb7c0b8ba3564fbb9684d7b6608656288a9e470c2a4ccd3901de736b4\"" Feb 13 21:06:50.023813 containerd[1793]: time="2025-02-13T21:06:50.023799167Z" level=info msg="TearDown network for sandbox \"8c7d15cdb7c0b8ba3564fbb9684d7b6608656288a9e470c2a4ccd3901de736b4\" successfully" Feb 13 21:06:50.023849 containerd[1793]: time="2025-02-13T21:06:50.023811463Z" level=info msg="StopPodSandbox for \"8c7d15cdb7c0b8ba3564fbb9684d7b6608656288a9e470c2a4ccd3901de736b4\" returns successfully" Feb 13 21:06:50.024014 containerd[1793]: time="2025-02-13T21:06:50.023989859Z" level=info msg="StopPodSandbox for \"7d0ffa7070cf221d6b53fde8753c80adaebce4b303256fec539795ff6b8d0673\"" Feb 13 21:06:50.024107 containerd[1793]: time="2025-02-13T21:06:50.024087335Z" level=info msg="TearDown network for sandbox \"7d0ffa7070cf221d6b53fde8753c80adaebce4b303256fec539795ff6b8d0673\" successfully" Feb 13 21:06:50.024144 containerd[1793]: time="2025-02-13T21:06:50.024107707Z" level=info msg="StopPodSandbox for \"7d0ffa7070cf221d6b53fde8753c80adaebce4b303256fec539795ff6b8d0673\" returns successfully" Feb 13 21:06:50.024373 containerd[1793]: time="2025-02-13T21:06:50.024356498Z" level=info msg="StopPodSandbox for \"7d5ec4c5cb492ab7ba5ec64c093f408197ef657dfbb781c600cfe63e505b2831\"" Feb 13 21:06:50.024453 containerd[1793]: time="2025-02-13T21:06:50.024431852Z" level=info msg="TearDown network for sandbox \"7d5ec4c5cb492ab7ba5ec64c093f408197ef657dfbb781c600cfe63e505b2831\" successfully" Feb 13 21:06:50.024487 containerd[1793]: time="2025-02-13T21:06:50.024453091Z" level=info 
msg="StopPodSandbox for \"7d5ec4c5cb492ab7ba5ec64c093f408197ef657dfbb781c600cfe63e505b2831\" returns successfully" Feb 13 21:06:50.024518 kubelet[3104]: I0213 21:06:50.024492 3104 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f950a49db2646939f834ce6a431489707af8828f553b5b14fda9dd5bca608104" Feb 13 21:06:50.024685 containerd[1793]: time="2025-02-13T21:06:50.024667112Z" level=info msg="StopPodSandbox for \"607ba46d10ad96974b57e545cd12cdc1dd7b6138131f3ab6e1bc63407d40518f\"" Feb 13 21:06:50.024749 containerd[1793]: time="2025-02-13T21:06:50.024736546Z" level=info msg="TearDown network for sandbox \"607ba46d10ad96974b57e545cd12cdc1dd7b6138131f3ab6e1bc63407d40518f\" successfully" Feb 13 21:06:50.024780 containerd[1793]: time="2025-02-13T21:06:50.024748725Z" level=info msg="StopPodSandbox for \"607ba46d10ad96974b57e545cd12cdc1dd7b6138131f3ab6e1bc63407d40518f\" returns successfully" Feb 13 21:06:50.024884 containerd[1793]: time="2025-02-13T21:06:50.024864442Z" level=info msg="StopPodSandbox for \"f950a49db2646939f834ce6a431489707af8828f553b5b14fda9dd5bca608104\"" Feb 13 21:06:50.025064 containerd[1793]: time="2025-02-13T21:06:50.025045396Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dv5v9,Uid:22f50a6b-4846-46c1-8c41-99e176056718,Namespace:calico-system,Attempt:5,}" Feb 13 21:06:50.025096 containerd[1793]: time="2025-02-13T21:06:50.025071322Z" level=info msg="Ensure that sandbox f950a49db2646939f834ce6a431489707af8828f553b5b14fda9dd5bca608104 in task-service has been cleanup successfully" Feb 13 21:06:50.025273 containerd[1793]: time="2025-02-13T21:06:50.025252930Z" level=info msg="TearDown network for sandbox \"f950a49db2646939f834ce6a431489707af8828f553b5b14fda9dd5bca608104\" successfully" Feb 13 21:06:50.025307 containerd[1793]: time="2025-02-13T21:06:50.025275458Z" level=info msg="StopPodSandbox for \"f950a49db2646939f834ce6a431489707af8828f553b5b14fda9dd5bca608104\" returns successfully" Feb 13 21:06:50.025516 containerd[1793]: time="2025-02-13T21:06:50.025503376Z" level=info msg="StopPodSandbox for \"2fd9e12b34be1cf8d642dfe4f2e60efafadedcf4354053583974145aabb5db8f\"" Feb 13 21:06:50.025582 containerd[1793]: time="2025-02-13T21:06:50.025571752Z" level=info msg="TearDown network for sandbox \"2fd9e12b34be1cf8d642dfe4f2e60efafadedcf4354053583974145aabb5db8f\" successfully" Feb 13 21:06:50.025612 containerd[1793]: time="2025-02-13T21:06:50.025581817Z" level=info msg="StopPodSandbox for \"2fd9e12b34be1cf8d642dfe4f2e60efafadedcf4354053583974145aabb5db8f\" returns successfully" Feb 13 21:06:50.025695 containerd[1793]: time="2025-02-13T21:06:50.025685464Z" level=info msg="StopPodSandbox for \"f39bdc1c54e67583094579cb0143089f0fee0e796689505e74340aa0877e958a\"" Feb 13 21:06:50.025734 containerd[1793]: time="2025-02-13T21:06:50.025716508Z" level=info msg="TearDown network for sandbox \"f39bdc1c54e67583094579cb0143089f0fee0e796689505e74340aa0877e958a\" successfully" Feb 13 21:06:50.025734 containerd[1793]: time="2025-02-13T21:06:50.025722025Z" level=info msg="StopPodSandbox for \"f39bdc1c54e67583094579cb0143089f0fee0e796689505e74340aa0877e958a\" returns successfully" Feb 13 21:06:50.025838 kubelet[3104]: I0213 21:06:50.025830 3104 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cba59620c311eb0d1d072de67ea3d820401cc8b18c8f0a6943d666f228aa7a77" Feb 13 21:06:50.025870 containerd[1793]: time="2025-02-13T21:06:50.025826553Z" level=info msg="StopPodSandbox for 
\"466ce85604a81ffcf0adb514f74df036223f30285756dd3e79f40e1ea6c59de1\"" Feb 13 21:06:50.025892 containerd[1793]: time="2025-02-13T21:06:50.025880137Z" level=info msg="TearDown network for sandbox \"466ce85604a81ffcf0adb514f74df036223f30285756dd3e79f40e1ea6c59de1\" successfully" Feb 13 21:06:50.025892 containerd[1793]: time="2025-02-13T21:06:50.025887338Z" level=info msg="StopPodSandbox for \"466ce85604a81ffcf0adb514f74df036223f30285756dd3e79f40e1ea6c59de1\" returns successfully" Feb 13 21:06:50.026007 containerd[1793]: time="2025-02-13T21:06:50.025996569Z" level=info msg="StopPodSandbox for \"abca32da56d9f98e0f2a3235028b1bc2f3f48389f22bae6604fcddef0f674739\"" Feb 13 21:06:50.026049 containerd[1793]: time="2025-02-13T21:06:50.026038508Z" level=info msg="TearDown network for sandbox \"abca32da56d9f98e0f2a3235028b1bc2f3f48389f22bae6604fcddef0f674739\" successfully" Feb 13 21:06:50.026075 containerd[1793]: time="2025-02-13T21:06:50.026047896Z" level=info msg="StopPodSandbox for \"cba59620c311eb0d1d072de67ea3d820401cc8b18c8f0a6943d666f228aa7a77\"" Feb 13 21:06:50.026100 containerd[1793]: time="2025-02-13T21:06:50.026048885Z" level=info msg="StopPodSandbox for \"abca32da56d9f98e0f2a3235028b1bc2f3f48389f22bae6604fcddef0f674739\" returns successfully" Feb 13 21:06:50.026142 containerd[1793]: time="2025-02-13T21:06:50.026132533Z" level=info msg="Ensure that sandbox cba59620c311eb0d1d072de67ea3d820401cc8b18c8f0a6943d666f228aa7a77 in task-service has been cleanup successfully" Feb 13 21:06:50.026214 containerd[1793]: time="2025-02-13T21:06:50.026206400Z" level=info msg="TearDown network for sandbox \"cba59620c311eb0d1d072de67ea3d820401cc8b18c8f0a6943d666f228aa7a77\" successfully" Feb 13 21:06:50.026214 containerd[1793]: time="2025-02-13T21:06:50.026213463Z" level=info msg="StopPodSandbox for \"cba59620c311eb0d1d072de67ea3d820401cc8b18c8f0a6943d666f228aa7a77\" returns successfully" Feb 13 21:06:50.026278 containerd[1793]: time="2025-02-13T21:06:50.026258627Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-76848bdf96-56fb4,Uid:4552908e-a949-4d18-be80-ec54e1c8e06d,Namespace:calico-system,Attempt:5,}" Feb 13 21:06:50.026300 containerd[1793]: time="2025-02-13T21:06:50.026292638Z" level=info msg="StopPodSandbox for \"3b4c1aa1158410586e55dcad205753294e18ec4c0995d0e460c83ccd690b03b0\"" Feb 13 21:06:50.026340 containerd[1793]: time="2025-02-13T21:06:50.026332324Z" level=info msg="TearDown network for sandbox \"3b4c1aa1158410586e55dcad205753294e18ec4c0995d0e460c83ccd690b03b0\" successfully" Feb 13 21:06:50.026359 containerd[1793]: time="2025-02-13T21:06:50.026339382Z" level=info msg="StopPodSandbox for \"3b4c1aa1158410586e55dcad205753294e18ec4c0995d0e460c83ccd690b03b0\" returns successfully" Feb 13 21:06:50.026404 systemd[1]: run-netns-cni\x2d7d89fd7a\x2d1532\x2dbe21\x2db021\x2db073d5f13f64.mount: Deactivated successfully. 
Feb 13 21:06:50.026524 containerd[1793]: time="2025-02-13T21:06:50.026481317Z" level=info msg="StopPodSandbox for \"d709f55a62a0ee266e179ae3b4078db4e87cc555d02412f211cd24e553d91a3d\"" Feb 13 21:06:50.026556 containerd[1793]: time="2025-02-13T21:06:50.026520722Z" level=info msg="TearDown network for sandbox \"d709f55a62a0ee266e179ae3b4078db4e87cc555d02412f211cd24e553d91a3d\" successfully" Feb 13 21:06:50.026556 containerd[1793]: time="2025-02-13T21:06:50.026529395Z" level=info msg="StopPodSandbox for \"d709f55a62a0ee266e179ae3b4078db4e87cc555d02412f211cd24e553d91a3d\" returns successfully" Feb 13 21:06:50.026636 containerd[1793]: time="2025-02-13T21:06:50.026627115Z" level=info msg="StopPodSandbox for \"e2d2a63cc355082e3d667838ab36d2b64fe4355a317ff4fee15a852a46dfa7fd\"" Feb 13 21:06:50.026672 containerd[1793]: time="2025-02-13T21:06:50.026666176Z" level=info msg="TearDown network for sandbox \"e2d2a63cc355082e3d667838ab36d2b64fe4355a317ff4fee15a852a46dfa7fd\" successfully" Feb 13 21:06:50.026690 containerd[1793]: time="2025-02-13T21:06:50.026672518Z" level=info msg="StopPodSandbox for \"e2d2a63cc355082e3d667838ab36d2b64fe4355a317ff4fee15a852a46dfa7fd\" returns successfully" Feb 13 21:06:50.026706 kubelet[3104]: I0213 21:06:50.026667 3104 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a6223b2daac34ac9233c865bdf29f5ecf313eee0267c57a63761c44e27592fe3" Feb 13 21:06:50.026790 containerd[1793]: time="2025-02-13T21:06:50.026781505Z" level=info msg="StopPodSandbox for \"d8a3a4b346869a0c10429ac8070838b577439133613a8cd388036c740d6d9069\"" Feb 13 21:06:50.026825 containerd[1793]: time="2025-02-13T21:06:50.026818077Z" level=info msg="TearDown network for sandbox \"d8a3a4b346869a0c10429ac8070838b577439133613a8cd388036c740d6d9069\" successfully" Feb 13 21:06:50.026844 containerd[1793]: time="2025-02-13T21:06:50.026824324Z" level=info msg="StopPodSandbox for \"d8a3a4b346869a0c10429ac8070838b577439133613a8cd388036c740d6d9069\" returns successfully" Feb 13 21:06:50.026863 containerd[1793]: time="2025-02-13T21:06:50.026857653Z" level=info msg="StopPodSandbox for \"a6223b2daac34ac9233c865bdf29f5ecf313eee0267c57a63761c44e27592fe3\"" Feb 13 21:06:50.026962 containerd[1793]: time="2025-02-13T21:06:50.026951144Z" level=info msg="Ensure that sandbox a6223b2daac34ac9233c865bdf29f5ecf313eee0267c57a63761c44e27592fe3 in task-service has been cleanup successfully" Feb 13 21:06:50.027048 containerd[1793]: time="2025-02-13T21:06:50.027032402Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-pfzwq,Uid:322cd9eb-1852-4c5c-aae9-cdfcee960ce3,Namespace:kube-system,Attempt:5,}" Feb 13 21:06:50.027048 containerd[1793]: time="2025-02-13T21:06:50.027039390Z" level=info msg="TearDown network for sandbox \"a6223b2daac34ac9233c865bdf29f5ecf313eee0267c57a63761c44e27592fe3\" successfully" Feb 13 21:06:50.027118 containerd[1793]: time="2025-02-13T21:06:50.027054374Z" level=info msg="StopPodSandbox for \"a6223b2daac34ac9233c865bdf29f5ecf313eee0267c57a63761c44e27592fe3\" returns successfully" Feb 13 21:06:50.027161 containerd[1793]: time="2025-02-13T21:06:50.027152693Z" level=info msg="StopPodSandbox for \"8796e4215c03d4346b528f4c0fa80de9271293edb2beaab0ba58791569cb1af7\"" Feb 13 21:06:50.027223 containerd[1793]: time="2025-02-13T21:06:50.027191737Z" level=info msg="TearDown network for sandbox \"8796e4215c03d4346b528f4c0fa80de9271293edb2beaab0ba58791569cb1af7\" successfully" Feb 13 21:06:50.027249 containerd[1793]: time="2025-02-13T21:06:50.027224128Z" level=info 
msg="StopPodSandbox for \"8796e4215c03d4346b528f4c0fa80de9271293edb2beaab0ba58791569cb1af7\" returns successfully" Feb 13 21:06:50.027326 containerd[1793]: time="2025-02-13T21:06:50.027317660Z" level=info msg="StopPodSandbox for \"bc71ef1d5f9c21449e07e38c85ffe1beac4f9a807ff5e0b31763576b20c05628\"" Feb 13 21:06:50.027360 containerd[1793]: time="2025-02-13T21:06:50.027350112Z" level=info msg="TearDown network for sandbox \"bc71ef1d5f9c21449e07e38c85ffe1beac4f9a807ff5e0b31763576b20c05628\" successfully" Feb 13 21:06:50.027388 containerd[1793]: time="2025-02-13T21:06:50.027358909Z" level=info msg="StopPodSandbox for \"bc71ef1d5f9c21449e07e38c85ffe1beac4f9a807ff5e0b31763576b20c05628\" returns successfully" Feb 13 21:06:50.027458 containerd[1793]: time="2025-02-13T21:06:50.027447026Z" level=info msg="StopPodSandbox for \"d743908868c106c2ba60beb013f0f1908f95ec56678e1aea1e2666786a2cc1a7\"" Feb 13 21:06:50.027491 kubelet[3104]: I0213 21:06:50.027465 3104 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d089ce282490e797583b2adbc06fdf6162be40bd25ae6dda0763da66c066d46a" Feb 13 21:06:50.027526 containerd[1793]: time="2025-02-13T21:06:50.027491264Z" level=info msg="TearDown network for sandbox \"d743908868c106c2ba60beb013f0f1908f95ec56678e1aea1e2666786a2cc1a7\" successfully" Feb 13 21:06:50.027526 containerd[1793]: time="2025-02-13T21:06:50.027497751Z" level=info msg="StopPodSandbox for \"d743908868c106c2ba60beb013f0f1908f95ec56678e1aea1e2666786a2cc1a7\" returns successfully" Feb 13 21:06:50.027591 containerd[1793]: time="2025-02-13T21:06:50.027577989Z" level=info msg="StopPodSandbox for \"806d7e387453f8d8aa5dc4336096cd465cb7990f839cd53aea28bc6a97bca77d\"" Feb 13 21:06:50.027631 containerd[1793]: time="2025-02-13T21:06:50.027616386Z" level=info msg="StopPodSandbox for \"d089ce282490e797583b2adbc06fdf6162be40bd25ae6dda0763da66c066d46a\"" Feb 13 21:06:50.027661 containerd[1793]: time="2025-02-13T21:06:50.027636092Z" level=info msg="TearDown network for sandbox \"806d7e387453f8d8aa5dc4336096cd465cb7990f839cd53aea28bc6a97bca77d\" successfully" Feb 13 21:06:50.027661 containerd[1793]: time="2025-02-13T21:06:50.027646459Z" level=info msg="StopPodSandbox for \"806d7e387453f8d8aa5dc4336096cd465cb7990f839cd53aea28bc6a97bca77d\" returns successfully" Feb 13 21:06:50.027729 containerd[1793]: time="2025-02-13T21:06:50.027720010Z" level=info msg="Ensure that sandbox d089ce282490e797583b2adbc06fdf6162be40bd25ae6dda0763da66c066d46a in task-service has been cleanup successfully" Feb 13 21:06:50.027816 containerd[1793]: time="2025-02-13T21:06:50.027804473Z" level=info msg="TearDown network for sandbox \"d089ce282490e797583b2adbc06fdf6162be40bd25ae6dda0763da66c066d46a\" successfully" Feb 13 21:06:50.027842 containerd[1793]: time="2025-02-13T21:06:50.027816149Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-dfcf67d5f-gh88q,Uid:0b74a159-92ea-4d91-b1cc-e328f7cf493e,Namespace:calico-apiserver,Attempt:5,}" Feb 13 21:06:50.027892 containerd[1793]: time="2025-02-13T21:06:50.027815662Z" level=info msg="StopPodSandbox for \"d089ce282490e797583b2adbc06fdf6162be40bd25ae6dda0763da66c066d46a\" returns successfully" Feb 13 21:06:50.028064 containerd[1793]: time="2025-02-13T21:06:50.028054862Z" level=info msg="StopPodSandbox for \"df9c2dd26c1795b054bbe7a16242fde8712a142b2ac8cc97d56afbb8e3e869cd\"" Feb 13 21:06:50.028109 containerd[1793]: time="2025-02-13T21:06:50.028099559Z" level=info msg="TearDown network for sandbox 
\"df9c2dd26c1795b054bbe7a16242fde8712a142b2ac8cc97d56afbb8e3e869cd\" successfully" Feb 13 21:06:50.028140 containerd[1793]: time="2025-02-13T21:06:50.028109219Z" level=info msg="StopPodSandbox for \"df9c2dd26c1795b054bbe7a16242fde8712a142b2ac8cc97d56afbb8e3e869cd\" returns successfully" Feb 13 21:06:50.028199 containerd[1793]: time="2025-02-13T21:06:50.028190711Z" level=info msg="StopPodSandbox for \"28598cb5bb4f19ef2be3d2eeb2993826fb614f87c197bbbec9e1ef75161630b9\"" Feb 13 21:06:50.028231 containerd[1793]: time="2025-02-13T21:06:50.028225494Z" level=info msg="TearDown network for sandbox \"28598cb5bb4f19ef2be3d2eeb2993826fb614f87c197bbbec9e1ef75161630b9\" successfully" Feb 13 21:06:50.028252 containerd[1793]: time="2025-02-13T21:06:50.028231576Z" level=info msg="StopPodSandbox for \"28598cb5bb4f19ef2be3d2eeb2993826fb614f87c197bbbec9e1ef75161630b9\" returns successfully" Feb 13 21:06:50.028314 containerd[1793]: time="2025-02-13T21:06:50.028307243Z" level=info msg="StopPodSandbox for \"492d54f71e9ddb34de33c29601b5531497a6eb52fef88332daf738ef9a4555fd\"" Feb 13 21:06:50.028336 kubelet[3104]: I0213 21:06:50.028311 3104 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9903ccace9462852d1b468d91593d78c0bfdcd616759437c7c9178ce919b932e" Feb 13 21:06:50.028355 containerd[1793]: time="2025-02-13T21:06:50.028338344Z" level=info msg="TearDown network for sandbox \"492d54f71e9ddb34de33c29601b5531497a6eb52fef88332daf738ef9a4555fd\" successfully" Feb 13 21:06:50.028355 containerd[1793]: time="2025-02-13T21:06:50.028343699Z" level=info msg="StopPodSandbox for \"492d54f71e9ddb34de33c29601b5531497a6eb52fef88332daf738ef9a4555fd\" returns successfully" Feb 13 21:06:50.028438 containerd[1793]: time="2025-02-13T21:06:50.028423481Z" level=info msg="StopPodSandbox for \"e9526415ddf2b5def2736c83dbde440214ca6bbca341b6a7e7eede781d227057\"" Feb 13 21:06:50.028460 containerd[1793]: time="2025-02-13T21:06:50.028450841Z" level=info msg="StopPodSandbox for \"9903ccace9462852d1b468d91593d78c0bfdcd616759437c7c9178ce919b932e\"" Feb 13 21:06:50.028484 containerd[1793]: time="2025-02-13T21:06:50.028476370Z" level=info msg="TearDown network for sandbox \"e9526415ddf2b5def2736c83dbde440214ca6bbca341b6a7e7eede781d227057\" successfully" Feb 13 21:06:50.028507 containerd[1793]: time="2025-02-13T21:06:50.028483747Z" level=info msg="StopPodSandbox for \"e9526415ddf2b5def2736c83dbde440214ca6bbca341b6a7e7eede781d227057\" returns successfully" Feb 13 21:06:50.028531 containerd[1793]: time="2025-02-13T21:06:50.028523793Z" level=info msg="Ensure that sandbox 9903ccace9462852d1b468d91593d78c0bfdcd616759437c7c9178ce919b932e in task-service has been cleanup successfully" Feb 13 21:06:50.028592 containerd[1793]: time="2025-02-13T21:06:50.028585683Z" level=info msg="TearDown network for sandbox \"9903ccace9462852d1b468d91593d78c0bfdcd616759437c7c9178ce919b932e\" successfully" Feb 13 21:06:50.028618 containerd[1793]: time="2025-02-13T21:06:50.028592511Z" level=info msg="StopPodSandbox for \"9903ccace9462852d1b468d91593d78c0bfdcd616759437c7c9178ce919b932e\" returns successfully" Feb 13 21:06:50.028641 containerd[1793]: time="2025-02-13T21:06:50.028629568Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-dfcf67d5f-gwq2t,Uid:37d2fc1a-4796-4ba1-89bd-a5b0ae45374e,Namespace:calico-apiserver,Attempt:5,}" Feb 13 21:06:50.028682 containerd[1793]: time="2025-02-13T21:06:50.028674994Z" level=info msg="StopPodSandbox for \"016f2909cab214152679a0158a239350ed8254b0aa06124f243e8bc5668cf1a1\"" Feb 
13 21:06:50.028711 containerd[1793]: time="2025-02-13T21:06:50.028705816Z" level=info msg="TearDown network for sandbox \"016f2909cab214152679a0158a239350ed8254b0aa06124f243e8bc5668cf1a1\" successfully" Feb 13 21:06:50.028744 containerd[1793]: time="2025-02-13T21:06:50.028712186Z" level=info msg="StopPodSandbox for \"016f2909cab214152679a0158a239350ed8254b0aa06124f243e8bc5668cf1a1\" returns successfully" Feb 13 21:06:50.028830 systemd[1]: run-netns-cni\x2dbaae33ca\x2d27ec\x2db16b\x2d0823\x2d7e0e8bffef8c.mount: Deactivated successfully. Feb 13 21:06:50.028877 containerd[1793]: time="2025-02-13T21:06:50.028824839Z" level=info msg="StopPodSandbox for \"0b3206681777142bb4f39eddf842578db1abd2a1ba3d164b0fef9121f983646c\"" Feb 13 21:06:50.028896 containerd[1793]: time="2025-02-13T21:06:50.028876054Z" level=info msg="TearDown network for sandbox \"0b3206681777142bb4f39eddf842578db1abd2a1ba3d164b0fef9121f983646c\" successfully" Feb 13 21:06:50.028896 containerd[1793]: time="2025-02-13T21:06:50.028886429Z" level=info msg="StopPodSandbox for \"0b3206681777142bb4f39eddf842578db1abd2a1ba3d164b0fef9121f983646c\" returns successfully" Feb 13 21:06:50.028898 systemd[1]: run-netns-cni\x2daec2849b\x2d1669\x2d7e71\x2dfd48\x2dfb65d4bbffed.mount: Deactivated successfully. Feb 13 21:06:50.028956 systemd[1]: run-netns-cni\x2dc7f663a8\x2d0a54\x2d3343\x2dfa1f\x2d0d86a9da74a5.mount: Deactivated successfully. Feb 13 21:06:50.028981 containerd[1793]: time="2025-02-13T21:06:50.028969443Z" level=info msg="StopPodSandbox for \"7501b1f0bebccf8c96d599d12b518a8ac89e39145380dcd10318dd4669755ac7\"" Feb 13 21:06:50.029010 containerd[1793]: time="2025-02-13T21:06:50.029003653Z" level=info msg="TearDown network for sandbox \"7501b1f0bebccf8c96d599d12b518a8ac89e39145380dcd10318dd4669755ac7\" successfully" Feb 13 21:06:50.029028 containerd[1793]: time="2025-02-13T21:06:50.029009890Z" level=info msg="StopPodSandbox for \"7501b1f0bebccf8c96d599d12b518a8ac89e39145380dcd10318dd4669755ac7\" returns successfully" Feb 13 21:06:50.029108 containerd[1793]: time="2025-02-13T21:06:50.029098350Z" level=info msg="StopPodSandbox for \"1a9edd12dcaee4b4406dd52975b192e9f91b2c28c5929cf19d4f43b12760a3ad\"" Feb 13 21:06:50.029139 containerd[1793]: time="2025-02-13T21:06:50.029132426Z" level=info msg="TearDown network for sandbox \"1a9edd12dcaee4b4406dd52975b192e9f91b2c28c5929cf19d4f43b12760a3ad\" successfully" Feb 13 21:06:50.029158 containerd[1793]: time="2025-02-13T21:06:50.029139274Z" level=info msg="StopPodSandbox for \"1a9edd12dcaee4b4406dd52975b192e9f91b2c28c5929cf19d4f43b12760a3ad\" returns successfully" Feb 13 21:06:50.029278 containerd[1793]: time="2025-02-13T21:06:50.029268598Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-wr725,Uid:a02339f1-24d3-4002-b90e-3dd190621c61,Namespace:kube-system,Attempt:5,}" Feb 13 21:06:50.032843 systemd[1]: run-netns-cni\x2ddf268a7d\x2d12bf\x2d9df5\x2d6881\x2d9e2a569fc2a8.mount: Deactivated successfully. Feb 13 21:06:50.032957 systemd[1]: run-netns-cni\x2daa33d696\x2d455e\x2d1ba2\x2d2983\x2d9cd2bec970cd.mount: Deactivated successfully. 
Feb 13 21:06:50.065989 containerd[1793]: time="2025-02-13T21:06:50.065890037Z" level=error msg="Failed to destroy network for sandbox \"c6698fade93608c8cee35101de0258390fd9402cac4152e2bc13f5618759b460\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:06:50.066150 containerd[1793]: time="2025-02-13T21:06:50.066132866Z" level=error msg="encountered an error cleaning up failed sandbox \"c6698fade93608c8cee35101de0258390fd9402cac4152e2bc13f5618759b460\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:06:50.066197 containerd[1793]: time="2025-02-13T21:06:50.066180804Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dv5v9,Uid:22f50a6b-4846-46c1-8c41-99e176056718,Namespace:calico-system,Attempt:5,} failed, error" error="failed to setup network for sandbox \"c6698fade93608c8cee35101de0258390fd9402cac4152e2bc13f5618759b460\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:06:50.066369 kubelet[3104]: E0213 21:06:50.066345 3104 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c6698fade93608c8cee35101de0258390fd9402cac4152e2bc13f5618759b460\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:06:50.066419 kubelet[3104]: E0213 21:06:50.066391 3104 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c6698fade93608c8cee35101de0258390fd9402cac4152e2bc13f5618759b460\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-dv5v9" Feb 13 21:06:50.066419 kubelet[3104]: E0213 21:06:50.066406 3104 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c6698fade93608c8cee35101de0258390fd9402cac4152e2bc13f5618759b460\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-dv5v9" Feb 13 21:06:50.066459 kubelet[3104]: E0213 21:06:50.066441 3104 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-dv5v9_calico-system(22f50a6b-4846-46c1-8c41-99e176056718)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-dv5v9_calico-system(22f50a6b-4846-46c1-8c41-99e176056718)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c6698fade93608c8cee35101de0258390fd9402cac4152e2bc13f5618759b460\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-dv5v9" 
podUID="22f50a6b-4846-46c1-8c41-99e176056718" Feb 13 21:06:50.066553 containerd[1793]: time="2025-02-13T21:06:50.066540177Z" level=error msg="Failed to destroy network for sandbox \"789b26d424f00808a3ab5fad99f1f47b72beac83af5effe57a868b514edb410f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:06:50.066724 containerd[1793]: time="2025-02-13T21:06:50.066710028Z" level=error msg="encountered an error cleaning up failed sandbox \"789b26d424f00808a3ab5fad99f1f47b72beac83af5effe57a868b514edb410f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:06:50.066755 containerd[1793]: time="2025-02-13T21:06:50.066742127Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-76848bdf96-56fb4,Uid:4552908e-a949-4d18-be80-ec54e1c8e06d,Namespace:calico-system,Attempt:5,} failed, error" error="failed to setup network for sandbox \"789b26d424f00808a3ab5fad99f1f47b72beac83af5effe57a868b514edb410f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:06:50.066835 kubelet[3104]: E0213 21:06:50.066823 3104 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"789b26d424f00808a3ab5fad99f1f47b72beac83af5effe57a868b514edb410f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:06:50.066864 kubelet[3104]: E0213 21:06:50.066842 3104 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"789b26d424f00808a3ab5fad99f1f47b72beac83af5effe57a868b514edb410f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-76848bdf96-56fb4" Feb 13 21:06:50.066864 kubelet[3104]: E0213 21:06:50.066855 3104 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"789b26d424f00808a3ab5fad99f1f47b72beac83af5effe57a868b514edb410f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-76848bdf96-56fb4" Feb 13 21:06:50.066906 kubelet[3104]: E0213 21:06:50.066875 3104 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-76848bdf96-56fb4_calico-system(4552908e-a949-4d18-be80-ec54e1c8e06d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-76848bdf96-56fb4_calico-system(4552908e-a949-4d18-be80-ec54e1c8e06d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"789b26d424f00808a3ab5fad99f1f47b72beac83af5effe57a868b514edb410f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-76848bdf96-56fb4" podUID="4552908e-a949-4d18-be80-ec54e1c8e06d" Feb 13 21:06:50.072065 containerd[1793]: time="2025-02-13T21:06:50.072036073Z" level=error msg="Failed to destroy network for sandbox \"64ab4db212842309d42a7b016e7df9502295569b6e7460c381f36b3bb550eb0b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:06:50.072202 containerd[1793]: time="2025-02-13T21:06:50.072187989Z" level=error msg="Failed to destroy network for sandbox \"96f2f7d295cdec9dfcfe2c67db03dacf64d94c324111b66970f309c44bd369df\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:06:50.072260 containerd[1793]: time="2025-02-13T21:06:50.072243374Z" level=error msg="encountered an error cleaning up failed sandbox \"64ab4db212842309d42a7b016e7df9502295569b6e7460c381f36b3bb550eb0b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:06:50.072292 containerd[1793]: time="2025-02-13T21:06:50.072282572Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-pfzwq,Uid:322cd9eb-1852-4c5c-aae9-cdfcee960ce3,Namespace:kube-system,Attempt:5,} failed, error" error="failed to setup network for sandbox \"64ab4db212842309d42a7b016e7df9502295569b6e7460c381f36b3bb550eb0b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:06:50.072337 containerd[1793]: time="2025-02-13T21:06:50.072326753Z" level=error msg="encountered an error cleaning up failed sandbox \"96f2f7d295cdec9dfcfe2c67db03dacf64d94c324111b66970f309c44bd369df\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:06:50.072357 containerd[1793]: time="2025-02-13T21:06:50.072349548Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-dfcf67d5f-gh88q,Uid:0b74a159-92ea-4d91-b1cc-e328f7cf493e,Namespace:calico-apiserver,Attempt:5,} failed, error" error="failed to setup network for sandbox \"96f2f7d295cdec9dfcfe2c67db03dacf64d94c324111b66970f309c44bd369df\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:06:50.072416 kubelet[3104]: E0213 21:06:50.072394 3104 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"64ab4db212842309d42a7b016e7df9502295569b6e7460c381f36b3bb550eb0b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:06:50.072446 kubelet[3104]: E0213 21:06:50.072416 3104 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = 
Unknown desc = failed to setup network for sandbox \"96f2f7d295cdec9dfcfe2c67db03dacf64d94c324111b66970f309c44bd369df\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:06:50.072446 kubelet[3104]: E0213 21:06:50.072434 3104 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"64ab4db212842309d42a7b016e7df9502295569b6e7460c381f36b3bb550eb0b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-pfzwq" Feb 13 21:06:50.072511 kubelet[3104]: E0213 21:06:50.072442 3104 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"96f2f7d295cdec9dfcfe2c67db03dacf64d94c324111b66970f309c44bd369df\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-dfcf67d5f-gh88q" Feb 13 21:06:50.072511 kubelet[3104]: E0213 21:06:50.072451 3104 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"64ab4db212842309d42a7b016e7df9502295569b6e7460c381f36b3bb550eb0b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-pfzwq" Feb 13 21:06:50.072511 kubelet[3104]: E0213 21:06:50.072458 3104 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"96f2f7d295cdec9dfcfe2c67db03dacf64d94c324111b66970f309c44bd369df\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-dfcf67d5f-gh88q" Feb 13 21:06:50.072618 kubelet[3104]: E0213 21:06:50.072476 3104 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-pfzwq_kube-system(322cd9eb-1852-4c5c-aae9-cdfcee960ce3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-pfzwq_kube-system(322cd9eb-1852-4c5c-aae9-cdfcee960ce3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"64ab4db212842309d42a7b016e7df9502295569b6e7460c381f36b3bb550eb0b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-pfzwq" podUID="322cd9eb-1852-4c5c-aae9-cdfcee960ce3" Feb 13 21:06:50.072618 kubelet[3104]: E0213 21:06:50.072486 3104 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-dfcf67d5f-gh88q_calico-apiserver(0b74a159-92ea-4d91-b1cc-e328f7cf493e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-dfcf67d5f-gh88q_calico-apiserver(0b74a159-92ea-4d91-b1cc-e328f7cf493e)\\\": rpc error: code = Unknown desc = failed to setup network 
for sandbox \\\"96f2f7d295cdec9dfcfe2c67db03dacf64d94c324111b66970f309c44bd369df\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-dfcf67d5f-gh88q" podUID="0b74a159-92ea-4d91-b1cc-e328f7cf493e" Feb 13 21:06:50.073894 containerd[1793]: time="2025-02-13T21:06:50.073827271Z" level=error msg="Failed to destroy network for sandbox \"025485b78358587da226e95884d370bfdfad5317ace9855aeb18163e64111565\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:06:50.073999 containerd[1793]: time="2025-02-13T21:06:50.073943968Z" level=error msg="Failed to destroy network for sandbox \"8f4314912ebb1a2d93a1d3b80fda6eb8e99e490e09de354df1be4020d7eb68fa\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:06:50.074071 containerd[1793]: time="2025-02-13T21:06:50.074060642Z" level=error msg="encountered an error cleaning up failed sandbox \"025485b78358587da226e95884d370bfdfad5317ace9855aeb18163e64111565\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:06:50.074098 containerd[1793]: time="2025-02-13T21:06:50.074087237Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-dfcf67d5f-gwq2t,Uid:37d2fc1a-4796-4ba1-89bd-a5b0ae45374e,Namespace:calico-apiserver,Attempt:5,} failed, error" error="failed to setup network for sandbox \"025485b78358587da226e95884d370bfdfad5317ace9855aeb18163e64111565\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:06:50.074209 containerd[1793]: time="2025-02-13T21:06:50.074192857Z" level=error msg="encountered an error cleaning up failed sandbox \"8f4314912ebb1a2d93a1d3b80fda6eb8e99e490e09de354df1be4020d7eb68fa\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:06:50.074236 containerd[1793]: time="2025-02-13T21:06:50.074225988Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-wr725,Uid:a02339f1-24d3-4002-b90e-3dd190621c61,Namespace:kube-system,Attempt:5,} failed, error" error="failed to setup network for sandbox \"8f4314912ebb1a2d93a1d3b80fda6eb8e99e490e09de354df1be4020d7eb68fa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:06:50.074256 kubelet[3104]: E0213 21:06:50.074200 3104 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"025485b78358587da226e95884d370bfdfad5317ace9855aeb18163e64111565\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" Feb 13 21:06:50.074256 kubelet[3104]: E0213 21:06:50.074229 3104 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"025485b78358587da226e95884d370bfdfad5317ace9855aeb18163e64111565\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-dfcf67d5f-gwq2t" Feb 13 21:06:50.074256 kubelet[3104]: E0213 21:06:50.074243 3104 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"025485b78358587da226e95884d370bfdfad5317ace9855aeb18163e64111565\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-dfcf67d5f-gwq2t" Feb 13 21:06:50.074313 kubelet[3104]: E0213 21:06:50.074263 3104 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-dfcf67d5f-gwq2t_calico-apiserver(37d2fc1a-4796-4ba1-89bd-a5b0ae45374e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-dfcf67d5f-gwq2t_calico-apiserver(37d2fc1a-4796-4ba1-89bd-a5b0ae45374e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"025485b78358587da226e95884d370bfdfad5317ace9855aeb18163e64111565\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-dfcf67d5f-gwq2t" podUID="37d2fc1a-4796-4ba1-89bd-a5b0ae45374e" Feb 13 21:06:50.074313 kubelet[3104]: E0213 21:06:50.074290 3104 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8f4314912ebb1a2d93a1d3b80fda6eb8e99e490e09de354df1be4020d7eb68fa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:06:50.074359 kubelet[3104]: E0213 21:06:50.074311 3104 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8f4314912ebb1a2d93a1d3b80fda6eb8e99e490e09de354df1be4020d7eb68fa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-wr725" Feb 13 21:06:50.074359 kubelet[3104]: E0213 21:06:50.074323 3104 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8f4314912ebb1a2d93a1d3b80fda6eb8e99e490e09de354df1be4020d7eb68fa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-wr725" Feb 13 21:06:50.074359 kubelet[3104]: E0213 21:06:50.074339 3104 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-wr725_kube-system(a02339f1-24d3-4002-b90e-3dd190621c61)\" with 
CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-wr725_kube-system(a02339f1-24d3-4002-b90e-3dd190621c61)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8f4314912ebb1a2d93a1d3b80fda6eb8e99e490e09de354df1be4020d7eb68fa\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-wr725" podUID="a02339f1-24d3-4002-b90e-3dd190621c61" Feb 13 21:06:50.585857 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c6698fade93608c8cee35101de0258390fd9402cac4152e2bc13f5618759b460-shm.mount: Deactivated successfully. Feb 13 21:06:51.030811 kubelet[3104]: I0213 21:06:51.030757 3104 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="64ab4db212842309d42a7b016e7df9502295569b6e7460c381f36b3bb550eb0b" Feb 13 21:06:51.031056 containerd[1793]: time="2025-02-13T21:06:51.031004222Z" level=info msg="StopPodSandbox for \"64ab4db212842309d42a7b016e7df9502295569b6e7460c381f36b3bb550eb0b\"" Feb 13 21:06:51.031187 containerd[1793]: time="2025-02-13T21:06:51.031124490Z" level=info msg="Ensure that sandbox 64ab4db212842309d42a7b016e7df9502295569b6e7460c381f36b3bb550eb0b in task-service has been cleanup successfully" Feb 13 21:06:51.031246 containerd[1793]: time="2025-02-13T21:06:51.031236453Z" level=info msg="TearDown network for sandbox \"64ab4db212842309d42a7b016e7df9502295569b6e7460c381f36b3bb550eb0b\" successfully" Feb 13 21:06:51.031270 containerd[1793]: time="2025-02-13T21:06:51.031246139Z" level=info msg="StopPodSandbox for \"64ab4db212842309d42a7b016e7df9502295569b6e7460c381f36b3bb550eb0b\" returns successfully" Feb 13 21:06:51.031407 containerd[1793]: time="2025-02-13T21:06:51.031393734Z" level=info msg="StopPodSandbox for \"cba59620c311eb0d1d072de67ea3d820401cc8b18c8f0a6943d666f228aa7a77\"" Feb 13 21:06:51.031473 containerd[1793]: time="2025-02-13T21:06:51.031446649Z" level=info msg="TearDown network for sandbox \"cba59620c311eb0d1d072de67ea3d820401cc8b18c8f0a6943d666f228aa7a77\" successfully" Feb 13 21:06:51.031497 containerd[1793]: time="2025-02-13T21:06:51.031473734Z" level=info msg="StopPodSandbox for \"cba59620c311eb0d1d072de67ea3d820401cc8b18c8f0a6943d666f228aa7a77\" returns successfully" Feb 13 21:06:51.031579 containerd[1793]: time="2025-02-13T21:06:51.031569997Z" level=info msg="StopPodSandbox for \"3b4c1aa1158410586e55dcad205753294e18ec4c0995d0e460c83ccd690b03b0\"" Feb 13 21:06:51.031615 containerd[1793]: time="2025-02-13T21:06:51.031608233Z" level=info msg="TearDown network for sandbox \"3b4c1aa1158410586e55dcad205753294e18ec4c0995d0e460c83ccd690b03b0\" successfully" Feb 13 21:06:51.031658 containerd[1793]: time="2025-02-13T21:06:51.031614509Z" level=info msg="StopPodSandbox for \"3b4c1aa1158410586e55dcad205753294e18ec4c0995d0e460c83ccd690b03b0\" returns successfully" Feb 13 21:06:51.031729 containerd[1793]: time="2025-02-13T21:06:51.031719051Z" level=info msg="StopPodSandbox for \"d709f55a62a0ee266e179ae3b4078db4e87cc555d02412f211cd24e553d91a3d\"" Feb 13 21:06:51.031772 containerd[1793]: time="2025-02-13T21:06:51.031753583Z" level=info msg="TearDown network for sandbox \"d709f55a62a0ee266e179ae3b4078db4e87cc555d02412f211cd24e553d91a3d\" successfully" Feb 13 21:06:51.031802 containerd[1793]: time="2025-02-13T21:06:51.031771907Z" level=info msg="StopPodSandbox for \"d709f55a62a0ee266e179ae3b4078db4e87cc555d02412f211cd24e553d91a3d\" 
returns successfully" Feb 13 21:06:51.031854 kubelet[3104]: I0213 21:06:51.031844 3104 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c6698fade93608c8cee35101de0258390fd9402cac4152e2bc13f5618759b460" Feb 13 21:06:51.031908 containerd[1793]: time="2025-02-13T21:06:51.031900054Z" level=info msg="StopPodSandbox for \"e2d2a63cc355082e3d667838ab36d2b64fe4355a317ff4fee15a852a46dfa7fd\"" Feb 13 21:06:51.031943 containerd[1793]: time="2025-02-13T21:06:51.031934350Z" level=info msg="TearDown network for sandbox \"e2d2a63cc355082e3d667838ab36d2b64fe4355a317ff4fee15a852a46dfa7fd\" successfully" Feb 13 21:06:51.031943 containerd[1793]: time="2025-02-13T21:06:51.031940125Z" level=info msg="StopPodSandbox for \"e2d2a63cc355082e3d667838ab36d2b64fe4355a317ff4fee15a852a46dfa7fd\" returns successfully" Feb 13 21:06:51.032059 containerd[1793]: time="2025-02-13T21:06:51.032048962Z" level=info msg="StopPodSandbox for \"d8a3a4b346869a0c10429ac8070838b577439133613a8cd388036c740d6d9069\"" Feb 13 21:06:51.032100 containerd[1793]: time="2025-02-13T21:06:51.032090774Z" level=info msg="TearDown network for sandbox \"d8a3a4b346869a0c10429ac8070838b577439133613a8cd388036c740d6d9069\" successfully" Feb 13 21:06:51.032135 containerd[1793]: time="2025-02-13T21:06:51.032099491Z" level=info msg="StopPodSandbox for \"d8a3a4b346869a0c10429ac8070838b577439133613a8cd388036c740d6d9069\" returns successfully" Feb 13 21:06:51.032135 containerd[1793]: time="2025-02-13T21:06:51.032095253Z" level=info msg="StopPodSandbox for \"c6698fade93608c8cee35101de0258390fd9402cac4152e2bc13f5618759b460\"" Feb 13 21:06:51.032243 containerd[1793]: time="2025-02-13T21:06:51.032230184Z" level=info msg="Ensure that sandbox c6698fade93608c8cee35101de0258390fd9402cac4152e2bc13f5618759b460 in task-service has been cleanup successfully" Feb 13 21:06:51.032336 containerd[1793]: time="2025-02-13T21:06:51.032326529Z" level=info msg="TearDown network for sandbox \"c6698fade93608c8cee35101de0258390fd9402cac4152e2bc13f5618759b460\" successfully" Feb 13 21:06:51.032366 containerd[1793]: time="2025-02-13T21:06:51.032337371Z" level=info msg="StopPodSandbox for \"c6698fade93608c8cee35101de0258390fd9402cac4152e2bc13f5618759b460\" returns successfully" Feb 13 21:06:51.032366 containerd[1793]: time="2025-02-13T21:06:51.032337474Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-pfzwq,Uid:322cd9eb-1852-4c5c-aae9-cdfcee960ce3,Namespace:kube-system,Attempt:6,}" Feb 13 21:06:51.032466 containerd[1793]: time="2025-02-13T21:06:51.032452076Z" level=info msg="StopPodSandbox for \"114fc3f722a39799d0f248d314bf5420657b9477dbaaf3f6fd71c3483ba645d7\"" Feb 13 21:06:51.032526 containerd[1793]: time="2025-02-13T21:06:51.032499719Z" level=info msg="TearDown network for sandbox \"114fc3f722a39799d0f248d314bf5420657b9477dbaaf3f6fd71c3483ba645d7\" successfully" Feb 13 21:06:51.032562 containerd[1793]: time="2025-02-13T21:06:51.032527318Z" level=info msg="StopPodSandbox for \"114fc3f722a39799d0f248d314bf5420657b9477dbaaf3f6fd71c3483ba645d7\" returns successfully" Feb 13 21:06:51.032647 containerd[1793]: time="2025-02-13T21:06:51.032634709Z" level=info msg="StopPodSandbox for \"8c7d15cdb7c0b8ba3564fbb9684d7b6608656288a9e470c2a4ccd3901de736b4\"" Feb 13 21:06:51.032697 containerd[1793]: time="2025-02-13T21:06:51.032688613Z" level=info msg="TearDown network for sandbox \"8c7d15cdb7c0b8ba3564fbb9684d7b6608656288a9e470c2a4ccd3901de736b4\" successfully" Feb 13 21:06:51.032736 containerd[1793]: 
time="2025-02-13T21:06:51.032697308Z" level=info msg="StopPodSandbox for \"8c7d15cdb7c0b8ba3564fbb9684d7b6608656288a9e470c2a4ccd3901de736b4\" returns successfully" Feb 13 21:06:51.032832 containerd[1793]: time="2025-02-13T21:06:51.032819495Z" level=info msg="StopPodSandbox for \"7d0ffa7070cf221d6b53fde8753c80adaebce4b303256fec539795ff6b8d0673\"" Feb 13 21:06:51.032870 containerd[1793]: time="2025-02-13T21:06:51.032860915Z" level=info msg="TearDown network for sandbox \"7d0ffa7070cf221d6b53fde8753c80adaebce4b303256fec539795ff6b8d0673\" successfully" Feb 13 21:06:51.032870 containerd[1793]: time="2025-02-13T21:06:51.032868015Z" level=info msg="StopPodSandbox for \"7d0ffa7070cf221d6b53fde8753c80adaebce4b303256fec539795ff6b8d0673\" returns successfully" Feb 13 21:06:51.032925 kubelet[3104]: I0213 21:06:51.032875 3104 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="789b26d424f00808a3ab5fad99f1f47b72beac83af5effe57a868b514edb410f" Feb 13 21:06:51.033037 containerd[1793]: time="2025-02-13T21:06:51.033022499Z" level=info msg="StopPodSandbox for \"7d5ec4c5cb492ab7ba5ec64c093f408197ef657dfbb781c600cfe63e505b2831\"" Feb 13 21:06:51.033087 containerd[1793]: time="2025-02-13T21:06:51.033066683Z" level=info msg="TearDown network for sandbox \"7d5ec4c5cb492ab7ba5ec64c093f408197ef657dfbb781c600cfe63e505b2831\" successfully" Feb 13 21:06:51.033087 containerd[1793]: time="2025-02-13T21:06:51.033073262Z" level=info msg="StopPodSandbox for \"7d5ec4c5cb492ab7ba5ec64c093f408197ef657dfbb781c600cfe63e505b2831\" returns successfully" Feb 13 21:06:51.033149 containerd[1793]: time="2025-02-13T21:06:51.033102779Z" level=info msg="StopPodSandbox for \"789b26d424f00808a3ab5fad99f1f47b72beac83af5effe57a868b514edb410f\"" Feb 13 21:06:51.033180 containerd[1793]: time="2025-02-13T21:06:51.033167431Z" level=info msg="StopPodSandbox for \"607ba46d10ad96974b57e545cd12cdc1dd7b6138131f3ab6e1bc63407d40518f\"" Feb 13 21:06:51.033210 containerd[1793]: time="2025-02-13T21:06:51.033198000Z" level=info msg="TearDown network for sandbox \"607ba46d10ad96974b57e545cd12cdc1dd7b6138131f3ab6e1bc63407d40518f\" successfully" Feb 13 21:06:51.033210 containerd[1793]: time="2025-02-13T21:06:51.033200528Z" level=info msg="Ensure that sandbox 789b26d424f00808a3ab5fad99f1f47b72beac83af5effe57a868b514edb410f in task-service has been cleanup successfully" Feb 13 21:06:51.033207 systemd[1]: run-netns-cni\x2defd85f81\x2d8be4\x2dc461\x2df401\x2dfbc3442c69c4.mount: Deactivated successfully. 
Feb 13 21:06:51.033445 containerd[1793]: time="2025-02-13T21:06:51.033203265Z" level=info msg="StopPodSandbox for \"607ba46d10ad96974b57e545cd12cdc1dd7b6138131f3ab6e1bc63407d40518f\" returns successfully" Feb 13 21:06:51.033445 containerd[1793]: time="2025-02-13T21:06:51.033303500Z" level=info msg="TearDown network for sandbox \"789b26d424f00808a3ab5fad99f1f47b72beac83af5effe57a868b514edb410f\" successfully" Feb 13 21:06:51.033445 containerd[1793]: time="2025-02-13T21:06:51.033314633Z" level=info msg="StopPodSandbox for \"789b26d424f00808a3ab5fad99f1f47b72beac83af5effe57a868b514edb410f\" returns successfully" Feb 13 21:06:51.033529 containerd[1793]: time="2025-02-13T21:06:51.033449785Z" level=info msg="StopPodSandbox for \"f950a49db2646939f834ce6a431489707af8828f553b5b14fda9dd5bca608104\"" Feb 13 21:06:51.033529 containerd[1793]: time="2025-02-13T21:06:51.033467783Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dv5v9,Uid:22f50a6b-4846-46c1-8c41-99e176056718,Namespace:calico-system,Attempt:6,}" Feb 13 21:06:51.033586 containerd[1793]: time="2025-02-13T21:06:51.033519412Z" level=info msg="TearDown network for sandbox \"f950a49db2646939f834ce6a431489707af8828f553b5b14fda9dd5bca608104\" successfully" Feb 13 21:06:51.033586 containerd[1793]: time="2025-02-13T21:06:51.033548066Z" level=info msg="StopPodSandbox for \"f950a49db2646939f834ce6a431489707af8828f553b5b14fda9dd5bca608104\" returns successfully" Feb 13 21:06:51.033661 containerd[1793]: time="2025-02-13T21:06:51.033646707Z" level=info msg="StopPodSandbox for \"2fd9e12b34be1cf8d642dfe4f2e60efafadedcf4354053583974145aabb5db8f\"" Feb 13 21:06:51.033707 containerd[1793]: time="2025-02-13T21:06:51.033698302Z" level=info msg="TearDown network for sandbox \"2fd9e12b34be1cf8d642dfe4f2e60efafadedcf4354053583974145aabb5db8f\" successfully" Feb 13 21:06:51.033707 containerd[1793]: time="2025-02-13T21:06:51.033706224Z" level=info msg="StopPodSandbox for \"2fd9e12b34be1cf8d642dfe4f2e60efafadedcf4354053583974145aabb5db8f\" returns successfully" Feb 13 21:06:51.033807 containerd[1793]: time="2025-02-13T21:06:51.033795835Z" level=info msg="StopPodSandbox for \"f39bdc1c54e67583094579cb0143089f0fee0e796689505e74340aa0877e958a\"" Feb 13 21:06:51.033852 containerd[1793]: time="2025-02-13T21:06:51.033843790Z" level=info msg="TearDown network for sandbox \"f39bdc1c54e67583094579cb0143089f0fee0e796689505e74340aa0877e958a\" successfully" Feb 13 21:06:51.033883 containerd[1793]: time="2025-02-13T21:06:51.033851542Z" level=info msg="StopPodSandbox for \"f39bdc1c54e67583094579cb0143089f0fee0e796689505e74340aa0877e958a\" returns successfully" Feb 13 21:06:51.033934 kubelet[3104]: I0213 21:06:51.033925 3104 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="96f2f7d295cdec9dfcfe2c67db03dacf64d94c324111b66970f309c44bd369df" Feb 13 21:06:51.033991 containerd[1793]: time="2025-02-13T21:06:51.033978937Z" level=info msg="StopPodSandbox for \"466ce85604a81ffcf0adb514f74df036223f30285756dd3e79f40e1ea6c59de1\"" Feb 13 21:06:51.034050 containerd[1793]: time="2025-02-13T21:06:51.034026823Z" level=info msg="TearDown network for sandbox \"466ce85604a81ffcf0adb514f74df036223f30285756dd3e79f40e1ea6c59de1\" successfully" Feb 13 21:06:51.034074 containerd[1793]: time="2025-02-13T21:06:51.034050830Z" level=info msg="StopPodSandbox for \"466ce85604a81ffcf0adb514f74df036223f30285756dd3e79f40e1ea6c59de1\" returns successfully" Feb 13 21:06:51.034135 containerd[1793]: time="2025-02-13T21:06:51.034123328Z" level=info 
msg="StopPodSandbox for \"96f2f7d295cdec9dfcfe2c67db03dacf64d94c324111b66970f309c44bd369df\"" Feb 13 21:06:51.034188 containerd[1793]: time="2025-02-13T21:06:51.034176477Z" level=info msg="StopPodSandbox for \"abca32da56d9f98e0f2a3235028b1bc2f3f48389f22bae6604fcddef0f674739\"" Feb 13 21:06:51.034236 containerd[1793]: time="2025-02-13T21:06:51.034227658Z" level=info msg="TearDown network for sandbox \"abca32da56d9f98e0f2a3235028b1bc2f3f48389f22bae6604fcddef0f674739\" successfully" Feb 13 21:06:51.034257 containerd[1793]: time="2025-02-13T21:06:51.034237306Z" level=info msg="StopPodSandbox for \"abca32da56d9f98e0f2a3235028b1bc2f3f48389f22bae6604fcddef0f674739\" returns successfully" Feb 13 21:06:51.034257 containerd[1793]: time="2025-02-13T21:06:51.034247913Z" level=info msg="Ensure that sandbox 96f2f7d295cdec9dfcfe2c67db03dacf64d94c324111b66970f309c44bd369df in task-service has been cleanup successfully" Feb 13 21:06:51.034358 containerd[1793]: time="2025-02-13T21:06:51.034347432Z" level=info msg="TearDown network for sandbox \"96f2f7d295cdec9dfcfe2c67db03dacf64d94c324111b66970f309c44bd369df\" successfully" Feb 13 21:06:51.034384 containerd[1793]: time="2025-02-13T21:06:51.034358850Z" level=info msg="StopPodSandbox for \"96f2f7d295cdec9dfcfe2c67db03dacf64d94c324111b66970f309c44bd369df\" returns successfully" Feb 13 21:06:51.034445 containerd[1793]: time="2025-02-13T21:06:51.034430516Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-76848bdf96-56fb4,Uid:4552908e-a949-4d18-be80-ec54e1c8e06d,Namespace:calico-system,Attempt:6,}" Feb 13 21:06:51.034509 containerd[1793]: time="2025-02-13T21:06:51.034497170Z" level=info msg="StopPodSandbox for \"a6223b2daac34ac9233c865bdf29f5ecf313eee0267c57a63761c44e27592fe3\"" Feb 13 21:06:51.034561 containerd[1793]: time="2025-02-13T21:06:51.034551810Z" level=info msg="TearDown network for sandbox \"a6223b2daac34ac9233c865bdf29f5ecf313eee0267c57a63761c44e27592fe3\" successfully" Feb 13 21:06:51.034586 containerd[1793]: time="2025-02-13T21:06:51.034562208Z" level=info msg="StopPodSandbox for \"a6223b2daac34ac9233c865bdf29f5ecf313eee0267c57a63761c44e27592fe3\" returns successfully" Feb 13 21:06:51.034685 containerd[1793]: time="2025-02-13T21:06:51.034675589Z" level=info msg="StopPodSandbox for \"8796e4215c03d4346b528f4c0fa80de9271293edb2beaab0ba58791569cb1af7\"" Feb 13 21:06:51.034723 containerd[1793]: time="2025-02-13T21:06:51.034715380Z" level=info msg="TearDown network for sandbox \"8796e4215c03d4346b528f4c0fa80de9271293edb2beaab0ba58791569cb1af7\" successfully" Feb 13 21:06:51.034778 containerd[1793]: time="2025-02-13T21:06:51.034723926Z" level=info msg="StopPodSandbox for \"8796e4215c03d4346b528f4c0fa80de9271293edb2beaab0ba58791569cb1af7\" returns successfully" Feb 13 21:06:51.034844 containerd[1793]: time="2025-02-13T21:06:51.034832239Z" level=info msg="StopPodSandbox for \"bc71ef1d5f9c21449e07e38c85ffe1beac4f9a807ff5e0b31763576b20c05628\"" Feb 13 21:06:51.034892 containerd[1793]: time="2025-02-13T21:06:51.034883114Z" level=info msg="TearDown network for sandbox \"bc71ef1d5f9c21449e07e38c85ffe1beac4f9a807ff5e0b31763576b20c05628\" successfully" Feb 13 21:06:51.034927 containerd[1793]: time="2025-02-13T21:06:51.034891960Z" level=info msg="StopPodSandbox for \"bc71ef1d5f9c21449e07e38c85ffe1beac4f9a807ff5e0b31763576b20c05628\" returns successfully" Feb 13 21:06:51.035001 containerd[1793]: time="2025-02-13T21:06:51.034988370Z" level=info msg="StopPodSandbox for 
\"d743908868c106c2ba60beb013f0f1908f95ec56678e1aea1e2666786a2cc1a7\"" Feb 13 21:06:51.035027 kubelet[3104]: I0213 21:06:51.035012 3104 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="025485b78358587da226e95884d370bfdfad5317ace9855aeb18163e64111565" Feb 13 21:06:51.035065 containerd[1793]: time="2025-02-13T21:06:51.035031489Z" level=info msg="TearDown network for sandbox \"d743908868c106c2ba60beb013f0f1908f95ec56678e1aea1e2666786a2cc1a7\" successfully" Feb 13 21:06:51.035065 containerd[1793]: time="2025-02-13T21:06:51.035051638Z" level=info msg="StopPodSandbox for \"d743908868c106c2ba60beb013f0f1908f95ec56678e1aea1e2666786a2cc1a7\" returns successfully" Feb 13 21:06:51.035181 containerd[1793]: time="2025-02-13T21:06:51.035171024Z" level=info msg="StopPodSandbox for \"806d7e387453f8d8aa5dc4336096cd465cb7990f839cd53aea28bc6a97bca77d\"" Feb 13 21:06:51.035227 containerd[1793]: time="2025-02-13T21:06:51.035218727Z" level=info msg="TearDown network for sandbox \"806d7e387453f8d8aa5dc4336096cd465cb7990f839cd53aea28bc6a97bca77d\" successfully" Feb 13 21:06:51.035259 containerd[1793]: time="2025-02-13T21:06:51.035227599Z" level=info msg="StopPodSandbox for \"806d7e387453f8d8aa5dc4336096cd465cb7990f839cd53aea28bc6a97bca77d\" returns successfully" Feb 13 21:06:51.035289 containerd[1793]: time="2025-02-13T21:06:51.035262454Z" level=info msg="StopPodSandbox for \"025485b78358587da226e95884d370bfdfad5317ace9855aeb18163e64111565\"" Feb 13 21:06:51.035377 containerd[1793]: time="2025-02-13T21:06:51.035367016Z" level=info msg="Ensure that sandbox 025485b78358587da226e95884d370bfdfad5317ace9855aeb18163e64111565 in task-service has been cleanup successfully" Feb 13 21:06:51.035429 containerd[1793]: time="2025-02-13T21:06:51.035417599Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-dfcf67d5f-gh88q,Uid:0b74a159-92ea-4d91-b1cc-e328f7cf493e,Namespace:calico-apiserver,Attempt:6,}" Feb 13 21:06:51.035466 containerd[1793]: time="2025-02-13T21:06:51.035458086Z" level=info msg="TearDown network for sandbox \"025485b78358587da226e95884d370bfdfad5317ace9855aeb18163e64111565\" successfully" Feb 13 21:06:51.035498 containerd[1793]: time="2025-02-13T21:06:51.035468484Z" level=info msg="StopPodSandbox for \"025485b78358587da226e95884d370bfdfad5317ace9855aeb18163e64111565\" returns successfully" Feb 13 21:06:51.035609 containerd[1793]: time="2025-02-13T21:06:51.035598994Z" level=info msg="StopPodSandbox for \"d089ce282490e797583b2adbc06fdf6162be40bd25ae6dda0763da66c066d46a\"" Feb 13 21:06:51.035653 containerd[1793]: time="2025-02-13T21:06:51.035645372Z" level=info msg="TearDown network for sandbox \"d089ce282490e797583b2adbc06fdf6162be40bd25ae6dda0763da66c066d46a\" successfully" Feb 13 21:06:51.035686 containerd[1793]: time="2025-02-13T21:06:51.035652722Z" level=info msg="StopPodSandbox for \"d089ce282490e797583b2adbc06fdf6162be40bd25ae6dda0763da66c066d46a\" returns successfully" Feb 13 21:06:51.035825 containerd[1793]: time="2025-02-13T21:06:51.035813762Z" level=info msg="StopPodSandbox for \"df9c2dd26c1795b054bbe7a16242fde8712a142b2ac8cc97d56afbb8e3e869cd\"" Feb 13 21:06:51.035875 containerd[1793]: time="2025-02-13T21:06:51.035865922Z" level=info msg="TearDown network for sandbox \"df9c2dd26c1795b054bbe7a16242fde8712a142b2ac8cc97d56afbb8e3e869cd\" successfully" Feb 13 21:06:51.035894 containerd[1793]: time="2025-02-13T21:06:51.035877097Z" level=info msg="StopPodSandbox for \"df9c2dd26c1795b054bbe7a16242fde8712a142b2ac8cc97d56afbb8e3e869cd\" returns 
successfully" Feb 13 21:06:51.035987 containerd[1793]: time="2025-02-13T21:06:51.035978349Z" level=info msg="StopPodSandbox for \"28598cb5bb4f19ef2be3d2eeb2993826fb614f87c197bbbec9e1ef75161630b9\"" Feb 13 21:06:51.036023 containerd[1793]: time="2025-02-13T21:06:51.036013714Z" level=info msg="TearDown network for sandbox \"28598cb5bb4f19ef2be3d2eeb2993826fb614f87c197bbbec9e1ef75161630b9\" successfully" Feb 13 21:06:51.036023 containerd[1793]: time="2025-02-13T21:06:51.036019382Z" level=info msg="StopPodSandbox for \"28598cb5bb4f19ef2be3d2eeb2993826fb614f87c197bbbec9e1ef75161630b9\" returns successfully" Feb 13 21:06:51.036121 containerd[1793]: time="2025-02-13T21:06:51.036111133Z" level=info msg="StopPodSandbox for \"492d54f71e9ddb34de33c29601b5531497a6eb52fef88332daf738ef9a4555fd\"" Feb 13 21:06:51.036150 kubelet[3104]: I0213 21:06:51.036120 3104 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8f4314912ebb1a2d93a1d3b80fda6eb8e99e490e09de354df1be4020d7eb68fa" Feb 13 21:06:51.036134 systemd[1]: run-netns-cni\x2daebe4c02\x2d0496\x2d6885\x2d039c\x2d3bf41508d382.mount: Deactivated successfully. Feb 13 21:06:51.036232 containerd[1793]: time="2025-02-13T21:06:51.036154037Z" level=info msg="TearDown network for sandbox \"492d54f71e9ddb34de33c29601b5531497a6eb52fef88332daf738ef9a4555fd\" successfully" Feb 13 21:06:51.036232 containerd[1793]: time="2025-02-13T21:06:51.036160795Z" level=info msg="StopPodSandbox for \"492d54f71e9ddb34de33c29601b5531497a6eb52fef88332daf738ef9a4555fd\" returns successfully" Feb 13 21:06:51.036228 systemd[1]: run-netns-cni\x2df0b1c8d4\x2da3cb\x2d7c21\x2d92c9\x2d0af37e78b913.mount: Deactivated successfully. Feb 13 21:06:51.036312 containerd[1793]: time="2025-02-13T21:06:51.036275048Z" level=info msg="StopPodSandbox for \"e9526415ddf2b5def2736c83dbde440214ca6bbca341b6a7e7eede781d227057\"" Feb 13 21:06:51.036332 containerd[1793]: time="2025-02-13T21:06:51.036307361Z" level=info msg="StopPodSandbox for \"8f4314912ebb1a2d93a1d3b80fda6eb8e99e490e09de354df1be4020d7eb68fa\"" Feb 13 21:06:51.036350 containerd[1793]: time="2025-02-13T21:06:51.036327862Z" level=info msg="TearDown network for sandbox \"e9526415ddf2b5def2736c83dbde440214ca6bbca341b6a7e7eede781d227057\" successfully" Feb 13 21:06:51.036350 containerd[1793]: time="2025-02-13T21:06:51.036337557Z" level=info msg="StopPodSandbox for \"e9526415ddf2b5def2736c83dbde440214ca6bbca341b6a7e7eede781d227057\" returns successfully" Feb 13 21:06:51.036415 containerd[1793]: time="2025-02-13T21:06:51.036406093Z" level=info msg="Ensure that sandbox 8f4314912ebb1a2d93a1d3b80fda6eb8e99e490e09de354df1be4020d7eb68fa in task-service has been cleanup successfully" Feb 13 21:06:51.036498 containerd[1793]: time="2025-02-13T21:06:51.036486121Z" level=info msg="TearDown network for sandbox \"8f4314912ebb1a2d93a1d3b80fda6eb8e99e490e09de354df1be4020d7eb68fa\" successfully" Feb 13 21:06:51.036498 containerd[1793]: time="2025-02-13T21:06:51.036494924Z" level=info msg="StopPodSandbox for \"8f4314912ebb1a2d93a1d3b80fda6eb8e99e490e09de354df1be4020d7eb68fa\" returns successfully" Feb 13 21:06:51.036555 containerd[1793]: time="2025-02-13T21:06:51.036537608Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-dfcf67d5f-gwq2t,Uid:37d2fc1a-4796-4ba1-89bd-a5b0ae45374e,Namespace:calico-apiserver,Attempt:6,}" Feb 13 21:06:51.036679 containerd[1793]: time="2025-02-13T21:06:51.036669572Z" level=info msg="StopPodSandbox for \"9903ccace9462852d1b468d91593d78c0bfdcd616759437c7c9178ce919b932e\"" Feb 13 
21:06:51.036718 containerd[1793]: time="2025-02-13T21:06:51.036707178Z" level=info msg="TearDown network for sandbox \"9903ccace9462852d1b468d91593d78c0bfdcd616759437c7c9178ce919b932e\" successfully" Feb 13 21:06:51.036718 containerd[1793]: time="2025-02-13T21:06:51.036712907Z" level=info msg="StopPodSandbox for \"9903ccace9462852d1b468d91593d78c0bfdcd616759437c7c9178ce919b932e\" returns successfully" Feb 13 21:06:51.036834 containerd[1793]: time="2025-02-13T21:06:51.036823802Z" level=info msg="StopPodSandbox for \"016f2909cab214152679a0158a239350ed8254b0aa06124f243e8bc5668cf1a1\"" Feb 13 21:06:51.036872 containerd[1793]: time="2025-02-13T21:06:51.036864413Z" level=info msg="TearDown network for sandbox \"016f2909cab214152679a0158a239350ed8254b0aa06124f243e8bc5668cf1a1\" successfully" Feb 13 21:06:51.036894 containerd[1793]: time="2025-02-13T21:06:51.036871461Z" level=info msg="StopPodSandbox for \"016f2909cab214152679a0158a239350ed8254b0aa06124f243e8bc5668cf1a1\" returns successfully" Feb 13 21:06:51.036988 containerd[1793]: time="2025-02-13T21:06:51.036976532Z" level=info msg="StopPodSandbox for \"0b3206681777142bb4f39eddf842578db1abd2a1ba3d164b0fef9121f983646c\"" Feb 13 21:06:51.037045 containerd[1793]: time="2025-02-13T21:06:51.037034356Z" level=info msg="TearDown network for sandbox \"0b3206681777142bb4f39eddf842578db1abd2a1ba3d164b0fef9121f983646c\" successfully" Feb 13 21:06:51.037077 containerd[1793]: time="2025-02-13T21:06:51.037046098Z" level=info msg="StopPodSandbox for \"0b3206681777142bb4f39eddf842578db1abd2a1ba3d164b0fef9121f983646c\" returns successfully" Feb 13 21:06:51.037179 containerd[1793]: time="2025-02-13T21:06:51.037169163Z" level=info msg="StopPodSandbox for \"7501b1f0bebccf8c96d599d12b518a8ac89e39145380dcd10318dd4669755ac7\"" Feb 13 21:06:51.037245 containerd[1793]: time="2025-02-13T21:06:51.037212469Z" level=info msg="TearDown network for sandbox \"7501b1f0bebccf8c96d599d12b518a8ac89e39145380dcd10318dd4669755ac7\" successfully" Feb 13 21:06:51.037245 containerd[1793]: time="2025-02-13T21:06:51.037222082Z" level=info msg="StopPodSandbox for \"7501b1f0bebccf8c96d599d12b518a8ac89e39145380dcd10318dd4669755ac7\" returns successfully" Feb 13 21:06:51.037328 containerd[1793]: time="2025-02-13T21:06:51.037319191Z" level=info msg="StopPodSandbox for \"1a9edd12dcaee4b4406dd52975b192e9f91b2c28c5929cf19d4f43b12760a3ad\"" Feb 13 21:06:51.037361 containerd[1793]: time="2025-02-13T21:06:51.037354838Z" level=info msg="TearDown network for sandbox \"1a9edd12dcaee4b4406dd52975b192e9f91b2c28c5929cf19d4f43b12760a3ad\" successfully" Feb 13 21:06:51.037380 containerd[1793]: time="2025-02-13T21:06:51.037361103Z" level=info msg="StopPodSandbox for \"1a9edd12dcaee4b4406dd52975b192e9f91b2c28c5929cf19d4f43b12760a3ad\" returns successfully" Feb 13 21:06:51.037532 containerd[1793]: time="2025-02-13T21:06:51.037523021Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-wr725,Uid:a02339f1-24d3-4002-b90e-3dd190621c61,Namespace:kube-system,Attempt:6,}" Feb 13 21:06:51.038519 systemd[1]: run-netns-cni\x2dbb1d7599\x2d5dfe\x2d04de\x2da211\x2ddeb78bbed073.mount: Deactivated successfully. Feb 13 21:06:51.038564 systemd[1]: run-netns-cni\x2d6228c4da\x2d2560\x2de27d\x2dbfa2\x2d354da3162dfb.mount: Deactivated successfully. Feb 13 21:06:51.038598 systemd[1]: run-netns-cni\x2dcec4c631\x2d588a\x2df993\x2d0ba9\x2d9ead84b78bf5.mount: Deactivated successfully. 
Feb 13 21:06:51.080311 containerd[1793]: time="2025-02-13T21:06:51.080278703Z" level=error msg="Failed to destroy network for sandbox \"5dc4df5f3a9b31a60ddb2e4370059b63b2deba85f69f18cbfcd76f97093c93e5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:06:51.080672 containerd[1793]: time="2025-02-13T21:06:51.080511154Z" level=error msg="encountered an error cleaning up failed sandbox \"5dc4df5f3a9b31a60ddb2e4370059b63b2deba85f69f18cbfcd76f97093c93e5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:06:51.080672 containerd[1793]: time="2025-02-13T21:06:51.080564436Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-pfzwq,Uid:322cd9eb-1852-4c5c-aae9-cdfcee960ce3,Namespace:kube-system,Attempt:6,} failed, error" error="failed to setup network for sandbox \"5dc4df5f3a9b31a60ddb2e4370059b63b2deba85f69f18cbfcd76f97093c93e5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:06:51.080881 kubelet[3104]: E0213 21:06:51.080855 3104 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5dc4df5f3a9b31a60ddb2e4370059b63b2deba85f69f18cbfcd76f97093c93e5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:06:51.080923 kubelet[3104]: E0213 21:06:51.080907 3104 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5dc4df5f3a9b31a60ddb2e4370059b63b2deba85f69f18cbfcd76f97093c93e5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-pfzwq" Feb 13 21:06:51.080960 kubelet[3104]: E0213 21:06:51.080933 3104 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5dc4df5f3a9b31a60ddb2e4370059b63b2deba85f69f18cbfcd76f97093c93e5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-pfzwq" Feb 13 21:06:51.080996 kubelet[3104]: E0213 21:06:51.080976 3104 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-pfzwq_kube-system(322cd9eb-1852-4c5c-aae9-cdfcee960ce3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-pfzwq_kube-system(322cd9eb-1852-4c5c-aae9-cdfcee960ce3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5dc4df5f3a9b31a60ddb2e4370059b63b2deba85f69f18cbfcd76f97093c93e5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-pfzwq" 
podUID="322cd9eb-1852-4c5c-aae9-cdfcee960ce3" Feb 13 21:06:51.083078 containerd[1793]: time="2025-02-13T21:06:51.083056429Z" level=error msg="Failed to destroy network for sandbox \"5e8de54b5e2c3fc0edd5132d3a82a1050d08607a6ce7164ce56ae0d439fc2419\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:06:51.083239 containerd[1793]: time="2025-02-13T21:06:51.083222136Z" level=error msg="Failed to destroy network for sandbox \"8368438d5b8de8880c1d927e22b3626bc4c5212d9165d0c1c516db81e89432a0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:06:51.083297 containerd[1793]: time="2025-02-13T21:06:51.083281015Z" level=error msg="encountered an error cleaning up failed sandbox \"5e8de54b5e2c3fc0edd5132d3a82a1050d08607a6ce7164ce56ae0d439fc2419\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:06:51.083343 containerd[1793]: time="2025-02-13T21:06:51.083327154Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dv5v9,Uid:22f50a6b-4846-46c1-8c41-99e176056718,Namespace:calico-system,Attempt:6,} failed, error" error="failed to setup network for sandbox \"5e8de54b5e2c3fc0edd5132d3a82a1050d08607a6ce7164ce56ae0d439fc2419\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:06:51.083413 containerd[1793]: time="2025-02-13T21:06:51.083392599Z" level=error msg="encountered an error cleaning up failed sandbox \"8368438d5b8de8880c1d927e22b3626bc4c5212d9165d0c1c516db81e89432a0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:06:51.083463 containerd[1793]: time="2025-02-13T21:06:51.083430822Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-76848bdf96-56fb4,Uid:4552908e-a949-4d18-be80-ec54e1c8e06d,Namespace:calico-system,Attempt:6,} failed, error" error="failed to setup network for sandbox \"8368438d5b8de8880c1d927e22b3626bc4c5212d9165d0c1c516db81e89432a0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:06:51.083525 kubelet[3104]: E0213 21:06:51.083497 3104 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5e8de54b5e2c3fc0edd5132d3a82a1050d08607a6ce7164ce56ae0d439fc2419\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:06:51.083563 kubelet[3104]: E0213 21:06:51.083521 3104 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8368438d5b8de8880c1d927e22b3626bc4c5212d9165d0c1c516db81e89432a0\": plugin type=\"calico\" failed 
(add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:06:51.083563 kubelet[3104]: E0213 21:06:51.083549 3104 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5e8de54b5e2c3fc0edd5132d3a82a1050d08607a6ce7164ce56ae0d439fc2419\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-dv5v9" Feb 13 21:06:51.083563 kubelet[3104]: E0213 21:06:51.083549 3104 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8368438d5b8de8880c1d927e22b3626bc4c5212d9165d0c1c516db81e89432a0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-76848bdf96-56fb4" Feb 13 21:06:51.083661 kubelet[3104]: E0213 21:06:51.083567 3104 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8368438d5b8de8880c1d927e22b3626bc4c5212d9165d0c1c516db81e89432a0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-76848bdf96-56fb4" Feb 13 21:06:51.083661 kubelet[3104]: E0213 21:06:51.083568 3104 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5e8de54b5e2c3fc0edd5132d3a82a1050d08607a6ce7164ce56ae0d439fc2419\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-dv5v9" Feb 13 21:06:51.083661 kubelet[3104]: E0213 21:06:51.083601 3104 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-dv5v9_calico-system(22f50a6b-4846-46c1-8c41-99e176056718)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-dv5v9_calico-system(22f50a6b-4846-46c1-8c41-99e176056718)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5e8de54b5e2c3fc0edd5132d3a82a1050d08607a6ce7164ce56ae0d439fc2419\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-dv5v9" podUID="22f50a6b-4846-46c1-8c41-99e176056718" Feb 13 21:06:51.083752 kubelet[3104]: E0213 21:06:51.083601 3104 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-76848bdf96-56fb4_calico-system(4552908e-a949-4d18-be80-ec54e1c8e06d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-76848bdf96-56fb4_calico-system(4552908e-a949-4d18-be80-ec54e1c8e06d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8368438d5b8de8880c1d927e22b3626bc4c5212d9165d0c1c516db81e89432a0\\\": plugin type=\\\"calico\\\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-76848bdf96-56fb4" podUID="4552908e-a949-4d18-be80-ec54e1c8e06d" Feb 13 21:06:51.084935 containerd[1793]: time="2025-02-13T21:06:51.084901705Z" level=error msg="Failed to destroy network for sandbox \"b0b94b4e8e69008daf9eff69ef190fed604015ff90d0c3eabafe180e87c8c3de\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:06:51.085143 containerd[1793]: time="2025-02-13T21:06:51.085125858Z" level=error msg="encountered an error cleaning up failed sandbox \"b0b94b4e8e69008daf9eff69ef190fed604015ff90d0c3eabafe180e87c8c3de\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:06:51.085184 containerd[1793]: time="2025-02-13T21:06:51.085168757Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-dfcf67d5f-gh88q,Uid:0b74a159-92ea-4d91-b1cc-e328f7cf493e,Namespace:calico-apiserver,Attempt:6,} failed, error" error="failed to setup network for sandbox \"b0b94b4e8e69008daf9eff69ef190fed604015ff90d0c3eabafe180e87c8c3de\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:06:51.085276 kubelet[3104]: E0213 21:06:51.085260 3104 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b0b94b4e8e69008daf9eff69ef190fed604015ff90d0c3eabafe180e87c8c3de\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:06:51.085302 kubelet[3104]: E0213 21:06:51.085288 3104 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b0b94b4e8e69008daf9eff69ef190fed604015ff90d0c3eabafe180e87c8c3de\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-dfcf67d5f-gh88q" Feb 13 21:06:51.085320 kubelet[3104]: E0213 21:06:51.085299 3104 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b0b94b4e8e69008daf9eff69ef190fed604015ff90d0c3eabafe180e87c8c3de\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-dfcf67d5f-gh88q" Feb 13 21:06:51.085342 kubelet[3104]: E0213 21:06:51.085321 3104 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-dfcf67d5f-gh88q_calico-apiserver(0b74a159-92ea-4d91-b1cc-e328f7cf493e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-dfcf67d5f-gh88q_calico-apiserver(0b74a159-92ea-4d91-b1cc-e328f7cf493e)\\\": rpc error: code = Unknown desc = failed to setup network for 
sandbox \\\"b0b94b4e8e69008daf9eff69ef190fed604015ff90d0c3eabafe180e87c8c3de\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-dfcf67d5f-gh88q" podUID="0b74a159-92ea-4d91-b1cc-e328f7cf493e" Feb 13 21:06:51.086493 containerd[1793]: time="2025-02-13T21:06:51.086475633Z" level=error msg="Failed to destroy network for sandbox \"37fd495faf23255e5ea247e6aa58dae9395a1060826659f4b8b13fa5549cb0f7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:06:51.086660 containerd[1793]: time="2025-02-13T21:06:51.086644417Z" level=error msg="encountered an error cleaning up failed sandbox \"37fd495faf23255e5ea247e6aa58dae9395a1060826659f4b8b13fa5549cb0f7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:06:51.086703 containerd[1793]: time="2025-02-13T21:06:51.086680195Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-wr725,Uid:a02339f1-24d3-4002-b90e-3dd190621c61,Namespace:kube-system,Attempt:6,} failed, error" error="failed to setup network for sandbox \"37fd495faf23255e5ea247e6aa58dae9395a1060826659f4b8b13fa5549cb0f7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:06:51.086772 kubelet[3104]: E0213 21:06:51.086756 3104 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"37fd495faf23255e5ea247e6aa58dae9395a1060826659f4b8b13fa5549cb0f7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:06:51.086806 kubelet[3104]: E0213 21:06:51.086787 3104 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"37fd495faf23255e5ea247e6aa58dae9395a1060826659f4b8b13fa5549cb0f7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-wr725" Feb 13 21:06:51.086831 kubelet[3104]: E0213 21:06:51.086806 3104 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"37fd495faf23255e5ea247e6aa58dae9395a1060826659f4b8b13fa5549cb0f7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-wr725" Feb 13 21:06:51.086854 kubelet[3104]: E0213 21:06:51.086837 3104 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-wr725_kube-system(a02339f1-24d3-4002-b90e-3dd190621c61)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-wr725_kube-system(a02339f1-24d3-4002-b90e-3dd190621c61)\\\": rpc 
error: code = Unknown desc = failed to setup network for sandbox \\\"37fd495faf23255e5ea247e6aa58dae9395a1060826659f4b8b13fa5549cb0f7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-wr725" podUID="a02339f1-24d3-4002-b90e-3dd190621c61" Feb 13 21:06:51.087670 containerd[1793]: time="2025-02-13T21:06:51.087652456Z" level=error msg="Failed to destroy network for sandbox \"a5598caad8edf0425dff8585ea02271b1c7913ac2d0a51d3d0ba843a4980f22e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:06:51.087816 containerd[1793]: time="2025-02-13T21:06:51.087798330Z" level=error msg="encountered an error cleaning up failed sandbox \"a5598caad8edf0425dff8585ea02271b1c7913ac2d0a51d3d0ba843a4980f22e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:06:51.087865 containerd[1793]: time="2025-02-13T21:06:51.087830166Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-dfcf67d5f-gwq2t,Uid:37d2fc1a-4796-4ba1-89bd-a5b0ae45374e,Namespace:calico-apiserver,Attempt:6,} failed, error" error="failed to setup network for sandbox \"a5598caad8edf0425dff8585ea02271b1c7913ac2d0a51d3d0ba843a4980f22e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:06:51.087935 kubelet[3104]: E0213 21:06:51.087922 3104 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a5598caad8edf0425dff8585ea02271b1c7913ac2d0a51d3d0ba843a4980f22e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 21:06:51.087960 kubelet[3104]: E0213 21:06:51.087943 3104 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a5598caad8edf0425dff8585ea02271b1c7913ac2d0a51d3d0ba843a4980f22e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-dfcf67d5f-gwq2t" Feb 13 21:06:51.087960 kubelet[3104]: E0213 21:06:51.087954 3104 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a5598caad8edf0425dff8585ea02271b1c7913ac2d0a51d3d0ba843a4980f22e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-dfcf67d5f-gwq2t" Feb 13 21:06:51.087998 kubelet[3104]: E0213 21:06:51.087973 3104 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-dfcf67d5f-gwq2t_calico-apiserver(37d2fc1a-4796-4ba1-89bd-a5b0ae45374e)\" with CreatePodSandboxError: \"Failed to create sandbox 
for pod \\\"calico-apiserver-dfcf67d5f-gwq2t_calico-apiserver(37d2fc1a-4796-4ba1-89bd-a5b0ae45374e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a5598caad8edf0425dff8585ea02271b1c7913ac2d0a51d3d0ba843a4980f22e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-dfcf67d5f-gwq2t" podUID="37d2fc1a-4796-4ba1-89bd-a5b0ae45374e" Feb 13 21:06:51.584447 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5dc4df5f3a9b31a60ddb2e4370059b63b2deba85f69f18cbfcd76f97093c93e5-shm.mount: Deactivated successfully. Feb 13 21:06:51.782304 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount469362300.mount: Deactivated successfully. Feb 13 21:06:51.795790 containerd[1793]: time="2025-02-13T21:06:51.795743308Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 21:06:51.796012 containerd[1793]: time="2025-02-13T21:06:51.795970825Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010" Feb 13 21:06:51.796315 containerd[1793]: time="2025-02-13T21:06:51.796271670Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 21:06:51.797145 containerd[1793]: time="2025-02-13T21:06:51.797106500Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 21:06:51.797521 containerd[1793]: time="2025-02-13T21:06:51.797479692Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 5.812163265s" Feb 13 21:06:51.797521 containerd[1793]: time="2025-02-13T21:06:51.797495477Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\"" Feb 13 21:06:51.800871 containerd[1793]: time="2025-02-13T21:06:51.800840758Z" level=info msg="CreateContainer within sandbox \"6251dd22da209b1fb738305d15612bb87206b63206437f6a57664cfbd8c2482e\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Feb 13 21:06:51.812269 containerd[1793]: time="2025-02-13T21:06:51.812226095Z" level=info msg="CreateContainer within sandbox \"6251dd22da209b1fb738305d15612bb87206b63206437f6a57664cfbd8c2482e\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"5dce21322adb285335bfdea3172bcd96f4a54e84d6e7b3eeb796c107ea89bcb0\"" Feb 13 21:06:51.812510 containerd[1793]: time="2025-02-13T21:06:51.812464865Z" level=info msg="StartContainer for \"5dce21322adb285335bfdea3172bcd96f4a54e84d6e7b3eeb796c107ea89bcb0\"" Feb 13 21:06:51.843869 systemd[1]: Started cri-containerd-5dce21322adb285335bfdea3172bcd96f4a54e84d6e7b3eeb796c107ea89bcb0.scope - libcontainer container 5dce21322adb285335bfdea3172bcd96f4a54e84d6e7b3eeb796c107ea89bcb0. 
Feb 13 21:06:51.863354 containerd[1793]: time="2025-02-13T21:06:51.863324753Z" level=info msg="StartContainer for \"5dce21322adb285335bfdea3172bcd96f4a54e84d6e7b3eeb796c107ea89bcb0\" returns successfully" Feb 13 21:06:51.938679 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Feb 13 21:06:51.938734 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Feb 13 21:06:52.045309 kubelet[3104]: I0213 21:06:52.045249 3104 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b0b94b4e8e69008daf9eff69ef190fed604015ff90d0c3eabafe180e87c8c3de" Feb 13 21:06:52.046283 containerd[1793]: time="2025-02-13T21:06:52.046221073Z" level=info msg="StopPodSandbox for \"b0b94b4e8e69008daf9eff69ef190fed604015ff90d0c3eabafe180e87c8c3de\"" Feb 13 21:06:52.047092 containerd[1793]: time="2025-02-13T21:06:52.046707464Z" level=info msg="Ensure that sandbox b0b94b4e8e69008daf9eff69ef190fed604015ff90d0c3eabafe180e87c8c3de in task-service has been cleanup successfully" Feb 13 21:06:52.047239 containerd[1793]: time="2025-02-13T21:06:52.047089116Z" level=info msg="TearDown network for sandbox \"b0b94b4e8e69008daf9eff69ef190fed604015ff90d0c3eabafe180e87c8c3de\" successfully" Feb 13 21:06:52.047239 containerd[1793]: time="2025-02-13T21:06:52.047124866Z" level=info msg="StopPodSandbox for \"b0b94b4e8e69008daf9eff69ef190fed604015ff90d0c3eabafe180e87c8c3de\" returns successfully" Feb 13 21:06:52.047922 containerd[1793]: time="2025-02-13T21:06:52.047847854Z" level=info msg="StopPodSandbox for \"96f2f7d295cdec9dfcfe2c67db03dacf64d94c324111b66970f309c44bd369df\"" Feb 13 21:06:52.048134 containerd[1793]: time="2025-02-13T21:06:52.048085329Z" level=info msg="TearDown network for sandbox \"96f2f7d295cdec9dfcfe2c67db03dacf64d94c324111b66970f309c44bd369df\" successfully" Feb 13 21:06:52.048267 containerd[1793]: time="2025-02-13T21:06:52.048132917Z" level=info msg="StopPodSandbox for \"96f2f7d295cdec9dfcfe2c67db03dacf64d94c324111b66970f309c44bd369df\" returns successfully" Feb 13 21:06:52.048869 containerd[1793]: time="2025-02-13T21:06:52.048796578Z" level=info msg="StopPodSandbox for \"a6223b2daac34ac9233c865bdf29f5ecf313eee0267c57a63761c44e27592fe3\"" Feb 13 21:06:52.049122 containerd[1793]: time="2025-02-13T21:06:52.048997337Z" level=info msg="TearDown network for sandbox \"a6223b2daac34ac9233c865bdf29f5ecf313eee0267c57a63761c44e27592fe3\" successfully" Feb 13 21:06:52.049283 containerd[1793]: time="2025-02-13T21:06:52.049117386Z" level=info msg="StopPodSandbox for \"a6223b2daac34ac9233c865bdf29f5ecf313eee0267c57a63761c44e27592fe3\" returns successfully" Feb 13 21:06:52.049776 containerd[1793]: time="2025-02-13T21:06:52.049719070Z" level=info msg="StopPodSandbox for \"8796e4215c03d4346b528f4c0fa80de9271293edb2beaab0ba58791569cb1af7\"" Feb 13 21:06:52.050020 containerd[1793]: time="2025-02-13T21:06:52.049972572Z" level=info msg="TearDown network for sandbox \"8796e4215c03d4346b528f4c0fa80de9271293edb2beaab0ba58791569cb1af7\" successfully" Feb 13 21:06:52.050221 containerd[1793]: time="2025-02-13T21:06:52.050016294Z" level=info msg="StopPodSandbox for \"8796e4215c03d4346b528f4c0fa80de9271293edb2beaab0ba58791569cb1af7\" returns successfully" Feb 13 21:06:52.050690 containerd[1793]: time="2025-02-13T21:06:52.050589919Z" level=info msg="StopPodSandbox for \"bc71ef1d5f9c21449e07e38c85ffe1beac4f9a807ff5e0b31763576b20c05628\"" Feb 13 21:06:52.050956 containerd[1793]: time="2025-02-13T21:06:52.050901555Z" level=info msg="TearDown network for sandbox 
\"bc71ef1d5f9c21449e07e38c85ffe1beac4f9a807ff5e0b31763576b20c05628\" successfully" Feb 13 21:06:52.051097 containerd[1793]: time="2025-02-13T21:06:52.050958507Z" level=info msg="StopPodSandbox for \"bc71ef1d5f9c21449e07e38c85ffe1beac4f9a807ff5e0b31763576b20c05628\" returns successfully" Feb 13 21:06:52.051620 containerd[1793]: time="2025-02-13T21:06:52.051556116Z" level=info msg="StopPodSandbox for \"d743908868c106c2ba60beb013f0f1908f95ec56678e1aea1e2666786a2cc1a7\"" Feb 13 21:06:52.051875 kubelet[3104]: I0213 21:06:52.051806 3104 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a5598caad8edf0425dff8585ea02271b1c7913ac2d0a51d3d0ba843a4980f22e" Feb 13 21:06:52.052036 containerd[1793]: time="2025-02-13T21:06:52.051859724Z" level=info msg="TearDown network for sandbox \"d743908868c106c2ba60beb013f0f1908f95ec56678e1aea1e2666786a2cc1a7\" successfully" Feb 13 21:06:52.052036 containerd[1793]: time="2025-02-13T21:06:52.051907971Z" level=info msg="StopPodSandbox for \"d743908868c106c2ba60beb013f0f1908f95ec56678e1aea1e2666786a2cc1a7\" returns successfully" Feb 13 21:06:52.052659 containerd[1793]: time="2025-02-13T21:06:52.052575756Z" level=info msg="StopPodSandbox for \"806d7e387453f8d8aa5dc4336096cd465cb7990f839cd53aea28bc6a97bca77d\"" Feb 13 21:06:52.052909 containerd[1793]: time="2025-02-13T21:06:52.052851744Z" level=info msg="TearDown network for sandbox \"806d7e387453f8d8aa5dc4336096cd465cb7990f839cd53aea28bc6a97bca77d\" successfully" Feb 13 21:06:52.053065 containerd[1793]: time="2025-02-13T21:06:52.052903747Z" level=info msg="StopPodSandbox for \"806d7e387453f8d8aa5dc4336096cd465cb7990f839cd53aea28bc6a97bca77d\" returns successfully" Feb 13 21:06:52.053065 containerd[1793]: time="2025-02-13T21:06:52.053005929Z" level=info msg="StopPodSandbox for \"a5598caad8edf0425dff8585ea02271b1c7913ac2d0a51d3d0ba843a4980f22e\"" Feb 13 21:06:52.053567 containerd[1793]: time="2025-02-13T21:06:52.053509920Z" level=info msg="Ensure that sandbox a5598caad8edf0425dff8585ea02271b1c7913ac2d0a51d3d0ba843a4980f22e in task-service has been cleanup successfully" Feb 13 21:06:52.053953 containerd[1793]: time="2025-02-13T21:06:52.053852418Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-dfcf67d5f-gh88q,Uid:0b74a159-92ea-4d91-b1cc-e328f7cf493e,Namespace:calico-apiserver,Attempt:7,}" Feb 13 21:06:52.053953 containerd[1793]: time="2025-02-13T21:06:52.053926875Z" level=info msg="TearDown network for sandbox \"a5598caad8edf0425dff8585ea02271b1c7913ac2d0a51d3d0ba843a4980f22e\" successfully" Feb 13 21:06:52.054183 containerd[1793]: time="2025-02-13T21:06:52.053964983Z" level=info msg="StopPodSandbox for \"a5598caad8edf0425dff8585ea02271b1c7913ac2d0a51d3d0ba843a4980f22e\" returns successfully" Feb 13 21:06:52.054589 containerd[1793]: time="2025-02-13T21:06:52.054519945Z" level=info msg="StopPodSandbox for \"025485b78358587da226e95884d370bfdfad5317ace9855aeb18163e64111565\"" Feb 13 21:06:52.054786 containerd[1793]: time="2025-02-13T21:06:52.054741483Z" level=info msg="TearDown network for sandbox \"025485b78358587da226e95884d370bfdfad5317ace9855aeb18163e64111565\" successfully" Feb 13 21:06:52.054786 containerd[1793]: time="2025-02-13T21:06:52.054779787Z" level=info msg="StopPodSandbox for \"025485b78358587da226e95884d370bfdfad5317ace9855aeb18163e64111565\" returns successfully" Feb 13 21:06:52.055190 containerd[1793]: time="2025-02-13T21:06:52.055175443Z" level=info msg="StopPodSandbox for \"d089ce282490e797583b2adbc06fdf6162be40bd25ae6dda0763da66c066d46a\"" Feb 
13 21:06:52.055287 containerd[1793]: time="2025-02-13T21:06:52.055240104Z" level=info msg="TearDown network for sandbox \"d089ce282490e797583b2adbc06fdf6162be40bd25ae6dda0763da66c066d46a\" successfully" Feb 13 21:06:52.055287 containerd[1793]: time="2025-02-13T21:06:52.055284395Z" level=info msg="StopPodSandbox for \"d089ce282490e797583b2adbc06fdf6162be40bd25ae6dda0763da66c066d46a\" returns successfully" Feb 13 21:06:52.055480 containerd[1793]: time="2025-02-13T21:06:52.055470238Z" level=info msg="StopPodSandbox for \"df9c2dd26c1795b054bbe7a16242fde8712a142b2ac8cc97d56afbb8e3e869cd\"" Feb 13 21:06:52.055514 containerd[1793]: time="2025-02-13T21:06:52.055506986Z" level=info msg="TearDown network for sandbox \"df9c2dd26c1795b054bbe7a16242fde8712a142b2ac8cc97d56afbb8e3e869cd\" successfully" Feb 13 21:06:52.055539 containerd[1793]: time="2025-02-13T21:06:52.055513692Z" level=info msg="StopPodSandbox for \"df9c2dd26c1795b054bbe7a16242fde8712a142b2ac8cc97d56afbb8e3e869cd\" returns successfully" Feb 13 21:06:52.055603 containerd[1793]: time="2025-02-13T21:06:52.055594691Z" level=info msg="StopPodSandbox for \"28598cb5bb4f19ef2be3d2eeb2993826fb614f87c197bbbec9e1ef75161630b9\"" Feb 13 21:06:52.055649 containerd[1793]: time="2025-02-13T21:06:52.055641869Z" level=info msg="TearDown network for sandbox \"28598cb5bb4f19ef2be3d2eeb2993826fb614f87c197bbbec9e1ef75161630b9\" successfully" Feb 13 21:06:52.055649 containerd[1793]: time="2025-02-13T21:06:52.055648149Z" level=info msg="StopPodSandbox for \"28598cb5bb4f19ef2be3d2eeb2993826fb614f87c197bbbec9e1ef75161630b9\" returns successfully" Feb 13 21:06:52.055779 containerd[1793]: time="2025-02-13T21:06:52.055768104Z" level=info msg="StopPodSandbox for \"492d54f71e9ddb34de33c29601b5531497a6eb52fef88332daf738ef9a4555fd\"" Feb 13 21:06:52.055813 kubelet[3104]: I0213 21:06:52.055773 3104 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="37fd495faf23255e5ea247e6aa58dae9395a1060826659f4b8b13fa5549cb0f7" Feb 13 21:06:52.055852 containerd[1793]: time="2025-02-13T21:06:52.055817829Z" level=info msg="TearDown network for sandbox \"492d54f71e9ddb34de33c29601b5531497a6eb52fef88332daf738ef9a4555fd\" successfully" Feb 13 21:06:52.055852 containerd[1793]: time="2025-02-13T21:06:52.055827482Z" level=info msg="StopPodSandbox for \"492d54f71e9ddb34de33c29601b5531497a6eb52fef88332daf738ef9a4555fd\" returns successfully" Feb 13 21:06:52.056007 containerd[1793]: time="2025-02-13T21:06:52.055995265Z" level=info msg="StopPodSandbox for \"37fd495faf23255e5ea247e6aa58dae9395a1060826659f4b8b13fa5549cb0f7\"" Feb 13 21:06:52.056041 containerd[1793]: time="2025-02-13T21:06:52.056020733Z" level=info msg="StopPodSandbox for \"e9526415ddf2b5def2736c83dbde440214ca6bbca341b6a7e7eede781d227057\"" Feb 13 21:06:52.056074 containerd[1793]: time="2025-02-13T21:06:52.056064572Z" level=info msg="TearDown network for sandbox \"e9526415ddf2b5def2736c83dbde440214ca6bbca341b6a7e7eede781d227057\" successfully" Feb 13 21:06:52.056074 containerd[1793]: time="2025-02-13T21:06:52.056072400Z" level=info msg="StopPodSandbox for \"e9526415ddf2b5def2736c83dbde440214ca6bbca341b6a7e7eede781d227057\" returns successfully" Feb 13 21:06:52.056142 containerd[1793]: time="2025-02-13T21:06:52.056093570Z" level=info msg="Ensure that sandbox 37fd495faf23255e5ea247e6aa58dae9395a1060826659f4b8b13fa5549cb0f7 in task-service has been cleanup successfully" Feb 13 21:06:52.056197 containerd[1793]: time="2025-02-13T21:06:52.056187287Z" level=info msg="TearDown network for sandbox 
\"37fd495faf23255e5ea247e6aa58dae9395a1060826659f4b8b13fa5549cb0f7\" successfully" Feb 13 21:06:52.056197 containerd[1793]: time="2025-02-13T21:06:52.056195675Z" level=info msg="StopPodSandbox for \"37fd495faf23255e5ea247e6aa58dae9395a1060826659f4b8b13fa5549cb0f7\" returns successfully" Feb 13 21:06:52.056260 containerd[1793]: time="2025-02-13T21:06:52.056242194Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-dfcf67d5f-gwq2t,Uid:37d2fc1a-4796-4ba1-89bd-a5b0ae45374e,Namespace:calico-apiserver,Attempt:7,}" Feb 13 21:06:52.056300 containerd[1793]: time="2025-02-13T21:06:52.056291442Z" level=info msg="StopPodSandbox for \"8f4314912ebb1a2d93a1d3b80fda6eb8e99e490e09de354df1be4020d7eb68fa\"" Feb 13 21:06:52.056338 containerd[1793]: time="2025-02-13T21:06:52.056330809Z" level=info msg="TearDown network for sandbox \"8f4314912ebb1a2d93a1d3b80fda6eb8e99e490e09de354df1be4020d7eb68fa\" successfully" Feb 13 21:06:52.056362 containerd[1793]: time="2025-02-13T21:06:52.056338237Z" level=info msg="StopPodSandbox for \"8f4314912ebb1a2d93a1d3b80fda6eb8e99e490e09de354df1be4020d7eb68fa\" returns successfully" Feb 13 21:06:52.056458 containerd[1793]: time="2025-02-13T21:06:52.056448603Z" level=info msg="StopPodSandbox for \"9903ccace9462852d1b468d91593d78c0bfdcd616759437c7c9178ce919b932e\"" Feb 13 21:06:52.056493 containerd[1793]: time="2025-02-13T21:06:52.056486792Z" level=info msg="TearDown network for sandbox \"9903ccace9462852d1b468d91593d78c0bfdcd616759437c7c9178ce919b932e\" successfully" Feb 13 21:06:52.056520 containerd[1793]: time="2025-02-13T21:06:52.056493259Z" level=info msg="StopPodSandbox for \"9903ccace9462852d1b468d91593d78c0bfdcd616759437c7c9178ce919b932e\" returns successfully" Feb 13 21:06:52.056641 containerd[1793]: time="2025-02-13T21:06:52.056630992Z" level=info msg="StopPodSandbox for \"016f2909cab214152679a0158a239350ed8254b0aa06124f243e8bc5668cf1a1\"" Feb 13 21:06:52.056888 containerd[1793]: time="2025-02-13T21:06:52.056676800Z" level=info msg="TearDown network for sandbox \"016f2909cab214152679a0158a239350ed8254b0aa06124f243e8bc5668cf1a1\" successfully" Feb 13 21:06:52.056888 containerd[1793]: time="2025-02-13T21:06:52.056683855Z" level=info msg="StopPodSandbox for \"016f2909cab214152679a0158a239350ed8254b0aa06124f243e8bc5668cf1a1\" returns successfully" Feb 13 21:06:52.056888 containerd[1793]: time="2025-02-13T21:06:52.056778903Z" level=info msg="StopPodSandbox for \"0b3206681777142bb4f39eddf842578db1abd2a1ba3d164b0fef9121f983646c\"" Feb 13 21:06:52.056888 containerd[1793]: time="2025-02-13T21:06:52.056817426Z" level=info msg="TearDown network for sandbox \"0b3206681777142bb4f39eddf842578db1abd2a1ba3d164b0fef9121f983646c\" successfully" Feb 13 21:06:52.056888 containerd[1793]: time="2025-02-13T21:06:52.056823361Z" level=info msg="StopPodSandbox for \"0b3206681777142bb4f39eddf842578db1abd2a1ba3d164b0fef9121f983646c\" returns successfully" Feb 13 21:06:52.056972 containerd[1793]: time="2025-02-13T21:06:52.056943796Z" level=info msg="StopPodSandbox for \"7501b1f0bebccf8c96d599d12b518a8ac89e39145380dcd10318dd4669755ac7\"" Feb 13 21:06:52.056995 containerd[1793]: time="2025-02-13T21:06:52.056985254Z" level=info msg="TearDown network for sandbox \"7501b1f0bebccf8c96d599d12b518a8ac89e39145380dcd10318dd4669755ac7\" successfully" Feb 13 21:06:52.057012 containerd[1793]: time="2025-02-13T21:06:52.056994950Z" level=info msg="StopPodSandbox for \"7501b1f0bebccf8c96d599d12b518a8ac89e39145380dcd10318dd4669755ac7\" returns successfully" Feb 13 21:06:52.057053 
kubelet[3104]: I0213 21:06:52.057045 3104 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5dc4df5f3a9b31a60ddb2e4370059b63b2deba85f69f18cbfcd76f97093c93e5" Feb 13 21:06:52.057155 containerd[1793]: time="2025-02-13T21:06:52.057147265Z" level=info msg="StopPodSandbox for \"1a9edd12dcaee4b4406dd52975b192e9f91b2c28c5929cf19d4f43b12760a3ad\"" Feb 13 21:06:52.057191 containerd[1793]: time="2025-02-13T21:06:52.057184391Z" level=info msg="TearDown network for sandbox \"1a9edd12dcaee4b4406dd52975b192e9f91b2c28c5929cf19d4f43b12760a3ad\" successfully" Feb 13 21:06:52.057216 containerd[1793]: time="2025-02-13T21:06:52.057190894Z" level=info msg="StopPodSandbox for \"1a9edd12dcaee4b4406dd52975b192e9f91b2c28c5929cf19d4f43b12760a3ad\" returns successfully" Feb 13 21:06:52.057249 containerd[1793]: time="2025-02-13T21:06:52.057242142Z" level=info msg="StopPodSandbox for \"5dc4df5f3a9b31a60ddb2e4370059b63b2deba85f69f18cbfcd76f97093c93e5\"" Feb 13 21:06:52.057341 containerd[1793]: time="2025-02-13T21:06:52.057331996Z" level=info msg="Ensure that sandbox 5dc4df5f3a9b31a60ddb2e4370059b63b2deba85f69f18cbfcd76f97093c93e5 in task-service has been cleanup successfully" Feb 13 21:06:52.057380 containerd[1793]: time="2025-02-13T21:06:52.057368612Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-wr725,Uid:a02339f1-24d3-4002-b90e-3dd190621c61,Namespace:kube-system,Attempt:7,}" Feb 13 21:06:52.057433 containerd[1793]: time="2025-02-13T21:06:52.057424366Z" level=info msg="TearDown network for sandbox \"5dc4df5f3a9b31a60ddb2e4370059b63b2deba85f69f18cbfcd76f97093c93e5\" successfully" Feb 13 21:06:52.057470 containerd[1793]: time="2025-02-13T21:06:52.057432700Z" level=info msg="StopPodSandbox for \"5dc4df5f3a9b31a60ddb2e4370059b63b2deba85f69f18cbfcd76f97093c93e5\" returns successfully" Feb 13 21:06:52.058299 containerd[1793]: time="2025-02-13T21:06:52.057780725Z" level=info msg="StopPodSandbox for \"64ab4db212842309d42a7b016e7df9502295569b6e7460c381f36b3bb550eb0b\"" Feb 13 21:06:52.058299 containerd[1793]: time="2025-02-13T21:06:52.057870160Z" level=info msg="TearDown network for sandbox \"64ab4db212842309d42a7b016e7df9502295569b6e7460c381f36b3bb550eb0b\" successfully" Feb 13 21:06:52.058299 containerd[1793]: time="2025-02-13T21:06:52.057904691Z" level=info msg="StopPodSandbox for \"64ab4db212842309d42a7b016e7df9502295569b6e7460c381f36b3bb550eb0b\" returns successfully" Feb 13 21:06:52.058427 containerd[1793]: time="2025-02-13T21:06:52.058336784Z" level=info msg="StopPodSandbox for \"cba59620c311eb0d1d072de67ea3d820401cc8b18c8f0a6943d666f228aa7a77\"" Feb 13 21:06:52.058598 containerd[1793]: time="2025-02-13T21:06:52.058563677Z" level=info msg="TearDown network for sandbox \"cba59620c311eb0d1d072de67ea3d820401cc8b18c8f0a6943d666f228aa7a77\" successfully" Feb 13 21:06:52.058648 containerd[1793]: time="2025-02-13T21:06:52.058597611Z" level=info msg="StopPodSandbox for \"cba59620c311eb0d1d072de67ea3d820401cc8b18c8f0a6943d666f228aa7a77\" returns successfully" Feb 13 21:06:52.058766 containerd[1793]: time="2025-02-13T21:06:52.058749295Z" level=info msg="StopPodSandbox for \"3b4c1aa1158410586e55dcad205753294e18ec4c0995d0e460c83ccd690b03b0\"" Feb 13 21:06:52.058830 containerd[1793]: time="2025-02-13T21:06:52.058805158Z" level=info msg="TearDown network for sandbox \"3b4c1aa1158410586e55dcad205753294e18ec4c0995d0e460c83ccd690b03b0\" successfully" Feb 13 21:06:52.058868 containerd[1793]: time="2025-02-13T21:06:52.058829514Z" level=info msg="StopPodSandbox for 
\"3b4c1aa1158410586e55dcad205753294e18ec4c0995d0e460c83ccd690b03b0\" returns successfully" Feb 13 21:06:52.058978 containerd[1793]: time="2025-02-13T21:06:52.058966562Z" level=info msg="StopPodSandbox for \"d709f55a62a0ee266e179ae3b4078db4e87cc555d02412f211cd24e553d91a3d\"" Feb 13 21:06:52.059042 containerd[1793]: time="2025-02-13T21:06:52.059016749Z" level=info msg="TearDown network for sandbox \"d709f55a62a0ee266e179ae3b4078db4e87cc555d02412f211cd24e553d91a3d\" successfully" Feb 13 21:06:52.059085 containerd[1793]: time="2025-02-13T21:06:52.059041366Z" level=info msg="StopPodSandbox for \"d709f55a62a0ee266e179ae3b4078db4e87cc555d02412f211cd24e553d91a3d\" returns successfully" Feb 13 21:06:52.059177 containerd[1793]: time="2025-02-13T21:06:52.059164576Z" level=info msg="StopPodSandbox for \"e2d2a63cc355082e3d667838ab36d2b64fe4355a317ff4fee15a852a46dfa7fd\"" Feb 13 21:06:52.059226 containerd[1793]: time="2025-02-13T21:06:52.059215867Z" level=info msg="TearDown network for sandbox \"e2d2a63cc355082e3d667838ab36d2b64fe4355a317ff4fee15a852a46dfa7fd\" successfully" Feb 13 21:06:52.059259 containerd[1793]: time="2025-02-13T21:06:52.059226525Z" level=info msg="StopPodSandbox for \"e2d2a63cc355082e3d667838ab36d2b64fe4355a317ff4fee15a852a46dfa7fd\" returns successfully" Feb 13 21:06:52.059395 containerd[1793]: time="2025-02-13T21:06:52.059384080Z" level=info msg="StopPodSandbox for \"d8a3a4b346869a0c10429ac8070838b577439133613a8cd388036c740d6d9069\"" Feb 13 21:06:52.059442 containerd[1793]: time="2025-02-13T21:06:52.059432764Z" level=info msg="TearDown network for sandbox \"d8a3a4b346869a0c10429ac8070838b577439133613a8cd388036c740d6d9069\" successfully" Feb 13 21:06:52.059474 containerd[1793]: time="2025-02-13T21:06:52.059442880Z" level=info msg="StopPodSandbox for \"d8a3a4b346869a0c10429ac8070838b577439133613a8cd388036c740d6d9069\" returns successfully" Feb 13 21:06:52.059689 containerd[1793]: time="2025-02-13T21:06:52.059673437Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-pfzwq,Uid:322cd9eb-1852-4c5c-aae9-cdfcee960ce3,Namespace:kube-system,Attempt:7,}" Feb 13 21:06:52.060421 kubelet[3104]: I0213 21:06:52.060406 3104 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5e8de54b5e2c3fc0edd5132d3a82a1050d08607a6ce7164ce56ae0d439fc2419" Feb 13 21:06:52.060718 containerd[1793]: time="2025-02-13T21:06:52.060701156Z" level=info msg="StopPodSandbox for \"5e8de54b5e2c3fc0edd5132d3a82a1050d08607a6ce7164ce56ae0d439fc2419\"" Feb 13 21:06:52.060854 containerd[1793]: time="2025-02-13T21:06:52.060839572Z" level=info msg="Ensure that sandbox 5e8de54b5e2c3fc0edd5132d3a82a1050d08607a6ce7164ce56ae0d439fc2419 in task-service has been cleanup successfully" Feb 13 21:06:52.060962 containerd[1793]: time="2025-02-13T21:06:52.060950046Z" level=info msg="TearDown network for sandbox \"5e8de54b5e2c3fc0edd5132d3a82a1050d08607a6ce7164ce56ae0d439fc2419\" successfully" Feb 13 21:06:52.060997 containerd[1793]: time="2025-02-13T21:06:52.060962380Z" level=info msg="StopPodSandbox for \"5e8de54b5e2c3fc0edd5132d3a82a1050d08607a6ce7164ce56ae0d439fc2419\" returns successfully" Feb 13 21:06:52.061126 containerd[1793]: time="2025-02-13T21:06:52.061112668Z" level=info msg="StopPodSandbox for \"c6698fade93608c8cee35101de0258390fd9402cac4152e2bc13f5618759b460\"" Feb 13 21:06:52.061204 containerd[1793]: time="2025-02-13T21:06:52.061168715Z" level=info msg="TearDown network for sandbox \"c6698fade93608c8cee35101de0258390fd9402cac4152e2bc13f5618759b460\" successfully" Feb 13 
21:06:52.061241 containerd[1793]: time="2025-02-13T21:06:52.061204321Z" level=info msg="StopPodSandbox for \"c6698fade93608c8cee35101de0258390fd9402cac4152e2bc13f5618759b460\" returns successfully" Feb 13 21:06:52.061323 containerd[1793]: time="2025-02-13T21:06:52.061310118Z" level=info msg="StopPodSandbox for \"114fc3f722a39799d0f248d314bf5420657b9477dbaaf3f6fd71c3483ba645d7\"" Feb 13 21:06:52.061368 containerd[1793]: time="2025-02-13T21:06:52.061351516Z" level=info msg="TearDown network for sandbox \"114fc3f722a39799d0f248d314bf5420657b9477dbaaf3f6fd71c3483ba645d7\" successfully" Feb 13 21:06:52.061368 containerd[1793]: time="2025-02-13T21:06:52.061358015Z" level=info msg="StopPodSandbox for \"114fc3f722a39799d0f248d314bf5420657b9477dbaaf3f6fd71c3483ba645d7\" returns successfully" Feb 13 21:06:52.061471 containerd[1793]: time="2025-02-13T21:06:52.061456789Z" level=info msg="StopPodSandbox for \"8c7d15cdb7c0b8ba3564fbb9684d7b6608656288a9e470c2a4ccd3901de736b4\"" Feb 13 21:06:52.061537 containerd[1793]: time="2025-02-13T21:06:52.061505748Z" level=info msg="TearDown network for sandbox \"8c7d15cdb7c0b8ba3564fbb9684d7b6608656288a9e470c2a4ccd3901de736b4\" successfully" Feb 13 21:06:52.061569 containerd[1793]: time="2025-02-13T21:06:52.061536561Z" level=info msg="StopPodSandbox for \"8c7d15cdb7c0b8ba3564fbb9684d7b6608656288a9e470c2a4ccd3901de736b4\" returns successfully" Feb 13 21:06:52.061653 containerd[1793]: time="2025-02-13T21:06:52.061643547Z" level=info msg="StopPodSandbox for \"7d0ffa7070cf221d6b53fde8753c80adaebce4b303256fec539795ff6b8d0673\"" Feb 13 21:06:52.061698 containerd[1793]: time="2025-02-13T21:06:52.061688646Z" level=info msg="TearDown network for sandbox \"7d0ffa7070cf221d6b53fde8753c80adaebce4b303256fec539795ff6b8d0673\" successfully" Feb 13 21:06:52.061730 containerd[1793]: time="2025-02-13T21:06:52.061697636Z" level=info msg="StopPodSandbox for \"7d0ffa7070cf221d6b53fde8753c80adaebce4b303256fec539795ff6b8d0673\" returns successfully" Feb 13 21:06:52.061779 kubelet[3104]: I0213 21:06:52.061700 3104 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8368438d5b8de8880c1d927e22b3626bc4c5212d9165d0c1c516db81e89432a0" Feb 13 21:06:52.062876 containerd[1793]: time="2025-02-13T21:06:52.062857061Z" level=info msg="StopPodSandbox for \"7d5ec4c5cb492ab7ba5ec64c093f408197ef657dfbb781c600cfe63e505b2831\"" Feb 13 21:06:52.062925 containerd[1793]: time="2025-02-13T21:06:52.062915387Z" level=info msg="TearDown network for sandbox \"7d5ec4c5cb492ab7ba5ec64c093f408197ef657dfbb781c600cfe63e505b2831\" successfully" Feb 13 21:06:52.062925 containerd[1793]: time="2025-02-13T21:06:52.062922768Z" level=info msg="StopPodSandbox for \"7d5ec4c5cb492ab7ba5ec64c093f408197ef657dfbb781c600cfe63e505b2831\" returns successfully" Feb 13 21:06:52.062967 containerd[1793]: time="2025-02-13T21:06:52.062924506Z" level=info msg="StopPodSandbox for \"8368438d5b8de8880c1d927e22b3626bc4c5212d9165d0c1c516db81e89432a0\"" Feb 13 21:06:52.063070 containerd[1793]: time="2025-02-13T21:06:52.063059596Z" level=info msg="Ensure that sandbox 8368438d5b8de8880c1d927e22b3626bc4c5212d9165d0c1c516db81e89432a0 in task-service has been cleanup successfully" Feb 13 21:06:52.063181 containerd[1793]: time="2025-02-13T21:06:52.063168831Z" level=info msg="TearDown network for sandbox \"8368438d5b8de8880c1d927e22b3626bc4c5212d9165d0c1c516db81e89432a0\" successfully" Feb 13 21:06:52.063222 containerd[1793]: time="2025-02-13T21:06:52.063181340Z" level=info msg="StopPodSandbox for 
\"8368438d5b8de8880c1d927e22b3626bc4c5212d9165d0c1c516db81e89432a0\" returns successfully" Feb 13 21:06:52.063244 containerd[1793]: time="2025-02-13T21:06:52.063223437Z" level=info msg="StopPodSandbox for \"607ba46d10ad96974b57e545cd12cdc1dd7b6138131f3ab6e1bc63407d40518f\"" Feb 13 21:06:52.063291 containerd[1793]: time="2025-02-13T21:06:52.063279788Z" level=info msg="TearDown network for sandbox \"607ba46d10ad96974b57e545cd12cdc1dd7b6138131f3ab6e1bc63407d40518f\" successfully" Feb 13 21:06:52.063291 containerd[1793]: time="2025-02-13T21:06:52.063289431Z" level=info msg="StopPodSandbox for \"607ba46d10ad96974b57e545cd12cdc1dd7b6138131f3ab6e1bc63407d40518f\" returns successfully" Feb 13 21:06:52.063346 containerd[1793]: time="2025-02-13T21:06:52.063299567Z" level=info msg="StopPodSandbox for \"789b26d424f00808a3ab5fad99f1f47b72beac83af5effe57a868b514edb410f\"" Feb 13 21:06:52.063371 containerd[1793]: time="2025-02-13T21:06:52.063345749Z" level=info msg="TearDown network for sandbox \"789b26d424f00808a3ab5fad99f1f47b72beac83af5effe57a868b514edb410f\" successfully" Feb 13 21:06:52.063371 containerd[1793]: time="2025-02-13T21:06:52.063354559Z" level=info msg="StopPodSandbox for \"789b26d424f00808a3ab5fad99f1f47b72beac83af5effe57a868b514edb410f\" returns successfully" Feb 13 21:06:52.063530 containerd[1793]: time="2025-02-13T21:06:52.063515372Z" level=info msg="StopPodSandbox for \"f950a49db2646939f834ce6a431489707af8828f553b5b14fda9dd5bca608104\"" Feb 13 21:06:52.063595 containerd[1793]: time="2025-02-13T21:06:52.063578861Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dv5v9,Uid:22f50a6b-4846-46c1-8c41-99e176056718,Namespace:calico-system,Attempt:7,}" Feb 13 21:06:52.063686 containerd[1793]: time="2025-02-13T21:06:52.063580898Z" level=info msg="TearDown network for sandbox \"f950a49db2646939f834ce6a431489707af8828f553b5b14fda9dd5bca608104\" successfully" Feb 13 21:06:52.063720 containerd[1793]: time="2025-02-13T21:06:52.063685369Z" level=info msg="StopPodSandbox for \"f950a49db2646939f834ce6a431489707af8828f553b5b14fda9dd5bca608104\" returns successfully" Feb 13 21:06:52.063872 containerd[1793]: time="2025-02-13T21:06:52.063857410Z" level=info msg="StopPodSandbox for \"2fd9e12b34be1cf8d642dfe4f2e60efafadedcf4354053583974145aabb5db8f\"" Feb 13 21:06:52.063967 containerd[1793]: time="2025-02-13T21:06:52.063951146Z" level=info msg="TearDown network for sandbox \"2fd9e12b34be1cf8d642dfe4f2e60efafadedcf4354053583974145aabb5db8f\" successfully" Feb 13 21:06:52.063967 containerd[1793]: time="2025-02-13T21:06:52.063964872Z" level=info msg="StopPodSandbox for \"2fd9e12b34be1cf8d642dfe4f2e60efafadedcf4354053583974145aabb5db8f\" returns successfully" Feb 13 21:06:52.064146 containerd[1793]: time="2025-02-13T21:06:52.064125845Z" level=info msg="StopPodSandbox for \"f39bdc1c54e67583094579cb0143089f0fee0e796689505e74340aa0877e958a\"" Feb 13 21:06:52.064202 containerd[1793]: time="2025-02-13T21:06:52.064192061Z" level=info msg="TearDown network for sandbox \"f39bdc1c54e67583094579cb0143089f0fee0e796689505e74340aa0877e958a\" successfully" Feb 13 21:06:52.064241 containerd[1793]: time="2025-02-13T21:06:52.064201723Z" level=info msg="StopPodSandbox for \"f39bdc1c54e67583094579cb0143089f0fee0e796689505e74340aa0877e958a\" returns successfully" Feb 13 21:06:52.064375 containerd[1793]: time="2025-02-13T21:06:52.064353127Z" level=info msg="StopPodSandbox for \"466ce85604a81ffcf0adb514f74df036223f30285756dd3e79f40e1ea6c59de1\"" Feb 13 21:06:52.064421 containerd[1793]: 
time="2025-02-13T21:06:52.064412694Z" level=info msg="TearDown network for sandbox \"466ce85604a81ffcf0adb514f74df036223f30285756dd3e79f40e1ea6c59de1\" successfully" Feb 13 21:06:52.064462 containerd[1793]: time="2025-02-13T21:06:52.064423978Z" level=info msg="StopPodSandbox for \"466ce85604a81ffcf0adb514f74df036223f30285756dd3e79f40e1ea6c59de1\" returns successfully" Feb 13 21:06:52.064613 containerd[1793]: time="2025-02-13T21:06:52.064581444Z" level=info msg="StopPodSandbox for \"abca32da56d9f98e0f2a3235028b1bc2f3f48389f22bae6604fcddef0f674739\"" Feb 13 21:06:52.064663 containerd[1793]: time="2025-02-13T21:06:52.064646716Z" level=info msg="TearDown network for sandbox \"abca32da56d9f98e0f2a3235028b1bc2f3f48389f22bae6604fcddef0f674739\" successfully" Feb 13 21:06:52.064663 containerd[1793]: time="2025-02-13T21:06:52.064657911Z" level=info msg="StopPodSandbox for \"abca32da56d9f98e0f2a3235028b1bc2f3f48389f22bae6604fcddef0f674739\" returns successfully" Feb 13 21:06:52.064937 containerd[1793]: time="2025-02-13T21:06:52.064925092Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-76848bdf96-56fb4,Uid:4552908e-a949-4d18-be80-ec54e1c8e06d,Namespace:calico-system,Attempt:7,}" Feb 13 21:06:52.067324 kubelet[3104]: I0213 21:06:52.067284 3104 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-clkg8" podStartSLOduration=0.646038067 podStartE2EDuration="16.067269848s" podCreationTimestamp="2025-02-13 21:06:36 +0000 UTC" firstStartedPulling="2025-02-13 21:06:36.376585073 +0000 UTC m=+12.527768864" lastFinishedPulling="2025-02-13 21:06:51.797816849 +0000 UTC m=+27.949000645" observedRunningTime="2025-02-13 21:06:52.066960722 +0000 UTC m=+28.218144521" watchObservedRunningTime="2025-02-13 21:06:52.067269848 +0000 UTC m=+28.218453640" Feb 13 21:06:52.129621 systemd-networkd[1709]: cali7fd2366729c: Link UP Feb 13 21:06:52.129753 systemd-networkd[1709]: cali7fd2366729c: Gained carrier Feb 13 21:06:52.146887 containerd[1793]: 2025-02-13 21:06:52.074 [INFO][6116] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 13 21:06:52.146887 containerd[1793]: 2025-02-13 21:06:52.084 [INFO][6116] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4186.1.1--a--9675b630d5-k8s-calico--apiserver--dfcf67d5f--gh88q-eth0 calico-apiserver-dfcf67d5f- calico-apiserver 0b74a159-92ea-4d91-b1cc-e328f7cf493e 657 0 2025-02-13 21:06:36 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:dfcf67d5f projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4186.1.1-a-9675b630d5 calico-apiserver-dfcf67d5f-gh88q eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali7fd2366729c [] []}} ContainerID="72cf6fbec6ba05f30c657f7f5d8aa51f994a8b0a390cac63c8354cb720167755" Namespace="calico-apiserver" Pod="calico-apiserver-dfcf67d5f-gh88q" WorkloadEndpoint="ci--4186.1.1--a--9675b630d5-k8s-calico--apiserver--dfcf67d5f--gh88q-" Feb 13 21:06:52.146887 containerd[1793]: 2025-02-13 21:06:52.084 [INFO][6116] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="72cf6fbec6ba05f30c657f7f5d8aa51f994a8b0a390cac63c8354cb720167755" Namespace="calico-apiserver" Pod="calico-apiserver-dfcf67d5f-gh88q" WorkloadEndpoint="ci--4186.1.1--a--9675b630d5-k8s-calico--apiserver--dfcf67d5f--gh88q-eth0" Feb 13 
21:06:52.146887 containerd[1793]: 2025-02-13 21:06:52.105 [INFO][6260] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="72cf6fbec6ba05f30c657f7f5d8aa51f994a8b0a390cac63c8354cb720167755" HandleID="k8s-pod-network.72cf6fbec6ba05f30c657f7f5d8aa51f994a8b0a390cac63c8354cb720167755" Workload="ci--4186.1.1--a--9675b630d5-k8s-calico--apiserver--dfcf67d5f--gh88q-eth0" Feb 13 21:06:52.146887 containerd[1793]: 2025-02-13 21:06:52.111 [INFO][6260] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="72cf6fbec6ba05f30c657f7f5d8aa51f994a8b0a390cac63c8354cb720167755" HandleID="k8s-pod-network.72cf6fbec6ba05f30c657f7f5d8aa51f994a8b0a390cac63c8354cb720167755" Workload="ci--4186.1.1--a--9675b630d5-k8s-calico--apiserver--dfcf67d5f--gh88q-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002f5ee0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4186.1.1-a-9675b630d5", "pod":"calico-apiserver-dfcf67d5f-gh88q", "timestamp":"2025-02-13 21:06:52.105489701 +0000 UTC"}, Hostname:"ci-4186.1.1-a-9675b630d5", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 21:06:52.146887 containerd[1793]: 2025-02-13 21:06:52.111 [INFO][6260] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 21:06:52.146887 containerd[1793]: 2025-02-13 21:06:52.111 [INFO][6260] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 21:06:52.146887 containerd[1793]: 2025-02-13 21:06:52.111 [INFO][6260] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4186.1.1-a-9675b630d5' Feb 13 21:06:52.146887 containerd[1793]: 2025-02-13 21:06:52.114 [INFO][6260] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.72cf6fbec6ba05f30c657f7f5d8aa51f994a8b0a390cac63c8354cb720167755" host="ci-4186.1.1-a-9675b630d5" Feb 13 21:06:52.146887 containerd[1793]: 2025-02-13 21:06:52.116 [INFO][6260] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4186.1.1-a-9675b630d5" Feb 13 21:06:52.146887 containerd[1793]: 2025-02-13 21:06:52.118 [INFO][6260] ipam/ipam.go 489: Trying affinity for 192.168.95.192/26 host="ci-4186.1.1-a-9675b630d5" Feb 13 21:06:52.146887 containerd[1793]: 2025-02-13 21:06:52.118 [INFO][6260] ipam/ipam.go 155: Attempting to load block cidr=192.168.95.192/26 host="ci-4186.1.1-a-9675b630d5" Feb 13 21:06:52.146887 containerd[1793]: 2025-02-13 21:06:52.119 [INFO][6260] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.95.192/26 host="ci-4186.1.1-a-9675b630d5" Feb 13 21:06:52.146887 containerd[1793]: 2025-02-13 21:06:52.119 [INFO][6260] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.95.192/26 handle="k8s-pod-network.72cf6fbec6ba05f30c657f7f5d8aa51f994a8b0a390cac63c8354cb720167755" host="ci-4186.1.1-a-9675b630d5" Feb 13 21:06:52.146887 containerd[1793]: 2025-02-13 21:06:52.120 [INFO][6260] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.72cf6fbec6ba05f30c657f7f5d8aa51f994a8b0a390cac63c8354cb720167755 Feb 13 21:06:52.146887 containerd[1793]: 2025-02-13 21:06:52.121 [INFO][6260] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.95.192/26 handle="k8s-pod-network.72cf6fbec6ba05f30c657f7f5d8aa51f994a8b0a390cac63c8354cb720167755" host="ci-4186.1.1-a-9675b630d5" Feb 13 21:06:52.146887 containerd[1793]: 2025-02-13 21:06:52.124 [INFO][6260] ipam/ipam.go 1216: 
Successfully claimed IPs: [192.168.95.193/26] block=192.168.95.192/26 handle="k8s-pod-network.72cf6fbec6ba05f30c657f7f5d8aa51f994a8b0a390cac63c8354cb720167755" host="ci-4186.1.1-a-9675b630d5" Feb 13 21:06:52.146887 containerd[1793]: 2025-02-13 21:06:52.124 [INFO][6260] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.95.193/26] handle="k8s-pod-network.72cf6fbec6ba05f30c657f7f5d8aa51f994a8b0a390cac63c8354cb720167755" host="ci-4186.1.1-a-9675b630d5" Feb 13 21:06:52.146887 containerd[1793]: 2025-02-13 21:06:52.124 [INFO][6260] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 21:06:52.146887 containerd[1793]: 2025-02-13 21:06:52.124 [INFO][6260] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.95.193/26] IPv6=[] ContainerID="72cf6fbec6ba05f30c657f7f5d8aa51f994a8b0a390cac63c8354cb720167755" HandleID="k8s-pod-network.72cf6fbec6ba05f30c657f7f5d8aa51f994a8b0a390cac63c8354cb720167755" Workload="ci--4186.1.1--a--9675b630d5-k8s-calico--apiserver--dfcf67d5f--gh88q-eth0" Feb 13 21:06:52.147298 containerd[1793]: 2025-02-13 21:06:52.125 [INFO][6116] cni-plugin/k8s.go 386: Populated endpoint ContainerID="72cf6fbec6ba05f30c657f7f5d8aa51f994a8b0a390cac63c8354cb720167755" Namespace="calico-apiserver" Pod="calico-apiserver-dfcf67d5f-gh88q" WorkloadEndpoint="ci--4186.1.1--a--9675b630d5-k8s-calico--apiserver--dfcf67d5f--gh88q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4186.1.1--a--9675b630d5-k8s-calico--apiserver--dfcf67d5f--gh88q-eth0", GenerateName:"calico-apiserver-dfcf67d5f-", Namespace:"calico-apiserver", SelfLink:"", UID:"0b74a159-92ea-4d91-b1cc-e328f7cf493e", ResourceVersion:"657", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 21, 6, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"dfcf67d5f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4186.1.1-a-9675b630d5", ContainerID:"", Pod:"calico-apiserver-dfcf67d5f-gh88q", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.95.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7fd2366729c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 21:06:52.147298 containerd[1793]: 2025-02-13 21:06:52.125 [INFO][6116] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.95.193/32] ContainerID="72cf6fbec6ba05f30c657f7f5d8aa51f994a8b0a390cac63c8354cb720167755" Namespace="calico-apiserver" Pod="calico-apiserver-dfcf67d5f-gh88q" WorkloadEndpoint="ci--4186.1.1--a--9675b630d5-k8s-calico--apiserver--dfcf67d5f--gh88q-eth0" Feb 13 21:06:52.147298 containerd[1793]: 2025-02-13 21:06:52.125 [INFO][6116] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7fd2366729c ContainerID="72cf6fbec6ba05f30c657f7f5d8aa51f994a8b0a390cac63c8354cb720167755" Namespace="calico-apiserver" 
Pod="calico-apiserver-dfcf67d5f-gh88q" WorkloadEndpoint="ci--4186.1.1--a--9675b630d5-k8s-calico--apiserver--dfcf67d5f--gh88q-eth0" Feb 13 21:06:52.147298 containerd[1793]: 2025-02-13 21:06:52.129 [INFO][6116] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="72cf6fbec6ba05f30c657f7f5d8aa51f994a8b0a390cac63c8354cb720167755" Namespace="calico-apiserver" Pod="calico-apiserver-dfcf67d5f-gh88q" WorkloadEndpoint="ci--4186.1.1--a--9675b630d5-k8s-calico--apiserver--dfcf67d5f--gh88q-eth0" Feb 13 21:06:52.147298 containerd[1793]: 2025-02-13 21:06:52.129 [INFO][6116] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="72cf6fbec6ba05f30c657f7f5d8aa51f994a8b0a390cac63c8354cb720167755" Namespace="calico-apiserver" Pod="calico-apiserver-dfcf67d5f-gh88q" WorkloadEndpoint="ci--4186.1.1--a--9675b630d5-k8s-calico--apiserver--dfcf67d5f--gh88q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4186.1.1--a--9675b630d5-k8s-calico--apiserver--dfcf67d5f--gh88q-eth0", GenerateName:"calico-apiserver-dfcf67d5f-", Namespace:"calico-apiserver", SelfLink:"", UID:"0b74a159-92ea-4d91-b1cc-e328f7cf493e", ResourceVersion:"657", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 21, 6, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"dfcf67d5f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4186.1.1-a-9675b630d5", ContainerID:"72cf6fbec6ba05f30c657f7f5d8aa51f994a8b0a390cac63c8354cb720167755", Pod:"calico-apiserver-dfcf67d5f-gh88q", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.95.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7fd2366729c", MAC:"6a:c9:39:76:d0:94", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 21:06:52.147298 containerd[1793]: 2025-02-13 21:06:52.146 [INFO][6116] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="72cf6fbec6ba05f30c657f7f5d8aa51f994a8b0a390cac63c8354cb720167755" Namespace="calico-apiserver" Pod="calico-apiserver-dfcf67d5f-gh88q" WorkloadEndpoint="ci--4186.1.1--a--9675b630d5-k8s-calico--apiserver--dfcf67d5f--gh88q-eth0" Feb 13 21:06:52.156033 containerd[1793]: time="2025-02-13T21:06:52.155989761Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 21:06:52.156221 containerd[1793]: time="2025-02-13T21:06:52.156199821Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 21:06:52.156221 containerd[1793]: time="2025-02-13T21:06:52.156210357Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 21:06:52.156291 containerd[1793]: time="2025-02-13T21:06:52.156260924Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 21:06:52.166706 systemd[1]: Started cri-containerd-72cf6fbec6ba05f30c657f7f5d8aa51f994a8b0a390cac63c8354cb720167755.scope - libcontainer container 72cf6fbec6ba05f30c657f7f5d8aa51f994a8b0a390cac63c8354cb720167755. Feb 13 21:06:52.167434 systemd[1]: Started sshd@13-147.28.180.173:22-109.206.236.167:58674.service - OpenSSH per-connection server daemon (109.206.236.167:58674). Feb 13 21:06:52.188430 containerd[1793]: time="2025-02-13T21:06:52.188407131Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-dfcf67d5f-gh88q,Uid:0b74a159-92ea-4d91-b1cc-e328f7cf493e,Namespace:calico-apiserver,Attempt:7,} returns sandbox id \"72cf6fbec6ba05f30c657f7f5d8aa51f994a8b0a390cac63c8354cb720167755\"" Feb 13 21:06:52.189173 containerd[1793]: time="2025-02-13T21:06:52.189099878Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Feb 13 21:06:52.265140 systemd-networkd[1709]: cali60c34b88680: Link UP Feb 13 21:06:52.265771 systemd-networkd[1709]: cali60c34b88680: Gained carrier Feb 13 21:06:52.283355 containerd[1793]: 2025-02-13 21:06:52.083 [INFO][6141] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 13 21:06:52.283355 containerd[1793]: 2025-02-13 21:06:52.090 [INFO][6141] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4186.1.1--a--9675b630d5-k8s-calico--apiserver--dfcf67d5f--gwq2t-eth0 calico-apiserver-dfcf67d5f- calico-apiserver 37d2fc1a-4796-4ba1-89bd-a5b0ae45374e 658 0 2025-02-13 21:06:36 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:dfcf67d5f projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4186.1.1-a-9675b630d5 calico-apiserver-dfcf67d5f-gwq2t eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali60c34b88680 [] []}} ContainerID="75fe39520a05363bdf7baffeb1a37a94e90308350217659132d1080c8a1b3d4c" Namespace="calico-apiserver" Pod="calico-apiserver-dfcf67d5f-gwq2t" WorkloadEndpoint="ci--4186.1.1--a--9675b630d5-k8s-calico--apiserver--dfcf67d5f--gwq2t-" Feb 13 21:06:52.283355 containerd[1793]: 2025-02-13 21:06:52.090 [INFO][6141] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="75fe39520a05363bdf7baffeb1a37a94e90308350217659132d1080c8a1b3d4c" Namespace="calico-apiserver" Pod="calico-apiserver-dfcf67d5f-gwq2t" WorkloadEndpoint="ci--4186.1.1--a--9675b630d5-k8s-calico--apiserver--dfcf67d5f--gwq2t-eth0" Feb 13 21:06:52.283355 containerd[1793]: 2025-02-13 21:06:52.111 [INFO][6268] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="75fe39520a05363bdf7baffeb1a37a94e90308350217659132d1080c8a1b3d4c" HandleID="k8s-pod-network.75fe39520a05363bdf7baffeb1a37a94e90308350217659132d1080c8a1b3d4c" Workload="ci--4186.1.1--a--9675b630d5-k8s-calico--apiserver--dfcf67d5f--gwq2t-eth0" Feb 13 21:06:52.283355 containerd[1793]: 2025-02-13 21:06:52.116 [INFO][6268] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="75fe39520a05363bdf7baffeb1a37a94e90308350217659132d1080c8a1b3d4c" HandleID="k8s-pod-network.75fe39520a05363bdf7baffeb1a37a94e90308350217659132d1080c8a1b3d4c" 
Workload="ci--4186.1.1--a--9675b630d5-k8s-calico--apiserver--dfcf67d5f--gwq2t-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003e3dc0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4186.1.1-a-9675b630d5", "pod":"calico-apiserver-dfcf67d5f-gwq2t", "timestamp":"2025-02-13 21:06:52.111477502 +0000 UTC"}, Hostname:"ci-4186.1.1-a-9675b630d5", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 21:06:52.283355 containerd[1793]: 2025-02-13 21:06:52.116 [INFO][6268] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 21:06:52.283355 containerd[1793]: 2025-02-13 21:06:52.124 [INFO][6268] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 21:06:52.283355 containerd[1793]: 2025-02-13 21:06:52.124 [INFO][6268] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4186.1.1-a-9675b630d5' Feb 13 21:06:52.283355 containerd[1793]: 2025-02-13 21:06:52.215 [INFO][6268] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.75fe39520a05363bdf7baffeb1a37a94e90308350217659132d1080c8a1b3d4c" host="ci-4186.1.1-a-9675b630d5" Feb 13 21:06:52.283355 containerd[1793]: 2025-02-13 21:06:52.222 [INFO][6268] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4186.1.1-a-9675b630d5" Feb 13 21:06:52.283355 containerd[1793]: 2025-02-13 21:06:52.231 [INFO][6268] ipam/ipam.go 489: Trying affinity for 192.168.95.192/26 host="ci-4186.1.1-a-9675b630d5" Feb 13 21:06:52.283355 containerd[1793]: 2025-02-13 21:06:52.235 [INFO][6268] ipam/ipam.go 155: Attempting to load block cidr=192.168.95.192/26 host="ci-4186.1.1-a-9675b630d5" Feb 13 21:06:52.283355 containerd[1793]: 2025-02-13 21:06:52.239 [INFO][6268] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.95.192/26 host="ci-4186.1.1-a-9675b630d5" Feb 13 21:06:52.283355 containerd[1793]: 2025-02-13 21:06:52.239 [INFO][6268] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.95.192/26 handle="k8s-pod-network.75fe39520a05363bdf7baffeb1a37a94e90308350217659132d1080c8a1b3d4c" host="ci-4186.1.1-a-9675b630d5" Feb 13 21:06:52.283355 containerd[1793]: 2025-02-13 21:06:52.242 [INFO][6268] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.75fe39520a05363bdf7baffeb1a37a94e90308350217659132d1080c8a1b3d4c Feb 13 21:06:52.283355 containerd[1793]: 2025-02-13 21:06:52.249 [INFO][6268] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.95.192/26 handle="k8s-pod-network.75fe39520a05363bdf7baffeb1a37a94e90308350217659132d1080c8a1b3d4c" host="ci-4186.1.1-a-9675b630d5" Feb 13 21:06:52.283355 containerd[1793]: 2025-02-13 21:06:52.258 [INFO][6268] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.95.194/26] block=192.168.95.192/26 handle="k8s-pod-network.75fe39520a05363bdf7baffeb1a37a94e90308350217659132d1080c8a1b3d4c" host="ci-4186.1.1-a-9675b630d5" Feb 13 21:06:52.283355 containerd[1793]: 2025-02-13 21:06:52.258 [INFO][6268] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.95.194/26] handle="k8s-pod-network.75fe39520a05363bdf7baffeb1a37a94e90308350217659132d1080c8a1b3d4c" host="ci-4186.1.1-a-9675b630d5" Feb 13 21:06:52.283355 containerd[1793]: 2025-02-13 21:06:52.258 [INFO][6268] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Feb 13 21:06:52.283355 containerd[1793]: 2025-02-13 21:06:52.258 [INFO][6268] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.95.194/26] IPv6=[] ContainerID="75fe39520a05363bdf7baffeb1a37a94e90308350217659132d1080c8a1b3d4c" HandleID="k8s-pod-network.75fe39520a05363bdf7baffeb1a37a94e90308350217659132d1080c8a1b3d4c" Workload="ci--4186.1.1--a--9675b630d5-k8s-calico--apiserver--dfcf67d5f--gwq2t-eth0" Feb 13 21:06:52.285216 containerd[1793]: 2025-02-13 21:06:52.262 [INFO][6141] cni-plugin/k8s.go 386: Populated endpoint ContainerID="75fe39520a05363bdf7baffeb1a37a94e90308350217659132d1080c8a1b3d4c" Namespace="calico-apiserver" Pod="calico-apiserver-dfcf67d5f-gwq2t" WorkloadEndpoint="ci--4186.1.1--a--9675b630d5-k8s-calico--apiserver--dfcf67d5f--gwq2t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4186.1.1--a--9675b630d5-k8s-calico--apiserver--dfcf67d5f--gwq2t-eth0", GenerateName:"calico-apiserver-dfcf67d5f-", Namespace:"calico-apiserver", SelfLink:"", UID:"37d2fc1a-4796-4ba1-89bd-a5b0ae45374e", ResourceVersion:"658", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 21, 6, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"dfcf67d5f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4186.1.1-a-9675b630d5", ContainerID:"", Pod:"calico-apiserver-dfcf67d5f-gwq2t", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.95.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali60c34b88680", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 21:06:52.285216 containerd[1793]: 2025-02-13 21:06:52.262 [INFO][6141] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.95.194/32] ContainerID="75fe39520a05363bdf7baffeb1a37a94e90308350217659132d1080c8a1b3d4c" Namespace="calico-apiserver" Pod="calico-apiserver-dfcf67d5f-gwq2t" WorkloadEndpoint="ci--4186.1.1--a--9675b630d5-k8s-calico--apiserver--dfcf67d5f--gwq2t-eth0" Feb 13 21:06:52.285216 containerd[1793]: 2025-02-13 21:06:52.262 [INFO][6141] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali60c34b88680 ContainerID="75fe39520a05363bdf7baffeb1a37a94e90308350217659132d1080c8a1b3d4c" Namespace="calico-apiserver" Pod="calico-apiserver-dfcf67d5f-gwq2t" WorkloadEndpoint="ci--4186.1.1--a--9675b630d5-k8s-calico--apiserver--dfcf67d5f--gwq2t-eth0" Feb 13 21:06:52.285216 containerd[1793]: 2025-02-13 21:06:52.265 [INFO][6141] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="75fe39520a05363bdf7baffeb1a37a94e90308350217659132d1080c8a1b3d4c" Namespace="calico-apiserver" Pod="calico-apiserver-dfcf67d5f-gwq2t" WorkloadEndpoint="ci--4186.1.1--a--9675b630d5-k8s-calico--apiserver--dfcf67d5f--gwq2t-eth0" Feb 13 21:06:52.285216 containerd[1793]: 2025-02-13 21:06:52.266 [INFO][6141] cni-plugin/k8s.go 414: Added Mac, 
interface name, and active container ID to endpoint ContainerID="75fe39520a05363bdf7baffeb1a37a94e90308350217659132d1080c8a1b3d4c" Namespace="calico-apiserver" Pod="calico-apiserver-dfcf67d5f-gwq2t" WorkloadEndpoint="ci--4186.1.1--a--9675b630d5-k8s-calico--apiserver--dfcf67d5f--gwq2t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4186.1.1--a--9675b630d5-k8s-calico--apiserver--dfcf67d5f--gwq2t-eth0", GenerateName:"calico-apiserver-dfcf67d5f-", Namespace:"calico-apiserver", SelfLink:"", UID:"37d2fc1a-4796-4ba1-89bd-a5b0ae45374e", ResourceVersion:"658", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 21, 6, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"dfcf67d5f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4186.1.1-a-9675b630d5", ContainerID:"75fe39520a05363bdf7baffeb1a37a94e90308350217659132d1080c8a1b3d4c", Pod:"calico-apiserver-dfcf67d5f-gwq2t", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.95.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali60c34b88680", MAC:"72:36:6c:d9:f4:0c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 21:06:52.285216 containerd[1793]: 2025-02-13 21:06:52.280 [INFO][6141] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="75fe39520a05363bdf7baffeb1a37a94e90308350217659132d1080c8a1b3d4c" Namespace="calico-apiserver" Pod="calico-apiserver-dfcf67d5f-gwq2t" WorkloadEndpoint="ci--4186.1.1--a--9675b630d5-k8s-calico--apiserver--dfcf67d5f--gwq2t-eth0" Feb 13 21:06:52.300130 containerd[1793]: time="2025-02-13T21:06:52.300069967Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 21:06:52.300130 containerd[1793]: time="2025-02-13T21:06:52.300098871Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 21:06:52.300130 containerd[1793]: time="2025-02-13T21:06:52.300105559Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 21:06:52.300238 containerd[1793]: time="2025-02-13T21:06:52.300144784Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 21:06:52.318800 systemd[1]: Started cri-containerd-75fe39520a05363bdf7baffeb1a37a94e90308350217659132d1080c8a1b3d4c.scope - libcontainer container 75fe39520a05363bdf7baffeb1a37a94e90308350217659132d1080c8a1b3d4c. 
Feb 13 21:06:52.342407 systemd-networkd[1709]: cali25f7c795645: Link UP Feb 13 21:06:52.342527 systemd-networkd[1709]: cali25f7c795645: Gained carrier Feb 13 21:06:52.345446 containerd[1793]: time="2025-02-13T21:06:52.345423538Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-dfcf67d5f-gwq2t,Uid:37d2fc1a-4796-4ba1-89bd-a5b0ae45374e,Namespace:calico-apiserver,Attempt:7,} returns sandbox id \"75fe39520a05363bdf7baffeb1a37a94e90308350217659132d1080c8a1b3d4c\"" Feb 13 21:06:52.348838 containerd[1793]: 2025-02-13 21:06:52.083 [INFO][6153] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 13 21:06:52.348838 containerd[1793]: 2025-02-13 21:06:52.094 [INFO][6153] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4186.1.1--a--9675b630d5-k8s-coredns--668d6bf9bc--wr725-eth0 coredns-668d6bf9bc- kube-system a02339f1-24d3-4002-b90e-3dd190621c61 656 0 2025-02-13 21:06:29 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4186.1.1-a-9675b630d5 coredns-668d6bf9bc-wr725 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali25f7c795645 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="83c30850470b72c2cc5cad54a8908fc60831edd82e7e9822eda9bcee4db7a24d" Namespace="kube-system" Pod="coredns-668d6bf9bc-wr725" WorkloadEndpoint="ci--4186.1.1--a--9675b630d5-k8s-coredns--668d6bf9bc--wr725-" Feb 13 21:06:52.348838 containerd[1793]: 2025-02-13 21:06:52.094 [INFO][6153] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="83c30850470b72c2cc5cad54a8908fc60831edd82e7e9822eda9bcee4db7a24d" Namespace="kube-system" Pod="coredns-668d6bf9bc-wr725" WorkloadEndpoint="ci--4186.1.1--a--9675b630d5-k8s-coredns--668d6bf9bc--wr725-eth0" Feb 13 21:06:52.348838 containerd[1793]: 2025-02-13 21:06:52.112 [INFO][6273] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="83c30850470b72c2cc5cad54a8908fc60831edd82e7e9822eda9bcee4db7a24d" HandleID="k8s-pod-network.83c30850470b72c2cc5cad54a8908fc60831edd82e7e9822eda9bcee4db7a24d" Workload="ci--4186.1.1--a--9675b630d5-k8s-coredns--668d6bf9bc--wr725-eth0" Feb 13 21:06:52.348838 containerd[1793]: 2025-02-13 21:06:52.117 [INFO][6273] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="83c30850470b72c2cc5cad54a8908fc60831edd82e7e9822eda9bcee4db7a24d" HandleID="k8s-pod-network.83c30850470b72c2cc5cad54a8908fc60831edd82e7e9822eda9bcee4db7a24d" Workload="ci--4186.1.1--a--9675b630d5-k8s-coredns--668d6bf9bc--wr725-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000294030), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4186.1.1-a-9675b630d5", "pod":"coredns-668d6bf9bc-wr725", "timestamp":"2025-02-13 21:06:52.11274552 +0000 UTC"}, Hostname:"ci-4186.1.1-a-9675b630d5", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 21:06:52.348838 containerd[1793]: 2025-02-13 21:06:52.117 [INFO][6273] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 21:06:52.348838 containerd[1793]: 2025-02-13 21:06:52.258 [INFO][6273] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 21:06:52.348838 containerd[1793]: 2025-02-13 21:06:52.258 [INFO][6273] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4186.1.1-a-9675b630d5' Feb 13 21:06:52.348838 containerd[1793]: 2025-02-13 21:06:52.314 [INFO][6273] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.83c30850470b72c2cc5cad54a8908fc60831edd82e7e9822eda9bcee4db7a24d" host="ci-4186.1.1-a-9675b630d5" Feb 13 21:06:52.348838 containerd[1793]: 2025-02-13 21:06:52.321 [INFO][6273] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4186.1.1-a-9675b630d5" Feb 13 21:06:52.348838 containerd[1793]: 2025-02-13 21:06:52.326 [INFO][6273] ipam/ipam.go 489: Trying affinity for 192.168.95.192/26 host="ci-4186.1.1-a-9675b630d5" Feb 13 21:06:52.348838 containerd[1793]: 2025-02-13 21:06:52.327 [INFO][6273] ipam/ipam.go 155: Attempting to load block cidr=192.168.95.192/26 host="ci-4186.1.1-a-9675b630d5" Feb 13 21:06:52.348838 containerd[1793]: 2025-02-13 21:06:52.329 [INFO][6273] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.95.192/26 host="ci-4186.1.1-a-9675b630d5" Feb 13 21:06:52.348838 containerd[1793]: 2025-02-13 21:06:52.329 [INFO][6273] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.95.192/26 handle="k8s-pod-network.83c30850470b72c2cc5cad54a8908fc60831edd82e7e9822eda9bcee4db7a24d" host="ci-4186.1.1-a-9675b630d5" Feb 13 21:06:52.348838 containerd[1793]: 2025-02-13 21:06:52.330 [INFO][6273] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.83c30850470b72c2cc5cad54a8908fc60831edd82e7e9822eda9bcee4db7a24d Feb 13 21:06:52.348838 containerd[1793]: 2025-02-13 21:06:52.337 [INFO][6273] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.95.192/26 handle="k8s-pod-network.83c30850470b72c2cc5cad54a8908fc60831edd82e7e9822eda9bcee4db7a24d" host="ci-4186.1.1-a-9675b630d5" Feb 13 21:06:52.348838 containerd[1793]: 2025-02-13 21:06:52.340 [INFO][6273] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.95.195/26] block=192.168.95.192/26 handle="k8s-pod-network.83c30850470b72c2cc5cad54a8908fc60831edd82e7e9822eda9bcee4db7a24d" host="ci-4186.1.1-a-9675b630d5" Feb 13 21:06:52.348838 containerd[1793]: 2025-02-13 21:06:52.340 [INFO][6273] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.95.195/26] handle="k8s-pod-network.83c30850470b72c2cc5cad54a8908fc60831edd82e7e9822eda9bcee4db7a24d" host="ci-4186.1.1-a-9675b630d5" Feb 13 21:06:52.348838 containerd[1793]: 2025-02-13 21:06:52.340 [INFO][6273] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
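Every bracketed Calico line embedded in the containerd output has the same shape: an ISO timestamp, a [LEVEL][pid] pair, the source file and line, and the message (for example "ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock."). When sifting through a capture like this one, a small parser helps; the sketch below assumes only that fixed shape, and the field names are mine.

package main

import (
	"fmt"
	"regexp"
)

// calicoLine matches the Calico CNI lines embedded in the containerd output above, e.g.
// "2025-02-13 21:06:52.105 [INFO][6260] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0".
var calicoLine = regexp.MustCompile(
	`^(\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d+) \[(\w+)\]\[(\d+)\] (\S+) (\d+): (.*)$`)

type calicoEntry struct {
	Timestamp string // 2025-02-13 21:06:52.105
	Level     string // INFO
	PID       string // 6260
	File      string // ipam/ipam_plugin.go
	Line      string // 225
	Message   string // the rest of the line
}

// parseCalicoLine extracts the fields from one such line, reporting whether it matched.
func parseCalicoLine(s string) (calicoEntry, bool) {
	m := calicoLine.FindStringSubmatch(s)
	if m == nil {
		return calicoEntry{}, false
	}
	return calicoEntry{m[1], m[2], m[3], m[4], m[5], m[6]}, true
}

func main() {
	e, ok := parseCalicoLine("2025-02-13 21:06:52.258 [INFO][6273] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.")
	fmt.Println(ok, e.Level, e.File, e.Line, e.Message)
	// true INFO ipam/ipam_plugin.go 368 Acquired host-wide IPAM lock.
}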
Feb 13 21:06:52.348838 containerd[1793]: 2025-02-13 21:06:52.340 [INFO][6273] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.95.195/26] IPv6=[] ContainerID="83c30850470b72c2cc5cad54a8908fc60831edd82e7e9822eda9bcee4db7a24d" HandleID="k8s-pod-network.83c30850470b72c2cc5cad54a8908fc60831edd82e7e9822eda9bcee4db7a24d" Workload="ci--4186.1.1--a--9675b630d5-k8s-coredns--668d6bf9bc--wr725-eth0" Feb 13 21:06:52.349357 containerd[1793]: 2025-02-13 21:06:52.341 [INFO][6153] cni-plugin/k8s.go 386: Populated endpoint ContainerID="83c30850470b72c2cc5cad54a8908fc60831edd82e7e9822eda9bcee4db7a24d" Namespace="kube-system" Pod="coredns-668d6bf9bc-wr725" WorkloadEndpoint="ci--4186.1.1--a--9675b630d5-k8s-coredns--668d6bf9bc--wr725-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4186.1.1--a--9675b630d5-k8s-coredns--668d6bf9bc--wr725-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"a02339f1-24d3-4002-b90e-3dd190621c61", ResourceVersion:"656", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 21, 6, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4186.1.1-a-9675b630d5", ContainerID:"", Pod:"coredns-668d6bf9bc-wr725", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.95.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali25f7c795645", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 21:06:52.349357 containerd[1793]: 2025-02-13 21:06:52.341 [INFO][6153] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.95.195/32] ContainerID="83c30850470b72c2cc5cad54a8908fc60831edd82e7e9822eda9bcee4db7a24d" Namespace="kube-system" Pod="coredns-668d6bf9bc-wr725" WorkloadEndpoint="ci--4186.1.1--a--9675b630d5-k8s-coredns--668d6bf9bc--wr725-eth0" Feb 13 21:06:52.349357 containerd[1793]: 2025-02-13 21:06:52.341 [INFO][6153] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali25f7c795645 ContainerID="83c30850470b72c2cc5cad54a8908fc60831edd82e7e9822eda9bcee4db7a24d" Namespace="kube-system" Pod="coredns-668d6bf9bc-wr725" WorkloadEndpoint="ci--4186.1.1--a--9675b630d5-k8s-coredns--668d6bf9bc--wr725-eth0" Feb 13 21:06:52.349357 containerd[1793]: 2025-02-13 21:06:52.342 [INFO][6153] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="83c30850470b72c2cc5cad54a8908fc60831edd82e7e9822eda9bcee4db7a24d" Namespace="kube-system" Pod="coredns-668d6bf9bc-wr725" 
WorkloadEndpoint="ci--4186.1.1--a--9675b630d5-k8s-coredns--668d6bf9bc--wr725-eth0" Feb 13 21:06:52.349357 containerd[1793]: 2025-02-13 21:06:52.342 [INFO][6153] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="83c30850470b72c2cc5cad54a8908fc60831edd82e7e9822eda9bcee4db7a24d" Namespace="kube-system" Pod="coredns-668d6bf9bc-wr725" WorkloadEndpoint="ci--4186.1.1--a--9675b630d5-k8s-coredns--668d6bf9bc--wr725-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4186.1.1--a--9675b630d5-k8s-coredns--668d6bf9bc--wr725-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"a02339f1-24d3-4002-b90e-3dd190621c61", ResourceVersion:"656", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 21, 6, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4186.1.1-a-9675b630d5", ContainerID:"83c30850470b72c2cc5cad54a8908fc60831edd82e7e9822eda9bcee4db7a24d", Pod:"coredns-668d6bf9bc-wr725", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.95.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali25f7c795645", MAC:"de:0b:7a:0b:f7:36", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 21:06:52.349357 containerd[1793]: 2025-02-13 21:06:52.347 [INFO][6153] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="83c30850470b72c2cc5cad54a8908fc60831edd82e7e9822eda9bcee4db7a24d" Namespace="kube-system" Pod="coredns-668d6bf9bc-wr725" WorkloadEndpoint="ci--4186.1.1--a--9675b630d5-k8s-coredns--668d6bf9bc--wr725-eth0" Feb 13 21:06:52.358789 containerd[1793]: time="2025-02-13T21:06:52.358748880Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 21:06:52.358789 containerd[1793]: time="2025-02-13T21:06:52.358773525Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 21:06:52.358789 containerd[1793]: time="2025-02-13T21:06:52.358780435Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 21:06:52.358889 containerd[1793]: time="2025-02-13T21:06:52.358816476Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 21:06:52.380906 systemd[1]: Started cri-containerd-83c30850470b72c2cc5cad54a8908fc60831edd82e7e9822eda9bcee4db7a24d.scope - libcontainer container 83c30850470b72c2cc5cad54a8908fc60831edd82e7e9822eda9bcee4db7a24d. Feb 13 21:06:52.409921 containerd[1793]: time="2025-02-13T21:06:52.409894864Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-wr725,Uid:a02339f1-24d3-4002-b90e-3dd190621c61,Namespace:kube-system,Attempt:7,} returns sandbox id \"83c30850470b72c2cc5cad54a8908fc60831edd82e7e9822eda9bcee4db7a24d\"" Feb 13 21:06:52.411288 containerd[1793]: time="2025-02-13T21:06:52.411245435Z" level=info msg="CreateContainer within sandbox \"83c30850470b72c2cc5cad54a8908fc60831edd82e7e9822eda9bcee4db7a24d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 21:06:52.416569 containerd[1793]: time="2025-02-13T21:06:52.416524832Z" level=info msg="CreateContainer within sandbox \"83c30850470b72c2cc5cad54a8908fc60831edd82e7e9822eda9bcee4db7a24d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"27105739deccc51b191dcbf589a56aab4442450e2e55a60c7f7f802bd63f07ed\"" Feb 13 21:06:52.416739 containerd[1793]: time="2025-02-13T21:06:52.416726801Z" level=info msg="StartContainer for \"27105739deccc51b191dcbf589a56aab4442450e2e55a60c7f7f802bd63f07ed\"" Feb 13 21:06:52.436832 systemd[1]: Started cri-containerd-27105739deccc51b191dcbf589a56aab4442450e2e55a60c7f7f802bd63f07ed.scope - libcontainer container 27105739deccc51b191dcbf589a56aab4442450e2e55a60c7f7f802bd63f07ed. Feb 13 21:06:52.437537 systemd-networkd[1709]: cali486c8c0cfe9: Link UP Feb 13 21:06:52.437658 systemd-networkd[1709]: cali486c8c0cfe9: Gained carrier Feb 13 21:06:52.442827 containerd[1793]: 2025-02-13 21:06:52.088 [INFO][6196] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 13 21:06:52.442827 containerd[1793]: 2025-02-13 21:06:52.096 [INFO][6196] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4186.1.1--a--9675b630d5-k8s-calico--kube--controllers--76848bdf96--56fb4-eth0 calico-kube-controllers-76848bdf96- calico-system 4552908e-a949-4d18-be80-ec54e1c8e06d 659 0 2025-02-13 21:06:36 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:76848bdf96 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4186.1.1-a-9675b630d5 calico-kube-controllers-76848bdf96-56fb4 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali486c8c0cfe9 [] []}} ContainerID="8a9d8d2f61819c840a2837315fb543fa7899d8c3a2f3865cab29b88ae253b791" Namespace="calico-system" Pod="calico-kube-controllers-76848bdf96-56fb4" WorkloadEndpoint="ci--4186.1.1--a--9675b630d5-k8s-calico--kube--controllers--76848bdf96--56fb4-" Feb 13 21:06:52.442827 containerd[1793]: 2025-02-13 21:06:52.096 [INFO][6196] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="8a9d8d2f61819c840a2837315fb543fa7899d8c3a2f3865cab29b88ae253b791" Namespace="calico-system" Pod="calico-kube-controllers-76848bdf96-56fb4" WorkloadEndpoint="ci--4186.1.1--a--9675b630d5-k8s-calico--kube--controllers--76848bdf96--56fb4-eth0" Feb 13 21:06:52.442827 containerd[1793]: 2025-02-13 21:06:52.112 [INFO][6278] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="8a9d8d2f61819c840a2837315fb543fa7899d8c3a2f3865cab29b88ae253b791" HandleID="k8s-pod-network.8a9d8d2f61819c840a2837315fb543fa7899d8c3a2f3865cab29b88ae253b791" Workload="ci--4186.1.1--a--9675b630d5-k8s-calico--kube--controllers--76848bdf96--56fb4-eth0" Feb 13 21:06:52.442827 containerd[1793]: 2025-02-13 21:06:52.117 [INFO][6278] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8a9d8d2f61819c840a2837315fb543fa7899d8c3a2f3865cab29b88ae253b791" HandleID="k8s-pod-network.8a9d8d2f61819c840a2837315fb543fa7899d8c3a2f3865cab29b88ae253b791" Workload="ci--4186.1.1--a--9675b630d5-k8s-calico--kube--controllers--76848bdf96--56fb4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000375b30), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4186.1.1-a-9675b630d5", "pod":"calico-kube-controllers-76848bdf96-56fb4", "timestamp":"2025-02-13 21:06:52.112686026 +0000 UTC"}, Hostname:"ci-4186.1.1-a-9675b630d5", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 21:06:52.442827 containerd[1793]: 2025-02-13 21:06:52.117 [INFO][6278] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 21:06:52.442827 containerd[1793]: 2025-02-13 21:06:52.340 [INFO][6278] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 21:06:52.442827 containerd[1793]: 2025-02-13 21:06:52.340 [INFO][6278] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4186.1.1-a-9675b630d5' Feb 13 21:06:52.442827 containerd[1793]: 2025-02-13 21:06:52.414 [INFO][6278] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.8a9d8d2f61819c840a2837315fb543fa7899d8c3a2f3865cab29b88ae253b791" host="ci-4186.1.1-a-9675b630d5" Feb 13 21:06:52.442827 containerd[1793]: 2025-02-13 21:06:52.422 [INFO][6278] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4186.1.1-a-9675b630d5" Feb 13 21:06:52.442827 containerd[1793]: 2025-02-13 21:06:52.427 [INFO][6278] ipam/ipam.go 489: Trying affinity for 192.168.95.192/26 host="ci-4186.1.1-a-9675b630d5" Feb 13 21:06:52.442827 containerd[1793]: 2025-02-13 21:06:52.428 [INFO][6278] ipam/ipam.go 155: Attempting to load block cidr=192.168.95.192/26 host="ci-4186.1.1-a-9675b630d5" Feb 13 21:06:52.442827 containerd[1793]: 2025-02-13 21:06:52.429 [INFO][6278] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.95.192/26 host="ci-4186.1.1-a-9675b630d5" Feb 13 21:06:52.442827 containerd[1793]: 2025-02-13 21:06:52.429 [INFO][6278] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.95.192/26 handle="k8s-pod-network.8a9d8d2f61819c840a2837315fb543fa7899d8c3a2f3865cab29b88ae253b791" host="ci-4186.1.1-a-9675b630d5" Feb 13 21:06:52.442827 containerd[1793]: 2025-02-13 21:06:52.430 [INFO][6278] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.8a9d8d2f61819c840a2837315fb543fa7899d8c3a2f3865cab29b88ae253b791 Feb 13 21:06:52.442827 containerd[1793]: 2025-02-13 21:06:52.433 [INFO][6278] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.95.192/26 handle="k8s-pod-network.8a9d8d2f61819c840a2837315fb543fa7899d8c3a2f3865cab29b88ae253b791" host="ci-4186.1.1-a-9675b630d5" Feb 13 21:06:52.442827 containerd[1793]: 2025-02-13 21:06:52.436 [INFO][6278] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.95.196/26] block=192.168.95.192/26 
handle="k8s-pod-network.8a9d8d2f61819c840a2837315fb543fa7899d8c3a2f3865cab29b88ae253b791" host="ci-4186.1.1-a-9675b630d5" Feb 13 21:06:52.442827 containerd[1793]: 2025-02-13 21:06:52.436 [INFO][6278] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.95.196/26] handle="k8s-pod-network.8a9d8d2f61819c840a2837315fb543fa7899d8c3a2f3865cab29b88ae253b791" host="ci-4186.1.1-a-9675b630d5" Feb 13 21:06:52.442827 containerd[1793]: 2025-02-13 21:06:52.436 [INFO][6278] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 21:06:52.442827 containerd[1793]: 2025-02-13 21:06:52.436 [INFO][6278] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.95.196/26] IPv6=[] ContainerID="8a9d8d2f61819c840a2837315fb543fa7899d8c3a2f3865cab29b88ae253b791" HandleID="k8s-pod-network.8a9d8d2f61819c840a2837315fb543fa7899d8c3a2f3865cab29b88ae253b791" Workload="ci--4186.1.1--a--9675b630d5-k8s-calico--kube--controllers--76848bdf96--56fb4-eth0" Feb 13 21:06:52.443233 containerd[1793]: 2025-02-13 21:06:52.436 [INFO][6196] cni-plugin/k8s.go 386: Populated endpoint ContainerID="8a9d8d2f61819c840a2837315fb543fa7899d8c3a2f3865cab29b88ae253b791" Namespace="calico-system" Pod="calico-kube-controllers-76848bdf96-56fb4" WorkloadEndpoint="ci--4186.1.1--a--9675b630d5-k8s-calico--kube--controllers--76848bdf96--56fb4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4186.1.1--a--9675b630d5-k8s-calico--kube--controllers--76848bdf96--56fb4-eth0", GenerateName:"calico-kube-controllers-76848bdf96-", Namespace:"calico-system", SelfLink:"", UID:"4552908e-a949-4d18-be80-ec54e1c8e06d", ResourceVersion:"659", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 21, 6, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"76848bdf96", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4186.1.1-a-9675b630d5", ContainerID:"", Pod:"calico-kube-controllers-76848bdf96-56fb4", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.95.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali486c8c0cfe9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 21:06:52.443233 containerd[1793]: 2025-02-13 21:06:52.436 [INFO][6196] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.95.196/32] ContainerID="8a9d8d2f61819c840a2837315fb543fa7899d8c3a2f3865cab29b88ae253b791" Namespace="calico-system" Pod="calico-kube-controllers-76848bdf96-56fb4" WorkloadEndpoint="ci--4186.1.1--a--9675b630d5-k8s-calico--kube--controllers--76848bdf96--56fb4-eth0" Feb 13 21:06:52.443233 containerd[1793]: 2025-02-13 21:06:52.436 [INFO][6196] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali486c8c0cfe9 ContainerID="8a9d8d2f61819c840a2837315fb543fa7899d8c3a2f3865cab29b88ae253b791" Namespace="calico-system" 
Pod="calico-kube-controllers-76848bdf96-56fb4" WorkloadEndpoint="ci--4186.1.1--a--9675b630d5-k8s-calico--kube--controllers--76848bdf96--56fb4-eth0" Feb 13 21:06:52.443233 containerd[1793]: 2025-02-13 21:06:52.437 [INFO][6196] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8a9d8d2f61819c840a2837315fb543fa7899d8c3a2f3865cab29b88ae253b791" Namespace="calico-system" Pod="calico-kube-controllers-76848bdf96-56fb4" WorkloadEndpoint="ci--4186.1.1--a--9675b630d5-k8s-calico--kube--controllers--76848bdf96--56fb4-eth0" Feb 13 21:06:52.443233 containerd[1793]: 2025-02-13 21:06:52.437 [INFO][6196] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="8a9d8d2f61819c840a2837315fb543fa7899d8c3a2f3865cab29b88ae253b791" Namespace="calico-system" Pod="calico-kube-controllers-76848bdf96-56fb4" WorkloadEndpoint="ci--4186.1.1--a--9675b630d5-k8s-calico--kube--controllers--76848bdf96--56fb4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4186.1.1--a--9675b630d5-k8s-calico--kube--controllers--76848bdf96--56fb4-eth0", GenerateName:"calico-kube-controllers-76848bdf96-", Namespace:"calico-system", SelfLink:"", UID:"4552908e-a949-4d18-be80-ec54e1c8e06d", ResourceVersion:"659", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 21, 6, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"76848bdf96", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4186.1.1-a-9675b630d5", ContainerID:"8a9d8d2f61819c840a2837315fb543fa7899d8c3a2f3865cab29b88ae253b791", Pod:"calico-kube-controllers-76848bdf96-56fb4", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.95.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali486c8c0cfe9", MAC:"fe:c1:3b:d8:17:7c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 21:06:52.443233 containerd[1793]: 2025-02-13 21:06:52.442 [INFO][6196] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="8a9d8d2f61819c840a2837315fb543fa7899d8c3a2f3865cab29b88ae253b791" Namespace="calico-system" Pod="calico-kube-controllers-76848bdf96-56fb4" WorkloadEndpoint="ci--4186.1.1--a--9675b630d5-k8s-calico--kube--controllers--76848bdf96--56fb4-eth0" Feb 13 21:06:52.448851 containerd[1793]: time="2025-02-13T21:06:52.448827046Z" level=info msg="StartContainer for \"27105739deccc51b191dcbf589a56aab4442450e2e55a60c7f7f802bd63f07ed\" returns successfully" Feb 13 21:06:52.452582 containerd[1793]: time="2025-02-13T21:06:52.452528334Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 21:06:52.452582 containerd[1793]: time="2025-02-13T21:06:52.452562931Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 21:06:52.452582 containerd[1793]: time="2025-02-13T21:06:52.452570092Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 21:06:52.452737 containerd[1793]: time="2025-02-13T21:06:52.452646485Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 21:06:52.475974 systemd[1]: Started cri-containerd-8a9d8d2f61819c840a2837315fb543fa7899d8c3a2f3865cab29b88ae253b791.scope - libcontainer container 8a9d8d2f61819c840a2837315fb543fa7899d8c3a2f3865cab29b88ae253b791. Feb 13 21:06:52.500505 containerd[1793]: time="2025-02-13T21:06:52.500457642Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-76848bdf96-56fb4,Uid:4552908e-a949-4d18-be80-ec54e1c8e06d,Namespace:calico-system,Attempt:7,} returns sandbox id \"8a9d8d2f61819c840a2837315fb543fa7899d8c3a2f3865cab29b88ae253b791\"" Feb 13 21:06:52.542872 systemd-networkd[1709]: cali42877eae1ce: Link UP Feb 13 21:06:52.543119 systemd-networkd[1709]: cali42877eae1ce: Gained carrier Feb 13 21:06:52.552020 containerd[1793]: 2025-02-13 21:06:52.089 [INFO][6186] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 13 21:06:52.552020 containerd[1793]: 2025-02-13 21:06:52.097 [INFO][6186] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4186.1.1--a--9675b630d5-k8s-csi--node--driver--dv5v9-eth0 csi-node-driver- calico-system 22f50a6b-4846-46c1-8c41-99e176056718 583 0 2025-02-13 21:06:36 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:84cddb44f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4186.1.1-a-9675b630d5 csi-node-driver-dv5v9 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali42877eae1ce [] []}} ContainerID="cd56439f1df5a0e2f41b584a4011ed1cc01c34eef236c1948bd80ebba0abfcba" Namespace="calico-system" Pod="csi-node-driver-dv5v9" WorkloadEndpoint="ci--4186.1.1--a--9675b630d5-k8s-csi--node--driver--dv5v9-" Feb 13 21:06:52.552020 containerd[1793]: 2025-02-13 21:06:52.097 [INFO][6186] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="cd56439f1df5a0e2f41b584a4011ed1cc01c34eef236c1948bd80ebba0abfcba" Namespace="calico-system" Pod="csi-node-driver-dv5v9" WorkloadEndpoint="ci--4186.1.1--a--9675b630d5-k8s-csi--node--driver--dv5v9-eth0" Feb 13 21:06:52.552020 containerd[1793]: 2025-02-13 21:06:52.115 [INFO][6284] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="cd56439f1df5a0e2f41b584a4011ed1cc01c34eef236c1948bd80ebba0abfcba" HandleID="k8s-pod-network.cd56439f1df5a0e2f41b584a4011ed1cc01c34eef236c1948bd80ebba0abfcba" Workload="ci--4186.1.1--a--9675b630d5-k8s-csi--node--driver--dv5v9-eth0" Feb 13 21:06:52.552020 containerd[1793]: 2025-02-13 21:06:52.119 [INFO][6284] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="cd56439f1df5a0e2f41b584a4011ed1cc01c34eef236c1948bd80ebba0abfcba" HandleID="k8s-pod-network.cd56439f1df5a0e2f41b584a4011ed1cc01c34eef236c1948bd80ebba0abfcba" Workload="ci--4186.1.1--a--9675b630d5-k8s-csi--node--driver--dv5v9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000051950), 
Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4186.1.1-a-9675b630d5", "pod":"csi-node-driver-dv5v9", "timestamp":"2025-02-13 21:06:52.115670552 +0000 UTC"}, Hostname:"ci-4186.1.1-a-9675b630d5", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 21:06:52.552020 containerd[1793]: 2025-02-13 21:06:52.119 [INFO][6284] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 21:06:52.552020 containerd[1793]: 2025-02-13 21:06:52.436 [INFO][6284] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 21:06:52.552020 containerd[1793]: 2025-02-13 21:06:52.436 [INFO][6284] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4186.1.1-a-9675b630d5' Feb 13 21:06:52.552020 containerd[1793]: 2025-02-13 21:06:52.514 [INFO][6284] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.cd56439f1df5a0e2f41b584a4011ed1cc01c34eef236c1948bd80ebba0abfcba" host="ci-4186.1.1-a-9675b630d5" Feb 13 21:06:52.552020 containerd[1793]: 2025-02-13 21:06:52.522 [INFO][6284] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4186.1.1-a-9675b630d5" Feb 13 21:06:52.552020 containerd[1793]: 2025-02-13 21:06:52.528 [INFO][6284] ipam/ipam.go 489: Trying affinity for 192.168.95.192/26 host="ci-4186.1.1-a-9675b630d5" Feb 13 21:06:52.552020 containerd[1793]: 2025-02-13 21:06:52.529 [INFO][6284] ipam/ipam.go 155: Attempting to load block cidr=192.168.95.192/26 host="ci-4186.1.1-a-9675b630d5" Feb 13 21:06:52.552020 containerd[1793]: 2025-02-13 21:06:52.531 [INFO][6284] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.95.192/26 host="ci-4186.1.1-a-9675b630d5" Feb 13 21:06:52.552020 containerd[1793]: 2025-02-13 21:06:52.531 [INFO][6284] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.95.192/26 handle="k8s-pod-network.cd56439f1df5a0e2f41b584a4011ed1cc01c34eef236c1948bd80ebba0abfcba" host="ci-4186.1.1-a-9675b630d5" Feb 13 21:06:52.552020 containerd[1793]: 2025-02-13 21:06:52.532 [INFO][6284] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.cd56439f1df5a0e2f41b584a4011ed1cc01c34eef236c1948bd80ebba0abfcba Feb 13 21:06:52.552020 containerd[1793]: 2025-02-13 21:06:52.535 [INFO][6284] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.95.192/26 handle="k8s-pod-network.cd56439f1df5a0e2f41b584a4011ed1cc01c34eef236c1948bd80ebba0abfcba" host="ci-4186.1.1-a-9675b630d5" Feb 13 21:06:52.552020 containerd[1793]: 2025-02-13 21:06:52.539 [INFO][6284] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.95.197/26] block=192.168.95.192/26 handle="k8s-pod-network.cd56439f1df5a0e2f41b584a4011ed1cc01c34eef236c1948bd80ebba0abfcba" host="ci-4186.1.1-a-9675b630d5" Feb 13 21:06:52.552020 containerd[1793]: 2025-02-13 21:06:52.539 [INFO][6284] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.95.197/26] handle="k8s-pod-network.cd56439f1df5a0e2f41b584a4011ed1cc01c34eef236c1948bd80ebba0abfcba" host="ci-4186.1.1-a-9675b630d5" Feb 13 21:06:52.552020 containerd[1793]: 2025-02-13 21:06:52.539 [INFO][6284] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
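Every address assigned so far (.193 through .197) comes out of the single affine block 192.168.95.192/26, which spans the 64 addresses 192.168.95.192-192.168.95.255. The snippet below is only a sanity check of that: the address list and pod names are copied from the log, the rest is standard-library membership testing.

package main

import (
	"fmt"
	"net/netip"
)

func main() {
	// The node's affine block from the log, and the addresses claimed so far.
	block := netip.MustParsePrefix("192.168.95.192/26")
	claimed := []string{
		"192.168.95.193", // calico-apiserver-dfcf67d5f-gh88q
		"192.168.95.194", // calico-apiserver-dfcf67d5f-gwq2t
		"192.168.95.195", // coredns-668d6bf9bc-wr725
		"192.168.95.196", // calico-kube-controllers-76848bdf96-56fb4
		"192.168.95.197", // csi-node-driver-dv5v9
	}
	fmt.Printf("block %s holds %d addresses\n", block, 1<<(32-block.Bits())) // 64
	for _, a := range claimed {
		fmt.Println(a, "in block:", block.Contains(netip.MustParseAddr(a))) // all true
	}
}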
Feb 13 21:06:52.552020 containerd[1793]: 2025-02-13 21:06:52.539 [INFO][6284] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.95.197/26] IPv6=[] ContainerID="cd56439f1df5a0e2f41b584a4011ed1cc01c34eef236c1948bd80ebba0abfcba" HandleID="k8s-pod-network.cd56439f1df5a0e2f41b584a4011ed1cc01c34eef236c1948bd80ebba0abfcba" Workload="ci--4186.1.1--a--9675b630d5-k8s-csi--node--driver--dv5v9-eth0" Feb 13 21:06:52.552970 containerd[1793]: 2025-02-13 21:06:52.541 [INFO][6186] cni-plugin/k8s.go 386: Populated endpoint ContainerID="cd56439f1df5a0e2f41b584a4011ed1cc01c34eef236c1948bd80ebba0abfcba" Namespace="calico-system" Pod="csi-node-driver-dv5v9" WorkloadEndpoint="ci--4186.1.1--a--9675b630d5-k8s-csi--node--driver--dv5v9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4186.1.1--a--9675b630d5-k8s-csi--node--driver--dv5v9-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"22f50a6b-4846-46c1-8c41-99e176056718", ResourceVersion:"583", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 21, 6, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"84cddb44f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4186.1.1-a-9675b630d5", ContainerID:"", Pod:"csi-node-driver-dv5v9", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.95.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali42877eae1ce", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 21:06:52.552970 containerd[1793]: 2025-02-13 21:06:52.541 [INFO][6186] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.95.197/32] ContainerID="cd56439f1df5a0e2f41b584a4011ed1cc01c34eef236c1948bd80ebba0abfcba" Namespace="calico-system" Pod="csi-node-driver-dv5v9" WorkloadEndpoint="ci--4186.1.1--a--9675b630d5-k8s-csi--node--driver--dv5v9-eth0" Feb 13 21:06:52.552970 containerd[1793]: 2025-02-13 21:06:52.541 [INFO][6186] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali42877eae1ce ContainerID="cd56439f1df5a0e2f41b584a4011ed1cc01c34eef236c1948bd80ebba0abfcba" Namespace="calico-system" Pod="csi-node-driver-dv5v9" WorkloadEndpoint="ci--4186.1.1--a--9675b630d5-k8s-csi--node--driver--dv5v9-eth0" Feb 13 21:06:52.552970 containerd[1793]: 2025-02-13 21:06:52.543 [INFO][6186] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="cd56439f1df5a0e2f41b584a4011ed1cc01c34eef236c1948bd80ebba0abfcba" Namespace="calico-system" Pod="csi-node-driver-dv5v9" WorkloadEndpoint="ci--4186.1.1--a--9675b630d5-k8s-csi--node--driver--dv5v9-eth0" Feb 13 21:06:52.552970 containerd[1793]: 2025-02-13 21:06:52.543 [INFO][6186] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="cd56439f1df5a0e2f41b584a4011ed1cc01c34eef236c1948bd80ebba0abfcba" Namespace="calico-system" Pod="csi-node-driver-dv5v9" WorkloadEndpoint="ci--4186.1.1--a--9675b630d5-k8s-csi--node--driver--dv5v9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4186.1.1--a--9675b630d5-k8s-csi--node--driver--dv5v9-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"22f50a6b-4846-46c1-8c41-99e176056718", ResourceVersion:"583", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 21, 6, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"84cddb44f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4186.1.1-a-9675b630d5", ContainerID:"cd56439f1df5a0e2f41b584a4011ed1cc01c34eef236c1948bd80ebba0abfcba", Pod:"csi-node-driver-dv5v9", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.95.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali42877eae1ce", MAC:"2e:66:db:f2:3a:d6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 21:06:52.552970 containerd[1793]: 2025-02-13 21:06:52.550 [INFO][6186] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="cd56439f1df5a0e2f41b584a4011ed1cc01c34eef236c1948bd80ebba0abfcba" Namespace="calico-system" Pod="csi-node-driver-dv5v9" WorkloadEndpoint="ci--4186.1.1--a--9675b630d5-k8s-csi--node--driver--dv5v9-eth0" Feb 13 21:06:52.563948 containerd[1793]: time="2025-02-13T21:06:52.563891504Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 21:06:52.563948 containerd[1793]: time="2025-02-13T21:06:52.563945464Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 21:06:52.564038 containerd[1793]: time="2025-02-13T21:06:52.563955195Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 21:06:52.564038 containerd[1793]: time="2025-02-13T21:06:52.563999053Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 21:06:52.577868 systemd[1]: Started cri-containerd-cd56439f1df5a0e2f41b584a4011ed1cc01c34eef236c1948bd80ebba0abfcba.scope - libcontainer container cd56439f1df5a0e2f41b584a4011ed1cc01c34eef236c1948bd80ebba0abfcba. Feb 13 21:06:52.592007 systemd[1]: run-netns-cni\x2d45b71b33\x2d06e4\x2da1af\x2dcf1a\x2d84052db2a1bd.mount: Deactivated successfully. Feb 13 21:06:52.592067 systemd[1]: run-netns-cni\x2db65c72e2\x2de83e\x2d8d5c\x2dd0bf\x2d465af4a04ee8.mount: Deactivated successfully. 
Feb 13 21:06:52.592111 systemd[1]: run-netns-cni\x2d3d290be4\x2d2627\x2dd607\x2d9f74\x2deeddd1c11f19.mount: Deactivated successfully. Feb 13 21:06:52.592150 systemd[1]: run-netns-cni\x2d1bf8b5ad\x2dd515\x2d4ed1\x2d4d93\x2de45deec146de.mount: Deactivated successfully. Feb 13 21:06:52.592188 systemd[1]: run-netns-cni\x2dc55d68dd\x2d3af3\x2d3b9c\x2d4b2c\x2dffde95e794eb.mount: Deactivated successfully. Feb 13 21:06:52.592228 systemd[1]: run-netns-cni\x2d21e5d81b\x2dc034\x2da03b\x2d918b\x2d0d0fb14bb11c.mount: Deactivated successfully. Feb 13 21:06:52.602256 containerd[1793]: time="2025-02-13T21:06:52.602203563Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dv5v9,Uid:22f50a6b-4846-46c1-8c41-99e176056718,Namespace:calico-system,Attempt:7,} returns sandbox id \"cd56439f1df5a0e2f41b584a4011ed1cc01c34eef236c1948bd80ebba0abfcba\"" Feb 13 21:06:52.645082 systemd-networkd[1709]: calia2564f5aea3: Link UP Feb 13 21:06:52.645292 systemd-networkd[1709]: calia2564f5aea3: Gained carrier Feb 13 21:06:52.654989 containerd[1793]: 2025-02-13 21:06:52.089 [INFO][6168] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 13 21:06:52.654989 containerd[1793]: 2025-02-13 21:06:52.097 [INFO][6168] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4186.1.1--a--9675b630d5-k8s-coredns--668d6bf9bc--pfzwq-eth0 coredns-668d6bf9bc- kube-system 322cd9eb-1852-4c5c-aae9-cdfcee960ce3 653 0 2025-02-13 21:06:29 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4186.1.1-a-9675b630d5 coredns-668d6bf9bc-pfzwq eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calia2564f5aea3 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="0ebd5ac257e2d7ac8637074e55beeadc50ad60678943afb356f645bac13b6217" Namespace="kube-system" Pod="coredns-668d6bf9bc-pfzwq" WorkloadEndpoint="ci--4186.1.1--a--9675b630d5-k8s-coredns--668d6bf9bc--pfzwq-" Feb 13 21:06:52.654989 containerd[1793]: 2025-02-13 21:06:52.097 [INFO][6168] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="0ebd5ac257e2d7ac8637074e55beeadc50ad60678943afb356f645bac13b6217" Namespace="kube-system" Pod="coredns-668d6bf9bc-pfzwq" WorkloadEndpoint="ci--4186.1.1--a--9675b630d5-k8s-coredns--668d6bf9bc--pfzwq-eth0" Feb 13 21:06:52.654989 containerd[1793]: 2025-02-13 21:06:52.116 [INFO][6289] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0ebd5ac257e2d7ac8637074e55beeadc50ad60678943afb356f645bac13b6217" HandleID="k8s-pod-network.0ebd5ac257e2d7ac8637074e55beeadc50ad60678943afb356f645bac13b6217" Workload="ci--4186.1.1--a--9675b630d5-k8s-coredns--668d6bf9bc--pfzwq-eth0" Feb 13 21:06:52.654989 containerd[1793]: 2025-02-13 21:06:52.119 [INFO][6289] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="0ebd5ac257e2d7ac8637074e55beeadc50ad60678943afb356f645bac13b6217" HandleID="k8s-pod-network.0ebd5ac257e2d7ac8637074e55beeadc50ad60678943afb356f645bac13b6217" Workload="ci--4186.1.1--a--9675b630d5-k8s-coredns--668d6bf9bc--pfzwq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002f8730), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4186.1.1-a-9675b630d5", "pod":"coredns-668d6bf9bc-pfzwq", "timestamp":"2025-02-13 21:06:52.11641063 +0000 UTC"}, Hostname:"ci-4186.1.1-a-9675b630d5", IPv4Pools:[]net.IPNet{}, 
IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 21:06:52.654989 containerd[1793]: 2025-02-13 21:06:52.119 [INFO][6289] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 21:06:52.654989 containerd[1793]: 2025-02-13 21:06:52.539 [INFO][6289] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 21:06:52.654989 containerd[1793]: 2025-02-13 21:06:52.539 [INFO][6289] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4186.1.1-a-9675b630d5' Feb 13 21:06:52.654989 containerd[1793]: 2025-02-13 21:06:52.615 [INFO][6289] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.0ebd5ac257e2d7ac8637074e55beeadc50ad60678943afb356f645bac13b6217" host="ci-4186.1.1-a-9675b630d5" Feb 13 21:06:52.654989 containerd[1793]: 2025-02-13 21:06:52.623 [INFO][6289] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4186.1.1-a-9675b630d5" Feb 13 21:06:52.654989 containerd[1793]: 2025-02-13 21:06:52.629 [INFO][6289] ipam/ipam.go 489: Trying affinity for 192.168.95.192/26 host="ci-4186.1.1-a-9675b630d5" Feb 13 21:06:52.654989 containerd[1793]: 2025-02-13 21:06:52.630 [INFO][6289] ipam/ipam.go 155: Attempting to load block cidr=192.168.95.192/26 host="ci-4186.1.1-a-9675b630d5" Feb 13 21:06:52.654989 containerd[1793]: 2025-02-13 21:06:52.632 [INFO][6289] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.95.192/26 host="ci-4186.1.1-a-9675b630d5" Feb 13 21:06:52.654989 containerd[1793]: 2025-02-13 21:06:52.632 [INFO][6289] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.95.192/26 handle="k8s-pod-network.0ebd5ac257e2d7ac8637074e55beeadc50ad60678943afb356f645bac13b6217" host="ci-4186.1.1-a-9675b630d5" Feb 13 21:06:52.654989 containerd[1793]: 2025-02-13 21:06:52.634 [INFO][6289] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.0ebd5ac257e2d7ac8637074e55beeadc50ad60678943afb356f645bac13b6217 Feb 13 21:06:52.654989 containerd[1793]: 2025-02-13 21:06:52.637 [INFO][6289] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.95.192/26 handle="k8s-pod-network.0ebd5ac257e2d7ac8637074e55beeadc50ad60678943afb356f645bac13b6217" host="ci-4186.1.1-a-9675b630d5" Feb 13 21:06:52.654989 containerd[1793]: 2025-02-13 21:06:52.642 [INFO][6289] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.95.198/26] block=192.168.95.192/26 handle="k8s-pod-network.0ebd5ac257e2d7ac8637074e55beeadc50ad60678943afb356f645bac13b6217" host="ci-4186.1.1-a-9675b630d5" Feb 13 21:06:52.654989 containerd[1793]: 2025-02-13 21:06:52.642 [INFO][6289] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.95.198/26] handle="k8s-pod-network.0ebd5ac257e2d7ac8637074e55beeadc50ad60678943afb356f645bac13b6217" host="ci-4186.1.1-a-9675b630d5" Feb 13 21:06:52.654989 containerd[1793]: 2025-02-13 21:06:52.642 [INFO][6289] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Feb 13 21:06:52.654989 containerd[1793]: 2025-02-13 21:06:52.642 [INFO][6289] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.95.198/26] IPv6=[] ContainerID="0ebd5ac257e2d7ac8637074e55beeadc50ad60678943afb356f645bac13b6217" HandleID="k8s-pod-network.0ebd5ac257e2d7ac8637074e55beeadc50ad60678943afb356f645bac13b6217" Workload="ci--4186.1.1--a--9675b630d5-k8s-coredns--668d6bf9bc--pfzwq-eth0" Feb 13 21:06:52.656261 containerd[1793]: 2025-02-13 21:06:52.643 [INFO][6168] cni-plugin/k8s.go 386: Populated endpoint ContainerID="0ebd5ac257e2d7ac8637074e55beeadc50ad60678943afb356f645bac13b6217" Namespace="kube-system" Pod="coredns-668d6bf9bc-pfzwq" WorkloadEndpoint="ci--4186.1.1--a--9675b630d5-k8s-coredns--668d6bf9bc--pfzwq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4186.1.1--a--9675b630d5-k8s-coredns--668d6bf9bc--pfzwq-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"322cd9eb-1852-4c5c-aae9-cdfcee960ce3", ResourceVersion:"653", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 21, 6, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4186.1.1-a-9675b630d5", ContainerID:"", Pod:"coredns-668d6bf9bc-pfzwq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.95.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia2564f5aea3", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 21:06:52.656261 containerd[1793]: 2025-02-13 21:06:52.643 [INFO][6168] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.95.198/32] ContainerID="0ebd5ac257e2d7ac8637074e55beeadc50ad60678943afb356f645bac13b6217" Namespace="kube-system" Pod="coredns-668d6bf9bc-pfzwq" WorkloadEndpoint="ci--4186.1.1--a--9675b630d5-k8s-coredns--668d6bf9bc--pfzwq-eth0" Feb 13 21:06:52.656261 containerd[1793]: 2025-02-13 21:06:52.643 [INFO][6168] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia2564f5aea3 ContainerID="0ebd5ac257e2d7ac8637074e55beeadc50ad60678943afb356f645bac13b6217" Namespace="kube-system" Pod="coredns-668d6bf9bc-pfzwq" WorkloadEndpoint="ci--4186.1.1--a--9675b630d5-k8s-coredns--668d6bf9bc--pfzwq-eth0" Feb 13 21:06:52.656261 containerd[1793]: 2025-02-13 21:06:52.645 [INFO][6168] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0ebd5ac257e2d7ac8637074e55beeadc50ad60678943afb356f645bac13b6217" Namespace="kube-system" Pod="coredns-668d6bf9bc-pfzwq" 
WorkloadEndpoint="ci--4186.1.1--a--9675b630d5-k8s-coredns--668d6bf9bc--pfzwq-eth0" Feb 13 21:06:52.656261 containerd[1793]: 2025-02-13 21:06:52.645 [INFO][6168] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="0ebd5ac257e2d7ac8637074e55beeadc50ad60678943afb356f645bac13b6217" Namespace="kube-system" Pod="coredns-668d6bf9bc-pfzwq" WorkloadEndpoint="ci--4186.1.1--a--9675b630d5-k8s-coredns--668d6bf9bc--pfzwq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4186.1.1--a--9675b630d5-k8s-coredns--668d6bf9bc--pfzwq-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"322cd9eb-1852-4c5c-aae9-cdfcee960ce3", ResourceVersion:"653", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 21, 6, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4186.1.1-a-9675b630d5", ContainerID:"0ebd5ac257e2d7ac8637074e55beeadc50ad60678943afb356f645bac13b6217", Pod:"coredns-668d6bf9bc-pfzwq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.95.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia2564f5aea3", MAC:"e2:a6:8b:4b:d5:2d", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 21:06:52.656261 containerd[1793]: 2025-02-13 21:06:52.653 [INFO][6168] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="0ebd5ac257e2d7ac8637074e55beeadc50ad60678943afb356f645bac13b6217" Namespace="kube-system" Pod="coredns-668d6bf9bc-pfzwq" WorkloadEndpoint="ci--4186.1.1--a--9675b630d5-k8s-coredns--668d6bf9bc--pfzwq-eth0" Feb 13 21:06:52.668601 containerd[1793]: time="2025-02-13T21:06:52.668560223Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 21:06:52.668601 containerd[1793]: time="2025-02-13T21:06:52.668587058Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 21:06:52.668830 containerd[1793]: time="2025-02-13T21:06:52.668793485Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 21:06:52.668875 containerd[1793]: time="2025-02-13T21:06:52.668864073Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 21:06:52.700880 systemd[1]: Started cri-containerd-0ebd5ac257e2d7ac8637074e55beeadc50ad60678943afb356f645bac13b6217.scope - libcontainer container 0ebd5ac257e2d7ac8637074e55beeadc50ad60678943afb356f645bac13b6217. Feb 13 21:06:52.735240 containerd[1793]: time="2025-02-13T21:06:52.735212650Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-pfzwq,Uid:322cd9eb-1852-4c5c-aae9-cdfcee960ce3,Namespace:kube-system,Attempt:7,} returns sandbox id \"0ebd5ac257e2d7ac8637074e55beeadc50ad60678943afb356f645bac13b6217\"" Feb 13 21:06:52.736911 containerd[1793]: time="2025-02-13T21:06:52.736888840Z" level=info msg="CreateContainer within sandbox \"0ebd5ac257e2d7ac8637074e55beeadc50ad60678943afb356f645bac13b6217\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 21:06:52.742192 containerd[1793]: time="2025-02-13T21:06:52.742151162Z" level=info msg="CreateContainer within sandbox \"0ebd5ac257e2d7ac8637074e55beeadc50ad60678943afb356f645bac13b6217\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"86219968fd1e0cddab94315a6647206f9c4ffcae0abc517fe284e965ba95ce59\"" Feb 13 21:06:52.742387 containerd[1793]: time="2025-02-13T21:06:52.742371767Z" level=info msg="StartContainer for \"86219968fd1e0cddab94315a6647206f9c4ffcae0abc517fe284e965ba95ce59\"" Feb 13 21:06:52.761793 systemd[1]: Started cri-containerd-86219968fd1e0cddab94315a6647206f9c4ffcae0abc517fe284e965ba95ce59.scope - libcontainer container 86219968fd1e0cddab94315a6647206f9c4ffcae0abc517fe284e965ba95ce59. Feb 13 21:06:52.773243 containerd[1793]: time="2025-02-13T21:06:52.773219403Z" level=info msg="StartContainer for \"86219968fd1e0cddab94315a6647206f9c4ffcae0abc517fe284e965ba95ce59\" returns successfully" Feb 13 21:06:53.110005 kubelet[3104]: I0213 21:06:53.109768 3104 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-wr725" podStartSLOduration=24.109746611 podStartE2EDuration="24.109746611s" podCreationTimestamp="2025-02-13 21:06:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 21:06:53.107946623 +0000 UTC m=+29.259130422" watchObservedRunningTime="2025-02-13 21:06:53.109746611 +0000 UTC m=+29.260930403" Feb 13 21:06:53.125206 kubelet[3104]: I0213 21:06:53.125152 3104 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-pfzwq" podStartSLOduration=24.125131615 podStartE2EDuration="24.125131615s" podCreationTimestamp="2025-02-13 21:06:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 21:06:53.117050704 +0000 UTC m=+29.268234515" watchObservedRunningTime="2025-02-13 21:06:53.125131615 +0000 UTC m=+29.276315416" Feb 13 21:06:53.597475 sshd[6372]: Invalid user ec2-user from 109.206.236.167 port 58674 Feb 13 21:06:53.638038 systemd-networkd[1709]: cali486c8c0cfe9: Gained IPv6LL Feb 13 21:06:53.638869 systemd-networkd[1709]: cali7fd2366729c: Gained IPv6LL Feb 13 21:06:53.865071 sshd[6372]: Connection closed by invalid user ec2-user 109.206.236.167 port 58674 [preauth] Feb 13 21:06:53.868372 systemd[1]: sshd@13-147.28.180.173:22-109.206.236.167:58674.service: Deactivated successfully. 
Feb 13 21:06:53.957973 systemd-networkd[1709]: cali60c34b88680: Gained IPv6LL Feb 13 21:06:54.022928 systemd-networkd[1709]: cali25f7c795645: Gained IPv6LL Feb 13 21:06:54.341920 systemd-networkd[1709]: cali42877eae1ce: Gained IPv6LL Feb 13 21:06:54.661971 systemd-networkd[1709]: calia2564f5aea3: Gained IPv6LL Feb 13 21:06:55.535479 kubelet[3104]: I0213 21:06:55.535372 3104 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 21:06:56.242639 kernel: bpftool[7064]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Feb 13 21:06:56.394567 systemd-networkd[1709]: vxlan.calico: Link UP Feb 13 21:06:56.394572 systemd-networkd[1709]: vxlan.calico: Gained carrier Feb 13 21:06:56.616479 containerd[1793]: time="2025-02-13T21:06:56.616457125Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 21:06:56.616716 containerd[1793]: time="2025-02-13T21:06:56.616616256Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=42001404" Feb 13 21:06:56.617025 containerd[1793]: time="2025-02-13T21:06:56.617011379Z" level=info msg="ImageCreate event name:\"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 21:06:56.618657 containerd[1793]: time="2025-02-13T21:06:56.618643231Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 21:06:56.619099 containerd[1793]: time="2025-02-13T21:06:56.619086209Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 4.42996651s" Feb 13 21:06:56.619142 containerd[1793]: time="2025-02-13T21:06:56.619100105Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Feb 13 21:06:56.619594 containerd[1793]: time="2025-02-13T21:06:56.619583897Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Feb 13 21:06:56.620130 containerd[1793]: time="2025-02-13T21:06:56.620117544Z" level=info msg="CreateContainer within sandbox \"72cf6fbec6ba05f30c657f7f5d8aa51f994a8b0a390cac63c8354cb720167755\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Feb 13 21:06:56.624128 containerd[1793]: time="2025-02-13T21:06:56.624083182Z" level=info msg="CreateContainer within sandbox \"72cf6fbec6ba05f30c657f7f5d8aa51f994a8b0a390cac63c8354cb720167755\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"e965fa3e194e70c65e941a50854ec1917fff83faa5f34c709af8bf411658bc2a\"" Feb 13 21:06:56.624324 containerd[1793]: time="2025-02-13T21:06:56.624312169Z" level=info msg="StartContainer for \"e965fa3e194e70c65e941a50854ec1917fff83faa5f34c709af8bf411658bc2a\"" Feb 13 21:06:56.648805 systemd[1]: Started cri-containerd-e965fa3e194e70c65e941a50854ec1917fff83faa5f34c709af8bf411658bc2a.scope - libcontainer container e965fa3e194e70c65e941a50854ec1917fff83faa5f34c709af8bf411658bc2a. 
Feb 13 21:06:56.671427 containerd[1793]: time="2025-02-13T21:06:56.671405003Z" level=info msg="StartContainer for \"e965fa3e194e70c65e941a50854ec1917fff83faa5f34c709af8bf411658bc2a\" returns successfully" Feb 13 21:06:57.087747 containerd[1793]: time="2025-02-13T21:06:57.087696900Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 21:06:57.087935 containerd[1793]: time="2025-02-13T21:06:57.087881182Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=77" Feb 13 21:06:57.089437 containerd[1793]: time="2025-02-13T21:06:57.089422580Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 469.82197ms" Feb 13 21:06:57.089468 containerd[1793]: time="2025-02-13T21:06:57.089441193Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Feb 13 21:06:57.090001 containerd[1793]: time="2025-02-13T21:06:57.089977451Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Feb 13 21:06:57.090525 containerd[1793]: time="2025-02-13T21:06:57.090511197Z" level=info msg="CreateContainer within sandbox \"75fe39520a05363bdf7baffeb1a37a94e90308350217659132d1080c8a1b3d4c\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Feb 13 21:06:57.094989 containerd[1793]: time="2025-02-13T21:06:57.094940958Z" level=info msg="CreateContainer within sandbox \"75fe39520a05363bdf7baffeb1a37a94e90308350217659132d1080c8a1b3d4c\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"4f2cd0ce6ccef6b953db90c1299d452b346be3f4cd64e9276ed873445df8080f\"" Feb 13 21:06:57.095264 containerd[1793]: time="2025-02-13T21:06:57.095223849Z" level=info msg="StartContainer for \"4f2cd0ce6ccef6b953db90c1299d452b346be3f4cd64e9276ed873445df8080f\"" Feb 13 21:06:57.115804 systemd[1]: Started cri-containerd-4f2cd0ce6ccef6b953db90c1299d452b346be3f4cd64e9276ed873445df8080f.scope - libcontainer container 4f2cd0ce6ccef6b953db90c1299d452b346be3f4cd64e9276ed873445df8080f. 
Feb 13 21:06:57.141062 containerd[1793]: time="2025-02-13T21:06:57.141021971Z" level=info msg="StartContainer for \"4f2cd0ce6ccef6b953db90c1299d452b346be3f4cd64e9276ed873445df8080f\" returns successfully" Feb 13 21:06:57.141518 kubelet[3104]: I0213 21:06:57.141486 3104 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-dfcf67d5f-gh88q" podStartSLOduration=16.710914612 podStartE2EDuration="21.141471706s" podCreationTimestamp="2025-02-13 21:06:36 +0000 UTC" firstStartedPulling="2025-02-13 21:06:52.188965064 +0000 UTC m=+28.340148858" lastFinishedPulling="2025-02-13 21:06:56.619522158 +0000 UTC m=+32.770705952" observedRunningTime="2025-02-13 21:06:57.14130241 +0000 UTC m=+33.292486204" watchObservedRunningTime="2025-02-13 21:06:57.141471706 +0000 UTC m=+33.292655498" Feb 13 21:06:57.733910 systemd-networkd[1709]: vxlan.calico: Gained IPv6LL Feb 13 21:06:58.123702 kubelet[3104]: I0213 21:06:58.123673 3104 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 21:06:58.129983 kubelet[3104]: I0213 21:06:58.129948 3104 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-dfcf67d5f-gwq2t" podStartSLOduration=17.386205029 podStartE2EDuration="22.129935603s" podCreationTimestamp="2025-02-13 21:06:36 +0000 UTC" firstStartedPulling="2025-02-13 21:06:52.346168065 +0000 UTC m=+28.497351871" lastFinishedPulling="2025-02-13 21:06:57.08989865 +0000 UTC m=+33.241082445" observedRunningTime="2025-02-13 21:06:58.129891869 +0000 UTC m=+34.281075665" watchObservedRunningTime="2025-02-13 21:06:58.129935603 +0000 UTC m=+34.281119397" Feb 13 21:06:59.125368 kubelet[3104]: I0213 21:06:59.125311 3104 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 21:06:59.617386 containerd[1793]: time="2025-02-13T21:06:59.617361455Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 21:06:59.617661 containerd[1793]: time="2025-02-13T21:06:59.617593424Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=34141192" Feb 13 21:06:59.617976 containerd[1793]: time="2025-02-13T21:06:59.617963734Z" level=info msg="ImageCreate event name:\"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 21:06:59.618888 containerd[1793]: time="2025-02-13T21:06:59.618874251Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 21:06:59.619579 containerd[1793]: time="2025-02-13T21:06:59.619565663Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"35634244\" in 2.529574737s" Feb 13 21:06:59.619615 containerd[1793]: time="2025-02-13T21:06:59.619583513Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\"" Feb 13 21:06:59.620053 
containerd[1793]: time="2025-02-13T21:06:59.620043801Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Feb 13 21:06:59.622889 containerd[1793]: time="2025-02-13T21:06:59.622872960Z" level=info msg="CreateContainer within sandbox \"8a9d8d2f61819c840a2837315fb543fa7899d8c3a2f3865cab29b88ae253b791\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Feb 13 21:06:59.627082 containerd[1793]: time="2025-02-13T21:06:59.627067525Z" level=info msg="CreateContainer within sandbox \"8a9d8d2f61819c840a2837315fb543fa7899d8c3a2f3865cab29b88ae253b791\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"50654aaea4100e81e86e69222e1905d4ea8b6283261b9d88dbe513271c03144b\"" Feb 13 21:06:59.627249 containerd[1793]: time="2025-02-13T21:06:59.627233734Z" level=info msg="StartContainer for \"50654aaea4100e81e86e69222e1905d4ea8b6283261b9d88dbe513271c03144b\"" Feb 13 21:06:59.660924 systemd[1]: Started cri-containerd-50654aaea4100e81e86e69222e1905d4ea8b6283261b9d88dbe513271c03144b.scope - libcontainer container 50654aaea4100e81e86e69222e1905d4ea8b6283261b9d88dbe513271c03144b. Feb 13 21:06:59.688124 containerd[1793]: time="2025-02-13T21:06:59.688101816Z" level=info msg="StartContainer for \"50654aaea4100e81e86e69222e1905d4ea8b6283261b9d88dbe513271c03144b\" returns successfully" Feb 13 21:07:00.135949 kubelet[3104]: I0213 21:07:00.135908 3104 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-76848bdf96-56fb4" podStartSLOduration=17.016990302 podStartE2EDuration="24.135896132s" podCreationTimestamp="2025-02-13 21:06:36 +0000 UTC" firstStartedPulling="2025-02-13 21:06:52.501092772 +0000 UTC m=+28.652276566" lastFinishedPulling="2025-02-13 21:06:59.619998602 +0000 UTC m=+35.771182396" observedRunningTime="2025-02-13 21:07:00.135616604 +0000 UTC m=+36.286800404" watchObservedRunningTime="2025-02-13 21:07:00.135896132 +0000 UTC m=+36.287079927" Feb 13 21:07:01.132699 kubelet[3104]: I0213 21:07:01.132594 3104 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 21:07:01.427207 containerd[1793]: time="2025-02-13T21:07:01.427141231Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 21:07:01.427492 containerd[1793]: time="2025-02-13T21:07:01.427318064Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632" Feb 13 21:07:01.427762 containerd[1793]: time="2025-02-13T21:07:01.427747612Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 21:07:01.429183 containerd[1793]: time="2025-02-13T21:07:01.429169276Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 21:07:01.429417 containerd[1793]: time="2025-02-13T21:07:01.429402882Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 1.809346506s" Feb 13 21:07:01.429465 containerd[1793]: 
time="2025-02-13T21:07:01.429417722Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\"" Feb 13 21:07:01.430627 containerd[1793]: time="2025-02-13T21:07:01.430611524Z" level=info msg="CreateContainer within sandbox \"cd56439f1df5a0e2f41b584a4011ed1cc01c34eef236c1948bd80ebba0abfcba\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Feb 13 21:07:01.436412 containerd[1793]: time="2025-02-13T21:07:01.436397649Z" level=info msg="CreateContainer within sandbox \"cd56439f1df5a0e2f41b584a4011ed1cc01c34eef236c1948bd80ebba0abfcba\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"e732c2f521086080fd286fe940667298a0ab9a07747293dd9d823178f7e18045\"" Feb 13 21:07:01.436747 containerd[1793]: time="2025-02-13T21:07:01.436703446Z" level=info msg="StartContainer for \"e732c2f521086080fd286fe940667298a0ab9a07747293dd9d823178f7e18045\"" Feb 13 21:07:01.467916 systemd[1]: Started cri-containerd-e732c2f521086080fd286fe940667298a0ab9a07747293dd9d823178f7e18045.scope - libcontainer container e732c2f521086080fd286fe940667298a0ab9a07747293dd9d823178f7e18045. Feb 13 21:07:01.484296 containerd[1793]: time="2025-02-13T21:07:01.484266119Z" level=info msg="StartContainer for \"e732c2f521086080fd286fe940667298a0ab9a07747293dd9d823178f7e18045\" returns successfully" Feb 13 21:07:01.485009 containerd[1793]: time="2025-02-13T21:07:01.484993990Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Feb 13 21:07:03.334650 containerd[1793]: time="2025-02-13T21:07:03.334599599Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 21:07:03.334886 containerd[1793]: time="2025-02-13T21:07:03.334821610Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081" Feb 13 21:07:03.335217 containerd[1793]: time="2025-02-13T21:07:03.335177537Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 21:07:03.336161 containerd[1793]: time="2025-02-13T21:07:03.336121887Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 21:07:03.336556 containerd[1793]: time="2025-02-13T21:07:03.336513128Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 1.851499026s" Feb 13 21:07:03.336556 containerd[1793]: time="2025-02-13T21:07:03.336528711Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\"" Feb 13 21:07:03.337754 containerd[1793]: time="2025-02-13T21:07:03.337727134Z" level=info msg="CreateContainer within sandbox \"cd56439f1df5a0e2f41b584a4011ed1cc01c34eef236c1948bd80ebba0abfcba\" for container 
&ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Feb 13 21:07:03.342489 containerd[1793]: time="2025-02-13T21:07:03.342474360Z" level=info msg="CreateContainer within sandbox \"cd56439f1df5a0e2f41b584a4011ed1cc01c34eef236c1948bd80ebba0abfcba\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"049e432509d1479cab9feb35c3f35d67ca492d4d1f6e653bce7770e241898b71\"" Feb 13 21:07:03.342758 containerd[1793]: time="2025-02-13T21:07:03.342745881Z" level=info msg="StartContainer for \"049e432509d1479cab9feb35c3f35d67ca492d4d1f6e653bce7770e241898b71\"" Feb 13 21:07:03.369869 systemd[1]: Started cri-containerd-049e432509d1479cab9feb35c3f35d67ca492d4d1f6e653bce7770e241898b71.scope - libcontainer container 049e432509d1479cab9feb35c3f35d67ca492d4d1f6e653bce7770e241898b71. Feb 13 21:07:03.383502 containerd[1793]: time="2025-02-13T21:07:03.383482219Z" level=info msg="StartContainer for \"049e432509d1479cab9feb35c3f35d67ca492d4d1f6e653bce7770e241898b71\" returns successfully" Feb 13 21:07:03.947918 kubelet[3104]: I0213 21:07:03.947810 3104 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Feb 13 21:07:03.947918 kubelet[3104]: I0213 21:07:03.947888 3104 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Feb 13 21:07:04.185903 kubelet[3104]: I0213 21:07:04.185751 3104 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-dv5v9" podStartSLOduration=17.451460465 podStartE2EDuration="28.18571233s" podCreationTimestamp="2025-02-13 21:06:36 +0000 UTC" firstStartedPulling="2025-02-13 21:06:52.60280119 +0000 UTC m=+28.753984989" lastFinishedPulling="2025-02-13 21:07:03.33705306 +0000 UTC m=+39.488236854" observedRunningTime="2025-02-13 21:07:04.184845757 +0000 UTC m=+40.336029626" watchObservedRunningTime="2025-02-13 21:07:04.18571233 +0000 UTC m=+40.336896215" Feb 13 21:07:11.179159 systemd[1]: Started sshd@14-147.28.180.173:22-109.206.236.167:51622.service - OpenSSH per-connection server daemon (109.206.236.167:51622). Feb 13 21:07:11.330383 kubelet[3104]: I0213 21:07:11.330265 3104 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 21:07:12.255141 sshd[7452]: Connection closed by authenticating user docker 109.206.236.167 port 51622 [preauth] Feb 13 21:07:12.258424 systemd[1]: sshd@14-147.28.180.173:22-109.206.236.167:51622.service: Deactivated successfully. 
Feb 13 21:07:21.782716 kubelet[3104]: I0213 21:07:21.782439 3104 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 21:07:23.902959 containerd[1793]: time="2025-02-13T21:07:23.902892179Z" level=info msg="StopPodSandbox for \"e9526415ddf2b5def2736c83dbde440214ca6bbca341b6a7e7eede781d227057\"" Feb 13 21:07:23.903320 containerd[1793]: time="2025-02-13T21:07:23.902968135Z" level=info msg="TearDown network for sandbox \"e9526415ddf2b5def2736c83dbde440214ca6bbca341b6a7e7eede781d227057\" successfully" Feb 13 21:07:23.903320 containerd[1793]: time="2025-02-13T21:07:23.902978985Z" level=info msg="StopPodSandbox for \"e9526415ddf2b5def2736c83dbde440214ca6bbca341b6a7e7eede781d227057\" returns successfully" Feb 13 21:07:23.903320 containerd[1793]: time="2025-02-13T21:07:23.903256286Z" level=info msg="RemovePodSandbox for \"e9526415ddf2b5def2736c83dbde440214ca6bbca341b6a7e7eede781d227057\"" Feb 13 21:07:23.903320 containerd[1793]: time="2025-02-13T21:07:23.903288541Z" level=info msg="Forcibly stopping sandbox \"e9526415ddf2b5def2736c83dbde440214ca6bbca341b6a7e7eede781d227057\"" Feb 13 21:07:23.903434 containerd[1793]: time="2025-02-13T21:07:23.903334272Z" level=info msg="TearDown network for sandbox \"e9526415ddf2b5def2736c83dbde440214ca6bbca341b6a7e7eede781d227057\" successfully" Feb 13 21:07:23.904864 containerd[1793]: time="2025-02-13T21:07:23.904850645Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e9526415ddf2b5def2736c83dbde440214ca6bbca341b6a7e7eede781d227057\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 21:07:23.904910 containerd[1793]: time="2025-02-13T21:07:23.904877137Z" level=info msg="RemovePodSandbox \"e9526415ddf2b5def2736c83dbde440214ca6bbca341b6a7e7eede781d227057\" returns successfully" Feb 13 21:07:23.905154 containerd[1793]: time="2025-02-13T21:07:23.905114334Z" level=info msg="StopPodSandbox for \"492d54f71e9ddb34de33c29601b5531497a6eb52fef88332daf738ef9a4555fd\"" Feb 13 21:07:23.905204 containerd[1793]: time="2025-02-13T21:07:23.905171717Z" level=info msg="TearDown network for sandbox \"492d54f71e9ddb34de33c29601b5531497a6eb52fef88332daf738ef9a4555fd\" successfully" Feb 13 21:07:23.905204 containerd[1793]: time="2025-02-13T21:07:23.905177762Z" level=info msg="StopPodSandbox for \"492d54f71e9ddb34de33c29601b5531497a6eb52fef88332daf738ef9a4555fd\" returns successfully" Feb 13 21:07:23.905310 containerd[1793]: time="2025-02-13T21:07:23.905297349Z" level=info msg="RemovePodSandbox for \"492d54f71e9ddb34de33c29601b5531497a6eb52fef88332daf738ef9a4555fd\"" Feb 13 21:07:23.905381 containerd[1793]: time="2025-02-13T21:07:23.905310034Z" level=info msg="Forcibly stopping sandbox \"492d54f71e9ddb34de33c29601b5531497a6eb52fef88332daf738ef9a4555fd\"" Feb 13 21:07:23.905401 containerd[1793]: time="2025-02-13T21:07:23.905359386Z" level=info msg="TearDown network for sandbox \"492d54f71e9ddb34de33c29601b5531497a6eb52fef88332daf738ef9a4555fd\" successfully" Feb 13 21:07:23.906585 containerd[1793]: time="2025-02-13T21:07:23.906543008Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"492d54f71e9ddb34de33c29601b5531497a6eb52fef88332daf738ef9a4555fd\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 21:07:23.906585 containerd[1793]: time="2025-02-13T21:07:23.906559891Z" level=info msg="RemovePodSandbox \"492d54f71e9ddb34de33c29601b5531497a6eb52fef88332daf738ef9a4555fd\" returns successfully" Feb 13 21:07:23.906803 containerd[1793]: time="2025-02-13T21:07:23.906762350Z" level=info msg="StopPodSandbox for \"28598cb5bb4f19ef2be3d2eeb2993826fb614f87c197bbbec9e1ef75161630b9\"" Feb 13 21:07:23.906914 containerd[1793]: time="2025-02-13T21:07:23.906887185Z" level=info msg="TearDown network for sandbox \"28598cb5bb4f19ef2be3d2eeb2993826fb614f87c197bbbec9e1ef75161630b9\" successfully" Feb 13 21:07:23.906914 containerd[1793]: time="2025-02-13T21:07:23.906909812Z" level=info msg="StopPodSandbox for \"28598cb5bb4f19ef2be3d2eeb2993826fb614f87c197bbbec9e1ef75161630b9\" returns successfully" Feb 13 21:07:23.907171 containerd[1793]: time="2025-02-13T21:07:23.907112589Z" level=info msg="RemovePodSandbox for \"28598cb5bb4f19ef2be3d2eeb2993826fb614f87c197bbbec9e1ef75161630b9\"" Feb 13 21:07:23.907171 containerd[1793]: time="2025-02-13T21:07:23.907146251Z" level=info msg="Forcibly stopping sandbox \"28598cb5bb4f19ef2be3d2eeb2993826fb614f87c197bbbec9e1ef75161630b9\"" Feb 13 21:07:23.907253 containerd[1793]: time="2025-02-13T21:07:23.907227523Z" level=info msg="TearDown network for sandbox \"28598cb5bb4f19ef2be3d2eeb2993826fb614f87c197bbbec9e1ef75161630b9\" successfully" Feb 13 21:07:23.908593 containerd[1793]: time="2025-02-13T21:07:23.908578684Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"28598cb5bb4f19ef2be3d2eeb2993826fb614f87c197bbbec9e1ef75161630b9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 21:07:23.908639 containerd[1793]: time="2025-02-13T21:07:23.908599424Z" level=info msg="RemovePodSandbox \"28598cb5bb4f19ef2be3d2eeb2993826fb614f87c197bbbec9e1ef75161630b9\" returns successfully" Feb 13 21:07:23.908778 containerd[1793]: time="2025-02-13T21:07:23.908711931Z" level=info msg="StopPodSandbox for \"df9c2dd26c1795b054bbe7a16242fde8712a142b2ac8cc97d56afbb8e3e869cd\"" Feb 13 21:07:23.908847 containerd[1793]: time="2025-02-13T21:07:23.908804073Z" level=info msg="TearDown network for sandbox \"df9c2dd26c1795b054bbe7a16242fde8712a142b2ac8cc97d56afbb8e3e869cd\" successfully" Feb 13 21:07:23.908847 containerd[1793]: time="2025-02-13T21:07:23.908810699Z" level=info msg="StopPodSandbox for \"df9c2dd26c1795b054bbe7a16242fde8712a142b2ac8cc97d56afbb8e3e869cd\" returns successfully" Feb 13 21:07:23.909080 containerd[1793]: time="2025-02-13T21:07:23.909050597Z" level=info msg="RemovePodSandbox for \"df9c2dd26c1795b054bbe7a16242fde8712a142b2ac8cc97d56afbb8e3e869cd\"" Feb 13 21:07:23.909117 containerd[1793]: time="2025-02-13T21:07:23.909081412Z" level=info msg="Forcibly stopping sandbox \"df9c2dd26c1795b054bbe7a16242fde8712a142b2ac8cc97d56afbb8e3e869cd\"" Feb 13 21:07:23.909155 containerd[1793]: time="2025-02-13T21:07:23.909114691Z" level=info msg="TearDown network for sandbox \"df9c2dd26c1795b054bbe7a16242fde8712a142b2ac8cc97d56afbb8e3e869cd\" successfully" Feb 13 21:07:23.910376 containerd[1793]: time="2025-02-13T21:07:23.910340969Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"df9c2dd26c1795b054bbe7a16242fde8712a142b2ac8cc97d56afbb8e3e869cd\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 21:07:23.910376 containerd[1793]: time="2025-02-13T21:07:23.910374434Z" level=info msg="RemovePodSandbox \"df9c2dd26c1795b054bbe7a16242fde8712a142b2ac8cc97d56afbb8e3e869cd\" returns successfully" Feb 13 21:07:23.910497 containerd[1793]: time="2025-02-13T21:07:23.910484353Z" level=info msg="StopPodSandbox for \"d089ce282490e797583b2adbc06fdf6162be40bd25ae6dda0763da66c066d46a\"" Feb 13 21:07:23.910534 containerd[1793]: time="2025-02-13T21:07:23.910526361Z" level=info msg="TearDown network for sandbox \"d089ce282490e797583b2adbc06fdf6162be40bd25ae6dda0763da66c066d46a\" successfully" Feb 13 21:07:23.910560 containerd[1793]: time="2025-02-13T21:07:23.910533938Z" level=info msg="StopPodSandbox for \"d089ce282490e797583b2adbc06fdf6162be40bd25ae6dda0763da66c066d46a\" returns successfully" Feb 13 21:07:23.910644 containerd[1793]: time="2025-02-13T21:07:23.910633989Z" level=info msg="RemovePodSandbox for \"d089ce282490e797583b2adbc06fdf6162be40bd25ae6dda0763da66c066d46a\"" Feb 13 21:07:23.910700 containerd[1793]: time="2025-02-13T21:07:23.910647115Z" level=info msg="Forcibly stopping sandbox \"d089ce282490e797583b2adbc06fdf6162be40bd25ae6dda0763da66c066d46a\"" Feb 13 21:07:23.910737 containerd[1793]: time="2025-02-13T21:07:23.910715266Z" level=info msg="TearDown network for sandbox \"d089ce282490e797583b2adbc06fdf6162be40bd25ae6dda0763da66c066d46a\" successfully" Feb 13 21:07:23.911876 containerd[1793]: time="2025-02-13T21:07:23.911863885Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d089ce282490e797583b2adbc06fdf6162be40bd25ae6dda0763da66c066d46a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 21:07:23.911919 containerd[1793]: time="2025-02-13T21:07:23.911886509Z" level=info msg="RemovePodSandbox \"d089ce282490e797583b2adbc06fdf6162be40bd25ae6dda0763da66c066d46a\" returns successfully" Feb 13 21:07:23.912004 containerd[1793]: time="2025-02-13T21:07:23.911995301Z" level=info msg="StopPodSandbox for \"025485b78358587da226e95884d370bfdfad5317ace9855aeb18163e64111565\"" Feb 13 21:07:23.912040 containerd[1793]: time="2025-02-13T21:07:23.912034171Z" level=info msg="TearDown network for sandbox \"025485b78358587da226e95884d370bfdfad5317ace9855aeb18163e64111565\" successfully" Feb 13 21:07:23.912082 containerd[1793]: time="2025-02-13T21:07:23.912040532Z" level=info msg="StopPodSandbox for \"025485b78358587da226e95884d370bfdfad5317ace9855aeb18163e64111565\" returns successfully" Feb 13 21:07:23.912232 containerd[1793]: time="2025-02-13T21:07:23.912223876Z" level=info msg="RemovePodSandbox for \"025485b78358587da226e95884d370bfdfad5317ace9855aeb18163e64111565\"" Feb 13 21:07:23.912255 containerd[1793]: time="2025-02-13T21:07:23.912235805Z" level=info msg="Forcibly stopping sandbox \"025485b78358587da226e95884d370bfdfad5317ace9855aeb18163e64111565\"" Feb 13 21:07:23.912297 containerd[1793]: time="2025-02-13T21:07:23.912281323Z" level=info msg="TearDown network for sandbox \"025485b78358587da226e95884d370bfdfad5317ace9855aeb18163e64111565\" successfully" Feb 13 21:07:23.913428 containerd[1793]: time="2025-02-13T21:07:23.913418236Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"025485b78358587da226e95884d370bfdfad5317ace9855aeb18163e64111565\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 21:07:23.913456 containerd[1793]: time="2025-02-13T21:07:23.913435638Z" level=info msg="RemovePodSandbox \"025485b78358587da226e95884d370bfdfad5317ace9855aeb18163e64111565\" returns successfully" Feb 13 21:07:23.913549 containerd[1793]: time="2025-02-13T21:07:23.913540255Z" level=info msg="StopPodSandbox for \"a5598caad8edf0425dff8585ea02271b1c7913ac2d0a51d3d0ba843a4980f22e\"" Feb 13 21:07:23.913588 containerd[1793]: time="2025-02-13T21:07:23.913581418Z" level=info msg="TearDown network for sandbox \"a5598caad8edf0425dff8585ea02271b1c7913ac2d0a51d3d0ba843a4980f22e\" successfully" Feb 13 21:07:23.913614 containerd[1793]: time="2025-02-13T21:07:23.913588488Z" level=info msg="StopPodSandbox for \"a5598caad8edf0425dff8585ea02271b1c7913ac2d0a51d3d0ba843a4980f22e\" returns successfully" Feb 13 21:07:23.913733 containerd[1793]: time="2025-02-13T21:07:23.913723336Z" level=info msg="RemovePodSandbox for \"a5598caad8edf0425dff8585ea02271b1c7913ac2d0a51d3d0ba843a4980f22e\"" Feb 13 21:07:23.913765 containerd[1793]: time="2025-02-13T21:07:23.913735458Z" level=info msg="Forcibly stopping sandbox \"a5598caad8edf0425dff8585ea02271b1c7913ac2d0a51d3d0ba843a4980f22e\"" Feb 13 21:07:23.913799 containerd[1793]: time="2025-02-13T21:07:23.913776917Z" level=info msg="TearDown network for sandbox \"a5598caad8edf0425dff8585ea02271b1c7913ac2d0a51d3d0ba843a4980f22e\" successfully" Feb 13 21:07:23.914955 containerd[1793]: time="2025-02-13T21:07:23.914943434Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a5598caad8edf0425dff8585ea02271b1c7913ac2d0a51d3d0ba843a4980f22e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 21:07:23.915022 containerd[1793]: time="2025-02-13T21:07:23.914964997Z" level=info msg="RemovePodSandbox \"a5598caad8edf0425dff8585ea02271b1c7913ac2d0a51d3d0ba843a4980f22e\" returns successfully" Feb 13 21:07:23.915129 containerd[1793]: time="2025-02-13T21:07:23.915119436Z" level=info msg="StopPodSandbox for \"d8a3a4b346869a0c10429ac8070838b577439133613a8cd388036c740d6d9069\"" Feb 13 21:07:23.915168 containerd[1793]: time="2025-02-13T21:07:23.915161050Z" level=info msg="TearDown network for sandbox \"d8a3a4b346869a0c10429ac8070838b577439133613a8cd388036c740d6d9069\" successfully" Feb 13 21:07:23.915187 containerd[1793]: time="2025-02-13T21:07:23.915167775Z" level=info msg="StopPodSandbox for \"d8a3a4b346869a0c10429ac8070838b577439133613a8cd388036c740d6d9069\" returns successfully" Feb 13 21:07:23.915289 containerd[1793]: time="2025-02-13T21:07:23.915280242Z" level=info msg="RemovePodSandbox for \"d8a3a4b346869a0c10429ac8070838b577439133613a8cd388036c740d6d9069\"" Feb 13 21:07:23.915309 containerd[1793]: time="2025-02-13T21:07:23.915291633Z" level=info msg="Forcibly stopping sandbox \"d8a3a4b346869a0c10429ac8070838b577439133613a8cd388036c740d6d9069\"" Feb 13 21:07:23.915337 containerd[1793]: time="2025-02-13T21:07:23.915322888Z" level=info msg="TearDown network for sandbox \"d8a3a4b346869a0c10429ac8070838b577439133613a8cd388036c740d6d9069\" successfully" Feb 13 21:07:23.916475 containerd[1793]: time="2025-02-13T21:07:23.916464793Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d8a3a4b346869a0c10429ac8070838b577439133613a8cd388036c740d6d9069\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 21:07:23.916501 containerd[1793]: time="2025-02-13T21:07:23.916481096Z" level=info msg="RemovePodSandbox \"d8a3a4b346869a0c10429ac8070838b577439133613a8cd388036c740d6d9069\" returns successfully" Feb 13 21:07:23.916591 containerd[1793]: time="2025-02-13T21:07:23.916581937Z" level=info msg="StopPodSandbox for \"e2d2a63cc355082e3d667838ab36d2b64fe4355a317ff4fee15a852a46dfa7fd\"" Feb 13 21:07:23.916649 containerd[1793]: time="2025-02-13T21:07:23.916618399Z" level=info msg="TearDown network for sandbox \"e2d2a63cc355082e3d667838ab36d2b64fe4355a317ff4fee15a852a46dfa7fd\" successfully" Feb 13 21:07:23.916690 containerd[1793]: time="2025-02-13T21:07:23.916627792Z" level=info msg="StopPodSandbox for \"e2d2a63cc355082e3d667838ab36d2b64fe4355a317ff4fee15a852a46dfa7fd\" returns successfully" Feb 13 21:07:23.916820 containerd[1793]: time="2025-02-13T21:07:23.916795435Z" level=info msg="RemovePodSandbox for \"e2d2a63cc355082e3d667838ab36d2b64fe4355a317ff4fee15a852a46dfa7fd\"" Feb 13 21:07:23.916858 containerd[1793]: time="2025-02-13T21:07:23.916822061Z" level=info msg="Forcibly stopping sandbox \"e2d2a63cc355082e3d667838ab36d2b64fe4355a317ff4fee15a852a46dfa7fd\"" Feb 13 21:07:23.916889 containerd[1793]: time="2025-02-13T21:07:23.916865875Z" level=info msg="TearDown network for sandbox \"e2d2a63cc355082e3d667838ab36d2b64fe4355a317ff4fee15a852a46dfa7fd\" successfully" Feb 13 21:07:23.918000 containerd[1793]: time="2025-02-13T21:07:23.917988781Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e2d2a63cc355082e3d667838ab36d2b64fe4355a317ff4fee15a852a46dfa7fd\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 21:07:23.918043 containerd[1793]: time="2025-02-13T21:07:23.918009995Z" level=info msg="RemovePodSandbox \"e2d2a63cc355082e3d667838ab36d2b64fe4355a317ff4fee15a852a46dfa7fd\" returns successfully" Feb 13 21:07:23.918172 containerd[1793]: time="2025-02-13T21:07:23.918162944Z" level=info msg="StopPodSandbox for \"d709f55a62a0ee266e179ae3b4078db4e87cc555d02412f211cd24e553d91a3d\"" Feb 13 21:07:23.918207 containerd[1793]: time="2025-02-13T21:07:23.918200673Z" level=info msg="TearDown network for sandbox \"d709f55a62a0ee266e179ae3b4078db4e87cc555d02412f211cd24e553d91a3d\" successfully" Feb 13 21:07:23.918229 containerd[1793]: time="2025-02-13T21:07:23.918207290Z" level=info msg="StopPodSandbox for \"d709f55a62a0ee266e179ae3b4078db4e87cc555d02412f211cd24e553d91a3d\" returns successfully" Feb 13 21:07:23.918322 containerd[1793]: time="2025-02-13T21:07:23.918314613Z" level=info msg="RemovePodSandbox for \"d709f55a62a0ee266e179ae3b4078db4e87cc555d02412f211cd24e553d91a3d\"" Feb 13 21:07:23.918341 containerd[1793]: time="2025-02-13T21:07:23.918324185Z" level=info msg="Forcibly stopping sandbox \"d709f55a62a0ee266e179ae3b4078db4e87cc555d02412f211cd24e553d91a3d\"" Feb 13 21:07:23.918364 containerd[1793]: time="2025-02-13T21:07:23.918350249Z" level=info msg="TearDown network for sandbox \"d709f55a62a0ee266e179ae3b4078db4e87cc555d02412f211cd24e553d91a3d\" successfully" Feb 13 21:07:23.919618 containerd[1793]: time="2025-02-13T21:07:23.919608286Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d709f55a62a0ee266e179ae3b4078db4e87cc555d02412f211cd24e553d91a3d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 21:07:23.919671 containerd[1793]: time="2025-02-13T21:07:23.919630243Z" level=info msg="RemovePodSandbox \"d709f55a62a0ee266e179ae3b4078db4e87cc555d02412f211cd24e553d91a3d\" returns successfully" Feb 13 21:07:23.919906 containerd[1793]: time="2025-02-13T21:07:23.919881094Z" level=info msg="StopPodSandbox for \"3b4c1aa1158410586e55dcad205753294e18ec4c0995d0e460c83ccd690b03b0\"" Feb 13 21:07:23.919992 containerd[1793]: time="2025-02-13T21:07:23.919985145Z" level=info msg="TearDown network for sandbox \"3b4c1aa1158410586e55dcad205753294e18ec4c0995d0e460c83ccd690b03b0\" successfully" Feb 13 21:07:23.920031 containerd[1793]: time="2025-02-13T21:07:23.919991671Z" level=info msg="StopPodSandbox for \"3b4c1aa1158410586e55dcad205753294e18ec4c0995d0e460c83ccd690b03b0\" returns successfully" Feb 13 21:07:23.920225 containerd[1793]: time="2025-02-13T21:07:23.920200136Z" level=info msg="RemovePodSandbox for \"3b4c1aa1158410586e55dcad205753294e18ec4c0995d0e460c83ccd690b03b0\"" Feb 13 21:07:23.920285 containerd[1793]: time="2025-02-13T21:07:23.920227659Z" level=info msg="Forcibly stopping sandbox \"3b4c1aa1158410586e55dcad205753294e18ec4c0995d0e460c83ccd690b03b0\"" Feb 13 21:07:23.920305 containerd[1793]: time="2025-02-13T21:07:23.920289546Z" level=info msg="TearDown network for sandbox \"3b4c1aa1158410586e55dcad205753294e18ec4c0995d0e460c83ccd690b03b0\" successfully" Feb 13 21:07:23.921445 containerd[1793]: time="2025-02-13T21:07:23.921433935Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3b4c1aa1158410586e55dcad205753294e18ec4c0995d0e460c83ccd690b03b0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 21:07:23.921475 containerd[1793]: time="2025-02-13T21:07:23.921450335Z" level=info msg="RemovePodSandbox \"3b4c1aa1158410586e55dcad205753294e18ec4c0995d0e460c83ccd690b03b0\" returns successfully" Feb 13 21:07:23.921605 containerd[1793]: time="2025-02-13T21:07:23.921596369Z" level=info msg="StopPodSandbox for \"cba59620c311eb0d1d072de67ea3d820401cc8b18c8f0a6943d666f228aa7a77\"" Feb 13 21:07:23.921681 containerd[1793]: time="2025-02-13T21:07:23.921674768Z" level=info msg="TearDown network for sandbox \"cba59620c311eb0d1d072de67ea3d820401cc8b18c8f0a6943d666f228aa7a77\" successfully" Feb 13 21:07:23.921719 containerd[1793]: time="2025-02-13T21:07:23.921681549Z" level=info msg="StopPodSandbox for \"cba59620c311eb0d1d072de67ea3d820401cc8b18c8f0a6943d666f228aa7a77\" returns successfully" Feb 13 21:07:23.921915 containerd[1793]: time="2025-02-13T21:07:23.921891475Z" level=info msg="RemovePodSandbox for \"cba59620c311eb0d1d072de67ea3d820401cc8b18c8f0a6943d666f228aa7a77\"" Feb 13 21:07:23.921955 containerd[1793]: time="2025-02-13T21:07:23.921917353Z" level=info msg="Forcibly stopping sandbox \"cba59620c311eb0d1d072de67ea3d820401cc8b18c8f0a6943d666f228aa7a77\"" Feb 13 21:07:23.921987 containerd[1793]: time="2025-02-13T21:07:23.921963582Z" level=info msg="TearDown network for sandbox \"cba59620c311eb0d1d072de67ea3d820401cc8b18c8f0a6943d666f228aa7a77\" successfully" Feb 13 21:07:23.923135 containerd[1793]: time="2025-02-13T21:07:23.923123284Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"cba59620c311eb0d1d072de67ea3d820401cc8b18c8f0a6943d666f228aa7a77\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 21:07:23.923173 containerd[1793]: time="2025-02-13T21:07:23.923143830Z" level=info msg="RemovePodSandbox \"cba59620c311eb0d1d072de67ea3d820401cc8b18c8f0a6943d666f228aa7a77\" returns successfully" Feb 13 21:07:23.923326 containerd[1793]: time="2025-02-13T21:07:23.923317087Z" level=info msg="StopPodSandbox for \"64ab4db212842309d42a7b016e7df9502295569b6e7460c381f36b3bb550eb0b\"" Feb 13 21:07:23.923378 containerd[1793]: time="2025-02-13T21:07:23.923371376Z" level=info msg="TearDown network for sandbox \"64ab4db212842309d42a7b016e7df9502295569b6e7460c381f36b3bb550eb0b\" successfully" Feb 13 21:07:23.923401 containerd[1793]: time="2025-02-13T21:07:23.923377778Z" level=info msg="StopPodSandbox for \"64ab4db212842309d42a7b016e7df9502295569b6e7460c381f36b3bb550eb0b\" returns successfully" Feb 13 21:07:23.923530 containerd[1793]: time="2025-02-13T21:07:23.923522389Z" level=info msg="RemovePodSandbox for \"64ab4db212842309d42a7b016e7df9502295569b6e7460c381f36b3bb550eb0b\"" Feb 13 21:07:23.923550 containerd[1793]: time="2025-02-13T21:07:23.923531949Z" level=info msg="Forcibly stopping sandbox \"64ab4db212842309d42a7b016e7df9502295569b6e7460c381f36b3bb550eb0b\"" Feb 13 21:07:23.923579 containerd[1793]: time="2025-02-13T21:07:23.923563936Z" level=info msg="TearDown network for sandbox \"64ab4db212842309d42a7b016e7df9502295569b6e7460c381f36b3bb550eb0b\" successfully" Feb 13 21:07:23.924869 containerd[1793]: time="2025-02-13T21:07:23.924858020Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"64ab4db212842309d42a7b016e7df9502295569b6e7460c381f36b3bb550eb0b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 21:07:23.924963 containerd[1793]: time="2025-02-13T21:07:23.924874590Z" level=info msg="RemovePodSandbox \"64ab4db212842309d42a7b016e7df9502295569b6e7460c381f36b3bb550eb0b\" returns successfully" Feb 13 21:07:23.925101 containerd[1793]: time="2025-02-13T21:07:23.925091366Z" level=info msg="StopPodSandbox for \"5dc4df5f3a9b31a60ddb2e4370059b63b2deba85f69f18cbfcd76f97093c93e5\"" Feb 13 21:07:23.925138 containerd[1793]: time="2025-02-13T21:07:23.925130622Z" level=info msg="TearDown network for sandbox \"5dc4df5f3a9b31a60ddb2e4370059b63b2deba85f69f18cbfcd76f97093c93e5\" successfully" Feb 13 21:07:23.925169 containerd[1793]: time="2025-02-13T21:07:23.925138434Z" level=info msg="StopPodSandbox for \"5dc4df5f3a9b31a60ddb2e4370059b63b2deba85f69f18cbfcd76f97093c93e5\" returns successfully" Feb 13 21:07:23.925278 containerd[1793]: time="2025-02-13T21:07:23.925268217Z" level=info msg="RemovePodSandbox for \"5dc4df5f3a9b31a60ddb2e4370059b63b2deba85f69f18cbfcd76f97093c93e5\"" Feb 13 21:07:23.925325 containerd[1793]: time="2025-02-13T21:07:23.925279522Z" level=info msg="Forcibly stopping sandbox \"5dc4df5f3a9b31a60ddb2e4370059b63b2deba85f69f18cbfcd76f97093c93e5\"" Feb 13 21:07:23.925354 containerd[1793]: time="2025-02-13T21:07:23.925326103Z" level=info msg="TearDown network for sandbox \"5dc4df5f3a9b31a60ddb2e4370059b63b2deba85f69f18cbfcd76f97093c93e5\" successfully" Feb 13 21:07:23.927109 containerd[1793]: time="2025-02-13T21:07:23.927097574Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5dc4df5f3a9b31a60ddb2e4370059b63b2deba85f69f18cbfcd76f97093c93e5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 21:07:23.927176 containerd[1793]: time="2025-02-13T21:07:23.927119560Z" level=info msg="RemovePodSandbox \"5dc4df5f3a9b31a60ddb2e4370059b63b2deba85f69f18cbfcd76f97093c93e5\" returns successfully" Feb 13 21:07:23.927380 containerd[1793]: time="2025-02-13T21:07:23.927370326Z" level=info msg="StopPodSandbox for \"806d7e387453f8d8aa5dc4336096cd465cb7990f839cd53aea28bc6a97bca77d\"" Feb 13 21:07:23.927437 containerd[1793]: time="2025-02-13T21:07:23.927430957Z" level=info msg="TearDown network for sandbox \"806d7e387453f8d8aa5dc4336096cd465cb7990f839cd53aea28bc6a97bca77d\" successfully" Feb 13 21:07:23.927461 containerd[1793]: time="2025-02-13T21:07:23.927437446Z" level=info msg="StopPodSandbox for \"806d7e387453f8d8aa5dc4336096cd465cb7990f839cd53aea28bc6a97bca77d\" returns successfully" Feb 13 21:07:23.927557 containerd[1793]: time="2025-02-13T21:07:23.927544737Z" level=info msg="RemovePodSandbox for \"806d7e387453f8d8aa5dc4336096cd465cb7990f839cd53aea28bc6a97bca77d\"" Feb 13 21:07:23.927581 containerd[1793]: time="2025-02-13T21:07:23.927559615Z" level=info msg="Forcibly stopping sandbox \"806d7e387453f8d8aa5dc4336096cd465cb7990f839cd53aea28bc6a97bca77d\"" Feb 13 21:07:23.927612 containerd[1793]: time="2025-02-13T21:07:23.927593818Z" level=info msg="TearDown network for sandbox \"806d7e387453f8d8aa5dc4336096cd465cb7990f839cd53aea28bc6a97bca77d\" successfully" Feb 13 21:07:23.932239 containerd[1793]: time="2025-02-13T21:07:23.932227091Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"806d7e387453f8d8aa5dc4336096cd465cb7990f839cd53aea28bc6a97bca77d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 21:07:23.932281 containerd[1793]: time="2025-02-13T21:07:23.932248398Z" level=info msg="RemovePodSandbox \"806d7e387453f8d8aa5dc4336096cd465cb7990f839cd53aea28bc6a97bca77d\" returns successfully" Feb 13 21:07:23.932390 containerd[1793]: time="2025-02-13T21:07:23.932380293Z" level=info msg="StopPodSandbox for \"d743908868c106c2ba60beb013f0f1908f95ec56678e1aea1e2666786a2cc1a7\"" Feb 13 21:07:23.932442 containerd[1793]: time="2025-02-13T21:07:23.932429095Z" level=info msg="TearDown network for sandbox \"d743908868c106c2ba60beb013f0f1908f95ec56678e1aea1e2666786a2cc1a7\" successfully" Feb 13 21:07:23.932442 containerd[1793]: time="2025-02-13T21:07:23.932439210Z" level=info msg="StopPodSandbox for \"d743908868c106c2ba60beb013f0f1908f95ec56678e1aea1e2666786a2cc1a7\" returns successfully" Feb 13 21:07:23.932560 containerd[1793]: time="2025-02-13T21:07:23.932547519Z" level=info msg="RemovePodSandbox for \"d743908868c106c2ba60beb013f0f1908f95ec56678e1aea1e2666786a2cc1a7\"" Feb 13 21:07:23.932602 containerd[1793]: time="2025-02-13T21:07:23.932561684Z" level=info msg="Forcibly stopping sandbox \"d743908868c106c2ba60beb013f0f1908f95ec56678e1aea1e2666786a2cc1a7\"" Feb 13 21:07:23.932644 containerd[1793]: time="2025-02-13T21:07:23.932612725Z" level=info msg="TearDown network for sandbox \"d743908868c106c2ba60beb013f0f1908f95ec56678e1aea1e2666786a2cc1a7\" successfully" Feb 13 21:07:23.933808 containerd[1793]: time="2025-02-13T21:07:23.933794669Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d743908868c106c2ba60beb013f0f1908f95ec56678e1aea1e2666786a2cc1a7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 21:07:23.933849 containerd[1793]: time="2025-02-13T21:07:23.933819171Z" level=info msg="RemovePodSandbox \"d743908868c106c2ba60beb013f0f1908f95ec56678e1aea1e2666786a2cc1a7\" returns successfully" Feb 13 21:07:23.933962 containerd[1793]: time="2025-02-13T21:07:23.933951766Z" level=info msg="StopPodSandbox for \"bc71ef1d5f9c21449e07e38c85ffe1beac4f9a807ff5e0b31763576b20c05628\"" Feb 13 21:07:23.934000 containerd[1793]: time="2025-02-13T21:07:23.933992819Z" level=info msg="TearDown network for sandbox \"bc71ef1d5f9c21449e07e38c85ffe1beac4f9a807ff5e0b31763576b20c05628\" successfully" Feb 13 21:07:23.934000 containerd[1793]: time="2025-02-13T21:07:23.933999628Z" level=info msg="StopPodSandbox for \"bc71ef1d5f9c21449e07e38c85ffe1beac4f9a807ff5e0b31763576b20c05628\" returns successfully" Feb 13 21:07:23.934093 containerd[1793]: time="2025-02-13T21:07:23.934082293Z" level=info msg="RemovePodSandbox for \"bc71ef1d5f9c21449e07e38c85ffe1beac4f9a807ff5e0b31763576b20c05628\"" Feb 13 21:07:23.934135 containerd[1793]: time="2025-02-13T21:07:23.934093472Z" level=info msg="Forcibly stopping sandbox \"bc71ef1d5f9c21449e07e38c85ffe1beac4f9a807ff5e0b31763576b20c05628\"" Feb 13 21:07:23.934157 containerd[1793]: time="2025-02-13T21:07:23.934141328Z" level=info msg="TearDown network for sandbox \"bc71ef1d5f9c21449e07e38c85ffe1beac4f9a807ff5e0b31763576b20c05628\" successfully" Feb 13 21:07:23.935349 containerd[1793]: time="2025-02-13T21:07:23.935310946Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"bc71ef1d5f9c21449e07e38c85ffe1beac4f9a807ff5e0b31763576b20c05628\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 21:07:23.935349 containerd[1793]: time="2025-02-13T21:07:23.935328631Z" level=info msg="RemovePodSandbox \"bc71ef1d5f9c21449e07e38c85ffe1beac4f9a807ff5e0b31763576b20c05628\" returns successfully" Feb 13 21:07:23.935481 containerd[1793]: time="2025-02-13T21:07:23.935469360Z" level=info msg="StopPodSandbox for \"8796e4215c03d4346b528f4c0fa80de9271293edb2beaab0ba58791569cb1af7\"" Feb 13 21:07:23.935517 containerd[1793]: time="2025-02-13T21:07:23.935509734Z" level=info msg="TearDown network for sandbox \"8796e4215c03d4346b528f4c0fa80de9271293edb2beaab0ba58791569cb1af7\" successfully" Feb 13 21:07:23.935538 containerd[1793]: time="2025-02-13T21:07:23.935516254Z" level=info msg="StopPodSandbox for \"8796e4215c03d4346b528f4c0fa80de9271293edb2beaab0ba58791569cb1af7\" returns successfully" Feb 13 21:07:23.935608 containerd[1793]: time="2025-02-13T21:07:23.935599434Z" level=info msg="RemovePodSandbox for \"8796e4215c03d4346b528f4c0fa80de9271293edb2beaab0ba58791569cb1af7\"" Feb 13 21:07:23.935631 containerd[1793]: time="2025-02-13T21:07:23.935610672Z" level=info msg="Forcibly stopping sandbox \"8796e4215c03d4346b528f4c0fa80de9271293edb2beaab0ba58791569cb1af7\"" Feb 13 21:07:23.935698 containerd[1793]: time="2025-02-13T21:07:23.935643617Z" level=info msg="TearDown network for sandbox \"8796e4215c03d4346b528f4c0fa80de9271293edb2beaab0ba58791569cb1af7\" successfully" Feb 13 21:07:23.937322 containerd[1793]: time="2025-02-13T21:07:23.937311100Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8796e4215c03d4346b528f4c0fa80de9271293edb2beaab0ba58791569cb1af7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 21:07:23.937357 containerd[1793]: time="2025-02-13T21:07:23.937330475Z" level=info msg="RemovePodSandbox \"8796e4215c03d4346b528f4c0fa80de9271293edb2beaab0ba58791569cb1af7\" returns successfully" Feb 13 21:07:23.937555 containerd[1793]: time="2025-02-13T21:07:23.937513888Z" level=info msg="StopPodSandbox for \"a6223b2daac34ac9233c865bdf29f5ecf313eee0267c57a63761c44e27592fe3\"" Feb 13 21:07:23.937589 containerd[1793]: time="2025-02-13T21:07:23.937557950Z" level=info msg="TearDown network for sandbox \"a6223b2daac34ac9233c865bdf29f5ecf313eee0267c57a63761c44e27592fe3\" successfully" Feb 13 21:07:23.937589 containerd[1793]: time="2025-02-13T21:07:23.937580980Z" level=info msg="StopPodSandbox for \"a6223b2daac34ac9233c865bdf29f5ecf313eee0267c57a63761c44e27592fe3\" returns successfully" Feb 13 21:07:23.937737 containerd[1793]: time="2025-02-13T21:07:23.937685680Z" level=info msg="RemovePodSandbox for \"a6223b2daac34ac9233c865bdf29f5ecf313eee0267c57a63761c44e27592fe3\"" Feb 13 21:07:23.937737 containerd[1793]: time="2025-02-13T21:07:23.937721138Z" level=info msg="Forcibly stopping sandbox \"a6223b2daac34ac9233c865bdf29f5ecf313eee0267c57a63761c44e27592fe3\"" Feb 13 21:07:23.937793 containerd[1793]: time="2025-02-13T21:07:23.937751458Z" level=info msg="TearDown network for sandbox \"a6223b2daac34ac9233c865bdf29f5ecf313eee0267c57a63761c44e27592fe3\" successfully" Feb 13 21:07:23.938932 containerd[1793]: time="2025-02-13T21:07:23.938892176Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a6223b2daac34ac9233c865bdf29f5ecf313eee0267c57a63761c44e27592fe3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 21:07:23.938932 containerd[1793]: time="2025-02-13T21:07:23.938908727Z" level=info msg="RemovePodSandbox \"a6223b2daac34ac9233c865bdf29f5ecf313eee0267c57a63761c44e27592fe3\" returns successfully" Feb 13 21:07:23.939103 containerd[1793]: time="2025-02-13T21:07:23.939044227Z" level=info msg="StopPodSandbox for \"96f2f7d295cdec9dfcfe2c67db03dacf64d94c324111b66970f309c44bd369df\"" Feb 13 21:07:23.939132 containerd[1793]: time="2025-02-13T21:07:23.939116697Z" level=info msg="TearDown network for sandbox \"96f2f7d295cdec9dfcfe2c67db03dacf64d94c324111b66970f309c44bd369df\" successfully" Feb 13 21:07:23.939132 containerd[1793]: time="2025-02-13T21:07:23.939123292Z" level=info msg="StopPodSandbox for \"96f2f7d295cdec9dfcfe2c67db03dacf64d94c324111b66970f309c44bd369df\" returns successfully" Feb 13 21:07:23.939377 containerd[1793]: time="2025-02-13T21:07:23.939297259Z" level=info msg="RemovePodSandbox for \"96f2f7d295cdec9dfcfe2c67db03dacf64d94c324111b66970f309c44bd369df\"" Feb 13 21:07:23.939377 containerd[1793]: time="2025-02-13T21:07:23.939323747Z" level=info msg="Forcibly stopping sandbox \"96f2f7d295cdec9dfcfe2c67db03dacf64d94c324111b66970f309c44bd369df\"" Feb 13 21:07:23.939442 containerd[1793]: time="2025-02-13T21:07:23.939389176Z" level=info msg="TearDown network for sandbox \"96f2f7d295cdec9dfcfe2c67db03dacf64d94c324111b66970f309c44bd369df\" successfully" Feb 13 21:07:23.940553 containerd[1793]: time="2025-02-13T21:07:23.940542918Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"96f2f7d295cdec9dfcfe2c67db03dacf64d94c324111b66970f309c44bd369df\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 21:07:23.940577 containerd[1793]: time="2025-02-13T21:07:23.940561462Z" level=info msg="RemovePodSandbox \"96f2f7d295cdec9dfcfe2c67db03dacf64d94c324111b66970f309c44bd369df\" returns successfully" Feb 13 21:07:23.940814 containerd[1793]: time="2025-02-13T21:07:23.940728515Z" level=info msg="StopPodSandbox for \"b0b94b4e8e69008daf9eff69ef190fed604015ff90d0c3eabafe180e87c8c3de\"" Feb 13 21:07:23.940848 containerd[1793]: time="2025-02-13T21:07:23.940794029Z" level=info msg="TearDown network for sandbox \"b0b94b4e8e69008daf9eff69ef190fed604015ff90d0c3eabafe180e87c8c3de\" successfully" Feb 13 21:07:23.940848 containerd[1793]: time="2025-02-13T21:07:23.940827162Z" level=info msg="StopPodSandbox for \"b0b94b4e8e69008daf9eff69ef190fed604015ff90d0c3eabafe180e87c8c3de\" returns successfully" Feb 13 21:07:23.941021 containerd[1793]: time="2025-02-13T21:07:23.940961532Z" level=info msg="RemovePodSandbox for \"b0b94b4e8e69008daf9eff69ef190fed604015ff90d0c3eabafe180e87c8c3de\"" Feb 13 21:07:23.941021 containerd[1793]: time="2025-02-13T21:07:23.940990847Z" level=info msg="Forcibly stopping sandbox \"b0b94b4e8e69008daf9eff69ef190fed604015ff90d0c3eabafe180e87c8c3de\"" Feb 13 21:07:23.941101 containerd[1793]: time="2025-02-13T21:07:23.941064792Z" level=info msg="TearDown network for sandbox \"b0b94b4e8e69008daf9eff69ef190fed604015ff90d0c3eabafe180e87c8c3de\" successfully" Feb 13 21:07:23.942429 containerd[1793]: time="2025-02-13T21:07:23.942386361Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b0b94b4e8e69008daf9eff69ef190fed604015ff90d0c3eabafe180e87c8c3de\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 21:07:23.942429 containerd[1793]: time="2025-02-13T21:07:23.942424347Z" level=info msg="RemovePodSandbox \"b0b94b4e8e69008daf9eff69ef190fed604015ff90d0c3eabafe180e87c8c3de\" returns successfully" Feb 13 21:07:23.942577 containerd[1793]: time="2025-02-13T21:07:23.942566258Z" level=info msg="StopPodSandbox for \"abca32da56d9f98e0f2a3235028b1bc2f3f48389f22bae6604fcddef0f674739\"" Feb 13 21:07:23.942612 containerd[1793]: time="2025-02-13T21:07:23.942604911Z" level=info msg="TearDown network for sandbox \"abca32da56d9f98e0f2a3235028b1bc2f3f48389f22bae6604fcddef0f674739\" successfully" Feb 13 21:07:23.942612 containerd[1793]: time="2025-02-13T21:07:23.942611505Z" level=info msg="StopPodSandbox for \"abca32da56d9f98e0f2a3235028b1bc2f3f48389f22bae6604fcddef0f674739\" returns successfully" Feb 13 21:07:23.942852 containerd[1793]: time="2025-02-13T21:07:23.942817100Z" level=info msg="RemovePodSandbox for \"abca32da56d9f98e0f2a3235028b1bc2f3f48389f22bae6604fcddef0f674739\"" Feb 13 21:07:23.942852 containerd[1793]: time="2025-02-13T21:07:23.942848429Z" level=info msg="Forcibly stopping sandbox \"abca32da56d9f98e0f2a3235028b1bc2f3f48389f22bae6604fcddef0f674739\"" Feb 13 21:07:23.942937 containerd[1793]: time="2025-02-13T21:07:23.942916459Z" level=info msg="TearDown network for sandbox \"abca32da56d9f98e0f2a3235028b1bc2f3f48389f22bae6604fcddef0f674739\" successfully" Feb 13 21:07:23.944068 containerd[1793]: time="2025-02-13T21:07:23.944028262Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"abca32da56d9f98e0f2a3235028b1bc2f3f48389f22bae6604fcddef0f674739\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 21:07:23.944068 containerd[1793]: time="2025-02-13T21:07:23.944045401Z" level=info msg="RemovePodSandbox \"abca32da56d9f98e0f2a3235028b1bc2f3f48389f22bae6604fcddef0f674739\" returns successfully" Feb 13 21:07:23.944245 containerd[1793]: time="2025-02-13T21:07:23.944217553Z" level=info msg="StopPodSandbox for \"466ce85604a81ffcf0adb514f74df036223f30285756dd3e79f40e1ea6c59de1\"" Feb 13 21:07:23.944313 containerd[1793]: time="2025-02-13T21:07:23.944304953Z" level=info msg="TearDown network for sandbox \"466ce85604a81ffcf0adb514f74df036223f30285756dd3e79f40e1ea6c59de1\" successfully" Feb 13 21:07:23.944313 containerd[1793]: time="2025-02-13T21:07:23.944311929Z" level=info msg="StopPodSandbox for \"466ce85604a81ffcf0adb514f74df036223f30285756dd3e79f40e1ea6c59de1\" returns successfully" Feb 13 21:07:23.944505 containerd[1793]: time="2025-02-13T21:07:23.944463381Z" level=info msg="RemovePodSandbox for \"466ce85604a81ffcf0adb514f74df036223f30285756dd3e79f40e1ea6c59de1\"" Feb 13 21:07:23.944505 containerd[1793]: time="2025-02-13T21:07:23.944473449Z" level=info msg="Forcibly stopping sandbox \"466ce85604a81ffcf0adb514f74df036223f30285756dd3e79f40e1ea6c59de1\"" Feb 13 21:07:23.944583 containerd[1793]: time="2025-02-13T21:07:23.944525736Z" level=info msg="TearDown network for sandbox \"466ce85604a81ffcf0adb514f74df036223f30285756dd3e79f40e1ea6c59de1\" successfully" Feb 13 21:07:23.945821 containerd[1793]: time="2025-02-13T21:07:23.945779690Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"466ce85604a81ffcf0adb514f74df036223f30285756dd3e79f40e1ea6c59de1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 21:07:23.945821 containerd[1793]: time="2025-02-13T21:07:23.945799948Z" level=info msg="RemovePodSandbox \"466ce85604a81ffcf0adb514f74df036223f30285756dd3e79f40e1ea6c59de1\" returns successfully" Feb 13 21:07:23.946044 containerd[1793]: time="2025-02-13T21:07:23.945998718Z" level=info msg="StopPodSandbox for \"f39bdc1c54e67583094579cb0143089f0fee0e796689505e74340aa0877e958a\"" Feb 13 21:07:23.946044 containerd[1793]: time="2025-02-13T21:07:23.946035092Z" level=info msg="TearDown network for sandbox \"f39bdc1c54e67583094579cb0143089f0fee0e796689505e74340aa0877e958a\" successfully" Feb 13 21:07:23.946044 containerd[1793]: time="2025-02-13T21:07:23.946040869Z" level=info msg="StopPodSandbox for \"f39bdc1c54e67583094579cb0143089f0fee0e796689505e74340aa0877e958a\" returns successfully" Feb 13 21:07:23.946361 containerd[1793]: time="2025-02-13T21:07:23.946314295Z" level=info msg="RemovePodSandbox for \"f39bdc1c54e67583094579cb0143089f0fee0e796689505e74340aa0877e958a\"" Feb 13 21:07:23.946361 containerd[1793]: time="2025-02-13T21:07:23.946344468Z" level=info msg="Forcibly stopping sandbox \"f39bdc1c54e67583094579cb0143089f0fee0e796689505e74340aa0877e958a\"" Feb 13 21:07:23.946440 containerd[1793]: time="2025-02-13T21:07:23.946413267Z" level=info msg="TearDown network for sandbox \"f39bdc1c54e67583094579cb0143089f0fee0e796689505e74340aa0877e958a\" successfully" Feb 13 21:07:23.947717 containerd[1793]: time="2025-02-13T21:07:23.947676800Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f39bdc1c54e67583094579cb0143089f0fee0e796689505e74340aa0877e958a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 21:07:23.947717 containerd[1793]: time="2025-02-13T21:07:23.947693768Z" level=info msg="RemovePodSandbox \"f39bdc1c54e67583094579cb0143089f0fee0e796689505e74340aa0877e958a\" returns successfully" Feb 13 21:07:23.947953 containerd[1793]: time="2025-02-13T21:07:23.947913989Z" level=info msg="StopPodSandbox for \"2fd9e12b34be1cf8d642dfe4f2e60efafadedcf4354053583974145aabb5db8f\"" Feb 13 21:07:23.948012 containerd[1793]: time="2025-02-13T21:07:23.947963238Z" level=info msg="TearDown network for sandbox \"2fd9e12b34be1cf8d642dfe4f2e60efafadedcf4354053583974145aabb5db8f\" successfully" Feb 13 21:07:23.948012 containerd[1793]: time="2025-02-13T21:07:23.947970203Z" level=info msg="StopPodSandbox for \"2fd9e12b34be1cf8d642dfe4f2e60efafadedcf4354053583974145aabb5db8f\" returns successfully" Feb 13 21:07:23.948165 containerd[1793]: time="2025-02-13T21:07:23.948132084Z" level=info msg="RemovePodSandbox for \"2fd9e12b34be1cf8d642dfe4f2e60efafadedcf4354053583974145aabb5db8f\"" Feb 13 21:07:23.948165 containerd[1793]: time="2025-02-13T21:07:23.948159978Z" level=info msg="Forcibly stopping sandbox \"2fd9e12b34be1cf8d642dfe4f2e60efafadedcf4354053583974145aabb5db8f\"" Feb 13 21:07:23.948257 containerd[1793]: time="2025-02-13T21:07:23.948237061Z" level=info msg="TearDown network for sandbox \"2fd9e12b34be1cf8d642dfe4f2e60efafadedcf4354053583974145aabb5db8f\" successfully" Feb 13 21:07:23.949416 containerd[1793]: time="2025-02-13T21:07:23.949373838Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2fd9e12b34be1cf8d642dfe4f2e60efafadedcf4354053583974145aabb5db8f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 21:07:23.949416 containerd[1793]: time="2025-02-13T21:07:23.949389402Z" level=info msg="RemovePodSandbox \"2fd9e12b34be1cf8d642dfe4f2e60efafadedcf4354053583974145aabb5db8f\" returns successfully" Feb 13 21:07:23.949594 containerd[1793]: time="2025-02-13T21:07:23.949557042Z" level=info msg="StopPodSandbox for \"f950a49db2646939f834ce6a431489707af8828f553b5b14fda9dd5bca608104\"" Feb 13 21:07:23.949701 containerd[1793]: time="2025-02-13T21:07:23.949616500Z" level=info msg="TearDown network for sandbox \"f950a49db2646939f834ce6a431489707af8828f553b5b14fda9dd5bca608104\" successfully" Feb 13 21:07:23.949701 containerd[1793]: time="2025-02-13T21:07:23.949640553Z" level=info msg="StopPodSandbox for \"f950a49db2646939f834ce6a431489707af8828f553b5b14fda9dd5bca608104\" returns successfully" Feb 13 21:07:23.949899 containerd[1793]: time="2025-02-13T21:07:23.949857575Z" level=info msg="RemovePodSandbox for \"f950a49db2646939f834ce6a431489707af8828f553b5b14fda9dd5bca608104\"" Feb 13 21:07:23.949899 containerd[1793]: time="2025-02-13T21:07:23.949871460Z" level=info msg="Forcibly stopping sandbox \"f950a49db2646939f834ce6a431489707af8828f553b5b14fda9dd5bca608104\"" Feb 13 21:07:23.949944 containerd[1793]: time="2025-02-13T21:07:23.949903490Z" level=info msg="TearDown network for sandbox \"f950a49db2646939f834ce6a431489707af8828f553b5b14fda9dd5bca608104\" successfully" Feb 13 21:07:23.951121 containerd[1793]: time="2025-02-13T21:07:23.951079466Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f950a49db2646939f834ce6a431489707af8828f553b5b14fda9dd5bca608104\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 21:07:23.951121 containerd[1793]: time="2025-02-13T21:07:23.951097046Z" level=info msg="RemovePodSandbox \"f950a49db2646939f834ce6a431489707af8828f553b5b14fda9dd5bca608104\" returns successfully" Feb 13 21:07:23.951267 containerd[1793]: time="2025-02-13T21:07:23.951223251Z" level=info msg="StopPodSandbox for \"789b26d424f00808a3ab5fad99f1f47b72beac83af5effe57a868b514edb410f\"" Feb 13 21:07:23.951302 containerd[1793]: time="2025-02-13T21:07:23.951271319Z" level=info msg="TearDown network for sandbox \"789b26d424f00808a3ab5fad99f1f47b72beac83af5effe57a868b514edb410f\" successfully" Feb 13 21:07:23.951302 containerd[1793]: time="2025-02-13T21:07:23.951278482Z" level=info msg="StopPodSandbox for \"789b26d424f00808a3ab5fad99f1f47b72beac83af5effe57a868b514edb410f\" returns successfully" Feb 13 21:07:23.951422 containerd[1793]: time="2025-02-13T21:07:23.951383096Z" level=info msg="RemovePodSandbox for \"789b26d424f00808a3ab5fad99f1f47b72beac83af5effe57a868b514edb410f\"" Feb 13 21:07:23.951422 containerd[1793]: time="2025-02-13T21:07:23.951394730Z" level=info msg="Forcibly stopping sandbox \"789b26d424f00808a3ab5fad99f1f47b72beac83af5effe57a868b514edb410f\"" Feb 13 21:07:23.951472 containerd[1793]: time="2025-02-13T21:07:23.951428516Z" level=info msg="TearDown network for sandbox \"789b26d424f00808a3ab5fad99f1f47b72beac83af5effe57a868b514edb410f\" successfully" Feb 13 21:07:23.952610 containerd[1793]: time="2025-02-13T21:07:23.952570517Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"789b26d424f00808a3ab5fad99f1f47b72beac83af5effe57a868b514edb410f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 21:07:23.952610 containerd[1793]: time="2025-02-13T21:07:23.952586820Z" level=info msg="RemovePodSandbox \"789b26d424f00808a3ab5fad99f1f47b72beac83af5effe57a868b514edb410f\" returns successfully" Feb 13 21:07:23.952770 containerd[1793]: time="2025-02-13T21:07:23.952730406Z" level=info msg="StopPodSandbox for \"8368438d5b8de8880c1d927e22b3626bc4c5212d9165d0c1c516db81e89432a0\"" Feb 13 21:07:23.952806 containerd[1793]: time="2025-02-13T21:07:23.952770098Z" level=info msg="TearDown network for sandbox \"8368438d5b8de8880c1d927e22b3626bc4c5212d9165d0c1c516db81e89432a0\" successfully" Feb 13 21:07:23.952806 containerd[1793]: time="2025-02-13T21:07:23.952776801Z" level=info msg="StopPodSandbox for \"8368438d5b8de8880c1d927e22b3626bc4c5212d9165d0c1c516db81e89432a0\" returns successfully" Feb 13 21:07:23.952929 containerd[1793]: time="2025-02-13T21:07:23.952874641Z" level=info msg="RemovePodSandbox for \"8368438d5b8de8880c1d927e22b3626bc4c5212d9165d0c1c516db81e89432a0\"" Feb 13 21:07:23.952929 containerd[1793]: time="2025-02-13T21:07:23.952887424Z" level=info msg="Forcibly stopping sandbox \"8368438d5b8de8880c1d927e22b3626bc4c5212d9165d0c1c516db81e89432a0\"" Feb 13 21:07:23.952981 containerd[1793]: time="2025-02-13T21:07:23.952924873Z" level=info msg="TearDown network for sandbox \"8368438d5b8de8880c1d927e22b3626bc4c5212d9165d0c1c516db81e89432a0\" successfully" Feb 13 21:07:23.954037 containerd[1793]: time="2025-02-13T21:07:23.953996213Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8368438d5b8de8880c1d927e22b3626bc4c5212d9165d0c1c516db81e89432a0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 21:07:23.954037 containerd[1793]: time="2025-02-13T21:07:23.954012932Z" level=info msg="RemovePodSandbox \"8368438d5b8de8880c1d927e22b3626bc4c5212d9165d0c1c516db81e89432a0\" returns successfully" Feb 13 21:07:23.954156 containerd[1793]: time="2025-02-13T21:07:23.954116621Z" level=info msg="StopPodSandbox for \"1a9edd12dcaee4b4406dd52975b192e9f91b2c28c5929cf19d4f43b12760a3ad\"" Feb 13 21:07:23.954156 containerd[1793]: time="2025-02-13T21:07:23.954154664Z" level=info msg="TearDown network for sandbox \"1a9edd12dcaee4b4406dd52975b192e9f91b2c28c5929cf19d4f43b12760a3ad\" successfully" Feb 13 21:07:23.954202 containerd[1793]: time="2025-02-13T21:07:23.954161332Z" level=info msg="StopPodSandbox for \"1a9edd12dcaee4b4406dd52975b192e9f91b2c28c5929cf19d4f43b12760a3ad\" returns successfully" Feb 13 21:07:23.954314 containerd[1793]: time="2025-02-13T21:07:23.954276360Z" level=info msg="RemovePodSandbox for \"1a9edd12dcaee4b4406dd52975b192e9f91b2c28c5929cf19d4f43b12760a3ad\"" Feb 13 21:07:23.954314 containerd[1793]: time="2025-02-13T21:07:23.954286737Z" level=info msg="Forcibly stopping sandbox \"1a9edd12dcaee4b4406dd52975b192e9f91b2c28c5929cf19d4f43b12760a3ad\"" Feb 13 21:07:23.954371 containerd[1793]: time="2025-02-13T21:07:23.954317483Z" level=info msg="TearDown network for sandbox \"1a9edd12dcaee4b4406dd52975b192e9f91b2c28c5929cf19d4f43b12760a3ad\" successfully" Feb 13 21:07:23.955418 containerd[1793]: time="2025-02-13T21:07:23.955376829Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1a9edd12dcaee4b4406dd52975b192e9f91b2c28c5929cf19d4f43b12760a3ad\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 21:07:23.955418 containerd[1793]: time="2025-02-13T21:07:23.955394212Z" level=info msg="RemovePodSandbox \"1a9edd12dcaee4b4406dd52975b192e9f91b2c28c5929cf19d4f43b12760a3ad\" returns successfully" Feb 13 21:07:23.955536 containerd[1793]: time="2025-02-13T21:07:23.955497075Z" level=info msg="StopPodSandbox for \"7501b1f0bebccf8c96d599d12b518a8ac89e39145380dcd10318dd4669755ac7\"" Feb 13 21:07:23.955567 containerd[1793]: time="2025-02-13T21:07:23.955540125Z" level=info msg="TearDown network for sandbox \"7501b1f0bebccf8c96d599d12b518a8ac89e39145380dcd10318dd4669755ac7\" successfully" Feb 13 21:07:23.955567 containerd[1793]: time="2025-02-13T21:07:23.955549692Z" level=info msg="StopPodSandbox for \"7501b1f0bebccf8c96d599d12b518a8ac89e39145380dcd10318dd4669755ac7\" returns successfully" Feb 13 21:07:23.955689 containerd[1793]: time="2025-02-13T21:07:23.955649116Z" level=info msg="RemovePodSandbox for \"7501b1f0bebccf8c96d599d12b518a8ac89e39145380dcd10318dd4669755ac7\"" Feb 13 21:07:23.955689 containerd[1793]: time="2025-02-13T21:07:23.955662714Z" level=info msg="Forcibly stopping sandbox \"7501b1f0bebccf8c96d599d12b518a8ac89e39145380dcd10318dd4669755ac7\"" Feb 13 21:07:23.955741 containerd[1793]: time="2025-02-13T21:07:23.955694407Z" level=info msg="TearDown network for sandbox \"7501b1f0bebccf8c96d599d12b518a8ac89e39145380dcd10318dd4669755ac7\" successfully" Feb 13 21:07:23.956851 containerd[1793]: time="2025-02-13T21:07:23.956810891Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7501b1f0bebccf8c96d599d12b518a8ac89e39145380dcd10318dd4669755ac7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 21:07:23.956851 containerd[1793]: time="2025-02-13T21:07:23.956829597Z" level=info msg="RemovePodSandbox \"7501b1f0bebccf8c96d599d12b518a8ac89e39145380dcd10318dd4669755ac7\" returns successfully" Feb 13 21:07:23.957026 containerd[1793]: time="2025-02-13T21:07:23.956997224Z" level=info msg="StopPodSandbox for \"0b3206681777142bb4f39eddf842578db1abd2a1ba3d164b0fef9121f983646c\"" Feb 13 21:07:23.957056 containerd[1793]: time="2025-02-13T21:07:23.957036226Z" level=info msg="TearDown network for sandbox \"0b3206681777142bb4f39eddf842578db1abd2a1ba3d164b0fef9121f983646c\" successfully" Feb 13 21:07:23.957056 containerd[1793]: time="2025-02-13T21:07:23.957042140Z" level=info msg="StopPodSandbox for \"0b3206681777142bb4f39eddf842578db1abd2a1ba3d164b0fef9121f983646c\" returns successfully" Feb 13 21:07:23.957182 containerd[1793]: time="2025-02-13T21:07:23.957142117Z" level=info msg="RemovePodSandbox for \"0b3206681777142bb4f39eddf842578db1abd2a1ba3d164b0fef9121f983646c\"" Feb 13 21:07:23.957182 containerd[1793]: time="2025-02-13T21:07:23.957154536Z" level=info msg="Forcibly stopping sandbox \"0b3206681777142bb4f39eddf842578db1abd2a1ba3d164b0fef9121f983646c\"" Feb 13 21:07:23.957228 containerd[1793]: time="2025-02-13T21:07:23.957185249Z" level=info msg="TearDown network for sandbox \"0b3206681777142bb4f39eddf842578db1abd2a1ba3d164b0fef9121f983646c\" successfully" Feb 13 21:07:23.958309 containerd[1793]: time="2025-02-13T21:07:23.958269637Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0b3206681777142bb4f39eddf842578db1abd2a1ba3d164b0fef9121f983646c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 21:07:23.958309 containerd[1793]: time="2025-02-13T21:07:23.958287140Z" level=info msg="RemovePodSandbox \"0b3206681777142bb4f39eddf842578db1abd2a1ba3d164b0fef9121f983646c\" returns successfully" Feb 13 21:07:23.958464 containerd[1793]: time="2025-02-13T21:07:23.958428113Z" level=info msg="StopPodSandbox for \"016f2909cab214152679a0158a239350ed8254b0aa06124f243e8bc5668cf1a1\"" Feb 13 21:07:23.958495 containerd[1793]: time="2025-02-13T21:07:23.958469676Z" level=info msg="TearDown network for sandbox \"016f2909cab214152679a0158a239350ed8254b0aa06124f243e8bc5668cf1a1\" successfully" Feb 13 21:07:23.958495 containerd[1793]: time="2025-02-13T21:07:23.958476330Z" level=info msg="StopPodSandbox for \"016f2909cab214152679a0158a239350ed8254b0aa06124f243e8bc5668cf1a1\" returns successfully" Feb 13 21:07:23.958595 containerd[1793]: time="2025-02-13T21:07:23.958584967Z" level=info msg="RemovePodSandbox for \"016f2909cab214152679a0158a239350ed8254b0aa06124f243e8bc5668cf1a1\"" Feb 13 21:07:23.958613 containerd[1793]: time="2025-02-13T21:07:23.958597344Z" level=info msg="Forcibly stopping sandbox \"016f2909cab214152679a0158a239350ed8254b0aa06124f243e8bc5668cf1a1\"" Feb 13 21:07:23.958662 containerd[1793]: time="2025-02-13T21:07:23.958637200Z" level=info msg="TearDown network for sandbox \"016f2909cab214152679a0158a239350ed8254b0aa06124f243e8bc5668cf1a1\" successfully" Feb 13 21:07:23.959762 containerd[1793]: time="2025-02-13T21:07:23.959722423Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"016f2909cab214152679a0158a239350ed8254b0aa06124f243e8bc5668cf1a1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 21:07:23.959762 containerd[1793]: time="2025-02-13T21:07:23.959739871Z" level=info msg="RemovePodSandbox \"016f2909cab214152679a0158a239350ed8254b0aa06124f243e8bc5668cf1a1\" returns successfully" Feb 13 21:07:23.959895 containerd[1793]: time="2025-02-13T21:07:23.959856021Z" level=info msg="StopPodSandbox for \"9903ccace9462852d1b468d91593d78c0bfdcd616759437c7c9178ce919b932e\"" Feb 13 21:07:23.959929 containerd[1793]: time="2025-02-13T21:07:23.959895803Z" level=info msg="TearDown network for sandbox \"9903ccace9462852d1b468d91593d78c0bfdcd616759437c7c9178ce919b932e\" successfully" Feb 13 21:07:23.959929 containerd[1793]: time="2025-02-13T21:07:23.959918533Z" level=info msg="StopPodSandbox for \"9903ccace9462852d1b468d91593d78c0bfdcd616759437c7c9178ce919b932e\" returns successfully" Feb 13 21:07:23.960031 containerd[1793]: time="2025-02-13T21:07:23.960021780Z" level=info msg="RemovePodSandbox for \"9903ccace9462852d1b468d91593d78c0bfdcd616759437c7c9178ce919b932e\"" Feb 13 21:07:23.960053 containerd[1793]: time="2025-02-13T21:07:23.960033590Z" level=info msg="Forcibly stopping sandbox \"9903ccace9462852d1b468d91593d78c0bfdcd616759437c7c9178ce919b932e\"" Feb 13 21:07:23.960092 containerd[1793]: time="2025-02-13T21:07:23.960075879Z" level=info msg="TearDown network for sandbox \"9903ccace9462852d1b468d91593d78c0bfdcd616759437c7c9178ce919b932e\" successfully" Feb 13 21:07:23.961140 containerd[1793]: time="2025-02-13T21:07:23.961131020Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9903ccace9462852d1b468d91593d78c0bfdcd616759437c7c9178ce919b932e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 21:07:23.961159 containerd[1793]: time="2025-02-13T21:07:23.961147262Z" level=info msg="RemovePodSandbox \"9903ccace9462852d1b468d91593d78c0bfdcd616759437c7c9178ce919b932e\" returns successfully" Feb 13 21:07:23.961290 containerd[1793]: time="2025-02-13T21:07:23.961251611Z" level=info msg="StopPodSandbox for \"8f4314912ebb1a2d93a1d3b80fda6eb8e99e490e09de354df1be4020d7eb68fa\"" Feb 13 21:07:23.961290 containerd[1793]: time="2025-02-13T21:07:23.961285900Z" level=info msg="TearDown network for sandbox \"8f4314912ebb1a2d93a1d3b80fda6eb8e99e490e09de354df1be4020d7eb68fa\" successfully" Feb 13 21:07:23.961290 containerd[1793]: time="2025-02-13T21:07:23.961291239Z" level=info msg="StopPodSandbox for \"8f4314912ebb1a2d93a1d3b80fda6eb8e99e490e09de354df1be4020d7eb68fa\" returns successfully" Feb 13 21:07:23.961411 containerd[1793]: time="2025-02-13T21:07:23.961377714Z" level=info msg="RemovePodSandbox for \"8f4314912ebb1a2d93a1d3b80fda6eb8e99e490e09de354df1be4020d7eb68fa\"" Feb 13 21:07:23.961411 containerd[1793]: time="2025-02-13T21:07:23.961389339Z" level=info msg="Forcibly stopping sandbox \"8f4314912ebb1a2d93a1d3b80fda6eb8e99e490e09de354df1be4020d7eb68fa\"" Feb 13 21:07:23.961460 containerd[1793]: time="2025-02-13T21:07:23.961422295Z" level=info msg="TearDown network for sandbox \"8f4314912ebb1a2d93a1d3b80fda6eb8e99e490e09de354df1be4020d7eb68fa\" successfully" Feb 13 21:07:23.962521 containerd[1793]: time="2025-02-13T21:07:23.962480568Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8f4314912ebb1a2d93a1d3b80fda6eb8e99e490e09de354df1be4020d7eb68fa\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 21:07:23.962521 containerd[1793]: time="2025-02-13T21:07:23.962499793Z" level=info msg="RemovePodSandbox \"8f4314912ebb1a2d93a1d3b80fda6eb8e99e490e09de354df1be4020d7eb68fa\" returns successfully" Feb 13 21:07:23.962700 containerd[1793]: time="2025-02-13T21:07:23.962650169Z" level=info msg="StopPodSandbox for \"37fd495faf23255e5ea247e6aa58dae9395a1060826659f4b8b13fa5549cb0f7\"" Feb 13 21:07:23.962700 containerd[1793]: time="2025-02-13T21:07:23.962686794Z" level=info msg="TearDown network for sandbox \"37fd495faf23255e5ea247e6aa58dae9395a1060826659f4b8b13fa5549cb0f7\" successfully" Feb 13 21:07:23.962700 containerd[1793]: time="2025-02-13T21:07:23.962692588Z" level=info msg="StopPodSandbox for \"37fd495faf23255e5ea247e6aa58dae9395a1060826659f4b8b13fa5549cb0f7\" returns successfully" Feb 13 21:07:23.962830 containerd[1793]: time="2025-02-13T21:07:23.962787964Z" level=info msg="RemovePodSandbox for \"37fd495faf23255e5ea247e6aa58dae9395a1060826659f4b8b13fa5549cb0f7\"" Feb 13 21:07:23.962830 containerd[1793]: time="2025-02-13T21:07:23.962797920Z" level=info msg="Forcibly stopping sandbox \"37fd495faf23255e5ea247e6aa58dae9395a1060826659f4b8b13fa5549cb0f7\"" Feb 13 21:07:23.962883 containerd[1793]: time="2025-02-13T21:07:23.962832579Z" level=info msg="TearDown network for sandbox \"37fd495faf23255e5ea247e6aa58dae9395a1060826659f4b8b13fa5549cb0f7\" successfully" Feb 13 21:07:23.963974 containerd[1793]: time="2025-02-13T21:07:23.963934225Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"37fd495faf23255e5ea247e6aa58dae9395a1060826659f4b8b13fa5549cb0f7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 21:07:23.963974 containerd[1793]: time="2025-02-13T21:07:23.963951225Z" level=info msg="RemovePodSandbox \"37fd495faf23255e5ea247e6aa58dae9395a1060826659f4b8b13fa5549cb0f7\" returns successfully" Feb 13 21:07:23.964094 containerd[1793]: time="2025-02-13T21:07:23.964049682Z" level=info msg="StopPodSandbox for \"607ba46d10ad96974b57e545cd12cdc1dd7b6138131f3ab6e1bc63407d40518f\"" Feb 13 21:07:23.964094 containerd[1793]: time="2025-02-13T21:07:23.964091811Z" level=info msg="TearDown network for sandbox \"607ba46d10ad96974b57e545cd12cdc1dd7b6138131f3ab6e1bc63407d40518f\" successfully" Feb 13 21:07:23.964143 containerd[1793]: time="2025-02-13T21:07:23.964098186Z" level=info msg="StopPodSandbox for \"607ba46d10ad96974b57e545cd12cdc1dd7b6138131f3ab6e1bc63407d40518f\" returns successfully" Feb 13 21:07:23.964257 containerd[1793]: time="2025-02-13T21:07:23.964215607Z" level=info msg="RemovePodSandbox for \"607ba46d10ad96974b57e545cd12cdc1dd7b6138131f3ab6e1bc63407d40518f\"" Feb 13 21:07:23.964257 containerd[1793]: time="2025-02-13T21:07:23.964227695Z" level=info msg="Forcibly stopping sandbox \"607ba46d10ad96974b57e545cd12cdc1dd7b6138131f3ab6e1bc63407d40518f\"" Feb 13 21:07:23.964303 containerd[1793]: time="2025-02-13T21:07:23.964263181Z" level=info msg="TearDown network for sandbox \"607ba46d10ad96974b57e545cd12cdc1dd7b6138131f3ab6e1bc63407d40518f\" successfully" Feb 13 21:07:23.965376 containerd[1793]: time="2025-02-13T21:07:23.965336528Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"607ba46d10ad96974b57e545cd12cdc1dd7b6138131f3ab6e1bc63407d40518f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 21:07:23.965376 containerd[1793]: time="2025-02-13T21:07:23.965354188Z" level=info msg="RemovePodSandbox \"607ba46d10ad96974b57e545cd12cdc1dd7b6138131f3ab6e1bc63407d40518f\" returns successfully" Feb 13 21:07:23.965498 containerd[1793]: time="2025-02-13T21:07:23.965464430Z" level=info msg="StopPodSandbox for \"7d5ec4c5cb492ab7ba5ec64c093f408197ef657dfbb781c600cfe63e505b2831\"" Feb 13 21:07:23.965520 containerd[1793]: time="2025-02-13T21:07:23.965502693Z" level=info msg="TearDown network for sandbox \"7d5ec4c5cb492ab7ba5ec64c093f408197ef657dfbb781c600cfe63e505b2831\" successfully" Feb 13 21:07:23.965520 containerd[1793]: time="2025-02-13T21:07:23.965509021Z" level=info msg="StopPodSandbox for \"7d5ec4c5cb492ab7ba5ec64c093f408197ef657dfbb781c600cfe63e505b2831\" returns successfully" Feb 13 21:07:23.965630 containerd[1793]: time="2025-02-13T21:07:23.965614603Z" level=info msg="RemovePodSandbox for \"7d5ec4c5cb492ab7ba5ec64c093f408197ef657dfbb781c600cfe63e505b2831\"" Feb 13 21:07:23.965655 containerd[1793]: time="2025-02-13T21:07:23.965633179Z" level=info msg="Forcibly stopping sandbox \"7d5ec4c5cb492ab7ba5ec64c093f408197ef657dfbb781c600cfe63e505b2831\"" Feb 13 21:07:23.965680 containerd[1793]: time="2025-02-13T21:07:23.965663928Z" level=info msg="TearDown network for sandbox \"7d5ec4c5cb492ab7ba5ec64c093f408197ef657dfbb781c600cfe63e505b2831\" successfully" Feb 13 21:07:23.966773 containerd[1793]: time="2025-02-13T21:07:23.966732787Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7d5ec4c5cb492ab7ba5ec64c093f408197ef657dfbb781c600cfe63e505b2831\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 21:07:23.966773 containerd[1793]: time="2025-02-13T21:07:23.966749439Z" level=info msg="RemovePodSandbox \"7d5ec4c5cb492ab7ba5ec64c093f408197ef657dfbb781c600cfe63e505b2831\" returns successfully" Feb 13 21:07:23.966909 containerd[1793]: time="2025-02-13T21:07:23.966870118Z" level=info msg="StopPodSandbox for \"7d0ffa7070cf221d6b53fde8753c80adaebce4b303256fec539795ff6b8d0673\"" Feb 13 21:07:23.966940 containerd[1793]: time="2025-02-13T21:07:23.966910240Z" level=info msg="TearDown network for sandbox \"7d0ffa7070cf221d6b53fde8753c80adaebce4b303256fec539795ff6b8d0673\" successfully" Feb 13 21:07:23.966940 containerd[1793]: time="2025-02-13T21:07:23.966928322Z" level=info msg="StopPodSandbox for \"7d0ffa7070cf221d6b53fde8753c80adaebce4b303256fec539795ff6b8d0673\" returns successfully" Feb 13 21:07:23.967071 containerd[1793]: time="2025-02-13T21:07:23.967033497Z" level=info msg="RemovePodSandbox for \"7d0ffa7070cf221d6b53fde8753c80adaebce4b303256fec539795ff6b8d0673\"" Feb 13 21:07:23.967071 containerd[1793]: time="2025-02-13T21:07:23.967043552Z" level=info msg="Forcibly stopping sandbox \"7d0ffa7070cf221d6b53fde8753c80adaebce4b303256fec539795ff6b8d0673\"" Feb 13 21:07:23.967121 containerd[1793]: time="2025-02-13T21:07:23.967080410Z" level=info msg="TearDown network for sandbox \"7d0ffa7070cf221d6b53fde8753c80adaebce4b303256fec539795ff6b8d0673\" successfully" Feb 13 21:07:23.968192 containerd[1793]: time="2025-02-13T21:07:23.968153155Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7d0ffa7070cf221d6b53fde8753c80adaebce4b303256fec539795ff6b8d0673\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 21:07:23.968192 containerd[1793]: time="2025-02-13T21:07:23.968178395Z" level=info msg="RemovePodSandbox \"7d0ffa7070cf221d6b53fde8753c80adaebce4b303256fec539795ff6b8d0673\" returns successfully" Feb 13 21:07:23.968354 containerd[1793]: time="2025-02-13T21:07:23.968305107Z" level=info msg="StopPodSandbox for \"8c7d15cdb7c0b8ba3564fbb9684d7b6608656288a9e470c2a4ccd3901de736b4\"" Feb 13 21:07:23.968354 containerd[1793]: time="2025-02-13T21:07:23.968343713Z" level=info msg="TearDown network for sandbox \"8c7d15cdb7c0b8ba3564fbb9684d7b6608656288a9e470c2a4ccd3901de736b4\" successfully" Feb 13 21:07:23.968354 containerd[1793]: time="2025-02-13T21:07:23.968349700Z" level=info msg="StopPodSandbox for \"8c7d15cdb7c0b8ba3564fbb9684d7b6608656288a9e470c2a4ccd3901de736b4\" returns successfully" Feb 13 21:07:23.968484 containerd[1793]: time="2025-02-13T21:07:23.968445043Z" level=info msg="RemovePodSandbox for \"8c7d15cdb7c0b8ba3564fbb9684d7b6608656288a9e470c2a4ccd3901de736b4\"" Feb 13 21:07:23.968484 containerd[1793]: time="2025-02-13T21:07:23.968455942Z" level=info msg="Forcibly stopping sandbox \"8c7d15cdb7c0b8ba3564fbb9684d7b6608656288a9e470c2a4ccd3901de736b4\"" Feb 13 21:07:23.968537 containerd[1793]: time="2025-02-13T21:07:23.968488812Z" level=info msg="TearDown network for sandbox \"8c7d15cdb7c0b8ba3564fbb9684d7b6608656288a9e470c2a4ccd3901de736b4\" successfully" Feb 13 21:07:23.969591 containerd[1793]: time="2025-02-13T21:07:23.969549502Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8c7d15cdb7c0b8ba3564fbb9684d7b6608656288a9e470c2a4ccd3901de736b4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 21:07:23.969591 containerd[1793]: time="2025-02-13T21:07:23.969566957Z" level=info msg="RemovePodSandbox \"8c7d15cdb7c0b8ba3564fbb9684d7b6608656288a9e470c2a4ccd3901de736b4\" returns successfully" Feb 13 21:07:23.969739 containerd[1793]: time="2025-02-13T21:07:23.969694838Z" level=info msg="StopPodSandbox for \"114fc3f722a39799d0f248d314bf5420657b9477dbaaf3f6fd71c3483ba645d7\"" Feb 13 21:07:23.969769 containerd[1793]: time="2025-02-13T21:07:23.969741335Z" level=info msg="TearDown network for sandbox \"114fc3f722a39799d0f248d314bf5420657b9477dbaaf3f6fd71c3483ba645d7\" successfully" Feb 13 21:07:23.969769 containerd[1793]: time="2025-02-13T21:07:23.969748045Z" level=info msg="StopPodSandbox for \"114fc3f722a39799d0f248d314bf5420657b9477dbaaf3f6fd71c3483ba645d7\" returns successfully" Feb 13 21:07:23.969889 containerd[1793]: time="2025-02-13T21:07:23.969849743Z" level=info msg="RemovePodSandbox for \"114fc3f722a39799d0f248d314bf5420657b9477dbaaf3f6fd71c3483ba645d7\"" Feb 13 21:07:23.969889 containerd[1793]: time="2025-02-13T21:07:23.969861096Z" level=info msg="Forcibly stopping sandbox \"114fc3f722a39799d0f248d314bf5420657b9477dbaaf3f6fd71c3483ba645d7\"" Feb 13 21:07:23.969936 containerd[1793]: time="2025-02-13T21:07:23.969914567Z" level=info msg="TearDown network for sandbox \"114fc3f722a39799d0f248d314bf5420657b9477dbaaf3f6fd71c3483ba645d7\" successfully" Feb 13 21:07:23.971038 containerd[1793]: time="2025-02-13T21:07:23.970999074Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"114fc3f722a39799d0f248d314bf5420657b9477dbaaf3f6fd71c3483ba645d7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 21:07:23.971038 containerd[1793]: time="2025-02-13T21:07:23.971016394Z" level=info msg="RemovePodSandbox \"114fc3f722a39799d0f248d314bf5420657b9477dbaaf3f6fd71c3483ba645d7\" returns successfully" Feb 13 21:07:23.971183 containerd[1793]: time="2025-02-13T21:07:23.971135049Z" level=info msg="StopPodSandbox for \"c6698fade93608c8cee35101de0258390fd9402cac4152e2bc13f5618759b460\"" Feb 13 21:07:23.971183 containerd[1793]: time="2025-02-13T21:07:23.971180596Z" level=info msg="TearDown network for sandbox \"c6698fade93608c8cee35101de0258390fd9402cac4152e2bc13f5618759b460\" successfully" Feb 13 21:07:23.971230 containerd[1793]: time="2025-02-13T21:07:23.971186869Z" level=info msg="StopPodSandbox for \"c6698fade93608c8cee35101de0258390fd9402cac4152e2bc13f5618759b460\" returns successfully" Feb 13 21:07:23.971337 containerd[1793]: time="2025-02-13T21:07:23.971281534Z" level=info msg="RemovePodSandbox for \"c6698fade93608c8cee35101de0258390fd9402cac4152e2bc13f5618759b460\"" Feb 13 21:07:23.971337 containerd[1793]: time="2025-02-13T21:07:23.971293045Z" level=info msg="Forcibly stopping sandbox \"c6698fade93608c8cee35101de0258390fd9402cac4152e2bc13f5618759b460\"" Feb 13 21:07:23.971337 containerd[1793]: time="2025-02-13T21:07:23.971324802Z" level=info msg="TearDown network for sandbox \"c6698fade93608c8cee35101de0258390fd9402cac4152e2bc13f5618759b460\" successfully" Feb 13 21:07:23.972458 containerd[1793]: time="2025-02-13T21:07:23.972416732Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c6698fade93608c8cee35101de0258390fd9402cac4152e2bc13f5618759b460\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 21:07:23.972458 containerd[1793]: time="2025-02-13T21:07:23.972434867Z" level=info msg="RemovePodSandbox \"c6698fade93608c8cee35101de0258390fd9402cac4152e2bc13f5618759b460\" returns successfully" Feb 13 21:07:23.972571 containerd[1793]: time="2025-02-13T21:07:23.972530026Z" level=info msg="StopPodSandbox for \"5e8de54b5e2c3fc0edd5132d3a82a1050d08607a6ce7164ce56ae0d439fc2419\"" Feb 13 21:07:23.972571 containerd[1793]: time="2025-02-13T21:07:23.972569956Z" level=info msg="TearDown network for sandbox \"5e8de54b5e2c3fc0edd5132d3a82a1050d08607a6ce7164ce56ae0d439fc2419\" successfully" Feb 13 21:07:23.972628 containerd[1793]: time="2025-02-13T21:07:23.972576445Z" level=info msg="StopPodSandbox for \"5e8de54b5e2c3fc0edd5132d3a82a1050d08607a6ce7164ce56ae0d439fc2419\" returns successfully" Feb 13 21:07:23.972772 containerd[1793]: time="2025-02-13T21:07:23.972727932Z" level=info msg="RemovePodSandbox for \"5e8de54b5e2c3fc0edd5132d3a82a1050d08607a6ce7164ce56ae0d439fc2419\"" Feb 13 21:07:23.972772 containerd[1793]: time="2025-02-13T21:07:23.972740670Z" level=info msg="Forcibly stopping sandbox \"5e8de54b5e2c3fc0edd5132d3a82a1050d08607a6ce7164ce56ae0d439fc2419\"" Feb 13 21:07:23.972822 containerd[1793]: time="2025-02-13T21:07:23.972771791Z" level=info msg="TearDown network for sandbox \"5e8de54b5e2c3fc0edd5132d3a82a1050d08607a6ce7164ce56ae0d439fc2419\" successfully" Feb 13 21:07:23.973864 containerd[1793]: time="2025-02-13T21:07:23.973821601Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5e8de54b5e2c3fc0edd5132d3a82a1050d08607a6ce7164ce56ae0d439fc2419\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 21:07:23.973864 containerd[1793]: time="2025-02-13T21:07:23.973838307Z" level=info msg="RemovePodSandbox \"5e8de54b5e2c3fc0edd5132d3a82a1050d08607a6ce7164ce56ae0d439fc2419\" returns successfully" Feb 13 21:07:26.598901 systemd[1]: Started sshd@15-147.28.180.173:22-109.206.236.167:45032.service - OpenSSH per-connection server daemon (109.206.236.167:45032). Feb 13 21:07:27.413925 kubelet[3104]: I0213 21:07:27.413755 3104 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 21:07:27.551770 sshd[7536]: Connection closed by authenticating user docker 109.206.236.167 port 45032 [preauth] Feb 13 21:07:27.552713 systemd[1]: sshd@15-147.28.180.173:22-109.206.236.167:45032.service: Deactivated successfully. Feb 13 21:07:45.856760 systemd[1]: Started sshd@16-147.28.180.173:22-109.206.236.167:39026.service - OpenSSH per-connection server daemon (109.206.236.167:39026). Feb 13 21:07:46.689992 sshd[7574]: Invalid user wso2 from 109.206.236.167 port 39026 Feb 13 21:07:46.890456 sshd[7574]: Connection closed by invalid user wso2 109.206.236.167 port 39026 [preauth] Feb 13 21:07:46.893792 systemd[1]: sshd@16-147.28.180.173:22-109.206.236.167:39026.service: Deactivated successfully. Feb 13 21:08:02.578931 systemd[1]: Started sshd@17-147.28.180.173:22-109.206.236.167:49628.service - OpenSSH per-connection server daemon (109.206.236.167:49628). Feb 13 21:08:03.429781 sshd[7627]: Invalid user wso2 from 109.206.236.167 port 49628 Feb 13 21:08:03.644850 sshd[7627]: Connection closed by invalid user wso2 109.206.236.167 port 49628 [preauth] Feb 13 21:08:03.648240 systemd[1]: sshd@17-147.28.180.173:22-109.206.236.167:49628.service: Deactivated successfully. Feb 13 21:08:27.087026 systemd[1]: Started sshd@18-147.28.180.173:22-109.206.236.167:39716.service - OpenSSH per-connection server daemon (109.206.236.167:39716). Feb 13 21:08:27.929466 sshd[7689]: Invalid user dev from 109.206.236.167 port 39716 Feb 13 21:08:28.099599 sshd[7689]: Connection closed by invalid user dev 109.206.236.167 port 39716 [preauth] Feb 13 21:08:28.102873 systemd[1]: sshd@18-147.28.180.173:22-109.206.236.167:39716.service: Deactivated successfully. Feb 13 21:08:37.099631 systemd[1]: Started sshd@19-147.28.180.173:22-109.206.236.167:50774.service - OpenSSH per-connection server daemon (109.206.236.167:50774). Feb 13 21:08:37.904855 sshd[7716]: Invalid user dev from 109.206.236.167 port 50774 Feb 13 21:08:38.222897 sshd[7716]: Connection closed by invalid user dev 109.206.236.167 port 50774 [preauth] Feb 13 21:08:38.225986 systemd[1]: sshd@19-147.28.180.173:22-109.206.236.167:50774.service: Deactivated successfully. Feb 13 21:08:54.217308 systemd[1]: Started sshd@20-147.28.180.173:22-109.206.236.167:57552.service - OpenSSH per-connection server daemon (109.206.236.167:57552). Feb 13 21:08:55.016118 sshd[7769]: Invalid user gitlab-psql from 109.206.236.167 port 57552 Feb 13 21:08:55.203311 sshd[7769]: Connection closed by invalid user gitlab-psql 109.206.236.167 port 57552 [preauth] Feb 13 21:08:55.206560 systemd[1]: sshd@20-147.28.180.173:22-109.206.236.167:57552.service: Deactivated successfully. Feb 13 21:09:13.173237 systemd[1]: Started sshd@21-147.28.180.173:22-109.206.236.167:51266.service - OpenSSH per-connection server daemon (109.206.236.167:51266). 
Feb 13 21:09:13.992083 sshd[7816]: Invalid user gitlab-psql from 109.206.236.167 port 51266 Feb 13 21:09:14.198596 sshd[7816]: Connection closed by invalid user gitlab-psql 109.206.236.167 port 51266 [preauth] Feb 13 21:09:14.199824 systemd[1]: sshd@21-147.28.180.173:22-109.206.236.167:51266.service: Deactivated successfully. Feb 13 21:09:34.623823 systemd[1]: Started sshd@22-147.28.180.173:22-109.206.236.167:36706.service - OpenSSH per-connection server daemon (109.206.236.167:36706). Feb 13 21:09:35.570874 sshd[7853]: Invalid user master from 109.206.236.167 port 36706 Feb 13 21:09:35.751507 sshd[7853]: Connection closed by invalid user master 109.206.236.167 port 36706 [preauth] Feb 13 21:09:35.754884 systemd[1]: sshd@22-147.28.180.173:22-109.206.236.167:36706.service: Deactivated successfully. Feb 13 21:09:48.934898 systemd[1]: Started sshd@23-147.28.180.173:22-109.206.236.167:53140.service - OpenSSH per-connection server daemon (109.206.236.167:53140). Feb 13 21:09:49.724730 sshd[7884]: Invalid user master from 109.206.236.167 port 53140 Feb 13 21:09:49.910689 sshd[7884]: Connection closed by invalid user master 109.206.236.167 port 53140 [preauth] Feb 13 21:09:49.913968 systemd[1]: sshd@23-147.28.180.173:22-109.206.236.167:53140.service: Deactivated successfully. Feb 13 21:10:04.485037 systemd[1]: Started sshd@24-147.28.180.173:22-109.206.236.167:36522.service - OpenSSH per-connection server daemon (109.206.236.167:36522). Feb 13 21:10:05.313257 sshd[7936]: Invalid user user2 from 109.206.236.167 port 36522 Feb 13 21:10:05.503094 sshd[7936]: Connection closed by invalid user user2 109.206.236.167 port 36522 [preauth] Feb 13 21:10:05.506307 systemd[1]: sshd@24-147.28.180.173:22-109.206.236.167:36522.service: Deactivated successfully. Feb 13 21:10:23.005250 systemd[1]: Started sshd@25-147.28.180.173:22-109.206.236.167:46410.service - OpenSSH per-connection server daemon (109.206.236.167:46410). Feb 13 21:10:23.981342 sshd[7980]: Invalid user user2 from 109.206.236.167 port 46410 Feb 13 21:10:24.270352 sshd[7980]: Connection closed by invalid user user2 109.206.236.167 port 46410 [preauth] Feb 13 21:10:24.273425 systemd[1]: sshd@25-147.28.180.173:22-109.206.236.167:46410.service: Deactivated successfully. Feb 13 21:10:38.995314 systemd[1]: Started sshd@26-147.28.180.173:22-109.206.236.167:36280.service - OpenSSH per-connection server daemon (109.206.236.167:36280). Feb 13 21:10:39.986579 sshd[8016]: Connection closed by authenticating user nobody 109.206.236.167 port 36280 [preauth] Feb 13 21:10:39.989918 systemd[1]: sshd@26-147.28.180.173:22-109.206.236.167:36280.service: Deactivated successfully. Feb 13 21:10:56.923902 systemd[1]: Started sshd@27-147.28.180.173:22-109.206.236.167:56262.service - OpenSSH per-connection server daemon (109.206.236.167:56262). Feb 13 21:10:57.930201 sshd[8088]: Connection closed by authenticating user nobody 109.206.236.167 port 56262 [preauth] Feb 13 21:10:57.931069 systemd[1]: sshd@27-147.28.180.173:22-109.206.236.167:56262.service: Deactivated successfully. Feb 13 21:11:12.679268 systemd[1]: Started sshd@28-147.28.180.173:22-109.206.236.167:42526.service - OpenSSH per-connection server daemon (109.206.236.167:42526). Feb 13 21:11:13.409549 sshd[8116]: Invalid user gitlab-prometheus from 109.206.236.167 port 42526 Feb 13 21:11:13.598954 sshd[8116]: Connection closed by invalid user gitlab-prometheus 109.206.236.167 port 42526 [preauth] Feb 13 21:11:13.602293 systemd[1]: sshd@28-147.28.180.173:22-109.206.236.167:42526.service: Deactivated successfully. 
Feb 13 21:11:32.316814 systemd[1]: Started sshd@29-147.28.180.173:22-109.206.236.167:59740.service - OpenSSH per-connection server daemon (109.206.236.167:59740). Feb 13 21:11:33.179207 sshd[8154]: Invalid user gitlab-prometheus from 109.206.236.167 port 59740 Feb 13 21:11:33.408406 sshd[8154]: Connection closed by invalid user gitlab-prometheus 109.206.236.167 port 59740 [preauth] Feb 13 21:11:33.411657 systemd[1]: sshd@29-147.28.180.173:22-109.206.236.167:59740.service: Deactivated successfully. Feb 13 21:11:48.818710 systemd[1]: Started sshd@30-147.28.180.173:22-109.206.236.167:41692.service - OpenSSH per-connection server daemon (109.206.236.167:41692). Feb 13 21:11:49.620691 sshd[8200]: Invalid user opc from 109.206.236.167 port 41692 Feb 13 21:11:49.797327 sshd[8200]: Connection closed by invalid user opc 109.206.236.167 port 41692 [preauth] Feb 13 21:11:49.800542 systemd[1]: sshd@30-147.28.180.173:22-109.206.236.167:41692.service: Deactivated successfully. Feb 13 21:11:52.708418 systemd[1]: Started sshd@31-147.28.180.173:22-139.178.89.65:54120.service - OpenSSH per-connection server daemon (139.178.89.65:54120). Feb 13 21:11:52.766136 sshd[8208]: Accepted publickey for core from 139.178.89.65 port 54120 ssh2: RSA SHA256:oqFzLKltKjIjWJ2xRNYaxZupDvhRtAv9cyn8jloyOa0 Feb 13 21:11:52.766834 sshd-session[8208]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 21:11:52.769466 systemd-logind[1776]: New session 12 of user core. Feb 13 21:11:52.779924 systemd[1]: Started session-12.scope - Session 12 of User core. Feb 13 21:11:52.881203 sshd[8210]: Connection closed by 139.178.89.65 port 54120 Feb 13 21:11:52.881386 sshd-session[8208]: pam_unix(sshd:session): session closed for user core Feb 13 21:11:52.883184 systemd[1]: sshd@31-147.28.180.173:22-139.178.89.65:54120.service: Deactivated successfully. Feb 13 21:11:52.884330 systemd[1]: session-12.scope: Deactivated successfully. Feb 13 21:11:52.885175 systemd-logind[1776]: Session 12 logged out. Waiting for processes to exit. Feb 13 21:11:52.885903 systemd-logind[1776]: Removed session 12. Feb 13 21:11:57.917953 systemd[1]: Started sshd@32-147.28.180.173:22-139.178.89.65:49274.service - OpenSSH per-connection server daemon (139.178.89.65:49274). Feb 13 21:11:57.950709 sshd[8283]: Accepted publickey for core from 139.178.89.65 port 49274 ssh2: RSA SHA256:oqFzLKltKjIjWJ2xRNYaxZupDvhRtAv9cyn8jloyOa0 Feb 13 21:11:57.951402 sshd-session[8283]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 21:11:57.954016 systemd-logind[1776]: New session 13 of user core. Feb 13 21:11:57.964808 systemd[1]: Started session-13.scope - Session 13 of User core. Feb 13 21:11:58.053345 sshd[8285]: Connection closed by 139.178.89.65 port 49274 Feb 13 21:11:58.053493 sshd-session[8283]: pam_unix(sshd:session): session closed for user core Feb 13 21:11:58.055183 systemd[1]: sshd@32-147.28.180.173:22-139.178.89.65:49274.service: Deactivated successfully. Feb 13 21:11:58.056137 systemd[1]: session-13.scope: Deactivated successfully. Feb 13 21:11:58.056898 systemd-logind[1776]: Session 13 logged out. Waiting for processes to exit. Feb 13 21:11:58.057517 systemd-logind[1776]: Removed session 13. 
Feb 13 21:12:01.821493 update_engine[1788]: I20250213 21:12:01.821349 1788 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Feb 13 21:12:01.821493 update_engine[1788]: I20250213 21:12:01.821449 1788 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Feb 13 21:12:01.822454 update_engine[1788]: I20250213 21:12:01.821869 1788 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Feb 13 21:12:01.822985 update_engine[1788]: I20250213 21:12:01.822892 1788 omaha_request_params.cc:62] Current group set to beta Feb 13 21:12:01.823171 update_engine[1788]: I20250213 21:12:01.823140 1788 update_attempter.cc:499] Already updated boot flags. Skipping. Feb 13 21:12:01.823278 update_engine[1788]: I20250213 21:12:01.823172 1788 update_attempter.cc:643] Scheduling an action processor start. Feb 13 21:12:01.823278 update_engine[1788]: I20250213 21:12:01.823212 1788 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Feb 13 21:12:01.823440 update_engine[1788]: I20250213 21:12:01.823283 1788 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Feb 13 21:12:01.823560 update_engine[1788]: I20250213 21:12:01.823441 1788 omaha_request_action.cc:271] Posting an Omaha request to disabled Feb 13 21:12:01.823560 update_engine[1788]: I20250213 21:12:01.823472 1788 omaha_request_action.cc:272] Request: Feb 13 21:12:01.823560 update_engine[1788]: Feb 13 21:12:01.823560 update_engine[1788]: Feb 13 21:12:01.823560 update_engine[1788]: Feb 13 21:12:01.823560 update_engine[1788]: Feb 13 21:12:01.823560 update_engine[1788]: Feb 13 21:12:01.823560 update_engine[1788]: Feb 13 21:12:01.823560 update_engine[1788]: Feb 13 21:12:01.823560 update_engine[1788]: Feb 13 21:12:01.823560 update_engine[1788]: I20250213 21:12:01.823488 1788 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Feb 13 21:12:01.824453 locksmithd[1830]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Feb 13 21:12:01.826632 update_engine[1788]: I20250213 21:12:01.826591 1788 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Feb 13 21:12:01.826881 update_engine[1788]: I20250213 21:12:01.826841 1788 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Feb 13 21:12:01.827128 update_engine[1788]: E20250213 21:12:01.827083 1788 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Feb 13 21:12:01.827128 update_engine[1788]: I20250213 21:12:01.827116 1788 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Feb 13 21:12:03.084796 systemd[1]: Started sshd@33-147.28.180.173:22-139.178.89.65:49290.service - OpenSSH per-connection server daemon (139.178.89.65:49290). Feb 13 21:12:03.113242 sshd[8311]: Accepted publickey for core from 139.178.89.65 port 49290 ssh2: RSA SHA256:oqFzLKltKjIjWJ2xRNYaxZupDvhRtAv9cyn8jloyOa0 Feb 13 21:12:03.113884 sshd-session[8311]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 21:12:03.116456 systemd-logind[1776]: New session 14 of user core. Feb 13 21:12:03.127867 systemd[1]: Started session-14.scope - Session 14 of User core. Feb 13 21:12:03.237109 sshd[8313]: Connection closed by 139.178.89.65 port 49290 Feb 13 21:12:03.237288 sshd-session[8311]: pam_unix(sshd:session): session closed for user core Feb 13 21:12:03.247219 systemd[1]: sshd@33-147.28.180.173:22-139.178.89.65:49290.service: Deactivated successfully. 
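The update_engine burst above decodes as follows: the client builds an Omaha check for the beta group, posts it to the literal host name "disabled", and curl predictably fails with "Could not resolve host: disabled", so the fetcher schedules a retry. That is the usual signature of an update server deliberately pointed at a non-resolving value to switch off update checks, roughly as in the illustrative config below; the path and key names are an assumption based on common Flatcar practice, not read from this host. The blank "Request:" lines are where the Omaha XML body would normally appear; it was not captured here.

    # Illustrative /etc/flatcar/update.conf (path and keys assumed, not read
    # from this host); GROUP=beta matches "Current group set to beta" above.
    GROUP=beta
    SERVER=disabled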
Feb 13 21:12:03.248023 systemd[1]: session-14.scope: Deactivated successfully. Feb 13 21:12:03.248621 systemd-logind[1776]: Session 14 logged out. Waiting for processes to exit. Feb 13 21:12:03.249341 systemd[1]: Started sshd@34-147.28.180.173:22-139.178.89.65:49298.service - OpenSSH per-connection server daemon (139.178.89.65:49298). Feb 13 21:12:03.249847 systemd-logind[1776]: Removed session 14. Feb 13 21:12:03.279751 sshd[8338]: Accepted publickey for core from 139.178.89.65 port 49298 ssh2: RSA SHA256:oqFzLKltKjIjWJ2xRNYaxZupDvhRtAv9cyn8jloyOa0 Feb 13 21:12:03.280406 sshd-session[8338]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 21:12:03.283038 systemd-logind[1776]: New session 15 of user core. Feb 13 21:12:03.300729 systemd[1]: Started session-15.scope - Session 15 of User core. Feb 13 21:12:03.401945 sshd[8340]: Connection closed by 139.178.89.65 port 49298 Feb 13 21:12:03.402076 sshd-session[8338]: pam_unix(sshd:session): session closed for user core Feb 13 21:12:03.420532 systemd[1]: sshd@34-147.28.180.173:22-139.178.89.65:49298.service: Deactivated successfully. Feb 13 21:12:03.421404 systemd[1]: session-15.scope: Deactivated successfully. Feb 13 21:12:03.422173 systemd-logind[1776]: Session 15 logged out. Waiting for processes to exit. Feb 13 21:12:03.422836 systemd[1]: Started sshd@35-147.28.180.173:22-139.178.89.65:49304.service - OpenSSH per-connection server daemon (139.178.89.65:49304). Feb 13 21:12:03.423321 systemd-logind[1776]: Removed session 15. Feb 13 21:12:03.457631 sshd[8362]: Accepted publickey for core from 139.178.89.65 port 49304 ssh2: RSA SHA256:oqFzLKltKjIjWJ2xRNYaxZupDvhRtAv9cyn8jloyOa0 Feb 13 21:12:03.458330 sshd-session[8362]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 21:12:03.461067 systemd-logind[1776]: New session 16 of user core. Feb 13 21:12:03.476777 systemd[1]: Started session-16.scope - Session 16 of User core. Feb 13 21:12:03.566904 sshd[8364]: Connection closed by 139.178.89.65 port 49304 Feb 13 21:12:03.567079 sshd-session[8362]: pam_unix(sshd:session): session closed for user core Feb 13 21:12:03.568583 systemd[1]: sshd@35-147.28.180.173:22-139.178.89.65:49304.service: Deactivated successfully. Feb 13 21:12:03.569517 systemd[1]: session-16.scope: Deactivated successfully. Feb 13 21:12:03.570256 systemd-logind[1776]: Session 16 logged out. Waiting for processes to exit. Feb 13 21:12:03.570786 systemd-logind[1776]: Removed session 16. Feb 13 21:12:07.757194 systemd[1]: Started sshd@36-147.28.180.173:22-109.206.236.167:50246.service - OpenSSH per-connection server daemon (109.206.236.167:50246). Feb 13 21:12:08.602925 systemd[1]: Started sshd@37-147.28.180.173:22-139.178.89.65:55860.service - OpenSSH per-connection server daemon (139.178.89.65:55860). Feb 13 21:12:08.607614 sshd[8388]: Invalid user opc from 109.206.236.167 port 50246 Feb 13 21:12:08.631985 sshd[8391]: Accepted publickey for core from 139.178.89.65 port 55860 ssh2: RSA SHA256:oqFzLKltKjIjWJ2xRNYaxZupDvhRtAv9cyn8jloyOa0 Feb 13 21:12:08.632611 sshd-session[8391]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 21:12:08.635260 systemd-logind[1776]: New session 17 of user core. Feb 13 21:12:08.655888 systemd[1]: Started session-17.scope - Session 17 of User core. 
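Sessions 14 through 16 above follow the same shape: publickey accepted, pam session opened, logind session created, and the connection is gone again in roughly a tenth of a second, which looks like scripted rather than interactive use. The sketch below, illustrative only, pairs "New session N" with "Removed session N" entries to report how long each session lasted; it assumes one journal entry per line and the "Feb 13 21:12:03.249847" timestamp prefix seen here, with the year (2025) assumed since the prefix omits it.

    # Minimal sketch: pair systemd-logind "New session N of user U." and
    # "Removed session N." entries and print each session's duration.
    import re
    import sys
    from datetime import datetime

    STAMP = r"(\w{3} +\d+ \d{2}:\d{2}:\d{2}\.\d+)"
    NEW = re.compile(STAMP + r".*New session (\d+) of user (\S+?)\.")
    GONE = re.compile(STAMP + r".*Removed session (\d+)\.")

    def parse(ts: str) -> datetime:
        # The journal prefix carries no year; 2025 is assumed here.
        return datetime.strptime(f"2025 {ts}", "%Y %b %d %H:%M:%S.%f")

    opened = {}
    for line in sys.stdin:
        if m := NEW.search(line):
            opened[m.group(2)] = (parse(m.group(1)), m.group(3))
        elif m := GONE.search(line):
            start = opened.pop(m.group(2), None)
            if start:
                seconds = (parse(m.group(1)) - start[0]).total_seconds()
                print(f"session {m.group(2)} ({start[1]}): {seconds:.3f}s")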
Feb 13 21:12:08.746876 sshd[8393]: Connection closed by 139.178.89.65 port 55860 Feb 13 21:12:08.747057 sshd-session[8391]: pam_unix(sshd:session): session closed for user core Feb 13 21:12:08.748659 systemd[1]: sshd@37-147.28.180.173:22-139.178.89.65:55860.service: Deactivated successfully. Feb 13 21:12:08.749661 systemd[1]: session-17.scope: Deactivated successfully. Feb 13 21:12:08.750455 systemd-logind[1776]: Session 17 logged out. Waiting for processes to exit. Feb 13 21:12:08.751207 systemd-logind[1776]: Removed session 17. Feb 13 21:12:08.773137 sshd[8388]: Connection closed by invalid user opc 109.206.236.167 port 50246 [preauth] Feb 13 21:12:08.773973 systemd[1]: sshd@36-147.28.180.173:22-109.206.236.167:50246.service: Deactivated successfully. Feb 13 21:12:11.820329 update_engine[1788]: I20250213 21:12:11.820154 1788 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Feb 13 21:12:11.821199 update_engine[1788]: I20250213 21:12:11.820741 1788 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Feb 13 21:12:11.821444 update_engine[1788]: I20250213 21:12:11.821341 1788 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Feb 13 21:12:11.821806 update_engine[1788]: E20250213 21:12:11.821702 1788 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Feb 13 21:12:11.821994 update_engine[1788]: I20250213 21:12:11.821816 1788 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Feb 13 21:12:13.782947 systemd[1]: Started sshd@38-147.28.180.173:22-139.178.89.65:55876.service - OpenSSH per-connection server daemon (139.178.89.65:55876). Feb 13 21:12:13.815959 sshd[8441]: Accepted publickey for core from 139.178.89.65 port 55876 ssh2: RSA SHA256:oqFzLKltKjIjWJ2xRNYaxZupDvhRtAv9cyn8jloyOa0 Feb 13 21:12:13.816670 sshd-session[8441]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 21:12:13.819443 systemd-logind[1776]: New session 18 of user core. Feb 13 21:12:13.842972 systemd[1]: Started session-18.scope - Session 18 of User core. Feb 13 21:12:13.930705 sshd[8443]: Connection closed by 139.178.89.65 port 55876 Feb 13 21:12:13.930839 sshd-session[8441]: pam_unix(sshd:session): session closed for user core Feb 13 21:12:13.960553 systemd[1]: sshd@38-147.28.180.173:22-139.178.89.65:55876.service: Deactivated successfully. Feb 13 21:12:13.964334 systemd[1]: session-18.scope: Deactivated successfully. Feb 13 21:12:13.967609 systemd-logind[1776]: Session 18 logged out. Waiting for processes to exit. Feb 13 21:12:13.982256 systemd[1]: Started sshd@39-147.28.180.173:22-139.178.89.65:55890.service - OpenSSH per-connection server daemon (139.178.89.65:55890). Feb 13 21:12:13.984848 systemd-logind[1776]: Removed session 18. Feb 13 21:12:14.042839 sshd[8468]: Accepted publickey for core from 139.178.89.65 port 55890 ssh2: RSA SHA256:oqFzLKltKjIjWJ2xRNYaxZupDvhRtAv9cyn8jloyOa0 Feb 13 21:12:14.043474 sshd-session[8468]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 21:12:14.046266 systemd-logind[1776]: New session 19 of user core. Feb 13 21:12:14.063908 systemd[1]: Started session-19.scope - Session 19 of User core. Feb 13 21:12:14.155609 sshd[8473]: Connection closed by 139.178.89.65 port 55890 Feb 13 21:12:14.155810 sshd-session[8468]: pam_unix(sshd:session): session closed for user core Feb 13 21:12:14.175648 systemd[1]: sshd@39-147.28.180.173:22-139.178.89.65:55890.service: Deactivated successfully. 
Feb 13 21:12:14.176702 systemd[1]: session-19.scope: Deactivated successfully. Feb 13 21:12:14.177587 systemd-logind[1776]: Session 19 logged out. Waiting for processes to exit. Feb 13 21:12:14.178573 systemd[1]: Started sshd@40-147.28.180.173:22-139.178.89.65:55900.service - OpenSSH per-connection server daemon (139.178.89.65:55900). Feb 13 21:12:14.179222 systemd-logind[1776]: Removed session 19. Feb 13 21:12:14.227513 sshd[8494]: Accepted publickey for core from 139.178.89.65 port 55900 ssh2: RSA SHA256:oqFzLKltKjIjWJ2xRNYaxZupDvhRtAv9cyn8jloyOa0 Feb 13 21:12:14.228501 sshd-session[8494]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 21:12:14.232245 systemd-logind[1776]: New session 20 of user core. Feb 13 21:12:14.244037 systemd[1]: Started session-20.scope - Session 20 of User core. Feb 13 21:12:15.132302 sshd[8496]: Connection closed by 139.178.89.65 port 55900 Feb 13 21:12:15.132525 sshd-session[8494]: pam_unix(sshd:session): session closed for user core Feb 13 21:12:15.145091 systemd[1]: sshd@40-147.28.180.173:22-139.178.89.65:55900.service: Deactivated successfully. Feb 13 21:12:15.146368 systemd[1]: session-20.scope: Deactivated successfully. Feb 13 21:12:15.147440 systemd-logind[1776]: Session 20 logged out. Waiting for processes to exit. Feb 13 21:12:15.148495 systemd[1]: Started sshd@41-147.28.180.173:22-139.178.89.65:47418.service - OpenSSH per-connection server daemon (139.178.89.65:47418). Feb 13 21:12:15.149196 systemd-logind[1776]: Removed session 20. Feb 13 21:12:15.197086 sshd[8526]: Accepted publickey for core from 139.178.89.65 port 47418 ssh2: RSA SHA256:oqFzLKltKjIjWJ2xRNYaxZupDvhRtAv9cyn8jloyOa0 Feb 13 21:12:15.198017 sshd-session[8526]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 21:12:15.201434 systemd-logind[1776]: New session 21 of user core. Feb 13 21:12:15.211871 systemd[1]: Started session-21.scope - Session 21 of User core. Feb 13 21:12:15.429656 sshd[8530]: Connection closed by 139.178.89.65 port 47418 Feb 13 21:12:15.429849 sshd-session[8526]: pam_unix(sshd:session): session closed for user core Feb 13 21:12:15.444657 systemd[1]: sshd@41-147.28.180.173:22-139.178.89.65:47418.service: Deactivated successfully. Feb 13 21:12:15.445651 systemd[1]: session-21.scope: Deactivated successfully. Feb 13 21:12:15.446441 systemd-logind[1776]: Session 21 logged out. Waiting for processes to exit. Feb 13 21:12:15.447298 systemd[1]: Started sshd@42-147.28.180.173:22-139.178.89.65:47432.service - OpenSSH per-connection server daemon (139.178.89.65:47432). Feb 13 21:12:15.447857 systemd-logind[1776]: Removed session 21. Feb 13 21:12:15.491714 sshd[8552]: Accepted publickey for core from 139.178.89.65 port 47432 ssh2: RSA SHA256:oqFzLKltKjIjWJ2xRNYaxZupDvhRtAv9cyn8jloyOa0 Feb 13 21:12:15.492637 sshd-session[8552]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 21:12:15.495788 systemd-logind[1776]: New session 22 of user core. Feb 13 21:12:15.510821 systemd[1]: Started session-22.scope - Session 22 of User core. Feb 13 21:12:15.651107 sshd[8554]: Connection closed by 139.178.89.65 port 47432 Feb 13 21:12:15.651307 sshd-session[8552]: pam_unix(sshd:session): session closed for user core Feb 13 21:12:15.652995 systemd[1]: sshd@42-147.28.180.173:22-139.178.89.65:47432.service: Deactivated successfully. Feb 13 21:12:15.654110 systemd[1]: session-22.scope: Deactivated successfully. Feb 13 21:12:15.654991 systemd-logind[1776]: Session 22 logged out. 
Waiting for processes to exit. Feb 13 21:12:15.655634 systemd-logind[1776]: Removed session 22. Feb 13 21:12:20.676425 systemd[1]: Started sshd@43-147.28.180.173:22-139.178.89.65:47444.service - OpenSSH per-connection server daemon (139.178.89.65:47444). Feb 13 21:12:20.724715 sshd[8588]: Accepted publickey for core from 139.178.89.65 port 47444 ssh2: RSA SHA256:oqFzLKltKjIjWJ2xRNYaxZupDvhRtAv9cyn8jloyOa0 Feb 13 21:12:20.726365 sshd-session[8588]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 21:12:20.731677 systemd-logind[1776]: New session 23 of user core. Feb 13 21:12:20.746811 systemd[1]: Started session-23.scope - Session 23 of User core. Feb 13 21:12:20.837786 sshd[8590]: Connection closed by 139.178.89.65 port 47444 Feb 13 21:12:20.837973 sshd-session[8588]: pam_unix(sshd:session): session closed for user core Feb 13 21:12:20.839577 systemd[1]: sshd@43-147.28.180.173:22-139.178.89.65:47444.service: Deactivated successfully. Feb 13 21:12:20.840606 systemd[1]: session-23.scope: Deactivated successfully. Feb 13 21:12:20.841382 systemd-logind[1776]: Session 23 logged out. Waiting for processes to exit. Feb 13 21:12:20.842061 systemd-logind[1776]: Removed session 23. Feb 13 21:12:21.817941 update_engine[1788]: I20250213 21:12:21.817783 1788 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Feb 13 21:12:21.818765 update_engine[1788]: I20250213 21:12:21.818326 1788 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Feb 13 21:12:21.818992 update_engine[1788]: I20250213 21:12:21.818877 1788 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Feb 13 21:12:21.819402 update_engine[1788]: E20250213 21:12:21.819288 1788 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Feb 13 21:12:21.819564 update_engine[1788]: I20250213 21:12:21.819429 1788 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Feb 13 21:12:24.532862 systemd[1]: Started sshd@44-147.28.180.173:22-109.206.236.167:59306.service - OpenSSH per-connection server daemon (109.206.236.167:59306). Feb 13 21:12:25.405530 sshd[8640]: Invalid user solr from 109.206.236.167 port 59306 Feb 13 21:12:25.604176 sshd[8640]: Connection closed by invalid user solr 109.206.236.167 port 59306 [preauth] Feb 13 21:12:25.604941 systemd[1]: sshd@44-147.28.180.173:22-109.206.236.167:59306.service: Deactivated successfully. Feb 13 21:12:25.878298 systemd[1]: Started sshd@45-147.28.180.173:22-139.178.89.65:48208.service - OpenSSH per-connection server daemon (139.178.89.65:48208). Feb 13 21:12:25.951940 sshd[8646]: Accepted publickey for core from 139.178.89.65 port 48208 ssh2: RSA SHA256:oqFzLKltKjIjWJ2xRNYaxZupDvhRtAv9cyn8jloyOa0 Feb 13 21:12:25.952730 sshd-session[8646]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 21:12:25.955659 systemd-logind[1776]: New session 24 of user core. Feb 13 21:12:25.965857 systemd[1]: Started session-24.scope - Session 24 of User core. Feb 13 21:12:26.095828 sshd[8648]: Connection closed by 139.178.89.65 port 48208 Feb 13 21:12:26.096011 sshd-session[8646]: pam_unix(sshd:session): session closed for user core Feb 13 21:12:26.097542 systemd[1]: sshd@45-147.28.180.173:22-139.178.89.65:48208.service: Deactivated successfully. Feb 13 21:12:26.098501 systemd[1]: session-24.scope: Deactivated successfully. Feb 13 21:12:26.099233 systemd-logind[1776]: Session 24 logged out. Waiting for processes to exit. Feb 13 21:12:26.099718 systemd-logind[1776]: Removed session 24. 
Feb 13 21:12:31.126939 systemd[1]: Started sshd@46-147.28.180.173:22-139.178.89.65:48220.service - OpenSSH per-connection server daemon (139.178.89.65:48220). Feb 13 21:12:31.161090 sshd[8673]: Accepted publickey for core from 139.178.89.65 port 48220 ssh2: RSA SHA256:oqFzLKltKjIjWJ2xRNYaxZupDvhRtAv9cyn8jloyOa0 Feb 13 21:12:31.161953 sshd-session[8673]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 21:12:31.164457 systemd-logind[1776]: New session 25 of user core. Feb 13 21:12:31.180771 systemd[1]: Started session-25.scope - Session 25 of User core. Feb 13 21:12:31.268775 sshd[8675]: Connection closed by 139.178.89.65 port 48220 Feb 13 21:12:31.268953 sshd-session[8673]: pam_unix(sshd:session): session closed for user core Feb 13 21:12:31.270535 systemd[1]: sshd@46-147.28.180.173:22-139.178.89.65:48220.service: Deactivated successfully. Feb 13 21:12:31.271490 systemd[1]: session-25.scope: Deactivated successfully. Feb 13 21:12:31.272181 systemd-logind[1776]: Session 25 logged out. Waiting for processes to exit. Feb 13 21:12:31.272727 systemd-logind[1776]: Removed session 25. Feb 13 21:12:31.817555 update_engine[1788]: I20250213 21:12:31.817379 1788 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Feb 13 21:12:31.818379 update_engine[1788]: I20250213 21:12:31.818022 1788 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Feb 13 21:12:31.818730 update_engine[1788]: I20250213 21:12:31.818607 1788 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Feb 13 21:12:31.819116 update_engine[1788]: E20250213 21:12:31.819000 1788 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Feb 13 21:12:31.819330 update_engine[1788]: I20250213 21:12:31.819139 1788 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Feb 13 21:12:31.819330 update_engine[1788]: I20250213 21:12:31.819170 1788 omaha_request_action.cc:617] Omaha request response: Feb 13 21:12:31.819549 update_engine[1788]: E20250213 21:12:31.819341 1788 omaha_request_action.cc:636] Omaha request network transfer failed. Feb 13 21:12:31.819549 update_engine[1788]: I20250213 21:12:31.819391 1788 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Feb 13 21:12:31.819549 update_engine[1788]: I20250213 21:12:31.819409 1788 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Feb 13 21:12:31.819549 update_engine[1788]: I20250213 21:12:31.819424 1788 update_attempter.cc:306] Processing Done. Feb 13 21:12:31.819549 update_engine[1788]: E20250213 21:12:31.819456 1788 update_attempter.cc:619] Update failed. Feb 13 21:12:31.819549 update_engine[1788]: I20250213 21:12:31.819474 1788 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Feb 13 21:12:31.819549 update_engine[1788]: I20250213 21:12:31.819489 1788 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Feb 13 21:12:31.819549 update_engine[1788]: I20250213 21:12:31.819505 1788 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. 
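Read together with the fetcher messages at 21:12:01, 21:12:11 and 21:12:21, this is the end of one complete, failed check: four transfer attempts roughly ten seconds apart all die on "Could not resolve host: disabled", after which the action processor marks the OmahaRequestAction failed, converts error code 2000 to kActionCodeOmahaErrorInHTTPResponse, and records payload error 37. A minimal sketch for pulling that timeline out of the journal, with the same input assumptions as the earlier sketches:

    # Minimal sketch: print the timestamps of update_engine fetch attempts,
    # retries and failures so the spacing between them is visible.
    import re
    import sys

    EVENT = re.compile(
        r"(\w{3} +\d+ \d{2}:\d{2}:\d{2}\.\d+).*update_engine.*?"
        r"(Starting/Resuming transfer|No HTTP response, retry \d+|"
        r"Transfer resulted in an error|Omaha request network transfer failed)"
    )

    for line in sys.stdin:
        if m := EVENT.search(line):
            print(m.group(1), "-", m.group(2))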
Feb 13 21:12:31.820226 update_engine[1788]: I20250213 21:12:31.819698 1788 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Feb 13 21:12:31.820226 update_engine[1788]: I20250213 21:12:31.819769 1788 omaha_request_action.cc:271] Posting an Omaha request to disabled Feb 13 21:12:31.820226 update_engine[1788]: I20250213 21:12:31.819790 1788 omaha_request_action.cc:272] Request: Feb 13 21:12:31.820226 update_engine[1788]: Feb 13 21:12:31.820226 update_engine[1788]: Feb 13 21:12:31.820226 update_engine[1788]: Feb 13 21:12:31.820226 update_engine[1788]: Feb 13 21:12:31.820226 update_engine[1788]: Feb 13 21:12:31.820226 update_engine[1788]: Feb 13 21:12:31.820226 update_engine[1788]: I20250213 21:12:31.819808 1788 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Feb 13 21:12:31.821026 update_engine[1788]: I20250213 21:12:31.820222 1788 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Feb 13 21:12:31.821026 update_engine[1788]: I20250213 21:12:31.820721 1788 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Feb 13 21:12:31.821217 locksmithd[1830]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Feb 13 21:12:31.821855 update_engine[1788]: E20250213 21:12:31.821072 1788 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Feb 13 21:12:31.821855 update_engine[1788]: I20250213 21:12:31.821199 1788 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Feb 13 21:12:31.821855 update_engine[1788]: I20250213 21:12:31.821228 1788 omaha_request_action.cc:617] Omaha request response: Feb 13 21:12:31.821855 update_engine[1788]: I20250213 21:12:31.821249 1788 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Feb 13 21:12:31.821855 update_engine[1788]: I20250213 21:12:31.821264 1788 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Feb 13 21:12:31.821855 update_engine[1788]: I20250213 21:12:31.821280 1788 update_attempter.cc:306] Processing Done. Feb 13 21:12:31.821855 update_engine[1788]: I20250213 21:12:31.821297 1788 update_attempter.cc:310] Error event sent. Feb 13 21:12:31.821855 update_engine[1788]: I20250213 21:12:31.821323 1788 update_check_scheduler.cc:74] Next update check in 46m33s Feb 13 21:12:31.822501 locksmithd[1830]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0
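With the error event delivered, the attempter goes idle and schedules the next check ("Next update check in 46m33s"), and locksmithd mirrors that as UPDATE_STATUS_IDLE. On a Flatcar host the same state can normally be read back interactively with the standard client tools shown below; their exact output is not taken from this log, and their presence on this particular image is an assumption.

    update_engine_client -status    # last check time, current operation, new version/size
    locksmithctl status             # reboot-lock (semaphore) state managed by locksmithd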