Jan 29 11:35:12.455729 kernel: microcode: updated early: 0xde -> 0x100, date = 2024-02-05 Jan 29 11:35:12.455743 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Wed Jan 29 09:36:13 -00 2025 Jan 29 11:35:12.455751 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=519b8fded83181f8e61f734d5291f916d7548bfba9487c78bcb50d002d81719d Jan 29 11:35:12.455755 kernel: BIOS-provided physical RAM map: Jan 29 11:35:12.455759 kernel: BIOS-e820: [mem 0x0000000000000000-0x00000000000997ff] usable Jan 29 11:35:12.455763 kernel: BIOS-e820: [mem 0x0000000000099800-0x000000000009ffff] reserved Jan 29 11:35:12.455768 kernel: BIOS-e820: [mem 0x00000000000e0000-0x00000000000fffff] reserved Jan 29 11:35:12.455772 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003fffffff] usable Jan 29 11:35:12.455777 kernel: BIOS-e820: [mem 0x0000000040000000-0x00000000403fffff] reserved Jan 29 11:35:12.455781 kernel: BIOS-e820: [mem 0x0000000040400000-0x000000006eb29fff] usable Jan 29 11:35:12.455785 kernel: BIOS-e820: [mem 0x000000006eb2a000-0x000000006eb2afff] ACPI NVS Jan 29 11:35:12.455789 kernel: BIOS-e820: [mem 0x000000006eb2b000-0x000000006eb2bfff] reserved Jan 29 11:35:12.455793 kernel: BIOS-e820: [mem 0x000000006eb2c000-0x0000000077fc4fff] usable Jan 29 11:35:12.455798 kernel: BIOS-e820: [mem 0x0000000077fc5000-0x00000000790a7fff] reserved Jan 29 11:35:12.455804 kernel: BIOS-e820: [mem 0x00000000790a8000-0x0000000079230fff] usable Jan 29 11:35:12.455809 kernel: BIOS-e820: [mem 0x0000000079231000-0x0000000079662fff] ACPI NVS Jan 29 11:35:12.455813 kernel: BIOS-e820: [mem 0x0000000079663000-0x000000007befefff] reserved Jan 29 11:35:12.455818 kernel: BIOS-e820: [mem 0x000000007beff000-0x000000007befffff] usable Jan 29 11:35:12.455822 kernel: BIOS-e820: [mem 0x000000007bf00000-0x000000007f7fffff] reserved Jan 29 11:35:12.455827 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved Jan 29 11:35:12.455832 kernel: BIOS-e820: [mem 0x00000000fe000000-0x00000000fe010fff] reserved Jan 29 11:35:12.455836 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec00fff] reserved Jan 29 11:35:12.455841 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved Jan 29 11:35:12.455845 kernel: BIOS-e820: [mem 0x00000000ff000000-0x00000000ffffffff] reserved Jan 29 11:35:12.455851 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000087f7fffff] usable Jan 29 11:35:12.455856 kernel: NX (Execute Disable) protection: active Jan 29 11:35:12.455860 kernel: APIC: Static calls initialized Jan 29 11:35:12.455865 kernel: SMBIOS 3.2.1 present. 
Jan 29 11:35:12.455869 kernel: DMI: Supermicro PIO-519C-MR-PH004/X11SCH-F, BIOS 1.5 11/17/2020 Jan 29 11:35:12.455874 kernel: tsc: Detected 3400.000 MHz processor Jan 29 11:35:12.455879 kernel: tsc: Detected 3399.906 MHz TSC Jan 29 11:35:12.455883 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jan 29 11:35:12.455888 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jan 29 11:35:12.455893 kernel: last_pfn = 0x87f800 max_arch_pfn = 0x400000000 Jan 29 11:35:12.455898 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 23), built from 10 variable MTRRs Jan 29 11:35:12.455904 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jan 29 11:35:12.455909 kernel: last_pfn = 0x7bf00 max_arch_pfn = 0x400000000 Jan 29 11:35:12.455914 kernel: Using GB pages for direct mapping Jan 29 11:35:12.455919 kernel: ACPI: Early table checksum verification disabled Jan 29 11:35:12.455924 kernel: ACPI: RSDP 0x00000000000F05B0 000024 (v02 SUPERM) Jan 29 11:35:12.455931 kernel: ACPI: XSDT 0x00000000795440C8 00010C (v01 SUPERM SUPERM 01072009 AMI 00010013) Jan 29 11:35:12.455936 kernel: ACPI: FACP 0x0000000079580620 000114 (v06 01072009 AMI 00010013) Jan 29 11:35:12.455942 kernel: ACPI: DSDT 0x0000000079544268 03C3B7 (v02 SUPERM SMCI--MB 01072009 INTL 20160527) Jan 29 11:35:12.455947 kernel: ACPI: FACS 0x0000000079662F80 000040 Jan 29 11:35:12.455952 kernel: ACPI: APIC 0x0000000079580738 00012C (v04 01072009 AMI 00010013) Jan 29 11:35:12.455957 kernel: ACPI: FPDT 0x0000000079580868 000044 (v01 01072009 AMI 00010013) Jan 29 11:35:12.455962 kernel: ACPI: FIDT 0x00000000795808B0 00009C (v01 SUPERM SMCI--MB 01072009 AMI 00010013) Jan 29 11:35:12.455967 kernel: ACPI: MCFG 0x0000000079580950 00003C (v01 SUPERM SMCI--MB 01072009 MSFT 00000097) Jan 29 11:35:12.455972 kernel: ACPI: SPMI 0x0000000079580990 000041 (v05 SUPERM SMCI--MB 00000000 AMI. 
00000000) Jan 29 11:35:12.455978 kernel: ACPI: SSDT 0x00000000795809D8 001B1C (v02 CpuRef CpuSsdt 00003000 INTL 20160527) Jan 29 11:35:12.455983 kernel: ACPI: SSDT 0x00000000795824F8 0031C6 (v02 SaSsdt SaSsdt 00003000 INTL 20160527) Jan 29 11:35:12.455988 kernel: ACPI: SSDT 0x00000000795856C0 00232B (v02 PegSsd PegSsdt 00001000 INTL 20160527) Jan 29 11:35:12.455993 kernel: ACPI: HPET 0x00000000795879F0 000038 (v01 SUPERM SMCI--MB 00000002 01000013) Jan 29 11:35:12.455998 kernel: ACPI: SSDT 0x0000000079587A28 000FAE (v02 SUPERM Ther_Rvp 00001000 INTL 20160527) Jan 29 11:35:12.456003 kernel: ACPI: SSDT 0x00000000795889D8 0008F7 (v02 INTEL xh_mossb 00000000 INTL 20160527) Jan 29 11:35:12.456008 kernel: ACPI: UEFI 0x00000000795892D0 000042 (v01 SUPERM SMCI--MB 00000002 01000013) Jan 29 11:35:12.456013 kernel: ACPI: LPIT 0x0000000079589318 000094 (v01 SUPERM SMCI--MB 00000002 01000013) Jan 29 11:35:12.456019 kernel: ACPI: SSDT 0x00000000795893B0 0027DE (v02 SUPERM PtidDevc 00001000 INTL 20160527) Jan 29 11:35:12.456024 kernel: ACPI: SSDT 0x000000007958BB90 0014E2 (v02 SUPERM TbtTypeC 00000000 INTL 20160527) Jan 29 11:35:12.456029 kernel: ACPI: DBGP 0x000000007958D078 000034 (v01 SUPERM SMCI--MB 00000002 01000013) Jan 29 11:35:12.456034 kernel: ACPI: DBG2 0x000000007958D0B0 000054 (v00 SUPERM SMCI--MB 00000002 01000013) Jan 29 11:35:12.456039 kernel: ACPI: SSDT 0x000000007958D108 001B67 (v02 SUPERM UsbCTabl 00001000 INTL 20160527) Jan 29 11:35:12.456044 kernel: ACPI: DMAR 0x000000007958EC70 0000A8 (v01 INTEL EDK2 00000002 01000013) Jan 29 11:35:12.456049 kernel: ACPI: SSDT 0x000000007958ED18 000144 (v02 Intel ADebTabl 00001000 INTL 20160527) Jan 29 11:35:12.456055 kernel: ACPI: TPM2 0x000000007958EE60 000034 (v04 SUPERM SMCI--MB 00000001 AMI 00000000) Jan 29 11:35:12.456060 kernel: ACPI: SSDT 0x000000007958EE98 000D8F (v02 INTEL SpsNm 00000002 INTL 20160527) Jan 29 11:35:12.456066 kernel: ACPI: WSMT 0x000000007958FC28 000028 (v01 \xacn 01072009 AMI 00010013) Jan 29 11:35:12.456071 kernel: ACPI: EINJ 0x000000007958FC50 000130 (v01 AMI AMI.EINJ 00000000 AMI. 00000000) Jan 29 11:35:12.456076 kernel: ACPI: ERST 0x000000007958FD80 000230 (v01 AMIER AMI.ERST 00000000 AMI. 00000000) Jan 29 11:35:12.456081 kernel: ACPI: BERT 0x000000007958FFB0 000030 (v01 AMI AMI.BERT 00000000 AMI. 00000000) Jan 29 11:35:12.456086 kernel: ACPI: HEST 0x000000007958FFE0 00027C (v01 AMI AMI.HEST 00000000 AMI. 
00000000) Jan 29 11:35:12.456091 kernel: ACPI: SSDT 0x0000000079590260 000162 (v01 SUPERM SMCCDN 00000000 INTL 20181221) Jan 29 11:35:12.456096 kernel: ACPI: Reserving FACP table memory at [mem 0x79580620-0x79580733] Jan 29 11:35:12.456101 kernel: ACPI: Reserving DSDT table memory at [mem 0x79544268-0x7958061e] Jan 29 11:35:12.456106 kernel: ACPI: Reserving FACS table memory at [mem 0x79662f80-0x79662fbf] Jan 29 11:35:12.456112 kernel: ACPI: Reserving APIC table memory at [mem 0x79580738-0x79580863] Jan 29 11:35:12.456117 kernel: ACPI: Reserving FPDT table memory at [mem 0x79580868-0x795808ab] Jan 29 11:35:12.456122 kernel: ACPI: Reserving FIDT table memory at [mem 0x795808b0-0x7958094b] Jan 29 11:35:12.456127 kernel: ACPI: Reserving MCFG table memory at [mem 0x79580950-0x7958098b] Jan 29 11:35:12.456132 kernel: ACPI: Reserving SPMI table memory at [mem 0x79580990-0x795809d0] Jan 29 11:35:12.456137 kernel: ACPI: Reserving SSDT table memory at [mem 0x795809d8-0x795824f3] Jan 29 11:35:12.456142 kernel: ACPI: Reserving SSDT table memory at [mem 0x795824f8-0x795856bd] Jan 29 11:35:12.456147 kernel: ACPI: Reserving SSDT table memory at [mem 0x795856c0-0x795879ea] Jan 29 11:35:12.456152 kernel: ACPI: Reserving HPET table memory at [mem 0x795879f0-0x79587a27] Jan 29 11:35:12.456158 kernel: ACPI: Reserving SSDT table memory at [mem 0x79587a28-0x795889d5] Jan 29 11:35:12.456163 kernel: ACPI: Reserving SSDT table memory at [mem 0x795889d8-0x795892ce] Jan 29 11:35:12.456168 kernel: ACPI: Reserving UEFI table memory at [mem 0x795892d0-0x79589311] Jan 29 11:35:12.456173 kernel: ACPI: Reserving LPIT table memory at [mem 0x79589318-0x795893ab] Jan 29 11:35:12.456178 kernel: ACPI: Reserving SSDT table memory at [mem 0x795893b0-0x7958bb8d] Jan 29 11:35:12.456183 kernel: ACPI: Reserving SSDT table memory at [mem 0x7958bb90-0x7958d071] Jan 29 11:35:12.456188 kernel: ACPI: Reserving DBGP table memory at [mem 0x7958d078-0x7958d0ab] Jan 29 11:35:12.456193 kernel: ACPI: Reserving DBG2 table memory at [mem 0x7958d0b0-0x7958d103] Jan 29 11:35:12.456198 kernel: ACPI: Reserving SSDT table memory at [mem 0x7958d108-0x7958ec6e] Jan 29 11:35:12.456204 kernel: ACPI: Reserving DMAR table memory at [mem 0x7958ec70-0x7958ed17] Jan 29 11:35:12.456209 kernel: ACPI: Reserving SSDT table memory at [mem 0x7958ed18-0x7958ee5b] Jan 29 11:35:12.456214 kernel: ACPI: Reserving TPM2 table memory at [mem 0x7958ee60-0x7958ee93] Jan 29 11:35:12.456219 kernel: ACPI: Reserving SSDT table memory at [mem 0x7958ee98-0x7958fc26] Jan 29 11:35:12.456224 kernel: ACPI: Reserving WSMT table memory at [mem 0x7958fc28-0x7958fc4f] Jan 29 11:35:12.456229 kernel: ACPI: Reserving EINJ table memory at [mem 0x7958fc50-0x7958fd7f] Jan 29 11:35:12.456234 kernel: ACPI: Reserving ERST table memory at [mem 0x7958fd80-0x7958ffaf] Jan 29 11:35:12.456239 kernel: ACPI: Reserving BERT table memory at [mem 0x7958ffb0-0x7958ffdf] Jan 29 11:35:12.456244 kernel: ACPI: Reserving HEST table memory at [mem 0x7958ffe0-0x7959025b] Jan 29 11:35:12.456250 kernel: ACPI: Reserving SSDT table memory at [mem 0x79590260-0x795903c1] Jan 29 11:35:12.456255 kernel: No NUMA configuration found Jan 29 11:35:12.456260 kernel: Faking a node at [mem 0x0000000000000000-0x000000087f7fffff] Jan 29 11:35:12.456265 kernel: NODE_DATA(0) allocated [mem 0x87f7fa000-0x87f7fffff] Jan 29 11:35:12.456270 kernel: Zone ranges: Jan 29 11:35:12.456275 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jan 29 11:35:12.456280 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Jan 29 
11:35:12.456285 kernel: Normal [mem 0x0000000100000000-0x000000087f7fffff] Jan 29 11:35:12.456290 kernel: Movable zone start for each node Jan 29 11:35:12.456297 kernel: Early memory node ranges Jan 29 11:35:12.456304 kernel: node 0: [mem 0x0000000000001000-0x0000000000098fff] Jan 29 11:35:12.456331 kernel: node 0: [mem 0x0000000000100000-0x000000003fffffff] Jan 29 11:35:12.456336 kernel: node 0: [mem 0x0000000040400000-0x000000006eb29fff] Jan 29 11:35:12.456356 kernel: node 0: [mem 0x000000006eb2c000-0x0000000077fc4fff] Jan 29 11:35:12.456375 kernel: node 0: [mem 0x00000000790a8000-0x0000000079230fff] Jan 29 11:35:12.456381 kernel: node 0: [mem 0x000000007beff000-0x000000007befffff] Jan 29 11:35:12.456390 kernel: node 0: [mem 0x0000000100000000-0x000000087f7fffff] Jan 29 11:35:12.456395 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000087f7fffff] Jan 29 11:35:12.456401 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 29 11:35:12.456406 kernel: On node 0, zone DMA: 103 pages in unavailable ranges Jan 29 11:35:12.456413 kernel: On node 0, zone DMA32: 1024 pages in unavailable ranges Jan 29 11:35:12.456418 kernel: On node 0, zone DMA32: 2 pages in unavailable ranges Jan 29 11:35:12.456423 kernel: On node 0, zone DMA32: 4323 pages in unavailable ranges Jan 29 11:35:12.456429 kernel: On node 0, zone DMA32: 11470 pages in unavailable ranges Jan 29 11:35:12.456434 kernel: On node 0, zone Normal: 16640 pages in unavailable ranges Jan 29 11:35:12.456440 kernel: On node 0, zone Normal: 2048 pages in unavailable ranges Jan 29 11:35:12.456445 kernel: ACPI: PM-Timer IO Port: 0x1808 Jan 29 11:35:12.456452 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1]) Jan 29 11:35:12.456457 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1]) Jan 29 11:35:12.456463 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1]) Jan 29 11:35:12.456468 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1]) Jan 29 11:35:12.456473 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1]) Jan 29 11:35:12.456478 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1]) Jan 29 11:35:12.456484 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1]) Jan 29 11:35:12.456489 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1]) Jan 29 11:35:12.456494 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1]) Jan 29 11:35:12.456501 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1]) Jan 29 11:35:12.456506 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1]) Jan 29 11:35:12.456511 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1]) Jan 29 11:35:12.456517 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1]) Jan 29 11:35:12.456522 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1]) Jan 29 11:35:12.456527 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1]) Jan 29 11:35:12.456532 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1]) Jan 29 11:35:12.456538 kernel: IOAPIC[0]: apic_id 2, version 32, address 0xfec00000, GSI 0-119 Jan 29 11:35:12.456543 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Jan 29 11:35:12.456550 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jan 29 11:35:12.456555 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 29 11:35:12.456560 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Jan 29 11:35:12.456566 kernel: TSC deadline timer available Jan 29 11:35:12.456571 kernel: smpboot: Allowing 16 CPUs, 0 hotplug CPUs Jan 29 11:35:12.456577 kernel: 
[mem 0x7f800000-0xdfffffff] available for PCI devices Jan 29 11:35:12.456582 kernel: Booting paravirtualized kernel on bare hardware Jan 29 11:35:12.456588 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jan 29 11:35:12.456593 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:16 nr_cpu_ids:16 nr_node_ids:1 Jan 29 11:35:12.456599 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u262144 Jan 29 11:35:12.456605 kernel: pcpu-alloc: s197032 r8192 d32344 u262144 alloc=1*2097152 Jan 29 11:35:12.456610 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15 Jan 29 11:35:12.456616 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=519b8fded83181f8e61f734d5291f916d7548bfba9487c78bcb50d002d81719d Jan 29 11:35:12.456622 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jan 29 11:35:12.456627 kernel: random: crng init done Jan 29 11:35:12.456632 kernel: Dentry cache hash table entries: 4194304 (order: 13, 33554432 bytes, linear) Jan 29 11:35:12.456638 kernel: Inode-cache hash table entries: 2097152 (order: 12, 16777216 bytes, linear) Jan 29 11:35:12.456644 kernel: Fallback order for Node 0: 0 Jan 29 11:35:12.456650 kernel: Built 1 zonelists, mobility grouping on. Total pages: 8222327 Jan 29 11:35:12.456655 kernel: Policy zone: Normal Jan 29 11:35:12.456661 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 29 11:35:12.456666 kernel: software IO TLB: area num 16. Jan 29 11:35:12.456672 kernel: Memory: 32679316K/33411988K available (12288K kernel code, 2301K rwdata, 22736K rodata, 42972K init, 2220K bss, 732412K reserved, 0K cma-reserved) Jan 29 11:35:12.456677 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1 Jan 29 11:35:12.456682 kernel: ftrace: allocating 37923 entries in 149 pages Jan 29 11:35:12.456688 kernel: ftrace: allocated 149 pages with 4 groups Jan 29 11:35:12.456694 kernel: Dynamic Preempt: voluntary Jan 29 11:35:12.456700 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 29 11:35:12.456705 kernel: rcu: RCU event tracing is enabled. Jan 29 11:35:12.456711 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16. Jan 29 11:35:12.456716 kernel: Trampoline variant of Tasks RCU enabled. Jan 29 11:35:12.456722 kernel: Rude variant of Tasks RCU enabled. Jan 29 11:35:12.456727 kernel: Tracing variant of Tasks RCU enabled. Jan 29 11:35:12.456732 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 29 11:35:12.456738 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16 Jan 29 11:35:12.456744 kernel: NR_IRQS: 33024, nr_irqs: 2184, preallocated irqs: 16 Jan 29 11:35:12.456750 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Jan 29 11:35:12.456755 kernel: Console: colour VGA+ 80x25 Jan 29 11:35:12.456760 kernel: printk: console [tty0] enabled Jan 29 11:35:12.456766 kernel: printk: console [ttyS1] enabled Jan 29 11:35:12.456771 kernel: ACPI: Core revision 20230628 Jan 29 11:35:12.456776 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 79635855245 ns Jan 29 11:35:12.456782 kernel: APIC: Switch to symmetric I/O mode setup Jan 29 11:35:12.456787 kernel: DMAR: Host address width 39 Jan 29 11:35:12.456794 kernel: DMAR: DRHD base: 0x000000fed90000 flags: 0x0 Jan 29 11:35:12.456799 kernel: DMAR: dmar0: reg_base_addr fed90000 ver 1:0 cap 1c0000c40660462 ecap 19e2ff0505e Jan 29 11:35:12.456805 kernel: DMAR: DRHD base: 0x000000fed91000 flags: 0x1 Jan 29 11:35:12.456810 kernel: DMAR: dmar1: reg_base_addr fed91000 ver 1:0 cap d2008c40660462 ecap f050da Jan 29 11:35:12.456815 kernel: DMAR: RMRR base: 0x00000079f11000 end: 0x0000007a15afff Jan 29 11:35:12.456821 kernel: DMAR: RMRR base: 0x0000007d000000 end: 0x0000007f7fffff Jan 29 11:35:12.456826 kernel: DMAR-IR: IOAPIC id 2 under DRHD base 0xfed91000 IOMMU 1 Jan 29 11:35:12.456832 kernel: DMAR-IR: HPET id 0 under DRHD base 0xfed91000 Jan 29 11:35:12.456837 kernel: DMAR-IR: Queued invalidation will be enabled to support x2apic and Intr-remapping. Jan 29 11:35:12.456843 kernel: DMAR-IR: Enabled IRQ remapping in x2apic mode Jan 29 11:35:12.456849 kernel: x2apic enabled Jan 29 11:35:12.456854 kernel: APIC: Switched APIC routing to: cluster x2apic Jan 29 11:35:12.456860 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Jan 29 11:35:12.456865 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x3101f59f5e6, max_idle_ns: 440795259996 ns Jan 29 11:35:12.456871 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 
6799.81 BogoMIPS (lpj=3399906) Jan 29 11:35:12.456876 kernel: CPU0: Thermal monitoring enabled (TM1) Jan 29 11:35:12.456881 kernel: process: using mwait in idle threads Jan 29 11:35:12.456887 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Jan 29 11:35:12.456893 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4 Jan 29 11:35:12.456899 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 29 11:35:12.456904 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit Jan 29 11:35:12.456909 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall Jan 29 11:35:12.456915 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS Jan 29 11:35:12.456920 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Jan 29 11:35:12.456926 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT Jan 29 11:35:12.456931 kernel: RETBleed: Mitigation: Enhanced IBRS Jan 29 11:35:12.456937 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Jan 29 11:35:12.456943 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Jan 29 11:35:12.456949 kernel: TAA: Mitigation: TSX disabled Jan 29 11:35:12.456954 kernel: MMIO Stale Data: Mitigation: Clear CPU buffers Jan 29 11:35:12.456959 kernel: SRBDS: Mitigation: Microcode Jan 29 11:35:12.456965 kernel: GDS: Mitigation: Microcode Jan 29 11:35:12.456970 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jan 29 11:35:12.456975 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jan 29 11:35:12.456981 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jan 29 11:35:12.456986 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers' Jan 29 11:35:12.456993 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR' Jan 29 11:35:12.456998 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jan 29 11:35:12.457003 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64 Jan 29 11:35:12.457009 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64 Jan 29 11:35:12.457014 kernel: x86/fpu: Enabled xstate features 0x1f, context size is 960 bytes, using 'compacted' format. Jan 29 11:35:12.457020 kernel: Freeing SMP alternatives memory: 32K Jan 29 11:35:12.457025 kernel: pid_max: default: 32768 minimum: 301 Jan 29 11:35:12.457030 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 29 11:35:12.457036 kernel: landlock: Up and running. Jan 29 11:35:12.457042 kernel: SELinux: Initializing. Jan 29 11:35:12.457047 kernel: Mount-cache hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 29 11:35:12.457053 kernel: Mountpoint-cache hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 29 11:35:12.457058 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd) Jan 29 11:35:12.457064 kernel: RCU Tasks: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Jan 29 11:35:12.457069 kernel: RCU Tasks Rude: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Jan 29 11:35:12.457075 kernel: RCU Tasks Trace: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Jan 29 11:35:12.457080 kernel: Performance Events: PEBS fmt3+, Skylake events, 32-deep LBR, full-width counters, Intel PMU driver. Jan 29 11:35:12.457085 kernel: ... 
version: 4 Jan 29 11:35:12.457092 kernel: ... bit width: 48 Jan 29 11:35:12.457097 kernel: ... generic registers: 4 Jan 29 11:35:12.457102 kernel: ... value mask: 0000ffffffffffff Jan 29 11:35:12.457108 kernel: ... max period: 00007fffffffffff Jan 29 11:35:12.457113 kernel: ... fixed-purpose events: 3 Jan 29 11:35:12.457118 kernel: ... event mask: 000000070000000f Jan 29 11:35:12.457124 kernel: signal: max sigframe size: 2032 Jan 29 11:35:12.457129 kernel: Estimated ratio of average max frequency by base frequency (times 1024): 1445 Jan 29 11:35:12.457134 kernel: rcu: Hierarchical SRCU implementation. Jan 29 11:35:12.457141 kernel: rcu: Max phase no-delay instances is 400. Jan 29 11:35:12.457146 kernel: NMI watchdog: Enabled. Permanently consumes one hw-PMU counter. Jan 29 11:35:12.457152 kernel: smp: Bringing up secondary CPUs ... Jan 29 11:35:12.457157 kernel: smpboot: x86: Booting SMP configuration: Jan 29 11:35:12.457163 kernel: .... node #0, CPUs: #1 #2 #3 #4 #5 #6 #7 #8 #9 #10 #11 #12 #13 #14 #15 Jan 29 11:35:12.457168 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. Jan 29 11:35:12.457174 kernel: smp: Brought up 1 node, 16 CPUs Jan 29 11:35:12.457179 kernel: smpboot: Max logical packages: 1 Jan 29 11:35:12.457184 kernel: smpboot: Total of 16 processors activated (108796.99 BogoMIPS) Jan 29 11:35:12.457191 kernel: devtmpfs: initialized Jan 29 11:35:12.457196 kernel: x86/mm: Memory block size: 128MB Jan 29 11:35:12.457202 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x6eb2a000-0x6eb2afff] (4096 bytes) Jan 29 11:35:12.457207 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x79231000-0x79662fff] (4399104 bytes) Jan 29 11:35:12.457212 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 29 11:35:12.457218 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear) Jan 29 11:35:12.457223 kernel: pinctrl core: initialized pinctrl subsystem Jan 29 11:35:12.457229 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 29 11:35:12.457235 kernel: audit: initializing netlink subsys (disabled) Jan 29 11:35:12.457241 kernel: audit: type=2000 audit(1738150507.129:1): state=initialized audit_enabled=0 res=1 Jan 29 11:35:12.457246 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 29 11:35:12.457251 kernel: thermal_sys: Registered thermal governor 'user_space' Jan 29 11:35:12.457257 kernel: cpuidle: using governor menu Jan 29 11:35:12.457262 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 29 11:35:12.457267 kernel: dca service started, version 1.12.1 Jan 29 11:35:12.457273 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000) Jan 29 11:35:12.457278 kernel: PCI: Using configuration type 1 for base access Jan 29 11:35:12.457285 kernel: ENERGY_PERF_BIAS: Set to 'normal', was 'performance' Jan 29 11:35:12.457290 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Jan 29 11:35:12.457297 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 29 11:35:12.457303 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jan 29 11:35:12.457308 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 29 11:35:12.457338 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 29 11:35:12.457344 kernel: ACPI: Added _OSI(Module Device) Jan 29 11:35:12.457349 kernel: ACPI: Added _OSI(Processor Device) Jan 29 11:35:12.457355 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jan 29 11:35:12.457377 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 29 11:35:12.457382 kernel: ACPI: 12 ACPI AML tables successfully acquired and loaded Jan 29 11:35:12.457388 kernel: ACPI: Dynamic OEM Table Load: Jan 29 11:35:12.457393 kernel: ACPI: SSDT 0xFFFFA08D02095800 000400 (v02 PmRef Cpu0Cst 00003001 INTL 20160527) Jan 29 11:35:12.457399 kernel: ACPI: Dynamic OEM Table Load: Jan 29 11:35:12.457404 kernel: ACPI: SSDT 0xFFFFA08D02089000 000683 (v02 PmRef Cpu0Ist 00003000 INTL 20160527) Jan 29 11:35:12.457409 kernel: ACPI: Dynamic OEM Table Load: Jan 29 11:35:12.457414 kernel: ACPI: SSDT 0xFFFFA08D0172F300 0000F4 (v02 PmRef Cpu0Psd 00003000 INTL 20160527) Jan 29 11:35:12.457420 kernel: ACPI: Dynamic OEM Table Load: Jan 29 11:35:12.457425 kernel: ACPI: SSDT 0xFFFFA08D0208C000 0005FC (v02 PmRef ApIst 00003000 INTL 20160527) Jan 29 11:35:12.457432 kernel: ACPI: Dynamic OEM Table Load: Jan 29 11:35:12.457437 kernel: ACPI: SSDT 0xFFFFA08D0209D000 000AB0 (v02 PmRef ApPsd 00003000 INTL 20160527) Jan 29 11:35:12.457442 kernel: ACPI: Dynamic OEM Table Load: Jan 29 11:35:12.457447 kernel: ACPI: SSDT 0xFFFFA08D01032C00 00030A (v02 PmRef ApCst 00003000 INTL 20160527) Jan 29 11:35:12.457453 kernel: ACPI: _OSC evaluated successfully for all CPUs Jan 29 11:35:12.457458 kernel: ACPI: Interpreter enabled Jan 29 11:35:12.457464 kernel: ACPI: PM: (supports S0 S5) Jan 29 11:35:12.457469 kernel: ACPI: Using IOAPIC for interrupt routing Jan 29 11:35:12.457474 kernel: HEST: Enabling Firmware First mode for corrected errors. Jan 29 11:35:12.457481 kernel: mce: [Firmware Bug]: Ignoring request to disable invalid MCA bank 14. Jan 29 11:35:12.457486 kernel: HEST: Table parsing has been initialized. Jan 29 11:35:12.457491 kernel: GHES: APEI firmware first mode is enabled by APEI bit and WHEA _OSC. 
Jan 29 11:35:12.457497 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 29 11:35:12.457502 kernel: PCI: Using E820 reservations for host bridge windows Jan 29 11:35:12.457508 kernel: ACPI: Enabled 9 GPEs in block 00 to 7F Jan 29 11:35:12.457513 kernel: ACPI: \_SB_.PCI0.XDCI.USBC: New power resource Jan 29 11:35:12.457519 kernel: ACPI: \_SB_.PCI0.SAT0.VOL0.V0PR: New power resource Jan 29 11:35:12.457524 kernel: ACPI: \_SB_.PCI0.SAT0.VOL1.V1PR: New power resource Jan 29 11:35:12.457530 kernel: ACPI: \_SB_.PCI0.SAT0.VOL2.V2PR: New power resource Jan 29 11:35:12.457536 kernel: ACPI: \_SB_.PCI0.CNVW.WRST: New power resource Jan 29 11:35:12.457542 kernel: ACPI: [Firmware Bug]: BIOS _OSI(Linux) query ignored Jan 29 11:35:12.457547 kernel: ACPI: \_TZ_.FN00: New power resource Jan 29 11:35:12.457552 kernel: ACPI: \_TZ_.FN01: New power resource Jan 29 11:35:12.457558 kernel: ACPI: \_TZ_.FN02: New power resource Jan 29 11:35:12.457563 kernel: ACPI: \_TZ_.FN03: New power resource Jan 29 11:35:12.457569 kernel: ACPI: \_TZ_.FN04: New power resource Jan 29 11:35:12.457574 kernel: ACPI: \PIN_: New power resource Jan 29 11:35:12.457580 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-fe]) Jan 29 11:35:12.457653 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jan 29 11:35:12.457706 kernel: acpi PNP0A08:00: _OSC: platform does not support [AER] Jan 29 11:35:12.457753 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability LTR] Jan 29 11:35:12.457761 kernel: PCI host bridge to bus 0000:00 Jan 29 11:35:12.457812 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jan 29 11:35:12.457853 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jan 29 11:35:12.457897 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jan 29 11:35:12.457938 kernel: pci_bus 0000:00: root bus resource [mem 0x7f800000-0xdfffffff window] Jan 29 11:35:12.457979 kernel: pci_bus 0000:00: root bus resource [mem 0xfc800000-0xfe7fffff window] Jan 29 11:35:12.458019 kernel: pci_bus 0000:00: root bus resource [bus 00-fe] Jan 29 11:35:12.458074 kernel: pci 0000:00:00.0: [8086:3e31] type 00 class 0x060000 Jan 29 11:35:12.458128 kernel: pci 0000:00:01.0: [8086:1901] type 01 class 0x060400 Jan 29 11:35:12.458179 kernel: pci 0000:00:01.0: PME# supported from D0 D3hot D3cold Jan 29 11:35:12.458230 kernel: pci 0000:00:01.1: [8086:1905] type 01 class 0x060400 Jan 29 11:35:12.458279 kernel: pci 0000:00:01.1: PME# supported from D0 D3hot D3cold Jan 29 11:35:12.458375 kernel: pci 0000:00:02.0: [8086:3e9a] type 00 class 0x038000 Jan 29 11:35:12.458437 kernel: pci 0000:00:02.0: reg 0x10: [mem 0x94000000-0x94ffffff 64bit] Jan 29 11:35:12.458485 kernel: pci 0000:00:02.0: reg 0x18: [mem 0x80000000-0x8fffffff 64bit pref] Jan 29 11:35:12.458533 kernel: pci 0000:00:02.0: reg 0x20: [io 0x6000-0x603f] Jan 29 11:35:12.458586 kernel: pci 0000:00:08.0: [8086:1911] type 00 class 0x088000 Jan 29 11:35:12.458633 kernel: pci 0000:00:08.0: reg 0x10: [mem 0x9651f000-0x9651ffff 64bit] Jan 29 11:35:12.458684 kernel: pci 0000:00:12.0: [8086:a379] type 00 class 0x118000 Jan 29 11:35:12.458731 kernel: pci 0000:00:12.0: reg 0x10: [mem 0x9651e000-0x9651efff 64bit] Jan 29 11:35:12.458782 kernel: pci 0000:00:14.0: [8086:a36d] type 00 class 0x0c0330 Jan 29 11:35:12.458831 kernel: pci 0000:00:14.0: reg 0x10: [mem 0x96500000-0x9650ffff 64bit] Jan 29 11:35:12.458885 kernel: pci 0000:00:14.0: PME# 
supported from D3hot D3cold Jan 29 11:35:12.458935 kernel: pci 0000:00:14.2: [8086:a36f] type 00 class 0x050000 Jan 29 11:35:12.458983 kernel: pci 0000:00:14.2: reg 0x10: [mem 0x96512000-0x96513fff 64bit] Jan 29 11:35:12.459028 kernel: pci 0000:00:14.2: reg 0x18: [mem 0x9651d000-0x9651dfff 64bit] Jan 29 11:35:12.459080 kernel: pci 0000:00:15.0: [8086:a368] type 00 class 0x0c8000 Jan 29 11:35:12.459128 kernel: pci 0000:00:15.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit] Jan 29 11:35:12.459181 kernel: pci 0000:00:15.1: [8086:a369] type 00 class 0x0c8000 Jan 29 11:35:12.459228 kernel: pci 0000:00:15.1: reg 0x10: [mem 0x00000000-0x00000fff 64bit] Jan 29 11:35:12.459281 kernel: pci 0000:00:16.0: [8086:a360] type 00 class 0x078000 Jan 29 11:35:12.459386 kernel: pci 0000:00:16.0: reg 0x10: [mem 0x9651a000-0x9651afff 64bit] Jan 29 11:35:12.459435 kernel: pci 0000:00:16.0: PME# supported from D3hot Jan 29 11:35:12.459485 kernel: pci 0000:00:16.1: [8086:a361] type 00 class 0x078000 Jan 29 11:35:12.459534 kernel: pci 0000:00:16.1: reg 0x10: [mem 0x96519000-0x96519fff 64bit] Jan 29 11:35:12.459583 kernel: pci 0000:00:16.1: PME# supported from D3hot Jan 29 11:35:12.459633 kernel: pci 0000:00:16.4: [8086:a364] type 00 class 0x078000 Jan 29 11:35:12.459680 kernel: pci 0000:00:16.4: reg 0x10: [mem 0x96518000-0x96518fff 64bit] Jan 29 11:35:12.459726 kernel: pci 0000:00:16.4: PME# supported from D3hot Jan 29 11:35:12.459780 kernel: pci 0000:00:17.0: [8086:a352] type 00 class 0x010601 Jan 29 11:35:12.459826 kernel: pci 0000:00:17.0: reg 0x10: [mem 0x96510000-0x96511fff] Jan 29 11:35:12.459873 kernel: pci 0000:00:17.0: reg 0x14: [mem 0x96517000-0x965170ff] Jan 29 11:35:12.459918 kernel: pci 0000:00:17.0: reg 0x18: [io 0x6090-0x6097] Jan 29 11:35:12.459965 kernel: pci 0000:00:17.0: reg 0x1c: [io 0x6080-0x6083] Jan 29 11:35:12.460011 kernel: pci 0000:00:17.0: reg 0x20: [io 0x6060-0x607f] Jan 29 11:35:12.460057 kernel: pci 0000:00:17.0: reg 0x24: [mem 0x96516000-0x965167ff] Jan 29 11:35:12.460106 kernel: pci 0000:00:17.0: PME# supported from D3hot Jan 29 11:35:12.460158 kernel: pci 0000:00:1b.0: [8086:a340] type 01 class 0x060400 Jan 29 11:35:12.460206 kernel: pci 0000:00:1b.0: PME# supported from D0 D3hot D3cold Jan 29 11:35:12.460257 kernel: pci 0000:00:1b.4: [8086:a32c] type 01 class 0x060400 Jan 29 11:35:12.460308 kernel: pci 0000:00:1b.4: PME# supported from D0 D3hot D3cold Jan 29 11:35:12.460420 kernel: pci 0000:00:1b.5: [8086:a32d] type 01 class 0x060400 Jan 29 11:35:12.460468 kernel: pci 0000:00:1b.5: PME# supported from D0 D3hot D3cold Jan 29 11:35:12.460519 kernel: pci 0000:00:1c.0: [8086:a338] type 01 class 0x060400 Jan 29 11:35:12.460567 kernel: pci 0000:00:1c.0: PME# supported from D0 D3hot D3cold Jan 29 11:35:12.460620 kernel: pci 0000:00:1c.1: [8086:a339] type 01 class 0x060400 Jan 29 11:35:12.460668 kernel: pci 0000:00:1c.1: PME# supported from D0 D3hot D3cold Jan 29 11:35:12.460719 kernel: pci 0000:00:1e.0: [8086:a328] type 00 class 0x078000 Jan 29 11:35:12.460769 kernel: pci 0000:00:1e.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit] Jan 29 11:35:12.460820 kernel: pci 0000:00:1f.0: [8086:a309] type 00 class 0x060100 Jan 29 11:35:12.460870 kernel: pci 0000:00:1f.4: [8086:a323] type 00 class 0x0c0500 Jan 29 11:35:12.460918 kernel: pci 0000:00:1f.4: reg 0x10: [mem 0x96514000-0x965140ff 64bit] Jan 29 11:35:12.460964 kernel: pci 0000:00:1f.4: reg 0x20: [io 0xefa0-0xefbf] Jan 29 11:35:12.461018 kernel: pci 0000:00:1f.5: [8086:a324] type 00 class 0x0c8000 Jan 29 11:35:12.461067 kernel: pci 
0000:00:1f.5: reg 0x10: [mem 0xfe010000-0xfe010fff] Jan 29 11:35:12.461115 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Jan 29 11:35:12.461170 kernel: pci 0000:02:00.0: [15b3:1015] type 00 class 0x020000 Jan 29 11:35:12.461219 kernel: pci 0000:02:00.0: reg 0x10: [mem 0x92000000-0x93ffffff 64bit pref] Jan 29 11:35:12.461267 kernel: pci 0000:02:00.0: reg 0x30: [mem 0x96200000-0x962fffff pref] Jan 29 11:35:12.461345 kernel: pci 0000:02:00.0: PME# supported from D3cold Jan 29 11:35:12.461423 kernel: pci 0000:02:00.0: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref] Jan 29 11:35:12.461473 kernel: pci 0000:02:00.0: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs) Jan 29 11:35:12.461529 kernel: pci 0000:02:00.1: [15b3:1015] type 00 class 0x020000 Jan 29 11:35:12.461577 kernel: pci 0000:02:00.1: reg 0x10: [mem 0x90000000-0x91ffffff 64bit pref] Jan 29 11:35:12.461625 kernel: pci 0000:02:00.1: reg 0x30: [mem 0x96100000-0x961fffff pref] Jan 29 11:35:12.461673 kernel: pci 0000:02:00.1: PME# supported from D3cold Jan 29 11:35:12.461720 kernel: pci 0000:02:00.1: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref] Jan 29 11:35:12.461767 kernel: pci 0000:02:00.1: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs) Jan 29 11:35:12.461818 kernel: pci 0000:00:01.1: PCI bridge to [bus 02] Jan 29 11:35:12.461865 kernel: pci 0000:00:01.1: bridge window [mem 0x96100000-0x962fffff] Jan 29 11:35:12.461911 kernel: pci 0000:00:01.1: bridge window [mem 0x90000000-0x93ffffff 64bit pref] Jan 29 11:35:12.461959 kernel: pci 0000:00:1b.0: PCI bridge to [bus 03] Jan 29 11:35:12.462010 kernel: pci 0000:04:00.0: working around ROM BAR overlap defect Jan 29 11:35:12.462059 kernel: pci 0000:04:00.0: [8086:1533] type 00 class 0x020000 Jan 29 11:35:12.462106 kernel: pci 0000:04:00.0: reg 0x10: [mem 0x96400000-0x9647ffff] Jan 29 11:35:12.462157 kernel: pci 0000:04:00.0: reg 0x18: [io 0x5000-0x501f] Jan 29 11:35:12.462204 kernel: pci 0000:04:00.0: reg 0x1c: [mem 0x96480000-0x96483fff] Jan 29 11:35:12.462252 kernel: pci 0000:04:00.0: PME# supported from D0 D3hot D3cold Jan 29 11:35:12.462302 kernel: pci 0000:00:1b.4: PCI bridge to [bus 04] Jan 29 11:35:12.462401 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff] Jan 29 11:35:12.462450 kernel: pci 0000:00:1b.4: bridge window [mem 0x96400000-0x964fffff] Jan 29 11:35:12.462502 kernel: pci 0000:05:00.0: working around ROM BAR overlap defect Jan 29 11:35:12.462551 kernel: pci 0000:05:00.0: [8086:1533] type 00 class 0x020000 Jan 29 11:35:12.462602 kernel: pci 0000:05:00.0: reg 0x10: [mem 0x96300000-0x9637ffff] Jan 29 11:35:12.462650 kernel: pci 0000:05:00.0: reg 0x18: [io 0x4000-0x401f] Jan 29 11:35:12.462697 kernel: pci 0000:05:00.0: reg 0x1c: [mem 0x96380000-0x96383fff] Jan 29 11:35:12.462745 kernel: pci 0000:05:00.0: PME# supported from D0 D3hot D3cold Jan 29 11:35:12.462793 kernel: pci 0000:00:1b.5: PCI bridge to [bus 05] Jan 29 11:35:12.462841 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff] Jan 29 11:35:12.462888 kernel: pci 0000:00:1b.5: bridge window [mem 0x96300000-0x963fffff] Jan 29 11:35:12.462938 kernel: pci 0000:00:1c.0: PCI bridge to [bus 06] Jan 29 11:35:12.462995 kernel: pci 0000:07:00.0: [1a03:1150] type 01 class 0x060400 Jan 29 11:35:12.463053 kernel: pci 0000:07:00.0: enabling Extended Tags Jan 29 11:35:12.463103 kernel: pci 0000:07:00.0: supports D1 D2 Jan 29 11:35:12.463150 kernel: pci 0000:07:00.0: PME# supported from D0 D1 D2 D3hot D3cold Jan 29 11:35:12.463198 kernel: pci 
0000:00:1c.1: PCI bridge to [bus 07-08] Jan 29 11:35:12.463245 kernel: pci 0000:00:1c.1: bridge window [io 0x3000-0x3fff] Jan 29 11:35:12.463295 kernel: pci 0000:00:1c.1: bridge window [mem 0x95000000-0x960fffff] Jan 29 11:35:12.463384 kernel: pci_bus 0000:08: extended config space not accessible Jan 29 11:35:12.463439 kernel: pci 0000:08:00.0: [1a03:2000] type 00 class 0x030000 Jan 29 11:35:12.463491 kernel: pci 0000:08:00.0: reg 0x10: [mem 0x95000000-0x95ffffff] Jan 29 11:35:12.463540 kernel: pci 0000:08:00.0: reg 0x14: [mem 0x96000000-0x9601ffff] Jan 29 11:35:12.463591 kernel: pci 0000:08:00.0: reg 0x18: [io 0x3000-0x307f] Jan 29 11:35:12.463639 kernel: pci 0000:08:00.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jan 29 11:35:12.463690 kernel: pci 0000:08:00.0: supports D1 D2 Jan 29 11:35:12.463745 kernel: pci 0000:08:00.0: PME# supported from D0 D1 D2 D3hot D3cold Jan 29 11:35:12.463793 kernel: pci 0000:07:00.0: PCI bridge to [bus 08] Jan 29 11:35:12.463843 kernel: pci 0000:07:00.0: bridge window [io 0x3000-0x3fff] Jan 29 11:35:12.463891 kernel: pci 0000:07:00.0: bridge window [mem 0x95000000-0x960fffff] Jan 29 11:35:12.463899 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 0 Jan 29 11:35:12.463905 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 1 Jan 29 11:35:12.463911 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 0 Jan 29 11:35:12.463917 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 0 Jan 29 11:35:12.463924 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 0 Jan 29 11:35:12.463930 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 0 Jan 29 11:35:12.463936 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 0 Jan 29 11:35:12.463942 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 0 Jan 29 11:35:12.463948 kernel: iommu: Default domain type: Translated Jan 29 11:35:12.463954 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 29 11:35:12.463960 kernel: PCI: Using ACPI for IRQ routing Jan 29 11:35:12.463965 kernel: PCI: pci_cache_line_size set to 64 bytes Jan 29 11:35:12.463971 kernel: e820: reserve RAM buffer [mem 0x00099800-0x0009ffff] Jan 29 11:35:12.463978 kernel: e820: reserve RAM buffer [mem 0x6eb2a000-0x6fffffff] Jan 29 11:35:12.463983 kernel: e820: reserve RAM buffer [mem 0x77fc5000-0x77ffffff] Jan 29 11:35:12.463989 kernel: e820: reserve RAM buffer [mem 0x79231000-0x7bffffff] Jan 29 11:35:12.463994 kernel: e820: reserve RAM buffer [mem 0x7bf00000-0x7bffffff] Jan 29 11:35:12.464000 kernel: e820: reserve RAM buffer [mem 0x87f800000-0x87fffffff] Jan 29 11:35:12.464051 kernel: pci 0000:08:00.0: vgaarb: setting as boot VGA device Jan 29 11:35:12.464100 kernel: pci 0000:08:00.0: vgaarb: bridge control possible Jan 29 11:35:12.464151 kernel: pci 0000:08:00.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jan 29 11:35:12.464161 kernel: vgaarb: loaded Jan 29 11:35:12.464167 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0 Jan 29 11:35:12.464173 kernel: hpet0: 8 comparators, 64-bit 24.000000 MHz counter Jan 29 11:35:12.464178 kernel: clocksource: Switched to clocksource tsc-early Jan 29 11:35:12.464184 kernel: VFS: Disk quotas dquot_6.6.0 Jan 29 11:35:12.464189 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 29 11:35:12.464195 kernel: pnp: PnP ACPI init Jan 29 11:35:12.464244 kernel: system 00:00: [mem 0x40000000-0x403fffff] has been reserved Jan 29 11:35:12.464292 kernel: pnp 00:02: [dma 0 disabled] Jan 29 11:35:12.464378 
kernel: pnp 00:03: [dma 0 disabled] Jan 29 11:35:12.464423 kernel: system 00:04: [io 0x0680-0x069f] has been reserved Jan 29 11:35:12.464467 kernel: system 00:04: [io 0x164e-0x164f] has been reserved Jan 29 11:35:12.464515 kernel: system 00:05: [io 0x1854-0x1857] has been reserved Jan 29 11:35:12.464562 kernel: system 00:06: [mem 0xfed10000-0xfed17fff] has been reserved Jan 29 11:35:12.464605 kernel: system 00:06: [mem 0xfed18000-0xfed18fff] has been reserved Jan 29 11:35:12.464651 kernel: system 00:06: [mem 0xfed19000-0xfed19fff] has been reserved Jan 29 11:35:12.464695 kernel: system 00:06: [mem 0xe0000000-0xefffffff] has been reserved Jan 29 11:35:12.464739 kernel: system 00:06: [mem 0xfed20000-0xfed3ffff] has been reserved Jan 29 11:35:12.464783 kernel: system 00:06: [mem 0xfed90000-0xfed93fff] could not be reserved Jan 29 11:35:12.464828 kernel: system 00:06: [mem 0xfed45000-0xfed8ffff] has been reserved Jan 29 11:35:12.464871 kernel: system 00:06: [mem 0xfee00000-0xfeefffff] could not be reserved Jan 29 11:35:12.464916 kernel: system 00:07: [io 0x1800-0x18fe] could not be reserved Jan 29 11:35:12.464961 kernel: system 00:07: [mem 0xfd000000-0xfd69ffff] has been reserved Jan 29 11:35:12.465003 kernel: system 00:07: [mem 0xfd6c0000-0xfd6cffff] has been reserved Jan 29 11:35:12.465046 kernel: system 00:07: [mem 0xfd6f0000-0xfdffffff] has been reserved Jan 29 11:35:12.465088 kernel: system 00:07: [mem 0xfe000000-0xfe01ffff] could not be reserved Jan 29 11:35:12.465130 kernel: system 00:07: [mem 0xfe200000-0xfe7fffff] has been reserved Jan 29 11:35:12.465172 kernel: system 00:07: [mem 0xff000000-0xffffffff] has been reserved Jan 29 11:35:12.465219 kernel: system 00:08: [io 0x2000-0x20fe] has been reserved Jan 29 11:35:12.465229 kernel: pnp: PnP ACPI: found 10 devices Jan 29 11:35:12.465235 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 29 11:35:12.465241 kernel: NET: Registered PF_INET protocol family Jan 29 11:35:12.465247 kernel: IP idents hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 29 11:35:12.465253 kernel: tcp_listen_portaddr_hash hash table entries: 16384 (order: 6, 262144 bytes, linear) Jan 29 11:35:12.465259 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 29 11:35:12.465264 kernel: TCP established hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 29 11:35:12.465270 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Jan 29 11:35:12.465277 kernel: TCP: Hash tables configured (established 262144 bind 65536) Jan 29 11:35:12.465283 kernel: UDP hash table entries: 16384 (order: 7, 524288 bytes, linear) Jan 29 11:35:12.465289 kernel: UDP-Lite hash table entries: 16384 (order: 7, 524288 bytes, linear) Jan 29 11:35:12.465297 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 29 11:35:12.465303 kernel: NET: Registered PF_XDP protocol family Jan 29 11:35:12.465386 kernel: pci 0000:00:15.0: BAR 0: assigned [mem 0x7f800000-0x7f800fff 64bit] Jan 29 11:35:12.465434 kernel: pci 0000:00:15.1: BAR 0: assigned [mem 0x7f801000-0x7f801fff 64bit] Jan 29 11:35:12.465481 kernel: pci 0000:00:1e.0: BAR 0: assigned [mem 0x7f802000-0x7f802fff 64bit] Jan 29 11:35:12.465528 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Jan 29 11:35:12.465579 kernel: pci 0000:02:00.0: BAR 7: no space for [mem size 0x00800000 64bit pref] Jan 29 11:35:12.465628 kernel: pci 0000:02:00.0: BAR 7: failed to assign [mem size 0x00800000 64bit pref] Jan 29 11:35:12.465678 
kernel: pci 0000:02:00.1: BAR 7: no space for [mem size 0x00800000 64bit pref] Jan 29 11:35:12.465725 kernel: pci 0000:02:00.1: BAR 7: failed to assign [mem size 0x00800000 64bit pref] Jan 29 11:35:12.465776 kernel: pci 0000:00:01.1: PCI bridge to [bus 02] Jan 29 11:35:12.465822 kernel: pci 0000:00:01.1: bridge window [mem 0x96100000-0x962fffff] Jan 29 11:35:12.465871 kernel: pci 0000:00:01.1: bridge window [mem 0x90000000-0x93ffffff 64bit pref] Jan 29 11:35:12.465919 kernel: pci 0000:00:1b.0: PCI bridge to [bus 03] Jan 29 11:35:12.465966 kernel: pci 0000:00:1b.4: PCI bridge to [bus 04] Jan 29 11:35:12.466012 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff] Jan 29 11:35:12.466059 kernel: pci 0000:00:1b.4: bridge window [mem 0x96400000-0x964fffff] Jan 29 11:35:12.466105 kernel: pci 0000:00:1b.5: PCI bridge to [bus 05] Jan 29 11:35:12.466155 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff] Jan 29 11:35:12.466201 kernel: pci 0000:00:1b.5: bridge window [mem 0x96300000-0x963fffff] Jan 29 11:35:12.466248 kernel: pci 0000:00:1c.0: PCI bridge to [bus 06] Jan 29 11:35:12.466298 kernel: pci 0000:07:00.0: PCI bridge to [bus 08] Jan 29 11:35:12.466381 kernel: pci 0000:07:00.0: bridge window [io 0x3000-0x3fff] Jan 29 11:35:12.466429 kernel: pci 0000:07:00.0: bridge window [mem 0x95000000-0x960fffff] Jan 29 11:35:12.466475 kernel: pci 0000:00:1c.1: PCI bridge to [bus 07-08] Jan 29 11:35:12.466523 kernel: pci 0000:00:1c.1: bridge window [io 0x3000-0x3fff] Jan 29 11:35:12.466569 kernel: pci 0000:00:1c.1: bridge window [mem 0x95000000-0x960fffff] Jan 29 11:35:12.466615 kernel: pci_bus 0000:00: Some PCI device resources are unassigned, try booting with pci=realloc Jan 29 11:35:12.466657 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jan 29 11:35:12.466699 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jan 29 11:35:12.466740 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jan 29 11:35:12.466782 kernel: pci_bus 0000:00: resource 7 [mem 0x7f800000-0xdfffffff window] Jan 29 11:35:12.466823 kernel: pci_bus 0000:00: resource 8 [mem 0xfc800000-0xfe7fffff window] Jan 29 11:35:12.466870 kernel: pci_bus 0000:02: resource 1 [mem 0x96100000-0x962fffff] Jan 29 11:35:12.466914 kernel: pci_bus 0000:02: resource 2 [mem 0x90000000-0x93ffffff 64bit pref] Jan 29 11:35:12.466966 kernel: pci_bus 0000:04: resource 0 [io 0x5000-0x5fff] Jan 29 11:35:12.467009 kernel: pci_bus 0000:04: resource 1 [mem 0x96400000-0x964fffff] Jan 29 11:35:12.467058 kernel: pci_bus 0000:05: resource 0 [io 0x4000-0x4fff] Jan 29 11:35:12.467101 kernel: pci_bus 0000:05: resource 1 [mem 0x96300000-0x963fffff] Jan 29 11:35:12.467148 kernel: pci_bus 0000:07: resource 0 [io 0x3000-0x3fff] Jan 29 11:35:12.467192 kernel: pci_bus 0000:07: resource 1 [mem 0x95000000-0x960fffff] Jan 29 11:35:12.467240 kernel: pci_bus 0000:08: resource 0 [io 0x3000-0x3fff] Jan 29 11:35:12.467286 kernel: pci_bus 0000:08: resource 1 [mem 0x95000000-0x960fffff] Jan 29 11:35:12.467296 kernel: PCI: CLS 64 bytes, default 64 Jan 29 11:35:12.467302 kernel: DMAR: No ATSR found Jan 29 11:35:12.467308 kernel: DMAR: No SATC found Jan 29 11:35:12.467335 kernel: DMAR: IOMMU feature fl1gp_support inconsistent Jan 29 11:35:12.467341 kernel: DMAR: IOMMU feature pgsel_inv inconsistent Jan 29 11:35:12.467347 kernel: DMAR: IOMMU feature nwfs inconsistent Jan 29 11:35:12.467355 kernel: DMAR: IOMMU feature pasid inconsistent Jan 29 11:35:12.467361 kernel: DMAR: IOMMU feature eafs inconsistent Jan 29 11:35:12.467380 kernel: DMAR: 
IOMMU feature prs inconsistent Jan 29 11:35:12.467386 kernel: DMAR: IOMMU feature nest inconsistent Jan 29 11:35:12.467391 kernel: DMAR: IOMMU feature mts inconsistent Jan 29 11:35:12.467397 kernel: DMAR: IOMMU feature sc_support inconsistent Jan 29 11:35:12.467403 kernel: DMAR: IOMMU feature dev_iotlb_support inconsistent Jan 29 11:35:12.467408 kernel: DMAR: dmar0: Using Queued invalidation Jan 29 11:35:12.467414 kernel: DMAR: dmar1: Using Queued invalidation Jan 29 11:35:12.467461 kernel: pci 0000:00:02.0: Adding to iommu group 0 Jan 29 11:35:12.467513 kernel: pci 0000:00:00.0: Adding to iommu group 1 Jan 29 11:35:12.467560 kernel: pci 0000:00:01.0: Adding to iommu group 2 Jan 29 11:35:12.467608 kernel: pci 0000:00:01.1: Adding to iommu group 2 Jan 29 11:35:12.467654 kernel: pci 0000:00:08.0: Adding to iommu group 3 Jan 29 11:35:12.467701 kernel: pci 0000:00:12.0: Adding to iommu group 4 Jan 29 11:35:12.467747 kernel: pci 0000:00:14.0: Adding to iommu group 5 Jan 29 11:35:12.467794 kernel: pci 0000:00:14.2: Adding to iommu group 5 Jan 29 11:35:12.467840 kernel: pci 0000:00:15.0: Adding to iommu group 6 Jan 29 11:35:12.467889 kernel: pci 0000:00:15.1: Adding to iommu group 6 Jan 29 11:35:12.467934 kernel: pci 0000:00:16.0: Adding to iommu group 7 Jan 29 11:35:12.467982 kernel: pci 0000:00:16.1: Adding to iommu group 7 Jan 29 11:35:12.468029 kernel: pci 0000:00:16.4: Adding to iommu group 7 Jan 29 11:35:12.468075 kernel: pci 0000:00:17.0: Adding to iommu group 8 Jan 29 11:35:12.468122 kernel: pci 0000:00:1b.0: Adding to iommu group 9 Jan 29 11:35:12.468169 kernel: pci 0000:00:1b.4: Adding to iommu group 10 Jan 29 11:35:12.468216 kernel: pci 0000:00:1b.5: Adding to iommu group 11 Jan 29 11:35:12.468264 kernel: pci 0000:00:1c.0: Adding to iommu group 12 Jan 29 11:35:12.468337 kernel: pci 0000:00:1c.1: Adding to iommu group 13 Jan 29 11:35:12.468397 kernel: pci 0000:00:1e.0: Adding to iommu group 14 Jan 29 11:35:12.468444 kernel: pci 0000:00:1f.0: Adding to iommu group 15 Jan 29 11:35:12.468490 kernel: pci 0000:00:1f.4: Adding to iommu group 15 Jan 29 11:35:12.468537 kernel: pci 0000:00:1f.5: Adding to iommu group 15 Jan 29 11:35:12.468585 kernel: pci 0000:02:00.0: Adding to iommu group 2 Jan 29 11:35:12.468633 kernel: pci 0000:02:00.1: Adding to iommu group 2 Jan 29 11:35:12.468685 kernel: pci 0000:04:00.0: Adding to iommu group 16 Jan 29 11:35:12.468733 kernel: pci 0000:05:00.0: Adding to iommu group 17 Jan 29 11:35:12.468781 kernel: pci 0000:07:00.0: Adding to iommu group 18 Jan 29 11:35:12.468830 kernel: pci 0000:08:00.0: Adding to iommu group 18 Jan 29 11:35:12.468838 kernel: DMAR: Intel(R) Virtualization Technology for Directed I/O Jan 29 11:35:12.468844 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Jan 29 11:35:12.468850 kernel: software IO TLB: mapped [mem 0x0000000073fc5000-0x0000000077fc5000] (64MB) Jan 29 11:35:12.468856 kernel: RAPL PMU: API unit is 2^-32 Joules, 4 fixed counters, 655360 ms ovfl timer Jan 29 11:35:12.468864 kernel: RAPL PMU: hw unit of domain pp0-core 2^-14 Joules Jan 29 11:35:12.468870 kernel: RAPL PMU: hw unit of domain package 2^-14 Joules Jan 29 11:35:12.468875 kernel: RAPL PMU: hw unit of domain dram 2^-14 Joules Jan 29 11:35:12.468881 kernel: RAPL PMU: hw unit of domain pp1-gpu 2^-14 Joules Jan 29 11:35:12.468931 kernel: platform rtc_cmos: registered platform RTC device (no PNP device found) Jan 29 11:35:12.468940 kernel: Initialise system trusted keyrings Jan 29 11:35:12.468946 kernel: workingset: timestamp_bits=39 max_order=23 
bucket_order=0 Jan 29 11:35:12.468951 kernel: Key type asymmetric registered Jan 29 11:35:12.468957 kernel: Asymmetric key parser 'x509' registered Jan 29 11:35:12.468964 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jan 29 11:35:12.468970 kernel: io scheduler mq-deadline registered Jan 29 11:35:12.468976 kernel: io scheduler kyber registered Jan 29 11:35:12.468981 kernel: io scheduler bfq registered Jan 29 11:35:12.469029 kernel: pcieport 0000:00:01.0: PME: Signaling with IRQ 122 Jan 29 11:35:12.469077 kernel: pcieport 0000:00:01.1: PME: Signaling with IRQ 123 Jan 29 11:35:12.469124 kernel: pcieport 0000:00:1b.0: PME: Signaling with IRQ 124 Jan 29 11:35:12.469171 kernel: pcieport 0000:00:1b.4: PME: Signaling with IRQ 125 Jan 29 11:35:12.469221 kernel: pcieport 0000:00:1b.5: PME: Signaling with IRQ 126 Jan 29 11:35:12.469268 kernel: pcieport 0000:00:1c.0: PME: Signaling with IRQ 127 Jan 29 11:35:12.469399 kernel: pcieport 0000:00:1c.1: PME: Signaling with IRQ 128 Jan 29 11:35:12.469452 kernel: thermal LNXTHERM:00: registered as thermal_zone0 Jan 29 11:35:12.469461 kernel: ACPI: thermal: Thermal Zone [TZ00] (28 C) Jan 29 11:35:12.469467 kernel: ERST: Error Record Serialization Table (ERST) support is initialized. Jan 29 11:35:12.469473 kernel: pstore: Using crash dump compression: deflate Jan 29 11:35:12.469481 kernel: pstore: Registered erst as persistent store backend Jan 29 11:35:12.469487 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 29 11:35:12.469492 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 29 11:35:12.469498 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 29 11:35:12.469504 kernel: 00:03: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Jan 29 11:35:12.469552 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x1B, rev-id 16) Jan 29 11:35:12.469560 kernel: i8042: PNP: No PS/2 controller found. Jan 29 11:35:12.469603 kernel: rtc_cmos rtc_cmos: RTC can wake from S4 Jan 29 11:35:12.469649 kernel: rtc_cmos rtc_cmos: registered as rtc0 Jan 29 11:35:12.469692 kernel: rtc_cmos rtc_cmos: setting system clock to 2025-01-29T11:35:11 UTC (1738150511) Jan 29 11:35:12.469735 kernel: rtc_cmos rtc_cmos: alarms up to one month, y3k, 114 bytes nvram Jan 29 11:35:12.469743 kernel: intel_pstate: Intel P-state driver initializing Jan 29 11:35:12.469749 kernel: intel_pstate: Disabling energy efficiency optimization Jan 29 11:35:12.469755 kernel: intel_pstate: HWP enabled Jan 29 11:35:12.469761 kernel: NET: Registered PF_INET6 protocol family Jan 29 11:35:12.469766 kernel: Segment Routing with IPv6 Jan 29 11:35:12.469772 kernel: In-situ OAM (IOAM) with IPv6 Jan 29 11:35:12.469780 kernel: NET: Registered PF_PACKET protocol family Jan 29 11:35:12.469786 kernel: Key type dns_resolver registered Jan 29 11:35:12.469791 kernel: microcode: Microcode Update Driver: v2.2. 
Jan 29 11:35:12.469797 kernel: IPI shorthand broadcast: enabled Jan 29 11:35:12.469803 kernel: sched_clock: Marking stable (2725000694, 1457395427)->(4688428546, -506032425) Jan 29 11:35:12.469809 kernel: registered taskstats version 1 Jan 29 11:35:12.469815 kernel: Loading compiled-in X.509 certificates Jan 29 11:35:12.469820 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: de92a621108c58f5771c86c5c3ccb1aa0728ed55' Jan 29 11:35:12.469826 kernel: Key type .fscrypt registered Jan 29 11:35:12.469833 kernel: Key type fscrypt-provisioning registered Jan 29 11:35:12.469839 kernel: ima: Allocated hash algorithm: sha1 Jan 29 11:35:12.469844 kernel: ima: No architecture policies found Jan 29 11:35:12.469850 kernel: clk: Disabling unused clocks Jan 29 11:35:12.469856 kernel: Freeing unused kernel image (initmem) memory: 42972K Jan 29 11:35:12.469861 kernel: Write protecting the kernel read-only data: 36864k Jan 29 11:35:12.469867 kernel: Freeing unused kernel image (rodata/data gap) memory: 1840K Jan 29 11:35:12.469873 kernel: Run /init as init process Jan 29 11:35:12.469878 kernel: with arguments: Jan 29 11:35:12.469885 kernel: /init Jan 29 11:35:12.469891 kernel: with environment: Jan 29 11:35:12.469897 kernel: HOME=/ Jan 29 11:35:12.469903 kernel: TERM=linux Jan 29 11:35:12.469909 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jan 29 11:35:12.469915 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 29 11:35:12.469922 systemd[1]: Detected architecture x86-64. Jan 29 11:35:12.469929 systemd[1]: Running in initrd. Jan 29 11:35:12.469935 systemd[1]: No hostname configured, using default hostname. Jan 29 11:35:12.469941 systemd[1]: Hostname set to . Jan 29 11:35:12.469947 systemd[1]: Initializing machine ID from random generator. Jan 29 11:35:12.469953 systemd[1]: Queued start job for default target initrd.target. Jan 29 11:35:12.469959 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 29 11:35:12.469965 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 29 11:35:12.469971 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 29 11:35:12.469979 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 29 11:35:12.469985 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 29 11:35:12.469991 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 29 11:35:12.469997 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 29 11:35:12.470004 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 29 11:35:12.470010 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 29 11:35:12.470015 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 29 11:35:12.470022 systemd[1]: Reached target paths.target - Path Units. Jan 29 11:35:12.470028 systemd[1]: Reached target slices.target - Slice Units. 
Jan 29 11:35:12.470034 systemd[1]: Reached target swap.target - Swaps. Jan 29 11:35:12.470040 systemd[1]: Reached target timers.target - Timer Units. Jan 29 11:35:12.470046 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 29 11:35:12.470052 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 29 11:35:12.470058 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 29 11:35:12.470064 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 29 11:35:12.470070 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 29 11:35:12.470077 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 29 11:35:12.470083 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 29 11:35:12.470089 systemd[1]: Reached target sockets.target - Socket Units. Jan 29 11:35:12.470095 kernel: tsc: Refined TSC clocksource calibration: 3407.985 MHz Jan 29 11:35:12.470101 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fc5a980c, max_idle_ns: 440795300013 ns Jan 29 11:35:12.470107 kernel: clocksource: Switched to clocksource tsc Jan 29 11:35:12.470113 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 29 11:35:12.470119 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 29 11:35:12.470126 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 29 11:35:12.470132 systemd[1]: Starting systemd-fsck-usr.service... Jan 29 11:35:12.470138 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 29 11:35:12.470154 systemd-journald[269]: Collecting audit messages is disabled. Jan 29 11:35:12.470170 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 29 11:35:12.470177 systemd-journald[269]: Journal started Jan 29 11:35:12.470193 systemd-journald[269]: Runtime Journal (/run/log/journal/bffac9580d864c6ba7c3736a55de36ef) is 8.0M, max 639.1M, 631.1M free. Jan 29 11:35:12.472711 systemd-modules-load[271]: Inserted module 'overlay' Jan 29 11:35:12.490432 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 11:35:12.512342 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 29 11:35:12.517316 kernel: Bridge firewalling registered Jan 29 11:35:12.517352 systemd[1]: Started systemd-journald.service - Journal Service. Jan 29 11:35:12.524541 systemd-modules-load[271]: Inserted module 'br_netfilter' Jan 29 11:35:12.542821 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 29 11:35:12.543095 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 29 11:35:12.543197 systemd[1]: Finished systemd-fsck-usr.service. Jan 29 11:35:12.543298 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 29 11:35:12.551490 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 29 11:35:12.579664 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 29 11:35:12.633845 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 29 11:35:12.645464 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 11:35:12.663842 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Jan 29 11:35:12.683817 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 29 11:35:12.713984 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 29 11:35:12.756668 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 29 11:35:12.759765 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 29 11:35:12.761546 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 29 11:35:12.769842 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 29 11:35:12.774528 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 11:35:12.775084 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 29 11:35:12.781250 systemd-resolved[298]: Positive Trust Anchors: Jan 29 11:35:12.781254 systemd-resolved[298]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 29 11:35:12.781277 systemd-resolved[298]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 29 11:35:12.782885 systemd-resolved[298]: Defaulting to hostname 'linux'. Jan 29 11:35:12.796728 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 29 11:35:12.813622 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 29 11:35:12.940100 dracut-cmdline[312]: dracut-dracut-053 Jan 29 11:35:12.947515 dracut-cmdline[312]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=519b8fded83181f8e61f734d5291f916d7548bfba9487c78bcb50d002d81719d Jan 29 11:35:13.119329 kernel: SCSI subsystem initialized Jan 29 11:35:13.130298 kernel: Loading iSCSI transport class v2.0-870. Jan 29 11:35:13.142382 kernel: iscsi: registered transport (tcp) Jan 29 11:35:13.162458 kernel: iscsi: registered transport (qla4xxx) Jan 29 11:35:13.162475 kernel: QLogic iSCSI HBA Driver Jan 29 11:35:13.185402 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 29 11:35:13.196561 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 29 11:35:13.294890 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Jan 29 11:35:13.294919 kernel: device-mapper: uevent: version 1.0.3 Jan 29 11:35:13.303651 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 29 11:35:13.339326 kernel: raid6: avx2x4 gen() 53758 MB/s Jan 29 11:35:13.360326 kernel: raid6: avx2x2 gen() 54886 MB/s Jan 29 11:35:13.386424 kernel: raid6: avx2x1 gen() 45840 MB/s Jan 29 11:35:13.386444 kernel: raid6: using algorithm avx2x2 gen() 54886 MB/s Jan 29 11:35:13.413510 kernel: raid6: .... xor() 30432 MB/s, rmw enabled Jan 29 11:35:13.413529 kernel: raid6: using avx2x2 recovery algorithm Jan 29 11:35:13.434299 kernel: xor: automatically using best checksumming function avx Jan 29 11:35:13.539343 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 29 11:35:13.545079 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 29 11:35:13.566717 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 29 11:35:13.575875 systemd-udevd[498]: Using default interface naming scheme 'v255'. Jan 29 11:35:13.587629 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 29 11:35:13.612486 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 29 11:35:13.651983 dracut-pre-trigger[510]: rd.md=0: removing MD RAID activation Jan 29 11:35:13.669804 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 29 11:35:13.679598 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 29 11:35:13.782707 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 29 11:35:13.806626 kernel: pps_core: LinuxPPS API ver. 1 registered Jan 29 11:35:13.806678 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Jan 29 11:35:13.806692 kernel: cryptd: max_cpu_qlen set to 1000 Jan 29 11:35:13.822298 kernel: PTP clock support registered Jan 29 11:35:13.822346 kernel: ACPI: bus type USB registered Jan 29 11:35:13.827298 kernel: usbcore: registered new interface driver usbfs Jan 29 11:35:13.827316 kernel: usbcore: registered new interface driver hub Jan 29 11:35:13.827324 kernel: usbcore: registered new device driver usb Jan 29 11:35:13.846302 kernel: libata version 3.00 loaded. Jan 29 11:35:13.862634 kernel: AVX2 version of gcm_enc/dec engaged. 
Jan 29 11:35:13.862733 kernel: AES CTR mode by8 optimization enabled Jan 29 11:35:13.866301 kernel: ahci 0000:00:17.0: version 3.0 Jan 29 11:35:14.073864 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller Jan 29 11:35:14.073944 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 1 Jan 29 11:35:14.074007 kernel: ahci 0000:00:17.0: AHCI 0001.0301 32 slots 8 ports 6 Gbps 0xff impl SATA mode Jan 29 11:35:14.074068 kernel: xhci_hcd 0000:00:14.0: hcc params 0x200077c1 hci version 0x110 quirks 0x0000000000009810 Jan 29 11:35:14.074136 kernel: ahci 0000:00:17.0: flags: 64bit ncq sntf clo only pio slum part ems deso sadm sds apst Jan 29 11:35:14.074214 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller Jan 29 11:35:14.074272 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 2 Jan 29 11:35:14.074339 kernel: xhci_hcd 0000:00:14.0: Host supports USB 3.1 Enhanced SuperSpeed Jan 29 11:35:14.074397 kernel: scsi host0: ahci Jan 29 11:35:14.074459 kernel: hub 1-0:1.0: USB hub found Jan 29 11:35:14.074528 kernel: scsi host1: ahci Jan 29 11:35:14.074588 kernel: hub 1-0:1.0: 16 ports detected Jan 29 11:35:14.074652 kernel: scsi host2: ahci Jan 29 11:35:14.074709 kernel: hub 2-0:1.0: USB hub found Jan 29 11:35:14.074776 kernel: scsi host3: ahci Jan 29 11:35:14.074833 kernel: hub 2-0:1.0: 10 ports detected Jan 29 11:35:14.074895 kernel: scsi host4: ahci Jan 29 11:35:14.074953 kernel: igb: Intel(R) Gigabit Ethernet Network Driver Jan 29 11:35:14.074962 kernel: scsi host5: ahci Jan 29 11:35:14.075021 kernel: igb: Copyright (c) 2007-2014 Intel Corporation. Jan 29 11:35:14.075029 kernel: scsi host6: ahci Jan 29 11:35:14.075083 kernel: scsi host7: ahci Jan 29 11:35:14.075137 kernel: ata1: SATA max UDMA/133 abar m2048@0x96516000 port 0x96516100 irq 129 Jan 29 11:35:14.075145 kernel: ata2: SATA max UDMA/133 abar m2048@0x96516000 port 0x96516180 irq 129 Jan 29 11:35:14.075152 kernel: pps pps0: new PPS source ptp0 Jan 29 11:35:14.075229 kernel: ata3: SATA max UDMA/133 abar m2048@0x96516000 port 0x96516200 irq 129 Jan 29 11:35:14.075238 kernel: ata4: SATA max UDMA/133 abar m2048@0x96516000 port 0x96516280 irq 129 Jan 29 11:35:14.075245 kernel: igb 0000:04:00.0: added PHC on eth0 Jan 29 11:35:14.093740 kernel: ata5: SATA max UDMA/133 abar m2048@0x96516000 port 0x96516300 irq 129 Jan 29 11:35:14.093750 kernel: igb 0000:04:00.0: Intel(R) Gigabit Ethernet Network Connection Jan 29 11:35:14.093833 kernel: ata6: SATA max UDMA/133 abar m2048@0x96516000 port 0x96516380 irq 129 Jan 29 11:35:14.093847 kernel: igb 0000:04:00.0: eth0: (PCIe:2.5Gb/s:Width x1) 3c:ec:ef:73:1e:1e Jan 29 11:35:14.093948 kernel: ata7: SATA max UDMA/133 abar m2048@0x96516000 port 0x96516400 irq 129 Jan 29 11:35:14.093964 kernel: igb 0000:04:00.0: eth0: PBA No: 010000-000 Jan 29 11:35:14.094061 kernel: ata8: SATA max UDMA/133 abar m2048@0x96516000 port 0x96516480 irq 129 Jan 29 11:35:14.094070 kernel: igb 0000:04:00.0: Using MSI-X interrupts. 4 rx queue(s), 4 tx queue(s) Jan 29 11:35:13.867474 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... 
Jan 29 11:35:14.142626 kernel: mlx5_core 0000:02:00.0: firmware version: 14.29.2002 Jan 29 11:35:14.676444 kernel: mlx5_core 0000:02:00.0: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link) Jan 29 11:35:14.676526 kernel: pps pps1: new PPS source ptp1 Jan 29 11:35:14.676594 kernel: igb 0000:05:00.0: added PHC on eth1 Jan 29 11:35:14.676660 kernel: igb 0000:05:00.0: Intel(R) Gigabit Ethernet Network Connection Jan 29 11:35:14.676722 kernel: igb 0000:05:00.0: eth1: (PCIe:2.5Gb/s:Width x1) 3c:ec:ef:73:1e:1f Jan 29 11:35:14.676784 kernel: igb 0000:05:00.0: eth1: PBA No: 010000-000 Jan 29 11:35:14.676843 kernel: igb 0000:05:00.0: Using MSI-X interrupts. 4 rx queue(s), 4 tx queue(s) Jan 29 11:35:14.676908 kernel: usb 1-14: new high-speed USB device number 2 using xhci_hcd Jan 29 11:35:14.760856 kernel: hub 1-14:1.0: USB hub found Jan 29 11:35:14.760945 kernel: hub 1-14:1.0: 4 ports detected Jan 29 11:35:14.761015 kernel: mlx5_core 0000:02:00.0: E-Switch: Total vports 10, per vport: max uc(1024) max mc(16384) Jan 29 11:35:14.761084 kernel: mlx5_core 0000:02:00.0: Port module event: module 0, Cable plugged Jan 29 11:35:14.761147 kernel: ata7: SATA link down (SStatus 0 SControl 300) Jan 29 11:35:14.761156 kernel: ata3: SATA link down (SStatus 0 SControl 300) Jan 29 11:35:14.761163 kernel: ata2: SATA link up 6.0 Gbps (SStatus 133 SControl 300) Jan 29 11:35:14.761171 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jan 29 11:35:14.761180 kernel: ata8: SATA link down (SStatus 0 SControl 300) Jan 29 11:35:14.761187 kernel: ata1: SATA link up 6.0 Gbps (SStatus 133 SControl 300) Jan 29 11:35:14.761195 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jan 29 11:35:14.761202 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jan 29 11:35:14.761209 kernel: ata2.00: ATA-11: Micron_5300_MTFDDAK480TDT, D3MU001, max UDMA/133 Jan 29 11:35:14.761216 kernel: ata1.00: ATA-11: Micron_5300_MTFDDAK480TDT, D3MU001, max UDMA/133 Jan 29 11:35:14.761223 kernel: ata2.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA Jan 29 11:35:14.761230 kernel: ata1.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA Jan 29 11:35:14.761238 kernel: ata2.00: Features: NCQ-prio Jan 29 11:35:14.761246 kernel: ata1.00: Features: NCQ-prio Jan 29 11:35:14.761254 kernel: ata2.00: configured for UDMA/133 Jan 29 11:35:14.761261 kernel: ata1.00: configured for UDMA/133 Jan 29 11:35:14.761268 kernel: scsi 0:0:0:0: Direct-Access ATA Micron_5300_MTFD U001 PQ: 0 ANSI: 5 Jan 29 11:35:14.761382 kernel: scsi 1:0:0:0: Direct-Access ATA Micron_5300_MTFD U001 PQ: 0 ANSI: 5 Jan 29 11:35:14.761452 kernel: igb 0000:05:00.0 eno2: renamed from eth1 Jan 29 11:35:14.761519 kernel: ata1.00: Enabling discard_zeroes_data Jan 29 11:35:14.761527 kernel: igb 0000:04:00.0 eno1: renamed from eth0 Jan 29 11:35:14.761596 kernel: ata2.00: Enabling discard_zeroes_data Jan 29 11:35:14.761604 kernel: sd 0:0:0:0: [sdb] 937703088 512-byte logical blocks: (480 GB/447 GiB) Jan 29 11:35:14.761665 kernel: sd 0:0:0:0: [sdb] 4096-byte physical blocks Jan 29 11:35:14.761724 kernel: sd 0:0:0:0: [sdb] Write Protect is off Jan 29 11:35:14.761782 kernel: sd 1:0:0:0: [sda] 937703088 512-byte logical blocks: (480 GB/447 GiB) Jan 29 11:35:14.761840 kernel: sd 1:0:0:0: [sda] 4096-byte physical blocks Jan 29 11:35:14.761896 kernel: sd 1:0:0:0: [sda] Write Protect is off Jan 29 11:35:14.761954 kernel: sd 1:0:0:0: [sda] Mode Sense: 00 3a 00 00 Jan 29 11:35:14.762011 kernel: sd 1:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Jan 
29 11:35:14.762068 kernel: sd 1:0:0:0: [sda] Preferred minimum I/O size 4096 bytes Jan 29 11:35:14.762126 kernel: ata2.00: Enabling discard_zeroes_data Jan 29 11:35:14.762135 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 29 11:35:14.762142 kernel: GPT:9289727 != 937703087 Jan 29 11:35:14.762150 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 29 11:35:14.762157 kernel: GPT:9289727 != 937703087 Jan 29 11:35:14.762164 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 29 11:35:14.762173 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 29 11:35:14.762180 kernel: sd 1:0:0:0: [sda] Attached SCSI disk Jan 29 11:35:14.762237 kernel: usb 1-14.1: new low-speed USB device number 3 using xhci_hcd Jan 29 11:35:14.762339 kernel: sd 0:0:0:0: [sdb] Mode Sense: 00 3a 00 00 Jan 29 11:35:14.762401 kernel: sd 0:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Jan 29 11:35:14.762459 kernel: sd 0:0:0:0: [sdb] Preferred minimum I/O size 4096 bytes Jan 29 11:35:14.762518 kernel: mlx5_core 0000:02:00.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Jan 29 11:35:14.762582 kernel: ata1.00: Enabling discard_zeroes_data Jan 29 11:35:14.762591 kernel: mlx5_core 0000:02:00.1: firmware version: 14.29.2002 Jan 29 11:35:15.221096 kernel: sd 0:0:0:0: [sdb] Attached SCSI disk Jan 29 11:35:15.221292 kernel: mlx5_core 0000:02:00.1: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link) Jan 29 11:35:15.221489 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by (udev-worker) (542) Jan 29 11:35:15.221512 kernel: BTRFS: device fsid 5ba3c9ea-61f2-4fe6-a507-2966757f6d44 devid 1 transid 38 /dev/sda3 scanned by (udev-worker) (572) Jan 29 11:35:15.221535 kernel: hid: raw HID events driver (C) Jiri Kosina Jan 29 11:35:15.221542 kernel: usbcore: registered new interface driver usbhid Jan 29 11:35:15.221555 kernel: usbhid: USB HID core driver Jan 29 11:35:15.221562 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.0/0003:0557:2419.0001/input/input0 Jan 29 11:35:15.221569 kernel: ata2.00: Enabling discard_zeroes_data Jan 29 11:35:15.221576 kernel: hid-generic 0003:0557:2419.0001: input,hidraw0: USB HID v1.00 Keyboard [HID 0557:2419] on usb-0000:00:14.0-14.1/input0 Jan 29 11:35:15.221671 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 29 11:35:15.221699 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.1/0003:0557:2419.0002/input/input1 Jan 29 11:35:15.221712 kernel: hid-generic 0003:0557:2419.0002: input,hidraw1: USB HID v1.00 Mouse [HID 0557:2419] on usb-0000:00:14.0-14.1/input1 Jan 29 11:35:15.221788 kernel: mlx5_core 0000:02:00.1: E-Switch: Total vports 10, per vport: max uc(1024) max mc(16384) Jan 29 11:35:15.221854 kernel: mlx5_core 0000:02:00.1: Port module event: module 1, Cable plugged Jan 29 11:35:15.221916 kernel: mlx5_core 0000:02:00.1: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Jan 29 11:35:13.965571 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 29 11:35:15.243741 kernel: mlx5_core 0000:02:00.0 enp2s0f0np0: renamed from eth0 Jan 29 11:35:15.243829 kernel: mlx5_core 0000:02:00.1 enp2s0f1np1: renamed from eth1 Jan 29 11:35:14.090442 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 29 11:35:14.119489 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. 
Jan 29 11:35:14.173454 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 29 11:35:14.183396 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 29 11:35:14.183465 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 11:35:14.201420 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 29 11:35:15.310561 disk-uuid[718]: Primary Header is updated. Jan 29 11:35:15.310561 disk-uuid[718]: Secondary Entries is updated. Jan 29 11:35:15.310561 disk-uuid[718]: Secondary Header is updated. Jan 29 11:35:14.222453 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 29 11:35:14.232376 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 29 11:35:14.232447 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 11:35:14.243394 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 11:35:14.260444 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 11:35:14.270617 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 29 11:35:14.293138 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 11:35:14.313410 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 29 11:35:14.324478 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 11:35:14.729461 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Micron_5300_MTFDDAK480TDT EFI-SYSTEM. Jan 29 11:35:14.760044 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Micron_5300_MTFDDAK480TDT ROOT. Jan 29 11:35:14.788339 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Micron_5300_MTFDDAK480TDT OEM. Jan 29 11:35:14.813476 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Micron_5300_MTFDDAK480TDT USR-A. Jan 29 11:35:14.824374 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Micron_5300_MTFDDAK480TDT USR-A. Jan 29 11:35:14.841440 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 29 11:35:15.875789 kernel: ata2.00: Enabling discard_zeroes_data Jan 29 11:35:15.884121 disk-uuid[719]: The operation has completed successfully. Jan 29 11:35:15.892410 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 29 11:35:15.922178 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 29 11:35:15.922287 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 29 11:35:15.966445 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 29 11:35:15.991424 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Jan 29 11:35:15.991438 sh[748]: Success Jan 29 11:35:16.026577 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 29 11:35:16.043198 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 29 11:35:16.049748 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Jan 29 11:35:16.099050 kernel: BTRFS info (device dm-0): first mount of filesystem 5ba3c9ea-61f2-4fe6-a507-2966757f6d44 Jan 29 11:35:16.099070 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 29 11:35:16.108686 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 29 11:35:16.115715 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 29 11:35:16.121565 kernel: BTRFS info (device dm-0): using free space tree Jan 29 11:35:16.134326 kernel: BTRFS info (device dm-0): enabling ssd optimizations Jan 29 11:35:16.137170 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 29 11:35:16.137609 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 29 11:35:16.210427 kernel: BTRFS info (device sda6): first mount of filesystem 46e45d4d-e07d-4ebc-bafb-221646b0ed58 Jan 29 11:35:16.210451 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 29 11:35:16.210464 kernel: BTRFS info (device sda6): using free space tree Jan 29 11:35:16.210480 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 29 11:35:16.210493 kernel: BTRFS info (device sda6): auto enabling async discard Jan 29 11:35:16.148648 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 29 11:35:16.231664 kernel: BTRFS info (device sda6): last unmount of filesystem 46e45d4d-e07d-4ebc-bafb-221646b0ed58 Jan 29 11:35:16.152012 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 29 11:35:16.211999 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 29 11:35:16.225122 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 29 11:35:16.248865 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 29 11:35:16.304771 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 29 11:35:16.335470 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 29 11:35:16.347585 systemd-networkd[932]: lo: Link UP Jan 29 11:35:16.347587 systemd-networkd[932]: lo: Gained carrier Jan 29 11:35:16.359479 ignition[824]: Ignition 2.20.0 Jan 29 11:35:16.350233 systemd-networkd[932]: Enumeration completed Jan 29 11:35:16.359484 ignition[824]: Stage: fetch-offline Jan 29 11:35:16.350316 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 29 11:35:16.359502 ignition[824]: no configs at "/usr/lib/ignition/base.d" Jan 29 11:35:16.351093 systemd-networkd[932]: eno1: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 29 11:35:16.359508 ignition[824]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Jan 29 11:35:16.362532 systemd[1]: Reached target network.target - Network. Jan 29 11:35:16.359557 ignition[824]: parsed url from cmdline: "" Jan 29 11:35:16.362791 unknown[824]: fetched base config from "system" Jan 29 11:35:16.359559 ignition[824]: no config URL provided Jan 29 11:35:16.362795 unknown[824]: fetched user config from "system" Jan 29 11:35:16.359562 ignition[824]: reading system config file "/usr/lib/ignition/user.ign" Jan 29 11:35:16.369634 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). 
Jan 29 11:35:16.359587 ignition[824]: parsing config with SHA512: dddb000a6e029f3cd6d647af5241af8837b416a41a17eea5ce856311aad67376e3ff445748ef39f687b58b14c336de1a81e60360b6cb944f6fd7641b22976f45 Jan 29 11:35:16.379520 systemd-networkd[932]: eno2: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 29 11:35:16.363027 ignition[824]: fetch-offline: fetch-offline passed Jan 29 11:35:16.383815 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jan 29 11:35:16.363030 ignition[824]: POST message to Packet Timeline Jan 29 11:35:16.395454 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 29 11:35:16.363033 ignition[824]: POST Status error: resource requires networking Jan 29 11:35:16.407352 systemd-networkd[932]: enp2s0f0np0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 29 11:35:16.591507 kernel: mlx5_core 0000:02:00.0 enp2s0f0np0: Link up Jan 29 11:35:16.363072 ignition[824]: Ignition finished successfully Jan 29 11:35:16.583403 systemd-networkd[932]: enp2s0f1np1: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 29 11:35:16.406210 ignition[947]: Ignition 2.20.0 Jan 29 11:35:16.406215 ignition[947]: Stage: kargs Jan 29 11:35:16.406350 ignition[947]: no configs at "/usr/lib/ignition/base.d" Jan 29 11:35:16.406358 ignition[947]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Jan 29 11:35:16.406987 ignition[947]: kargs: kargs passed Jan 29 11:35:16.406991 ignition[947]: POST message to Packet Timeline Jan 29 11:35:16.407004 ignition[947]: GET https://metadata.packet.net/metadata: attempt #1 Jan 29 11:35:16.407410 ignition[947]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:55000->[::1]:53: read: connection refused Jan 29 11:35:16.607875 ignition[947]: GET https://metadata.packet.net/metadata: attempt #2 Jan 29 11:35:16.608863 ignition[947]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:41901->[::1]:53: read: connection refused Jan 29 11:35:16.809420 kernel: mlx5_core 0000:02:00.1 enp2s0f1np1: Link up Jan 29 11:35:16.810252 systemd-networkd[932]: eno1: Link UP Jan 29 11:35:16.810389 systemd-networkd[932]: eno2: Link UP Jan 29 11:35:16.810504 systemd-networkd[932]: enp2s0f0np0: Link UP Jan 29 11:35:16.810639 systemd-networkd[932]: enp2s0f0np0: Gained carrier Jan 29 11:35:16.824579 systemd-networkd[932]: enp2s0f1np1: Link UP Jan 29 11:35:16.855567 systemd-networkd[932]: enp2s0f0np0: DHCPv4 address 139.178.70.53/31, gateway 139.178.70.52 acquired from 145.40.83.140 Jan 29 11:35:17.009282 ignition[947]: GET https://metadata.packet.net/metadata: attempt #3 Jan 29 11:35:17.010362 ignition[947]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:34455->[::1]:53: read: connection refused Jan 29 11:35:17.631037 systemd-networkd[932]: enp2s0f1np1: Gained carrier Jan 29 11:35:17.810797 ignition[947]: GET https://metadata.packet.net/metadata: attempt #4 Jan 29 11:35:17.811986 ignition[947]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:47254->[::1]:53: read: connection refused Jan 29 11:35:18.462805 systemd-networkd[932]: enp2s0f0np0: Gained IPv6LL Jan 29 11:35:18.782728 systemd-networkd[932]: enp2s0f1np1: Gained IPv6LL Jan 29 11:35:19.413571 
ignition[947]: GET https://metadata.packet.net/metadata: attempt #5 Jan 29 11:35:19.414715 ignition[947]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:37084->[::1]:53: read: connection refused Jan 29 11:35:22.618253 ignition[947]: GET https://metadata.packet.net/metadata: attempt #6 Jan 29 11:35:23.139218 ignition[947]: GET result: OK Jan 29 11:35:23.526059 ignition[947]: Ignition finished successfully Jan 29 11:35:23.531725 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 29 11:35:23.554609 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 29 11:35:23.563015 ignition[965]: Ignition 2.20.0 Jan 29 11:35:23.563019 ignition[965]: Stage: disks Jan 29 11:35:23.563116 ignition[965]: no configs at "/usr/lib/ignition/base.d" Jan 29 11:35:23.563122 ignition[965]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Jan 29 11:35:23.563682 ignition[965]: disks: disks passed Jan 29 11:35:23.563684 ignition[965]: POST message to Packet Timeline Jan 29 11:35:23.563696 ignition[965]: GET https://metadata.packet.net/metadata: attempt #1 Jan 29 11:35:24.031772 ignition[965]: GET result: OK Jan 29 11:35:24.438149 ignition[965]: Ignition finished successfully Jan 29 11:35:24.441262 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 29 11:35:24.458719 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 29 11:35:24.465825 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 29 11:35:24.483769 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 29 11:35:24.504772 systemd[1]: Reached target sysinit.target - System Initialization. Jan 29 11:35:24.531609 systemd[1]: Reached target basic.target - Basic System. Jan 29 11:35:24.563569 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 29 11:35:24.595629 systemd-fsck[984]: ROOT: clean, 14/553520 files, 52654/553472 blocks Jan 29 11:35:24.605811 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 29 11:35:24.620559 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 29 11:35:24.699298 kernel: EXT4-fs (sda9): mounted filesystem 2fbf9359-701e-4995-b3f7-74280bd2b1c9 r/w with ordered data mode. Quota mode: none. Jan 29 11:35:24.699459 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 29 11:35:24.699781 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 29 11:35:24.724472 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 29 11:35:24.768857 kernel: BTRFS: device label OEM devid 1 transid 18 /dev/sda6 scanned by mount (995) Jan 29 11:35:24.768872 kernel: BTRFS info (device sda6): first mount of filesystem 46e45d4d-e07d-4ebc-bafb-221646b0ed58 Jan 29 11:35:24.768880 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 29 11:35:24.732536 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 29 11:35:24.804524 kernel: BTRFS info (device sda6): using free space tree Jan 29 11:35:24.804536 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 29 11:35:24.804543 kernel: BTRFS info (device sda6): auto enabling async discard Jan 29 11:35:24.804826 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jan 29 11:35:24.805245 systemd[1]: Starting flatcar-static-network.service - Flatcar Static Network Agent... 
Jan 29 11:35:24.824651 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 29 11:35:24.824676 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 29 11:35:24.871233 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 29 11:35:24.888651 coreos-metadata[1012]: Jan 29 11:35:24.885 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Jan 29 11:35:24.888571 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 29 11:35:24.924446 coreos-metadata[1013]: Jan 29 11:35:24.885 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Jan 29 11:35:24.916512 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 29 11:35:24.958489 initrd-setup-root[1027]: cut: /sysroot/etc/passwd: No such file or directory Jan 29 11:35:24.968353 initrd-setup-root[1034]: cut: /sysroot/etc/group: No such file or directory Jan 29 11:35:24.978369 initrd-setup-root[1041]: cut: /sysroot/etc/shadow: No such file or directory Jan 29 11:35:24.987411 initrd-setup-root[1048]: cut: /sysroot/etc/gshadow: No such file or directory Jan 29 11:35:25.000076 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 29 11:35:25.011518 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 29 11:35:25.033298 kernel: BTRFS info (device sda6): last unmount of filesystem 46e45d4d-e07d-4ebc-bafb-221646b0ed58 Jan 29 11:35:25.050597 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 29 11:35:25.051183 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 29 11:35:25.068433 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 29 11:35:25.088544 ignition[1119]: INFO : Ignition 2.20.0 Jan 29 11:35:25.088544 ignition[1119]: INFO : Stage: mount Jan 29 11:35:25.088544 ignition[1119]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 29 11:35:25.088544 ignition[1119]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Jan 29 11:35:25.088544 ignition[1119]: INFO : mount: mount passed Jan 29 11:35:25.088544 ignition[1119]: INFO : POST message to Packet Timeline Jan 29 11:35:25.088544 ignition[1119]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Jan 29 11:35:25.134218 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 29 11:35:25.168636 coreos-metadata[1012]: Jan 29 11:35:25.091 INFO Fetch successful Jan 29 11:35:25.168636 coreos-metadata[1012]: Jan 29 11:35:25.132 INFO wrote hostname ci-4152.2.0-a-23f4c5510f to /sysroot/etc/hostname Jan 29 11:35:25.661766 coreos-metadata[1013]: Jan 29 11:35:25.661 INFO Fetch successful Jan 29 11:35:25.698163 systemd[1]: flatcar-static-network.service: Deactivated successfully. Jan 29 11:35:25.698222 systemd[1]: Finished flatcar-static-network.service - Flatcar Static Network Agent. Jan 29 11:35:25.734471 ignition[1119]: INFO : GET result: OK Jan 29 11:35:26.115027 ignition[1119]: INFO : Ignition finished successfully Jan 29 11:35:26.118055 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 29 11:35:26.145528 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 29 11:35:26.149205 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Jan 29 11:35:26.197300 kernel: BTRFS: device label OEM devid 1 transid 19 /dev/sda6 scanned by mount (1144) Jan 29 11:35:26.214831 kernel: BTRFS info (device sda6): first mount of filesystem 46e45d4d-e07d-4ebc-bafb-221646b0ed58 Jan 29 11:35:26.214849 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 29 11:35:26.220731 kernel: BTRFS info (device sda6): using free space tree Jan 29 11:35:26.235313 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 29 11:35:26.235331 kernel: BTRFS info (device sda6): auto enabling async discard Jan 29 11:35:26.236981 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 29 11:35:26.259146 ignition[1161]: INFO : Ignition 2.20.0 Jan 29 11:35:26.259146 ignition[1161]: INFO : Stage: files Jan 29 11:35:26.273348 ignition[1161]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 29 11:35:26.273348 ignition[1161]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Jan 29 11:35:26.273348 ignition[1161]: DEBUG : files: compiled without relabeling support, skipping Jan 29 11:35:26.273348 ignition[1161]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 29 11:35:26.273348 ignition[1161]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 29 11:35:26.273348 ignition[1161]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 29 11:35:26.273348 ignition[1161]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 29 11:35:26.273348 ignition[1161]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 29 11:35:26.273348 ignition[1161]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Jan 29 11:35:26.273348 ignition[1161]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Jan 29 11:35:26.273348 ignition[1161]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 29 11:35:26.273348 ignition[1161]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jan 29 11:35:26.263260 unknown[1161]: wrote ssh authorized keys file for user: core Jan 29 11:35:26.436519 ignition[1161]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jan 29 11:35:26.436519 ignition[1161]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 29 11:35:26.436519 ignition[1161]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jan 29 11:35:26.436519 ignition[1161]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jan 29 11:35:26.436519 ignition[1161]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 29 11:35:26.436519 ignition[1161]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 29 11:35:26.436519 ignition[1161]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 29 11:35:26.436519 ignition[1161]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file 
"/sysroot/home/core/nfs-pod.yaml" Jan 29 11:35:26.436519 ignition[1161]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 29 11:35:26.436519 ignition[1161]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 29 11:35:26.436519 ignition[1161]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 29 11:35:26.436519 ignition[1161]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 29 11:35:26.436519 ignition[1161]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 29 11:35:26.436519 ignition[1161]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 29 11:35:26.436519 ignition[1161]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 29 11:35:26.687646 ignition[1161]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 Jan 29 11:35:26.914599 ignition[1161]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jan 29 11:35:27.055696 ignition[1161]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 29 11:35:27.055696 ignition[1161]: INFO : files: op(c): [started] processing unit "containerd.service" Jan 29 11:35:27.086541 ignition[1161]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jan 29 11:35:27.086541 ignition[1161]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jan 29 11:35:27.086541 ignition[1161]: INFO : files: op(c): [finished] processing unit "containerd.service" Jan 29 11:35:27.086541 ignition[1161]: INFO : files: op(e): [started] processing unit "prepare-helm.service" Jan 29 11:35:27.086541 ignition[1161]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 29 11:35:27.086541 ignition[1161]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 29 11:35:27.086541 ignition[1161]: INFO : files: op(e): [finished] processing unit "prepare-helm.service" Jan 29 11:35:27.086541 ignition[1161]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service" Jan 29 11:35:27.086541 ignition[1161]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service" Jan 29 11:35:27.086541 ignition[1161]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 29 11:35:27.086541 ignition[1161]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 29 11:35:27.086541 
ignition[1161]: INFO : files: files passed Jan 29 11:35:27.086541 ignition[1161]: INFO : POST message to Packet Timeline Jan 29 11:35:27.086541 ignition[1161]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Jan 29 11:35:27.277562 ignition[1161]: INFO : GET result: OK Jan 29 11:35:27.651887 ignition[1161]: INFO : Ignition finished successfully Jan 29 11:35:27.654828 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 29 11:35:27.685517 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 29 11:35:27.685985 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 29 11:35:27.703776 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 29 11:35:27.703837 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 29 11:35:27.746683 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 29 11:35:27.761875 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 29 11:35:27.792789 initrd-setup-root-after-ignition[1199]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 29 11:35:27.792789 initrd-setup-root-after-ignition[1199]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 29 11:35:27.806729 initrd-setup-root-after-ignition[1203]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 29 11:35:27.799718 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 29 11:35:27.860222 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 29 11:35:27.860274 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 29 11:35:27.879685 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 29 11:35:27.890587 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 29 11:35:27.907725 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 29 11:35:27.916538 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 29 11:35:28.000186 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 29 11:35:28.018716 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 29 11:35:28.037864 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 29 11:35:28.058599 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 29 11:35:28.058793 systemd[1]: Stopped target timers.target - Timer Units. Jan 29 11:35:28.078739 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 29 11:35:28.078850 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 29 11:35:28.113834 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 29 11:35:28.134915 systemd[1]: Stopped target basic.target - Basic System. Jan 29 11:35:28.154034 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 29 11:35:28.172912 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 29 11:35:28.193906 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 29 11:35:28.214928 systemd[1]: Stopped target remote-fs.target - Remote File Systems. 
Jan 29 11:35:28.234919 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 29 11:35:28.255948 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 29 11:35:28.277932 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 29 11:35:28.297906 systemd[1]: Stopped target swap.target - Swaps. Jan 29 11:35:28.315803 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 29 11:35:28.316205 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 29 11:35:28.342144 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 29 11:35:28.361934 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 29 11:35:28.382805 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 29 11:35:28.383270 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 29 11:35:28.404792 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 29 11:35:28.405184 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 29 11:35:28.436885 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 29 11:35:28.437355 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 29 11:35:28.457105 systemd[1]: Stopped target paths.target - Path Units. Jan 29 11:35:28.475767 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 29 11:35:28.479576 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 29 11:35:28.496914 systemd[1]: Stopped target slices.target - Slice Units. Jan 29 11:35:28.514910 systemd[1]: Stopped target sockets.target - Socket Units. Jan 29 11:35:28.533878 systemd[1]: iscsid.socket: Deactivated successfully. Jan 29 11:35:28.534181 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 29 11:35:28.553944 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 29 11:35:28.554252 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 29 11:35:28.576991 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 29 11:35:28.577423 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 29 11:35:28.597990 systemd[1]: ignition-files.service: Deactivated successfully. Jan 29 11:35:28.706458 ignition[1224]: INFO : Ignition 2.20.0 Jan 29 11:35:28.706458 ignition[1224]: INFO : Stage: umount Jan 29 11:35:28.706458 ignition[1224]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 29 11:35:28.706458 ignition[1224]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Jan 29 11:35:28.706458 ignition[1224]: INFO : umount: umount passed Jan 29 11:35:28.706458 ignition[1224]: INFO : POST message to Packet Timeline Jan 29 11:35:28.706458 ignition[1224]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Jan 29 11:35:28.598384 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 29 11:35:28.616984 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jan 29 11:35:28.617394 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 29 11:35:28.645581 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 29 11:35:28.680571 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 29 11:35:28.698381 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. 
Jan 29 11:35:28.698471 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 29 11:35:28.717554 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 29 11:35:28.882524 ignition[1224]: INFO : GET result: OK Jan 29 11:35:28.717669 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 29 11:35:28.757541 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 29 11:35:28.762073 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 29 11:35:28.762336 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 29 11:35:28.837262 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 29 11:35:28.837390 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 29 11:35:29.240719 ignition[1224]: INFO : Ignition finished successfully Jan 29 11:35:29.243653 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 29 11:35:29.243948 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 29 11:35:29.260595 systemd[1]: Stopped target network.target - Network. Jan 29 11:35:29.275581 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 29 11:35:29.275779 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 29 11:35:29.293729 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 29 11:35:29.293893 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 29 11:35:29.311721 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 29 11:35:29.311882 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 29 11:35:29.329690 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 29 11:35:29.329850 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 29 11:35:29.348691 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 29 11:35:29.348860 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 29 11:35:29.368204 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 29 11:35:29.379428 systemd-networkd[932]: enp2s0f1np1: DHCPv6 lease lost Jan 29 11:35:29.386546 systemd-networkd[932]: enp2s0f0np0: DHCPv6 lease lost Jan 29 11:35:29.386788 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 29 11:35:29.395661 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 29 11:35:29.395946 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 29 11:35:29.425613 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 29 11:35:29.425988 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 29 11:35:29.446398 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 29 11:35:29.446527 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 29 11:35:29.473443 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 29 11:35:29.481616 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 29 11:35:29.481661 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 29 11:35:29.509559 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 29 11:35:29.509634 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 29 11:35:29.528684 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 29 11:35:29.528826 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. 
Jan 29 11:35:29.547808 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 29 11:35:29.547976 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 29 11:35:29.568916 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 29 11:35:29.588475 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 29 11:35:29.588832 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 29 11:35:29.631465 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 29 11:35:29.631614 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 29 11:35:29.635855 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 29 11:35:29.635958 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 29 11:35:29.663560 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 29 11:35:29.663701 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 29 11:35:29.694754 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 29 11:35:29.694926 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 29 11:35:29.722677 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 29 11:35:29.722843 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 11:35:29.765697 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 29 11:35:29.783469 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 29 11:35:30.026489 systemd-journald[269]: Received SIGTERM from PID 1 (systemd). Jan 29 11:35:29.783633 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 29 11:35:29.805605 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 29 11:35:29.805749 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 29 11:35:29.827593 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 29 11:35:29.827737 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 29 11:35:29.846583 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 29 11:35:29.846724 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 11:35:29.869647 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 29 11:35:29.869886 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 29 11:35:29.905128 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 29 11:35:29.905428 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 29 11:35:29.908840 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 29 11:35:29.949527 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 29 11:35:29.970077 systemd[1]: Switching root. 
Jan 29 11:35:30.147588 systemd-journald[269]: Journal stopped Jan 29 11:35:12.455729 kernel: microcode: updated early: 0xde -> 0x100, date = 2024-02-05 Jan 29 11:35:12.455743 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Wed Jan 29 09:36:13 -00 2025 Jan 29 11:35:12.455751 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=519b8fded83181f8e61f734d5291f916d7548bfba9487c78bcb50d002d81719d Jan 29 11:35:12.455755 kernel: BIOS-provided physical RAM map: Jan 29 11:35:12.455759 kernel: BIOS-e820: [mem 0x0000000000000000-0x00000000000997ff] usable Jan 29 11:35:12.455763 kernel: BIOS-e820: [mem 0x0000000000099800-0x000000000009ffff] reserved Jan 29 11:35:12.455768 kernel: BIOS-e820: [mem 0x00000000000e0000-0x00000000000fffff] reserved Jan 29 11:35:12.455772 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003fffffff] usable Jan 29 11:35:12.455777 kernel: BIOS-e820: [mem 0x0000000040000000-0x00000000403fffff] reserved Jan 29 11:35:12.455781 kernel: BIOS-e820: [mem 0x0000000040400000-0x000000006eb29fff] usable Jan 29 11:35:12.455785 kernel: BIOS-e820: [mem 0x000000006eb2a000-0x000000006eb2afff] ACPI NVS Jan 29 11:35:12.455789 kernel: BIOS-e820: [mem 0x000000006eb2b000-0x000000006eb2bfff] reserved Jan 29 11:35:12.455793 kernel: BIOS-e820: [mem 0x000000006eb2c000-0x0000000077fc4fff] usable Jan 29 11:35:12.455798 kernel: BIOS-e820: [mem 0x0000000077fc5000-0x00000000790a7fff] reserved Jan 29 11:35:12.455804 kernel: BIOS-e820: [mem 0x00000000790a8000-0x0000000079230fff] usable Jan 29 11:35:12.455809 kernel: BIOS-e820: [mem 0x0000000079231000-0x0000000079662fff] ACPI NVS Jan 29 11:35:12.455813 kernel: BIOS-e820: [mem 0x0000000079663000-0x000000007befefff] reserved Jan 29 11:35:12.455818 kernel: BIOS-e820: [mem 0x000000007beff000-0x000000007befffff] usable Jan 29 11:35:12.455822 kernel: BIOS-e820: [mem 0x000000007bf00000-0x000000007f7fffff] reserved Jan 29 11:35:12.455827 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved Jan 29 11:35:12.455832 kernel: BIOS-e820: [mem 0x00000000fe000000-0x00000000fe010fff] reserved Jan 29 11:35:12.455836 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec00fff] reserved Jan 29 11:35:12.455841 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved Jan 29 11:35:12.455845 kernel: BIOS-e820: [mem 0x00000000ff000000-0x00000000ffffffff] reserved Jan 29 11:35:12.455851 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000087f7fffff] usable Jan 29 11:35:12.455856 kernel: NX (Execute Disable) protection: active Jan 29 11:35:12.455860 kernel: APIC: Static calls initialized Jan 29 11:35:12.455865 kernel: SMBIOS 3.2.1 present. 
Jan 29 11:35:12.455869 kernel: DMI: Supermicro PIO-519C-MR-PH004/X11SCH-F, BIOS 1.5 11/17/2020 Jan 29 11:35:12.455874 kernel: tsc: Detected 3400.000 MHz processor Jan 29 11:35:12.455879 kernel: tsc: Detected 3399.906 MHz TSC Jan 29 11:35:12.455883 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jan 29 11:35:12.455888 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jan 29 11:35:12.455893 kernel: last_pfn = 0x87f800 max_arch_pfn = 0x400000000 Jan 29 11:35:12.455898 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 23), built from 10 variable MTRRs Jan 29 11:35:12.455904 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jan 29 11:35:12.455909 kernel: last_pfn = 0x7bf00 max_arch_pfn = 0x400000000 Jan 29 11:35:12.455914 kernel: Using GB pages for direct mapping Jan 29 11:35:12.455919 kernel: ACPI: Early table checksum verification disabled Jan 29 11:35:12.455924 kernel: ACPI: RSDP 0x00000000000F05B0 000024 (v02 SUPERM) Jan 29 11:35:12.455931 kernel: ACPI: XSDT 0x00000000795440C8 00010C (v01 SUPERM SUPERM 01072009 AMI 00010013) Jan 29 11:35:12.455936 kernel: ACPI: FACP 0x0000000079580620 000114 (v06 01072009 AMI 00010013) Jan 29 11:35:12.455942 kernel: ACPI: DSDT 0x0000000079544268 03C3B7 (v02 SUPERM SMCI--MB 01072009 INTL 20160527) Jan 29 11:35:12.455947 kernel: ACPI: FACS 0x0000000079662F80 000040 Jan 29 11:35:12.455952 kernel: ACPI: APIC 0x0000000079580738 00012C (v04 01072009 AMI 00010013) Jan 29 11:35:12.455957 kernel: ACPI: FPDT 0x0000000079580868 000044 (v01 01072009 AMI 00010013) Jan 29 11:35:12.455962 kernel: ACPI: FIDT 0x00000000795808B0 00009C (v01 SUPERM SMCI--MB 01072009 AMI 00010013) Jan 29 11:35:12.455967 kernel: ACPI: MCFG 0x0000000079580950 00003C (v01 SUPERM SMCI--MB 01072009 MSFT 00000097) Jan 29 11:35:12.455972 kernel: ACPI: SPMI 0x0000000079580990 000041 (v05 SUPERM SMCI--MB 00000000 AMI. 
00000000) Jan 29 11:35:12.455978 kernel: ACPI: SSDT 0x00000000795809D8 001B1C (v02 CpuRef CpuSsdt 00003000 INTL 20160527) Jan 29 11:35:12.455983 kernel: ACPI: SSDT 0x00000000795824F8 0031C6 (v02 SaSsdt SaSsdt 00003000 INTL 20160527) Jan 29 11:35:12.455988 kernel: ACPI: SSDT 0x00000000795856C0 00232B (v02 PegSsd PegSsdt 00001000 INTL 20160527) Jan 29 11:35:12.455993 kernel: ACPI: HPET 0x00000000795879F0 000038 (v01 SUPERM SMCI--MB 00000002 01000013) Jan 29 11:35:12.455998 kernel: ACPI: SSDT 0x0000000079587A28 000FAE (v02 SUPERM Ther_Rvp 00001000 INTL 20160527) Jan 29 11:35:12.456003 kernel: ACPI: SSDT 0x00000000795889D8 0008F7 (v02 INTEL xh_mossb 00000000 INTL 20160527) Jan 29 11:35:12.456008 kernel: ACPI: UEFI 0x00000000795892D0 000042 (v01 SUPERM SMCI--MB 00000002 01000013) Jan 29 11:35:12.456013 kernel: ACPI: LPIT 0x0000000079589318 000094 (v01 SUPERM SMCI--MB 00000002 01000013) Jan 29 11:35:12.456019 kernel: ACPI: SSDT 0x00000000795893B0 0027DE (v02 SUPERM PtidDevc 00001000 INTL 20160527) Jan 29 11:35:12.456024 kernel: ACPI: SSDT 0x000000007958BB90 0014E2 (v02 SUPERM TbtTypeC 00000000 INTL 20160527) Jan 29 11:35:12.456029 kernel: ACPI: DBGP 0x000000007958D078 000034 (v01 SUPERM SMCI--MB 00000002 01000013) Jan 29 11:35:12.456034 kernel: ACPI: DBG2 0x000000007958D0B0 000054 (v00 SUPERM SMCI--MB 00000002 01000013) Jan 29 11:35:12.456039 kernel: ACPI: SSDT 0x000000007958D108 001B67 (v02 SUPERM UsbCTabl 00001000 INTL 20160527) Jan 29 11:35:12.456044 kernel: ACPI: DMAR 0x000000007958EC70 0000A8 (v01 INTEL EDK2 00000002 01000013) Jan 29 11:35:12.456049 kernel: ACPI: SSDT 0x000000007958ED18 000144 (v02 Intel ADebTabl 00001000 INTL 20160527) Jan 29 11:35:12.456055 kernel: ACPI: TPM2 0x000000007958EE60 000034 (v04 SUPERM SMCI--MB 00000001 AMI 00000000) Jan 29 11:35:12.456060 kernel: ACPI: SSDT 0x000000007958EE98 000D8F (v02 INTEL SpsNm 00000002 INTL 20160527) Jan 29 11:35:12.456066 kernel: ACPI: WSMT 0x000000007958FC28 000028 (v01 \xacn 01072009 AMI 00010013) Jan 29 11:35:12.456071 kernel: ACPI: EINJ 0x000000007958FC50 000130 (v01 AMI AMI.EINJ 00000000 AMI. 00000000) Jan 29 11:35:12.456076 kernel: ACPI: ERST 0x000000007958FD80 000230 (v01 AMIER AMI.ERST 00000000 AMI. 00000000) Jan 29 11:35:12.456081 kernel: ACPI: BERT 0x000000007958FFB0 000030 (v01 AMI AMI.BERT 00000000 AMI. 00000000) Jan 29 11:35:12.456086 kernel: ACPI: HEST 0x000000007958FFE0 00027C (v01 AMI AMI.HEST 00000000 AMI. 
00000000) Jan 29 11:35:12.456091 kernel: ACPI: SSDT 0x0000000079590260 000162 (v01 SUPERM SMCCDN 00000000 INTL 20181221) Jan 29 11:35:12.456096 kernel: ACPI: Reserving FACP table memory at [mem 0x79580620-0x79580733] Jan 29 11:35:12.456101 kernel: ACPI: Reserving DSDT table memory at [mem 0x79544268-0x7958061e] Jan 29 11:35:12.456106 kernel: ACPI: Reserving FACS table memory at [mem 0x79662f80-0x79662fbf] Jan 29 11:35:12.456112 kernel: ACPI: Reserving APIC table memory at [mem 0x79580738-0x79580863] Jan 29 11:35:12.456117 kernel: ACPI: Reserving FPDT table memory at [mem 0x79580868-0x795808ab] Jan 29 11:35:12.456122 kernel: ACPI: Reserving FIDT table memory at [mem 0x795808b0-0x7958094b] Jan 29 11:35:12.456127 kernel: ACPI: Reserving MCFG table memory at [mem 0x79580950-0x7958098b] Jan 29 11:35:12.456132 kernel: ACPI: Reserving SPMI table memory at [mem 0x79580990-0x795809d0] Jan 29 11:35:12.456137 kernel: ACPI: Reserving SSDT table memory at [mem 0x795809d8-0x795824f3] Jan 29 11:35:12.456142 kernel: ACPI: Reserving SSDT table memory at [mem 0x795824f8-0x795856bd] Jan 29 11:35:12.456147 kernel: ACPI: Reserving SSDT table memory at [mem 0x795856c0-0x795879ea] Jan 29 11:35:12.456152 kernel: ACPI: Reserving HPET table memory at [mem 0x795879f0-0x79587a27] Jan 29 11:35:12.456158 kernel: ACPI: Reserving SSDT table memory at [mem 0x79587a28-0x795889d5] Jan 29 11:35:12.456163 kernel: ACPI: Reserving SSDT table memory at [mem 0x795889d8-0x795892ce] Jan 29 11:35:12.456168 kernel: ACPI: Reserving UEFI table memory at [mem 0x795892d0-0x79589311] Jan 29 11:35:12.456173 kernel: ACPI: Reserving LPIT table memory at [mem 0x79589318-0x795893ab] Jan 29 11:35:12.456178 kernel: ACPI: Reserving SSDT table memory at [mem 0x795893b0-0x7958bb8d] Jan 29 11:35:12.456183 kernel: ACPI: Reserving SSDT table memory at [mem 0x7958bb90-0x7958d071] Jan 29 11:35:12.456188 kernel: ACPI: Reserving DBGP table memory at [mem 0x7958d078-0x7958d0ab] Jan 29 11:35:12.456193 kernel: ACPI: Reserving DBG2 table memory at [mem 0x7958d0b0-0x7958d103] Jan 29 11:35:12.456198 kernel: ACPI: Reserving SSDT table memory at [mem 0x7958d108-0x7958ec6e] Jan 29 11:35:12.456204 kernel: ACPI: Reserving DMAR table memory at [mem 0x7958ec70-0x7958ed17] Jan 29 11:35:12.456209 kernel: ACPI: Reserving SSDT table memory at [mem 0x7958ed18-0x7958ee5b] Jan 29 11:35:12.456214 kernel: ACPI: Reserving TPM2 table memory at [mem 0x7958ee60-0x7958ee93] Jan 29 11:35:12.456219 kernel: ACPI: Reserving SSDT table memory at [mem 0x7958ee98-0x7958fc26] Jan 29 11:35:12.456224 kernel: ACPI: Reserving WSMT table memory at [mem 0x7958fc28-0x7958fc4f] Jan 29 11:35:12.456229 kernel: ACPI: Reserving EINJ table memory at [mem 0x7958fc50-0x7958fd7f] Jan 29 11:35:12.456234 kernel: ACPI: Reserving ERST table memory at [mem 0x7958fd80-0x7958ffaf] Jan 29 11:35:12.456239 kernel: ACPI: Reserving BERT table memory at [mem 0x7958ffb0-0x7958ffdf] Jan 29 11:35:12.456244 kernel: ACPI: Reserving HEST table memory at [mem 0x7958ffe0-0x7959025b] Jan 29 11:35:12.456250 kernel: ACPI: Reserving SSDT table memory at [mem 0x79590260-0x795903c1] Jan 29 11:35:12.456255 kernel: No NUMA configuration found Jan 29 11:35:12.456260 kernel: Faking a node at [mem 0x0000000000000000-0x000000087f7fffff] Jan 29 11:35:12.456265 kernel: NODE_DATA(0) allocated [mem 0x87f7fa000-0x87f7fffff] Jan 29 11:35:12.456270 kernel: Zone ranges: Jan 29 11:35:12.456275 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jan 29 11:35:12.456280 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Jan 29 
11:35:12.456285 kernel: Normal [mem 0x0000000100000000-0x000000087f7fffff] Jan 29 11:35:12.456290 kernel: Movable zone start for each node Jan 29 11:35:12.456297 kernel: Early memory node ranges Jan 29 11:35:12.456304 kernel: node 0: [mem 0x0000000000001000-0x0000000000098fff] Jan 29 11:35:12.456331 kernel: node 0: [mem 0x0000000000100000-0x000000003fffffff] Jan 29 11:35:12.456336 kernel: node 0: [mem 0x0000000040400000-0x000000006eb29fff] Jan 29 11:35:12.456356 kernel: node 0: [mem 0x000000006eb2c000-0x0000000077fc4fff] Jan 29 11:35:12.456375 kernel: node 0: [mem 0x00000000790a8000-0x0000000079230fff] Jan 29 11:35:12.456381 kernel: node 0: [mem 0x000000007beff000-0x000000007befffff] Jan 29 11:35:12.456390 kernel: node 0: [mem 0x0000000100000000-0x000000087f7fffff] Jan 29 11:35:12.456395 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000087f7fffff] Jan 29 11:35:12.456401 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 29 11:35:12.456406 kernel: On node 0, zone DMA: 103 pages in unavailable ranges Jan 29 11:35:12.456413 kernel: On node 0, zone DMA32: 1024 pages in unavailable ranges Jan 29 11:35:12.456418 kernel: On node 0, zone DMA32: 2 pages in unavailable ranges Jan 29 11:35:12.456423 kernel: On node 0, zone DMA32: 4323 pages in unavailable ranges Jan 29 11:35:12.456429 kernel: On node 0, zone DMA32: 11470 pages in unavailable ranges Jan 29 11:35:12.456434 kernel: On node 0, zone Normal: 16640 pages in unavailable ranges Jan 29 11:35:12.456440 kernel: On node 0, zone Normal: 2048 pages in unavailable ranges Jan 29 11:35:12.456445 kernel: ACPI: PM-Timer IO Port: 0x1808 Jan 29 11:35:12.456452 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1]) Jan 29 11:35:12.456457 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1]) Jan 29 11:35:12.456463 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1]) Jan 29 11:35:12.456468 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1]) Jan 29 11:35:12.456473 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1]) Jan 29 11:35:12.456478 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1]) Jan 29 11:35:12.456484 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1]) Jan 29 11:35:12.456489 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1]) Jan 29 11:35:12.456494 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1]) Jan 29 11:35:12.456501 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1]) Jan 29 11:35:12.456506 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1]) Jan 29 11:35:12.456511 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1]) Jan 29 11:35:12.456517 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1]) Jan 29 11:35:12.456522 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1]) Jan 29 11:35:12.456527 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1]) Jan 29 11:35:12.456532 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1]) Jan 29 11:35:12.456538 kernel: IOAPIC[0]: apic_id 2, version 32, address 0xfec00000, GSI 0-119 Jan 29 11:35:12.456543 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Jan 29 11:35:12.456550 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jan 29 11:35:12.456555 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 29 11:35:12.456560 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Jan 29 11:35:12.456566 kernel: TSC deadline timer available Jan 29 11:35:12.456571 kernel: smpboot: Allowing 16 CPUs, 0 hotplug CPUs Jan 29 11:35:12.456577 kernel: 
[mem 0x7f800000-0xdfffffff] available for PCI devices Jan 29 11:35:12.456582 kernel: Booting paravirtualized kernel on bare hardware Jan 29 11:35:12.456588 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jan 29 11:35:12.456593 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:16 nr_cpu_ids:16 nr_node_ids:1 Jan 29 11:35:12.456599 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u262144 Jan 29 11:35:12.456605 kernel: pcpu-alloc: s197032 r8192 d32344 u262144 alloc=1*2097152 Jan 29 11:35:12.456610 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15 Jan 29 11:35:12.456616 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=519b8fded83181f8e61f734d5291f916d7548bfba9487c78bcb50d002d81719d Jan 29 11:35:12.456622 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jan 29 11:35:12.456627 kernel: random: crng init done Jan 29 11:35:12.456632 kernel: Dentry cache hash table entries: 4194304 (order: 13, 33554432 bytes, linear) Jan 29 11:35:12.456638 kernel: Inode-cache hash table entries: 2097152 (order: 12, 16777216 bytes, linear) Jan 29 11:35:12.456644 kernel: Fallback order for Node 0: 0 Jan 29 11:35:12.456650 kernel: Built 1 zonelists, mobility grouping on. Total pages: 8222327 Jan 29 11:35:12.456655 kernel: Policy zone: Normal Jan 29 11:35:12.456661 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 29 11:35:12.456666 kernel: software IO TLB: area num 16. Jan 29 11:35:12.456672 kernel: Memory: 32679316K/33411988K available (12288K kernel code, 2301K rwdata, 22736K rodata, 42972K init, 2220K bss, 732412K reserved, 0K cma-reserved) Jan 29 11:35:12.456677 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1 Jan 29 11:35:12.456682 kernel: ftrace: allocating 37923 entries in 149 pages Jan 29 11:35:12.456688 kernel: ftrace: allocated 149 pages with 4 groups Jan 29 11:35:12.456694 kernel: Dynamic Preempt: voluntary Jan 29 11:35:12.456700 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 29 11:35:12.456705 kernel: rcu: RCU event tracing is enabled. Jan 29 11:35:12.456711 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16. Jan 29 11:35:12.456716 kernel: Trampoline variant of Tasks RCU enabled. Jan 29 11:35:12.456722 kernel: Rude variant of Tasks RCU enabled. Jan 29 11:35:12.456727 kernel: Tracing variant of Tasks RCU enabled. Jan 29 11:35:12.456732 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 29 11:35:12.456738 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16 Jan 29 11:35:12.456744 kernel: NR_IRQS: 33024, nr_irqs: 2184, preallocated irqs: 16 Jan 29 11:35:12.456750 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Jan 29 11:35:12.456755 kernel: Console: colour VGA+ 80x25 Jan 29 11:35:12.456760 kernel: printk: console [tty0] enabled Jan 29 11:35:12.456766 kernel: printk: console [ttyS1] enabled Jan 29 11:35:12.456771 kernel: ACPI: Core revision 20230628 Jan 29 11:35:12.456776 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 79635855245 ns Jan 29 11:35:12.456782 kernel: APIC: Switch to symmetric I/O mode setup Jan 29 11:35:12.456787 kernel: DMAR: Host address width 39 Jan 29 11:35:12.456794 kernel: DMAR: DRHD base: 0x000000fed90000 flags: 0x0 Jan 29 11:35:12.456799 kernel: DMAR: dmar0: reg_base_addr fed90000 ver 1:0 cap 1c0000c40660462 ecap 19e2ff0505e Jan 29 11:35:12.456805 kernel: DMAR: DRHD base: 0x000000fed91000 flags: 0x1 Jan 29 11:35:12.456810 kernel: DMAR: dmar1: reg_base_addr fed91000 ver 1:0 cap d2008c40660462 ecap f050da Jan 29 11:35:12.456815 kernel: DMAR: RMRR base: 0x00000079f11000 end: 0x0000007a15afff Jan 29 11:35:12.456821 kernel: DMAR: RMRR base: 0x0000007d000000 end: 0x0000007f7fffff Jan 29 11:35:12.456826 kernel: DMAR-IR: IOAPIC id 2 under DRHD base 0xfed91000 IOMMU 1 Jan 29 11:35:12.456832 kernel: DMAR-IR: HPET id 0 under DRHD base 0xfed91000 Jan 29 11:35:12.456837 kernel: DMAR-IR: Queued invalidation will be enabled to support x2apic and Intr-remapping. Jan 29 11:35:12.456843 kernel: DMAR-IR: Enabled IRQ remapping in x2apic mode Jan 29 11:35:12.456849 kernel: x2apic enabled Jan 29 11:35:12.456854 kernel: APIC: Switched APIC routing to: cluster x2apic Jan 29 11:35:12.456860 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Jan 29 11:35:12.456865 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x3101f59f5e6, max_idle_ns: 440795259996 ns Jan 29 11:35:12.456871 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 
6799.81 BogoMIPS (lpj=3399906) Jan 29 11:35:12.456876 kernel: CPU0: Thermal monitoring enabled (TM1) Jan 29 11:35:12.456881 kernel: process: using mwait in idle threads Jan 29 11:35:12.456887 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Jan 29 11:35:12.456893 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4 Jan 29 11:35:12.456899 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 29 11:35:12.456904 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit Jan 29 11:35:12.456909 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall Jan 29 11:35:12.456915 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS Jan 29 11:35:12.456920 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Jan 29 11:35:12.456926 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT Jan 29 11:35:12.456931 kernel: RETBleed: Mitigation: Enhanced IBRS Jan 29 11:35:12.456937 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Jan 29 11:35:12.456943 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Jan 29 11:35:12.456949 kernel: TAA: Mitigation: TSX disabled Jan 29 11:35:12.456954 kernel: MMIO Stale Data: Mitigation: Clear CPU buffers Jan 29 11:35:12.456959 kernel: SRBDS: Mitigation: Microcode Jan 29 11:35:12.456965 kernel: GDS: Mitigation: Microcode Jan 29 11:35:12.456970 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jan 29 11:35:12.456975 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jan 29 11:35:12.456981 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jan 29 11:35:12.456986 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers' Jan 29 11:35:12.456993 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR' Jan 29 11:35:12.456998 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jan 29 11:35:12.457003 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64 Jan 29 11:35:12.457009 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64 Jan 29 11:35:12.457014 kernel: x86/fpu: Enabled xstate features 0x1f, context size is 960 bytes, using 'compacted' format. Jan 29 11:35:12.457020 kernel: Freeing SMP alternatives memory: 32K Jan 29 11:35:12.457025 kernel: pid_max: default: 32768 minimum: 301 Jan 29 11:35:12.457030 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 29 11:35:12.457036 kernel: landlock: Up and running. Jan 29 11:35:12.457042 kernel: SELinux: Initializing. Jan 29 11:35:12.457047 kernel: Mount-cache hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 29 11:35:12.457053 kernel: Mountpoint-cache hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 29 11:35:12.457058 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd) Jan 29 11:35:12.457064 kernel: RCU Tasks: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Jan 29 11:35:12.457069 kernel: RCU Tasks Rude: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Jan 29 11:35:12.457075 kernel: RCU Tasks Trace: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Jan 29 11:35:12.457080 kernel: Performance Events: PEBS fmt3+, Skylake events, 32-deep LBR, full-width counters, Intel PMU driver. Jan 29 11:35:12.457085 kernel: ... 
version: 4 Jan 29 11:35:12.457092 kernel: ... bit width: 48 Jan 29 11:35:12.457097 kernel: ... generic registers: 4 Jan 29 11:35:12.457102 kernel: ... value mask: 0000ffffffffffff Jan 29 11:35:12.457108 kernel: ... max period: 00007fffffffffff Jan 29 11:35:12.457113 kernel: ... fixed-purpose events: 3 Jan 29 11:35:12.457118 kernel: ... event mask: 000000070000000f Jan 29 11:35:12.457124 kernel: signal: max sigframe size: 2032 Jan 29 11:35:12.457129 kernel: Estimated ratio of average max frequency by base frequency (times 1024): 1445 Jan 29 11:35:12.457134 kernel: rcu: Hierarchical SRCU implementation. Jan 29 11:35:12.457141 kernel: rcu: Max phase no-delay instances is 400. Jan 29 11:35:12.457146 kernel: NMI watchdog: Enabled. Permanently consumes one hw-PMU counter. Jan 29 11:35:12.457152 kernel: smp: Bringing up secondary CPUs ... Jan 29 11:35:12.457157 kernel: smpboot: x86: Booting SMP configuration: Jan 29 11:35:12.457163 kernel: .... node #0, CPUs: #1 #2 #3 #4 #5 #6 #7 #8 #9 #10 #11 #12 #13 #14 #15 Jan 29 11:35:12.457168 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. Jan 29 11:35:12.457174 kernel: smp: Brought up 1 node, 16 CPUs Jan 29 11:35:12.457179 kernel: smpboot: Max logical packages: 1 Jan 29 11:35:12.457184 kernel: smpboot: Total of 16 processors activated (108796.99 BogoMIPS) Jan 29 11:35:12.457191 kernel: devtmpfs: initialized Jan 29 11:35:12.457196 kernel: x86/mm: Memory block size: 128MB Jan 29 11:35:12.457202 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x6eb2a000-0x6eb2afff] (4096 bytes) Jan 29 11:35:12.457207 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x79231000-0x79662fff] (4399104 bytes) Jan 29 11:35:12.457212 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 29 11:35:12.457218 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear) Jan 29 11:35:12.457223 kernel: pinctrl core: initialized pinctrl subsystem Jan 29 11:35:12.457229 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 29 11:35:12.457235 kernel: audit: initializing netlink subsys (disabled) Jan 29 11:35:12.457241 kernel: audit: type=2000 audit(1738150507.129:1): state=initialized audit_enabled=0 res=1 Jan 29 11:35:12.457246 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 29 11:35:12.457251 kernel: thermal_sys: Registered thermal governor 'user_space' Jan 29 11:35:12.457257 kernel: cpuidle: using governor menu Jan 29 11:35:12.457262 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 29 11:35:12.457267 kernel: dca service started, version 1.12.1 Jan 29 11:35:12.457273 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000) Jan 29 11:35:12.457278 kernel: PCI: Using configuration type 1 for base access Jan 29 11:35:12.457285 kernel: ENERGY_PERF_BIAS: Set to 'normal', was 'performance' Jan 29 11:35:12.457290 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Jan 29 11:35:12.457297 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 29 11:35:12.457303 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jan 29 11:35:12.457308 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 29 11:35:12.457338 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 29 11:35:12.457344 kernel: ACPI: Added _OSI(Module Device) Jan 29 11:35:12.457349 kernel: ACPI: Added _OSI(Processor Device) Jan 29 11:35:12.457355 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jan 29 11:35:12.457377 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 29 11:35:12.457382 kernel: ACPI: 12 ACPI AML tables successfully acquired and loaded Jan 29 11:35:12.457388 kernel: ACPI: Dynamic OEM Table Load: Jan 29 11:35:12.457393 kernel: ACPI: SSDT 0xFFFFA08D02095800 000400 (v02 PmRef Cpu0Cst 00003001 INTL 20160527) Jan 29 11:35:12.457399 kernel: ACPI: Dynamic OEM Table Load: Jan 29 11:35:12.457404 kernel: ACPI: SSDT 0xFFFFA08D02089000 000683 (v02 PmRef Cpu0Ist 00003000 INTL 20160527) Jan 29 11:35:12.457409 kernel: ACPI: Dynamic OEM Table Load: Jan 29 11:35:12.457414 kernel: ACPI: SSDT 0xFFFFA08D0172F300 0000F4 (v02 PmRef Cpu0Psd 00003000 INTL 20160527) Jan 29 11:35:12.457420 kernel: ACPI: Dynamic OEM Table Load: Jan 29 11:35:12.457425 kernel: ACPI: SSDT 0xFFFFA08D0208C000 0005FC (v02 PmRef ApIst 00003000 INTL 20160527) Jan 29 11:35:12.457432 kernel: ACPI: Dynamic OEM Table Load: Jan 29 11:35:12.457437 kernel: ACPI: SSDT 0xFFFFA08D0209D000 000AB0 (v02 PmRef ApPsd 00003000 INTL 20160527) Jan 29 11:35:12.457442 kernel: ACPI: Dynamic OEM Table Load: Jan 29 11:35:12.457447 kernel: ACPI: SSDT 0xFFFFA08D01032C00 00030A (v02 PmRef ApCst 00003000 INTL 20160527) Jan 29 11:35:12.457453 kernel: ACPI: _OSC evaluated successfully for all CPUs Jan 29 11:35:12.457458 kernel: ACPI: Interpreter enabled Jan 29 11:35:12.457464 kernel: ACPI: PM: (supports S0 S5) Jan 29 11:35:12.457469 kernel: ACPI: Using IOAPIC for interrupt routing Jan 29 11:35:12.457474 kernel: HEST: Enabling Firmware First mode for corrected errors. Jan 29 11:35:12.457481 kernel: mce: [Firmware Bug]: Ignoring request to disable invalid MCA bank 14. Jan 29 11:35:12.457486 kernel: HEST: Table parsing has been initialized. Jan 29 11:35:12.457491 kernel: GHES: APEI firmware first mode is enabled by APEI bit and WHEA _OSC. 
Jan 29 11:35:12.457497 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 29 11:35:12.457502 kernel: PCI: Using E820 reservations for host bridge windows Jan 29 11:35:12.457508 kernel: ACPI: Enabled 9 GPEs in block 00 to 7F Jan 29 11:35:12.457513 kernel: ACPI: \_SB_.PCI0.XDCI.USBC: New power resource Jan 29 11:35:12.457519 kernel: ACPI: \_SB_.PCI0.SAT0.VOL0.V0PR: New power resource Jan 29 11:35:12.457524 kernel: ACPI: \_SB_.PCI0.SAT0.VOL1.V1PR: New power resource Jan 29 11:35:12.457530 kernel: ACPI: \_SB_.PCI0.SAT0.VOL2.V2PR: New power resource Jan 29 11:35:12.457536 kernel: ACPI: \_SB_.PCI0.CNVW.WRST: New power resource Jan 29 11:35:12.457542 kernel: ACPI: [Firmware Bug]: BIOS _OSI(Linux) query ignored Jan 29 11:35:12.457547 kernel: ACPI: \_TZ_.FN00: New power resource Jan 29 11:35:12.457552 kernel: ACPI: \_TZ_.FN01: New power resource Jan 29 11:35:12.457558 kernel: ACPI: \_TZ_.FN02: New power resource Jan 29 11:35:12.457563 kernel: ACPI: \_TZ_.FN03: New power resource Jan 29 11:35:12.457569 kernel: ACPI: \_TZ_.FN04: New power resource Jan 29 11:35:12.457574 kernel: ACPI: \PIN_: New power resource Jan 29 11:35:12.457580 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-fe]) Jan 29 11:35:12.457653 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jan 29 11:35:12.457706 kernel: acpi PNP0A08:00: _OSC: platform does not support [AER] Jan 29 11:35:12.457753 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability LTR] Jan 29 11:35:12.457761 kernel: PCI host bridge to bus 0000:00 Jan 29 11:35:12.457812 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jan 29 11:35:12.457853 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jan 29 11:35:12.457897 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jan 29 11:35:12.457938 kernel: pci_bus 0000:00: root bus resource [mem 0x7f800000-0xdfffffff window] Jan 29 11:35:12.457979 kernel: pci_bus 0000:00: root bus resource [mem 0xfc800000-0xfe7fffff window] Jan 29 11:35:12.458019 kernel: pci_bus 0000:00: root bus resource [bus 00-fe] Jan 29 11:35:12.458074 kernel: pci 0000:00:00.0: [8086:3e31] type 00 class 0x060000 Jan 29 11:35:12.458128 kernel: pci 0000:00:01.0: [8086:1901] type 01 class 0x060400 Jan 29 11:35:12.458179 kernel: pci 0000:00:01.0: PME# supported from D0 D3hot D3cold Jan 29 11:35:12.458230 kernel: pci 0000:00:01.1: [8086:1905] type 01 class 0x060400 Jan 29 11:35:12.458279 kernel: pci 0000:00:01.1: PME# supported from D0 D3hot D3cold Jan 29 11:35:12.458375 kernel: pci 0000:00:02.0: [8086:3e9a] type 00 class 0x038000 Jan 29 11:35:12.458437 kernel: pci 0000:00:02.0: reg 0x10: [mem 0x94000000-0x94ffffff 64bit] Jan 29 11:35:12.458485 kernel: pci 0000:00:02.0: reg 0x18: [mem 0x80000000-0x8fffffff 64bit pref] Jan 29 11:35:12.458533 kernel: pci 0000:00:02.0: reg 0x20: [io 0x6000-0x603f] Jan 29 11:35:12.458586 kernel: pci 0000:00:08.0: [8086:1911] type 00 class 0x088000 Jan 29 11:35:12.458633 kernel: pci 0000:00:08.0: reg 0x10: [mem 0x9651f000-0x9651ffff 64bit] Jan 29 11:35:12.458684 kernel: pci 0000:00:12.0: [8086:a379] type 00 class 0x118000 Jan 29 11:35:12.458731 kernel: pci 0000:00:12.0: reg 0x10: [mem 0x9651e000-0x9651efff 64bit] Jan 29 11:35:12.458782 kernel: pci 0000:00:14.0: [8086:a36d] type 00 class 0x0c0330 Jan 29 11:35:12.458831 kernel: pci 0000:00:14.0: reg 0x10: [mem 0x96500000-0x9650ffff 64bit] Jan 29 11:35:12.458885 kernel: pci 0000:00:14.0: PME# 
supported from D3hot D3cold Jan 29 11:35:12.458935 kernel: pci 0000:00:14.2: [8086:a36f] type 00 class 0x050000 Jan 29 11:35:12.458983 kernel: pci 0000:00:14.2: reg 0x10: [mem 0x96512000-0x96513fff 64bit] Jan 29 11:35:12.459028 kernel: pci 0000:00:14.2: reg 0x18: [mem 0x9651d000-0x9651dfff 64bit] Jan 29 11:35:12.459080 kernel: pci 0000:00:15.0: [8086:a368] type 00 class 0x0c8000 Jan 29 11:35:12.459128 kernel: pci 0000:00:15.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit] Jan 29 11:35:12.459181 kernel: pci 0000:00:15.1: [8086:a369] type 00 class 0x0c8000 Jan 29 11:35:12.459228 kernel: pci 0000:00:15.1: reg 0x10: [mem 0x00000000-0x00000fff 64bit] Jan 29 11:35:12.459281 kernel: pci 0000:00:16.0: [8086:a360] type 00 class 0x078000 Jan 29 11:35:12.459386 kernel: pci 0000:00:16.0: reg 0x10: [mem 0x9651a000-0x9651afff 64bit] Jan 29 11:35:12.459435 kernel: pci 0000:00:16.0: PME# supported from D3hot Jan 29 11:35:12.459485 kernel: pci 0000:00:16.1: [8086:a361] type 00 class 0x078000 Jan 29 11:35:12.459534 kernel: pci 0000:00:16.1: reg 0x10: [mem 0x96519000-0x96519fff 64bit] Jan 29 11:35:12.459583 kernel: pci 0000:00:16.1: PME# supported from D3hot Jan 29 11:35:12.459633 kernel: pci 0000:00:16.4: [8086:a364] type 00 class 0x078000 Jan 29 11:35:12.459680 kernel: pci 0000:00:16.4: reg 0x10: [mem 0x96518000-0x96518fff 64bit] Jan 29 11:35:12.459726 kernel: pci 0000:00:16.4: PME# supported from D3hot Jan 29 11:35:12.459780 kernel: pci 0000:00:17.0: [8086:a352] type 00 class 0x010601 Jan 29 11:35:12.459826 kernel: pci 0000:00:17.0: reg 0x10: [mem 0x96510000-0x96511fff] Jan 29 11:35:12.459873 kernel: pci 0000:00:17.0: reg 0x14: [mem 0x96517000-0x965170ff] Jan 29 11:35:12.459918 kernel: pci 0000:00:17.0: reg 0x18: [io 0x6090-0x6097] Jan 29 11:35:12.459965 kernel: pci 0000:00:17.0: reg 0x1c: [io 0x6080-0x6083] Jan 29 11:35:12.460011 kernel: pci 0000:00:17.0: reg 0x20: [io 0x6060-0x607f] Jan 29 11:35:12.460057 kernel: pci 0000:00:17.0: reg 0x24: [mem 0x96516000-0x965167ff] Jan 29 11:35:12.460106 kernel: pci 0000:00:17.0: PME# supported from D3hot Jan 29 11:35:12.460158 kernel: pci 0000:00:1b.0: [8086:a340] type 01 class 0x060400 Jan 29 11:35:12.460206 kernel: pci 0000:00:1b.0: PME# supported from D0 D3hot D3cold Jan 29 11:35:12.460257 kernel: pci 0000:00:1b.4: [8086:a32c] type 01 class 0x060400 Jan 29 11:35:12.460308 kernel: pci 0000:00:1b.4: PME# supported from D0 D3hot D3cold Jan 29 11:35:12.460420 kernel: pci 0000:00:1b.5: [8086:a32d] type 01 class 0x060400 Jan 29 11:35:12.460468 kernel: pci 0000:00:1b.5: PME# supported from D0 D3hot D3cold Jan 29 11:35:12.460519 kernel: pci 0000:00:1c.0: [8086:a338] type 01 class 0x060400 Jan 29 11:35:12.460567 kernel: pci 0000:00:1c.0: PME# supported from D0 D3hot D3cold Jan 29 11:35:12.460620 kernel: pci 0000:00:1c.1: [8086:a339] type 01 class 0x060400 Jan 29 11:35:12.460668 kernel: pci 0000:00:1c.1: PME# supported from D0 D3hot D3cold Jan 29 11:35:12.460719 kernel: pci 0000:00:1e.0: [8086:a328] type 00 class 0x078000 Jan 29 11:35:12.460769 kernel: pci 0000:00:1e.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit] Jan 29 11:35:12.460820 kernel: pci 0000:00:1f.0: [8086:a309] type 00 class 0x060100 Jan 29 11:35:12.460870 kernel: pci 0000:00:1f.4: [8086:a323] type 00 class 0x0c0500 Jan 29 11:35:12.460918 kernel: pci 0000:00:1f.4: reg 0x10: [mem 0x96514000-0x965140ff 64bit] Jan 29 11:35:12.460964 kernel: pci 0000:00:1f.4: reg 0x20: [io 0xefa0-0xefbf] Jan 29 11:35:12.461018 kernel: pci 0000:00:1f.5: [8086:a324] type 00 class 0x0c8000 Jan 29 11:35:12.461067 kernel: pci 
0000:00:1f.5: reg 0x10: [mem 0xfe010000-0xfe010fff] Jan 29 11:35:12.461115 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Jan 29 11:35:12.461170 kernel: pci 0000:02:00.0: [15b3:1015] type 00 class 0x020000 Jan 29 11:35:12.461219 kernel: pci 0000:02:00.0: reg 0x10: [mem 0x92000000-0x93ffffff 64bit pref] Jan 29 11:35:12.461267 kernel: pci 0000:02:00.0: reg 0x30: [mem 0x96200000-0x962fffff pref] Jan 29 11:35:12.461345 kernel: pci 0000:02:00.0: PME# supported from D3cold Jan 29 11:35:12.461423 kernel: pci 0000:02:00.0: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref] Jan 29 11:35:12.461473 kernel: pci 0000:02:00.0: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs) Jan 29 11:35:12.461529 kernel: pci 0000:02:00.1: [15b3:1015] type 00 class 0x020000 Jan 29 11:35:12.461577 kernel: pci 0000:02:00.1: reg 0x10: [mem 0x90000000-0x91ffffff 64bit pref] Jan 29 11:35:12.461625 kernel: pci 0000:02:00.1: reg 0x30: [mem 0x96100000-0x961fffff pref] Jan 29 11:35:12.461673 kernel: pci 0000:02:00.1: PME# supported from D3cold Jan 29 11:35:12.461720 kernel: pci 0000:02:00.1: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref] Jan 29 11:35:12.461767 kernel: pci 0000:02:00.1: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs) Jan 29 11:35:12.461818 kernel: pci 0000:00:01.1: PCI bridge to [bus 02] Jan 29 11:35:12.461865 kernel: pci 0000:00:01.1: bridge window [mem 0x96100000-0x962fffff] Jan 29 11:35:12.461911 kernel: pci 0000:00:01.1: bridge window [mem 0x90000000-0x93ffffff 64bit pref] Jan 29 11:35:12.461959 kernel: pci 0000:00:1b.0: PCI bridge to [bus 03] Jan 29 11:35:12.462010 kernel: pci 0000:04:00.0: working around ROM BAR overlap defect Jan 29 11:35:12.462059 kernel: pci 0000:04:00.0: [8086:1533] type 00 class 0x020000 Jan 29 11:35:12.462106 kernel: pci 0000:04:00.0: reg 0x10: [mem 0x96400000-0x9647ffff] Jan 29 11:35:12.462157 kernel: pci 0000:04:00.0: reg 0x18: [io 0x5000-0x501f] Jan 29 11:35:12.462204 kernel: pci 0000:04:00.0: reg 0x1c: [mem 0x96480000-0x96483fff] Jan 29 11:35:12.462252 kernel: pci 0000:04:00.0: PME# supported from D0 D3hot D3cold Jan 29 11:35:12.462302 kernel: pci 0000:00:1b.4: PCI bridge to [bus 04] Jan 29 11:35:12.462401 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff] Jan 29 11:35:12.462450 kernel: pci 0000:00:1b.4: bridge window [mem 0x96400000-0x964fffff] Jan 29 11:35:12.462502 kernel: pci 0000:05:00.0: working around ROM BAR overlap defect Jan 29 11:35:12.462551 kernel: pci 0000:05:00.0: [8086:1533] type 00 class 0x020000 Jan 29 11:35:12.462602 kernel: pci 0000:05:00.0: reg 0x10: [mem 0x96300000-0x9637ffff] Jan 29 11:35:12.462650 kernel: pci 0000:05:00.0: reg 0x18: [io 0x4000-0x401f] Jan 29 11:35:12.462697 kernel: pci 0000:05:00.0: reg 0x1c: [mem 0x96380000-0x96383fff] Jan 29 11:35:12.462745 kernel: pci 0000:05:00.0: PME# supported from D0 D3hot D3cold Jan 29 11:35:12.462793 kernel: pci 0000:00:1b.5: PCI bridge to [bus 05] Jan 29 11:35:12.462841 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff] Jan 29 11:35:12.462888 kernel: pci 0000:00:1b.5: bridge window [mem 0x96300000-0x963fffff] Jan 29 11:35:12.462938 kernel: pci 0000:00:1c.0: PCI bridge to [bus 06] Jan 29 11:35:12.462995 kernel: pci 0000:07:00.0: [1a03:1150] type 01 class 0x060400 Jan 29 11:35:12.463053 kernel: pci 0000:07:00.0: enabling Extended Tags Jan 29 11:35:12.463103 kernel: pci 0000:07:00.0: supports D1 D2 Jan 29 11:35:12.463150 kernel: pci 0000:07:00.0: PME# supported from D0 D1 D2 D3hot D3cold Jan 29 11:35:12.463198 kernel: pci 
0000:00:1c.1: PCI bridge to [bus 07-08] Jan 29 11:35:12.463245 kernel: pci 0000:00:1c.1: bridge window [io 0x3000-0x3fff] Jan 29 11:35:12.463295 kernel: pci 0000:00:1c.1: bridge window [mem 0x95000000-0x960fffff] Jan 29 11:35:12.463384 kernel: pci_bus 0000:08: extended config space not accessible Jan 29 11:35:12.463439 kernel: pci 0000:08:00.0: [1a03:2000] type 00 class 0x030000 Jan 29 11:35:12.463491 kernel: pci 0000:08:00.0: reg 0x10: [mem 0x95000000-0x95ffffff] Jan 29 11:35:12.463540 kernel: pci 0000:08:00.0: reg 0x14: [mem 0x96000000-0x9601ffff] Jan 29 11:35:12.463591 kernel: pci 0000:08:00.0: reg 0x18: [io 0x3000-0x307f] Jan 29 11:35:12.463639 kernel: pci 0000:08:00.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jan 29 11:35:12.463690 kernel: pci 0000:08:00.0: supports D1 D2 Jan 29 11:35:12.463745 kernel: pci 0000:08:00.0: PME# supported from D0 D1 D2 D3hot D3cold Jan 29 11:35:12.463793 kernel: pci 0000:07:00.0: PCI bridge to [bus 08] Jan 29 11:35:12.463843 kernel: pci 0000:07:00.0: bridge window [io 0x3000-0x3fff] Jan 29 11:35:12.463891 kernel: pci 0000:07:00.0: bridge window [mem 0x95000000-0x960fffff] Jan 29 11:35:12.463899 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 0 Jan 29 11:35:12.463905 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 1 Jan 29 11:35:12.463911 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 0 Jan 29 11:35:12.463917 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 0 Jan 29 11:35:12.463924 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 0 Jan 29 11:35:12.463930 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 0 Jan 29 11:35:12.463936 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 0 Jan 29 11:35:12.463942 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 0 Jan 29 11:35:12.463948 kernel: iommu: Default domain type: Translated Jan 29 11:35:12.463954 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 29 11:35:12.463960 kernel: PCI: Using ACPI for IRQ routing Jan 29 11:35:12.463965 kernel: PCI: pci_cache_line_size set to 64 bytes Jan 29 11:35:12.463971 kernel: e820: reserve RAM buffer [mem 0x00099800-0x0009ffff] Jan 29 11:35:12.463978 kernel: e820: reserve RAM buffer [mem 0x6eb2a000-0x6fffffff] Jan 29 11:35:12.463983 kernel: e820: reserve RAM buffer [mem 0x77fc5000-0x77ffffff] Jan 29 11:35:12.463989 kernel: e820: reserve RAM buffer [mem 0x79231000-0x7bffffff] Jan 29 11:35:12.463994 kernel: e820: reserve RAM buffer [mem 0x7bf00000-0x7bffffff] Jan 29 11:35:12.464000 kernel: e820: reserve RAM buffer [mem 0x87f800000-0x87fffffff] Jan 29 11:35:12.464051 kernel: pci 0000:08:00.0: vgaarb: setting as boot VGA device Jan 29 11:35:12.464100 kernel: pci 0000:08:00.0: vgaarb: bridge control possible Jan 29 11:35:12.464151 kernel: pci 0000:08:00.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jan 29 11:35:12.464161 kernel: vgaarb: loaded Jan 29 11:35:12.464167 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0 Jan 29 11:35:12.464173 kernel: hpet0: 8 comparators, 64-bit 24.000000 MHz counter Jan 29 11:35:12.464178 kernel: clocksource: Switched to clocksource tsc-early Jan 29 11:35:12.464184 kernel: VFS: Disk quotas dquot_6.6.0 Jan 29 11:35:12.464189 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 29 11:35:12.464195 kernel: pnp: PnP ACPI init Jan 29 11:35:12.464244 kernel: system 00:00: [mem 0x40000000-0x403fffff] has been reserved Jan 29 11:35:12.464292 kernel: pnp 00:02: [dma 0 disabled] Jan 29 11:35:12.464378 
kernel: pnp 00:03: [dma 0 disabled] Jan 29 11:35:12.464423 kernel: system 00:04: [io 0x0680-0x069f] has been reserved Jan 29 11:35:12.464467 kernel: system 00:04: [io 0x164e-0x164f] has been reserved Jan 29 11:35:12.464515 kernel: system 00:05: [io 0x1854-0x1857] has been reserved Jan 29 11:35:12.464562 kernel: system 00:06: [mem 0xfed10000-0xfed17fff] has been reserved Jan 29 11:35:12.464605 kernel: system 00:06: [mem 0xfed18000-0xfed18fff] has been reserved Jan 29 11:35:12.464651 kernel: system 00:06: [mem 0xfed19000-0xfed19fff] has been reserved Jan 29 11:35:12.464695 kernel: system 00:06: [mem 0xe0000000-0xefffffff] has been reserved Jan 29 11:35:12.464739 kernel: system 00:06: [mem 0xfed20000-0xfed3ffff] has been reserved Jan 29 11:35:12.464783 kernel: system 00:06: [mem 0xfed90000-0xfed93fff] could not be reserved Jan 29 11:35:12.464828 kernel: system 00:06: [mem 0xfed45000-0xfed8ffff] has been reserved Jan 29 11:35:12.464871 kernel: system 00:06: [mem 0xfee00000-0xfeefffff] could not be reserved Jan 29 11:35:12.464916 kernel: system 00:07: [io 0x1800-0x18fe] could not be reserved Jan 29 11:35:12.464961 kernel: system 00:07: [mem 0xfd000000-0xfd69ffff] has been reserved Jan 29 11:35:12.465003 kernel: system 00:07: [mem 0xfd6c0000-0xfd6cffff] has been reserved Jan 29 11:35:12.465046 kernel: system 00:07: [mem 0xfd6f0000-0xfdffffff] has been reserved Jan 29 11:35:12.465088 kernel: system 00:07: [mem 0xfe000000-0xfe01ffff] could not be reserved Jan 29 11:35:12.465130 kernel: system 00:07: [mem 0xfe200000-0xfe7fffff] has been reserved Jan 29 11:35:12.465172 kernel: system 00:07: [mem 0xff000000-0xffffffff] has been reserved Jan 29 11:35:12.465219 kernel: system 00:08: [io 0x2000-0x20fe] has been reserved Jan 29 11:35:12.465229 kernel: pnp: PnP ACPI: found 10 devices Jan 29 11:35:12.465235 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 29 11:35:12.465241 kernel: NET: Registered PF_INET protocol family Jan 29 11:35:12.465247 kernel: IP idents hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 29 11:35:12.465253 kernel: tcp_listen_portaddr_hash hash table entries: 16384 (order: 6, 262144 bytes, linear) Jan 29 11:35:12.465259 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 29 11:35:12.465264 kernel: TCP established hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 29 11:35:12.465270 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Jan 29 11:35:12.465277 kernel: TCP: Hash tables configured (established 262144 bind 65536) Jan 29 11:35:12.465283 kernel: UDP hash table entries: 16384 (order: 7, 524288 bytes, linear) Jan 29 11:35:12.465289 kernel: UDP-Lite hash table entries: 16384 (order: 7, 524288 bytes, linear) Jan 29 11:35:12.465297 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 29 11:35:12.465303 kernel: NET: Registered PF_XDP protocol family Jan 29 11:35:12.465386 kernel: pci 0000:00:15.0: BAR 0: assigned [mem 0x7f800000-0x7f800fff 64bit] Jan 29 11:35:12.465434 kernel: pci 0000:00:15.1: BAR 0: assigned [mem 0x7f801000-0x7f801fff 64bit] Jan 29 11:35:12.465481 kernel: pci 0000:00:1e.0: BAR 0: assigned [mem 0x7f802000-0x7f802fff 64bit] Jan 29 11:35:12.465528 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Jan 29 11:35:12.465579 kernel: pci 0000:02:00.0: BAR 7: no space for [mem size 0x00800000 64bit pref] Jan 29 11:35:12.465628 kernel: pci 0000:02:00.0: BAR 7: failed to assign [mem size 0x00800000 64bit pref] Jan 29 11:35:12.465678 
kernel: pci 0000:02:00.1: BAR 7: no space for [mem size 0x00800000 64bit pref] Jan 29 11:35:12.465725 kernel: pci 0000:02:00.1: BAR 7: failed to assign [mem size 0x00800000 64bit pref] Jan 29 11:35:12.465776 kernel: pci 0000:00:01.1: PCI bridge to [bus 02] Jan 29 11:35:12.465822 kernel: pci 0000:00:01.1: bridge window [mem 0x96100000-0x962fffff] Jan 29 11:35:12.465871 kernel: pci 0000:00:01.1: bridge window [mem 0x90000000-0x93ffffff 64bit pref] Jan 29 11:35:12.465919 kernel: pci 0000:00:1b.0: PCI bridge to [bus 03] Jan 29 11:35:12.465966 kernel: pci 0000:00:1b.4: PCI bridge to [bus 04] Jan 29 11:35:12.466012 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff] Jan 29 11:35:12.466059 kernel: pci 0000:00:1b.4: bridge window [mem 0x96400000-0x964fffff] Jan 29 11:35:12.466105 kernel: pci 0000:00:1b.5: PCI bridge to [bus 05] Jan 29 11:35:12.466155 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff] Jan 29 11:35:12.466201 kernel: pci 0000:00:1b.5: bridge window [mem 0x96300000-0x963fffff] Jan 29 11:35:12.466248 kernel: pci 0000:00:1c.0: PCI bridge to [bus 06] Jan 29 11:35:12.466298 kernel: pci 0000:07:00.0: PCI bridge to [bus 08] Jan 29 11:35:12.466381 kernel: pci 0000:07:00.0: bridge window [io 0x3000-0x3fff] Jan 29 11:35:12.466429 kernel: pci 0000:07:00.0: bridge window [mem 0x95000000-0x960fffff] Jan 29 11:35:12.466475 kernel: pci 0000:00:1c.1: PCI bridge to [bus 07-08] Jan 29 11:35:12.466523 kernel: pci 0000:00:1c.1: bridge window [io 0x3000-0x3fff] Jan 29 11:35:12.466569 kernel: pci 0000:00:1c.1: bridge window [mem 0x95000000-0x960fffff] Jan 29 11:35:12.466615 kernel: pci_bus 0000:00: Some PCI device resources are unassigned, try booting with pci=realloc Jan 29 11:35:12.466657 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jan 29 11:35:12.466699 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jan 29 11:35:12.466740 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jan 29 11:35:12.466782 kernel: pci_bus 0000:00: resource 7 [mem 0x7f800000-0xdfffffff window] Jan 29 11:35:12.466823 kernel: pci_bus 0000:00: resource 8 [mem 0xfc800000-0xfe7fffff window] Jan 29 11:35:12.466870 kernel: pci_bus 0000:02: resource 1 [mem 0x96100000-0x962fffff] Jan 29 11:35:12.466914 kernel: pci_bus 0000:02: resource 2 [mem 0x90000000-0x93ffffff 64bit pref] Jan 29 11:35:12.466966 kernel: pci_bus 0000:04: resource 0 [io 0x5000-0x5fff] Jan 29 11:35:12.467009 kernel: pci_bus 0000:04: resource 1 [mem 0x96400000-0x964fffff] Jan 29 11:35:12.467058 kernel: pci_bus 0000:05: resource 0 [io 0x4000-0x4fff] Jan 29 11:35:12.467101 kernel: pci_bus 0000:05: resource 1 [mem 0x96300000-0x963fffff] Jan 29 11:35:12.467148 kernel: pci_bus 0000:07: resource 0 [io 0x3000-0x3fff] Jan 29 11:35:12.467192 kernel: pci_bus 0000:07: resource 1 [mem 0x95000000-0x960fffff] Jan 29 11:35:12.467240 kernel: pci_bus 0000:08: resource 0 [io 0x3000-0x3fff] Jan 29 11:35:12.467286 kernel: pci_bus 0000:08: resource 1 [mem 0x95000000-0x960fffff] Jan 29 11:35:12.467296 kernel: PCI: CLS 64 bytes, default 64 Jan 29 11:35:12.467302 kernel: DMAR: No ATSR found Jan 29 11:35:12.467308 kernel: DMAR: No SATC found Jan 29 11:35:12.467335 kernel: DMAR: IOMMU feature fl1gp_support inconsistent Jan 29 11:35:12.467341 kernel: DMAR: IOMMU feature pgsel_inv inconsistent Jan 29 11:35:12.467347 kernel: DMAR: IOMMU feature nwfs inconsistent Jan 29 11:35:12.467355 kernel: DMAR: IOMMU feature pasid inconsistent Jan 29 11:35:12.467361 kernel: DMAR: IOMMU feature eafs inconsistent Jan 29 11:35:12.467380 kernel: DMAR: 
IOMMU feature prs inconsistent Jan 29 11:35:12.467386 kernel: DMAR: IOMMU feature nest inconsistent Jan 29 11:35:12.467391 kernel: DMAR: IOMMU feature mts inconsistent Jan 29 11:35:12.467397 kernel: DMAR: IOMMU feature sc_support inconsistent Jan 29 11:35:12.467403 kernel: DMAR: IOMMU feature dev_iotlb_support inconsistent Jan 29 11:35:12.467408 kernel: DMAR: dmar0: Using Queued invalidation Jan 29 11:35:12.467414 kernel: DMAR: dmar1: Using Queued invalidation Jan 29 11:35:12.467461 kernel: pci 0000:00:02.0: Adding to iommu group 0 Jan 29 11:35:12.467513 kernel: pci 0000:00:00.0: Adding to iommu group 1 Jan 29 11:35:12.467560 kernel: pci 0000:00:01.0: Adding to iommu group 2 Jan 29 11:35:12.467608 kernel: pci 0000:00:01.1: Adding to iommu group 2 Jan 29 11:35:12.467654 kernel: pci 0000:00:08.0: Adding to iommu group 3 Jan 29 11:35:12.467701 kernel: pci 0000:00:12.0: Adding to iommu group 4 Jan 29 11:35:12.467747 kernel: pci 0000:00:14.0: Adding to iommu group 5 Jan 29 11:35:12.467794 kernel: pci 0000:00:14.2: Adding to iommu group 5 Jan 29 11:35:12.467840 kernel: pci 0000:00:15.0: Adding to iommu group 6 Jan 29 11:35:12.467889 kernel: pci 0000:00:15.1: Adding to iommu group 6 Jan 29 11:35:12.467934 kernel: pci 0000:00:16.0: Adding to iommu group 7 Jan 29 11:35:12.467982 kernel: pci 0000:00:16.1: Adding to iommu group 7 Jan 29 11:35:12.468029 kernel: pci 0000:00:16.4: Adding to iommu group 7 Jan 29 11:35:12.468075 kernel: pci 0000:00:17.0: Adding to iommu group 8 Jan 29 11:35:12.468122 kernel: pci 0000:00:1b.0: Adding to iommu group 9 Jan 29 11:35:12.468169 kernel: pci 0000:00:1b.4: Adding to iommu group 10 Jan 29 11:35:12.468216 kernel: pci 0000:00:1b.5: Adding to iommu group 11 Jan 29 11:35:12.468264 kernel: pci 0000:00:1c.0: Adding to iommu group 12 Jan 29 11:35:12.468337 kernel: pci 0000:00:1c.1: Adding to iommu group 13 Jan 29 11:35:12.468397 kernel: pci 0000:00:1e.0: Adding to iommu group 14 Jan 29 11:35:12.468444 kernel: pci 0000:00:1f.0: Adding to iommu group 15 Jan 29 11:35:12.468490 kernel: pci 0000:00:1f.4: Adding to iommu group 15 Jan 29 11:35:12.468537 kernel: pci 0000:00:1f.5: Adding to iommu group 15 Jan 29 11:35:12.468585 kernel: pci 0000:02:00.0: Adding to iommu group 2 Jan 29 11:35:12.468633 kernel: pci 0000:02:00.1: Adding to iommu group 2 Jan 29 11:35:12.468685 kernel: pci 0000:04:00.0: Adding to iommu group 16 Jan 29 11:35:12.468733 kernel: pci 0000:05:00.0: Adding to iommu group 17 Jan 29 11:35:12.468781 kernel: pci 0000:07:00.0: Adding to iommu group 18 Jan 29 11:35:12.468830 kernel: pci 0000:08:00.0: Adding to iommu group 18 Jan 29 11:35:12.468838 kernel: DMAR: Intel(R) Virtualization Technology for Directed I/O Jan 29 11:35:12.468844 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Jan 29 11:35:12.468850 kernel: software IO TLB: mapped [mem 0x0000000073fc5000-0x0000000077fc5000] (64MB) Jan 29 11:35:12.468856 kernel: RAPL PMU: API unit is 2^-32 Joules, 4 fixed counters, 655360 ms ovfl timer Jan 29 11:35:12.468864 kernel: RAPL PMU: hw unit of domain pp0-core 2^-14 Joules Jan 29 11:35:12.468870 kernel: RAPL PMU: hw unit of domain package 2^-14 Joules Jan 29 11:35:12.468875 kernel: RAPL PMU: hw unit of domain dram 2^-14 Joules Jan 29 11:35:12.468881 kernel: RAPL PMU: hw unit of domain pp1-gpu 2^-14 Joules Jan 29 11:35:12.468931 kernel: platform rtc_cmos: registered platform RTC device (no PNP device found) Jan 29 11:35:12.468940 kernel: Initialise system trusted keyrings Jan 29 11:35:12.468946 kernel: workingset: timestamp_bits=39 max_order=23 
bucket_order=0 Jan 29 11:35:12.468951 kernel: Key type asymmetric registered Jan 29 11:35:12.468957 kernel: Asymmetric key parser 'x509' registered Jan 29 11:35:12.468964 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jan 29 11:35:12.468970 kernel: io scheduler mq-deadline registered Jan 29 11:35:12.468976 kernel: io scheduler kyber registered Jan 29 11:35:12.468981 kernel: io scheduler bfq registered Jan 29 11:35:12.469029 kernel: pcieport 0000:00:01.0: PME: Signaling with IRQ 122 Jan 29 11:35:12.469077 kernel: pcieport 0000:00:01.1: PME: Signaling with IRQ 123 Jan 29 11:35:12.469124 kernel: pcieport 0000:00:1b.0: PME: Signaling with IRQ 124 Jan 29 11:35:12.469171 kernel: pcieport 0000:00:1b.4: PME: Signaling with IRQ 125 Jan 29 11:35:12.469221 kernel: pcieport 0000:00:1b.5: PME: Signaling with IRQ 126 Jan 29 11:35:12.469268 kernel: pcieport 0000:00:1c.0: PME: Signaling with IRQ 127 Jan 29 11:35:12.469399 kernel: pcieport 0000:00:1c.1: PME: Signaling with IRQ 128 Jan 29 11:35:12.469452 kernel: thermal LNXTHERM:00: registered as thermal_zone0 Jan 29 11:35:12.469461 kernel: ACPI: thermal: Thermal Zone [TZ00] (28 C) Jan 29 11:35:12.469467 kernel: ERST: Error Record Serialization Table (ERST) support is initialized. Jan 29 11:35:12.469473 kernel: pstore: Using crash dump compression: deflate Jan 29 11:35:12.469481 kernel: pstore: Registered erst as persistent store backend Jan 29 11:35:12.469487 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 29 11:35:12.469492 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 29 11:35:12.469498 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 29 11:35:12.469504 kernel: 00:03: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Jan 29 11:35:12.469552 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x1B, rev-id 16) Jan 29 11:35:12.469560 kernel: i8042: PNP: No PS/2 controller found. Jan 29 11:35:12.469603 kernel: rtc_cmos rtc_cmos: RTC can wake from S4 Jan 29 11:35:12.469649 kernel: rtc_cmos rtc_cmos: registered as rtc0 Jan 29 11:35:12.469692 kernel: rtc_cmos rtc_cmos: setting system clock to 2025-01-29T11:35:11 UTC (1738150511) Jan 29 11:35:12.469735 kernel: rtc_cmos rtc_cmos: alarms up to one month, y3k, 114 bytes nvram Jan 29 11:35:12.469743 kernel: intel_pstate: Intel P-state driver initializing Jan 29 11:35:12.469749 kernel: intel_pstate: Disabling energy efficiency optimization Jan 29 11:35:12.469755 kernel: intel_pstate: HWP enabled Jan 29 11:35:12.469761 kernel: NET: Registered PF_INET6 protocol family Jan 29 11:35:12.469766 kernel: Segment Routing with IPv6 Jan 29 11:35:12.469772 kernel: In-situ OAM (IOAM) with IPv6 Jan 29 11:35:12.469780 kernel: NET: Registered PF_PACKET protocol family Jan 29 11:35:12.469786 kernel: Key type dns_resolver registered Jan 29 11:35:12.469791 kernel: microcode: Microcode Update Driver: v2.2. 
Jan 29 11:35:12.469797 kernel: IPI shorthand broadcast: enabled Jan 29 11:35:12.469803 kernel: sched_clock: Marking stable (2725000694, 1457395427)->(4688428546, -506032425) Jan 29 11:35:12.469809 kernel: registered taskstats version 1 Jan 29 11:35:12.469815 kernel: Loading compiled-in X.509 certificates Jan 29 11:35:12.469820 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: de92a621108c58f5771c86c5c3ccb1aa0728ed55' Jan 29 11:35:12.469826 kernel: Key type .fscrypt registered Jan 29 11:35:12.469833 kernel: Key type fscrypt-provisioning registered Jan 29 11:35:12.469839 kernel: ima: Allocated hash algorithm: sha1 Jan 29 11:35:12.469844 kernel: ima: No architecture policies found Jan 29 11:35:12.469850 kernel: clk: Disabling unused clocks Jan 29 11:35:12.469856 kernel: Freeing unused kernel image (initmem) memory: 42972K Jan 29 11:35:12.469861 kernel: Write protecting the kernel read-only data: 36864k Jan 29 11:35:12.469867 kernel: Freeing unused kernel image (rodata/data gap) memory: 1840K Jan 29 11:35:12.469873 kernel: Run /init as init process Jan 29 11:35:12.469878 kernel: with arguments: Jan 29 11:35:12.469885 kernel: /init Jan 29 11:35:12.469891 kernel: with environment: Jan 29 11:35:12.469897 kernel: HOME=/ Jan 29 11:35:12.469903 kernel: TERM=linux Jan 29 11:35:12.469909 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jan 29 11:35:12.469915 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 29 11:35:12.469922 systemd[1]: Detected architecture x86-64. Jan 29 11:35:12.469929 systemd[1]: Running in initrd. Jan 29 11:35:12.469935 systemd[1]: No hostname configured, using default hostname. Jan 29 11:35:12.469941 systemd[1]: Hostname set to . Jan 29 11:35:12.469947 systemd[1]: Initializing machine ID from random generator. Jan 29 11:35:12.469953 systemd[1]: Queued start job for default target initrd.target. Jan 29 11:35:12.469959 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 29 11:35:12.469965 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 29 11:35:12.469971 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 29 11:35:12.469979 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 29 11:35:12.469985 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 29 11:35:12.469991 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 29 11:35:12.469997 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 29 11:35:12.470004 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 29 11:35:12.470010 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 29 11:35:12.470015 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 29 11:35:12.470022 systemd[1]: Reached target paths.target - Path Units. Jan 29 11:35:12.470028 systemd[1]: Reached target slices.target - Slice Units. 
Jan 29 11:35:12.470034 systemd[1]: Reached target swap.target - Swaps. Jan 29 11:35:12.470040 systemd[1]: Reached target timers.target - Timer Units. Jan 29 11:35:12.470046 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 29 11:35:12.470052 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 29 11:35:12.470058 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 29 11:35:12.470064 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 29 11:35:12.470070 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 29 11:35:12.470077 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 29 11:35:12.470083 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 29 11:35:12.470089 systemd[1]: Reached target sockets.target - Socket Units. Jan 29 11:35:12.470095 kernel: tsc: Refined TSC clocksource calibration: 3407.985 MHz Jan 29 11:35:12.470101 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fc5a980c, max_idle_ns: 440795300013 ns Jan 29 11:35:12.470107 kernel: clocksource: Switched to clocksource tsc Jan 29 11:35:12.470113 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 29 11:35:12.470119 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 29 11:35:12.470126 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 29 11:35:12.470132 systemd[1]: Starting systemd-fsck-usr.service... Jan 29 11:35:12.470138 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 29 11:35:12.470154 systemd-journald[269]: Collecting audit messages is disabled. Jan 29 11:35:12.470170 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 29 11:35:12.470177 systemd-journald[269]: Journal started Jan 29 11:35:12.470193 systemd-journald[269]: Runtime Journal (/run/log/journal/bffac9580d864c6ba7c3736a55de36ef) is 8.0M, max 639.1M, 631.1M free. Jan 29 11:35:12.472711 systemd-modules-load[271]: Inserted module 'overlay' Jan 29 11:35:12.490432 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 11:35:12.512342 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 29 11:35:12.517316 kernel: Bridge firewalling registered Jan 29 11:35:12.517352 systemd[1]: Started systemd-journald.service - Journal Service. Jan 29 11:35:12.524541 systemd-modules-load[271]: Inserted module 'br_netfilter' Jan 29 11:35:12.542821 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 29 11:35:12.543095 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 29 11:35:12.543197 systemd[1]: Finished systemd-fsck-usr.service. Jan 29 11:35:12.543298 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 29 11:35:12.551490 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 29 11:35:12.579664 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 29 11:35:12.633845 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 29 11:35:12.645464 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 11:35:12.663842 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
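The bridge warning above ("filtering via arp/ip/ip6tables is no longer available by default") is effectively a how-to hint: br_netfilter has to be loaded explicitly before bridged traffic is visible to iptables. One hedged way to arrange that on a systemd host is a modules-load.d fragment; the sketch below is illustrative, and the file path is only the conventional location, not something taken from this log.

```python
# Minimal sketch (assumes root and a systemd host): ensure br_netfilter is
# loaded on every boot by dropping a modules-load.d(5) fragment.
from pathlib import Path

conf = Path("/etc/modules-load.d/br_netfilter.conf")   # conventional location
conf.parent.mkdir(parents=True, exist_ok=True)
conf.write_text("br_netfilter\n")   # systemd-modules-load reads this at boot
```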
Jan 29 11:35:12.683817 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 29 11:35:12.713984 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 29 11:35:12.756668 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 29 11:35:12.759765 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 29 11:35:12.761546 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 29 11:35:12.769842 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 29 11:35:12.774528 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 11:35:12.775084 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 29 11:35:12.781250 systemd-resolved[298]: Positive Trust Anchors: Jan 29 11:35:12.781254 systemd-resolved[298]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 29 11:35:12.781277 systemd-resolved[298]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 29 11:35:12.782885 systemd-resolved[298]: Defaulting to hostname 'linux'. Jan 29 11:35:12.796728 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 29 11:35:12.813622 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 29 11:35:12.940100 dracut-cmdline[312]: dracut-dracut-053 Jan 29 11:35:12.947515 dracut-cmdline[312]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=519b8fded83181f8e61f734d5291f916d7548bfba9487c78bcb50d002d81719d Jan 29 11:35:13.119329 kernel: SCSI subsystem initialized Jan 29 11:35:13.130298 kernel: Loading iSCSI transport class v2.0-870. Jan 29 11:35:13.142382 kernel: iscsi: registered transport (tcp) Jan 29 11:35:13.162458 kernel: iscsi: registered transport (qla4xxx) Jan 29 11:35:13.162475 kernel: QLogic iSCSI HBA Driver Jan 29 11:35:13.185402 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 29 11:35:13.196561 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 29 11:35:13.294890 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
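The dracut-cmdline hook above echoes the full kernel command line it will act on. As a rough illustration of how such a string breaks down into flags and key=value parameters (this is our own sketch, not dracut's or systemd's actual parser, and it ignores quoting subtleties):

```python
# Illustrative parser for a kernel command line like the one logged above.
import shlex

def parse_cmdline(cmdline: str):
    flags, params = [], {}
    for token in shlex.split(cmdline):
        if "=" in token:
            key, _, value = token.partition("=")
            params[key] = value          # e.g. root -> LABEL=ROOT
        else:
            flags.append(token)          # bare switches without a value
    return flags, params

flags, params = parse_cmdline(
    "BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr "
    "root=LABEL=ROOT console=ttyS1,115200n8 flatcar.first_boot=detected"
)
print(params["root"])                    # LABEL=ROOT
print("flatcar.first_boot" in params)    # True
```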
Jan 29 11:35:13.294919 kernel: device-mapper: uevent: version 1.0.3 Jan 29 11:35:13.303651 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 29 11:35:13.339326 kernel: raid6: avx2x4 gen() 53758 MB/s Jan 29 11:35:13.360326 kernel: raid6: avx2x2 gen() 54886 MB/s Jan 29 11:35:13.386424 kernel: raid6: avx2x1 gen() 45840 MB/s Jan 29 11:35:13.386444 kernel: raid6: using algorithm avx2x2 gen() 54886 MB/s Jan 29 11:35:13.413510 kernel: raid6: .... xor() 30432 MB/s, rmw enabled Jan 29 11:35:13.413529 kernel: raid6: using avx2x2 recovery algorithm Jan 29 11:35:13.434299 kernel: xor: automatically using best checksumming function avx Jan 29 11:35:13.539343 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 29 11:35:13.545079 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 29 11:35:13.566717 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 29 11:35:13.575875 systemd-udevd[498]: Using default interface naming scheme 'v255'. Jan 29 11:35:13.587629 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 29 11:35:13.612486 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 29 11:35:13.651983 dracut-pre-trigger[510]: rd.md=0: removing MD RAID activation Jan 29 11:35:13.669804 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 29 11:35:13.679598 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 29 11:35:13.782707 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 29 11:35:13.806626 kernel: pps_core: LinuxPPS API ver. 1 registered Jan 29 11:35:13.806678 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Jan 29 11:35:13.806692 kernel: cryptd: max_cpu_qlen set to 1000 Jan 29 11:35:13.822298 kernel: PTP clock support registered Jan 29 11:35:13.822346 kernel: ACPI: bus type USB registered Jan 29 11:35:13.827298 kernel: usbcore: registered new interface driver usbfs Jan 29 11:35:13.827316 kernel: usbcore: registered new interface driver hub Jan 29 11:35:13.827324 kernel: usbcore: registered new device driver usb Jan 29 11:35:13.846302 kernel: libata version 3.00 loaded. Jan 29 11:35:13.862634 kernel: AVX2 version of gcm_enc/dec engaged. 
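The raid6 lines above show the kernel benchmarking its gen() routines and keeping the fastest one (avx2x2 at 54886 MB/s here). A small sketch of the same "measure everything, pick the maximum" selection, parsing the throughput figures as they appear in the log; the regex and sample strings are ours, not kernel code:

```python
# Pick the fastest raid6 gen() implementation from benchmark log lines.
import re

lines = [
    "raid6: avx2x4 gen() 53758 MB/s",
    "raid6: avx2x2 gen() 54886 MB/s",
    "raid6: avx2x1 gen() 45840 MB/s",
]

pattern = re.compile(r"raid6: (\S+) gen\(\) (\d+) MB/s")
results = {m.group(1): int(m.group(2)) for m in map(pattern.search, lines) if m}
best = max(results, key=results.get)
print(f"raid6: using algorithm {best} gen() {results[best]} MB/s")
# -> raid6: using algorithm avx2x2 gen() 54886 MB/s
```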
Jan 29 11:35:13.862733 kernel: AES CTR mode by8 optimization enabled Jan 29 11:35:13.866301 kernel: ahci 0000:00:17.0: version 3.0 Jan 29 11:35:14.073864 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller Jan 29 11:35:14.073944 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 1 Jan 29 11:35:14.074007 kernel: ahci 0000:00:17.0: AHCI 0001.0301 32 slots 8 ports 6 Gbps 0xff impl SATA mode Jan 29 11:35:14.074068 kernel: xhci_hcd 0000:00:14.0: hcc params 0x200077c1 hci version 0x110 quirks 0x0000000000009810 Jan 29 11:35:14.074136 kernel: ahci 0000:00:17.0: flags: 64bit ncq sntf clo only pio slum part ems deso sadm sds apst Jan 29 11:35:14.074214 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller Jan 29 11:35:14.074272 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 2 Jan 29 11:35:14.074339 kernel: xhci_hcd 0000:00:14.0: Host supports USB 3.1 Enhanced SuperSpeed Jan 29 11:35:14.074397 kernel: scsi host0: ahci Jan 29 11:35:14.074459 kernel: hub 1-0:1.0: USB hub found Jan 29 11:35:14.074528 kernel: scsi host1: ahci Jan 29 11:35:14.074588 kernel: hub 1-0:1.0: 16 ports detected Jan 29 11:35:14.074652 kernel: scsi host2: ahci Jan 29 11:35:14.074709 kernel: hub 2-0:1.0: USB hub found Jan 29 11:35:14.074776 kernel: scsi host3: ahci Jan 29 11:35:14.074833 kernel: hub 2-0:1.0: 10 ports detected Jan 29 11:35:14.074895 kernel: scsi host4: ahci Jan 29 11:35:14.074953 kernel: igb: Intel(R) Gigabit Ethernet Network Driver Jan 29 11:35:14.074962 kernel: scsi host5: ahci Jan 29 11:35:14.075021 kernel: igb: Copyright (c) 2007-2014 Intel Corporation. Jan 29 11:35:14.075029 kernel: scsi host6: ahci Jan 29 11:35:14.075083 kernel: scsi host7: ahci Jan 29 11:35:14.075137 kernel: ata1: SATA max UDMA/133 abar m2048@0x96516000 port 0x96516100 irq 129 Jan 29 11:35:14.075145 kernel: ata2: SATA max UDMA/133 abar m2048@0x96516000 port 0x96516180 irq 129 Jan 29 11:35:14.075152 kernel: pps pps0: new PPS source ptp0 Jan 29 11:35:14.075229 kernel: ata3: SATA max UDMA/133 abar m2048@0x96516000 port 0x96516200 irq 129 Jan 29 11:35:14.075238 kernel: ata4: SATA max UDMA/133 abar m2048@0x96516000 port 0x96516280 irq 129 Jan 29 11:35:14.075245 kernel: igb 0000:04:00.0: added PHC on eth0 Jan 29 11:35:14.093740 kernel: ata5: SATA max UDMA/133 abar m2048@0x96516000 port 0x96516300 irq 129 Jan 29 11:35:14.093750 kernel: igb 0000:04:00.0: Intel(R) Gigabit Ethernet Network Connection Jan 29 11:35:14.093833 kernel: ata6: SATA max UDMA/133 abar m2048@0x96516000 port 0x96516380 irq 129 Jan 29 11:35:14.093847 kernel: igb 0000:04:00.0: eth0: (PCIe:2.5Gb/s:Width x1) 3c:ec:ef:73:1e:1e Jan 29 11:35:14.093948 kernel: ata7: SATA max UDMA/133 abar m2048@0x96516000 port 0x96516400 irq 129 Jan 29 11:35:14.093964 kernel: igb 0000:04:00.0: eth0: PBA No: 010000-000 Jan 29 11:35:14.094061 kernel: ata8: SATA max UDMA/133 abar m2048@0x96516000 port 0x96516480 irq 129 Jan 29 11:35:14.094070 kernel: igb 0000:04:00.0: Using MSI-X interrupts. 4 rx queue(s), 4 tx queue(s) Jan 29 11:35:13.867474 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... 
Jan 29 11:35:14.142626 kernel: mlx5_core 0000:02:00.0: firmware version: 14.29.2002 Jan 29 11:35:14.676444 kernel: mlx5_core 0000:02:00.0: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link) Jan 29 11:35:14.676526 kernel: pps pps1: new PPS source ptp1 Jan 29 11:35:14.676594 kernel: igb 0000:05:00.0: added PHC on eth1 Jan 29 11:35:14.676660 kernel: igb 0000:05:00.0: Intel(R) Gigabit Ethernet Network Connection Jan 29 11:35:14.676722 kernel: igb 0000:05:00.0: eth1: (PCIe:2.5Gb/s:Width x1) 3c:ec:ef:73:1e:1f Jan 29 11:35:14.676784 kernel: igb 0000:05:00.0: eth1: PBA No: 010000-000 Jan 29 11:35:14.676843 kernel: igb 0000:05:00.0: Using MSI-X interrupts. 4 rx queue(s), 4 tx queue(s) Jan 29 11:35:14.676908 kernel: usb 1-14: new high-speed USB device number 2 using xhci_hcd Jan 29 11:35:14.760856 kernel: hub 1-14:1.0: USB hub found Jan 29 11:35:14.760945 kernel: hub 1-14:1.0: 4 ports detected Jan 29 11:35:14.761015 kernel: mlx5_core 0000:02:00.0: E-Switch: Total vports 10, per vport: max uc(1024) max mc(16384) Jan 29 11:35:14.761084 kernel: mlx5_core 0000:02:00.0: Port module event: module 0, Cable plugged Jan 29 11:35:14.761147 kernel: ata7: SATA link down (SStatus 0 SControl 300) Jan 29 11:35:14.761156 kernel: ata3: SATA link down (SStatus 0 SControl 300) Jan 29 11:35:14.761163 kernel: ata2: SATA link up 6.0 Gbps (SStatus 133 SControl 300) Jan 29 11:35:14.761171 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jan 29 11:35:14.761180 kernel: ata8: SATA link down (SStatus 0 SControl 300) Jan 29 11:35:14.761187 kernel: ata1: SATA link up 6.0 Gbps (SStatus 133 SControl 300) Jan 29 11:35:14.761195 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jan 29 11:35:14.761202 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jan 29 11:35:14.761209 kernel: ata2.00: ATA-11: Micron_5300_MTFDDAK480TDT, D3MU001, max UDMA/133 Jan 29 11:35:14.761216 kernel: ata1.00: ATA-11: Micron_5300_MTFDDAK480TDT, D3MU001, max UDMA/133 Jan 29 11:35:14.761223 kernel: ata2.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA Jan 29 11:35:14.761230 kernel: ata1.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA Jan 29 11:35:14.761238 kernel: ata2.00: Features: NCQ-prio Jan 29 11:35:14.761246 kernel: ata1.00: Features: NCQ-prio Jan 29 11:35:14.761254 kernel: ata2.00: configured for UDMA/133 Jan 29 11:35:14.761261 kernel: ata1.00: configured for UDMA/133 Jan 29 11:35:14.761268 kernel: scsi 0:0:0:0: Direct-Access ATA Micron_5300_MTFD U001 PQ: 0 ANSI: 5 Jan 29 11:35:14.761382 kernel: scsi 1:0:0:0: Direct-Access ATA Micron_5300_MTFD U001 PQ: 0 ANSI: 5 Jan 29 11:35:14.761452 kernel: igb 0000:05:00.0 eno2: renamed from eth1 Jan 29 11:35:14.761519 kernel: ata1.00: Enabling discard_zeroes_data Jan 29 11:35:14.761527 kernel: igb 0000:04:00.0 eno1: renamed from eth0 Jan 29 11:35:14.761596 kernel: ata2.00: Enabling discard_zeroes_data Jan 29 11:35:14.761604 kernel: sd 0:0:0:0: [sdb] 937703088 512-byte logical blocks: (480 GB/447 GiB) Jan 29 11:35:14.761665 kernel: sd 0:0:0:0: [sdb] 4096-byte physical blocks Jan 29 11:35:14.761724 kernel: sd 0:0:0:0: [sdb] Write Protect is off Jan 29 11:35:14.761782 kernel: sd 1:0:0:0: [sda] 937703088 512-byte logical blocks: (480 GB/447 GiB) Jan 29 11:35:14.761840 kernel: sd 1:0:0:0: [sda] 4096-byte physical blocks Jan 29 11:35:14.761896 kernel: sd 1:0:0:0: [sda] Write Protect is off Jan 29 11:35:14.761954 kernel: sd 1:0:0:0: [sda] Mode Sense: 00 3a 00 00 Jan 29 11:35:14.762011 kernel: sd 1:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Jan 
29 11:35:14.762068 kernel: sd 1:0:0:0: [sda] Preferred minimum I/O size 4096 bytes Jan 29 11:35:14.762126 kernel: ata2.00: Enabling discard_zeroes_data Jan 29 11:35:14.762135 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 29 11:35:14.762142 kernel: GPT:9289727 != 937703087 Jan 29 11:35:14.762150 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 29 11:35:14.762157 kernel: GPT:9289727 != 937703087 Jan 29 11:35:14.762164 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 29 11:35:14.762173 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 29 11:35:14.762180 kernel: sd 1:0:0:0: [sda] Attached SCSI disk Jan 29 11:35:14.762237 kernel: usb 1-14.1: new low-speed USB device number 3 using xhci_hcd Jan 29 11:35:14.762339 kernel: sd 0:0:0:0: [sdb] Mode Sense: 00 3a 00 00 Jan 29 11:35:14.762401 kernel: sd 0:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Jan 29 11:35:14.762459 kernel: sd 0:0:0:0: [sdb] Preferred minimum I/O size 4096 bytes Jan 29 11:35:14.762518 kernel: mlx5_core 0000:02:00.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Jan 29 11:35:14.762582 kernel: ata1.00: Enabling discard_zeroes_data Jan 29 11:35:14.762591 kernel: mlx5_core 0000:02:00.1: firmware version: 14.29.2002 Jan 29 11:35:15.221096 kernel: sd 0:0:0:0: [sdb] Attached SCSI disk Jan 29 11:35:15.221292 kernel: mlx5_core 0000:02:00.1: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link) Jan 29 11:35:15.221489 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by (udev-worker) (542) Jan 29 11:35:15.221512 kernel: BTRFS: device fsid 5ba3c9ea-61f2-4fe6-a507-2966757f6d44 devid 1 transid 38 /dev/sda3 scanned by (udev-worker) (572) Jan 29 11:35:15.221535 kernel: hid: raw HID events driver (C) Jiri Kosina Jan 29 11:35:15.221542 kernel: usbcore: registered new interface driver usbhid Jan 29 11:35:15.221555 kernel: usbhid: USB HID core driver Jan 29 11:35:15.221562 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.0/0003:0557:2419.0001/input/input0 Jan 29 11:35:15.221569 kernel: ata2.00: Enabling discard_zeroes_data Jan 29 11:35:15.221576 kernel: hid-generic 0003:0557:2419.0001: input,hidraw0: USB HID v1.00 Keyboard [HID 0557:2419] on usb-0000:00:14.0-14.1/input0 Jan 29 11:35:15.221671 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 29 11:35:15.221699 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.1/0003:0557:2419.0002/input/input1 Jan 29 11:35:15.221712 kernel: hid-generic 0003:0557:2419.0002: input,hidraw1: USB HID v1.00 Mouse [HID 0557:2419] on usb-0000:00:14.0-14.1/input1 Jan 29 11:35:15.221788 kernel: mlx5_core 0000:02:00.1: E-Switch: Total vports 10, per vport: max uc(1024) max mc(16384) Jan 29 11:35:15.221854 kernel: mlx5_core 0000:02:00.1: Port module event: module 1, Cable plugged Jan 29 11:35:15.221916 kernel: mlx5_core 0000:02:00.1: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Jan 29 11:35:13.965571 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 29 11:35:15.243741 kernel: mlx5_core 0000:02:00.0 enp2s0f0np0: renamed from eth0 Jan 29 11:35:15.243829 kernel: mlx5_core 0000:02:00.1 enp2s0f1np1: renamed from eth1 Jan 29 11:35:14.090442 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 29 11:35:14.119489 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. 
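The GPT warnings above ("9289727 != 937703087") mean the primary header still points at the last LBA of a much smaller image, while the backup header is expected at the disk's real last LBA, which suggests the partition table was written for a small image and then copied to this 447 GiB drive. The arithmetic, using only numbers visible in this log:

```python
# Backup (alternate) GPT header location check, using figures from the log.
SECTOR = 512
disk_sectors = 937703088            # "sd 1:0:0:0: [sda] 937703088 512-byte logical blocks"
expected_alt_lba = disk_sectors - 1     # 937703087: last LBA, where the backup header belongs
recorded_alt_lba = 9289727              # what the primary header currently claims

image_size_gib = (recorded_alt_lba + 1) * SECTOR / 2**30
print(f"expected {expected_alt_lba}, recorded {recorded_alt_lba}")
print(f"table describes a ~{image_size_gib:.1f} GiB image on a 447 GiB disk")
# Tools like GNU Parted or sgdisk can relocate the backup header/table to the
# true end of the disk, which is what "Use GNU Parted to correct GPT errors" points at.
```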
Jan 29 11:35:14.173454 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 29 11:35:14.183396 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 29 11:35:14.183465 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 11:35:14.201420 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 29 11:35:15.310561 disk-uuid[718]: Primary Header is updated. Jan 29 11:35:15.310561 disk-uuid[718]: Secondary Entries is updated. Jan 29 11:35:15.310561 disk-uuid[718]: Secondary Header is updated. Jan 29 11:35:14.222453 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 29 11:35:14.232376 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 29 11:35:14.232447 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 11:35:14.243394 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 11:35:14.260444 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 11:35:14.270617 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 29 11:35:14.293138 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 11:35:14.313410 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 29 11:35:14.324478 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 11:35:14.729461 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Micron_5300_MTFDDAK480TDT EFI-SYSTEM. Jan 29 11:35:14.760044 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Micron_5300_MTFDDAK480TDT ROOT. Jan 29 11:35:14.788339 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Micron_5300_MTFDDAK480TDT OEM. Jan 29 11:35:14.813476 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Micron_5300_MTFDDAK480TDT USR-A. Jan 29 11:35:14.824374 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Micron_5300_MTFDDAK480TDT USR-A. Jan 29 11:35:14.841440 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 29 11:35:15.875789 kernel: ata2.00: Enabling discard_zeroes_data Jan 29 11:35:15.884121 disk-uuid[719]: The operation has completed successfully. Jan 29 11:35:15.892410 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 29 11:35:15.922178 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 29 11:35:15.922287 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 29 11:35:15.966445 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 29 11:35:15.991424 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Jan 29 11:35:15.991438 sh[748]: Success Jan 29 11:35:16.026577 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 29 11:35:16.043198 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 29 11:35:16.049748 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
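Unit names such as dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device in the entries above come from systemd's path escaping: slashes become dashes and other unsafe characters become \xXX escapes. The function below is a rough re-implementation for illustration only; `systemd-escape --path` is the authoritative tool and handles more corner cases.

```python
# Approximate systemd path-to-unit-name escaping (illustrative, not exhaustive).
def escape_path_to_unit(path: str, suffix: str = ".device") -> str:
    trimmed = path.strip("/")
    out = []
    for i, ch in enumerate(trimmed):
        if ch == "/":
            out.append("-")                      # path separators become dashes
        elif ch.isascii() and (ch.isalnum() or ch in ":_" or (ch == "." and i != 0)):
            out.append(ch)                       # safe characters pass through
        else:
            out.append("\\x%02x" % ord(ch))      # everything else, e.g. '-', is hex-escaped
    return "".join(out) + suffix

print(escape_path_to_unit("/dev/disk/by-label/EFI-SYSTEM"))
# dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device
```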
Jan 29 11:35:16.099050 kernel: BTRFS info (device dm-0): first mount of filesystem 5ba3c9ea-61f2-4fe6-a507-2966757f6d44 Jan 29 11:35:16.099070 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 29 11:35:16.108686 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 29 11:35:16.115715 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 29 11:35:16.121565 kernel: BTRFS info (device dm-0): using free space tree Jan 29 11:35:16.134326 kernel: BTRFS info (device dm-0): enabling ssd optimizations Jan 29 11:35:16.137170 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 29 11:35:16.137609 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 29 11:35:16.210427 kernel: BTRFS info (device sda6): first mount of filesystem 46e45d4d-e07d-4ebc-bafb-221646b0ed58 Jan 29 11:35:16.210451 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 29 11:35:16.210464 kernel: BTRFS info (device sda6): using free space tree Jan 29 11:35:16.210480 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 29 11:35:16.210493 kernel: BTRFS info (device sda6): auto enabling async discard Jan 29 11:35:16.148648 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 29 11:35:16.231664 kernel: BTRFS info (device sda6): last unmount of filesystem 46e45d4d-e07d-4ebc-bafb-221646b0ed58 Jan 29 11:35:16.152012 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 29 11:35:16.211999 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 29 11:35:16.225122 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 29 11:35:16.248865 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 29 11:35:16.304771 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 29 11:35:16.335470 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 29 11:35:16.347585 systemd-networkd[932]: lo: Link UP Jan 29 11:35:16.347587 systemd-networkd[932]: lo: Gained carrier Jan 29 11:35:16.359479 ignition[824]: Ignition 2.20.0 Jan 29 11:35:16.350233 systemd-networkd[932]: Enumeration completed Jan 29 11:35:16.359484 ignition[824]: Stage: fetch-offline Jan 29 11:35:16.350316 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 29 11:35:16.359502 ignition[824]: no configs at "/usr/lib/ignition/base.d" Jan 29 11:35:16.351093 systemd-networkd[932]: eno1: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 29 11:35:16.359508 ignition[824]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Jan 29 11:35:16.362532 systemd[1]: Reached target network.target - Network. Jan 29 11:35:16.359557 ignition[824]: parsed url from cmdline: "" Jan 29 11:35:16.362791 unknown[824]: fetched base config from "system" Jan 29 11:35:16.359559 ignition[824]: no config URL provided Jan 29 11:35:16.362795 unknown[824]: fetched user config from "system" Jan 29 11:35:16.359562 ignition[824]: reading system config file "/usr/lib/ignition/user.ign" Jan 29 11:35:16.369634 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). 
Jan 29 11:35:16.359587 ignition[824]: parsing config with SHA512: dddb000a6e029f3cd6d647af5241af8837b416a41a17eea5ce856311aad67376e3ff445748ef39f687b58b14c336de1a81e60360b6cb944f6fd7641b22976f45 Jan 29 11:35:16.379520 systemd-networkd[932]: eno2: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 29 11:35:16.363027 ignition[824]: fetch-offline: fetch-offline passed Jan 29 11:35:16.383815 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jan 29 11:35:16.363030 ignition[824]: POST message to Packet Timeline Jan 29 11:35:16.395454 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 29 11:35:16.363033 ignition[824]: POST Status error: resource requires networking Jan 29 11:35:16.407352 systemd-networkd[932]: enp2s0f0np0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 29 11:35:16.591507 kernel: mlx5_core 0000:02:00.0 enp2s0f0np0: Link up Jan 29 11:35:16.363072 ignition[824]: Ignition finished successfully Jan 29 11:35:16.583403 systemd-networkd[932]: enp2s0f1np1: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 29 11:35:16.406210 ignition[947]: Ignition 2.20.0 Jan 29 11:35:16.406215 ignition[947]: Stage: kargs Jan 29 11:35:16.406350 ignition[947]: no configs at "/usr/lib/ignition/base.d" Jan 29 11:35:16.406358 ignition[947]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Jan 29 11:35:16.406987 ignition[947]: kargs: kargs passed Jan 29 11:35:16.406991 ignition[947]: POST message to Packet Timeline Jan 29 11:35:16.407004 ignition[947]: GET https://metadata.packet.net/metadata: attempt #1 Jan 29 11:35:16.407410 ignition[947]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:55000->[::1]:53: read: connection refused Jan 29 11:35:16.607875 ignition[947]: GET https://metadata.packet.net/metadata: attempt #2 Jan 29 11:35:16.608863 ignition[947]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:41901->[::1]:53: read: connection refused Jan 29 11:35:16.809420 kernel: mlx5_core 0000:02:00.1 enp2s0f1np1: Link up Jan 29 11:35:16.810252 systemd-networkd[932]: eno1: Link UP Jan 29 11:35:16.810389 systemd-networkd[932]: eno2: Link UP Jan 29 11:35:16.810504 systemd-networkd[932]: enp2s0f0np0: Link UP Jan 29 11:35:16.810639 systemd-networkd[932]: enp2s0f0np0: Gained carrier Jan 29 11:35:16.824579 systemd-networkd[932]: enp2s0f1np1: Link UP Jan 29 11:35:16.855567 systemd-networkd[932]: enp2s0f0np0: DHCPv4 address 139.178.70.53/31, gateway 139.178.70.52 acquired from 145.40.83.140 Jan 29 11:35:17.009282 ignition[947]: GET https://metadata.packet.net/metadata: attempt #3 Jan 29 11:35:17.010362 ignition[947]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:34455->[::1]:53: read: connection refused Jan 29 11:35:17.631037 systemd-networkd[932]: enp2s0f1np1: Gained carrier Jan 29 11:35:17.810797 ignition[947]: GET https://metadata.packet.net/metadata: attempt #4 Jan 29 11:35:17.811986 ignition[947]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:47254->[::1]:53: read: connection refused Jan 29 11:35:18.462805 systemd-networkd[932]: enp2s0f0np0: Gained IPv6LL Jan 29 11:35:18.782728 systemd-networkd[932]: enp2s0f1np1: Gained IPv6LL Jan 29 11:35:19.413571 
ignition[947]: GET https://metadata.packet.net/metadata: attempt #5 Jan 29 11:35:19.414715 ignition[947]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:37084->[::1]:53: read: connection refused Jan 29 11:35:22.618253 ignition[947]: GET https://metadata.packet.net/metadata: attempt #6 Jan 29 11:35:23.139218 ignition[947]: GET result: OK Jan 29 11:35:23.526059 ignition[947]: Ignition finished successfully Jan 29 11:35:23.531725 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 29 11:35:23.554609 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 29 11:35:23.563015 ignition[965]: Ignition 2.20.0 Jan 29 11:35:23.563019 ignition[965]: Stage: disks Jan 29 11:35:23.563116 ignition[965]: no configs at "/usr/lib/ignition/base.d" Jan 29 11:35:23.563122 ignition[965]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Jan 29 11:35:23.563682 ignition[965]: disks: disks passed Jan 29 11:35:23.563684 ignition[965]: POST message to Packet Timeline Jan 29 11:35:23.563696 ignition[965]: GET https://metadata.packet.net/metadata: attempt #1 Jan 29 11:35:24.031772 ignition[965]: GET result: OK Jan 29 11:35:24.438149 ignition[965]: Ignition finished successfully Jan 29 11:35:24.441262 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 29 11:35:24.458719 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 29 11:35:24.465825 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 29 11:35:24.483769 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 29 11:35:24.504772 systemd[1]: Reached target sysinit.target - System Initialization. Jan 29 11:35:24.531609 systemd[1]: Reached target basic.target - Basic System. Jan 29 11:35:24.563569 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 29 11:35:24.595629 systemd-fsck[984]: ROOT: clean, 14/553520 files, 52654/553472 blocks Jan 29 11:35:24.605811 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 29 11:35:24.620559 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 29 11:35:24.699298 kernel: EXT4-fs (sda9): mounted filesystem 2fbf9359-701e-4995-b3f7-74280bd2b1c9 r/w with ordered data mode. Quota mode: none. Jan 29 11:35:24.699459 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 29 11:35:24.699781 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 29 11:35:24.724472 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 29 11:35:24.768857 kernel: BTRFS: device label OEM devid 1 transid 18 /dev/sda6 scanned by mount (995) Jan 29 11:35:24.768872 kernel: BTRFS info (device sda6): first mount of filesystem 46e45d4d-e07d-4ebc-bafb-221646b0ed58 Jan 29 11:35:24.768880 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 29 11:35:24.732536 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 29 11:35:24.804524 kernel: BTRFS info (device sda6): using free space tree Jan 29 11:35:24.804536 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 29 11:35:24.804543 kernel: BTRFS info (device sda6): auto enabling async discard Jan 29 11:35:24.804826 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jan 29 11:35:24.805245 systemd[1]: Starting flatcar-static-network.service - Flatcar Static Network Agent... 
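The kargs stage above keeps re-requesting https://metadata.packet.net/metadata, with the gap between attempts roughly doubling (about 0.2 s, 0.4 s, 0.8 s, ... between attempts #1 through #6) until name resolution and networking come up. A minimal sketch of that retry-with-backoff pattern; this is our own loop, not Ignition's actual code:

```python
# Retry an HTTP GET with exponential backoff until it succeeds.
import time
import urllib.request
from urllib.error import URLError

def fetch_with_backoff(url: str, first_delay: float = 0.2, max_delay: float = 30.0) -> bytes:
    delay, attempt = first_delay, 1
    while True:
        try:
            with urllib.request.urlopen(url, timeout=10) as resp:
                return resp.read()
        except (URLError, OSError) as err:
            print(f"GET {url}: attempt #{attempt} failed: {err}; retrying in {delay:.1f}s")
            time.sleep(delay)
            delay = min(delay * 2, max_delay)   # double the wait, capped
            attempt += 1

# fetch_with_backoff("https://metadata.packet.net/metadata")
```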
Jan 29 11:35:24.824651 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 29 11:35:24.824676 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 29 11:35:24.871233 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 29 11:35:24.888651 coreos-metadata[1012]: Jan 29 11:35:24.885 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Jan 29 11:35:24.888571 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 29 11:35:24.924446 coreos-metadata[1013]: Jan 29 11:35:24.885 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Jan 29 11:35:24.916512 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 29 11:35:24.958489 initrd-setup-root[1027]: cut: /sysroot/etc/passwd: No such file or directory Jan 29 11:35:24.968353 initrd-setup-root[1034]: cut: /sysroot/etc/group: No such file or directory Jan 29 11:35:24.978369 initrd-setup-root[1041]: cut: /sysroot/etc/shadow: No such file or directory Jan 29 11:35:24.987411 initrd-setup-root[1048]: cut: /sysroot/etc/gshadow: No such file or directory Jan 29 11:35:25.000076 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 29 11:35:25.011518 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 29 11:35:25.033298 kernel: BTRFS info (device sda6): last unmount of filesystem 46e45d4d-e07d-4ebc-bafb-221646b0ed58 Jan 29 11:35:25.050597 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 29 11:35:25.051183 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 29 11:35:25.068433 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 29 11:35:25.088544 ignition[1119]: INFO : Ignition 2.20.0 Jan 29 11:35:25.088544 ignition[1119]: INFO : Stage: mount Jan 29 11:35:25.088544 ignition[1119]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 29 11:35:25.088544 ignition[1119]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Jan 29 11:35:25.088544 ignition[1119]: INFO : mount: mount passed Jan 29 11:35:25.088544 ignition[1119]: INFO : POST message to Packet Timeline Jan 29 11:35:25.088544 ignition[1119]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Jan 29 11:35:25.134218 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 29 11:35:25.168636 coreos-metadata[1012]: Jan 29 11:35:25.091 INFO Fetch successful Jan 29 11:35:25.168636 coreos-metadata[1012]: Jan 29 11:35:25.132 INFO wrote hostname ci-4152.2.0-a-23f4c5510f to /sysroot/etc/hostname Jan 29 11:35:25.661766 coreos-metadata[1013]: Jan 29 11:35:25.661 INFO Fetch successful Jan 29 11:35:25.698163 systemd[1]: flatcar-static-network.service: Deactivated successfully. Jan 29 11:35:25.698222 systemd[1]: Finished flatcar-static-network.service - Flatcar Static Network Agent. Jan 29 11:35:25.734471 ignition[1119]: INFO : GET result: OK Jan 29 11:35:26.115027 ignition[1119]: INFO : Ignition finished successfully Jan 29 11:35:26.118055 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 29 11:35:26.145528 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 29 11:35:26.149205 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Jan 29 11:35:26.197300 kernel: BTRFS: device label OEM devid 1 transid 19 /dev/sda6 scanned by mount (1144) Jan 29 11:35:26.214831 kernel: BTRFS info (device sda6): first mount of filesystem 46e45d4d-e07d-4ebc-bafb-221646b0ed58 Jan 29 11:35:26.214849 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 29 11:35:26.220731 kernel: BTRFS info (device sda6): using free space tree Jan 29 11:35:26.235313 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 29 11:35:26.235331 kernel: BTRFS info (device sda6): auto enabling async discard Jan 29 11:35:26.236981 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 29 11:35:26.259146 ignition[1161]: INFO : Ignition 2.20.0 Jan 29 11:35:26.259146 ignition[1161]: INFO : Stage: files Jan 29 11:35:26.273348 ignition[1161]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 29 11:35:26.273348 ignition[1161]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Jan 29 11:35:26.273348 ignition[1161]: DEBUG : files: compiled without relabeling support, skipping Jan 29 11:35:26.273348 ignition[1161]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 29 11:35:26.273348 ignition[1161]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 29 11:35:26.273348 ignition[1161]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 29 11:35:26.273348 ignition[1161]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 29 11:35:26.273348 ignition[1161]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 29 11:35:26.273348 ignition[1161]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Jan 29 11:35:26.273348 ignition[1161]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Jan 29 11:35:26.273348 ignition[1161]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 29 11:35:26.273348 ignition[1161]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jan 29 11:35:26.263260 unknown[1161]: wrote ssh authorized keys file for user: core Jan 29 11:35:26.436519 ignition[1161]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jan 29 11:35:26.436519 ignition[1161]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 29 11:35:26.436519 ignition[1161]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jan 29 11:35:26.436519 ignition[1161]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jan 29 11:35:26.436519 ignition[1161]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 29 11:35:26.436519 ignition[1161]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 29 11:35:26.436519 ignition[1161]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 29 11:35:26.436519 ignition[1161]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file 
"/sysroot/home/core/nfs-pod.yaml" Jan 29 11:35:26.436519 ignition[1161]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 29 11:35:26.436519 ignition[1161]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 29 11:35:26.436519 ignition[1161]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 29 11:35:26.436519 ignition[1161]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 29 11:35:26.436519 ignition[1161]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 29 11:35:26.436519 ignition[1161]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 29 11:35:26.436519 ignition[1161]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 29 11:35:26.687646 ignition[1161]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 Jan 29 11:35:26.914599 ignition[1161]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jan 29 11:35:27.055696 ignition[1161]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 29 11:35:27.055696 ignition[1161]: INFO : files: op(c): [started] processing unit "containerd.service" Jan 29 11:35:27.086541 ignition[1161]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jan 29 11:35:27.086541 ignition[1161]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jan 29 11:35:27.086541 ignition[1161]: INFO : files: op(c): [finished] processing unit "containerd.service" Jan 29 11:35:27.086541 ignition[1161]: INFO : files: op(e): [started] processing unit "prepare-helm.service" Jan 29 11:35:27.086541 ignition[1161]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 29 11:35:27.086541 ignition[1161]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 29 11:35:27.086541 ignition[1161]: INFO : files: op(e): [finished] processing unit "prepare-helm.service" Jan 29 11:35:27.086541 ignition[1161]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service" Jan 29 11:35:27.086541 ignition[1161]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service" Jan 29 11:35:27.086541 ignition[1161]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 29 11:35:27.086541 ignition[1161]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 29 11:35:27.086541 
ignition[1161]: INFO : files: files passed Jan 29 11:35:27.086541 ignition[1161]: INFO : POST message to Packet Timeline Jan 29 11:35:27.086541 ignition[1161]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Jan 29 11:35:27.277562 ignition[1161]: INFO : GET result: OK Jan 29 11:35:27.651887 ignition[1161]: INFO : Ignition finished successfully Jan 29 11:35:27.654828 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 29 11:35:27.685517 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 29 11:35:27.685985 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 29 11:35:27.703776 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 29 11:35:27.703837 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 29 11:35:27.746683 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 29 11:35:27.761875 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 29 11:35:27.792789 initrd-setup-root-after-ignition[1199]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 29 11:35:27.792789 initrd-setup-root-after-ignition[1199]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 29 11:35:27.806729 initrd-setup-root-after-ignition[1203]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 29 11:35:27.799718 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 29 11:35:27.860222 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 29 11:35:27.860274 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 29 11:35:27.879685 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 29 11:35:27.890587 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 29 11:35:27.907725 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 29 11:35:27.916538 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 29 11:35:28.000186 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 29 11:35:28.018716 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 29 11:35:28.037864 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 29 11:35:28.058599 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 29 11:35:28.058793 systemd[1]: Stopped target timers.target - Timer Units. Jan 29 11:35:28.078739 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 29 11:35:28.078850 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 29 11:35:28.113834 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 29 11:35:28.134915 systemd[1]: Stopped target basic.target - Basic System. Jan 29 11:35:28.154034 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 29 11:35:28.172912 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 29 11:35:28.193906 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 29 11:35:28.214928 systemd[1]: Stopped target remote-fs.target - Remote File Systems. 
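During the files stage above, Ignition wrote a systemd drop-in (containerd.service.d/10-use-cgroupfs.conf) into the mounted /sysroot. A hedged sketch of that kind of write; the drop-in body below is a placeholder, since the log records only that the file was written, not its contents:

```python
# Write a systemd drop-in under a target root, as the Ignition files stage does.
from pathlib import Path

def write_dropin(root: Path, unit: str, name: str, body: str) -> Path:
    dropin_dir = root / "etc/systemd/system" / f"{unit}.d"
    dropin_dir.mkdir(parents=True, exist_ok=True)   # e.g. .../containerd.service.d/
    path = dropin_dir / name
    path.write_text(body)
    return path

write_dropin(
    Path("/tmp/sysroot-demo"),          # stand-in for /sysroot
    "containerd.service",
    "10-use-cgroupfs.conf",
    "[Service]\n# placeholder contents -- not taken from the Ignition config\n",
)
```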
Jan 29 11:35:28.234919 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 29 11:35:28.255948 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 29 11:35:28.277932 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 29 11:35:28.297906 systemd[1]: Stopped target swap.target - Swaps. Jan 29 11:35:28.315803 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 29 11:35:28.316205 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 29 11:35:28.342144 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 29 11:35:28.361934 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 29 11:35:28.382805 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 29 11:35:28.383270 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 29 11:35:28.404792 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 29 11:35:28.405184 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 29 11:35:28.436885 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 29 11:35:28.437355 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 29 11:35:28.457105 systemd[1]: Stopped target paths.target - Path Units. Jan 29 11:35:28.475767 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 29 11:35:28.479576 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 29 11:35:28.496914 systemd[1]: Stopped target slices.target - Slice Units. Jan 29 11:35:28.514910 systemd[1]: Stopped target sockets.target - Socket Units. Jan 29 11:35:28.533878 systemd[1]: iscsid.socket: Deactivated successfully. Jan 29 11:35:28.534181 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 29 11:35:28.553944 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 29 11:35:28.554252 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 29 11:35:28.576991 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 29 11:35:28.577423 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 29 11:35:28.597990 systemd[1]: ignition-files.service: Deactivated successfully. Jan 29 11:35:28.706458 ignition[1224]: INFO : Ignition 2.20.0 Jan 29 11:35:28.706458 ignition[1224]: INFO : Stage: umount Jan 29 11:35:28.706458 ignition[1224]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 29 11:35:28.706458 ignition[1224]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Jan 29 11:35:28.706458 ignition[1224]: INFO : umount: umount passed Jan 29 11:35:28.706458 ignition[1224]: INFO : POST message to Packet Timeline Jan 29 11:35:28.706458 ignition[1224]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Jan 29 11:35:28.598384 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 29 11:35:28.616984 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jan 29 11:35:28.617394 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 29 11:35:28.645581 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 29 11:35:28.680571 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 29 11:35:28.698381 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. 
Jan 29 11:35:28.698471 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 29 11:35:28.717554 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 29 11:35:28.882524 ignition[1224]: INFO : GET result: OK Jan 29 11:35:28.717669 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 29 11:35:28.757541 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 29 11:35:28.762073 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 29 11:35:28.762336 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 29 11:35:28.837262 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 29 11:35:28.837390 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 29 11:35:29.240719 ignition[1224]: INFO : Ignition finished successfully Jan 29 11:35:29.243653 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 29 11:35:29.243948 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 29 11:35:29.260595 systemd[1]: Stopped target network.target - Network. Jan 29 11:35:29.275581 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 29 11:35:29.275779 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 29 11:35:29.293729 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 29 11:35:29.293893 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 29 11:35:29.311721 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 29 11:35:29.311882 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 29 11:35:29.329690 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 29 11:35:29.329850 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 29 11:35:29.348691 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 29 11:35:29.348860 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 29 11:35:29.368204 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 29 11:35:29.379428 systemd-networkd[932]: enp2s0f1np1: DHCPv6 lease lost Jan 29 11:35:29.386546 systemd-networkd[932]: enp2s0f0np0: DHCPv6 lease lost Jan 29 11:35:29.386788 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 29 11:35:29.395661 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 29 11:35:29.395946 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 29 11:35:29.425613 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 29 11:35:29.425988 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 29 11:35:29.446398 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 29 11:35:29.446527 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 29 11:35:29.473443 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 29 11:35:29.481616 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 29 11:35:29.481661 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 29 11:35:29.509559 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 29 11:35:29.509634 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 29 11:35:29.528684 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 29 11:35:29.528826 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. 
Jan 29 11:35:29.547808 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 29 11:35:29.547976 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 29 11:35:29.568916 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 29 11:35:29.588475 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 29 11:35:29.588832 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 29 11:35:29.631465 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 29 11:35:29.631614 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 29 11:35:29.635855 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 29 11:35:29.635958 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 29 11:35:29.663560 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 29 11:35:29.663701 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 29 11:35:29.694754 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 29 11:35:29.694926 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 29 11:35:29.722677 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 29 11:35:29.722843 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 11:35:29.765697 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 29 11:35:29.783469 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 29 11:35:30.026489 systemd-journald[269]: Received SIGTERM from PID 1 (systemd). Jan 29 11:35:29.783633 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 29 11:35:29.805605 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 29 11:35:29.805749 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 29 11:35:29.827593 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 29 11:35:29.827737 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 29 11:35:29.846583 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 29 11:35:29.846724 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 11:35:29.869647 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 29 11:35:29.869886 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 29 11:35:29.905128 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 29 11:35:29.905428 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 29 11:35:29.908840 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 29 11:35:29.949527 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 29 11:35:29.970077 systemd[1]: Switching root. 
Jan 29 11:35:30.147588 systemd-journald[269]: Journal stopped Jan 29 11:35:31.718328 kernel: SELinux: policy capability network_peer_controls=1 Jan 29 11:35:31.718345 kernel: SELinux: policy capability open_perms=1 Jan 29 11:35:31.718354 kernel: SELinux: policy capability extended_socket_class=1 Jan 29 11:35:31.718359 kernel: SELinux: policy capability always_check_network=0 Jan 29 11:35:31.718365 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 29 11:35:31.718370 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 29 11:35:31.718376 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 29 11:35:31.718382 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 29 11:35:31.718388 kernel: audit: type=1403 audit(1738150530.304:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 29 11:35:31.718395 systemd[1]: Successfully loaded SELinux policy in 72.651ms. Jan 29 11:35:31.718402 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 7.182ms. Jan 29 11:35:31.718409 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 29 11:35:31.718416 systemd[1]: Detected architecture x86-64. Jan 29 11:35:31.718422 systemd[1]: Detected first boot. Jan 29 11:35:31.718430 systemd[1]: Hostname set to . Jan 29 11:35:31.718437 systemd[1]: Initializing machine ID from random generator. Jan 29 11:35:31.718444 zram_generator::config[1291]: No configuration found. Jan 29 11:35:31.718451 systemd[1]: Populated /etc with preset unit settings. Jan 29 11:35:31.718457 systemd[1]: Queued start job for default target multi-user.target. Jan 29 11:35:31.718465 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Jan 29 11:35:31.718472 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 29 11:35:31.718478 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 29 11:35:31.718485 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 29 11:35:31.718491 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 29 11:35:31.718498 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 29 11:35:31.718505 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 29 11:35:31.718512 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 29 11:35:31.718519 systemd[1]: Created slice user.slice - User and Session Slice. Jan 29 11:35:31.718526 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 29 11:35:31.718533 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 29 11:35:31.718540 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 29 11:35:31.718546 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 29 11:35:31.718553 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 29 11:35:31.718560 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... 
Jan 29 11:35:31.718568 systemd[1]: Expecting device dev-ttyS1.device - /dev/ttyS1... Jan 29 11:35:31.718575 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 29 11:35:31.718582 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 29 11:35:31.718589 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 29 11:35:31.718595 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 29 11:35:31.718602 systemd[1]: Reached target slices.target - Slice Units. Jan 29 11:35:31.718611 systemd[1]: Reached target swap.target - Swaps. Jan 29 11:35:31.718618 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 29 11:35:31.718625 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 29 11:35:31.718632 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 29 11:35:31.718639 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 29 11:35:31.718646 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 29 11:35:31.718653 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 29 11:35:31.718660 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 29 11:35:31.718667 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 29 11:35:31.718674 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 29 11:35:31.718682 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 29 11:35:31.718689 systemd[1]: Mounting media.mount - External Media Directory... Jan 29 11:35:31.718696 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 11:35:31.718703 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 29 11:35:31.718710 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 29 11:35:31.718718 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 29 11:35:31.718725 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 29 11:35:31.718732 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 11:35:31.718739 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 29 11:35:31.718746 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 29 11:35:31.718753 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 29 11:35:31.718760 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 29 11:35:31.718767 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 29 11:35:31.718775 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 29 11:35:31.718782 kernel: ACPI: bus type drm_connector registered Jan 29 11:35:31.718789 kernel: fuse: init (API version 7.39) Jan 29 11:35:31.718795 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 29 11:35:31.718802 kernel: loop: module loaded Jan 29 11:35:31.718808 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). 
Jan 29 11:35:31.718816 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Jan 29 11:35:31.718823 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) Jan 29 11:35:31.718830 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 29 11:35:31.718847 systemd-journald[1413]: Collecting audit messages is disabled. Jan 29 11:35:31.718862 systemd-journald[1413]: Journal started Jan 29 11:35:31.718879 systemd-journald[1413]: Runtime Journal (/run/log/journal/077f0b80ef194a5c905f116a48ad2983) is 8.0M, max 639.1M, 631.1M free. Jan 29 11:35:31.732378 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 29 11:35:31.755341 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 29 11:35:31.777382 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 29 11:35:31.798336 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 29 11:35:31.824349 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 11:35:31.832356 systemd[1]: Started systemd-journald.service - Journal Service. Jan 29 11:35:31.842005 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 29 11:35:31.851425 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 29 11:35:31.861435 systemd[1]: Mounted media.mount - External Media Directory. Jan 29 11:35:31.871463 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 29 11:35:31.881580 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 29 11:35:31.891563 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 29 11:35:31.901658 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 29 11:35:31.912645 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 29 11:35:31.923633 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 29 11:35:31.923744 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 29 11:35:31.934740 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 29 11:35:31.934871 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 29 11:35:31.945648 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 29 11:35:31.945765 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 29 11:35:31.955644 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 29 11:35:31.955759 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 29 11:35:31.966739 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 29 11:35:31.966869 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 29 11:35:31.976644 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 29 11:35:31.976761 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 29 11:35:31.986751 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 29 11:35:31.996788 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. 
Jan 29 11:35:32.008800 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 29 11:35:32.020790 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 29 11:35:32.038021 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 29 11:35:32.060628 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 29 11:35:32.071410 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 29 11:35:32.081476 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 29 11:35:32.082961 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 29 11:35:32.093304 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 29 11:35:32.104442 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 29 11:35:32.111952 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 29 11:35:32.114581 systemd-journald[1413]: Time spent on flushing to /var/log/journal/077f0b80ef194a5c905f116a48ad2983 is 13.162ms for 1384 entries. Jan 29 11:35:32.114581 systemd-journald[1413]: System Journal (/var/log/journal/077f0b80ef194a5c905f116a48ad2983) is 8.0M, max 195.6M, 187.6M free. Jan 29 11:35:32.137724 systemd-journald[1413]: Received client request to flush runtime journal. Jan 29 11:35:32.129422 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 29 11:35:32.130213 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 29 11:35:32.141228 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 29 11:35:32.153192 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 29 11:35:32.164253 systemd-tmpfiles[1453]: ACLs are not supported, ignoring. Jan 29 11:35:32.164264 systemd-tmpfiles[1453]: ACLs are not supported, ignoring. Jan 29 11:35:32.165309 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 29 11:35:32.176489 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 29 11:35:32.187569 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 29 11:35:32.198545 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 29 11:35:32.210537 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 29 11:35:32.221537 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 29 11:35:32.235027 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 29 11:35:32.258462 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 29 11:35:32.268742 udevadm[1457]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jan 29 11:35:32.276077 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 29 11:35:32.296467 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 29 11:35:32.303874 systemd-tmpfiles[1470]: ACLs are not supported, ignoring. 
Jan 29 11:35:32.303883 systemd-tmpfiles[1470]: ACLs are not supported, ignoring. Jan 29 11:35:32.307670 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 29 11:35:32.471072 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 29 11:35:32.495540 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 29 11:35:32.507632 systemd-udevd[1477]: Using default interface naming scheme 'v255'. Jan 29 11:35:32.524307 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 29 11:35:32.540820 systemd[1]: Found device dev-ttyS1.device - /dev/ttyS1. Jan 29 11:35:32.556375 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input2 Jan 29 11:35:32.556425 kernel: ACPI: button: Sleep Button [SLPB] Jan 29 11:35:32.556460 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1546) Jan 29 11:35:32.568304 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jan 29 11:35:32.582303 kernel: IPMI message handler: version 39.2 Jan 29 11:35:32.582350 kernel: mousedev: PS/2 mouse device common for all mice Jan 29 11:35:32.588304 kernel: ACPI: button: Power Button [PWRF] Jan 29 11:35:32.622902 kernel: i801_smbus 0000:00:1f.4: SPD Write Disable is set Jan 29 11:35:32.645568 kernel: i801_smbus 0000:00:1f.4: SMBus using PCI interrupt Jan 29 11:35:32.645653 kernel: mei_me 0000:00:16.0: Device doesn't have valid ME Interface Jan 29 11:35:32.645740 kernel: i2c i2c-0: 2/4 memory slots populated (from DMI) Jan 29 11:35:32.645816 kernel: mei_me 0000:00:16.4: Device doesn't have valid ME Interface Jan 29 11:35:32.652076 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Micron_5300_MTFDDAK480TDT OEM. Jan 29 11:35:32.665299 kernel: ipmi device interface Jan 29 11:35:32.665319 kernel: iTCO_vendor_support: vendor-support=0 Jan 29 11:35:32.688995 kernel: ipmi_si: IPMI System Interface driver Jan 29 11:35:32.689050 kernel: iTCO_wdt iTCO_wdt: unable to reset NO_REBOOT flag, device disabled by hardware/BIOS Jan 29 11:35:32.700403 kernel: ipmi_si dmi-ipmi-si.0: ipmi_platform: probing via SMBIOS Jan 29 11:35:32.710076 kernel: ipmi_platform: ipmi_si: SMBIOS: io 0xca2 regsize 1 spacing 1 irq 0 Jan 29 11:35:32.710109 kernel: ipmi_si: Adding SMBIOS-specified kcs state machine Jan 29 11:35:32.710120 kernel: ipmi_si IPI0001:00: ipmi_platform: probing via ACPI Jan 29 11:35:32.740176 kernel: ipmi_si IPI0001:00: ipmi_platform: [io 0x0ca2] regsize 1 spacing 1 irq 0 Jan 29 11:35:32.740260 kernel: ipmi_si dmi-ipmi-si.0: Removing SMBIOS-specified kcs state machine in favor of ACPI Jan 29 11:35:32.740353 kernel: ipmi_si: Adding ACPI-specified kcs state machine Jan 29 11:35:32.740372 kernel: ipmi_si: Trying ACPI-specified kcs state machine at i/o address 0xca2, slave address 0x20, irq 0 Jan 29 11:35:32.753783 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 29 11:35:32.764289 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 11:35:32.776497 systemd[1]: Starting systemd-userdbd.service - User Database Manager... 
Jan 29 11:35:32.785505 kernel: intel_rapl_common: Found RAPL domain package Jan 29 11:35:32.785540 kernel: intel_rapl_common: Found RAPL domain core Jan 29 11:35:32.786301 kernel: intel_rapl_common: Found RAPL domain uncore Jan 29 11:35:32.786320 kernel: intel_rapl_common: Found RAPL domain dram Jan 29 11:35:32.818303 kernel: ipmi_si IPI0001:00: The BMC does not support clearing the recv irq bit, compensating, but the BMC needs to be fixed. Jan 29 11:35:32.834335 kernel: ipmi_si IPI0001:00: IPMI message handler: Found new BMC (man_id: 0x002a7c, prod_id: 0x1b11, dev_id: 0x20) Jan 29 11:35:32.847487 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 29 11:35:32.848299 kernel: ipmi_si IPI0001:00: IPMI kcs interface initialized Jan 29 11:35:32.850301 kernel: ipmi_ssif: IPMI SSIF Interface driver Jan 29 11:35:32.875567 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 29 11:35:32.895674 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 11:35:32.910389 systemd-networkd[1564]: lo: Link UP Jan 29 11:35:32.910393 systemd-networkd[1564]: lo: Gained carrier Jan 29 11:35:32.912862 systemd-networkd[1564]: bond0: netdev ready Jan 29 11:35:32.913748 systemd-networkd[1564]: Enumeration completed Jan 29 11:35:32.921680 systemd-networkd[1564]: enp2s0f0np0: Configuring with /etc/systemd/network/10-04:3f:72:d9:a6:3c.network. Jan 29 11:35:32.922381 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 29 11:35:32.930055 lvm[1589]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 29 11:35:32.933425 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 29 11:35:32.950353 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 29 11:35:32.965079 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 29 11:35:32.977487 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 29 11:35:32.996392 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 29 11:35:32.998705 lvm[1593]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 29 11:35:33.030062 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 29 11:35:33.042488 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 29 11:35:33.054339 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 29 11:35:33.054354 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 29 11:35:33.064333 systemd[1]: Reached target machines.target - Containers. Jan 29 11:35:33.073984 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 29 11:35:33.095437 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 29 11:35:33.108072 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 29 11:35:33.118427 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 11:35:33.124696 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... 
Jan 29 11:35:33.136304 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 29 11:35:33.148277 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 29 11:35:33.148797 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 29 11:35:33.170303 kernel: loop0: detected capacity change from 0 to 140992 Jan 29 11:35:33.171345 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 29 11:35:33.171999 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 29 11:35:33.185581 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 29 11:35:33.199329 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 29 11:35:33.234332 kernel: loop1: detected capacity change from 0 to 138184 Jan 29 11:35:33.295338 kernel: loop2: detected capacity change from 0 to 210664 Jan 29 11:35:33.344302 kernel: loop3: detected capacity change from 0 to 8 Jan 29 11:35:33.387341 kernel: loop4: detected capacity change from 0 to 140992 Jan 29 11:35:33.414326 kernel: loop5: detected capacity change from 0 to 138184 Jan 29 11:35:33.429300 kernel: mlx5_core 0000:02:00.0 enp2s0f0np0: Link up Jan 29 11:35:33.429478 kernel: loop6: detected capacity change from 0 to 210664 Jan 29 11:35:33.435299 kernel: bond0: (slave enp2s0f0np0): Enslaving as a backup interface with an up link Jan 29 11:35:33.445766 systemd-networkd[1564]: enp2s0f1np1: Configuring with /etc/systemd/network/10-04:3f:72:d9:a6:3d.network. Jan 29 11:35:33.459300 kernel: loop7: detected capacity change from 0 to 8 Jan 29 11:35:33.459458 (sd-merge)[1616]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-packet'. Jan 29 11:35:33.459709 (sd-merge)[1616]: Merged extensions into '/usr'. Jan 29 11:35:33.461962 systemd[1]: Reloading requested from client PID 1602 ('systemd-sysext') (unit systemd-sysext.service)... Jan 29 11:35:33.461969 systemd[1]: Reloading... Jan 29 11:35:33.465239 ldconfig[1598]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 29 11:35:33.491302 zram_generator::config[1645]: No configuration found. Jan 29 11:35:33.564614 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 11:35:33.595352 kernel: mlx5_core 0000:02:00.1 enp2s0f1np1: Link up Jan 29 11:35:33.605951 systemd-networkd[1564]: bond0: Configuring with /etc/systemd/network/05-bond0.network. Jan 29 11:35:33.606299 kernel: bond0: (slave enp2s0f1np1): Enslaving as a backup interface with an up link Jan 29 11:35:33.607671 systemd-networkd[1564]: enp2s0f0np0: Link UP Jan 29 11:35:33.607878 systemd-networkd[1564]: enp2s0f0np0: Gained carrier Jan 29 11:35:33.615331 systemd[1]: Reloading finished in 153 ms. Jan 29 11:35:33.617303 kernel: bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond Jan 29 11:35:33.623678 systemd-networkd[1564]: enp2s0f1np1: Reconfiguring with /etc/systemd/network/10-04:3f:72:d9:a6:3c.network. Jan 29 11:35:33.623820 systemd-networkd[1564]: enp2s0f1np1: Link UP Jan 29 11:35:33.624024 systemd-networkd[1564]: enp2s0f1np1: Gained carrier Jan 29 11:35:33.632253 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. 
Jan 29 11:35:33.640459 systemd-networkd[1564]: bond0: Link UP Jan 29 11:35:33.640679 systemd-networkd[1564]: bond0: Gained carrier Jan 29 11:35:33.643520 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 29 11:35:33.669455 systemd[1]: Starting ensure-sysext.service... Jan 29 11:35:33.678135 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 29 11:35:33.691611 systemd[1]: Reloading requested from client PID 1707 ('systemctl') (unit ensure-sysext.service)... Jan 29 11:35:33.691618 systemd[1]: Reloading... Jan 29 11:35:33.699114 systemd-tmpfiles[1709]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 29 11:35:33.699335 systemd-tmpfiles[1709]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 29 11:35:33.699848 systemd-tmpfiles[1709]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 29 11:35:33.700026 systemd-tmpfiles[1709]: ACLs are not supported, ignoring. Jan 29 11:35:33.700063 systemd-tmpfiles[1709]: ACLs are not supported, ignoring. Jan 29 11:35:33.701707 systemd-tmpfiles[1709]: Detected autofs mount point /boot during canonicalization of boot. Jan 29 11:35:33.701711 systemd-tmpfiles[1709]: Skipping /boot Jan 29 11:35:33.705942 systemd-tmpfiles[1709]: Detected autofs mount point /boot during canonicalization of boot. Jan 29 11:35:33.705945 systemd-tmpfiles[1709]: Skipping /boot Jan 29 11:35:33.717307 zram_generator::config[1738]: No configuration found. Jan 29 11:35:33.717359 kernel: bond0: (slave enp2s0f0np0): link status definitely up, 10000 Mbps full duplex Jan 29 11:35:33.729768 kernel: bond0: active interface up! Jan 29 11:35:33.787425 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 11:35:33.838023 systemd[1]: Reloading finished in 146 ms. Jan 29 11:35:33.840342 kernel: bond0: (slave enp2s0f1np1): link status definitely up, 10000 Mbps full duplex Jan 29 11:35:33.853056 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 29 11:35:33.875760 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 29 11:35:33.885379 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 29 11:35:33.898301 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 29 11:35:33.903594 augenrules[1824]: No rules Jan 29 11:35:33.911570 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 29 11:35:33.923239 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 29 11:35:33.935676 systemd[1]: audit-rules.service: Deactivated successfully. Jan 29 11:35:33.935828 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 29 11:35:33.946626 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 29 11:35:33.957610 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 29 11:35:33.978286 systemd[1]: Starting systemd-update-done.service - Update is Completed... 
Jan 29 11:35:33.987377 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 29 11:35:33.988925 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 11:35:33.989061 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 11:35:33.989754 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 29 11:35:34.000288 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 29 11:35:34.008681 systemd-resolved[1830]: Positive Trust Anchors: Jan 29 11:35:34.008687 systemd-resolved[1830]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 29 11:35:34.008711 systemd-resolved[1830]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 29 11:35:34.011541 systemd-resolved[1830]: Using system hostname 'ci-4152.2.0-a-23f4c5510f'. Jan 29 11:35:34.012067 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 29 11:35:34.022434 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 11:35:34.022512 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 29 11:35:34.022561 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 11:35:34.023095 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 29 11:35:34.033710 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 29 11:35:34.044614 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 29 11:35:34.054576 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 29 11:35:34.054657 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 29 11:35:34.065575 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 29 11:35:34.065656 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 29 11:35:34.076573 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 29 11:35:34.076652 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 29 11:35:34.089280 systemd[1]: Reached target network.target - Network. Jan 29 11:35:34.097425 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 29 11:35:34.108402 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Jan 29 11:35:34.108522 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 11:35:34.118442 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 29 11:35:34.128980 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 29 11:35:34.139959 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 29 11:35:34.149410 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 11:35:34.149480 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 29 11:35:34.149528 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 11:35:34.150107 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 29 11:35:34.150187 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 29 11:35:34.161615 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 29 11:35:34.161693 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 29 11:35:34.172568 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 29 11:35:34.172643 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 29 11:35:34.184593 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 11:35:34.194477 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 29 11:35:34.201376 augenrules[1863]: /sbin/augenrules: No change Jan 29 11:35:34.202493 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 11:35:34.203500 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 29 11:35:34.205165 augenrules[1881]: No rules Jan 29 11:35:34.214049 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 29 11:35:34.224031 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 29 11:35:34.244731 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 29 11:35:34.254447 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 11:35:34.254523 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 29 11:35:34.254575 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 11:35:34.255241 systemd[1]: audit-rules.service: Deactivated successfully. Jan 29 11:35:34.255411 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 29 11:35:34.265644 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 29 11:35:34.265723 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 29 11:35:34.276592 systemd[1]: modprobe@drm.service: Deactivated successfully. 
Jan 29 11:35:34.276670 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 29 11:35:34.286567 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 29 11:35:34.286644 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 29 11:35:34.297574 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 29 11:35:34.297650 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 29 11:35:34.308364 systemd[1]: Finished ensure-sysext.service. Jan 29 11:35:34.318107 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 29 11:35:34.318139 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 29 11:35:34.327457 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 29 11:35:34.366535 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 29 11:35:34.377466 systemd[1]: Reached target sysinit.target - System Initialization. Jan 29 11:35:34.387413 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 29 11:35:34.398389 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 29 11:35:34.409383 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 29 11:35:34.420370 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 29 11:35:34.420386 systemd[1]: Reached target paths.target - Path Units. Jan 29 11:35:34.428364 systemd[1]: Reached target time-set.target - System Time Set. Jan 29 11:35:34.437435 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 29 11:35:34.447416 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 29 11:35:34.458362 systemd[1]: Reached target timers.target - Timer Units. Jan 29 11:35:34.466837 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 29 11:35:34.477208 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 29 11:35:34.486153 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 29 11:35:34.495746 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 29 11:35:34.505405 systemd[1]: Reached target sockets.target - Socket Units. Jan 29 11:35:34.515376 systemd[1]: Reached target basic.target - Basic System. Jan 29 11:35:34.523457 systemd[1]: System is tainted: cgroupsv1 Jan 29 11:35:34.523479 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 29 11:35:34.523492 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 29 11:35:34.534372 systemd[1]: Starting containerd.service - containerd container runtime... Jan 29 11:35:34.545206 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 29 11:35:34.555037 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 29 11:35:34.564086 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... 
Jan 29 11:35:34.567546 coreos-metadata[1908]: Jan 29 11:35:34.567 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Jan 29 11:35:34.574180 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 29 11:35:34.575357 dbus-daemon[1909]: [system] SELinux support is enabled Jan 29 11:35:34.576078 jq[1912]: false Jan 29 11:35:34.583399 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 29 11:35:34.584169 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 29 11:35:34.591240 extend-filesystems[1914]: Found loop4 Jan 29 11:35:34.606391 extend-filesystems[1914]: Found loop5 Jan 29 11:35:34.606391 extend-filesystems[1914]: Found loop6 Jan 29 11:35:34.606391 extend-filesystems[1914]: Found loop7 Jan 29 11:35:34.606391 extend-filesystems[1914]: Found sda Jan 29 11:35:34.606391 extend-filesystems[1914]: Found sda1 Jan 29 11:35:34.606391 extend-filesystems[1914]: Found sda2 Jan 29 11:35:34.606391 extend-filesystems[1914]: Found sda3 Jan 29 11:35:34.606391 extend-filesystems[1914]: Found usr Jan 29 11:35:34.606391 extend-filesystems[1914]: Found sda4 Jan 29 11:35:34.606391 extend-filesystems[1914]: Found sda6 Jan 29 11:35:34.606391 extend-filesystems[1914]: Found sda7 Jan 29 11:35:34.606391 extend-filesystems[1914]: Found sda9 Jan 29 11:35:34.606391 extend-filesystems[1914]: Checking size of /dev/sda9 Jan 29 11:35:34.606391 extend-filesystems[1914]: Resized partition /dev/sda9 Jan 29 11:35:34.780333 kernel: EXT4-fs (sda9): resizing filesystem from 553472 to 116605649 blocks Jan 29 11:35:34.780360 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1485) Jan 29 11:35:34.594036 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 29 11:35:34.780438 extend-filesystems[1925]: resize2fs 1.47.1 (20-May-2024) Jan 29 11:35:34.607589 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 29 11:35:34.631272 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 29 11:35:34.647866 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 29 11:35:34.665487 systemd[1]: Starting tcsd.service - TCG Core Services Daemon... Jan 29 11:35:34.799719 sshd_keygen[1942]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 29 11:35:34.679190 systemd[1]: Starting update-engine.service - Update Engine... Jan 29 11:35:34.799835 update_engine[1943]: I20250129 11:35:34.712204 1943 main.cc:92] Flatcar Update Engine starting Jan 29 11:35:34.799835 update_engine[1943]: I20250129 11:35:34.713047 1943 update_check_scheduler.cc:74] Next update check in 9m25s Jan 29 11:35:34.693155 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 29 11:35:34.800011 jq[1944]: true Jan 29 11:35:34.703597 systemd-logind[1938]: Watching system buttons on /dev/input/event3 (Power Button) Jan 29 11:35:34.703608 systemd-logind[1938]: Watching system buttons on /dev/input/event2 (Sleep Button) Jan 29 11:35:34.703618 systemd-logind[1938]: Watching system buttons on /dev/input/event0 (HID 0557:2419) Jan 29 11:35:34.703936 systemd-logind[1938]: New seat seat0. Jan 29 11:35:34.705136 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 29 11:35:34.736579 systemd[1]: Started systemd-logind.service - User Login Management. 
Jan 29 11:35:34.772448 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 29 11:35:34.772585 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 29 11:35:34.772726 systemd[1]: motdgen.service: Deactivated successfully. Jan 29 11:35:34.772846 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 29 11:35:34.791816 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 29 11:35:34.791935 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 29 11:35:34.799731 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 29 11:35:34.830224 jq[1958]: true Jan 29 11:35:34.831017 (ntainerd)[1959]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 29 11:35:34.834426 tar[1956]: linux-amd64/helm Jan 29 11:35:34.834523 dbus-daemon[1909]: [system] Successfully activated service 'org.freedesktop.systemd1' Jan 29 11:35:34.838128 systemd[1]: tcsd.service: Skipped due to 'exec-condition'. Jan 29 11:35:34.838259 systemd[1]: Condition check resulted in tcsd.service - TCG Core Services Daemon being skipped. Jan 29 11:35:34.843580 systemd[1]: Started update-engine.service - Update Engine. Jan 29 11:35:34.846359 systemd-networkd[1564]: bond0: Gained IPv6LL Jan 29 11:35:34.870438 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 29 11:35:34.878698 bash[1989]: Updated "/home/core/.ssh/authorized_keys" Jan 29 11:35:34.879370 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 29 11:35:34.879483 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 29 11:35:34.891396 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 29 11:35:34.891475 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 29 11:35:34.903759 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 29 11:35:34.913436 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 29 11:35:34.926366 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 29 11:35:34.931787 locksmithd[1998]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 29 11:35:34.938631 systemd[1]: issuegen.service: Deactivated successfully. Jan 29 11:35:34.938762 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 29 11:35:34.958497 systemd[1]: Starting sshkeys.service... Jan 29 11:35:34.966191 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 29 11:35:34.978308 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 29 11:35:34.990013 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. 
Jan 29 11:35:34.997279 containerd[1959]: time="2025-01-29T11:35:34.997238427Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Jan 29 11:35:35.006349 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 29 11:35:35.009211 containerd[1959]: time="2025-01-29T11:35:35.009166106Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 29 11:35:35.010057 containerd[1959]: time="2025-01-29T11:35:35.010026093Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 29 11:35:35.010097 containerd[1959]: time="2025-01-29T11:35:35.010056480Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 29 11:35:35.010097 containerd[1959]: time="2025-01-29T11:35:35.010079733Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 29 11:35:35.010239 containerd[1959]: time="2025-01-29T11:35:35.010227435Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 29 11:35:35.010271 containerd[1959]: time="2025-01-29T11:35:35.010243679Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 29 11:35:35.010312 containerd[1959]: time="2025-01-29T11:35:35.010298723Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 11:35:35.010348 containerd[1959]: time="2025-01-29T11:35:35.010312031Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 29 11:35:35.010454 containerd[1959]: time="2025-01-29T11:35:35.010441599Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 11:35:35.010494 containerd[1959]: time="2025-01-29T11:35:35.010453374Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 29 11:35:35.010494 containerd[1959]: time="2025-01-29T11:35:35.010465591Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 11:35:35.010494 containerd[1959]: time="2025-01-29T11:35:35.010475398Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 29 11:35:35.010568 containerd[1959]: time="2025-01-29T11:35:35.010537975Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 29 11:35:35.010686 containerd[1959]: time="2025-01-29T11:35:35.010676104Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 29 11:35:35.010773 containerd[1959]: time="2025-01-29T11:35:35.010762186Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 11:35:35.010806 containerd[1959]: time="2025-01-29T11:35:35.010772539Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 29 11:35:35.010844 containerd[1959]: time="2025-01-29T11:35:35.010833314Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 29 11:35:35.010885 containerd[1959]: time="2025-01-29T11:35:35.010874352Z" level=info msg="metadata content store policy set" policy=shared Jan 29 11:35:35.018467 systemd[1]: Reached target network-online.target - Network is Online. Jan 29 11:35:35.022800 containerd[1959]: time="2025-01-29T11:35:35.022787050Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 29 11:35:35.022839 containerd[1959]: time="2025-01-29T11:35:35.022812998Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 29 11:35:35.022839 containerd[1959]: time="2025-01-29T11:35:35.022829702Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 29 11:35:35.022900 containerd[1959]: time="2025-01-29T11:35:35.022843654Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 29 11:35:35.022900 containerd[1959]: time="2025-01-29T11:35:35.022857511Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 29 11:35:35.022957 containerd[1959]: time="2025-01-29T11:35:35.022943716Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 29 11:35:35.024388 containerd[1959]: time="2025-01-29T11:35:35.024376715Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 29 11:35:35.024465 containerd[1959]: time="2025-01-29T11:35:35.024454833Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 29 11:35:35.024499 containerd[1959]: time="2025-01-29T11:35:35.024468221Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 29 11:35:35.024499 containerd[1959]: time="2025-01-29T11:35:35.024481989Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 29 11:35:35.024499 containerd[1959]: time="2025-01-29T11:35:35.024495974Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 29 11:35:35.024580 containerd[1959]: time="2025-01-29T11:35:35.024509062Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 29 11:35:35.024580 containerd[1959]: time="2025-01-29T11:35:35.024521934Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 29 11:35:35.024580 containerd[1959]: time="2025-01-29T11:35:35.024534542Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 29 11:35:35.024580 containerd[1959]: time="2025-01-29T11:35:35.024548954Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." 
type=io.containerd.service.v1 Jan 29 11:35:35.024580 containerd[1959]: time="2025-01-29T11:35:35.024561037Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 29 11:35:35.024580 containerd[1959]: time="2025-01-29T11:35:35.024573231Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 29 11:35:35.024727 containerd[1959]: time="2025-01-29T11:35:35.024584647Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 29 11:35:35.024727 containerd[1959]: time="2025-01-29T11:35:35.024604639Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 29 11:35:35.024727 containerd[1959]: time="2025-01-29T11:35:35.024618322Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 29 11:35:35.024727 containerd[1959]: time="2025-01-29T11:35:35.024630535Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 29 11:35:35.024727 containerd[1959]: time="2025-01-29T11:35:35.024642649Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 29 11:35:35.024727 containerd[1959]: time="2025-01-29T11:35:35.024654763Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 29 11:35:35.024727 containerd[1959]: time="2025-01-29T11:35:35.024668385Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 29 11:35:35.024727 containerd[1959]: time="2025-01-29T11:35:35.024679520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 29 11:35:35.024727 containerd[1959]: time="2025-01-29T11:35:35.024692520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 29 11:35:35.024727 containerd[1959]: time="2025-01-29T11:35:35.024705600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 29 11:35:35.024727 containerd[1959]: time="2025-01-29T11:35:35.024719999Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 29 11:35:35.025001 containerd[1959]: time="2025-01-29T11:35:35.024731383Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 29 11:35:35.025001 containerd[1959]: time="2025-01-29T11:35:35.024744031Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 29 11:35:35.025001 containerd[1959]: time="2025-01-29T11:35:35.024756315Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 29 11:35:35.025001 containerd[1959]: time="2025-01-29T11:35:35.024774282Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 29 11:35:35.025001 containerd[1959]: time="2025-01-29T11:35:35.024794483Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 29 11:35:35.025001 containerd[1959]: time="2025-01-29T11:35:35.024819066Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." 
type=io.containerd.grpc.v1 Jan 29 11:35:35.025001 containerd[1959]: time="2025-01-29T11:35:35.024831257Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 29 11:35:35.025001 containerd[1959]: time="2025-01-29T11:35:35.024866511Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 29 11:35:35.025001 containerd[1959]: time="2025-01-29T11:35:35.024881987Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 29 11:35:35.025001 containerd[1959]: time="2025-01-29T11:35:35.024892565Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 29 11:35:35.025001 containerd[1959]: time="2025-01-29T11:35:35.024904636Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 29 11:35:35.025001 containerd[1959]: time="2025-01-29T11:35:35.024919055Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 29 11:35:35.025001 containerd[1959]: time="2025-01-29T11:35:35.024933163Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 29 11:35:35.025001 containerd[1959]: time="2025-01-29T11:35:35.024943655Z" level=info msg="NRI interface is disabled by configuration." Jan 29 11:35:35.025345 containerd[1959]: time="2025-01-29T11:35:35.024953641Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jan 29 11:35:35.025374 containerd[1959]: time="2025-01-29T11:35:35.025209175Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false 
X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 29 11:35:35.025374 containerd[1959]: time="2025-01-29T11:35:35.025252854Z" level=info msg="Connect containerd service" Jan 29 11:35:35.025374 containerd[1959]: time="2025-01-29T11:35:35.025277418Z" level=info msg="using legacy CRI server" Jan 29 11:35:35.025374 containerd[1959]: time="2025-01-29T11:35:35.025284511Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 29 11:35:35.025562 containerd[1959]: time="2025-01-29T11:35:35.025384106Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 29 11:35:35.025737 containerd[1959]: time="2025-01-29T11:35:35.025724646Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 29 11:35:35.025872 containerd[1959]: time="2025-01-29T11:35:35.025848831Z" level=info msg="Start subscribing containerd event" Jan 29 11:35:35.025907 containerd[1959]: time="2025-01-29T11:35:35.025881448Z" level=info msg="Start recovering state" Jan 29 11:35:35.025934 containerd[1959]: time="2025-01-29T11:35:35.025913050Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 29 11:35:35.025934 containerd[1959]: time="2025-01-29T11:35:35.025917723Z" level=info msg="Start event monitor" Jan 29 11:35:35.025934 containerd[1959]: time="2025-01-29T11:35:35.025929141Z" level=info msg="Start snapshots syncer" Jan 29 11:35:35.026009 containerd[1959]: time="2025-01-29T11:35:35.025934960Z" level=info msg="Start cni network conf syncer for default" Jan 29 11:35:35.026009 containerd[1959]: time="2025-01-29T11:35:35.025941098Z" level=info msg="Start streaming server" Jan 29 11:35:35.026009 containerd[1959]: time="2025-01-29T11:35:35.025948015Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 29 11:35:35.026009 containerd[1959]: time="2025-01-29T11:35:35.025989691Z" level=info msg="containerd successfully booted in 0.029196s" Jan 29 11:35:35.036468 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 29 11:35:35.047123 coreos-metadata[2030]: Jan 29 11:35:35.047 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Jan 29 11:35:35.048106 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 29 11:35:35.057261 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:35:35.067134 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 29 11:35:35.076037 systemd[1]: Started serial-getty@ttyS1.service - Serial Getty on ttyS1. 
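Editor's note: the CRI plugin dump above shows the effective containerd runtime config (overlayfs snapshotter, runc v2 with SystemdCgroup:false, pause:3.8 sandbox image, CNI conf dir /etc/cni/net.d), and the subsequent "no network config found in /etc/cni/net.d" error is expected on first boot, before any CNI plugin has installed a config. A minimal sketch of that condition, assuming only the conventional CNI file layout (not containerd's actual loader code):

```go
// cnicheck.go - sketch of the "no network config found" condition reported above;
// assumes the conventional /etc/cni/net.d layout and file extensions.
package main

import (
	"fmt"
	"path/filepath"
)

func main() {
	confDir := "/etc/cni/net.d" // NetworkPluginConfDir from the CRI config dump
	var files []string
	for _, pat := range []string{"*.conf", "*.conflist", "*.json"} {
		m, err := filepath.Glob(filepath.Join(confDir, pat))
		if err != nil {
			continue
		}
		files = append(files, m...)
	}
	if len(files) == 0 {
		// Mirrors the log: pod networking stays unconfigured until a CNI
		// plugin (flannel, calico, ...) drops a config file into this dir.
		fmt.Printf("no network config found in %s\n", confDir)
		return
	}
	fmt.Println("CNI configs:", files)
}
```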
Jan 29 11:35:35.086515 systemd[1]: Reached target getty.target - Login Prompts. Jan 29 11:35:35.095936 systemd[1]: Started containerd.service - containerd container runtime. Jan 29 11:35:35.098676 tar[1956]: linux-amd64/LICENSE Jan 29 11:35:35.098676 tar[1956]: linux-amd64/README.md Jan 29 11:35:35.125021 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 29 11:35:35.134298 kernel: EXT4-fs (sda9): resized filesystem to 116605649 Jan 29 11:35:35.141640 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 29 11:35:35.159709 extend-filesystems[1925]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Jan 29 11:35:35.159709 extend-filesystems[1925]: old_desc_blocks = 1, new_desc_blocks = 56 Jan 29 11:35:35.159709 extend-filesystems[1925]: The filesystem on /dev/sda9 is now 116605649 (4k) blocks long. Jan 29 11:35:35.188485 extend-filesystems[1914]: Resized filesystem in /dev/sda9 Jan 29 11:35:35.188485 extend-filesystems[1914]: Found sdb Jan 29 11:35:35.160397 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 29 11:35:35.160524 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 29 11:35:35.755950 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:35:35.773518 (kubelet)[2065]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 11:35:36.216307 kubelet[2065]: E0129 11:35:36.216189 2065 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 11:35:36.217627 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 11:35:36.217731 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 11:35:36.410836 kernel: mlx5_core 0000:02:00.0: lag map: port 1:1 port 2:2 Jan 29 11:35:36.410986 kernel: mlx5_core 0000:02:00.0: shared_fdb:0 mode:queue_affinity Jan 29 11:35:38.145113 coreos-metadata[2030]: Jan 29 11:35:38.145 INFO Fetch successful Jan 29 11:35:38.222058 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 29 11:35:38.227474 unknown[2030]: wrote ssh authorized keys file for user: core Jan 29 11:35:38.238705 systemd[1]: Started sshd@0-139.178.70.53:22-139.178.89.65:52106.service - OpenSSH per-connection server daemon (139.178.89.65:52106). Jan 29 11:35:38.262574 coreos-metadata[1908]: Jan 29 11:35:38.262 INFO Fetch successful Jan 29 11:35:38.281460 update-ssh-keys[2090]: Updated "/home/core/.ssh/authorized_keys" Jan 29 11:35:38.281831 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 29 11:35:38.306872 systemd[1]: Finished sshkeys.service. Jan 29 11:35:38.317506 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 29 11:35:38.328730 systemd[1]: Starting packet-phone-home.service - Report Success to Packet... Jan 29 11:35:38.359472 sshd[2089]: Accepted publickey for core from 139.178.89.65 port 52106 ssh2: RSA SHA256:m8/38zVkkSiD5qJuoX6hkgYWBFGuEs/Owzl1kXbpsh0 Jan 29 11:35:38.360547 sshd-session[2089]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:35:38.366038 systemd-logind[1938]: New session 1 of user core. 
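Editor's note: the kubelet failure above ("open /var/lib/kubelet/config.yaml: no such file or directory") is the normal pre-bootstrap state: the unit starts before kubeadm init/join has generated the config, exits with status=1, and systemd restarts it later (the log shows the restart counter and a second identical failure further down). A minimal preflight sketch of that exact condition, using only the path quoted in the error message:

```go
// kubelet_preflight.go - sketch of the condition behind the kubelet exit above:
// the kubeadm-generated config file simply does not exist yet.
package main

import (
	"errors"
	"fmt"
	"io/fs"
	"os"
)

func main() {
	const cfg = "/var/lib/kubelet/config.yaml" // path from the error message
	_, err := os.Stat(cfg)
	switch {
	case err == nil:
		fmt.Println("kubelet config present; the unit should stay up")
	case errors.Is(err, fs.ErrNotExist):
		// Matches the boot log: the node is not bootstrapped yet, so the
		// unit exits with status=1 and systemd schedules a restart.
		fmt.Println("node not bootstrapped yet:", cfg, "missing")
	default:
		fmt.Println("unexpected error:", err)
	}
}
```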
Jan 29 11:35:38.366664 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 29 11:35:38.388610 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 29 11:35:38.401231 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 29 11:35:38.427629 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 29 11:35:38.438046 (systemd)[2112]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 29 11:35:38.543864 systemd[2112]: Queued start job for default target default.target. Jan 29 11:35:38.544040 systemd[2112]: Created slice app.slice - User Application Slice. Jan 29 11:35:38.544052 systemd[2112]: Reached target paths.target - Paths. Jan 29 11:35:38.544060 systemd[2112]: Reached target timers.target - Timers. Jan 29 11:35:38.558519 systemd[2112]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 29 11:35:38.561783 systemd[2112]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 29 11:35:38.561809 systemd[2112]: Reached target sockets.target - Sockets. Jan 29 11:35:38.561818 systemd[2112]: Reached target basic.target - Basic System. Jan 29 11:35:38.561838 systemd[2112]: Reached target default.target - Main User Target. Jan 29 11:35:38.561852 systemd[2112]: Startup finished in 117ms. Jan 29 11:35:38.562024 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 29 11:35:38.572249 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 29 11:35:38.645449 systemd[1]: Started sshd@1-139.178.70.53:22-139.178.89.65:52118.service - OpenSSH per-connection server daemon (139.178.89.65:52118). Jan 29 11:35:38.674894 sshd[2124]: Accepted publickey for core from 139.178.89.65 port 52118 ssh2: RSA SHA256:m8/38zVkkSiD5qJuoX6hkgYWBFGuEs/Owzl1kXbpsh0 Jan 29 11:35:38.675472 sshd-session[2124]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:35:38.677936 systemd-logind[1938]: New session 2 of user core. Jan 29 11:35:38.689460 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 29 11:35:38.727825 systemd[1]: Finished packet-phone-home.service - Report Success to Packet. Jan 29 11:35:38.739270 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 29 11:35:38.748697 systemd[1]: Startup finished in 21.382s (kernel) + 8.515s (userspace) = 29.898s. Jan 29 11:35:38.757590 sshd[2127]: Connection closed by 139.178.89.65 port 52118 Jan 29 11:35:38.757942 sshd-session[2124]: pam_unix(sshd:session): session closed for user core Jan 29 11:35:38.761426 systemd[1]: Started sshd@2-139.178.70.53:22-139.178.89.65:52126.service - OpenSSH per-connection server daemon (139.178.89.65:52126). Jan 29 11:35:38.762093 systemd[1]: sshd@1-139.178.70.53:22-139.178.89.65:52118.service: Deactivated successfully. Jan 29 11:35:38.763639 systemd-logind[1938]: Session 2 logged out. Waiting for processes to exit. Jan 29 11:35:38.763972 systemd[1]: session-2.scope: Deactivated successfully. Jan 29 11:35:38.764690 systemd-logind[1938]: Removed session 2. Jan 29 11:35:38.769784 login[2038]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jan 29 11:35:38.772608 systemd-logind[1938]: New session 3 of user core. Jan 29 11:35:38.773125 systemd[1]: Started session-3.scope - Session 3 of User core. 
Jan 29 11:35:38.775981 login[2034]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jan 29 11:35:38.778242 systemd-logind[1938]: New session 4 of user core. Jan 29 11:35:38.779055 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 29 11:35:38.795554 sshd[2132]: Accepted publickey for core from 139.178.89.65 port 52126 ssh2: RSA SHA256:m8/38zVkkSiD5qJuoX6hkgYWBFGuEs/Owzl1kXbpsh0 Jan 29 11:35:38.796324 sshd-session[2132]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:35:38.798831 systemd-logind[1938]: New session 5 of user core. Jan 29 11:35:38.813664 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 29 11:35:38.860070 sshd[2164]: Connection closed by 139.178.89.65 port 52126 Jan 29 11:35:38.860482 sshd-session[2132]: pam_unix(sshd:session): session closed for user core Jan 29 11:35:38.877951 systemd[1]: Started sshd@3-139.178.70.53:22-139.178.89.65:52128.service - OpenSSH per-connection server daemon (139.178.89.65:52128). Jan 29 11:35:38.880078 systemd[1]: sshd@2-139.178.70.53:22-139.178.89.65:52126.service: Deactivated successfully. Jan 29 11:35:38.883972 systemd[1]: session-5.scope: Deactivated successfully. Jan 29 11:35:38.886125 systemd-logind[1938]: Session 5 logged out. Waiting for processes to exit. Jan 29 11:35:38.889143 systemd-logind[1938]: Removed session 5. Jan 29 11:35:38.948242 sshd[2167]: Accepted publickey for core from 139.178.89.65 port 52128 ssh2: RSA SHA256:m8/38zVkkSiD5qJuoX6hkgYWBFGuEs/Owzl1kXbpsh0 Jan 29 11:35:38.948887 sshd-session[2167]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:35:38.951568 systemd-logind[1938]: New session 6 of user core. Jan 29 11:35:38.961603 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 29 11:35:39.011959 sshd[2172]: Connection closed by 139.178.89.65 port 52128 Jan 29 11:35:39.012241 sshd-session[2167]: pam_unix(sshd:session): session closed for user core Jan 29 11:35:39.035112 systemd[1]: Started sshd@4-139.178.70.53:22-139.178.89.65:52136.service - OpenSSH per-connection server daemon (139.178.89.65:52136). Jan 29 11:35:39.037690 systemd[1]: sshd@3-139.178.70.53:22-139.178.89.65:52128.service: Deactivated successfully. Jan 29 11:35:39.041646 systemd[1]: session-6.scope: Deactivated successfully. Jan 29 11:35:39.043801 systemd-logind[1938]: Session 6 logged out. Waiting for processes to exit. Jan 29 11:35:39.046872 systemd-logind[1938]: Removed session 6. Jan 29 11:35:39.099809 sshd[2175]: Accepted publickey for core from 139.178.89.65 port 52136 ssh2: RSA SHA256:m8/38zVkkSiD5qJuoX6hkgYWBFGuEs/Owzl1kXbpsh0 Jan 29 11:35:39.100410 sshd-session[2175]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:35:39.102830 systemd-logind[1938]: New session 7 of user core. Jan 29 11:35:39.123612 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 29 11:35:39.144649 systemd-timesyncd[1902]: Contacted time server 208.67.75.242:123 (0.flatcar.pool.ntp.org). Jan 29 11:35:39.144683 systemd-timesyncd[1902]: Initial clock synchronization to Wed 2025-01-29 11:35:39.132838 UTC. 
Jan 29 11:35:39.180221 sudo[2181]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 29 11:35:39.180378 sudo[2181]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 11:35:39.196057 sudo[2181]: pam_unix(sudo:session): session closed for user root Jan 29 11:35:39.196871 sshd[2180]: Connection closed by 139.178.89.65 port 52136 Jan 29 11:35:39.197050 sshd-session[2175]: pam_unix(sshd:session): session closed for user core Jan 29 11:35:39.211619 systemd[1]: Started sshd@5-139.178.70.53:22-139.178.89.65:52148.service - OpenSSH per-connection server daemon (139.178.89.65:52148). Jan 29 11:35:39.212302 systemd[1]: sshd@4-139.178.70.53:22-139.178.89.65:52136.service: Deactivated successfully. Jan 29 11:35:39.213498 systemd[1]: session-7.scope: Deactivated successfully. Jan 29 11:35:39.214186 systemd-logind[1938]: Session 7 logged out. Waiting for processes to exit. Jan 29 11:35:39.215198 systemd-logind[1938]: Removed session 7. Jan 29 11:35:39.243741 sshd[2184]: Accepted publickey for core from 139.178.89.65 port 52148 ssh2: RSA SHA256:m8/38zVkkSiD5qJuoX6hkgYWBFGuEs/Owzl1kXbpsh0 Jan 29 11:35:39.244406 sshd-session[2184]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:35:39.246917 systemd-logind[1938]: New session 8 of user core. Jan 29 11:35:39.260640 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 29 11:35:39.315849 sudo[2191]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 29 11:35:39.316603 sudo[2191]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 11:35:39.324512 sudo[2191]: pam_unix(sudo:session): session closed for user root Jan 29 11:35:39.332635 sudo[2190]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 29 11:35:39.332770 sudo[2190]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 11:35:39.353857 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 29 11:35:39.377237 augenrules[2213]: No rules Jan 29 11:35:39.377912 systemd[1]: audit-rules.service: Deactivated successfully. Jan 29 11:35:39.378193 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 29 11:35:39.379502 sudo[2190]: pam_unix(sudo:session): session closed for user root Jan 29 11:35:39.380849 sshd[2189]: Connection closed by 139.178.89.65 port 52148 Jan 29 11:35:39.381230 sshd-session[2184]: pam_unix(sshd:session): session closed for user core Jan 29 11:35:39.397912 systemd[1]: Started sshd@6-139.178.70.53:22-139.178.89.65:52160.service - OpenSSH per-connection server daemon (139.178.89.65:52160). Jan 29 11:35:39.398897 systemd[1]: sshd@5-139.178.70.53:22-139.178.89.65:52148.service: Deactivated successfully. Jan 29 11:35:39.403532 systemd-logind[1938]: Session 8 logged out. Waiting for processes to exit. Jan 29 11:35:39.405014 systemd[1]: session-8.scope: Deactivated successfully. Jan 29 11:35:39.407763 systemd-logind[1938]: Removed session 8. Jan 29 11:35:39.436760 sshd[2219]: Accepted publickey for core from 139.178.89.65 port 52160 ssh2: RSA SHA256:m8/38zVkkSiD5qJuoX6hkgYWBFGuEs/Owzl1kXbpsh0 Jan 29 11:35:39.437342 sshd-session[2219]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:35:39.439838 systemd-logind[1938]: New session 9 of user core. Jan 29 11:35:39.451610 systemd[1]: Started session-9.scope - Session 9 of User core. 
Jan 29 11:35:39.500644 sudo[2226]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 29 11:35:39.500910 sudo[2226]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 11:35:39.804636 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 29 11:35:39.804776 (dockerd)[2252]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 29 11:35:40.052109 dockerd[2252]: time="2025-01-29T11:35:40.052081743Z" level=info msg="Starting up" Jan 29 11:35:40.449907 dockerd[2252]: time="2025-01-29T11:35:40.449844852Z" level=info msg="Loading containers: start." Jan 29 11:35:40.564363 kernel: Initializing XFRM netlink socket Jan 29 11:35:40.667964 systemd-networkd[1564]: docker0: Link UP Jan 29 11:35:40.694378 dockerd[2252]: time="2025-01-29T11:35:40.694308696Z" level=info msg="Loading containers: done." Jan 29 11:35:40.702919 dockerd[2252]: time="2025-01-29T11:35:40.702843224Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 29 11:35:40.702919 dockerd[2252]: time="2025-01-29T11:35:40.702889065Z" level=info msg="Docker daemon" commit=8b539b8df24032dabeaaa099cf1d0535ef0286a3 containerd-snapshotter=false storage-driver=overlay2 version=27.2.1 Jan 29 11:35:40.703002 dockerd[2252]: time="2025-01-29T11:35:40.702941425Z" level=info msg="Daemon has completed initialization" Jan 29 11:35:40.717885 dockerd[2252]: time="2025-01-29T11:35:40.717823316Z" level=info msg="API listen on /run/docker.sock" Jan 29 11:35:40.717978 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 29 11:35:41.542301 containerd[1959]: time="2025-01-29T11:35:41.542232652Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.9\"" Jan 29 11:35:42.271663 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3068603253.mount: Deactivated successfully. 
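Editor's note: dockerd's warning above ("Not using native diff for overlay2 ... kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled") means the daemon falls back to its own diff path on kernels built with overlayfs redirect_dir. A sketch for confirming that kernel option, assuming the kernel exposes /proc/config.gz (CONFIG_IKCONFIG_PROC), which not every build does:

```go
// overlay_redirect_check.go - sketch for confirming the kernel option behind
// dockerd's overlay2 diff warning; assumes /proc/config.gz is available.
package main

import (
	"bufio"
	"compress/gzip"
	"fmt"
	"os"
	"strings"
)

func main() {
	f, err := os.Open("/proc/config.gz")
	if err != nil {
		fmt.Println("kernel config not exposed:", err)
		return
	}
	defer f.Close()
	zr, err := gzip.NewReader(f)
	if err != nil {
		fmt.Println("not gzip:", err)
		return
	}
	defer zr.Close()
	sc := bufio.NewScanner(zr)
	for sc.Scan() {
		line := sc.Text()
		if strings.HasPrefix(line, "CONFIG_OVERLAY_FS_REDIRECT_DIR") {
			// "=y" here explains the "Not using native diff" warning.
			fmt.Println(line)
		}
	}
}
```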
Jan 29 11:35:43.096203 containerd[1959]: time="2025-01-29T11:35:43.096150153Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:35:43.096439 containerd[1959]: time="2025-01-29T11:35:43.096300190Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.9: active requests=0, bytes read=32677012" Jan 29 11:35:43.096815 containerd[1959]: time="2025-01-29T11:35:43.096775820Z" level=info msg="ImageCreate event name:\"sha256:4f53be91109c4dd4658bb0141e8af556b94293ec9fad72b2b62a617edb48e5c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:35:43.098355 containerd[1959]: time="2025-01-29T11:35:43.098338055Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:540de8f810ac963b8ed93f7393a8746d68e7e8a2c79ea58ff409ac5b9ca6a9fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:35:43.099427 containerd[1959]: time="2025-01-29T11:35:43.099387847Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.9\" with image id \"sha256:4f53be91109c4dd4658bb0141e8af556b94293ec9fad72b2b62a617edb48e5c4\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:540de8f810ac963b8ed93f7393a8746d68e7e8a2c79ea58ff409ac5b9ca6a9fc\", size \"32673812\" in 1.557132972s" Jan 29 11:35:43.099427 containerd[1959]: time="2025-01-29T11:35:43.099403499Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.9\" returns image reference \"sha256:4f53be91109c4dd4658bb0141e8af556b94293ec9fad72b2b62a617edb48e5c4\"" Jan 29 11:35:43.109903 containerd[1959]: time="2025-01-29T11:35:43.109885034Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.9\"" Jan 29 11:35:44.292845 containerd[1959]: time="2025-01-29T11:35:44.292796685Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:35:44.293074 containerd[1959]: time="2025-01-29T11:35:44.293003646Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.9: active requests=0, bytes read=29605745" Jan 29 11:35:44.293461 containerd[1959]: time="2025-01-29T11:35:44.293426270Z" level=info msg="ImageCreate event name:\"sha256:d4203c1bb2593a7429c3df3c040da333190e5d7e01f377d0255b7b813ca09568\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:35:44.294931 containerd[1959]: time="2025-01-29T11:35:44.294895299Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:6350693c04956b13db2519e01ca12a0bbe58466e9f12ef8617f1429da6081f43\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:35:44.295591 containerd[1959]: time="2025-01-29T11:35:44.295555349Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.9\" with image id \"sha256:d4203c1bb2593a7429c3df3c040da333190e5d7e01f377d0255b7b813ca09568\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:6350693c04956b13db2519e01ca12a0bbe58466e9f12ef8617f1429da6081f43\", size \"31052327\" in 1.185649729s" Jan 29 11:35:44.295591 containerd[1959]: time="2025-01-29T11:35:44.295570660Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.9\" returns image reference \"sha256:d4203c1bb2593a7429c3df3c040da333190e5d7e01f377d0255b7b813ca09568\"" Jan 29 11:35:44.306892 
containerd[1959]: time="2025-01-29T11:35:44.306872369Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.9\"" Jan 29 11:35:45.118715 containerd[1959]: time="2025-01-29T11:35:45.118692988Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:35:45.118936 containerd[1959]: time="2025-01-29T11:35:45.118887742Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.9: active requests=0, bytes read=17783064" Jan 29 11:35:45.119302 containerd[1959]: time="2025-01-29T11:35:45.119266191Z" level=info msg="ImageCreate event name:\"sha256:41cce68b0c8c3c4862ff55ac17be57616cce36a04e719aee733e5c7c1a24b725\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:35:45.120802 containerd[1959]: time="2025-01-29T11:35:45.120768485Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:153efd6dc89e61a38ef273cf4c4cebd2bfee68082c2ee3d4fab5da94e4ae13d3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:35:45.121861 containerd[1959]: time="2025-01-29T11:35:45.121823092Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.9\" with image id \"sha256:41cce68b0c8c3c4862ff55ac17be57616cce36a04e719aee733e5c7c1a24b725\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:153efd6dc89e61a38ef273cf4c4cebd2bfee68082c2ee3d4fab5da94e4ae13d3\", size \"19229664\" in 814.92971ms" Jan 29 11:35:45.121861 containerd[1959]: time="2025-01-29T11:35:45.121841015Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.9\" returns image reference \"sha256:41cce68b0c8c3c4862ff55ac17be57616cce36a04e719aee733e5c7c1a24b725\"" Jan 29 11:35:45.133217 containerd[1959]: time="2025-01-29T11:35:45.133199150Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\"" Jan 29 11:35:45.874009 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1673439760.mount: Deactivated successfully. 
Jan 29 11:35:46.067012 containerd[1959]: time="2025-01-29T11:35:46.066946371Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:35:46.067343 containerd[1959]: time="2025-01-29T11:35:46.067299687Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.9: active requests=0, bytes read=29058337" Jan 29 11:35:46.067810 containerd[1959]: time="2025-01-29T11:35:46.067765005Z" level=info msg="ImageCreate event name:\"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:35:46.068657 containerd[1959]: time="2025-01-29T11:35:46.068612912Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:35:46.069081 containerd[1959]: time="2025-01-29T11:35:46.069036547Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.9\" with image id \"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\", repo tag \"registry.k8s.io/kube-proxy:v1.30.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\", size \"29057356\" in 935.817263ms" Jan 29 11:35:46.069081 containerd[1959]: time="2025-01-29T11:35:46.069051555Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\" returns image reference \"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\"" Jan 29 11:35:46.079693 containerd[1959]: time="2025-01-29T11:35:46.079628709Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jan 29 11:35:46.467888 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 29 11:35:46.478573 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:35:46.688493 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:35:46.691262 (kubelet)[2610]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 11:35:46.722904 kubelet[2610]: E0129 11:35:46.722801 2610 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 11:35:46.724943 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 11:35:46.725025 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 11:35:46.732669 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2528351833.mount: Deactivated successfully. 
Jan 29 11:35:47.192687 containerd[1959]: time="2025-01-29T11:35:47.192600102Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:35:47.192920 containerd[1959]: time="2025-01-29T11:35:47.192718621Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Jan 29 11:35:47.193232 containerd[1959]: time="2025-01-29T11:35:47.193217370Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:35:47.194782 containerd[1959]: time="2025-01-29T11:35:47.194768594Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:35:47.195909 containerd[1959]: time="2025-01-29T11:35:47.195893536Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.1162446s" Jan 29 11:35:47.195945 containerd[1959]: time="2025-01-29T11:35:47.195910007Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Jan 29 11:35:47.207481 containerd[1959]: time="2025-01-29T11:35:47.207463354Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jan 29 11:35:47.699777 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1907624262.mount: Deactivated successfully. 
Jan 29 11:35:47.701038 containerd[1959]: time="2025-01-29T11:35:47.701019056Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:35:47.701223 containerd[1959]: time="2025-01-29T11:35:47.701203005Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290" Jan 29 11:35:47.701752 containerd[1959]: time="2025-01-29T11:35:47.701739809Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:35:47.702959 containerd[1959]: time="2025-01-29T11:35:47.702925436Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:35:47.703518 containerd[1959]: time="2025-01-29T11:35:47.703471543Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 495.989766ms" Jan 29 11:35:47.703518 containerd[1959]: time="2025-01-29T11:35:47.703486335Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Jan 29 11:35:47.715555 containerd[1959]: time="2025-01-29T11:35:47.715494674Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Jan 29 11:35:48.194377 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1883506511.mount: Deactivated successfully. Jan 29 11:35:49.270363 containerd[1959]: time="2025-01-29T11:35:49.270305473Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:35:49.270595 containerd[1959]: time="2025-01-29T11:35:49.270497513Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238571" Jan 29 11:35:49.271048 containerd[1959]: time="2025-01-29T11:35:49.271007412Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:35:49.272672 containerd[1959]: time="2025-01-29T11:35:49.272632009Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:35:49.273853 containerd[1959]: time="2025-01-29T11:35:49.273813304Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 1.558284413s" Jan 29 11:35:49.273853 containerd[1959]: time="2025-01-29T11:35:49.273827442Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" Jan 29 11:35:50.836880 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
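Editor's note: each pull above logs both the bytes read and the wall-clock duration, so effective pull throughput can be read straight off the log (roughly the high teens to mid-thirties of MiB/s for the larger images). A small sketch of that arithmetic, using figures copied from the lines above; the numbers are illustrative only:

```go
// pull_throughput.go - back-of-the-envelope throughput for the image pulls
// logged above, using containerd's "bytes read" and the reported durations.
package main

import "fmt"

func main() {
	pulls := []struct {
		image   string
		bytes   float64 // "bytes read" from the log
		seconds float64 // duration from the "Pulled image ... in ..." line
	}{
		{"kube-apiserver:v1.30.9", 32677012, 1.557132972},
		{"kube-proxy:v1.30.9", 29058337, 0.935817263},
		{"coredns:v1.11.1", 18185761, 1.1162446},
		{"etcd:3.5.12-0", 57238571, 1.558284413},
	}
	for _, p := range pulls {
		mibps := p.bytes / p.seconds / (1 << 20)
		fmt.Printf("%-24s %6.1f MiB/s\n", p.image, mibps)
	}
}
```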
Jan 29 11:35:50.854657 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:35:50.863764 systemd[1]: Reloading requested from client PID 2904 ('systemctl') (unit session-9.scope)... Jan 29 11:35:50.863771 systemd[1]: Reloading... Jan 29 11:35:50.896369 zram_generator::config[2943]: No configuration found. Jan 29 11:35:50.968948 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 11:35:51.025668 systemd[1]: Reloading finished in 161 ms. Jan 29 11:35:51.055211 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 29 11:35:51.055274 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 29 11:35:51.055437 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:35:51.065705 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:35:51.255858 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:35:51.259509 (kubelet)[3017]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 29 11:35:51.281914 kubelet[3017]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 11:35:51.281914 kubelet[3017]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 29 11:35:51.281914 kubelet[3017]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 11:35:51.283003 kubelet[3017]: I0129 11:35:51.282956 3017 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 29 11:35:51.525216 kubelet[3017]: I0129 11:35:51.525167 3017 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jan 29 11:35:51.525216 kubelet[3017]: I0129 11:35:51.525180 3017 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 29 11:35:51.525341 kubelet[3017]: I0129 11:35:51.525332 3017 server.go:927] "Client rotation is on, will bootstrap in background" Jan 29 11:35:51.538466 kubelet[3017]: I0129 11:35:51.538413 3017 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 29 11:35:51.540702 kubelet[3017]: E0129 11:35:51.540665 3017 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://139.178.70.53:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 139.178.70.53:6443: connect: connection refused Jan 29 11:35:51.557083 kubelet[3017]: I0129 11:35:51.557074 3017 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 29 11:35:51.559354 kubelet[3017]: I0129 11:35:51.559303 3017 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 29 11:35:51.559491 kubelet[3017]: I0129 11:35:51.559335 3017 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4152.2.0-a-23f4c5510f","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 29 11:35:51.560023 kubelet[3017]: I0129 11:35:51.559988 3017 topology_manager.go:138] "Creating topology manager with none policy" Jan 29 11:35:51.560023 kubelet[3017]: I0129 11:35:51.559996 3017 container_manager_linux.go:301] "Creating device plugin manager" Jan 29 11:35:51.560079 kubelet[3017]: I0129 11:35:51.560069 3017 state_mem.go:36] "Initialized new in-memory state store" Jan 29 11:35:51.560863 kubelet[3017]: I0129 11:35:51.560827 3017 kubelet.go:400] "Attempting to sync node with API server" Jan 29 11:35:51.560863 kubelet[3017]: I0129 11:35:51.560835 3017 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 29 11:35:51.560863 kubelet[3017]: I0129 11:35:51.560846 3017 kubelet.go:312] "Adding apiserver pod source" Jan 29 11:35:51.560863 kubelet[3017]: I0129 11:35:51.560858 3017 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 29 11:35:51.561261 kubelet[3017]: W0129 11:35:51.561212 3017 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://139.178.70.53:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4152.2.0-a-23f4c5510f&limit=500&resourceVersion=0": dial tcp 139.178.70.53:6443: connect: connection refused Jan 29 11:35:51.561261 kubelet[3017]: E0129 11:35:51.561260 3017 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://139.178.70.53:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4152.2.0-a-23f4c5510f&limit=500&resourceVersion=0": dial tcp 139.178.70.53:6443: connect: connection refused Jan 29 11:35:51.561350 kubelet[3017]: W0129 11:35:51.561277 3017 reflector.go:547] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://139.178.70.53:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 139.178.70.53:6443: connect: connection refused Jan 29 11:35:51.561350 kubelet[3017]: E0129 11:35:51.561309 3017 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://139.178.70.53:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 139.178.70.53:6443: connect: connection refused Jan 29 11:35:51.564035 kubelet[3017]: I0129 11:35:51.564007 3017 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jan 29 11:35:51.565203 kubelet[3017]: I0129 11:35:51.565167 3017 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 29 11:35:51.565203 kubelet[3017]: W0129 11:35:51.565193 3017 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 29 11:35:51.565591 kubelet[3017]: I0129 11:35:51.565582 3017 server.go:1264] "Started kubelet" Jan 29 11:35:51.565707 kubelet[3017]: I0129 11:35:51.565671 3017 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 29 11:35:51.565744 kubelet[3017]: I0129 11:35:51.565705 3017 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 29 11:35:51.565839 kubelet[3017]: I0129 11:35:51.565828 3017 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 29 11:35:51.566260 kubelet[3017]: I0129 11:35:51.566253 3017 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 29 11:35:51.566298 kubelet[3017]: I0129 11:35:51.566284 3017 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 29 11:35:51.566335 kubelet[3017]: I0129 11:35:51.566306 3017 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 29 11:35:51.566335 kubelet[3017]: E0129 11:35:51.566303 3017 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4152.2.0-a-23f4c5510f\" not found" Jan 29 11:35:51.566389 kubelet[3017]: I0129 11:35:51.566341 3017 reconciler.go:26] "Reconciler: start to sync state" Jan 29 11:35:51.566486 kubelet[3017]: E0129 11:35:51.566456 3017 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.53:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4152.2.0-a-23f4c5510f?timeout=10s\": dial tcp 139.178.70.53:6443: connect: connection refused" interval="200ms" Jan 29 11:35:51.566519 kubelet[3017]: I0129 11:35:51.566490 3017 server.go:455] "Adding debug handlers to kubelet server" Jan 29 11:35:51.566545 kubelet[3017]: W0129 11:35:51.566511 3017 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://139.178.70.53:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.70.53:6443: connect: connection refused Jan 29 11:35:51.566545 kubelet[3017]: E0129 11:35:51.566539 3017 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://139.178.70.53:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.70.53:6443: connect: connection refused Jan 29 11:35:51.566636 kubelet[3017]: I0129 11:35:51.566627 3017 factory.go:221] Registration of the 
systemd container factory successfully Jan 29 11:35:51.566689 kubelet[3017]: I0129 11:35:51.566680 3017 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 29 11:35:51.571380 kubelet[3017]: E0129 11:35:51.571362 3017 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 29 11:35:51.571538 kubelet[3017]: I0129 11:35:51.571531 3017 factory.go:221] Registration of the containerd container factory successfully Jan 29 11:35:51.572081 kubelet[3017]: E0129 11:35:51.571986 3017 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://139.178.70.53:6443/api/v1/namespaces/default/events\": dial tcp 139.178.70.53:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4152.2.0-a-23f4c5510f.181f26b7b0371395 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4152.2.0-a-23f4c5510f,UID:ci-4152.2.0-a-23f4c5510f,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4152.2.0-a-23f4c5510f,},FirstTimestamp:2025-01-29 11:35:51.565570965 +0000 UTC m=+0.303881164,LastTimestamp:2025-01-29 11:35:51.565570965 +0000 UTC m=+0.303881164,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4152.2.0-a-23f4c5510f,}" Jan 29 11:35:51.579207 kubelet[3017]: I0129 11:35:51.579169 3017 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 29 11:35:51.579764 kubelet[3017]: I0129 11:35:51.579722 3017 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 29 11:35:51.579764 kubelet[3017]: I0129 11:35:51.579736 3017 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 29 11:35:51.579764 kubelet[3017]: I0129 11:35:51.579745 3017 kubelet.go:2337] "Starting kubelet main sync loop" Jan 29 11:35:51.579854 kubelet[3017]: E0129 11:35:51.579791 3017 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 29 11:35:51.580061 kubelet[3017]: W0129 11:35:51.580038 3017 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://139.178.70.53:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.70.53:6443: connect: connection refused Jan 29 11:35:51.580097 kubelet[3017]: E0129 11:35:51.580069 3017 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://139.178.70.53:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.70.53:6443: connect: connection refused Jan 29 11:35:51.680387 kubelet[3017]: E0129 11:35:51.680269 3017 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 29 11:35:51.737216 kubelet[3017]: I0129 11:35:51.737162 3017 kubelet_node_status.go:73] "Attempting to register node" node="ci-4152.2.0-a-23f4c5510f" Jan 29 11:35:51.737979 kubelet[3017]: E0129 11:35:51.737913 3017 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://139.178.70.53:6443/api/v1/nodes\": dial tcp 139.178.70.53:6443: connect: connection refused" node="ci-4152.2.0-a-23f4c5510f" Jan 29 11:35:51.738467 kubelet[3017]: I0129 11:35:51.738422 3017 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 29 11:35:51.738467 kubelet[3017]: I0129 11:35:51.738465 3017 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 29 11:35:51.738769 kubelet[3017]: I0129 11:35:51.738510 3017 state_mem.go:36] "Initialized new in-memory state store" Jan 29 11:35:51.740453 kubelet[3017]: I0129 11:35:51.740438 3017 policy_none.go:49] "None policy: Start" Jan 29 11:35:51.740956 kubelet[3017]: I0129 11:35:51.740941 3017 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 29 11:35:51.741013 kubelet[3017]: I0129 11:35:51.740963 3017 state_mem.go:35] "Initializing new in-memory state store" Jan 29 11:35:51.744466 kubelet[3017]: I0129 11:35:51.744457 3017 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 29 11:35:51.744590 kubelet[3017]: I0129 11:35:51.744571 3017 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 29 11:35:51.744651 kubelet[3017]: I0129 11:35:51.744644 3017 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 29 11:35:51.745132 kubelet[3017]: E0129 11:35:51.745122 3017 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4152.2.0-a-23f4c5510f\" not found" Jan 29 11:35:51.767496 kubelet[3017]: E0129 11:35:51.767427 3017 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.53:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4152.2.0-a-23f4c5510f?timeout=10s\": dial tcp 139.178.70.53:6443: connect: connection refused" interval="400ms" Jan 29 11:35:51.881069 
kubelet[3017]: I0129 11:35:51.880780 3017 topology_manager.go:215] "Topology Admit Handler" podUID="c2e1c37892669461a8a0d350f6d23597" podNamespace="kube-system" podName="kube-controller-manager-ci-4152.2.0-a-23f4c5510f" Jan 29 11:35:51.885253 kubelet[3017]: I0129 11:35:51.885157 3017 topology_manager.go:215] "Topology Admit Handler" podUID="e3c6ddfc5fc8b1c07bb5732f4c205853" podNamespace="kube-system" podName="kube-scheduler-ci-4152.2.0-a-23f4c5510f" Jan 29 11:35:51.889219 kubelet[3017]: I0129 11:35:51.889121 3017 topology_manager.go:215] "Topology Admit Handler" podUID="2a4f9fd6c28d983910a100896edfae10" podNamespace="kube-system" podName="kube-apiserver-ci-4152.2.0-a-23f4c5510f" Jan 29 11:35:51.942588 kubelet[3017]: I0129 11:35:51.942485 3017 kubelet_node_status.go:73] "Attempting to register node" node="ci-4152.2.0-a-23f4c5510f" Jan 29 11:35:51.943328 kubelet[3017]: E0129 11:35:51.943194 3017 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://139.178.70.53:6443/api/v1/nodes\": dial tcp 139.178.70.53:6443: connect: connection refused" node="ci-4152.2.0-a-23f4c5510f" Jan 29 11:35:52.068943 kubelet[3017]: I0129 11:35:52.068813 3017 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2a4f9fd6c28d983910a100896edfae10-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4152.2.0-a-23f4c5510f\" (UID: \"2a4f9fd6c28d983910a100896edfae10\") " pod="kube-system/kube-apiserver-ci-4152.2.0-a-23f4c5510f" Jan 29 11:35:52.068943 kubelet[3017]: I0129 11:35:52.068924 3017 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c2e1c37892669461a8a0d350f6d23597-k8s-certs\") pod \"kube-controller-manager-ci-4152.2.0-a-23f4c5510f\" (UID: \"c2e1c37892669461a8a0d350f6d23597\") " pod="kube-system/kube-controller-manager-ci-4152.2.0-a-23f4c5510f" Jan 29 11:35:52.069331 kubelet[3017]: I0129 11:35:52.069002 3017 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e3c6ddfc5fc8b1c07bb5732f4c205853-kubeconfig\") pod \"kube-scheduler-ci-4152.2.0-a-23f4c5510f\" (UID: \"e3c6ddfc5fc8b1c07bb5732f4c205853\") " pod="kube-system/kube-scheduler-ci-4152.2.0-a-23f4c5510f" Jan 29 11:35:52.069331 kubelet[3017]: I0129 11:35:52.069059 3017 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2a4f9fd6c28d983910a100896edfae10-ca-certs\") pod \"kube-apiserver-ci-4152.2.0-a-23f4c5510f\" (UID: \"2a4f9fd6c28d983910a100896edfae10\") " pod="kube-system/kube-apiserver-ci-4152.2.0-a-23f4c5510f" Jan 29 11:35:52.069331 kubelet[3017]: I0129 11:35:52.069110 3017 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2a4f9fd6c28d983910a100896edfae10-k8s-certs\") pod \"kube-apiserver-ci-4152.2.0-a-23f4c5510f\" (UID: \"2a4f9fd6c28d983910a100896edfae10\") " pod="kube-system/kube-apiserver-ci-4152.2.0-a-23f4c5510f" Jan 29 11:35:52.069331 kubelet[3017]: I0129 11:35:52.069159 3017 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c2e1c37892669461a8a0d350f6d23597-ca-certs\") pod \"kube-controller-manager-ci-4152.2.0-a-23f4c5510f\" (UID: 
\"c2e1c37892669461a8a0d350f6d23597\") " pod="kube-system/kube-controller-manager-ci-4152.2.0-a-23f4c5510f" Jan 29 11:35:52.069331 kubelet[3017]: I0129 11:35:52.069211 3017 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c2e1c37892669461a8a0d350f6d23597-flexvolume-dir\") pod \"kube-controller-manager-ci-4152.2.0-a-23f4c5510f\" (UID: \"c2e1c37892669461a8a0d350f6d23597\") " pod="kube-system/kube-controller-manager-ci-4152.2.0-a-23f4c5510f" Jan 29 11:35:52.069833 kubelet[3017]: I0129 11:35:52.069265 3017 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c2e1c37892669461a8a0d350f6d23597-kubeconfig\") pod \"kube-controller-manager-ci-4152.2.0-a-23f4c5510f\" (UID: \"c2e1c37892669461a8a0d350f6d23597\") " pod="kube-system/kube-controller-manager-ci-4152.2.0-a-23f4c5510f" Jan 29 11:35:52.069833 kubelet[3017]: I0129 11:35:52.069346 3017 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c2e1c37892669461a8a0d350f6d23597-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4152.2.0-a-23f4c5510f\" (UID: \"c2e1c37892669461a8a0d350f6d23597\") " pod="kube-system/kube-controller-manager-ci-4152.2.0-a-23f4c5510f" Jan 29 11:35:52.169459 kubelet[3017]: E0129 11:35:52.169168 3017 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.53:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4152.2.0-a-23f4c5510f?timeout=10s\": dial tcp 139.178.70.53:6443: connect: connection refused" interval="800ms" Jan 29 11:35:52.195119 containerd[1959]: time="2025-01-29T11:35:52.194983504Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4152.2.0-a-23f4c5510f,Uid:c2e1c37892669461a8a0d350f6d23597,Namespace:kube-system,Attempt:0,}" Jan 29 11:35:52.196099 containerd[1959]: time="2025-01-29T11:35:52.195407170Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4152.2.0-a-23f4c5510f,Uid:e3c6ddfc5fc8b1c07bb5732f4c205853,Namespace:kube-system,Attempt:0,}" Jan 29 11:35:52.196302 containerd[1959]: time="2025-01-29T11:35:52.196248967Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4152.2.0-a-23f4c5510f,Uid:2a4f9fd6c28d983910a100896edfae10,Namespace:kube-system,Attempt:0,}" Jan 29 11:35:52.345098 kubelet[3017]: I0129 11:35:52.345054 3017 kubelet_node_status.go:73] "Attempting to register node" node="ci-4152.2.0-a-23f4c5510f" Jan 29 11:35:52.345302 kubelet[3017]: E0129 11:35:52.345221 3017 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://139.178.70.53:6443/api/v1/nodes\": dial tcp 139.178.70.53:6443: connect: connection refused" node="ci-4152.2.0-a-23f4c5510f" Jan 29 11:35:52.673420 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1502293276.mount: Deactivated successfully. 
Jan 29 11:35:52.675110 containerd[1959]: time="2025-01-29T11:35:52.675065143Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 11:35:52.675230 containerd[1959]: time="2025-01-29T11:35:52.675194747Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jan 29 11:35:52.675949 containerd[1959]: time="2025-01-29T11:35:52.675908226Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 11:35:52.677211 containerd[1959]: time="2025-01-29T11:35:52.677177171Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 11:35:52.677743 containerd[1959]: time="2025-01-29T11:35:52.677704773Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 11:35:52.678070 containerd[1959]: time="2025-01-29T11:35:52.678020320Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 29 11:35:52.678214 containerd[1959]: time="2025-01-29T11:35:52.678187116Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 11:35:52.678290 containerd[1959]: time="2025-01-29T11:35:52.678271887Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 29 11:35:52.678742 containerd[1959]: time="2025-01-29T11:35:52.678701168Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 482.356343ms" Jan 29 11:35:52.681036 containerd[1959]: time="2025-01-29T11:35:52.680999572Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 485.393402ms" Jan 29 11:35:52.681629 containerd[1959]: time="2025-01-29T11:35:52.681576234Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 486.37669ms" Jan 29 11:35:52.767099 containerd[1959]: time="2025-01-29T11:35:52.767044087Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:35:52.767099 containerd[1959]: time="2025-01-29T11:35:52.767074320Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:35:52.767099 containerd[1959]: time="2025-01-29T11:35:52.767084952Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:35:52.767403 containerd[1959]: time="2025-01-29T11:35:52.767386524Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:35:52.767909 containerd[1959]: time="2025-01-29T11:35:52.767689147Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:35:52.767941 containerd[1959]: time="2025-01-29T11:35:52.767910701Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:35:52.767941 containerd[1959]: time="2025-01-29T11:35:52.767919435Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:35:52.767985 containerd[1959]: time="2025-01-29T11:35:52.767963045Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:35:52.769228 containerd[1959]: time="2025-01-29T11:35:52.769191111Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:35:52.769228 containerd[1959]: time="2025-01-29T11:35:52.769221196Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:35:52.769309 containerd[1959]: time="2025-01-29T11:35:52.769232268Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:35:52.769309 containerd[1959]: time="2025-01-29T11:35:52.769280291Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:35:52.809046 kubelet[3017]: W0129 11:35:52.808991 3017 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://139.178.70.53:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.70.53:6443: connect: connection refused Jan 29 11:35:52.809046 kubelet[3017]: E0129 11:35:52.809049 3017 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://139.178.70.53:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.70.53:6443: connect: connection refused Jan 29 11:35:52.809768 containerd[1959]: time="2025-01-29T11:35:52.809747852Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4152.2.0-a-23f4c5510f,Uid:2a4f9fd6c28d983910a100896edfae10,Namespace:kube-system,Attempt:0,} returns sandbox id \"210dd20b27be95489386ddce7886c820ec10b0c647504eebcfe6ea391cd2c0a2\"" Jan 29 11:35:52.809809 containerd[1959]: time="2025-01-29T11:35:52.809769078Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4152.2.0-a-23f4c5510f,Uid:e3c6ddfc5fc8b1c07bb5732f4c205853,Namespace:kube-system,Attempt:0,} returns sandbox id \"8303597e831903397030b2c31e05c24586254925b37da69df1a986669eee2e08\"" Jan 29 11:35:52.809929 containerd[1959]: time="2025-01-29T11:35:52.809917153Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4152.2.0-a-23f4c5510f,Uid:c2e1c37892669461a8a0d350f6d23597,Namespace:kube-system,Attempt:0,} returns sandbox id \"2bced2addb4f00fd6cbd4567ad6a20592f809305a828bccabbeee29617b4bb53\"" Jan 29 11:35:52.811558 containerd[1959]: time="2025-01-29T11:35:52.811545435Z" level=info msg="CreateContainer within sandbox \"8303597e831903397030b2c31e05c24586254925b37da69df1a986669eee2e08\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 29 11:35:52.811611 containerd[1959]: time="2025-01-29T11:35:52.811597285Z" level=info msg="CreateContainer within sandbox \"210dd20b27be95489386ddce7886c820ec10b0c647504eebcfe6ea391cd2c0a2\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 29 11:35:52.811643 containerd[1959]: time="2025-01-29T11:35:52.811606608Z" level=info msg="CreateContainer within sandbox \"2bced2addb4f00fd6cbd4567ad6a20592f809305a828bccabbeee29617b4bb53\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 29 11:35:52.817934 containerd[1959]: time="2025-01-29T11:35:52.817886841Z" level=info msg="CreateContainer within sandbox \"8303597e831903397030b2c31e05c24586254925b37da69df1a986669eee2e08\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"70b014510063ae400af18bea0a150bcb547999bc9a9d7c37b2c88c7acb857135\"" Jan 29 11:35:52.818136 containerd[1959]: time="2025-01-29T11:35:52.818121970Z" level=info msg="StartContainer for \"70b014510063ae400af18bea0a150bcb547999bc9a9d7c37b2c88c7acb857135\"" Jan 29 11:35:52.818893 containerd[1959]: time="2025-01-29T11:35:52.818844891Z" level=info msg="CreateContainer within sandbox \"210dd20b27be95489386ddce7886c820ec10b0c647504eebcfe6ea391cd2c0a2\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"fed28ca23fa8166e67cd4cb7cfd8179af079e4d5ad1df691386b5345050d9b9f\"" Jan 29 11:35:52.818987 containerd[1959]: time="2025-01-29T11:35:52.818959645Z" level=info msg="StartContainer for 
\"fed28ca23fa8166e67cd4cb7cfd8179af079e4d5ad1df691386b5345050d9b9f\"" Jan 29 11:35:52.820711 containerd[1959]: time="2025-01-29T11:35:52.820668104Z" level=info msg="CreateContainer within sandbox \"2bced2addb4f00fd6cbd4567ad6a20592f809305a828bccabbeee29617b4bb53\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"09c2975c7192febf6c54e230063e05a6f3181f90aff1a8a332ea65a5b88c9b3f\"" Jan 29 11:35:52.820867 containerd[1959]: time="2025-01-29T11:35:52.820822489Z" level=info msg="StartContainer for \"09c2975c7192febf6c54e230063e05a6f3181f90aff1a8a332ea65a5b88c9b3f\"" Jan 29 11:35:52.868520 containerd[1959]: time="2025-01-29T11:35:52.868495704Z" level=info msg="StartContainer for \"fed28ca23fa8166e67cd4cb7cfd8179af079e4d5ad1df691386b5345050d9b9f\" returns successfully" Jan 29 11:35:52.868520 containerd[1959]: time="2025-01-29T11:35:52.868519628Z" level=info msg="StartContainer for \"70b014510063ae400af18bea0a150bcb547999bc9a9d7c37b2c88c7acb857135\" returns successfully" Jan 29 11:35:52.868630 containerd[1959]: time="2025-01-29T11:35:52.868545070Z" level=info msg="StartContainer for \"09c2975c7192febf6c54e230063e05a6f3181f90aff1a8a332ea65a5b88c9b3f\" returns successfully" Jan 29 11:35:52.869579 kubelet[3017]: E0129 11:35:52.869511 3017 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://139.178.70.53:6443/api/v1/namespaces/default/events\": dial tcp 139.178.70.53:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4152.2.0-a-23f4c5510f.181f26b7b0371395 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4152.2.0-a-23f4c5510f,UID:ci-4152.2.0-a-23f4c5510f,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4152.2.0-a-23f4c5510f,},FirstTimestamp:2025-01-29 11:35:51.565570965 +0000 UTC m=+0.303881164,LastTimestamp:2025-01-29 11:35:51.565570965 +0000 UTC m=+0.303881164,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4152.2.0-a-23f4c5510f,}" Jan 29 11:35:53.147408 kubelet[3017]: I0129 11:35:53.147358 3017 kubelet_node_status.go:73] "Attempting to register node" node="ci-4152.2.0-a-23f4c5510f" Jan 29 11:35:53.494102 kubelet[3017]: E0129 11:35:53.494037 3017 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4152.2.0-a-23f4c5510f\" not found" node="ci-4152.2.0-a-23f4c5510f" Jan 29 11:35:53.561304 kubelet[3017]: I0129 11:35:53.561273 3017 apiserver.go:52] "Watching apiserver" Jan 29 11:35:53.566426 kubelet[3017]: I0129 11:35:53.566417 3017 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 29 11:35:53.591919 kubelet[3017]: I0129 11:35:53.591897 3017 kubelet_node_status.go:76] "Successfully registered node" node="ci-4152.2.0-a-23f4c5510f" Jan 29 11:35:54.594140 kubelet[3017]: W0129 11:35:54.594115 3017 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 29 11:35:55.696534 systemd[1]: Reloading requested from client PID 3331 ('systemctl') (unit session-9.scope)... Jan 29 11:35:55.696541 systemd[1]: Reloading... Jan 29 11:35:55.732380 zram_generator::config[3370]: No configuration found. 
Jan 29 11:35:55.812793 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 11:35:55.873484 systemd[1]: Reloading finished in 176 ms. Jan 29 11:35:55.898540 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:35:55.904188 systemd[1]: kubelet.service: Deactivated successfully. Jan 29 11:35:55.904370 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:35:55.927762 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:35:56.142662 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:35:56.148995 (kubelet)[3444]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 29 11:35:56.172617 kubelet[3444]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 11:35:56.172617 kubelet[3444]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 29 11:35:56.172617 kubelet[3444]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 11:35:56.172871 kubelet[3444]: I0129 11:35:56.172645 3444 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 29 11:35:56.175015 kubelet[3444]: I0129 11:35:56.174976 3444 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jan 29 11:35:56.175015 kubelet[3444]: I0129 11:35:56.174986 3444 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 29 11:35:56.175084 kubelet[3444]: I0129 11:35:56.175079 3444 server.go:927] "Client rotation is on, will bootstrap in background" Jan 29 11:35:56.175829 kubelet[3444]: I0129 11:35:56.175791 3444 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 29 11:35:56.176994 kubelet[3444]: I0129 11:35:56.176982 3444 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 29 11:35:56.185101 kubelet[3444]: I0129 11:35:56.185063 3444 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 29 11:35:56.185290 kubelet[3444]: I0129 11:35:56.185277 3444 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 29 11:35:56.185412 kubelet[3444]: I0129 11:35:56.185291 3444 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4152.2.0-a-23f4c5510f","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 29 11:35:56.185412 kubelet[3444]: I0129 11:35:56.185394 3444 topology_manager.go:138] "Creating topology manager with none policy" Jan 29 11:35:56.185412 kubelet[3444]: I0129 11:35:56.185399 3444 container_manager_linux.go:301] "Creating device plugin manager" Jan 29 11:35:56.185507 kubelet[3444]: I0129 11:35:56.185422 3444 state_mem.go:36] "Initialized new in-memory state store" Jan 29 11:35:56.185507 kubelet[3444]: I0129 11:35:56.185468 3444 kubelet.go:400] "Attempting to sync node with API server" Jan 29 11:35:56.185507 kubelet[3444]: I0129 11:35:56.185475 3444 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 29 11:35:56.185507 kubelet[3444]: I0129 11:35:56.185486 3444 kubelet.go:312] "Adding apiserver pod source" Jan 29 11:35:56.185507 kubelet[3444]: I0129 11:35:56.185495 3444 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 29 11:35:56.185745 kubelet[3444]: I0129 11:35:56.185735 3444 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jan 29 11:35:56.185833 kubelet[3444]: I0129 11:35:56.185826 3444 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 29 11:35:56.186053 kubelet[3444]: I0129 11:35:56.186045 3444 server.go:1264] "Started kubelet" Jan 29 11:35:56.186115 kubelet[3444]: I0129 11:35:56.186101 3444 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 29 11:35:56.186153 kubelet[3444]: I0129 11:35:56.186125 3444 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 29 11:35:56.186568 kubelet[3444]: I0129 
11:35:56.186555 3444 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 29 11:35:56.187931 kubelet[3444]: I0129 11:35:56.187918 3444 server.go:455] "Adding debug handlers to kubelet server" Jan 29 11:35:56.187997 kubelet[3444]: E0129 11:35:56.187938 3444 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 29 11:35:56.188041 kubelet[3444]: I0129 11:35:56.187998 3444 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 29 11:35:56.188041 kubelet[3444]: I0129 11:35:56.188033 3444 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 29 11:35:56.188092 kubelet[3444]: E0129 11:35:56.188035 3444 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4152.2.0-a-23f4c5510f\" not found" Jan 29 11:35:56.188092 kubelet[3444]: I0129 11:35:56.188064 3444 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 29 11:35:56.188163 kubelet[3444]: I0129 11:35:56.188156 3444 reconciler.go:26] "Reconciler: start to sync state" Jan 29 11:35:56.188345 kubelet[3444]: I0129 11:35:56.188337 3444 factory.go:221] Registration of the systemd container factory successfully Jan 29 11:35:56.188389 kubelet[3444]: I0129 11:35:56.188376 3444 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 29 11:35:56.188953 kubelet[3444]: I0129 11:35:56.188945 3444 factory.go:221] Registration of the containerd container factory successfully Jan 29 11:35:56.192697 kubelet[3444]: I0129 11:35:56.192677 3444 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 29 11:35:56.193216 kubelet[3444]: I0129 11:35:56.193206 3444 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 29 11:35:56.193251 kubelet[3444]: I0129 11:35:56.193225 3444 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 29 11:35:56.193251 kubelet[3444]: I0129 11:35:56.193239 3444 kubelet.go:2337] "Starting kubelet main sync loop" Jan 29 11:35:56.193311 kubelet[3444]: E0129 11:35:56.193269 3444 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 29 11:35:56.208607 kubelet[3444]: I0129 11:35:56.208557 3444 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 29 11:35:56.208607 kubelet[3444]: I0129 11:35:56.208567 3444 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 29 11:35:56.208607 kubelet[3444]: I0129 11:35:56.208579 3444 state_mem.go:36] "Initialized new in-memory state store" Jan 29 11:35:56.208714 kubelet[3444]: I0129 11:35:56.208665 3444 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 29 11:35:56.208714 kubelet[3444]: I0129 11:35:56.208672 3444 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 29 11:35:56.208714 kubelet[3444]: I0129 11:35:56.208683 3444 policy_none.go:49] "None policy: Start" Jan 29 11:35:56.208931 kubelet[3444]: I0129 11:35:56.208888 3444 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 29 11:35:56.208931 kubelet[3444]: I0129 11:35:56.208898 3444 state_mem.go:35] "Initializing new in-memory state store" Jan 29 11:35:56.208991 kubelet[3444]: I0129 11:35:56.208983 3444 state_mem.go:75] "Updated machine memory state" Jan 29 11:35:56.209558 kubelet[3444]: I0129 11:35:56.209522 3444 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 29 11:35:56.209653 kubelet[3444]: I0129 11:35:56.209608 3444 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 29 11:35:56.209682 kubelet[3444]: I0129 11:35:56.209660 3444 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 29 11:35:56.294232 kubelet[3444]: I0129 11:35:56.294067 3444 topology_manager.go:215] "Topology Admit Handler" podUID="2a4f9fd6c28d983910a100896edfae10" podNamespace="kube-system" podName="kube-apiserver-ci-4152.2.0-a-23f4c5510f" Jan 29 11:35:56.294461 kubelet[3444]: I0129 11:35:56.294324 3444 topology_manager.go:215] "Topology Admit Handler" podUID="c2e1c37892669461a8a0d350f6d23597" podNamespace="kube-system" podName="kube-controller-manager-ci-4152.2.0-a-23f4c5510f" Jan 29 11:35:56.294566 kubelet[3444]: I0129 11:35:56.294489 3444 topology_manager.go:215] "Topology Admit Handler" podUID="e3c6ddfc5fc8b1c07bb5732f4c205853" podNamespace="kube-system" podName="kube-scheduler-ci-4152.2.0-a-23f4c5510f" Jan 29 11:35:56.296220 kubelet[3444]: I0129 11:35:56.296157 3444 kubelet_node_status.go:73] "Attempting to register node" node="ci-4152.2.0-a-23f4c5510f" Jan 29 11:35:56.302751 kubelet[3444]: W0129 11:35:56.302685 3444 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 29 11:35:56.303031 kubelet[3444]: W0129 11:35:56.302846 3444 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 29 11:35:56.303364 kubelet[3444]: W0129 11:35:56.303322 3444 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label 
is recommended: [must not contain dots] Jan 29 11:35:56.303588 kubelet[3444]: E0129 11:35:56.303474 3444 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4152.2.0-a-23f4c5510f\" already exists" pod="kube-system/kube-apiserver-ci-4152.2.0-a-23f4c5510f" Jan 29 11:35:56.306622 kubelet[3444]: I0129 11:35:56.306566 3444 kubelet_node_status.go:112] "Node was previously registered" node="ci-4152.2.0-a-23f4c5510f" Jan 29 11:35:56.306889 kubelet[3444]: I0129 11:35:56.306746 3444 kubelet_node_status.go:76] "Successfully registered node" node="ci-4152.2.0-a-23f4c5510f" Jan 29 11:35:56.490490 kubelet[3444]: I0129 11:35:56.490227 3444 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2a4f9fd6c28d983910a100896edfae10-ca-certs\") pod \"kube-apiserver-ci-4152.2.0-a-23f4c5510f\" (UID: \"2a4f9fd6c28d983910a100896edfae10\") " pod="kube-system/kube-apiserver-ci-4152.2.0-a-23f4c5510f" Jan 29 11:35:56.490490 kubelet[3444]: I0129 11:35:56.490367 3444 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2a4f9fd6c28d983910a100896edfae10-k8s-certs\") pod \"kube-apiserver-ci-4152.2.0-a-23f4c5510f\" (UID: \"2a4f9fd6c28d983910a100896edfae10\") " pod="kube-system/kube-apiserver-ci-4152.2.0-a-23f4c5510f" Jan 29 11:35:56.490818 kubelet[3444]: I0129 11:35:56.490486 3444 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c2e1c37892669461a8a0d350f6d23597-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4152.2.0-a-23f4c5510f\" (UID: \"c2e1c37892669461a8a0d350f6d23597\") " pod="kube-system/kube-controller-manager-ci-4152.2.0-a-23f4c5510f" Jan 29 11:35:56.490818 kubelet[3444]: I0129 11:35:56.490564 3444 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e3c6ddfc5fc8b1c07bb5732f4c205853-kubeconfig\") pod \"kube-scheduler-ci-4152.2.0-a-23f4c5510f\" (UID: \"e3c6ddfc5fc8b1c07bb5732f4c205853\") " pod="kube-system/kube-scheduler-ci-4152.2.0-a-23f4c5510f" Jan 29 11:35:56.490818 kubelet[3444]: I0129 11:35:56.490633 3444 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2a4f9fd6c28d983910a100896edfae10-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4152.2.0-a-23f4c5510f\" (UID: \"2a4f9fd6c28d983910a100896edfae10\") " pod="kube-system/kube-apiserver-ci-4152.2.0-a-23f4c5510f" Jan 29 11:35:56.490818 kubelet[3444]: I0129 11:35:56.490729 3444 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c2e1c37892669461a8a0d350f6d23597-ca-certs\") pod \"kube-controller-manager-ci-4152.2.0-a-23f4c5510f\" (UID: \"c2e1c37892669461a8a0d350f6d23597\") " pod="kube-system/kube-controller-manager-ci-4152.2.0-a-23f4c5510f" Jan 29 11:35:56.490818 kubelet[3444]: I0129 11:35:56.490799 3444 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c2e1c37892669461a8a0d350f6d23597-flexvolume-dir\") pod \"kube-controller-manager-ci-4152.2.0-a-23f4c5510f\" (UID: \"c2e1c37892669461a8a0d350f6d23597\") " 
pod="kube-system/kube-controller-manager-ci-4152.2.0-a-23f4c5510f" Jan 29 11:35:56.491289 kubelet[3444]: I0129 11:35:56.490885 3444 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c2e1c37892669461a8a0d350f6d23597-k8s-certs\") pod \"kube-controller-manager-ci-4152.2.0-a-23f4c5510f\" (UID: \"c2e1c37892669461a8a0d350f6d23597\") " pod="kube-system/kube-controller-manager-ci-4152.2.0-a-23f4c5510f" Jan 29 11:35:56.491289 kubelet[3444]: I0129 11:35:56.490951 3444 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c2e1c37892669461a8a0d350f6d23597-kubeconfig\") pod \"kube-controller-manager-ci-4152.2.0-a-23f4c5510f\" (UID: \"c2e1c37892669461a8a0d350f6d23597\") " pod="kube-system/kube-controller-manager-ci-4152.2.0-a-23f4c5510f" Jan 29 11:35:57.186167 kubelet[3444]: I0129 11:35:57.186091 3444 apiserver.go:52] "Watching apiserver" Jan 29 11:35:57.206057 kubelet[3444]: W0129 11:35:57.205990 3444 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 29 11:35:57.206057 kubelet[3444]: W0129 11:35:57.206034 3444 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 29 11:35:57.206441 kubelet[3444]: E0129 11:35:57.206189 3444 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-4152.2.0-a-23f4c5510f\" already exists" pod="kube-system/kube-controller-manager-ci-4152.2.0-a-23f4c5510f" Jan 29 11:35:57.206441 kubelet[3444]: E0129 11:35:57.206190 3444 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4152.2.0-a-23f4c5510f\" already exists" pod="kube-system/kube-apiserver-ci-4152.2.0-a-23f4c5510f" Jan 29 11:35:57.244917 kubelet[3444]: I0129 11:35:57.244837 3444 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4152.2.0-a-23f4c5510f" podStartSLOduration=3.24481814 podStartE2EDuration="3.24481814s" podCreationTimestamp="2025-01-29 11:35:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:35:57.23899859 +0000 UTC m=+1.087866083" watchObservedRunningTime="2025-01-29 11:35:57.24481814 +0000 UTC m=+1.093685624" Jan 29 11:35:57.251143 kubelet[3444]: I0129 11:35:57.251067 3444 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4152.2.0-a-23f4c5510f" podStartSLOduration=1.251050202 podStartE2EDuration="1.251050202s" podCreationTimestamp="2025-01-29 11:35:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:35:57.250959988 +0000 UTC m=+1.099827465" watchObservedRunningTime="2025-01-29 11:35:57.251050202 +0000 UTC m=+1.099917687" Jan 29 11:35:57.251239 kubelet[3444]: I0129 11:35:57.251158 3444 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4152.2.0-a-23f4c5510f" podStartSLOduration=1.251149985 podStartE2EDuration="1.251149985s" podCreationTimestamp="2025-01-29 11:35:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 
UTC" observedRunningTime="2025-01-29 11:35:57.244968835 +0000 UTC m=+1.093836318" watchObservedRunningTime="2025-01-29 11:35:57.251149985 +0000 UTC m=+1.100017460" Jan 29 11:35:57.288607 kubelet[3444]: I0129 11:35:57.288579 3444 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 29 11:36:00.331635 sudo[2226]: pam_unix(sudo:session): session closed for user root Jan 29 11:36:00.332278 sshd[2225]: Connection closed by 139.178.89.65 port 52160 Jan 29 11:36:00.332502 sshd-session[2219]: pam_unix(sshd:session): session closed for user core Jan 29 11:36:00.334507 systemd[1]: sshd@6-139.178.70.53:22-139.178.89.65:52160.service: Deactivated successfully. Jan 29 11:36:00.335433 systemd[1]: session-9.scope: Deactivated successfully. Jan 29 11:36:00.335445 systemd-logind[1938]: Session 9 logged out. Waiting for processes to exit. Jan 29 11:36:00.336074 systemd-logind[1938]: Removed session 9. Jan 29 11:36:11.389275 kubelet[3444]: I0129 11:36:11.389236 3444 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 29 11:36:11.389835 containerd[1959]: time="2025-01-29T11:36:11.389700594Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 29 11:36:11.390179 kubelet[3444]: I0129 11:36:11.389989 3444 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 29 11:36:12.400389 kubelet[3444]: I0129 11:36:12.400316 3444 topology_manager.go:215] "Topology Admit Handler" podUID="6c5157a0-a078-434c-aaa3-6874359e8b1d" podNamespace="kube-system" podName="kube-proxy-2jq7t" Jan 29 11:36:12.413387 kubelet[3444]: I0129 11:36:12.413324 3444 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/6c5157a0-a078-434c-aaa3-6874359e8b1d-kube-proxy\") pod \"kube-proxy-2jq7t\" (UID: \"6c5157a0-a078-434c-aaa3-6874359e8b1d\") " pod="kube-system/kube-proxy-2jq7t" Jan 29 11:36:12.413387 kubelet[3444]: I0129 11:36:12.413364 3444 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6c5157a0-a078-434c-aaa3-6874359e8b1d-xtables-lock\") pod \"kube-proxy-2jq7t\" (UID: \"6c5157a0-a078-434c-aaa3-6874359e8b1d\") " pod="kube-system/kube-proxy-2jq7t" Jan 29 11:36:12.413387 kubelet[3444]: I0129 11:36:12.413386 3444 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6c5157a0-a078-434c-aaa3-6874359e8b1d-lib-modules\") pod \"kube-proxy-2jq7t\" (UID: \"6c5157a0-a078-434c-aaa3-6874359e8b1d\") " pod="kube-system/kube-proxy-2jq7t" Jan 29 11:36:12.413591 kubelet[3444]: I0129 11:36:12.413416 3444 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vc4b6\" (UniqueName: \"kubernetes.io/projected/6c5157a0-a078-434c-aaa3-6874359e8b1d-kube-api-access-vc4b6\") pod \"kube-proxy-2jq7t\" (UID: \"6c5157a0-a078-434c-aaa3-6874359e8b1d\") " pod="kube-system/kube-proxy-2jq7t" Jan 29 11:36:12.559897 kubelet[3444]: I0129 11:36:12.559818 3444 topology_manager.go:215] "Topology Admit Handler" podUID="050ed224-79d9-4aad-bc54-458e2e0803a9" podNamespace="tigera-operator" podName="tigera-operator-7bc55997bb-6lgnt" Jan 29 11:36:12.614995 kubelet[3444]: I0129 11:36:12.614861 3444 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/050ed224-79d9-4aad-bc54-458e2e0803a9-var-lib-calico\") pod \"tigera-operator-7bc55997bb-6lgnt\" (UID: \"050ed224-79d9-4aad-bc54-458e2e0803a9\") " pod="tigera-operator/tigera-operator-7bc55997bb-6lgnt" Jan 29 11:36:12.614995 kubelet[3444]: I0129 11:36:12.614989 3444 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w6spz\" (UniqueName: \"kubernetes.io/projected/050ed224-79d9-4aad-bc54-458e2e0803a9-kube-api-access-w6spz\") pod \"tigera-operator-7bc55997bb-6lgnt\" (UID: \"050ed224-79d9-4aad-bc54-458e2e0803a9\") " pod="tigera-operator/tigera-operator-7bc55997bb-6lgnt" Jan 29 11:36:12.706501 containerd[1959]: time="2025-01-29T11:36:12.706259555Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-2jq7t,Uid:6c5157a0-a078-434c-aaa3-6874359e8b1d,Namespace:kube-system,Attempt:0,}" Jan 29 11:36:12.717251 containerd[1959]: time="2025-01-29T11:36:12.717202189Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:36:12.717581 containerd[1959]: time="2025-01-29T11:36:12.717479475Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:36:12.717581 containerd[1959]: time="2025-01-29T11:36:12.717494016Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:36:12.717633 containerd[1959]: time="2025-01-29T11:36:12.717576928Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:36:12.747620 containerd[1959]: time="2025-01-29T11:36:12.747593272Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-2jq7t,Uid:6c5157a0-a078-434c-aaa3-6874359e8b1d,Namespace:kube-system,Attempt:0,} returns sandbox id \"c4b1e6521fb4ad43cfe37429e207c8b03033adf3a0270d28406a2ac2f0a19523\"" Jan 29 11:36:12.749431 containerd[1959]: time="2025-01-29T11:36:12.749413007Z" level=info msg="CreateContainer within sandbox \"c4b1e6521fb4ad43cfe37429e207c8b03033adf3a0270d28406a2ac2f0a19523\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 29 11:36:12.755365 containerd[1959]: time="2025-01-29T11:36:12.755290316Z" level=info msg="CreateContainer within sandbox \"c4b1e6521fb4ad43cfe37429e207c8b03033adf3a0270d28406a2ac2f0a19523\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"770fcc9b3177f891ec39a87d851b14db68230dbe0417648034c4e7851b66e0cb\"" Jan 29 11:36:12.755591 containerd[1959]: time="2025-01-29T11:36:12.755545126Z" level=info msg="StartContainer for \"770fcc9b3177f891ec39a87d851b14db68230dbe0417648034c4e7851b66e0cb\"" Jan 29 11:36:12.806563 containerd[1959]: time="2025-01-29T11:36:12.806535944Z" level=info msg="StartContainer for \"770fcc9b3177f891ec39a87d851b14db68230dbe0417648034c4e7851b66e0cb\" returns successfully" Jan 29 11:36:12.866603 containerd[1959]: time="2025-01-29T11:36:12.866546313Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7bc55997bb-6lgnt,Uid:050ed224-79d9-4aad-bc54-458e2e0803a9,Namespace:tigera-operator,Attempt:0,}" Jan 29 11:36:12.876497 containerd[1959]: time="2025-01-29T11:36:12.876414763Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:36:12.876497 containerd[1959]: time="2025-01-29T11:36:12.876439520Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:36:12.876497 containerd[1959]: time="2025-01-29T11:36:12.876446283Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:36:12.876608 containerd[1959]: time="2025-01-29T11:36:12.876485344Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:36:12.921278 containerd[1959]: time="2025-01-29T11:36:12.921252535Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7bc55997bb-6lgnt,Uid:050ed224-79d9-4aad-bc54-458e2e0803a9,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"4236fd33e8fd2d4e1ffa0a8f92040ef52f5ad62472c0bd14b19639bf1d5e3c53\"" Jan 29 11:36:12.922203 containerd[1959]: time="2025-01-29T11:36:12.922186255Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\"" Jan 29 11:36:13.243878 kubelet[3444]: I0129 11:36:13.243814 3444 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-2jq7t" podStartSLOduration=1.243801633 podStartE2EDuration="1.243801633s" podCreationTimestamp="2025-01-29 11:36:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:36:13.243756082 +0000 UTC m=+17.092623550" watchObservedRunningTime="2025-01-29 11:36:13.243801633 +0000 UTC m=+17.092669102" Jan 29 11:36:14.218730 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1987628771.mount: Deactivated successfully. 
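The PullImage request above for quay.io/tigera/operator:v1.36.2 completes in the entries that follow, where containerd reports the pull took roughly 1.52s. That figure is simply the gap between the two containerd timestamps; a short Go sketch using the timestamps copied from these entries makes the arithmetic explicit.

    package main

    import (
    	"fmt"
    	"time"
    )

    func mustParse(ts string) time.Time {
    	t, err := time.Parse(time.RFC3339Nano, ts)
    	if err != nil {
    		panic(err)
    	}
    	return t
    }

    func main() {
    	// Timestamps taken from the containerd entries for this pull.
    	start := mustParse("2025-01-29T11:36:12.922186255Z") // PullImage "quay.io/tigera/operator:v1.36.2"
    	done := mustParse("2025-01-29T11:36:14.438969883Z")  // Pulled image ... "in 1.516759888s"
    	fmt.Println("elapsed:", done.Sub(start))             // ~1.5168s, in line with the logged duration
    }

The pod_startup_latency_tracker entry further down reports the same window as firstStartedPulling/lastFinishedPulling when it computes the tigera-operator pod's startup duration.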
Jan 29 11:36:14.436990 containerd[1959]: time="2025-01-29T11:36:14.436930929Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:36:14.437206 containerd[1959]: time="2025-01-29T11:36:14.437158117Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=21762497" Jan 29 11:36:14.437515 containerd[1959]: time="2025-01-29T11:36:14.437475670Z" level=info msg="ImageCreate event name:\"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:36:14.438576 containerd[1959]: time="2025-01-29T11:36:14.438535904Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:36:14.439010 containerd[1959]: time="2025-01-29T11:36:14.438969883Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"21758492\" in 1.516759888s" Jan 29 11:36:14.439010 containerd[1959]: time="2025-01-29T11:36:14.438984981Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\"" Jan 29 11:36:14.440034 containerd[1959]: time="2025-01-29T11:36:14.439993595Z" level=info msg="CreateContainer within sandbox \"4236fd33e8fd2d4e1ffa0a8f92040ef52f5ad62472c0bd14b19639bf1d5e3c53\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 29 11:36:14.443264 containerd[1959]: time="2025-01-29T11:36:14.443250234Z" level=info msg="CreateContainer within sandbox \"4236fd33e8fd2d4e1ffa0a8f92040ef52f5ad62472c0bd14b19639bf1d5e3c53\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"e6c5329c7bce875e8cba7e2137d45f5d3fea3c00c90ed3228a559324ed5f757e\"" Jan 29 11:36:14.443422 containerd[1959]: time="2025-01-29T11:36:14.443408770Z" level=info msg="StartContainer for \"e6c5329c7bce875e8cba7e2137d45f5d3fea3c00c90ed3228a559324ed5f757e\"" Jan 29 11:36:14.481240 containerd[1959]: time="2025-01-29T11:36:14.481179862Z" level=info msg="StartContainer for \"e6c5329c7bce875e8cba7e2137d45f5d3fea3c00c90ed3228a559324ed5f757e\" returns successfully" Jan 29 11:36:15.257881 kubelet[3444]: I0129 11:36:15.257802 3444 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7bc55997bb-6lgnt" podStartSLOduration=1.740289426 podStartE2EDuration="3.257791776s" podCreationTimestamp="2025-01-29 11:36:12 +0000 UTC" firstStartedPulling="2025-01-29 11:36:12.92192043 +0000 UTC m=+16.770787905" lastFinishedPulling="2025-01-29 11:36:14.439422786 +0000 UTC m=+18.288290255" observedRunningTime="2025-01-29 11:36:15.257615128 +0000 UTC m=+19.106482596" watchObservedRunningTime="2025-01-29 11:36:15.257791776 +0000 UTC m=+19.106659243" Jan 29 11:36:17.358683 kubelet[3444]: I0129 11:36:17.358640 3444 topology_manager.go:215] "Topology Admit Handler" podUID="a710c809-189f-4bcc-a855-44488800b153" podNamespace="calico-system" podName="calico-typha-55589854b5-n7ztw" Jan 29 11:36:17.375783 kubelet[3444]: I0129 11:36:17.375753 3444 topology_manager.go:215] "Topology 
Admit Handler" podUID="e35ca0b4-9e22-43f6-a380-2cab75b1f396" podNamespace="calico-system" podName="calico-node-9dmps" Jan 29 11:36:17.447336 kubelet[3444]: I0129 11:36:17.447213 3444 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/e35ca0b4-9e22-43f6-a380-2cab75b1f396-var-lib-calico\") pod \"calico-node-9dmps\" (UID: \"e35ca0b4-9e22-43f6-a380-2cab75b1f396\") " pod="calico-system/calico-node-9dmps" Jan 29 11:36:17.447613 kubelet[3444]: I0129 11:36:17.447345 3444 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/e35ca0b4-9e22-43f6-a380-2cab75b1f396-cni-net-dir\") pod \"calico-node-9dmps\" (UID: \"e35ca0b4-9e22-43f6-a380-2cab75b1f396\") " pod="calico-system/calico-node-9dmps" Jan 29 11:36:17.447613 kubelet[3444]: I0129 11:36:17.447416 3444 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v79r2\" (UniqueName: \"kubernetes.io/projected/a710c809-189f-4bcc-a855-44488800b153-kube-api-access-v79r2\") pod \"calico-typha-55589854b5-n7ztw\" (UID: \"a710c809-189f-4bcc-a855-44488800b153\") " pod="calico-system/calico-typha-55589854b5-n7ztw" Jan 29 11:36:17.447613 kubelet[3444]: I0129 11:36:17.447502 3444 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/e35ca0b4-9e22-43f6-a380-2cab75b1f396-cni-log-dir\") pod \"calico-node-9dmps\" (UID: \"e35ca0b4-9e22-43f6-a380-2cab75b1f396\") " pod="calico-system/calico-node-9dmps" Jan 29 11:36:17.447919 kubelet[3444]: I0129 11:36:17.447643 3444 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e35ca0b4-9e22-43f6-a380-2cab75b1f396-xtables-lock\") pod \"calico-node-9dmps\" (UID: \"e35ca0b4-9e22-43f6-a380-2cab75b1f396\") " pod="calico-system/calico-node-9dmps" Jan 29 11:36:17.447919 kubelet[3444]: I0129 11:36:17.447773 3444 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/e35ca0b4-9e22-43f6-a380-2cab75b1f396-policysync\") pod \"calico-node-9dmps\" (UID: \"e35ca0b4-9e22-43f6-a380-2cab75b1f396\") " pod="calico-system/calico-node-9dmps" Jan 29 11:36:17.447919 kubelet[3444]: I0129 11:36:17.447855 3444 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e35ca0b4-9e22-43f6-a380-2cab75b1f396-tigera-ca-bundle\") pod \"calico-node-9dmps\" (UID: \"e35ca0b4-9e22-43f6-a380-2cab75b1f396\") " pod="calico-system/calico-node-9dmps" Jan 29 11:36:17.447919 kubelet[3444]: I0129 11:36:17.447915 3444 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/e35ca0b4-9e22-43f6-a380-2cab75b1f396-node-certs\") pod \"calico-node-9dmps\" (UID: \"e35ca0b4-9e22-43f6-a380-2cab75b1f396\") " pod="calico-system/calico-node-9dmps" Jan 29 11:36:17.448318 kubelet[3444]: I0129 11:36:17.447965 3444 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/e35ca0b4-9e22-43f6-a380-2cab75b1f396-cni-bin-dir\") pod \"calico-node-9dmps\" (UID: 
\"e35ca0b4-9e22-43f6-a380-2cab75b1f396\") " pod="calico-system/calico-node-9dmps" Jan 29 11:36:17.448318 kubelet[3444]: I0129 11:36:17.448102 3444 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fgpsr\" (UniqueName: \"kubernetes.io/projected/e35ca0b4-9e22-43f6-a380-2cab75b1f396-kube-api-access-fgpsr\") pod \"calico-node-9dmps\" (UID: \"e35ca0b4-9e22-43f6-a380-2cab75b1f396\") " pod="calico-system/calico-node-9dmps" Jan 29 11:36:17.448318 kubelet[3444]: I0129 11:36:17.448211 3444 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/a710c809-189f-4bcc-a855-44488800b153-typha-certs\") pod \"calico-typha-55589854b5-n7ztw\" (UID: \"a710c809-189f-4bcc-a855-44488800b153\") " pod="calico-system/calico-typha-55589854b5-n7ztw" Jan 29 11:36:17.448318 kubelet[3444]: I0129 11:36:17.448273 3444 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a710c809-189f-4bcc-a855-44488800b153-tigera-ca-bundle\") pod \"calico-typha-55589854b5-n7ztw\" (UID: \"a710c809-189f-4bcc-a855-44488800b153\") " pod="calico-system/calico-typha-55589854b5-n7ztw" Jan 29 11:36:17.448721 kubelet[3444]: I0129 11:36:17.448353 3444 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/e35ca0b4-9e22-43f6-a380-2cab75b1f396-var-run-calico\") pod \"calico-node-9dmps\" (UID: \"e35ca0b4-9e22-43f6-a380-2cab75b1f396\") " pod="calico-system/calico-node-9dmps" Jan 29 11:36:17.448721 kubelet[3444]: I0129 11:36:17.448461 3444 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/e35ca0b4-9e22-43f6-a380-2cab75b1f396-flexvol-driver-host\") pod \"calico-node-9dmps\" (UID: \"e35ca0b4-9e22-43f6-a380-2cab75b1f396\") " pod="calico-system/calico-node-9dmps" Jan 29 11:36:17.448721 kubelet[3444]: I0129 11:36:17.448574 3444 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e35ca0b4-9e22-43f6-a380-2cab75b1f396-lib-modules\") pod \"calico-node-9dmps\" (UID: \"e35ca0b4-9e22-43f6-a380-2cab75b1f396\") " pod="calico-system/calico-node-9dmps" Jan 29 11:36:17.504571 kubelet[3444]: I0129 11:36:17.504476 3444 topology_manager.go:215] "Topology Admit Handler" podUID="07bb4f50-b419-4106-95cb-874077564881" podNamespace="calico-system" podName="csi-node-driver-4frhm" Jan 29 11:36:17.505261 kubelet[3444]: E0129 11:36:17.505203 3444 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4frhm" podUID="07bb4f50-b419-4106-95cb-874077564881" Jan 29 11:36:17.549607 kubelet[3444]: I0129 11:36:17.549555 3444 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/07bb4f50-b419-4106-95cb-874077564881-varrun\") pod \"csi-node-driver-4frhm\" (UID: \"07bb4f50-b419-4106-95cb-874077564881\") " pod="calico-system/csi-node-driver-4frhm" Jan 29 11:36:17.549868 kubelet[3444]: I0129 11:36:17.549682 3444 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/07bb4f50-b419-4106-95cb-874077564881-kubelet-dir\") pod \"csi-node-driver-4frhm\" (UID: \"07bb4f50-b419-4106-95cb-874077564881\") " pod="calico-system/csi-node-driver-4frhm" Jan 29 11:36:17.550023 kubelet[3444]: I0129 11:36:17.549973 3444 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/07bb4f50-b419-4106-95cb-874077564881-socket-dir\") pod \"csi-node-driver-4frhm\" (UID: \"07bb4f50-b419-4106-95cb-874077564881\") " pod="calico-system/csi-node-driver-4frhm" Jan 29 11:36:17.550465 kubelet[3444]: E0129 11:36:17.550443 3444 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:36:17.550465 kubelet[3444]: W0129 11:36:17.550466 3444 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:36:17.550613 kubelet[3444]: E0129 11:36:17.550494 3444 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:36:17.550795 kubelet[3444]: E0129 11:36:17.550772 3444 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:36:17.550795 kubelet[3444]: W0129 11:36:17.550788 3444 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:36:17.550982 kubelet[3444]: E0129 11:36:17.550805 3444 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:36:17.550982 kubelet[3444]: I0129 11:36:17.550836 3444 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/07bb4f50-b419-4106-95cb-874077564881-registration-dir\") pod \"csi-node-driver-4frhm\" (UID: \"07bb4f50-b419-4106-95cb-874077564881\") " pod="calico-system/csi-node-driver-4frhm" Jan 29 11:36:17.551083 kubelet[3444]: E0129 11:36:17.551066 3444 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:36:17.551134 kubelet[3444]: W0129 11:36:17.551081 3444 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:36:17.551134 kubelet[3444]: E0129 11:36:17.551100 3444 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:36:17.551441 kubelet[3444]: E0129 11:36:17.551397 3444 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:36:17.551441 kubelet[3444]: W0129 11:36:17.551415 3444 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:36:17.551441 kubelet[3444]: E0129 11:36:17.551439 3444 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:36:17.551732 kubelet[3444]: E0129 11:36:17.551715 3444 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:36:17.551788 kubelet[3444]: W0129 11:36:17.551733 3444 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:36:17.551788 kubelet[3444]: E0129 11:36:17.551760 3444 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:36:17.552046 kubelet[3444]: E0129 11:36:17.552030 3444 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:36:17.552106 kubelet[3444]: W0129 11:36:17.552046 3444 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:36:17.552106 kubelet[3444]: E0129 11:36:17.552066 3444 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:36:17.552383 kubelet[3444]: E0129 11:36:17.552368 3444 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:36:17.552464 kubelet[3444]: W0129 11:36:17.552383 3444 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:36:17.552464 kubelet[3444]: E0129 11:36:17.552402 3444 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:36:17.552650 kubelet[3444]: E0129 11:36:17.552634 3444 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:36:17.552705 kubelet[3444]: W0129 11:36:17.552649 3444 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:36:17.552705 kubelet[3444]: E0129 11:36:17.552668 3444 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:36:17.552927 kubelet[3444]: E0129 11:36:17.552908 3444 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:36:17.552927 kubelet[3444]: W0129 11:36:17.552922 3444 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:36:17.553059 kubelet[3444]: E0129 11:36:17.552985 3444 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:36:17.553136 kubelet[3444]: E0129 11:36:17.553123 3444 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:36:17.553191 kubelet[3444]: W0129 11:36:17.553136 3444 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:36:17.553252 kubelet[3444]: E0129 11:36:17.553198 3444 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:36:17.553410 kubelet[3444]: E0129 11:36:17.553363 3444 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:36:17.553410 kubelet[3444]: W0129 11:36:17.553374 3444 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:36:17.553572 kubelet[3444]: E0129 11:36:17.553409 3444 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:36:17.553690 kubelet[3444]: E0129 11:36:17.553645 3444 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:36:17.553690 kubelet[3444]: W0129 11:36:17.553664 3444 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:36:17.553690 kubelet[3444]: E0129 11:36:17.553688 3444 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:36:17.554032 kubelet[3444]: E0129 11:36:17.554015 3444 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:36:17.554102 kubelet[3444]: W0129 11:36:17.554032 3444 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:36:17.554102 kubelet[3444]: E0129 11:36:17.554053 3444 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:36:17.554379 kubelet[3444]: E0129 11:36:17.554323 3444 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:36:17.554379 kubelet[3444]: W0129 11:36:17.554343 3444 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:36:17.554379 kubelet[3444]: E0129 11:36:17.554362 3444 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:36:17.554656 kubelet[3444]: E0129 11:36:17.554618 3444 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:36:17.554656 kubelet[3444]: W0129 11:36:17.554631 3444 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:36:17.554779 kubelet[3444]: E0129 11:36:17.554670 3444 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:36:17.554933 kubelet[3444]: E0129 11:36:17.554914 3444 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:36:17.554987 kubelet[3444]: W0129 11:36:17.554937 3444 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:36:17.555035 kubelet[3444]: E0129 11:36:17.554974 3444 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:36:17.555236 kubelet[3444]: E0129 11:36:17.555219 3444 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:36:17.555305 kubelet[3444]: W0129 11:36:17.555239 3444 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:36:17.555305 kubelet[3444]: E0129 11:36:17.555275 3444 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:36:17.555611 kubelet[3444]: E0129 11:36:17.555584 3444 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:36:17.555611 kubelet[3444]: W0129 11:36:17.555603 3444 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:36:17.555791 kubelet[3444]: E0129 11:36:17.555628 3444 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:36:17.555945 kubelet[3444]: E0129 11:36:17.555926 3444 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:36:17.556021 kubelet[3444]: W0129 11:36:17.555947 3444 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:36:17.556021 kubelet[3444]: E0129 11:36:17.555976 3444 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:36:17.556237 kubelet[3444]: E0129 11:36:17.556220 3444 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:36:17.556313 kubelet[3444]: W0129 11:36:17.556240 3444 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:36:17.556368 kubelet[3444]: E0129 11:36:17.556289 3444 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:36:17.556598 kubelet[3444]: E0129 11:36:17.556555 3444 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:36:17.556598 kubelet[3444]: W0129 11:36:17.556569 3444 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:36:17.556814 kubelet[3444]: E0129 11:36:17.556614 3444 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:36:17.556934 kubelet[3444]: E0129 11:36:17.556911 3444 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:36:17.557028 kubelet[3444]: W0129 11:36:17.556932 3444 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:36:17.557028 kubelet[3444]: E0129 11:36:17.556983 3444 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:36:17.557226 kubelet[3444]: E0129 11:36:17.557203 3444 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:36:17.557226 kubelet[3444]: W0129 11:36:17.557222 3444 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:36:17.557434 kubelet[3444]: E0129 11:36:17.557272 3444 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:36:17.557531 kubelet[3444]: E0129 11:36:17.557504 3444 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:36:17.557531 kubelet[3444]: W0129 11:36:17.557516 3444 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:36:17.557698 kubelet[3444]: E0129 11:36:17.557580 3444 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:36:17.557792 kubelet[3444]: E0129 11:36:17.557755 3444 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:36:17.557792 kubelet[3444]: W0129 11:36:17.557766 3444 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:36:17.557971 kubelet[3444]: E0129 11:36:17.557793 3444 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:36:17.558075 kubelet[3444]: E0129 11:36:17.558053 3444 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:36:17.558075 kubelet[3444]: W0129 11:36:17.558072 3444 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:36:17.558188 kubelet[3444]: E0129 11:36:17.558120 3444 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:36:17.558373 kubelet[3444]: E0129 11:36:17.558330 3444 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:36:17.558373 kubelet[3444]: W0129 11:36:17.558343 3444 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:36:17.558532 kubelet[3444]: E0129 11:36:17.558383 3444 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:36:17.558596 kubelet[3444]: E0129 11:36:17.558583 3444 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:36:17.558648 kubelet[3444]: W0129 11:36:17.558595 3444 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:36:17.558696 kubelet[3444]: E0129 11:36:17.558669 3444 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:36:17.558915 kubelet[3444]: E0129 11:36:17.558872 3444 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:36:17.558915 kubelet[3444]: W0129 11:36:17.558886 3444 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:36:17.559033 kubelet[3444]: E0129 11:36:17.558960 3444 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:36:17.559149 kubelet[3444]: E0129 11:36:17.559135 3444 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:36:17.559199 kubelet[3444]: W0129 11:36:17.559149 3444 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:36:17.559199 kubelet[3444]: E0129 11:36:17.559177 3444 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:36:17.559397 kubelet[3444]: E0129 11:36:17.559356 3444 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:36:17.559397 kubelet[3444]: W0129 11:36:17.559368 3444 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:36:17.559397 kubelet[3444]: E0129 11:36:17.559386 3444 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:36:17.559561 kubelet[3444]: I0129 11:36:17.559418 3444 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-68plp\" (UniqueName: \"kubernetes.io/projected/07bb4f50-b419-4106-95cb-874077564881-kube-api-access-68plp\") pod \"csi-node-driver-4frhm\" (UID: \"07bb4f50-b419-4106-95cb-874077564881\") " pod="calico-system/csi-node-driver-4frhm" Jan 29 11:36:17.559781 kubelet[3444]: E0129 11:36:17.559763 3444 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:36:17.559837 kubelet[3444]: W0129 11:36:17.559783 3444 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:36:17.559837 kubelet[3444]: E0129 11:36:17.559806 3444 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:36:17.560136 kubelet[3444]: E0129 11:36:17.560115 3444 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:36:17.560136 kubelet[3444]: W0129 11:36:17.560136 3444 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:36:17.560265 kubelet[3444]: E0129 11:36:17.560172 3444 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:36:17.560468 kubelet[3444]: E0129 11:36:17.560405 3444 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:36:17.560468 kubelet[3444]: W0129 11:36:17.560421 3444 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:36:17.560667 kubelet[3444]: E0129 11:36:17.560500 3444 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:36:17.560747 kubelet[3444]: E0129 11:36:17.560684 3444 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:36:17.560747 kubelet[3444]: W0129 11:36:17.560699 3444 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:36:17.560868 kubelet[3444]: E0129 11:36:17.560742 3444 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:36:17.560967 kubelet[3444]: E0129 11:36:17.560951 3444 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:36:17.561032 kubelet[3444]: W0129 11:36:17.560969 3444 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:36:17.561098 kubelet[3444]: E0129 11:36:17.561028 3444 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:36:17.561276 kubelet[3444]: E0129 11:36:17.561260 3444 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:36:17.561344 kubelet[3444]: W0129 11:36:17.561280 3444 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:36:17.561410 kubelet[3444]: E0129 11:36:17.561380 3444 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:36:17.561656 kubelet[3444]: E0129 11:36:17.561618 3444 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:36:17.561656 kubelet[3444]: W0129 11:36:17.561638 3444 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:36:17.561757 kubelet[3444]: E0129 11:36:17.561712 3444 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:36:17.561922 kubelet[3444]: E0129 11:36:17.561908 3444 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:36:17.561974 kubelet[3444]: W0129 11:36:17.561923 3444 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:36:17.561974 kubelet[3444]: E0129 11:36:17.561956 3444 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:36:17.562164 kubelet[3444]: E0129 11:36:17.562151 3444 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:36:17.562164 kubelet[3444]: W0129 11:36:17.562163 3444 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:36:17.562269 kubelet[3444]: E0129 11:36:17.562188 3444 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:36:17.562486 kubelet[3444]: E0129 11:36:17.562455 3444 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:36:17.562486 kubelet[3444]: W0129 11:36:17.562476 3444 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:36:17.562669 kubelet[3444]: E0129 11:36:17.562510 3444 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:36:17.562760 kubelet[3444]: E0129 11:36:17.562743 3444 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:36:17.562821 kubelet[3444]: W0129 11:36:17.562759 3444 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:36:17.562821 kubelet[3444]: E0129 11:36:17.562789 3444 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:36:17.563010 kubelet[3444]: E0129 11:36:17.562997 3444 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:36:17.563010 kubelet[3444]: W0129 11:36:17.563010 3444 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:36:17.563163 kubelet[3444]: E0129 11:36:17.563038 3444 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:36:17.563269 kubelet[3444]: E0129 11:36:17.563252 3444 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:36:17.563269 kubelet[3444]: W0129 11:36:17.563264 3444 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:36:17.563459 kubelet[3444]: E0129 11:36:17.563311 3444 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:36:17.563610 kubelet[3444]: E0129 11:36:17.563587 3444 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:36:17.563610 kubelet[3444]: W0129 11:36:17.563602 3444 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:36:17.563760 kubelet[3444]: E0129 11:36:17.563672 3444 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:36:17.563886 kubelet[3444]: E0129 11:36:17.563864 3444 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:36:17.563944 kubelet[3444]: W0129 11:36:17.563884 3444 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:36:17.563992 kubelet[3444]: E0129 11:36:17.563928 3444 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:36:17.564219 kubelet[3444]: E0129 11:36:17.564196 3444 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:36:17.564359 kubelet[3444]: W0129 11:36:17.564218 3444 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:36:17.564359 kubelet[3444]: E0129 11:36:17.564263 3444 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:36:17.564583 kubelet[3444]: E0129 11:36:17.564562 3444 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:36:17.564676 kubelet[3444]: W0129 11:36:17.564582 3444 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:36:17.564676 kubelet[3444]: E0129 11:36:17.564606 3444 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:36:17.564957 kubelet[3444]: E0129 11:36:17.564933 3444 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:36:17.565076 kubelet[3444]: W0129 11:36:17.564955 3444 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:36:17.565076 kubelet[3444]: E0129 11:36:17.564980 3444 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:36:17.565334 kubelet[3444]: E0129 11:36:17.565291 3444 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:36:17.565431 kubelet[3444]: W0129 11:36:17.565334 3444 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:36:17.565431 kubelet[3444]: E0129 11:36:17.565359 3444 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:36:17.567141 kubelet[3444]: E0129 11:36:17.567119 3444 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:36:17.567141 kubelet[3444]: W0129 11:36:17.567138 3444 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:36:17.567276 kubelet[3444]: E0129 11:36:17.567159 3444 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:36:17.569193 kubelet[3444]: E0129 11:36:17.569143 3444 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:36:17.569193 kubelet[3444]: W0129 11:36:17.569162 3444 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:36:17.569193 kubelet[3444]: E0129 11:36:17.569180 3444 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:36:17.573870 kubelet[3444]: E0129 11:36:17.573848 3444 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:36:17.573870 kubelet[3444]: W0129 11:36:17.573869 3444 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:36:17.573997 kubelet[3444]: E0129 11:36:17.573892 3444 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:36:17.663325 kubelet[3444]: E0129 11:36:17.663081 3444 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:36:17.663325 kubelet[3444]: W0129 11:36:17.663127 3444 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:36:17.663325 kubelet[3444]: E0129 11:36:17.663171 3444 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:36:17.663968 kubelet[3444]: E0129 11:36:17.663867 3444 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:36:17.663968 kubelet[3444]: W0129 11:36:17.663896 3444 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:36:17.663968 kubelet[3444]: E0129 11:36:17.663936 3444 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:36:17.664489 containerd[1959]: time="2025-01-29T11:36:17.663804226Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-55589854b5-n7ztw,Uid:a710c809-189f-4bcc-a855-44488800b153,Namespace:calico-system,Attempt:0,}" Jan 29 11:36:17.665307 kubelet[3444]: E0129 11:36:17.664613 3444 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:36:17.665307 kubelet[3444]: W0129 11:36:17.664647 3444 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:36:17.665307 kubelet[3444]: E0129 11:36:17.664689 3444 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:36:17.665307 kubelet[3444]: E0129 11:36:17.665193 3444 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:36:17.665307 kubelet[3444]: W0129 11:36:17.665223 3444 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:36:17.665307 kubelet[3444]: E0129 11:36:17.665239 3444 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:36:17.665484 kubelet[3444]: E0129 11:36:17.665398 3444 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:36:17.665484 kubelet[3444]: W0129 11:36:17.665404 3444 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:36:17.665484 kubelet[3444]: E0129 11:36:17.665431 3444 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:36:17.665564 kubelet[3444]: E0129 11:36:17.665521 3444 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:36:17.665564 kubelet[3444]: W0129 11:36:17.665527 3444 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:36:17.665564 kubelet[3444]: E0129 11:36:17.665539 3444 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:36:17.665652 kubelet[3444]: E0129 11:36:17.665644 3444 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:36:17.665652 kubelet[3444]: W0129 11:36:17.665651 3444 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:36:17.665703 kubelet[3444]: E0129 11:36:17.665659 3444 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:36:17.665800 kubelet[3444]: E0129 11:36:17.665790 3444 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:36:17.665800 kubelet[3444]: W0129 11:36:17.665798 3444 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:36:17.665860 kubelet[3444]: E0129 11:36:17.665809 3444 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:36:17.665944 kubelet[3444]: E0129 11:36:17.665936 3444 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:36:17.665944 kubelet[3444]: W0129 11:36:17.665943 3444 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:36:17.665999 kubelet[3444]: E0129 11:36:17.665952 3444 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:36:17.666055 kubelet[3444]: E0129 11:36:17.666049 3444 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:36:17.666055 kubelet[3444]: W0129 11:36:17.666054 3444 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:36:17.666105 kubelet[3444]: E0129 11:36:17.666061 3444 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:36:17.666178 kubelet[3444]: E0129 11:36:17.666172 3444 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:36:17.666205 kubelet[3444]: W0129 11:36:17.666179 3444 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:36:17.666205 kubelet[3444]: E0129 11:36:17.666188 3444 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:36:17.666280 kubelet[3444]: E0129 11:36:17.666275 3444 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:36:17.666280 kubelet[3444]: W0129 11:36:17.666280 3444 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:36:17.666341 kubelet[3444]: E0129 11:36:17.666287 3444 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:36:17.666387 kubelet[3444]: E0129 11:36:17.666378 3444 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:36:17.666387 kubelet[3444]: W0129 11:36:17.666382 3444 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:36:17.666434 kubelet[3444]: E0129 11:36:17.666389 3444 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:36:17.666463 kubelet[3444]: E0129 11:36:17.666458 3444 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:36:17.666463 kubelet[3444]: W0129 11:36:17.666463 3444 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:36:17.666496 kubelet[3444]: E0129 11:36:17.666467 3444 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:36:17.666567 kubelet[3444]: E0129 11:36:17.666561 3444 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:36:17.666585 kubelet[3444]: W0129 11:36:17.666568 3444 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:36:17.666585 kubelet[3444]: E0129 11:36:17.666575 3444 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:36:17.666658 kubelet[3444]: E0129 11:36:17.666654 3444 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:36:17.666675 kubelet[3444]: W0129 11:36:17.666660 3444 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:36:17.666675 kubelet[3444]: E0129 11:36:17.666667 3444 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:36:17.666768 kubelet[3444]: E0129 11:36:17.666763 3444 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:36:17.666786 kubelet[3444]: W0129 11:36:17.666769 3444 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:36:17.666786 kubelet[3444]: E0129 11:36:17.666780 3444 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:36:17.666892 kubelet[3444]: E0129 11:36:17.666887 3444 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:36:17.666909 kubelet[3444]: W0129 11:36:17.666892 3444 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:36:17.666909 kubelet[3444]: E0129 11:36:17.666902 3444 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:36:17.666983 kubelet[3444]: E0129 11:36:17.666977 3444 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:36:17.667002 kubelet[3444]: W0129 11:36:17.666983 3444 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:36:17.667002 kubelet[3444]: E0129 11:36:17.666993 3444 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:36:17.667077 kubelet[3444]: E0129 11:36:17.667072 3444 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:36:17.667096 kubelet[3444]: W0129 11:36:17.667077 3444 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:36:17.667096 kubelet[3444]: E0129 11:36:17.667083 3444 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:36:17.667164 kubelet[3444]: E0129 11:36:17.667159 3444 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:36:17.667182 kubelet[3444]: W0129 11:36:17.667164 3444 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:36:17.667182 kubelet[3444]: E0129 11:36:17.667172 3444 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:36:17.667315 kubelet[3444]: E0129 11:36:17.667309 3444 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:36:17.667338 kubelet[3444]: W0129 11:36:17.667315 3444 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:36:17.667338 kubelet[3444]: E0129 11:36:17.667322 3444 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:36:17.668044 kubelet[3444]: E0129 11:36:17.667410 3444 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:36:17.668044 kubelet[3444]: W0129 11:36:17.667414 3444 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:36:17.668044 kubelet[3444]: E0129 11:36:17.667420 3444 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:36:17.668044 kubelet[3444]: E0129 11:36:17.667502 3444 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:36:17.668044 kubelet[3444]: W0129 11:36:17.667506 3444 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:36:17.668044 kubelet[3444]: E0129 11:36:17.667511 3444 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:36:17.669659 kubelet[3444]: E0129 11:36:17.669620 3444 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:36:17.669659 kubelet[3444]: W0129 11:36:17.669630 3444 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:36:17.669659 kubelet[3444]: E0129 11:36:17.669637 3444 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:36:17.671064 kubelet[3444]: E0129 11:36:17.671054 3444 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:36:17.671064 kubelet[3444]: W0129 11:36:17.671061 3444 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:36:17.671170 kubelet[3444]: E0129 11:36:17.671068 3444 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:36:17.677433 containerd[1959]: time="2025-01-29T11:36:17.677364618Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:36:17.677612 containerd[1959]: time="2025-01-29T11:36:17.677564569Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:36:17.677612 containerd[1959]: time="2025-01-29T11:36:17.677575083Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:36:17.677662 containerd[1959]: time="2025-01-29T11:36:17.677617876Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:36:17.678590 containerd[1959]: time="2025-01-29T11:36:17.678537439Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-9dmps,Uid:e35ca0b4-9e22-43f6-a380-2cab75b1f396,Namespace:calico-system,Attempt:0,}" Jan 29 11:36:17.688638 containerd[1959]: time="2025-01-29T11:36:17.688532634Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:36:17.688638 containerd[1959]: time="2025-01-29T11:36:17.688563117Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:36:17.688638 containerd[1959]: time="2025-01-29T11:36:17.688570130Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:36:17.688638 containerd[1959]: time="2025-01-29T11:36:17.688612182Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:36:17.756097 containerd[1959]: time="2025-01-29T11:36:17.756015431Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-9dmps,Uid:e35ca0b4-9e22-43f6-a380-2cab75b1f396,Namespace:calico-system,Attempt:0,} returns sandbox id \"abf1198b511bebad0cf7bd2f338ca6970547bfdc5dfd72245d568d435706895b\"" Jan 29 11:36:17.759290 containerd[1959]: time="2025-01-29T11:36:17.759229883Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Jan 29 11:36:17.773022 containerd[1959]: time="2025-01-29T11:36:17.773001507Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-55589854b5-n7ztw,Uid:a710c809-189f-4bcc-a855-44488800b153,Namespace:calico-system,Attempt:0,} returns sandbox id \"c0ef38b8078aaed4d353f88c68a6c8a55ebe43adfe052f653d4156c1adf9bfb9\"" Jan 29 11:36:19.038094 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1296025250.mount: Deactivated successfully. Jan 29 11:36:19.076734 containerd[1959]: time="2025-01-29T11:36:19.076710473Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:36:19.076977 containerd[1959]: time="2025-01-29T11:36:19.076942675Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=6855343" Jan 29 11:36:19.077228 containerd[1959]: time="2025-01-29T11:36:19.077218300Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:36:19.078153 containerd[1959]: time="2025-01-29T11:36:19.078141178Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:36:19.078617 containerd[1959]: time="2025-01-29T11:36:19.078564070Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 1.319271162s" Jan 29 11:36:19.078617 containerd[1959]: time="2025-01-29T11:36:19.078593012Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\"" Jan 29 11:36:19.079106 containerd[1959]: time="2025-01-29T11:36:19.079068554Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\"" Jan 29 11:36:19.079555 containerd[1959]: time="2025-01-29T11:36:19.079511678Z" level=info msg="CreateContainer within sandbox \"abf1198b511bebad0cf7bd2f338ca6970547bfdc5dfd72245d568d435706895b\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 
29 11:36:19.084041 containerd[1959]: time="2025-01-29T11:36:19.084026323Z" level=info msg="CreateContainer within sandbox \"abf1198b511bebad0cf7bd2f338ca6970547bfdc5dfd72245d568d435706895b\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"96385e2daf572514f2036535489a28cc0ccde9610067311e4c49c037b03a0a0d\"" Jan 29 11:36:19.084275 containerd[1959]: time="2025-01-29T11:36:19.084264872Z" level=info msg="StartContainer for \"96385e2daf572514f2036535489a28cc0ccde9610067311e4c49c037b03a0a0d\"" Jan 29 11:36:19.133367 containerd[1959]: time="2025-01-29T11:36:19.133300745Z" level=info msg="StartContainer for \"96385e2daf572514f2036535489a28cc0ccde9610067311e4c49c037b03a0a0d\" returns successfully" Jan 29 11:36:19.193961 kubelet[3444]: E0129 11:36:19.193865 3444 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4frhm" podUID="07bb4f50-b419-4106-95cb-874077564881" Jan 29 11:36:19.566452 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-96385e2daf572514f2036535489a28cc0ccde9610067311e4c49c037b03a0a0d-rootfs.mount: Deactivated successfully. Jan 29 11:36:19.767743 containerd[1959]: time="2025-01-29T11:36:19.767713431Z" level=info msg="shim disconnected" id=96385e2daf572514f2036535489a28cc0ccde9610067311e4c49c037b03a0a0d namespace=k8s.io Jan 29 11:36:19.767743 containerd[1959]: time="2025-01-29T11:36:19.767741725Z" level=warning msg="cleaning up after shim disconnected" id=96385e2daf572514f2036535489a28cc0ccde9610067311e4c49c037b03a0a0d namespace=k8s.io Jan 29 11:36:19.767743 containerd[1959]: time="2025-01-29T11:36:19.767747355Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 11:36:19.870690 update_engine[1943]: I20250129 11:36:19.870458 1943 update_attempter.cc:509] Updating boot flags... 
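For context on the repeated driver-call.go and plugins.go entries above: the kubelet's FlexVolume probe execs every binary it finds under /opt/libexec/kubernetes/kubelet-plugins/volume/exec/<vendor~driver>/ with the argument init and unmarshals whatever the binary prints on stdout as a JSON status object. Because the nodeagent~uds/uds executable is missing on this node, stdout is empty and the unmarshal fails with "unexpected end of JSON input", so the plugin directory is skipped on every probe. Below is a minimal sketch of a driver that would satisfy the init call; the struct follows the documented FlexVolume output convention and is illustrative, not the kubelet's own type.

// Minimal sketch of a FlexVolume driver answering the kubelet's "init" call.
// The kubelet execs the binary installed under
// /opt/libexec/kubernetes/kubelet-plugins/volume/exec/<vendor~driver>/<driver>
// and parses its stdout as JSON; empty stdout is what produces the
// "unexpected end of JSON input" errors in the log above.
package main

import (
    "encoding/json"
    "fmt"
    "os"
)

type driverStatus struct {
    Status       string          `json:"status"`                 // "Success", "Failure" or "Not supported"
    Message      string          `json:"message,omitempty"`
    Capabilities map[string]bool `json:"capabilities,omitempty"` // e.g. {"attach": false}
}

func main() {
    if len(os.Args) < 2 {
        os.Exit(1)
    }
    switch os.Args[1] {
    case "init":
        // "init" must report success (and capabilities), otherwise the
        // plugin directory is skipped, as seen in the log above.
        out, _ := json.Marshal(driverStatus{
            Status:       "Success",
            Capabilities: map[string]bool{"attach": false},
        })
        fmt.Println(string(out))
    default:
        // Calls the driver does not implement should report "Not supported".
        out, _ := json.Marshal(driverStatus{Status: "Not supported"})
        fmt.Println(string(out))
    }
}

Installing a binary of this shape at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds (or removing the empty plugin directory) should stop the probe errors repeated above.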
Jan 29 11:36:19.916341 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (4170) Jan 29 11:36:19.937306 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (4173) Jan 29 11:36:19.964305 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (4173) Jan 29 11:36:21.194132 kubelet[3444]: E0129 11:36:21.193974 3444 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4frhm" podUID="07bb4f50-b419-4106-95cb-874077564881" Jan 29 11:36:21.416161 containerd[1959]: time="2025-01-29T11:36:21.416106046Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:36:21.416421 containerd[1959]: time="2025-01-29T11:36:21.416236727Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=29850141" Jan 29 11:36:21.416625 containerd[1959]: time="2025-01-29T11:36:21.416610606Z" level=info msg="ImageCreate event name:\"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:36:21.417598 containerd[1959]: time="2025-01-29T11:36:21.417584052Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:36:21.418015 containerd[1959]: time="2025-01-29T11:36:21.418003697Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"31343217\" in 2.338920996s" Jan 29 11:36:21.418053 containerd[1959]: time="2025-01-29T11:36:21.418018733Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\"" Jan 29 11:36:21.418499 containerd[1959]: time="2025-01-29T11:36:21.418485040Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Jan 29 11:36:21.421504 containerd[1959]: time="2025-01-29T11:36:21.421489181Z" level=info msg="CreateContainer within sandbox \"c0ef38b8078aaed4d353f88c68a6c8a55ebe43adfe052f653d4156c1adf9bfb9\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 29 11:36:21.425192 containerd[1959]: time="2025-01-29T11:36:21.425148544Z" level=info msg="CreateContainer within sandbox \"c0ef38b8078aaed4d353f88c68a6c8a55ebe43adfe052f653d4156c1adf9bfb9\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"c9a0f008a0de934c06d3bc5979d7480ca587a1c376bdbd7b9ef1c141ba0ab280\"" Jan 29 11:36:21.425429 containerd[1959]: time="2025-01-29T11:36:21.425416020Z" level=info msg="StartContainer for \"c9a0f008a0de934c06d3bc5979d7480ca587a1c376bdbd7b9ef1c141ba0ab280\"" Jan 29 11:36:21.478448 containerd[1959]: time="2025-01-29T11:36:21.478390238Z" level=info msg="StartContainer for \"c9a0f008a0de934c06d3bc5979d7480ca587a1c376bdbd7b9ef1c141ba0ab280\" returns successfully" Jan 29 
11:36:23.193803 kubelet[3444]: E0129 11:36:23.193742 3444 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4frhm" podUID="07bb4f50-b419-4106-95cb-874077564881" Jan 29 11:36:23.264801 kubelet[3444]: I0129 11:36:23.264784 3444 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 11:36:23.627699 containerd[1959]: time="2025-01-29T11:36:23.627675990Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:36:23.627935 containerd[1959]: time="2025-01-29T11:36:23.627914160Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154" Jan 29 11:36:23.628261 containerd[1959]: time="2025-01-29T11:36:23.628251596Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:36:23.630058 containerd[1959]: time="2025-01-29T11:36:23.630015723Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:36:23.630503 containerd[1959]: time="2025-01-29T11:36:23.630463470Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 2.211961899s" Jan 29 11:36:23.630503 containerd[1959]: time="2025-01-29T11:36:23.630477469Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\"" Jan 29 11:36:23.631468 containerd[1959]: time="2025-01-29T11:36:23.631434876Z" level=info msg="CreateContainer within sandbox \"abf1198b511bebad0cf7bd2f338ca6970547bfdc5dfd72245d568d435706895b\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 29 11:36:23.635732 containerd[1959]: time="2025-01-29T11:36:23.635690842Z" level=info msg="CreateContainer within sandbox \"abf1198b511bebad0cf7bd2f338ca6970547bfdc5dfd72245d568d435706895b\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"d45ff87b2efb51ba54204de52daa02008b277236823f6d2a6825cb0588336770\"" Jan 29 11:36:23.635950 containerd[1959]: time="2025-01-29T11:36:23.635939490Z" level=info msg="StartContainer for \"d45ff87b2efb51ba54204de52daa02008b277236823f6d2a6825cb0588336770\"" Jan 29 11:36:23.674149 containerd[1959]: time="2025-01-29T11:36:23.674104242Z" level=info msg="StartContainer for \"d45ff87b2efb51ba54204de52daa02008b277236823f6d2a6825cb0588336770\" returns successfully" Jan 29 11:36:24.235079 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d45ff87b2efb51ba54204de52daa02008b277236823f6d2a6825cb0588336770-rootfs.mount: Deactivated successfully. 
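The image pulls containerd reports above (pod2daemon-flexvol, typha and cni, each ending in a "Pulled image ... in N s" line) are performed in containerd's k8s.io namespace on behalf of the kubelet. A minimal sketch of reproducing one such pull and its timing with the containerd Go client follows, assuming the default socket path /run/containerd/containerd.sock and that the github.com/containerd/containerd module is available; it is illustrative, not what the kubelet itself runs.

// Minimal sketch: pull one of the Calico images named in the log above and
// time it, mirroring containerd's "Pulled image ... in N s" messages.
package main

import (
    "context"
    "fmt"
    "log"
    "time"

    "github.com/containerd/containerd"
    "github.com/containerd/containerd/namespaces"
)

func main() {
    client, err := containerd.New("/run/containerd/containerd.sock")
    if err != nil {
        log.Fatal(err)
    }
    defer client.Close()

    // Images pulled via CRI live in the "k8s.io" namespace.
    ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

    start := time.Now()
    img, err := client.Pull(ctx, "ghcr.io/flatcar/calico/cni:v3.29.1", containerd.WithPullUnpack)
    if err != nil {
        log.Fatal(err)
    }
    fmt.Printf("Pulled %s in %s\n", img.Name(), time.Since(start))
}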
Jan 29 11:36:24.280574 kubelet[3444]: I0129 11:36:24.280513 3444 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-55589854b5-n7ztw" podStartSLOduration=3.636303114 podStartE2EDuration="7.28048926s" podCreationTimestamp="2025-01-29 11:36:17 +0000 UTC" firstStartedPulling="2025-01-29 11:36:17.774237086 +0000 UTC m=+21.623104554" lastFinishedPulling="2025-01-29 11:36:21.41842323 +0000 UTC m=+25.267290700" observedRunningTime="2025-01-29 11:36:22.270537844 +0000 UTC m=+26.119405321" watchObservedRunningTime="2025-01-29 11:36:24.28048926 +0000 UTC m=+28.129356755" Jan 29 11:36:24.324855 kubelet[3444]: I0129 11:36:24.324765 3444 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jan 29 11:36:24.361918 kubelet[3444]: I0129 11:36:24.361801 3444 topology_manager.go:215] "Topology Admit Handler" podUID="c7ba5fb2-2386-4664-9fd0-5429e827f425" podNamespace="kube-system" podName="coredns-7db6d8ff4d-bs4sr" Jan 29 11:36:24.362855 kubelet[3444]: I0129 11:36:24.362752 3444 topology_manager.go:215] "Topology Admit Handler" podUID="db110898-7bf6-46df-89f1-60bff5c819f3" podNamespace="calico-system" podName="calico-kube-controllers-5849fd7c66-k8fpk" Jan 29 11:36:24.364093 kubelet[3444]: I0129 11:36:24.364043 3444 topology_manager.go:215] "Topology Admit Handler" podUID="fa522645-0537-44d1-a6b0-ea427a3e77da" podNamespace="kube-system" podName="coredns-7db6d8ff4d-jgtzn" Jan 29 11:36:24.365498 kubelet[3444]: I0129 11:36:24.365393 3444 topology_manager.go:215] "Topology Admit Handler" podUID="5acf5fde-1589-42fe-98f2-79caa06a6364" podNamespace="calico-apiserver" podName="calico-apiserver-66755f7995-4p25x" Jan 29 11:36:24.367054 kubelet[3444]: I0129 11:36:24.366983 3444 topology_manager.go:215] "Topology Admit Handler" podUID="bd3d55b5-a627-418c-9156-9c5f11420d4d" podNamespace="calico-apiserver" podName="calico-apiserver-66755f7995-nr82b" Jan 29 11:36:24.410559 kubelet[3444]: I0129 11:36:24.410481 3444 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7jl99\" (UniqueName: \"kubernetes.io/projected/5acf5fde-1589-42fe-98f2-79caa06a6364-kube-api-access-7jl99\") pod \"calico-apiserver-66755f7995-4p25x\" (UID: \"5acf5fde-1589-42fe-98f2-79caa06a6364\") " pod="calico-apiserver/calico-apiserver-66755f7995-4p25x" Jan 29 11:36:24.410821 kubelet[3444]: I0129 11:36:24.410584 3444 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/bd3d55b5-a627-418c-9156-9c5f11420d4d-calico-apiserver-certs\") pod \"calico-apiserver-66755f7995-nr82b\" (UID: \"bd3d55b5-a627-418c-9156-9c5f11420d4d\") " pod="calico-apiserver/calico-apiserver-66755f7995-nr82b" Jan 29 11:36:24.410821 kubelet[3444]: I0129 11:36:24.410641 3444 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m5cr8\" (UniqueName: \"kubernetes.io/projected/bd3d55b5-a627-418c-9156-9c5f11420d4d-kube-api-access-m5cr8\") pod \"calico-apiserver-66755f7995-nr82b\" (UID: \"bd3d55b5-a627-418c-9156-9c5f11420d4d\") " pod="calico-apiserver/calico-apiserver-66755f7995-nr82b" Jan 29 11:36:24.410821 kubelet[3444]: I0129 11:36:24.410710 3444 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hltvf\" (UniqueName: \"kubernetes.io/projected/c7ba5fb2-2386-4664-9fd0-5429e827f425-kube-api-access-hltvf\") pod \"coredns-7db6d8ff4d-bs4sr\" 
(UID: \"c7ba5fb2-2386-4664-9fd0-5429e827f425\") " pod="kube-system/coredns-7db6d8ff4d-bs4sr" Jan 29 11:36:24.410821 kubelet[3444]: I0129 11:36:24.410768 3444 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fa522645-0537-44d1-a6b0-ea427a3e77da-config-volume\") pod \"coredns-7db6d8ff4d-jgtzn\" (UID: \"fa522645-0537-44d1-a6b0-ea427a3e77da\") " pod="kube-system/coredns-7db6d8ff4d-jgtzn" Jan 29 11:36:24.411259 kubelet[3444]: I0129 11:36:24.410826 3444 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c7ba5fb2-2386-4664-9fd0-5429e827f425-config-volume\") pod \"coredns-7db6d8ff4d-bs4sr\" (UID: \"c7ba5fb2-2386-4664-9fd0-5429e827f425\") " pod="kube-system/coredns-7db6d8ff4d-bs4sr" Jan 29 11:36:24.411259 kubelet[3444]: I0129 11:36:24.410880 3444 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t4bl2\" (UniqueName: \"kubernetes.io/projected/db110898-7bf6-46df-89f1-60bff5c819f3-kube-api-access-t4bl2\") pod \"calico-kube-controllers-5849fd7c66-k8fpk\" (UID: \"db110898-7bf6-46df-89f1-60bff5c819f3\") " pod="calico-system/calico-kube-controllers-5849fd7c66-k8fpk" Jan 29 11:36:24.411259 kubelet[3444]: I0129 11:36:24.410937 3444 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/db110898-7bf6-46df-89f1-60bff5c819f3-tigera-ca-bundle\") pod \"calico-kube-controllers-5849fd7c66-k8fpk\" (UID: \"db110898-7bf6-46df-89f1-60bff5c819f3\") " pod="calico-system/calico-kube-controllers-5849fd7c66-k8fpk" Jan 29 11:36:24.411259 kubelet[3444]: I0129 11:36:24.410987 3444 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fs9h5\" (UniqueName: \"kubernetes.io/projected/fa522645-0537-44d1-a6b0-ea427a3e77da-kube-api-access-fs9h5\") pod \"coredns-7db6d8ff4d-jgtzn\" (UID: \"fa522645-0537-44d1-a6b0-ea427a3e77da\") " pod="kube-system/coredns-7db6d8ff4d-jgtzn" Jan 29 11:36:24.411259 kubelet[3444]: I0129 11:36:24.411033 3444 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/5acf5fde-1589-42fe-98f2-79caa06a6364-calico-apiserver-certs\") pod \"calico-apiserver-66755f7995-4p25x\" (UID: \"5acf5fde-1589-42fe-98f2-79caa06a6364\") " pod="calico-apiserver/calico-apiserver-66755f7995-4p25x" Jan 29 11:36:24.671347 containerd[1959]: time="2025-01-29T11:36:24.671234678Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-bs4sr,Uid:c7ba5fb2-2386-4664-9fd0-5429e827f425,Namespace:kube-system,Attempt:0,}" Jan 29 11:36:24.673180 containerd[1959]: time="2025-01-29T11:36:24.673123911Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-jgtzn,Uid:fa522645-0537-44d1-a6b0-ea427a3e77da,Namespace:kube-system,Attempt:0,}" Jan 29 11:36:24.677980 containerd[1959]: time="2025-01-29T11:36:24.677905527Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5849fd7c66-k8fpk,Uid:db110898-7bf6-46df-89f1-60bff5c819f3,Namespace:calico-system,Attempt:0,}" Jan 29 11:36:24.679711 containerd[1959]: time="2025-01-29T11:36:24.679645899Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-66755f7995-4p25x,Uid:5acf5fde-1589-42fe-98f2-79caa06a6364,Namespace:calico-apiserver,Attempt:0,}" Jan 29 11:36:24.682614 containerd[1959]: time="2025-01-29T11:36:24.682535121Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-66755f7995-nr82b,Uid:bd3d55b5-a627-418c-9156-9c5f11420d4d,Namespace:calico-apiserver,Attempt:0,}" Jan 29 11:36:24.924279 containerd[1959]: time="2025-01-29T11:36:24.924187246Z" level=info msg="shim disconnected" id=d45ff87b2efb51ba54204de52daa02008b277236823f6d2a6825cb0588336770 namespace=k8s.io Jan 29 11:36:24.924279 containerd[1959]: time="2025-01-29T11:36:24.924213789Z" level=warning msg="cleaning up after shim disconnected" id=d45ff87b2efb51ba54204de52daa02008b277236823f6d2a6825cb0588336770 namespace=k8s.io Jan 29 11:36:24.924279 containerd[1959]: time="2025-01-29T11:36:24.924237840Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 11:36:24.963528 containerd[1959]: time="2025-01-29T11:36:24.963461017Z" level=error msg="Failed to destroy network for sandbox \"9d450221c45f6f69345547468c935830e4952f13334ef1f093cfe321f14e64f9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:36:24.963699 containerd[1959]: time="2025-01-29T11:36:24.963685095Z" level=error msg="encountered an error cleaning up failed sandbox \"9d450221c45f6f69345547468c935830e4952f13334ef1f093cfe321f14e64f9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:36:24.963735 containerd[1959]: time="2025-01-29T11:36:24.963725161Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-66755f7995-4p25x,Uid:5acf5fde-1589-42fe-98f2-79caa06a6364,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9d450221c45f6f69345547468c935830e4952f13334ef1f093cfe321f14e64f9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:36:24.963904 kubelet[3444]: E0129 11:36:24.963874 3444 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9d450221c45f6f69345547468c935830e4952f13334ef1f093cfe321f14e64f9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:36:24.963964 kubelet[3444]: E0129 11:36:24.963928 3444 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9d450221c45f6f69345547468c935830e4952f13334ef1f093cfe321f14e64f9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-66755f7995-4p25x" Jan 29 11:36:24.963964 kubelet[3444]: E0129 11:36:24.963948 3444 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"9d450221c45f6f69345547468c935830e4952f13334ef1f093cfe321f14e64f9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-66755f7995-4p25x" Jan 29 11:36:24.964017 kubelet[3444]: E0129 11:36:24.963988 3444 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-66755f7995-4p25x_calico-apiserver(5acf5fde-1589-42fe-98f2-79caa06a6364)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-66755f7995-4p25x_calico-apiserver(5acf5fde-1589-42fe-98f2-79caa06a6364)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9d450221c45f6f69345547468c935830e4952f13334ef1f093cfe321f14e64f9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-66755f7995-4p25x" podUID="5acf5fde-1589-42fe-98f2-79caa06a6364" Jan 29 11:36:24.964058 containerd[1959]: time="2025-01-29T11:36:24.963962969Z" level=error msg="Failed to destroy network for sandbox \"5658ebe767dc0e48483b50ea60a975db28a3de9d6da7c647b94ccff706b8fc91\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:36:24.964086 containerd[1959]: time="2025-01-29T11:36:24.964062010Z" level=error msg="Failed to destroy network for sandbox \"310114bfd28ce07f86219fef0fb39f8118f5d712562004daea20b9c90f2a69d8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:36:24.964138 containerd[1959]: time="2025-01-29T11:36:24.964113375Z" level=error msg="Failed to destroy network for sandbox \"7a18cc63233347cd3f85e3536bd88ae50f2b1fc4b307d9e1526799d3ef0c643b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:36:24.964211 containerd[1959]: time="2025-01-29T11:36:24.964194360Z" level=error msg="encountered an error cleaning up failed sandbox \"5658ebe767dc0e48483b50ea60a975db28a3de9d6da7c647b94ccff706b8fc91\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:36:24.964247 containerd[1959]: time="2025-01-29T11:36:24.964211223Z" level=error msg="encountered an error cleaning up failed sandbox \"310114bfd28ce07f86219fef0fb39f8118f5d712562004daea20b9c90f2a69d8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:36:24.964247 containerd[1959]: time="2025-01-29T11:36:24.964226724Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5849fd7c66-k8fpk,Uid:db110898-7bf6-46df-89f1-60bff5c819f3,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox 
\"5658ebe767dc0e48483b50ea60a975db28a3de9d6da7c647b94ccff706b8fc91\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:36:24.964247 containerd[1959]: time="2025-01-29T11:36:24.964237753Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-jgtzn,Uid:fa522645-0537-44d1-a6b0-ea427a3e77da,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"310114bfd28ce07f86219fef0fb39f8118f5d712562004daea20b9c90f2a69d8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:36:24.964357 containerd[1959]: time="2025-01-29T11:36:24.964311293Z" level=error msg="encountered an error cleaning up failed sandbox \"7a18cc63233347cd3f85e3536bd88ae50f2b1fc4b307d9e1526799d3ef0c643b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:36:24.964357 containerd[1959]: time="2025-01-29T11:36:24.964336717Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-66755f7995-nr82b,Uid:bd3d55b5-a627-418c-9156-9c5f11420d4d,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7a18cc63233347cd3f85e3536bd88ae50f2b1fc4b307d9e1526799d3ef0c643b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:36:24.964436 kubelet[3444]: E0129 11:36:24.964304 3444 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"310114bfd28ce07f86219fef0fb39f8118f5d712562004daea20b9c90f2a69d8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:36:24.964436 kubelet[3444]: E0129 11:36:24.964324 3444 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"310114bfd28ce07f86219fef0fb39f8118f5d712562004daea20b9c90f2a69d8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-jgtzn" Jan 29 11:36:24.964436 kubelet[3444]: E0129 11:36:24.964334 3444 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"310114bfd28ce07f86219fef0fb39f8118f5d712562004daea20b9c90f2a69d8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-jgtzn" Jan 29 11:36:24.964524 containerd[1959]: time="2025-01-29T11:36:24.964366560Z" level=error msg="Failed to destroy network for sandbox \"886652190920c7c881375256c43dee442e3425539772965d5d5bada2eddca37f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:36:24.964524 containerd[1959]: time="2025-01-29T11:36:24.964508991Z" level=error msg="encountered an error cleaning up failed sandbox \"886652190920c7c881375256c43dee442e3425539772965d5d5bada2eddca37f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:36:24.964583 kubelet[3444]: E0129 11:36:24.964353 3444 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-jgtzn_kube-system(fa522645-0537-44d1-a6b0-ea427a3e77da)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-jgtzn_kube-system(fa522645-0537-44d1-a6b0-ea427a3e77da)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"310114bfd28ce07f86219fef0fb39f8118f5d712562004daea20b9c90f2a69d8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-jgtzn" podUID="fa522645-0537-44d1-a6b0-ea427a3e77da" Jan 29 11:36:24.964583 kubelet[3444]: E0129 11:36:24.964329 3444 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5658ebe767dc0e48483b50ea60a975db28a3de9d6da7c647b94ccff706b8fc91\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:36:24.964583 kubelet[3444]: E0129 11:36:24.964375 3444 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5658ebe767dc0e48483b50ea60a975db28a3de9d6da7c647b94ccff706b8fc91\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5849fd7c66-k8fpk" Jan 29 11:36:24.964688 containerd[1959]: time="2025-01-29T11:36:24.964530318Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-bs4sr,Uid:c7ba5fb2-2386-4664-9fd0-5429e827f425,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"886652190920c7c881375256c43dee442e3425539772965d5d5bada2eddca37f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:36:24.964717 kubelet[3444]: E0129 11:36:24.964385 3444 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5658ebe767dc0e48483b50ea60a975db28a3de9d6da7c647b94ccff706b8fc91\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5849fd7c66-k8fpk" Jan 29 11:36:24.964717 kubelet[3444]: E0129 11:36:24.964394 3444 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"7a18cc63233347cd3f85e3536bd88ae50f2b1fc4b307d9e1526799d3ef0c643b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:36:24.964717 kubelet[3444]: E0129 11:36:24.964416 3444 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7a18cc63233347cd3f85e3536bd88ae50f2b1fc4b307d9e1526799d3ef0c643b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-66755f7995-nr82b" Jan 29 11:36:24.964717 kubelet[3444]: E0129 11:36:24.964427 3444 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7a18cc63233347cd3f85e3536bd88ae50f2b1fc4b307d9e1526799d3ef0c643b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-66755f7995-nr82b" Jan 29 11:36:24.964835 kubelet[3444]: E0129 11:36:24.964444 3444 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-66755f7995-nr82b_calico-apiserver(bd3d55b5-a627-418c-9156-9c5f11420d4d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-66755f7995-nr82b_calico-apiserver(bd3d55b5-a627-418c-9156-9c5f11420d4d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7a18cc63233347cd3f85e3536bd88ae50f2b1fc4b307d9e1526799d3ef0c643b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-66755f7995-nr82b" podUID="bd3d55b5-a627-418c-9156-9c5f11420d4d" Jan 29 11:36:24.964835 kubelet[3444]: E0129 11:36:24.964398 3444 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5849fd7c66-k8fpk_calico-system(db110898-7bf6-46df-89f1-60bff5c819f3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5849fd7c66-k8fpk_calico-system(db110898-7bf6-46df-89f1-60bff5c819f3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5658ebe767dc0e48483b50ea60a975db28a3de9d6da7c647b94ccff706b8fc91\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5849fd7c66-k8fpk" podUID="db110898-7bf6-46df-89f1-60bff5c819f3" Jan 29 11:36:24.964835 kubelet[3444]: E0129 11:36:24.964585 3444 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"886652190920c7c881375256c43dee442e3425539772965d5d5bada2eddca37f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:36:24.964957 kubelet[3444]: E0129 11:36:24.964597 3444 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = 
Unknown desc = failed to setup network for sandbox \"886652190920c7c881375256c43dee442e3425539772965d5d5bada2eddca37f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-bs4sr" Jan 29 11:36:24.964957 kubelet[3444]: E0129 11:36:24.964606 3444 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"886652190920c7c881375256c43dee442e3425539772965d5d5bada2eddca37f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-bs4sr" Jan 29 11:36:24.964957 kubelet[3444]: E0129 11:36:24.964631 3444 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-bs4sr_kube-system(c7ba5fb2-2386-4664-9fd0-5429e827f425)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-bs4sr_kube-system(c7ba5fb2-2386-4664-9fd0-5429e827f425)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"886652190920c7c881375256c43dee442e3425539772965d5d5bada2eddca37f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-bs4sr" podUID="c7ba5fb2-2386-4664-9fd0-5429e827f425" Jan 29 11:36:24.965991 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9d450221c45f6f69345547468c935830e4952f13334ef1f093cfe321f14e64f9-shm.mount: Deactivated successfully. Jan 29 11:36:24.966105 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5658ebe767dc0e48483b50ea60a975db28a3de9d6da7c647b94ccff706b8fc91-shm.mount: Deactivated successfully. 
Jan 29 11:36:25.200441 containerd[1959]: time="2025-01-29T11:36:25.200343637Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4frhm,Uid:07bb4f50-b419-4106-95cb-874077564881,Namespace:calico-system,Attempt:0,}" Jan 29 11:36:25.228505 containerd[1959]: time="2025-01-29T11:36:25.228452630Z" level=error msg="Failed to destroy network for sandbox \"44973b2d7ba85c6b98b0cde9774715a152cc5701f4a8878b5abbc7fbe6f76a72\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:36:25.228668 containerd[1959]: time="2025-01-29T11:36:25.228624311Z" level=error msg="encountered an error cleaning up failed sandbox \"44973b2d7ba85c6b98b0cde9774715a152cc5701f4a8878b5abbc7fbe6f76a72\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:36:25.228668 containerd[1959]: time="2025-01-29T11:36:25.228658460Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4frhm,Uid:07bb4f50-b419-4106-95cb-874077564881,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"44973b2d7ba85c6b98b0cde9774715a152cc5701f4a8878b5abbc7fbe6f76a72\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:36:25.228813 kubelet[3444]: E0129 11:36:25.228792 3444 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"44973b2d7ba85c6b98b0cde9774715a152cc5701f4a8878b5abbc7fbe6f76a72\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:36:25.228855 kubelet[3444]: E0129 11:36:25.228827 3444 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"44973b2d7ba85c6b98b0cde9774715a152cc5701f4a8878b5abbc7fbe6f76a72\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-4frhm" Jan 29 11:36:25.228855 kubelet[3444]: E0129 11:36:25.228840 3444 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"44973b2d7ba85c6b98b0cde9774715a152cc5701f4a8878b5abbc7fbe6f76a72\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-4frhm" Jan 29 11:36:25.228903 kubelet[3444]: E0129 11:36:25.228869 3444 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-4frhm_calico-system(07bb4f50-b419-4106-95cb-874077564881)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-4frhm_calico-system(07bb4f50-b419-4106-95cb-874077564881)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"44973b2d7ba85c6b98b0cde9774715a152cc5701f4a8878b5abbc7fbe6f76a72\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-4frhm" podUID="07bb4f50-b419-4106-95cb-874077564881" Jan 29 11:36:25.271069 kubelet[3444]: I0129 11:36:25.271039 3444 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7a18cc63233347cd3f85e3536bd88ae50f2b1fc4b307d9e1526799d3ef0c643b" Jan 29 11:36:25.271351 containerd[1959]: time="2025-01-29T11:36:25.271315902Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Jan 29 11:36:25.271690 containerd[1959]: time="2025-01-29T11:36:25.271675144Z" level=info msg="StopPodSandbox for \"7a18cc63233347cd3f85e3536bd88ae50f2b1fc4b307d9e1526799d3ef0c643b\"" Jan 29 11:36:25.271801 kubelet[3444]: I0129 11:36:25.271790 3444 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9d450221c45f6f69345547468c935830e4952f13334ef1f093cfe321f14e64f9" Jan 29 11:36:25.271870 containerd[1959]: time="2025-01-29T11:36:25.271857744Z" level=info msg="Ensure that sandbox 7a18cc63233347cd3f85e3536bd88ae50f2b1fc4b307d9e1526799d3ef0c643b in task-service has been cleanup successfully" Jan 29 11:36:25.271989 containerd[1959]: time="2025-01-29T11:36:25.271974687Z" level=info msg="TearDown network for sandbox \"7a18cc63233347cd3f85e3536bd88ae50f2b1fc4b307d9e1526799d3ef0c643b\" successfully" Jan 29 11:36:25.272020 containerd[1959]: time="2025-01-29T11:36:25.271987982Z" level=info msg="StopPodSandbox for \"7a18cc63233347cd3f85e3536bd88ae50f2b1fc4b307d9e1526799d3ef0c643b\" returns successfully" Jan 29 11:36:25.272045 containerd[1959]: time="2025-01-29T11:36:25.272032088Z" level=info msg="StopPodSandbox for \"9d450221c45f6f69345547468c935830e4952f13334ef1f093cfe321f14e64f9\"" Jan 29 11:36:25.272160 containerd[1959]: time="2025-01-29T11:36:25.272149085Z" level=info msg="Ensure that sandbox 9d450221c45f6f69345547468c935830e4952f13334ef1f093cfe321f14e64f9 in task-service has been cleanup successfully" Jan 29 11:36:25.272243 containerd[1959]: time="2025-01-29T11:36:25.272189773Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-66755f7995-nr82b,Uid:bd3d55b5-a627-418c-9156-9c5f11420d4d,Namespace:calico-apiserver,Attempt:1,}" Jan 29 11:36:25.272265 kubelet[3444]: I0129 11:36:25.272163 3444 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="310114bfd28ce07f86219fef0fb39f8118f5d712562004daea20b9c90f2a69d8" Jan 29 11:36:25.272287 containerd[1959]: time="2025-01-29T11:36:25.272240432Z" level=info msg="TearDown network for sandbox \"9d450221c45f6f69345547468c935830e4952f13334ef1f093cfe321f14e64f9\" successfully" Jan 29 11:36:25.272287 containerd[1959]: time="2025-01-29T11:36:25.272248496Z" level=info msg="StopPodSandbox for \"9d450221c45f6f69345547468c935830e4952f13334ef1f093cfe321f14e64f9\" returns successfully" Jan 29 11:36:25.272436 containerd[1959]: time="2025-01-29T11:36:25.272426668Z" level=info msg="StopPodSandbox for \"310114bfd28ce07f86219fef0fb39f8118f5d712562004daea20b9c90f2a69d8\"" Jan 29 11:36:25.272459 containerd[1959]: time="2025-01-29T11:36:25.272435379Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-66755f7995-4p25x,Uid:5acf5fde-1589-42fe-98f2-79caa06a6364,Namespace:calico-apiserver,Attempt:1,}" Jan 29 11:36:25.272558 containerd[1959]: time="2025-01-29T11:36:25.272548697Z" 
level=info msg="Ensure that sandbox 310114bfd28ce07f86219fef0fb39f8118f5d712562004daea20b9c90f2a69d8 in task-service has been cleanup successfully" Jan 29 11:36:25.272628 containerd[1959]: time="2025-01-29T11:36:25.272618336Z" level=info msg="TearDown network for sandbox \"310114bfd28ce07f86219fef0fb39f8118f5d712562004daea20b9c90f2a69d8\" successfully" Jan 29 11:36:25.272653 containerd[1959]: time="2025-01-29T11:36:25.272628412Z" level=info msg="StopPodSandbox for \"310114bfd28ce07f86219fef0fb39f8118f5d712562004daea20b9c90f2a69d8\" returns successfully" Jan 29 11:36:25.272674 kubelet[3444]: I0129 11:36:25.272658 3444 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="886652190920c7c881375256c43dee442e3425539772965d5d5bada2eddca37f" Jan 29 11:36:25.272816 containerd[1959]: time="2025-01-29T11:36:25.272805415Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-jgtzn,Uid:fa522645-0537-44d1-a6b0-ea427a3e77da,Namespace:kube-system,Attempt:1,}" Jan 29 11:36:25.272911 containerd[1959]: time="2025-01-29T11:36:25.272899267Z" level=info msg="StopPodSandbox for \"886652190920c7c881375256c43dee442e3425539772965d5d5bada2eddca37f\"" Jan 29 11:36:25.272996 containerd[1959]: time="2025-01-29T11:36:25.272987250Z" level=info msg="Ensure that sandbox 886652190920c7c881375256c43dee442e3425539772965d5d5bada2eddca37f in task-service has been cleanup successfully" Jan 29 11:36:25.273051 kubelet[3444]: I0129 11:36:25.273042 3444 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="44973b2d7ba85c6b98b0cde9774715a152cc5701f4a8878b5abbc7fbe6f76a72" Jan 29 11:36:25.273299 containerd[1959]: time="2025-01-29T11:36:25.273256308Z" level=info msg="TearDown network for sandbox \"886652190920c7c881375256c43dee442e3425539772965d5d5bada2eddca37f\" successfully" Jan 29 11:36:25.273345 containerd[1959]: time="2025-01-29T11:36:25.273331600Z" level=info msg="StopPodSandbox for \"886652190920c7c881375256c43dee442e3425539772965d5d5bada2eddca37f\" returns successfully" Jan 29 11:36:25.273382 containerd[1959]: time="2025-01-29T11:36:25.273313226Z" level=info msg="StopPodSandbox for \"44973b2d7ba85c6b98b0cde9774715a152cc5701f4a8878b5abbc7fbe6f76a72\"" Jan 29 11:36:25.273627 containerd[1959]: time="2025-01-29T11:36:25.273608030Z" level=info msg="Ensure that sandbox 44973b2d7ba85c6b98b0cde9774715a152cc5701f4a8878b5abbc7fbe6f76a72 in task-service has been cleanup successfully" Jan 29 11:36:25.273797 containerd[1959]: time="2025-01-29T11:36:25.273750502Z" level=info msg="TearDown network for sandbox \"44973b2d7ba85c6b98b0cde9774715a152cc5701f4a8878b5abbc7fbe6f76a72\" successfully" Jan 29 11:36:25.273797 containerd[1959]: time="2025-01-29T11:36:25.273767656Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-bs4sr,Uid:c7ba5fb2-2386-4664-9fd0-5429e827f425,Namespace:kube-system,Attempt:1,}" Jan 29 11:36:25.273858 containerd[1959]: time="2025-01-29T11:36:25.273770034Z" level=info msg="StopPodSandbox for \"44973b2d7ba85c6b98b0cde9774715a152cc5701f4a8878b5abbc7fbe6f76a72\" returns successfully" Jan 29 11:36:25.273998 kubelet[3444]: I0129 11:36:25.273986 3444 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5658ebe767dc0e48483b50ea60a975db28a3de9d6da7c647b94ccff706b8fc91" Jan 29 11:36:25.274099 containerd[1959]: time="2025-01-29T11:36:25.274081687Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:csi-node-driver-4frhm,Uid:07bb4f50-b419-4106-95cb-874077564881,Namespace:calico-system,Attempt:1,}" Jan 29 11:36:25.274460 containerd[1959]: time="2025-01-29T11:36:25.274448405Z" level=info msg="StopPodSandbox for \"5658ebe767dc0e48483b50ea60a975db28a3de9d6da7c647b94ccff706b8fc91\"" Jan 29 11:36:25.274570 containerd[1959]: time="2025-01-29T11:36:25.274558330Z" level=info msg="Ensure that sandbox 5658ebe767dc0e48483b50ea60a975db28a3de9d6da7c647b94ccff706b8fc91 in task-service has been cleanup successfully" Jan 29 11:36:25.274666 containerd[1959]: time="2025-01-29T11:36:25.274653332Z" level=info msg="TearDown network for sandbox \"5658ebe767dc0e48483b50ea60a975db28a3de9d6da7c647b94ccff706b8fc91\" successfully" Jan 29 11:36:25.274666 containerd[1959]: time="2025-01-29T11:36:25.274664898Z" level=info msg="StopPodSandbox for \"5658ebe767dc0e48483b50ea60a975db28a3de9d6da7c647b94ccff706b8fc91\" returns successfully" Jan 29 11:36:25.274843 containerd[1959]: time="2025-01-29T11:36:25.274831783Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5849fd7c66-k8fpk,Uid:db110898-7bf6-46df-89f1-60bff5c819f3,Namespace:calico-system,Attempt:1,}" Jan 29 11:36:25.312302 containerd[1959]: time="2025-01-29T11:36:25.312256453Z" level=error msg="Failed to destroy network for sandbox \"adda9a6df16f774f10fd781a7c136d6f6a86ebfc037c4f41f7c02be2ae6b4e09\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:36:25.312557 containerd[1959]: time="2025-01-29T11:36:25.312536173Z" level=error msg="encountered an error cleaning up failed sandbox \"adda9a6df16f774f10fd781a7c136d6f6a86ebfc037c4f41f7c02be2ae6b4e09\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:36:25.312605 containerd[1959]: time="2025-01-29T11:36:25.312587407Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-66755f7995-nr82b,Uid:bd3d55b5-a627-418c-9156-9c5f11420d4d,Namespace:calico-apiserver,Attempt:1,} failed, error" error="failed to setup network for sandbox \"adda9a6df16f774f10fd781a7c136d6f6a86ebfc037c4f41f7c02be2ae6b4e09\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:36:25.312762 kubelet[3444]: E0129 11:36:25.312743 3444 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"adda9a6df16f774f10fd781a7c136d6f6a86ebfc037c4f41f7c02be2ae6b4e09\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:36:25.312963 kubelet[3444]: E0129 11:36:25.312781 3444 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"adda9a6df16f774f10fd781a7c136d6f6a86ebfc037c4f41f7c02be2ae6b4e09\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-66755f7995-nr82b" 
Jan 29 11:36:25.312963 kubelet[3444]: E0129 11:36:25.312795 3444 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"adda9a6df16f774f10fd781a7c136d6f6a86ebfc037c4f41f7c02be2ae6b4e09\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-66755f7995-nr82b" Jan 29 11:36:25.312963 kubelet[3444]: E0129 11:36:25.312823 3444 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-66755f7995-nr82b_calico-apiserver(bd3d55b5-a627-418c-9156-9c5f11420d4d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-66755f7995-nr82b_calico-apiserver(bd3d55b5-a627-418c-9156-9c5f11420d4d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"adda9a6df16f774f10fd781a7c136d6f6a86ebfc037c4f41f7c02be2ae6b4e09\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-66755f7995-nr82b" podUID="bd3d55b5-a627-418c-9156-9c5f11420d4d" Jan 29 11:36:25.314231 containerd[1959]: time="2025-01-29T11:36:25.314130559Z" level=error msg="Failed to destroy network for sandbox \"e5569f59dd8a32613d2cc67e5cdcfac99ad5cfdd0032464c90a9128a9e3de0ff\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:36:25.314452 containerd[1959]: time="2025-01-29T11:36:25.314434550Z" level=error msg="encountered an error cleaning up failed sandbox \"e5569f59dd8a32613d2cc67e5cdcfac99ad5cfdd0032464c90a9128a9e3de0ff\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:36:25.314527 containerd[1959]: time="2025-01-29T11:36:25.314475054Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-jgtzn,Uid:fa522645-0537-44d1-a6b0-ea427a3e77da,Namespace:kube-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"e5569f59dd8a32613d2cc67e5cdcfac99ad5cfdd0032464c90a9128a9e3de0ff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:36:25.314627 kubelet[3444]: E0129 11:36:25.314609 3444 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e5569f59dd8a32613d2cc67e5cdcfac99ad5cfdd0032464c90a9128a9e3de0ff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:36:25.314672 kubelet[3444]: E0129 11:36:25.314643 3444 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e5569f59dd8a32613d2cc67e5cdcfac99ad5cfdd0032464c90a9128a9e3de0ff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that 
the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-jgtzn" Jan 29 11:36:25.314672 kubelet[3444]: E0129 11:36:25.314659 3444 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e5569f59dd8a32613d2cc67e5cdcfac99ad5cfdd0032464c90a9128a9e3de0ff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-jgtzn" Jan 29 11:36:25.314742 kubelet[3444]: E0129 11:36:25.314686 3444 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-jgtzn_kube-system(fa522645-0537-44d1-a6b0-ea427a3e77da)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-jgtzn_kube-system(fa522645-0537-44d1-a6b0-ea427a3e77da)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e5569f59dd8a32613d2cc67e5cdcfac99ad5cfdd0032464c90a9128a9e3de0ff\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-jgtzn" podUID="fa522645-0537-44d1-a6b0-ea427a3e77da" Jan 29 11:36:25.314813 containerd[1959]: time="2025-01-29T11:36:25.314767210Z" level=error msg="Failed to destroy network for sandbox \"a6554e61854b09ba0e2d8e8e67019bff062e0bde1f2529058a5beaa146335098\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:36:25.314922 containerd[1959]: time="2025-01-29T11:36:25.314909098Z" level=error msg="encountered an error cleaning up failed sandbox \"a6554e61854b09ba0e2d8e8e67019bff062e0bde1f2529058a5beaa146335098\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:36:25.314963 containerd[1959]: time="2025-01-29T11:36:25.314934217Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4frhm,Uid:07bb4f50-b419-4106-95cb-874077564881,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"a6554e61854b09ba0e2d8e8e67019bff062e0bde1f2529058a5beaa146335098\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:36:25.315026 kubelet[3444]: E0129 11:36:25.315008 3444 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a6554e61854b09ba0e2d8e8e67019bff062e0bde1f2529058a5beaa146335098\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:36:25.315067 kubelet[3444]: E0129 11:36:25.315037 3444 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a6554e61854b09ba0e2d8e8e67019bff062e0bde1f2529058a5beaa146335098\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-4frhm" Jan 29 11:36:25.315067 kubelet[3444]: E0129 11:36:25.315055 3444 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a6554e61854b09ba0e2d8e8e67019bff062e0bde1f2529058a5beaa146335098\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-4frhm" Jan 29 11:36:25.315129 kubelet[3444]: E0129 11:36:25.315081 3444 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-4frhm_calico-system(07bb4f50-b419-4106-95cb-874077564881)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-4frhm_calico-system(07bb4f50-b419-4106-95cb-874077564881)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a6554e61854b09ba0e2d8e8e67019bff062e0bde1f2529058a5beaa146335098\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-4frhm" podUID="07bb4f50-b419-4106-95cb-874077564881" Jan 29 11:36:25.315183 containerd[1959]: time="2025-01-29T11:36:25.315078430Z" level=error msg="Failed to destroy network for sandbox \"7286c56b2895590e903942a899de695b46f6a0502fcf7cbd19bf9c57d82de1d4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:36:25.315299 containerd[1959]: time="2025-01-29T11:36:25.315280820Z" level=error msg="encountered an error cleaning up failed sandbox \"7286c56b2895590e903942a899de695b46f6a0502fcf7cbd19bf9c57d82de1d4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:36:25.315343 containerd[1959]: time="2025-01-29T11:36:25.315309788Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-66755f7995-4p25x,Uid:5acf5fde-1589-42fe-98f2-79caa06a6364,Namespace:calico-apiserver,Attempt:1,} failed, error" error="failed to setup network for sandbox \"7286c56b2895590e903942a899de695b46f6a0502fcf7cbd19bf9c57d82de1d4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:36:25.315390 kubelet[3444]: E0129 11:36:25.315376 3444 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7286c56b2895590e903942a899de695b46f6a0502fcf7cbd19bf9c57d82de1d4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:36:25.315419 kubelet[3444]: E0129 11:36:25.315398 3444 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"7286c56b2895590e903942a899de695b46f6a0502fcf7cbd19bf9c57d82de1d4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-66755f7995-4p25x" Jan 29 11:36:25.315419 kubelet[3444]: E0129 11:36:25.315411 3444 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7286c56b2895590e903942a899de695b46f6a0502fcf7cbd19bf9c57d82de1d4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-66755f7995-4p25x" Jan 29 11:36:25.315461 kubelet[3444]: E0129 11:36:25.315429 3444 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-66755f7995-4p25x_calico-apiserver(5acf5fde-1589-42fe-98f2-79caa06a6364)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-66755f7995-4p25x_calico-apiserver(5acf5fde-1589-42fe-98f2-79caa06a6364)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7286c56b2895590e903942a899de695b46f6a0502fcf7cbd19bf9c57d82de1d4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-66755f7995-4p25x" podUID="5acf5fde-1589-42fe-98f2-79caa06a6364" Jan 29 11:36:25.315739 containerd[1959]: time="2025-01-29T11:36:25.315724363Z" level=error msg="Failed to destroy network for sandbox \"c665f6707dda0cf1824f155296f77f721f7d636a6e852f658be95b4f6c6ea946\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:36:25.315869 containerd[1959]: time="2025-01-29T11:36:25.315858786Z" level=error msg="encountered an error cleaning up failed sandbox \"c665f6707dda0cf1824f155296f77f721f7d636a6e852f658be95b4f6c6ea946\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:36:25.315898 containerd[1959]: time="2025-01-29T11:36:25.315879689Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-bs4sr,Uid:c7ba5fb2-2386-4664-9fd0-5429e827f425,Namespace:kube-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"c665f6707dda0cf1824f155296f77f721f7d636a6e852f658be95b4f6c6ea946\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:36:25.315958 kubelet[3444]: E0129 11:36:25.315945 3444 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c665f6707dda0cf1824f155296f77f721f7d636a6e852f658be95b4f6c6ea946\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:36:25.316001 kubelet[3444]: E0129 11:36:25.315968 3444 
kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c665f6707dda0cf1824f155296f77f721f7d636a6e852f658be95b4f6c6ea946\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-bs4sr" Jan 29 11:36:25.316001 kubelet[3444]: E0129 11:36:25.315984 3444 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c665f6707dda0cf1824f155296f77f721f7d636a6e852f658be95b4f6c6ea946\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-bs4sr" Jan 29 11:36:25.316054 kubelet[3444]: E0129 11:36:25.316012 3444 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-bs4sr_kube-system(c7ba5fb2-2386-4664-9fd0-5429e827f425)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-bs4sr_kube-system(c7ba5fb2-2386-4664-9fd0-5429e827f425)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c665f6707dda0cf1824f155296f77f721f7d636a6e852f658be95b4f6c6ea946\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-bs4sr" podUID="c7ba5fb2-2386-4664-9fd0-5429e827f425" Jan 29 11:36:25.316773 containerd[1959]: time="2025-01-29T11:36:25.316760903Z" level=error msg="Failed to destroy network for sandbox \"5694e7b381134602609bc7cbfae0f985c9cc667777f52a910a340fbc95a1bd1e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:36:25.316893 containerd[1959]: time="2025-01-29T11:36:25.316881576Z" level=error msg="encountered an error cleaning up failed sandbox \"5694e7b381134602609bc7cbfae0f985c9cc667777f52a910a340fbc95a1bd1e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:36:25.316917 containerd[1959]: time="2025-01-29T11:36:25.316902520Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5849fd7c66-k8fpk,Uid:db110898-7bf6-46df-89f1-60bff5c819f3,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"5694e7b381134602609bc7cbfae0f985c9cc667777f52a910a340fbc95a1bd1e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:36:25.316974 kubelet[3444]: E0129 11:36:25.316963 3444 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5694e7b381134602609bc7cbfae0f985c9cc667777f52a910a340fbc95a1bd1e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Jan 29 11:36:25.316995 kubelet[3444]: E0129 11:36:25.316982 3444 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5694e7b381134602609bc7cbfae0f985c9cc667777f52a910a340fbc95a1bd1e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5849fd7c66-k8fpk" Jan 29 11:36:25.316995 kubelet[3444]: E0129 11:36:25.316991 3444 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5694e7b381134602609bc7cbfae0f985c9cc667777f52a910a340fbc95a1bd1e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5849fd7c66-k8fpk" Jan 29 11:36:25.317035 kubelet[3444]: E0129 11:36:25.317007 3444 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5849fd7c66-k8fpk_calico-system(db110898-7bf6-46df-89f1-60bff5c819f3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5849fd7c66-k8fpk_calico-system(db110898-7bf6-46df-89f1-60bff5c819f3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5694e7b381134602609bc7cbfae0f985c9cc667777f52a910a340fbc95a1bd1e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5849fd7c66-k8fpk" podUID="db110898-7bf6-46df-89f1-60bff5c819f3" Jan 29 11:36:25.643144 systemd[1]: run-netns-cni\x2d6fc8008c\x2d0fd3\x2d5815\x2dee7c\x2dd28b009c2af0.mount: Deactivated successfully. Jan 29 11:36:25.643247 systemd[1]: run-netns-cni\x2d92c87c68\x2d84d1\x2d3fcb\x2d659a\x2d56809b0b2ca1.mount: Deactivated successfully. Jan 29 11:36:25.643362 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7a18cc63233347cd3f85e3536bd88ae50f2b1fc4b307d9e1526799d3ef0c643b-shm.mount: Deactivated successfully. Jan 29 11:36:25.643443 systemd[1]: run-netns-cni\x2d22f5bf42\x2d05bc\x2dcc07\x2deb1f\x2d5b105b4a6810.mount: Deactivated successfully. Jan 29 11:36:25.643522 systemd[1]: run-netns-cni\x2dbb0e93e0\x2da840\x2d7322\x2d615d\x2dd3262101c299.mount: Deactivated successfully. Jan 29 11:36:25.643600 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-886652190920c7c881375256c43dee442e3425539772965d5d5bada2eddca37f-shm.mount: Deactivated successfully. Jan 29 11:36:25.643682 systemd[1]: run-netns-cni\x2d106b4bca\x2da9db\x2d66e3\x2d28e5\x2ddd4becc98c24.mount: Deactivated successfully. Jan 29 11:36:25.643757 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-310114bfd28ce07f86219fef0fb39f8118f5d712562004daea20b9c90f2a69d8-shm.mount: Deactivated successfully. 
Jan 29 11:36:26.278982 kubelet[3444]: I0129 11:36:26.278911 3444 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5694e7b381134602609bc7cbfae0f985c9cc667777f52a910a340fbc95a1bd1e" Jan 29 11:36:26.280069 containerd[1959]: time="2025-01-29T11:36:26.279974968Z" level=info msg="StopPodSandbox for \"5694e7b381134602609bc7cbfae0f985c9cc667777f52a910a340fbc95a1bd1e\"" Jan 29 11:36:26.281208 containerd[1959]: time="2025-01-29T11:36:26.280560493Z" level=info msg="Ensure that sandbox 5694e7b381134602609bc7cbfae0f985c9cc667777f52a910a340fbc95a1bd1e in task-service has been cleanup successfully" Jan 29 11:36:26.281208 containerd[1959]: time="2025-01-29T11:36:26.281076428Z" level=info msg="TearDown network for sandbox \"5694e7b381134602609bc7cbfae0f985c9cc667777f52a910a340fbc95a1bd1e\" successfully" Jan 29 11:36:26.281208 containerd[1959]: time="2025-01-29T11:36:26.281141953Z" level=info msg="StopPodSandbox for \"5694e7b381134602609bc7cbfae0f985c9cc667777f52a910a340fbc95a1bd1e\" returns successfully" Jan 29 11:36:26.281777 kubelet[3444]: I0129 11:36:26.281704 3444 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c665f6707dda0cf1824f155296f77f721f7d636a6e852f658be95b4f6c6ea946" Jan 29 11:36:26.282052 containerd[1959]: time="2025-01-29T11:36:26.281976146Z" level=info msg="StopPodSandbox for \"5658ebe767dc0e48483b50ea60a975db28a3de9d6da7c647b94ccff706b8fc91\"" Jan 29 11:36:26.282361 containerd[1959]: time="2025-01-29T11:36:26.282274513Z" level=info msg="TearDown network for sandbox \"5658ebe767dc0e48483b50ea60a975db28a3de9d6da7c647b94ccff706b8fc91\" successfully" Jan 29 11:36:26.282548 containerd[1959]: time="2025-01-29T11:36:26.282358097Z" level=info msg="StopPodSandbox for \"5658ebe767dc0e48483b50ea60a975db28a3de9d6da7c647b94ccff706b8fc91\" returns successfully" Jan 29 11:36:26.283054 containerd[1959]: time="2025-01-29T11:36:26.282984974Z" level=info msg="StopPodSandbox for \"c665f6707dda0cf1824f155296f77f721f7d636a6e852f658be95b4f6c6ea946\"" Jan 29 11:36:26.283532 containerd[1959]: time="2025-01-29T11:36:26.283472528Z" level=info msg="Ensure that sandbox c665f6707dda0cf1824f155296f77f721f7d636a6e852f658be95b4f6c6ea946 in task-service has been cleanup successfully" Jan 29 11:36:26.283696 containerd[1959]: time="2025-01-29T11:36:26.283515867Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5849fd7c66-k8fpk,Uid:db110898-7bf6-46df-89f1-60bff5c819f3,Namespace:calico-system,Attempt:2,}" Jan 29 11:36:26.283945 containerd[1959]: time="2025-01-29T11:36:26.283888731Z" level=info msg="TearDown network for sandbox \"c665f6707dda0cf1824f155296f77f721f7d636a6e852f658be95b4f6c6ea946\" successfully" Jan 29 11:36:26.283945 containerd[1959]: time="2025-01-29T11:36:26.283932995Z" level=info msg="StopPodSandbox for \"c665f6707dda0cf1824f155296f77f721f7d636a6e852f658be95b4f6c6ea946\" returns successfully" Jan 29 11:36:26.284635 containerd[1959]: time="2025-01-29T11:36:26.284526994Z" level=info msg="StopPodSandbox for \"886652190920c7c881375256c43dee442e3425539772965d5d5bada2eddca37f\"" Jan 29 11:36:26.284777 containerd[1959]: time="2025-01-29T11:36:26.284721904Z" level=info msg="TearDown network for sandbox \"886652190920c7c881375256c43dee442e3425539772965d5d5bada2eddca37f\" successfully" Jan 29 11:36:26.284777 containerd[1959]: time="2025-01-29T11:36:26.284773662Z" level=info msg="StopPodSandbox for \"886652190920c7c881375256c43dee442e3425539772965d5d5bada2eddca37f\" returns successfully" Jan 29 11:36:26.284866 
kubelet[3444]: I0129 11:36:26.284852 3444 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="adda9a6df16f774f10fd781a7c136d6f6a86ebfc037c4f41f7c02be2ae6b4e09" Jan 29 11:36:26.285002 containerd[1959]: time="2025-01-29T11:36:26.284985017Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-bs4sr,Uid:c7ba5fb2-2386-4664-9fd0-5429e827f425,Namespace:kube-system,Attempt:2,}" Jan 29 11:36:26.285077 containerd[1959]: time="2025-01-29T11:36:26.285065501Z" level=info msg="StopPodSandbox for \"adda9a6df16f774f10fd781a7c136d6f6a86ebfc037c4f41f7c02be2ae6b4e09\"" Jan 29 11:36:26.285186 containerd[1959]: time="2025-01-29T11:36:26.285176965Z" level=info msg="Ensure that sandbox adda9a6df16f774f10fd781a7c136d6f6a86ebfc037c4f41f7c02be2ae6b4e09 in task-service has been cleanup successfully" Jan 29 11:36:26.285269 containerd[1959]: time="2025-01-29T11:36:26.285259681Z" level=info msg="TearDown network for sandbox \"adda9a6df16f774f10fd781a7c136d6f6a86ebfc037c4f41f7c02be2ae6b4e09\" successfully" Jan 29 11:36:26.285309 containerd[1959]: time="2025-01-29T11:36:26.285269201Z" level=info msg="StopPodSandbox for \"adda9a6df16f774f10fd781a7c136d6f6a86ebfc037c4f41f7c02be2ae6b4e09\" returns successfully" Jan 29 11:36:26.285385 containerd[1959]: time="2025-01-29T11:36:26.285373282Z" level=info msg="StopPodSandbox for \"7a18cc63233347cd3f85e3536bd88ae50f2b1fc4b307d9e1526799d3ef0c643b\"" Jan 29 11:36:26.285438 containerd[1959]: time="2025-01-29T11:36:26.285417172Z" level=info msg="TearDown network for sandbox \"7a18cc63233347cd3f85e3536bd88ae50f2b1fc4b307d9e1526799d3ef0c643b\" successfully" Jan 29 11:36:26.285462 containerd[1959]: time="2025-01-29T11:36:26.285438744Z" level=info msg="StopPodSandbox for \"7a18cc63233347cd3f85e3536bd88ae50f2b1fc4b307d9e1526799d3ef0c643b\" returns successfully" Jan 29 11:36:26.285487 kubelet[3444]: I0129 11:36:26.285439 3444 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7286c56b2895590e903942a899de695b46f6a0502fcf7cbd19bf9c57d82de1d4" Jan 29 11:36:26.285625 containerd[1959]: time="2025-01-29T11:36:26.285612305Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-66755f7995-nr82b,Uid:bd3d55b5-a627-418c-9156-9c5f11420d4d,Namespace:calico-apiserver,Attempt:2,}" Jan 29 11:36:26.285678 containerd[1959]: time="2025-01-29T11:36:26.285665051Z" level=info msg="StopPodSandbox for \"7286c56b2895590e903942a899de695b46f6a0502fcf7cbd19bf9c57d82de1d4\"" Jan 29 11:36:26.285785 containerd[1959]: time="2025-01-29T11:36:26.285774849Z" level=info msg="Ensure that sandbox 7286c56b2895590e903942a899de695b46f6a0502fcf7cbd19bf9c57d82de1d4 in task-service has been cleanup successfully" Jan 29 11:36:26.285862 containerd[1959]: time="2025-01-29T11:36:26.285851324Z" level=info msg="TearDown network for sandbox \"7286c56b2895590e903942a899de695b46f6a0502fcf7cbd19bf9c57d82de1d4\" successfully" Jan 29 11:36:26.285880 containerd[1959]: time="2025-01-29T11:36:26.285864029Z" level=info msg="StopPodSandbox for \"7286c56b2895590e903942a899de695b46f6a0502fcf7cbd19bf9c57d82de1d4\" returns successfully" Jan 29 11:36:26.285867 systemd[1]: run-netns-cni\x2d8cc2aafe\x2defde\x2d1d31\x2dc4c5\x2da7653fe8c90b.mount: Deactivated successfully. 
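The run-netns-cni\x2d...mount units that systemd reports as deactivated are the escaped spelling of the CNI network-namespace mounts: in a mount-unit name, the '/' separators of the mounted path become '-', and a literal '-' inside a path component is escaped as \x2d. The helper below is hypothetical (it is not part of any component logged here) and only shows how such a unit name maps back to its path.

// Illustrative helper: undo systemd's unit-name escaping as seen in the
// "run-netns-cni\x2d...mount: Deactivated successfully" entries above.
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// unescapeUnitPath converts a mount-unit name such as
// "run-netns-cni\x2d6fc8008c\x2d....mount" back into the filesystem path it
// guards, e.g. "/run/netns/cni-6fc8008c-...".
func unescapeUnitPath(unit string) (string, error) {
	name := strings.TrimSuffix(unit, ".mount")
	var b strings.Builder
	b.WriteByte('/')
	for i := 0; i < len(name); i++ {
		switch {
		case name[i] == '-':
			// An unescaped '-' separates path components.
			b.WriteByte('/')
		case name[i] == '\\' && i+3 < len(name) && name[i+1] == 'x':
			// "\xNN" is an escaped literal byte, e.g. \x2d for '-'.
			v, err := strconv.ParseUint(name[i+2:i+4], 16, 8)
			if err != nil {
				return "", fmt.Errorf("bad escape in %q: %w", unit, err)
			}
			b.WriteByte(byte(v))
			i += 3
		default:
			b.WriteByte(name[i])
		}
	}
	return b.String(), nil
}

func main() {
	p, err := unescapeUnitPath(`run-netns-cni\x2d6fc8008c\x2d0fd3\x2d5815\x2dee7c\x2dd28b009c2af0.mount`)
	if err != nil {
		panic(err)
	}
	fmt.Println(p) // /run/netns/cni-6fc8008c-0fd3-5815-ee7c-d28b009c2af0
}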
Jan 29 11:36:26.285973 kubelet[3444]: I0129 11:36:26.285954 3444 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e5569f59dd8a32613d2cc67e5cdcfac99ad5cfdd0032464c90a9128a9e3de0ff" Jan 29 11:36:26.286045 containerd[1959]: time="2025-01-29T11:36:26.286032875Z" level=info msg="StopPodSandbox for \"9d450221c45f6f69345547468c935830e4952f13334ef1f093cfe321f14e64f9\"" Jan 29 11:36:26.286090 containerd[1959]: time="2025-01-29T11:36:26.286081915Z" level=info msg="TearDown network for sandbox \"9d450221c45f6f69345547468c935830e4952f13334ef1f093cfe321f14e64f9\" successfully" Jan 29 11:36:26.286115 containerd[1959]: time="2025-01-29T11:36:26.286090863Z" level=info msg="StopPodSandbox for \"9d450221c45f6f69345547468c935830e4952f13334ef1f093cfe321f14e64f9\" returns successfully" Jan 29 11:36:26.286213 containerd[1959]: time="2025-01-29T11:36:26.286201500Z" level=info msg="StopPodSandbox for \"e5569f59dd8a32613d2cc67e5cdcfac99ad5cfdd0032464c90a9128a9e3de0ff\"" Jan 29 11:36:26.286287 containerd[1959]: time="2025-01-29T11:36:26.286273174Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-66755f7995-4p25x,Uid:5acf5fde-1589-42fe-98f2-79caa06a6364,Namespace:calico-apiserver,Attempt:2,}" Jan 29 11:36:26.286322 containerd[1959]: time="2025-01-29T11:36:26.286303396Z" level=info msg="Ensure that sandbox e5569f59dd8a32613d2cc67e5cdcfac99ad5cfdd0032464c90a9128a9e3de0ff in task-service has been cleanup successfully" Jan 29 11:36:26.286411 containerd[1959]: time="2025-01-29T11:36:26.286402043Z" level=info msg="TearDown network for sandbox \"e5569f59dd8a32613d2cc67e5cdcfac99ad5cfdd0032464c90a9128a9e3de0ff\" successfully" Jan 29 11:36:26.286437 containerd[1959]: time="2025-01-29T11:36:26.286410257Z" level=info msg="StopPodSandbox for \"e5569f59dd8a32613d2cc67e5cdcfac99ad5cfdd0032464c90a9128a9e3de0ff\" returns successfully" Jan 29 11:36:26.286513 kubelet[3444]: I0129 11:36:26.286504 3444 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a6554e61854b09ba0e2d8e8e67019bff062e0bde1f2529058a5beaa146335098" Jan 29 11:36:26.286544 containerd[1959]: time="2025-01-29T11:36:26.286514950Z" level=info msg="StopPodSandbox for \"310114bfd28ce07f86219fef0fb39f8118f5d712562004daea20b9c90f2a69d8\"" Jan 29 11:36:26.286569 containerd[1959]: time="2025-01-29T11:36:26.286555103Z" level=info msg="TearDown network for sandbox \"310114bfd28ce07f86219fef0fb39f8118f5d712562004daea20b9c90f2a69d8\" successfully" Jan 29 11:36:26.286569 containerd[1959]: time="2025-01-29T11:36:26.286563656Z" level=info msg="StopPodSandbox for \"310114bfd28ce07f86219fef0fb39f8118f5d712562004daea20b9c90f2a69d8\" returns successfully" Jan 29 11:36:26.286750 containerd[1959]: time="2025-01-29T11:36:26.286738343Z" level=info msg="StopPodSandbox for \"a6554e61854b09ba0e2d8e8e67019bff062e0bde1f2529058a5beaa146335098\"" Jan 29 11:36:26.286781 containerd[1959]: time="2025-01-29T11:36:26.286748837Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-jgtzn,Uid:fa522645-0537-44d1-a6b0-ea427a3e77da,Namespace:kube-system,Attempt:2,}" Jan 29 11:36:26.286873 containerd[1959]: time="2025-01-29T11:36:26.286861546Z" level=info msg="Ensure that sandbox a6554e61854b09ba0e2d8e8e67019bff062e0bde1f2529058a5beaa146335098 in task-service has been cleanup successfully" Jan 29 11:36:26.286958 containerd[1959]: time="2025-01-29T11:36:26.286948116Z" level=info msg="TearDown network for sandbox \"a6554e61854b09ba0e2d8e8e67019bff062e0bde1f2529058a5beaa146335098\" successfully" 
Jan 29 11:36:26.286980 containerd[1959]: time="2025-01-29T11:36:26.286959595Z" level=info msg="StopPodSandbox for \"a6554e61854b09ba0e2d8e8e67019bff062e0bde1f2529058a5beaa146335098\" returns successfully" Jan 29 11:36:26.287074 containerd[1959]: time="2025-01-29T11:36:26.287064522Z" level=info msg="StopPodSandbox for \"44973b2d7ba85c6b98b0cde9774715a152cc5701f4a8878b5abbc7fbe6f76a72\"" Jan 29 11:36:26.287107 containerd[1959]: time="2025-01-29T11:36:26.287100614Z" level=info msg="TearDown network for sandbox \"44973b2d7ba85c6b98b0cde9774715a152cc5701f4a8878b5abbc7fbe6f76a72\" successfully" Jan 29 11:36:26.287126 containerd[1959]: time="2025-01-29T11:36:26.287107051Z" level=info msg="StopPodSandbox for \"44973b2d7ba85c6b98b0cde9774715a152cc5701f4a8878b5abbc7fbe6f76a72\" returns successfully" Jan 29 11:36:26.287281 containerd[1959]: time="2025-01-29T11:36:26.287272196Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4frhm,Uid:07bb4f50-b419-4106-95cb-874077564881,Namespace:calico-system,Attempt:2,}" Jan 29 11:36:26.287996 systemd[1]: run-netns-cni\x2deecdb77f\x2d8d5b\x2dc8d7\x2d26ae\x2dca4b3c6ca32f.mount: Deactivated successfully. Jan 29 11:36:26.288074 systemd[1]: run-netns-cni\x2d8a5c54d8\x2d2a6f\x2d051b\x2dbf41\x2dab80883dfc63.mount: Deactivated successfully. Jan 29 11:36:26.288125 systemd[1]: run-netns-cni\x2dc06964a7\x2decd3\x2d56cd\x2de7ce\x2d973e2e9c5e48.mount: Deactivated successfully. Jan 29 11:36:26.288177 systemd[1]: run-netns-cni\x2d427a0482\x2d07c6\x2daba5\x2dd14d\x2dcd7af33525cd.mount: Deactivated successfully. Jan 29 11:36:26.290527 systemd[1]: run-netns-cni\x2dc0c66c62\x2dfacc\x2df706\x2dc284\x2d25c9f2a8e10a.mount: Deactivated successfully. Jan 29 11:36:26.343349 containerd[1959]: time="2025-01-29T11:36:26.343311667Z" level=error msg="Failed to destroy network for sandbox \"7ca2d386dead55069830e1077ce55338eac73a0ae5279b1def2cf5864f02f316\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:36:26.343629 containerd[1959]: time="2025-01-29T11:36:26.343607352Z" level=error msg="encountered an error cleaning up failed sandbox \"7ca2d386dead55069830e1077ce55338eac73a0ae5279b1def2cf5864f02f316\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:36:26.343705 containerd[1959]: time="2025-01-29T11:36:26.343657436Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5849fd7c66-k8fpk,Uid:db110898-7bf6-46df-89f1-60bff5c819f3,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"7ca2d386dead55069830e1077ce55338eac73a0ae5279b1def2cf5864f02f316\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:36:26.343871 kubelet[3444]: E0129 11:36:26.343812 3444 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7ca2d386dead55069830e1077ce55338eac73a0ae5279b1def2cf5864f02f316\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Jan 29 11:36:26.343871 kubelet[3444]: E0129 11:36:26.343852 3444 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7ca2d386dead55069830e1077ce55338eac73a0ae5279b1def2cf5864f02f316\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5849fd7c66-k8fpk" Jan 29 11:36:26.344180 kubelet[3444]: E0129 11:36:26.343868 3444 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7ca2d386dead55069830e1077ce55338eac73a0ae5279b1def2cf5864f02f316\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5849fd7c66-k8fpk" Jan 29 11:36:26.344180 kubelet[3444]: E0129 11:36:26.343901 3444 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5849fd7c66-k8fpk_calico-system(db110898-7bf6-46df-89f1-60bff5c819f3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5849fd7c66-k8fpk_calico-system(db110898-7bf6-46df-89f1-60bff5c819f3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7ca2d386dead55069830e1077ce55338eac73a0ae5279b1def2cf5864f02f316\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5849fd7c66-k8fpk" podUID="db110898-7bf6-46df-89f1-60bff5c819f3" Jan 29 11:36:26.346084 containerd[1959]: time="2025-01-29T11:36:26.346053214Z" level=error msg="Failed to destroy network for sandbox \"5d4369b81514ab52a3fedb3fd28dbefc0cc89cd73284216d2c5985dad6850440\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:36:26.346152 containerd[1959]: time="2025-01-29T11:36:26.346119553Z" level=error msg="Failed to destroy network for sandbox \"493096bffccb48aa6278c21e89442e141756465eb1ded0cac04cc378edbf4be3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:36:26.346228 containerd[1959]: time="2025-01-29T11:36:26.346212436Z" level=error msg="Failed to destroy network for sandbox \"50b2a311df9ed9cf36cfe9d6d7dda9c2d68e41a4a294af0d678e3afcff46730a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:36:26.346283 containerd[1959]: time="2025-01-29T11:36:26.346271055Z" level=error msg="encountered an error cleaning up failed sandbox \"5d4369b81514ab52a3fedb3fd28dbefc0cc89cd73284216d2c5985dad6850440\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:36:26.346317 containerd[1959]: 
time="2025-01-29T11:36:26.346283996Z" level=error msg="encountered an error cleaning up failed sandbox \"493096bffccb48aa6278c21e89442e141756465eb1ded0cac04cc378edbf4be3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:36:26.346337 containerd[1959]: time="2025-01-29T11:36:26.346317159Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-bs4sr,Uid:c7ba5fb2-2386-4664-9fd0-5429e827f425,Namespace:kube-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"5d4369b81514ab52a3fedb3fd28dbefc0cc89cd73284216d2c5985dad6850440\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:36:26.346367 containerd[1959]: time="2025-01-29T11:36:26.346320764Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-66755f7995-nr82b,Uid:bd3d55b5-a627-418c-9156-9c5f11420d4d,Namespace:calico-apiserver,Attempt:2,} failed, error" error="failed to setup network for sandbox \"493096bffccb48aa6278c21e89442e141756465eb1ded0cac04cc378edbf4be3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:36:26.346408 containerd[1959]: time="2025-01-29T11:36:26.346364810Z" level=error msg="encountered an error cleaning up failed sandbox \"50b2a311df9ed9cf36cfe9d6d7dda9c2d68e41a4a294af0d678e3afcff46730a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:36:26.346408 containerd[1959]: time="2025-01-29T11:36:26.346391780Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4frhm,Uid:07bb4f50-b419-4106-95cb-874077564881,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"50b2a311df9ed9cf36cfe9d6d7dda9c2d68e41a4a294af0d678e3afcff46730a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:36:26.346477 kubelet[3444]: E0129 11:36:26.346459 3444 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"50b2a311df9ed9cf36cfe9d6d7dda9c2d68e41a4a294af0d678e3afcff46730a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:36:26.346504 kubelet[3444]: E0129 11:36:26.346497 3444 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"50b2a311df9ed9cf36cfe9d6d7dda9c2d68e41a4a294af0d678e3afcff46730a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-4frhm" Jan 29 11:36:26.346523 kubelet[3444]: E0129 11:36:26.346510 3444 kuberuntime_manager.go:1166] "CreatePodSandbox for pod 
failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"50b2a311df9ed9cf36cfe9d6d7dda9c2d68e41a4a294af0d678e3afcff46730a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-4frhm" Jan 29 11:36:26.346544 containerd[1959]: time="2025-01-29T11:36:26.346500673Z" level=error msg="Failed to destroy network for sandbox \"7d703463d6f3829374f31b69e71540a1ca0cb4e0362370e01490ab5975a622a7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:36:26.346565 kubelet[3444]: E0129 11:36:26.346532 3444 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-4frhm_calico-system(07bb4f50-b419-4106-95cb-874077564881)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-4frhm_calico-system(07bb4f50-b419-4106-95cb-874077564881)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"50b2a311df9ed9cf36cfe9d6d7dda9c2d68e41a4a294af0d678e3afcff46730a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-4frhm" podUID="07bb4f50-b419-4106-95cb-874077564881" Jan 29 11:36:26.346565 kubelet[3444]: E0129 11:36:26.346459 3444 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"493096bffccb48aa6278c21e89442e141756465eb1ded0cac04cc378edbf4be3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:36:26.346565 kubelet[3444]: E0129 11:36:26.346560 3444 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"493096bffccb48aa6278c21e89442e141756465eb1ded0cac04cc378edbf4be3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-66755f7995-nr82b" Jan 29 11:36:26.346640 kubelet[3444]: E0129 11:36:26.346568 3444 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"493096bffccb48aa6278c21e89442e141756465eb1ded0cac04cc378edbf4be3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-66755f7995-nr82b" Jan 29 11:36:26.346640 kubelet[3444]: E0129 11:36:26.346586 3444 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-66755f7995-nr82b_calico-apiserver(bd3d55b5-a627-418c-9156-9c5f11420d4d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-66755f7995-nr82b_calico-apiserver(bd3d55b5-a627-418c-9156-9c5f11420d4d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"493096bffccb48aa6278c21e89442e141756465eb1ded0cac04cc378edbf4be3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-66755f7995-nr82b" podUID="bd3d55b5-a627-418c-9156-9c5f11420d4d" Jan 29 11:36:26.346640 kubelet[3444]: E0129 11:36:26.346459 3444 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5d4369b81514ab52a3fedb3fd28dbefc0cc89cd73284216d2c5985dad6850440\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:36:26.346708 kubelet[3444]: E0129 11:36:26.346608 3444 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5d4369b81514ab52a3fedb3fd28dbefc0cc89cd73284216d2c5985dad6850440\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-bs4sr" Jan 29 11:36:26.346708 kubelet[3444]: E0129 11:36:26.346619 3444 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5d4369b81514ab52a3fedb3fd28dbefc0cc89cd73284216d2c5985dad6850440\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-bs4sr" Jan 29 11:36:26.346708 kubelet[3444]: E0129 11:36:26.346638 3444 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-bs4sr_kube-system(c7ba5fb2-2386-4664-9fd0-5429e827f425)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-bs4sr_kube-system(c7ba5fb2-2386-4664-9fd0-5429e827f425)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5d4369b81514ab52a3fedb3fd28dbefc0cc89cd73284216d2c5985dad6850440\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-bs4sr" podUID="c7ba5fb2-2386-4664-9fd0-5429e827f425" Jan 29 11:36:26.346772 containerd[1959]: time="2025-01-29T11:36:26.346652587Z" level=error msg="encountered an error cleaning up failed sandbox \"7d703463d6f3829374f31b69e71540a1ca0cb4e0362370e01490ab5975a622a7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:36:26.346772 containerd[1959]: time="2025-01-29T11:36:26.346675351Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-66755f7995-4p25x,Uid:5acf5fde-1589-42fe-98f2-79caa06a6364,Namespace:calico-apiserver,Attempt:2,} failed, error" error="failed to setup network for sandbox \"7d703463d6f3829374f31b69e71540a1ca0cb4e0362370e01490ab5975a622a7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" Jan 29 11:36:26.346846 kubelet[3444]: E0129 11:36:26.346736 3444 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7d703463d6f3829374f31b69e71540a1ca0cb4e0362370e01490ab5975a622a7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:36:26.346846 kubelet[3444]: E0129 11:36:26.346755 3444 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7d703463d6f3829374f31b69e71540a1ca0cb4e0362370e01490ab5975a622a7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-66755f7995-4p25x" Jan 29 11:36:26.346846 kubelet[3444]: E0129 11:36:26.346765 3444 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7d703463d6f3829374f31b69e71540a1ca0cb4e0362370e01490ab5975a622a7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-66755f7995-4p25x" Jan 29 11:36:26.346901 kubelet[3444]: E0129 11:36:26.346782 3444 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-66755f7995-4p25x_calico-apiserver(5acf5fde-1589-42fe-98f2-79caa06a6364)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-66755f7995-4p25x_calico-apiserver(5acf5fde-1589-42fe-98f2-79caa06a6364)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7d703463d6f3829374f31b69e71540a1ca0cb4e0362370e01490ab5975a622a7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-66755f7995-4p25x" podUID="5acf5fde-1589-42fe-98f2-79caa06a6364" Jan 29 11:36:26.347034 containerd[1959]: time="2025-01-29T11:36:26.347023936Z" level=error msg="Failed to destroy network for sandbox \"73de4b711e3bba88129c03b1a00519f1bea2d8e8476301f370d9845ad3d9c40f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:36:26.347148 containerd[1959]: time="2025-01-29T11:36:26.347138557Z" level=error msg="encountered an error cleaning up failed sandbox \"73de4b711e3bba88129c03b1a00519f1bea2d8e8476301f370d9845ad3d9c40f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:36:26.347171 containerd[1959]: time="2025-01-29T11:36:26.347159668Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-jgtzn,Uid:fa522645-0537-44d1-a6b0-ea427a3e77da,Namespace:kube-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"73de4b711e3bba88129c03b1a00519f1bea2d8e8476301f370d9845ad3d9c40f\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:36:26.347224 kubelet[3444]: E0129 11:36:26.347215 3444 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"73de4b711e3bba88129c03b1a00519f1bea2d8e8476301f370d9845ad3d9c40f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:36:26.347243 kubelet[3444]: E0129 11:36:26.347228 3444 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"73de4b711e3bba88129c03b1a00519f1bea2d8e8476301f370d9845ad3d9c40f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-jgtzn" Jan 29 11:36:26.347243 kubelet[3444]: E0129 11:36:26.347237 3444 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"73de4b711e3bba88129c03b1a00519f1bea2d8e8476301f370d9845ad3d9c40f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-jgtzn" Jan 29 11:36:26.347278 kubelet[3444]: E0129 11:36:26.347251 3444 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-jgtzn_kube-system(fa522645-0537-44d1-a6b0-ea427a3e77da)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-jgtzn_kube-system(fa522645-0537-44d1-a6b0-ea427a3e77da)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"73de4b711e3bba88129c03b1a00519f1bea2d8e8476301f370d9845ad3d9c40f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-jgtzn" podUID="fa522645-0537-44d1-a6b0-ea427a3e77da" Jan 29 11:36:26.637020 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7ca2d386dead55069830e1077ce55338eac73a0ae5279b1def2cf5864f02f316-shm.mount: Deactivated successfully. 
Jan 29 11:36:27.288888 kubelet[3444]: I0129 11:36:27.288872 3444 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="50b2a311df9ed9cf36cfe9d6d7dda9c2d68e41a4a294af0d678e3afcff46730a" Jan 29 11:36:27.289247 containerd[1959]: time="2025-01-29T11:36:27.289227681Z" level=info msg="StopPodSandbox for \"50b2a311df9ed9cf36cfe9d6d7dda9c2d68e41a4a294af0d678e3afcff46730a\"" Jan 29 11:36:27.289478 containerd[1959]: time="2025-01-29T11:36:27.289360340Z" level=info msg="Ensure that sandbox 50b2a311df9ed9cf36cfe9d6d7dda9c2d68e41a4a294af0d678e3afcff46730a in task-service has been cleanup successfully" Jan 29 11:36:27.289478 containerd[1959]: time="2025-01-29T11:36:27.289455231Z" level=info msg="TearDown network for sandbox \"50b2a311df9ed9cf36cfe9d6d7dda9c2d68e41a4a294af0d678e3afcff46730a\" successfully" Jan 29 11:36:27.289478 containerd[1959]: time="2025-01-29T11:36:27.289463055Z" level=info msg="StopPodSandbox for \"50b2a311df9ed9cf36cfe9d6d7dda9c2d68e41a4a294af0d678e3afcff46730a\" returns successfully" Jan 29 11:36:27.289571 kubelet[3444]: I0129 11:36:27.289521 3444 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7ca2d386dead55069830e1077ce55338eac73a0ae5279b1def2cf5864f02f316" Jan 29 11:36:27.289616 containerd[1959]: time="2025-01-29T11:36:27.289600292Z" level=info msg="StopPodSandbox for \"a6554e61854b09ba0e2d8e8e67019bff062e0bde1f2529058a5beaa146335098\"" Jan 29 11:36:27.289678 containerd[1959]: time="2025-01-29T11:36:27.289648530Z" level=info msg="TearDown network for sandbox \"a6554e61854b09ba0e2d8e8e67019bff062e0bde1f2529058a5beaa146335098\" successfully" Jan 29 11:36:27.289678 containerd[1959]: time="2025-01-29T11:36:27.289676419Z" level=info msg="StopPodSandbox for \"a6554e61854b09ba0e2d8e8e67019bff062e0bde1f2529058a5beaa146335098\" returns successfully" Jan 29 11:36:27.289770 containerd[1959]: time="2025-01-29T11:36:27.289760090Z" level=info msg="StopPodSandbox for \"7ca2d386dead55069830e1077ce55338eac73a0ae5279b1def2cf5864f02f316\"" Jan 29 11:36:27.289839 containerd[1959]: time="2025-01-29T11:36:27.289826564Z" level=info msg="StopPodSandbox for \"44973b2d7ba85c6b98b0cde9774715a152cc5701f4a8878b5abbc7fbe6f76a72\"" Jan 29 11:36:27.289884 containerd[1959]: time="2025-01-29T11:36:27.289873946Z" level=info msg="Ensure that sandbox 7ca2d386dead55069830e1077ce55338eac73a0ae5279b1def2cf5864f02f316 in task-service has been cleanup successfully" Jan 29 11:36:27.289924 containerd[1959]: time="2025-01-29T11:36:27.289875536Z" level=info msg="TearDown network for sandbox \"44973b2d7ba85c6b98b0cde9774715a152cc5701f4a8878b5abbc7fbe6f76a72\" successfully" Jan 29 11:36:27.289960 containerd[1959]: time="2025-01-29T11:36:27.289925535Z" level=info msg="StopPodSandbox for \"44973b2d7ba85c6b98b0cde9774715a152cc5701f4a8878b5abbc7fbe6f76a72\" returns successfully" Jan 29 11:36:27.289991 containerd[1959]: time="2025-01-29T11:36:27.289966620Z" level=info msg="TearDown network for sandbox \"7ca2d386dead55069830e1077ce55338eac73a0ae5279b1def2cf5864f02f316\" successfully" Jan 29 11:36:27.289991 containerd[1959]: time="2025-01-29T11:36:27.289978986Z" level=info msg="StopPodSandbox for \"7ca2d386dead55069830e1077ce55338eac73a0ae5279b1def2cf5864f02f316\" returns successfully" Jan 29 11:36:27.290126 containerd[1959]: time="2025-01-29T11:36:27.290113447Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4frhm,Uid:07bb4f50-b419-4106-95cb-874077564881,Namespace:calico-system,Attempt:3,}" Jan 29 11:36:27.290169 containerd[1959]: 
time="2025-01-29T11:36:27.290155266Z" level=info msg="StopPodSandbox for \"5694e7b381134602609bc7cbfae0f985c9cc667777f52a910a340fbc95a1bd1e\"" Jan 29 11:36:27.290236 kubelet[3444]: I0129 11:36:27.290228 3444 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5d4369b81514ab52a3fedb3fd28dbefc0cc89cd73284216d2c5985dad6850440" Jan 29 11:36:27.290267 containerd[1959]: time="2025-01-29T11:36:27.290213426Z" level=info msg="TearDown network for sandbox \"5694e7b381134602609bc7cbfae0f985c9cc667777f52a910a340fbc95a1bd1e\" successfully" Jan 29 11:36:27.290267 containerd[1959]: time="2025-01-29T11:36:27.290243071Z" level=info msg="StopPodSandbox for \"5694e7b381134602609bc7cbfae0f985c9cc667777f52a910a340fbc95a1bd1e\" returns successfully" Jan 29 11:36:27.290405 containerd[1959]: time="2025-01-29T11:36:27.290392073Z" level=info msg="StopPodSandbox for \"5658ebe767dc0e48483b50ea60a975db28a3de9d6da7c647b94ccff706b8fc91\"" Jan 29 11:36:27.290445 containerd[1959]: time="2025-01-29T11:36:27.290437962Z" level=info msg="TearDown network for sandbox \"5658ebe767dc0e48483b50ea60a975db28a3de9d6da7c647b94ccff706b8fc91\" successfully" Jan 29 11:36:27.290473 containerd[1959]: time="2025-01-29T11:36:27.290445367Z" level=info msg="StopPodSandbox for \"5658ebe767dc0e48483b50ea60a975db28a3de9d6da7c647b94ccff706b8fc91\" returns successfully" Jan 29 11:36:27.290473 containerd[1959]: time="2025-01-29T11:36:27.290467828Z" level=info msg="StopPodSandbox for \"5d4369b81514ab52a3fedb3fd28dbefc0cc89cd73284216d2c5985dad6850440\"" Jan 29 11:36:27.290587 containerd[1959]: time="2025-01-29T11:36:27.290575896Z" level=info msg="Ensure that sandbox 5d4369b81514ab52a3fedb3fd28dbefc0cc89cd73284216d2c5985dad6850440 in task-service has been cleanup successfully" Jan 29 11:36:27.290646 containerd[1959]: time="2025-01-29T11:36:27.290635125Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5849fd7c66-k8fpk,Uid:db110898-7bf6-46df-89f1-60bff5c819f3,Namespace:calico-system,Attempt:3,}" Jan 29 11:36:27.290688 containerd[1959]: time="2025-01-29T11:36:27.290678216Z" level=info msg="TearDown network for sandbox \"5d4369b81514ab52a3fedb3fd28dbefc0cc89cd73284216d2c5985dad6850440\" successfully" Jan 29 11:36:27.290721 containerd[1959]: time="2025-01-29T11:36:27.290689182Z" level=info msg="StopPodSandbox for \"5d4369b81514ab52a3fedb3fd28dbefc0cc89cd73284216d2c5985dad6850440\" returns successfully" Jan 29 11:36:27.290832 containerd[1959]: time="2025-01-29T11:36:27.290820065Z" level=info msg="StopPodSandbox for \"c665f6707dda0cf1824f155296f77f721f7d636a6e852f658be95b4f6c6ea946\"" Jan 29 11:36:27.290898 containerd[1959]: time="2025-01-29T11:36:27.290870849Z" level=info msg="TearDown network for sandbox \"c665f6707dda0cf1824f155296f77f721f7d636a6e852f658be95b4f6c6ea946\" successfully" Jan 29 11:36:27.290918 containerd[1959]: time="2025-01-29T11:36:27.290898153Z" level=info msg="StopPodSandbox for \"c665f6707dda0cf1824f155296f77f721f7d636a6e852f658be95b4f6c6ea946\" returns successfully" Jan 29 11:36:27.291031 containerd[1959]: time="2025-01-29T11:36:27.291021931Z" level=info msg="StopPodSandbox for \"886652190920c7c881375256c43dee442e3425539772965d5d5bada2eddca37f\"" Jan 29 11:36:27.291055 kubelet[3444]: I0129 11:36:27.291045 3444 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="493096bffccb48aa6278c21e89442e141756465eb1ded0cac04cc378edbf4be3" Jan 29 11:36:27.291081 containerd[1959]: time="2025-01-29T11:36:27.291060670Z" level=info msg="TearDown network 
for sandbox \"886652190920c7c881375256c43dee442e3425539772965d5d5bada2eddca37f\" successfully" Jan 29 11:36:27.291110 containerd[1959]: time="2025-01-29T11:36:27.291083014Z" level=info msg="StopPodSandbox for \"886652190920c7c881375256c43dee442e3425539772965d5d5bada2eddca37f\" returns successfully" Jan 29 11:36:27.291292 containerd[1959]: time="2025-01-29T11:36:27.291283100Z" level=info msg="StopPodSandbox for \"493096bffccb48aa6278c21e89442e141756465eb1ded0cac04cc378edbf4be3\"" Jan 29 11:36:27.291332 containerd[1959]: time="2025-01-29T11:36:27.291309905Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-bs4sr,Uid:c7ba5fb2-2386-4664-9fd0-5429e827f425,Namespace:kube-system,Attempt:3,}" Jan 29 11:36:27.291347 systemd[1]: run-netns-cni\x2d3248d8af\x2df551\x2da8db\x2d0ff1\x2d4a03933e418e.mount: Deactivated successfully. Jan 29 11:36:27.291537 containerd[1959]: time="2025-01-29T11:36:27.291403424Z" level=info msg="Ensure that sandbox 493096bffccb48aa6278c21e89442e141756465eb1ded0cac04cc378edbf4be3 in task-service has been cleanup successfully" Jan 29 11:36:27.291537 containerd[1959]: time="2025-01-29T11:36:27.291523671Z" level=info msg="TearDown network for sandbox \"493096bffccb48aa6278c21e89442e141756465eb1ded0cac04cc378edbf4be3\" successfully" Jan 29 11:36:27.291592 containerd[1959]: time="2025-01-29T11:36:27.291540334Z" level=info msg="StopPodSandbox for \"493096bffccb48aa6278c21e89442e141756465eb1ded0cac04cc378edbf4be3\" returns successfully" Jan 29 11:36:27.291688 containerd[1959]: time="2025-01-29T11:36:27.291674676Z" level=info msg="StopPodSandbox for \"adda9a6df16f774f10fd781a7c136d6f6a86ebfc037c4f41f7c02be2ae6b4e09\"" Jan 29 11:36:27.291738 containerd[1959]: time="2025-01-29T11:36:27.291728769Z" level=info msg="TearDown network for sandbox \"adda9a6df16f774f10fd781a7c136d6f6a86ebfc037c4f41f7c02be2ae6b4e09\" successfully" Jan 29 11:36:27.291760 containerd[1959]: time="2025-01-29T11:36:27.291739160Z" level=info msg="StopPodSandbox for \"adda9a6df16f774f10fd781a7c136d6f6a86ebfc037c4f41f7c02be2ae6b4e09\" returns successfully" Jan 29 11:36:27.291827 kubelet[3444]: I0129 11:36:27.291813 3444 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7d703463d6f3829374f31b69e71540a1ca0cb4e0362370e01490ab5975a622a7" Jan 29 11:36:27.291879 containerd[1959]: time="2025-01-29T11:36:27.291868464Z" level=info msg="StopPodSandbox for \"7a18cc63233347cd3f85e3536bd88ae50f2b1fc4b307d9e1526799d3ef0c643b\"" Jan 29 11:36:27.291930 containerd[1959]: time="2025-01-29T11:36:27.291920525Z" level=info msg="TearDown network for sandbox \"7a18cc63233347cd3f85e3536bd88ae50f2b1fc4b307d9e1526799d3ef0c643b\" successfully" Jan 29 11:36:27.291953 containerd[1959]: time="2025-01-29T11:36:27.291931611Z" level=info msg="StopPodSandbox for \"7a18cc63233347cd3f85e3536bd88ae50f2b1fc4b307d9e1526799d3ef0c643b\" returns successfully" Jan 29 11:36:27.292010 containerd[1959]: time="2025-01-29T11:36:27.292001810Z" level=info msg="StopPodSandbox for \"7d703463d6f3829374f31b69e71540a1ca0cb4e0362370e01490ab5975a622a7\"" Jan 29 11:36:27.292092 containerd[1959]: time="2025-01-29T11:36:27.292081542Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-66755f7995-nr82b,Uid:bd3d55b5-a627-418c-9156-9c5f11420d4d,Namespace:calico-apiserver,Attempt:3,}" Jan 29 11:36:27.292175 containerd[1959]: time="2025-01-29T11:36:27.292081210Z" level=info msg="Ensure that sandbox 7d703463d6f3829374f31b69e71540a1ca0cb4e0362370e01490ab5975a622a7 in task-service has been cleanup 
successfully" Jan 29 11:36:27.292276 containerd[1959]: time="2025-01-29T11:36:27.292265142Z" level=info msg="TearDown network for sandbox \"7d703463d6f3829374f31b69e71540a1ca0cb4e0362370e01490ab5975a622a7\" successfully" Jan 29 11:36:27.292310 containerd[1959]: time="2025-01-29T11:36:27.292278100Z" level=info msg="StopPodSandbox for \"7d703463d6f3829374f31b69e71540a1ca0cb4e0362370e01490ab5975a622a7\" returns successfully" Jan 29 11:36:27.293229 containerd[1959]: time="2025-01-29T11:36:27.292387942Z" level=info msg="StopPodSandbox for \"7286c56b2895590e903942a899de695b46f6a0502fcf7cbd19bf9c57d82de1d4\"" Jan 29 11:36:27.293229 containerd[1959]: time="2025-01-29T11:36:27.292427702Z" level=info msg="TearDown network for sandbox \"7286c56b2895590e903942a899de695b46f6a0502fcf7cbd19bf9c57d82de1d4\" successfully" Jan 29 11:36:27.293229 containerd[1959]: time="2025-01-29T11:36:27.292434112Z" level=info msg="StopPodSandbox for \"7286c56b2895590e903942a899de695b46f6a0502fcf7cbd19bf9c57d82de1d4\" returns successfully" Jan 29 11:36:27.293229 containerd[1959]: time="2025-01-29T11:36:27.292514531Z" level=info msg="StopPodSandbox for \"9d450221c45f6f69345547468c935830e4952f13334ef1f093cfe321f14e64f9\"" Jan 29 11:36:27.293229 containerd[1959]: time="2025-01-29T11:36:27.292551891Z" level=info msg="TearDown network for sandbox \"9d450221c45f6f69345547468c935830e4952f13334ef1f093cfe321f14e64f9\" successfully" Jan 29 11:36:27.293229 containerd[1959]: time="2025-01-29T11:36:27.292558028Z" level=info msg="StopPodSandbox for \"9d450221c45f6f69345547468c935830e4952f13334ef1f093cfe321f14e64f9\" returns successfully" Jan 29 11:36:27.293229 containerd[1959]: time="2025-01-29T11:36:27.292713195Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-66755f7995-4p25x,Uid:5acf5fde-1589-42fe-98f2-79caa06a6364,Namespace:calico-apiserver,Attempt:3,}" Jan 29 11:36:27.293229 containerd[1959]: time="2025-01-29T11:36:27.292722749Z" level=info msg="StopPodSandbox for \"73de4b711e3bba88129c03b1a00519f1bea2d8e8476301f370d9845ad3d9c40f\"" Jan 29 11:36:27.293229 containerd[1959]: time="2025-01-29T11:36:27.292798593Z" level=info msg="Ensure that sandbox 73de4b711e3bba88129c03b1a00519f1bea2d8e8476301f370d9845ad3d9c40f in task-service has been cleanup successfully" Jan 29 11:36:27.293229 containerd[1959]: time="2025-01-29T11:36:27.292865034Z" level=info msg="TearDown network for sandbox \"73de4b711e3bba88129c03b1a00519f1bea2d8e8476301f370d9845ad3d9c40f\" successfully" Jan 29 11:36:27.293229 containerd[1959]: time="2025-01-29T11:36:27.292871691Z" level=info msg="StopPodSandbox for \"73de4b711e3bba88129c03b1a00519f1bea2d8e8476301f370d9845ad3d9c40f\" returns successfully" Jan 29 11:36:27.293229 containerd[1959]: time="2025-01-29T11:36:27.292981822Z" level=info msg="StopPodSandbox for \"e5569f59dd8a32613d2cc67e5cdcfac99ad5cfdd0032464c90a9128a9e3de0ff\"" Jan 29 11:36:27.293229 containerd[1959]: time="2025-01-29T11:36:27.293034491Z" level=info msg="TearDown network for sandbox \"e5569f59dd8a32613d2cc67e5cdcfac99ad5cfdd0032464c90a9128a9e3de0ff\" successfully" Jan 29 11:36:27.293229 containerd[1959]: time="2025-01-29T11:36:27.293044145Z" level=info msg="StopPodSandbox for \"e5569f59dd8a32613d2cc67e5cdcfac99ad5cfdd0032464c90a9128a9e3de0ff\" returns successfully" Jan 29 11:36:27.293229 containerd[1959]: time="2025-01-29T11:36:27.293154522Z" level=info msg="StopPodSandbox for \"310114bfd28ce07f86219fef0fb39f8118f5d712562004daea20b9c90f2a69d8\"" Jan 29 11:36:27.293229 containerd[1959]: time="2025-01-29T11:36:27.293190100Z" 
level=info msg="TearDown network for sandbox \"310114bfd28ce07f86219fef0fb39f8118f5d712562004daea20b9c90f2a69d8\" successfully" Jan 29 11:36:27.293229 containerd[1959]: time="2025-01-29T11:36:27.293196060Z" level=info msg="StopPodSandbox for \"310114bfd28ce07f86219fef0fb39f8118f5d712562004daea20b9c90f2a69d8\" returns successfully" Jan 29 11:36:27.293570 kubelet[3444]: I0129 11:36:27.292536 3444 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="73de4b711e3bba88129c03b1a00519f1bea2d8e8476301f370d9845ad3d9c40f" Jan 29 11:36:27.293598 containerd[1959]: time="2025-01-29T11:36:27.293341176Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-jgtzn,Uid:fa522645-0537-44d1-a6b0-ea427a3e77da,Namespace:kube-system,Attempt:3,}" Jan 29 11:36:27.293704 systemd[1]: run-netns-cni\x2db2d66a56\x2d0668\x2d3f6d\x2d8866\x2d62f41457dd18.mount: Deactivated successfully. Jan 29 11:36:27.293780 systemd[1]: run-netns-cni\x2d6d297e58\x2d5646\x2def12\x2df7e4\x2d30bb8275c104.mount: Deactivated successfully. Jan 29 11:36:27.293845 systemd[1]: run-netns-cni\x2d362d5235\x2d928d\x2d7e94\x2dafcf\x2d90475c25c817.mount: Deactivated successfully. Jan 29 11:36:27.296130 systemd[1]: run-netns-cni\x2d35fc571e\x2daf05\x2d0c0f\x2da1a5\x2df7d599192115.mount: Deactivated successfully. Jan 29 11:36:27.296206 systemd[1]: run-netns-cni\x2de97e324a\x2d5729\x2d260e\x2daa67\x2d1ea7b8fc606a.mount: Deactivated successfully. Jan 29 11:36:27.333488 containerd[1959]: time="2025-01-29T11:36:27.333443866Z" level=error msg="Failed to destroy network for sandbox \"dc935f49c58cb7ff10286d7aa926a38e14f5bcf4ebe89e99dc3ab63f93f3738c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:36:27.333730 containerd[1959]: time="2025-01-29T11:36:27.333709521Z" level=error msg="encountered an error cleaning up failed sandbox \"dc935f49c58cb7ff10286d7aa926a38e14f5bcf4ebe89e99dc3ab63f93f3738c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:36:27.333773 containerd[1959]: time="2025-01-29T11:36:27.333759873Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4frhm,Uid:07bb4f50-b419-4106-95cb-874077564881,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"dc935f49c58cb7ff10286d7aa926a38e14f5bcf4ebe89e99dc3ab63f93f3738c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:36:27.333914 kubelet[3444]: E0129 11:36:27.333891 3444 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dc935f49c58cb7ff10286d7aa926a38e14f5bcf4ebe89e99dc3ab63f93f3738c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:36:27.333962 kubelet[3444]: E0129 11:36:27.333927 3444 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"dc935f49c58cb7ff10286d7aa926a38e14f5bcf4ebe89e99dc3ab63f93f3738c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-4frhm" Jan 29 11:36:27.333962 kubelet[3444]: E0129 11:36:27.333940 3444 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dc935f49c58cb7ff10286d7aa926a38e14f5bcf4ebe89e99dc3ab63f93f3738c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-4frhm" Jan 29 11:36:27.334027 kubelet[3444]: E0129 11:36:27.333966 3444 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-4frhm_calico-system(07bb4f50-b419-4106-95cb-874077564881)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-4frhm_calico-system(07bb4f50-b419-4106-95cb-874077564881)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"dc935f49c58cb7ff10286d7aa926a38e14f5bcf4ebe89e99dc3ab63f93f3738c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-4frhm" podUID="07bb4f50-b419-4106-95cb-874077564881" Jan 29 11:36:27.335137 containerd[1959]: time="2025-01-29T11:36:27.335107992Z" level=error msg="Failed to destroy network for sandbox \"5bdb5296b226ef8edb82e2eed72289b6d99350f1ac604a75468135e82bb7c86f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:36:27.335272 containerd[1959]: time="2025-01-29T11:36:27.335248090Z" level=error msg="Failed to destroy network for sandbox \"9a7b6dacb17724f42d4585c2d6e7d809e0610b18931dad0ccf3049b814f5b2c2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:36:27.335376 containerd[1959]: time="2025-01-29T11:36:27.335358414Z" level=error msg="encountered an error cleaning up failed sandbox \"5bdb5296b226ef8edb82e2eed72289b6d99350f1ac604a75468135e82bb7c86f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:36:27.335420 containerd[1959]: time="2025-01-29T11:36:27.335406214Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5849fd7c66-k8fpk,Uid:db110898-7bf6-46df-89f1-60bff5c819f3,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"5bdb5296b226ef8edb82e2eed72289b6d99350f1ac604a75468135e82bb7c86f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:36:27.335464 containerd[1959]: time="2025-01-29T11:36:27.335447451Z" level=error msg="encountered an error cleaning up failed sandbox 
\"9a7b6dacb17724f42d4585c2d6e7d809e0610b18931dad0ccf3049b814f5b2c2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:36:27.335496 containerd[1959]: time="2025-01-29T11:36:27.335483221Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-66755f7995-nr82b,Uid:bd3d55b5-a627-418c-9156-9c5f11420d4d,Namespace:calico-apiserver,Attempt:3,} failed, error" error="failed to setup network for sandbox \"9a7b6dacb17724f42d4585c2d6e7d809e0610b18931dad0ccf3049b814f5b2c2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:36:27.335559 kubelet[3444]: E0129 11:36:27.335524 3444 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5bdb5296b226ef8edb82e2eed72289b6d99350f1ac604a75468135e82bb7c86f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:36:27.335636 kubelet[3444]: E0129 11:36:27.335568 3444 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9a7b6dacb17724f42d4585c2d6e7d809e0610b18931dad0ccf3049b814f5b2c2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:36:27.335636 kubelet[3444]: E0129 11:36:27.335576 3444 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5bdb5296b226ef8edb82e2eed72289b6d99350f1ac604a75468135e82bb7c86f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5849fd7c66-k8fpk" Jan 29 11:36:27.335636 kubelet[3444]: E0129 11:36:27.335598 3444 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5bdb5296b226ef8edb82e2eed72289b6d99350f1ac604a75468135e82bb7c86f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5849fd7c66-k8fpk" Jan 29 11:36:27.335636 kubelet[3444]: E0129 11:36:27.335598 3444 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9a7b6dacb17724f42d4585c2d6e7d809e0610b18931dad0ccf3049b814f5b2c2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-66755f7995-nr82b" Jan 29 11:36:27.335760 kubelet[3444]: E0129 11:36:27.335614 3444 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"9a7b6dacb17724f42d4585c2d6e7d809e0610b18931dad0ccf3049b814f5b2c2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-66755f7995-nr82b" Jan 29 11:36:27.335760 kubelet[3444]: E0129 11:36:27.335626 3444 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5849fd7c66-k8fpk_calico-system(db110898-7bf6-46df-89f1-60bff5c819f3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5849fd7c66-k8fpk_calico-system(db110898-7bf6-46df-89f1-60bff5c819f3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5bdb5296b226ef8edb82e2eed72289b6d99350f1ac604a75468135e82bb7c86f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5849fd7c66-k8fpk" podUID="db110898-7bf6-46df-89f1-60bff5c819f3" Jan 29 11:36:27.335850 kubelet[3444]: E0129 11:36:27.335640 3444 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-66755f7995-nr82b_calico-apiserver(bd3d55b5-a627-418c-9156-9c5f11420d4d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-66755f7995-nr82b_calico-apiserver(bd3d55b5-a627-418c-9156-9c5f11420d4d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9a7b6dacb17724f42d4585c2d6e7d809e0610b18931dad0ccf3049b814f5b2c2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-66755f7995-nr82b" podUID="bd3d55b5-a627-418c-9156-9c5f11420d4d" Jan 29 11:36:27.336836 containerd[1959]: time="2025-01-29T11:36:27.336815434Z" level=error msg="Failed to destroy network for sandbox \"9e02fa6573b3e30f728d62f7652ce443be13ed8c960a117a07636e76e471b7ee\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:36:27.337002 containerd[1959]: time="2025-01-29T11:36:27.336986517Z" level=error msg="encountered an error cleaning up failed sandbox \"9e02fa6573b3e30f728d62f7652ce443be13ed8c960a117a07636e76e471b7ee\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:36:27.337027 containerd[1959]: time="2025-01-29T11:36:27.337018879Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-66755f7995-4p25x,Uid:5acf5fde-1589-42fe-98f2-79caa06a6364,Namespace:calico-apiserver,Attempt:3,} failed, error" error="failed to setup network for sandbox \"9e02fa6573b3e30f728d62f7652ce443be13ed8c960a117a07636e76e471b7ee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:36:27.337111 containerd[1959]: time="2025-01-29T11:36:27.337094125Z" level=error msg="Failed to destroy network for sandbox 
\"e36e089bdf3e29fd06399bf98df00b1e52fb1c140ed5e59fb513d6f5589962f3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:36:27.337145 kubelet[3444]: E0129 11:36:27.337096 3444 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9e02fa6573b3e30f728d62f7652ce443be13ed8c960a117a07636e76e471b7ee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:36:27.337145 kubelet[3444]: E0129 11:36:27.337123 3444 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9e02fa6573b3e30f728d62f7652ce443be13ed8c960a117a07636e76e471b7ee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-66755f7995-4p25x" Jan 29 11:36:27.337145 kubelet[3444]: E0129 11:36:27.337134 3444 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9e02fa6573b3e30f728d62f7652ce443be13ed8c960a117a07636e76e471b7ee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-66755f7995-4p25x" Jan 29 11:36:27.337213 kubelet[3444]: E0129 11:36:27.337157 3444 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-66755f7995-4p25x_calico-apiserver(5acf5fde-1589-42fe-98f2-79caa06a6364)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-66755f7995-4p25x_calico-apiserver(5acf5fde-1589-42fe-98f2-79caa06a6364)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9e02fa6573b3e30f728d62f7652ce443be13ed8c960a117a07636e76e471b7ee\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-66755f7995-4p25x" podUID="5acf5fde-1589-42fe-98f2-79caa06a6364" Jan 29 11:36:27.337261 containerd[1959]: time="2025-01-29T11:36:27.337248535Z" level=error msg="encountered an error cleaning up failed sandbox \"e36e089bdf3e29fd06399bf98df00b1e52fb1c140ed5e59fb513d6f5589962f3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:36:27.337281 containerd[1959]: time="2025-01-29T11:36:27.337271836Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-bs4sr,Uid:c7ba5fb2-2386-4664-9fd0-5429e827f425,Namespace:kube-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"e36e089bdf3e29fd06399bf98df00b1e52fb1c140ed5e59fb513d6f5589962f3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 
11:36:27.337375 kubelet[3444]: E0129 11:36:27.337360 3444 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e36e089bdf3e29fd06399bf98df00b1e52fb1c140ed5e59fb513d6f5589962f3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:36:27.337424 kubelet[3444]: E0129 11:36:27.337382 3444 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e36e089bdf3e29fd06399bf98df00b1e52fb1c140ed5e59fb513d6f5589962f3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-bs4sr" Jan 29 11:36:27.337424 kubelet[3444]: E0129 11:36:27.337396 3444 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e36e089bdf3e29fd06399bf98df00b1e52fb1c140ed5e59fb513d6f5589962f3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-bs4sr" Jan 29 11:36:27.337424 kubelet[3444]: E0129 11:36:27.337414 3444 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-bs4sr_kube-system(c7ba5fb2-2386-4664-9fd0-5429e827f425)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-bs4sr_kube-system(c7ba5fb2-2386-4664-9fd0-5429e827f425)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e36e089bdf3e29fd06399bf98df00b1e52fb1c140ed5e59fb513d6f5589962f3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-bs4sr" podUID="c7ba5fb2-2386-4664-9fd0-5429e827f425" Jan 29 11:36:27.337519 containerd[1959]: time="2025-01-29T11:36:27.337406946Z" level=error msg="Failed to destroy network for sandbox \"4c447c3ec9a8061e58dd1d261331ca98ac767090c88c05df28bc84122437ad79\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:36:27.337574 containerd[1959]: time="2025-01-29T11:36:27.337561321Z" level=error msg="encountered an error cleaning up failed sandbox \"4c447c3ec9a8061e58dd1d261331ca98ac767090c88c05df28bc84122437ad79\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:36:27.337600 containerd[1959]: time="2025-01-29T11:36:27.337583977Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-jgtzn,Uid:fa522645-0537-44d1-a6b0-ea427a3e77da,Namespace:kube-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"4c447c3ec9a8061e58dd1d261331ca98ac767090c88c05df28bc84122437ad79\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" Jan 29 11:36:27.337652 kubelet[3444]: E0129 11:36:27.337642 3444 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4c447c3ec9a8061e58dd1d261331ca98ac767090c88c05df28bc84122437ad79\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:36:27.337674 kubelet[3444]: E0129 11:36:27.337658 3444 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4c447c3ec9a8061e58dd1d261331ca98ac767090c88c05df28bc84122437ad79\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-jgtzn" Jan 29 11:36:27.337674 kubelet[3444]: E0129 11:36:27.337667 3444 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4c447c3ec9a8061e58dd1d261331ca98ac767090c88c05df28bc84122437ad79\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-jgtzn" Jan 29 11:36:27.337712 kubelet[3444]: E0129 11:36:27.337684 3444 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-jgtzn_kube-system(fa522645-0537-44d1-a6b0-ea427a3e77da)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-jgtzn_kube-system(fa522645-0537-44d1-a6b0-ea427a3e77da)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4c447c3ec9a8061e58dd1d261331ca98ac767090c88c05df28bc84122437ad79\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-jgtzn" podUID="fa522645-0537-44d1-a6b0-ea427a3e77da" Jan 29 11:36:28.294224 kubelet[3444]: I0129 11:36:28.294207 3444 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9a7b6dacb17724f42d4585c2d6e7d809e0610b18931dad0ccf3049b814f5b2c2" Jan 29 11:36:28.294511 containerd[1959]: time="2025-01-29T11:36:28.294492445Z" level=info msg="StopPodSandbox for \"9a7b6dacb17724f42d4585c2d6e7d809e0610b18931dad0ccf3049b814f5b2c2\"" Jan 29 11:36:28.294642 containerd[1959]: time="2025-01-29T11:36:28.294629034Z" level=info msg="Ensure that sandbox 9a7b6dacb17724f42d4585c2d6e7d809e0610b18931dad0ccf3049b814f5b2c2 in task-service has been cleanup successfully" Jan 29 11:36:28.294753 containerd[1959]: time="2025-01-29T11:36:28.294741721Z" level=info msg="TearDown network for sandbox \"9a7b6dacb17724f42d4585c2d6e7d809e0610b18931dad0ccf3049b814f5b2c2\" successfully" Jan 29 11:36:28.294784 containerd[1959]: time="2025-01-29T11:36:28.294753488Z" level=info msg="StopPodSandbox for \"9a7b6dacb17724f42d4585c2d6e7d809e0610b18931dad0ccf3049b814f5b2c2\" returns successfully" Jan 29 11:36:28.294916 containerd[1959]: time="2025-01-29T11:36:28.294904571Z" level=info msg="StopPodSandbox for \"493096bffccb48aa6278c21e89442e141756465eb1ded0cac04cc378edbf4be3\"" Jan 29 11:36:28.294976 containerd[1959]: 
time="2025-01-29T11:36:28.294950450Z" level=info msg="TearDown network for sandbox \"493096bffccb48aa6278c21e89442e141756465eb1ded0cac04cc378edbf4be3\" successfully" Jan 29 11:36:28.294999 containerd[1959]: time="2025-01-29T11:36:28.294977174Z" level=info msg="StopPodSandbox for \"493096bffccb48aa6278c21e89442e141756465eb1ded0cac04cc378edbf4be3\" returns successfully" Jan 29 11:36:28.295063 kubelet[3444]: I0129 11:36:28.295053 3444 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9e02fa6573b3e30f728d62f7652ce443be13ed8c960a117a07636e76e471b7ee" Jan 29 11:36:28.295119 containerd[1959]: time="2025-01-29T11:36:28.295107124Z" level=info msg="StopPodSandbox for \"adda9a6df16f774f10fd781a7c136d6f6a86ebfc037c4f41f7c02be2ae6b4e09\"" Jan 29 11:36:28.295169 containerd[1959]: time="2025-01-29T11:36:28.295160267Z" level=info msg="TearDown network for sandbox \"adda9a6df16f774f10fd781a7c136d6f6a86ebfc037c4f41f7c02be2ae6b4e09\" successfully" Jan 29 11:36:28.295192 containerd[1959]: time="2025-01-29T11:36:28.295169975Z" level=info msg="StopPodSandbox for \"adda9a6df16f774f10fd781a7c136d6f6a86ebfc037c4f41f7c02be2ae6b4e09\" returns successfully" Jan 29 11:36:28.295321 containerd[1959]: time="2025-01-29T11:36:28.295311634Z" level=info msg="StopPodSandbox for \"7a18cc63233347cd3f85e3536bd88ae50f2b1fc4b307d9e1526799d3ef0c643b\"" Jan 29 11:36:28.295344 containerd[1959]: time="2025-01-29T11:36:28.295330614Z" level=info msg="StopPodSandbox for \"9e02fa6573b3e30f728d62f7652ce443be13ed8c960a117a07636e76e471b7ee\"" Jan 29 11:36:28.295371 containerd[1959]: time="2025-01-29T11:36:28.295347851Z" level=info msg="TearDown network for sandbox \"7a18cc63233347cd3f85e3536bd88ae50f2b1fc4b307d9e1526799d3ef0c643b\" successfully" Jan 29 11:36:28.295389 containerd[1959]: time="2025-01-29T11:36:28.295371477Z" level=info msg="StopPodSandbox for \"7a18cc63233347cd3f85e3536bd88ae50f2b1fc4b307d9e1526799d3ef0c643b\" returns successfully" Jan 29 11:36:28.295413 containerd[1959]: time="2025-01-29T11:36:28.295403605Z" level=info msg="Ensure that sandbox 9e02fa6573b3e30f728d62f7652ce443be13ed8c960a117a07636e76e471b7ee in task-service has been cleanup successfully" Jan 29 11:36:28.295494 containerd[1959]: time="2025-01-29T11:36:28.295485442Z" level=info msg="TearDown network for sandbox \"9e02fa6573b3e30f728d62f7652ce443be13ed8c960a117a07636e76e471b7ee\" successfully" Jan 29 11:36:28.295521 containerd[1959]: time="2025-01-29T11:36:28.295493473Z" level=info msg="StopPodSandbox for \"9e02fa6573b3e30f728d62f7652ce443be13ed8c960a117a07636e76e471b7ee\" returns successfully" Jan 29 11:36:28.295577 containerd[1959]: time="2025-01-29T11:36:28.295565764Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-66755f7995-nr82b,Uid:bd3d55b5-a627-418c-9156-9c5f11420d4d,Namespace:calico-apiserver,Attempt:4,}" Jan 29 11:36:28.295610 containerd[1959]: time="2025-01-29T11:36:28.295600799Z" level=info msg="StopPodSandbox for \"7d703463d6f3829374f31b69e71540a1ca0cb4e0362370e01490ab5975a622a7\"" Jan 29 11:36:28.295641 containerd[1959]: time="2025-01-29T11:36:28.295635613Z" level=info msg="TearDown network for sandbox \"7d703463d6f3829374f31b69e71540a1ca0cb4e0362370e01490ab5975a622a7\" successfully" Jan 29 11:36:28.295659 containerd[1959]: time="2025-01-29T11:36:28.295641647Z" level=info msg="StopPodSandbox for \"7d703463d6f3829374f31b69e71540a1ca0cb4e0362370e01490ab5975a622a7\" returns successfully" Jan 29 11:36:28.295755 containerd[1959]: time="2025-01-29T11:36:28.295746539Z" level=info 
msg="StopPodSandbox for \"7286c56b2895590e903942a899de695b46f6a0502fcf7cbd19bf9c57d82de1d4\"" Jan 29 11:36:28.295786 containerd[1959]: time="2025-01-29T11:36:28.295781008Z" level=info msg="TearDown network for sandbox \"7286c56b2895590e903942a899de695b46f6a0502fcf7cbd19bf9c57d82de1d4\" successfully" Jan 29 11:36:28.295805 containerd[1959]: time="2025-01-29T11:36:28.295786575Z" level=info msg="StopPodSandbox for \"7286c56b2895590e903942a899de695b46f6a0502fcf7cbd19bf9c57d82de1d4\" returns successfully" Jan 29 11:36:28.296052 containerd[1959]: time="2025-01-29T11:36:28.296035007Z" level=info msg="StopPodSandbox for \"9d450221c45f6f69345547468c935830e4952f13334ef1f093cfe321f14e64f9\"" Jan 29 11:36:28.296097 containerd[1959]: time="2025-01-29T11:36:28.296083482Z" level=info msg="TearDown network for sandbox \"9d450221c45f6f69345547468c935830e4952f13334ef1f093cfe321f14e64f9\" successfully" Jan 29 11:36:28.296097 containerd[1959]: time="2025-01-29T11:36:28.296091603Z" level=info msg="StopPodSandbox for \"9d450221c45f6f69345547468c935830e4952f13334ef1f093cfe321f14e64f9\" returns successfully" Jan 29 11:36:28.296290 kubelet[3444]: I0129 11:36:28.296270 3444 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4c447c3ec9a8061e58dd1d261331ca98ac767090c88c05df28bc84122437ad79" Jan 29 11:36:28.296793 containerd[1959]: time="2025-01-29T11:36:28.296413639Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-66755f7995-4p25x,Uid:5acf5fde-1589-42fe-98f2-79caa06a6364,Namespace:calico-apiserver,Attempt:4,}" Jan 29 11:36:28.296793 containerd[1959]: time="2025-01-29T11:36:28.296600335Z" level=info msg="StopPodSandbox for \"4c447c3ec9a8061e58dd1d261331ca98ac767090c88c05df28bc84122437ad79\"" Jan 29 11:36:28.296793 containerd[1959]: time="2025-01-29T11:36:28.296744544Z" level=info msg="Ensure that sandbox 4c447c3ec9a8061e58dd1d261331ca98ac767090c88c05df28bc84122437ad79 in task-service has been cleanup successfully" Jan 29 11:36:28.296709 systemd[1]: run-netns-cni\x2d0316f1b3\x2d2e5d\x2d1469\x2da6d2\x2d6c35b174f385.mount: Deactivated successfully. 
Jan 29 11:36:28.297078 containerd[1959]: time="2025-01-29T11:36:28.296852235Z" level=info msg="TearDown network for sandbox \"4c447c3ec9a8061e58dd1d261331ca98ac767090c88c05df28bc84122437ad79\" successfully" Jan 29 11:36:28.297078 containerd[1959]: time="2025-01-29T11:36:28.296863592Z" level=info msg="StopPodSandbox for \"4c447c3ec9a8061e58dd1d261331ca98ac767090c88c05df28bc84122437ad79\" returns successfully" Jan 29 11:36:28.297237 containerd[1959]: time="2025-01-29T11:36:28.297222223Z" level=info msg="StopPodSandbox for \"73de4b711e3bba88129c03b1a00519f1bea2d8e8476301f370d9845ad3d9c40f\"" Jan 29 11:36:28.297287 containerd[1959]: time="2025-01-29T11:36:28.297276830Z" level=info msg="TearDown network for sandbox \"73de4b711e3bba88129c03b1a00519f1bea2d8e8476301f370d9845ad3d9c40f\" successfully" Jan 29 11:36:28.297336 containerd[1959]: time="2025-01-29T11:36:28.297288439Z" level=info msg="StopPodSandbox for \"73de4b711e3bba88129c03b1a00519f1bea2d8e8476301f370d9845ad3d9c40f\" returns successfully" Jan 29 11:36:28.297470 containerd[1959]: time="2025-01-29T11:36:28.297455543Z" level=info msg="StopPodSandbox for \"e5569f59dd8a32613d2cc67e5cdcfac99ad5cfdd0032464c90a9128a9e3de0ff\"" Jan 29 11:36:28.297537 containerd[1959]: time="2025-01-29T11:36:28.297511456Z" level=info msg="TearDown network for sandbox \"e5569f59dd8a32613d2cc67e5cdcfac99ad5cfdd0032464c90a9128a9e3de0ff\" successfully" Jan 29 11:36:28.297571 containerd[1959]: time="2025-01-29T11:36:28.297536456Z" level=info msg="StopPodSandbox for \"e5569f59dd8a32613d2cc67e5cdcfac99ad5cfdd0032464c90a9128a9e3de0ff\" returns successfully" Jan 29 11:36:28.297680 containerd[1959]: time="2025-01-29T11:36:28.297671072Z" level=info msg="StopPodSandbox for \"310114bfd28ce07f86219fef0fb39f8118f5d712562004daea20b9c90f2a69d8\"" Jan 29 11:36:28.297716 containerd[1959]: time="2025-01-29T11:36:28.297709596Z" level=info msg="TearDown network for sandbox \"310114bfd28ce07f86219fef0fb39f8118f5d712562004daea20b9c90f2a69d8\" successfully" Jan 29 11:36:28.297745 containerd[1959]: time="2025-01-29T11:36:28.297716254Z" level=info msg="StopPodSandbox for \"310114bfd28ce07f86219fef0fb39f8118f5d712562004daea20b9c90f2a69d8\" returns successfully" Jan 29 11:36:28.297775 kubelet[3444]: I0129 11:36:28.297763 3444 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dc935f49c58cb7ff10286d7aa926a38e14f5bcf4ebe89e99dc3ab63f93f3738c" Jan 29 11:36:28.298002 containerd[1959]: time="2025-01-29T11:36:28.297987140Z" level=info msg="StopPodSandbox for \"dc935f49c58cb7ff10286d7aa926a38e14f5bcf4ebe89e99dc3ab63f93f3738c\"" Jan 29 11:36:28.298033 containerd[1959]: time="2025-01-29T11:36:28.297991047Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-jgtzn,Uid:fa522645-0537-44d1-a6b0-ea427a3e77da,Namespace:kube-system,Attempt:4,}" Jan 29 11:36:28.298106 containerd[1959]: time="2025-01-29T11:36:28.298094884Z" level=info msg="Ensure that sandbox dc935f49c58cb7ff10286d7aa926a38e14f5bcf4ebe89e99dc3ab63f93f3738c in task-service has been cleanup successfully" Jan 29 11:36:28.298186 containerd[1959]: time="2025-01-29T11:36:28.298175639Z" level=info msg="TearDown network for sandbox \"dc935f49c58cb7ff10286d7aa926a38e14f5bcf4ebe89e99dc3ab63f93f3738c\" successfully" Jan 29 11:36:28.298186 containerd[1959]: time="2025-01-29T11:36:28.298185353Z" level=info msg="StopPodSandbox for \"dc935f49c58cb7ff10286d7aa926a38e14f5bcf4ebe89e99dc3ab63f93f3738c\" returns successfully" Jan 29 11:36:28.298313 containerd[1959]: 
time="2025-01-29T11:36:28.298302187Z" level=info msg="StopPodSandbox for \"50b2a311df9ed9cf36cfe9d6d7dda9c2d68e41a4a294af0d678e3afcff46730a\"" Jan 29 11:36:28.298360 containerd[1959]: time="2025-01-29T11:36:28.298349481Z" level=info msg="TearDown network for sandbox \"50b2a311df9ed9cf36cfe9d6d7dda9c2d68e41a4a294af0d678e3afcff46730a\" successfully" Jan 29 11:36:28.298394 containerd[1959]: time="2025-01-29T11:36:28.298359887Z" level=info msg="StopPodSandbox for \"50b2a311df9ed9cf36cfe9d6d7dda9c2d68e41a4a294af0d678e3afcff46730a\" returns successfully" Jan 29 11:36:28.298468 containerd[1959]: time="2025-01-29T11:36:28.298459984Z" level=info msg="StopPodSandbox for \"a6554e61854b09ba0e2d8e8e67019bff062e0bde1f2529058a5beaa146335098\"" Jan 29 11:36:28.298498 containerd[1959]: time="2025-01-29T11:36:28.298492013Z" level=info msg="TearDown network for sandbox \"a6554e61854b09ba0e2d8e8e67019bff062e0bde1f2529058a5beaa146335098\" successfully" Jan 29 11:36:28.298523 containerd[1959]: time="2025-01-29T11:36:28.298499406Z" level=info msg="StopPodSandbox for \"a6554e61854b09ba0e2d8e8e67019bff062e0bde1f2529058a5beaa146335098\" returns successfully" Jan 29 11:36:28.298603 containerd[1959]: time="2025-01-29T11:36:28.298594600Z" level=info msg="StopPodSandbox for \"44973b2d7ba85c6b98b0cde9774715a152cc5701f4a8878b5abbc7fbe6f76a72\"" Jan 29 11:36:28.298629 kubelet[3444]: I0129 11:36:28.298597 3444 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5bdb5296b226ef8edb82e2eed72289b6d99350f1ac604a75468135e82bb7c86f" Jan 29 11:36:28.298660 containerd[1959]: time="2025-01-29T11:36:28.298631597Z" level=info msg="TearDown network for sandbox \"44973b2d7ba85c6b98b0cde9774715a152cc5701f4a8878b5abbc7fbe6f76a72\" successfully" Jan 29 11:36:28.298660 containerd[1959]: time="2025-01-29T11:36:28.298640381Z" level=info msg="StopPodSandbox for \"44973b2d7ba85c6b98b0cde9774715a152cc5701f4a8878b5abbc7fbe6f76a72\" returns successfully" Jan 29 11:36:28.298807 containerd[1959]: time="2025-01-29T11:36:28.298797029Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4frhm,Uid:07bb4f50-b419-4106-95cb-874077564881,Namespace:calico-system,Attempt:4,}" Jan 29 11:36:28.298838 containerd[1959]: time="2025-01-29T11:36:28.298821118Z" level=info msg="StopPodSandbox for \"5bdb5296b226ef8edb82e2eed72289b6d99350f1ac604a75468135e82bb7c86f\"" Jan 29 11:36:28.298932 containerd[1959]: time="2025-01-29T11:36:28.298923460Z" level=info msg="Ensure that sandbox 5bdb5296b226ef8edb82e2eed72289b6d99350f1ac604a75468135e82bb7c86f in task-service has been cleanup successfully" Jan 29 11:36:28.299000 systemd[1]: run-netns-cni\x2dbed77833\x2da002\x2d7201\x2d667e\x2dd4e1f38b4dd6.mount: Deactivated successfully. Jan 29 11:36:28.299055 containerd[1959]: time="2025-01-29T11:36:28.298996798Z" level=info msg="TearDown network for sandbox \"5bdb5296b226ef8edb82e2eed72289b6d99350f1ac604a75468135e82bb7c86f\" successfully" Jan 29 11:36:28.299055 containerd[1959]: time="2025-01-29T11:36:28.299004871Z" level=info msg="StopPodSandbox for \"5bdb5296b226ef8edb82e2eed72289b6d99350f1ac604a75468135e82bb7c86f\" returns successfully" Jan 29 11:36:28.299115 containerd[1959]: time="2025-01-29T11:36:28.299101799Z" level=info msg="StopPodSandbox for \"7ca2d386dead55069830e1077ce55338eac73a0ae5279b1def2cf5864f02f316\"" Jan 29 11:36:28.299098 systemd[1]: run-netns-cni\x2d71592cfd\x2decf6\x2dcabe\x2de4f7\x2d27ca46332ed7.mount: Deactivated successfully. 
Jan 29 11:36:28.299179 containerd[1959]: time="2025-01-29T11:36:28.299136026Z" level=info msg="TearDown network for sandbox \"7ca2d386dead55069830e1077ce55338eac73a0ae5279b1def2cf5864f02f316\" successfully" Jan 29 11:36:28.299179 containerd[1959]: time="2025-01-29T11:36:28.299141066Z" level=info msg="StopPodSandbox for \"7ca2d386dead55069830e1077ce55338eac73a0ae5279b1def2cf5864f02f316\" returns successfully" Jan 29 11:36:28.299236 containerd[1959]: time="2025-01-29T11:36:28.299218053Z" level=info msg="StopPodSandbox for \"5694e7b381134602609bc7cbfae0f985c9cc667777f52a910a340fbc95a1bd1e\"" Jan 29 11:36:28.299269 containerd[1959]: time="2025-01-29T11:36:28.299252913Z" level=info msg="TearDown network for sandbox \"5694e7b381134602609bc7cbfae0f985c9cc667777f52a910a340fbc95a1bd1e\" successfully" Jan 29 11:36:28.299269 containerd[1959]: time="2025-01-29T11:36:28.299262154Z" level=info msg="StopPodSandbox for \"5694e7b381134602609bc7cbfae0f985c9cc667777f52a910a340fbc95a1bd1e\" returns successfully" Jan 29 11:36:28.300070 containerd[1959]: time="2025-01-29T11:36:28.299368674Z" level=info msg="StopPodSandbox for \"5658ebe767dc0e48483b50ea60a975db28a3de9d6da7c647b94ccff706b8fc91\"" Jan 29 11:36:28.300070 containerd[1959]: time="2025-01-29T11:36:28.299407504Z" level=info msg="TearDown network for sandbox \"5658ebe767dc0e48483b50ea60a975db28a3de9d6da7c647b94ccff706b8fc91\" successfully" Jan 29 11:36:28.300070 containerd[1959]: time="2025-01-29T11:36:28.299428881Z" level=info msg="StopPodSandbox for \"5658ebe767dc0e48483b50ea60a975db28a3de9d6da7c647b94ccff706b8fc91\" returns successfully" Jan 29 11:36:28.300070 containerd[1959]: time="2025-01-29T11:36:28.299575009Z" level=info msg="StopPodSandbox for \"e36e089bdf3e29fd06399bf98df00b1e52fb1c140ed5e59fb513d6f5589962f3\"" Jan 29 11:36:28.300070 containerd[1959]: time="2025-01-29T11:36:28.299602441Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5849fd7c66-k8fpk,Uid:db110898-7bf6-46df-89f1-60bff5c819f3,Namespace:calico-system,Attempt:4,}" Jan 29 11:36:28.300070 containerd[1959]: time="2025-01-29T11:36:28.299659836Z" level=info msg="Ensure that sandbox e36e089bdf3e29fd06399bf98df00b1e52fb1c140ed5e59fb513d6f5589962f3 in task-service has been cleanup successfully" Jan 29 11:36:28.300070 containerd[1959]: time="2025-01-29T11:36:28.299725954Z" level=info msg="TearDown network for sandbox \"e36e089bdf3e29fd06399bf98df00b1e52fb1c140ed5e59fb513d6f5589962f3\" successfully" Jan 29 11:36:28.300070 containerd[1959]: time="2025-01-29T11:36:28.299732896Z" level=info msg="StopPodSandbox for \"e36e089bdf3e29fd06399bf98df00b1e52fb1c140ed5e59fb513d6f5589962f3\" returns successfully" Jan 29 11:36:28.300070 containerd[1959]: time="2025-01-29T11:36:28.299833277Z" level=info msg="StopPodSandbox for \"5d4369b81514ab52a3fedb3fd28dbefc0cc89cd73284216d2c5985dad6850440\"" Jan 29 11:36:28.300070 containerd[1959]: time="2025-01-29T11:36:28.299873364Z" level=info msg="TearDown network for sandbox \"5d4369b81514ab52a3fedb3fd28dbefc0cc89cd73284216d2c5985dad6850440\" successfully" Jan 29 11:36:28.300070 containerd[1959]: time="2025-01-29T11:36:28.299879467Z" level=info msg="StopPodSandbox for \"5d4369b81514ab52a3fedb3fd28dbefc0cc89cd73284216d2c5985dad6850440\" returns successfully" Jan 29 11:36:28.300070 containerd[1959]: time="2025-01-29T11:36:28.299982150Z" level=info msg="StopPodSandbox for \"c665f6707dda0cf1824f155296f77f721f7d636a6e852f658be95b4f6c6ea946\"" Jan 29 11:36:28.300070 containerd[1959]: time="2025-01-29T11:36:28.300032892Z" level=info 
msg="TearDown network for sandbox \"c665f6707dda0cf1824f155296f77f721f7d636a6e852f658be95b4f6c6ea946\" successfully" Jan 29 11:36:28.300070 containerd[1959]: time="2025-01-29T11:36:28.300042651Z" level=info msg="StopPodSandbox for \"c665f6707dda0cf1824f155296f77f721f7d636a6e852f658be95b4f6c6ea946\" returns successfully" Jan 29 11:36:28.300363 kubelet[3444]: I0129 11:36:28.299372 3444 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e36e089bdf3e29fd06399bf98df00b1e52fb1c140ed5e59fb513d6f5589962f3" Jan 29 11:36:28.300393 containerd[1959]: time="2025-01-29T11:36:28.300160577Z" level=info msg="StopPodSandbox for \"886652190920c7c881375256c43dee442e3425539772965d5d5bada2eddca37f\"" Jan 29 11:36:28.300393 containerd[1959]: time="2025-01-29T11:36:28.300208434Z" level=info msg="TearDown network for sandbox \"886652190920c7c881375256c43dee442e3425539772965d5d5bada2eddca37f\" successfully" Jan 29 11:36:28.300393 containerd[1959]: time="2025-01-29T11:36:28.300217283Z" level=info msg="StopPodSandbox for \"886652190920c7c881375256c43dee442e3425539772965d5d5bada2eddca37f\" returns successfully" Jan 29 11:36:28.300447 containerd[1959]: time="2025-01-29T11:36:28.300399276Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-bs4sr,Uid:c7ba5fb2-2386-4664-9fd0-5429e827f425,Namespace:kube-system,Attempt:4,}" Jan 29 11:36:28.301370 systemd[1]: run-netns-cni\x2dc5d7550a\x2d9296\x2d9d92\x2d8296\x2d46e2539beef9.mount: Deactivated successfully. Jan 29 11:36:28.301438 systemd[1]: run-netns-cni\x2db64c6b52\x2dc527\x2d821f\x2d563e\x2d2cb34fc8884c.mount: Deactivated successfully. Jan 29 11:36:28.301497 systemd[1]: run-netns-cni\x2d70a0b683\x2d1f07\x2d198e\x2d1520\x2dde76f470001d.mount: Deactivated successfully. Jan 29 11:36:28.331488 containerd[1959]: time="2025-01-29T11:36:28.331461955Z" level=error msg="Failed to destroy network for sandbox \"5b067760630e2857f342e18d1fc61b30e2ba1ae30aa5c0bf5ac02d01bbde9eb0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:36:28.331669 containerd[1959]: time="2025-01-29T11:36:28.331655651Z" level=error msg="encountered an error cleaning up failed sandbox \"5b067760630e2857f342e18d1fc61b30e2ba1ae30aa5c0bf5ac02d01bbde9eb0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:36:28.331704 containerd[1959]: time="2025-01-29T11:36:28.331685615Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-66755f7995-nr82b,Uid:bd3d55b5-a627-418c-9156-9c5f11420d4d,Namespace:calico-apiserver,Attempt:4,} failed, error" error="failed to setup network for sandbox \"5b067760630e2857f342e18d1fc61b30e2ba1ae30aa5c0bf5ac02d01bbde9eb0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:36:28.331866 kubelet[3444]: E0129 11:36:28.331847 3444 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5b067760630e2857f342e18d1fc61b30e2ba1ae30aa5c0bf5ac02d01bbde9eb0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:36:28.331901 kubelet[3444]: E0129 11:36:28.331882 3444 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5b067760630e2857f342e18d1fc61b30e2ba1ae30aa5c0bf5ac02d01bbde9eb0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-66755f7995-nr82b" Jan 29 11:36:28.331901 kubelet[3444]: E0129 11:36:28.331896 3444 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5b067760630e2857f342e18d1fc61b30e2ba1ae30aa5c0bf5ac02d01bbde9eb0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-66755f7995-nr82b" Jan 29 11:36:28.331938 kubelet[3444]: E0129 11:36:28.331922 3444 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-66755f7995-nr82b_calico-apiserver(bd3d55b5-a627-418c-9156-9c5f11420d4d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-66755f7995-nr82b_calico-apiserver(bd3d55b5-a627-418c-9156-9c5f11420d4d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5b067760630e2857f342e18d1fc61b30e2ba1ae30aa5c0bf5ac02d01bbde9eb0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-66755f7995-nr82b" podUID="bd3d55b5-a627-418c-9156-9c5f11420d4d" Jan 29 11:36:28.361658 containerd[1959]: time="2025-01-29T11:36:28.361622339Z" level=error msg="Failed to destroy network for sandbox \"d501de68d71938a222ee2411c90a700a5f1f99f8598eabd2ed40a5c21d1ae944\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:36:28.361841 containerd[1959]: time="2025-01-29T11:36:28.361826687Z" level=error msg="encountered an error cleaning up failed sandbox \"d501de68d71938a222ee2411c90a700a5f1f99f8598eabd2ed40a5c21d1ae944\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:36:28.361886 containerd[1959]: time="2025-01-29T11:36:28.361868992Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-66755f7995-4p25x,Uid:5acf5fde-1589-42fe-98f2-79caa06a6364,Namespace:calico-apiserver,Attempt:4,} failed, error" error="failed to setup network for sandbox \"d501de68d71938a222ee2411c90a700a5f1f99f8598eabd2ed40a5c21d1ae944\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:36:28.362059 kubelet[3444]: E0129 11:36:28.362040 3444 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"d501de68d71938a222ee2411c90a700a5f1f99f8598eabd2ed40a5c21d1ae944\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:36:28.362092 kubelet[3444]: E0129 11:36:28.362085 3444 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d501de68d71938a222ee2411c90a700a5f1f99f8598eabd2ed40a5c21d1ae944\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-66755f7995-4p25x" Jan 29 11:36:28.362111 kubelet[3444]: E0129 11:36:28.362098 3444 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d501de68d71938a222ee2411c90a700a5f1f99f8598eabd2ed40a5c21d1ae944\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-66755f7995-4p25x" Jan 29 11:36:28.362139 kubelet[3444]: E0129 11:36:28.362125 3444 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-66755f7995-4p25x_calico-apiserver(5acf5fde-1589-42fe-98f2-79caa06a6364)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-66755f7995-4p25x_calico-apiserver(5acf5fde-1589-42fe-98f2-79caa06a6364)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d501de68d71938a222ee2411c90a700a5f1f99f8598eabd2ed40a5c21d1ae944\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-66755f7995-4p25x" podUID="5acf5fde-1589-42fe-98f2-79caa06a6364" Jan 29 11:36:28.362210 containerd[1959]: time="2025-01-29T11:36:28.362193464Z" level=error msg="Failed to destroy network for sandbox \"5296f99b4dc62132255f4fe0c516b2c8227f47489f448903421874c15dbf31a3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:36:28.362274 containerd[1959]: time="2025-01-29T11:36:28.362193703Z" level=error msg="Failed to destroy network for sandbox \"6c1d0212cbc332989fda7ce836664606eb810f71de0743f71294519338ec028b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:36:28.362428 containerd[1959]: time="2025-01-29T11:36:28.362391154Z" level=error msg="encountered an error cleaning up failed sandbox \"5296f99b4dc62132255f4fe0c516b2c8227f47489f448903421874c15dbf31a3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:36:28.362428 containerd[1959]: time="2025-01-29T11:36:28.362414420Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:csi-node-driver-4frhm,Uid:07bb4f50-b419-4106-95cb-874077564881,Namespace:calico-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"5296f99b4dc62132255f4fe0c516b2c8227f47489f448903421874c15dbf31a3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:36:28.362477 containerd[1959]: time="2025-01-29T11:36:28.362450549Z" level=error msg="encountered an error cleaning up failed sandbox \"6c1d0212cbc332989fda7ce836664606eb810f71de0743f71294519338ec028b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:36:28.362502 containerd[1959]: time="2025-01-29T11:36:28.362481705Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-bs4sr,Uid:c7ba5fb2-2386-4664-9fd0-5429e827f425,Namespace:kube-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"6c1d0212cbc332989fda7ce836664606eb810f71de0743f71294519338ec028b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:36:28.362527 kubelet[3444]: E0129 11:36:28.362490 3444 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5296f99b4dc62132255f4fe0c516b2c8227f47489f448903421874c15dbf31a3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:36:28.362527 kubelet[3444]: E0129 11:36:28.362514 3444 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5296f99b4dc62132255f4fe0c516b2c8227f47489f448903421874c15dbf31a3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-4frhm" Jan 29 11:36:28.362589 kubelet[3444]: E0129 11:36:28.362531 3444 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5296f99b4dc62132255f4fe0c516b2c8227f47489f448903421874c15dbf31a3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-4frhm" Jan 29 11:36:28.362589 kubelet[3444]: E0129 11:36:28.362560 3444 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6c1d0212cbc332989fda7ce836664606eb810f71de0743f71294519338ec028b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:36:28.362589 kubelet[3444]: E0129 11:36:28.362555 3444 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-4frhm_calico-system(07bb4f50-b419-4106-95cb-874077564881)\" with 
CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-4frhm_calico-system(07bb4f50-b419-4106-95cb-874077564881)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5296f99b4dc62132255f4fe0c516b2c8227f47489f448903421874c15dbf31a3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-4frhm" podUID="07bb4f50-b419-4106-95cb-874077564881" Jan 29 11:36:28.362666 kubelet[3444]: E0129 11:36:28.362591 3444 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6c1d0212cbc332989fda7ce836664606eb810f71de0743f71294519338ec028b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-bs4sr" Jan 29 11:36:28.362666 kubelet[3444]: E0129 11:36:28.362608 3444 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6c1d0212cbc332989fda7ce836664606eb810f71de0743f71294519338ec028b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-bs4sr" Jan 29 11:36:28.362666 kubelet[3444]: E0129 11:36:28.362634 3444 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-bs4sr_kube-system(c7ba5fb2-2386-4664-9fd0-5429e827f425)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-bs4sr_kube-system(c7ba5fb2-2386-4664-9fd0-5429e827f425)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6c1d0212cbc332989fda7ce836664606eb810f71de0743f71294519338ec028b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-bs4sr" podUID="c7ba5fb2-2386-4664-9fd0-5429e827f425" Jan 29 11:36:28.363046 containerd[1959]: time="2025-01-29T11:36:28.363030512Z" level=error msg="Failed to destroy network for sandbox \"fffb48256745a11d2225d2865e66a1e5833905a0af4313fe3e7f0b88d41a9241\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:36:28.363194 containerd[1959]: time="2025-01-29T11:36:28.363179376Z" level=error msg="encountered an error cleaning up failed sandbox \"fffb48256745a11d2225d2865e66a1e5833905a0af4313fe3e7f0b88d41a9241\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:36:28.363229 containerd[1959]: time="2025-01-29T11:36:28.363203853Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5849fd7c66-k8fpk,Uid:db110898-7bf6-46df-89f1-60bff5c819f3,Namespace:calico-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"fffb48256745a11d2225d2865e66a1e5833905a0af4313fe3e7f0b88d41a9241\": 
plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:36:28.363278 kubelet[3444]: E0129 11:36:28.363267 3444 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fffb48256745a11d2225d2865e66a1e5833905a0af4313fe3e7f0b88d41a9241\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:36:28.363309 kubelet[3444]: E0129 11:36:28.363285 3444 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fffb48256745a11d2225d2865e66a1e5833905a0af4313fe3e7f0b88d41a9241\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5849fd7c66-k8fpk" Jan 29 11:36:28.363309 kubelet[3444]: E0129 11:36:28.363301 3444 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fffb48256745a11d2225d2865e66a1e5833905a0af4313fe3e7f0b88d41a9241\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5849fd7c66-k8fpk" Jan 29 11:36:28.363346 kubelet[3444]: E0129 11:36:28.363324 3444 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5849fd7c66-k8fpk_calico-system(db110898-7bf6-46df-89f1-60bff5c819f3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5849fd7c66-k8fpk_calico-system(db110898-7bf6-46df-89f1-60bff5c819f3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fffb48256745a11d2225d2865e66a1e5833905a0af4313fe3e7f0b88d41a9241\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5849fd7c66-k8fpk" podUID="db110898-7bf6-46df-89f1-60bff5c819f3" Jan 29 11:36:28.364081 containerd[1959]: time="2025-01-29T11:36:28.364068326Z" level=error msg="Failed to destroy network for sandbox \"a59785fd33852d06b16401b37f7076c8d9496706e1462c0acbdbed401b46a554\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:36:28.364223 containerd[1959]: time="2025-01-29T11:36:28.364208832Z" level=error msg="encountered an error cleaning up failed sandbox \"a59785fd33852d06b16401b37f7076c8d9496706e1462c0acbdbed401b46a554\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:36:28.364253 containerd[1959]: time="2025-01-29T11:36:28.364232036Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-7db6d8ff4d-jgtzn,Uid:fa522645-0537-44d1-a6b0-ea427a3e77da,Namespace:kube-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"a59785fd33852d06b16401b37f7076c8d9496706e1462c0acbdbed401b46a554\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:36:28.364327 kubelet[3444]: E0129 11:36:28.364289 3444 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a59785fd33852d06b16401b37f7076c8d9496706e1462c0acbdbed401b46a554\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:36:28.364327 kubelet[3444]: E0129 11:36:28.364314 3444 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a59785fd33852d06b16401b37f7076c8d9496706e1462c0acbdbed401b46a554\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-jgtzn" Jan 29 11:36:28.364377 kubelet[3444]: E0129 11:36:28.364328 3444 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a59785fd33852d06b16401b37f7076c8d9496706e1462c0acbdbed401b46a554\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-jgtzn" Jan 29 11:36:28.364377 kubelet[3444]: E0129 11:36:28.364347 3444 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-jgtzn_kube-system(fa522645-0537-44d1-a6b0-ea427a3e77da)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-jgtzn_kube-system(fa522645-0537-44d1-a6b0-ea427a3e77da)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a59785fd33852d06b16401b37f7076c8d9496706e1462c0acbdbed401b46a554\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-jgtzn" podUID="fa522645-0537-44d1-a6b0-ea427a3e77da" Jan 29 11:36:28.457991 containerd[1959]: time="2025-01-29T11:36:28.457939842Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:36:28.458177 containerd[1959]: time="2025-01-29T11:36:28.458122048Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010" Jan 29 11:36:28.458544 containerd[1959]: time="2025-01-29T11:36:28.458505307Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:36:28.459351 containerd[1959]: time="2025-01-29T11:36:28.459316397Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:36:28.459690 containerd[1959]: time="2025-01-29T11:36:28.459648597Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 3.188296091s" Jan 29 11:36:28.459690 containerd[1959]: time="2025-01-29T11:36:28.459662925Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\"" Jan 29 11:36:28.463135 containerd[1959]: time="2025-01-29T11:36:28.463089955Z" level=info msg="CreateContainer within sandbox \"abf1198b511bebad0cf7bd2f338ca6970547bfdc5dfd72245d568d435706895b\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 29 11:36:28.470534 containerd[1959]: time="2025-01-29T11:36:28.470492602Z" level=info msg="CreateContainer within sandbox \"abf1198b511bebad0cf7bd2f338ca6970547bfdc5dfd72245d568d435706895b\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"9a6cc89863157aedeb43a2977f999d2ed6ea7ee9d1a21b1aa0b1c32e5d8c495b\"" Jan 29 11:36:28.470734 containerd[1959]: time="2025-01-29T11:36:28.470686885Z" level=info msg="StartContainer for \"9a6cc89863157aedeb43a2977f999d2ed6ea7ee9d1a21b1aa0b1c32e5d8c495b\"" Jan 29 11:36:28.513444 containerd[1959]: time="2025-01-29T11:36:28.513419785Z" level=info msg="StartContainer for \"9a6cc89863157aedeb43a2977f999d2ed6ea7ee9d1a21b1aa0b1c32e5d8c495b\" returns successfully" Jan 29 11:36:28.577623 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 29 11:36:28.577674 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jan 29 11:36:28.643026 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5b067760630e2857f342e18d1fc61b30e2ba1ae30aa5c0bf5ac02d01bbde9eb0-shm.mount: Deactivated successfully. Jan 29 11:36:28.643116 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1149447312.mount: Deactivated successfully. 
Jan 29 11:36:29.307317 kubelet[3444]: I0129 11:36:29.307235 3444 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a59785fd33852d06b16401b37f7076c8d9496706e1462c0acbdbed401b46a554" Jan 29 11:36:29.308586 containerd[1959]: time="2025-01-29T11:36:29.308509805Z" level=info msg="StopPodSandbox for \"a59785fd33852d06b16401b37f7076c8d9496706e1462c0acbdbed401b46a554\"" Jan 29 11:36:29.309236 containerd[1959]: time="2025-01-29T11:36:29.309047012Z" level=info msg="Ensure that sandbox a59785fd33852d06b16401b37f7076c8d9496706e1462c0acbdbed401b46a554 in task-service has been cleanup successfully" Jan 29 11:36:29.309544 containerd[1959]: time="2025-01-29T11:36:29.309490803Z" level=info msg="TearDown network for sandbox \"a59785fd33852d06b16401b37f7076c8d9496706e1462c0acbdbed401b46a554\" successfully" Jan 29 11:36:29.309651 containerd[1959]: time="2025-01-29T11:36:29.309542145Z" level=info msg="StopPodSandbox for \"a59785fd33852d06b16401b37f7076c8d9496706e1462c0acbdbed401b46a554\" returns successfully" Jan 29 11:36:29.310210 containerd[1959]: time="2025-01-29T11:36:29.310146992Z" level=info msg="StopPodSandbox for \"4c447c3ec9a8061e58dd1d261331ca98ac767090c88c05df28bc84122437ad79\"" Jan 29 11:36:29.310475 containerd[1959]: time="2025-01-29T11:36:29.310389194Z" level=info msg="TearDown network for sandbox \"4c447c3ec9a8061e58dd1d261331ca98ac767090c88c05df28bc84122437ad79\" successfully" Jan 29 11:36:29.310475 containerd[1959]: time="2025-01-29T11:36:29.310432316Z" level=info msg="StopPodSandbox for \"4c447c3ec9a8061e58dd1d261331ca98ac767090c88c05df28bc84122437ad79\" returns successfully" Jan 29 11:36:29.311120 containerd[1959]: time="2025-01-29T11:36:29.311066308Z" level=info msg="StopPodSandbox for \"73de4b711e3bba88129c03b1a00519f1bea2d8e8476301f370d9845ad3d9c40f\"" Jan 29 11:36:29.311306 containerd[1959]: time="2025-01-29T11:36:29.311258222Z" level=info msg="TearDown network for sandbox \"73de4b711e3bba88129c03b1a00519f1bea2d8e8476301f370d9845ad3d9c40f\" successfully" Jan 29 11:36:29.311439 containerd[1959]: time="2025-01-29T11:36:29.311291784Z" level=info msg="StopPodSandbox for \"73de4b711e3bba88129c03b1a00519f1bea2d8e8476301f370d9845ad3d9c40f\" returns successfully" Jan 29 11:36:29.312069 containerd[1959]: time="2025-01-29T11:36:29.312009619Z" level=info msg="StopPodSandbox for \"e5569f59dd8a32613d2cc67e5cdcfac99ad5cfdd0032464c90a9128a9e3de0ff\"" Jan 29 11:36:29.312278 containerd[1959]: time="2025-01-29T11:36:29.312231117Z" level=info msg="TearDown network for sandbox \"e5569f59dd8a32613d2cc67e5cdcfac99ad5cfdd0032464c90a9128a9e3de0ff\" successfully" Jan 29 11:36:29.312278 containerd[1959]: time="2025-01-29T11:36:29.312266028Z" level=info msg="StopPodSandbox for \"e5569f59dd8a32613d2cc67e5cdcfac99ad5cfdd0032464c90a9128a9e3de0ff\" returns successfully" Jan 29 11:36:29.312494 kubelet[3444]: I0129 11:36:29.312070 3444 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5296f99b4dc62132255f4fe0c516b2c8227f47489f448903421874c15dbf31a3" Jan 29 11:36:29.312835 containerd[1959]: time="2025-01-29T11:36:29.312786281Z" level=info msg="StopPodSandbox for \"310114bfd28ce07f86219fef0fb39f8118f5d712562004daea20b9c90f2a69d8\"" Jan 29 11:36:29.313056 containerd[1959]: time="2025-01-29T11:36:29.313009627Z" level=info msg="TearDown network for sandbox \"310114bfd28ce07f86219fef0fb39f8118f5d712562004daea20b9c90f2a69d8\" successfully" Jan 29 11:36:29.313146 containerd[1959]: time="2025-01-29T11:36:29.313060286Z" level=info msg="StopPodSandbox for 
\"310114bfd28ce07f86219fef0fb39f8118f5d712562004daea20b9c90f2a69d8\" returns successfully" Jan 29 11:36:29.313254 containerd[1959]: time="2025-01-29T11:36:29.313135336Z" level=info msg="StopPodSandbox for \"5296f99b4dc62132255f4fe0c516b2c8227f47489f448903421874c15dbf31a3\"" Jan 29 11:36:29.313839 containerd[1959]: time="2025-01-29T11:36:29.313752682Z" level=info msg="Ensure that sandbox 5296f99b4dc62132255f4fe0c516b2c8227f47489f448903421874c15dbf31a3 in task-service has been cleanup successfully" Jan 29 11:36:29.314011 containerd[1959]: time="2025-01-29T11:36:29.313964783Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-jgtzn,Uid:fa522645-0537-44d1-a6b0-ea427a3e77da,Namespace:kube-system,Attempt:5,}" Jan 29 11:36:29.314180 containerd[1959]: time="2025-01-29T11:36:29.314137219Z" level=info msg="TearDown network for sandbox \"5296f99b4dc62132255f4fe0c516b2c8227f47489f448903421874c15dbf31a3\" successfully" Jan 29 11:36:29.314274 containerd[1959]: time="2025-01-29T11:36:29.314181587Z" level=info msg="StopPodSandbox for \"5296f99b4dc62132255f4fe0c516b2c8227f47489f448903421874c15dbf31a3\" returns successfully" Jan 29 11:36:29.314750 containerd[1959]: time="2025-01-29T11:36:29.314737135Z" level=info msg="StopPodSandbox for \"dc935f49c58cb7ff10286d7aa926a38e14f5bcf4ebe89e99dc3ab63f93f3738c\"" Jan 29 11:36:29.314845 containerd[1959]: time="2025-01-29T11:36:29.314798054Z" level=info msg="TearDown network for sandbox \"dc935f49c58cb7ff10286d7aa926a38e14f5bcf4ebe89e99dc3ab63f93f3738c\" successfully" Jan 29 11:36:29.314845 containerd[1959]: time="2025-01-29T11:36:29.314805709Z" level=info msg="StopPodSandbox for \"dc935f49c58cb7ff10286d7aa926a38e14f5bcf4ebe89e99dc3ab63f93f3738c\" returns successfully" Jan 29 11:36:29.314926 containerd[1959]: time="2025-01-29T11:36:29.314915209Z" level=info msg="StopPodSandbox for \"50b2a311df9ed9cf36cfe9d6d7dda9c2d68e41a4a294af0d678e3afcff46730a\"" Jan 29 11:36:29.314963 containerd[1959]: time="2025-01-29T11:36:29.314954274Z" level=info msg="TearDown network for sandbox \"50b2a311df9ed9cf36cfe9d6d7dda9c2d68e41a4a294af0d678e3afcff46730a\" successfully" Jan 29 11:36:29.314982 containerd[1959]: time="2025-01-29T11:36:29.314963590Z" level=info msg="StopPodSandbox for \"50b2a311df9ed9cf36cfe9d6d7dda9c2d68e41a4a294af0d678e3afcff46730a\" returns successfully" Jan 29 11:36:29.315078 containerd[1959]: time="2025-01-29T11:36:29.315066876Z" level=info msg="StopPodSandbox for \"a6554e61854b09ba0e2d8e8e67019bff062e0bde1f2529058a5beaa146335098\"" Jan 29 11:36:29.315137 containerd[1959]: time="2025-01-29T11:36:29.315106172Z" level=info msg="TearDown network for sandbox \"a6554e61854b09ba0e2d8e8e67019bff062e0bde1f2529058a5beaa146335098\" successfully" Jan 29 11:36:29.315165 containerd[1959]: time="2025-01-29T11:36:29.315137783Z" level=info msg="StopPodSandbox for \"a6554e61854b09ba0e2d8e8e67019bff062e0bde1f2529058a5beaa146335098\" returns successfully" Jan 29 11:36:29.315184 kubelet[3444]: I0129 11:36:29.315158 3444 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fffb48256745a11d2225d2865e66a1e5833905a0af4313fe3e7f0b88d41a9241" Jan 29 11:36:29.315244 systemd[1]: run-netns-cni\x2dba080c6b\x2d93ff\x2d0b2e\x2dd897\x2db20dd30701f8.mount: Deactivated successfully. 
Jan 29 11:36:29.315399 containerd[1959]: time="2025-01-29T11:36:29.315259553Z" level=info msg="StopPodSandbox for \"44973b2d7ba85c6b98b0cde9774715a152cc5701f4a8878b5abbc7fbe6f76a72\"" Jan 29 11:36:29.315399 containerd[1959]: time="2025-01-29T11:36:29.315302486Z" level=info msg="TearDown network for sandbox \"44973b2d7ba85c6b98b0cde9774715a152cc5701f4a8878b5abbc7fbe6f76a72\" successfully" Jan 29 11:36:29.315399 containerd[1959]: time="2025-01-29T11:36:29.315309391Z" level=info msg="StopPodSandbox for \"44973b2d7ba85c6b98b0cde9774715a152cc5701f4a8878b5abbc7fbe6f76a72\" returns successfully" Jan 29 11:36:29.315451 containerd[1959]: time="2025-01-29T11:36:29.315404570Z" level=info msg="StopPodSandbox for \"fffb48256745a11d2225d2865e66a1e5833905a0af4313fe3e7f0b88d41a9241\"" Jan 29 11:36:29.315507 containerd[1959]: time="2025-01-29T11:36:29.315496679Z" level=info msg="Ensure that sandbox fffb48256745a11d2225d2865e66a1e5833905a0af4313fe3e7f0b88d41a9241 in task-service has been cleanup successfully" Jan 29 11:36:29.315538 containerd[1959]: time="2025-01-29T11:36:29.315513551Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4frhm,Uid:07bb4f50-b419-4106-95cb-874077564881,Namespace:calico-system,Attempt:5,}" Jan 29 11:36:29.315583 containerd[1959]: time="2025-01-29T11:36:29.315573080Z" level=info msg="TearDown network for sandbox \"fffb48256745a11d2225d2865e66a1e5833905a0af4313fe3e7f0b88d41a9241\" successfully" Jan 29 11:36:29.315614 containerd[1959]: time="2025-01-29T11:36:29.315581861Z" level=info msg="StopPodSandbox for \"fffb48256745a11d2225d2865e66a1e5833905a0af4313fe3e7f0b88d41a9241\" returns successfully" Jan 29 11:36:29.315692 containerd[1959]: time="2025-01-29T11:36:29.315680677Z" level=info msg="StopPodSandbox for \"5bdb5296b226ef8edb82e2eed72289b6d99350f1ac604a75468135e82bb7c86f\"" Jan 29 11:36:29.315730 containerd[1959]: time="2025-01-29T11:36:29.315722285Z" level=info msg="TearDown network for sandbox \"5bdb5296b226ef8edb82e2eed72289b6d99350f1ac604a75468135e82bb7c86f\" successfully" Jan 29 11:36:29.315753 containerd[1959]: time="2025-01-29T11:36:29.315730305Z" level=info msg="StopPodSandbox for \"5bdb5296b226ef8edb82e2eed72289b6d99350f1ac604a75468135e82bb7c86f\" returns successfully" Jan 29 11:36:29.315862 containerd[1959]: time="2025-01-29T11:36:29.315850482Z" level=info msg="StopPodSandbox for \"7ca2d386dead55069830e1077ce55338eac73a0ae5279b1def2cf5864f02f316\"" Jan 29 11:36:29.315909 containerd[1959]: time="2025-01-29T11:36:29.315899703Z" level=info msg="TearDown network for sandbox \"7ca2d386dead55069830e1077ce55338eac73a0ae5279b1def2cf5864f02f316\" successfully" Jan 29 11:36:29.315942 containerd[1959]: time="2025-01-29T11:36:29.315908938Z" level=info msg="StopPodSandbox for \"7ca2d386dead55069830e1077ce55338eac73a0ae5279b1def2cf5864f02f316\" returns successfully" Jan 29 11:36:29.316056 containerd[1959]: time="2025-01-29T11:36:29.316044473Z" level=info msg="StopPodSandbox for \"5694e7b381134602609bc7cbfae0f985c9cc667777f52a910a340fbc95a1bd1e\"" Jan 29 11:36:29.316109 containerd[1959]: time="2025-01-29T11:36:29.316099720Z" level=info msg="TearDown network for sandbox \"5694e7b381134602609bc7cbfae0f985c9cc667777f52a910a340fbc95a1bd1e\" successfully" Jan 29 11:36:29.316142 containerd[1959]: time="2025-01-29T11:36:29.316110468Z" level=info msg="StopPodSandbox for \"5694e7b381134602609bc7cbfae0f985c9cc667777f52a910a340fbc95a1bd1e\" returns successfully" Jan 29 11:36:29.316177 kubelet[3444]: I0129 11:36:29.316166 3444 pod_container_deletor.go:80] "Container not 
found in pod's containers" containerID="6c1d0212cbc332989fda7ce836664606eb810f71de0743f71294519338ec028b" Jan 29 11:36:29.316254 containerd[1959]: time="2025-01-29T11:36:29.316241803Z" level=info msg="StopPodSandbox for \"5658ebe767dc0e48483b50ea60a975db28a3de9d6da7c647b94ccff706b8fc91\"" Jan 29 11:36:29.316306 containerd[1959]: time="2025-01-29T11:36:29.316292325Z" level=info msg="TearDown network for sandbox \"5658ebe767dc0e48483b50ea60a975db28a3de9d6da7c647b94ccff706b8fc91\" successfully" Jan 29 11:36:29.316333 containerd[1959]: time="2025-01-29T11:36:29.316306903Z" level=info msg="StopPodSandbox for \"5658ebe767dc0e48483b50ea60a975db28a3de9d6da7c647b94ccff706b8fc91\" returns successfully" Jan 29 11:36:29.316449 containerd[1959]: time="2025-01-29T11:36:29.316441204Z" level=info msg="StopPodSandbox for \"6c1d0212cbc332989fda7ce836664606eb810f71de0743f71294519338ec028b\"" Jan 29 11:36:29.316487 containerd[1959]: time="2025-01-29T11:36:29.316460753Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5849fd7c66-k8fpk,Uid:db110898-7bf6-46df-89f1-60bff5c819f3,Namespace:calico-system,Attempt:5,}" Jan 29 11:36:29.316725 containerd[1959]: time="2025-01-29T11:36:29.316531740Z" level=info msg="Ensure that sandbox 6c1d0212cbc332989fda7ce836664606eb810f71de0743f71294519338ec028b in task-service has been cleanup successfully" Jan 29 11:36:29.316725 containerd[1959]: time="2025-01-29T11:36:29.316615652Z" level=info msg="TearDown network for sandbox \"6c1d0212cbc332989fda7ce836664606eb810f71de0743f71294519338ec028b\" successfully" Jan 29 11:36:29.316725 containerd[1959]: time="2025-01-29T11:36:29.316626226Z" level=info msg="StopPodSandbox for \"6c1d0212cbc332989fda7ce836664606eb810f71de0743f71294519338ec028b\" returns successfully" Jan 29 11:36:29.316782 containerd[1959]: time="2025-01-29T11:36:29.316721073Z" level=info msg="StopPodSandbox for \"e36e089bdf3e29fd06399bf98df00b1e52fb1c140ed5e59fb513d6f5589962f3\"" Jan 29 11:36:29.316782 containerd[1959]: time="2025-01-29T11:36:29.316772751Z" level=info msg="TearDown network for sandbox \"e36e089bdf3e29fd06399bf98df00b1e52fb1c140ed5e59fb513d6f5589962f3\" successfully" Jan 29 11:36:29.316821 containerd[1959]: time="2025-01-29T11:36:29.316783445Z" level=info msg="StopPodSandbox for \"e36e089bdf3e29fd06399bf98df00b1e52fb1c140ed5e59fb513d6f5589962f3\" returns successfully" Jan 29 11:36:29.316911 containerd[1959]: time="2025-01-29T11:36:29.316900359Z" level=info msg="StopPodSandbox for \"5d4369b81514ab52a3fedb3fd28dbefc0cc89cd73284216d2c5985dad6850440\"" Jan 29 11:36:29.316958 containerd[1959]: time="2025-01-29T11:36:29.316950682Z" level=info msg="TearDown network for sandbox \"5d4369b81514ab52a3fedb3fd28dbefc0cc89cd73284216d2c5985dad6850440\" successfully" Jan 29 11:36:29.316984 containerd[1959]: time="2025-01-29T11:36:29.316958388Z" level=info msg="StopPodSandbox for \"5d4369b81514ab52a3fedb3fd28dbefc0cc89cd73284216d2c5985dad6850440\" returns successfully" Jan 29 11:36:29.317058 containerd[1959]: time="2025-01-29T11:36:29.317048418Z" level=info msg="StopPodSandbox for \"c665f6707dda0cf1824f155296f77f721f7d636a6e852f658be95b4f6c6ea946\"" Jan 29 11:36:29.317108 containerd[1959]: time="2025-01-29T11:36:29.317099721Z" level=info msg="TearDown network for sandbox \"c665f6707dda0cf1824f155296f77f721f7d636a6e852f658be95b4f6c6ea946\" successfully" Jan 29 11:36:29.317140 containerd[1959]: time="2025-01-29T11:36:29.317108183Z" level=info msg="StopPodSandbox for \"c665f6707dda0cf1824f155296f77f721f7d636a6e852f658be95b4f6c6ea946\" 
returns successfully" Jan 29 11:36:29.317248 containerd[1959]: time="2025-01-29T11:36:29.317238136Z" level=info msg="StopPodSandbox for \"886652190920c7c881375256c43dee442e3425539772965d5d5bada2eddca37f\"" Jan 29 11:36:29.317287 containerd[1959]: time="2025-01-29T11:36:29.317279862Z" level=info msg="TearDown network for sandbox \"886652190920c7c881375256c43dee442e3425539772965d5d5bada2eddca37f\" successfully" Jan 29 11:36:29.317320 containerd[1959]: time="2025-01-29T11:36:29.317286863Z" level=info msg="StopPodSandbox for \"886652190920c7c881375256c43dee442e3425539772965d5d5bada2eddca37f\" returns successfully" Jan 29 11:36:29.317457 containerd[1959]: time="2025-01-29T11:36:29.317446641Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-bs4sr,Uid:c7ba5fb2-2386-4664-9fd0-5429e827f425,Namespace:kube-system,Attempt:5,}" Jan 29 11:36:29.317610 systemd[1]: run-netns-cni\x2d27589659\x2da493\x2de0c1\x2dc3f0\x2d3b0c5b1b5350.mount: Deactivated successfully. Jan 29 11:36:29.317684 systemd[1]: run-netns-cni\x2dfab14b60\x2d7a24\x2dc4f7\x2dc990\x2ddc7eef7acda5.mount: Deactivated successfully. Jan 29 11:36:29.317964 kubelet[3444]: I0129 11:36:29.317956 3444 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5b067760630e2857f342e18d1fc61b30e2ba1ae30aa5c0bf5ac02d01bbde9eb0" Jan 29 11:36:29.318152 containerd[1959]: time="2025-01-29T11:36:29.318142711Z" level=info msg="StopPodSandbox for \"5b067760630e2857f342e18d1fc61b30e2ba1ae30aa5c0bf5ac02d01bbde9eb0\"" Jan 29 11:36:29.318252 containerd[1959]: time="2025-01-29T11:36:29.318244133Z" level=info msg="Ensure that sandbox 5b067760630e2857f342e18d1fc61b30e2ba1ae30aa5c0bf5ac02d01bbde9eb0 in task-service has been cleanup successfully" Jan 29 11:36:29.318339 containerd[1959]: time="2025-01-29T11:36:29.318324625Z" level=info msg="TearDown network for sandbox \"5b067760630e2857f342e18d1fc61b30e2ba1ae30aa5c0bf5ac02d01bbde9eb0\" successfully" Jan 29 11:36:29.318367 containerd[1959]: time="2025-01-29T11:36:29.318342025Z" level=info msg="StopPodSandbox for \"5b067760630e2857f342e18d1fc61b30e2ba1ae30aa5c0bf5ac02d01bbde9eb0\" returns successfully" Jan 29 11:36:29.318439 containerd[1959]: time="2025-01-29T11:36:29.318429484Z" level=info msg="StopPodSandbox for \"9a7b6dacb17724f42d4585c2d6e7d809e0610b18931dad0ccf3049b814f5b2c2\"" Jan 29 11:36:29.318482 containerd[1959]: time="2025-01-29T11:36:29.318474030Z" level=info msg="TearDown network for sandbox \"9a7b6dacb17724f42d4585c2d6e7d809e0610b18931dad0ccf3049b814f5b2c2\" successfully" Jan 29 11:36:29.318511 containerd[1959]: time="2025-01-29T11:36:29.318483347Z" level=info msg="StopPodSandbox for \"9a7b6dacb17724f42d4585c2d6e7d809e0610b18931dad0ccf3049b814f5b2c2\" returns successfully" Jan 29 11:36:29.318619 containerd[1959]: time="2025-01-29T11:36:29.318605184Z" level=info msg="StopPodSandbox for \"493096bffccb48aa6278c21e89442e141756465eb1ded0cac04cc378edbf4be3\"" Jan 29 11:36:29.318671 containerd[1959]: time="2025-01-29T11:36:29.318662757Z" level=info msg="TearDown network for sandbox \"493096bffccb48aa6278c21e89442e141756465eb1ded0cac04cc378edbf4be3\" successfully" Jan 29 11:36:29.318704 containerd[1959]: time="2025-01-29T11:36:29.318670724Z" level=info msg="StopPodSandbox for \"493096bffccb48aa6278c21e89442e141756465eb1ded0cac04cc378edbf4be3\" returns successfully" Jan 29 11:36:29.318824 containerd[1959]: time="2025-01-29T11:36:29.318813719Z" level=info msg="StopPodSandbox for \"adda9a6df16f774f10fd781a7c136d6f6a86ebfc037c4f41f7c02be2ae6b4e09\"" Jan 29 
11:36:29.318863 containerd[1959]: time="2025-01-29T11:36:29.318855148Z" level=info msg="TearDown network for sandbox \"adda9a6df16f774f10fd781a7c136d6f6a86ebfc037c4f41f7c02be2ae6b4e09\" successfully" Jan 29 11:36:29.318885 containerd[1959]: time="2025-01-29T11:36:29.318864432Z" level=info msg="StopPodSandbox for \"adda9a6df16f774f10fd781a7c136d6f6a86ebfc037c4f41f7c02be2ae6b4e09\" returns successfully" Jan 29 11:36:29.318985 kubelet[3444]: I0129 11:36:29.318974 3444 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d501de68d71938a222ee2411c90a700a5f1f99f8598eabd2ed40a5c21d1ae944" Jan 29 11:36:29.319023 containerd[1959]: time="2025-01-29T11:36:29.318985030Z" level=info msg="StopPodSandbox for \"7a18cc63233347cd3f85e3536bd88ae50f2b1fc4b307d9e1526799d3ef0c643b\"" Jan 29 11:36:29.319049 containerd[1959]: time="2025-01-29T11:36:29.319027886Z" level=info msg="TearDown network for sandbox \"7a18cc63233347cd3f85e3536bd88ae50f2b1fc4b307d9e1526799d3ef0c643b\" successfully" Jan 29 11:36:29.319049 containerd[1959]: time="2025-01-29T11:36:29.319034720Z" level=info msg="StopPodSandbox for \"7a18cc63233347cd3f85e3536bd88ae50f2b1fc4b307d9e1526799d3ef0c643b\" returns successfully" Jan 29 11:36:29.319221 containerd[1959]: time="2025-01-29T11:36:29.319210708Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-66755f7995-nr82b,Uid:bd3d55b5-a627-418c-9156-9c5f11420d4d,Namespace:calico-apiserver,Attempt:5,}" Jan 29 11:36:29.319251 containerd[1959]: time="2025-01-29T11:36:29.319225686Z" level=info msg="StopPodSandbox for \"d501de68d71938a222ee2411c90a700a5f1f99f8598eabd2ed40a5c21d1ae944\"" Jan 29 11:36:29.319354 containerd[1959]: time="2025-01-29T11:36:29.319343462Z" level=info msg="Ensure that sandbox d501de68d71938a222ee2411c90a700a5f1f99f8598eabd2ed40a5c21d1ae944 in task-service has been cleanup successfully" Jan 29 11:36:29.319431 containerd[1959]: time="2025-01-29T11:36:29.319422787Z" level=info msg="TearDown network for sandbox \"d501de68d71938a222ee2411c90a700a5f1f99f8598eabd2ed40a5c21d1ae944\" successfully" Jan 29 11:36:29.319431 containerd[1959]: time="2025-01-29T11:36:29.319430529Z" level=info msg="StopPodSandbox for \"d501de68d71938a222ee2411c90a700a5f1f99f8598eabd2ed40a5c21d1ae944\" returns successfully" Jan 29 11:36:29.319559 containerd[1959]: time="2025-01-29T11:36:29.319549269Z" level=info msg="StopPodSandbox for \"9e02fa6573b3e30f728d62f7652ce443be13ed8c960a117a07636e76e471b7ee\"" Jan 29 11:36:29.319604 containerd[1959]: time="2025-01-29T11:36:29.319595490Z" level=info msg="TearDown network for sandbox \"9e02fa6573b3e30f728d62f7652ce443be13ed8c960a117a07636e76e471b7ee\" successfully" Jan 29 11:36:29.319623 containerd[1959]: time="2025-01-29T11:36:29.319605094Z" level=info msg="StopPodSandbox for \"9e02fa6573b3e30f728d62f7652ce443be13ed8c960a117a07636e76e471b7ee\" returns successfully" Jan 29 11:36:29.319722 containerd[1959]: time="2025-01-29T11:36:29.319711456Z" level=info msg="StopPodSandbox for \"7d703463d6f3829374f31b69e71540a1ca0cb4e0362370e01490ab5975a622a7\"" Jan 29 11:36:29.319762 systemd[1]: run-netns-cni\x2d405db3c9\x2d2772\x2d1c0a\x2d5633\x2def3d9ca127c8.mount: Deactivated successfully. 
Jan 29 11:36:29.319817 containerd[1959]: time="2025-01-29T11:36:29.319762844Z" level=info msg="TearDown network for sandbox \"7d703463d6f3829374f31b69e71540a1ca0cb4e0362370e01490ab5975a622a7\" successfully" Jan 29 11:36:29.319817 containerd[1959]: time="2025-01-29T11:36:29.319772317Z" level=info msg="StopPodSandbox for \"7d703463d6f3829374f31b69e71540a1ca0cb4e0362370e01490ab5975a622a7\" returns successfully" Jan 29 11:36:29.319846 systemd[1]: run-netns-cni\x2d81ac13d9\x2d7b85\x2d475e\x2d2f1c\x2df6c1e03a3fbe.mount: Deactivated successfully. Jan 29 11:36:29.319924 containerd[1959]: time="2025-01-29T11:36:29.319913979Z" level=info msg="StopPodSandbox for \"7286c56b2895590e903942a899de695b46f6a0502fcf7cbd19bf9c57d82de1d4\"" Jan 29 11:36:29.319973 containerd[1959]: time="2025-01-29T11:36:29.319965796Z" level=info msg="TearDown network for sandbox \"7286c56b2895590e903942a899de695b46f6a0502fcf7cbd19bf9c57d82de1d4\" successfully" Jan 29 11:36:29.319998 containerd[1959]: time="2025-01-29T11:36:29.319974101Z" level=info msg="StopPodSandbox for \"7286c56b2895590e903942a899de695b46f6a0502fcf7cbd19bf9c57d82de1d4\" returns successfully" Jan 29 11:36:29.320086 containerd[1959]: time="2025-01-29T11:36:29.320074310Z" level=info msg="StopPodSandbox for \"9d450221c45f6f69345547468c935830e4952f13334ef1f093cfe321f14e64f9\"" Jan 29 11:36:29.320141 containerd[1959]: time="2025-01-29T11:36:29.320130961Z" level=info msg="TearDown network for sandbox \"9d450221c45f6f69345547468c935830e4952f13334ef1f093cfe321f14e64f9\" successfully" Jan 29 11:36:29.320159 containerd[1959]: time="2025-01-29T11:36:29.320142439Z" level=info msg="StopPodSandbox for \"9d450221c45f6f69345547468c935830e4952f13334ef1f093cfe321f14e64f9\" returns successfully" Jan 29 11:36:29.321395 containerd[1959]: time="2025-01-29T11:36:29.321379473Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-66755f7995-4p25x,Uid:5acf5fde-1589-42fe-98f2-79caa06a6364,Namespace:calico-apiserver,Attempt:5,}" Jan 29 11:36:29.322887 systemd[1]: run-netns-cni\x2d06a4c7d3\x2db02e\x2db2c6\x2da436\x2deeaefefcf9bb.mount: Deactivated successfully. 
Jan 29 11:36:29.325239 kubelet[3444]: I0129 11:36:29.325195 3444 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-9dmps" podStartSLOduration=1.623742638 podStartE2EDuration="12.325179604s" podCreationTimestamp="2025-01-29 11:36:17 +0000 UTC" firstStartedPulling="2025-01-29 11:36:17.758538467 +0000 UTC m=+21.607405999" lastFinishedPulling="2025-01-29 11:36:28.459975493 +0000 UTC m=+32.308842965" observedRunningTime="2025-01-29 11:36:29.324767942 +0000 UTC m=+33.173635423" watchObservedRunningTime="2025-01-29 11:36:29.325179604 +0000 UTC m=+33.174047069" Jan 29 11:36:29.390766 systemd-networkd[1564]: calife00ed48e71: Link UP Jan 29 11:36:29.390859 systemd-networkd[1564]: calife00ed48e71: Gained carrier Jan 29 11:36:29.395750 containerd[1959]: 2025-01-29 11:36:29.335 [INFO][5853] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 29 11:36:29.395750 containerd[1959]: 2025-01-29 11:36:29.342 [INFO][5853] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4152.2.0--a--23f4c5510f-k8s-coredns--7db6d8ff4d--jgtzn-eth0 coredns-7db6d8ff4d- kube-system fa522645-0537-44d1-a6b0-ea427a3e77da 663 0 2025-01-29 11:36:12 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4152.2.0-a-23f4c5510f coredns-7db6d8ff4d-jgtzn eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calife00ed48e71 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="c64293cc92f12757b51bc2f52807fc83bdc47e32139ecc8c81058bf6ab9cbac9" Namespace="kube-system" Pod="coredns-7db6d8ff4d-jgtzn" WorkloadEndpoint="ci--4152.2.0--a--23f4c5510f-k8s-coredns--7db6d8ff4d--jgtzn-" Jan 29 11:36:29.395750 containerd[1959]: 2025-01-29 11:36:29.342 [INFO][5853] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="c64293cc92f12757b51bc2f52807fc83bdc47e32139ecc8c81058bf6ab9cbac9" Namespace="kube-system" Pod="coredns-7db6d8ff4d-jgtzn" WorkloadEndpoint="ci--4152.2.0--a--23f4c5510f-k8s-coredns--7db6d8ff4d--jgtzn-eth0" Jan 29 11:36:29.395750 containerd[1959]: 2025-01-29 11:36:29.361 [INFO][5968] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c64293cc92f12757b51bc2f52807fc83bdc47e32139ecc8c81058bf6ab9cbac9" HandleID="k8s-pod-network.c64293cc92f12757b51bc2f52807fc83bdc47e32139ecc8c81058bf6ab9cbac9" Workload="ci--4152.2.0--a--23f4c5510f-k8s-coredns--7db6d8ff4d--jgtzn-eth0" Jan 29 11:36:29.395750 containerd[1959]: 2025-01-29 11:36:29.369 [INFO][5968] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c64293cc92f12757b51bc2f52807fc83bdc47e32139ecc8c81058bf6ab9cbac9" HandleID="k8s-pod-network.c64293cc92f12757b51bc2f52807fc83bdc47e32139ecc8c81058bf6ab9cbac9" Workload="ci--4152.2.0--a--23f4c5510f-k8s-coredns--7db6d8ff4d--jgtzn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0006a4480), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4152.2.0-a-23f4c5510f", "pod":"coredns-7db6d8ff4d-jgtzn", "timestamp":"2025-01-29 11:36:29.361815542 +0000 UTC"}, Hostname:"ci-4152.2.0-a-23f4c5510f", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 11:36:29.395750 containerd[1959]: 2025-01-29 11:36:29.369 [INFO][5968] ipam/ipam_plugin.go 
353: About to acquire host-wide IPAM lock. Jan 29 11:36:29.395750 containerd[1959]: 2025-01-29 11:36:29.369 [INFO][5968] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 11:36:29.395750 containerd[1959]: 2025-01-29 11:36:29.369 [INFO][5968] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4152.2.0-a-23f4c5510f' Jan 29 11:36:29.395750 containerd[1959]: 2025-01-29 11:36:29.370 [INFO][5968] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.c64293cc92f12757b51bc2f52807fc83bdc47e32139ecc8c81058bf6ab9cbac9" host="ci-4152.2.0-a-23f4c5510f" Jan 29 11:36:29.395750 containerd[1959]: 2025-01-29 11:36:29.373 [INFO][5968] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4152.2.0-a-23f4c5510f" Jan 29 11:36:29.395750 containerd[1959]: 2025-01-29 11:36:29.376 [INFO][5968] ipam/ipam.go 489: Trying affinity for 192.168.56.64/26 host="ci-4152.2.0-a-23f4c5510f" Jan 29 11:36:29.395750 containerd[1959]: 2025-01-29 11:36:29.377 [INFO][5968] ipam/ipam.go 155: Attempting to load block cidr=192.168.56.64/26 host="ci-4152.2.0-a-23f4c5510f" Jan 29 11:36:29.395750 containerd[1959]: 2025-01-29 11:36:29.379 [INFO][5968] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.56.64/26 host="ci-4152.2.0-a-23f4c5510f" Jan 29 11:36:29.395750 containerd[1959]: 2025-01-29 11:36:29.379 [INFO][5968] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.56.64/26 handle="k8s-pod-network.c64293cc92f12757b51bc2f52807fc83bdc47e32139ecc8c81058bf6ab9cbac9" host="ci-4152.2.0-a-23f4c5510f" Jan 29 11:36:29.395750 containerd[1959]: 2025-01-29 11:36:29.380 [INFO][5968] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.c64293cc92f12757b51bc2f52807fc83bdc47e32139ecc8c81058bf6ab9cbac9 Jan 29 11:36:29.395750 containerd[1959]: 2025-01-29 11:36:29.382 [INFO][5968] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.56.64/26 handle="k8s-pod-network.c64293cc92f12757b51bc2f52807fc83bdc47e32139ecc8c81058bf6ab9cbac9" host="ci-4152.2.0-a-23f4c5510f" Jan 29 11:36:29.395750 containerd[1959]: 2025-01-29 11:36:29.385 [INFO][5968] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.56.65/26] block=192.168.56.64/26 handle="k8s-pod-network.c64293cc92f12757b51bc2f52807fc83bdc47e32139ecc8c81058bf6ab9cbac9" host="ci-4152.2.0-a-23f4c5510f" Jan 29 11:36:29.395750 containerd[1959]: 2025-01-29 11:36:29.385 [INFO][5968] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.56.65/26] handle="k8s-pod-network.c64293cc92f12757b51bc2f52807fc83bdc47e32139ecc8c81058bf6ab9cbac9" host="ci-4152.2.0-a-23f4c5510f" Jan 29 11:36:29.395750 containerd[1959]: 2025-01-29 11:36:29.385 [INFO][5968] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 29 11:36:29.395750 containerd[1959]: 2025-01-29 11:36:29.385 [INFO][5968] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.56.65/26] IPv6=[] ContainerID="c64293cc92f12757b51bc2f52807fc83bdc47e32139ecc8c81058bf6ab9cbac9" HandleID="k8s-pod-network.c64293cc92f12757b51bc2f52807fc83bdc47e32139ecc8c81058bf6ab9cbac9" Workload="ci--4152.2.0--a--23f4c5510f-k8s-coredns--7db6d8ff4d--jgtzn-eth0" Jan 29 11:36:29.396216 containerd[1959]: 2025-01-29 11:36:29.386 [INFO][5853] cni-plugin/k8s.go 386: Populated endpoint ContainerID="c64293cc92f12757b51bc2f52807fc83bdc47e32139ecc8c81058bf6ab9cbac9" Namespace="kube-system" Pod="coredns-7db6d8ff4d-jgtzn" WorkloadEndpoint="ci--4152.2.0--a--23f4c5510f-k8s-coredns--7db6d8ff4d--jgtzn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4152.2.0--a--23f4c5510f-k8s-coredns--7db6d8ff4d--jgtzn-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"fa522645-0537-44d1-a6b0-ea427a3e77da", ResourceVersion:"663", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 36, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4152.2.0-a-23f4c5510f", ContainerID:"", Pod:"coredns-7db6d8ff4d-jgtzn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.56.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calife00ed48e71", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:36:29.396216 containerd[1959]: 2025-01-29 11:36:29.386 [INFO][5853] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.56.65/32] ContainerID="c64293cc92f12757b51bc2f52807fc83bdc47e32139ecc8c81058bf6ab9cbac9" Namespace="kube-system" Pod="coredns-7db6d8ff4d-jgtzn" WorkloadEndpoint="ci--4152.2.0--a--23f4c5510f-k8s-coredns--7db6d8ff4d--jgtzn-eth0" Jan 29 11:36:29.396216 containerd[1959]: 2025-01-29 11:36:29.387 [INFO][5853] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calife00ed48e71 ContainerID="c64293cc92f12757b51bc2f52807fc83bdc47e32139ecc8c81058bf6ab9cbac9" Namespace="kube-system" Pod="coredns-7db6d8ff4d-jgtzn" WorkloadEndpoint="ci--4152.2.0--a--23f4c5510f-k8s-coredns--7db6d8ff4d--jgtzn-eth0" Jan 29 11:36:29.396216 containerd[1959]: 2025-01-29 11:36:29.390 [INFO][5853] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c64293cc92f12757b51bc2f52807fc83bdc47e32139ecc8c81058bf6ab9cbac9" Namespace="kube-system" Pod="coredns-7db6d8ff4d-jgtzn" 
WorkloadEndpoint="ci--4152.2.0--a--23f4c5510f-k8s-coredns--7db6d8ff4d--jgtzn-eth0" Jan 29 11:36:29.396216 containerd[1959]: 2025-01-29 11:36:29.390 [INFO][5853] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="c64293cc92f12757b51bc2f52807fc83bdc47e32139ecc8c81058bf6ab9cbac9" Namespace="kube-system" Pod="coredns-7db6d8ff4d-jgtzn" WorkloadEndpoint="ci--4152.2.0--a--23f4c5510f-k8s-coredns--7db6d8ff4d--jgtzn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4152.2.0--a--23f4c5510f-k8s-coredns--7db6d8ff4d--jgtzn-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"fa522645-0537-44d1-a6b0-ea427a3e77da", ResourceVersion:"663", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 36, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4152.2.0-a-23f4c5510f", ContainerID:"c64293cc92f12757b51bc2f52807fc83bdc47e32139ecc8c81058bf6ab9cbac9", Pod:"coredns-7db6d8ff4d-jgtzn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.56.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calife00ed48e71", MAC:"aa:32:e2:0c:52:55", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:36:29.396216 containerd[1959]: 2025-01-29 11:36:29.394 [INFO][5853] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="c64293cc92f12757b51bc2f52807fc83bdc47e32139ecc8c81058bf6ab9cbac9" Namespace="kube-system" Pod="coredns-7db6d8ff4d-jgtzn" WorkloadEndpoint="ci--4152.2.0--a--23f4c5510f-k8s-coredns--7db6d8ff4d--jgtzn-eth0" Jan 29 11:36:29.402445 systemd-networkd[1564]: calid132c0ac697: Link UP Jan 29 11:36:29.402561 systemd-networkd[1564]: calid132c0ac697: Gained carrier Jan 29 11:36:29.405397 containerd[1959]: time="2025-01-29T11:36:29.405350070Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:36:29.405466 containerd[1959]: time="2025-01-29T11:36:29.405403214Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:36:29.405466 containerd[1959]: time="2025-01-29T11:36:29.405426700Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:36:29.405500 containerd[1959]: time="2025-01-29T11:36:29.405478322Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:36:29.407389 containerd[1959]: 2025-01-29 11:36:29.341 [INFO][5868] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 29 11:36:29.407389 containerd[1959]: 2025-01-29 11:36:29.347 [INFO][5868] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4152.2.0--a--23f4c5510f-k8s-csi--node--driver--4frhm-eth0 csi-node-driver- calico-system 07bb4f50-b419-4106-95cb-874077564881 595 0 2025-01-29 11:36:17 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:65bf684474 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4152.2.0-a-23f4c5510f csi-node-driver-4frhm eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calid132c0ac697 [] []}} ContainerID="2441861a97cd041e7497794e1527286b18032986200efea59b1ee14a02bf6fc5" Namespace="calico-system" Pod="csi-node-driver-4frhm" WorkloadEndpoint="ci--4152.2.0--a--23f4c5510f-k8s-csi--node--driver--4frhm-" Jan 29 11:36:29.407389 containerd[1959]: 2025-01-29 11:36:29.347 [INFO][5868] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="2441861a97cd041e7497794e1527286b18032986200efea59b1ee14a02bf6fc5" Namespace="calico-system" Pod="csi-node-driver-4frhm" WorkloadEndpoint="ci--4152.2.0--a--23f4c5510f-k8s-csi--node--driver--4frhm-eth0" Jan 29 11:36:29.407389 containerd[1959]: 2025-01-29 11:36:29.364 [INFO][5976] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2441861a97cd041e7497794e1527286b18032986200efea59b1ee14a02bf6fc5" HandleID="k8s-pod-network.2441861a97cd041e7497794e1527286b18032986200efea59b1ee14a02bf6fc5" Workload="ci--4152.2.0--a--23f4c5510f-k8s-csi--node--driver--4frhm-eth0" Jan 29 11:36:29.407389 containerd[1959]: 2025-01-29 11:36:29.370 [INFO][5976] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2441861a97cd041e7497794e1527286b18032986200efea59b1ee14a02bf6fc5" HandleID="k8s-pod-network.2441861a97cd041e7497794e1527286b18032986200efea59b1ee14a02bf6fc5" Workload="ci--4152.2.0--a--23f4c5510f-k8s-csi--node--driver--4frhm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00028f5c0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4152.2.0-a-23f4c5510f", "pod":"csi-node-driver-4frhm", "timestamp":"2025-01-29 11:36:29.364744569 +0000 UTC"}, Hostname:"ci-4152.2.0-a-23f4c5510f", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 11:36:29.407389 containerd[1959]: 2025-01-29 11:36:29.370 [INFO][5976] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 11:36:29.407389 containerd[1959]: 2025-01-29 11:36:29.385 [INFO][5976] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 29 11:36:29.407389 containerd[1959]: 2025-01-29 11:36:29.385 [INFO][5976] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4152.2.0-a-23f4c5510f' Jan 29 11:36:29.407389 containerd[1959]: 2025-01-29 11:36:29.386 [INFO][5976] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.2441861a97cd041e7497794e1527286b18032986200efea59b1ee14a02bf6fc5" host="ci-4152.2.0-a-23f4c5510f" Jan 29 11:36:29.407389 containerd[1959]: 2025-01-29 11:36:29.388 [INFO][5976] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4152.2.0-a-23f4c5510f" Jan 29 11:36:29.407389 containerd[1959]: 2025-01-29 11:36:29.392 [INFO][5976] ipam/ipam.go 489: Trying affinity for 192.168.56.64/26 host="ci-4152.2.0-a-23f4c5510f" Jan 29 11:36:29.407389 containerd[1959]: 2025-01-29 11:36:29.393 [INFO][5976] ipam/ipam.go 155: Attempting to load block cidr=192.168.56.64/26 host="ci-4152.2.0-a-23f4c5510f" Jan 29 11:36:29.407389 containerd[1959]: 2025-01-29 11:36:29.395 [INFO][5976] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.56.64/26 host="ci-4152.2.0-a-23f4c5510f" Jan 29 11:36:29.407389 containerd[1959]: 2025-01-29 11:36:29.395 [INFO][5976] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.56.64/26 handle="k8s-pod-network.2441861a97cd041e7497794e1527286b18032986200efea59b1ee14a02bf6fc5" host="ci-4152.2.0-a-23f4c5510f" Jan 29 11:36:29.407389 containerd[1959]: 2025-01-29 11:36:29.395 [INFO][5976] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.2441861a97cd041e7497794e1527286b18032986200efea59b1ee14a02bf6fc5 Jan 29 11:36:29.407389 containerd[1959]: 2025-01-29 11:36:29.398 [INFO][5976] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.56.64/26 handle="k8s-pod-network.2441861a97cd041e7497794e1527286b18032986200efea59b1ee14a02bf6fc5" host="ci-4152.2.0-a-23f4c5510f" Jan 29 11:36:29.407389 containerd[1959]: 2025-01-29 11:36:29.400 [INFO][5976] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.56.66/26] block=192.168.56.64/26 handle="k8s-pod-network.2441861a97cd041e7497794e1527286b18032986200efea59b1ee14a02bf6fc5" host="ci-4152.2.0-a-23f4c5510f" Jan 29 11:36:29.407389 containerd[1959]: 2025-01-29 11:36:29.400 [INFO][5976] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.56.66/26] handle="k8s-pod-network.2441861a97cd041e7497794e1527286b18032986200efea59b1ee14a02bf6fc5" host="ci-4152.2.0-a-23f4c5510f" Jan 29 11:36:29.407389 containerd[1959]: 2025-01-29 11:36:29.400 [INFO][5976] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 29 11:36:29.407389 containerd[1959]: 2025-01-29 11:36:29.400 [INFO][5976] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.56.66/26] IPv6=[] ContainerID="2441861a97cd041e7497794e1527286b18032986200efea59b1ee14a02bf6fc5" HandleID="k8s-pod-network.2441861a97cd041e7497794e1527286b18032986200efea59b1ee14a02bf6fc5" Workload="ci--4152.2.0--a--23f4c5510f-k8s-csi--node--driver--4frhm-eth0" Jan 29 11:36:29.407822 containerd[1959]: 2025-01-29 11:36:29.401 [INFO][5868] cni-plugin/k8s.go 386: Populated endpoint ContainerID="2441861a97cd041e7497794e1527286b18032986200efea59b1ee14a02bf6fc5" Namespace="calico-system" Pod="csi-node-driver-4frhm" WorkloadEndpoint="ci--4152.2.0--a--23f4c5510f-k8s-csi--node--driver--4frhm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4152.2.0--a--23f4c5510f-k8s-csi--node--driver--4frhm-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"07bb4f50-b419-4106-95cb-874077564881", ResourceVersion:"595", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 36, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4152.2.0-a-23f4c5510f", ContainerID:"", Pod:"csi-node-driver-4frhm", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.56.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calid132c0ac697", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:36:29.407822 containerd[1959]: 2025-01-29 11:36:29.401 [INFO][5868] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.56.66/32] ContainerID="2441861a97cd041e7497794e1527286b18032986200efea59b1ee14a02bf6fc5" Namespace="calico-system" Pod="csi-node-driver-4frhm" WorkloadEndpoint="ci--4152.2.0--a--23f4c5510f-k8s-csi--node--driver--4frhm-eth0" Jan 29 11:36:29.407822 containerd[1959]: 2025-01-29 11:36:29.401 [INFO][5868] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid132c0ac697 ContainerID="2441861a97cd041e7497794e1527286b18032986200efea59b1ee14a02bf6fc5" Namespace="calico-system" Pod="csi-node-driver-4frhm" WorkloadEndpoint="ci--4152.2.0--a--23f4c5510f-k8s-csi--node--driver--4frhm-eth0" Jan 29 11:36:29.407822 containerd[1959]: 2025-01-29 11:36:29.402 [INFO][5868] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2441861a97cd041e7497794e1527286b18032986200efea59b1ee14a02bf6fc5" Namespace="calico-system" Pod="csi-node-driver-4frhm" WorkloadEndpoint="ci--4152.2.0--a--23f4c5510f-k8s-csi--node--driver--4frhm-eth0" Jan 29 11:36:29.407822 containerd[1959]: 2025-01-29 11:36:29.402 [INFO][5868] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="2441861a97cd041e7497794e1527286b18032986200efea59b1ee14a02bf6fc5" Namespace="calico-system" Pod="csi-node-driver-4frhm" WorkloadEndpoint="ci--4152.2.0--a--23f4c5510f-k8s-csi--node--driver--4frhm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4152.2.0--a--23f4c5510f-k8s-csi--node--driver--4frhm-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"07bb4f50-b419-4106-95cb-874077564881", ResourceVersion:"595", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 36, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4152.2.0-a-23f4c5510f", ContainerID:"2441861a97cd041e7497794e1527286b18032986200efea59b1ee14a02bf6fc5", Pod:"csi-node-driver-4frhm", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.56.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calid132c0ac697", MAC:"3e:e1:84:1d:b1:93", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:36:29.407822 containerd[1959]: 2025-01-29 11:36:29.406 [INFO][5868] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="2441861a97cd041e7497794e1527286b18032986200efea59b1ee14a02bf6fc5" Namespace="calico-system" Pod="csi-node-driver-4frhm" WorkloadEndpoint="ci--4152.2.0--a--23f4c5510f-k8s-csi--node--driver--4frhm-eth0" Jan 29 11:36:29.416460 systemd-networkd[1564]: cali45df1571d95: Link UP Jan 29 11:36:29.416567 systemd-networkd[1564]: cali45df1571d95: Gained carrier Jan 29 11:36:29.416660 containerd[1959]: time="2025-01-29T11:36:29.416408429Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:36:29.416705 containerd[1959]: time="2025-01-29T11:36:29.416656198Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:36:29.416705 containerd[1959]: time="2025-01-29T11:36:29.416668390Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:36:29.416771 containerd[1959]: time="2025-01-29T11:36:29.416722130Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:36:29.431465 containerd[1959]: time="2025-01-29T11:36:29.431405080Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4frhm,Uid:07bb4f50-b419-4106-95cb-874077564881,Namespace:calico-system,Attempt:5,} returns sandbox id \"2441861a97cd041e7497794e1527286b18032986200efea59b1ee14a02bf6fc5\"" Jan 29 11:36:29.432116 containerd[1959]: time="2025-01-29T11:36:29.432104526Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Jan 29 11:36:29.435289 containerd[1959]: 2025-01-29 11:36:29.343 [INFO][5878] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 29 11:36:29.435289 containerd[1959]: 2025-01-29 11:36:29.348 [INFO][5878] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4152.2.0--a--23f4c5510f-k8s-calico--kube--controllers--5849fd7c66--k8fpk-eth0 calico-kube-controllers-5849fd7c66- calico-system db110898-7bf6-46df-89f1-60bff5c819f3 661 0 2025-01-29 11:36:17 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5849fd7c66 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4152.2.0-a-23f4c5510f calico-kube-controllers-5849fd7c66-k8fpk eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali45df1571d95 [] []}} ContainerID="8d67e7344340c22e80411c834f668c47648713653dfa768326236d122fca5472" Namespace="calico-system" Pod="calico-kube-controllers-5849fd7c66-k8fpk" WorkloadEndpoint="ci--4152.2.0--a--23f4c5510f-k8s-calico--kube--controllers--5849fd7c66--k8fpk-" Jan 29 11:36:29.435289 containerd[1959]: 2025-01-29 11:36:29.348 [INFO][5878] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="8d67e7344340c22e80411c834f668c47648713653dfa768326236d122fca5472" Namespace="calico-system" Pod="calico-kube-controllers-5849fd7c66-k8fpk" WorkloadEndpoint="ci--4152.2.0--a--23f4c5510f-k8s-calico--kube--controllers--5849fd7c66--k8fpk-eth0" Jan 29 11:36:29.435289 containerd[1959]: 2025-01-29 11:36:29.365 [INFO][5981] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8d67e7344340c22e80411c834f668c47648713653dfa768326236d122fca5472" HandleID="k8s-pod-network.8d67e7344340c22e80411c834f668c47648713653dfa768326236d122fca5472" Workload="ci--4152.2.0--a--23f4c5510f-k8s-calico--kube--controllers--5849fd7c66--k8fpk-eth0" Jan 29 11:36:29.435289 containerd[1959]: 2025-01-29 11:36:29.371 [INFO][5981] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8d67e7344340c22e80411c834f668c47648713653dfa768326236d122fca5472" HandleID="k8s-pod-network.8d67e7344340c22e80411c834f668c47648713653dfa768326236d122fca5472" Workload="ci--4152.2.0--a--23f4c5510f-k8s-calico--kube--controllers--5849fd7c66--k8fpk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0004ad210), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4152.2.0-a-23f4c5510f", "pod":"calico-kube-controllers-5849fd7c66-k8fpk", "timestamp":"2025-01-29 11:36:29.365031869 +0000 UTC"}, Hostname:"ci-4152.2.0-a-23f4c5510f", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 11:36:29.435289 containerd[1959]: 2025-01-29 11:36:29.371 [INFO][5981] 
ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 11:36:29.435289 containerd[1959]: 2025-01-29 11:36:29.400 [INFO][5981] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 11:36:29.435289 containerd[1959]: 2025-01-29 11:36:29.400 [INFO][5981] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4152.2.0-a-23f4c5510f' Jan 29 11:36:29.435289 containerd[1959]: 2025-01-29 11:36:29.401 [INFO][5981] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.8d67e7344340c22e80411c834f668c47648713653dfa768326236d122fca5472" host="ci-4152.2.0-a-23f4c5510f" Jan 29 11:36:29.435289 containerd[1959]: 2025-01-29 11:36:29.404 [INFO][5981] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4152.2.0-a-23f4c5510f" Jan 29 11:36:29.435289 containerd[1959]: 2025-01-29 11:36:29.406 [INFO][5981] ipam/ipam.go 489: Trying affinity for 192.168.56.64/26 host="ci-4152.2.0-a-23f4c5510f" Jan 29 11:36:29.435289 containerd[1959]: 2025-01-29 11:36:29.407 [INFO][5981] ipam/ipam.go 155: Attempting to load block cidr=192.168.56.64/26 host="ci-4152.2.0-a-23f4c5510f" Jan 29 11:36:29.435289 containerd[1959]: 2025-01-29 11:36:29.408 [INFO][5981] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.56.64/26 host="ci-4152.2.0-a-23f4c5510f" Jan 29 11:36:29.435289 containerd[1959]: 2025-01-29 11:36:29.409 [INFO][5981] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.56.64/26 handle="k8s-pod-network.8d67e7344340c22e80411c834f668c47648713653dfa768326236d122fca5472" host="ci-4152.2.0-a-23f4c5510f" Jan 29 11:36:29.435289 containerd[1959]: 2025-01-29 11:36:29.409 [INFO][5981] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.8d67e7344340c22e80411c834f668c47648713653dfa768326236d122fca5472 Jan 29 11:36:29.435289 containerd[1959]: 2025-01-29 11:36:29.411 [INFO][5981] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.56.64/26 handle="k8s-pod-network.8d67e7344340c22e80411c834f668c47648713653dfa768326236d122fca5472" host="ci-4152.2.0-a-23f4c5510f" Jan 29 11:36:29.435289 containerd[1959]: 2025-01-29 11:36:29.414 [INFO][5981] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.56.67/26] block=192.168.56.64/26 handle="k8s-pod-network.8d67e7344340c22e80411c834f668c47648713653dfa768326236d122fca5472" host="ci-4152.2.0-a-23f4c5510f" Jan 29 11:36:29.435289 containerd[1959]: 2025-01-29 11:36:29.414 [INFO][5981] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.56.67/26] handle="k8s-pod-network.8d67e7344340c22e80411c834f668c47648713653dfa768326236d122fca5472" host="ci-4152.2.0-a-23f4c5510f" Jan 29 11:36:29.435289 containerd[1959]: 2025-01-29 11:36:29.414 [INFO][5981] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 29 11:36:29.435289 containerd[1959]: 2025-01-29 11:36:29.414 [INFO][5981] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.56.67/26] IPv6=[] ContainerID="8d67e7344340c22e80411c834f668c47648713653dfa768326236d122fca5472" HandleID="k8s-pod-network.8d67e7344340c22e80411c834f668c47648713653dfa768326236d122fca5472" Workload="ci--4152.2.0--a--23f4c5510f-k8s-calico--kube--controllers--5849fd7c66--k8fpk-eth0" Jan 29 11:36:29.435710 containerd[1959]: 2025-01-29 11:36:29.415 [INFO][5878] cni-plugin/k8s.go 386: Populated endpoint ContainerID="8d67e7344340c22e80411c834f668c47648713653dfa768326236d122fca5472" Namespace="calico-system" Pod="calico-kube-controllers-5849fd7c66-k8fpk" WorkloadEndpoint="ci--4152.2.0--a--23f4c5510f-k8s-calico--kube--controllers--5849fd7c66--k8fpk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4152.2.0--a--23f4c5510f-k8s-calico--kube--controllers--5849fd7c66--k8fpk-eth0", GenerateName:"calico-kube-controllers-5849fd7c66-", Namespace:"calico-system", SelfLink:"", UID:"db110898-7bf6-46df-89f1-60bff5c819f3", ResourceVersion:"661", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 36, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5849fd7c66", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4152.2.0-a-23f4c5510f", ContainerID:"", Pod:"calico-kube-controllers-5849fd7c66-k8fpk", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.56.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali45df1571d95", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:36:29.435710 containerd[1959]: 2025-01-29 11:36:29.415 [INFO][5878] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.56.67/32] ContainerID="8d67e7344340c22e80411c834f668c47648713653dfa768326236d122fca5472" Namespace="calico-system" Pod="calico-kube-controllers-5849fd7c66-k8fpk" WorkloadEndpoint="ci--4152.2.0--a--23f4c5510f-k8s-calico--kube--controllers--5849fd7c66--k8fpk-eth0" Jan 29 11:36:29.435710 containerd[1959]: 2025-01-29 11:36:29.415 [INFO][5878] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali45df1571d95 ContainerID="8d67e7344340c22e80411c834f668c47648713653dfa768326236d122fca5472" Namespace="calico-system" Pod="calico-kube-controllers-5849fd7c66-k8fpk" WorkloadEndpoint="ci--4152.2.0--a--23f4c5510f-k8s-calico--kube--controllers--5849fd7c66--k8fpk-eth0" Jan 29 11:36:29.435710 containerd[1959]: 2025-01-29 11:36:29.416 [INFO][5878] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8d67e7344340c22e80411c834f668c47648713653dfa768326236d122fca5472" Namespace="calico-system" Pod="calico-kube-controllers-5849fd7c66-k8fpk" WorkloadEndpoint="ci--4152.2.0--a--23f4c5510f-k8s-calico--kube--controllers--5849fd7c66--k8fpk-eth0" Jan 29 11:36:29.435710 
containerd[1959]: 2025-01-29 11:36:29.416 [INFO][5878] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="8d67e7344340c22e80411c834f668c47648713653dfa768326236d122fca5472" Namespace="calico-system" Pod="calico-kube-controllers-5849fd7c66-k8fpk" WorkloadEndpoint="ci--4152.2.0--a--23f4c5510f-k8s-calico--kube--controllers--5849fd7c66--k8fpk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4152.2.0--a--23f4c5510f-k8s-calico--kube--controllers--5849fd7c66--k8fpk-eth0", GenerateName:"calico-kube-controllers-5849fd7c66-", Namespace:"calico-system", SelfLink:"", UID:"db110898-7bf6-46df-89f1-60bff5c819f3", ResourceVersion:"661", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 36, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5849fd7c66", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4152.2.0-a-23f4c5510f", ContainerID:"8d67e7344340c22e80411c834f668c47648713653dfa768326236d122fca5472", Pod:"calico-kube-controllers-5849fd7c66-k8fpk", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.56.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali45df1571d95", MAC:"fe:4a:f4:f4:a6:a7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:36:29.435710 containerd[1959]: 2025-01-29 11:36:29.434 [INFO][5878] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="8d67e7344340c22e80411c834f668c47648713653dfa768326236d122fca5472" Namespace="calico-system" Pod="calico-kube-controllers-5849fd7c66-k8fpk" WorkloadEndpoint="ci--4152.2.0--a--23f4c5510f-k8s-calico--kube--controllers--5849fd7c66--k8fpk-eth0" Jan 29 11:36:29.444452 containerd[1959]: time="2025-01-29T11:36:29.444427278Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-jgtzn,Uid:fa522645-0537-44d1-a6b0-ea427a3e77da,Namespace:kube-system,Attempt:5,} returns sandbox id \"c64293cc92f12757b51bc2f52807fc83bdc47e32139ecc8c81058bf6ab9cbac9\"" Jan 29 11:36:29.445026 containerd[1959]: time="2025-01-29T11:36:29.444949490Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:36:29.445206 containerd[1959]: time="2025-01-29T11:36:29.445031715Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:36:29.445206 containerd[1959]: time="2025-01-29T11:36:29.445051394Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:36:29.445206 containerd[1959]: time="2025-01-29T11:36:29.445123009Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:36:29.446222 systemd-networkd[1564]: cali65cd7168416: Link UP Jan 29 11:36:29.446365 systemd-networkd[1564]: cali65cd7168416: Gained carrier Jan 29 11:36:29.446859 containerd[1959]: time="2025-01-29T11:36:29.446834807Z" level=info msg="CreateContainer within sandbox \"c64293cc92f12757b51bc2f52807fc83bdc47e32139ecc8c81058bf6ab9cbac9\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 29 11:36:29.451154 containerd[1959]: time="2025-01-29T11:36:29.451128651Z" level=info msg="CreateContainer within sandbox \"c64293cc92f12757b51bc2f52807fc83bdc47e32139ecc8c81058bf6ab9cbac9\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e1b4bdbef4bb52596d76fea25faa004ff0835601bf27725e9f74675e698dc9d9\"" Jan 29 11:36:29.451407 containerd[1959]: time="2025-01-29T11:36:29.451395138Z" level=info msg="StartContainer for \"e1b4bdbef4bb52596d76fea25faa004ff0835601bf27725e9f74675e698dc9d9\"" Jan 29 11:36:29.451676 containerd[1959]: 2025-01-29 11:36:29.344 [INFO][5908] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 29 11:36:29.451676 containerd[1959]: 2025-01-29 11:36:29.350 [INFO][5908] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4152.2.0--a--23f4c5510f-k8s-calico--apiserver--66755f7995--nr82b-eth0 calico-apiserver-66755f7995- calico-apiserver bd3d55b5-a627-418c-9156-9c5f11420d4d 662 0 2025-01-29 11:36:17 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:66755f7995 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4152.2.0-a-23f4c5510f calico-apiserver-66755f7995-nr82b eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali65cd7168416 [] []}} ContainerID="98163ec473407b9c121cbbeda85a8f2423874102d59eb18633ab7741e37119cb" Namespace="calico-apiserver" Pod="calico-apiserver-66755f7995-nr82b" WorkloadEndpoint="ci--4152.2.0--a--23f4c5510f-k8s-calico--apiserver--66755f7995--nr82b-" Jan 29 11:36:29.451676 containerd[1959]: 2025-01-29 11:36:29.350 [INFO][5908] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="98163ec473407b9c121cbbeda85a8f2423874102d59eb18633ab7741e37119cb" Namespace="calico-apiserver" Pod="calico-apiserver-66755f7995-nr82b" WorkloadEndpoint="ci--4152.2.0--a--23f4c5510f-k8s-calico--apiserver--66755f7995--nr82b-eth0" Jan 29 11:36:29.451676 containerd[1959]: 2025-01-29 11:36:29.367 [INFO][5991] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="98163ec473407b9c121cbbeda85a8f2423874102d59eb18633ab7741e37119cb" HandleID="k8s-pod-network.98163ec473407b9c121cbbeda85a8f2423874102d59eb18633ab7741e37119cb" Workload="ci--4152.2.0--a--23f4c5510f-k8s-calico--apiserver--66755f7995--nr82b-eth0" Jan 29 11:36:29.451676 containerd[1959]: 2025-01-29 11:36:29.372 [INFO][5991] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="98163ec473407b9c121cbbeda85a8f2423874102d59eb18633ab7741e37119cb" HandleID="k8s-pod-network.98163ec473407b9c121cbbeda85a8f2423874102d59eb18633ab7741e37119cb" Workload="ci--4152.2.0--a--23f4c5510f-k8s-calico--apiserver--66755f7995--nr82b-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003673f0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4152.2.0-a-23f4c5510f", "pod":"calico-apiserver-66755f7995-nr82b", 
"timestamp":"2025-01-29 11:36:29.36708365 +0000 UTC"}, Hostname:"ci-4152.2.0-a-23f4c5510f", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 11:36:29.451676 containerd[1959]: 2025-01-29 11:36:29.372 [INFO][5991] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 11:36:29.451676 containerd[1959]: 2025-01-29 11:36:29.414 [INFO][5991] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 11:36:29.451676 containerd[1959]: 2025-01-29 11:36:29.414 [INFO][5991] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4152.2.0-a-23f4c5510f' Jan 29 11:36:29.451676 containerd[1959]: 2025-01-29 11:36:29.415 [INFO][5991] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.98163ec473407b9c121cbbeda85a8f2423874102d59eb18633ab7741e37119cb" host="ci-4152.2.0-a-23f4c5510f" Jan 29 11:36:29.451676 containerd[1959]: 2025-01-29 11:36:29.418 [INFO][5991] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4152.2.0-a-23f4c5510f" Jan 29 11:36:29.451676 containerd[1959]: 2025-01-29 11:36:29.434 [INFO][5991] ipam/ipam.go 489: Trying affinity for 192.168.56.64/26 host="ci-4152.2.0-a-23f4c5510f" Jan 29 11:36:29.451676 containerd[1959]: 2025-01-29 11:36:29.435 [INFO][5991] ipam/ipam.go 155: Attempting to load block cidr=192.168.56.64/26 host="ci-4152.2.0-a-23f4c5510f" Jan 29 11:36:29.451676 containerd[1959]: 2025-01-29 11:36:29.436 [INFO][5991] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.56.64/26 host="ci-4152.2.0-a-23f4c5510f" Jan 29 11:36:29.451676 containerd[1959]: 2025-01-29 11:36:29.437 [INFO][5991] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.56.64/26 handle="k8s-pod-network.98163ec473407b9c121cbbeda85a8f2423874102d59eb18633ab7741e37119cb" host="ci-4152.2.0-a-23f4c5510f" Jan 29 11:36:29.451676 containerd[1959]: 2025-01-29 11:36:29.438 [INFO][5991] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.98163ec473407b9c121cbbeda85a8f2423874102d59eb18633ab7741e37119cb Jan 29 11:36:29.451676 containerd[1959]: 2025-01-29 11:36:29.439 [INFO][5991] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.56.64/26 handle="k8s-pod-network.98163ec473407b9c121cbbeda85a8f2423874102d59eb18633ab7741e37119cb" host="ci-4152.2.0-a-23f4c5510f" Jan 29 11:36:29.451676 containerd[1959]: 2025-01-29 11:36:29.443 [INFO][5991] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.56.68/26] block=192.168.56.64/26 handle="k8s-pod-network.98163ec473407b9c121cbbeda85a8f2423874102d59eb18633ab7741e37119cb" host="ci-4152.2.0-a-23f4c5510f" Jan 29 11:36:29.451676 containerd[1959]: 2025-01-29 11:36:29.443 [INFO][5991] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.56.68/26] handle="k8s-pod-network.98163ec473407b9c121cbbeda85a8f2423874102d59eb18633ab7741e37119cb" host="ci-4152.2.0-a-23f4c5510f" Jan 29 11:36:29.451676 containerd[1959]: 2025-01-29 11:36:29.443 [INFO][5991] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 29 11:36:29.451676 containerd[1959]: 2025-01-29 11:36:29.443 [INFO][5991] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.56.68/26] IPv6=[] ContainerID="98163ec473407b9c121cbbeda85a8f2423874102d59eb18633ab7741e37119cb" HandleID="k8s-pod-network.98163ec473407b9c121cbbeda85a8f2423874102d59eb18633ab7741e37119cb" Workload="ci--4152.2.0--a--23f4c5510f-k8s-calico--apiserver--66755f7995--nr82b-eth0" Jan 29 11:36:29.452046 containerd[1959]: 2025-01-29 11:36:29.444 [INFO][5908] cni-plugin/k8s.go 386: Populated endpoint ContainerID="98163ec473407b9c121cbbeda85a8f2423874102d59eb18633ab7741e37119cb" Namespace="calico-apiserver" Pod="calico-apiserver-66755f7995-nr82b" WorkloadEndpoint="ci--4152.2.0--a--23f4c5510f-k8s-calico--apiserver--66755f7995--nr82b-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4152.2.0--a--23f4c5510f-k8s-calico--apiserver--66755f7995--nr82b-eth0", GenerateName:"calico-apiserver-66755f7995-", Namespace:"calico-apiserver", SelfLink:"", UID:"bd3d55b5-a627-418c-9156-9c5f11420d4d", ResourceVersion:"662", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 36, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"66755f7995", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4152.2.0-a-23f4c5510f", ContainerID:"", Pod:"calico-apiserver-66755f7995-nr82b", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.56.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali65cd7168416", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:36:29.452046 containerd[1959]: 2025-01-29 11:36:29.445 [INFO][5908] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.56.68/32] ContainerID="98163ec473407b9c121cbbeda85a8f2423874102d59eb18633ab7741e37119cb" Namespace="calico-apiserver" Pod="calico-apiserver-66755f7995-nr82b" WorkloadEndpoint="ci--4152.2.0--a--23f4c5510f-k8s-calico--apiserver--66755f7995--nr82b-eth0" Jan 29 11:36:29.452046 containerd[1959]: 2025-01-29 11:36:29.445 [INFO][5908] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali65cd7168416 ContainerID="98163ec473407b9c121cbbeda85a8f2423874102d59eb18633ab7741e37119cb" Namespace="calico-apiserver" Pod="calico-apiserver-66755f7995-nr82b" WorkloadEndpoint="ci--4152.2.0--a--23f4c5510f-k8s-calico--apiserver--66755f7995--nr82b-eth0" Jan 29 11:36:29.452046 containerd[1959]: 2025-01-29 11:36:29.446 [INFO][5908] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="98163ec473407b9c121cbbeda85a8f2423874102d59eb18633ab7741e37119cb" Namespace="calico-apiserver" Pod="calico-apiserver-66755f7995-nr82b" WorkloadEndpoint="ci--4152.2.0--a--23f4c5510f-k8s-calico--apiserver--66755f7995--nr82b-eth0" Jan 29 11:36:29.452046 containerd[1959]: 2025-01-29 11:36:29.446 [INFO][5908] cni-plugin/k8s.go 414: Added 
Mac, interface name, and active container ID to endpoint ContainerID="98163ec473407b9c121cbbeda85a8f2423874102d59eb18633ab7741e37119cb" Namespace="calico-apiserver" Pod="calico-apiserver-66755f7995-nr82b" WorkloadEndpoint="ci--4152.2.0--a--23f4c5510f-k8s-calico--apiserver--66755f7995--nr82b-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4152.2.0--a--23f4c5510f-k8s-calico--apiserver--66755f7995--nr82b-eth0", GenerateName:"calico-apiserver-66755f7995-", Namespace:"calico-apiserver", SelfLink:"", UID:"bd3d55b5-a627-418c-9156-9c5f11420d4d", ResourceVersion:"662", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 36, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"66755f7995", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4152.2.0-a-23f4c5510f", ContainerID:"98163ec473407b9c121cbbeda85a8f2423874102d59eb18633ab7741e37119cb", Pod:"calico-apiserver-66755f7995-nr82b", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.56.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali65cd7168416", MAC:"82:7d:07:04:d6:58", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:36:29.452046 containerd[1959]: 2025-01-29 11:36:29.450 [INFO][5908] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="98163ec473407b9c121cbbeda85a8f2423874102d59eb18633ab7741e37119cb" Namespace="calico-apiserver" Pod="calico-apiserver-66755f7995-nr82b" WorkloadEndpoint="ci--4152.2.0--a--23f4c5510f-k8s-calico--apiserver--66755f7995--nr82b-eth0" Jan 29 11:36:29.462049 systemd-networkd[1564]: calidcbcc46cf61: Link UP Jan 29 11:36:29.462194 systemd-networkd[1564]: calidcbcc46cf61: Gained carrier Jan 29 11:36:29.462252 containerd[1959]: time="2025-01-29T11:36:29.461950339Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:36:29.462252 containerd[1959]: time="2025-01-29T11:36:29.462227855Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:36:29.462252 containerd[1959]: time="2025-01-29T11:36:29.462238092Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:36:29.462364 containerd[1959]: time="2025-01-29T11:36:29.462303166Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:36:29.468679 containerd[1959]: 2025-01-29 11:36:29.343 [INFO][5896] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 29 11:36:29.468679 containerd[1959]: 2025-01-29 11:36:29.349 [INFO][5896] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4152.2.0--a--23f4c5510f-k8s-coredns--7db6d8ff4d--bs4sr-eth0 coredns-7db6d8ff4d- kube-system c7ba5fb2-2386-4664-9fd0-5429e827f425 659 0 2025-01-29 11:36:12 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4152.2.0-a-23f4c5510f coredns-7db6d8ff4d-bs4sr eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calidcbcc46cf61 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="110f9d3b4d65c0e1f9c32d8afbc69e0d3a34328b19b367e2845537e617de2ca5" Namespace="kube-system" Pod="coredns-7db6d8ff4d-bs4sr" WorkloadEndpoint="ci--4152.2.0--a--23f4c5510f-k8s-coredns--7db6d8ff4d--bs4sr-" Jan 29 11:36:29.468679 containerd[1959]: 2025-01-29 11:36:29.349 [INFO][5896] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="110f9d3b4d65c0e1f9c32d8afbc69e0d3a34328b19b367e2845537e617de2ca5" Namespace="kube-system" Pod="coredns-7db6d8ff4d-bs4sr" WorkloadEndpoint="ci--4152.2.0--a--23f4c5510f-k8s-coredns--7db6d8ff4d--bs4sr-eth0" Jan 29 11:36:29.468679 containerd[1959]: 2025-01-29 11:36:29.367 [INFO][5992] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="110f9d3b4d65c0e1f9c32d8afbc69e0d3a34328b19b367e2845537e617de2ca5" HandleID="k8s-pod-network.110f9d3b4d65c0e1f9c32d8afbc69e0d3a34328b19b367e2845537e617de2ca5" Workload="ci--4152.2.0--a--23f4c5510f-k8s-coredns--7db6d8ff4d--bs4sr-eth0" Jan 29 11:36:29.468679 containerd[1959]: 2025-01-29 11:36:29.372 [INFO][5992] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="110f9d3b4d65c0e1f9c32d8afbc69e0d3a34328b19b367e2845537e617de2ca5" HandleID="k8s-pod-network.110f9d3b4d65c0e1f9c32d8afbc69e0d3a34328b19b367e2845537e617de2ca5" Workload="ci--4152.2.0--a--23f4c5510f-k8s-coredns--7db6d8ff4d--bs4sr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002f9880), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4152.2.0-a-23f4c5510f", "pod":"coredns-7db6d8ff4d-bs4sr", "timestamp":"2025-01-29 11:36:29.367433574 +0000 UTC"}, Hostname:"ci-4152.2.0-a-23f4c5510f", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 11:36:29.468679 containerd[1959]: 2025-01-29 11:36:29.372 [INFO][5992] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 11:36:29.468679 containerd[1959]: 2025-01-29 11:36:29.443 [INFO][5992] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 29 11:36:29.468679 containerd[1959]: 2025-01-29 11:36:29.443 [INFO][5992] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4152.2.0-a-23f4c5510f' Jan 29 11:36:29.468679 containerd[1959]: 2025-01-29 11:36:29.445 [INFO][5992] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.110f9d3b4d65c0e1f9c32d8afbc69e0d3a34328b19b367e2845537e617de2ca5" host="ci-4152.2.0-a-23f4c5510f" Jan 29 11:36:29.468679 containerd[1959]: 2025-01-29 11:36:29.448 [INFO][5992] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4152.2.0-a-23f4c5510f" Jan 29 11:36:29.468679 containerd[1959]: 2025-01-29 11:36:29.450 [INFO][5992] ipam/ipam.go 489: Trying affinity for 192.168.56.64/26 host="ci-4152.2.0-a-23f4c5510f" Jan 29 11:36:29.468679 containerd[1959]: 2025-01-29 11:36:29.452 [INFO][5992] ipam/ipam.go 155: Attempting to load block cidr=192.168.56.64/26 host="ci-4152.2.0-a-23f4c5510f" Jan 29 11:36:29.468679 containerd[1959]: 2025-01-29 11:36:29.453 [INFO][5992] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.56.64/26 host="ci-4152.2.0-a-23f4c5510f" Jan 29 11:36:29.468679 containerd[1959]: 2025-01-29 11:36:29.453 [INFO][5992] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.56.64/26 handle="k8s-pod-network.110f9d3b4d65c0e1f9c32d8afbc69e0d3a34328b19b367e2845537e617de2ca5" host="ci-4152.2.0-a-23f4c5510f" Jan 29 11:36:29.468679 containerd[1959]: 2025-01-29 11:36:29.454 [INFO][5992] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.110f9d3b4d65c0e1f9c32d8afbc69e0d3a34328b19b367e2845537e617de2ca5 Jan 29 11:36:29.468679 containerd[1959]: 2025-01-29 11:36:29.456 [INFO][5992] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.56.64/26 handle="k8s-pod-network.110f9d3b4d65c0e1f9c32d8afbc69e0d3a34328b19b367e2845537e617de2ca5" host="ci-4152.2.0-a-23f4c5510f" Jan 29 11:36:29.468679 containerd[1959]: 2025-01-29 11:36:29.460 [INFO][5992] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.56.69/26] block=192.168.56.64/26 handle="k8s-pod-network.110f9d3b4d65c0e1f9c32d8afbc69e0d3a34328b19b367e2845537e617de2ca5" host="ci-4152.2.0-a-23f4c5510f" Jan 29 11:36:29.468679 containerd[1959]: 2025-01-29 11:36:29.460 [INFO][5992] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.56.69/26] handle="k8s-pod-network.110f9d3b4d65c0e1f9c32d8afbc69e0d3a34328b19b367e2845537e617de2ca5" host="ci-4152.2.0-a-23f4c5510f" Jan 29 11:36:29.468679 containerd[1959]: 2025-01-29 11:36:29.460 [INFO][5992] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 29 11:36:29.468679 containerd[1959]: 2025-01-29 11:36:29.460 [INFO][5992] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.56.69/26] IPv6=[] ContainerID="110f9d3b4d65c0e1f9c32d8afbc69e0d3a34328b19b367e2845537e617de2ca5" HandleID="k8s-pod-network.110f9d3b4d65c0e1f9c32d8afbc69e0d3a34328b19b367e2845537e617de2ca5" Workload="ci--4152.2.0--a--23f4c5510f-k8s-coredns--7db6d8ff4d--bs4sr-eth0" Jan 29 11:36:29.469121 containerd[1959]: 2025-01-29 11:36:29.461 [INFO][5896] cni-plugin/k8s.go 386: Populated endpoint ContainerID="110f9d3b4d65c0e1f9c32d8afbc69e0d3a34328b19b367e2845537e617de2ca5" Namespace="kube-system" Pod="coredns-7db6d8ff4d-bs4sr" WorkloadEndpoint="ci--4152.2.0--a--23f4c5510f-k8s-coredns--7db6d8ff4d--bs4sr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4152.2.0--a--23f4c5510f-k8s-coredns--7db6d8ff4d--bs4sr-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"c7ba5fb2-2386-4664-9fd0-5429e827f425", ResourceVersion:"659", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 36, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4152.2.0-a-23f4c5510f", ContainerID:"", Pod:"coredns-7db6d8ff4d-bs4sr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.56.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calidcbcc46cf61", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:36:29.469121 containerd[1959]: 2025-01-29 11:36:29.461 [INFO][5896] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.56.69/32] ContainerID="110f9d3b4d65c0e1f9c32d8afbc69e0d3a34328b19b367e2845537e617de2ca5" Namespace="kube-system" Pod="coredns-7db6d8ff4d-bs4sr" WorkloadEndpoint="ci--4152.2.0--a--23f4c5510f-k8s-coredns--7db6d8ff4d--bs4sr-eth0" Jan 29 11:36:29.469121 containerd[1959]: 2025-01-29 11:36:29.461 [INFO][5896] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidcbcc46cf61 ContainerID="110f9d3b4d65c0e1f9c32d8afbc69e0d3a34328b19b367e2845537e617de2ca5" Namespace="kube-system" Pod="coredns-7db6d8ff4d-bs4sr" WorkloadEndpoint="ci--4152.2.0--a--23f4c5510f-k8s-coredns--7db6d8ff4d--bs4sr-eth0" Jan 29 11:36:29.469121 containerd[1959]: 2025-01-29 11:36:29.462 [INFO][5896] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="110f9d3b4d65c0e1f9c32d8afbc69e0d3a34328b19b367e2845537e617de2ca5" Namespace="kube-system" Pod="coredns-7db6d8ff4d-bs4sr" 
WorkloadEndpoint="ci--4152.2.0--a--23f4c5510f-k8s-coredns--7db6d8ff4d--bs4sr-eth0" Jan 29 11:36:29.469121 containerd[1959]: 2025-01-29 11:36:29.462 [INFO][5896] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="110f9d3b4d65c0e1f9c32d8afbc69e0d3a34328b19b367e2845537e617de2ca5" Namespace="kube-system" Pod="coredns-7db6d8ff4d-bs4sr" WorkloadEndpoint="ci--4152.2.0--a--23f4c5510f-k8s-coredns--7db6d8ff4d--bs4sr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4152.2.0--a--23f4c5510f-k8s-coredns--7db6d8ff4d--bs4sr-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"c7ba5fb2-2386-4664-9fd0-5429e827f425", ResourceVersion:"659", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 36, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4152.2.0-a-23f4c5510f", ContainerID:"110f9d3b4d65c0e1f9c32d8afbc69e0d3a34328b19b367e2845537e617de2ca5", Pod:"coredns-7db6d8ff4d-bs4sr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.56.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calidcbcc46cf61", MAC:"8a:3c:95:c9:28:64", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:36:29.469121 containerd[1959]: 2025-01-29 11:36:29.467 [INFO][5896] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="110f9d3b4d65c0e1f9c32d8afbc69e0d3a34328b19b367e2845537e617de2ca5" Namespace="kube-system" Pod="coredns-7db6d8ff4d-bs4sr" WorkloadEndpoint="ci--4152.2.0--a--23f4c5510f-k8s-coredns--7db6d8ff4d--bs4sr-eth0" Jan 29 11:36:29.475136 containerd[1959]: time="2025-01-29T11:36:29.475113682Z" level=info msg="StartContainer for \"e1b4bdbef4bb52596d76fea25faa004ff0835601bf27725e9f74675e698dc9d9\" returns successfully" Jan 29 11:36:29.479331 containerd[1959]: time="2025-01-29T11:36:29.479230530Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:36:29.479331 containerd[1959]: time="2025-01-29T11:36:29.479273580Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:36:29.479331 containerd[1959]: time="2025-01-29T11:36:29.479281598Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:36:29.479497 containerd[1959]: time="2025-01-29T11:36:29.479350000Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:36:29.480400 systemd-networkd[1564]: cali4a04f4ee0d3: Link UP Jan 29 11:36:29.480617 systemd-networkd[1564]: cali4a04f4ee0d3: Gained carrier Jan 29 11:36:29.483939 containerd[1959]: time="2025-01-29T11:36:29.483916108Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5849fd7c66-k8fpk,Uid:db110898-7bf6-46df-89f1-60bff5c819f3,Namespace:calico-system,Attempt:5,} returns sandbox id \"8d67e7344340c22e80411c834f668c47648713653dfa768326236d122fca5472\"" Jan 29 11:36:29.486231 containerd[1959]: 2025-01-29 11:36:29.344 [INFO][5898] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 29 11:36:29.486231 containerd[1959]: 2025-01-29 11:36:29.351 [INFO][5898] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4152.2.0--a--23f4c5510f-k8s-calico--apiserver--66755f7995--4p25x-eth0 calico-apiserver-66755f7995- calico-apiserver 5acf5fde-1589-42fe-98f2-79caa06a6364 664 0 2025-01-29 11:36:17 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:66755f7995 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4152.2.0-a-23f4c5510f calico-apiserver-66755f7995-4p25x eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali4a04f4ee0d3 [] []}} ContainerID="c2626432ccc472ef568131d7e4b627736ec132e6ccdd3c6ba11b7783f0a747cf" Namespace="calico-apiserver" Pod="calico-apiserver-66755f7995-4p25x" WorkloadEndpoint="ci--4152.2.0--a--23f4c5510f-k8s-calico--apiserver--66755f7995--4p25x-" Jan 29 11:36:29.486231 containerd[1959]: 2025-01-29 11:36:29.351 [INFO][5898] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="c2626432ccc472ef568131d7e4b627736ec132e6ccdd3c6ba11b7783f0a747cf" Namespace="calico-apiserver" Pod="calico-apiserver-66755f7995-4p25x" WorkloadEndpoint="ci--4152.2.0--a--23f4c5510f-k8s-calico--apiserver--66755f7995--4p25x-eth0" Jan 29 11:36:29.486231 containerd[1959]: 2025-01-29 11:36:29.368 [INFO][6001] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c2626432ccc472ef568131d7e4b627736ec132e6ccdd3c6ba11b7783f0a747cf" HandleID="k8s-pod-network.c2626432ccc472ef568131d7e4b627736ec132e6ccdd3c6ba11b7783f0a747cf" Workload="ci--4152.2.0--a--23f4c5510f-k8s-calico--apiserver--66755f7995--4p25x-eth0" Jan 29 11:36:29.486231 containerd[1959]: 2025-01-29 11:36:29.372 [INFO][6001] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c2626432ccc472ef568131d7e4b627736ec132e6ccdd3c6ba11b7783f0a747cf" HandleID="k8s-pod-network.c2626432ccc472ef568131d7e4b627736ec132e6ccdd3c6ba11b7783f0a747cf" Workload="ci--4152.2.0--a--23f4c5510f-k8s-calico--apiserver--66755f7995--4p25x-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0007ab3d0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4152.2.0-a-23f4c5510f", "pod":"calico-apiserver-66755f7995-4p25x", "timestamp":"2025-01-29 11:36:29.368018358 +0000 UTC"}, Hostname:"ci-4152.2.0-a-23f4c5510f", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 11:36:29.486231 containerd[1959]: 2025-01-29 11:36:29.372 [INFO][6001] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 11:36:29.486231 containerd[1959]: 2025-01-29 11:36:29.460 [INFO][6001] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 11:36:29.486231 containerd[1959]: 2025-01-29 11:36:29.460 [INFO][6001] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4152.2.0-a-23f4c5510f' Jan 29 11:36:29.486231 containerd[1959]: 2025-01-29 11:36:29.461 [INFO][6001] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.c2626432ccc472ef568131d7e4b627736ec132e6ccdd3c6ba11b7783f0a747cf" host="ci-4152.2.0-a-23f4c5510f" Jan 29 11:36:29.486231 containerd[1959]: 2025-01-29 11:36:29.464 [INFO][6001] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4152.2.0-a-23f4c5510f" Jan 29 11:36:29.486231 containerd[1959]: 2025-01-29 11:36:29.467 [INFO][6001] ipam/ipam.go 489: Trying affinity for 192.168.56.64/26 host="ci-4152.2.0-a-23f4c5510f" Jan 29 11:36:29.486231 containerd[1959]: 2025-01-29 11:36:29.469 [INFO][6001] ipam/ipam.go 155: Attempting to load block cidr=192.168.56.64/26 host="ci-4152.2.0-a-23f4c5510f" Jan 29 11:36:29.486231 containerd[1959]: 2025-01-29 11:36:29.470 [INFO][6001] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.56.64/26 host="ci-4152.2.0-a-23f4c5510f" Jan 29 11:36:29.486231 containerd[1959]: 2025-01-29 11:36:29.470 [INFO][6001] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.56.64/26 handle="k8s-pod-network.c2626432ccc472ef568131d7e4b627736ec132e6ccdd3c6ba11b7783f0a747cf" host="ci-4152.2.0-a-23f4c5510f" Jan 29 11:36:29.486231 containerd[1959]: 2025-01-29 11:36:29.471 [INFO][6001] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.c2626432ccc472ef568131d7e4b627736ec132e6ccdd3c6ba11b7783f0a747cf Jan 29 11:36:29.486231 containerd[1959]: 2025-01-29 11:36:29.474 [INFO][6001] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.56.64/26 handle="k8s-pod-network.c2626432ccc472ef568131d7e4b627736ec132e6ccdd3c6ba11b7783f0a747cf" host="ci-4152.2.0-a-23f4c5510f" Jan 29 11:36:29.486231 containerd[1959]: 2025-01-29 11:36:29.477 [INFO][6001] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.56.70/26] block=192.168.56.64/26 handle="k8s-pod-network.c2626432ccc472ef568131d7e4b627736ec132e6ccdd3c6ba11b7783f0a747cf" host="ci-4152.2.0-a-23f4c5510f" Jan 29 11:36:29.486231 containerd[1959]: 2025-01-29 11:36:29.478 [INFO][6001] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.56.70/26] handle="k8s-pod-network.c2626432ccc472ef568131d7e4b627736ec132e6ccdd3c6ba11b7783f0a747cf" host="ci-4152.2.0-a-23f4c5510f" Jan 29 11:36:29.486231 containerd[1959]: 2025-01-29 11:36:29.478 [INFO][6001] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 29 11:36:29.486231 containerd[1959]: 2025-01-29 11:36:29.478 [INFO][6001] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.56.70/26] IPv6=[] ContainerID="c2626432ccc472ef568131d7e4b627736ec132e6ccdd3c6ba11b7783f0a747cf" HandleID="k8s-pod-network.c2626432ccc472ef568131d7e4b627736ec132e6ccdd3c6ba11b7783f0a747cf" Workload="ci--4152.2.0--a--23f4c5510f-k8s-calico--apiserver--66755f7995--4p25x-eth0" Jan 29 11:36:29.486694 containerd[1959]: 2025-01-29 11:36:29.479 [INFO][5898] cni-plugin/k8s.go 386: Populated endpoint ContainerID="c2626432ccc472ef568131d7e4b627736ec132e6ccdd3c6ba11b7783f0a747cf" Namespace="calico-apiserver" Pod="calico-apiserver-66755f7995-4p25x" WorkloadEndpoint="ci--4152.2.0--a--23f4c5510f-k8s-calico--apiserver--66755f7995--4p25x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4152.2.0--a--23f4c5510f-k8s-calico--apiserver--66755f7995--4p25x-eth0", GenerateName:"calico-apiserver-66755f7995-", Namespace:"calico-apiserver", SelfLink:"", UID:"5acf5fde-1589-42fe-98f2-79caa06a6364", ResourceVersion:"664", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 36, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"66755f7995", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4152.2.0-a-23f4c5510f", ContainerID:"", Pod:"calico-apiserver-66755f7995-4p25x", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.56.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4a04f4ee0d3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:36:29.486694 containerd[1959]: 2025-01-29 11:36:29.479 [INFO][5898] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.56.70/32] ContainerID="c2626432ccc472ef568131d7e4b627736ec132e6ccdd3c6ba11b7783f0a747cf" Namespace="calico-apiserver" Pod="calico-apiserver-66755f7995-4p25x" WorkloadEndpoint="ci--4152.2.0--a--23f4c5510f-k8s-calico--apiserver--66755f7995--4p25x-eth0" Jan 29 11:36:29.486694 containerd[1959]: 2025-01-29 11:36:29.479 [INFO][5898] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4a04f4ee0d3 ContainerID="c2626432ccc472ef568131d7e4b627736ec132e6ccdd3c6ba11b7783f0a747cf" Namespace="calico-apiserver" Pod="calico-apiserver-66755f7995-4p25x" WorkloadEndpoint="ci--4152.2.0--a--23f4c5510f-k8s-calico--apiserver--66755f7995--4p25x-eth0" Jan 29 11:36:29.486694 containerd[1959]: 2025-01-29 11:36:29.480 [INFO][5898] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c2626432ccc472ef568131d7e4b627736ec132e6ccdd3c6ba11b7783f0a747cf" Namespace="calico-apiserver" Pod="calico-apiserver-66755f7995-4p25x" WorkloadEndpoint="ci--4152.2.0--a--23f4c5510f-k8s-calico--apiserver--66755f7995--4p25x-eth0" Jan 29 11:36:29.486694 containerd[1959]: 2025-01-29 11:36:29.480 [INFO][5898] cni-plugin/k8s.go 414: Added 
Mac, interface name, and active container ID to endpoint ContainerID="c2626432ccc472ef568131d7e4b627736ec132e6ccdd3c6ba11b7783f0a747cf" Namespace="calico-apiserver" Pod="calico-apiserver-66755f7995-4p25x" WorkloadEndpoint="ci--4152.2.0--a--23f4c5510f-k8s-calico--apiserver--66755f7995--4p25x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4152.2.0--a--23f4c5510f-k8s-calico--apiserver--66755f7995--4p25x-eth0", GenerateName:"calico-apiserver-66755f7995-", Namespace:"calico-apiserver", SelfLink:"", UID:"5acf5fde-1589-42fe-98f2-79caa06a6364", ResourceVersion:"664", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 36, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"66755f7995", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4152.2.0-a-23f4c5510f", ContainerID:"c2626432ccc472ef568131d7e4b627736ec132e6ccdd3c6ba11b7783f0a747cf", Pod:"calico-apiserver-66755f7995-4p25x", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.56.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4a04f4ee0d3", MAC:"0e:d1:25:08:19:28", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:36:29.486694 containerd[1959]: 2025-01-29 11:36:29.485 [INFO][5898] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="c2626432ccc472ef568131d7e4b627736ec132e6ccdd3c6ba11b7783f0a747cf" Namespace="calico-apiserver" Pod="calico-apiserver-66755f7995-4p25x" WorkloadEndpoint="ci--4152.2.0--a--23f4c5510f-k8s-calico--apiserver--66755f7995--4p25x-eth0" Jan 29 11:36:29.496523 containerd[1959]: time="2025-01-29T11:36:29.496434223Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:36:29.496523 containerd[1959]: time="2025-01-29T11:36:29.496497342Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:36:29.496747 containerd[1959]: time="2025-01-29T11:36:29.496512356Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:36:29.496747 containerd[1959]: time="2025-01-29T11:36:29.496593422Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:36:29.500859 containerd[1959]: time="2025-01-29T11:36:29.500836620Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-66755f7995-nr82b,Uid:bd3d55b5-a627-418c-9156-9c5f11420d4d,Namespace:calico-apiserver,Attempt:5,} returns sandbox id \"98163ec473407b9c121cbbeda85a8f2423874102d59eb18633ab7741e37119cb\"" Jan 29 11:36:29.516980 containerd[1959]: time="2025-01-29T11:36:29.516962984Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-bs4sr,Uid:c7ba5fb2-2386-4664-9fd0-5429e827f425,Namespace:kube-system,Attempt:5,} returns sandbox id \"110f9d3b4d65c0e1f9c32d8afbc69e0d3a34328b19b367e2845537e617de2ca5\"" Jan 29 11:36:29.518292 containerd[1959]: time="2025-01-29T11:36:29.518278027Z" level=info msg="CreateContainer within sandbox \"110f9d3b4d65c0e1f9c32d8afbc69e0d3a34328b19b367e2845537e617de2ca5\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 29 11:36:29.523419 containerd[1959]: time="2025-01-29T11:36:29.523399371Z" level=info msg="CreateContainer within sandbox \"110f9d3b4d65c0e1f9c32d8afbc69e0d3a34328b19b367e2845537e617de2ca5\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"17d3c9ea2044893cebbb1258445fda93c307865312932e828f3580094a42d599\"" Jan 29 11:36:29.523666 containerd[1959]: time="2025-01-29T11:36:29.523650658Z" level=info msg="StartContainer for \"17d3c9ea2044893cebbb1258445fda93c307865312932e828f3580094a42d599\"" Jan 29 11:36:29.524330 containerd[1959]: time="2025-01-29T11:36:29.524314964Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-66755f7995-4p25x,Uid:5acf5fde-1589-42fe-98f2-79caa06a6364,Namespace:calico-apiserver,Attempt:5,} returns sandbox id \"c2626432ccc472ef568131d7e4b627736ec132e6ccdd3c6ba11b7783f0a747cf\"" Jan 29 11:36:29.577024 containerd[1959]: time="2025-01-29T11:36:29.576958545Z" level=info msg="StartContainer for \"17d3c9ea2044893cebbb1258445fda93c307865312932e828f3580094a42d599\" returns successfully" Jan 29 11:36:30.351664 kubelet[3444]: I0129 11:36:30.351553 3444 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-jgtzn" podStartSLOduration=18.351520691 podStartE2EDuration="18.351520691s" podCreationTimestamp="2025-01-29 11:36:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:36:30.350517106 +0000 UTC m=+34.199384664" watchObservedRunningTime="2025-01-29 11:36:30.351520691 +0000 UTC m=+34.200388201" Jan 29 11:36:30.359318 kubelet[3444]: I0129 11:36:30.359260 3444 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 11:36:30.370995 kubelet[3444]: I0129 11:36:30.370887 3444 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-bs4sr" podStartSLOduration=18.370851127 podStartE2EDuration="18.370851127s" podCreationTimestamp="2025-01-29 11:36:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:36:30.369903992 +0000 UTC m=+34.218771525" watchObservedRunningTime="2025-01-29 11:36:30.370851127 +0000 UTC m=+34.219718641" Jan 29 11:36:30.398368 systemd-networkd[1564]: calife00ed48e71: Gained IPv6LL Jan 29 11:36:30.526399 systemd-networkd[1564]: calidcbcc46cf61: Gained IPv6LL Jan 29 11:36:30.590580 systemd-networkd[1564]: cali65cd7168416: Gained IPv6LL Jan 29 
11:36:30.654739 systemd-networkd[1564]: cali45df1571d95: Gained IPv6LL Jan 29 11:36:30.902407 containerd[1959]: time="2025-01-29T11:36:30.902356580Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:36:30.902635 containerd[1959]: time="2025-01-29T11:36:30.902505951Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632" Jan 29 11:36:30.902868 containerd[1959]: time="2025-01-29T11:36:30.902830499Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:36:30.904212 containerd[1959]: time="2025-01-29T11:36:30.904168683Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:36:30.904440 containerd[1959]: time="2025-01-29T11:36:30.904401514Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 1.472280439s" Jan 29 11:36:30.904440 containerd[1959]: time="2025-01-29T11:36:30.904415542Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\"" Jan 29 11:36:30.904877 containerd[1959]: time="2025-01-29T11:36:30.904865644Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Jan 29 11:36:30.905439 containerd[1959]: time="2025-01-29T11:36:30.905397199Z" level=info msg="CreateContainer within sandbox \"2441861a97cd041e7497794e1527286b18032986200efea59b1ee14a02bf6fc5\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jan 29 11:36:30.910527 containerd[1959]: time="2025-01-29T11:36:30.910478855Z" level=info msg="CreateContainer within sandbox \"2441861a97cd041e7497794e1527286b18032986200efea59b1ee14a02bf6fc5\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"bc65ab66860fbffb3f41ee63d861ef97be31004a56c439b4404b0ad2c465e213\"" Jan 29 11:36:30.910748 containerd[1959]: time="2025-01-29T11:36:30.910693943Z" level=info msg="StartContainer for \"bc65ab66860fbffb3f41ee63d861ef97be31004a56c439b4404b0ad2c465e213\"" Jan 29 11:36:30.949323 containerd[1959]: time="2025-01-29T11:36:30.949301150Z" level=info msg="StartContainer for \"bc65ab66860fbffb3f41ee63d861ef97be31004a56c439b4404b0ad2c465e213\" returns successfully" Jan 29 11:36:31.038579 systemd-networkd[1564]: cali4a04f4ee0d3: Gained IPv6LL Jan 29 11:36:31.422576 systemd-networkd[1564]: calid132c0ac697: Gained IPv6LL Jan 29 11:36:32.490388 containerd[1959]: time="2025-01-29T11:36:32.490334527Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:36:32.490609 containerd[1959]: time="2025-01-29T11:36:32.490570554Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=34141192" Jan 29 11:36:32.490936 containerd[1959]: time="2025-01-29T11:36:32.490898933Z" level=info msg="ImageCreate event 
name:\"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:36:32.491888 containerd[1959]: time="2025-01-29T11:36:32.491848440Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:36:32.492298 containerd[1959]: time="2025-01-29T11:36:32.492259158Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"35634244\" in 1.587380192s" Jan 29 11:36:32.492298 containerd[1959]: time="2025-01-29T11:36:32.492272492Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\"" Jan 29 11:36:32.492775 containerd[1959]: time="2025-01-29T11:36:32.492763641Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 29 11:36:32.495856 containerd[1959]: time="2025-01-29T11:36:32.495822738Z" level=info msg="CreateContainer within sandbox \"8d67e7344340c22e80411c834f668c47648713653dfa768326236d122fca5472\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jan 29 11:36:32.499576 containerd[1959]: time="2025-01-29T11:36:32.499535057Z" level=info msg="CreateContainer within sandbox \"8d67e7344340c22e80411c834f668c47648713653dfa768326236d122fca5472\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"356d0fad2bcd95727c8b6328bd58b7879e5b4e7186cee238a9d420aa5258e534\"" Jan 29 11:36:32.499776 containerd[1959]: time="2025-01-29T11:36:32.499727381Z" level=info msg="StartContainer for \"356d0fad2bcd95727c8b6328bd58b7879e5b4e7186cee238a9d420aa5258e534\"" Jan 29 11:36:32.541086 containerd[1959]: time="2025-01-29T11:36:32.541064066Z" level=info msg="StartContainer for \"356d0fad2bcd95727c8b6328bd58b7879e5b4e7186cee238a9d420aa5258e534\" returns successfully" Jan 29 11:36:33.389470 kubelet[3444]: I0129 11:36:33.389406 3444 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-5849fd7c66-k8fpk" podStartSLOduration=13.38119826 podStartE2EDuration="16.389394915s" podCreationTimestamp="2025-01-29 11:36:17 +0000 UTC" firstStartedPulling="2025-01-29 11:36:29.48451264 +0000 UTC m=+33.333380113" lastFinishedPulling="2025-01-29 11:36:32.492709296 +0000 UTC m=+36.341576768" observedRunningTime="2025-01-29 11:36:33.388962438 +0000 UTC m=+37.237829905" watchObservedRunningTime="2025-01-29 11:36:33.389394915 +0000 UTC m=+37.238262379" Jan 29 11:36:33.699146 kubelet[3444]: I0129 11:36:33.699020 3444 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 11:36:33.879353 kernel: bpftool[6886]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 29 11:36:34.034498 systemd-networkd[1564]: vxlan.calico: Link UP Jan 29 11:36:34.034501 systemd-networkd[1564]: vxlan.calico: Gained carrier Jan 29 11:36:34.238381 containerd[1959]: time="2025-01-29T11:36:34.238351842Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:36:34.238668 containerd[1959]: time="2025-01-29T11:36:34.238499557Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=42001404" Jan 29 11:36:34.239146 containerd[1959]: time="2025-01-29T11:36:34.239134216Z" level=info msg="ImageCreate event name:\"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:36:34.240448 containerd[1959]: time="2025-01-29T11:36:34.240436430Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:36:34.240963 containerd[1959]: time="2025-01-29T11:36:34.240949881Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 1.748170743s" Jan 29 11:36:34.241014 containerd[1959]: time="2025-01-29T11:36:34.240965738Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Jan 29 11:36:34.241452 containerd[1959]: time="2025-01-29T11:36:34.241440355Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 29 11:36:34.242011 containerd[1959]: time="2025-01-29T11:36:34.241997923Z" level=info msg="CreateContainer within sandbox \"98163ec473407b9c121cbbeda85a8f2423874102d59eb18633ab7741e37119cb\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 29 11:36:34.246228 containerd[1959]: time="2025-01-29T11:36:34.246209279Z" level=info msg="CreateContainer within sandbox \"98163ec473407b9c121cbbeda85a8f2423874102d59eb18633ab7741e37119cb\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"50f0ab74e4b912fccfd718adc4b2aef7e79c461f15c8319425081252b8006e12\"" Jan 29 11:36:34.246494 containerd[1959]: time="2025-01-29T11:36:34.246478838Z" level=info msg="StartContainer for \"50f0ab74e4b912fccfd718adc4b2aef7e79c461f15c8319425081252b8006e12\"" Jan 29 11:36:34.289997 containerd[1959]: time="2025-01-29T11:36:34.289937185Z" level=info msg="StartContainer for \"50f0ab74e4b912fccfd718adc4b2aef7e79c461f15c8319425081252b8006e12\" returns successfully" Jan 29 11:36:34.376436 kubelet[3444]: I0129 11:36:34.376421 3444 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 11:36:34.382413 kubelet[3444]: I0129 11:36:34.382353 3444 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-66755f7995-nr82b" podStartSLOduration=12.642505702 podStartE2EDuration="17.382339916s" podCreationTimestamp="2025-01-29 11:36:17 +0000 UTC" firstStartedPulling="2025-01-29 11:36:29.501536206 +0000 UTC m=+33.350403677" lastFinishedPulling="2025-01-29 11:36:34.241370424 +0000 UTC m=+38.090237891" observedRunningTime="2025-01-29 11:36:34.382303625 +0000 UTC m=+38.231171107" watchObservedRunningTime="2025-01-29 11:36:34.382339916 +0000 UTC m=+38.231207386" Jan 29 11:36:34.627788 containerd[1959]: time="2025-01-29T11:36:34.627652351Z" level=info msg="ImageUpdate event 
name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:36:34.627965 containerd[1959]: time="2025-01-29T11:36:34.627896968Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=77" Jan 29 11:36:34.629655 containerd[1959]: time="2025-01-29T11:36:34.629612371Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 388.156264ms" Jan 29 11:36:34.629655 containerd[1959]: time="2025-01-29T11:36:34.629626551Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Jan 29 11:36:34.630230 containerd[1959]: time="2025-01-29T11:36:34.630201368Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Jan 29 11:36:34.630890 containerd[1959]: time="2025-01-29T11:36:34.630876631Z" level=info msg="CreateContainer within sandbox \"c2626432ccc472ef568131d7e4b627736ec132e6ccdd3c6ba11b7783f0a747cf\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 29 11:36:34.635879 containerd[1959]: time="2025-01-29T11:36:34.635839780Z" level=info msg="CreateContainer within sandbox \"c2626432ccc472ef568131d7e4b627736ec132e6ccdd3c6ba11b7783f0a747cf\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"15ef3679a19f331397879d756ea471e1750b6eaf67cb79cdfb31ba40279b2f90\"" Jan 29 11:36:34.636141 containerd[1959]: time="2025-01-29T11:36:34.636127177Z" level=info msg="StartContainer for \"15ef3679a19f331397879d756ea471e1750b6eaf67cb79cdfb31ba40279b2f90\"" Jan 29 11:36:34.684868 containerd[1959]: time="2025-01-29T11:36:34.684845758Z" level=info msg="StartContainer for \"15ef3679a19f331397879d756ea471e1750b6eaf67cb79cdfb31ba40279b2f90\" returns successfully" Jan 29 11:36:35.070616 systemd-networkd[1564]: vxlan.calico: Gained IPv6LL Jan 29 11:36:35.386676 kubelet[3444]: I0129 11:36:35.386284 3444 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 11:36:35.393892 kubelet[3444]: I0129 11:36:35.393858 3444 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-66755f7995-4p25x" podStartSLOduration=13.288573283 podStartE2EDuration="18.393845302s" podCreationTimestamp="2025-01-29 11:36:17 +0000 UTC" firstStartedPulling="2025-01-29 11:36:29.524805684 +0000 UTC m=+33.373673151" lastFinishedPulling="2025-01-29 11:36:34.630077703 +0000 UTC m=+38.478945170" observedRunningTime="2025-01-29 11:36:35.393787398 +0000 UTC m=+39.242654865" watchObservedRunningTime="2025-01-29 11:36:35.393845302 +0000 UTC m=+39.242712770" Jan 29 11:36:35.871654 containerd[1959]: time="2025-01-29T11:36:35.871629734Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:36:35.871939 containerd[1959]: time="2025-01-29T11:36:35.871884703Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081" Jan 29 11:36:35.872259 containerd[1959]: time="2025-01-29T11:36:35.872248012Z" level=info 
msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:36:35.873163 containerd[1959]: time="2025-01-29T11:36:35.873153327Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:36:35.873552 containerd[1959]: time="2025-01-29T11:36:35.873538693Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 1.243322039s" Jan 29 11:36:35.873589 containerd[1959]: time="2025-01-29T11:36:35.873555559Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\"" Jan 29 11:36:35.874480 containerd[1959]: time="2025-01-29T11:36:35.874469457Z" level=info msg="CreateContainer within sandbox \"2441861a97cd041e7497794e1527286b18032986200efea59b1ee14a02bf6fc5\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jan 29 11:36:35.878625 containerd[1959]: time="2025-01-29T11:36:35.878583806Z" level=info msg="CreateContainer within sandbox \"2441861a97cd041e7497794e1527286b18032986200efea59b1ee14a02bf6fc5\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"f7dab7e13a5252dc16a279b94fb888e693e7f82d51835182c9fa34c8a6cad0a3\"" Jan 29 11:36:35.878826 containerd[1959]: time="2025-01-29T11:36:35.878787807Z" level=info msg="StartContainer for \"f7dab7e13a5252dc16a279b94fb888e693e7f82d51835182c9fa34c8a6cad0a3\"" Jan 29 11:36:35.957987 containerd[1959]: time="2025-01-29T11:36:35.957950537Z" level=info msg="StartContainer for \"f7dab7e13a5252dc16a279b94fb888e693e7f82d51835182c9fa34c8a6cad0a3\" returns successfully" Jan 29 11:36:36.232692 kubelet[3444]: I0129 11:36:36.232583 3444 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jan 29 11:36:36.232692 kubelet[3444]: I0129 11:36:36.232611 3444 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jan 29 11:36:36.396994 kubelet[3444]: I0129 11:36:36.396937 3444 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 11:36:36.714018 kubelet[3444]: I0129 11:36:36.713941 3444 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 11:36:36.847883 kubelet[3444]: I0129 11:36:36.847834 3444 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-4frhm" podStartSLOduration=13.405909474 podStartE2EDuration="19.84781423s" podCreationTimestamp="2025-01-29 11:36:17 +0000 UTC" firstStartedPulling="2025-01-29 11:36:29.431989544 +0000 UTC m=+33.280857017" lastFinishedPulling="2025-01-29 11:36:35.873894306 +0000 UTC m=+39.722761773" observedRunningTime="2025-01-29 11:36:36.417354437 +0000 UTC m=+40.266221967" watchObservedRunningTime="2025-01-29 11:36:36.84781423 +0000 UTC 
m=+40.696681700" Jan 29 11:36:45.910926 kubelet[3444]: I0129 11:36:45.910814 3444 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 11:36:51.471600 systemd[1]: Started sshd@7-139.178.70.53:22-39.98.40.23:32896.service - OpenSSH per-connection server daemon (39.98.40.23:32896). Jan 29 11:36:51.474613 kubelet[3444]: I0129 11:36:51.474583 3444 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 11:36:51.480764 sshd[7265]: Connection closed by 39.98.40.23 port 32896 Jan 29 11:36:51.481437 systemd[1]: sshd@7-139.178.70.53:22-39.98.40.23:32896.service: Deactivated successfully. Jan 29 11:36:56.191557 containerd[1959]: time="2025-01-29T11:36:56.191513761Z" level=info msg="StopPodSandbox for \"886652190920c7c881375256c43dee442e3425539772965d5d5bada2eddca37f\"" Jan 29 11:36:56.191949 containerd[1959]: time="2025-01-29T11:36:56.191631145Z" level=info msg="TearDown network for sandbox \"886652190920c7c881375256c43dee442e3425539772965d5d5bada2eddca37f\" successfully" Jan 29 11:36:56.191949 containerd[1959]: time="2025-01-29T11:36:56.191640333Z" level=info msg="StopPodSandbox for \"886652190920c7c881375256c43dee442e3425539772965d5d5bada2eddca37f\" returns successfully" Jan 29 11:36:56.191949 containerd[1959]: time="2025-01-29T11:36:56.191919276Z" level=info msg="RemovePodSandbox for \"886652190920c7c881375256c43dee442e3425539772965d5d5bada2eddca37f\"" Jan 29 11:36:56.191949 containerd[1959]: time="2025-01-29T11:36:56.191931280Z" level=info msg="Forcibly stopping sandbox \"886652190920c7c881375256c43dee442e3425539772965d5d5bada2eddca37f\"" Jan 29 11:36:56.192054 containerd[1959]: time="2025-01-29T11:36:56.191959884Z" level=info msg="TearDown network for sandbox \"886652190920c7c881375256c43dee442e3425539772965d5d5bada2eddca37f\" successfully" Jan 29 11:36:56.193280 containerd[1959]: time="2025-01-29T11:36:56.193240851Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"886652190920c7c881375256c43dee442e3425539772965d5d5bada2eddca37f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 11:36:56.193280 containerd[1959]: time="2025-01-29T11:36:56.193262103Z" level=info msg="RemovePodSandbox \"886652190920c7c881375256c43dee442e3425539772965d5d5bada2eddca37f\" returns successfully" Jan 29 11:36:56.193637 containerd[1959]: time="2025-01-29T11:36:56.193561869Z" level=info msg="StopPodSandbox for \"c665f6707dda0cf1824f155296f77f721f7d636a6e852f658be95b4f6c6ea946\"" Jan 29 11:36:56.193697 containerd[1959]: time="2025-01-29T11:36:56.193682166Z" level=info msg="TearDown network for sandbox \"c665f6707dda0cf1824f155296f77f721f7d636a6e852f658be95b4f6c6ea946\" successfully" Jan 29 11:36:56.193697 containerd[1959]: time="2025-01-29T11:36:56.193693364Z" level=info msg="StopPodSandbox for \"c665f6707dda0cf1824f155296f77f721f7d636a6e852f658be95b4f6c6ea946\" returns successfully" Jan 29 11:36:56.193945 containerd[1959]: time="2025-01-29T11:36:56.193913410Z" level=info msg="RemovePodSandbox for \"c665f6707dda0cf1824f155296f77f721f7d636a6e852f658be95b4f6c6ea946\"" Jan 29 11:36:56.193945 containerd[1959]: time="2025-01-29T11:36:56.193924860Z" level=info msg="Forcibly stopping sandbox \"c665f6707dda0cf1824f155296f77f721f7d636a6e852f658be95b4f6c6ea946\"" Jan 29 11:36:56.194010 containerd[1959]: time="2025-01-29T11:36:56.193989262Z" level=info msg="TearDown network for sandbox \"c665f6707dda0cf1824f155296f77f721f7d636a6e852f658be95b4f6c6ea946\" successfully" Jan 29 11:36:56.195376 containerd[1959]: time="2025-01-29T11:36:56.195316246Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c665f6707dda0cf1824f155296f77f721f7d636a6e852f658be95b4f6c6ea946\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 11:36:56.195376 containerd[1959]: time="2025-01-29T11:36:56.195372888Z" level=info msg="RemovePodSandbox \"c665f6707dda0cf1824f155296f77f721f7d636a6e852f658be95b4f6c6ea946\" returns successfully" Jan 29 11:36:56.195692 containerd[1959]: time="2025-01-29T11:36:56.195628478Z" level=info msg="StopPodSandbox for \"5d4369b81514ab52a3fedb3fd28dbefc0cc89cd73284216d2c5985dad6850440\"" Jan 29 11:36:56.195725 containerd[1959]: time="2025-01-29T11:36:56.195701758Z" level=info msg="TearDown network for sandbox \"5d4369b81514ab52a3fedb3fd28dbefc0cc89cd73284216d2c5985dad6850440\" successfully" Jan 29 11:36:56.195725 containerd[1959]: time="2025-01-29T11:36:56.195708637Z" level=info msg="StopPodSandbox for \"5d4369b81514ab52a3fedb3fd28dbefc0cc89cd73284216d2c5985dad6850440\" returns successfully" Jan 29 11:36:56.195895 containerd[1959]: time="2025-01-29T11:36:56.195839357Z" level=info msg="RemovePodSandbox for \"5d4369b81514ab52a3fedb3fd28dbefc0cc89cd73284216d2c5985dad6850440\"" Jan 29 11:36:56.195895 containerd[1959]: time="2025-01-29T11:36:56.195869858Z" level=info msg="Forcibly stopping sandbox \"5d4369b81514ab52a3fedb3fd28dbefc0cc89cd73284216d2c5985dad6850440\"" Jan 29 11:36:56.195935 containerd[1959]: time="2025-01-29T11:36:56.195919377Z" level=info msg="TearDown network for sandbox \"5d4369b81514ab52a3fedb3fd28dbefc0cc89cd73284216d2c5985dad6850440\" successfully" Jan 29 11:36:56.197120 containerd[1959]: time="2025-01-29T11:36:56.197093526Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5d4369b81514ab52a3fedb3fd28dbefc0cc89cd73284216d2c5985dad6850440\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 11:36:56.197150 containerd[1959]: time="2025-01-29T11:36:56.197128231Z" level=info msg="RemovePodSandbox \"5d4369b81514ab52a3fedb3fd28dbefc0cc89cd73284216d2c5985dad6850440\" returns successfully" Jan 29 11:36:56.197249 containerd[1959]: time="2025-01-29T11:36:56.197239511Z" level=info msg="StopPodSandbox for \"e36e089bdf3e29fd06399bf98df00b1e52fb1c140ed5e59fb513d6f5589962f3\"" Jan 29 11:36:56.197285 containerd[1959]: time="2025-01-29T11:36:56.197279255Z" level=info msg="TearDown network for sandbox \"e36e089bdf3e29fd06399bf98df00b1e52fb1c140ed5e59fb513d6f5589962f3\" successfully" Jan 29 11:36:56.197309 containerd[1959]: time="2025-01-29T11:36:56.197285617Z" level=info msg="StopPodSandbox for \"e36e089bdf3e29fd06399bf98df00b1e52fb1c140ed5e59fb513d6f5589962f3\" returns successfully" Jan 29 11:36:56.197495 containerd[1959]: time="2025-01-29T11:36:56.197470005Z" level=info msg="RemovePodSandbox for \"e36e089bdf3e29fd06399bf98df00b1e52fb1c140ed5e59fb513d6f5589962f3\"" Jan 29 11:36:56.197515 containerd[1959]: time="2025-01-29T11:36:56.197497867Z" level=info msg="Forcibly stopping sandbox \"e36e089bdf3e29fd06399bf98df00b1e52fb1c140ed5e59fb513d6f5589962f3\"" Jan 29 11:36:56.197559 containerd[1959]: time="2025-01-29T11:36:56.197529941Z" level=info msg="TearDown network for sandbox \"e36e089bdf3e29fd06399bf98df00b1e52fb1c140ed5e59fb513d6f5589962f3\" successfully" Jan 29 11:36:56.198646 containerd[1959]: time="2025-01-29T11:36:56.198608136Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e36e089bdf3e29fd06399bf98df00b1e52fb1c140ed5e59fb513d6f5589962f3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 11:36:56.198646 containerd[1959]: time="2025-01-29T11:36:56.198625391Z" level=info msg="RemovePodSandbox \"e36e089bdf3e29fd06399bf98df00b1e52fb1c140ed5e59fb513d6f5589962f3\" returns successfully" Jan 29 11:36:56.198797 containerd[1959]: time="2025-01-29T11:36:56.198760357Z" level=info msg="StopPodSandbox for \"6c1d0212cbc332989fda7ce836664606eb810f71de0743f71294519338ec028b\"" Jan 29 11:36:56.198857 containerd[1959]: time="2025-01-29T11:36:56.198823865Z" level=info msg="TearDown network for sandbox \"6c1d0212cbc332989fda7ce836664606eb810f71de0743f71294519338ec028b\" successfully" Jan 29 11:36:56.198857 containerd[1959]: time="2025-01-29T11:36:56.198830162Z" level=info msg="StopPodSandbox for \"6c1d0212cbc332989fda7ce836664606eb810f71de0743f71294519338ec028b\" returns successfully" Jan 29 11:36:56.199022 containerd[1959]: time="2025-01-29T11:36:56.198991354Z" level=info msg="RemovePodSandbox for \"6c1d0212cbc332989fda7ce836664606eb810f71de0743f71294519338ec028b\"" Jan 29 11:36:56.199022 containerd[1959]: time="2025-01-29T11:36:56.199019695Z" level=info msg="Forcibly stopping sandbox \"6c1d0212cbc332989fda7ce836664606eb810f71de0743f71294519338ec028b\"" Jan 29 11:36:56.199081 containerd[1959]: time="2025-01-29T11:36:56.199068553Z" level=info msg="TearDown network for sandbox \"6c1d0212cbc332989fda7ce836664606eb810f71de0743f71294519338ec028b\" successfully" Jan 29 11:36:56.217286 containerd[1959]: time="2025-01-29T11:36:56.217246620Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6c1d0212cbc332989fda7ce836664606eb810f71de0743f71294519338ec028b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 11:36:56.217286 containerd[1959]: time="2025-01-29T11:36:56.217266236Z" level=info msg="RemovePodSandbox \"6c1d0212cbc332989fda7ce836664606eb810f71de0743f71294519338ec028b\" returns successfully" Jan 29 11:36:56.217582 containerd[1959]: time="2025-01-29T11:36:56.217504756Z" level=info msg="StopPodSandbox for \"310114bfd28ce07f86219fef0fb39f8118f5d712562004daea20b9c90f2a69d8\"" Jan 29 11:36:56.217651 containerd[1959]: time="2025-01-29T11:36:56.217593213Z" level=info msg="TearDown network for sandbox \"310114bfd28ce07f86219fef0fb39f8118f5d712562004daea20b9c90f2a69d8\" successfully" Jan 29 11:36:56.217651 containerd[1959]: time="2025-01-29T11:36:56.217599580Z" level=info msg="StopPodSandbox for \"310114bfd28ce07f86219fef0fb39f8118f5d712562004daea20b9c90f2a69d8\" returns successfully" Jan 29 11:36:56.217857 containerd[1959]: time="2025-01-29T11:36:56.217823094Z" level=info msg="RemovePodSandbox for \"310114bfd28ce07f86219fef0fb39f8118f5d712562004daea20b9c90f2a69d8\"" Jan 29 11:36:56.217906 containerd[1959]: time="2025-01-29T11:36:56.217857360Z" level=info msg="Forcibly stopping sandbox \"310114bfd28ce07f86219fef0fb39f8118f5d712562004daea20b9c90f2a69d8\"" Jan 29 11:36:56.217928 containerd[1959]: time="2025-01-29T11:36:56.217902256Z" level=info msg="TearDown network for sandbox \"310114bfd28ce07f86219fef0fb39f8118f5d712562004daea20b9c90f2a69d8\" successfully" Jan 29 11:36:56.219323 containerd[1959]: time="2025-01-29T11:36:56.219307909Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"310114bfd28ce07f86219fef0fb39f8118f5d712562004daea20b9c90f2a69d8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 11:36:56.219363 containerd[1959]: time="2025-01-29T11:36:56.219329116Z" level=info msg="RemovePodSandbox \"310114bfd28ce07f86219fef0fb39f8118f5d712562004daea20b9c90f2a69d8\" returns successfully" Jan 29 11:36:56.220334 containerd[1959]: time="2025-01-29T11:36:56.219681440Z" level=info msg="StopPodSandbox for \"e5569f59dd8a32613d2cc67e5cdcfac99ad5cfdd0032464c90a9128a9e3de0ff\"" Jan 29 11:36:56.220334 containerd[1959]: time="2025-01-29T11:36:56.219771438Z" level=info msg="TearDown network for sandbox \"e5569f59dd8a32613d2cc67e5cdcfac99ad5cfdd0032464c90a9128a9e3de0ff\" successfully" Jan 29 11:36:56.220334 containerd[1959]: time="2025-01-29T11:36:56.219805560Z" level=info msg="StopPodSandbox for \"e5569f59dd8a32613d2cc67e5cdcfac99ad5cfdd0032464c90a9128a9e3de0ff\" returns successfully" Jan 29 11:36:56.220334 containerd[1959]: time="2025-01-29T11:36:56.219971212Z" level=info msg="RemovePodSandbox for \"e5569f59dd8a32613d2cc67e5cdcfac99ad5cfdd0032464c90a9128a9e3de0ff\"" Jan 29 11:36:56.220334 containerd[1959]: time="2025-01-29T11:36:56.219995693Z" level=info msg="Forcibly stopping sandbox \"e5569f59dd8a32613d2cc67e5cdcfac99ad5cfdd0032464c90a9128a9e3de0ff\"" Jan 29 11:36:56.220334 containerd[1959]: time="2025-01-29T11:36:56.220057360Z" level=info msg="TearDown network for sandbox \"e5569f59dd8a32613d2cc67e5cdcfac99ad5cfdd0032464c90a9128a9e3de0ff\" successfully" Jan 29 11:36:56.221528 containerd[1959]: time="2025-01-29T11:36:56.221516160Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e5569f59dd8a32613d2cc67e5cdcfac99ad5cfdd0032464c90a9128a9e3de0ff\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 11:36:56.221555 containerd[1959]: time="2025-01-29T11:36:56.221536282Z" level=info msg="RemovePodSandbox \"e5569f59dd8a32613d2cc67e5cdcfac99ad5cfdd0032464c90a9128a9e3de0ff\" returns successfully" Jan 29 11:36:56.221699 containerd[1959]: time="2025-01-29T11:36:56.221663756Z" level=info msg="StopPodSandbox for \"73de4b711e3bba88129c03b1a00519f1bea2d8e8476301f370d9845ad3d9c40f\"" Jan 29 11:36:56.221729 containerd[1959]: time="2025-01-29T11:36:56.221711332Z" level=info msg="TearDown network for sandbox \"73de4b711e3bba88129c03b1a00519f1bea2d8e8476301f370d9845ad3d9c40f\" successfully" Jan 29 11:36:56.221749 containerd[1959]: time="2025-01-29T11:36:56.221729356Z" level=info msg="StopPodSandbox for \"73de4b711e3bba88129c03b1a00519f1bea2d8e8476301f370d9845ad3d9c40f\" returns successfully" Jan 29 11:36:56.221829 containerd[1959]: time="2025-01-29T11:36:56.221821115Z" level=info msg="RemovePodSandbox for \"73de4b711e3bba88129c03b1a00519f1bea2d8e8476301f370d9845ad3d9c40f\"" Jan 29 11:36:56.221846 containerd[1959]: time="2025-01-29T11:36:56.221832798Z" level=info msg="Forcibly stopping sandbox \"73de4b711e3bba88129c03b1a00519f1bea2d8e8476301f370d9845ad3d9c40f\"" Jan 29 11:36:56.221902 containerd[1959]: time="2025-01-29T11:36:56.221870793Z" level=info msg="TearDown network for sandbox \"73de4b711e3bba88129c03b1a00519f1bea2d8e8476301f370d9845ad3d9c40f\" successfully" Jan 29 11:36:56.222984 containerd[1959]: time="2025-01-29T11:36:56.222948067Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"73de4b711e3bba88129c03b1a00519f1bea2d8e8476301f370d9845ad3d9c40f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 11:36:56.223036 containerd[1959]: time="2025-01-29T11:36:56.222987953Z" level=info msg="RemovePodSandbox \"73de4b711e3bba88129c03b1a00519f1bea2d8e8476301f370d9845ad3d9c40f\" returns successfully" Jan 29 11:36:56.223185 containerd[1959]: time="2025-01-29T11:36:56.223148065Z" level=info msg="StopPodSandbox for \"4c447c3ec9a8061e58dd1d261331ca98ac767090c88c05df28bc84122437ad79\"" Jan 29 11:36:56.223218 containerd[1959]: time="2025-01-29T11:36:56.223182389Z" level=info msg="TearDown network for sandbox \"4c447c3ec9a8061e58dd1d261331ca98ac767090c88c05df28bc84122437ad79\" successfully" Jan 29 11:36:56.223218 containerd[1959]: time="2025-01-29T11:36:56.223199535Z" level=info msg="StopPodSandbox for \"4c447c3ec9a8061e58dd1d261331ca98ac767090c88c05df28bc84122437ad79\" returns successfully" Jan 29 11:36:56.223291 containerd[1959]: time="2025-01-29T11:36:56.223283137Z" level=info msg="RemovePodSandbox for \"4c447c3ec9a8061e58dd1d261331ca98ac767090c88c05df28bc84122437ad79\"" Jan 29 11:36:56.223320 containerd[1959]: time="2025-01-29T11:36:56.223300640Z" level=info msg="Forcibly stopping sandbox \"4c447c3ec9a8061e58dd1d261331ca98ac767090c88c05df28bc84122437ad79\"" Jan 29 11:36:56.223350 containerd[1959]: time="2025-01-29T11:36:56.223332043Z" level=info msg="TearDown network for sandbox \"4c447c3ec9a8061e58dd1d261331ca98ac767090c88c05df28bc84122437ad79\" successfully" Jan 29 11:36:56.224524 containerd[1959]: time="2025-01-29T11:36:56.224446498Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4c447c3ec9a8061e58dd1d261331ca98ac767090c88c05df28bc84122437ad79\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 11:36:56.224524 containerd[1959]: time="2025-01-29T11:36:56.224502935Z" level=info msg="RemovePodSandbox \"4c447c3ec9a8061e58dd1d261331ca98ac767090c88c05df28bc84122437ad79\" returns successfully" Jan 29 11:36:56.224732 containerd[1959]: time="2025-01-29T11:36:56.224698186Z" level=info msg="StopPodSandbox for \"a59785fd33852d06b16401b37f7076c8d9496706e1462c0acbdbed401b46a554\"" Jan 29 11:36:56.224781 containerd[1959]: time="2025-01-29T11:36:56.224770843Z" level=info msg="TearDown network for sandbox \"a59785fd33852d06b16401b37f7076c8d9496706e1462c0acbdbed401b46a554\" successfully" Jan 29 11:36:56.224781 containerd[1959]: time="2025-01-29T11:36:56.224777215Z" level=info msg="StopPodSandbox for \"a59785fd33852d06b16401b37f7076c8d9496706e1462c0acbdbed401b46a554\" returns successfully" Jan 29 11:36:56.224988 containerd[1959]: time="2025-01-29T11:36:56.224955719Z" level=info msg="RemovePodSandbox for \"a59785fd33852d06b16401b37f7076c8d9496706e1462c0acbdbed401b46a554\"" Jan 29 11:36:56.224988 containerd[1959]: time="2025-01-29T11:36:56.224982651Z" level=info msg="Forcibly stopping sandbox \"a59785fd33852d06b16401b37f7076c8d9496706e1462c0acbdbed401b46a554\"" Jan 29 11:36:56.225085 containerd[1959]: time="2025-01-29T11:36:56.225046494Z" level=info msg="TearDown network for sandbox \"a59785fd33852d06b16401b37f7076c8d9496706e1462c0acbdbed401b46a554\" successfully" Jan 29 11:36:56.226155 containerd[1959]: time="2025-01-29T11:36:56.226116424Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a59785fd33852d06b16401b37f7076c8d9496706e1462c0acbdbed401b46a554\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 11:36:56.226155 containerd[1959]: time="2025-01-29T11:36:56.226133361Z" level=info msg="RemovePodSandbox \"a59785fd33852d06b16401b37f7076c8d9496706e1462c0acbdbed401b46a554\" returns successfully" Jan 29 11:36:56.226272 containerd[1959]: time="2025-01-29T11:36:56.226261681Z" level=info msg="StopPodSandbox for \"44973b2d7ba85c6b98b0cde9774715a152cc5701f4a8878b5abbc7fbe6f76a72\"" Jan 29 11:36:56.226370 containerd[1959]: time="2025-01-29T11:36:56.226324809Z" level=info msg="TearDown network for sandbox \"44973b2d7ba85c6b98b0cde9774715a152cc5701f4a8878b5abbc7fbe6f76a72\" successfully" Jan 29 11:36:56.226370 containerd[1959]: time="2025-01-29T11:36:56.226331539Z" level=info msg="StopPodSandbox for \"44973b2d7ba85c6b98b0cde9774715a152cc5701f4a8878b5abbc7fbe6f76a72\" returns successfully" Jan 29 11:36:56.226516 containerd[1959]: time="2025-01-29T11:36:56.226484376Z" level=info msg="RemovePodSandbox for \"44973b2d7ba85c6b98b0cde9774715a152cc5701f4a8878b5abbc7fbe6f76a72\"" Jan 29 11:36:56.226516 containerd[1959]: time="2025-01-29T11:36:56.226496434Z" level=info msg="Forcibly stopping sandbox \"44973b2d7ba85c6b98b0cde9774715a152cc5701f4a8878b5abbc7fbe6f76a72\"" Jan 29 11:36:56.226597 containerd[1959]: time="2025-01-29T11:36:56.226526615Z" level=info msg="TearDown network for sandbox \"44973b2d7ba85c6b98b0cde9774715a152cc5701f4a8878b5abbc7fbe6f76a72\" successfully" Jan 29 11:36:56.227677 containerd[1959]: time="2025-01-29T11:36:56.227638642Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"44973b2d7ba85c6b98b0cde9774715a152cc5701f4a8878b5abbc7fbe6f76a72\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 11:36:56.227677 containerd[1959]: time="2025-01-29T11:36:56.227655620Z" level=info msg="RemovePodSandbox \"44973b2d7ba85c6b98b0cde9774715a152cc5701f4a8878b5abbc7fbe6f76a72\" returns successfully" Jan 29 11:36:56.227937 containerd[1959]: time="2025-01-29T11:36:56.227883300Z" level=info msg="StopPodSandbox for \"a6554e61854b09ba0e2d8e8e67019bff062e0bde1f2529058a5beaa146335098\"" Jan 29 11:36:56.227984 containerd[1959]: time="2025-01-29T11:36:56.227970818Z" level=info msg="TearDown network for sandbox \"a6554e61854b09ba0e2d8e8e67019bff062e0bde1f2529058a5beaa146335098\" successfully" Jan 29 11:36:56.227984 containerd[1959]: time="2025-01-29T11:36:56.227977060Z" level=info msg="StopPodSandbox for \"a6554e61854b09ba0e2d8e8e67019bff062e0bde1f2529058a5beaa146335098\" returns successfully" Jan 29 11:36:56.228209 containerd[1959]: time="2025-01-29T11:36:56.228156968Z" level=info msg="RemovePodSandbox for \"a6554e61854b09ba0e2d8e8e67019bff062e0bde1f2529058a5beaa146335098\"" Jan 29 11:36:56.228209 containerd[1959]: time="2025-01-29T11:36:56.228184609Z" level=info msg="Forcibly stopping sandbox \"a6554e61854b09ba0e2d8e8e67019bff062e0bde1f2529058a5beaa146335098\"" Jan 29 11:36:56.228263 containerd[1959]: time="2025-01-29T11:36:56.228210862Z" level=info msg="TearDown network for sandbox \"a6554e61854b09ba0e2d8e8e67019bff062e0bde1f2529058a5beaa146335098\" successfully" Jan 29 11:36:56.229311 containerd[1959]: time="2025-01-29T11:36:56.229269254Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a6554e61854b09ba0e2d8e8e67019bff062e0bde1f2529058a5beaa146335098\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 11:36:56.229311 containerd[1959]: time="2025-01-29T11:36:56.229285585Z" level=info msg="RemovePodSandbox \"a6554e61854b09ba0e2d8e8e67019bff062e0bde1f2529058a5beaa146335098\" returns successfully" Jan 29 11:36:56.229554 containerd[1959]: time="2025-01-29T11:36:56.229515151Z" level=info msg="StopPodSandbox for \"50b2a311df9ed9cf36cfe9d6d7dda9c2d68e41a4a294af0d678e3afcff46730a\"" Jan 29 11:36:56.229583 containerd[1959]: time="2025-01-29T11:36:56.229560652Z" level=info msg="TearDown network for sandbox \"50b2a311df9ed9cf36cfe9d6d7dda9c2d68e41a4a294af0d678e3afcff46730a\" successfully" Jan 29 11:36:56.229583 containerd[1959]: time="2025-01-29T11:36:56.229578452Z" level=info msg="StopPodSandbox for \"50b2a311df9ed9cf36cfe9d6d7dda9c2d68e41a4a294af0d678e3afcff46730a\" returns successfully" Jan 29 11:36:56.229755 containerd[1959]: time="2025-01-29T11:36:56.229709003Z" level=info msg="RemovePodSandbox for \"50b2a311df9ed9cf36cfe9d6d7dda9c2d68e41a4a294af0d678e3afcff46730a\"" Jan 29 11:36:56.229755 containerd[1959]: time="2025-01-29T11:36:56.229719746Z" level=info msg="Forcibly stopping sandbox \"50b2a311df9ed9cf36cfe9d6d7dda9c2d68e41a4a294af0d678e3afcff46730a\"" Jan 29 11:36:56.229831 containerd[1959]: time="2025-01-29T11:36:56.229792857Z" level=info msg="TearDown network for sandbox \"50b2a311df9ed9cf36cfe9d6d7dda9c2d68e41a4a294af0d678e3afcff46730a\" successfully" Jan 29 11:36:56.230957 containerd[1959]: time="2025-01-29T11:36:56.230922030Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"50b2a311df9ed9cf36cfe9d6d7dda9c2d68e41a4a294af0d678e3afcff46730a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 11:36:56.230987 containerd[1959]: time="2025-01-29T11:36:56.230959339Z" level=info msg="RemovePodSandbox \"50b2a311df9ed9cf36cfe9d6d7dda9c2d68e41a4a294af0d678e3afcff46730a\" returns successfully" Jan 29 11:36:56.231141 containerd[1959]: time="2025-01-29T11:36:56.231080922Z" level=info msg="StopPodSandbox for \"dc935f49c58cb7ff10286d7aa926a38e14f5bcf4ebe89e99dc3ab63f93f3738c\"" Jan 29 11:36:56.231205 containerd[1959]: time="2025-01-29T11:36:56.231162182Z" level=info msg="TearDown network for sandbox \"dc935f49c58cb7ff10286d7aa926a38e14f5bcf4ebe89e99dc3ab63f93f3738c\" successfully" Jan 29 11:36:56.231205 containerd[1959]: time="2025-01-29T11:36:56.231168105Z" level=info msg="StopPodSandbox for \"dc935f49c58cb7ff10286d7aa926a38e14f5bcf4ebe89e99dc3ab63f93f3738c\" returns successfully" Jan 29 11:36:56.231282 containerd[1959]: time="2025-01-29T11:36:56.231274569Z" level=info msg="RemovePodSandbox for \"dc935f49c58cb7ff10286d7aa926a38e14f5bcf4ebe89e99dc3ab63f93f3738c\"" Jan 29 11:36:56.231307 containerd[1959]: time="2025-01-29T11:36:56.231285289Z" level=info msg="Forcibly stopping sandbox \"dc935f49c58cb7ff10286d7aa926a38e14f5bcf4ebe89e99dc3ab63f93f3738c\"" Jan 29 11:36:56.231429 containerd[1959]: time="2025-01-29T11:36:56.231336775Z" level=info msg="TearDown network for sandbox \"dc935f49c58cb7ff10286d7aa926a38e14f5bcf4ebe89e99dc3ab63f93f3738c\" successfully" Jan 29 11:36:56.232475 containerd[1959]: time="2025-01-29T11:36:56.232437532Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"dc935f49c58cb7ff10286d7aa926a38e14f5bcf4ebe89e99dc3ab63f93f3738c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 11:36:56.232475 containerd[1959]: time="2025-01-29T11:36:56.232456723Z" level=info msg="RemovePodSandbox \"dc935f49c58cb7ff10286d7aa926a38e14f5bcf4ebe89e99dc3ab63f93f3738c\" returns successfully" Jan 29 11:36:56.232738 containerd[1959]: time="2025-01-29T11:36:56.232698778Z" level=info msg="StopPodSandbox for \"5296f99b4dc62132255f4fe0c516b2c8227f47489f448903421874c15dbf31a3\"" Jan 29 11:36:56.232820 containerd[1959]: time="2025-01-29T11:36:56.232739238Z" level=info msg="TearDown network for sandbox \"5296f99b4dc62132255f4fe0c516b2c8227f47489f448903421874c15dbf31a3\" successfully" Jan 29 11:36:56.232820 containerd[1959]: time="2025-01-29T11:36:56.232763342Z" level=info msg="StopPodSandbox for \"5296f99b4dc62132255f4fe0c516b2c8227f47489f448903421874c15dbf31a3\" returns successfully" Jan 29 11:36:56.233066 containerd[1959]: time="2025-01-29T11:36:56.233012876Z" level=info msg="RemovePodSandbox for \"5296f99b4dc62132255f4fe0c516b2c8227f47489f448903421874c15dbf31a3\"" Jan 29 11:36:56.233066 containerd[1959]: time="2025-01-29T11:36:56.233039541Z" level=info msg="Forcibly stopping sandbox \"5296f99b4dc62132255f4fe0c516b2c8227f47489f448903421874c15dbf31a3\"" Jan 29 11:36:56.233132 containerd[1959]: time="2025-01-29T11:36:56.233089306Z" level=info msg="TearDown network for sandbox \"5296f99b4dc62132255f4fe0c516b2c8227f47489f448903421874c15dbf31a3\" successfully" Jan 29 11:36:56.234220 containerd[1959]: time="2025-01-29T11:36:56.234178710Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5296f99b4dc62132255f4fe0c516b2c8227f47489f448903421874c15dbf31a3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 11:36:56.234250 containerd[1959]: time="2025-01-29T11:36:56.234221501Z" level=info msg="RemovePodSandbox \"5296f99b4dc62132255f4fe0c516b2c8227f47489f448903421874c15dbf31a3\" returns successfully" Jan 29 11:36:56.234458 containerd[1959]: time="2025-01-29T11:36:56.234408245Z" level=info msg="StopPodSandbox for \"7a18cc63233347cd3f85e3536bd88ae50f2b1fc4b307d9e1526799d3ef0c643b\"" Jan 29 11:36:56.234504 containerd[1959]: time="2025-01-29T11:36:56.234490359Z" level=info msg="TearDown network for sandbox \"7a18cc63233347cd3f85e3536bd88ae50f2b1fc4b307d9e1526799d3ef0c643b\" successfully" Jan 29 11:36:56.234504 containerd[1959]: time="2025-01-29T11:36:56.234496821Z" level=info msg="StopPodSandbox for \"7a18cc63233347cd3f85e3536bd88ae50f2b1fc4b307d9e1526799d3ef0c643b\" returns successfully" Jan 29 11:36:56.234721 containerd[1959]: time="2025-01-29T11:36:56.234679306Z" level=info msg="RemovePodSandbox for \"7a18cc63233347cd3f85e3536bd88ae50f2b1fc4b307d9e1526799d3ef0c643b\"" Jan 29 11:36:56.234721 containerd[1959]: time="2025-01-29T11:36:56.234690218Z" level=info msg="Forcibly stopping sandbox \"7a18cc63233347cd3f85e3536bd88ae50f2b1fc4b307d9e1526799d3ef0c643b\"" Jan 29 11:36:56.234844 containerd[1959]: time="2025-01-29T11:36:56.234745480Z" level=info msg="TearDown network for sandbox \"7a18cc63233347cd3f85e3536bd88ae50f2b1fc4b307d9e1526799d3ef0c643b\" successfully" Jan 29 11:36:56.235819 containerd[1959]: time="2025-01-29T11:36:56.235772391Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7a18cc63233347cd3f85e3536bd88ae50f2b1fc4b307d9e1526799d3ef0c643b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 11:36:56.235819 containerd[1959]: time="2025-01-29T11:36:56.235788838Z" level=info msg="RemovePodSandbox \"7a18cc63233347cd3f85e3536bd88ae50f2b1fc4b307d9e1526799d3ef0c643b\" returns successfully" Jan 29 11:36:56.236039 containerd[1959]: time="2025-01-29T11:36:56.235984081Z" level=info msg="StopPodSandbox for \"adda9a6df16f774f10fd781a7c136d6f6a86ebfc037c4f41f7c02be2ae6b4e09\"" Jan 29 11:36:56.236088 containerd[1959]: time="2025-01-29T11:36:56.236061878Z" level=info msg="TearDown network for sandbox \"adda9a6df16f774f10fd781a7c136d6f6a86ebfc037c4f41f7c02be2ae6b4e09\" successfully" Jan 29 11:36:56.236088 containerd[1959]: time="2025-01-29T11:36:56.236068468Z" level=info msg="StopPodSandbox for \"adda9a6df16f774f10fd781a7c136d6f6a86ebfc037c4f41f7c02be2ae6b4e09\" returns successfully" Jan 29 11:36:56.236206 containerd[1959]: time="2025-01-29T11:36:56.236196758Z" level=info msg="RemovePodSandbox for \"adda9a6df16f774f10fd781a7c136d6f6a86ebfc037c4f41f7c02be2ae6b4e09\"" Jan 29 11:36:56.236229 containerd[1959]: time="2025-01-29T11:36:56.236208565Z" level=info msg="Forcibly stopping sandbox \"adda9a6df16f774f10fd781a7c136d6f6a86ebfc037c4f41f7c02be2ae6b4e09\"" Jan 29 11:36:56.236265 containerd[1959]: time="2025-01-29T11:36:56.236249116Z" level=info msg="TearDown network for sandbox \"adda9a6df16f774f10fd781a7c136d6f6a86ebfc037c4f41f7c02be2ae6b4e09\" successfully" Jan 29 11:36:56.237292 containerd[1959]: time="2025-01-29T11:36:56.237281469Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"adda9a6df16f774f10fd781a7c136d6f6a86ebfc037c4f41f7c02be2ae6b4e09\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 11:36:56.237355 containerd[1959]: time="2025-01-29T11:36:56.237305299Z" level=info msg="RemovePodSandbox \"adda9a6df16f774f10fd781a7c136d6f6a86ebfc037c4f41f7c02be2ae6b4e09\" returns successfully" Jan 29 11:36:56.237587 containerd[1959]: time="2025-01-29T11:36:56.237553111Z" level=info msg="StopPodSandbox for \"493096bffccb48aa6278c21e89442e141756465eb1ded0cac04cc378edbf4be3\"" Jan 29 11:36:56.237637 containerd[1959]: time="2025-01-29T11:36:56.237630862Z" level=info msg="TearDown network for sandbox \"493096bffccb48aa6278c21e89442e141756465eb1ded0cac04cc378edbf4be3\" successfully" Jan 29 11:36:56.237658 containerd[1959]: time="2025-01-29T11:36:56.237637569Z" level=info msg="StopPodSandbox for \"493096bffccb48aa6278c21e89442e141756465eb1ded0cac04cc378edbf4be3\" returns successfully" Jan 29 11:36:56.237851 containerd[1959]: time="2025-01-29T11:36:56.237817848Z" level=info msg="RemovePodSandbox for \"493096bffccb48aa6278c21e89442e141756465eb1ded0cac04cc378edbf4be3\"" Jan 29 11:36:56.237851 containerd[1959]: time="2025-01-29T11:36:56.237829783Z" level=info msg="Forcibly stopping sandbox \"493096bffccb48aa6278c21e89442e141756465eb1ded0cac04cc378edbf4be3\"" Jan 29 11:36:56.237925 containerd[1959]: time="2025-01-29T11:36:56.237891917Z" level=info msg="TearDown network for sandbox \"493096bffccb48aa6278c21e89442e141756465eb1ded0cac04cc378edbf4be3\" successfully" Jan 29 11:36:56.239032 containerd[1959]: time="2025-01-29T11:36:56.238991117Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"493096bffccb48aa6278c21e89442e141756465eb1ded0cac04cc378edbf4be3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 11:36:56.239032 containerd[1959]: time="2025-01-29T11:36:56.239008608Z" level=info msg="RemovePodSandbox \"493096bffccb48aa6278c21e89442e141756465eb1ded0cac04cc378edbf4be3\" returns successfully" Jan 29 11:36:56.239159 containerd[1959]: time="2025-01-29T11:36:56.239149056Z" level=info msg="StopPodSandbox for \"9a7b6dacb17724f42d4585c2d6e7d809e0610b18931dad0ccf3049b814f5b2c2\"" Jan 29 11:36:56.239204 containerd[1959]: time="2025-01-29T11:36:56.239194736Z" level=info msg="TearDown network for sandbox \"9a7b6dacb17724f42d4585c2d6e7d809e0610b18931dad0ccf3049b814f5b2c2\" successfully" Jan 29 11:36:56.239226 containerd[1959]: time="2025-01-29T11:36:56.239204677Z" level=info msg="StopPodSandbox for \"9a7b6dacb17724f42d4585c2d6e7d809e0610b18931dad0ccf3049b814f5b2c2\" returns successfully" Jan 29 11:36:56.239367 containerd[1959]: time="2025-01-29T11:36:56.239305220Z" level=info msg="RemovePodSandbox for \"9a7b6dacb17724f42d4585c2d6e7d809e0610b18931dad0ccf3049b814f5b2c2\"" Jan 29 11:36:56.239367 containerd[1959]: time="2025-01-29T11:36:56.239317537Z" level=info msg="Forcibly stopping sandbox \"9a7b6dacb17724f42d4585c2d6e7d809e0610b18931dad0ccf3049b814f5b2c2\"" Jan 29 11:36:56.239427 containerd[1959]: time="2025-01-29T11:36:56.239378728Z" level=info msg="TearDown network for sandbox \"9a7b6dacb17724f42d4585c2d6e7d809e0610b18931dad0ccf3049b814f5b2c2\" successfully" Jan 29 11:36:56.240470 containerd[1959]: time="2025-01-29T11:36:56.240432124Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9a7b6dacb17724f42d4585c2d6e7d809e0610b18931dad0ccf3049b814f5b2c2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 11:36:56.240470 containerd[1959]: time="2025-01-29T11:36:56.240449440Z" level=info msg="RemovePodSandbox \"9a7b6dacb17724f42d4585c2d6e7d809e0610b18931dad0ccf3049b814f5b2c2\" returns successfully" Jan 29 11:36:56.240671 containerd[1959]: time="2025-01-29T11:36:56.240619501Z" level=info msg="StopPodSandbox for \"5b067760630e2857f342e18d1fc61b30e2ba1ae30aa5c0bf5ac02d01bbde9eb0\"" Jan 29 11:36:56.240730 containerd[1959]: time="2025-01-29T11:36:56.240695238Z" level=info msg="TearDown network for sandbox \"5b067760630e2857f342e18d1fc61b30e2ba1ae30aa5c0bf5ac02d01bbde9eb0\" successfully" Jan 29 11:36:56.240730 containerd[1959]: time="2025-01-29T11:36:56.240725067Z" level=info msg="StopPodSandbox for \"5b067760630e2857f342e18d1fc61b30e2ba1ae30aa5c0bf5ac02d01bbde9eb0\" returns successfully" Jan 29 11:36:56.240962 containerd[1959]: time="2025-01-29T11:36:56.240929032Z" level=info msg="RemovePodSandbox for \"5b067760630e2857f342e18d1fc61b30e2ba1ae30aa5c0bf5ac02d01bbde9eb0\"" Jan 29 11:36:56.240962 containerd[1959]: time="2025-01-29T11:36:56.240939189Z" level=info msg="Forcibly stopping sandbox \"5b067760630e2857f342e18d1fc61b30e2ba1ae30aa5c0bf5ac02d01bbde9eb0\"" Jan 29 11:36:56.241012 containerd[1959]: time="2025-01-29T11:36:56.240996060Z" level=info msg="TearDown network for sandbox \"5b067760630e2857f342e18d1fc61b30e2ba1ae30aa5c0bf5ac02d01bbde9eb0\" successfully" Jan 29 11:36:56.242178 containerd[1959]: time="2025-01-29T11:36:56.242117638Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5b067760630e2857f342e18d1fc61b30e2ba1ae30aa5c0bf5ac02d01bbde9eb0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 11:36:56.242178 containerd[1959]: time="2025-01-29T11:36:56.242150753Z" level=info msg="RemovePodSandbox \"5b067760630e2857f342e18d1fc61b30e2ba1ae30aa5c0bf5ac02d01bbde9eb0\" returns successfully" Jan 29 11:36:56.242300 containerd[1959]: time="2025-01-29T11:36:56.242287277Z" level=info msg="StopPodSandbox for \"9d450221c45f6f69345547468c935830e4952f13334ef1f093cfe321f14e64f9\"" Jan 29 11:36:56.242366 containerd[1959]: time="2025-01-29T11:36:56.242359547Z" level=info msg="TearDown network for sandbox \"9d450221c45f6f69345547468c935830e4952f13334ef1f093cfe321f14e64f9\" successfully" Jan 29 11:36:56.242403 containerd[1959]: time="2025-01-29T11:36:56.242366229Z" level=info msg="StopPodSandbox for \"9d450221c45f6f69345547468c935830e4952f13334ef1f093cfe321f14e64f9\" returns successfully" Jan 29 11:36:56.242588 containerd[1959]: time="2025-01-29T11:36:56.242534757Z" level=info msg="RemovePodSandbox for \"9d450221c45f6f69345547468c935830e4952f13334ef1f093cfe321f14e64f9\"" Jan 29 11:36:56.242588 containerd[1959]: time="2025-01-29T11:36:56.242585542Z" level=info msg="Forcibly stopping sandbox \"9d450221c45f6f69345547468c935830e4952f13334ef1f093cfe321f14e64f9\"" Jan 29 11:36:56.242638 containerd[1959]: time="2025-01-29T11:36:56.242617995Z" level=info msg="TearDown network for sandbox \"9d450221c45f6f69345547468c935830e4952f13334ef1f093cfe321f14e64f9\" successfully" Jan 29 11:36:56.243726 containerd[1959]: time="2025-01-29T11:36:56.243691050Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9d450221c45f6f69345547468c935830e4952f13334ef1f093cfe321f14e64f9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 11:36:56.243726 containerd[1959]: time="2025-01-29T11:36:56.243708072Z" level=info msg="RemovePodSandbox \"9d450221c45f6f69345547468c935830e4952f13334ef1f093cfe321f14e64f9\" returns successfully" Jan 29 11:36:56.243958 containerd[1959]: time="2025-01-29T11:36:56.243928055Z" level=info msg="StopPodSandbox for \"7286c56b2895590e903942a899de695b46f6a0502fcf7cbd19bf9c57d82de1d4\"" Jan 29 11:36:56.244059 containerd[1959]: time="2025-01-29T11:36:56.244001129Z" level=info msg="TearDown network for sandbox \"7286c56b2895590e903942a899de695b46f6a0502fcf7cbd19bf9c57d82de1d4\" successfully" Jan 29 11:36:56.244059 containerd[1959]: time="2025-01-29T11:36:56.244020081Z" level=info msg="StopPodSandbox for \"7286c56b2895590e903942a899de695b46f6a0502fcf7cbd19bf9c57d82de1d4\" returns successfully" Jan 29 11:36:56.244195 containerd[1959]: time="2025-01-29T11:36:56.244184737Z" level=info msg="RemovePodSandbox for \"7286c56b2895590e903942a899de695b46f6a0502fcf7cbd19bf9c57d82de1d4\"" Jan 29 11:36:56.244223 containerd[1959]: time="2025-01-29T11:36:56.244196645Z" level=info msg="Forcibly stopping sandbox \"7286c56b2895590e903942a899de695b46f6a0502fcf7cbd19bf9c57d82de1d4\"" Jan 29 11:36:56.244245 containerd[1959]: time="2025-01-29T11:36:56.244228541Z" level=info msg="TearDown network for sandbox \"7286c56b2895590e903942a899de695b46f6a0502fcf7cbd19bf9c57d82de1d4\" successfully" Jan 29 11:36:56.245253 containerd[1959]: time="2025-01-29T11:36:56.245242997Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7286c56b2895590e903942a899de695b46f6a0502fcf7cbd19bf9c57d82de1d4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 11:36:56.245276 containerd[1959]: time="2025-01-29T11:36:56.245259726Z" level=info msg="RemovePodSandbox \"7286c56b2895590e903942a899de695b46f6a0502fcf7cbd19bf9c57d82de1d4\" returns successfully" Jan 29 11:36:56.245551 containerd[1959]: time="2025-01-29T11:36:56.245540956Z" level=info msg="StopPodSandbox for \"7d703463d6f3829374f31b69e71540a1ca0cb4e0362370e01490ab5975a622a7\"" Jan 29 11:36:56.245621 containerd[1959]: time="2025-01-29T11:36:56.245597599Z" level=info msg="TearDown network for sandbox \"7d703463d6f3829374f31b69e71540a1ca0cb4e0362370e01490ab5975a622a7\" successfully" Jan 29 11:36:56.245621 containerd[1959]: time="2025-01-29T11:36:56.245620161Z" level=info msg="StopPodSandbox for \"7d703463d6f3829374f31b69e71540a1ca0cb4e0362370e01490ab5975a622a7\" returns successfully" Jan 29 11:36:56.245865 containerd[1959]: time="2025-01-29T11:36:56.245839013Z" level=info msg="RemovePodSandbox for \"7d703463d6f3829374f31b69e71540a1ca0cb4e0362370e01490ab5975a622a7\"" Jan 29 11:36:56.245920 containerd[1959]: time="2025-01-29T11:36:56.245866949Z" level=info msg="Forcibly stopping sandbox \"7d703463d6f3829374f31b69e71540a1ca0cb4e0362370e01490ab5975a622a7\"" Jan 29 11:36:56.245939 containerd[1959]: time="2025-01-29T11:36:56.245927140Z" level=info msg="TearDown network for sandbox \"7d703463d6f3829374f31b69e71540a1ca0cb4e0362370e01490ab5975a622a7\" successfully" Jan 29 11:36:56.247188 containerd[1959]: time="2025-01-29T11:36:56.247178154Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7d703463d6f3829374f31b69e71540a1ca0cb4e0362370e01490ab5975a622a7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 11:36:56.247214 containerd[1959]: time="2025-01-29T11:36:56.247194359Z" level=info msg="RemovePodSandbox \"7d703463d6f3829374f31b69e71540a1ca0cb4e0362370e01490ab5975a622a7\" returns successfully" Jan 29 11:36:56.247339 containerd[1959]: time="2025-01-29T11:36:56.247306123Z" level=info msg="StopPodSandbox for \"9e02fa6573b3e30f728d62f7652ce443be13ed8c960a117a07636e76e471b7ee\"" Jan 29 11:36:56.247398 containerd[1959]: time="2025-01-29T11:36:56.247368149Z" level=info msg="TearDown network for sandbox \"9e02fa6573b3e30f728d62f7652ce443be13ed8c960a117a07636e76e471b7ee\" successfully" Jan 29 11:36:56.247398 containerd[1959]: time="2025-01-29T11:36:56.247396562Z" level=info msg="StopPodSandbox for \"9e02fa6573b3e30f728d62f7652ce443be13ed8c960a117a07636e76e471b7ee\" returns successfully" Jan 29 11:36:56.247541 containerd[1959]: time="2025-01-29T11:36:56.247494664Z" level=info msg="RemovePodSandbox for \"9e02fa6573b3e30f728d62f7652ce443be13ed8c960a117a07636e76e471b7ee\"" Jan 29 11:36:56.247541 containerd[1959]: time="2025-01-29T11:36:56.247507047Z" level=info msg="Forcibly stopping sandbox \"9e02fa6573b3e30f728d62f7652ce443be13ed8c960a117a07636e76e471b7ee\"" Jan 29 11:36:56.247628 containerd[1959]: time="2025-01-29T11:36:56.247545194Z" level=info msg="TearDown network for sandbox \"9e02fa6573b3e30f728d62f7652ce443be13ed8c960a117a07636e76e471b7ee\" successfully" Jan 29 11:36:56.248668 containerd[1959]: time="2025-01-29T11:36:56.248628726Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9e02fa6573b3e30f728d62f7652ce443be13ed8c960a117a07636e76e471b7ee\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 11:36:56.248668 containerd[1959]: time="2025-01-29T11:36:56.248646534Z" level=info msg="RemovePodSandbox \"9e02fa6573b3e30f728d62f7652ce443be13ed8c960a117a07636e76e471b7ee\" returns successfully" Jan 29 11:36:56.248821 containerd[1959]: time="2025-01-29T11:36:56.248780957Z" level=info msg="StopPodSandbox for \"d501de68d71938a222ee2411c90a700a5f1f99f8598eabd2ed40a5c21d1ae944\"" Jan 29 11:36:56.248821 containerd[1959]: time="2025-01-29T11:36:56.248818910Z" level=info msg="TearDown network for sandbox \"d501de68d71938a222ee2411c90a700a5f1f99f8598eabd2ed40a5c21d1ae944\" successfully" Jan 29 11:36:56.248886 containerd[1959]: time="2025-01-29T11:36:56.248824881Z" level=info msg="StopPodSandbox for \"d501de68d71938a222ee2411c90a700a5f1f99f8598eabd2ed40a5c21d1ae944\" returns successfully" Jan 29 11:36:56.248981 containerd[1959]: time="2025-01-29T11:36:56.248958579Z" level=info msg="RemovePodSandbox for \"d501de68d71938a222ee2411c90a700a5f1f99f8598eabd2ed40a5c21d1ae944\"" Jan 29 11:36:56.249020 containerd[1959]: time="2025-01-29T11:36:56.248983775Z" level=info msg="Forcibly stopping sandbox \"d501de68d71938a222ee2411c90a700a5f1f99f8598eabd2ed40a5c21d1ae944\"" Jan 29 11:36:56.249058 containerd[1959]: time="2025-01-29T11:36:56.249029804Z" level=info msg="TearDown network for sandbox \"d501de68d71938a222ee2411c90a700a5f1f99f8598eabd2ed40a5c21d1ae944\" successfully" Jan 29 11:36:56.250092 containerd[1959]: time="2025-01-29T11:36:56.250048039Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d501de68d71938a222ee2411c90a700a5f1f99f8598eabd2ed40a5c21d1ae944\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 11:36:56.250092 containerd[1959]: time="2025-01-29T11:36:56.250089768Z" level=info msg="RemovePodSandbox \"d501de68d71938a222ee2411c90a700a5f1f99f8598eabd2ed40a5c21d1ae944\" returns successfully" Jan 29 11:36:56.250228 containerd[1959]: time="2025-01-29T11:36:56.250217577Z" level=info msg="StopPodSandbox for \"5658ebe767dc0e48483b50ea60a975db28a3de9d6da7c647b94ccff706b8fc91\"" Jan 29 11:36:56.250262 containerd[1959]: time="2025-01-29T11:36:56.250255642Z" level=info msg="TearDown network for sandbox \"5658ebe767dc0e48483b50ea60a975db28a3de9d6da7c647b94ccff706b8fc91\" successfully" Jan 29 11:36:56.250285 containerd[1959]: time="2025-01-29T11:36:56.250261862Z" level=info msg="StopPodSandbox for \"5658ebe767dc0e48483b50ea60a975db28a3de9d6da7c647b94ccff706b8fc91\" returns successfully" Jan 29 11:36:56.250446 containerd[1959]: time="2025-01-29T11:36:56.250410059Z" level=info msg="RemovePodSandbox for \"5658ebe767dc0e48483b50ea60a975db28a3de9d6da7c647b94ccff706b8fc91\"" Jan 29 11:36:56.250468 containerd[1959]: time="2025-01-29T11:36:56.250449876Z" level=info msg="Forcibly stopping sandbox \"5658ebe767dc0e48483b50ea60a975db28a3de9d6da7c647b94ccff706b8fc91\"" Jan 29 11:36:56.250511 containerd[1959]: time="2025-01-29T11:36:56.250496229Z" level=info msg="TearDown network for sandbox \"5658ebe767dc0e48483b50ea60a975db28a3de9d6da7c647b94ccff706b8fc91\" successfully" Jan 29 11:36:56.251574 containerd[1959]: time="2025-01-29T11:36:56.251536164Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5658ebe767dc0e48483b50ea60a975db28a3de9d6da7c647b94ccff706b8fc91\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 11:36:56.251574 containerd[1959]: time="2025-01-29T11:36:56.251553561Z" level=info msg="RemovePodSandbox \"5658ebe767dc0e48483b50ea60a975db28a3de9d6da7c647b94ccff706b8fc91\" returns successfully" Jan 29 11:36:56.251768 containerd[1959]: time="2025-01-29T11:36:56.251740368Z" level=info msg="StopPodSandbox for \"5694e7b381134602609bc7cbfae0f985c9cc667777f52a910a340fbc95a1bd1e\"" Jan 29 11:36:56.251858 containerd[1959]: time="2025-01-29T11:36:56.251850201Z" level=info msg="TearDown network for sandbox \"5694e7b381134602609bc7cbfae0f985c9cc667777f52a910a340fbc95a1bd1e\" successfully" Jan 29 11:36:56.251858 containerd[1959]: time="2025-01-29T11:36:56.251857248Z" level=info msg="StopPodSandbox for \"5694e7b381134602609bc7cbfae0f985c9cc667777f52a910a340fbc95a1bd1e\" returns successfully" Jan 29 11:36:56.252075 containerd[1959]: time="2025-01-29T11:36:56.252066578Z" level=info msg="RemovePodSandbox for \"5694e7b381134602609bc7cbfae0f985c9cc667777f52a910a340fbc95a1bd1e\"" Jan 29 11:36:56.252116 containerd[1959]: time="2025-01-29T11:36:56.252078521Z" level=info msg="Forcibly stopping sandbox \"5694e7b381134602609bc7cbfae0f985c9cc667777f52a910a340fbc95a1bd1e\"" Jan 29 11:36:56.252138 containerd[1959]: time="2025-01-29T11:36:56.252123882Z" level=info msg="TearDown network for sandbox \"5694e7b381134602609bc7cbfae0f985c9cc667777f52a910a340fbc95a1bd1e\" successfully" Jan 29 11:36:56.253201 containerd[1959]: time="2025-01-29T11:36:56.253191303Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5694e7b381134602609bc7cbfae0f985c9cc667777f52a910a340fbc95a1bd1e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 11:36:56.253243 containerd[1959]: time="2025-01-29T11:36:56.253209604Z" level=info msg="RemovePodSandbox \"5694e7b381134602609bc7cbfae0f985c9cc667777f52a910a340fbc95a1bd1e\" returns successfully" Jan 29 11:36:56.253447 containerd[1959]: time="2025-01-29T11:36:56.253439614Z" level=info msg="StopPodSandbox for \"7ca2d386dead55069830e1077ce55338eac73a0ae5279b1def2cf5864f02f316\"" Jan 29 11:36:56.253529 containerd[1959]: time="2025-01-29T11:36:56.253506728Z" level=info msg="TearDown network for sandbox \"7ca2d386dead55069830e1077ce55338eac73a0ae5279b1def2cf5864f02f316\" successfully" Jan 29 11:36:56.253529 containerd[1959]: time="2025-01-29T11:36:56.253528605Z" level=info msg="StopPodSandbox for \"7ca2d386dead55069830e1077ce55338eac73a0ae5279b1def2cf5864f02f316\" returns successfully" Jan 29 11:36:56.253763 containerd[1959]: time="2025-01-29T11:36:56.253755024Z" level=info msg="RemovePodSandbox for \"7ca2d386dead55069830e1077ce55338eac73a0ae5279b1def2cf5864f02f316\"" Jan 29 11:36:56.253803 containerd[1959]: time="2025-01-29T11:36:56.253764732Z" level=info msg="Forcibly stopping sandbox \"7ca2d386dead55069830e1077ce55338eac73a0ae5279b1def2cf5864f02f316\"" Jan 29 11:36:56.253840 containerd[1959]: time="2025-01-29T11:36:56.253810165Z" level=info msg="TearDown network for sandbox \"7ca2d386dead55069830e1077ce55338eac73a0ae5279b1def2cf5864f02f316\" successfully" Jan 29 11:36:56.254981 containerd[1959]: time="2025-01-29T11:36:56.254971549Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7ca2d386dead55069830e1077ce55338eac73a0ae5279b1def2cf5864f02f316\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 11:36:56.255002 containerd[1959]: time="2025-01-29T11:36:56.254988017Z" level=info msg="RemovePodSandbox \"7ca2d386dead55069830e1077ce55338eac73a0ae5279b1def2cf5864f02f316\" returns successfully" Jan 29 11:36:56.255179 containerd[1959]: time="2025-01-29T11:36:56.255169330Z" level=info msg="StopPodSandbox for \"5bdb5296b226ef8edb82e2eed72289b6d99350f1ac604a75468135e82bb7c86f\"" Jan 29 11:36:56.255227 containerd[1959]: time="2025-01-29T11:36:56.255209586Z" level=info msg="TearDown network for sandbox \"5bdb5296b226ef8edb82e2eed72289b6d99350f1ac604a75468135e82bb7c86f\" successfully" Jan 29 11:36:56.255248 containerd[1959]: time="2025-01-29T11:36:56.255227880Z" level=info msg="StopPodSandbox for \"5bdb5296b226ef8edb82e2eed72289b6d99350f1ac604a75468135e82bb7c86f\" returns successfully" Jan 29 11:36:56.255353 containerd[1959]: time="2025-01-29T11:36:56.255343175Z" level=info msg="RemovePodSandbox for \"5bdb5296b226ef8edb82e2eed72289b6d99350f1ac604a75468135e82bb7c86f\"" Jan 29 11:36:56.255378 containerd[1959]: time="2025-01-29T11:36:56.255354787Z" level=info msg="Forcibly stopping sandbox \"5bdb5296b226ef8edb82e2eed72289b6d99350f1ac604a75468135e82bb7c86f\"" Jan 29 11:36:56.255402 containerd[1959]: time="2025-01-29T11:36:56.255386787Z" level=info msg="TearDown network for sandbox \"5bdb5296b226ef8edb82e2eed72289b6d99350f1ac604a75468135e82bb7c86f\" successfully" Jan 29 11:36:56.256472 containerd[1959]: time="2025-01-29T11:36:56.256462186Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5bdb5296b226ef8edb82e2eed72289b6d99350f1ac604a75468135e82bb7c86f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 11:36:56.256495 containerd[1959]: time="2025-01-29T11:36:56.256479470Z" level=info msg="RemovePodSandbox \"5bdb5296b226ef8edb82e2eed72289b6d99350f1ac604a75468135e82bb7c86f\" returns successfully"
Jan 29 11:36:56.256695 containerd[1959]: time="2025-01-29T11:36:56.256651373Z" level=info msg="StopPodSandbox for \"fffb48256745a11d2225d2865e66a1e5833905a0af4313fe3e7f0b88d41a9241\""
Jan 29 11:36:56.256748 containerd[1959]: time="2025-01-29T11:36:56.256740193Z" level=info msg="TearDown network for sandbox \"fffb48256745a11d2225d2865e66a1e5833905a0af4313fe3e7f0b88d41a9241\" successfully"
Jan 29 11:36:56.256777 containerd[1959]: time="2025-01-29T11:36:56.256747113Z" level=info msg="StopPodSandbox for \"fffb48256745a11d2225d2865e66a1e5833905a0af4313fe3e7f0b88d41a9241\" returns successfully"
Jan 29 11:36:56.256939 containerd[1959]: time="2025-01-29T11:36:56.256929980Z" level=info msg="RemovePodSandbox for \"fffb48256745a11d2225d2865e66a1e5833905a0af4313fe3e7f0b88d41a9241\""
Jan 29 11:36:56.256977 containerd[1959]: time="2025-01-29T11:36:56.256941416Z" level=info msg="Forcibly stopping sandbox \"fffb48256745a11d2225d2865e66a1e5833905a0af4313fe3e7f0b88d41a9241\""
Jan 29 11:36:56.257006 containerd[1959]: time="2025-01-29T11:36:56.256991014Z" level=info msg="TearDown network for sandbox \"fffb48256745a11d2225d2865e66a1e5833905a0af4313fe3e7f0b88d41a9241\" successfully"
Jan 29 11:36:56.258266 containerd[1959]: time="2025-01-29T11:36:56.258217909Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fffb48256745a11d2225d2865e66a1e5833905a0af4313fe3e7f0b88d41a9241\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 29 11:36:56.258266 containerd[1959]: time="2025-01-29T11:36:56.258235352Z" level=info msg="RemovePodSandbox \"fffb48256745a11d2225d2865e66a1e5833905a0af4313fe3e7f0b88d41a9241\" returns successfully"
Jan 29 11:37:09.252180 kubelet[3444]: I0129 11:37:09.252002 3444 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 29 11:41:59.344569 systemd[1]: Started sshd@8-139.178.70.53:22-194.0.234.38:50330.service - OpenSSH per-connection server daemon (194.0.234.38:50330).
Jan 29 11:42:00.734472 sshd[7976]: Connection closed by authenticating user root 194.0.234.38 port 50330 [preauth]
Jan 29 11:42:00.737613 systemd[1]: sshd@8-139.178.70.53:22-194.0.234.38:50330.service: Deactivated successfully.
Jan 29 11:42:03.100012 systemd[1]: Started sshd@9-139.178.70.53:22-139.178.89.65:50806.service - OpenSSH per-connection server daemon (139.178.89.65:50806).
Jan 29 11:42:03.165714 sshd[7986]: Accepted publickey for core from 139.178.89.65 port 50806 ssh2: RSA SHA256:m8/38zVkkSiD5qJuoX6hkgYWBFGuEs/Owzl1kXbpsh0
Jan 29 11:42:03.166354 sshd-session[7986]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 11:42:03.169092 systemd-logind[1938]: New session 10 of user core.
Jan 29 11:42:03.183579 systemd[1]: Started session-10.scope - Session 10 of User core.
Jan 29 11:42:03.299219 sshd[7989]: Connection closed by 139.178.89.65 port 50806
Jan 29 11:42:03.299398 sshd-session[7986]: pam_unix(sshd:session): session closed for user core
Jan 29 11:42:03.301365 systemd[1]: sshd@9-139.178.70.53:22-139.178.89.65:50806.service: Deactivated successfully.
Jan 29 11:42:03.302471 systemd[1]: session-10.scope: Deactivated successfully.
Jan 29 11:42:03.302479 systemd-logind[1938]: Session 10 logged out. Waiting for processes to exit.
Jan 29 11:42:03.303157 systemd-logind[1938]: Removed session 10.
Jan 29 11:42:08.326655 systemd[1]: Started sshd@10-139.178.70.53:22-139.178.89.65:50814.service - OpenSSH per-connection server daemon (139.178.89.65:50814).
Jan 29 11:42:08.368739 sshd[8044]: Accepted publickey for core from 139.178.89.65 port 50814 ssh2: RSA SHA256:m8/38zVkkSiD5qJuoX6hkgYWBFGuEs/Owzl1kXbpsh0
Jan 29 11:42:08.369763 sshd-session[8044]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 11:42:08.373574 systemd-logind[1938]: New session 11 of user core.
Jan 29 11:42:08.384512 systemd[1]: Started session-11.scope - Session 11 of User core.
Jan 29 11:42:08.470824 sshd[8047]: Connection closed by 139.178.89.65 port 50814
Jan 29 11:42:08.471007 sshd-session[8044]: pam_unix(sshd:session): session closed for user core
Jan 29 11:42:08.472472 systemd[1]: sshd@10-139.178.70.53:22-139.178.89.65:50814.service: Deactivated successfully.
Jan 29 11:42:08.473905 systemd-logind[1938]: Session 11 logged out. Waiting for processes to exit.
Jan 29 11:42:08.473992 systemd[1]: session-11.scope: Deactivated successfully.
Jan 29 11:42:08.474571 systemd-logind[1938]: Removed session 11.
Jan 29 11:42:13.486583 systemd[1]: Started sshd@11-139.178.70.53:22-139.178.89.65:52438.service - OpenSSH per-connection server daemon (139.178.89.65:52438).
Jan 29 11:42:13.523071 sshd[8073]: Accepted publickey for core from 139.178.89.65 port 52438 ssh2: RSA SHA256:m8/38zVkkSiD5qJuoX6hkgYWBFGuEs/Owzl1kXbpsh0
Jan 29 11:42:13.523909 sshd-session[8073]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 11:42:13.527182 systemd-logind[1938]: New session 12 of user core.
Jan 29 11:42:13.547566 systemd[1]: Started session-12.scope - Session 12 of User core.
Jan 29 11:42:13.637975 sshd[8076]: Connection closed by 139.178.89.65 port 52438
Jan 29 11:42:13.638164 sshd-session[8073]: pam_unix(sshd:session): session closed for user core
Jan 29 11:42:13.652508 systemd[1]: Started sshd@12-139.178.70.53:22-139.178.89.65:52450.service - OpenSSH per-connection server daemon (139.178.89.65:52450).
Jan 29 11:42:13.652836 systemd[1]: sshd@11-139.178.70.53:22-139.178.89.65:52438.service: Deactivated successfully.
Jan 29 11:42:13.653751 systemd[1]: session-12.scope: Deactivated successfully.
Jan 29 11:42:13.654525 systemd-logind[1938]: Session 12 logged out. Waiting for processes to exit.
Jan 29 11:42:13.655234 systemd-logind[1938]: Removed session 12.
Jan 29 11:42:13.690185 sshd[8099]: Accepted publickey for core from 139.178.89.65 port 52450 ssh2: RSA SHA256:m8/38zVkkSiD5qJuoX6hkgYWBFGuEs/Owzl1kXbpsh0
Jan 29 11:42:13.690909 sshd-session[8099]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 11:42:13.694014 systemd-logind[1938]: New session 13 of user core.
Jan 29 11:42:13.704607 systemd[1]: Started session-13.scope - Session 13 of User core.
Jan 29 11:42:13.811464 sshd[8104]: Connection closed by 139.178.89.65 port 52450
Jan 29 11:42:13.811653 sshd-session[8099]: pam_unix(sshd:session): session closed for user core
Jan 29 11:42:13.822525 systemd[1]: Started sshd@13-139.178.70.53:22-139.178.89.65:52466.service - OpenSSH per-connection server daemon (139.178.89.65:52466).
Jan 29 11:42:13.822875 systemd[1]: sshd@12-139.178.70.53:22-139.178.89.65:52450.service: Deactivated successfully.
Jan 29 11:42:13.824325 systemd-logind[1938]: Session 13 logged out. Waiting for processes to exit.
Jan 29 11:42:13.824444 systemd[1]: session-13.scope: Deactivated successfully.
Jan 29 11:42:13.825084 systemd-logind[1938]: Removed session 13.
Jan 29 11:42:13.859823 sshd[8123]: Accepted publickey for core from 139.178.89.65 port 52466 ssh2: RSA SHA256:m8/38zVkkSiD5qJuoX6hkgYWBFGuEs/Owzl1kXbpsh0
Jan 29 11:42:13.860486 sshd-session[8123]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 11:42:13.863229 systemd-logind[1938]: New session 14 of user core.
Jan 29 11:42:13.881547 systemd[1]: Started session-14.scope - Session 14 of User core.
Jan 29 11:42:14.008507 sshd[8129]: Connection closed by 139.178.89.65 port 52466
Jan 29 11:42:14.008649 sshd-session[8123]: pam_unix(sshd:session): session closed for user core
Jan 29 11:42:14.010026 systemd[1]: sshd@13-139.178.70.53:22-139.178.89.65:52466.service: Deactivated successfully.
Jan 29 11:42:14.011445 systemd[1]: session-14.scope: Deactivated successfully.
Jan 29 11:42:14.011465 systemd-logind[1938]: Session 14 logged out. Waiting for processes to exit.
Jan 29 11:42:14.012104 systemd-logind[1938]: Removed session 14.
Jan 29 11:42:19.036011 systemd[1]: Started sshd@14-139.178.70.53:22-139.178.89.65:52480.service - OpenSSH per-connection server daemon (139.178.89.65:52480).
Jan 29 11:42:19.107484 sshd[8176]: Accepted publickey for core from 139.178.89.65 port 52480 ssh2: RSA SHA256:m8/38zVkkSiD5qJuoX6hkgYWBFGuEs/Owzl1kXbpsh0
Jan 29 11:42:19.108187 sshd-session[8176]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 11:42:19.111069 systemd-logind[1938]: New session 15 of user core.
Jan 29 11:42:19.122603 systemd[1]: Started session-15.scope - Session 15 of User core.
Jan 29 11:42:19.208445 sshd[8179]: Connection closed by 139.178.89.65 port 52480
Jan 29 11:42:19.208639 sshd-session[8176]: pam_unix(sshd:session): session closed for user core
Jan 29 11:42:19.222057 systemd[1]: Started sshd@15-139.178.70.53:22-139.178.89.65:52488.service - OpenSSH per-connection server daemon (139.178.89.65:52488).
Jan 29 11:42:19.223704 systemd[1]: sshd@14-139.178.70.53:22-139.178.89.65:52480.service: Deactivated successfully.
Jan 29 11:42:19.226696 systemd[1]: session-15.scope: Deactivated successfully.
Jan 29 11:42:19.227367 systemd-logind[1938]: Session 15 logged out. Waiting for processes to exit.
Jan 29 11:42:19.228110 systemd-logind[1938]: Removed session 15.
Jan 29 11:42:19.255840 sshd[8200]: Accepted publickey for core from 139.178.89.65 port 52488 ssh2: RSA SHA256:m8/38zVkkSiD5qJuoX6hkgYWBFGuEs/Owzl1kXbpsh0
Jan 29 11:42:19.256438 sshd-session[8200]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 11:42:19.259080 systemd-logind[1938]: New session 16 of user core.
Jan 29 11:42:19.277123 systemd[1]: Started session-16.scope - Session 16 of User core.
Jan 29 11:42:19.508575 sshd[8206]: Connection closed by 139.178.89.65 port 52488
Jan 29 11:42:19.509222 sshd-session[8200]: pam_unix(sshd:session): session closed for user core
Jan 29 11:42:19.524589 systemd[1]: Started sshd@16-139.178.70.53:22-139.178.89.65:52504.service - OpenSSH per-connection server daemon (139.178.89.65:52504).
Jan 29 11:42:19.524871 systemd[1]: sshd@15-139.178.70.53:22-139.178.89.65:52488.service: Deactivated successfully.
Jan 29 11:42:19.526200 systemd-logind[1938]: Session 16 logged out. Waiting for processes to exit.
Jan 29 11:42:19.526478 systemd[1]: session-16.scope: Deactivated successfully.
Jan 29 11:42:19.527137 systemd-logind[1938]: Removed session 16.
Jan 29 11:42:19.563355 sshd[8225]: Accepted publickey for core from 139.178.89.65 port 52504 ssh2: RSA SHA256:m8/38zVkkSiD5qJuoX6hkgYWBFGuEs/Owzl1kXbpsh0
Jan 29 11:42:19.564111 sshd-session[8225]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 11:42:19.567178 systemd-logind[1938]: New session 17 of user core.
Jan 29 11:42:19.587537 systemd[1]: Started session-17.scope - Session 17 of User core.
Jan 29 11:42:20.699448 sshd[8231]: Connection closed by 139.178.89.65 port 52504
Jan 29 11:42:20.700067 sshd-session[8225]: pam_unix(sshd:session): session closed for user core
Jan 29 11:42:20.717018 systemd[1]: Started sshd@17-139.178.70.53:22-139.178.89.65:52508.service - OpenSSH per-connection server daemon (139.178.89.65:52508).
Jan 29 11:42:20.718709 systemd[1]: sshd@16-139.178.70.53:22-139.178.89.65:52504.service: Deactivated successfully.
Jan 29 11:42:20.721442 systemd[1]: session-17.scope: Deactivated successfully.
Jan 29 11:42:20.723401 systemd-logind[1938]: Session 17 logged out. Waiting for processes to exit.
Jan 29 11:42:20.725135 systemd-logind[1938]: Removed session 17.
Jan 29 11:42:20.762752 sshd[8256]: Accepted publickey for core from 139.178.89.65 port 52508 ssh2: RSA SHA256:m8/38zVkkSiD5qJuoX6hkgYWBFGuEs/Owzl1kXbpsh0
Jan 29 11:42:20.763411 sshd-session[8256]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 11:42:20.765958 systemd-logind[1938]: New session 18 of user core.
Jan 29 11:42:20.786055 systemd[1]: Started session-18.scope - Session 18 of User core.
Jan 29 11:42:21.012993 sshd[8265]: Connection closed by 139.178.89.65 port 52508
Jan 29 11:42:21.013213 sshd-session[8256]: pam_unix(sshd:session): session closed for user core
Jan 29 11:42:21.027563 systemd[1]: Started sshd@18-139.178.70.53:22-139.178.89.65:37162.service - OpenSSH per-connection server daemon (139.178.89.65:37162).
Jan 29 11:42:21.027992 systemd[1]: sshd@17-139.178.70.53:22-139.178.89.65:52508.service: Deactivated successfully.
Jan 29 11:42:21.029206 systemd[1]: session-18.scope: Deactivated successfully.
Jan 29 11:42:21.030197 systemd-logind[1938]: Session 18 logged out. Waiting for processes to exit.
Jan 29 11:42:21.031282 systemd-logind[1938]: Removed session 18.
Jan 29 11:42:21.071790 sshd[8285]: Accepted publickey for core from 139.178.89.65 port 37162 ssh2: RSA SHA256:m8/38zVkkSiD5qJuoX6hkgYWBFGuEs/Owzl1kXbpsh0
Jan 29 11:42:21.072765 sshd-session[8285]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 11:42:21.076200 systemd-logind[1938]: New session 19 of user core.
Jan 29 11:42:21.092069 systemd[1]: Started session-19.scope - Session 19 of User core.
Jan 29 11:42:21.229198 sshd[8290]: Connection closed by 139.178.89.65 port 37162
Jan 29 11:42:21.229409 sshd-session[8285]: pam_unix(sshd:session): session closed for user core
Jan 29 11:42:21.231006 systemd[1]: sshd@18-139.178.70.53:22-139.178.89.65:37162.service: Deactivated successfully.
Jan 29 11:42:21.232361 systemd-logind[1938]: Session 19 logged out. Waiting for processes to exit.
Jan 29 11:42:21.232424 systemd[1]: session-19.scope: Deactivated successfully.
Jan 29 11:42:21.233080 systemd-logind[1938]: Removed session 19.
Jan 29 11:42:26.260558 systemd[1]: Started sshd@19-139.178.70.53:22-139.178.89.65:37178.service - OpenSSH per-connection server daemon (139.178.89.65:37178).
Jan 29 11:42:26.301114 sshd[8319]: Accepted publickey for core from 139.178.89.65 port 37178 ssh2: RSA SHA256:m8/38zVkkSiD5qJuoX6hkgYWBFGuEs/Owzl1kXbpsh0
Jan 29 11:42:26.302006 sshd-session[8319]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 11:42:26.305314 systemd-logind[1938]: New session 20 of user core.
Jan 29 11:42:26.313606 systemd[1]: Started session-20.scope - Session 20 of User core.
Jan 29 11:42:26.402302 sshd[8322]: Connection closed by 139.178.89.65 port 37178
Jan 29 11:42:26.402457 sshd-session[8319]: pam_unix(sshd:session): session closed for user core
Jan 29 11:42:26.404363 systemd[1]: sshd@19-139.178.70.53:22-139.178.89.65:37178.service: Deactivated successfully.
Jan 29 11:42:26.405358 systemd-logind[1938]: Session 20 logged out. Waiting for processes to exit.
Jan 29 11:42:26.405377 systemd[1]: session-20.scope: Deactivated successfully.
Jan 29 11:42:26.405982 systemd-logind[1938]: Removed session 20.
Jan 29 11:42:31.419587 systemd[1]: Started sshd@20-139.178.70.53:22-139.178.89.65:39590.service - OpenSSH per-connection server daemon (139.178.89.65:39590).
Jan 29 11:42:31.450146 sshd[8347]: Accepted publickey for core from 139.178.89.65 port 39590 ssh2: RSA SHA256:m8/38zVkkSiD5qJuoX6hkgYWBFGuEs/Owzl1kXbpsh0
Jan 29 11:42:31.450735 sshd-session[8347]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 11:42:31.453256 systemd-logind[1938]: New session 21 of user core.
Jan 29 11:42:31.471027 systemd[1]: Started session-21.scope - Session 21 of User core.
Jan 29 11:42:31.557675 sshd[8350]: Connection closed by 139.178.89.65 port 39590
Jan 29 11:42:31.557845 sshd-session[8347]: pam_unix(sshd:session): session closed for user core
Jan 29 11:42:31.559465 systemd[1]: sshd@20-139.178.70.53:22-139.178.89.65:39590.service: Deactivated successfully.
Jan 29 11:42:31.560765 systemd-logind[1938]: Session 21 logged out. Waiting for processes to exit.
Jan 29 11:42:31.560877 systemd[1]: session-21.scope: Deactivated successfully.
Jan 29 11:42:31.561419 systemd-logind[1938]: Removed session 21.
Jan 29 11:42:36.581504 systemd[1]: Started sshd@21-139.178.70.53:22-139.178.89.65:39594.service - OpenSSH per-connection server daemon (139.178.89.65:39594).
Jan 29 11:42:36.612739 sshd[8395]: Accepted publickey for core from 139.178.89.65 port 39594 ssh2: RSA SHA256:m8/38zVkkSiD5qJuoX6hkgYWBFGuEs/Owzl1kXbpsh0
Jan 29 11:42:36.613368 sshd-session[8395]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 11:42:36.616023 systemd-logind[1938]: New session 22 of user core.
Jan 29 11:42:36.633585 systemd[1]: Started session-22.scope - Session 22 of User core.
Jan 29 11:42:36.723163 sshd[8398]: Connection closed by 139.178.89.65 port 39594
Jan 29 11:42:36.723359 sshd-session[8395]: pam_unix(sshd:session): session closed for user core
Jan 29 11:42:36.729498 systemd[1]: sshd@21-139.178.70.53:22-139.178.89.65:39594.service: Deactivated successfully.
Jan 29 11:42:36.730568 systemd-logind[1938]: Session 22 logged out. Waiting for processes to exit.
Jan 29 11:42:36.730619 systemd[1]: session-22.scope: Deactivated successfully.
Jan 29 11:42:36.731176 systemd-logind[1938]: Removed session 22.