Jul 16 00:48:05.906077 kernel: Linux version 6.12.36-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Tue Jul 15 22:01:05 -00 2025 Jul 16 00:48:05.906092 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=e99cfd77676fb46bb6e7e7d8fcebb095dd84f43a354bdf152777c6b07182cd66 Jul 16 00:48:05.906099 kernel: BIOS-provided physical RAM map: Jul 16 00:48:05.906103 kernel: BIOS-e820: [mem 0x0000000000000000-0x00000000000997ff] usable Jul 16 00:48:05.906107 kernel: BIOS-e820: [mem 0x0000000000099800-0x000000000009ffff] reserved Jul 16 00:48:05.906111 kernel: BIOS-e820: [mem 0x00000000000e0000-0x00000000000fffff] reserved Jul 16 00:48:05.906116 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003fffffff] usable Jul 16 00:48:05.906121 kernel: BIOS-e820: [mem 0x0000000040000000-0x00000000403fffff] reserved Jul 16 00:48:05.906125 kernel: BIOS-e820: [mem 0x0000000040400000-0x000000006dfbbfff] usable Jul 16 00:48:05.906130 kernel: BIOS-e820: [mem 0x000000006dfbc000-0x000000006dfbcfff] ACPI NVS Jul 16 00:48:05.906134 kernel: BIOS-e820: [mem 0x000000006dfbd000-0x000000006dfbdfff] reserved Jul 16 00:48:05.906138 kernel: BIOS-e820: [mem 0x000000006dfbe000-0x0000000077fc4fff] usable Jul 16 00:48:05.906142 kernel: BIOS-e820: [mem 0x0000000077fc5000-0x00000000790a7fff] reserved Jul 16 00:48:05.906146 kernel: BIOS-e820: [mem 0x00000000790a8000-0x0000000079230fff] usable Jul 16 00:48:05.906153 kernel: BIOS-e820: [mem 0x0000000079231000-0x0000000079662fff] ACPI NVS Jul 16 00:48:05.906158 kernel: BIOS-e820: [mem 0x0000000079663000-0x000000007befefff] reserved Jul 16 00:48:05.906162 kernel: BIOS-e820: 
[mem 0x000000007beff000-0x000000007befffff] usable Jul 16 00:48:05.906167 kernel: BIOS-e820: [mem 0x000000007bf00000-0x000000007f7fffff] reserved Jul 16 00:48:05.906172 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved Jul 16 00:48:05.906176 kernel: BIOS-e820: [mem 0x00000000fe000000-0x00000000fe010fff] reserved Jul 16 00:48:05.906181 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec00fff] reserved Jul 16 00:48:05.906185 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved Jul 16 00:48:05.906191 kernel: BIOS-e820: [mem 0x00000000ff000000-0x00000000ffffffff] reserved Jul 16 00:48:05.906196 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000087f7fffff] usable Jul 16 00:48:05.906201 kernel: NX (Execute Disable) protection: active Jul 16 00:48:05.906205 kernel: APIC: Static calls initialized Jul 16 00:48:05.906210 kernel: SMBIOS 3.2.1 present. Jul 16 00:48:05.906215 kernel: DMI: Supermicro PIO-519C-MR-PH004/X11SCH-F, BIOS 1.5 11/17/2020 Jul 16 00:48:05.906219 kernel: DMI: Memory slots populated: 2/4 Jul 16 00:48:05.906224 kernel: tsc: Detected 3400.000 MHz processor Jul 16 00:48:05.906228 kernel: tsc: Detected 3399.906 MHz TSC Jul 16 00:48:05.906233 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jul 16 00:48:05.906238 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jul 16 00:48:05.906243 kernel: last_pfn = 0x87f800 max_arch_pfn = 0x400000000 Jul 16 00:48:05.906249 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 23), built from 10 variable MTRRs Jul 16 00:48:05.906254 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jul 16 00:48:05.906259 kernel: last_pfn = 0x7bf00 max_arch_pfn = 0x400000000 Jul 16 00:48:05.906263 kernel: Using GB pages for direct mapping Jul 16 00:48:05.906268 kernel: ACPI: Early table checksum verification disabled Jul 16 00:48:05.906273 kernel: ACPI: RSDP 0x00000000000F05B0 000024 (v02 SUPERM) Jul 16 00:48:05.906280 kernel: ACPI: XSDT 
0x00000000795440C8 00010C (v01 SUPERM SUPERM 01072009 AMI 00010013) Jul 16 00:48:05.906286 kernel: ACPI: FACP 0x0000000079580620 000114 (v06 01072009 AMI 00010013) Jul 16 00:48:05.906291 kernel: ACPI: DSDT 0x0000000079544268 03C3B7 (v02 SUPERM SMCI--MB 01072009 INTL 20160527) Jul 16 00:48:05.906296 kernel: ACPI: FACS 0x0000000079662F80 000040 Jul 16 00:48:05.906302 kernel: ACPI: APIC 0x0000000079580738 00012C (v04 01072009 AMI 00010013) Jul 16 00:48:05.906307 kernel: ACPI: FPDT 0x0000000079580868 000044 (v01 01072009 AMI 00010013) Jul 16 00:48:05.906312 kernel: ACPI: FIDT 0x00000000795808B0 00009C (v01 SUPERM SMCI--MB 01072009 AMI 00010013) Jul 16 00:48:05.906317 kernel: ACPI: MCFG 0x0000000079580950 00003C (v01 SUPERM SMCI--MB 01072009 MSFT 00000097) Jul 16 00:48:05.906323 kernel: ACPI: SPMI 0x0000000079580990 000041 (v05 SUPERM SMCI--MB 00000000 AMI. 00000000) Jul 16 00:48:05.906328 kernel: ACPI: SSDT 0x00000000795809D8 001B1C (v02 CpuRef CpuSsdt 00003000 INTL 20160527) Jul 16 00:48:05.906333 kernel: ACPI: SSDT 0x00000000795824F8 0031C6 (v02 SaSsdt SaSsdt 00003000 INTL 20160527) Jul 16 00:48:05.906338 kernel: ACPI: SSDT 0x00000000795856C0 00232B (v02 PegSsd PegSsdt 00001000 INTL 20160527) Jul 16 00:48:05.906343 kernel: ACPI: HPET 0x00000000795879F0 000038 (v01 SUPERM SMCI--MB 00000002 01000013) Jul 16 00:48:05.906348 kernel: ACPI: SSDT 0x0000000079587A28 000FAE (v02 SUPERM Ther_Rvp 00001000 INTL 20160527) Jul 16 00:48:05.906356 kernel: ACPI: SSDT 0x00000000795889D8 0008F7 (v02 INTEL xh_mossb 00000000 INTL 20160527) Jul 16 00:48:05.906361 kernel: ACPI: UEFI 0x00000000795892D0 000042 (v01 SUPERM SMCI--MB 00000002 01000013) Jul 16 00:48:05.906368 kernel: ACPI: LPIT 0x0000000079589318 000094 (v01 SUPERM SMCI--MB 00000002 01000013) Jul 16 00:48:05.906373 kernel: ACPI: SSDT 0x00000000795893B0 0027DE (v02 SUPERM PtidDevc 00001000 INTL 20160527) Jul 16 00:48:05.906399 kernel: ACPI: SSDT 0x000000007958BB90 0014E2 (v02 SUPERM TbtTypeC 00000000 INTL 20160527) Jul 16 
00:48:05.906404 kernel: ACPI: DBGP 0x000000007958D078 000034 (v01 SUPERM SMCI--MB 00000002 01000013) Jul 16 00:48:05.906409 kernel: ACPI: DBG2 0x000000007958D0B0 000054 (v00 SUPERM SMCI--MB 00000002 01000013) Jul 16 00:48:05.906430 kernel: ACPI: SSDT 0x000000007958D108 001B67 (v02 SUPERM UsbCTabl 00001000 INTL 20160527) Jul 16 00:48:05.906435 kernel: ACPI: DMAR 0x000000007958EC70 0000A8 (v01 INTEL EDK2 00000002 01000013) Jul 16 00:48:05.906440 kernel: ACPI: SSDT 0x000000007958ED18 000144 (v02 Intel ADebTabl 00001000 INTL 20160527) Jul 16 00:48:05.906445 kernel: ACPI: TPM2 0x000000007958EE60 000034 (v04 SUPERM SMCI--MB 00000001 AMI 00000000) Jul 16 00:48:05.906451 kernel: ACPI: SSDT 0x000000007958EE98 000D8F (v02 INTEL SpsNm 00000002 INTL 20160527) Jul 16 00:48:05.906457 kernel: ACPI: WSMT 0x000000007958FC28 000028 (v01 \xf5m 01072009 AMI 00010013) Jul 16 00:48:05.906462 kernel: ACPI: EINJ 0x000000007958FC50 000130 (v01 AMI AMI.EINJ 00000000 AMI. 00000000) Jul 16 00:48:05.906467 kernel: ACPI: ERST 0x000000007958FD80 000230 (v01 AMIER AMI.ERST 00000000 AMI. 00000000) Jul 16 00:48:05.906472 kernel: ACPI: BERT 0x000000007958FFB0 000030 (v01 AMI AMI.BERT 00000000 AMI. 00000000) Jul 16 00:48:05.906477 kernel: ACPI: HEST 0x000000007958FFE0 00027C (v01 AMI AMI.HEST 00000000 AMI. 
00000000) Jul 16 00:48:05.906482 kernel: ACPI: SSDT 0x0000000079590260 000162 (v01 SUPERM SMCCDN 00000000 INTL 20181221) Jul 16 00:48:05.906487 kernel: ACPI: Reserving FACP table memory at [mem 0x79580620-0x79580733] Jul 16 00:48:05.906493 kernel: ACPI: Reserving DSDT table memory at [mem 0x79544268-0x7958061e] Jul 16 00:48:05.906498 kernel: ACPI: Reserving FACS table memory at [mem 0x79662f80-0x79662fbf] Jul 16 00:48:05.906503 kernel: ACPI: Reserving APIC table memory at [mem 0x79580738-0x79580863] Jul 16 00:48:05.906508 kernel: ACPI: Reserving FPDT table memory at [mem 0x79580868-0x795808ab] Jul 16 00:48:05.906513 kernel: ACPI: Reserving FIDT table memory at [mem 0x795808b0-0x7958094b] Jul 16 00:48:05.906518 kernel: ACPI: Reserving MCFG table memory at [mem 0x79580950-0x7958098b] Jul 16 00:48:05.906524 kernel: ACPI: Reserving SPMI table memory at [mem 0x79580990-0x795809d0] Jul 16 00:48:05.906529 kernel: ACPI: Reserving SSDT table memory at [mem 0x795809d8-0x795824f3] Jul 16 00:48:05.906534 kernel: ACPI: Reserving SSDT table memory at [mem 0x795824f8-0x795856bd] Jul 16 00:48:05.906540 kernel: ACPI: Reserving SSDT table memory at [mem 0x795856c0-0x795879ea] Jul 16 00:48:05.906545 kernel: ACPI: Reserving HPET table memory at [mem 0x795879f0-0x79587a27] Jul 16 00:48:05.906550 kernel: ACPI: Reserving SSDT table memory at [mem 0x79587a28-0x795889d5] Jul 16 00:48:05.906555 kernel: ACPI: Reserving SSDT table memory at [mem 0x795889d8-0x795892ce] Jul 16 00:48:05.906560 kernel: ACPI: Reserving UEFI table memory at [mem 0x795892d0-0x79589311] Jul 16 00:48:05.906565 kernel: ACPI: Reserving LPIT table memory at [mem 0x79589318-0x795893ab] Jul 16 00:48:05.906570 kernel: ACPI: Reserving SSDT table memory at [mem 0x795893b0-0x7958bb8d] Jul 16 00:48:05.906575 kernel: ACPI: Reserving SSDT table memory at [mem 0x7958bb90-0x7958d071] Jul 16 00:48:05.906580 kernel: ACPI: Reserving DBGP table memory at [mem 0x7958d078-0x7958d0ab] Jul 16 00:48:05.906585 kernel: ACPI: Reserving DBG2 
table memory at [mem 0x7958d0b0-0x7958d103] Jul 16 00:48:05.906591 kernel: ACPI: Reserving SSDT table memory at [mem 0x7958d108-0x7958ec6e] Jul 16 00:48:05.906596 kernel: ACPI: Reserving DMAR table memory at [mem 0x7958ec70-0x7958ed17] Jul 16 00:48:05.906601 kernel: ACPI: Reserving SSDT table memory at [mem 0x7958ed18-0x7958ee5b] Jul 16 00:48:05.906606 kernel: ACPI: Reserving TPM2 table memory at [mem 0x7958ee60-0x7958ee93] Jul 16 00:48:05.906611 kernel: ACPI: Reserving SSDT table memory at [mem 0x7958ee98-0x7958fc26] Jul 16 00:48:05.906616 kernel: ACPI: Reserving WSMT table memory at [mem 0x7958fc28-0x7958fc4f] Jul 16 00:48:05.906621 kernel: ACPI: Reserving EINJ table memory at [mem 0x7958fc50-0x7958fd7f] Jul 16 00:48:05.906626 kernel: ACPI: Reserving ERST table memory at [mem 0x7958fd80-0x7958ffaf] Jul 16 00:48:05.906631 kernel: ACPI: Reserving BERT table memory at [mem 0x7958ffb0-0x7958ffdf] Jul 16 00:48:05.906637 kernel: ACPI: Reserving HEST table memory at [mem 0x7958ffe0-0x7959025b] Jul 16 00:48:05.906642 kernel: ACPI: Reserving SSDT table memory at [mem 0x79590260-0x795903c1] Jul 16 00:48:05.906647 kernel: No NUMA configuration found Jul 16 00:48:05.906652 kernel: Faking a node at [mem 0x0000000000000000-0x000000087f7fffff] Jul 16 00:48:05.906657 kernel: NODE_DATA(0) allocated [mem 0x87f7f8dc0-0x87f7fffff] Jul 16 00:48:05.906662 kernel: Zone ranges: Jul 16 00:48:05.906667 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jul 16 00:48:05.906672 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Jul 16 00:48:05.906677 kernel: Normal [mem 0x0000000100000000-0x000000087f7fffff] Jul 16 00:48:05.906683 kernel: Device empty Jul 16 00:48:05.906689 kernel: Movable zone start for each node Jul 16 00:48:05.906694 kernel: Early memory node ranges Jul 16 00:48:05.906699 kernel: node 0: [mem 0x0000000000001000-0x0000000000098fff] Jul 16 00:48:05.906704 kernel: node 0: [mem 0x0000000000100000-0x000000003fffffff] Jul 16 00:48:05.906709 kernel: node 0: [mem 
0x0000000040400000-0x000000006dfbbfff] Jul 16 00:48:05.906714 kernel: node 0: [mem 0x000000006dfbe000-0x0000000077fc4fff] Jul 16 00:48:05.906719 kernel: node 0: [mem 0x00000000790a8000-0x0000000079230fff] Jul 16 00:48:05.906728 kernel: node 0: [mem 0x000000007beff000-0x000000007befffff] Jul 16 00:48:05.906733 kernel: node 0: [mem 0x0000000100000000-0x000000087f7fffff] Jul 16 00:48:05.906739 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000087f7fffff] Jul 16 00:48:05.906744 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jul 16 00:48:05.906751 kernel: On node 0, zone DMA: 103 pages in unavailable ranges Jul 16 00:48:05.906756 kernel: On node 0, zone DMA32: 1024 pages in unavailable ranges Jul 16 00:48:05.906762 kernel: On node 0, zone DMA32: 2 pages in unavailable ranges Jul 16 00:48:05.906767 kernel: On node 0, zone DMA32: 4323 pages in unavailable ranges Jul 16 00:48:05.906772 kernel: On node 0, zone DMA32: 11470 pages in unavailable ranges Jul 16 00:48:05.906779 kernel: On node 0, zone Normal: 16640 pages in unavailable ranges Jul 16 00:48:05.906784 kernel: On node 0, zone Normal: 2048 pages in unavailable ranges Jul 16 00:48:05.906790 kernel: ACPI: PM-Timer IO Port: 0x1808 Jul 16 00:48:05.906795 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1]) Jul 16 00:48:05.906800 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1]) Jul 16 00:48:05.906806 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1]) Jul 16 00:48:05.906811 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1]) Jul 16 00:48:05.906816 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1]) Jul 16 00:48:05.906822 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1]) Jul 16 00:48:05.906828 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1]) Jul 16 00:48:05.906833 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1]) Jul 16 00:48:05.906839 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1]) Jul 16 00:48:05.906844 kernel: 
ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1]) Jul 16 00:48:05.906849 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1]) Jul 16 00:48:05.906854 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1]) Jul 16 00:48:05.906860 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1]) Jul 16 00:48:05.906865 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1]) Jul 16 00:48:05.906870 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1]) Jul 16 00:48:05.906876 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1]) Jul 16 00:48:05.906882 kernel: IOAPIC[0]: apic_id 2, version 32, address 0xfec00000, GSI 0-119 Jul 16 00:48:05.906887 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Jul 16 00:48:05.906893 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jul 16 00:48:05.906898 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jul 16 00:48:05.906903 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Jul 16 00:48:05.906909 kernel: TSC deadline timer available Jul 16 00:48:05.906914 kernel: CPU topo: Max. logical packages: 1 Jul 16 00:48:05.906919 kernel: CPU topo: Max. logical dies: 1 Jul 16 00:48:05.906925 kernel: CPU topo: Max. dies per package: 1 Jul 16 00:48:05.906931 kernel: CPU topo: Max. threads per core: 2 Jul 16 00:48:05.906937 kernel: CPU topo: Num. cores per package: 8 Jul 16 00:48:05.906942 kernel: CPU topo: Num. 
threads per package: 16 Jul 16 00:48:05.906947 kernel: CPU topo: Allowing 16 present CPUs plus 0 hotplug CPUs Jul 16 00:48:05.906952 kernel: [mem 0x7f800000-0xdfffffff] available for PCI devices Jul 16 00:48:05.906958 kernel: Booting paravirtualized kernel on bare hardware Jul 16 00:48:05.906963 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jul 16 00:48:05.906969 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:16 nr_cpu_ids:16 nr_node_ids:1 Jul 16 00:48:05.906974 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u262144 Jul 16 00:48:05.906981 kernel: pcpu-alloc: s207832 r8192 d29736 u262144 alloc=1*2097152 Jul 16 00:48:05.906986 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15 Jul 16 00:48:05.906992 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=e99cfd77676fb46bb6e7e7d8fcebb095dd84f43a354bdf152777c6b07182cd66 Jul 16 00:48:05.906998 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jul 16 00:48:05.907003 kernel: random: crng init done Jul 16 00:48:05.907008 kernel: Dentry cache hash table entries: 4194304 (order: 13, 33554432 bytes, linear) Jul 16 00:48:05.907014 kernel: Inode-cache hash table entries: 2097152 (order: 12, 16777216 bytes, linear) Jul 16 00:48:05.907019 kernel: Fallback order for Node 0: 0 Jul 16 00:48:05.907025 kernel: Built 1 zonelists, mobility grouping on. Total pages: 8352997 Jul 16 00:48:05.907031 kernel: Policy zone: Normal Jul 16 00:48:05.907036 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jul 16 00:48:05.907041 kernel: software IO TLB: area num 16. 
Jul 16 00:48:05.907047 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1 Jul 16 00:48:05.907052 kernel: ftrace: allocating 40095 entries in 157 pages Jul 16 00:48:05.907057 kernel: ftrace: allocated 157 pages with 5 groups Jul 16 00:48:05.907063 kernel: Dynamic Preempt: voluntary Jul 16 00:48:05.907068 kernel: rcu: Preemptible hierarchical RCU implementation. Jul 16 00:48:05.907075 kernel: rcu: RCU event tracing is enabled. Jul 16 00:48:05.907080 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16. Jul 16 00:48:05.907086 kernel: Trampoline variant of Tasks RCU enabled. Jul 16 00:48:05.907091 kernel: Rude variant of Tasks RCU enabled. Jul 16 00:48:05.907096 kernel: Tracing variant of Tasks RCU enabled. Jul 16 00:48:05.907101 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jul 16 00:48:05.907107 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16 Jul 16 00:48:05.907112 kernel: RCU Tasks: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Jul 16 00:48:05.907118 kernel: RCU Tasks Rude: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Jul 16 00:48:05.907123 kernel: RCU Tasks Trace: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Jul 16 00:48:05.907130 kernel: NR_IRQS: 33024, nr_irqs: 2184, preallocated irqs: 16 Jul 16 00:48:05.907135 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Jul 16 00:48:05.907140 kernel: Console: colour VGA+ 80x25 Jul 16 00:48:05.907146 kernel: printk: legacy console [tty0] enabled Jul 16 00:48:05.907151 kernel: printk: legacy console [ttyS1] enabled Jul 16 00:48:05.907156 kernel: ACPI: Core revision 20240827 Jul 16 00:48:05.907161 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 79635855245 ns Jul 16 00:48:05.907167 kernel: APIC: Switch to symmetric I/O mode setup Jul 16 00:48:05.907172 kernel: DMAR: Host address width 39 Jul 16 00:48:05.907178 kernel: DMAR: DRHD base: 0x000000fed90000 flags: 0x0 Jul 16 00:48:05.907184 kernel: DMAR: dmar0: reg_base_addr fed90000 ver 1:0 cap 1c0000c40660462 ecap 19e2ff0505e Jul 16 00:48:05.907189 kernel: DMAR: DRHD base: 0x000000fed91000 flags: 0x1 Jul 16 00:48:05.907195 kernel: DMAR: dmar1: reg_base_addr fed91000 ver 1:0 cap d2008c40660462 ecap f050da Jul 16 00:48:05.907200 kernel: DMAR: RMRR base: 0x00000079f11000 end: 0x0000007a15afff Jul 16 00:48:05.907205 kernel: DMAR: RMRR base: 0x0000007d000000 end: 0x0000007f7fffff Jul 16 00:48:05.907211 kernel: DMAR-IR: IOAPIC id 2 under DRHD base 0xfed91000 IOMMU 1 Jul 16 00:48:05.907216 kernel: DMAR-IR: HPET id 0 under DRHD base 0xfed91000 Jul 16 00:48:05.907221 kernel: DMAR-IR: Queued invalidation will be enabled to support x2apic and Intr-remapping. Jul 16 00:48:05.907228 kernel: DMAR-IR: Enabled IRQ remapping in x2apic mode Jul 16 00:48:05.907233 kernel: x2apic enabled Jul 16 00:48:05.907238 kernel: APIC: Switched APIC routing to: cluster x2apic Jul 16 00:48:05.907244 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Jul 16 00:48:05.907249 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x3101f59f5e6, max_idle_ns: 440795259996 ns Jul 16 00:48:05.907255 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 
6799.81 BogoMIPS (lpj=3399906) Jul 16 00:48:05.907260 kernel: CPU0: Thermal monitoring enabled (TM1) Jul 16 00:48:05.907265 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Jul 16 00:48:05.907271 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4 Jul 16 00:48:05.907277 kernel: process: using mwait in idle threads Jul 16 00:48:05.907283 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jul 16 00:48:05.907288 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall and VM exit Jul 16 00:48:05.907293 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS Jul 16 00:48:05.907299 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT Jul 16 00:48:05.907304 kernel: RETBleed: Mitigation: Enhanced IBRS Jul 16 00:48:05.907310 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Jul 16 00:48:05.907315 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Jul 16 00:48:05.907321 kernel: TAA: Mitigation: Clear CPU buffers Jul 16 00:48:05.907327 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Jul 16 00:48:05.907332 kernel: SRBDS: Mitigation: Microcode Jul 16 00:48:05.907337 kernel: GDS: Vulnerable: No microcode Jul 16 00:48:05.907343 kernel: ITS: Mitigation: Aligned branch/return thunks Jul 16 00:48:05.907348 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jul 16 00:48:05.907355 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jul 16 00:48:05.907361 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jul 16 00:48:05.907366 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers' Jul 16 00:48:05.907373 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR' Jul 16 00:48:05.907397 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jul 16 00:48:05.907402 kernel: x86/fpu: xstate_offset[3]: 832, 
xstate_sizes[3]: 64 Jul 16 00:48:05.907423 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64 Jul 16 00:48:05.907429 kernel: x86/fpu: Enabled xstate features 0x1f, context size is 960 bytes, using 'compacted' format. Jul 16 00:48:05.907434 kernel: Freeing SMP alternatives memory: 32K Jul 16 00:48:05.907440 kernel: pid_max: default: 32768 minimum: 301 Jul 16 00:48:05.907445 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Jul 16 00:48:05.907450 kernel: landlock: Up and running. Jul 16 00:48:05.907457 kernel: SELinux: Initializing. Jul 16 00:48:05.907462 kernel: Mount-cache hash table entries: 65536 (order: 7, 524288 bytes, linear) Jul 16 00:48:05.907467 kernel: Mountpoint-cache hash table entries: 65536 (order: 7, 524288 bytes, linear) Jul 16 00:48:05.907473 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd) Jul 16 00:48:05.907478 kernel: Performance Events: PEBS fmt3+, Skylake events, 32-deep LBR, full-width counters, Intel PMU driver. Jul 16 00:48:05.907484 kernel: ... version: 4 Jul 16 00:48:05.907489 kernel: ... bit width: 48 Jul 16 00:48:05.907494 kernel: ... generic registers: 4 Jul 16 00:48:05.907500 kernel: ... value mask: 0000ffffffffffff Jul 16 00:48:05.907506 kernel: ... max period: 00007fffffffffff Jul 16 00:48:05.907511 kernel: ... fixed-purpose events: 3 Jul 16 00:48:05.907517 kernel: ... event mask: 000000070000000f Jul 16 00:48:05.907522 kernel: signal: max sigframe size: 2032 Jul 16 00:48:05.907527 kernel: Estimated ratio of average max frequency by base frequency (times 1024): 1445 Jul 16 00:48:05.907533 kernel: rcu: Hierarchical SRCU implementation. Jul 16 00:48:05.907538 kernel: rcu: Max phase no-delay instances is 400. Jul 16 00:48:05.907543 kernel: Timer migration: 2 hierarchy levels; 8 children per group; 2 crossnode level Jul 16 00:48:05.907549 kernel: NMI watchdog: Enabled. Permanently consumes one hw-PMU counter. 
Jul 16 00:48:05.907555 kernel: smp: Bringing up secondary CPUs ... Jul 16 00:48:05.907561 kernel: smpboot: x86: Booting SMP configuration: Jul 16 00:48:05.907566 kernel: .... node #0, CPUs: #1 #2 #3 #4 #5 #6 #7 #8 #9 #10 #11 #12 #13 #14 #15 Jul 16 00:48:05.907572 kernel: TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details. Jul 16 00:48:05.907578 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. Jul 16 00:48:05.907583 kernel: smp: Brought up 1 node, 16 CPUs Jul 16 00:48:05.907588 kernel: smpboot: Total of 16 processors activated (108796.99 BogoMIPS) Jul 16 00:48:05.907594 kernel: Memory: 32652116K/33411988K available (14336K kernel code, 2430K rwdata, 9956K rodata, 54424K init, 2544K bss, 732516K reserved, 0K cma-reserved) Jul 16 00:48:05.907600 kernel: devtmpfs: initialized Jul 16 00:48:05.907606 kernel: x86/mm: Memory block size: 128MB Jul 16 00:48:05.907611 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x6dfbc000-0x6dfbcfff] (4096 bytes) Jul 16 00:48:05.907617 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x79231000-0x79662fff] (4399104 bytes) Jul 16 00:48:05.907622 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jul 16 00:48:05.907627 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear) Jul 16 00:48:05.907633 kernel: pinctrl core: initialized pinctrl subsystem Jul 16 00:48:05.907638 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jul 16 00:48:05.907644 kernel: audit: initializing netlink subsys (disabled) Jul 16 00:48:05.907650 kernel: audit: type=2000 audit(1752626878.171:1): state=initialized audit_enabled=0 res=1 Jul 16 00:48:05.907655 kernel: thermal_sys: Registered thermal governor 'step_wise' Jul 16 00:48:05.907661 kernel: 
thermal_sys: Registered thermal governor 'user_space' Jul 16 00:48:05.907666 kernel: cpuidle: using governor menu Jul 16 00:48:05.907671 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jul 16 00:48:05.907677 kernel: dca service started, version 1.12.1 Jul 16 00:48:05.907682 kernel: PCI: ECAM [mem 0xe0000000-0xefffffff] (base 0xe0000000) for domain 0000 [bus 00-ff] Jul 16 00:48:05.907687 kernel: PCI: Using configuration type 1 for base access Jul 16 00:48:05.907693 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Jul 16 00:48:05.907699 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jul 16 00:48:05.907705 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jul 16 00:48:05.907710 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jul 16 00:48:05.907715 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jul 16 00:48:05.907721 kernel: ACPI: Added _OSI(Module Device) Jul 16 00:48:05.907726 kernel: ACPI: Added _OSI(Processor Device) Jul 16 00:48:05.907731 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jul 16 00:48:05.907737 kernel: ACPI: 12 ACPI AML tables successfully acquired and loaded Jul 16 00:48:05.907742 kernel: ACPI: Dynamic OEM Table Load: Jul 16 00:48:05.907748 kernel: ACPI: SSDT 0xFFFF90AD422C9C00 000400 (v02 PmRef Cpu0Cst 00003001 INTL 20160527) Jul 16 00:48:05.907754 kernel: ACPI: Dynamic OEM Table Load: Jul 16 00:48:05.907759 kernel: ACPI: SSDT 0xFFFF90AD423A5800 000683 (v02 PmRef Cpu0Ist 00003000 INTL 20160527) Jul 16 00:48:05.907764 kernel: ACPI: Dynamic OEM Table Load: Jul 16 00:48:05.907770 kernel: ACPI: SSDT 0xFFFF90AD40249800 0000F4 (v02 PmRef Cpu0Psd 00003000 INTL 20160527) Jul 16 00:48:05.907775 kernel: ACPI: Dynamic OEM Table Load: Jul 16 00:48:05.907780 kernel: ACPI: SSDT 0xFFFF90AD423A5000 0005FC (v02 PmRef ApIst 00003000 INTL 20160527) Jul 16 00:48:05.907786 kernel: ACPI: Dynamic OEM Table Load: Jul 16 
00:48:05.907791 kernel: ACPI: SSDT 0xFFFF90AD401A5000 000AB0 (v02 PmRef ApPsd 00003000 INTL 20160527) Jul 16 00:48:05.907796 kernel: ACPI: Dynamic OEM Table Load: Jul 16 00:48:05.907802 kernel: ACPI: SSDT 0xFFFF90AD422CE000 00030A (v02 PmRef ApCst 00003000 INTL 20160527) Jul 16 00:48:05.907808 kernel: ACPI: Interpreter enabled Jul 16 00:48:05.907813 kernel: ACPI: PM: (supports S0 S5) Jul 16 00:48:05.907819 kernel: ACPI: Using IOAPIC for interrupt routing Jul 16 00:48:05.907824 kernel: HEST: Enabling Firmware First mode for corrected errors. Jul 16 00:48:05.907830 kernel: mce: [Firmware Bug]: Ignoring request to disable invalid MCA bank 14. Jul 16 00:48:05.907835 kernel: HEST: Table parsing has been initialized. Jul 16 00:48:05.907840 kernel: GHES: APEI firmware first mode is enabled by APEI bit and WHEA _OSC. Jul 16 00:48:05.907845 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jul 16 00:48:05.907852 kernel: PCI: Using E820 reservations for host bridge windows Jul 16 00:48:05.907857 kernel: ACPI: Enabled 9 GPEs in block 00 to 7F Jul 16 00:48:05.907863 kernel: ACPI: \_SB_.PCI0.XDCI.USBC: New power resource Jul 16 00:48:05.907868 kernel: ACPI: \_SB_.PCI0.SAT0.VOL0.V0PR: New power resource Jul 16 00:48:05.907873 kernel: ACPI: \_SB_.PCI0.SAT0.VOL1.V1PR: New power resource Jul 16 00:48:05.907879 kernel: ACPI: \_SB_.PCI0.SAT0.VOL2.V2PR: New power resource Jul 16 00:48:05.907884 kernel: ACPI: \_SB_.PCI0.CNVW.WRST: New power resource Jul 16 00:48:05.907890 kernel: ACPI: [Firmware Bug]: BIOS _OSI(Linux) query ignored Jul 16 00:48:05.907895 kernel: ACPI: \_TZ_.FN00: New power resource Jul 16 00:48:05.907902 kernel: ACPI: \_TZ_.FN01: New power resource Jul 16 00:48:05.907907 kernel: ACPI: \_TZ_.FN02: New power resource Jul 16 00:48:05.907913 kernel: ACPI: \_TZ_.FN03: New power resource Jul 16 00:48:05.907918 kernel: ACPI: \_TZ_.FN04: New power resource Jul 16 00:48:05.907923 kernel: ACPI: \PIN_: New power resource Jul 16 
00:48:05.907929 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-fe]) Jul 16 00:48:05.908009 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jul 16 00:48:05.908067 kernel: acpi PNP0A08:00: _OSC: platform does not support [AER] Jul 16 00:48:05.908125 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability LTR] Jul 16 00:48:05.908133 kernel: PCI host bridge to bus 0000:00 Jul 16 00:48:05.908190 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jul 16 00:48:05.908240 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jul 16 00:48:05.908288 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jul 16 00:48:05.908336 kernel: pci_bus 0000:00: root bus resource [mem 0x7f800000-0xdfffffff window] Jul 16 00:48:05.908423 kernel: pci_bus 0000:00: root bus resource [mem 0xfc800000-0xfe7fffff window] Jul 16 00:48:05.908473 kernel: pci_bus 0000:00: root bus resource [bus 00-fe] Jul 16 00:48:05.908535 kernel: pci 0000:00:00.0: [8086:3e31] type 00 class 0x060000 conventional PCI endpoint Jul 16 00:48:05.908598 kernel: pci 0000:00:01.0: [8086:1901] type 01 class 0x060400 PCIe Root Port Jul 16 00:48:05.908655 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Jul 16 00:48:05.908710 kernel: pci 0000:00:01.0: PME# supported from D0 D3hot D3cold Jul 16 00:48:05.908769 kernel: pci 0000:00:01.1: [8086:1905] type 01 class 0x060400 PCIe Root Port Jul 16 00:48:05.908826 kernel: pci 0000:00:01.1: PCI bridge to [bus 02] Jul 16 00:48:05.908881 kernel: pci 0000:00:01.1: bridge window [mem 0x96100000-0x962fffff] Jul 16 00:48:05.908935 kernel: pci 0000:00:01.1: bridge window [mem 0x90000000-0x93ffffff 64bit pref] Jul 16 00:48:05.908989 kernel: pci 0000:00:01.1: PME# supported from D0 D3hot D3cold Jul 16 00:48:05.909049 kernel: pci 0000:00:02.0: [8086:3e9a] type 00 class 0x038000 PCIe Root Complex Integrated Endpoint Jul 16 00:48:05.909104 kernel: pci 
0000:00:02.0: BAR 0 [mem 0x94000000-0x94ffffff 64bit] Jul 16 00:48:05.909167 kernel: pci 0000:00:02.0: BAR 2 [mem 0x80000000-0x8fffffff 64bit pref] Jul 16 00:48:05.909223 kernel: pci 0000:00:02.0: BAR 4 [io 0x6000-0x603f] Jul 16 00:48:05.909281 kernel: pci 0000:00:08.0: [8086:1911] type 00 class 0x088000 conventional PCI endpoint Jul 16 00:48:05.909336 kernel: pci 0000:00:08.0: BAR 0 [mem 0x9651f000-0x9651ffff 64bit] Jul 16 00:48:05.909443 kernel: pci 0000:00:12.0: [8086:a379] type 00 class 0x118000 conventional PCI endpoint Jul 16 00:48:05.909500 kernel: pci 0000:00:12.0: BAR 0 [mem 0x9651e000-0x9651efff 64bit] Jul 16 00:48:05.909558 kernel: pci 0000:00:14.0: [8086:a36d] type 00 class 0x0c0330 conventional PCI endpoint Jul 16 00:48:05.909615 kernel: pci 0000:00:14.0: BAR 0 [mem 0x96500000-0x9650ffff 64bit] Jul 16 00:48:05.909670 kernel: pci 0000:00:14.0: PME# supported from D3hot D3cold Jul 16 00:48:05.909728 kernel: pci 0000:00:14.2: [8086:a36f] type 00 class 0x050000 conventional PCI endpoint Jul 16 00:48:05.909782 kernel: pci 0000:00:14.2: BAR 0 [mem 0x96512000-0x96513fff 64bit] Jul 16 00:48:05.909836 kernel: pci 0000:00:14.2: BAR 2 [mem 0x9651d000-0x9651dfff 64bit] Jul 16 00:48:05.909893 kernel: pci 0000:00:15.0: [8086:a368] type 00 class 0x0c8000 conventional PCI endpoint Jul 16 00:48:05.909950 kernel: pci 0000:00:15.0: BAR 0 [mem 0x00000000-0x00000fff 64bit] Jul 16 00:48:05.910010 kernel: pci 0000:00:15.1: [8086:a369] type 00 class 0x0c8000 conventional PCI endpoint Jul 16 00:48:05.910066 kernel: pci 0000:00:15.1: BAR 0 [mem 0x00000000-0x00000fff 64bit] Jul 16 00:48:05.910124 kernel: pci 0000:00:16.0: [8086:a360] type 00 class 0x078000 conventional PCI endpoint Jul 16 00:48:05.910180 kernel: pci 0000:00:16.0: BAR 0 [mem 0x9651a000-0x9651afff 64bit] Jul 16 00:48:05.910234 kernel: pci 0000:00:16.0: PME# supported from D3hot Jul 16 00:48:05.910293 kernel: pci 0000:00:16.1: [8086:a361] type 00 class 0x078000 conventional PCI endpoint Jul 16 00:48:05.910348 
kernel: pci 0000:00:16.1: BAR 0 [mem 0x96519000-0x96519fff 64bit] Jul 16 00:48:05.910442 kernel: pci 0000:00:16.1: PME# supported from D3hot Jul 16 00:48:05.910503 kernel: pci 0000:00:16.4: [8086:a364] type 00 class 0x078000 conventional PCI endpoint Jul 16 00:48:05.910558 kernel: pci 0000:00:16.4: BAR 0 [mem 0x96518000-0x96518fff 64bit] Jul 16 00:48:05.910613 kernel: pci 0000:00:16.4: PME# supported from D3hot Jul 16 00:48:05.910672 kernel: pci 0000:00:17.0: [8086:a352] type 00 class 0x010601 conventional PCI endpoint Jul 16 00:48:05.910726 kernel: pci 0000:00:17.0: BAR 0 [mem 0x96510000-0x96511fff] Jul 16 00:48:05.910779 kernel: pci 0000:00:17.0: BAR 1 [mem 0x96517000-0x965170ff] Jul 16 00:48:05.910833 kernel: pci 0000:00:17.0: BAR 2 [io 0x6090-0x6097] Jul 16 00:48:05.910886 kernel: pci 0000:00:17.0: BAR 3 [io 0x6080-0x6083] Jul 16 00:48:05.910941 kernel: pci 0000:00:17.0: BAR 4 [io 0x6060-0x607f] Jul 16 00:48:05.910995 kernel: pci 0000:00:17.0: BAR 5 [mem 0x96516000-0x965167ff] Jul 16 00:48:05.911049 kernel: pci 0000:00:17.0: PME# supported from D3hot Jul 16 00:48:05.911108 kernel: pci 0000:00:1b.0: [8086:a340] type 01 class 0x060400 PCIe Root Port Jul 16 00:48:05.911162 kernel: pci 0000:00:1b.0: PCI bridge to [bus 03] Jul 16 00:48:05.911217 kernel: pci 0000:00:1b.0: PME# supported from D0 D3hot D3cold Jul 16 00:48:05.911277 kernel: pci 0000:00:1b.4: [8086:a32c] type 01 class 0x060400 PCIe Root Port Jul 16 00:48:05.911335 kernel: pci 0000:00:1b.4: PCI bridge to [bus 04] Jul 16 00:48:05.911428 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff] Jul 16 00:48:05.911482 kernel: pci 0000:00:1b.4: bridge window [mem 0x96400000-0x964fffff] Jul 16 00:48:05.911537 kernel: pci 0000:00:1b.4: PME# supported from D0 D3hot D3cold Jul 16 00:48:05.911595 kernel: pci 0000:00:1b.5: [8086:a32d] type 01 class 0x060400 PCIe Root Port Jul 16 00:48:05.911649 kernel: pci 0000:00:1b.5: PCI bridge to [bus 05] Jul 16 00:48:05.911703 kernel: pci 0000:00:1b.5: bridge window [io 
0x4000-0x4fff] Jul 16 00:48:05.911760 kernel: pci 0000:00:1b.5: bridge window [mem 0x96300000-0x963fffff] Jul 16 00:48:05.911815 kernel: pci 0000:00:1b.5: PME# supported from D0 D3hot D3cold Jul 16 00:48:05.911873 kernel: pci 0000:00:1c.0: [8086:a338] type 01 class 0x060400 PCIe Root Port Jul 16 00:48:05.911928 kernel: pci 0000:00:1c.0: PCI bridge to [bus 06] Jul 16 00:48:05.911982 kernel: pci 0000:00:1c.0: PME# supported from D0 D3hot D3cold Jul 16 00:48:05.912044 kernel: pci 0000:00:1c.1: [8086:a339] type 01 class 0x060400 PCIe Root Port Jul 16 00:48:05.912100 kernel: pci 0000:00:1c.1: PCI bridge to [bus 07-08] Jul 16 00:48:05.912157 kernel: pci 0000:00:1c.1: bridge window [io 0x3000-0x3fff] Jul 16 00:48:05.912211 kernel: pci 0000:00:1c.1: bridge window [mem 0x95000000-0x960fffff] Jul 16 00:48:05.912266 kernel: pci 0000:00:1c.1: PME# supported from D0 D3hot D3cold Jul 16 00:48:05.912325 kernel: pci 0000:00:1e.0: [8086:a328] type 00 class 0x078000 conventional PCI endpoint Jul 16 00:48:05.912408 kernel: pci 0000:00:1e.0: BAR 0 [mem 0x00000000-0x00000fff 64bit] Jul 16 00:48:05.912483 kernel: pci 0000:00:1f.0: [8086:a309] type 00 class 0x060100 conventional PCI endpoint Jul 16 00:48:05.912542 kernel: pci 0000:00:1f.4: [8086:a323] type 00 class 0x0c0500 conventional PCI endpoint Jul 16 00:48:05.912600 kernel: pci 0000:00:1f.4: BAR 0 [mem 0x96514000-0x965140ff 64bit] Jul 16 00:48:05.912654 kernel: pci 0000:00:1f.4: BAR 4 [io 0xefa0-0xefbf] Jul 16 00:48:05.912712 kernel: pci 0000:00:1f.5: [8086:a324] type 00 class 0x0c8000 conventional PCI endpoint Jul 16 00:48:05.912765 kernel: pci 0000:00:1f.5: BAR 0 [mem 0xfe010000-0xfe010fff] Jul 16 00:48:05.912820 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Jul 16 00:48:05.912881 kernel: pci 0000:02:00.0: [15b3:1015] type 00 class 0x020000 PCIe Endpoint Jul 16 00:48:05.912940 kernel: pci 0000:02:00.0: BAR 0 [mem 0x92000000-0x93ffffff 64bit pref] Jul 16 00:48:05.912996 kernel: pci 0000:02:00.0: ROM [mem 0x96200000-0x962fffff 
pref] Jul 16 00:48:05.913051 kernel: pci 0000:02:00.0: PME# supported from D3cold Jul 16 00:48:05.913107 kernel: pci 0000:02:00.0: VF BAR 0 [mem 0x00000000-0x000fffff 64bit pref] Jul 16 00:48:05.913162 kernel: pci 0000:02:00.0: VF BAR 0 [mem 0x00000000-0x007fffff 64bit pref]: contains BAR 0 for 8 VFs Jul 16 00:48:05.913223 kernel: pci 0000:02:00.1: [15b3:1015] type 00 class 0x020000 PCIe Endpoint Jul 16 00:48:05.913279 kernel: pci 0000:02:00.1: BAR 0 [mem 0x90000000-0x91ffffff 64bit pref] Jul 16 00:48:05.913336 kernel: pci 0000:02:00.1: ROM [mem 0x96100000-0x961fffff pref] Jul 16 00:48:05.913428 kernel: pci 0000:02:00.1: PME# supported from D3cold Jul 16 00:48:05.913484 kernel: pci 0000:02:00.1: VF BAR 0 [mem 0x00000000-0x000fffff 64bit pref] Jul 16 00:48:05.913539 kernel: pci 0000:02:00.1: VF BAR 0 [mem 0x00000000-0x007fffff 64bit pref]: contains BAR 0 for 8 VFs Jul 16 00:48:05.913595 kernel: pci 0000:00:01.1: PCI bridge to [bus 02] Jul 16 00:48:05.913650 kernel: pci 0000:00:1b.0: PCI bridge to [bus 03] Jul 16 00:48:05.913744 kernel: pci 0000:04:00.0: working around ROM BAR overlap defect Jul 16 00:48:05.913803 kernel: pci 0000:04:00.0: [8086:1533] type 00 class 0x020000 PCIe Endpoint Jul 16 00:48:05.913860 kernel: pci 0000:04:00.0: BAR 0 [mem 0x96400000-0x9647ffff] Jul 16 00:48:05.913915 kernel: pci 0000:04:00.0: BAR 2 [io 0x5000-0x501f] Jul 16 00:48:05.913970 kernel: pci 0000:04:00.0: BAR 3 [mem 0x96480000-0x96483fff] Jul 16 00:48:05.914027 kernel: pci 0000:04:00.0: PME# supported from D0 D3hot D3cold Jul 16 00:48:05.914082 kernel: pci 0000:00:1b.4: PCI bridge to [bus 04] Jul 16 00:48:05.914145 kernel: pci 0000:05:00.0: working around ROM BAR overlap defect Jul 16 00:48:05.914204 kernel: pci 0000:05:00.0: [8086:1533] type 00 class 0x020000 PCIe Endpoint Jul 16 00:48:05.914259 kernel: pci 0000:05:00.0: BAR 0 [mem 0x96300000-0x9637ffff] Jul 16 00:48:05.914314 kernel: pci 0000:05:00.0: BAR 2 [io 0x4000-0x401f] Jul 16 00:48:05.914398 kernel: pci 0000:05:00.0: BAR 3 
[mem 0x96380000-0x96383fff] Jul 16 00:48:05.914498 kernel: pci 0000:05:00.0: PME# supported from D0 D3hot D3cold Jul 16 00:48:05.914555 kernel: pci 0000:00:1b.5: PCI bridge to [bus 05] Jul 16 00:48:05.914624 kernel: pci 0000:00:1c.0: PCI bridge to [bus 06] Jul 16 00:48:05.914685 kernel: pci 0000:07:00.0: [1a03:1150] type 01 class 0x060400 PCIe to PCI/PCI-X bridge Jul 16 00:48:05.914744 kernel: pci 0000:07:00.0: PCI bridge to [bus 08] Jul 16 00:48:05.914801 kernel: pci 0000:07:00.0: bridge window [io 0x3000-0x3fff] Jul 16 00:48:05.914856 kernel: pci 0000:07:00.0: bridge window [mem 0x95000000-0x960fffff] Jul 16 00:48:05.914911 kernel: pci 0000:07:00.0: enabling Extended Tags Jul 16 00:48:05.914967 kernel: pci 0000:07:00.0: supports D1 D2 Jul 16 00:48:05.915022 kernel: pci 0000:07:00.0: PME# supported from D0 D1 D2 D3hot D3cold Jul 16 00:48:05.915078 kernel: pci 0000:00:1c.1: PCI bridge to [bus 07-08] Jul 16 00:48:05.915139 kernel: pci_bus 0000:08: extended config space not accessible Jul 16 00:48:05.915204 kernel: pci 0000:08:00.0: [1a03:2000] type 00 class 0x030000 conventional PCI endpoint Jul 16 00:48:05.915264 kernel: pci 0000:08:00.0: BAR 0 [mem 0x95000000-0x95ffffff] Jul 16 00:48:05.915323 kernel: pci 0000:08:00.0: BAR 1 [mem 0x96000000-0x9601ffff] Jul 16 00:48:05.915443 kernel: pci 0000:08:00.0: BAR 2 [io 0x3000-0x307f] Jul 16 00:48:05.915502 kernel: pci 0000:08:00.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jul 16 00:48:05.915560 kernel: pci 0000:08:00.0: supports D1 D2 Jul 16 00:48:05.915620 kernel: pci 0000:08:00.0: PME# supported from D0 D1 D2 D3hot D3cold Jul 16 00:48:05.915677 kernel: pci 0000:07:00.0: PCI bridge to [bus 08] Jul 16 00:48:05.915685 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 0 Jul 16 00:48:05.915691 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 1 Jul 16 00:48:05.915698 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 0 Jul 16 00:48:05.915704 kernel: ACPI: PCI: Interrupt link LNKD 
configured for IRQ 0 Jul 16 00:48:05.915710 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 0 Jul 16 00:48:05.915715 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 0 Jul 16 00:48:05.915721 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 0 Jul 16 00:48:05.915727 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 0 Jul 16 00:48:05.915732 kernel: iommu: Default domain type: Translated Jul 16 00:48:05.915738 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jul 16 00:48:05.915744 kernel: PCI: Using ACPI for IRQ routing Jul 16 00:48:05.915750 kernel: PCI: pci_cache_line_size set to 64 bytes Jul 16 00:48:05.915756 kernel: e820: reserve RAM buffer [mem 0x00099800-0x0009ffff] Jul 16 00:48:05.915762 kernel: e820: reserve RAM buffer [mem 0x6dfbc000-0x6fffffff] Jul 16 00:48:05.915767 kernel: e820: reserve RAM buffer [mem 0x77fc5000-0x77ffffff] Jul 16 00:48:05.915773 kernel: e820: reserve RAM buffer [mem 0x79231000-0x7bffffff] Jul 16 00:48:05.915778 kernel: e820: reserve RAM buffer [mem 0x7bf00000-0x7bffffff] Jul 16 00:48:05.915784 kernel: e820: reserve RAM buffer [mem 0x87f800000-0x87fffffff] Jul 16 00:48:05.915841 kernel: pci 0000:08:00.0: vgaarb: setting as boot VGA device Jul 16 00:48:05.915899 kernel: pci 0000:08:00.0: vgaarb: bridge control possible Jul 16 00:48:05.915988 kernel: pci 0000:08:00.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jul 16 00:48:05.915997 kernel: vgaarb: loaded Jul 16 00:48:05.916002 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0 Jul 16 00:48:05.916008 kernel: hpet0: 8 comparators, 64-bit 24.000000 MHz counter Jul 16 00:48:05.916014 kernel: clocksource: Switched to clocksource tsc-early Jul 16 00:48:05.916019 kernel: VFS: Disk quotas dquot_6.6.0 Jul 16 00:48:05.916025 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jul 16 00:48:05.916031 kernel: pnp: PnP ACPI init Jul 16 00:48:05.916089 kernel: system 00:00: [mem 0x40000000-0x403fffff] has 
been reserved Jul 16 00:48:05.916146 kernel: pnp 00:02: [dma 0 disabled] Jul 16 00:48:05.916200 kernel: pnp 00:03: [dma 0 disabled] Jul 16 00:48:05.916254 kernel: system 00:04: [io 0x0680-0x069f] has been reserved Jul 16 00:48:05.916304 kernel: system 00:04: [io 0x164e-0x164f] has been reserved Jul 16 00:48:05.916361 kernel: system 00:05: [mem 0xfed10000-0xfed17fff] has been reserved Jul 16 00:48:05.916465 kernel: system 00:05: [mem 0xfed18000-0xfed18fff] has been reserved Jul 16 00:48:05.916517 kernel: system 00:05: [mem 0xfed19000-0xfed19fff] has been reserved Jul 16 00:48:05.916566 kernel: system 00:05: [mem 0xe0000000-0xefffffff] has been reserved Jul 16 00:48:05.916616 kernel: system 00:05: [mem 0xfed20000-0xfed3ffff] has been reserved Jul 16 00:48:05.916666 kernel: system 00:05: [mem 0xfed90000-0xfed93fff] could not be reserved Jul 16 00:48:05.916715 kernel: system 00:05: [mem 0xfed45000-0xfed8ffff] has been reserved Jul 16 00:48:05.916764 kernel: system 00:05: [mem 0xfee00000-0xfeefffff] could not be reserved Jul 16 00:48:05.916817 kernel: system 00:06: [io 0x1800-0x18fe] could not be reserved Jul 16 00:48:05.916870 kernel: system 00:06: [mem 0xfd000000-0xfd69ffff] has been reserved Jul 16 00:48:05.916919 kernel: system 00:06: [mem 0xfd6c0000-0xfd6cffff] has been reserved Jul 16 00:48:05.916969 kernel: system 00:06: [mem 0xfd6f0000-0xfdffffff] has been reserved Jul 16 00:48:05.917018 kernel: system 00:06: [mem 0xfe000000-0xfe01ffff] could not be reserved Jul 16 00:48:05.917067 kernel: system 00:06: [mem 0xfe200000-0xfe7fffff] has been reserved Jul 16 00:48:05.917116 kernel: system 00:06: [mem 0xff000000-0xffffffff] has been reserved Jul 16 00:48:05.917169 kernel: system 00:07: [io 0x2000-0x20fe] has been reserved Jul 16 00:48:05.917179 kernel: pnp: PnP ACPI: found 9 devices Jul 16 00:48:05.917185 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jul 16 00:48:05.917191 kernel: NET: Registered PF_INET protocol family 
Jul 16 00:48:05.917196 kernel: IP idents hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jul 16 00:48:05.917202 kernel: tcp_listen_portaddr_hash hash table entries: 16384 (order: 6, 262144 bytes, linear) Jul 16 00:48:05.917208 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jul 16 00:48:05.917214 kernel: TCP established hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jul 16 00:48:05.917219 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Jul 16 00:48:05.917226 kernel: TCP: Hash tables configured (established 262144 bind 65536) Jul 16 00:48:05.917232 kernel: UDP hash table entries: 16384 (order: 7, 524288 bytes, linear) Jul 16 00:48:05.917238 kernel: UDP-Lite hash table entries: 16384 (order: 7, 524288 bytes, linear) Jul 16 00:48:05.917243 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jul 16 00:48:05.917249 kernel: NET: Registered PF_XDP protocol family Jul 16 00:48:05.917304 kernel: pci 0000:00:15.0: BAR 0 [mem 0x7f800000-0x7f800fff 64bit]: assigned Jul 16 00:48:05.917413 kernel: pci 0000:00:15.1: BAR 0 [mem 0x7f801000-0x7f801fff 64bit]: assigned Jul 16 00:48:05.917482 kernel: pci 0000:00:1e.0: BAR 0 [mem 0x7f802000-0x7f802fff 64bit]: assigned Jul 16 00:48:05.917536 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Jul 16 00:48:05.917596 kernel: pci 0000:02:00.0: VF BAR 0 [mem size 0x00800000 64bit pref]: can't assign; no space Jul 16 00:48:05.917652 kernel: pci 0000:02:00.0: VF BAR 0 [mem size 0x00800000 64bit pref]: failed to assign Jul 16 00:48:05.917708 kernel: pci 0000:02:00.1: VF BAR 0 [mem size 0x00800000 64bit pref]: can't assign; no space Jul 16 00:48:05.917764 kernel: pci 0000:02:00.1: VF BAR 0 [mem size 0x00800000 64bit pref]: failed to assign Jul 16 00:48:05.917818 kernel: pci 0000:00:01.1: PCI bridge to [bus 02] Jul 16 00:48:05.917872 kernel: pci 0000:00:01.1: bridge window [mem 0x96100000-0x962fffff] Jul 16 00:48:05.917927 kernel: pci 0000:00:01.1: bridge 
window [mem 0x90000000-0x93ffffff 64bit pref] Jul 16 00:48:05.917982 kernel: pci 0000:00:1b.0: PCI bridge to [bus 03] Jul 16 00:48:05.918036 kernel: pci 0000:00:1b.4: PCI bridge to [bus 04] Jul 16 00:48:05.918092 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff] Jul 16 00:48:05.918147 kernel: pci 0000:00:1b.4: bridge window [mem 0x96400000-0x964fffff] Jul 16 00:48:05.918201 kernel: pci 0000:00:1b.5: PCI bridge to [bus 05] Jul 16 00:48:05.918255 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff] Jul 16 00:48:05.918310 kernel: pci 0000:00:1b.5: bridge window [mem 0x96300000-0x963fffff] Jul 16 00:48:05.918429 kernel: pci 0000:00:1c.0: PCI bridge to [bus 06] Jul 16 00:48:05.918500 kernel: pci 0000:07:00.0: PCI bridge to [bus 08] Jul 16 00:48:05.918556 kernel: pci 0000:07:00.0: bridge window [io 0x3000-0x3fff] Jul 16 00:48:05.918611 kernel: pci 0000:07:00.0: bridge window [mem 0x95000000-0x960fffff] Jul 16 00:48:05.918665 kernel: pci 0000:00:1c.1: PCI bridge to [bus 07-08] Jul 16 00:48:05.918738 kernel: pci 0000:00:1c.1: bridge window [io 0x3000-0x3fff] Jul 16 00:48:05.918806 kernel: pci 0000:00:1c.1: bridge window [mem 0x95000000-0x960fffff] Jul 16 00:48:05.918856 kernel: pci_bus 0000:00: Some PCI device resources are unassigned, try booting with pci=realloc Jul 16 00:48:05.918905 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jul 16 00:48:05.918953 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jul 16 00:48:05.919001 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jul 16 00:48:05.919048 kernel: pci_bus 0000:00: resource 7 [mem 0x7f800000-0xdfffffff window] Jul 16 00:48:05.919096 kernel: pci_bus 0000:00: resource 8 [mem 0xfc800000-0xfe7fffff window] Jul 16 00:48:05.919156 kernel: pci_bus 0000:02: resource 1 [mem 0x96100000-0x962fffff] Jul 16 00:48:05.919207 kernel: pci_bus 0000:02: resource 2 [mem 0x90000000-0x93ffffff 64bit pref] Jul 16 00:48:05.919261 kernel: pci_bus 0000:04: resource 0 [io 
0x5000-0x5fff] Jul 16 00:48:05.919311 kernel: pci_bus 0000:04: resource 1 [mem 0x96400000-0x964fffff] Jul 16 00:48:05.919368 kernel: pci_bus 0000:05: resource 0 [io 0x4000-0x4fff] Jul 16 00:48:05.919472 kernel: pci_bus 0000:05: resource 1 [mem 0x96300000-0x963fffff] Jul 16 00:48:05.919529 kernel: pci_bus 0000:07: resource 0 [io 0x3000-0x3fff] Jul 16 00:48:05.919579 kernel: pci_bus 0000:07: resource 1 [mem 0x95000000-0x960fffff] Jul 16 00:48:05.919631 kernel: pci_bus 0000:08: resource 0 [io 0x3000-0x3fff] Jul 16 00:48:05.919684 kernel: pci_bus 0000:08: resource 1 [mem 0x95000000-0x960fffff] Jul 16 00:48:05.919691 kernel: PCI: CLS 64 bytes, default 64 Jul 16 00:48:05.919697 kernel: DMAR: No ATSR found Jul 16 00:48:05.919703 kernel: DMAR: No SATC found Jul 16 00:48:05.919709 kernel: DMAR: IOMMU feature fl1gp_support inconsistent Jul 16 00:48:05.919716 kernel: DMAR: IOMMU feature pgsel_inv inconsistent Jul 16 00:48:05.919722 kernel: DMAR: IOMMU feature nwfs inconsistent Jul 16 00:48:05.919728 kernel: DMAR: IOMMU feature pasid inconsistent Jul 16 00:48:05.919734 kernel: DMAR: IOMMU feature eafs inconsistent Jul 16 00:48:05.919739 kernel: DMAR: IOMMU feature prs inconsistent Jul 16 00:48:05.919745 kernel: DMAR: IOMMU feature nest inconsistent Jul 16 00:48:05.919751 kernel: DMAR: IOMMU feature mts inconsistent Jul 16 00:48:05.919756 kernel: DMAR: IOMMU feature sc_support inconsistent Jul 16 00:48:05.919762 kernel: DMAR: IOMMU feature dev_iotlb_support inconsistent Jul 16 00:48:05.919769 kernel: DMAR: dmar0: Using Queued invalidation Jul 16 00:48:05.919775 kernel: DMAR: dmar1: Using Queued invalidation Jul 16 00:48:05.919829 kernel: pci 0000:00:02.0: Adding to iommu group 0 Jul 16 00:48:05.919885 kernel: pci 0000:00:00.0: Adding to iommu group 1 Jul 16 00:48:05.919942 kernel: pci 0000:00:01.0: Adding to iommu group 2 Jul 16 00:48:05.919998 kernel: pci 0000:00:01.1: Adding to iommu group 2 Jul 16 00:48:05.920052 kernel: pci 0000:00:08.0: Adding to iommu group 3 Jul 16 
00:48:05.920144 kernel: pci 0000:00:12.0: Adding to iommu group 4 Jul 16 00:48:05.920200 kernel: pci 0000:00:14.0: Adding to iommu group 5 Jul 16 00:48:05.920255 kernel: pci 0000:00:14.2: Adding to iommu group 5 Jul 16 00:48:05.920309 kernel: pci 0000:00:15.0: Adding to iommu group 6 Jul 16 00:48:05.920365 kernel: pci 0000:00:15.1: Adding to iommu group 6 Jul 16 00:48:05.920470 kernel: pci 0000:00:16.0: Adding to iommu group 7 Jul 16 00:48:05.920525 kernel: pci 0000:00:16.1: Adding to iommu group 7 Jul 16 00:48:05.920580 kernel: pci 0000:00:16.4: Adding to iommu group 7 Jul 16 00:48:05.920633 kernel: pci 0000:00:17.0: Adding to iommu group 8 Jul 16 00:48:05.920690 kernel: pci 0000:00:1b.0: Adding to iommu group 9 Jul 16 00:48:05.920745 kernel: pci 0000:00:1b.4: Adding to iommu group 10 Jul 16 00:48:05.920800 kernel: pci 0000:00:1b.5: Adding to iommu group 11 Jul 16 00:48:05.920854 kernel: pci 0000:00:1c.0: Adding to iommu group 12 Jul 16 00:48:05.920909 kernel: pci 0000:00:1c.1: Adding to iommu group 13 Jul 16 00:48:05.920963 kernel: pci 0000:00:1e.0: Adding to iommu group 14 Jul 16 00:48:05.921018 kernel: pci 0000:00:1f.0: Adding to iommu group 15 Jul 16 00:48:05.921072 kernel: pci 0000:00:1f.4: Adding to iommu group 15 Jul 16 00:48:05.921183 kernel: pci 0000:00:1f.5: Adding to iommu group 15 Jul 16 00:48:05.921288 kernel: pci 0000:02:00.0: Adding to iommu group 2 Jul 16 00:48:05.921381 kernel: pci 0000:02:00.1: Adding to iommu group 2 Jul 16 00:48:05.921486 kernel: pci 0000:04:00.0: Adding to iommu group 16 Jul 16 00:48:05.921551 kernel: pci 0000:05:00.0: Adding to iommu group 17 Jul 16 00:48:05.921616 kernel: pci 0000:07:00.0: Adding to iommu group 18 Jul 16 00:48:05.921693 kernel: pci 0000:08:00.0: Adding to iommu group 18 Jul 16 00:48:05.921702 kernel: DMAR: Intel(R) Virtualization Technology for Directed I/O Jul 16 00:48:05.921710 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Jul 16 00:48:05.921717 kernel: software IO TLB: mapped [mem 
0x0000000073fc5000-0x0000000077fc5000] (64MB) Jul 16 00:48:05.921723 kernel: RAPL PMU: API unit is 2^-32 Joules, 4 fixed counters, 655360 ms ovfl timer Jul 16 00:48:05.921729 kernel: RAPL PMU: hw unit of domain pp0-core 2^-14 Joules Jul 16 00:48:05.921735 kernel: RAPL PMU: hw unit of domain package 2^-14 Joules Jul 16 00:48:05.921741 kernel: RAPL PMU: hw unit of domain dram 2^-14 Joules Jul 16 00:48:05.921747 kernel: RAPL PMU: hw unit of domain pp1-gpu 2^-14 Joules Jul 16 00:48:05.921806 kernel: platform rtc_cmos: registered platform RTC device (no PNP device found) Jul 16 00:48:05.921817 kernel: Initialise system trusted keyrings Jul 16 00:48:05.921823 kernel: workingset: timestamp_bits=39 max_order=23 bucket_order=0 Jul 16 00:48:05.921829 kernel: Key type asymmetric registered Jul 16 00:48:05.921835 kernel: Asymmetric key parser 'x509' registered Jul 16 00:48:05.921841 kernel: tsc: Refined TSC clocksource calibration: 3407.951 MHz Jul 16 00:48:05.921848 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fa5d91b7, max_idle_ns: 440795370708 ns Jul 16 00:48:05.921854 kernel: clocksource: Switched to clocksource tsc Jul 16 00:48:05.921860 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Jul 16 00:48:05.921866 kernel: io scheduler mq-deadline registered Jul 16 00:48:05.921873 kernel: io scheduler kyber registered Jul 16 00:48:05.921879 kernel: io scheduler bfq registered Jul 16 00:48:05.921935 kernel: pcieport 0000:00:01.0: PME: Signaling with IRQ 122 Jul 16 00:48:05.921993 kernel: pcieport 0000:00:01.1: PME: Signaling with IRQ 123 Jul 16 00:48:05.922091 kernel: pcieport 0000:00:1b.0: PME: Signaling with IRQ 124 Jul 16 00:48:05.922148 kernel: pcieport 0000:00:1b.4: PME: Signaling with IRQ 125 Jul 16 00:48:05.922206 kernel: pcieport 0000:00:1b.5: PME: Signaling with IRQ 126 Jul 16 00:48:05.922262 kernel: pcieport 0000:00:1c.0: PME: Signaling with IRQ 127 Jul 16 00:48:05.922319 kernel: pcieport 0000:00:1c.1: PME: Signaling 
with IRQ 128 Jul 16 00:48:05.922411 kernel: thermal LNXTHERM:00: registered as thermal_zone0 Jul 16 00:48:05.922421 kernel: ACPI: thermal: Thermal Zone [TZ00] (28 C) Jul 16 00:48:05.922427 kernel: ERST: Error Record Serialization Table (ERST) support is initialized. Jul 16 00:48:05.922433 kernel: pstore: Using crash dump compression: deflate Jul 16 00:48:05.922439 kernel: pstore: Registered erst as persistent store backend Jul 16 00:48:05.922446 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jul 16 00:48:05.922452 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jul 16 00:48:05.922458 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jul 16 00:48:05.922466 kernel: 00:03: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Jul 16 00:48:05.922525 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x1B, rev-id 16) Jul 16 00:48:05.922534 kernel: i8042: PNP: No PS/2 controller found. Jul 16 00:48:05.922587 kernel: rtc_cmos rtc_cmos: RTC can wake from S4 Jul 16 00:48:05.922641 kernel: rtc_cmos rtc_cmos: registered as rtc0 Jul 16 00:48:05.922695 kernel: rtc_cmos rtc_cmos: setting system clock to 2025-07-16T00:48:04 UTC (1752626884) Jul 16 00:48:05.922748 kernel: rtc_cmos rtc_cmos: alarms up to one month, y3k, 114 bytes nvram Jul 16 00:48:05.922756 kernel: intel_pstate: Intel P-state driver initializing Jul 16 00:48:05.922764 kernel: intel_pstate: Disabling energy efficiency optimization Jul 16 00:48:05.922770 kernel: intel_pstate: HWP enabled Jul 16 00:48:05.922776 kernel: NET: Registered PF_INET6 protocol family Jul 16 00:48:05.922782 kernel: Segment Routing with IPv6 Jul 16 00:48:05.922789 kernel: In-situ OAM (IOAM) with IPv6 Jul 16 00:48:05.922795 kernel: NET: Registered PF_PACKET protocol family Jul 16 00:48:05.922801 kernel: Key type dns_resolver registered Jul 16 00:48:05.922807 kernel: ENERGY_PERF_BIAS: Set to 'normal', was 'performance' Jul 16 00:48:05.922813 kernel: microcode: Current revision: 0x000000de 
Jul 16 00:48:05.922820 kernel: IPI shorthand broadcast: enabled Jul 16 00:48:05.922826 kernel: sched_clock: Marking stable (3872000639, 1503995823)->(6946018302, -1570021840) Jul 16 00:48:05.922832 kernel: registered taskstats version 1 Jul 16 00:48:05.922839 kernel: Loading compiled-in X.509 certificates Jul 16 00:48:05.922845 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.36-flatcar: cfc533be64675f3c66ee10d42aa8c5ce2115881d' Jul 16 00:48:05.922851 kernel: Demotion targets for Node 0: null Jul 16 00:48:05.922857 kernel: Key type .fscrypt registered Jul 16 00:48:05.922863 kernel: Key type fscrypt-provisioning registered Jul 16 00:48:05.922870 kernel: ima: Allocated hash algorithm: sha1 Jul 16 00:48:05.922876 kernel: ima: No architecture policies found Jul 16 00:48:05.922882 kernel: clk: Disabling unused clocks Jul 16 00:48:05.922888 kernel: Warning: unable to open an initial console. Jul 16 00:48:05.922894 kernel: Freeing unused kernel image (initmem) memory: 54424K Jul 16 00:48:05.922901 kernel: Write protecting the kernel read-only data: 24576k Jul 16 00:48:05.922907 kernel: Freeing unused kernel image (rodata/data gap) memory: 284K Jul 16 00:48:05.922913 kernel: Run /init as init process Jul 16 00:48:05.922919 kernel: with arguments: Jul 16 00:48:05.922926 kernel: /init Jul 16 00:48:05.922932 kernel: with environment: Jul 16 00:48:05.922938 kernel: HOME=/ Jul 16 00:48:05.922944 kernel: TERM=linux Jul 16 00:48:05.922950 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jul 16 00:48:05.922957 systemd[1]: Successfully made /usr/ read-only. 
Jul 16 00:48:05.922965 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jul 16 00:48:05.922971 systemd[1]: Detected architecture x86-64. Jul 16 00:48:05.922979 systemd[1]: Running in initrd. Jul 16 00:48:05.922985 systemd[1]: No hostname configured, using default hostname. Jul 16 00:48:05.922991 systemd[1]: Hostname set to . Jul 16 00:48:05.922998 systemd[1]: Initializing machine ID from random generator. Jul 16 00:48:05.923004 systemd[1]: Queued start job for default target initrd.target. Jul 16 00:48:05.923011 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 16 00:48:05.923017 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 16 00:48:05.923024 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jul 16 00:48:05.923032 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 16 00:48:05.923038 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jul 16 00:48:05.923045 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jul 16 00:48:05.923052 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jul 16 00:48:05.923058 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jul 16 00:48:05.923065 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). 
Jul 16 00:48:05.923072 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 16 00:48:05.923079 systemd[1]: Reached target paths.target - Path Units. Jul 16 00:48:05.923085 systemd[1]: Reached target slices.target - Slice Units. Jul 16 00:48:05.923091 systemd[1]: Reached target swap.target - Swaps. Jul 16 00:48:05.923098 systemd[1]: Reached target timers.target - Timer Units. Jul 16 00:48:05.923104 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jul 16 00:48:05.923110 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 16 00:48:05.923117 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jul 16 00:48:05.923123 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Jul 16 00:48:05.923131 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 16 00:48:05.923137 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 16 00:48:05.923143 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 16 00:48:05.923150 systemd[1]: Reached target sockets.target - Socket Units. Jul 16 00:48:05.923156 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jul 16 00:48:05.923163 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 16 00:48:05.923169 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jul 16 00:48:05.923176 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Jul 16 00:48:05.923183 systemd[1]: Starting systemd-fsck-usr.service... Jul 16 00:48:05.923190 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 16 00:48:05.923208 systemd-journald[299]: Collecting audit messages is disabled. 
Jul 16 00:48:05.923223 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 16 00:48:05.923231 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 16 00:48:05.923238 systemd-journald[299]: Journal started Jul 16 00:48:05.923253 systemd-journald[299]: Runtime Journal (/run/log/journal/942f895e445b47c0883d3258f7bbfa03) is 8M, max 639.3M, 631.3M free. Jul 16 00:48:05.931665 systemd-modules-load[301]: Inserted module 'overlay' Jul 16 00:48:05.948360 systemd[1]: Started systemd-journald.service - Journal Service. Jul 16 00:48:05.948754 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jul 16 00:48:05.986187 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jul 16 00:48:05.986202 kernel: Bridge firewalling registered Jul 16 00:48:05.954458 systemd-modules-load[301]: Inserted module 'br_netfilter' Jul 16 00:48:05.986246 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 16 00:48:06.010958 systemd[1]: Finished systemd-fsck-usr.service. Jul 16 00:48:06.039709 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 16 00:48:06.055777 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 16 00:48:06.069413 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 16 00:48:06.106130 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 16 00:48:06.116017 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jul 16 00:48:06.125763 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 16 00:48:06.133133 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. 
Jul 16 00:48:06.133496 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 16 00:48:06.134276 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 16 00:48:06.134831 systemd-tmpfiles[318]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Jul 16 00:48:06.136886 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 16 00:48:06.137833 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 16 00:48:06.139844 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 16 00:48:06.151666 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 16 00:48:06.158812 systemd-resolved[338]: Positive Trust Anchors: Jul 16 00:48:06.158817 systemd-resolved[338]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 16 00:48:06.158842 systemd-resolved[338]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 16 00:48:06.160594 systemd-resolved[338]: Defaulting to hostname 'linux'. Jul 16 00:48:06.181681 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 16 00:48:06.196786 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 16 00:48:06.273214 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... 
Jul 16 00:48:06.351796 dracut-cmdline[343]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=e99cfd77676fb46bb6e7e7d8fcebb095dd84f43a354bdf152777c6b07182cd66
Jul 16 00:48:06.551391 kernel: SCSI subsystem initialized
Jul 16 00:48:06.564385 kernel: Loading iSCSI transport class v2.0-870.
Jul 16 00:48:06.577385 kernel: iscsi: registered transport (tcp)
Jul 16 00:48:06.601070 kernel: iscsi: registered transport (qla4xxx)
Jul 16 00:48:06.601088 kernel: QLogic iSCSI HBA Driver
Jul 16 00:48:06.611904 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jul 16 00:48:06.640748 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 16 00:48:06.651684 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 16 00:48:06.692508 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jul 16 00:48:06.703025 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jul 16 00:48:06.801414 kernel: raid6: avx2x4 gen() 48522 MB/s
Jul 16 00:48:06.822387 kernel: raid6: avx2x2 gen() 54278 MB/s
Jul 16 00:48:06.848480 kernel: raid6: avx2x1 gen() 45500 MB/s
Jul 16 00:48:06.848498 kernel: raid6: using algorithm avx2x2 gen() 54278 MB/s
Jul 16 00:48:06.875567 kernel: raid6: .... xor() 32440 MB/s, rmw enabled
Jul 16 00:48:06.875583 kernel: raid6: using avx2x2 recovery algorithm
Jul 16 00:48:06.898384 kernel: xor: automatically using best checksumming function avx
Jul 16 00:48:07.002400 kernel: Btrfs loaded, zoned=no, fsverity=no
Jul 16 00:48:07.005876 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jul 16 00:48:07.015545 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 16 00:48:07.066351 systemd-udevd[554]: Using default interface naming scheme 'v255'.
Jul 16 00:48:07.071892 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 16 00:48:07.089057 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jul 16 00:48:07.138168 dracut-pre-trigger[565]: rd.md=0: removing MD RAID activation
Jul 16 00:48:07.191716 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 16 00:48:07.206049 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 16 00:48:07.356125 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 16 00:48:07.378459 kernel: cryptd: max_cpu_qlen set to 1000
Jul 16 00:48:07.356798 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jul 16 00:48:07.421350 kernel: pps_core: LinuxPPS API ver. 1 registered
Jul 16 00:48:07.421372 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Jul 16 00:48:07.421381 kernel: ACPI: bus type USB registered
Jul 16 00:48:07.421394 kernel: usbcore: registered new interface driver usbfs
Jul 16 00:48:07.421402 kernel: usbcore: registered new interface driver hub
Jul 16 00:48:07.421410 kernel: usbcore: registered new device driver usb
Jul 16 00:48:07.387966 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 16 00:48:07.624252 kernel: AES CTR mode by8 optimization enabled
Jul 16 00:48:07.624268 kernel: libata version 3.00 loaded.
Jul 16 00:48:07.624277 kernel: PTP clock support registered
Jul 16 00:48:07.624288 kernel: ahci 0000:00:17.0: version 3.0
Jul 16 00:48:07.624411 kernel: ahci 0000:00:17.0: AHCI vers 0001.0301, 32 command slots, 6 Gbps, SATA mode
Jul 16 00:48:07.624513 kernel: ahci 0000:00:17.0: 8/8 ports implemented (port mask 0xff)
Jul 16 00:48:07.624605 kernel: ahci 0000:00:17.0: flags: 64bit ncq sntf clo only pio slum part ems deso sadm sds apst
Jul 16 00:48:07.624699 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller
Jul 16 00:48:07.624796 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 1
Jul 16 00:48:07.624885 kernel: scsi host0: ahci
Jul 16 00:48:07.624973 kernel: xhci_hcd 0000:00:14.0: hcc params 0x200077c1 hci version 0x110 quirks 0x0000000000009810
Jul 16 00:48:07.625060 kernel: scsi host1: ahci
Jul 16 00:48:07.625145 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller
Jul 16 00:48:07.625238 kernel: scsi host2: ahci
Jul 16 00:48:07.625312 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 2
Jul 16 00:48:07.625401 kernel: xhci_hcd 0000:00:14.0: Host supports USB 3.1 Enhanced SuperSpeed
Jul 16 00:48:07.625495 kernel: scsi host3: ahci
Jul 16 00:48:07.625567 kernel: hub 1-0:1.0: USB hub found
Jul 16 00:48:07.625661 kernel: scsi host4: ahci
Jul 16 00:48:07.625735 kernel: hub 1-0:1.0: 16 ports detected
Jul 16 00:48:07.625816 kernel: scsi host5: ahci
Jul 16 00:48:07.625905 kernel: hub 2-0:1.0: USB hub found
Jul 16 00:48:07.626013 kernel: scsi host6: ahci
Jul 16 00:48:07.626110 kernel: hub 2-0:1.0: 10 ports detected
Jul 16 00:48:07.626212 kernel: scsi host7: ahci
Jul 16 00:48:07.626313 kernel: ata1: SATA max UDMA/133 abar m2048@0x96516000 port 0x96516100 irq 129 lpm-pol 0
Jul 16 00:48:07.626327 kernel: ata2: SATA max UDMA/133 abar m2048@0x96516000 port 0x96516180 irq 129 lpm-pol 0
Jul 16 00:48:07.626338 kernel: ata3: SATA max UDMA/133 abar m2048@0x96516000 port 0x96516200 irq 129 lpm-pol 0
Jul 16 00:48:07.626350 kernel: ata4: SATA max UDMA/133 abar m2048@0x96516000 port 0x96516280 irq 129 lpm-pol 0
Jul 16 00:48:07.626366 kernel: ata5: SATA max UDMA/133 abar m2048@0x96516000 port 0x96516300 irq 129 lpm-pol 0
Jul 16 00:48:07.626379 kernel: ata6: SATA max UDMA/133 abar m2048@0x96516000 port 0x96516380 irq 129 lpm-pol 0
Jul 16 00:48:07.626391 kernel: ata7: SATA max UDMA/133 abar m2048@0x96516000 port 0x96516400 irq 129 lpm-pol 0
Jul 16 00:48:07.626405 kernel: ata8: SATA max UDMA/133 abar m2048@0x96516000 port 0x96516480 irq 129 lpm-pol 0
Jul 16 00:48:07.388051 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 16 00:48:07.662597 kernel: igb: Intel(R) Gigabit Ethernet Network Driver
Jul 16 00:48:07.662610 kernel: igb: Copyright (c) 2007-2014 Intel Corporation.
Jul 16 00:48:07.554882 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jul 16 00:48:07.653919 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 16 00:48:07.671769 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jul 16 00:48:07.733558 kernel: igb 0000:04:00.0: added PHC on eth0
Jul 16 00:48:07.733662 kernel: igb 0000:04:00.0: Intel(R) Gigabit Ethernet Network Connection
Jul 16 00:48:07.733743 kernel: igb 0000:04:00.0: eth0: (PCIe:2.5Gb/s:Width x1) 3c:ec:ef:71:57:84
Jul 16 00:48:07.733818 kernel: igb 0000:04:00.0: eth0: PBA No: 010000-000
Jul 16 00:48:07.733890 kernel: igb 0000:04:00.0: Using MSI-X interrupts. 4 rx queue(s), 4 tx queue(s)
Jul 16 00:48:07.741358 kernel: igb 0000:05:00.0: added PHC on eth1
Jul 16 00:48:07.741458 kernel: igb 0000:05:00.0: Intel(R) Gigabit Ethernet Network Connection
Jul 16 00:48:07.741536 kernel: igb 0000:05:00.0: eth1: (PCIe:2.5Gb/s:Width x1) 3c:ec:ef:71:57:85
Jul 16 00:48:07.741610 kernel: igb 0000:05:00.0: eth1: PBA No: 010000-000
Jul 16 00:48:07.741682 kernel: igb 0000:05:00.0: Using MSI-X interrupts. 4 rx queue(s), 4 tx queue(s)
Jul 16 00:48:07.797356 kernel: usb 1-14: new high-speed USB device number 2 using xhci_hcd
Jul 16 00:48:07.797440 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 16 00:48:07.878401 kernel: ata3: SATA link down (SStatus 0 SControl 300)
Jul 16 00:48:07.878463 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Jul 16 00:48:07.883365 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Jul 16 00:48:07.889389 kernel: ata1: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
Jul 16 00:48:07.896358 kernel: ata2: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
Jul 16 00:48:07.902357 kernel: ata8: SATA link down (SStatus 0 SControl 300)
Jul 16 00:48:07.908423 kernel: ata7: SATA link down (SStatus 0 SControl 300)
Jul 16 00:48:07.914389 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Jul 16 00:48:07.919356 kernel: ata2.00: Model 'Micron_5300_MTFDDAK480TDT', rev ' D3MU001', applying quirks: zeroaftertrim
Jul 16 00:48:07.932230 kernel: hub 1-14:1.0: USB hub found
Jul 16 00:48:07.932381 kernel: ata2.00: ATA-11: Micron_5300_MTFDDAK480TDT, D3MU001, max UDMA/133
Jul 16 00:48:07.932392 kernel: hub 1-14:1.0: 4 ports detected
Jul 16 00:48:07.940399 kernel: ata1.00: Model 'Micron_5300_MTFDDAK480TDT', rev ' D3MU001', applying quirks: zeroaftertrim
Jul 16 00:48:07.961704 kernel: ata1.00: ATA-11: Micron_5300_MTFDDAK480TDT, D3MU001, max UDMA/133
Jul 16 00:48:07.973423 kernel: ata2.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA
Jul 16 00:48:07.973439 kernel: ata1.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA
Jul 16 00:48:07.992424 kernel: ata2.00: Features: NCQ-prio
Jul 16 00:48:07.992469 kernel: ata1.00: Features: NCQ-prio
Jul 16 00:48:08.013396 kernel: ata2.00: configured for UDMA/133
Jul 16 00:48:08.013441 kernel: ata1.00: configured for UDMA/133
Jul 16 00:48:08.018398 kernel: scsi 0:0:0:0: Direct-Access ATA Micron_5300_MTFD U001 PQ: 0 ANSI: 5
Jul 16 00:48:08.027437 kernel: scsi 1:0:0:0: Direct-Access ATA Micron_5300_MTFD U001 PQ: 0 ANSI: 5
Jul 16 00:48:08.042361 kernel: igb 0000:05:00.0 eno2: renamed from eth1
Jul 16 00:48:08.042485 kernel: igb 0000:04:00.0 eno1: renamed from eth0
Jul 16 00:48:08.042566 kernel: ata1.00: Enabling discard_zeroes_data
Jul 16 00:48:08.052657 kernel: ata2.00: Enabling discard_zeroes_data
Jul 16 00:48:08.052676 kernel: sd 0:0:0:0: [sda] 937703088 512-byte logical blocks: (480 GB/447 GiB)
Jul 16 00:48:08.057371 kernel: sd 1:0:0:0: [sdb] 937703088 512-byte logical blocks: (480 GB/447 GiB)
Jul 16 00:48:08.072344 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Jul 16 00:48:08.072457 kernel: sd 1:0:0:0: [sdb] 4096-byte physical blocks
Jul 16 00:48:08.072536 kernel: sd 1:0:0:0: [sdb] Write Protect is off
Jul 16 00:48:08.077576 kernel: sd 0:0:0:0: [sda] Write Protect is off
Jul 16 00:48:08.082808 kernel: sd 1:0:0:0: [sdb] Mode Sense: 00 3a 00 00
Jul 16 00:48:08.082902 kernel: sd 1:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Jul 16 00:48:08.087611 kernel: sd 0:0:0:0: [sda] Mode Sense: 00 3a 00 00
Jul 16 00:48:08.092413 kernel: sd 1:0:0:0: [sdb] Preferred minimum I/O size 4096 bytes
Jul 16 00:48:08.101738 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Jul 16 00:48:08.124621 kernel: ata2.00: Enabling discard_zeroes_data
Jul 16 00:48:08.124648 kernel: sd 0:0:0:0: [sda] Preferred minimum I/O size 4096 bytes
Jul 16 00:48:08.136413 kernel: ata1.00: Enabling discard_zeroes_data
Jul 16 00:48:08.147390 kernel: sd 1:0:0:0: [sdb] Attached SCSI disk
Jul 16 00:48:08.158939 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jul 16 00:48:08.158956 kernel: GPT:9289727 != 937703087
Jul 16 00:48:08.165217 kernel: GPT:Alternate GPT header not at the end of the disk.
Jul 16 00:48:08.169072 kernel: GPT:9289727 != 937703087
Jul 16 00:48:08.174480 kernel: GPT: Use GNU Parted to correct GPT errors.
Jul 16 00:48:08.179732 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jul 16 00:48:08.184852 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Jul 16 00:48:08.279092 kernel: mlx5_core 0000:02:00.0: PTM is not supported by PCIe
Jul 16 00:48:08.279211 kernel: mlx5_core 0000:02:00.0: firmware version: 14.28.2006
Jul 16 00:48:08.286113 kernel: usb 1-14.1: new low-speed USB device number 3 using xhci_hcd
Jul 16 00:48:08.286141 kernel: mlx5_core 0000:02:00.0: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link)
Jul 16 00:48:08.305659 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Micron_5300_MTFDDAK480TDT ROOT.
Jul 16 00:48:08.322161 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Micron_5300_MTFDDAK480TDT EFI-SYSTEM.
Jul 16 00:48:08.331042 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Micron_5300_MTFDDAK480TDT USR-A.
Jul 16 00:48:08.367421 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Micron_5300_MTFDDAK480TDT USR-A.
Jul 16 00:48:08.384052 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Micron_5300_MTFDDAK480TDT OEM.
Jul 16 00:48:08.405420 kernel: hid: raw HID events driver (C) Jiri Kosina
Jul 16 00:48:08.420264 kernel: usbcore: registered new interface driver usbhid
Jul 16 00:48:08.420280 kernel: usbhid: USB HID core driver
Jul 16 00:48:08.435429 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.0/0003:0557:2419.0001/input/input0
Jul 16 00:48:08.437928 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jul 16 00:48:08.470280 disk-uuid[762]: Primary Header is updated.
Jul 16 00:48:08.470280 disk-uuid[762]: Secondary Entries is updated.
Jul 16 00:48:08.470280 disk-uuid[762]: Secondary Header is updated.
Jul 16 00:48:08.537484 kernel: ata1.00: Enabling discard_zeroes_data
Jul 16 00:48:08.537514 kernel: hid-generic 0003:0557:2419.0001: input,hidraw0: USB HID v1.00 Keyboard [HID 0557:2419] on usb-0000:00:14.0-14.1/input0
Jul 16 00:48:08.537663 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jul 16 00:48:08.537672 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.1/0003:0557:2419.0002/input/input1
Jul 16 00:48:08.537680 kernel: ata1.00: Enabling discard_zeroes_data
Jul 16 00:48:08.537687 kernel: hid-generic 0003:0557:2419.0002: input,hidraw1: USB HID v1.00 Mouse [HID 0557:2419] on usb-0000:00:14.0-14.1/input1
Jul 16 00:48:08.544400 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jul 16 00:48:08.559361 kernel: mlx5_core 0000:02:00.0: E-Switch: Total vports 10, per vport: max uc(1024) max mc(16384)
Jul 16 00:48:08.569889 kernel: mlx5_core 0000:02:00.0: Port module event: module 0, Cable plugged
Jul 16 00:48:08.789456 kernel: mlx5_core 0000:02:00.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic)
Jul 16 00:48:08.806866 kernel: mlx5_core 0000:02:00.1: PTM is not supported by PCIe
Jul 16 00:48:08.807400 kernel: mlx5_core 0000:02:00.1: firmware version: 14.28.2006
Jul 16 00:48:08.807809 kernel: mlx5_core 0000:02:00.1: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link)
Jul 16 00:48:09.108383 kernel: mlx5_core 0000:02:00.1: E-Switch: Total vports 10, per vport: max uc(1024) max mc(16384)
Jul 16 00:48:09.120843 kernel: mlx5_core 0000:02:00.1: Port module event: module 1, Cable plugged
Jul 16 00:48:09.379446 kernel: mlx5_core 0000:02:00.1: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic)
Jul 16 00:48:09.391405 kernel: mlx5_core 0000:02:00.0 enp2s0f0np0: renamed from eth0
Jul 16 00:48:09.391515 kernel: mlx5_core 0000:02:00.1 enp2s0f1np1: renamed from eth1
Jul 16 00:48:09.404842 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jul 16 00:48:09.414913 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 16 00:48:09.434623 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 16 00:48:09.453581 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 16 00:48:09.472898 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jul 16 00:48:09.510140 kernel: ata1.00: Enabling discard_zeroes_data
Jul 16 00:48:09.523742 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jul 16 00:48:09.545405 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jul 16 00:48:09.545420 disk-uuid[763]: The operation has completed successfully.
Jul 16 00:48:09.564393 systemd[1]: disk-uuid.service: Deactivated successfully.
Jul 16 00:48:09.564446 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jul 16 00:48:09.606912 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jul 16 00:48:09.644650 sh[815]: Success
Jul 16 00:48:09.679415 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jul 16 00:48:09.679435 kernel: device-mapper: uevent: version 1.0.3
Jul 16 00:48:09.693237 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Jul 16 00:48:09.706358 kernel: device-mapper: verity: sha256 using shash "sha256-avx2"
Jul 16 00:48:09.748579 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jul 16 00:48:09.758676 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jul 16 00:48:09.800878 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jul 16 00:48:09.863499 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay'
Jul 16 00:48:09.863513 kernel: BTRFS: device fsid 5e84ae48-fef7-4576-99b7-f45b3ea9aa4e devid 1 transid 37 /dev/mapper/usr (254:0) scanned by mount (827)
Jul 16 00:48:09.863521 kernel: BTRFS info (device dm-0): first mount of filesystem 5e84ae48-fef7-4576-99b7-f45b3ea9aa4e
Jul 16 00:48:09.863529 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jul 16 00:48:09.863536 kernel: BTRFS info (device dm-0): using free-space-tree
Jul 16 00:48:09.871644 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jul 16 00:48:09.878746 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Jul 16 00:48:09.903643 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jul 16 00:48:09.904269 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jul 16 00:48:09.927188 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jul 16 00:48:09.978143 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (850)
Jul 16 00:48:09.978162 kernel: BTRFS info (device sda6): first mount of filesystem 00a9d8f6-6c10-4cef-8e74-b38121477a0b
Jul 16 00:48:09.986248 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jul 16 00:48:09.992176 kernel: BTRFS info (device sda6): using free-space-tree
Jul 16 00:48:10.007361 kernel: BTRFS info (device sda6): last unmount of filesystem 00a9d8f6-6c10-4cef-8e74-b38121477a0b
Jul 16 00:48:10.008367 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jul 16 00:48:10.008952 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jul 16 00:48:10.040748 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 16 00:48:10.060618 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 16 00:48:10.106656 systemd-networkd[997]: lo: Link UP
Jul 16 00:48:10.106659 systemd-networkd[997]: lo: Gained carrier
Jul 16 00:48:10.109505 systemd-networkd[997]: Enumeration completed
Jul 16 00:48:10.109581 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 16 00:48:10.110045 systemd-networkd[997]: eno1: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 16 00:48:10.127490 systemd[1]: Reached target network.target - Network.
Jul 16 00:48:10.138098 systemd-networkd[997]: eno2: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 16 00:48:10.163702 ignition[920]: Ignition 2.21.0
Jul 16 00:48:10.165289 systemd-networkd[997]: enp2s0f0np0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 16 00:48:10.163707 ignition[920]: Stage: fetch-offline
Jul 16 00:48:10.166525 unknown[920]: fetched base config from "system"
Jul 16 00:48:10.163727 ignition[920]: no configs at "/usr/lib/ignition/base.d"
Jul 16 00:48:10.166528 unknown[920]: fetched user config from "system"
Jul 16 00:48:10.163733 ignition[920]: no config dir at "/usr/lib/ignition/base.platform.d/packet"
Jul 16 00:48:10.167560 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 16 00:48:10.163788 ignition[920]: parsed url from cmdline: ""
Jul 16 00:48:10.188445 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jul 16 00:48:10.163790 ignition[920]: no config URL provided
Jul 16 00:48:10.190853 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jul 16 00:48:10.163793 ignition[920]: reading system config file "/usr/lib/ignition/user.ign"
Jul 16 00:48:10.163819 ignition[920]: parsing config with SHA512: 438a7aca38c58e63c6cd8088dff56444f3a29315de8536e5d47dfbd43dc5d7784992d71a137a52c8940827aad22e008e7f54788f1313f09ff3475766a1746e9a
Jul 16 00:48:10.166714 ignition[920]: fetch-offline: fetch-offline passed
Jul 16 00:48:10.166717 ignition[920]: POST message to Packet Timeline
Jul 16 00:48:10.166720 ignition[920]: POST Status error: resource requires networking
Jul 16 00:48:10.166754 ignition[920]: Ignition finished successfully
Jul 16 00:48:10.274205 ignition[1014]: Ignition 2.21.0
Jul 16 00:48:10.274225 ignition[1014]: Stage: kargs
Jul 16 00:48:10.346591 kernel: mlx5_core 0000:02:00.0 enp2s0f0np0: Link up
Jul 16 00:48:10.274657 ignition[1014]: no configs at "/usr/lib/ignition/base.d"
Jul 16 00:48:10.346666 systemd-networkd[997]: enp2s0f1np1: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 16 00:48:10.274687 ignition[1014]: no config dir at "/usr/lib/ignition/base.platform.d/packet"
Jul 16 00:48:10.279997 ignition[1014]: kargs: kargs passed
Jul 16 00:48:10.280019 ignition[1014]: POST message to Packet Timeline
Jul 16 00:48:10.280083 ignition[1014]: GET https://metadata.packet.net/metadata: attempt #1
Jul 16 00:48:10.281822 ignition[1014]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:36198->[::1]:53: read: connection refused
Jul 16 00:48:10.481983 ignition[1014]: GET https://metadata.packet.net/metadata: attempt #2
Jul 16 00:48:10.482558 ignition[1014]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:48513->[::1]:53: read: connection refused
Jul 16 00:48:10.532397 kernel: mlx5_core 0000:02:00.1 enp2s0f1np1: Link up
Jul 16 00:48:10.533889 systemd-networkd[997]: eno1: Link UP
Jul 16 00:48:10.534095 systemd-networkd[997]: eno2: Link UP
Jul 16 00:48:10.534276 systemd-networkd[997]: enp2s0f0np0: Link UP
Jul 16 00:48:10.534498 systemd-networkd[997]: enp2s0f0np0: Gained carrier
Jul 16 00:48:10.547890 systemd-networkd[997]: enp2s0f1np1: Link UP
Jul 16 00:48:10.549318 systemd-networkd[997]: enp2s0f1np1: Gained carrier
Jul 16 00:48:10.590593 systemd-networkd[997]: enp2s0f0np0: DHCPv4 address 147.75.90.137/31, gateway 147.75.90.136 acquired from 145.40.83.140
Jul 16 00:48:10.883112 ignition[1014]: GET https://metadata.packet.net/metadata: attempt #3
Jul 16 00:48:10.884266 ignition[1014]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:41620->[::1]:53: read: connection refused
Jul 16 00:48:11.536900 systemd-networkd[997]: enp2s0f0np0: Gained IPv6LL
Jul 16 00:48:11.664915 systemd-networkd[997]: enp2s0f1np1: Gained IPv6LL
Jul 16 00:48:11.684657 ignition[1014]: GET https://metadata.packet.net/metadata: attempt #4
Jul 16 00:48:11.685944 ignition[1014]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:58793->[::1]:53: read: connection refused
Jul 16 00:48:13.287669 ignition[1014]: GET https://metadata.packet.net/metadata: attempt #5
Jul 16 00:48:13.288891 ignition[1014]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:53612->[::1]:53: read: connection refused
Jul 16 00:48:16.489342 ignition[1014]: GET https://metadata.packet.net/metadata: attempt #6
Jul 16 00:48:17.440496 ignition[1014]: GET result: OK
Jul 16 00:48:18.134708 ignition[1014]: Ignition finished successfully
Jul 16 00:48:18.140843 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jul 16 00:48:18.152347 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jul 16 00:48:18.193717 ignition[1035]: Ignition 2.21.0
Jul 16 00:48:18.193722 ignition[1035]: Stage: disks
Jul 16 00:48:18.193809 ignition[1035]: no configs at "/usr/lib/ignition/base.d"
Jul 16 00:48:18.193816 ignition[1035]: no config dir at "/usr/lib/ignition/base.platform.d/packet"
Jul 16 00:48:18.194299 ignition[1035]: disks: disks passed
Jul 16 00:48:18.194302 ignition[1035]: POST message to Packet Timeline
Jul 16 00:48:18.194313 ignition[1035]: GET https://metadata.packet.net/metadata: attempt #1
Jul 16 00:48:19.208984 ignition[1035]: GET result: OK
Jul 16 00:48:19.734962 ignition[1035]: Ignition finished successfully
Jul 16 00:48:19.739893 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jul 16 00:48:19.751634 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jul 16 00:48:19.769671 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jul 16 00:48:19.788750 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 16 00:48:19.808762 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 16 00:48:19.817928 systemd[1]: Reached target basic.target - Basic System.
Jul 16 00:48:19.844711 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jul 16 00:48:19.894516 systemd-fsck[1054]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Jul 16 00:48:19.903777 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jul 16 00:48:19.918106 systemd[1]: Mounting sysroot.mount - /sysroot...
Jul 16 00:48:20.026396 kernel: EXT4-fs (sda9): mounted filesystem e7011b63-42ae-44ea-90bf-c826e39292b2 r/w with ordered data mode. Quota mode: none.
Jul 16 00:48:20.026522 systemd[1]: Mounted sysroot.mount - /sysroot.
Jul 16 00:48:20.033832 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jul 16 00:48:20.058814 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 16 00:48:20.067211 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jul 16 00:48:20.089321 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Jul 16 00:48:20.140578 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 (8:6) scanned by mount (1063)
Jul 16 00:48:20.140595 kernel: BTRFS info (device sda6): first mount of filesystem 00a9d8f6-6c10-4cef-8e74-b38121477a0b
Jul 16 00:48:20.140603 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jul 16 00:48:20.140611 kernel: BTRFS info (device sda6): using free-space-tree
Jul 16 00:48:20.108263 systemd[1]: Starting flatcar-static-network.service - Flatcar Static Network Agent...
Jul 16 00:48:20.150573 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jul 16 00:48:20.150593 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 16 00:48:20.209519 coreos-metadata[1066]: Jul 16 00:48:20.207 INFO Fetching https://metadata.packet.net/metadata: Attempt #1
Jul 16 00:48:20.218668 coreos-metadata[1065]: Jul 16 00:48:20.207 INFO Fetching https://metadata.packet.net/metadata: Attempt #1
Jul 16 00:48:20.151761 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 16 00:48:20.175604 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jul 16 00:48:20.201294 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jul 16 00:48:20.273142 initrd-setup-root[1095]: cut: /sysroot/etc/passwd: No such file or directory
Jul 16 00:48:20.282457 initrd-setup-root[1102]: cut: /sysroot/etc/group: No such file or directory
Jul 16 00:48:20.292560 initrd-setup-root[1109]: cut: /sysroot/etc/shadow: No such file or directory
Jul 16 00:48:20.301552 initrd-setup-root[1116]: cut: /sysroot/etc/gshadow: No such file or directory
Jul 16 00:48:20.336992 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jul 16 00:48:20.347283 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jul 16 00:48:20.356062 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jul 16 00:48:20.404897 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jul 16 00:48:20.422428 kernel: BTRFS info (device sda6): last unmount of filesystem 00a9d8f6-6c10-4cef-8e74-b38121477a0b
Jul 16 00:48:20.423722 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jul 16 00:48:20.439499 ignition[1184]: INFO : Ignition 2.21.0
Jul 16 00:48:20.439499 ignition[1184]: INFO : Stage: mount
Jul 16 00:48:20.439499 ignition[1184]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 16 00:48:20.439499 ignition[1184]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet"
Jul 16 00:48:20.439499 ignition[1184]: INFO : mount: mount passed
Jul 16 00:48:20.439499 ignition[1184]: INFO : POST message to Packet Timeline
Jul 16 00:48:20.439499 ignition[1184]: INFO : GET https://metadata.packet.net/metadata: attempt #1
Jul 16 00:48:21.182073 coreos-metadata[1065]: Jul 16 00:48:21.181 INFO Fetch successful
Jul 16 00:48:21.258297 coreos-metadata[1065]: Jul 16 00:48:21.258 INFO wrote hostname ci-4372.0.1-n-bd48696324 to /sysroot/etc/hostname
Jul 16 00:48:21.259522 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jul 16 00:48:21.340717 ignition[1184]: INFO : GET result: OK
Jul 16 00:48:21.364504 coreos-metadata[1066]: Jul 16 00:48:21.364 INFO Fetch successful
Jul 16 00:48:21.445173 systemd[1]: flatcar-static-network.service: Deactivated successfully.
Jul 16 00:48:21.445231 systemd[1]: Finished flatcar-static-network.service - Flatcar Static Network Agent.
Jul 16 00:48:21.900789 ignition[1184]: INFO : Ignition finished successfully
Jul 16 00:48:21.904951 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jul 16 00:48:21.920851 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jul 16 00:48:21.957232 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 16 00:48:22.007419 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 (8:6) scanned by mount (1209)
Jul 16 00:48:22.007448 kernel: BTRFS info (device sda6): first mount of filesystem 00a9d8f6-6c10-4cef-8e74-b38121477a0b
Jul 16 00:48:22.015500 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jul 16 00:48:22.021420 kernel: BTRFS info (device sda6): using free-space-tree
Jul 16 00:48:22.026158 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 16 00:48:22.058066 ignition[1226]: INFO : Ignition 2.21.0
Jul 16 00:48:22.058066 ignition[1226]: INFO : Stage: files
Jul 16 00:48:22.070630 ignition[1226]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 16 00:48:22.070630 ignition[1226]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet"
Jul 16 00:48:22.070630 ignition[1226]: DEBUG : files: compiled without relabeling support, skipping
Jul 16 00:48:22.070630 ignition[1226]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jul 16 00:48:22.070630 ignition[1226]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jul 16 00:48:22.070630 ignition[1226]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jul 16 00:48:22.070630 ignition[1226]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jul 16 00:48:22.070630 ignition[1226]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jul 16 00:48:22.070630 ignition[1226]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Jul 16 00:48:22.070630 ignition[1226]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1
Jul 16 00:48:22.061694 unknown[1226]: wrote ssh authorized keys file for user: core
Jul 16 00:48:22.194603 ignition[1226]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jul 16 00:48:22.252211 ignition[1226]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Jul 16 00:48:22.252211 ignition[1226]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jul 16 00:48:22.283579 ignition[1226]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jul 16 00:48:22.283579 ignition[1226]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jul 16 00:48:22.283579 ignition[1226]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jul 16 00:48:22.283579 ignition[1226]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 16 00:48:22.283579 ignition[1226]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 16 00:48:22.283579 ignition[1226]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 16 00:48:22.283579 ignition[1226]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 16 00:48:22.283579 ignition[1226]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jul 16 00:48:22.283579 ignition[1226]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jul 16 00:48:22.283579 ignition[1226]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Jul 16 00:48:22.283579 ignition[1226]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Jul 16 00:48:22.283579 ignition[1226]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Jul 16 00:48:22.283579 ignition[1226]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1
Jul 16 00:48:23.172782 ignition[1226]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jul 16 00:48:23.686656 ignition[1226]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Jul 16 00:48:23.686656 ignition[1226]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Jul 16 00:48:23.714606 ignition[1226]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 16 00:48:23.714606 ignition[1226]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 16 00:48:23.714606 ignition[1226]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Jul 16 00:48:23.714606 ignition[1226]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Jul 16 00:48:23.714606 ignition[1226]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Jul 16 00:48:23.714606 ignition[1226]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Jul 16 00:48:23.714606 ignition[1226]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jul 16 00:48:23.714606 ignition[1226]: INFO : files: files passed
Jul 16 00:48:23.714606 ignition[1226]: INFO : POST message to Packet Timeline
Jul 16 00:48:23.714606 ignition[1226]: INFO : GET https://metadata.packet.net/metadata: attempt #1
Jul 16 00:48:24.556113 ignition[1226]: INFO : GET result: OK
Jul 16 00:48:24.939122 ignition[1226]: INFO : Ignition finished successfully
Jul 16 00:48:24.942717 systemd[1]: Finished ignition-files.service - Ignition (files).
Jul 16 00:48:24.961178 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jul 16 00:48:24.984001 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jul 16 00:48:24.995662 systemd[1]: ignition-quench.service: Deactivated successfully.
Jul 16 00:48:24.995726 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jul 16 00:48:25.014934 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 16 00:48:25.040515 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jul 16 00:48:25.060804 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jul 16 00:48:25.087625 initrd-setup-root-after-ignition[1264]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 16 00:48:25.087625 initrd-setup-root-after-ignition[1264]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jul 16 00:48:25.113600 initrd-setup-root-after-ignition[1268]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 16 00:48:25.175480 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jul 16 00:48:25.175538 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jul 16 00:48:25.192735 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jul 16 00:48:25.212655 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jul 16 00:48:25.230873 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jul 16 00:48:25.232929 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jul 16 00:48:25.310570 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 16 00:48:25.324567 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jul 16 00:48:25.391607 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jul 16 00:48:25.401920 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 16 00:48:25.421034 systemd[1]: Stopped target timers.target - Timer Units.
Jul 16 00:48:25.438024 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jul 16 00:48:25.438468 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 16 00:48:25.474705 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jul 16 00:48:25.483910 systemd[1]: Stopped target basic.target - Basic System.
Jul 16 00:48:25.500964 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jul 16 00:48:25.517967 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 16 00:48:25.537971 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jul 16 00:48:25.557116 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Jul 16 00:48:25.576123 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jul 16 00:48:25.594121 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 16 00:48:25.613175 systemd[1]: Stopped target sysinit.target - System Initialization.
Jul 16 00:48:25.633164 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jul 16 00:48:25.651126 systemd[1]: Stopped target swap.target - Swaps.
Jul 16 00:48:25.667891 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jul 16 00:48:25.668305 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jul 16 00:48:25.700727 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jul 16 00:48:25.710021 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 16 00:48:25.729849 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jul 16 00:48:25.730334 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 16 00:48:25.750879 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jul 16 00:48:25.751301 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jul 16 00:48:25.780060 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jul 16 00:48:25.780555 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 16 00:48:25.798165 systemd[1]: Stopped target paths.target - Path Units.
Jul 16 00:48:25.814809 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jul 16 00:48:25.815300 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 16 00:48:25.834979 systemd[1]: Stopped target slices.target - Slice Units.
Jul 16 00:48:25.851978 systemd[1]: Stopped target sockets.target - Socket Units.
Jul 16 00:48:25.868966 systemd[1]: iscsid.socket: Deactivated successfully.
Jul 16 00:48:25.869275 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jul 16 00:48:25.887010 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jul 16 00:48:25.887306 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 16 00:48:25.909117 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jul 16 00:48:25.909569 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 16 00:48:26.038597 ignition[1289]: INFO : Ignition 2.21.0
Jul 16 00:48:26.038597 ignition[1289]: INFO : Stage: umount
Jul 16 00:48:26.038597 ignition[1289]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 16 00:48:26.038597 ignition[1289]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet"
Jul 16 00:48:26.038597 ignition[1289]: INFO : umount: umount passed
Jul 16 00:48:26.038597 ignition[1289]: INFO : POST message to Packet Timeline
Jul 16 00:48:26.038597 ignition[1289]: INFO : GET https://metadata.packet.net/metadata: attempt #1
Jul 16 00:48:25.927044 systemd[1]: ignition-files.service: Deactivated successfully.
Jul 16 00:48:25.927475 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jul 16 00:48:25.942910 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Jul 16 00:48:25.943254 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jul 16 00:48:25.962460 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jul 16 00:48:25.974563 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jul 16 00:48:25.974645 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 16 00:48:25.988934 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jul 16 00:48:26.000521 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jul 16 00:48:26.000735 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 16 00:48:26.028802 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jul 16 00:48:26.029048 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 16 00:48:26.069696 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jul 16 00:48:26.070432 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jul 16 00:48:26.070505 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jul 16 00:48:26.079754 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jul 16 00:48:26.079840 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jul 16 00:48:27.027267 ignition[1289]: INFO : GET result: OK
Jul 16 00:48:27.422431 ignition[1289]: INFO : Ignition finished successfully
Jul 16 00:48:27.426317 systemd[1]: ignition-mount.service: Deactivated successfully.
Jul 16 00:48:27.426656 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jul 16 00:48:27.441703 systemd[1]: Stopped target network.target - Network.
Jul 16 00:48:27.454618 systemd[1]: ignition-disks.service: Deactivated successfully.
Jul 16 00:48:27.454881 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jul 16 00:48:27.472798 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jul 16 00:48:27.472935 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jul 16 00:48:27.488852 systemd[1]: ignition-setup.service: Deactivated successfully.
Jul 16 00:48:27.489025 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jul 16 00:48:27.504858 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jul 16 00:48:27.505018 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jul 16 00:48:27.522829 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jul 16 00:48:27.523007 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jul 16 00:48:27.539190 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jul 16 00:48:27.556930 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jul 16 00:48:27.575418 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jul 16 00:48:27.575700 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jul 16 00:48:27.597982 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Jul 16 00:48:27.598820 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jul 16 00:48:27.599084 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jul 16 00:48:27.614133 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Jul 16 00:48:27.616228 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Jul 16 00:48:27.628793 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jul 16 00:48:27.628914 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jul 16 00:48:27.650030 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jul 16 00:48:27.672547 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jul 16 00:48:27.672690 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 16 00:48:27.699708 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jul 16 00:48:27.699786 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jul 16 00:48:27.718214 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jul 16 00:48:27.718337 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jul 16 00:48:27.736818 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jul 16 00:48:27.736992 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 16 00:48:27.756395 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 16 00:48:27.779079 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jul 16 00:48:27.779263 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Jul 16 00:48:27.780453 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jul 16 00:48:27.780805 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 16 00:48:27.799307 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jul 16 00:48:27.799488 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jul 16 00:48:27.813608 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jul 16 00:48:27.813630 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 16 00:48:27.813674 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jul 16 00:48:27.813702 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jul 16 00:48:27.846756 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jul 16 00:48:27.846827 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jul 16 00:48:27.885581 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 16 00:48:27.885895 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 16 00:48:27.923591 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jul 16 00:48:27.930580 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Jul 16 00:48:27.930608 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Jul 16 00:48:28.205693 systemd-journald[299]: Received SIGTERM from PID 1 (systemd).
Jul 16 00:48:27.977628 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jul 16 00:48:27.977692 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 16 00:48:27.999009 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 16 00:48:27.999144 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 16 00:48:28.023018 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Jul 16 00:48:28.023176 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Jul 16 00:48:28.023302 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jul 16 00:48:28.024444 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jul 16 00:48:28.024672 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jul 16 00:48:28.066287 systemd[1]: network-cleanup.service: Deactivated successfully.
Jul 16 00:48:28.066710 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jul 16 00:48:28.078607 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jul 16 00:48:28.098802 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jul 16 00:48:28.151072 systemd[1]: Switching root.
Jul 16 00:48:28.336513 systemd-journald[299]: Journal stopped
Jul 16 00:48:30.071860 kernel: SELinux: policy capability network_peer_controls=1
Jul 16 00:48:30.071877 kernel: SELinux: policy capability open_perms=1
Jul 16 00:48:30.071885 kernel: SELinux: policy capability extended_socket_class=1
Jul 16 00:48:30.071891 kernel: SELinux: policy capability always_check_network=0
Jul 16 00:48:30.071896 kernel: SELinux: policy capability cgroup_seclabel=1
Jul 16 00:48:30.071902 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jul 16 00:48:30.071908 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jul 16 00:48:30.071915 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jul 16 00:48:30.071921 kernel: SELinux: policy capability userspace_initial_context=0
Jul 16 00:48:30.071927 kernel: audit: type=1403 audit(1752626908.457:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jul 16 00:48:30.071935 systemd[1]: Successfully loaded SELinux policy in 88.124ms.
Jul 16 00:48:30.071942 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 7.809ms.
Jul 16 00:48:30.071949 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jul 16 00:48:30.071956 systemd[1]: Detected architecture x86-64.
Jul 16 00:48:30.071964 systemd[1]: Detected first boot.
Jul 16 00:48:30.071971 systemd[1]: Hostname set to .
Jul 16 00:48:30.071978 systemd[1]: Initializing machine ID from random generator.
Jul 16 00:48:30.071985 zram_generator::config[1343]: No configuration found.
Jul 16 00:48:30.071993 systemd[1]: Populated /etc with preset unit settings.
Jul 16 00:48:30.072000 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Jul 16 00:48:30.072007 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jul 16 00:48:30.072013 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jul 16 00:48:30.072020 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jul 16 00:48:30.072027 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jul 16 00:48:30.072033 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jul 16 00:48:30.072041 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jul 16 00:48:30.072048 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jul 16 00:48:30.072055 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jul 16 00:48:30.072062 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jul 16 00:48:30.072069 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jul 16 00:48:30.072076 systemd[1]: Created slice user.slice - User and Session Slice.
Jul 16 00:48:30.072083 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 16 00:48:30.072090 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 16 00:48:30.072098 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jul 16 00:48:30.072105 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jul 16 00:48:30.072112 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jul 16 00:48:30.072119 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 16 00:48:30.072126 systemd[1]: Expecting device dev-ttyS1.device - /dev/ttyS1...
Jul 16 00:48:30.072133 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 16 00:48:30.072140 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 16 00:48:30.072148 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jul 16 00:48:30.072155 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jul 16 00:48:30.072164 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jul 16 00:48:30.072171 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jul 16 00:48:30.072177 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 16 00:48:30.072186 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 16 00:48:30.072192 systemd[1]: Reached target slices.target - Slice Units.
Jul 16 00:48:30.072199 systemd[1]: Reached target swap.target - Swaps.
Jul 16 00:48:30.072206 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jul 16 00:48:30.072215 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jul 16 00:48:30.072222 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Jul 16 00:48:30.072229 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 16 00:48:30.072236 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 16 00:48:30.072243 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 16 00:48:30.072252 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jul 16 00:48:30.072259 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jul 16 00:48:30.072266 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jul 16 00:48:30.072273 systemd[1]: Mounting media.mount - External Media Directory...
Jul 16 00:48:30.072280 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 16 00:48:30.072287 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jul 16 00:48:30.072294 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jul 16 00:48:30.072301 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jul 16 00:48:30.072310 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jul 16 00:48:30.072317 systemd[1]: Reached target machines.target - Containers.
Jul 16 00:48:30.072324 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jul 16 00:48:30.072332 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 16 00:48:30.072339 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 16 00:48:30.072346 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jul 16 00:48:30.072356 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 16 00:48:30.072364 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 16 00:48:30.072372 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 16 00:48:30.072379 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jul 16 00:48:30.072386 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 16 00:48:30.072393 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jul 16 00:48:30.072401 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jul 16 00:48:30.072408 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jul 16 00:48:30.072415 kernel: fuse: init (API version 7.41)
Jul 16 00:48:30.072421 kernel: ACPI: bus type drm_connector registered
Jul 16 00:48:30.072427 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jul 16 00:48:30.072436 systemd[1]: Stopped systemd-fsck-usr.service.
Jul 16 00:48:30.072443 kernel: loop: module loaded
Jul 16 00:48:30.072450 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 16 00:48:30.072457 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 16 00:48:30.072464 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 16 00:48:30.072471 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jul 16 00:48:30.072491 systemd-journald[1447]: Collecting audit messages is disabled.
Jul 16 00:48:30.072508 systemd-journald[1447]: Journal started
Jul 16 00:48:30.072523 systemd-journald[1447]: Runtime Journal (/run/log/journal/5ab564e2059747cdad72fa277882e466) is 8M, max 639.3M, 631.3M free.
Jul 16 00:48:28.934919 systemd[1]: Queued start job for default target multi-user.target.
Jul 16 00:48:28.950345 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Jul 16 00:48:28.950659 systemd[1]: systemd-journald.service: Deactivated successfully.
Jul 16 00:48:30.088416 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jul 16 00:48:30.112408 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Jul 16 00:48:30.122400 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 16 00:48:30.152685 systemd[1]: verity-setup.service: Deactivated successfully.
Jul 16 00:48:30.152718 systemd[1]: Stopped verity-setup.service.
Jul 16 00:48:30.178418 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 16 00:48:30.187405 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 16 00:48:30.195883 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jul 16 00:48:30.204503 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jul 16 00:48:30.213501 systemd[1]: Mounted media.mount - External Media Directory.
Jul 16 00:48:30.222492 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jul 16 00:48:30.231660 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jul 16 00:48:30.241645 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jul 16 00:48:30.250737 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jul 16 00:48:30.261776 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 16 00:48:30.272271 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jul 16 00:48:30.272786 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jul 16 00:48:30.283300 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 16 00:48:30.283805 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 16 00:48:30.294374 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 16 00:48:30.294871 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 16 00:48:30.304295 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 16 00:48:30.304994 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 16 00:48:30.316312 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jul 16 00:48:30.316814 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jul 16 00:48:30.326304 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 16 00:48:30.326795 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 16 00:48:30.336403 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 16 00:48:30.346335 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 16 00:48:30.357466 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jul 16 00:48:30.368343 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Jul 16 00:48:30.379331 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 16 00:48:30.412707 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 16 00:48:30.425036 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jul 16 00:48:30.440006 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jul 16 00:48:30.449634 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jul 16 00:48:30.449725 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 16 00:48:30.461507 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Jul 16 00:48:30.474290 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jul 16 00:48:30.482844 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 16 00:48:30.495586 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jul 16 00:48:30.514624 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jul 16 00:48:30.524483 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 16 00:48:30.540624 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jul 16 00:48:30.543897 systemd-journald[1447]: Time spent on flushing to /var/log/journal/5ab564e2059747cdad72fa277882e466 is 12.396ms for 1418 entries.
Jul 16 00:48:30.543897 systemd-journald[1447]: System Journal (/var/log/journal/5ab564e2059747cdad72fa277882e466) is 8M, max 195.6M, 187.6M free.
Jul 16 00:48:30.566880 systemd-journald[1447]: Received client request to flush runtime journal.
Jul 16 00:48:30.557454 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 16 00:48:30.568595 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 16 00:48:30.589969 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jul 16 00:48:30.607682 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jul 16 00:48:30.619409 kernel: loop0: detected capacity change from 0 to 146240 Jul 16 00:48:30.622692 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jul 16 00:48:30.634060 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jul 16 00:48:30.644654 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jul 16 00:48:30.654680 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jul 16 00:48:30.669792 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 16 00:48:30.670356 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 16 00:48:30.678587 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jul 16 00:48:30.689014 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jul 16 00:48:30.699220 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jul 16 00:48:30.708359 kernel: loop1: detected capacity change from 0 to 224512 Jul 16 00:48:30.722657 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 16 00:48:30.738733 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 16 00:48:30.746503 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jul 16 00:48:30.755361 kernel: loop2: detected capacity change from 0 to 113872 Jul 16 00:48:30.768521 systemd-tmpfiles[1497]: ACLs are not supported, ignoring. Jul 16 00:48:30.768532 systemd-tmpfiles[1497]: ACLs are not supported, ignoring. Jul 16 00:48:30.771587 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Jul 16 00:48:30.813399 kernel: loop3: detected capacity change from 0 to 8 Jul 16 00:48:30.843419 kernel: loop4: detected capacity change from 0 to 146240 Jul 16 00:48:30.869406 kernel: loop5: detected capacity change from 0 to 224512 Jul 16 00:48:30.874893 ldconfig[1477]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jul 16 00:48:30.876910 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jul 16 00:48:30.890397 kernel: loop6: detected capacity change from 0 to 113872 Jul 16 00:48:30.904409 kernel: loop7: detected capacity change from 0 to 8 Jul 16 00:48:30.904465 (sd-merge)[1504]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-packet'. Jul 16 00:48:30.904759 (sd-merge)[1504]: Merged extensions into '/usr'. Jul 16 00:48:30.907325 systemd[1]: Reload requested from client PID 1483 ('systemd-sysext') (unit systemd-sysext.service)... Jul 16 00:48:30.907333 systemd[1]: Reloading... Jul 16 00:48:30.928421 zram_generator::config[1529]: No configuration found. Jul 16 00:48:30.991769 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 16 00:48:31.056401 systemd[1]: Reloading finished in 148 ms. Jul 16 00:48:31.080606 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jul 16 00:48:31.090739 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jul 16 00:48:31.117547 systemd[1]: Starting ensure-sysext.service... Jul 16 00:48:31.124435 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 16 00:48:31.144207 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... 
Jul 16 00:48:31.161941 systemd-tmpfiles[1588]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jul 16 00:48:31.162339 systemd-tmpfiles[1588]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jul 16 00:48:31.162901 systemd-tmpfiles[1588]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 16 00:48:31.163125 systemd-tmpfiles[1588]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jul 16 00:48:31.163847 systemd-tmpfiles[1588]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 16 00:48:31.164082 systemd-tmpfiles[1588]: ACLs are not supported, ignoring. Jul 16 00:48:31.164133 systemd-tmpfiles[1588]: ACLs are not supported, ignoring. Jul 16 00:48:31.166908 systemd-tmpfiles[1588]: Detected autofs mount point /boot during canonicalization of boot. Jul 16 00:48:31.166915 systemd-tmpfiles[1588]: Skipping /boot Jul 16 00:48:31.175344 systemd-tmpfiles[1588]: Detected autofs mount point /boot during canonicalization of boot. Jul 16 00:48:31.175350 systemd-tmpfiles[1588]: Skipping /boot Jul 16 00:48:31.182519 systemd[1]: Reload requested from client PID 1587 ('systemctl') (unit ensure-sysext.service)... Jul 16 00:48:31.182532 systemd[1]: Reloading... Jul 16 00:48:31.194620 systemd-udevd[1589]: Using default interface naming scheme 'v255'. Jul 16 00:48:31.210367 zram_generator::config[1616]: No configuration found. 
Jul 16 00:48:31.266658 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input2 Jul 16 00:48:31.266927 kernel: ACPI: button: Sleep Button [SLPB] Jul 16 00:48:31.274535 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jul 16 00:48:31.278361 kernel: mei_me 0000:00:16.4: Device doesn't have valid ME Interface Jul 16 00:48:31.278546 kernel: IPMI message handler: version 39.2 Jul 16 00:48:31.278561 kernel: mousedev: PS/2 mouse device common for all mice Jul 16 00:48:31.278572 kernel: ACPI: button: Power Button [PWRF] Jul 16 00:48:31.279358 kernel: mei_me 0000:00:16.0: Device doesn't have valid ME Interface Jul 16 00:48:31.291188 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 16 00:48:31.318370 kernel: ipmi device interface Jul 16 00:48:31.318430 kernel: ACPI: video: Video Device [GFX0] (multi-head: yes rom: no post: no) Jul 16 00:48:31.336484 kernel: input: Video Bus as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0A08:00/LNXVIDEO:00/input/input4 Jul 16 00:48:31.336544 kernel: i801_smbus 0000:00:1f.4: SPD Write Disable is set Jul 16 00:48:31.350142 kernel: i801_smbus 0000:00:1f.4: SMBus using PCI interrupt Jul 16 00:48:31.361367 kernel: iTCO_vendor_support: vendor-support=0 Jul 16 00:48:31.365361 kernel: MACsec IEEE 802.1AE Jul 16 00:48:31.394592 kernel: ipmi_si: IPMI System Interface driver Jul 16 00:48:31.394647 kernel: ipmi_si dmi-ipmi-si.0: ipmi_platform: probing via SMBIOS Jul 16 00:48:31.402127 kernel: ipmi_platform: ipmi_si: SMBIOS: io 0xca2 regsize 1 spacing 1 irq 0 Jul 16 00:48:31.408476 kernel: ipmi_si: Adding SMBIOS-specified kcs state machine Jul 16 00:48:31.414777 kernel: ipmi_si IPI0001:00: ipmi_platform: probing via ACPI Jul 16 00:48:31.423162 kernel: ipmi_si IPI0001:00: ipmi_platform: [io 0x0ca2] regsize 1 spacing 1 irq 0
Jul 16 00:48:31.432613 kernel: ipmi_si dmi-ipmi-si.0: Removing SMBIOS-specified kcs state machine in favor of ACPI Jul 16 00:48:31.438704 kernel: ipmi_si: Adding ACPI-specified kcs state machine Jul 16 00:48:31.441244 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Micron_5300_MTFDDAK480TDT OEM. Jul 16 00:48:31.448979 kernel: ipmi_si: Trying ACPI-specified kcs state machine at i/o address 0xca2, slave address 0x20, irq 0 Jul 16 00:48:31.466364 kernel: iTCO_wdt iTCO_wdt: unable to reset NO_REBOOT flag, device disabled by hardware/BIOS Jul 16 00:48:31.468454 systemd[1]: Condition check resulted in dev-ttyS1.device - /dev/ttyS1 being skipped. Jul 16 00:48:31.468709 systemd[1]: Reloading finished in 285 ms. Jul 16 00:48:31.486367 kernel: ipmi_si IPI0001:00: The BMC does not support clearing the recv irq bit, compensating, but the BMC needs to be fixed. Jul 16 00:48:31.500941 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 16 00:48:31.521132 kernel: intel_rapl_common: Found RAPL domain package Jul 16 00:48:31.521174 kernel: ipmi_si IPI0001:00: IPMI message handler: Found new BMC (man_id: 0x002a7c, prod_id: 0x1b11, dev_id: 0x20) Jul 16 00:48:31.521289 kernel: intel_rapl_common: Found RAPL domain core Jul 16 00:48:31.521305 kernel: intel_rapl_common: Found RAPL domain uncore Jul 16 00:48:31.539250 kernel: intel_rapl_common: Found RAPL domain dram Jul 16 00:48:31.573826 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 16 00:48:31.578358 kernel: ipmi_si IPI0001:00: IPMI kcs interface initialized Jul 16 00:48:31.603485 systemd[1]: Finished ensure-sysext.service. Jul 16 00:48:31.605358 kernel: ipmi_ssif: IPMI SSIF Interface driver Jul 16 00:48:31.631572 systemd[1]: Reached target tpm2.target - Trusted Platform Module. Jul 16 00:48:31.640466 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 16 00:48:31.641188 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 16 00:48:31.872568 kernel: i915 0000:00:02.0: can't derive routing for PCI INT A Jul 16 00:48:31.872712 kernel: i915 0000:00:02.0: PCI INT A: not connected Jul 16 00:48:31.882356 kernel: i915 0000:00:02.0: [drm] Found COFFEELAKE (device ID 3e9a) display version 9.00 stepping N/A Jul 16 00:48:31.885840 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jul 16 00:48:31.897186 kernel: i915 0000:00:02.0: [drm] VT-d active for gfx access Jul 16 00:48:31.897319 kernel: i915 0000:00:02.0: [drm] Using Transparent Hugepages Jul 16 00:48:31.905250 augenrules[1817]: No rules Jul 16 00:48:31.917310 kernel: i915 0000:00:02.0: ROM [??? 0x00000000 flags 0x20000000]: can't assign; bogus alignment Jul 16 00:48:31.917440 kernel: i915 0000:00:02.0: [drm] Failed to find VBIOS tables (VBT) Jul 16 00:48:31.930356 kernel: i915 0000:00:02.0: [drm] Finished loading DMC firmware i915/kbl_dmc_ver1_04.bin (v1.4) Jul 16 00:48:31.931541 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 16 00:48:31.932252 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 16 00:48:31.940954 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 16 00:48:31.949932 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 16 00:48:31.959947 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 16 00:48:31.968480 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 16 00:48:31.969012 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... 
Jul 16 00:48:31.978387 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 16 00:48:31.978997 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jul 16 00:48:31.990402 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 16 00:48:31.991537 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 16 00:48:32.000485 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jul 16 00:48:32.017977 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jul 16 00:48:32.039469 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 16 00:48:32.049393 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 16 00:48:32.050124 systemd[1]: audit-rules.service: Deactivated successfully. Jul 16 00:48:32.059422 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 16 00:48:32.068780 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jul 16 00:48:32.069012 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 16 00:48:32.069117 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 16 00:48:32.069287 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 16 00:48:32.069399 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 16 00:48:32.069574 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 16 00:48:32.069673 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 16 00:48:32.069840 systemd[1]: modprobe@loop.service: Deactivated successfully. 
Jul 16 00:48:32.069940 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 16 00:48:32.070165 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jul 16 00:48:32.070420 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jul 16 00:48:32.074969 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 16 00:48:32.075044 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 16 00:48:32.075897 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jul 16 00:48:32.076843 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jul 16 00:48:32.076871 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 16 00:48:32.077166 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jul 16 00:48:32.096381 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jul 16 00:48:32.114671 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jul 16 00:48:32.157696 systemd-resolved[1830]: Positive Trust Anchors:
Jul 16 00:48:32.157706 systemd-resolved[1830]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 16 00:48:32.157735 systemd-resolved[1830]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 16 00:48:32.161003 systemd-resolved[1830]: Using system hostname 'ci-4372.0.1-n-bd48696324'. Jul 16 00:48:32.161765 systemd-networkd[1829]: lo: Link UP Jul 16 00:48:32.161769 systemd-networkd[1829]: lo: Gained carrier Jul 16 00:48:32.165124 systemd-networkd[1829]: bond0: netdev ready Jul 16 00:48:32.166311 systemd-networkd[1829]: Enumeration completed Jul 16 00:48:32.167394 systemd-networkd[1829]: enp2s0f0np0: Configuring with /etc/systemd/network/10-0c:42:a1:66:0d:82.network. Jul 16 00:48:32.174928 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jul 16 00:48:32.184681 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 16 00:48:32.193421 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 16 00:48:32.203582 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 16 00:48:32.214704 systemd[1]: Reached target network.target - Network. Jul 16 00:48:32.221395 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 16 00:48:32.231400 systemd[1]: Reached target sysinit.target - System Initialization. Jul 16 00:48:32.239452 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jul 16 00:48:32.249476 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jul 16 00:48:32.259391 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Jul 16 00:48:32.269397 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jul 16 00:48:32.280394 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 16 00:48:32.280416 systemd[1]: Reached target paths.target - Path Units. Jul 16 00:48:32.288400 systemd[1]: Reached target time-set.target - System Time Set. Jul 16 00:48:32.296357 kernel: mlx5_core 0000:02:00.0 enp2s0f0np0: Link up Jul 16 00:48:32.308358 kernel: bond0: (slave enp2s0f0np0): Enslaving as a backup interface with an up link Jul 16 00:48:32.308815 systemd-networkd[1829]: enp2s0f1np1: Configuring with /etc/systemd/network/10-0c:42:a1:66:0d:83.network. Jul 16 00:48:32.311496 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jul 16 00:48:32.321459 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jul 16 00:48:32.332398 systemd[1]: Reached target timers.target - Timer Units. Jul 16 00:48:32.340079 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jul 16 00:48:32.350340 systemd[1]: Starting docker.socket - Docker Socket for the API... Jul 16 00:48:32.359811 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jul 16 00:48:32.371641 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jul 16 00:48:32.380628 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jul 16 00:48:32.392122 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... 
Jul 16 00:48:32.404006 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jul 16 00:48:32.414699 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jul 16 00:48:32.425020 systemd[1]: Reached target sockets.target - Socket Units. Jul 16 00:48:32.435362 kernel: mlx5_core 0000:02:00.1 enp2s0f1np1: Link up Jul 16 00:48:32.445625 systemd[1]: Reached target basic.target - Basic System. Jul 16 00:48:32.446279 systemd-networkd[1829]: bond0: Configuring with /etc/systemd/network/05-bond0.network. Jul 16 00:48:32.446356 kernel: bond0: (slave enp2s0f1np1): Enslaving as a backup interface with an up link Jul 16 00:48:32.447171 systemd-networkd[1829]: enp2s0f0np0: Link UP Jul 16 00:48:32.447324 systemd-networkd[1829]: enp2s0f0np0: Gained carrier Jul 16 00:48:32.457356 kernel: bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond Jul 16 00:48:32.463442 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jul 16 00:48:32.463461 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jul 16 00:48:32.464195 systemd[1]: Starting containerd.service - containerd container runtime... Jul 16 00:48:32.465690 systemd-networkd[1829]: enp2s0f1np1: Reconfiguring with /etc/systemd/network/10-0c:42:a1:66:0d:82.network. Jul 16 00:48:32.465846 systemd-networkd[1829]: enp2s0f1np1: Link UP Jul 16 00:48:32.465991 systemd-networkd[1829]: enp2s0f1np1: Gained carrier Jul 16 00:48:32.484453 systemd-networkd[1829]: bond0: Link UP Jul 16 00:48:32.484621 systemd-networkd[1829]: bond0: Gained carrier Jul 16 00:48:32.484728 systemd-timesyncd[1831]: Network configuration changed, trying to establish connection. Jul 16 00:48:32.485044 systemd-timesyncd[1831]: Network configuration changed, trying to establish connection. 
Jul 16 00:48:32.485215 systemd-timesyncd[1831]: Network configuration changed, trying to establish connection. Jul 16 00:48:32.485302 systemd-timesyncd[1831]: Network configuration changed, trying to establish connection. Jul 16 00:48:32.489838 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jul 16 00:48:32.508464 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jul 16 00:48:32.516958 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jul 16 00:48:32.523284 coreos-metadata[1870]: Jul 16 00:48:32.523 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Jul 16 00:48:32.539447 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jul 16 00:48:32.565684 kernel: bond0: (slave enp2s0f0np0): link status definitely up, 10000 Mbps full duplex Jul 16 00:48:32.565708 kernel: bond0: active interface up! Jul 16 00:48:32.569467 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jul 16 00:48:32.571611 jq[1876]: false Jul 16 00:48:32.578392 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jul 16 00:48:32.578987 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Jul 16 00:48:32.583656 extend-filesystems[1877]: Found /dev/sda6 Jul 16 00:48:32.588446 extend-filesystems[1877]: Found /dev/sda9 Jul 16 00:48:32.588446 extend-filesystems[1877]: Checking size of /dev/sda9 Jul 16 00:48:32.614457 kernel: EXT4-fs (sda9): resizing filesystem from 553472 to 116605649 blocks Jul 16 00:48:32.611883 oslogin_cache_refresh[1878]: Refreshing passwd entry cache Jul 16 00:48:32.589027 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... 
Jul 16 00:48:32.614692 extend-filesystems[1877]: Resized partition /dev/sda9 Jul 16 00:48:32.644023 kernel: i915 0000:00:02.0: [drm] [ENCODER:98:DDI A/PHY A] failed to retrieve link info, disabling eDP Jul 16 00:48:32.644172 kernel: [drm] Initialized i915 1.6.0 for 0000:00:02.0 on minor 0 Jul 16 00:48:32.602082 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jul 16 00:48:32.644435 google_oslogin_nss_cache[1878]: oslogin_cache_refresh[1878]: Refreshing passwd entry cache Jul 16 00:48:32.644729 extend-filesystems[1889]: resize2fs 1.47.2 (1-Jan-2025) Jul 16 00:48:32.621907 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jul 16 00:48:32.654209 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jul 16 00:48:32.662286 systemd[1]: Starting systemd-logind.service - User Login Management... Jul 16 00:48:32.682404 kernel: bond0: (slave enp2s0f1np1): link status definitely up, 10000 Mbps full duplex Jul 16 00:48:32.683550 systemd[1]: Starting tcsd.service - TCG Core Services Daemon... Jul 16 00:48:32.697681 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 16 00:48:32.698034 systemd[1]: Starting update-engine.service - Update Engine... Jul 16 00:48:32.718472 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jul 16 00:48:32.720733 systemd-logind[1904]: New seat seat0. 
Jul 16 00:48:32.721988 systemd-logind[1904]: Watching system buttons on /dev/input/event3 (Power Button) Jul 16 00:48:32.722469 systemd-logind[1904]: Watching system buttons on /dev/input/event2 (Sleep Button) Jul 16 00:48:32.722495 systemd-logind[1904]: Watching system buttons on /dev/input/event0 (HID 0557:2419) Jul 16 00:48:32.726038 update_engine[1909]: I20250716 00:48:32.725970 1909 main.cc:92] Flatcar Update Engine starting Jul 16 00:48:32.729156 systemd[1]: Started systemd-logind.service - User Login Management. Jul 16 00:48:32.730482 jq[1910]: true Jul 16 00:48:32.738632 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jul 16 00:48:32.748917 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jul 16 00:48:32.758556 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 16 00:48:32.758672 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jul 16 00:48:32.758833 systemd[1]: motdgen.service: Deactivated successfully. Jul 16 00:48:32.770489 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jul 16 00:48:32.780901 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 16 00:48:32.781012 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jul 16 00:48:32.808298 jq[1915]: true Jul 16 00:48:32.808841 (ntainerd)[1916]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jul 16 00:48:32.818856 tar[1914]: linux-amd64/LICENSE Jul 16 00:48:32.818987 tar[1914]: linux-amd64/helm Jul 16 00:48:32.824644 systemd[1]: tcsd.service: Skipped due to 'exec-condition'. Jul 16 00:48:32.824769 systemd[1]: Condition check resulted in tcsd.service - TCG Core Services Daemon being skipped. 
Jul 16 00:48:32.864521 dbus-daemon[1871]: [system] SELinux support is enabled Jul 16 00:48:32.864675 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jul 16 00:48:32.865104 bash[1946]: Updated "/home/core/.ssh/authorized_keys" Jul 16 00:48:32.867050 update_engine[1909]: I20250716 00:48:32.866992 1909 update_check_scheduler.cc:74] Next update check in 11m58s Jul 16 00:48:32.875041 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jul 16 00:48:32.886746 dbus-daemon[1871]: [system] Successfully activated service 'org.freedesktop.systemd1' Jul 16 00:48:32.886943 systemd[1]: Starting sshkeys.service... Jul 16 00:48:32.892425 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 16 00:48:32.892445 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jul 16 00:48:32.902410 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 16 00:48:32.902423 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jul 16 00:48:32.918133 systemd[1]: Started update-engine.service - Update Engine. Jul 16 00:48:32.925785 sshd_keygen[1908]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 16 00:48:32.927475 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jul 16 00:48:32.938441 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jul 16 00:48:32.963086 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
Jul 16 00:48:32.974010 coreos-metadata[1962]: Jul 16 00:48:32.973 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Jul 16 00:48:32.984472 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jul 16 00:48:32.995059 systemd[1]: Starting issuegen.service - Generate /run/issue... Jul 16 00:48:33.007200 locksmithd[1963]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 16 00:48:33.014234 containerd[1916]: time="2025-07-16T00:48:33Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jul 16 00:48:33.014899 containerd[1916]: time="2025-07-16T00:48:33.014854698Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4 Jul 16 00:48:33.019562 containerd[1916]: time="2025-07-16T00:48:33.019543076Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="5.044µs" Jul 16 00:48:33.019562 containerd[1916]: time="2025-07-16T00:48:33.019561385Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jul 16 00:48:33.019614 containerd[1916]: time="2025-07-16T00:48:33.019575650Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jul 16 00:48:33.019662 containerd[1916]: time="2025-07-16T00:48:33.019654001Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jul 16 00:48:33.019678 containerd[1916]: time="2025-07-16T00:48:33.019664582Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jul 16 00:48:33.019692 containerd[1916]: time="2025-07-16T00:48:33.019682469Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Jul 16 00:48:33.019723 containerd[1916]: time="2025-07-16T00:48:33.019715533Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jul 16 00:48:33.019742 containerd[1916]: time="2025-07-16T00:48:33.019727122Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jul 16 00:48:33.019870 containerd[1916]: time="2025-07-16T00:48:33.019860787Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jul 16 00:48:33.019886 containerd[1916]: time="2025-07-16T00:48:33.019869687Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jul 16 00:48:33.019886 containerd[1916]: time="2025-07-16T00:48:33.019882328Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jul 16 00:48:33.019912 containerd[1916]: time="2025-07-16T00:48:33.019887471Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jul 16 00:48:33.019938 containerd[1916]: time="2025-07-16T00:48:33.019931514Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jul 16 00:48:33.020069 containerd[1916]: time="2025-07-16T00:48:33.020061490Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jul 16 00:48:33.020085 containerd[1916]: time="2025-07-16T00:48:33.020078149Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Jul 16 00:48:33.020100 containerd[1916]: time="2025-07-16T00:48:33.020084379Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jul 16 00:48:33.020117 containerd[1916]: time="2025-07-16T00:48:33.020104057Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jul 16 00:48:33.020278 containerd[1916]: time="2025-07-16T00:48:33.020262154Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jul 16 00:48:33.020317 containerd[1916]: time="2025-07-16T00:48:33.020309251Z" level=info msg="metadata content store policy set" policy=shared Jul 16 00:48:33.022248 systemd[1]: issuegen.service: Deactivated successfully. Jul 16 00:48:33.022381 systemd[1]: Finished issuegen.service - Generate /run/issue. Jul 16 00:48:33.031240 containerd[1916]: time="2025-07-16T00:48:33.031196575Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jul 16 00:48:33.031240 containerd[1916]: time="2025-07-16T00:48:33.031233213Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jul 16 00:48:33.031286 containerd[1916]: time="2025-07-16T00:48:33.031246587Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jul 16 00:48:33.031286 containerd[1916]: time="2025-07-16T00:48:33.031258525Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jul 16 00:48:33.031286 containerd[1916]: time="2025-07-16T00:48:33.031271248Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jul 16 00:48:33.031286 containerd[1916]: time="2025-07-16T00:48:33.031280709Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Jul 16 00:48:33.031357 containerd[1916]: time="2025-07-16T00:48:33.031297785Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jul 16 00:48:33.031357 containerd[1916]: time="2025-07-16T00:48:33.031309564Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jul 16 00:48:33.031357 containerd[1916]: time="2025-07-16T00:48:33.031320589Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jul 16 00:48:33.031357 containerd[1916]: time="2025-07-16T00:48:33.031330743Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jul 16 00:48:33.031873 containerd[1916]: time="2025-07-16T00:48:33.031694772Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jul 16 00:48:33.031922 containerd[1916]: time="2025-07-16T00:48:33.031884822Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jul 16 00:48:33.032040 containerd[1916]: time="2025-07-16T00:48:33.032026542Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jul 16 00:48:33.032073 containerd[1916]: time="2025-07-16T00:48:33.032046931Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jul 16 00:48:33.032073 containerd[1916]: time="2025-07-16T00:48:33.032062850Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jul 16 00:48:33.032124 containerd[1916]: time="2025-07-16T00:48:33.032073836Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jul 16 00:48:33.032124 containerd[1916]: time="2025-07-16T00:48:33.032084622Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jul 16 00:48:33.032124 containerd[1916]: time="2025-07-16T00:48:33.032095865Z"
level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jul 16 00:48:33.032124 containerd[1916]: time="2025-07-16T00:48:33.032106954Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jul 16 00:48:33.032124 containerd[1916]: time="2025-07-16T00:48:33.032116611Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jul 16 00:48:33.032276 containerd[1916]: time="2025-07-16T00:48:33.032128101Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jul 16 00:48:33.032276 containerd[1916]: time="2025-07-16T00:48:33.032138770Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jul 16 00:48:33.032276 containerd[1916]: time="2025-07-16T00:48:33.032148759Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jul 16 00:48:33.032276 containerd[1916]: time="2025-07-16T00:48:33.032202636Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jul 16 00:48:33.032276 containerd[1916]: time="2025-07-16T00:48:33.032245906Z" level=info msg="Start snapshots syncer" Jul 16 00:48:33.032276 containerd[1916]: time="2025-07-16T00:48:33.032264031Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jul 16 00:48:33.032185 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... 
Jul 16 00:48:33.032494 containerd[1916]: time="2025-07-16T00:48:33.032462884Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jul 16 00:48:33.032590 containerd[1916]: time="2025-07-16T00:48:33.032508475Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox 
type=io.containerd.podsandbox.controller.v1 Jul 16 00:48:33.032590 containerd[1916]: time="2025-07-16T00:48:33.032561822Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jul 16 00:48:33.032644 containerd[1916]: time="2025-07-16T00:48:33.032631551Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jul 16 00:48:33.032672 containerd[1916]: time="2025-07-16T00:48:33.032649882Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jul 16 00:48:33.032672 containerd[1916]: time="2025-07-16T00:48:33.032661193Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jul 16 00:48:33.032726 containerd[1916]: time="2025-07-16T00:48:33.032670540Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jul 16 00:48:33.032726 containerd[1916]: time="2025-07-16T00:48:33.032682623Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jul 16 00:48:33.032726 containerd[1916]: time="2025-07-16T00:48:33.032693612Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jul 16 00:48:33.032726 containerd[1916]: time="2025-07-16T00:48:33.032704385Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jul 16 00:48:33.032820 containerd[1916]: time="2025-07-16T00:48:33.032727948Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jul 16 00:48:33.032820 containerd[1916]: time="2025-07-16T00:48:33.032739531Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jul 16 00:48:33.032820 containerd[1916]: time="2025-07-16T00:48:33.032749722Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart 
type=io.containerd.monitor.container.v1 Jul 16 00:48:33.033133 containerd[1916]: time="2025-07-16T00:48:33.033120568Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jul 16 00:48:33.033167 containerd[1916]: time="2025-07-16T00:48:33.033136367Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jul 16 00:48:33.033167 containerd[1916]: time="2025-07-16T00:48:33.033146951Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jul 16 00:48:33.033167 containerd[1916]: time="2025-07-16T00:48:33.033156848Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jul 16 00:48:33.033167 containerd[1916]: time="2025-07-16T00:48:33.033165099Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jul 16 00:48:33.033270 containerd[1916]: time="2025-07-16T00:48:33.033174076Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jul 16 00:48:33.033270 containerd[1916]: time="2025-07-16T00:48:33.033185027Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jul 16 00:48:33.033270 containerd[1916]: time="2025-07-16T00:48:33.033199590Z" level=info msg="runtime interface created" Jul 16 00:48:33.033270 containerd[1916]: time="2025-07-16T00:48:33.033205403Z" level=info msg="created NRI interface" Jul 16 00:48:33.033270 containerd[1916]: time="2025-07-16T00:48:33.033213227Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jul 16 00:48:33.033270 containerd[1916]: time="2025-07-16T00:48:33.033223307Z" level=info msg="Connect containerd service" Jul 16 00:48:33.033270 containerd[1916]: 
time="2025-07-16T00:48:33.033247830Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 16 00:48:33.033696 containerd[1916]: time="2025-07-16T00:48:33.033682499Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 16 00:48:33.055097 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jul 16 00:48:33.066865 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 16 00:48:33.075235 systemd[1]: Started serial-getty@ttyS1.service - Serial Getty on ttyS1. Jul 16 00:48:33.086563 systemd[1]: Reached target getty.target - Login Prompts. Jul 16 00:48:33.108203 tar[1914]: linux-amd64/README.md Jul 16 00:48:33.130453 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jul 16 00:48:33.142880 containerd[1916]: time="2025-07-16T00:48:33.142859638Z" level=info msg="Start subscribing containerd event" Jul 16 00:48:33.142931 containerd[1916]: time="2025-07-16T00:48:33.142894728Z" level=info msg="Start recovering state" Jul 16 00:48:33.142951 containerd[1916]: time="2025-07-16T00:48:33.142946087Z" level=info msg="Start event monitor" Jul 16 00:48:33.142981 containerd[1916]: time="2025-07-16T00:48:33.142965977Z" level=info msg="Start cni network conf syncer for default" Jul 16 00:48:33.142981 containerd[1916]: time="2025-07-16T00:48:33.142970517Z" level=info msg="Start streaming server" Jul 16 00:48:33.143012 containerd[1916]: time="2025-07-16T00:48:33.142896590Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 16 00:48:33.143038 containerd[1916]: time="2025-07-16T00:48:33.142982391Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jul 16 00:48:33.143059 containerd[1916]: time="2025-07-16T00:48:33.143042676Z" level=info msg="runtime interface starting up..." 
Jul 16 00:48:33.143059 containerd[1916]: time="2025-07-16T00:48:33.143047666Z" level=info msg="starting plugins..." Jul 16 00:48:33.143086 containerd[1916]: time="2025-07-16T00:48:33.143058307Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jul 16 00:48:33.143116 containerd[1916]: time="2025-07-16T00:48:33.143033943Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 16 00:48:33.143161 containerd[1916]: time="2025-07-16T00:48:33.143152921Z" level=info msg="containerd successfully booted in 0.129146s" Jul 16 00:48:33.143184 systemd[1]: Started containerd.service - containerd container runtime. Jul 16 00:48:33.180360 kernel: EXT4-fs (sda9): resized filesystem to 116605649 Jul 16 00:48:33.205599 extend-filesystems[1889]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Jul 16 00:48:33.205599 extend-filesystems[1889]: old_desc_blocks = 1, new_desc_blocks = 56 Jul 16 00:48:33.205599 extend-filesystems[1889]: The filesystem on /dev/sda9 is now 116605649 (4k) blocks long. Jul 16 00:48:33.242440 extend-filesystems[1877]: Resized filesystem in /dev/sda9 Jul 16 00:48:33.206048 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 16 00:48:33.206181 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jul 16 00:48:33.936478 systemd-networkd[1829]: bond0: Gained IPv6LL Jul 16 00:48:33.936895 systemd-timesyncd[1831]: Network configuration changed, trying to establish connection. Jul 16 00:48:34.448772 systemd-timesyncd[1831]: Network configuration changed, trying to establish connection. Jul 16 00:48:34.448933 systemd-timesyncd[1831]: Network configuration changed, trying to establish connection. Jul 16 00:48:34.449906 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 16 00:48:34.460400 systemd[1]: Reached target network-online.target - Network is Online. 
Jul 16 00:48:34.470943 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 16 00:48:34.490745 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 16 00:48:34.512094 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jul 16 00:48:35.108517 kernel: mlx5_core 0000:02:00.0: lag map: port 1:1 port 2:2 Jul 16 00:48:35.108661 kernel: mlx5_core 0000:02:00.0: shared_fdb:0 mode:queue_affinity Jul 16 00:48:35.265387 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 16 00:48:35.276024 (kubelet)[2032]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 16 00:48:35.736329 kubelet[2032]: E0716 00:48:35.736248 2032 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 16 00:48:35.737283 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 16 00:48:35.737376 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 16 00:48:35.737598 systemd[1]: kubelet.service: Consumed 617ms CPU time, 265.6M memory peak. Jul 16 00:48:35.966611 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 16 00:48:35.976379 systemd[1]: Started sshd@0-147.75.90.137:22-147.75.109.163:56756.service - OpenSSH per-connection server daemon (147.75.109.163:56756). Jul 16 00:48:36.043264 sshd[2047]: Accepted publickey for core from 147.75.109.163 port 56756 ssh2: RSA SHA256:fZrzoayed5b4K5BhAhcpFAkec9IHprc+NMqVGejEH7o Jul 16 00:48:36.044557 sshd-session[2047]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 16 00:48:36.052283 systemd-logind[1904]: New session 1 of user core. 
Jul 16 00:48:36.053153 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 16 00:48:36.062368 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 16 00:48:36.092540 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 16 00:48:36.104098 systemd[1]: Starting user@500.service - User Manager for UID 500... Jul 16 00:48:36.122340 (systemd)[2051]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 16 00:48:36.124792 systemd-logind[1904]: New session c1 of user core. Jul 16 00:48:36.250411 systemd[2051]: Queued start job for default target default.target. Jul 16 00:48:36.262038 systemd[2051]: Created slice app.slice - User Application Slice. Jul 16 00:48:36.262071 systemd[2051]: Reached target paths.target - Paths. Jul 16 00:48:36.262094 systemd[2051]: Reached target timers.target - Timers. Jul 16 00:48:36.262770 systemd[2051]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 16 00:48:36.268344 systemd[2051]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 16 00:48:36.268418 systemd[2051]: Reached target sockets.target - Sockets. Jul 16 00:48:36.268443 systemd[2051]: Reached target basic.target - Basic System. Jul 16 00:48:36.268466 systemd[2051]: Reached target default.target - Main User Target. Jul 16 00:48:36.268484 systemd[2051]: Startup finished in 137ms. Jul 16 00:48:36.268553 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 16 00:48:36.291645 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 16 00:48:36.367672 systemd[1]: Started sshd@1-147.75.90.137:22-147.75.109.163:56758.service - OpenSSH per-connection server daemon (147.75.109.163:56758). 
Jul 16 00:48:36.419841 sshd[2062]: Accepted publickey for core from 147.75.109.163 port 56758 ssh2: RSA SHA256:fZrzoayed5b4K5BhAhcpFAkec9IHprc+NMqVGejEH7o Jul 16 00:48:36.420715 sshd-session[2062]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 16 00:48:36.424335 systemd-logind[1904]: New session 2 of user core. Jul 16 00:48:36.438648 systemd[1]: Started session-2.scope - Session 2 of User core. Jul 16 00:48:36.501917 sshd[2064]: Connection closed by 147.75.109.163 port 56758 Jul 16 00:48:36.502061 sshd-session[2062]: pam_unix(sshd:session): session closed for user core Jul 16 00:48:36.523000 systemd[1]: sshd@1-147.75.90.137:22-147.75.109.163:56758.service: Deactivated successfully. Jul 16 00:48:36.524052 systemd[1]: session-2.scope: Deactivated successfully. Jul 16 00:48:36.524733 systemd-logind[1904]: Session 2 logged out. Waiting for processes to exit. Jul 16 00:48:36.526376 systemd[1]: Started sshd@2-147.75.90.137:22-147.75.109.163:56774.service - OpenSSH per-connection server daemon (147.75.109.163:56774). Jul 16 00:48:36.537569 systemd-logind[1904]: Removed session 2. Jul 16 00:48:36.596390 sshd[2070]: Accepted publickey for core from 147.75.109.163 port 56774 ssh2: RSA SHA256:fZrzoayed5b4K5BhAhcpFAkec9IHprc+NMqVGejEH7o Jul 16 00:48:36.597915 sshd-session[2070]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 16 00:48:36.604021 systemd-logind[1904]: New session 3 of user core. Jul 16 00:48:36.616646 systemd[1]: Started session-3.scope - Session 3 of User core. Jul 16 00:48:36.617197 google_oslogin_nss_cache[1878]: oslogin_cache_refresh[1878]: Failure getting users, quitting Jul 16 00:48:36.617197 google_oslogin_nss_cache[1878]: oslogin_cache_refresh[1878]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. 
Jul 16 00:48:36.617165 oslogin_cache_refresh[1878]: Failure getting users, quitting Jul 16 00:48:36.617462 google_oslogin_nss_cache[1878]: oslogin_cache_refresh[1878]: Refreshing group entry cache Jul 16 00:48:36.617179 oslogin_cache_refresh[1878]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jul 16 00:48:36.617211 oslogin_cache_refresh[1878]: Refreshing group entry cache Jul 16 00:48:36.618098 google_oslogin_nss_cache[1878]: oslogin_cache_refresh[1878]: Failure getting groups, quitting Jul 16 00:48:36.618098 google_oslogin_nss_cache[1878]: oslogin_cache_refresh[1878]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jul 16 00:48:36.618043 oslogin_cache_refresh[1878]: Failure getting groups, quitting Jul 16 00:48:36.618050 oslogin_cache_refresh[1878]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jul 16 00:48:36.626897 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Jul 16 00:48:36.627042 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Jul 16 00:48:36.696854 sshd[2074]: Connection closed by 147.75.109.163 port 56774 Jul 16 00:48:36.697579 sshd-session[2070]: pam_unix(sshd:session): session closed for user core Jul 16 00:48:36.704436 systemd[1]: sshd@2-147.75.90.137:22-147.75.109.163:56774.service: Deactivated successfully. Jul 16 00:48:36.708242 systemd[1]: session-3.scope: Deactivated successfully. Jul 16 00:48:36.713079 systemd-logind[1904]: Session 3 logged out. Waiting for processes to exit. Jul 16 00:48:36.716112 systemd-logind[1904]: Removed session 3. Jul 16 00:48:37.367038 coreos-metadata[1870]: Jul 16 00:48:37.366 INFO Fetch successful Jul 16 00:48:37.417682 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jul 16 00:48:37.427760 systemd[1]: Starting packet-phone-home.service - Report Success to Packet... Jul 16 00:48:37.851599 systemd[1]: Finished packet-phone-home.service - Report Success to Packet. 
Jul 16 00:48:37.967223 coreos-metadata[1962]: Jul 16 00:48:37.967 INFO Fetch successful Jul 16 00:48:38.048274 unknown[1962]: wrote ssh authorized keys file for user: core Jul 16 00:48:38.079067 update-ssh-keys[2086]: Updated "/home/core/.ssh/authorized_keys" Jul 16 00:48:38.079396 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jul 16 00:48:38.090217 systemd[1]: Finished sshkeys.service. Jul 16 00:48:38.098804 systemd[1]: Reached target multi-user.target - Multi-User System. Jul 16 00:48:38.098981 systemd[1]: Startup finished in 4.513s (kernel) + 23.152s (initrd) + 9.729s (userspace) = 37.394s. Jul 16 00:48:38.148289 login[1996]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jul 16 00:48:38.148791 login[1997]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jul 16 00:48:38.151290 systemd-logind[1904]: New session 4 of user core. Jul 16 00:48:38.165587 systemd[1]: Started session-4.scope - Session 4 of User core. Jul 16 00:48:38.167416 systemd-logind[1904]: New session 5 of user core. Jul 16 00:48:38.168080 systemd[1]: Started session-5.scope - Session 5 of User core. Jul 16 00:48:38.990275 systemd-timesyncd[1831]: Network configuration changed, trying to establish connection. Jul 16 00:48:45.865054 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 16 00:48:45.868522 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 16 00:48:46.182507 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jul 16 00:48:46.205500 (kubelet)[2125]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 16 00:48:46.256071 kubelet[2125]: E0716 00:48:46.256015 2125 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 16 00:48:46.258190 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 16 00:48:46.258283 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 16 00:48:46.258478 systemd[1]: kubelet.service: Consumed 225ms CPU time, 115M memory peak. Jul 16 00:48:46.717460 systemd[1]: Started sshd@3-147.75.90.137:22-147.75.109.163:45594.service - OpenSSH per-connection server daemon (147.75.109.163:45594). Jul 16 00:48:46.761062 sshd[2143]: Accepted publickey for core from 147.75.109.163 port 45594 ssh2: RSA SHA256:fZrzoayed5b4K5BhAhcpFAkec9IHprc+NMqVGejEH7o Jul 16 00:48:46.761900 sshd-session[2143]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 16 00:48:46.765323 systemd-logind[1904]: New session 6 of user core. Jul 16 00:48:46.780596 systemd[1]: Started session-6.scope - Session 6 of User core. Jul 16 00:48:46.834010 sshd[2145]: Connection closed by 147.75.109.163 port 45594 Jul 16 00:48:46.834184 sshd-session[2143]: pam_unix(sshd:session): session closed for user core Jul 16 00:48:46.847619 systemd[1]: sshd@3-147.75.90.137:22-147.75.109.163:45594.service: Deactivated successfully. Jul 16 00:48:46.848469 systemd[1]: session-6.scope: Deactivated successfully. Jul 16 00:48:46.849030 systemd-logind[1904]: Session 6 logged out. Waiting for processes to exit. 
Jul 16 00:48:46.850426 systemd[1]: Started sshd@4-147.75.90.137:22-147.75.109.163:45596.service - OpenSSH per-connection server daemon (147.75.109.163:45596). Jul 16 00:48:46.850859 systemd-logind[1904]: Removed session 6. Jul 16 00:48:46.890737 sshd[2151]: Accepted publickey for core from 147.75.109.163 port 45596 ssh2: RSA SHA256:fZrzoayed5b4K5BhAhcpFAkec9IHprc+NMqVGejEH7o Jul 16 00:48:46.891949 sshd-session[2151]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 16 00:48:46.897200 systemd-logind[1904]: New session 7 of user core. Jul 16 00:48:46.907698 systemd[1]: Started session-7.scope - Session 7 of User core. Jul 16 00:48:46.964407 sshd[2153]: Connection closed by 147.75.109.163 port 45596 Jul 16 00:48:46.965206 sshd-session[2151]: pam_unix(sshd:session): session closed for user core Jul 16 00:48:46.989029 systemd[1]: sshd@4-147.75.90.137:22-147.75.109.163:45596.service: Deactivated successfully. Jul 16 00:48:46.989917 systemd[1]: session-7.scope: Deactivated successfully. Jul 16 00:48:46.990369 systemd-logind[1904]: Session 7 logged out. Waiting for processes to exit. Jul 16 00:48:46.991720 systemd[1]: Started sshd@5-147.75.90.137:22-147.75.109.163:45602.service - OpenSSH per-connection server daemon (147.75.109.163:45602). Jul 16 00:48:46.992173 systemd-logind[1904]: Removed session 7. Jul 16 00:48:47.036404 sshd[2159]: Accepted publickey for core from 147.75.109.163 port 45602 ssh2: RSA SHA256:fZrzoayed5b4K5BhAhcpFAkec9IHprc+NMqVGejEH7o Jul 16 00:48:47.037341 sshd-session[2159]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 16 00:48:47.041487 systemd-logind[1904]: New session 8 of user core. Jul 16 00:48:47.058119 systemd[1]: Started session-8.scope - Session 8 of User core. 
Jul 16 00:48:47.127979 sshd[2162]: Connection closed by 147.75.109.163 port 45602 Jul 16 00:48:47.128770 sshd-session[2159]: pam_unix(sshd:session): session closed for user core Jul 16 00:48:47.152582 systemd[1]: sshd@5-147.75.90.137:22-147.75.109.163:45602.service: Deactivated successfully. Jul 16 00:48:47.156812 systemd[1]: session-8.scope: Deactivated successfully. Jul 16 00:48:47.159289 systemd-logind[1904]: Session 8 logged out. Waiting for processes to exit. Jul 16 00:48:47.165684 systemd[1]: Started sshd@6-147.75.90.137:22-147.75.109.163:45610.service - OpenSSH per-connection server daemon (147.75.109.163:45610). Jul 16 00:48:47.168147 systemd-logind[1904]: Removed session 8. Jul 16 00:48:47.257578 sshd[2168]: Accepted publickey for core from 147.75.109.163 port 45610 ssh2: RSA SHA256:fZrzoayed5b4K5BhAhcpFAkec9IHprc+NMqVGejEH7o Jul 16 00:48:47.258685 sshd-session[2168]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 16 00:48:47.263268 systemd-logind[1904]: New session 9 of user core. Jul 16 00:48:47.284915 systemd[1]: Started session-9.scope - Session 9 of User core. Jul 16 00:48:47.355380 sudo[2171]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 16 00:48:47.355733 sudo[2171]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 16 00:48:47.388512 sudo[2171]: pam_unix(sudo:session): session closed for user root Jul 16 00:48:47.389349 sshd[2170]: Connection closed by 147.75.109.163 port 45610 Jul 16 00:48:47.389549 sshd-session[2168]: pam_unix(sshd:session): session closed for user core Jul 16 00:48:47.411630 systemd[1]: sshd@6-147.75.90.137:22-147.75.109.163:45610.service: Deactivated successfully. Jul 16 00:48:47.415870 systemd[1]: session-9.scope: Deactivated successfully. Jul 16 00:48:47.418384 systemd-logind[1904]: Session 9 logged out. Waiting for processes to exit. 
Jul 16 00:48:47.424477 systemd[1]: Started sshd@7-147.75.90.137:22-147.75.109.163:45618.service - OpenSSH per-connection server daemon (147.75.109.163:45618). Jul 16 00:48:47.426249 systemd-logind[1904]: Removed session 9. Jul 16 00:48:47.475737 sshd[2177]: Accepted publickey for core from 147.75.109.163 port 45618 ssh2: RSA SHA256:fZrzoayed5b4K5BhAhcpFAkec9IHprc+NMqVGejEH7o Jul 16 00:48:47.476506 sshd-session[2177]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 16 00:48:47.479595 systemd-logind[1904]: New session 10 of user core. Jul 16 00:48:47.490809 systemd[1]: Started session-10.scope - Session 10 of User core. Jul 16 00:48:47.554534 sudo[2181]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 16 00:48:47.554692 sudo[2181]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 16 00:48:47.563124 sudo[2181]: pam_unix(sudo:session): session closed for user root Jul 16 00:48:47.566037 sudo[2180]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jul 16 00:48:47.566226 sudo[2180]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 16 00:48:47.572451 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 16 00:48:47.622215 augenrules[2203]: No rules Jul 16 00:48:47.623577 systemd[1]: audit-rules.service: Deactivated successfully. Jul 16 00:48:47.624015 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 16 00:48:47.625567 sudo[2180]: pam_unix(sudo:session): session closed for user root Jul 16 00:48:47.627566 sshd[2179]: Connection closed by 147.75.109.163 port 45618 Jul 16 00:48:47.628121 sshd-session[2177]: pam_unix(sshd:session): session closed for user core Jul 16 00:48:47.651173 systemd[1]: sshd@7-147.75.90.137:22-147.75.109.163:45618.service: Deactivated successfully. 
Jul 16 00:48:47.654676 systemd[1]: session-10.scope: Deactivated successfully. Jul 16 00:48:47.656772 systemd-logind[1904]: Session 10 logged out. Waiting for processes to exit. Jul 16 00:48:47.662391 systemd[1]: Started sshd@8-147.75.90.137:22-147.75.109.163:34308.service - OpenSSH per-connection server daemon (147.75.109.163:34308). Jul 16 00:48:47.664166 systemd-logind[1904]: Removed session 10. Jul 16 00:48:47.749388 sshd[2212]: Accepted publickey for core from 147.75.109.163 port 34308 ssh2: RSA SHA256:fZrzoayed5b4K5BhAhcpFAkec9IHprc+NMqVGejEH7o Jul 16 00:48:47.750534 sshd-session[2212]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 16 00:48:47.755302 systemd-logind[1904]: New session 11 of user core. Jul 16 00:48:47.770727 systemd[1]: Started session-11.scope - Session 11 of User core. Jul 16 00:48:47.825900 sudo[2215]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 16 00:48:47.826062 sudo[2215]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 16 00:48:48.179690 systemd[1]: Starting docker.service - Docker Application Container Engine... Jul 16 00:48:48.192706 (dockerd)[2241]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jul 16 00:48:48.404883 dockerd[2241]: time="2025-07-16T00:48:48.404829833Z" level=info msg="Starting up" Jul 16 00:48:48.405634 dockerd[2241]: time="2025-07-16T00:48:48.405590711Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jul 16 00:48:48.433552 dockerd[2241]: time="2025-07-16T00:48:48.433464675Z" level=info msg="Loading containers: start." Jul 16 00:48:48.444378 kernel: Initializing XFRM netlink socket Jul 16 00:48:48.654852 systemd-timesyncd[1831]: Network configuration changed, trying to establish connection. 
Jul 16 00:48:49.087417 systemd-timesyncd[1831]: Contacted time server [2606:4700:f1::1]:123 (2.flatcar.pool.ntp.org). Jul 16 00:48:49.087444 systemd-timesyncd[1831]: Initial clock synchronization to Wed 2025-07-16 00:48:49.087322 UTC. Jul 16 00:48:49.087460 systemd-resolved[1830]: Clock change detected. Flushing caches. Jul 16 00:48:49.104346 systemd-networkd[1829]: docker0: Link UP Jul 16 00:48:49.105573 dockerd[2241]: time="2025-07-16T00:48:49.105528046Z" level=info msg="Loading containers: done." Jul 16 00:48:49.112673 dockerd[2241]: time="2025-07-16T00:48:49.112623823Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 16 00:48:49.112673 dockerd[2241]: time="2025-07-16T00:48:49.112664227Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1 Jul 16 00:48:49.112759 dockerd[2241]: time="2025-07-16T00:48:49.112719049Z" level=info msg="Initializing buildkit" Jul 16 00:48:49.124077 dockerd[2241]: time="2025-07-16T00:48:49.124011744Z" level=info msg="Completed buildkit initialization" Jul 16 00:48:49.126368 dockerd[2241]: time="2025-07-16T00:48:49.126335147Z" level=info msg="Daemon has completed initialization" Jul 16 00:48:49.126402 dockerd[2241]: time="2025-07-16T00:48:49.126363058Z" level=info msg="API listen on /run/docker.sock" Jul 16 00:48:49.126465 systemd[1]: Started docker.service - Docker Application Container Engine. Jul 16 00:48:49.959733 containerd[1916]: time="2025-07-16T00:48:49.959662474Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.7\"" Jul 16 00:48:50.501290 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2407018500.mount: Deactivated successfully. 
Jul 16 00:48:51.377243 containerd[1916]: time="2025-07-16T00:48:51.377188859Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 16 00:48:51.377463 containerd[1916]: time="2025-07-16T00:48:51.377360243Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.7: active requests=0, bytes read=28799994" Jul 16 00:48:51.377756 containerd[1916]: time="2025-07-16T00:48:51.377716817Z" level=info msg="ImageCreate event name:\"sha256:761ae2258f1825c2079bd41bcc1da2c9bda8b5e902aa147c14896491dfca0f16\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 16 00:48:51.379167 containerd[1916]: time="2025-07-16T00:48:51.379122865Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:e04f6223d52f8041c46ef4545ccaf07894b1ca5851506a9142706d4206911f64\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 16 00:48:51.379685 containerd[1916]: time="2025-07-16T00:48:51.379645353Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.7\" with image id \"sha256:761ae2258f1825c2079bd41bcc1da2c9bda8b5e902aa147c14896491dfca0f16\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:e04f6223d52f8041c46ef4545ccaf07894b1ca5851506a9142706d4206911f64\", size \"28796794\" in 1.419918001s" Jul 16 00:48:51.379685 containerd[1916]: time="2025-07-16T00:48:51.379661945Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.7\" returns image reference \"sha256:761ae2258f1825c2079bd41bcc1da2c9bda8b5e902aa147c14896491dfca0f16\"" Jul 16 00:48:51.380067 containerd[1916]: time="2025-07-16T00:48:51.380025007Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.7\"" Jul 16 00:48:52.645349 containerd[1916]: time="2025-07-16T00:48:52.645296474Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.7\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 16 00:48:52.645569 containerd[1916]: time="2025-07-16T00:48:52.645496718Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.7: active requests=0, bytes read=24783636" Jul 16 00:48:52.645865 containerd[1916]: time="2025-07-16T00:48:52.645852560Z" level=info msg="ImageCreate event name:\"sha256:87f922d0bde0db7ffcb2174ba37bdab8fdd169a41e1882fe5aa308bb57e44fda\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 16 00:48:52.647098 containerd[1916]: time="2025-07-16T00:48:52.647086847Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:6c7f288ab0181e496606a43dbade954819af2b1e1c0552becf6903436e16ea75\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 16 00:48:52.647613 containerd[1916]: time="2025-07-16T00:48:52.647600869Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.7\" with image id \"sha256:87f922d0bde0db7ffcb2174ba37bdab8fdd169a41e1882fe5aa308bb57e44fda\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:6c7f288ab0181e496606a43dbade954819af2b1e1c0552becf6903436e16ea75\", size \"26385470\" in 1.26755876s" Jul 16 00:48:52.647643 containerd[1916]: time="2025-07-16T00:48:52.647618096Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.7\" returns image reference \"sha256:87f922d0bde0db7ffcb2174ba37bdab8fdd169a41e1882fe5aa308bb57e44fda\"" Jul 16 00:48:52.647931 containerd[1916]: time="2025-07-16T00:48:52.647877125Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.7\"" Jul 16 00:48:53.662114 containerd[1916]: time="2025-07-16T00:48:53.662057613Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 16 00:48:53.662326 containerd[1916]: time="2025-07-16T00:48:53.662258654Z" 
level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.7: active requests=0, bytes read=19176921" Jul 16 00:48:53.662678 containerd[1916]: time="2025-07-16T00:48:53.662637970Z" level=info msg="ImageCreate event name:\"sha256:36cc9c80994ebf29b8e1a366d7e736b273a6c6a60bacb5446944cc0953416245\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 16 00:48:53.663745 containerd[1916]: time="2025-07-16T00:48:53.663732699Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:1c35a970b4450b4285531495be82cda1f6549952f70d6e3de8db57c20a3da4ce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 16 00:48:53.664715 containerd[1916]: time="2025-07-16T00:48:53.664672326Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.7\" with image id \"sha256:36cc9c80994ebf29b8e1a366d7e736b273a6c6a60bacb5446944cc0953416245\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:1c35a970b4450b4285531495be82cda1f6549952f70d6e3de8db57c20a3da4ce\", size \"20778773\" in 1.016778852s" Jul 16 00:48:53.664715 containerd[1916]: time="2025-07-16T00:48:53.664689315Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.7\" returns image reference \"sha256:36cc9c80994ebf29b8e1a366d7e736b273a6c6a60bacb5446944cc0953416245\"" Jul 16 00:48:53.664966 containerd[1916]: time="2025-07-16T00:48:53.664953592Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.7\"" Jul 16 00:48:53.758193 systemd[1]: Started sshd@9-147.75.90.137:22-34.121.50.154:34566.service - OpenSSH per-connection server daemon (34.121.50.154:34566). Jul 16 00:48:53.996144 sshd[2538]: Connection closed by authenticating user root 34.121.50.154 port 34566 [preauth] Jul 16 00:48:54.000887 systemd[1]: sshd@9-147.75.90.137:22-34.121.50.154:34566.service: Deactivated successfully. 
Jul 16 00:48:54.068707 systemd[1]: Started sshd@10-147.75.90.137:22-34.121.50.154:38130.service - OpenSSH per-connection server daemon (34.121.50.154:38130). Jul 16 00:48:54.269580 sshd[2543]: Invalid user admin from 34.121.50.154 port 38130 Jul 16 00:48:54.311669 sshd[2543]: Connection closed by invalid user admin 34.121.50.154 port 38130 [preauth] Jul 16 00:48:54.312959 systemd[1]: sshd@10-147.75.90.137:22-34.121.50.154:38130.service: Deactivated successfully. Jul 16 00:48:54.365410 systemd[1]: Started sshd@11-147.75.90.137:22-34.121.50.154:38138.service - OpenSSH per-connection server daemon (34.121.50.154:38138). Jul 16 00:48:54.438139 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1120379995.mount: Deactivated successfully. Jul 16 00:48:54.545393 sshd[2552]: Invalid user admin from 34.121.50.154 port 38138 Jul 16 00:48:54.586729 sshd[2552]: Connection closed by invalid user admin 34.121.50.154 port 38138 [preauth] Jul 16 00:48:54.587609 systemd[1]: sshd@11-147.75.90.137:22-34.121.50.154:38138.service: Deactivated successfully. 
Jul 16 00:48:54.634884 containerd[1916]: time="2025-07-16T00:48:54.634813100Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 16 00:48:54.635042 containerd[1916]: time="2025-07-16T00:48:54.635000133Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.7: active requests=0, bytes read=30895380" Jul 16 00:48:54.635356 containerd[1916]: time="2025-07-16T00:48:54.635319898Z" level=info msg="ImageCreate event name:\"sha256:d5bc66d8682fdab0735e869a3f77730df378af7fd2505c1f4d6374ad3dbd181c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 16 00:48:54.636072 containerd[1916]: time="2025-07-16T00:48:54.636027931Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:8d589a18b5424f77a784ef2f00feffac0ef210414100822f1c120f0d7221def3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 16 00:48:54.636442 containerd[1916]: time="2025-07-16T00:48:54.636400734Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.7\" with image id \"sha256:d5bc66d8682fdab0735e869a3f77730df378af7fd2505c1f4d6374ad3dbd181c\", repo tag \"registry.k8s.io/kube-proxy:v1.32.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:8d589a18b5424f77a784ef2f00feffac0ef210414100822f1c120f0d7221def3\", size \"30894399\" in 971.43156ms" Jul 16 00:48:54.636442 containerd[1916]: time="2025-07-16T00:48:54.636416270Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.7\" returns image reference \"sha256:d5bc66d8682fdab0735e869a3f77730df378af7fd2505c1f4d6374ad3dbd181c\"" Jul 16 00:48:54.636663 containerd[1916]: time="2025-07-16T00:48:54.636649320Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jul 16 00:48:54.644459 systemd[1]: Started sshd@12-147.75.90.137:22-34.121.50.154:38142.service - OpenSSH per-connection server daemon (34.121.50.154:38142). 
Jul 16 00:48:54.838725 sshd[2561]: Invalid user forum from 34.121.50.154 port 38142 Jul 16 00:48:54.881732 sshd[2561]: Connection closed by invalid user forum 34.121.50.154 port 38142 [preauth] Jul 16 00:48:54.886633 systemd[1]: sshd@12-147.75.90.137:22-34.121.50.154:38142.service: Deactivated successfully. Jul 16 00:48:54.948733 systemd[1]: Started sshd@13-147.75.90.137:22-34.121.50.154:38148.service - OpenSSH per-connection server daemon (34.121.50.154:38148). Jul 16 00:48:55.131745 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1452232309.mount: Deactivated successfully. Jul 16 00:48:55.164841 sshd[2567]: Invalid user azureuser from 34.121.50.154 port 38148 Jul 16 00:48:55.205702 sshd[2567]: Connection closed by invalid user azureuser 34.121.50.154 port 38148 [preauth] Jul 16 00:48:55.206515 systemd[1]: sshd@13-147.75.90.137:22-34.121.50.154:38148.service: Deactivated successfully. Jul 16 00:48:55.258540 systemd[1]: Started sshd@14-147.75.90.137:22-34.121.50.154:38160.service - OpenSSH per-connection server daemon (34.121.50.154:38160). Jul 16 00:48:55.437334 sshd[2592]: Invalid user jenkins from 34.121.50.154 port 38160 Jul 16 00:48:55.478590 sshd[2592]: Connection closed by invalid user jenkins 34.121.50.154 port 38160 [preauth] Jul 16 00:48:55.479392 systemd[1]: sshd@14-147.75.90.137:22-34.121.50.154:38160.service: Deactivated successfully. Jul 16 00:48:55.537712 systemd[1]: Started sshd@15-147.75.90.137:22-34.121.50.154:38162.service - OpenSSH per-connection server daemon (34.121.50.154:38162). 
Jul 16 00:48:55.661580 containerd[1916]: time="2025-07-16T00:48:55.661557097Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 16 00:48:55.661861 containerd[1916]: time="2025-07-16T00:48:55.661710375Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Jul 16 00:48:55.662199 containerd[1916]: time="2025-07-16T00:48:55.662186722Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 16 00:48:55.663975 containerd[1916]: time="2025-07-16T00:48:55.663954663Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 16 00:48:55.665046 containerd[1916]: time="2025-07-16T00:48:55.665002427Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.028337926s" Jul 16 00:48:55.665046 containerd[1916]: time="2025-07-16T00:48:55.665019329Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Jul 16 00:48:55.665276 containerd[1916]: time="2025-07-16T00:48:55.665260725Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jul 16 00:48:55.782190 sshd[2629]: Connection closed by authenticating user root 34.121.50.154 port 38162 [preauth] Jul 16 00:48:55.786872 systemd[1]: 
sshd@15-147.75.90.137:22-34.121.50.154:38162.service: Deactivated successfully. Jul 16 00:48:56.166667 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount933009436.mount: Deactivated successfully. Jul 16 00:48:56.167691 containerd[1916]: time="2025-07-16T00:48:56.167650133Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 16 00:48:56.167895 containerd[1916]: time="2025-07-16T00:48:56.167835694Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Jul 16 00:48:56.168240 containerd[1916]: time="2025-07-16T00:48:56.168204591Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 16 00:48:56.169248 containerd[1916]: time="2025-07-16T00:48:56.169209103Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 16 00:48:56.169695 containerd[1916]: time="2025-07-16T00:48:56.169654117Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 504.374123ms" Jul 16 00:48:56.169695 containerd[1916]: time="2025-07-16T00:48:56.169667977Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jul 16 00:48:56.169977 containerd[1916]: 
time="2025-07-16T00:48:56.169945604Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jul 16 00:48:56.711961 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jul 16 00:48:56.712897 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 16 00:48:56.717484 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3920216542.mount: Deactivated successfully. Jul 16 00:48:56.848456 systemd[1]: Started sshd@16-147.75.90.137:22-34.121.50.154:38170.service - OpenSSH per-connection server daemon (34.121.50.154:38170). Jul 16 00:48:57.017637 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 16 00:48:57.019789 (kubelet)[2665]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 16 00:48:57.029747 sshd[2654]: Invalid user oracle from 34.121.50.154 port 38170 Jul 16 00:48:57.043051 kubelet[2665]: E0716 00:48:57.042983 2665 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 16 00:48:57.044167 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 16 00:48:57.044275 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 16 00:48:57.044542 systemd[1]: kubelet.service: Consumed 109ms CPU time, 115.9M memory peak. Jul 16 00:48:57.071210 sshd[2654]: Connection closed by invalid user oracle 34.121.50.154 port 38170 [preauth] Jul 16 00:48:57.072216 systemd[1]: sshd@16-147.75.90.137:22-34.121.50.154:38170.service: Deactivated successfully. Jul 16 00:48:57.127364 systemd[1]: Started sshd@17-147.75.90.137:22-34.121.50.154:38180.service - OpenSSH per-connection server daemon (34.121.50.154:38180). 
Jul 16 00:48:57.307368 sshd[2718]: Invalid user max from 34.121.50.154 port 38180 Jul 16 00:48:57.347954 sshd[2718]: Connection closed by invalid user max 34.121.50.154 port 38180 [preauth] Jul 16 00:48:57.349255 systemd[1]: sshd@17-147.75.90.137:22-34.121.50.154:38180.service: Deactivated successfully. Jul 16 00:48:57.403538 systemd[1]: Started sshd@18-147.75.90.137:22-34.121.50.154:38186.service - OpenSSH per-connection server daemon (34.121.50.154:38186). Jul 16 00:48:57.581436 sshd[2727]: Invalid user kafka from 34.121.50.154 port 38186 Jul 16 00:48:57.621967 sshd[2727]: Connection closed by invalid user kafka 34.121.50.154 port 38186 [preauth] Jul 16 00:48:57.622860 systemd[1]: sshd@18-147.75.90.137:22-34.121.50.154:38186.service: Deactivated successfully. Jul 16 00:48:57.675446 systemd[1]: Started sshd@19-147.75.90.137:22-34.121.50.154:38194.service - OpenSSH per-connection server daemon (34.121.50.154:38194). Jul 16 00:48:57.756096 containerd[1916]: time="2025-07-16T00:48:57.756040314Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 16 00:48:57.756322 containerd[1916]: time="2025-07-16T00:48:57.756251606Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57551360" Jul 16 00:48:57.756750 containerd[1916]: time="2025-07-16T00:48:57.756709599Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 16 00:48:57.758170 containerd[1916]: time="2025-07-16T00:48:57.758130003Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 16 00:48:57.758753 containerd[1916]: time="2025-07-16T00:48:57.758710461Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" 
with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 1.588750343s" Jul 16 00:48:57.758753 containerd[1916]: time="2025-07-16T00:48:57.758730065Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Jul 16 00:48:57.902250 sshd[2732]: Connection closed by authenticating user root 34.121.50.154 port 38194 [preauth] Jul 16 00:48:57.905718 systemd[1]: sshd@19-147.75.90.137:22-34.121.50.154:38194.service: Deactivated successfully. Jul 16 00:48:57.965366 systemd[1]: Started sshd@20-147.75.90.137:22-34.121.50.154:38202.service - OpenSSH per-connection server daemon (34.121.50.154:38202). Jul 16 00:48:58.197890 sshd[2758]: Invalid user ubuntu from 34.121.50.154 port 38202 Jul 16 00:48:58.239319 sshd[2758]: Connection closed by invalid user ubuntu 34.121.50.154 port 38202 [preauth] Jul 16 00:48:58.239996 systemd[1]: sshd@20-147.75.90.137:22-34.121.50.154:38202.service: Deactivated successfully. Jul 16 00:48:58.289220 systemd[1]: Started sshd@21-147.75.90.137:22-34.121.50.154:38212.service - OpenSSH per-connection server daemon (34.121.50.154:38212). Jul 16 00:48:58.467327 sshd[2789]: Invalid user odoo from 34.121.50.154 port 38212 Jul 16 00:48:58.508247 sshd[2789]: Connection closed by invalid user odoo 34.121.50.154 port 38212 [preauth] Jul 16 00:48:58.509003 systemd[1]: sshd@21-147.75.90.137:22-34.121.50.154:38212.service: Deactivated successfully. Jul 16 00:48:58.560381 systemd[1]: Started sshd@22-147.75.90.137:22-34.121.50.154:38220.service - OpenSSH per-connection server daemon (34.121.50.154:38220). 
Jul 16 00:48:58.741305 sshd[2794]: Invalid user service from 34.121.50.154 port 38220 Jul 16 00:48:58.783004 sshd[2794]: Connection closed by invalid user service 34.121.50.154 port 38220 [preauth] Jul 16 00:48:58.783649 systemd[1]: sshd@22-147.75.90.137:22-34.121.50.154:38220.service: Deactivated successfully. Jul 16 00:48:58.834138 systemd[1]: Started sshd@23-147.75.90.137:22-34.121.50.154:38234.service - OpenSSH per-connection server daemon (34.121.50.154:38234). Jul 16 00:48:59.016150 sshd[2800]: Invalid user db from 34.121.50.154 port 38234 Jul 16 00:48:59.058267 sshd[2800]: Connection closed by invalid user db 34.121.50.154 port 38234 [preauth] Jul 16 00:48:59.058876 systemd[1]: sshd@23-147.75.90.137:22-34.121.50.154:38234.service: Deactivated successfully. Jul 16 00:48:59.112152 systemd[1]: Started sshd@24-147.75.90.137:22-34.121.50.154:38246.service - OpenSSH per-connection server daemon (34.121.50.154:38246). Jul 16 00:48:59.290739 sshd[2805]: Invalid user ubuntu from 34.121.50.154 port 38246 Jul 16 00:48:59.331425 sshd[2805]: Connection closed by invalid user ubuntu 34.121.50.154 port 38246 [preauth] Jul 16 00:48:59.332088 systemd[1]: sshd@24-147.75.90.137:22-34.121.50.154:38246.service: Deactivated successfully. Jul 16 00:48:59.388157 systemd[1]: Started sshd@25-147.75.90.137:22-34.121.50.154:38248.service - OpenSSH per-connection server daemon (34.121.50.154:38248). Jul 16 00:48:59.412046 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 16 00:48:59.412135 systemd[1]: kubelet.service: Consumed 109ms CPU time, 115.9M memory peak. Jul 16 00:48:59.413329 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 16 00:48:59.427575 systemd[1]: Reload requested from client PID 2817 ('systemctl') (unit session-11.scope)... Jul 16 00:48:59.427582 systemd[1]: Reloading... Jul 16 00:48:59.475949 zram_generator::config[2863]: No configuration found. 
Jul 16 00:48:59.540346 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 16 00:48:59.635793 systemd[1]: Reloading finished in 207 ms. Jul 16 00:48:59.685503 sshd[2810]: Invalid user minecraft from 34.121.50.154 port 38248 Jul 16 00:48:59.694552 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jul 16 00:48:59.694625 systemd[1]: kubelet.service: Failed with result 'signal'. Jul 16 00:48:59.694819 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 16 00:48:59.696543 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 16 00:48:59.726556 sshd[2810]: Connection closed by invalid user minecraft 34.121.50.154 port 38248 [preauth] Jul 16 00:48:59.730518 systemd[1]: sshd@25-147.75.90.137:22-34.121.50.154:38248.service: Deactivated successfully. Jul 16 00:48:59.783464 systemd[1]: Started sshd@26-147.75.90.137:22-34.121.50.154:38260.service - OpenSSH per-connection server daemon (34.121.50.154:38260). Jul 16 00:48:59.968840 sshd[2927]: Invalid user git from 34.121.50.154 port 38260 Jul 16 00:49:00.009997 sshd[2927]: Connection closed by invalid user git 34.121.50.154 port 38260 [preauth] Jul 16 00:49:00.010609 systemd[1]: sshd@26-147.75.90.137:22-34.121.50.154:38260.service: Deactivated successfully. Jul 16 00:49:00.026196 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 16 00:49:00.028927 (kubelet)[2934]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 16 00:49:00.049769 kubelet[2934]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jul 16 00:49:00.049769 kubelet[2934]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 16 00:49:00.049769 kubelet[2934]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 16 00:49:00.050023 kubelet[2934]: I0716 00:49:00.049773 2934 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 16 00:49:00.065544 systemd[1]: Started sshd@27-147.75.90.137:22-34.121.50.154:38268.service - OpenSSH per-connection server daemon (34.121.50.154:38268). Jul 16 00:49:00.248269 sshd[2954]: Invalid user ftptest from 34.121.50.154 port 38268 Jul 16 00:49:00.289373 sshd[2954]: Connection closed by invalid user ftptest 34.121.50.154 port 38268 [preauth] Jul 16 00:49:00.290119 systemd[1]: sshd@27-147.75.90.137:22-34.121.50.154:38268.service: Deactivated successfully. Jul 16 00:49:00.344263 systemd[1]: Started sshd@28-147.75.90.137:22-34.121.50.154:38280.service - OpenSSH per-connection server daemon (34.121.50.154:38280). 
Jul 16 00:49:00.348786 kubelet[2934]: I0716 00:49:00.348744 2934 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jul 16 00:49:00.348786 kubelet[2934]: I0716 00:49:00.348757 2934 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 16 00:49:00.349019 kubelet[2934]: I0716 00:49:00.348983 2934 server.go:954] "Client rotation is on, will bootstrap in background" Jul 16 00:49:00.372351 kubelet[2934]: E0716 00:49:00.372305 2934 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://147.75.90.137:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 147.75.90.137:6443: connect: connection refused" logger="UnhandledError" Jul 16 00:49:00.372710 kubelet[2934]: I0716 00:49:00.372673 2934 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 16 00:49:00.377184 kubelet[2934]: I0716 00:49:00.377173 2934 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 16 00:49:00.386089 kubelet[2934]: I0716 00:49:00.386080 2934 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 16 00:49:00.386252 kubelet[2934]: I0716 00:49:00.386209 2934 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 16 00:49:00.386384 kubelet[2934]: I0716 00:49:00.386224 2934 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4372.0.1-n-bd48696324","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 16 00:49:00.386959 kubelet[2934]: I0716 00:49:00.386922 2934 topology_manager.go:138] "Creating topology manager 
with none policy" Jul 16 00:49:00.386959 kubelet[2934]: I0716 00:49:00.386933 2934 container_manager_linux.go:304] "Creating device plugin manager" Jul 16 00:49:00.387038 kubelet[2934]: I0716 00:49:00.387002 2934 state_mem.go:36] "Initialized new in-memory state store" Jul 16 00:49:00.390140 kubelet[2934]: I0716 00:49:00.390094 2934 kubelet.go:446] "Attempting to sync node with API server" Jul 16 00:49:00.390140 kubelet[2934]: I0716 00:49:00.390109 2934 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 16 00:49:00.390140 kubelet[2934]: I0716 00:49:00.390121 2934 kubelet.go:352] "Adding apiserver pod source" Jul 16 00:49:00.390140 kubelet[2934]: I0716 00:49:00.390128 2934 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 16 00:49:00.392790 kubelet[2934]: I0716 00:49:00.392781 2934 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Jul 16 00:49:00.393064 kubelet[2934]: I0716 00:49:00.393055 2934 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 16 00:49:00.393102 kubelet[2934]: W0716 00:49:00.393090 2934 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Jul 16 00:49:00.395070 kubelet[2934]: I0716 00:49:00.395061 2934 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 16 00:49:00.395104 kubelet[2934]: I0716 00:49:00.395080 2934 server.go:1287] "Started kubelet" Jul 16 00:49:00.395265 kubelet[2934]: I0716 00:49:00.395200 2934 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jul 16 00:49:00.395519 kubelet[2934]: W0716 00:49:00.395499 2934 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://147.75.90.137:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4372.0.1-n-bd48696324&limit=500&resourceVersion=0": dial tcp 147.75.90.137:6443: connect: connection refused Jul 16 00:49:00.395519 kubelet[2934]: W0716 00:49:00.395498 2934 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://147.75.90.137:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 147.75.90.137:6443: connect: connection refused Jul 16 00:49:00.395610 kubelet[2934]: E0716 00:49:00.395528 2934 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://147.75.90.137:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4372.0.1-n-bd48696324&limit=500&resourceVersion=0\": dial tcp 147.75.90.137:6443: connect: connection refused" logger="UnhandledError" Jul 16 00:49:00.395610 kubelet[2934]: E0716 00:49:00.395529 2934 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://147.75.90.137:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 147.75.90.137:6443: connect: connection refused" logger="UnhandledError" Jul 16 00:49:00.396015 kubelet[2934]: I0716 00:49:00.396005 2934 server.go:479] "Adding debug handlers to kubelet server" Jul 16 00:49:00.396705 kubelet[2934]: 
I0716 00:49:00.396696 2934 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 16 00:49:00.396747 kubelet[2934]: I0716 00:49:00.396705 2934 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 16 00:49:00.397066 kubelet[2934]: E0716 00:49:00.397034 2934 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4372.0.1-n-bd48696324\" not found" Jul 16 00:49:00.397288 kubelet[2934]: I0716 00:49:00.397274 2934 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 16 00:49:00.397319 kubelet[2934]: I0716 00:49:00.397306 2934 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 16 00:49:00.397370 kubelet[2934]: I0716 00:49:00.397364 2934 reconciler.go:26] "Reconciler: start to sync state" Jul 16 00:49:00.400308 kubelet[2934]: E0716 00:49:00.400225 2934 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://147.75.90.137:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4372.0.1-n-bd48696324?timeout=10s\": dial tcp 147.75.90.137:6443: connect: connection refused" interval="200ms" Jul 16 00:49:00.400525 kubelet[2934]: I0716 00:49:00.400338 2934 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 16 00:49:00.400643 kubelet[2934]: W0716 00:49:00.400613 2934 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://147.75.90.137:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 147.75.90.137:6443: connect: connection refused Jul 16 00:49:00.400690 kubelet[2934]: E0716 00:49:00.400657 2934 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get 
\"https://147.75.90.137:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 147.75.90.137:6443: connect: connection refused" logger="UnhandledError" Jul 16 00:49:00.400724 kubelet[2934]: I0716 00:49:00.400715 2934 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 16 00:49:00.401143 kubelet[2934]: I0716 00:49:00.401131 2934 factory.go:221] Registration of the systemd container factory successfully Jul 16 00:49:00.401228 kubelet[2934]: I0716 00:49:00.401218 2934 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 16 00:49:00.401732 kubelet[2934]: I0716 00:49:00.401721 2934 factory.go:221] Registration of the containerd container factory successfully Jul 16 00:49:00.402096 kubelet[2934]: E0716 00:49:00.402082 2934 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 16 00:49:00.403324 kubelet[2934]: E0716 00:49:00.402479 2934 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://147.75.90.137:6443/api/v1/namespaces/default/events\": dial tcp 147.75.90.137:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4372.0.1-n-bd48696324.185294ea9e07172e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4372.0.1-n-bd48696324,UID:ci-4372.0.1-n-bd48696324,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4372.0.1-n-bd48696324,},FirstTimestamp:2025-07-16 00:49:00.395067182 +0000 UTC m=+0.364093607,LastTimestamp:2025-07-16 00:49:00.395067182 +0000 UTC m=+0.364093607,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4372.0.1-n-bd48696324,}" Jul 16 00:49:00.408854 kubelet[2934]: I0716 00:49:00.408843 2934 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 16 00:49:00.408854 kubelet[2934]: I0716 00:49:00.408854 2934 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 16 00:49:00.408920 kubelet[2934]: I0716 00:49:00.408865 2934 state_mem.go:36] "Initialized new in-memory state store" Jul 16 00:49:00.409012 kubelet[2934]: I0716 00:49:00.408999 2934 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 16 00:49:00.409551 kubelet[2934]: I0716 00:49:00.409541 2934 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 16 00:49:00.409551 kubelet[2934]: I0716 00:49:00.409552 2934 status_manager.go:227] "Starting to sync pod status with apiserver" Jul 16 00:49:00.409600 kubelet[2934]: I0716 00:49:00.409564 2934 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jul 16 00:49:00.409600 kubelet[2934]: I0716 00:49:00.409573 2934 kubelet.go:2382] "Starting kubelet main sync loop" Jul 16 00:49:00.409600 kubelet[2934]: E0716 00:49:00.409596 2934 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 16 00:49:00.409749 kubelet[2934]: I0716 00:49:00.409742 2934 policy_none.go:49] "None policy: Start" Jul 16 00:49:00.409784 kubelet[2934]: I0716 00:49:00.409751 2934 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 16 00:49:00.409784 kubelet[2934]: I0716 00:49:00.409758 2934 state_mem.go:35] "Initializing new in-memory state store" Jul 16 00:49:00.409851 kubelet[2934]: W0716 00:49:00.409811 2934 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://147.75.90.137:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 147.75.90.137:6443: connect: connection refused Jul 16 00:49:00.409851 kubelet[2934]: E0716 00:49:00.409835 2934 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://147.75.90.137:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 147.75.90.137:6443: connect: connection refused" logger="UnhandledError" Jul 16 00:49:00.412148 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jul 16 00:49:00.425652 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
Jul 16 00:49:00.427665 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jul 16 00:49:00.438468 kubelet[2934]: I0716 00:49:00.438425 2934 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 16 00:49:00.438620 kubelet[2934]: I0716 00:49:00.438585 2934 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 16 00:49:00.438620 kubelet[2934]: I0716 00:49:00.438593 2934 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 16 00:49:00.438720 kubelet[2934]: I0716 00:49:00.438707 2934 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 16 00:49:00.439178 kubelet[2934]: E0716 00:49:00.439162 2934 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jul 16 00:49:00.439228 kubelet[2934]: E0716 00:49:00.439187 2934 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4372.0.1-n-bd48696324\" not found" Jul 16 00:49:00.532867 systemd[1]: Created slice kubepods-burstable-pod76825ec198001eb2348c9343fde6d3b3.slice - libcontainer container kubepods-burstable-pod76825ec198001eb2348c9343fde6d3b3.slice. 
Jul 16 00:49:00.542592 kubelet[2934]: I0716 00:49:00.542507 2934 kubelet_node_status.go:75] "Attempting to register node" node="ci-4372.0.1-n-bd48696324" Jul 16 00:49:00.543333 kubelet[2934]: E0716 00:49:00.543237 2934 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://147.75.90.137:6443/api/v1/nodes\": dial tcp 147.75.90.137:6443: connect: connection refused" node="ci-4372.0.1-n-bd48696324" Jul 16 00:49:00.557663 kubelet[2934]: E0716 00:49:00.557598 2934 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372.0.1-n-bd48696324\" not found" node="ci-4372.0.1-n-bd48696324" Jul 16 00:49:00.567028 sshd[2961]: Connection closed by authenticating user root 34.121.50.154 port 38280 [preauth] Jul 16 00:49:00.568210 systemd[1]: Created slice kubepods-burstable-pod5834da51e102f537123b7c91d6ffbd09.slice - libcontainer container kubepods-burstable-pod5834da51e102f537123b7c91d6ffbd09.slice. Jul 16 00:49:00.592499 systemd[1]: sshd@28-147.75.90.137:22-34.121.50.154:38280.service: Deactivated successfully. 
Jul 16 00:49:00.598681 kubelet[2934]: I0716 00:49:00.598579 2934 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/76825ec198001eb2348c9343fde6d3b3-ca-certs\") pod \"kube-apiserver-ci-4372.0.1-n-bd48696324\" (UID: \"76825ec198001eb2348c9343fde6d3b3\") " pod="kube-system/kube-apiserver-ci-4372.0.1-n-bd48696324" Jul 16 00:49:00.598894 kubelet[2934]: I0716 00:49:00.598690 2934 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/76825ec198001eb2348c9343fde6d3b3-k8s-certs\") pod \"kube-apiserver-ci-4372.0.1-n-bd48696324\" (UID: \"76825ec198001eb2348c9343fde6d3b3\") " pod="kube-system/kube-apiserver-ci-4372.0.1-n-bd48696324" Jul 16 00:49:00.598894 kubelet[2934]: I0716 00:49:00.598765 2934 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5834da51e102f537123b7c91d6ffbd09-flexvolume-dir\") pod \"kube-controller-manager-ci-4372.0.1-n-bd48696324\" (UID: \"5834da51e102f537123b7c91d6ffbd09\") " pod="kube-system/kube-controller-manager-ci-4372.0.1-n-bd48696324" Jul 16 00:49:00.598894 kubelet[2934]: I0716 00:49:00.598868 2934 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5834da51e102f537123b7c91d6ffbd09-kubeconfig\") pod \"kube-controller-manager-ci-4372.0.1-n-bd48696324\" (UID: \"5834da51e102f537123b7c91d6ffbd09\") " pod="kube-system/kube-controller-manager-ci-4372.0.1-n-bd48696324" Jul 16 00:49:00.599177 kubelet[2934]: I0716 00:49:00.598937 2934 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ee32cf95bd5e92c19cf6af7af12ddb28-kubeconfig\") pod \"kube-scheduler-ci-4372.0.1-n-bd48696324\" 
(UID: \"ee32cf95bd5e92c19cf6af7af12ddb28\") " pod="kube-system/kube-scheduler-ci-4372.0.1-n-bd48696324" Jul 16 00:49:00.599177 kubelet[2934]: I0716 00:49:00.599030 2934 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/76825ec198001eb2348c9343fde6d3b3-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4372.0.1-n-bd48696324\" (UID: \"76825ec198001eb2348c9343fde6d3b3\") " pod="kube-system/kube-apiserver-ci-4372.0.1-n-bd48696324" Jul 16 00:49:00.599177 kubelet[2934]: I0716 00:49:00.599097 2934 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5834da51e102f537123b7c91d6ffbd09-ca-certs\") pod \"kube-controller-manager-ci-4372.0.1-n-bd48696324\" (UID: \"5834da51e102f537123b7c91d6ffbd09\") " pod="kube-system/kube-controller-manager-ci-4372.0.1-n-bd48696324" Jul 16 00:49:00.599177 kubelet[2934]: I0716 00:49:00.599157 2934 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5834da51e102f537123b7c91d6ffbd09-k8s-certs\") pod \"kube-controller-manager-ci-4372.0.1-n-bd48696324\" (UID: \"5834da51e102f537123b7c91d6ffbd09\") " pod="kube-system/kube-controller-manager-ci-4372.0.1-n-bd48696324" Jul 16 00:49:00.599492 kubelet[2934]: I0716 00:49:00.599222 2934 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5834da51e102f537123b7c91d6ffbd09-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4372.0.1-n-bd48696324\" (UID: \"5834da51e102f537123b7c91d6ffbd09\") " pod="kube-system/kube-controller-manager-ci-4372.0.1-n-bd48696324" Jul 16 00:49:00.600446 kubelet[2934]: E0716 00:49:00.600361 2934 kubelet.go:3190] "No need to create a mirror pod, since failed to get 
node info from the cluster" err="node \"ci-4372.0.1-n-bd48696324\" not found" node="ci-4372.0.1-n-bd48696324" Jul 16 00:49:00.601597 kubelet[2934]: E0716 00:49:00.601482 2934 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://147.75.90.137:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4372.0.1-n-bd48696324?timeout=10s\": dial tcp 147.75.90.137:6443: connect: connection refused" interval="400ms" Jul 16 00:49:00.609378 systemd[1]: Created slice kubepods-burstable-podee32cf95bd5e92c19cf6af7af12ddb28.slice - libcontainer container kubepods-burstable-podee32cf95bd5e92c19cf6af7af12ddb28.slice. Jul 16 00:49:00.635863 kubelet[2934]: E0716 00:49:00.635756 2934 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372.0.1-n-bd48696324\" not found" node="ci-4372.0.1-n-bd48696324" Jul 16 00:49:00.638131 systemd[1]: Started sshd@29-147.75.90.137:22-34.121.50.154:38282.service - OpenSSH per-connection server daemon (34.121.50.154:38282). 
Jul 16 00:49:00.748141 kubelet[2934]: I0716 00:49:00.748047 2934 kubelet_node_status.go:75] "Attempting to register node" node="ci-4372.0.1-n-bd48696324" Jul 16 00:49:00.748823 kubelet[2934]: E0716 00:49:00.748721 2934 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://147.75.90.137:6443/api/v1/nodes\": dial tcp 147.75.90.137:6443: connect: connection refused" node="ci-4372.0.1-n-bd48696324" Jul 16 00:49:00.859974 containerd[1916]: time="2025-07-16T00:49:00.859894868Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4372.0.1-n-bd48696324,Uid:76825ec198001eb2348c9343fde6d3b3,Namespace:kube-system,Attempt:0,}" Jul 16 00:49:00.867667 sshd[2991]: Invalid user cs2srv from 34.121.50.154 port 38282 Jul 16 00:49:00.885521 containerd[1916]: time="2025-07-16T00:49:00.885499579Z" level=info msg="connecting to shim fb01e0e29856f381adbc88c323a4920531800a77517ae21c23acf6da5b14455d" address="unix:///run/containerd/s/8caa56c038109838b9acbfc0fd38b721932962766112cb375520f19546bea023" namespace=k8s.io protocol=ttrpc version=3 Jul 16 00:49:00.901435 containerd[1916]: time="2025-07-16T00:49:00.901369832Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4372.0.1-n-bd48696324,Uid:5834da51e102f537123b7c91d6ffbd09,Namespace:kube-system,Attempt:0,}" Jul 16 00:49:00.908260 sshd[2991]: Connection closed by invalid user cs2srv 34.121.50.154 port 38282 [preauth] Jul 16 00:49:00.910115 containerd[1916]: time="2025-07-16T00:49:00.910096946Z" level=info msg="connecting to shim 563ba67b118c5d58e8faa817b2ec87d0932471c6204d473d99e266d56311c3ff" address="unix:///run/containerd/s/d6aab7f097e6d86d24494f7117fdd40ee2703890cc22dd611e9606b8b3582173" namespace=k8s.io protocol=ttrpc version=3 Jul 16 00:49:00.911010 systemd[1]: Started cri-containerd-fb01e0e29856f381adbc88c323a4920531800a77517ae21c23acf6da5b14455d.scope - libcontainer container 
fb01e0e29856f381adbc88c323a4920531800a77517ae21c23acf6da5b14455d. Jul 16 00:49:00.915289 systemd[1]: sshd@29-147.75.90.137:22-34.121.50.154:38282.service: Deactivated successfully. Jul 16 00:49:00.919213 systemd[1]: Started cri-containerd-563ba67b118c5d58e8faa817b2ec87d0932471c6204d473d99e266d56311c3ff.scope - libcontainer container 563ba67b118c5d58e8faa817b2ec87d0932471c6204d473d99e266d56311c3ff. Jul 16 00:49:00.936852 containerd[1916]: time="2025-07-16T00:49:00.936825335Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4372.0.1-n-bd48696324,Uid:ee32cf95bd5e92c19cf6af7af12ddb28,Namespace:kube-system,Attempt:0,}" Jul 16 00:49:00.940404 containerd[1916]: time="2025-07-16T00:49:00.940368345Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4372.0.1-n-bd48696324,Uid:76825ec198001eb2348c9343fde6d3b3,Namespace:kube-system,Attempt:0,} returns sandbox id \"fb01e0e29856f381adbc88c323a4920531800a77517ae21c23acf6da5b14455d\"" Jul 16 00:49:00.941771 containerd[1916]: time="2025-07-16T00:49:00.941759072Z" level=info msg="CreateContainer within sandbox \"fb01e0e29856f381adbc88c323a4920531800a77517ae21c23acf6da5b14455d\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 16 00:49:00.944318 containerd[1916]: time="2025-07-16T00:49:00.944295103Z" level=info msg="Container bb59c22785f339280674436aee3feccea8604ff8bd633196a88b7ab50192618c: CDI devices from CRI Config.CDIDevices: []" Jul 16 00:49:00.945005 containerd[1916]: time="2025-07-16T00:49:00.944967320Z" level=info msg="connecting to shim 4d562869458bfaec66ff6c681552c8a11a2e8bd6505a5b618bc11c9956217e02" address="unix:///run/containerd/s/76359a886b8d788e2a9389344d6dfe34e8e45527ba5eaf499ad1c07186109dff" namespace=k8s.io protocol=ttrpc version=3 Jul 16 00:49:00.947739 containerd[1916]: time="2025-07-16T00:49:00.947719147Z" level=info msg="CreateContainer within sandbox \"fb01e0e29856f381adbc88c323a4920531800a77517ae21c23acf6da5b14455d\" for 
&ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"bb59c22785f339280674436aee3feccea8604ff8bd633196a88b7ab50192618c\"" Jul 16 00:49:00.948013 containerd[1916]: time="2025-07-16T00:49:00.947999624Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4372.0.1-n-bd48696324,Uid:5834da51e102f537123b7c91d6ffbd09,Namespace:kube-system,Attempt:0,} returns sandbox id \"563ba67b118c5d58e8faa817b2ec87d0932471c6204d473d99e266d56311c3ff\"" Jul 16 00:49:00.948114 containerd[1916]: time="2025-07-16T00:49:00.948100460Z" level=info msg="StartContainer for \"bb59c22785f339280674436aee3feccea8604ff8bd633196a88b7ab50192618c\"" Jul 16 00:49:00.948911 containerd[1916]: time="2025-07-16T00:49:00.948900516Z" level=info msg="CreateContainer within sandbox \"563ba67b118c5d58e8faa817b2ec87d0932471c6204d473d99e266d56311c3ff\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 16 00:49:00.948974 containerd[1916]: time="2025-07-16T00:49:00.948958716Z" level=info msg="connecting to shim bb59c22785f339280674436aee3feccea8604ff8bd633196a88b7ab50192618c" address="unix:///run/containerd/s/8caa56c038109838b9acbfc0fd38b721932962766112cb375520f19546bea023" protocol=ttrpc version=3 Jul 16 00:49:00.951748 containerd[1916]: time="2025-07-16T00:49:00.951728011Z" level=info msg="Container 0140d556a80091973c9496ed004f86844fef6a5f598b9e1ec997ccee2cc1c37c: CDI devices from CRI Config.CDIDevices: []" Jul 16 00:49:00.954644 containerd[1916]: time="2025-07-16T00:49:00.954627075Z" level=info msg="CreateContainer within sandbox \"563ba67b118c5d58e8faa817b2ec87d0932471c6204d473d99e266d56311c3ff\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"0140d556a80091973c9496ed004f86844fef6a5f598b9e1ec997ccee2cc1c37c\"" Jul 16 00:49:00.954806 containerd[1916]: time="2025-07-16T00:49:00.954795469Z" level=info msg="StartContainer for \"0140d556a80091973c9496ed004f86844fef6a5f598b9e1ec997ccee2cc1c37c\"" 
Jul 16 00:49:00.955379 containerd[1916]: time="2025-07-16T00:49:00.955368560Z" level=info msg="connecting to shim 0140d556a80091973c9496ed004f86844fef6a5f598b9e1ec997ccee2cc1c37c" address="unix:///run/containerd/s/d6aab7f097e6d86d24494f7117fdd40ee2703890cc22dd611e9606b8b3582173" protocol=ttrpc version=3 Jul 16 00:49:00.965171 systemd[1]: Started cri-containerd-4d562869458bfaec66ff6c681552c8a11a2e8bd6505a5b618bc11c9956217e02.scope - libcontainer container 4d562869458bfaec66ff6c681552c8a11a2e8bd6505a5b618bc11c9956217e02. Jul 16 00:49:00.970432 systemd[1]: Started cri-containerd-0140d556a80091973c9496ed004f86844fef6a5f598b9e1ec997ccee2cc1c37c.scope - libcontainer container 0140d556a80091973c9496ed004f86844fef6a5f598b9e1ec997ccee2cc1c37c. Jul 16 00:49:00.971086 systemd[1]: Started cri-containerd-bb59c22785f339280674436aee3feccea8604ff8bd633196a88b7ab50192618c.scope - libcontainer container bb59c22785f339280674436aee3feccea8604ff8bd633196a88b7ab50192618c. Jul 16 00:49:00.971806 systemd[1]: Started sshd@30-147.75.90.137:22-34.121.50.154:38284.service - OpenSSH per-connection server daemon (34.121.50.154:38284). 
Jul 16 00:49:00.994395 containerd[1916]: time="2025-07-16T00:49:00.994370816Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4372.0.1-n-bd48696324,Uid:ee32cf95bd5e92c19cf6af7af12ddb28,Namespace:kube-system,Attempt:0,} returns sandbox id \"4d562869458bfaec66ff6c681552c8a11a2e8bd6505a5b618bc11c9956217e02\"" Jul 16 00:49:00.995568 containerd[1916]: time="2025-07-16T00:49:00.995554174Z" level=info msg="CreateContainer within sandbox \"4d562869458bfaec66ff6c681552c8a11a2e8bd6505a5b618bc11c9956217e02\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 16 00:49:00.998575 containerd[1916]: time="2025-07-16T00:49:00.998556471Z" level=info msg="Container 365f66673fe2c42c36257c5b059bab7d99729c1f3bbf657a1914821a5b5d1b5f: CDI devices from CRI Config.CDIDevices: []" Jul 16 00:49:00.998959 containerd[1916]: time="2025-07-16T00:49:00.998946492Z" level=info msg="StartContainer for \"bb59c22785f339280674436aee3feccea8604ff8bd633196a88b7ab50192618c\" returns successfully" Jul 16 00:49:01.000174 containerd[1916]: time="2025-07-16T00:49:01.000156839Z" level=info msg="StartContainer for \"0140d556a80091973c9496ed004f86844fef6a5f598b9e1ec997ccee2cc1c37c\" returns successfully" Jul 16 00:49:01.000833 containerd[1916]: time="2025-07-16T00:49:01.000818535Z" level=info msg="CreateContainer within sandbox \"4d562869458bfaec66ff6c681552c8a11a2e8bd6505a5b618bc11c9956217e02\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"365f66673fe2c42c36257c5b059bab7d99729c1f3bbf657a1914821a5b5d1b5f\"" Jul 16 00:49:01.001022 containerd[1916]: time="2025-07-16T00:49:01.001011987Z" level=info msg="StartContainer for \"365f66673fe2c42c36257c5b059bab7d99729c1f3bbf657a1914821a5b5d1b5f\"" Jul 16 00:49:01.001580 containerd[1916]: time="2025-07-16T00:49:01.001569330Z" level=info msg="connecting to shim 365f66673fe2c42c36257c5b059bab7d99729c1f3bbf657a1914821a5b5d1b5f" 
address="unix:///run/containerd/s/76359a886b8d788e2a9389344d6dfe34e8e45527ba5eaf499ad1c07186109dff" protocol=ttrpc version=3 Jul 16 00:49:01.002313 kubelet[2934]: E0716 00:49:01.002297 2934 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://147.75.90.137:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4372.0.1-n-bd48696324?timeout=10s\": dial tcp 147.75.90.137:6443: connect: connection refused" interval="800ms" Jul 16 00:49:01.018278 systemd[1]: Started cri-containerd-365f66673fe2c42c36257c5b059bab7d99729c1f3bbf657a1914821a5b5d1b5f.scope - libcontainer container 365f66673fe2c42c36257c5b059bab7d99729c1f3bbf657a1914821a5b5d1b5f. Jul 16 00:49:01.048343 containerd[1916]: time="2025-07-16T00:49:01.048321624Z" level=info msg="StartContainer for \"365f66673fe2c42c36257c5b059bab7d99729c1f3bbf657a1914821a5b5d1b5f\" returns successfully" Jul 16 00:49:01.150432 kubelet[2934]: I0716 00:49:01.150357 2934 kubelet_node_status.go:75] "Attempting to register node" node="ci-4372.0.1-n-bd48696324" Jul 16 00:49:01.151898 sshd[3147]: Invalid user kafka from 34.121.50.154 port 38284 Jul 16 00:49:01.192389 sshd[3147]: Connection closed by invalid user kafka 34.121.50.154 port 38284 [preauth] Jul 16 00:49:01.193108 systemd[1]: sshd@30-147.75.90.137:22-34.121.50.154:38284.service: Deactivated successfully. Jul 16 00:49:01.251637 systemd[1]: Started sshd@31-147.75.90.137:22-34.121.50.154:38286.service - OpenSSH per-connection server daemon (34.121.50.154:38286). 
Jul 16 00:49:01.413045 kubelet[2934]: E0716 00:49:01.412993 2934 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372.0.1-n-bd48696324\" not found" node="ci-4372.0.1-n-bd48696324" Jul 16 00:49:01.413603 kubelet[2934]: E0716 00:49:01.413594 2934 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372.0.1-n-bd48696324\" not found" node="ci-4372.0.1-n-bd48696324" Jul 16 00:49:01.414958 kubelet[2934]: E0716 00:49:01.414848 2934 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372.0.1-n-bd48696324\" not found" node="ci-4372.0.1-n-bd48696324" Jul 16 00:49:01.428533 sshd[3263]: Invalid user ubuntu from 34.121.50.154 port 38286 Jul 16 00:49:01.469841 sshd[3263]: Connection closed by invalid user ubuntu 34.121.50.154 port 38286 [preauth] Jul 16 00:49:01.470579 systemd[1]: sshd@31-147.75.90.137:22-34.121.50.154:38286.service: Deactivated successfully. Jul 16 00:49:01.529671 systemd[1]: Started sshd@32-147.75.90.137:22-34.121.50.154:38288.service - OpenSSH per-connection server daemon (34.121.50.154:38288). Jul 16 00:49:01.711875 kubelet[2934]: I0716 00:49:01.711806 2934 kubelet_node_status.go:78] "Successfully registered node" node="ci-4372.0.1-n-bd48696324" Jul 16 00:49:01.711875 kubelet[2934]: E0716 00:49:01.711831 2934 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4372.0.1-n-bd48696324\": node \"ci-4372.0.1-n-bd48696324\" not found" Jul 16 00:49:01.755236 sshd[3275]: Connection closed by authenticating user root 34.121.50.154 port 38288 [preauth] Jul 16 00:49:01.756015 systemd[1]: sshd@32-147.75.90.137:22-34.121.50.154:38288.service: Deactivated successfully. 
Jul 16 00:49:01.798284 kubelet[2934]: I0716 00:49:01.798219 2934 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4372.0.1-n-bd48696324" Jul 16 00:49:01.800593 kubelet[2934]: E0716 00:49:01.800547 2934 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4372.0.1-n-bd48696324\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4372.0.1-n-bd48696324" Jul 16 00:49:01.800593 kubelet[2934]: I0716 00:49:01.800558 2934 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4372.0.1-n-bd48696324" Jul 16 00:49:01.801285 kubelet[2934]: E0716 00:49:01.801274 2934 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4372.0.1-n-bd48696324\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4372.0.1-n-bd48696324" Jul 16 00:49:01.801285 kubelet[2934]: I0716 00:49:01.801282 2934 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4372.0.1-n-bd48696324" Jul 16 00:49:01.802013 kubelet[2934]: E0716 00:49:01.801974 2934 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4372.0.1-n-bd48696324\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4372.0.1-n-bd48696324" Jul 16 00:49:01.805511 systemd[1]: Started sshd@33-147.75.90.137:22-34.121.50.154:38302.service - OpenSSH per-connection server daemon (34.121.50.154:38302). Jul 16 00:49:01.984327 sshd[3281]: Invalid user test from 34.121.50.154 port 38302 Jul 16 00:49:02.027394 sshd[3281]: Connection closed by invalid user test 34.121.50.154 port 38302 [preauth] Jul 16 00:49:02.031806 systemd[1]: sshd@33-147.75.90.137:22-34.121.50.154:38302.service: Deactivated successfully. 
Jul 16 00:49:02.090941 systemd[1]: Started sshd@34-147.75.90.137:22-34.121.50.154:38310.service - OpenSSH per-connection server daemon (34.121.50.154:38310). Jul 16 00:49:02.340033 sshd[3287]: Invalid user games from 34.121.50.154 port 38310 Jul 16 00:49:02.383198 sshd[3287]: Connection closed by invalid user games 34.121.50.154 port 38310 [preauth] Jul 16 00:49:02.386405 systemd[1]: sshd@34-147.75.90.137:22-34.121.50.154:38310.service: Deactivated successfully. Jul 16 00:49:02.391001 kubelet[2934]: I0716 00:49:02.390904 2934 apiserver.go:52] "Watching apiserver" Jul 16 00:49:02.397538 kubelet[2934]: I0716 00:49:02.397453 2934 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 16 00:49:02.415597 kubelet[2934]: I0716 00:49:02.415515 2934 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4372.0.1-n-bd48696324" Jul 16 00:49:02.415752 kubelet[2934]: I0716 00:49:02.415729 2934 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4372.0.1-n-bd48696324" Jul 16 00:49:02.419364 kubelet[2934]: E0716 00:49:02.419303 2934 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4372.0.1-n-bd48696324\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4372.0.1-n-bd48696324" Jul 16 00:49:02.419570 kubelet[2934]: E0716 00:49:02.419306 2934 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4372.0.1-n-bd48696324\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4372.0.1-n-bd48696324" Jul 16 00:49:02.446905 systemd[1]: Started sshd@35-147.75.90.137:22-34.121.50.154:38316.service - OpenSSH per-connection server daemon (34.121.50.154:38316). 
Jul 16 00:49:02.685504 sshd[3292]: Invalid user hduser from 34.121.50.154 port 38316 Jul 16 00:49:02.727364 sshd[3292]: Connection closed by invalid user hduser 34.121.50.154 port 38316 [preauth] Jul 16 00:49:02.732585 systemd[1]: sshd@35-147.75.90.137:22-34.121.50.154:38316.service: Deactivated successfully. Jul 16 00:49:02.793296 systemd[1]: Started sshd@36-147.75.90.137:22-34.121.50.154:38332.service - OpenSSH per-connection server daemon (34.121.50.154:38332). Jul 16 00:49:03.036243 sshd[3297]: Invalid user vagrant from 34.121.50.154 port 38332 Jul 16 00:49:03.079537 sshd[3297]: Connection closed by invalid user vagrant 34.121.50.154 port 38332 [preauth] Jul 16 00:49:03.084015 systemd[1]: sshd@36-147.75.90.137:22-34.121.50.154:38332.service: Deactivated successfully. Jul 16 00:49:03.150821 systemd[1]: Started sshd@37-147.75.90.137:22-34.121.50.154:38336.service - OpenSSH per-connection server daemon (34.121.50.154:38336). Jul 16 00:49:03.374367 sshd[3302]: Invalid user admin from 34.121.50.154 port 38336 Jul 16 00:49:03.417122 sshd[3302]: Connection closed by invalid user admin 34.121.50.154 port 38336 [preauth] Jul 16 00:49:03.421673 systemd[1]: sshd@37-147.75.90.137:22-34.121.50.154:38336.service: Deactivated successfully. Jul 16 00:49:03.483204 systemd[1]: Started sshd@38-147.75.90.137:22-34.121.50.154:33010.service - OpenSSH per-connection server daemon (34.121.50.154:33010). Jul 16 00:49:03.728314 sshd[3307]: Invalid user centos from 34.121.50.154 port 33010 Jul 16 00:49:03.770901 sshd[3307]: Connection closed by invalid user centos 34.121.50.154 port 33010 [preauth] Jul 16 00:49:03.775344 systemd[1]: sshd@38-147.75.90.137:22-34.121.50.154:33010.service: Deactivated successfully. Jul 16 00:49:03.843273 systemd[1]: Started sshd@39-147.75.90.137:22-34.121.50.154:33022.service - OpenSSH per-connection server daemon (34.121.50.154:33022). Jul 16 00:49:03.971121 systemd[1]: Reload requested from client PID 3316 ('systemctl') (unit session-11.scope)... 
Jul 16 00:49:03.971129 systemd[1]: Reloading...
Jul 16 00:49:04.017838 zram_generator::config[3363]: No configuration found.
Jul 16 00:49:04.079208 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 16 00:49:04.117065 sshd[3313]: Connection closed by authenticating user root 34.121.50.154 port 33022 [preauth]
Jul 16 00:49:04.183734 systemd[1]: Reloading finished in 212 ms.
Jul 16 00:49:04.191083 kubelet[2934]: I0716 00:49:04.191043 2934 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4372.0.1-n-bd48696324"
Jul 16 00:49:04.194270 kubelet[2934]: W0716 00:49:04.194224 2934 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Jul 16 00:49:04.217529 systemd[1]: sshd@39-147.75.90.137:22-34.121.50.154:33022.service: Deactivated successfully.
Jul 16 00:49:04.254318 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 16 00:49:04.255266 systemd[1]: Started sshd@40-147.75.90.137:22-34.121.50.154:33032.service - OpenSSH per-connection server daemon (34.121.50.154:33032).
Jul 16 00:49:04.266011 systemd[1]: kubelet.service: Deactivated successfully.
Jul 16 00:49:04.267115 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 16 00:49:04.267142 systemd[1]: kubelet.service: Consumed 912ms CPU time, 143.2M memory peak.
Jul 16 00:49:04.268397 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 16 00:49:04.450635 sshd[3421]: Invalid user deployer from 34.121.50.154 port 33032
Jul 16 00:49:04.492214 sshd[3421]: Connection closed by invalid user deployer 34.121.50.154 port 33032 [preauth]
Jul 16 00:49:04.493377 systemd[1]: sshd@40-147.75.90.137:22-34.121.50.154:33032.service: Deactivated successfully.
Jul 16 00:49:04.517219 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 16 00:49:04.520912 (kubelet)[3434]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jul 16 00:49:04.537600 systemd[1]: Started sshd@41-147.75.90.137:22-34.121.50.154:33034.service - OpenSSH per-connection server daemon (34.121.50.154:33034).
Jul 16 00:49:04.550419 kubelet[3434]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 16 00:49:04.550419 kubelet[3434]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Jul 16 00:49:04.550419 kubelet[3434]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 16 00:49:04.550672 kubelet[3434]: I0716 00:49:04.550445 3434 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jul 16 00:49:04.556103 kubelet[3434]: I0716 00:49:04.556050 3434 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Jul 16 00:49:04.556103 kubelet[3434]: I0716 00:49:04.556067 3434 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jul 16 00:49:04.556345 kubelet[3434]: I0716 00:49:04.556303 3434 server.go:954] "Client rotation is on, will bootstrap in background"
Jul 16 00:49:04.557361 kubelet[3434]: I0716 00:49:04.557323 3434 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Jul 16 00:49:04.559189 kubelet[3434]: I0716 00:49:04.559141 3434 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jul 16 00:49:04.561711 kubelet[3434]: I0716 00:49:04.561689 3434 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Jul 16 00:49:04.571314 kubelet[3434]: I0716 00:49:04.571265 3434 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jul 16 00:49:04.571483 kubelet[3434]: I0716 00:49:04.571428 3434 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jul 16 00:49:04.571633 kubelet[3434]: I0716 00:49:04.571451 3434 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4372.0.1-n-bd48696324","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jul 16 00:49:04.571633 kubelet[3434]: I0716 00:49:04.571607 3434 topology_manager.go:138] "Creating topology manager with none policy"
Jul 16 00:49:04.571633 kubelet[3434]: I0716 00:49:04.571617 3434 container_manager_linux.go:304] "Creating device plugin manager"
Jul 16 00:49:04.571796 kubelet[3434]: I0716 00:49:04.571658 3434 state_mem.go:36] "Initialized new in-memory state store"
Jul 16 00:49:04.571824 kubelet[3434]: I0716 00:49:04.571808 3434 kubelet.go:446] "Attempting to sync node with API server"
Jul 16 00:49:04.571861 kubelet[3434]: I0716 00:49:04.571825 3434 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Jul 16 00:49:04.571861 kubelet[3434]: I0716 00:49:04.571851 3434 kubelet.go:352] "Adding apiserver pod source"
Jul 16 00:49:04.571861 kubelet[3434]: I0716 00:49:04.571859 3434 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jul 16 00:49:04.573079 kubelet[3434]: I0716 00:49:04.572988 3434 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1"
Jul 16 00:49:04.574038 kubelet[3434]: I0716 00:49:04.574020 3434 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jul 16 00:49:04.574411 kubelet[3434]: I0716 00:49:04.574398 3434 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Jul 16 00:49:04.574467 kubelet[3434]: I0716 00:49:04.574422 3434 server.go:1287] "Started kubelet"
Jul 16 00:49:04.574551 kubelet[3434]: I0716 00:49:04.574514 3434 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Jul 16 00:49:04.574599 kubelet[3434]: I0716 00:49:04.574558 3434 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jul 16 00:49:04.574775 kubelet[3434]: I0716 00:49:04.574762 3434 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jul 16 00:49:04.575611 kubelet[3434]: I0716 00:49:04.575596 3434 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jul 16 00:49:04.575670 kubelet[3434]: I0716 00:49:04.575609 3434 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jul 16 00:49:04.575712 kubelet[3434]: E0716 00:49:04.575676 3434 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4372.0.1-n-bd48696324\" not found"
Jul 16 00:49:04.575712 kubelet[3434]: I0716 00:49:04.575681 3434 volume_manager.go:297] "Starting Kubelet Volume Manager"
Jul 16 00:49:04.575874 kubelet[3434]: E0716 00:49:04.575849 3434 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jul 16 00:49:04.575874 kubelet[3434]: I0716 00:49:04.575867 3434 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Jul 16 00:49:04.576119 kubelet[3434]: I0716 00:49:04.576106 3434 server.go:479] "Adding debug handlers to kubelet server"
Jul 16 00:49:04.576283 kubelet[3434]: I0716 00:49:04.576234 3434 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jul 16 00:49:04.576813 kubelet[3434]: I0716 00:49:04.576791 3434 reconciler.go:26] "Reconciler: start to sync state"
Jul 16 00:49:04.578674 kubelet[3434]: I0716 00:49:04.578636 3434 factory.go:221] Registration of the containerd container factory successfully
Jul 16 00:49:04.578745 kubelet[3434]: I0716 00:49:04.578677 3434 factory.go:221] Registration of the systemd container factory successfully
Jul 16 00:49:04.585215 kubelet[3434]: I0716 00:49:04.585181 3434 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jul 16 00:49:04.586159 kubelet[3434]: I0716 00:49:04.586141 3434 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jul 16 00:49:04.586224 kubelet[3434]: I0716 00:49:04.586163 3434 status_manager.go:227] "Starting to sync pod status with apiserver"
Jul 16 00:49:04.586224 kubelet[3434]: I0716 00:49:04.586177 3434 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Jul 16 00:49:04.586224 kubelet[3434]: I0716 00:49:04.586185 3434 kubelet.go:2382] "Starting kubelet main sync loop"
Jul 16 00:49:04.586384 kubelet[3434]: E0716 00:49:04.586229 3434 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jul 16 00:49:04.602973 kubelet[3434]: I0716 00:49:04.602928 3434 cpu_manager.go:221] "Starting CPU manager" policy="none"
Jul 16 00:49:04.602973 kubelet[3434]: I0716 00:49:04.602941 3434 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Jul 16 00:49:04.602973 kubelet[3434]: I0716 00:49:04.602955 3434 state_mem.go:36] "Initialized new in-memory state store"
Jul 16 00:49:04.603102 kubelet[3434]: I0716 00:49:04.603086 3434 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Jul 16 00:49:04.603145 kubelet[3434]: I0716 00:49:04.603096 3434 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Jul 16 00:49:04.603145 kubelet[3434]: I0716 00:49:04.603112 3434 policy_none.go:49] "None policy: Start"
Jul 16 00:49:04.603145 kubelet[3434]: I0716 00:49:04.603119 3434 memory_manager.go:186] "Starting memorymanager" policy="None"
Jul 16 00:49:04.603145 kubelet[3434]: I0716 00:49:04.603128 3434 state_mem.go:35] "Initializing new in-memory state store"
Jul 16 00:49:04.603282 kubelet[3434]: I0716 00:49:04.603241 3434 state_mem.go:75] "Updated machine memory state"
Jul 16 00:49:04.606271 kubelet[3434]: I0716 00:49:04.606238 3434 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jul 16 00:49:04.606406 kubelet[3434]: I0716 00:49:04.606396 3434 eviction_manager.go:189] "Eviction manager: starting control loop"
Jul 16 00:49:04.606455 kubelet[3434]: I0716 00:49:04.606411 3434 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jul 16 00:49:04.606549 kubelet[3434]: I0716 00:49:04.606537 3434 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jul 16 00:49:04.606960 kubelet[3434]: E0716 00:49:04.606945 3434 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Jul 16 00:49:04.688416 kubelet[3434]: I0716 00:49:04.688358 3434 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4372.0.1-n-bd48696324"
Jul 16 00:49:04.688416 kubelet[3434]: I0716 00:49:04.688395 3434 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4372.0.1-n-bd48696324"
Jul 16 00:49:04.688709 kubelet[3434]: I0716 00:49:04.688377 3434 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4372.0.1-n-bd48696324"
Jul 16 00:49:04.695929 kubelet[3434]: W0716 00:49:04.695886 3434 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Jul 16 00:49:04.696149 kubelet[3434]: W0716 00:49:04.696120 3434 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Jul 16 00:49:04.696554 kubelet[3434]: W0716 00:49:04.696511 3434 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Jul 16 00:49:04.696672 kubelet[3434]: E0716 00:49:04.696643 3434 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4372.0.1-n-bd48696324\" already exists" pod="kube-system/kube-scheduler-ci-4372.0.1-n-bd48696324"
Jul 16 00:49:04.713961 kubelet[3434]: I0716 00:49:04.713773 3434 kubelet_node_status.go:75] "Attempting to register node" node="ci-4372.0.1-n-bd48696324"
Jul 16 00:49:04.722660 kubelet[3434]: I0716 00:49:04.722570 3434 kubelet_node_status.go:124] "Node was previously registered" node="ci-4372.0.1-n-bd48696324"
Jul 16 00:49:04.722822 kubelet[3434]: I0716 00:49:04.722697 3434 kubelet_node_status.go:78] "Successfully registered node" node="ci-4372.0.1-n-bd48696324"
Jul 16 00:49:04.779147 kubelet[3434]: I0716 00:49:04.779033 3434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5834da51e102f537123b7c91d6ffbd09-kubeconfig\") pod \"kube-controller-manager-ci-4372.0.1-n-bd48696324\" (UID: \"5834da51e102f537123b7c91d6ffbd09\") " pod="kube-system/kube-controller-manager-ci-4372.0.1-n-bd48696324"
Jul 16 00:49:04.779147 kubelet[3434]: I0716 00:49:04.779117 3434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ee32cf95bd5e92c19cf6af7af12ddb28-kubeconfig\") pod \"kube-scheduler-ci-4372.0.1-n-bd48696324\" (UID: \"ee32cf95bd5e92c19cf6af7af12ddb28\") " pod="kube-system/kube-scheduler-ci-4372.0.1-n-bd48696324"
Jul 16 00:49:04.779498 kubelet[3434]: I0716 00:49:04.779169 3434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/76825ec198001eb2348c9343fde6d3b3-k8s-certs\") pod \"kube-apiserver-ci-4372.0.1-n-bd48696324\" (UID: \"76825ec198001eb2348c9343fde6d3b3\") " pod="kube-system/kube-apiserver-ci-4372.0.1-n-bd48696324"
Jul 16 00:49:04.779498 kubelet[3434]: I0716 00:49:04.779219 3434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/76825ec198001eb2348c9343fde6d3b3-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4372.0.1-n-bd48696324\" (UID: \"76825ec198001eb2348c9343fde6d3b3\") " pod="kube-system/kube-apiserver-ci-4372.0.1-n-bd48696324"
Jul 16 00:49:04.779498 kubelet[3434]: I0716 00:49:04.779277 3434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5834da51e102f537123b7c91d6ffbd09-flexvolume-dir\") pod \"kube-controller-manager-ci-4372.0.1-n-bd48696324\" (UID: \"5834da51e102f537123b7c91d6ffbd09\") " pod="kube-system/kube-controller-manager-ci-4372.0.1-n-bd48696324"
Jul 16 00:49:04.779498 kubelet[3434]: I0716 00:49:04.779319 3434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5834da51e102f537123b7c91d6ffbd09-k8s-certs\") pod \"kube-controller-manager-ci-4372.0.1-n-bd48696324\" (UID: \"5834da51e102f537123b7c91d6ffbd09\") " pod="kube-system/kube-controller-manager-ci-4372.0.1-n-bd48696324"
Jul 16 00:49:04.779498 kubelet[3434]: I0716 00:49:04.779415 3434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5834da51e102f537123b7c91d6ffbd09-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4372.0.1-n-bd48696324\" (UID: \"5834da51e102f537123b7c91d6ffbd09\") " pod="kube-system/kube-controller-manager-ci-4372.0.1-n-bd48696324"
Jul 16 00:49:04.779949 kubelet[3434]: I0716 00:49:04.779546 3434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/76825ec198001eb2348c9343fde6d3b3-ca-certs\") pod \"kube-apiserver-ci-4372.0.1-n-bd48696324\" (UID: \"76825ec198001eb2348c9343fde6d3b3\") " pod="kube-system/kube-apiserver-ci-4372.0.1-n-bd48696324"
Jul 16 00:49:04.779949 kubelet[3434]: I0716 00:49:04.779657 3434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5834da51e102f537123b7c91d6ffbd09-ca-certs\") pod \"kube-controller-manager-ci-4372.0.1-n-bd48696324\" (UID: \"5834da51e102f537123b7c91d6ffbd09\") " pod="kube-system/kube-controller-manager-ci-4372.0.1-n-bd48696324"
Jul 16 00:49:04.785497 sshd[3447]: Connection closed by authenticating user root 34.121.50.154 port 33034 [preauth]
Jul 16 00:49:04.788904 systemd[1]: sshd@41-147.75.90.137:22-34.121.50.154:33034.service: Deactivated successfully.
Jul 16 00:49:04.850424 systemd[1]: Started sshd@42-147.75.90.137:22-34.121.50.154:33046.service - OpenSSH per-connection server daemon (34.121.50.154:33046).
Jul 16 00:49:05.116195 sshd[3479]: Connection closed by authenticating user root 34.121.50.154 port 33046 [preauth]
Jul 16 00:49:05.116986 systemd[1]: sshd@42-147.75.90.137:22-34.121.50.154:33046.service: Deactivated successfully.
Jul 16 00:49:05.572754 kubelet[3434]: I0716 00:49:05.572695 3434 apiserver.go:52] "Watching apiserver"
Jul 16 00:49:05.576388 kubelet[3434]: I0716 00:49:05.576295 3434 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Jul 16 00:49:05.593470 kubelet[3434]: I0716 00:49:05.593383 3434 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4372.0.1-n-bd48696324"
Jul 16 00:49:05.593625 kubelet[3434]: I0716 00:49:05.593470 3434 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4372.0.1-n-bd48696324"
Jul 16 00:49:05.593625 kubelet[3434]: I0716 00:49:05.593562 3434 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4372.0.1-n-bd48696324"
Jul 16 00:49:05.600208 kubelet[3434]: W0716 00:49:05.600161 3434 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Jul 16 00:49:05.600372 kubelet[3434]: E0716 00:49:05.600288 3434 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4372.0.1-n-bd48696324\" already exists" pod="kube-system/kube-apiserver-ci-4372.0.1-n-bd48696324"
Jul 16 00:49:05.600685 kubelet[3434]: W0716 00:49:05.600631 3434 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Jul 16 00:49:05.600926 kubelet[3434]: W0716 00:49:05.600748 3434 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Jul 16 00:49:05.600926 kubelet[3434]: E0716 00:49:05.600744 3434 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4372.0.1-n-bd48696324\" already exists" pod="kube-system/kube-controller-manager-ci-4372.0.1-n-bd48696324"
Jul 16 00:49:05.600926 kubelet[3434]: E0716 00:49:05.600901 3434 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4372.0.1-n-bd48696324\" already exists" pod="kube-system/kube-scheduler-ci-4372.0.1-n-bd48696324"
Jul 16 00:49:05.640336 kubelet[3434]: I0716 00:49:05.640257 3434 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4372.0.1-n-bd48696324" podStartSLOduration=1.640237102 podStartE2EDuration="1.640237102s" podCreationTimestamp="2025-07-16 00:49:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-16 00:49:05.633749789 +0000 UTC m=+1.110108756" watchObservedRunningTime="2025-07-16 00:49:05.640237102 +0000 UTC m=+1.116596070"
Jul 16 00:49:05.640515 kubelet[3434]: I0716 00:49:05.640404 3434 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4372.0.1-n-bd48696324" podStartSLOduration=1.6403938120000001 podStartE2EDuration="1.640393812s" podCreationTimestamp="2025-07-16 00:49:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-16 00:49:05.640344218 +0000 UTC m=+1.116703186" watchObservedRunningTime="2025-07-16 00:49:05.640393812 +0000 UTC m=+1.116752772"
Jul 16 00:49:05.657281 kubelet[3434]: I0716 00:49:05.657204 3434 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4372.0.1-n-bd48696324" podStartSLOduration=1.657188828 podStartE2EDuration="1.657188828s" podCreationTimestamp="2025-07-16 00:49:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-16 00:49:05.648183939 +0000 UTC m=+1.124542920" watchObservedRunningTime="2025-07-16 00:49:05.657188828 +0000 UTC m=+1.133547788"
Jul 16 00:49:06.205725 systemd[1]: Started sshd@43-147.75.90.137:22-34.121.50.154:33062.service - OpenSSH per-connection server daemon (34.121.50.154:33062).
Jul 16 00:49:06.454375 sshd[3524]: Invalid user guest from 34.121.50.154 port 33062
Jul 16 00:49:06.497011 sshd[3524]: Connection closed by invalid user guest 34.121.50.154 port 33062 [preauth]
Jul 16 00:49:06.501961 systemd[1]: sshd@43-147.75.90.137:22-34.121.50.154:33062.service: Deactivated successfully.
Jul 16 00:49:06.556432 systemd[1]: Started sshd@44-147.75.90.137:22-34.121.50.154:33066.service - OpenSSH per-connection server daemon (34.121.50.154:33066).
Jul 16 00:49:06.763052 sshd[3529]: Invalid user craft from 34.121.50.154 port 33066
Jul 16 00:49:06.806291 sshd[3529]: Connection closed by invalid user craft 34.121.50.154 port 33066 [preauth]
Jul 16 00:49:06.809741 systemd[1]: sshd@44-147.75.90.137:22-34.121.50.154:33066.service: Deactivated successfully.
Jul 16 00:49:06.875168 systemd[1]: Started sshd@45-147.75.90.137:22-34.121.50.154:33070.service - OpenSSH per-connection server daemon (34.121.50.154:33070).
Jul 16 00:49:07.121816 sshd[3534]: Invalid user cs2sv from 34.121.50.154 port 33070
Jul 16 00:49:07.162607 sshd[3534]: Connection closed by invalid user cs2sv 34.121.50.154 port 33070 [preauth]
Jul 16 00:49:07.163842 systemd[1]: sshd@45-147.75.90.137:22-34.121.50.154:33070.service: Deactivated successfully.
Jul 16 00:49:07.231688 systemd[1]: Started sshd@46-147.75.90.137:22-34.121.50.154:33080.service - OpenSSH per-connection server daemon (34.121.50.154:33080).
Jul 16 00:49:07.500065 sshd[3540]: Invalid user test from 34.121.50.154 port 33080
Jul 16 00:49:07.542954 sshd[3540]: Connection closed by invalid user test 34.121.50.154 port 33080 [preauth]
Jul 16 00:49:07.547426 systemd[1]: sshd@46-147.75.90.137:22-34.121.50.154:33080.service: Deactivated successfully.
Jul 16 00:49:07.605379 systemd[1]: Started sshd@47-147.75.90.137:22-34.121.50.154:33096.service - OpenSSH per-connection server daemon (34.121.50.154:33096).
Jul 16 00:49:07.856415 sshd[3547]: Invalid user admin from 34.121.50.154 port 33096
Jul 16 00:49:07.898537 sshd[3547]: Connection closed by invalid user admin 34.121.50.154 port 33096 [preauth]
Jul 16 00:49:07.902954 systemd[1]: sshd@47-147.75.90.137:22-34.121.50.154:33096.service: Deactivated successfully.
Jul 16 00:49:07.968692 systemd[1]: Started sshd@48-147.75.90.137:22-34.121.50.154:33098.service - OpenSSH per-connection server daemon (34.121.50.154:33098).
Jul 16 00:49:08.215549 sshd[3552]: Invalid user odoo from 34.121.50.154 port 33098
Jul 16 00:49:08.258316 sshd[3552]: Connection closed by invalid user odoo 34.121.50.154 port 33098 [preauth]
Jul 16 00:49:08.261817 systemd[1]: sshd@48-147.75.90.137:22-34.121.50.154:33098.service: Deactivated successfully.
Jul 16 00:49:08.327682 systemd[1]: Started sshd@49-147.75.90.137:22-34.121.50.154:33112.service - OpenSSH per-connection server daemon (34.121.50.154:33112).
Jul 16 00:49:08.572966 sshd[3557]: Invalid user es from 34.121.50.154 port 33112
Jul 16 00:49:08.615331 sshd[3557]: Connection closed by invalid user es 34.121.50.154 port 33112 [preauth]
Jul 16 00:49:08.620349 systemd[1]: sshd@49-147.75.90.137:22-34.121.50.154:33112.service: Deactivated successfully.
Jul 16 00:49:08.668869 systemd[1]: Started sshd@50-147.75.90.137:22-34.121.50.154:33122.service - OpenSSH per-connection server daemon (34.121.50.154:33122).
Jul 16 00:49:08.872438 sshd[3562]: Invalid user ubuntu from 34.121.50.154 port 33122
Jul 16 00:49:08.915986 sshd[3562]: Connection closed by invalid user ubuntu 34.121.50.154 port 33122 [preauth]
Jul 16 00:49:08.920890 systemd[1]: sshd@50-147.75.90.137:22-34.121.50.154:33122.service: Deactivated successfully.
Jul 16 00:49:08.985498 systemd[1]: Started sshd@51-147.75.90.137:22-34.121.50.154:33128.service - OpenSSH per-connection server daemon (34.121.50.154:33128).
Jul 16 00:49:09.205935 sshd[3567]: Invalid user odoo from 34.121.50.154 port 33128
Jul 16 00:49:09.248367 sshd[3567]: Connection closed by invalid user odoo 34.121.50.154 port 33128 [preauth]
Jul 16 00:49:09.253287 systemd[1]: sshd@51-147.75.90.137:22-34.121.50.154:33128.service: Deactivated successfully.
Jul 16 00:49:09.310554 systemd[1]: Started sshd@52-147.75.90.137:22-34.121.50.154:33144.service - OpenSSH per-connection server daemon (34.121.50.154:33144).
Jul 16 00:49:09.563350 sshd[3573]: Invalid user apache from 34.121.50.154 port 33144
Jul 16 00:49:09.605807 sshd[3573]: Connection closed by invalid user apache 34.121.50.154 port 33144 [preauth]
Jul 16 00:49:09.610902 systemd[1]: sshd@52-147.75.90.137:22-34.121.50.154:33144.service: Deactivated successfully.
Jul 16 00:49:09.661087 systemd[1]: Started sshd@53-147.75.90.137:22-34.121.50.154:33156.service - OpenSSH per-connection server daemon (34.121.50.154:33156).
Jul 16 00:49:09.767427 kubelet[3434]: I0716 00:49:09.767368 3434 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Jul 16 00:49:09.768306 containerd[1916]: time="2025-07-16T00:49:09.768069710Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Jul 16 00:49:09.768978 kubelet[3434]: I0716 00:49:09.768501 3434 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Jul 16 00:49:09.938004 sshd[3579]: Connection closed by authenticating user root 34.121.50.154 port 33156 [preauth]
Jul 16 00:49:09.941513 systemd[1]: sshd@53-147.75.90.137:22-34.121.50.154:33156.service: Deactivated successfully.
Jul 16 00:49:10.003415 systemd[1]: Started sshd@54-147.75.90.137:22-34.121.50.154:33160.service - OpenSSH per-connection server daemon (34.121.50.154:33160).
Jul 16 00:49:10.252003 sshd[3584]: Invalid user pi from 34.121.50.154 port 33160
Jul 16 00:49:10.296194 sshd[3584]: Connection closed by invalid user pi 34.121.50.154 port 33160 [preauth]
Jul 16 00:49:10.297095 systemd[1]: sshd@54-147.75.90.137:22-34.121.50.154:33160.service: Deactivated successfully.
Jul 16 00:49:10.362388 systemd[1]: Started sshd@55-147.75.90.137:22-34.121.50.154:33174.service - OpenSSH per-connection server daemon (34.121.50.154:33174).
Jul 16 00:49:10.433426 systemd[1]: Created slice kubepods-besteffort-pod92949807_8b51_457a_9864_e953e6982e54.slice - libcontainer container kubepods-besteffort-pod92949807_8b51_457a_9864_e953e6982e54.slice.
Jul 16 00:49:10.520313 kubelet[3434]: I0716 00:49:10.520108 3434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/92949807-8b51-457a-9864-e953e6982e54-kube-proxy\") pod \"kube-proxy-6srvd\" (UID: \"92949807-8b51-457a-9864-e953e6982e54\") " pod="kube-system/kube-proxy-6srvd"
Jul 16 00:49:10.520313 kubelet[3434]: I0716 00:49:10.520197 3434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/92949807-8b51-457a-9864-e953e6982e54-xtables-lock\") pod \"kube-proxy-6srvd\" (UID: \"92949807-8b51-457a-9864-e953e6982e54\") " pod="kube-system/kube-proxy-6srvd"
Jul 16 00:49:10.520313 kubelet[3434]: I0716 00:49:10.520250 3434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/92949807-8b51-457a-9864-e953e6982e54-lib-modules\") pod \"kube-proxy-6srvd\" (UID: \"92949807-8b51-457a-9864-e953e6982e54\") " pod="kube-system/kube-proxy-6srvd"
Jul 16 00:49:10.520787 kubelet[3434]: I0716 00:49:10.520355 3434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xknsn\" (UniqueName: \"kubernetes.io/projected/92949807-8b51-457a-9864-e953e6982e54-kube-api-access-xknsn\") pod \"kube-proxy-6srvd\" (UID: \"92949807-8b51-457a-9864-e953e6982e54\") " pod="kube-system/kube-proxy-6srvd"
Jul 16 00:49:10.647111 sshd[3589]: Connection closed by authenticating user root 34.121.50.154 port 33174 [preauth]
Jul 16 00:49:10.650317 systemd[1]: sshd@55-147.75.90.137:22-34.121.50.154:33174.service: Deactivated successfully.
Jul 16 00:49:10.716346 systemd[1]: Started sshd@56-147.75.90.137:22-34.121.50.154:33190.service - OpenSSH per-connection server daemon (34.121.50.154:33190).
Jul 16 00:49:10.754081 containerd[1916]: time="2025-07-16T00:49:10.754005631Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-6srvd,Uid:92949807-8b51-457a-9864-e953e6982e54,Namespace:kube-system,Attempt:0,}"
Jul 16 00:49:10.761336 containerd[1916]: time="2025-07-16T00:49:10.761308147Z" level=info msg="connecting to shim feab078f5750ac72c520d655d976198feb39b3d414dbd800ff1aafa653966add" address="unix:///run/containerd/s/d1fc15f0969b9d6c0abf81c750bb0e35457c644fbe1bde86473b243e5d704865" namespace=k8s.io protocol=ttrpc version=3
Jul 16 00:49:10.786306 systemd[1]: Started cri-containerd-feab078f5750ac72c520d655d976198feb39b3d414dbd800ff1aafa653966add.scope - libcontainer container feab078f5750ac72c520d655d976198feb39b3d414dbd800ff1aafa653966add.
Jul 16 00:49:10.841961 containerd[1916]: time="2025-07-16T00:49:10.841937037Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-6srvd,Uid:92949807-8b51-457a-9864-e953e6982e54,Namespace:kube-system,Attempt:0,} returns sandbox id \"feab078f5750ac72c520d655d976198feb39b3d414dbd800ff1aafa653966add\"" Jul 16 00:49:10.843427 containerd[1916]: time="2025-07-16T00:49:10.843410645Z" level=info msg="CreateContainer within sandbox \"feab078f5750ac72c520d655d976198feb39b3d414dbd800ff1aafa653966add\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 16 00:49:10.853051 systemd[1]: Created slice kubepods-besteffort-poda09d3f5d_bee4_43f4_a8fd_00689e9f75fd.slice - libcontainer container kubepods-besteffort-poda09d3f5d_bee4_43f4_a8fd_00689e9f75fd.slice. Jul 16 00:49:10.896848 containerd[1916]: time="2025-07-16T00:49:10.896762684Z" level=info msg="Container d8bd266cfe5ba849f3b4293a276aa42c2d0b18a7d28d3a25e9bf9a9dff6f5c97: CDI devices from CRI Config.CDIDevices: []" Jul 16 00:49:10.905636 containerd[1916]: time="2025-07-16T00:49:10.905594885Z" level=info msg="CreateContainer within sandbox \"feab078f5750ac72c520d655d976198feb39b3d414dbd800ff1aafa653966add\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"d8bd266cfe5ba849f3b4293a276aa42c2d0b18a7d28d3a25e9bf9a9dff6f5c97\"" Jul 16 00:49:10.905972 containerd[1916]: time="2025-07-16T00:49:10.905926482Z" level=info msg="StartContainer for \"d8bd266cfe5ba849f3b4293a276aa42c2d0b18a7d28d3a25e9bf9a9dff6f5c97\"" Jul 16 00:49:10.906694 containerd[1916]: time="2025-07-16T00:49:10.906654166Z" level=info msg="connecting to shim d8bd266cfe5ba849f3b4293a276aa42c2d0b18a7d28d3a25e9bf9a9dff6f5c97" address="unix:///run/containerd/s/d1fc15f0969b9d6c0abf81c750bb0e35457c644fbe1bde86473b243e5d704865" protocol=ttrpc version=3 Jul 16 00:49:10.922818 kubelet[3434]: I0716 00:49:10.922802 3434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/a09d3f5d-bee4-43f4-a8fd-00689e9f75fd-var-lib-calico\") pod \"tigera-operator-747864d56d-tqtlh\" (UID: \"a09d3f5d-bee4-43f4-a8fd-00689e9f75fd\") " pod="tigera-operator/tigera-operator-747864d56d-tqtlh" Jul 16 00:49:10.923011 kubelet[3434]: I0716 00:49:10.922822 3434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2n2cg\" (UniqueName: \"kubernetes.io/projected/a09d3f5d-bee4-43f4-a8fd-00689e9f75fd-kube-api-access-2n2cg\") pod \"tigera-operator-747864d56d-tqtlh\" (UID: \"a09d3f5d-bee4-43f4-a8fd-00689e9f75fd\") " pod="tigera-operator/tigera-operator-747864d56d-tqtlh" Jul 16 00:49:10.923973 systemd[1]: Started cri-containerd-d8bd266cfe5ba849f3b4293a276aa42c2d0b18a7d28d3a25e9bf9a9dff6f5c97.scope - libcontainer container d8bd266cfe5ba849f3b4293a276aa42c2d0b18a7d28d3a25e9bf9a9dff6f5c97. Jul 16 00:49:10.930604 sshd[3595]: Invalid user cloud from 34.121.50.154 port 33190 Jul 16 00:49:10.961064 containerd[1916]: time="2025-07-16T00:49:10.961010566Z" level=info msg="StartContainer for \"d8bd266cfe5ba849f3b4293a276aa42c2d0b18a7d28d3a25e9bf9a9dff6f5c97\" returns successfully" Jul 16 00:49:10.971590 sshd[3595]: Connection closed by invalid user cloud 34.121.50.154 port 33190 [preauth] Jul 16 00:49:10.972404 systemd[1]: sshd@56-147.75.90.137:22-34.121.50.154:33190.service: Deactivated successfully. Jul 16 00:49:11.038514 systemd[1]: Started sshd@57-147.75.90.137:22-34.121.50.154:33202.service - OpenSSH per-connection server daemon (34.121.50.154:33202). Jul 16 00:49:11.156071 containerd[1916]: time="2025-07-16T00:49:11.155963361Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-tqtlh,Uid:a09d3f5d-bee4-43f4-a8fd-00689e9f75fd,Namespace:tigera-operator,Attempt:0,}" Jul 16 00:49:11.262214 systemd[1]: Started sshd@58-147.75.90.137:22-144.126.219.123:60468.service - OpenSSH per-connection server daemon (144.126.219.123:60468). 
Jul 16 00:49:11.276521 sshd[3695]: Invalid user admin from 34.121.50.154 port 33202 Jul 16 00:49:11.315822 sshd[3695]: Connection closed by invalid user admin 34.121.50.154 port 33202 [preauth] Jul 16 00:49:11.316631 systemd[1]: sshd@57-147.75.90.137:22-34.121.50.154:33202.service: Deactivated successfully. Jul 16 00:49:11.370941 systemd[1]: Started sshd@59-147.75.90.137:22-34.121.50.154:33214.service - OpenSSH per-connection server daemon (34.121.50.154:33214). Jul 16 00:49:11.389048 sshd[3748]: Received disconnect from 144.126.219.123 port 60468:11: Bye Bye [preauth] Jul 16 00:49:11.389048 sshd[3748]: Disconnected from authenticating user root 144.126.219.123 port 60468 [preauth] Jul 16 00:49:11.389937 systemd[1]: sshd@58-147.75.90.137:22-144.126.219.123:60468.service: Deactivated successfully. Jul 16 00:49:11.403984 containerd[1916]: time="2025-07-16T00:49:11.403944179Z" level=info msg="connecting to shim 8fa45907b1c4c24d80562e2cb5f3532233c87097f268c9aee7e1bb8e258f9b29" address="unix:///run/containerd/s/7538b7d58a8843fc148912eedaf4f3985b5804e909abfb411148c13f616cb2c3" namespace=k8s.io protocol=ttrpc version=3 Jul 16 00:49:11.434173 systemd[1]: Started cri-containerd-8fa45907b1c4c24d80562e2cb5f3532233c87097f268c9aee7e1bb8e258f9b29.scope - libcontainer container 8fa45907b1c4c24d80562e2cb5f3532233c87097f268c9aee7e1bb8e258f9b29. 
Jul 16 00:49:11.497527 containerd[1916]: time="2025-07-16T00:49:11.497485691Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-tqtlh,Uid:a09d3f5d-bee4-43f4-a8fd-00689e9f75fd,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"8fa45907b1c4c24d80562e2cb5f3532233c87097f268c9aee7e1bb8e258f9b29\"" Jul 16 00:49:11.498956 containerd[1916]: time="2025-07-16T00:49:11.498929499Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\"" Jul 16 00:49:11.557944 sshd[3780]: Invalid user odoo from 34.121.50.154 port 33214 Jul 16 00:49:11.600406 sshd[3780]: Connection closed by invalid user odoo 34.121.50.154 port 33214 [preauth] Jul 16 00:49:11.605409 systemd[1]: sshd@59-147.75.90.137:22-34.121.50.154:33214.service: Deactivated successfully. Jul 16 00:49:11.687521 systemd[1]: Started sshd@60-147.75.90.137:22-34.121.50.154:33216.service - OpenSSH per-connection server daemon (34.121.50.154:33216). Jul 16 00:49:11.960039 sshd[3877]: Connection closed by authenticating user root 34.121.50.154 port 33216 [preauth] Jul 16 00:49:11.961125 systemd[1]: sshd@60-147.75.90.137:22-34.121.50.154:33216.service: Deactivated successfully. Jul 16 00:49:12.024912 systemd[1]: Started sshd@61-147.75.90.137:22-34.121.50.154:33230.service - OpenSSH per-connection server daemon (34.121.50.154:33230). Jul 16 00:49:12.241316 sshd[3882]: Invalid user mysql from 34.121.50.154 port 33230 Jul 16 00:49:12.283775 sshd[3882]: Connection closed by invalid user mysql 34.121.50.154 port 33230 [preauth] Jul 16 00:49:12.287106 systemd[1]: sshd@61-147.75.90.137:22-34.121.50.154:33230.service: Deactivated successfully. Jul 16 00:49:12.902410 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2604951372.mount: Deactivated successfully. 
Jul 16 00:49:13.245028 containerd[1916]: time="2025-07-16T00:49:13.244972584Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 16 00:49:13.245232 containerd[1916]: time="2025-07-16T00:49:13.245180138Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.3: active requests=0, bytes read=25056543" Jul 16 00:49:13.245558 containerd[1916]: time="2025-07-16T00:49:13.245520968Z" level=info msg="ImageCreate event name:\"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 16 00:49:13.246494 containerd[1916]: time="2025-07-16T00:49:13.246453174Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 16 00:49:13.247146 containerd[1916]: time="2025-07-16T00:49:13.247105723Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.3\" with image id \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\", repo tag \"quay.io/tigera/operator:v1.38.3\", repo digest \"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\", size \"25052538\" in 1.748144378s" Jul 16 00:49:13.247146 containerd[1916]: time="2025-07-16T00:49:13.247121342Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\"" Jul 16 00:49:13.248012 containerd[1916]: time="2025-07-16T00:49:13.247999797Z" level=info msg="CreateContainer within sandbox \"8fa45907b1c4c24d80562e2cb5f3532233c87097f268c9aee7e1bb8e258f9b29\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jul 16 00:49:13.250784 containerd[1916]: time="2025-07-16T00:49:13.250771523Z" level=info msg="Container 
bf7807d5393b265257c8e1105347553b360221c211ec86fbf7d182c59ac6b9f8: CDI devices from CRI Config.CDIDevices: []" Jul 16 00:49:13.253418 containerd[1916]: time="2025-07-16T00:49:13.253370718Z" level=info msg="CreateContainer within sandbox \"8fa45907b1c4c24d80562e2cb5f3532233c87097f268c9aee7e1bb8e258f9b29\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"bf7807d5393b265257c8e1105347553b360221c211ec86fbf7d182c59ac6b9f8\"" Jul 16 00:49:13.253677 containerd[1916]: time="2025-07-16T00:49:13.253663794Z" level=info msg="StartContainer for \"bf7807d5393b265257c8e1105347553b360221c211ec86fbf7d182c59ac6b9f8\"" Jul 16 00:49:13.253679 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1016884763.mount: Deactivated successfully. Jul 16 00:49:13.254240 containerd[1916]: time="2025-07-16T00:49:13.254197063Z" level=info msg="connecting to shim bf7807d5393b265257c8e1105347553b360221c211ec86fbf7d182c59ac6b9f8" address="unix:///run/containerd/s/7538b7d58a8843fc148912eedaf4f3985b5804e909abfb411148c13f616cb2c3" protocol=ttrpc version=3 Jul 16 00:49:13.267044 systemd[1]: Started cri-containerd-bf7807d5393b265257c8e1105347553b360221c211ec86fbf7d182c59ac6b9f8.scope - libcontainer container bf7807d5393b265257c8e1105347553b360221c211ec86fbf7d182c59ac6b9f8. 
Jul 16 00:49:13.280524 containerd[1916]: time="2025-07-16T00:49:13.280467943Z" level=info msg="StartContainer for \"bf7807d5393b265257c8e1105347553b360221c211ec86fbf7d182c59ac6b9f8\" returns successfully" Jul 16 00:49:13.639289 kubelet[3434]: I0716 00:49:13.639146 3434 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-6srvd" podStartSLOduration=3.639090434 podStartE2EDuration="3.639090434s" podCreationTimestamp="2025-07-16 00:49:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-16 00:49:11.63214256 +0000 UTC m=+7.108501562" watchObservedRunningTime="2025-07-16 00:49:13.639090434 +0000 UTC m=+9.115449468" Jul 16 00:49:13.640217 kubelet[3434]: I0716 00:49:13.639490 3434 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-747864d56d-tqtlh" podStartSLOduration=1.8904910510000001 podStartE2EDuration="3.639458498s" podCreationTimestamp="2025-07-16 00:49:10 +0000 UTC" firstStartedPulling="2025-07-16 00:49:11.498496078 +0000 UTC m=+6.974855038" lastFinishedPulling="2025-07-16 00:49:13.247463541 +0000 UTC m=+8.723822485" observedRunningTime="2025-07-16 00:49:13.639384541 +0000 UTC m=+9.115743551" watchObservedRunningTime="2025-07-16 00:49:13.639458498 +0000 UTC m=+9.115817493" Jul 16 00:49:14.622767 systemd[1]: Started sshd@62-147.75.90.137:22-203.55.224.216:46044.service - OpenSSH per-connection server daemon (203.55.224.216:46044). Jul 16 00:49:17.647564 sudo[2215]: pam_unix(sudo:session): session closed for user root Jul 16 00:49:17.648475 sshd[2214]: Connection closed by 147.75.109.163 port 34308 Jul 16 00:49:17.648695 sshd-session[2212]: pam_unix(sshd:session): session closed for user core Jul 16 00:49:17.650558 systemd[1]: sshd@8-147.75.90.137:22-147.75.109.163:34308.service: Deactivated successfully. 
Jul 16 00:49:17.651629 systemd[1]: session-11.scope: Deactivated successfully. Jul 16 00:49:17.651755 systemd[1]: session-11.scope: Consumed 3.325s CPU time, 231.6M memory peak. Jul 16 00:49:17.653158 systemd-logind[1904]: Session 11 logged out. Waiting for processes to exit. Jul 16 00:49:17.653860 systemd-logind[1904]: Removed session 11. Jul 16 00:49:18.133988 update_engine[1909]: I20250716 00:49:18.133921 1909 update_attempter.cc:509] Updating boot flags... Jul 16 00:49:19.791854 systemd[1]: Created slice kubepods-besteffort-pod3a884450_4ca0_4131_855c_b45f22d9924b.slice - libcontainer container kubepods-besteffort-pod3a884450_4ca0_4131_855c_b45f22d9924b.slice. Jul 16 00:49:19.883862 kubelet[3434]: I0716 00:49:19.883810 3434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3a884450-4ca0-4131-855c-b45f22d9924b-tigera-ca-bundle\") pod \"calico-typha-fd95d4c78-9dt2l\" (UID: \"3a884450-4ca0-4131-855c-b45f22d9924b\") " pod="calico-system/calico-typha-fd95d4c78-9dt2l" Jul 16 00:49:19.884274 kubelet[3434]: I0716 00:49:19.883887 3434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/3a884450-4ca0-4131-855c-b45f22d9924b-typha-certs\") pod \"calico-typha-fd95d4c78-9dt2l\" (UID: \"3a884450-4ca0-4131-855c-b45f22d9924b\") " pod="calico-system/calico-typha-fd95d4c78-9dt2l" Jul 16 00:49:19.884274 kubelet[3434]: I0716 00:49:19.883919 3434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gtgn7\" (UniqueName: \"kubernetes.io/projected/3a884450-4ca0-4131-855c-b45f22d9924b-kube-api-access-gtgn7\") pod \"calico-typha-fd95d4c78-9dt2l\" (UID: \"3a884450-4ca0-4131-855c-b45f22d9924b\") " pod="calico-system/calico-typha-fd95d4c78-9dt2l" Jul 16 00:49:20.094465 containerd[1916]: time="2025-07-16T00:49:20.094418666Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-fd95d4c78-9dt2l,Uid:3a884450-4ca0-4131-855c-b45f22d9924b,Namespace:calico-system,Attempt:0,}" Jul 16 00:49:20.101527 containerd[1916]: time="2025-07-16T00:49:20.101506912Z" level=info msg="connecting to shim 51279558ca288f874dc3637803939ff0f64ddb51a4d21c0f49b9668ad928f682" address="unix:///run/containerd/s/94f8ea5d8704c63eff6d7177b53ea8a0aeb291eec7b55f840d62a59df888ff4a" namespace=k8s.io protocol=ttrpc version=3 Jul 16 00:49:20.119972 systemd[1]: Started cri-containerd-51279558ca288f874dc3637803939ff0f64ddb51a4d21c0f49b9668ad928f682.scope - libcontainer container 51279558ca288f874dc3637803939ff0f64ddb51a4d21c0f49b9668ad928f682. Jul 16 00:49:20.123872 systemd[1]: Created slice kubepods-besteffort-pod8acb592d_d2e5_49b9_8513_71d2ea13c9cc.slice - libcontainer container kubepods-besteffort-pod8acb592d_d2e5_49b9_8513_71d2ea13c9cc.slice. Jul 16 00:49:20.151504 containerd[1916]: time="2025-07-16T00:49:20.151482799Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-fd95d4c78-9dt2l,Uid:3a884450-4ca0-4131-855c-b45f22d9924b,Namespace:calico-system,Attempt:0,} returns sandbox id \"51279558ca288f874dc3637803939ff0f64ddb51a4d21c0f49b9668ad928f682\"" Jul 16 00:49:20.152131 containerd[1916]: time="2025-07-16T00:49:20.152120335Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\"" Jul 16 00:49:20.186935 kubelet[3434]: I0716 00:49:20.186890 3434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/8acb592d-d2e5-49b9-8513-71d2ea13c9cc-cni-bin-dir\") pod \"calico-node-kl7bg\" (UID: \"8acb592d-d2e5-49b9-8513-71d2ea13c9cc\") " pod="calico-system/calico-node-kl7bg" Jul 16 00:49:20.186935 kubelet[3434]: I0716 00:49:20.186926 3434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: 
\"kubernetes.io/host-path/8acb592d-d2e5-49b9-8513-71d2ea13c9cc-cni-net-dir\") pod \"calico-node-kl7bg\" (UID: \"8acb592d-d2e5-49b9-8513-71d2ea13c9cc\") " pod="calico-system/calico-node-kl7bg" Jul 16 00:49:20.187059 kubelet[3434]: I0716 00:49:20.186944 3434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8acb592d-d2e5-49b9-8513-71d2ea13c9cc-xtables-lock\") pod \"calico-node-kl7bg\" (UID: \"8acb592d-d2e5-49b9-8513-71d2ea13c9cc\") " pod="calico-system/calico-node-kl7bg" Jul 16 00:49:20.187059 kubelet[3434]: I0716 00:49:20.186973 3434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8acb592d-d2e5-49b9-8513-71d2ea13c9cc-lib-modules\") pod \"calico-node-kl7bg\" (UID: \"8acb592d-d2e5-49b9-8513-71d2ea13c9cc\") " pod="calico-system/calico-node-kl7bg" Jul 16 00:49:20.187059 kubelet[3434]: I0716 00:49:20.186999 3434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5s2xx\" (UniqueName: \"kubernetes.io/projected/8acb592d-d2e5-49b9-8513-71d2ea13c9cc-kube-api-access-5s2xx\") pod \"calico-node-kl7bg\" (UID: \"8acb592d-d2e5-49b9-8513-71d2ea13c9cc\") " pod="calico-system/calico-node-kl7bg" Jul 16 00:49:20.187059 kubelet[3434]: I0716 00:49:20.187016 3434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/8acb592d-d2e5-49b9-8513-71d2ea13c9cc-var-run-calico\") pod \"calico-node-kl7bg\" (UID: \"8acb592d-d2e5-49b9-8513-71d2ea13c9cc\") " pod="calico-system/calico-node-kl7bg" Jul 16 00:49:20.187059 kubelet[3434]: I0716 00:49:20.187030 3434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: 
\"kubernetes.io/host-path/8acb592d-d2e5-49b9-8513-71d2ea13c9cc-cni-log-dir\") pod \"calico-node-kl7bg\" (UID: \"8acb592d-d2e5-49b9-8513-71d2ea13c9cc\") " pod="calico-system/calico-node-kl7bg" Jul 16 00:49:20.187164 kubelet[3434]: I0716 00:49:20.187042 3434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/8acb592d-d2e5-49b9-8513-71d2ea13c9cc-node-certs\") pod \"calico-node-kl7bg\" (UID: \"8acb592d-d2e5-49b9-8513-71d2ea13c9cc\") " pod="calico-system/calico-node-kl7bg" Jul 16 00:49:20.187164 kubelet[3434]: I0716 00:49:20.187053 3434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/8acb592d-d2e5-49b9-8513-71d2ea13c9cc-flexvol-driver-host\") pod \"calico-node-kl7bg\" (UID: \"8acb592d-d2e5-49b9-8513-71d2ea13c9cc\") " pod="calico-system/calico-node-kl7bg" Jul 16 00:49:20.187164 kubelet[3434]: I0716 00:49:20.187064 3434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8acb592d-d2e5-49b9-8513-71d2ea13c9cc-tigera-ca-bundle\") pod \"calico-node-kl7bg\" (UID: \"8acb592d-d2e5-49b9-8513-71d2ea13c9cc\") " pod="calico-system/calico-node-kl7bg" Jul 16 00:49:20.187164 kubelet[3434]: I0716 00:49:20.187076 3434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/8acb592d-d2e5-49b9-8513-71d2ea13c9cc-policysync\") pod \"calico-node-kl7bg\" (UID: \"8acb592d-d2e5-49b9-8513-71d2ea13c9cc\") " pod="calico-system/calico-node-kl7bg" Jul 16 00:49:20.187164 kubelet[3434]: I0716 00:49:20.187086 3434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: 
\"kubernetes.io/host-path/8acb592d-d2e5-49b9-8513-71d2ea13c9cc-var-lib-calico\") pod \"calico-node-kl7bg\" (UID: \"8acb592d-d2e5-49b9-8513-71d2ea13c9cc\") " pod="calico-system/calico-node-kl7bg" Jul 16 00:49:20.292871 kubelet[3434]: E0716 00:49:20.292678 3434 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:49:20.292871 kubelet[3434]: W0716 00:49:20.292757 3434 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:49:20.293425 kubelet[3434]: E0716 00:49:20.293344 3434 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:49:20.298459 kubelet[3434]: E0716 00:49:20.298362 3434 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:49:20.298459 kubelet[3434]: W0716 00:49:20.298416 3434 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:49:20.298603 kubelet[3434]: E0716 00:49:20.298484 3434 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 16 00:49:20.301463 kubelet[3434]: E0716 00:49:20.301420 3434 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:49:20.301463 kubelet[3434]: W0716 00:49:20.301430 3434 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:49:20.301463 kubelet[3434]: E0716 00:49:20.301440 3434 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:49:20.427388 containerd[1916]: time="2025-07-16T00:49:20.427181131Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-kl7bg,Uid:8acb592d-d2e5-49b9-8513-71d2ea13c9cc,Namespace:calico-system,Attempt:0,}" Jul 16 00:49:20.435173 containerd[1916]: time="2025-07-16T00:49:20.435150854Z" level=info msg="connecting to shim bdc3409baaaf10cc6b76c35b3819b158f2f529dcd7f68f7ac384c5ae516aa464" address="unix:///run/containerd/s/02505df8b003a13615a11a0ea88f5380afe0edf5ebd171a6ac5fd450f65b56ec" namespace=k8s.io protocol=ttrpc version=3 Jul 16 00:49:20.445320 kubelet[3434]: E0716 00:49:20.445289 3434 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-c8xm9" podUID="05542b76-a42a-41b7-a7f7-a1f97b8c0b25" Jul 16 00:49:20.456017 systemd[1]: Started cri-containerd-bdc3409baaaf10cc6b76c35b3819b158f2f529dcd7f68f7ac384c5ae516aa464.scope - libcontainer container bdc3409baaaf10cc6b76c35b3819b158f2f529dcd7f68f7ac384c5ae516aa464. 
Jul 16 00:49:20.467919 containerd[1916]: time="2025-07-16T00:49:20.467876021Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-kl7bg,Uid:8acb592d-d2e5-49b9-8513-71d2ea13c9cc,Namespace:calico-system,Attempt:0,} returns sandbox id \"bdc3409baaaf10cc6b76c35b3819b158f2f529dcd7f68f7ac384c5ae516aa464\"" Jul 16 00:49:20.471928 kubelet[3434]: E0716 00:49:20.471915 3434 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:49:20.471928 kubelet[3434]: W0716 00:49:20.471926 3434 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:49:20.471999 kubelet[3434]: E0716 00:49:20.471938 3434 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:49:20.472101 kubelet[3434]: E0716 00:49:20.472093 3434 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:49:20.472101 kubelet[3434]: W0716 00:49:20.472100 3434 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:49:20.472146 kubelet[3434]: E0716 00:49:20.472107 3434 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 16 00:49:20.472236 kubelet[3434]: E0716 00:49:20.472228 3434 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:49:20.472256 kubelet[3434]: W0716 00:49:20.472235 3434 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:49:20.472256 kubelet[3434]: E0716 00:49:20.472242 3434 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:49:20.472384 kubelet[3434]: E0716 00:49:20.472377 3434 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:49:20.472384 kubelet[3434]: W0716 00:49:20.472383 3434 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:49:20.472428 kubelet[3434]: E0716 00:49:20.472388 3434 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 16 00:49:20.472478 kubelet[3434]: E0716 00:49:20.472472 3434 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:49:20.472478 kubelet[3434]: W0716 00:49:20.472477 3434 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:49:20.472516 kubelet[3434]: E0716 00:49:20.472481 3434 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:49:20.472558 kubelet[3434]: E0716 00:49:20.472553 3434 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:49:20.472575 kubelet[3434]: W0716 00:49:20.472558 3434 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:49:20.472575 kubelet[3434]: E0716 00:49:20.472563 3434 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 16 00:49:20.472651 kubelet[3434]: E0716 00:49:20.472646 3434 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:49:20.472651 kubelet[3434]: W0716 00:49:20.472650 3434 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:49:20.472685 kubelet[3434]: E0716 00:49:20.472655 3434 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:49:20.472735 kubelet[3434]: E0716 00:49:20.472731 3434 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:49:20.472753 kubelet[3434]: W0716 00:49:20.472736 3434 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:49:20.472753 kubelet[3434]: E0716 00:49:20.472741 3434 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 16 00:49:20.472822 kubelet[3434]: E0716 00:49:20.472817 3434 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:49:20.472851 kubelet[3434]: W0716 00:49:20.472821 3434 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:49:20.472851 kubelet[3434]: E0716 00:49:20.472826 3434 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:49:20.472914 kubelet[3434]: E0716 00:49:20.472909 3434 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:49:20.472914 kubelet[3434]: W0716 00:49:20.472913 3434 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:49:20.472956 kubelet[3434]: E0716 00:49:20.472918 3434 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 16 00:49:20.472994 kubelet[3434]: E0716 00:49:20.472989 3434 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:49:20.472994 kubelet[3434]: W0716 00:49:20.472994 3434 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:49:20.473031 kubelet[3434]: E0716 00:49:20.472998 3434 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:49:20.473074 kubelet[3434]: E0716 00:49:20.473069 3434 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:49:20.473074 kubelet[3434]: W0716 00:49:20.473073 3434 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:49:20.473115 kubelet[3434]: E0716 00:49:20.473078 3434 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 16 00:49:20.473156 kubelet[3434]: E0716 00:49:20.473151 3434 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:49:20.473156 kubelet[3434]: W0716 00:49:20.473156 3434 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:49:20.473193 kubelet[3434]: E0716 00:49:20.473160 3434 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:49:20.473235 kubelet[3434]: E0716 00:49:20.473231 3434 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:49:20.473255 kubelet[3434]: W0716 00:49:20.473235 3434 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:49:20.473255 kubelet[3434]: E0716 00:49:20.473240 3434 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 16 00:49:20.473315 kubelet[3434]: E0716 00:49:20.473310 3434 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:49:20.473315 kubelet[3434]: W0716 00:49:20.473315 3434 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:49:20.473347 kubelet[3434]: E0716 00:49:20.473319 3434 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:49:20.473394 kubelet[3434]: E0716 00:49:20.473389 3434 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:49:20.473414 kubelet[3434]: W0716 00:49:20.473394 3434 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:49:20.473414 kubelet[3434]: E0716 00:49:20.473398 3434 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 16 00:49:20.473478 kubelet[3434]: E0716 00:49:20.473474 3434 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:49:20.473478 kubelet[3434]: W0716 00:49:20.473478 3434 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:49:20.473517 kubelet[3434]: E0716 00:49:20.473483 3434 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:49:20.473557 kubelet[3434]: E0716 00:49:20.473552 3434 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:49:20.473557 kubelet[3434]: W0716 00:49:20.473556 3434 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:49:20.473593 kubelet[3434]: E0716 00:49:20.473560 3434 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 16 00:49:20.473635 kubelet[3434]: E0716 00:49:20.473630 3434 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:49:20.473635 kubelet[3434]: W0716 00:49:20.473634 3434 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:49:20.473668 kubelet[3434]: E0716 00:49:20.473639 3434 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:49:20.473717 kubelet[3434]: E0716 00:49:20.473712 3434 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:49:20.473738 kubelet[3434]: W0716 00:49:20.473717 3434 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:49:20.473738 kubelet[3434]: E0716 00:49:20.473721 3434 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 16 00:49:20.490296 kubelet[3434]: E0716 00:49:20.490252 3434 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:49:20.490296 kubelet[3434]: W0716 00:49:20.490262 3434 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:49:20.490296 kubelet[3434]: E0716 00:49:20.490271 3434 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:49:20.490296 kubelet[3434]: I0716 00:49:20.490288 3434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/05542b76-a42a-41b7-a7f7-a1f97b8c0b25-registration-dir\") pod \"csi-node-driver-c8xm9\" (UID: \"05542b76-a42a-41b7-a7f7-a1f97b8c0b25\") " pod="calico-system/csi-node-driver-c8xm9" Jul 16 00:49:20.490512 kubelet[3434]: E0716 00:49:20.490480 3434 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:49:20.490512 kubelet[3434]: W0716 00:49:20.490490 3434 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:49:20.490512 kubelet[3434]: E0716 00:49:20.490501 3434 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 16 00:49:20.490512 kubelet[3434]: I0716 00:49:20.490514 3434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/05542b76-a42a-41b7-a7f7-a1f97b8c0b25-socket-dir\") pod \"csi-node-driver-c8xm9\" (UID: \"05542b76-a42a-41b7-a7f7-a1f97b8c0b25\") " pod="calico-system/csi-node-driver-c8xm9" Jul 16 00:49:20.490691 kubelet[3434]: E0716 00:49:20.490681 3434 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:49:20.490691 kubelet[3434]: W0716 00:49:20.490689 3434 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:49:20.490745 kubelet[3434]: E0716 00:49:20.490699 3434 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 16 00:49:20.490745 kubelet[3434]: I0716 00:49:20.490711 3434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/05542b76-a42a-41b7-a7f7-a1f97b8c0b25-varrun\") pod \"csi-node-driver-c8xm9\" (UID: \"05542b76-a42a-41b7-a7f7-a1f97b8c0b25\") " pod="calico-system/csi-node-driver-c8xm9" Jul 16 00:49:20.490895 kubelet[3434]: E0716 00:49:20.490852 3434 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:49:20.490895 kubelet[3434]: W0716 00:49:20.490861 3434 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:49:20.490895 kubelet[3434]: E0716 00:49:20.490872 3434 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:49:20.491081 kubelet[3434]: E0716 00:49:20.491043 3434 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:49:20.491081 kubelet[3434]: W0716 00:49:20.491052 3434 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:49:20.491081 kubelet[3434]: E0716 00:49:20.491063 3434 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 16 00:49:20.491265 kubelet[3434]: E0716 00:49:20.491230 3434 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:49:20.491265 kubelet[3434]: W0716 00:49:20.491239 3434 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:49:20.491265 kubelet[3434]: E0716 00:49:20.491249 3434 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:49:20.491433 kubelet[3434]: E0716 00:49:20.491399 3434 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:49:20.491433 kubelet[3434]: W0716 00:49:20.491407 3434 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:49:20.491433 kubelet[3434]: E0716 00:49:20.491418 3434 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 16 00:49:20.491568 kubelet[3434]: E0716 00:49:20.491560 3434 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:49:20.491596 kubelet[3434]: W0716 00:49:20.491567 3434 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:49:20.491596 kubelet[3434]: E0716 00:49:20.491577 3434 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:49:20.491596 kubelet[3434]: I0716 00:49:20.491591 3434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/05542b76-a42a-41b7-a7f7-a1f97b8c0b25-kubelet-dir\") pod \"csi-node-driver-c8xm9\" (UID: \"05542b76-a42a-41b7-a7f7-a1f97b8c0b25\") " pod="calico-system/csi-node-driver-c8xm9" Jul 16 00:49:20.491711 kubelet[3434]: E0716 00:49:20.491704 3434 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:49:20.491737 kubelet[3434]: W0716 00:49:20.491711 3434 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:49:20.491737 kubelet[3434]: E0716 00:49:20.491728 3434 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 16 00:49:20.491782 kubelet[3434]: I0716 00:49:20.491740 3434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vnqsq\" (UniqueName: \"kubernetes.io/projected/05542b76-a42a-41b7-a7f7-a1f97b8c0b25-kube-api-access-vnqsq\") pod \"csi-node-driver-c8xm9\" (UID: \"05542b76-a42a-41b7-a7f7-a1f97b8c0b25\") " pod="calico-system/csi-node-driver-c8xm9" Jul 16 00:49:20.491878 kubelet[3434]: E0716 00:49:20.491870 3434 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:49:20.491908 kubelet[3434]: W0716 00:49:20.491877 3434 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:49:20.491908 kubelet[3434]: E0716 00:49:20.491892 3434 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:49:20.492038 kubelet[3434]: E0716 00:49:20.492031 3434 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:49:20.492038 kubelet[3434]: W0716 00:49:20.492038 3434 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:49:20.492081 kubelet[3434]: E0716 00:49:20.492050 3434 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 16 00:49:20.492145 kubelet[3434]: E0716 00:49:20.492139 3434 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:49:20.492168 kubelet[3434]: W0716 00:49:20.492145 3434 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:49:20.492168 kubelet[3434]: E0716 00:49:20.492154 3434 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:49:20.492257 kubelet[3434]: E0716 00:49:20.492251 3434 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:49:20.492281 kubelet[3434]: W0716 00:49:20.492257 3434 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:49:20.492281 kubelet[3434]: E0716 00:49:20.492263 3434 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 16 00:49:20.492371 kubelet[3434]: E0716 00:49:20.492365 3434 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:49:20.492399 kubelet[3434]: W0716 00:49:20.492371 3434 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:49:20.492399 kubelet[3434]: E0716 00:49:20.492377 3434 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:49:20.492482 kubelet[3434]: E0716 00:49:20.492475 3434 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:49:20.492482 kubelet[3434]: W0716 00:49:20.492481 3434 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:49:20.492529 kubelet[3434]: E0716 00:49:20.492487 3434 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 16 00:49:20.592397 kubelet[3434]: E0716 00:49:20.592343 3434 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:49:20.592397 kubelet[3434]: W0716 00:49:20.592364 3434 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:49:20.592397 kubelet[3434]: E0716 00:49:20.592388 3434 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:49:20.592606 kubelet[3434]: E0716 00:49:20.592551 3434 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:49:20.592606 kubelet[3434]: W0716 00:49:20.592557 3434 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:49:20.592606 kubelet[3434]: E0716 00:49:20.592567 3434 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 16 00:49:20.592727 kubelet[3434]: E0716 00:49:20.592695 3434 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:49:20.592727 kubelet[3434]: W0716 00:49:20.592700 3434 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:49:20.592727 kubelet[3434]: E0716 00:49:20.592705 3434 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:49:20.592832 kubelet[3434]: E0716 00:49:20.592820 3434 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:49:20.592859 kubelet[3434]: W0716 00:49:20.592830 3434 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:49:20.592859 kubelet[3434]: E0716 00:49:20.592841 3434 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 16 00:49:20.592963 kubelet[3434]: E0716 00:49:20.592956 3434 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:49:20.592963 kubelet[3434]: W0716 00:49:20.592961 3434 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:49:20.593023 kubelet[3434]: E0716 00:49:20.592967 3434 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:49:20.593058 kubelet[3434]: E0716 00:49:20.593050 3434 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:49:20.593058 kubelet[3434]: W0716 00:49:20.593055 3434 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:49:20.593109 kubelet[3434]: E0716 00:49:20.593061 3434 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 16 00:49:20.593176 kubelet[3434]: E0716 00:49:20.593169 3434 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:49:20.593176 kubelet[3434]: W0716 00:49:20.593173 3434 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:49:20.593237 kubelet[3434]: E0716 00:49:20.593180 3434 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:49:20.593270 kubelet[3434]: E0716 00:49:20.593266 3434 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:49:20.593302 kubelet[3434]: W0716 00:49:20.593271 3434 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:49:20.593302 kubelet[3434]: E0716 00:49:20.593277 3434 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 16 00:49:20.593364 kubelet[3434]: E0716 00:49:20.593358 3434 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:49:20.593364 kubelet[3434]: W0716 00:49:20.593363 3434 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:49:20.593404 kubelet[3434]: E0716 00:49:20.593368 3434 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:49:20.593453 kubelet[3434]: E0716 00:49:20.593447 3434 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:49:20.593453 kubelet[3434]: W0716 00:49:20.593452 3434 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:49:20.593513 kubelet[3434]: E0716 00:49:20.593457 3434 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 16 00:49:20.593562 kubelet[3434]: E0716 00:49:20.593556 3434 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:49:20.593562 kubelet[3434]: W0716 00:49:20.593561 3434 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:49:20.593605 kubelet[3434]: E0716 00:49:20.593567 3434 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:49:20.593718 kubelet[3434]: E0716 00:49:20.593711 3434 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:49:20.593738 kubelet[3434]: W0716 00:49:20.593718 3434 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:49:20.593738 kubelet[3434]: E0716 00:49:20.593726 3434 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 16 00:49:20.593819 kubelet[3434]: E0716 00:49:20.593814 3434 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:49:20.593844 kubelet[3434]: W0716 00:49:20.593819 3434 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:49:20.593844 kubelet[3434]: E0716 00:49:20.593830 3434 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:49:20.593911 kubelet[3434]: E0716 00:49:20.593905 3434 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:49:20.593911 kubelet[3434]: W0716 00:49:20.593910 3434 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:49:20.593946 kubelet[3434]: E0716 00:49:20.593921 3434 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 16 00:49:20.593990 kubelet[3434]: E0716 00:49:20.593986 3434 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:49:20.594012 kubelet[3434]: W0716 00:49:20.593990 3434 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:49:20.594012 kubelet[3434]: E0716 00:49:20.593996 3434 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:49:20.594074 kubelet[3434]: E0716 00:49:20.594069 3434 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:49:20.594074 kubelet[3434]: W0716 00:49:20.594073 3434 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:49:20.594109 kubelet[3434]: E0716 00:49:20.594079 3434 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 16 00:49:20.594157 kubelet[3434]: E0716 00:49:20.594152 3434 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:49:20.594178 kubelet[3434]: W0716 00:49:20.594157 3434 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:49:20.594178 kubelet[3434]: E0716 00:49:20.594163 3434 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:49:20.594260 kubelet[3434]: E0716 00:49:20.594254 3434 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:49:20.594260 kubelet[3434]: W0716 00:49:20.594259 3434 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:49:20.594298 kubelet[3434]: E0716 00:49:20.594265 3434 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 16 00:49:20.594346 kubelet[3434]: E0716 00:49:20.594341 3434 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:49:20.594346 kubelet[3434]: W0716 00:49:20.594345 3434 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:49:20.594383 kubelet[3434]: E0716 00:49:20.594351 3434 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:49:20.594426 kubelet[3434]: E0716 00:49:20.594421 3434 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:49:20.594444 kubelet[3434]: W0716 00:49:20.594426 3434 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:49:20.594444 kubelet[3434]: E0716 00:49:20.594431 3434 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 16 00:49:20.594508 kubelet[3434]: E0716 00:49:20.594503 3434 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:49:20.594529 kubelet[3434]: W0716 00:49:20.594508 3434 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:49:20.594529 kubelet[3434]: E0716 00:49:20.594514 3434 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:49:20.594606 kubelet[3434]: E0716 00:49:20.594600 3434 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:49:20.594637 kubelet[3434]: W0716 00:49:20.594606 3434 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:49:20.594637 kubelet[3434]: E0716 00:49:20.594613 3434 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 16 00:49:20.594693 kubelet[3434]: E0716 00:49:20.594686 3434 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:49:20.594714 kubelet[3434]: W0716 00:49:20.594693 3434 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:49:20.594714 kubelet[3434]: E0716 00:49:20.594699 3434 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:49:20.594836 kubelet[3434]: E0716 00:49:20.594831 3434 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:49:20.594855 kubelet[3434]: W0716 00:49:20.594836 3434 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:49:20.594855 kubelet[3434]: E0716 00:49:20.594846 3434 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 16 00:49:20.594940 kubelet[3434]: E0716 00:49:20.594935 3434 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:49:20.594940 kubelet[3434]: W0716 00:49:20.594940 3434 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:49:20.594980 kubelet[3434]: E0716 00:49:20.594945 3434 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:49:20.597965 kubelet[3434]: E0716 00:49:20.597926 3434 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:49:20.597965 kubelet[3434]: W0716 00:49:20.597933 3434 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:49:20.597965 kubelet[3434]: E0716 00:49:20.597939 3434 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:49:21.795804 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2209379809.mount: Deactivated successfully. 
Jul 16 00:49:22.505250 containerd[1916]: time="2025-07-16T00:49:22.505226953Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 16 00:49:22.505464 containerd[1916]: time="2025-07-16T00:49:22.505447567Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.2: active requests=0, bytes read=35233364" Jul 16 00:49:22.505797 containerd[1916]: time="2025-07-16T00:49:22.505756576Z" level=info msg="ImageCreate event name:\"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 16 00:49:22.506703 containerd[1916]: time="2025-07-16T00:49:22.506690166Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 16 00:49:22.507011 containerd[1916]: time="2025-07-16T00:49:22.507000151Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.2\" with image id \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\", size \"35233218\" in 2.354864873s" Jul 16 00:49:22.507044 containerd[1916]: time="2025-07-16T00:49:22.507014929Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image reference \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\"" Jul 16 00:49:22.507529 containerd[1916]: time="2025-07-16T00:49:22.507517250Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\"" Jul 16 00:49:22.511079 containerd[1916]: time="2025-07-16T00:49:22.511062167Z" level=info msg="CreateContainer within sandbox \"51279558ca288f874dc3637803939ff0f64ddb51a4d21c0f49b9668ad928f682\" for 
container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jul 16 00:49:22.513886 containerd[1916]: time="2025-07-16T00:49:22.513868527Z" level=info msg="Container cc190c7e91b100e3a8639a5bf5b4bb33f974a0765719a577262a1937d8774a1e: CDI devices from CRI Config.CDIDevices: []" Jul 16 00:49:22.517088 containerd[1916]: time="2025-07-16T00:49:22.517039702Z" level=info msg="CreateContainer within sandbox \"51279558ca288f874dc3637803939ff0f64ddb51a4d21c0f49b9668ad928f682\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"cc190c7e91b100e3a8639a5bf5b4bb33f974a0765719a577262a1937d8774a1e\"" Jul 16 00:49:22.517312 containerd[1916]: time="2025-07-16T00:49:22.517299002Z" level=info msg="StartContainer for \"cc190c7e91b100e3a8639a5bf5b4bb33f974a0765719a577262a1937d8774a1e\"" Jul 16 00:49:22.517905 containerd[1916]: time="2025-07-16T00:49:22.517892968Z" level=info msg="connecting to shim cc190c7e91b100e3a8639a5bf5b4bb33f974a0765719a577262a1937d8774a1e" address="unix:///run/containerd/s/94f8ea5d8704c63eff6d7177b53ea8a0aeb291eec7b55f840d62a59df888ff4a" protocol=ttrpc version=3 Jul 16 00:49:22.538081 systemd[1]: Started cri-containerd-cc190c7e91b100e3a8639a5bf5b4bb33f974a0765719a577262a1937d8774a1e.scope - libcontainer container cc190c7e91b100e3a8639a5bf5b4bb33f974a0765719a577262a1937d8774a1e. 
Jul 16 00:49:22.571845 containerd[1916]: time="2025-07-16T00:49:22.571811148Z" level=info msg="StartContainer for \"cc190c7e91b100e3a8639a5bf5b4bb33f974a0765719a577262a1937d8774a1e\" returns successfully" Jul 16 00:49:22.587937 kubelet[3434]: E0716 00:49:22.587916 3434 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-c8xm9" podUID="05542b76-a42a-41b7-a7f7-a1f97b8c0b25" Jul 16 00:49:22.687104 kubelet[3434]: E0716 00:49:22.687086 3434 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:49:22.687104 kubelet[3434]: W0716 00:49:22.687099 3434 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:49:22.687217 kubelet[3434]: E0716 00:49:22.687112 3434 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:49:22.687241 kubelet[3434]: E0716 00:49:22.687234 3434 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:49:22.687264 kubelet[3434]: W0716 00:49:22.687241 3434 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:49:22.687264 kubelet[3434]: E0716 00:49:22.687248 3434 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 16 00:49:22.687384 kubelet[3434]: E0716 00:49:22.687378 3434 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:49:22.687413 kubelet[3434]: W0716 00:49:22.687384 3434 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:49:22.687413 kubelet[3434]: E0716 00:49:22.687390 3434 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:49:22.687481 kubelet[3434]: E0716 00:49:22.687475 3434 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:49:22.687502 kubelet[3434]: W0716 00:49:22.687481 3434 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:49:22.687502 kubelet[3434]: E0716 00:49:22.687486 3434 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 16 00:49:22.687609 kubelet[3434]: E0716 00:49:22.687604 3434 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:49:22.687609 kubelet[3434]: W0716 00:49:22.687608 3434 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:49:22.687651 kubelet[3434]: E0716 00:49:22.687613 3434 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:49:22.687696 kubelet[3434]: E0716 00:49:22.687690 3434 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:49:22.687696 kubelet[3434]: W0716 00:49:22.687695 3434 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:49:22.687729 kubelet[3434]: E0716 00:49:22.687700 3434 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 16 00:49:22.687777 kubelet[3434]: E0716 00:49:22.687772 3434 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:49:22.687777 kubelet[3434]: W0716 00:49:22.687777 3434 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:49:22.687814 kubelet[3434]: E0716 00:49:22.687781 3434 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:49:22.687865 kubelet[3434]: E0716 00:49:22.687860 3434 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:49:22.687884 kubelet[3434]: W0716 00:49:22.687865 3434 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:49:22.687884 kubelet[3434]: E0716 00:49:22.687869 3434 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 16 00:49:22.687952 kubelet[3434]: E0716 00:49:22.687947 3434 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:49:22.687952 kubelet[3434]: W0716 00:49:22.687951 3434 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:49:22.687987 kubelet[3434]: E0716 00:49:22.687956 3434 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:49:22.688029 kubelet[3434]: E0716 00:49:22.688024 3434 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:49:22.688047 kubelet[3434]: W0716 00:49:22.688029 3434 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:49:22.688047 kubelet[3434]: E0716 00:49:22.688033 3434 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 16 00:49:22.688107 kubelet[3434]: E0716 00:49:22.688102 3434 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:49:22.688107 kubelet[3434]: W0716 00:49:22.688106 3434 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:49:22.688142 kubelet[3434]: E0716 00:49:22.688111 3434 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:49:22.688184 kubelet[3434]: E0716 00:49:22.688180 3434 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:49:22.688204 kubelet[3434]: W0716 00:49:22.688184 3434 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:49:22.688204 kubelet[3434]: E0716 00:49:22.688188 3434 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 16 00:49:22.688268 kubelet[3434]: E0716 00:49:22.688263 3434 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:49:22.688268 kubelet[3434]: W0716 00:49:22.688268 3434 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:49:22.688328 kubelet[3434]: E0716 00:49:22.688272 3434 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:49:22.688349 kubelet[3434]: E0716 00:49:22.688342 3434 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:49:22.688349 kubelet[3434]: W0716 00:49:22.688346 3434 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:49:22.688381 kubelet[3434]: E0716 00:49:22.688351 3434 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 16 00:49:22.688458 kubelet[3434]: E0716 00:49:22.688454 3434 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:49:22.688477 kubelet[3434]: W0716 00:49:22.688458 3434 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:49:22.688477 kubelet[3434]: E0716 00:49:22.688463 3434 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:49:22.709786 kubelet[3434]: E0716 00:49:22.709775 3434 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:49:22.709786 kubelet[3434]: W0716 00:49:22.709784 3434 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:49:22.709877 kubelet[3434]: E0716 00:49:22.709793 3434 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 16 00:49:22.710014 kubelet[3434]: E0716 00:49:22.710004 3434 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:49:22.710014 kubelet[3434]: W0716 00:49:22.710013 3434 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:49:22.710089 kubelet[3434]: E0716 00:49:22.710023 3434 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:49:22.710200 kubelet[3434]: E0716 00:49:22.710187 3434 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:49:22.710245 kubelet[3434]: W0716 00:49:22.710204 3434 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:49:22.710245 kubelet[3434]: E0716 00:49:22.710220 3434 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 16 00:49:22.710384 kubelet[3434]: E0716 00:49:22.710374 3434 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:49:22.710384 kubelet[3434]: W0716 00:49:22.710382 3434 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:49:22.710462 kubelet[3434]: E0716 00:49:22.710394 3434 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:49:22.710522 kubelet[3434]: E0716 00:49:22.710512 3434 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:49:22.710522 kubelet[3434]: W0716 00:49:22.710520 3434 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:49:22.710600 kubelet[3434]: E0716 00:49:22.710531 3434 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 16 00:49:22.710711 kubelet[3434]: E0716 00:49:22.710702 3434 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:49:22.710711 kubelet[3434]: W0716 00:49:22.710709 3434 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:49:22.710784 kubelet[3434]: E0716 00:49:22.710721 3434 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:49:22.710900 kubelet[3434]: E0716 00:49:22.710891 3434 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:49:22.710900 kubelet[3434]: W0716 00:49:22.710899 3434 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:49:22.710980 kubelet[3434]: E0716 00:49:22.710909 3434 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 16 00:49:22.711066 kubelet[3434]: E0716 00:49:22.711056 3434 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:49:22.711066 kubelet[3434]: W0716 00:49:22.711064 3434 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:49:22.711148 kubelet[3434]: E0716 00:49:22.711076 3434 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:49:22.711207 kubelet[3434]: E0716 00:49:22.711198 3434 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:49:22.711207 kubelet[3434]: W0716 00:49:22.711206 3434 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:49:22.711283 kubelet[3434]: E0716 00:49:22.711217 3434 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 16 00:49:22.711341 kubelet[3434]: E0716 00:49:22.711333 3434 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:49:22.711341 kubelet[3434]: W0716 00:49:22.711340 3434 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:49:22.711428 kubelet[3434]: E0716 00:49:22.711354 3434 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:49:22.711494 kubelet[3434]: E0716 00:49:22.711485 3434 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:49:22.711494 kubelet[3434]: W0716 00:49:22.711493 3434 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:49:22.711566 kubelet[3434]: E0716 00:49:22.711505 3434 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 16 00:49:22.711732 kubelet[3434]: E0716 00:49:22.711723 3434 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:49:22.711732 kubelet[3434]: W0716 00:49:22.711732 3434 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:49:22.711796 kubelet[3434]: E0716 00:49:22.711745 3434 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:49:22.711938 kubelet[3434]: E0716 00:49:22.711928 3434 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:49:22.711938 kubelet[3434]: W0716 00:49:22.711937 3434 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:49:22.712019 kubelet[3434]: E0716 00:49:22.711948 3434 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 16 00:49:22.712092 kubelet[3434]: E0716 00:49:22.712082 3434 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:49:22.712092 kubelet[3434]: W0716 00:49:22.712091 3434 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:49:22.712176 kubelet[3434]: E0716 00:49:22.712105 3434 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:49:22.712246 kubelet[3434]: E0716 00:49:22.712236 3434 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:49:22.712246 kubelet[3434]: W0716 00:49:22.712244 3434 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:49:22.712334 kubelet[3434]: E0716 00:49:22.712255 3434 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 16 00:49:22.712388 kubelet[3434]: E0716 00:49:22.712379 3434 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:49:22.712432 kubelet[3434]: W0716 00:49:22.712388 3434 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:49:22.712432 kubelet[3434]: E0716 00:49:22.712399 3434 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:49:22.712555 kubelet[3434]: E0716 00:49:22.712545 3434 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:49:22.712555 kubelet[3434]: W0716 00:49:22.712554 3434 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:49:22.712642 kubelet[3434]: E0716 00:49:22.712565 3434 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 16 00:49:22.713208 kubelet[3434]: E0716 00:49:22.713165 3434 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:49:22.713208 kubelet[3434]: W0716 00:49:22.713202 3434 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:49:22.713348 kubelet[3434]: E0716 00:49:22.713224 3434 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:49:23.640203 kubelet[3434]: I0716 00:49:23.640117 3434 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 16 00:49:23.695456 kubelet[3434]: E0716 00:49:23.695371 3434 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:49:23.695456 kubelet[3434]: W0716 00:49:23.695408 3434 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:49:23.695456 kubelet[3434]: E0716 00:49:23.695441 3434 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 16 00:49:23.696049 kubelet[3434]: E0716 00:49:23.695963 3434 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:49:23.696049 kubelet[3434]: W0716 00:49:23.695989 3434 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:49:23.696049 kubelet[3434]: E0716 00:49:23.696017 3434 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:49:23.696608 kubelet[3434]: E0716 00:49:23.696537 3434 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:49:23.696608 kubelet[3434]: W0716 00:49:23.696571 3434 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:49:23.696608 kubelet[3434]: E0716 00:49:23.696602 3434 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 16 00:49:23.697319 kubelet[3434]: E0716 00:49:23.697248 3434 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:49:23.697319 kubelet[3434]: W0716 00:49:23.697282 3434 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:49:23.697319 kubelet[3434]: E0716 00:49:23.697313 3434 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:49:23.697905 kubelet[3434]: E0716 00:49:23.697864 3434 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:49:23.697905 kubelet[3434]: W0716 00:49:23.697895 3434 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:49:23.698119 kubelet[3434]: E0716 00:49:23.697922 3434 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 16 00:49:23.698483 kubelet[3434]: E0716 00:49:23.698408 3434 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:49:23.698483 kubelet[3434]: W0716 00:49:23.698434 3434 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:49:23.698483 kubelet[3434]: E0716 00:49:23.698461 3434 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:49:23.698990 kubelet[3434]: E0716 00:49:23.698940 3434 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:49:23.698990 kubelet[3434]: W0716 00:49:23.698963 3434 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:49:23.698990 kubelet[3434]: E0716 00:49:23.698987 3434 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 16 00:49:23.699512 kubelet[3434]: E0716 00:49:23.699459 3434 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:49:23.699512 kubelet[3434]: W0716 00:49:23.699484 3434 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:49:23.699512 kubelet[3434]: E0716 00:49:23.699509 3434 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:49:23.700115 kubelet[3434]: E0716 00:49:23.700060 3434 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:49:23.700115 kubelet[3434]: W0716 00:49:23.700084 3434 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:49:23.700115 kubelet[3434]: E0716 00:49:23.700108 3434 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 16 00:49:23.700607 kubelet[3434]: E0716 00:49:23.700578 3434 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:49:23.700607 kubelet[3434]: W0716 00:49:23.700603 3434 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:49:23.700815 kubelet[3434]: E0716 00:49:23.700628 3434 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:49:23.701112 kubelet[3434]: E0716 00:49:23.701084 3434 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:49:23.701112 kubelet[3434]: W0716 00:49:23.701109 3434 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:49:23.701275 kubelet[3434]: E0716 00:49:23.701133 3434 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 16 00:49:23.701600 kubelet[3434]: E0716 00:49:23.701573 3434 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:49:23.701695 kubelet[3434]: W0716 00:49:23.701598 3434 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:49:23.701695 kubelet[3434]: E0716 00:49:23.701622 3434 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:49:23.702067 kubelet[3434]: E0716 00:49:23.702015 3434 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:49:23.702067 kubelet[3434]: W0716 00:49:23.702037 3434 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:49:23.702067 kubelet[3434]: E0716 00:49:23.702059 3434 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 16 00:49:23.702521 kubelet[3434]: E0716 00:49:23.702477 3434 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:49:23.702521 kubelet[3434]: W0716 00:49:23.702499 3434 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:49:23.702521 kubelet[3434]: E0716 00:49:23.702521 3434 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:49:23.703058 kubelet[3434]: E0716 00:49:23.703006 3434 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:49:23.703058 kubelet[3434]: W0716 00:49:23.703030 3434 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:49:23.703058 kubelet[3434]: E0716 00:49:23.703054 3434 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 16 00:49:23.718273 kubelet[3434]: E0716 00:49:23.718219 3434 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:49:23.718273 kubelet[3434]: W0716 00:49:23.718262 3434 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:49:23.718689 kubelet[3434]: E0716 00:49:23.718312 3434 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:49:23.719101 kubelet[3434]: E0716 00:49:23.719050 3434 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:49:23.719101 kubelet[3434]: W0716 00:49:23.719090 3434 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:49:23.719454 kubelet[3434]: E0716 00:49:23.719146 3434 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 16 00:49:23.719787 kubelet[3434]: E0716 00:49:23.719733 3434 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:49:23.719787 kubelet[3434]: W0716 00:49:23.719774 3434 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:49:23.720096 kubelet[3434]: E0716 00:49:23.719816 3434 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:49:23.720381 kubelet[3434]: E0716 00:49:23.720304 3434 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:49:23.720381 kubelet[3434]: W0716 00:49:23.720332 3434 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:49:23.720381 kubelet[3434]: E0716 00:49:23.720367 3434 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 16 00:49:23.720854 kubelet[3434]: E0716 00:49:23.720806 3434 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:49:23.720962 kubelet[3434]: W0716 00:49:23.720863 3434 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:49:23.721052 kubelet[3434]: E0716 00:49:23.720946 3434 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:49:23.721432 kubelet[3434]: E0716 00:49:23.721357 3434 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:49:23.721432 kubelet[3434]: W0716 00:49:23.721384 3434 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:49:23.721709 kubelet[3434]: E0716 00:49:23.721500 3434 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 16 00:49:23.721822 kubelet[3434]: E0716 00:49:23.721805 3434 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:49:23.721941 kubelet[3434]: W0716 00:49:23.721860 3434 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:49:23.721941 kubelet[3434]: E0716 00:49:23.721929 3434 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:49:23.722351 kubelet[3434]: E0716 00:49:23.722296 3434 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:49:23.722351 kubelet[3434]: W0716 00:49:23.722320 3434 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:49:23.722351 kubelet[3434]: E0716 00:49:23.722350 3434 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 16 00:49:23.723012 kubelet[3434]: E0716 00:49:23.722956 3434 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:49:23.723012 kubelet[3434]: W0716 00:49:23.722990 3434 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:49:23.723223 kubelet[3434]: E0716 00:49:23.723029 3434 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:49:23.723602 kubelet[3434]: E0716 00:49:23.723545 3434 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:49:23.723602 kubelet[3434]: W0716 00:49:23.723577 3434 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:49:23.723805 kubelet[3434]: E0716 00:49:23.723672 3434 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 16 00:49:23.724076 kubelet[3434]: E0716 00:49:23.724029 3434 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:49:23.724076 kubelet[3434]: W0716 00:49:23.724058 3434 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:49:23.724263 kubelet[3434]: E0716 00:49:23.724142 3434 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:49:23.724502 kubelet[3434]: E0716 00:49:23.724458 3434 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:49:23.724502 kubelet[3434]: W0716 00:49:23.724481 3434 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:49:23.724722 kubelet[3434]: E0716 00:49:23.724555 3434 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 16 00:49:23.724917 kubelet[3434]: E0716 00:49:23.724870 3434 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:49:23.724917 kubelet[3434]: W0716 00:49:23.724893 3434 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:49:23.725101 kubelet[3434]: E0716 00:49:23.724925 3434 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:49:23.725406 kubelet[3434]: E0716 00:49:23.725362 3434 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:49:23.725406 kubelet[3434]: W0716 00:49:23.725385 3434 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:49:23.725616 kubelet[3434]: E0716 00:49:23.725415 3434 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 16 00:49:23.726105 kubelet[3434]: E0716 00:49:23.726071 3434 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:49:23.726226 kubelet[3434]: W0716 00:49:23.726106 3434 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:49:23.726226 kubelet[3434]: E0716 00:49:23.726146 3434 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:49:23.726778 kubelet[3434]: E0716 00:49:23.726727 3434 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:49:23.726778 kubelet[3434]: W0716 00:49:23.726754 3434 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:49:23.727030 kubelet[3434]: E0716 00:49:23.726787 3434 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 16 00:49:23.727519 kubelet[3434]: E0716 00:49:23.727457 3434 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:49:23.727519 kubelet[3434]: W0716 00:49:23.727491 3434 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:49:23.727701 kubelet[3434]: E0716 00:49:23.727533 3434 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:49:23.728147 kubelet[3434]: E0716 00:49:23.728072 3434 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:49:23.728147 kubelet[3434]: W0716 00:49:23.728104 3434 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:49:23.728147 kubelet[3434]: E0716 00:49:23.728136 3434 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 16 00:49:24.531445 containerd[1916]: time="2025-07-16T00:49:24.531420556Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 16 00:49:24.531705 containerd[1916]: time="2025-07-16T00:49:24.531693709Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2: active requests=0, bytes read=4446956" Jul 16 00:49:24.532028 containerd[1916]: time="2025-07-16T00:49:24.532017082Z" level=info msg="ImageCreate event name:\"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 16 00:49:24.532843 containerd[1916]: time="2025-07-16T00:49:24.532803206Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 16 00:49:24.533234 containerd[1916]: time="2025-07-16T00:49:24.533186943Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" with image id \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\", size \"5939619\" in 2.025654231s" Jul 16 00:49:24.533234 containerd[1916]: time="2025-07-16T00:49:24.533206526Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\"" Jul 16 00:49:24.534133 containerd[1916]: time="2025-07-16T00:49:24.534121122Z" level=info msg="CreateContainer within sandbox \"bdc3409baaaf10cc6b76c35b3819b158f2f529dcd7f68f7ac384c5ae516aa464\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jul 16 00:49:24.537163 containerd[1916]: time="2025-07-16T00:49:24.537116213Z" level=info msg="Container 34f69abf12c3c8c319f2dee5022cbef0d297f991adbaeecef1e72ebb52c8bd04: CDI devices from CRI Config.CDIDevices: []" Jul 16 00:49:24.541013 containerd[1916]: time="2025-07-16T00:49:24.540972842Z" level=info msg="CreateContainer within sandbox \"bdc3409baaaf10cc6b76c35b3819b158f2f529dcd7f68f7ac384c5ae516aa464\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"34f69abf12c3c8c319f2dee5022cbef0d297f991adbaeecef1e72ebb52c8bd04\"" Jul 16 00:49:24.541187 containerd[1916]: time="2025-07-16T00:49:24.541173935Z" level=info msg="StartContainer for \"34f69abf12c3c8c319f2dee5022cbef0d297f991adbaeecef1e72ebb52c8bd04\"" Jul 16 00:49:24.541942 containerd[1916]: time="2025-07-16T00:49:24.541899180Z" level=info msg="connecting to shim 34f69abf12c3c8c319f2dee5022cbef0d297f991adbaeecef1e72ebb52c8bd04" address="unix:///run/containerd/s/02505df8b003a13615a11a0ea88f5380afe0edf5ebd171a6ac5fd450f65b56ec" protocol=ttrpc version=3 Jul 16 00:49:24.558988 systemd[1]: Started cri-containerd-34f69abf12c3c8c319f2dee5022cbef0d297f991adbaeecef1e72ebb52c8bd04.scope - libcontainer container 34f69abf12c3c8c319f2dee5022cbef0d297f991adbaeecef1e72ebb52c8bd04. Jul 16 00:49:24.578933 containerd[1916]: time="2025-07-16T00:49:24.578902233Z" level=info msg="StartContainer for \"34f69abf12c3c8c319f2dee5022cbef0d297f991adbaeecef1e72ebb52c8bd04\" returns successfully" Jul 16 00:49:24.583261 systemd[1]: cri-containerd-34f69abf12c3c8c319f2dee5022cbef0d297f991adbaeecef1e72ebb52c8bd04.scope: Deactivated successfully. 
Jul 16 00:49:24.584543 containerd[1916]: time="2025-07-16T00:49:24.584524684Z" level=info msg="received exit event container_id:\"34f69abf12c3c8c319f2dee5022cbef0d297f991adbaeecef1e72ebb52c8bd04\" id:\"34f69abf12c3c8c319f2dee5022cbef0d297f991adbaeecef1e72ebb52c8bd04\" pid:4394 exited_at:{seconds:1752626964 nanos:584281468}" Jul 16 00:49:24.584613 containerd[1916]: time="2025-07-16T00:49:24.584597431Z" level=info msg="TaskExit event in podsandbox handler container_id:\"34f69abf12c3c8c319f2dee5022cbef0d297f991adbaeecef1e72ebb52c8bd04\" id:\"34f69abf12c3c8c319f2dee5022cbef0d297f991adbaeecef1e72ebb52c8bd04\" pid:4394 exited_at:{seconds:1752626964 nanos:584281468}" Jul 16 00:49:24.587197 kubelet[3434]: E0716 00:49:24.587171 3434 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-c8xm9" podUID="05542b76-a42a-41b7-a7f7-a1f97b8c0b25" Jul 16 00:49:24.599166 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-34f69abf12c3c8c319f2dee5022cbef0d297f991adbaeecef1e72ebb52c8bd04-rootfs.mount: Deactivated successfully. 
Jul 16 00:49:24.673274 kubelet[3434]: I0716 00:49:24.673195 3434 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-fd95d4c78-9dt2l" podStartSLOduration=3.31771772 podStartE2EDuration="5.673164281s" podCreationTimestamp="2025-07-16 00:49:19 +0000 UTC" firstStartedPulling="2025-07-16 00:49:20.151994526 +0000 UTC m=+15.628353469" lastFinishedPulling="2025-07-16 00:49:22.507441087 +0000 UTC m=+17.983800030" observedRunningTime="2025-07-16 00:49:22.648626955 +0000 UTC m=+18.124985901" watchObservedRunningTime="2025-07-16 00:49:24.673164281 +0000 UTC m=+20.149523245" Jul 16 00:49:25.653381 containerd[1916]: time="2025-07-16T00:49:25.653283574Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\"" Jul 16 00:49:26.386622 kubelet[3434]: I0716 00:49:26.386506 3434 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 16 00:49:26.587282 kubelet[3434]: E0716 00:49:26.587168 3434 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-c8xm9" podUID="05542b76-a42a-41b7-a7f7-a1f97b8c0b25" Jul 16 00:49:28.549426 containerd[1916]: time="2025-07-16T00:49:28.549402965Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 16 00:49:28.549640 containerd[1916]: time="2025-07-16T00:49:28.549599760Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.2: active requests=0, bytes read=70436221" Jul 16 00:49:28.549874 containerd[1916]: time="2025-07-16T00:49:28.549862213Z" level=info msg="ImageCreate event name:\"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 16 00:49:28.550775 containerd[1916]: 
time="2025-07-16T00:49:28.550760126Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 16 00:49:28.551192 containerd[1916]: time="2025-07-16T00:49:28.551150981Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.2\" with image id \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\", size \"71928924\" in 2.897800635s" Jul 16 00:49:28.551192 containerd[1916]: time="2025-07-16T00:49:28.551165720Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\"" Jul 16 00:49:28.552103 containerd[1916]: time="2025-07-16T00:49:28.552091248Z" level=info msg="CreateContainer within sandbox \"bdc3409baaaf10cc6b76c35b3819b158f2f529dcd7f68f7ac384c5ae516aa464\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jul 16 00:49:28.555052 containerd[1916]: time="2025-07-16T00:49:28.555010762Z" level=info msg="Container 7e317f6f0628733b12153a306c699ba77825a17ca84c9c7fe54bb0c614928361: CDI devices from CRI Config.CDIDevices: []" Jul 16 00:49:28.558640 containerd[1916]: time="2025-07-16T00:49:28.558624987Z" level=info msg="CreateContainer within sandbox \"bdc3409baaaf10cc6b76c35b3819b158f2f529dcd7f68f7ac384c5ae516aa464\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"7e317f6f0628733b12153a306c699ba77825a17ca84c9c7fe54bb0c614928361\"" Jul 16 00:49:28.558880 containerd[1916]: time="2025-07-16T00:49:28.558868753Z" level=info msg="StartContainer for \"7e317f6f0628733b12153a306c699ba77825a17ca84c9c7fe54bb0c614928361\"" Jul 16 00:49:28.559629 containerd[1916]: time="2025-07-16T00:49:28.559615892Z" 
level=info msg="connecting to shim 7e317f6f0628733b12153a306c699ba77825a17ca84c9c7fe54bb0c614928361" address="unix:///run/containerd/s/02505df8b003a13615a11a0ea88f5380afe0edf5ebd171a6ac5fd450f65b56ec" protocol=ttrpc version=3 Jul 16 00:49:28.573993 systemd[1]: Started cri-containerd-7e317f6f0628733b12153a306c699ba77825a17ca84c9c7fe54bb0c614928361.scope - libcontainer container 7e317f6f0628733b12153a306c699ba77825a17ca84c9c7fe54bb0c614928361. Jul 16 00:49:28.587701 kubelet[3434]: E0716 00:49:28.587680 3434 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-c8xm9" podUID="05542b76-a42a-41b7-a7f7-a1f97b8c0b25" Jul 16 00:49:28.593647 containerd[1916]: time="2025-07-16T00:49:28.593622152Z" level=info msg="StartContainer for \"7e317f6f0628733b12153a306c699ba77825a17ca84c9c7fe54bb0c614928361\" returns successfully" Jul 16 00:49:29.176237 containerd[1916]: time="2025-07-16T00:49:29.176202862Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 16 00:49:29.177148 systemd[1]: cri-containerd-7e317f6f0628733b12153a306c699ba77825a17ca84c9c7fe54bb0c614928361.scope: Deactivated successfully. Jul 16 00:49:29.177303 systemd[1]: cri-containerd-7e317f6f0628733b12153a306c699ba77825a17ca84c9c7fe54bb0c614928361.scope: Consumed 355ms CPU time, 192.9M memory peak, 171.2M written to disk. 
Jul 16 00:49:29.177651 containerd[1916]: time="2025-07-16T00:49:29.177638820Z" level=info msg="received exit event container_id:\"7e317f6f0628733b12153a306c699ba77825a17ca84c9c7fe54bb0c614928361\" id:\"7e317f6f0628733b12153a306c699ba77825a17ca84c9c7fe54bb0c614928361\" pid:4456 exited_at:{seconds:1752626969 nanos:177552265}" Jul 16 00:49:29.177702 containerd[1916]: time="2025-07-16T00:49:29.177690458Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7e317f6f0628733b12153a306c699ba77825a17ca84c9c7fe54bb0c614928361\" id:\"7e317f6f0628733b12153a306c699ba77825a17ca84c9c7fe54bb0c614928361\" pid:4456 exited_at:{seconds:1752626969 nanos:177552265}" Jul 16 00:49:29.188124 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7e317f6f0628733b12153a306c699ba77825a17ca84c9c7fe54bb0c614928361-rootfs.mount: Deactivated successfully. Jul 16 00:49:29.280532 kubelet[3434]: I0716 00:49:29.280436 3434 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jul 16 00:49:29.333892 systemd[1]: Created slice kubepods-burstable-pod4581efd3_fe1a_465b_85dd_ae5c238057dd.slice - libcontainer container kubepods-burstable-pod4581efd3_fe1a_465b_85dd_ae5c238057dd.slice. Jul 16 00:49:29.342730 systemd[1]: Created slice kubepods-burstable-pod1340497f_ae28_4eda_920e_f4de075e0d82.slice - libcontainer container kubepods-burstable-pod1340497f_ae28_4eda_920e_f4de075e0d82.slice. Jul 16 00:49:29.347765 systemd[1]: Created slice kubepods-besteffort-pod8f2a8f6f_6e2f_45a0_8a7c_34a267024787.slice - libcontainer container kubepods-besteffort-pod8f2a8f6f_6e2f_45a0_8a7c_34a267024787.slice. Jul 16 00:49:29.352678 systemd[1]: Created slice kubepods-besteffort-pod8e85faa1_92f4_4845_8e0e_7801e31f9e04.slice - libcontainer container kubepods-besteffort-pod8e85faa1_92f4_4845_8e0e_7801e31f9e04.slice. 
Jul 16 00:49:29.356271 systemd[1]: Created slice kubepods-besteffort-pod5a9829a7_411d_4497_a3d6_2632af67105b.slice - libcontainer container kubepods-besteffort-pod5a9829a7_411d_4497_a3d6_2632af67105b.slice. Jul 16 00:49:29.359998 systemd[1]: Created slice kubepods-besteffort-podb5d72f59_207b_442f_b9d9_abe3a39737ed.slice - libcontainer container kubepods-besteffort-podb5d72f59_207b_442f_b9d9_abe3a39737ed.slice. Jul 16 00:49:29.363515 systemd[1]: Created slice kubepods-besteffort-pod2b06861d_839c_4b0f_9733_4b6f6ee18f8d.slice - libcontainer container kubepods-besteffort-pod2b06861d_839c_4b0f_9733_4b6f6ee18f8d.slice. Jul 16 00:49:29.363620 kubelet[3434]: I0716 00:49:29.363554 3434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1340497f-ae28-4eda-920e-f4de075e0d82-config-volume\") pod \"coredns-668d6bf9bc-995v6\" (UID: \"1340497f-ae28-4eda-920e-f4de075e0d82\") " pod="kube-system/coredns-668d6bf9bc-995v6" Jul 16 00:49:29.363620 kubelet[3434]: I0716 00:49:29.363578 3434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w7wk7\" (UniqueName: \"kubernetes.io/projected/8f2a8f6f-6e2f-45a0-8a7c-34a267024787-kube-api-access-w7wk7\") pod \"calico-kube-controllers-6fb77455db-cgzrs\" (UID: \"8f2a8f6f-6e2f-45a0-8a7c-34a267024787\") " pod="calico-system/calico-kube-controllers-6fb77455db-cgzrs" Jul 16 00:49:29.363620 kubelet[3434]: I0716 00:49:29.363593 3434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-th78f\" (UniqueName: \"kubernetes.io/projected/5a9829a7-411d-4497-a3d6-2632af67105b-kube-api-access-th78f\") pod \"calico-apiserver-6c4d947c8c-mkbs4\" (UID: \"5a9829a7-411d-4497-a3d6-2632af67105b\") " pod="calico-apiserver/calico-apiserver-6c4d947c8c-mkbs4" Jul 16 00:49:29.363620 kubelet[3434]: I0716 00:49:29.363609 3434 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4581efd3-fe1a-465b-85dd-ae5c238057dd-config-volume\") pod \"coredns-668d6bf9bc-bdmg5\" (UID: \"4581efd3-fe1a-465b-85dd-ae5c238057dd\") " pod="kube-system/coredns-668d6bf9bc-bdmg5" Jul 16 00:49:29.363743 kubelet[3434]: I0716 00:49:29.363668 3434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9gtnx\" (UniqueName: \"kubernetes.io/projected/2b06861d-839c-4b0f-9733-4b6f6ee18f8d-kube-api-access-9gtnx\") pod \"whisker-6c7bdf4c44-x5tzb\" (UID: \"2b06861d-839c-4b0f-9733-4b6f6ee18f8d\") " pod="calico-system/whisker-6c7bdf4c44-x5tzb" Jul 16 00:49:29.363743 kubelet[3434]: I0716 00:49:29.363704 3434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/b5d72f59-207b-442f-b9d9-abe3a39737ed-goldmane-key-pair\") pod \"goldmane-768f4c5c69-lxlf2\" (UID: \"b5d72f59-207b-442f-b9d9-abe3a39737ed\") " pod="calico-system/goldmane-768f4c5c69-lxlf2" Jul 16 00:49:29.363743 kubelet[3434]: I0716 00:49:29.363719 3434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2b06861d-839c-4b0f-9733-4b6f6ee18f8d-whisker-ca-bundle\") pod \"whisker-6c7bdf4c44-x5tzb\" (UID: \"2b06861d-839c-4b0f-9733-4b6f6ee18f8d\") " pod="calico-system/whisker-6c7bdf4c44-x5tzb" Jul 16 00:49:29.363743 kubelet[3434]: I0716 00:49:29.363731 3434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4g4mg\" (UniqueName: \"kubernetes.io/projected/8e85faa1-92f4-4845-8e0e-7801e31f9e04-kube-api-access-4g4mg\") pod \"calico-apiserver-6c4d947c8c-gc28p\" (UID: \"8e85faa1-92f4-4845-8e0e-7801e31f9e04\") " pod="calico-apiserver/calico-apiserver-6c4d947c8c-gc28p" Jul 16 00:49:29.363846 
kubelet[3434]: I0716 00:49:29.363750 3434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ffs9v\" (UniqueName: \"kubernetes.io/projected/4581efd3-fe1a-465b-85dd-ae5c238057dd-kube-api-access-ffs9v\") pod \"coredns-668d6bf9bc-bdmg5\" (UID: \"4581efd3-fe1a-465b-85dd-ae5c238057dd\") " pod="kube-system/coredns-668d6bf9bc-bdmg5" Jul 16 00:49:29.363846 kubelet[3434]: I0716 00:49:29.363766 3434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-phqd7\" (UniqueName: \"kubernetes.io/projected/b5d72f59-207b-442f-b9d9-abe3a39737ed-kube-api-access-phqd7\") pod \"goldmane-768f4c5c69-lxlf2\" (UID: \"b5d72f59-207b-442f-b9d9-abe3a39737ed\") " pod="calico-system/goldmane-768f4c5c69-lxlf2" Jul 16 00:49:29.363846 kubelet[3434]: I0716 00:49:29.363784 3434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8f2a8f6f-6e2f-45a0-8a7c-34a267024787-tigera-ca-bundle\") pod \"calico-kube-controllers-6fb77455db-cgzrs\" (UID: \"8f2a8f6f-6e2f-45a0-8a7c-34a267024787\") " pod="calico-system/calico-kube-controllers-6fb77455db-cgzrs" Jul 16 00:49:29.363846 kubelet[3434]: I0716 00:49:29.363796 3434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b5d72f59-207b-442f-b9d9-abe3a39737ed-config\") pod \"goldmane-768f4c5c69-lxlf2\" (UID: \"b5d72f59-207b-442f-b9d9-abe3a39737ed\") " pod="calico-system/goldmane-768f4c5c69-lxlf2" Jul 16 00:49:29.363846 kubelet[3434]: I0716 00:49:29.363823 3434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b5d72f59-207b-442f-b9d9-abe3a39737ed-goldmane-ca-bundle\") pod \"goldmane-768f4c5c69-lxlf2\" (UID: \"b5d72f59-207b-442f-b9d9-abe3a39737ed\") " 
pod="calico-system/goldmane-768f4c5c69-lxlf2" Jul 16 00:49:29.363966 kubelet[3434]: I0716 00:49:29.363853 3434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/5a9829a7-411d-4497-a3d6-2632af67105b-calico-apiserver-certs\") pod \"calico-apiserver-6c4d947c8c-mkbs4\" (UID: \"5a9829a7-411d-4497-a3d6-2632af67105b\") " pod="calico-apiserver/calico-apiserver-6c4d947c8c-mkbs4" Jul 16 00:49:29.363966 kubelet[3434]: I0716 00:49:29.363865 3434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/2b06861d-839c-4b0f-9733-4b6f6ee18f8d-whisker-backend-key-pair\") pod \"whisker-6c7bdf4c44-x5tzb\" (UID: \"2b06861d-839c-4b0f-9733-4b6f6ee18f8d\") " pod="calico-system/whisker-6c7bdf4c44-x5tzb" Jul 16 00:49:29.363966 kubelet[3434]: I0716 00:49:29.363877 3434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/8e85faa1-92f4-4845-8e0e-7801e31f9e04-calico-apiserver-certs\") pod \"calico-apiserver-6c4d947c8c-gc28p\" (UID: \"8e85faa1-92f4-4845-8e0e-7801e31f9e04\") " pod="calico-apiserver/calico-apiserver-6c4d947c8c-gc28p" Jul 16 00:49:29.363966 kubelet[3434]: I0716 00:49:29.363890 3434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bt5vx\" (UniqueName: \"kubernetes.io/projected/1340497f-ae28-4eda-920e-f4de075e0d82-kube-api-access-bt5vx\") pod \"coredns-668d6bf9bc-995v6\" (UID: \"1340497f-ae28-4eda-920e-f4de075e0d82\") " pod="kube-system/coredns-668d6bf9bc-995v6" Jul 16 00:49:29.640796 containerd[1916]: time="2025-07-16T00:49:29.640695849Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-bdmg5,Uid:4581efd3-fe1a-465b-85dd-ae5c238057dd,Namespace:kube-system,Attempt:0,}" Jul 16 
00:49:29.645407 containerd[1916]: time="2025-07-16T00:49:29.645360725Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-995v6,Uid:1340497f-ae28-4eda-920e-f4de075e0d82,Namespace:kube-system,Attempt:0,}" Jul 16 00:49:29.650896 containerd[1916]: time="2025-07-16T00:49:29.650859128Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6fb77455db-cgzrs,Uid:8f2a8f6f-6e2f-45a0-8a7c-34a267024787,Namespace:calico-system,Attempt:0,}" Jul 16 00:49:29.655301 containerd[1916]: time="2025-07-16T00:49:29.655276112Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c4d947c8c-gc28p,Uid:8e85faa1-92f4-4845-8e0e-7801e31f9e04,Namespace:calico-apiserver,Attempt:0,}" Jul 16 00:49:29.658908 containerd[1916]: time="2025-07-16T00:49:29.658884166Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c4d947c8c-mkbs4,Uid:5a9829a7-411d-4497-a3d6-2632af67105b,Namespace:calico-apiserver,Attempt:0,}" Jul 16 00:49:29.662424 containerd[1916]: time="2025-07-16T00:49:29.662398573Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-lxlf2,Uid:b5d72f59-207b-442f-b9d9-abe3a39737ed,Namespace:calico-system,Attempt:0,}" Jul 16 00:49:29.666011 containerd[1916]: time="2025-07-16T00:49:29.665964156Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6c7bdf4c44-x5tzb,Uid:2b06861d-839c-4b0f-9733-4b6f6ee18f8d,Namespace:calico-system,Attempt:0,}" Jul 16 00:49:29.670422 containerd[1916]: time="2025-07-16T00:49:29.670391831Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Jul 16 00:49:29.673557 containerd[1916]: time="2025-07-16T00:49:29.673515314Z" level=error msg="Failed to destroy network for sandbox \"0475a676e91e0b07b001fc56c296f65a11d0b55f67c2d37a5d8b2ca70842ff8b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Jul 16 00:49:29.675180 containerd[1916]: time="2025-07-16T00:49:29.675145300Z" level=error msg="Failed to destroy network for sandbox \"7f8845e775edb118f5fb1489e1ef236446d254566dc973851ee6923057c4fcb2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 16 00:49:29.682613 containerd[1916]: time="2025-07-16T00:49:29.682581371Z" level=error msg="Failed to destroy network for sandbox \"0ccff945899543839a3e6c8ed7e0df8472251144fd19fd538a41466ea9e0ee13\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 16 00:49:29.687285 containerd[1916]: time="2025-07-16T00:49:29.687252466Z" level=error msg="Failed to destroy network for sandbox \"58e1b89508afbe1a9c3ee2caa27c3f18a614411074b0ed60aabd7191b6a3cb86\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 16 00:49:29.687691 containerd[1916]: time="2025-07-16T00:49:29.687663871Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-bdmg5,Uid:4581efd3-fe1a-465b-85dd-ae5c238057dd,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0475a676e91e0b07b001fc56c296f65a11d0b55f67c2d37a5d8b2ca70842ff8b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 16 00:49:29.687845 kubelet[3434]: E0716 00:49:29.687812 3434 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"0475a676e91e0b07b001fc56c296f65a11d0b55f67c2d37a5d8b2ca70842ff8b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 16 00:49:29.688047 kubelet[3434]: E0716 00:49:29.687870 3434 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0475a676e91e0b07b001fc56c296f65a11d0b55f67c2d37a5d8b2ca70842ff8b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-bdmg5" Jul 16 00:49:29.688047 kubelet[3434]: E0716 00:49:29.687885 3434 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0475a676e91e0b07b001fc56c296f65a11d0b55f67c2d37a5d8b2ca70842ff8b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-bdmg5" Jul 16 00:49:29.688047 kubelet[3434]: E0716 00:49:29.687930 3434 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-bdmg5_kube-system(4581efd3-fe1a-465b-85dd-ae5c238057dd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-bdmg5_kube-system(4581efd3-fe1a-465b-85dd-ae5c238057dd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0475a676e91e0b07b001fc56c296f65a11d0b55f67c2d37a5d8b2ca70842ff8b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-bdmg5" 
podUID="4581efd3-fe1a-465b-85dd-ae5c238057dd" Jul 16 00:49:29.688118 containerd[1916]: time="2025-07-16T00:49:29.687899662Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-995v6,Uid:1340497f-ae28-4eda-920e-f4de075e0d82,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7f8845e775edb118f5fb1489e1ef236446d254566dc973851ee6923057c4fcb2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 16 00:49:29.688156 kubelet[3434]: E0716 00:49:29.687960 3434 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7f8845e775edb118f5fb1489e1ef236446d254566dc973851ee6923057c4fcb2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 16 00:49:29.688156 kubelet[3434]: E0716 00:49:29.687987 3434 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7f8845e775edb118f5fb1489e1ef236446d254566dc973851ee6923057c4fcb2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-995v6" Jul 16 00:49:29.688156 kubelet[3434]: E0716 00:49:29.688004 3434 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7f8845e775edb118f5fb1489e1ef236446d254566dc973851ee6923057c4fcb2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-995v6" Jul 16 00:49:29.688221 kubelet[3434]: E0716 00:49:29.688025 3434 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-995v6_kube-system(1340497f-ae28-4eda-920e-f4de075e0d82)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-995v6_kube-system(1340497f-ae28-4eda-920e-f4de075e0d82)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7f8845e775edb118f5fb1489e1ef236446d254566dc973851ee6923057c4fcb2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-995v6" podUID="1340497f-ae28-4eda-920e-f4de075e0d82" Jul 16 00:49:29.688257 containerd[1916]: time="2025-07-16T00:49:29.688175850Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6fb77455db-cgzrs,Uid:8f2a8f6f-6e2f-45a0-8a7c-34a267024787,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0ccff945899543839a3e6c8ed7e0df8472251144fd19fd538a41466ea9e0ee13\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 16 00:49:29.688292 kubelet[3434]: E0716 00:49:29.688249 3434 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0ccff945899543839a3e6c8ed7e0df8472251144fd19fd538a41466ea9e0ee13\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 16 00:49:29.688292 kubelet[3434]: E0716 00:49:29.688276 3434 kuberuntime_sandbox.go:72] "Failed to 
create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0ccff945899543839a3e6c8ed7e0df8472251144fd19fd538a41466ea9e0ee13\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6fb77455db-cgzrs" Jul 16 00:49:29.688292 kubelet[3434]: E0716 00:49:29.688286 3434 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0ccff945899543839a3e6c8ed7e0df8472251144fd19fd538a41466ea9e0ee13\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6fb77455db-cgzrs" Jul 16 00:49:29.688346 kubelet[3434]: E0716 00:49:29.688307 3434 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6fb77455db-cgzrs_calico-system(8f2a8f6f-6e2f-45a0-8a7c-34a267024787)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6fb77455db-cgzrs_calico-system(8f2a8f6f-6e2f-45a0-8a7c-34a267024787)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0ccff945899543839a3e6c8ed7e0df8472251144fd19fd538a41466ea9e0ee13\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6fb77455db-cgzrs" podUID="8f2a8f6f-6e2f-45a0-8a7c-34a267024787" Jul 16 00:49:29.688471 containerd[1916]: time="2025-07-16T00:49:29.688453614Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-6c4d947c8c-gc28p,Uid:8e85faa1-92f4-4845-8e0e-7801e31f9e04,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"58e1b89508afbe1a9c3ee2caa27c3f18a614411074b0ed60aabd7191b6a3cb86\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 16 00:49:29.688533 kubelet[3434]: E0716 00:49:29.688521 3434 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"58e1b89508afbe1a9c3ee2caa27c3f18a614411074b0ed60aabd7191b6a3cb86\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 16 00:49:29.688559 kubelet[3434]: E0716 00:49:29.688538 3434 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"58e1b89508afbe1a9c3ee2caa27c3f18a614411074b0ed60aabd7191b6a3cb86\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6c4d947c8c-gc28p" Jul 16 00:49:29.688559 kubelet[3434]: E0716 00:49:29.688549 3434 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"58e1b89508afbe1a9c3ee2caa27c3f18a614411074b0ed60aabd7191b6a3cb86\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6c4d947c8c-gc28p" Jul 16 00:49:29.688597 kubelet[3434]: E0716 00:49:29.688578 3434 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6c4d947c8c-gc28p_calico-apiserver(8e85faa1-92f4-4845-8e0e-7801e31f9e04)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6c4d947c8c-gc28p_calico-apiserver(8e85faa1-92f4-4845-8e0e-7801e31f9e04)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"58e1b89508afbe1a9c3ee2caa27c3f18a614411074b0ed60aabd7191b6a3cb86\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6c4d947c8c-gc28p" podUID="8e85faa1-92f4-4845-8e0e-7801e31f9e04" Jul 16 00:49:29.689255 containerd[1916]: time="2025-07-16T00:49:29.689232899Z" level=error msg="Failed to destroy network for sandbox \"362d4caca42e26392e3f3d1ad780932527d769a227afe685c64e5d20b665472b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 16 00:49:29.689578 containerd[1916]: time="2025-07-16T00:49:29.689565670Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c4d947c8c-mkbs4,Uid:5a9829a7-411d-4497-a3d6-2632af67105b,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"362d4caca42e26392e3f3d1ad780932527d769a227afe685c64e5d20b665472b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 16 00:49:29.689644 kubelet[3434]: E0716 00:49:29.689631 3434 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"362d4caca42e26392e3f3d1ad780932527d769a227afe685c64e5d20b665472b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 16 00:49:29.689682 kubelet[3434]: E0716 00:49:29.689652 3434 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"362d4caca42e26392e3f3d1ad780932527d769a227afe685c64e5d20b665472b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6c4d947c8c-mkbs4" Jul 16 00:49:29.689682 kubelet[3434]: E0716 00:49:29.689673 3434 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"362d4caca42e26392e3f3d1ad780932527d769a227afe685c64e5d20b665472b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6c4d947c8c-mkbs4" Jul 16 00:49:29.689746 kubelet[3434]: E0716 00:49:29.689698 3434 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6c4d947c8c-mkbs4_calico-apiserver(5a9829a7-411d-4497-a3d6-2632af67105b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6c4d947c8c-mkbs4_calico-apiserver(5a9829a7-411d-4497-a3d6-2632af67105b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"362d4caca42e26392e3f3d1ad780932527d769a227afe685c64e5d20b665472b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-apiserver/calico-apiserver-6c4d947c8c-mkbs4" podUID="5a9829a7-411d-4497-a3d6-2632af67105b" Jul 16 00:49:29.695587 containerd[1916]: time="2025-07-16T00:49:29.695560054Z" level=error msg="Failed to destroy network for sandbox \"ee4dd42731a202dde08879eeedf0698542384051008dabcd6463ad112ad8125b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 16 00:49:29.696064 containerd[1916]: time="2025-07-16T00:49:29.696049435Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-lxlf2,Uid:b5d72f59-207b-442f-b9d9-abe3a39737ed,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ee4dd42731a202dde08879eeedf0698542384051008dabcd6463ad112ad8125b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 16 00:49:29.696226 kubelet[3434]: E0716 00:49:29.696168 3434 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ee4dd42731a202dde08879eeedf0698542384051008dabcd6463ad112ad8125b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 16 00:49:29.696226 kubelet[3434]: E0716 00:49:29.696215 3434 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ee4dd42731a202dde08879eeedf0698542384051008dabcd6463ad112ad8125b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/goldmane-768f4c5c69-lxlf2" Jul 16 00:49:29.696286 kubelet[3434]: E0716 00:49:29.696229 3434 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ee4dd42731a202dde08879eeedf0698542384051008dabcd6463ad112ad8125b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-lxlf2" Jul 16 00:49:29.696286 kubelet[3434]: E0716 00:49:29.696253 3434 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-768f4c5c69-lxlf2_calico-system(b5d72f59-207b-442f-b9d9-abe3a39737ed)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-768f4c5c69-lxlf2_calico-system(b5d72f59-207b-442f-b9d9-abe3a39737ed)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ee4dd42731a202dde08879eeedf0698542384051008dabcd6463ad112ad8125b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-768f4c5c69-lxlf2" podUID="b5d72f59-207b-442f-b9d9-abe3a39737ed" Jul 16 00:49:29.697137 containerd[1916]: time="2025-07-16T00:49:29.697098623Z" level=error msg="Failed to destroy network for sandbox \"e6526687cce9662489aa9897d5a1ff55f49e38e5b20bbda204ee05e0f741b7f3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 16 00:49:29.697482 containerd[1916]: time="2025-07-16T00:49:29.697445330Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6c7bdf4c44-x5tzb,Uid:2b06861d-839c-4b0f-9733-4b6f6ee18f8d,Namespace:calico-system,Attempt:0,} failed, error" 
error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e6526687cce9662489aa9897d5a1ff55f49e38e5b20bbda204ee05e0f741b7f3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 16 00:49:29.697552 kubelet[3434]: E0716 00:49:29.697540 3434 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e6526687cce9662489aa9897d5a1ff55f49e38e5b20bbda204ee05e0f741b7f3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 16 00:49:29.697581 kubelet[3434]: E0716 00:49:29.697558 3434 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e6526687cce9662489aa9897d5a1ff55f49e38e5b20bbda204ee05e0f741b7f3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6c7bdf4c44-x5tzb" Jul 16 00:49:29.697602 kubelet[3434]: E0716 00:49:29.697577 3434 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e6526687cce9662489aa9897d5a1ff55f49e38e5b20bbda204ee05e0f741b7f3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6c7bdf4c44-x5tzb" Jul 16 00:49:29.697623 kubelet[3434]: E0716 00:49:29.697601 3434 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-6c7bdf4c44-x5tzb_calico-system(2b06861d-839c-4b0f-9733-4b6f6ee18f8d)\" with 
CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-6c7bdf4c44-x5tzb_calico-system(2b06861d-839c-4b0f-9733-4b6f6ee18f8d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e6526687cce9662489aa9897d5a1ff55f49e38e5b20bbda204ee05e0f741b7f3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6c7bdf4c44-x5tzb" podUID="2b06861d-839c-4b0f-9733-4b6f6ee18f8d" Jul 16 00:49:30.560556 systemd[1]: run-netns-cni\x2d15676765\x2d56f1\x2d29c6\x2d0c8d\x2d96fd67cdae3e.mount: Deactivated successfully. Jul 16 00:49:30.560606 systemd[1]: run-netns-cni\x2dd5e57f32\x2de3d1\x2d09f1\x2d5cb4\x2d9055a28ee02a.mount: Deactivated successfully. Jul 16 00:49:30.560641 systemd[1]: run-netns-cni\x2d9db9cfcb\x2d1d0c\x2d81b1\x2ddaa3\x2def5735780a91.mount: Deactivated successfully. Jul 16 00:49:30.560675 systemd[1]: run-netns-cni\x2dec095506\x2dd049\x2d0761\x2d2bcc\x2d48cc13a2e993.mount: Deactivated successfully. Jul 16 00:49:30.593087 systemd[1]: Created slice kubepods-besteffort-pod05542b76_a42a_41b7_a7f7_a1f97b8c0b25.slice - libcontainer container kubepods-besteffort-pod05542b76_a42a_41b7_a7f7_a1f97b8c0b25.slice. 
Jul 16 00:49:30.596226 containerd[1916]: time="2025-07-16T00:49:30.596197976Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-c8xm9,Uid:05542b76-a42a-41b7-a7f7-a1f97b8c0b25,Namespace:calico-system,Attempt:0,}" Jul 16 00:49:30.621241 containerd[1916]: time="2025-07-16T00:49:30.621216277Z" level=error msg="Failed to destroy network for sandbox \"76e27c5485c18a04f5951b8b638d245700521f2633eee79605dfce9b1396aa09\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 16 00:49:30.621713 containerd[1916]: time="2025-07-16T00:49:30.621697664Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-c8xm9,Uid:05542b76-a42a-41b7-a7f7-a1f97b8c0b25,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"76e27c5485c18a04f5951b8b638d245700521f2633eee79605dfce9b1396aa09\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 16 00:49:30.621880 kubelet[3434]: E0716 00:49:30.621824 3434 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"76e27c5485c18a04f5951b8b638d245700521f2633eee79605dfce9b1396aa09\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 16 00:49:30.621931 kubelet[3434]: E0716 00:49:30.621897 3434 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"76e27c5485c18a04f5951b8b638d245700521f2633eee79605dfce9b1396aa09\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file 
or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-c8xm9" Jul 16 00:49:30.621931 kubelet[3434]: E0716 00:49:30.621912 3434 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"76e27c5485c18a04f5951b8b638d245700521f2633eee79605dfce9b1396aa09\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-c8xm9" Jul 16 00:49:30.621990 kubelet[3434]: E0716 00:49:30.621939 3434 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-c8xm9_calico-system(05542b76-a42a-41b7-a7f7-a1f97b8c0b25)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-c8xm9_calico-system(05542b76-a42a-41b7-a7f7-a1f97b8c0b25)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"76e27c5485c18a04f5951b8b638d245700521f2633eee79605dfce9b1396aa09\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-c8xm9" podUID="05542b76-a42a-41b7-a7f7-a1f97b8c0b25" Jul 16 00:49:30.622805 systemd[1]: run-netns-cni\x2d157a9559\x2da8f6\x2dcccd\x2d0df1\x2db48fdeb5060c.mount: Deactivated successfully. Jul 16 00:49:35.070132 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount604390796.mount: Deactivated successfully. 
Jul 16 00:49:35.082526 containerd[1916]: time="2025-07-16T00:49:35.082478828Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 16 00:49:35.082721 containerd[1916]: time="2025-07-16T00:49:35.082705413Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=158500163" Jul 16 00:49:35.083072 containerd[1916]: time="2025-07-16T00:49:35.083025599Z" level=info msg="ImageCreate event name:\"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 16 00:49:35.083789 containerd[1916]: time="2025-07-16T00:49:35.083746871Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 16 00:49:35.084149 containerd[1916]: time="2025-07-16T00:49:35.084106920Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.2\" with image id \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\", size \"158500025\" in 5.413687662s" Jul 16 00:49:35.084149 containerd[1916]: time="2025-07-16T00:49:35.084123373Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\"" Jul 16 00:49:35.087585 containerd[1916]: time="2025-07-16T00:49:35.087539740Z" level=info msg="CreateContainer within sandbox \"bdc3409baaaf10cc6b76c35b3819b158f2f529dcd7f68f7ac384c5ae516aa464\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jul 16 00:49:35.099895 containerd[1916]: time="2025-07-16T00:49:35.099831054Z" level=info msg="Container 
18dab54ef0693535b1092e84ba72a8e49c139c132335e0dd0f6d5c4f2227eef4: CDI devices from CRI Config.CDIDevices: []" Jul 16 00:49:35.104963 containerd[1916]: time="2025-07-16T00:49:35.104918574Z" level=info msg="CreateContainer within sandbox \"bdc3409baaaf10cc6b76c35b3819b158f2f529dcd7f68f7ac384c5ae516aa464\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"18dab54ef0693535b1092e84ba72a8e49c139c132335e0dd0f6d5c4f2227eef4\"" Jul 16 00:49:35.105245 containerd[1916]: time="2025-07-16T00:49:35.105231799Z" level=info msg="StartContainer for \"18dab54ef0693535b1092e84ba72a8e49c139c132335e0dd0f6d5c4f2227eef4\"" Jul 16 00:49:35.106077 containerd[1916]: time="2025-07-16T00:49:35.106062664Z" level=info msg="connecting to shim 18dab54ef0693535b1092e84ba72a8e49c139c132335e0dd0f6d5c4f2227eef4" address="unix:///run/containerd/s/02505df8b003a13615a11a0ea88f5380afe0edf5ebd171a6ac5fd450f65b56ec" protocol=ttrpc version=3 Jul 16 00:49:35.132071 systemd[1]: Started cri-containerd-18dab54ef0693535b1092e84ba72a8e49c139c132335e0dd0f6d5c4f2227eef4.scope - libcontainer container 18dab54ef0693535b1092e84ba72a8e49c139c132335e0dd0f6d5c4f2227eef4. Jul 16 00:49:35.159094 containerd[1916]: time="2025-07-16T00:49:35.159038279Z" level=info msg="StartContainer for \"18dab54ef0693535b1092e84ba72a8e49c139c132335e0dd0f6d5c4f2227eef4\" returns successfully" Jul 16 00:49:35.229590 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jul 16 00:49:35.229652 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Jul 16 00:49:35.302045 kubelet[3434]: I0716 00:49:35.302023 3434 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9gtnx\" (UniqueName: \"kubernetes.io/projected/2b06861d-839c-4b0f-9733-4b6f6ee18f8d-kube-api-access-9gtnx\") pod \"2b06861d-839c-4b0f-9733-4b6f6ee18f8d\" (UID: \"2b06861d-839c-4b0f-9733-4b6f6ee18f8d\") " Jul 16 00:49:35.302367 kubelet[3434]: I0716 00:49:35.302056 3434 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/2b06861d-839c-4b0f-9733-4b6f6ee18f8d-whisker-backend-key-pair\") pod \"2b06861d-839c-4b0f-9733-4b6f6ee18f8d\" (UID: \"2b06861d-839c-4b0f-9733-4b6f6ee18f8d\") " Jul 16 00:49:35.302367 kubelet[3434]: I0716 00:49:35.302077 3434 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2b06861d-839c-4b0f-9733-4b6f6ee18f8d-whisker-ca-bundle\") pod \"2b06861d-839c-4b0f-9733-4b6f6ee18f8d\" (UID: \"2b06861d-839c-4b0f-9733-4b6f6ee18f8d\") " Jul 16 00:49:35.302367 kubelet[3434]: I0716 00:49:35.302355 3434 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2b06861d-839c-4b0f-9733-4b6f6ee18f8d-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "2b06861d-839c-4b0f-9733-4b6f6ee18f8d" (UID: "2b06861d-839c-4b0f-9733-4b6f6ee18f8d"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 16 00:49:35.303533 kubelet[3434]: I0716 00:49:35.303519 3434 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b06861d-839c-4b0f-9733-4b6f6ee18f8d-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "2b06861d-839c-4b0f-9733-4b6f6ee18f8d" (UID: "2b06861d-839c-4b0f-9733-4b6f6ee18f8d"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jul 16 00:49:35.303638 kubelet[3434]: I0716 00:49:35.303626 3434 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b06861d-839c-4b0f-9733-4b6f6ee18f8d-kube-api-access-9gtnx" (OuterVolumeSpecName: "kube-api-access-9gtnx") pod "2b06861d-839c-4b0f-9733-4b6f6ee18f8d" (UID: "2b06861d-839c-4b0f-9733-4b6f6ee18f8d"). InnerVolumeSpecName "kube-api-access-9gtnx". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 16 00:49:35.403227 kubelet[3434]: I0716 00:49:35.403009 3434 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9gtnx\" (UniqueName: \"kubernetes.io/projected/2b06861d-839c-4b0f-9733-4b6f6ee18f8d-kube-api-access-9gtnx\") on node \"ci-4372.0.1-n-bd48696324\" DevicePath \"\"" Jul 16 00:49:35.403227 kubelet[3434]: I0716 00:49:35.403083 3434 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/2b06861d-839c-4b0f-9733-4b6f6ee18f8d-whisker-backend-key-pair\") on node \"ci-4372.0.1-n-bd48696324\" DevicePath \"\"" Jul 16 00:49:35.403227 kubelet[3434]: I0716 00:49:35.403133 3434 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2b06861d-839c-4b0f-9733-4b6f6ee18f8d-whisker-ca-bundle\") on node \"ci-4372.0.1-n-bd48696324\" DevicePath \"\"" Jul 16 00:49:35.703883 systemd[1]: Removed slice kubepods-besteffort-pod2b06861d_839c_4b0f_9733_4b6f6ee18f8d.slice - libcontainer container kubepods-besteffort-pod2b06861d_839c_4b0f_9733_4b6f6ee18f8d.slice. 
Jul 16 00:49:35.726410 kubelet[3434]: I0716 00:49:35.726262 3434 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-kl7bg" podStartSLOduration=1.110156716 podStartE2EDuration="15.726226733s" podCreationTimestamp="2025-07-16 00:49:20 +0000 UTC" firstStartedPulling="2025-07-16 00:49:20.4683788 +0000 UTC m=+15.944737742" lastFinishedPulling="2025-07-16 00:49:35.084448819 +0000 UTC m=+30.560807759" observedRunningTime="2025-07-16 00:49:35.725249329 +0000 UTC m=+31.201608348" watchObservedRunningTime="2025-07-16 00:49:35.726226733 +0000 UTC m=+31.202585719" Jul 16 00:49:35.778607 systemd[1]: Created slice kubepods-besteffort-pod0e7feb7c_2393_47fa_ad65_95aa78094e29.slice - libcontainer container kubepods-besteffort-pod0e7feb7c_2393_47fa_ad65_95aa78094e29.slice. Jul 16 00:49:35.806707 kubelet[3434]: I0716 00:49:35.806608 3434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0e7feb7c-2393-47fa-ad65-95aa78094e29-whisker-ca-bundle\") pod \"whisker-7566c57bf6-fqf94\" (UID: \"0e7feb7c-2393-47fa-ad65-95aa78094e29\") " pod="calico-system/whisker-7566c57bf6-fqf94" Jul 16 00:49:35.806707 kubelet[3434]: I0716 00:49:35.806686 3434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/0e7feb7c-2393-47fa-ad65-95aa78094e29-whisker-backend-key-pair\") pod \"whisker-7566c57bf6-fqf94\" (UID: \"0e7feb7c-2393-47fa-ad65-95aa78094e29\") " pod="calico-system/whisker-7566c57bf6-fqf94" Jul 16 00:49:35.807034 kubelet[3434]: I0716 00:49:35.806880 3434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7c692\" (UniqueName: \"kubernetes.io/projected/0e7feb7c-2393-47fa-ad65-95aa78094e29-kube-api-access-7c692\") pod \"whisker-7566c57bf6-fqf94\" (UID: 
\"0e7feb7c-2393-47fa-ad65-95aa78094e29\") " pod="calico-system/whisker-7566c57bf6-fqf94" Jul 16 00:49:36.083681 containerd[1916]: time="2025-07-16T00:49:36.083595082Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7566c57bf6-fqf94,Uid:0e7feb7c-2393-47fa-ad65-95aa78094e29,Namespace:calico-system,Attempt:0,}" Jul 16 00:49:36.083789 systemd[1]: var-lib-kubelet-pods-2b06861d\x2d839c\x2d4b0f\x2d9733\x2d4b6f6ee18f8d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d9gtnx.mount: Deactivated successfully. Jul 16 00:49:36.084084 systemd[1]: var-lib-kubelet-pods-2b06861d\x2d839c\x2d4b0f\x2d9733\x2d4b6f6ee18f8d-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jul 16 00:49:36.149532 systemd-networkd[1829]: calif9cb7d44d39: Link UP Jul 16 00:49:36.149694 systemd-networkd[1829]: calif9cb7d44d39: Gained carrier Jul 16 00:49:36.156298 containerd[1916]: 2025-07-16 00:49:36.095 [INFO][4960] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 16 00:49:36.156298 containerd[1916]: 2025-07-16 00:49:36.101 [INFO][4960] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4372.0.1--n--bd48696324-k8s-whisker--7566c57bf6--fqf94-eth0 whisker-7566c57bf6- calico-system 0e7feb7c-2393-47fa-ad65-95aa78094e29 853 0 2025-07-16 00:49:35 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:7566c57bf6 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4372.0.1-n-bd48696324 whisker-7566c57bf6-fqf94 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calif9cb7d44d39 [] [] }} ContainerID="490c2f0da3a913e082108b716148599d4f90718406cd95266c59f38394ba29b1" Namespace="calico-system" Pod="whisker-7566c57bf6-fqf94" WorkloadEndpoint="ci--4372.0.1--n--bd48696324-k8s-whisker--7566c57bf6--fqf94-" Jul 16 00:49:36.156298 
containerd[1916]: 2025-07-16 00:49:36.101 [INFO][4960] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="490c2f0da3a913e082108b716148599d4f90718406cd95266c59f38394ba29b1" Namespace="calico-system" Pod="whisker-7566c57bf6-fqf94" WorkloadEndpoint="ci--4372.0.1--n--bd48696324-k8s-whisker--7566c57bf6--fqf94-eth0" Jul 16 00:49:36.156298 containerd[1916]: 2025-07-16 00:49:36.114 [INFO][4981] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="490c2f0da3a913e082108b716148599d4f90718406cd95266c59f38394ba29b1" HandleID="k8s-pod-network.490c2f0da3a913e082108b716148599d4f90718406cd95266c59f38394ba29b1" Workload="ci--4372.0.1--n--bd48696324-k8s-whisker--7566c57bf6--fqf94-eth0" Jul 16 00:49:36.156459 containerd[1916]: 2025-07-16 00:49:36.114 [INFO][4981] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="490c2f0da3a913e082108b716148599d4f90718406cd95266c59f38394ba29b1" HandleID="k8s-pod-network.490c2f0da3a913e082108b716148599d4f90718406cd95266c59f38394ba29b1" Workload="ci--4372.0.1--n--bd48696324-k8s-whisker--7566c57bf6--fqf94-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004eab0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4372.0.1-n-bd48696324", "pod":"whisker-7566c57bf6-fqf94", "timestamp":"2025-07-16 00:49:36.114437476 +0000 UTC"}, Hostname:"ci-4372.0.1-n-bd48696324", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 16 00:49:36.156459 containerd[1916]: 2025-07-16 00:49:36.114 [INFO][4981] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 16 00:49:36.156459 containerd[1916]: 2025-07-16 00:49:36.114 [INFO][4981] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 16 00:49:36.156459 containerd[1916]: 2025-07-16 00:49:36.114 [INFO][4981] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4372.0.1-n-bd48696324' Jul 16 00:49:36.156459 containerd[1916]: 2025-07-16 00:49:36.119 [INFO][4981] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.490c2f0da3a913e082108b716148599d4f90718406cd95266c59f38394ba29b1" host="ci-4372.0.1-n-bd48696324" Jul 16 00:49:36.156459 containerd[1916]: 2025-07-16 00:49:36.122 [INFO][4981] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4372.0.1-n-bd48696324" Jul 16 00:49:36.156459 containerd[1916]: 2025-07-16 00:49:36.125 [INFO][4981] ipam/ipam.go 511: Trying affinity for 192.168.82.128/26 host="ci-4372.0.1-n-bd48696324" Jul 16 00:49:36.156459 containerd[1916]: 2025-07-16 00:49:36.128 [INFO][4981] ipam/ipam.go 158: Attempting to load block cidr=192.168.82.128/26 host="ci-4372.0.1-n-bd48696324" Jul 16 00:49:36.156459 containerd[1916]: 2025-07-16 00:49:36.135 [INFO][4981] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.82.128/26 host="ci-4372.0.1-n-bd48696324" Jul 16 00:49:36.156627 containerd[1916]: 2025-07-16 00:49:36.135 [INFO][4981] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.82.128/26 handle="k8s-pod-network.490c2f0da3a913e082108b716148599d4f90718406cd95266c59f38394ba29b1" host="ci-4372.0.1-n-bd48696324" Jul 16 00:49:36.156627 containerd[1916]: 2025-07-16 00:49:36.137 [INFO][4981] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.490c2f0da3a913e082108b716148599d4f90718406cd95266c59f38394ba29b1 Jul 16 00:49:36.156627 containerd[1916]: 2025-07-16 00:49:36.140 [INFO][4981] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.82.128/26 handle="k8s-pod-network.490c2f0da3a913e082108b716148599d4f90718406cd95266c59f38394ba29b1" host="ci-4372.0.1-n-bd48696324" Jul 16 00:49:36.156627 containerd[1916]: 2025-07-16 00:49:36.143 [INFO][4981] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.82.129/26] block=192.168.82.128/26 handle="k8s-pod-network.490c2f0da3a913e082108b716148599d4f90718406cd95266c59f38394ba29b1" host="ci-4372.0.1-n-bd48696324" Jul 16 00:49:36.156627 containerd[1916]: 2025-07-16 00:49:36.143 [INFO][4981] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.82.129/26] handle="k8s-pod-network.490c2f0da3a913e082108b716148599d4f90718406cd95266c59f38394ba29b1" host="ci-4372.0.1-n-bd48696324" Jul 16 00:49:36.156627 containerd[1916]: 2025-07-16 00:49:36.143 [INFO][4981] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 16 00:49:36.156627 containerd[1916]: 2025-07-16 00:49:36.143 [INFO][4981] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.82.129/26] IPv6=[] ContainerID="490c2f0da3a913e082108b716148599d4f90718406cd95266c59f38394ba29b1" HandleID="k8s-pod-network.490c2f0da3a913e082108b716148599d4f90718406cd95266c59f38394ba29b1" Workload="ci--4372.0.1--n--bd48696324-k8s-whisker--7566c57bf6--fqf94-eth0" Jul 16 00:49:36.156745 containerd[1916]: 2025-07-16 00:49:36.145 [INFO][4960] cni-plugin/k8s.go 418: Populated endpoint ContainerID="490c2f0da3a913e082108b716148599d4f90718406cd95266c59f38394ba29b1" Namespace="calico-system" Pod="whisker-7566c57bf6-fqf94" WorkloadEndpoint="ci--4372.0.1--n--bd48696324-k8s-whisker--7566c57bf6--fqf94-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.0.1--n--bd48696324-k8s-whisker--7566c57bf6--fqf94-eth0", GenerateName:"whisker-7566c57bf6-", Namespace:"calico-system", SelfLink:"", UID:"0e7feb7c-2393-47fa-ad65-95aa78094e29", ResourceVersion:"853", Generation:0, CreationTimestamp:time.Date(2025, time.July, 16, 0, 49, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7566c57bf6", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.0.1-n-bd48696324", ContainerID:"", Pod:"whisker-7566c57bf6-fqf94", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.82.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calif9cb7d44d39", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 16 00:49:36.156745 containerd[1916]: 2025-07-16 00:49:36.145 [INFO][4960] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.82.129/32] ContainerID="490c2f0da3a913e082108b716148599d4f90718406cd95266c59f38394ba29b1" Namespace="calico-system" Pod="whisker-7566c57bf6-fqf94" WorkloadEndpoint="ci--4372.0.1--n--bd48696324-k8s-whisker--7566c57bf6--fqf94-eth0" Jul 16 00:49:36.156805 containerd[1916]: 2025-07-16 00:49:36.145 [INFO][4960] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif9cb7d44d39 ContainerID="490c2f0da3a913e082108b716148599d4f90718406cd95266c59f38394ba29b1" Namespace="calico-system" Pod="whisker-7566c57bf6-fqf94" WorkloadEndpoint="ci--4372.0.1--n--bd48696324-k8s-whisker--7566c57bf6--fqf94-eth0" Jul 16 00:49:36.156805 containerd[1916]: 2025-07-16 00:49:36.150 [INFO][4960] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="490c2f0da3a913e082108b716148599d4f90718406cd95266c59f38394ba29b1" Namespace="calico-system" Pod="whisker-7566c57bf6-fqf94" WorkloadEndpoint="ci--4372.0.1--n--bd48696324-k8s-whisker--7566c57bf6--fqf94-eth0" Jul 16 00:49:36.156852 containerd[1916]: 2025-07-16 00:49:36.150 [INFO][4960] cni-plugin/k8s.go 
446: Added Mac, interface name, and active container ID to endpoint ContainerID="490c2f0da3a913e082108b716148599d4f90718406cd95266c59f38394ba29b1" Namespace="calico-system" Pod="whisker-7566c57bf6-fqf94" WorkloadEndpoint="ci--4372.0.1--n--bd48696324-k8s-whisker--7566c57bf6--fqf94-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.0.1--n--bd48696324-k8s-whisker--7566c57bf6--fqf94-eth0", GenerateName:"whisker-7566c57bf6-", Namespace:"calico-system", SelfLink:"", UID:"0e7feb7c-2393-47fa-ad65-95aa78094e29", ResourceVersion:"853", Generation:0, CreationTimestamp:time.Date(2025, time.July, 16, 0, 49, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7566c57bf6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.0.1-n-bd48696324", ContainerID:"490c2f0da3a913e082108b716148599d4f90718406cd95266c59f38394ba29b1", Pod:"whisker-7566c57bf6-fqf94", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.82.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calif9cb7d44d39", MAC:"1a:98:15:5c:14:32", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 16 00:49:36.156897 containerd[1916]: 2025-07-16 00:49:36.155 [INFO][4960] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="490c2f0da3a913e082108b716148599d4f90718406cd95266c59f38394ba29b1" Namespace="calico-system" Pod="whisker-7566c57bf6-fqf94" WorkloadEndpoint="ci--4372.0.1--n--bd48696324-k8s-whisker--7566c57bf6--fqf94-eth0" Jul 16 00:49:36.165242 containerd[1916]: time="2025-07-16T00:49:36.165216218Z" level=info msg="connecting to shim 490c2f0da3a913e082108b716148599d4f90718406cd95266c59f38394ba29b1" address="unix:///run/containerd/s/3a942ba07f3768b35d06aae54fc6123600c3923a1c25ee8cbc881be6b23f6ed3" namespace=k8s.io protocol=ttrpc version=3 Jul 16 00:49:36.185327 systemd[1]: Started cri-containerd-490c2f0da3a913e082108b716148599d4f90718406cd95266c59f38394ba29b1.scope - libcontainer container 490c2f0da3a913e082108b716148599d4f90718406cd95266c59f38394ba29b1. Jul 16 00:49:36.268053 containerd[1916]: time="2025-07-16T00:49:36.268000467Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7566c57bf6-fqf94,Uid:0e7feb7c-2393-47fa-ad65-95aa78094e29,Namespace:calico-system,Attempt:0,} returns sandbox id \"490c2f0da3a913e082108b716148599d4f90718406cd95266c59f38394ba29b1\"" Jul 16 00:49:36.268670 containerd[1916]: time="2025-07-16T00:49:36.268658672Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\"" Jul 16 00:49:36.587972 kubelet[3434]: I0716 00:49:36.587951 3434 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2b06861d-839c-4b0f-9733-4b6f6ee18f8d" path="/var/lib/kubelet/pods/2b06861d-839c-4b0f-9733-4b6f6ee18f8d/volumes" Jul 16 00:49:36.592348 systemd-networkd[1829]: vxlan.calico: Link UP Jul 16 00:49:36.592351 systemd-networkd[1829]: vxlan.calico: Gained carrier Jul 16 00:49:36.739043 containerd[1916]: time="2025-07-16T00:49:36.739007056Z" level=info msg="TaskExit event in podsandbox handler container_id:\"18dab54ef0693535b1092e84ba72a8e49c139c132335e0dd0f6d5c4f2227eef4\" id:\"0737d7250f6fbde54669088952cf3142a4c092ae5db184756f52d5c686f1dfec\" pid:5303 exit_status:1 exited_at:{seconds:1752626976 nanos:738618360}" Jul 16 
00:49:37.773786 containerd[1916]: time="2025-07-16T00:49:37.773716852Z" level=info msg="TaskExit event in podsandbox handler container_id:\"18dab54ef0693535b1092e84ba72a8e49c139c132335e0dd0f6d5c4f2227eef4\" id:\"a187f9313a0f0c305298eddd7e5f79cda44ff97a3a59e9b075b63c1e7fbe42c4\" pid:5375 exited_at:{seconds:1752626977 nanos:773490377}" Jul 16 00:49:37.791133 systemd-networkd[1829]: calif9cb7d44d39: Gained IPv6LL Jul 16 00:49:37.858923 containerd[1916]: time="2025-07-16T00:49:37.858865005Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 16 00:49:37.859094 containerd[1916]: time="2025-07-16T00:49:37.859052567Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.2: active requests=0, bytes read=4661207" Jul 16 00:49:37.859422 containerd[1916]: time="2025-07-16T00:49:37.859374957Z" level=info msg="ImageCreate event name:\"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 16 00:49:37.860354 containerd[1916]: time="2025-07-16T00:49:37.860306491Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 16 00:49:37.860728 containerd[1916]: time="2025-07-16T00:49:37.860690259Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.2\" with image id \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\", size \"6153902\" in 1.592014911s" Jul 16 00:49:37.860728 containerd[1916]: time="2025-07-16T00:49:37.860706644Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\" returns image reference 
\"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\"" Jul 16 00:49:37.861679 containerd[1916]: time="2025-07-16T00:49:37.861639074Z" level=info msg="CreateContainer within sandbox \"490c2f0da3a913e082108b716148599d4f90718406cd95266c59f38394ba29b1\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Jul 16 00:49:37.864497 containerd[1916]: time="2025-07-16T00:49:37.864460364Z" level=info msg="Container ceddf7691ad01dad4d97c4fde4f071f9741748d53b7e9ac315e35cd7a6eb8dca: CDI devices from CRI Config.CDIDevices: []" Jul 16 00:49:37.868631 containerd[1916]: time="2025-07-16T00:49:37.868589374Z" level=info msg="CreateContainer within sandbox \"490c2f0da3a913e082108b716148599d4f90718406cd95266c59f38394ba29b1\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"ceddf7691ad01dad4d97c4fde4f071f9741748d53b7e9ac315e35cd7a6eb8dca\"" Jul 16 00:49:37.868893 containerd[1916]: time="2025-07-16T00:49:37.868848799Z" level=info msg="StartContainer for \"ceddf7691ad01dad4d97c4fde4f071f9741748d53b7e9ac315e35cd7a6eb8dca\"" Jul 16 00:49:37.869389 containerd[1916]: time="2025-07-16T00:49:37.869350148Z" level=info msg="connecting to shim ceddf7691ad01dad4d97c4fde4f071f9741748d53b7e9ac315e35cd7a6eb8dca" address="unix:///run/containerd/s/3a942ba07f3768b35d06aae54fc6123600c3923a1c25ee8cbc881be6b23f6ed3" protocol=ttrpc version=3 Jul 16 00:49:37.893073 systemd[1]: Started cri-containerd-ceddf7691ad01dad4d97c4fde4f071f9741748d53b7e9ac315e35cd7a6eb8dca.scope - libcontainer container ceddf7691ad01dad4d97c4fde4f071f9741748d53b7e9ac315e35cd7a6eb8dca. 
Jul 16 00:49:37.927500 containerd[1916]: time="2025-07-16T00:49:37.927473088Z" level=info msg="StartContainer for \"ceddf7691ad01dad4d97c4fde4f071f9741748d53b7e9ac315e35cd7a6eb8dca\" returns successfully" Jul 16 00:49:37.928118 containerd[1916]: time="2025-07-16T00:49:37.928102174Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\"" Jul 16 00:49:38.494111 systemd-networkd[1829]: vxlan.calico: Gained IPv6LL Jul 16 00:49:40.306633 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2277684160.mount: Deactivated successfully. Jul 16 00:49:40.313977 containerd[1916]: time="2025-07-16T00:49:40.313874486Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 16 00:49:40.314220 containerd[1916]: time="2025-07-16T00:49:40.314206516Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.2: active requests=0, bytes read=33083477" Jul 16 00:49:40.314531 containerd[1916]: time="2025-07-16T00:49:40.314519679Z" level=info msg="ImageCreate event name:\"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 16 00:49:40.315426 containerd[1916]: time="2025-07-16T00:49:40.315384321Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 16 00:49:40.316126 containerd[1916]: time="2025-07-16T00:49:40.316082269Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" with image id \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\", size \"33083307\" 
in 2.387961564s" Jul 16 00:49:40.316126 containerd[1916]: time="2025-07-16T00:49:40.316097889Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" returns image reference \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\"" Jul 16 00:49:40.316984 containerd[1916]: time="2025-07-16T00:49:40.316971246Z" level=info msg="CreateContainer within sandbox \"490c2f0da3a913e082108b716148599d4f90718406cd95266c59f38394ba29b1\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Jul 16 00:49:40.319728 containerd[1916]: time="2025-07-16T00:49:40.319714291Z" level=info msg="Container 8528ab9e431adfbe59d22f6ab8ac28fef8a4ff38f92ac3b191e06c1bfda74c1d: CDI devices from CRI Config.CDIDevices: []" Jul 16 00:49:40.322688 containerd[1916]: time="2025-07-16T00:49:40.322650061Z" level=info msg="CreateContainer within sandbox \"490c2f0da3a913e082108b716148599d4f90718406cd95266c59f38394ba29b1\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"8528ab9e431adfbe59d22f6ab8ac28fef8a4ff38f92ac3b191e06c1bfda74c1d\"" Jul 16 00:49:40.322864 containerd[1916]: time="2025-07-16T00:49:40.322851927Z" level=info msg="StartContainer for \"8528ab9e431adfbe59d22f6ab8ac28fef8a4ff38f92ac3b191e06c1bfda74c1d\"" Jul 16 00:49:40.323384 containerd[1916]: time="2025-07-16T00:49:40.323368744Z" level=info msg="connecting to shim 8528ab9e431adfbe59d22f6ab8ac28fef8a4ff38f92ac3b191e06c1bfda74c1d" address="unix:///run/containerd/s/3a942ba07f3768b35d06aae54fc6123600c3923a1c25ee8cbc881be6b23f6ed3" protocol=ttrpc version=3 Jul 16 00:49:40.340041 systemd[1]: Started cri-containerd-8528ab9e431adfbe59d22f6ab8ac28fef8a4ff38f92ac3b191e06c1bfda74c1d.scope - libcontainer container 8528ab9e431adfbe59d22f6ab8ac28fef8a4ff38f92ac3b191e06c1bfda74c1d. 
Jul 16 00:49:40.372994 containerd[1916]: time="2025-07-16T00:49:40.372940126Z" level=info msg="StartContainer for \"8528ab9e431adfbe59d22f6ab8ac28fef8a4ff38f92ac3b191e06c1bfda74c1d\" returns successfully" Jul 16 00:49:40.587213 containerd[1916]: time="2025-07-16T00:49:40.587095745Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-bdmg5,Uid:4581efd3-fe1a-465b-85dd-ae5c238057dd,Namespace:kube-system,Attempt:0,}" Jul 16 00:49:40.638760 systemd-networkd[1829]: calicb137b61eb7: Link UP Jul 16 00:49:40.638912 systemd-networkd[1829]: calicb137b61eb7: Gained carrier Jul 16 00:49:40.645721 containerd[1916]: 2025-07-16 00:49:40.606 [INFO][5500] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4372.0.1--n--bd48696324-k8s-coredns--668d6bf9bc--bdmg5-eth0 coredns-668d6bf9bc- kube-system 4581efd3-fe1a-465b-85dd-ae5c238057dd 782 0 2025-07-16 00:49:10 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4372.0.1-n-bd48696324 coredns-668d6bf9bc-bdmg5 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calicb137b61eb7 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="0411dae305ccec0be772ca2267337d71a309c9c196d04e55e0657efaeda4a7b1" Namespace="kube-system" Pod="coredns-668d6bf9bc-bdmg5" WorkloadEndpoint="ci--4372.0.1--n--bd48696324-k8s-coredns--668d6bf9bc--bdmg5-" Jul 16 00:49:40.645721 containerd[1916]: 2025-07-16 00:49:40.606 [INFO][5500] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0411dae305ccec0be772ca2267337d71a309c9c196d04e55e0657efaeda4a7b1" Namespace="kube-system" Pod="coredns-668d6bf9bc-bdmg5" WorkloadEndpoint="ci--4372.0.1--n--bd48696324-k8s-coredns--668d6bf9bc--bdmg5-eth0" Jul 16 00:49:40.645721 containerd[1916]: 2025-07-16 00:49:40.620 [INFO][5525] ipam/ipam_plugin.go 225: 
Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0411dae305ccec0be772ca2267337d71a309c9c196d04e55e0657efaeda4a7b1" HandleID="k8s-pod-network.0411dae305ccec0be772ca2267337d71a309c9c196d04e55e0657efaeda4a7b1" Workload="ci--4372.0.1--n--bd48696324-k8s-coredns--668d6bf9bc--bdmg5-eth0" Jul 16 00:49:40.645875 containerd[1916]: 2025-07-16 00:49:40.620 [INFO][5525] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="0411dae305ccec0be772ca2267337d71a309c9c196d04e55e0657efaeda4a7b1" HandleID="k8s-pod-network.0411dae305ccec0be772ca2267337d71a309c9c196d04e55e0657efaeda4a7b1" Workload="ci--4372.0.1--n--bd48696324-k8s-coredns--668d6bf9bc--bdmg5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000423a40), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4372.0.1-n-bd48696324", "pod":"coredns-668d6bf9bc-bdmg5", "timestamp":"2025-07-16 00:49:40.620783211 +0000 UTC"}, Hostname:"ci-4372.0.1-n-bd48696324", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 16 00:49:40.645875 containerd[1916]: 2025-07-16 00:49:40.620 [INFO][5525] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 16 00:49:40.645875 containerd[1916]: 2025-07-16 00:49:40.620 [INFO][5525] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 16 00:49:40.645875 containerd[1916]: 2025-07-16 00:49:40.620 [INFO][5525] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4372.0.1-n-bd48696324' Jul 16 00:49:40.645875 containerd[1916]: 2025-07-16 00:49:40.624 [INFO][5525] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0411dae305ccec0be772ca2267337d71a309c9c196d04e55e0657efaeda4a7b1" host="ci-4372.0.1-n-bd48696324" Jul 16 00:49:40.645875 containerd[1916]: 2025-07-16 00:49:40.627 [INFO][5525] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4372.0.1-n-bd48696324" Jul 16 00:49:40.645875 containerd[1916]: 2025-07-16 00:49:40.629 [INFO][5525] ipam/ipam.go 511: Trying affinity for 192.168.82.128/26 host="ci-4372.0.1-n-bd48696324" Jul 16 00:49:40.645875 containerd[1916]: 2025-07-16 00:49:40.630 [INFO][5525] ipam/ipam.go 158: Attempting to load block cidr=192.168.82.128/26 host="ci-4372.0.1-n-bd48696324" Jul 16 00:49:40.645875 containerd[1916]: 2025-07-16 00:49:40.631 [INFO][5525] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.82.128/26 host="ci-4372.0.1-n-bd48696324" Jul 16 00:49:40.646081 containerd[1916]: 2025-07-16 00:49:40.631 [INFO][5525] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.82.128/26 handle="k8s-pod-network.0411dae305ccec0be772ca2267337d71a309c9c196d04e55e0657efaeda4a7b1" host="ci-4372.0.1-n-bd48696324" Jul 16 00:49:40.646081 containerd[1916]: 2025-07-16 00:49:40.632 [INFO][5525] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.0411dae305ccec0be772ca2267337d71a309c9c196d04e55e0657efaeda4a7b1 Jul 16 00:49:40.646081 containerd[1916]: 2025-07-16 00:49:40.634 [INFO][5525] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.82.128/26 handle="k8s-pod-network.0411dae305ccec0be772ca2267337d71a309c9c196d04e55e0657efaeda4a7b1" host="ci-4372.0.1-n-bd48696324" Jul 16 00:49:40.646081 containerd[1916]: 2025-07-16 00:49:40.636 [INFO][5525] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.82.130/26] block=192.168.82.128/26 handle="k8s-pod-network.0411dae305ccec0be772ca2267337d71a309c9c196d04e55e0657efaeda4a7b1" host="ci-4372.0.1-n-bd48696324" Jul 16 00:49:40.646081 containerd[1916]: 2025-07-16 00:49:40.636 [INFO][5525] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.82.130/26] handle="k8s-pod-network.0411dae305ccec0be772ca2267337d71a309c9c196d04e55e0657efaeda4a7b1" host="ci-4372.0.1-n-bd48696324" Jul 16 00:49:40.646081 containerd[1916]: 2025-07-16 00:49:40.636 [INFO][5525] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 16 00:49:40.646081 containerd[1916]: 2025-07-16 00:49:40.636 [INFO][5525] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.82.130/26] IPv6=[] ContainerID="0411dae305ccec0be772ca2267337d71a309c9c196d04e55e0657efaeda4a7b1" HandleID="k8s-pod-network.0411dae305ccec0be772ca2267337d71a309c9c196d04e55e0657efaeda4a7b1" Workload="ci--4372.0.1--n--bd48696324-k8s-coredns--668d6bf9bc--bdmg5-eth0" Jul 16 00:49:40.646211 containerd[1916]: 2025-07-16 00:49:40.637 [INFO][5500] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0411dae305ccec0be772ca2267337d71a309c9c196d04e55e0657efaeda4a7b1" Namespace="kube-system" Pod="coredns-668d6bf9bc-bdmg5" WorkloadEndpoint="ci--4372.0.1--n--bd48696324-k8s-coredns--668d6bf9bc--bdmg5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.0.1--n--bd48696324-k8s-coredns--668d6bf9bc--bdmg5-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"4581efd3-fe1a-465b-85dd-ae5c238057dd", ResourceVersion:"782", Generation:0, CreationTimestamp:time.Date(2025, time.July, 16, 0, 49, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.0.1-n-bd48696324", ContainerID:"", Pod:"coredns-668d6bf9bc-bdmg5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.82.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calicb137b61eb7", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 16 00:49:40.646211 containerd[1916]: 2025-07-16 00:49:40.637 [INFO][5500] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.82.130/32] ContainerID="0411dae305ccec0be772ca2267337d71a309c9c196d04e55e0657efaeda4a7b1" Namespace="kube-system" Pod="coredns-668d6bf9bc-bdmg5" WorkloadEndpoint="ci--4372.0.1--n--bd48696324-k8s-coredns--668d6bf9bc--bdmg5-eth0" Jul 16 00:49:40.646211 containerd[1916]: 2025-07-16 00:49:40.637 [INFO][5500] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calicb137b61eb7 ContainerID="0411dae305ccec0be772ca2267337d71a309c9c196d04e55e0657efaeda4a7b1" Namespace="kube-system" Pod="coredns-668d6bf9bc-bdmg5" WorkloadEndpoint="ci--4372.0.1--n--bd48696324-k8s-coredns--668d6bf9bc--bdmg5-eth0" Jul 16 00:49:40.646211 containerd[1916]: 2025-07-16 00:49:40.638 [INFO][5500] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0411dae305ccec0be772ca2267337d71a309c9c196d04e55e0657efaeda4a7b1" Namespace="kube-system" Pod="coredns-668d6bf9bc-bdmg5" WorkloadEndpoint="ci--4372.0.1--n--bd48696324-k8s-coredns--668d6bf9bc--bdmg5-eth0" Jul 16 00:49:40.646211 containerd[1916]: 2025-07-16 00:49:40.639 [INFO][5500] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0411dae305ccec0be772ca2267337d71a309c9c196d04e55e0657efaeda4a7b1" Namespace="kube-system" Pod="coredns-668d6bf9bc-bdmg5" WorkloadEndpoint="ci--4372.0.1--n--bd48696324-k8s-coredns--668d6bf9bc--bdmg5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.0.1--n--bd48696324-k8s-coredns--668d6bf9bc--bdmg5-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"4581efd3-fe1a-465b-85dd-ae5c238057dd", ResourceVersion:"782", Generation:0, CreationTimestamp:time.Date(2025, time.July, 16, 0, 49, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.0.1-n-bd48696324", ContainerID:"0411dae305ccec0be772ca2267337d71a309c9c196d04e55e0657efaeda4a7b1", Pod:"coredns-668d6bf9bc-bdmg5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.82.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calicb137b61eb7", MAC:"92:43:dd:bb:97:00", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 16 00:49:40.646211 containerd[1916]: 2025-07-16 00:49:40.644 [INFO][5500] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0411dae305ccec0be772ca2267337d71a309c9c196d04e55e0657efaeda4a7b1" Namespace="kube-system" Pod="coredns-668d6bf9bc-bdmg5" WorkloadEndpoint="ci--4372.0.1--n--bd48696324-k8s-coredns--668d6bf9bc--bdmg5-eth0" Jul 16 00:49:40.654020 containerd[1916]: time="2025-07-16T00:49:40.653994444Z" level=info msg="connecting to shim 0411dae305ccec0be772ca2267337d71a309c9c196d04e55e0657efaeda4a7b1" address="unix:///run/containerd/s/d178922959efa954e3d6361ce18c253babb84f9bc9c1b3e580d8648f0828b1cc" namespace=k8s.io protocol=ttrpc version=3 Jul 16 00:49:40.679007 systemd[1]: Started cri-containerd-0411dae305ccec0be772ca2267337d71a309c9c196d04e55e0657efaeda4a7b1.scope - libcontainer container 0411dae305ccec0be772ca2267337d71a309c9c196d04e55e0657efaeda4a7b1. 
Jul 16 00:49:40.706043 containerd[1916]: time="2025-07-16T00:49:40.706019926Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-bdmg5,Uid:4581efd3-fe1a-465b-85dd-ae5c238057dd,Namespace:kube-system,Attempt:0,} returns sandbox id \"0411dae305ccec0be772ca2267337d71a309c9c196d04e55e0657efaeda4a7b1\"" Jul 16 00:49:40.707247 containerd[1916]: time="2025-07-16T00:49:40.707232822Z" level=info msg="CreateContainer within sandbox \"0411dae305ccec0be772ca2267337d71a309c9c196d04e55e0657efaeda4a7b1\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 16 00:49:40.710367 containerd[1916]: time="2025-07-16T00:49:40.710352178Z" level=info msg="Container 4a79c118f849a62bb623292e2d9a741c1308a1a4307796140c1724fea0929f72: CDI devices from CRI Config.CDIDevices: []" Jul 16 00:49:40.712537 containerd[1916]: time="2025-07-16T00:49:40.712520776Z" level=info msg="CreateContainer within sandbox \"0411dae305ccec0be772ca2267337d71a309c9c196d04e55e0657efaeda4a7b1\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4a79c118f849a62bb623292e2d9a741c1308a1a4307796140c1724fea0929f72\"" Jul 16 00:49:40.712715 containerd[1916]: time="2025-07-16T00:49:40.712701031Z" level=info msg="StartContainer for \"4a79c118f849a62bb623292e2d9a741c1308a1a4307796140c1724fea0929f72\"" Jul 16 00:49:40.713164 containerd[1916]: time="2025-07-16T00:49:40.713152843Z" level=info msg="connecting to shim 4a79c118f849a62bb623292e2d9a741c1308a1a4307796140c1724fea0929f72" address="unix:///run/containerd/s/d178922959efa954e3d6361ce18c253babb84f9bc9c1b3e580d8648f0828b1cc" protocol=ttrpc version=3 Jul 16 00:49:40.730004 systemd[1]: Started cri-containerd-4a79c118f849a62bb623292e2d9a741c1308a1a4307796140c1724fea0929f72.scope - libcontainer container 4a79c118f849a62bb623292e2d9a741c1308a1a4307796140c1724fea0929f72. 
Jul 16 00:49:40.752995 containerd[1916]: time="2025-07-16T00:49:40.752947881Z" level=info msg="StartContainer for \"4a79c118f849a62bb623292e2d9a741c1308a1a4307796140c1724fea0929f72\" returns successfully" Jul 16 00:49:41.730521 kubelet[3434]: I0716 00:49:41.730437 3434 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-7566c57bf6-fqf94" podStartSLOduration=2.6825620690000003 podStartE2EDuration="6.730409397s" podCreationTimestamp="2025-07-16 00:49:35 +0000 UTC" firstStartedPulling="2025-07-16 00:49:36.268546491 +0000 UTC m=+31.744905434" lastFinishedPulling="2025-07-16 00:49:40.316393818 +0000 UTC m=+35.792752762" observedRunningTime="2025-07-16 00:49:40.719290524 +0000 UTC m=+36.195649470" watchObservedRunningTime="2025-07-16 00:49:41.730409397 +0000 UTC m=+37.206768362" Jul 16 00:49:41.823093 systemd-networkd[1829]: calicb137b61eb7: Gained IPv6LL Jul 16 00:49:42.588597 containerd[1916]: time="2025-07-16T00:49:42.588506279Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6fb77455db-cgzrs,Uid:8f2a8f6f-6e2f-45a0-8a7c-34a267024787,Namespace:calico-system,Attempt:0,}" Jul 16 00:49:42.589345 containerd[1916]: time="2025-07-16T00:49:42.588597769Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c4d947c8c-gc28p,Uid:8e85faa1-92f4-4845-8e0e-7801e31f9e04,Namespace:calico-apiserver,Attempt:0,}" Jul 16 00:49:42.643025 systemd-networkd[1829]: cali19cab65c567: Link UP Jul 16 00:49:42.643193 systemd-networkd[1829]: cali19cab65c567: Gained carrier Jul 16 00:49:42.647628 kubelet[3434]: I0716 00:49:42.647586 3434 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-bdmg5" podStartSLOduration=32.647568298 podStartE2EDuration="32.647568298s" podCreationTimestamp="2025-07-16 00:49:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-16 
00:49:41.729381094 +0000 UTC m=+37.205740072" watchObservedRunningTime="2025-07-16 00:49:42.647568298 +0000 UTC m=+38.123927242" Jul 16 00:49:42.648406 containerd[1916]: 2025-07-16 00:49:42.608 [INFO][5657] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4372.0.1--n--bd48696324-k8s-calico--kube--controllers--6fb77455db--cgzrs-eth0 calico-kube-controllers-6fb77455db- calico-system 8f2a8f6f-6e2f-45a0-8a7c-34a267024787 786 0 2025-07-16 00:49:20 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6fb77455db projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4372.0.1-n-bd48696324 calico-kube-controllers-6fb77455db-cgzrs eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali19cab65c567 [] [] }} ContainerID="441fafeef86c67001d7efcedb5efc9d43cf083134ae3c47bc5312c618b27d66d" Namespace="calico-system" Pod="calico-kube-controllers-6fb77455db-cgzrs" WorkloadEndpoint="ci--4372.0.1--n--bd48696324-k8s-calico--kube--controllers--6fb77455db--cgzrs-" Jul 16 00:49:42.648406 containerd[1916]: 2025-07-16 00:49:42.608 [INFO][5657] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="441fafeef86c67001d7efcedb5efc9d43cf083134ae3c47bc5312c618b27d66d" Namespace="calico-system" Pod="calico-kube-controllers-6fb77455db-cgzrs" WorkloadEndpoint="ci--4372.0.1--n--bd48696324-k8s-calico--kube--controllers--6fb77455db--cgzrs-eth0" Jul 16 00:49:42.648406 containerd[1916]: 2025-07-16 00:49:42.620 [INFO][5705] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="441fafeef86c67001d7efcedb5efc9d43cf083134ae3c47bc5312c618b27d66d" HandleID="k8s-pod-network.441fafeef86c67001d7efcedb5efc9d43cf083134ae3c47bc5312c618b27d66d" 
Workload="ci--4372.0.1--n--bd48696324-k8s-calico--kube--controllers--6fb77455db--cgzrs-eth0" Jul 16 00:49:42.648406 containerd[1916]: 2025-07-16 00:49:42.621 [INFO][5705] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="441fafeef86c67001d7efcedb5efc9d43cf083134ae3c47bc5312c618b27d66d" HandleID="k8s-pod-network.441fafeef86c67001d7efcedb5efc9d43cf083134ae3c47bc5312c618b27d66d" Workload="ci--4372.0.1--n--bd48696324-k8s-calico--kube--controllers--6fb77455db--cgzrs-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f6a0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4372.0.1-n-bd48696324", "pod":"calico-kube-controllers-6fb77455db-cgzrs", "timestamp":"2025-07-16 00:49:42.620991099 +0000 UTC"}, Hostname:"ci-4372.0.1-n-bd48696324", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 16 00:49:42.648406 containerd[1916]: 2025-07-16 00:49:42.621 [INFO][5705] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 16 00:49:42.648406 containerd[1916]: 2025-07-16 00:49:42.621 [INFO][5705] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 16 00:49:42.648406 containerd[1916]: 2025-07-16 00:49:42.621 [INFO][5705] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4372.0.1-n-bd48696324' Jul 16 00:49:42.648406 containerd[1916]: 2025-07-16 00:49:42.626 [INFO][5705] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.441fafeef86c67001d7efcedb5efc9d43cf083134ae3c47bc5312c618b27d66d" host="ci-4372.0.1-n-bd48696324" Jul 16 00:49:42.648406 containerd[1916]: 2025-07-16 00:49:42.629 [INFO][5705] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4372.0.1-n-bd48696324" Jul 16 00:49:42.648406 containerd[1916]: 2025-07-16 00:49:42.632 [INFO][5705] ipam/ipam.go 511: Trying affinity for 192.168.82.128/26 host="ci-4372.0.1-n-bd48696324" Jul 16 00:49:42.648406 containerd[1916]: 2025-07-16 00:49:42.633 [INFO][5705] ipam/ipam.go 158: Attempting to load block cidr=192.168.82.128/26 host="ci-4372.0.1-n-bd48696324" Jul 16 00:49:42.648406 containerd[1916]: 2025-07-16 00:49:42.635 [INFO][5705] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.82.128/26 host="ci-4372.0.1-n-bd48696324" Jul 16 00:49:42.648406 containerd[1916]: 2025-07-16 00:49:42.635 [INFO][5705] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.82.128/26 handle="k8s-pod-network.441fafeef86c67001d7efcedb5efc9d43cf083134ae3c47bc5312c618b27d66d" host="ci-4372.0.1-n-bd48696324" Jul 16 00:49:42.648406 containerd[1916]: 2025-07-16 00:49:42.636 [INFO][5705] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.441fafeef86c67001d7efcedb5efc9d43cf083134ae3c47bc5312c618b27d66d Jul 16 00:49:42.648406 containerd[1916]: 2025-07-16 00:49:42.638 [INFO][5705] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.82.128/26 handle="k8s-pod-network.441fafeef86c67001d7efcedb5efc9d43cf083134ae3c47bc5312c618b27d66d" host="ci-4372.0.1-n-bd48696324" Jul 16 00:49:42.648406 containerd[1916]: 2025-07-16 00:49:42.641 [INFO][5705] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.82.131/26] block=192.168.82.128/26 handle="k8s-pod-network.441fafeef86c67001d7efcedb5efc9d43cf083134ae3c47bc5312c618b27d66d" host="ci-4372.0.1-n-bd48696324" Jul 16 00:49:42.648406 containerd[1916]: 2025-07-16 00:49:42.641 [INFO][5705] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.82.131/26] handle="k8s-pod-network.441fafeef86c67001d7efcedb5efc9d43cf083134ae3c47bc5312c618b27d66d" host="ci-4372.0.1-n-bd48696324" Jul 16 00:49:42.648406 containerd[1916]: 2025-07-16 00:49:42.641 [INFO][5705] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 16 00:49:42.648406 containerd[1916]: 2025-07-16 00:49:42.641 [INFO][5705] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.82.131/26] IPv6=[] ContainerID="441fafeef86c67001d7efcedb5efc9d43cf083134ae3c47bc5312c618b27d66d" HandleID="k8s-pod-network.441fafeef86c67001d7efcedb5efc9d43cf083134ae3c47bc5312c618b27d66d" Workload="ci--4372.0.1--n--bd48696324-k8s-calico--kube--controllers--6fb77455db--cgzrs-eth0" Jul 16 00:49:42.648794 containerd[1916]: 2025-07-16 00:49:42.642 [INFO][5657] cni-plugin/k8s.go 418: Populated endpoint ContainerID="441fafeef86c67001d7efcedb5efc9d43cf083134ae3c47bc5312c618b27d66d" Namespace="calico-system" Pod="calico-kube-controllers-6fb77455db-cgzrs" WorkloadEndpoint="ci--4372.0.1--n--bd48696324-k8s-calico--kube--controllers--6fb77455db--cgzrs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.0.1--n--bd48696324-k8s-calico--kube--controllers--6fb77455db--cgzrs-eth0", GenerateName:"calico-kube-controllers-6fb77455db-", Namespace:"calico-system", SelfLink:"", UID:"8f2a8f6f-6e2f-45a0-8a7c-34a267024787", ResourceVersion:"786", Generation:0, CreationTimestamp:time.Date(2025, time.July, 16, 0, 49, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6fb77455db", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.0.1-n-bd48696324", ContainerID:"", Pod:"calico-kube-controllers-6fb77455db-cgzrs", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.82.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali19cab65c567", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 16 00:49:42.648794 containerd[1916]: 2025-07-16 00:49:42.642 [INFO][5657] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.82.131/32] ContainerID="441fafeef86c67001d7efcedb5efc9d43cf083134ae3c47bc5312c618b27d66d" Namespace="calico-system" Pod="calico-kube-controllers-6fb77455db-cgzrs" WorkloadEndpoint="ci--4372.0.1--n--bd48696324-k8s-calico--kube--controllers--6fb77455db--cgzrs-eth0" Jul 16 00:49:42.648794 containerd[1916]: 2025-07-16 00:49:42.642 [INFO][5657] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali19cab65c567 ContainerID="441fafeef86c67001d7efcedb5efc9d43cf083134ae3c47bc5312c618b27d66d" Namespace="calico-system" Pod="calico-kube-controllers-6fb77455db-cgzrs" WorkloadEndpoint="ci--4372.0.1--n--bd48696324-k8s-calico--kube--controllers--6fb77455db--cgzrs-eth0" Jul 16 00:49:42.648794 containerd[1916]: 2025-07-16 00:49:42.643 [INFO][5657] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="441fafeef86c67001d7efcedb5efc9d43cf083134ae3c47bc5312c618b27d66d" Namespace="calico-system" Pod="calico-kube-controllers-6fb77455db-cgzrs" WorkloadEndpoint="ci--4372.0.1--n--bd48696324-k8s-calico--kube--controllers--6fb77455db--cgzrs-eth0" Jul 16 00:49:42.648794 containerd[1916]: 2025-07-16 00:49:42.643 [INFO][5657] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="441fafeef86c67001d7efcedb5efc9d43cf083134ae3c47bc5312c618b27d66d" Namespace="calico-system" Pod="calico-kube-controllers-6fb77455db-cgzrs" WorkloadEndpoint="ci--4372.0.1--n--bd48696324-k8s-calico--kube--controllers--6fb77455db--cgzrs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.0.1--n--bd48696324-k8s-calico--kube--controllers--6fb77455db--cgzrs-eth0", GenerateName:"calico-kube-controllers-6fb77455db-", Namespace:"calico-system", SelfLink:"", UID:"8f2a8f6f-6e2f-45a0-8a7c-34a267024787", ResourceVersion:"786", Generation:0, CreationTimestamp:time.Date(2025, time.July, 16, 0, 49, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6fb77455db", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.0.1-n-bd48696324", ContainerID:"441fafeef86c67001d7efcedb5efc9d43cf083134ae3c47bc5312c618b27d66d", Pod:"calico-kube-controllers-6fb77455db-cgzrs", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.82.131/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali19cab65c567", MAC:"36:a8:a3:3c:b7:36", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 16 00:49:42.648794 containerd[1916]: 2025-07-16 00:49:42.647 [INFO][5657] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="441fafeef86c67001d7efcedb5efc9d43cf083134ae3c47bc5312c618b27d66d" Namespace="calico-system" Pod="calico-kube-controllers-6fb77455db-cgzrs" WorkloadEndpoint="ci--4372.0.1--n--bd48696324-k8s-calico--kube--controllers--6fb77455db--cgzrs-eth0" Jul 16 00:49:42.658402 containerd[1916]: time="2025-07-16T00:49:42.658349135Z" level=info msg="connecting to shim 441fafeef86c67001d7efcedb5efc9d43cf083134ae3c47bc5312c618b27d66d" address="unix:///run/containerd/s/d1903896c27da159baa21695b2fc1506f85ae953b86b3714028f9712edafd3e4" namespace=k8s.io protocol=ttrpc version=3 Jul 16 00:49:42.679234 systemd[1]: Started cri-containerd-441fafeef86c67001d7efcedb5efc9d43cf083134ae3c47bc5312c618b27d66d.scope - libcontainer container 441fafeef86c67001d7efcedb5efc9d43cf083134ae3c47bc5312c618b27d66d. 
Jul 16 00:49:42.748742 containerd[1916]: time="2025-07-16T00:49:42.748719068Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6fb77455db-cgzrs,Uid:8f2a8f6f-6e2f-45a0-8a7c-34a267024787,Namespace:calico-system,Attempt:0,} returns sandbox id \"441fafeef86c67001d7efcedb5efc9d43cf083134ae3c47bc5312c618b27d66d\"" Jul 16 00:49:42.749481 containerd[1916]: time="2025-07-16T00:49:42.749468193Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Jul 16 00:49:42.751140 systemd-networkd[1829]: califee2ed69a88: Link UP Jul 16 00:49:42.751354 systemd-networkd[1829]: califee2ed69a88: Gained carrier Jul 16 00:49:42.757346 containerd[1916]: 2025-07-16 00:49:42.608 [INFO][5658] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4372.0.1--n--bd48696324-k8s-calico--apiserver--6c4d947c8c--gc28p-eth0 calico-apiserver-6c4d947c8c- calico-apiserver 8e85faa1-92f4-4845-8e0e-7801e31f9e04 790 0 2025-07-16 00:49:17 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6c4d947c8c projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4372.0.1-n-bd48696324 calico-apiserver-6c4d947c8c-gc28p eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] califee2ed69a88 [] [] }} ContainerID="3691a15ca9bd50000fa253c6dfd0de4124a8f56f440c76aac9fdd7724d669fd5" Namespace="calico-apiserver" Pod="calico-apiserver-6c4d947c8c-gc28p" WorkloadEndpoint="ci--4372.0.1--n--bd48696324-k8s-calico--apiserver--6c4d947c8c--gc28p-" Jul 16 00:49:42.757346 containerd[1916]: 2025-07-16 00:49:42.608 [INFO][5658] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3691a15ca9bd50000fa253c6dfd0de4124a8f56f440c76aac9fdd7724d669fd5" Namespace="calico-apiserver" Pod="calico-apiserver-6c4d947c8c-gc28p" 
WorkloadEndpoint="ci--4372.0.1--n--bd48696324-k8s-calico--apiserver--6c4d947c8c--gc28p-eth0" Jul 16 00:49:42.757346 containerd[1916]: 2025-07-16 00:49:42.620 [INFO][5703] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3691a15ca9bd50000fa253c6dfd0de4124a8f56f440c76aac9fdd7724d669fd5" HandleID="k8s-pod-network.3691a15ca9bd50000fa253c6dfd0de4124a8f56f440c76aac9fdd7724d669fd5" Workload="ci--4372.0.1--n--bd48696324-k8s-calico--apiserver--6c4d947c8c--gc28p-eth0" Jul 16 00:49:42.757346 containerd[1916]: 2025-07-16 00:49:42.621 [INFO][5703] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3691a15ca9bd50000fa253c6dfd0de4124a8f56f440c76aac9fdd7724d669fd5" HandleID="k8s-pod-network.3691a15ca9bd50000fa253c6dfd0de4124a8f56f440c76aac9fdd7724d669fd5" Workload="ci--4372.0.1--n--bd48696324-k8s-calico--apiserver--6c4d947c8c--gc28p-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003856e0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4372.0.1-n-bd48696324", "pod":"calico-apiserver-6c4d947c8c-gc28p", "timestamp":"2025-07-16 00:49:42.620990872 +0000 UTC"}, Hostname:"ci-4372.0.1-n-bd48696324", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 16 00:49:42.757346 containerd[1916]: 2025-07-16 00:49:42.621 [INFO][5703] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 16 00:49:42.757346 containerd[1916]: 2025-07-16 00:49:42.641 [INFO][5703] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 16 00:49:42.757346 containerd[1916]: 2025-07-16 00:49:42.641 [INFO][5703] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4372.0.1-n-bd48696324' Jul 16 00:49:42.757346 containerd[1916]: 2025-07-16 00:49:42.731 [INFO][5703] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3691a15ca9bd50000fa253c6dfd0de4124a8f56f440c76aac9fdd7724d669fd5" host="ci-4372.0.1-n-bd48696324" Jul 16 00:49:42.757346 containerd[1916]: 2025-07-16 00:49:42.735 [INFO][5703] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4372.0.1-n-bd48696324" Jul 16 00:49:42.757346 containerd[1916]: 2025-07-16 00:49:42.739 [INFO][5703] ipam/ipam.go 511: Trying affinity for 192.168.82.128/26 host="ci-4372.0.1-n-bd48696324" Jul 16 00:49:42.757346 containerd[1916]: 2025-07-16 00:49:42.741 [INFO][5703] ipam/ipam.go 158: Attempting to load block cidr=192.168.82.128/26 host="ci-4372.0.1-n-bd48696324" Jul 16 00:49:42.757346 containerd[1916]: 2025-07-16 00:49:42.742 [INFO][5703] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.82.128/26 host="ci-4372.0.1-n-bd48696324" Jul 16 00:49:42.757346 containerd[1916]: 2025-07-16 00:49:42.742 [INFO][5703] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.82.128/26 handle="k8s-pod-network.3691a15ca9bd50000fa253c6dfd0de4124a8f56f440c76aac9fdd7724d669fd5" host="ci-4372.0.1-n-bd48696324" Jul 16 00:49:42.757346 containerd[1916]: 2025-07-16 00:49:42.743 [INFO][5703] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.3691a15ca9bd50000fa253c6dfd0de4124a8f56f440c76aac9fdd7724d669fd5 Jul 16 00:49:42.757346 containerd[1916]: 2025-07-16 00:49:42.746 [INFO][5703] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.82.128/26 handle="k8s-pod-network.3691a15ca9bd50000fa253c6dfd0de4124a8f56f440c76aac9fdd7724d669fd5" host="ci-4372.0.1-n-bd48696324" Jul 16 00:49:42.757346 containerd[1916]: 2025-07-16 00:49:42.749 [INFO][5703] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.82.132/26] block=192.168.82.128/26 handle="k8s-pod-network.3691a15ca9bd50000fa253c6dfd0de4124a8f56f440c76aac9fdd7724d669fd5" host="ci-4372.0.1-n-bd48696324" Jul 16 00:49:42.757346 containerd[1916]: 2025-07-16 00:49:42.749 [INFO][5703] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.82.132/26] handle="k8s-pod-network.3691a15ca9bd50000fa253c6dfd0de4124a8f56f440c76aac9fdd7724d669fd5" host="ci-4372.0.1-n-bd48696324" Jul 16 00:49:42.757346 containerd[1916]: 2025-07-16 00:49:42.749 [INFO][5703] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 16 00:49:42.757346 containerd[1916]: 2025-07-16 00:49:42.749 [INFO][5703] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.82.132/26] IPv6=[] ContainerID="3691a15ca9bd50000fa253c6dfd0de4124a8f56f440c76aac9fdd7724d669fd5" HandleID="k8s-pod-network.3691a15ca9bd50000fa253c6dfd0de4124a8f56f440c76aac9fdd7724d669fd5" Workload="ci--4372.0.1--n--bd48696324-k8s-calico--apiserver--6c4d947c8c--gc28p-eth0" Jul 16 00:49:42.757769 containerd[1916]: 2025-07-16 00:49:42.750 [INFO][5658] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3691a15ca9bd50000fa253c6dfd0de4124a8f56f440c76aac9fdd7724d669fd5" Namespace="calico-apiserver" Pod="calico-apiserver-6c4d947c8c-gc28p" WorkloadEndpoint="ci--4372.0.1--n--bd48696324-k8s-calico--apiserver--6c4d947c8c--gc28p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.0.1--n--bd48696324-k8s-calico--apiserver--6c4d947c8c--gc28p-eth0", GenerateName:"calico-apiserver-6c4d947c8c-", Namespace:"calico-apiserver", SelfLink:"", UID:"8e85faa1-92f4-4845-8e0e-7801e31f9e04", ResourceVersion:"790", Generation:0, CreationTimestamp:time.Date(2025, time.July, 16, 0, 49, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", 
"k8s-app":"calico-apiserver", "pod-template-hash":"6c4d947c8c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.0.1-n-bd48696324", ContainerID:"", Pod:"calico-apiserver-6c4d947c8c-gc28p", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.82.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"califee2ed69a88", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 16 00:49:42.757769 containerd[1916]: 2025-07-16 00:49:42.750 [INFO][5658] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.82.132/32] ContainerID="3691a15ca9bd50000fa253c6dfd0de4124a8f56f440c76aac9fdd7724d669fd5" Namespace="calico-apiserver" Pod="calico-apiserver-6c4d947c8c-gc28p" WorkloadEndpoint="ci--4372.0.1--n--bd48696324-k8s-calico--apiserver--6c4d947c8c--gc28p-eth0" Jul 16 00:49:42.757769 containerd[1916]: 2025-07-16 00:49:42.750 [INFO][5658] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califee2ed69a88 ContainerID="3691a15ca9bd50000fa253c6dfd0de4124a8f56f440c76aac9fdd7724d669fd5" Namespace="calico-apiserver" Pod="calico-apiserver-6c4d947c8c-gc28p" WorkloadEndpoint="ci--4372.0.1--n--bd48696324-k8s-calico--apiserver--6c4d947c8c--gc28p-eth0" Jul 16 00:49:42.757769 containerd[1916]: 2025-07-16 00:49:42.751 [INFO][5658] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3691a15ca9bd50000fa253c6dfd0de4124a8f56f440c76aac9fdd7724d669fd5" Namespace="calico-apiserver" Pod="calico-apiserver-6c4d947c8c-gc28p" 
WorkloadEndpoint="ci--4372.0.1--n--bd48696324-k8s-calico--apiserver--6c4d947c8c--gc28p-eth0" Jul 16 00:49:42.757769 containerd[1916]: 2025-07-16 00:49:42.751 [INFO][5658] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3691a15ca9bd50000fa253c6dfd0de4124a8f56f440c76aac9fdd7724d669fd5" Namespace="calico-apiserver" Pod="calico-apiserver-6c4d947c8c-gc28p" WorkloadEndpoint="ci--4372.0.1--n--bd48696324-k8s-calico--apiserver--6c4d947c8c--gc28p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.0.1--n--bd48696324-k8s-calico--apiserver--6c4d947c8c--gc28p-eth0", GenerateName:"calico-apiserver-6c4d947c8c-", Namespace:"calico-apiserver", SelfLink:"", UID:"8e85faa1-92f4-4845-8e0e-7801e31f9e04", ResourceVersion:"790", Generation:0, CreationTimestamp:time.Date(2025, time.July, 16, 0, 49, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6c4d947c8c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.0.1-n-bd48696324", ContainerID:"3691a15ca9bd50000fa253c6dfd0de4124a8f56f440c76aac9fdd7724d669fd5", Pod:"calico-apiserver-6c4d947c8c-gc28p", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.82.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"califee2ed69a88", MAC:"46:e9:e4:93:ba:ea", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 16 00:49:42.757769 containerd[1916]: 2025-07-16 00:49:42.756 [INFO][5658] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3691a15ca9bd50000fa253c6dfd0de4124a8f56f440c76aac9fdd7724d669fd5" Namespace="calico-apiserver" Pod="calico-apiserver-6c4d947c8c-gc28p" WorkloadEndpoint="ci--4372.0.1--n--bd48696324-k8s-calico--apiserver--6c4d947c8c--gc28p-eth0" Jul 16 00:49:42.765495 containerd[1916]: time="2025-07-16T00:49:42.765448227Z" level=info msg="connecting to shim 3691a15ca9bd50000fa253c6dfd0de4124a8f56f440c76aac9fdd7724d669fd5" address="unix:///run/containerd/s/b67148f8532d4bd221ae066d28471602d8f1f4735c84a3593cab149fcdab3144" namespace=k8s.io protocol=ttrpc version=3 Jul 16 00:49:42.793049 systemd[1]: Started cri-containerd-3691a15ca9bd50000fa253c6dfd0de4124a8f56f440c76aac9fdd7724d669fd5.scope - libcontainer container 3691a15ca9bd50000fa253c6dfd0de4124a8f56f440c76aac9fdd7724d669fd5. 
Jul 16 00:49:42.819544 containerd[1916]: time="2025-07-16T00:49:42.819523382Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c4d947c8c-gc28p,Uid:8e85faa1-92f4-4845-8e0e-7801e31f9e04,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"3691a15ca9bd50000fa253c6dfd0de4124a8f56f440c76aac9fdd7724d669fd5\"" Jul 16 00:49:43.587277 containerd[1916]: time="2025-07-16T00:49:43.587219775Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c4d947c8c-mkbs4,Uid:5a9829a7-411d-4497-a3d6-2632af67105b,Namespace:calico-apiserver,Attempt:0,}" Jul 16 00:49:43.644478 systemd-networkd[1829]: cali496a6a27f40: Link UP Jul 16 00:49:43.644626 systemd-networkd[1829]: cali496a6a27f40: Gained carrier Jul 16 00:49:43.649993 containerd[1916]: 2025-07-16 00:49:43.607 [INFO][5846] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4372.0.1--n--bd48696324-k8s-calico--apiserver--6c4d947c8c--mkbs4-eth0 calico-apiserver-6c4d947c8c- calico-apiserver 5a9829a7-411d-4497-a3d6-2632af67105b 787 0 2025-07-16 00:49:17 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6c4d947c8c projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4372.0.1-n-bd48696324 calico-apiserver-6c4d947c8c-mkbs4 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali496a6a27f40 [] [] }} ContainerID="56784214c8743756634111848b0a1ddedc365e8de8feaf5b0aa16fb296f6f9e5" Namespace="calico-apiserver" Pod="calico-apiserver-6c4d947c8c-mkbs4" WorkloadEndpoint="ci--4372.0.1--n--bd48696324-k8s-calico--apiserver--6c4d947c8c--mkbs4-" Jul 16 00:49:43.649993 containerd[1916]: 2025-07-16 00:49:43.607 [INFO][5846] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="56784214c8743756634111848b0a1ddedc365e8de8feaf5b0aa16fb296f6f9e5" Namespace="calico-apiserver" Pod="calico-apiserver-6c4d947c8c-mkbs4" WorkloadEndpoint="ci--4372.0.1--n--bd48696324-k8s-calico--apiserver--6c4d947c8c--mkbs4-eth0" Jul 16 00:49:43.649993 containerd[1916]: 2025-07-16 00:49:43.620 [INFO][5867] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="56784214c8743756634111848b0a1ddedc365e8de8feaf5b0aa16fb296f6f9e5" HandleID="k8s-pod-network.56784214c8743756634111848b0a1ddedc365e8de8feaf5b0aa16fb296f6f9e5" Workload="ci--4372.0.1--n--bd48696324-k8s-calico--apiserver--6c4d947c8c--mkbs4-eth0" Jul 16 00:49:43.649993 containerd[1916]: 2025-07-16 00:49:43.620 [INFO][5867] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="56784214c8743756634111848b0a1ddedc365e8de8feaf5b0aa16fb296f6f9e5" HandleID="k8s-pod-network.56784214c8743756634111848b0a1ddedc365e8de8feaf5b0aa16fb296f6f9e5" Workload="ci--4372.0.1--n--bd48696324-k8s-calico--apiserver--6c4d947c8c--mkbs4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003bbb70), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4372.0.1-n-bd48696324", "pod":"calico-apiserver-6c4d947c8c-mkbs4", "timestamp":"2025-07-16 00:49:43.620703919 +0000 UTC"}, Hostname:"ci-4372.0.1-n-bd48696324", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 16 00:49:43.649993 containerd[1916]: 2025-07-16 00:49:43.620 [INFO][5867] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 16 00:49:43.649993 containerd[1916]: 2025-07-16 00:49:43.620 [INFO][5867] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 16 00:49:43.649993 containerd[1916]: 2025-07-16 00:49:43.620 [INFO][5867] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4372.0.1-n-bd48696324' Jul 16 00:49:43.649993 containerd[1916]: 2025-07-16 00:49:43.625 [INFO][5867] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.56784214c8743756634111848b0a1ddedc365e8de8feaf5b0aa16fb296f6f9e5" host="ci-4372.0.1-n-bd48696324" Jul 16 00:49:43.649993 containerd[1916]: 2025-07-16 00:49:43.629 [INFO][5867] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4372.0.1-n-bd48696324" Jul 16 00:49:43.649993 containerd[1916]: 2025-07-16 00:49:43.632 [INFO][5867] ipam/ipam.go 511: Trying affinity for 192.168.82.128/26 host="ci-4372.0.1-n-bd48696324" Jul 16 00:49:43.649993 containerd[1916]: 2025-07-16 00:49:43.633 [INFO][5867] ipam/ipam.go 158: Attempting to load block cidr=192.168.82.128/26 host="ci-4372.0.1-n-bd48696324" Jul 16 00:49:43.649993 containerd[1916]: 2025-07-16 00:49:43.635 [INFO][5867] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.82.128/26 host="ci-4372.0.1-n-bd48696324" Jul 16 00:49:43.649993 containerd[1916]: 2025-07-16 00:49:43.635 [INFO][5867] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.82.128/26 handle="k8s-pod-network.56784214c8743756634111848b0a1ddedc365e8de8feaf5b0aa16fb296f6f9e5" host="ci-4372.0.1-n-bd48696324" Jul 16 00:49:43.649993 containerd[1916]: 2025-07-16 00:49:43.636 [INFO][5867] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.56784214c8743756634111848b0a1ddedc365e8de8feaf5b0aa16fb296f6f9e5 Jul 16 00:49:43.649993 containerd[1916]: 2025-07-16 00:49:43.639 [INFO][5867] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.82.128/26 handle="k8s-pod-network.56784214c8743756634111848b0a1ddedc365e8de8feaf5b0aa16fb296f6f9e5" host="ci-4372.0.1-n-bd48696324" Jul 16 00:49:43.649993 containerd[1916]: 2025-07-16 00:49:43.642 [INFO][5867] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.82.133/26] block=192.168.82.128/26 handle="k8s-pod-network.56784214c8743756634111848b0a1ddedc365e8de8feaf5b0aa16fb296f6f9e5" host="ci-4372.0.1-n-bd48696324" Jul 16 00:49:43.649993 containerd[1916]: 2025-07-16 00:49:43.642 [INFO][5867] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.82.133/26] handle="k8s-pod-network.56784214c8743756634111848b0a1ddedc365e8de8feaf5b0aa16fb296f6f9e5" host="ci-4372.0.1-n-bd48696324" Jul 16 00:49:43.649993 containerd[1916]: 2025-07-16 00:49:43.642 [INFO][5867] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 16 00:49:43.649993 containerd[1916]: 2025-07-16 00:49:43.642 [INFO][5867] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.82.133/26] IPv6=[] ContainerID="56784214c8743756634111848b0a1ddedc365e8de8feaf5b0aa16fb296f6f9e5" HandleID="k8s-pod-network.56784214c8743756634111848b0a1ddedc365e8de8feaf5b0aa16fb296f6f9e5" Workload="ci--4372.0.1--n--bd48696324-k8s-calico--apiserver--6c4d947c8c--mkbs4-eth0" Jul 16 00:49:43.650500 containerd[1916]: 2025-07-16 00:49:43.643 [INFO][5846] cni-plugin/k8s.go 418: Populated endpoint ContainerID="56784214c8743756634111848b0a1ddedc365e8de8feaf5b0aa16fb296f6f9e5" Namespace="calico-apiserver" Pod="calico-apiserver-6c4d947c8c-mkbs4" WorkloadEndpoint="ci--4372.0.1--n--bd48696324-k8s-calico--apiserver--6c4d947c8c--mkbs4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.0.1--n--bd48696324-k8s-calico--apiserver--6c4d947c8c--mkbs4-eth0", GenerateName:"calico-apiserver-6c4d947c8c-", Namespace:"calico-apiserver", SelfLink:"", UID:"5a9829a7-411d-4497-a3d6-2632af67105b", ResourceVersion:"787", Generation:0, CreationTimestamp:time.Date(2025, time.July, 16, 0, 49, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", 
"k8s-app":"calico-apiserver", "pod-template-hash":"6c4d947c8c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.0.1-n-bd48696324", ContainerID:"", Pod:"calico-apiserver-6c4d947c8c-mkbs4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.82.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali496a6a27f40", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 16 00:49:43.650500 containerd[1916]: 2025-07-16 00:49:43.643 [INFO][5846] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.82.133/32] ContainerID="56784214c8743756634111848b0a1ddedc365e8de8feaf5b0aa16fb296f6f9e5" Namespace="calico-apiserver" Pod="calico-apiserver-6c4d947c8c-mkbs4" WorkloadEndpoint="ci--4372.0.1--n--bd48696324-k8s-calico--apiserver--6c4d947c8c--mkbs4-eth0" Jul 16 00:49:43.650500 containerd[1916]: 2025-07-16 00:49:43.643 [INFO][5846] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali496a6a27f40 ContainerID="56784214c8743756634111848b0a1ddedc365e8de8feaf5b0aa16fb296f6f9e5" Namespace="calico-apiserver" Pod="calico-apiserver-6c4d947c8c-mkbs4" WorkloadEndpoint="ci--4372.0.1--n--bd48696324-k8s-calico--apiserver--6c4d947c8c--mkbs4-eth0" Jul 16 00:49:43.650500 containerd[1916]: 2025-07-16 00:49:43.644 [INFO][5846] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="56784214c8743756634111848b0a1ddedc365e8de8feaf5b0aa16fb296f6f9e5" Namespace="calico-apiserver" Pod="calico-apiserver-6c4d947c8c-mkbs4" 
WorkloadEndpoint="ci--4372.0.1--n--bd48696324-k8s-calico--apiserver--6c4d947c8c--mkbs4-eth0" Jul 16 00:49:43.650500 containerd[1916]: 2025-07-16 00:49:43.644 [INFO][5846] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="56784214c8743756634111848b0a1ddedc365e8de8feaf5b0aa16fb296f6f9e5" Namespace="calico-apiserver" Pod="calico-apiserver-6c4d947c8c-mkbs4" WorkloadEndpoint="ci--4372.0.1--n--bd48696324-k8s-calico--apiserver--6c4d947c8c--mkbs4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.0.1--n--bd48696324-k8s-calico--apiserver--6c4d947c8c--mkbs4-eth0", GenerateName:"calico-apiserver-6c4d947c8c-", Namespace:"calico-apiserver", SelfLink:"", UID:"5a9829a7-411d-4497-a3d6-2632af67105b", ResourceVersion:"787", Generation:0, CreationTimestamp:time.Date(2025, time.July, 16, 0, 49, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6c4d947c8c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.0.1-n-bd48696324", ContainerID:"56784214c8743756634111848b0a1ddedc365e8de8feaf5b0aa16fb296f6f9e5", Pod:"calico-apiserver-6c4d947c8c-mkbs4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.82.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali496a6a27f40", MAC:"ce:54:f5:2c:39:6f", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 16 00:49:43.650500 containerd[1916]: 2025-07-16 00:49:43.649 [INFO][5846] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="56784214c8743756634111848b0a1ddedc365e8de8feaf5b0aa16fb296f6f9e5" Namespace="calico-apiserver" Pod="calico-apiserver-6c4d947c8c-mkbs4" WorkloadEndpoint="ci--4372.0.1--n--bd48696324-k8s-calico--apiserver--6c4d947c8c--mkbs4-eth0" Jul 16 00:49:43.669006 containerd[1916]: time="2025-07-16T00:49:43.668981334Z" level=info msg="connecting to shim 56784214c8743756634111848b0a1ddedc365e8de8feaf5b0aa16fb296f6f9e5" address="unix:///run/containerd/s/7c9e1b72b4effde36e80fd89aad8019ee36a2dfafa125d6f812f84defaf09639" namespace=k8s.io protocol=ttrpc version=3 Jul 16 00:49:43.692022 systemd[1]: Started cri-containerd-56784214c8743756634111848b0a1ddedc365e8de8feaf5b0aa16fb296f6f9e5.scope - libcontainer container 56784214c8743756634111848b0a1ddedc365e8de8feaf5b0aa16fb296f6f9e5. 
Jul 16 00:49:43.720871 containerd[1916]: time="2025-07-16T00:49:43.720835076Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c4d947c8c-mkbs4,Uid:5a9829a7-411d-4497-a3d6-2632af67105b,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"56784214c8743756634111848b0a1ddedc365e8de8feaf5b0aa16fb296f6f9e5\"" Jul 16 00:49:44.254053 systemd-networkd[1829]: califee2ed69a88: Gained IPv6LL Jul 16 00:49:44.446002 systemd-networkd[1829]: cali19cab65c567: Gained IPv6LL Jul 16 00:49:44.588175 containerd[1916]: time="2025-07-16T00:49:44.588129117Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-995v6,Uid:1340497f-ae28-4eda-920e-f4de075e0d82,Namespace:kube-system,Attempt:0,}" Jul 16 00:49:44.588278 containerd[1916]: time="2025-07-16T00:49:44.588126843Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-lxlf2,Uid:b5d72f59-207b-442f-b9d9-abe3a39737ed,Namespace:calico-system,Attempt:0,}" Jul 16 00:49:44.643191 systemd-networkd[1829]: calicebb77cf32a: Link UP Jul 16 00:49:44.643362 systemd-networkd[1829]: calicebb77cf32a: Gained carrier Jul 16 00:49:44.648422 containerd[1916]: 2025-07-16 00:49:44.608 [INFO][5932] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4372.0.1--n--bd48696324-k8s-coredns--668d6bf9bc--995v6-eth0 coredns-668d6bf9bc- kube-system 1340497f-ae28-4eda-920e-f4de075e0d82 788 0 2025-07-16 00:49:10 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4372.0.1-n-bd48696324 coredns-668d6bf9bc-995v6 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calicebb77cf32a [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="cb387f623f56fd9f9b7ef380ed722f573f217b72a6174f47a4fb8a15e6014e6d" Namespace="kube-system" Pod="coredns-668d6bf9bc-995v6" 
WorkloadEndpoint="ci--4372.0.1--n--bd48696324-k8s-coredns--668d6bf9bc--995v6-" Jul 16 00:49:44.648422 containerd[1916]: 2025-07-16 00:49:44.608 [INFO][5932] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="cb387f623f56fd9f9b7ef380ed722f573f217b72a6174f47a4fb8a15e6014e6d" Namespace="kube-system" Pod="coredns-668d6bf9bc-995v6" WorkloadEndpoint="ci--4372.0.1--n--bd48696324-k8s-coredns--668d6bf9bc--995v6-eth0" Jul 16 00:49:44.648422 containerd[1916]: 2025-07-16 00:49:44.622 [INFO][5980] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="cb387f623f56fd9f9b7ef380ed722f573f217b72a6174f47a4fb8a15e6014e6d" HandleID="k8s-pod-network.cb387f623f56fd9f9b7ef380ed722f573f217b72a6174f47a4fb8a15e6014e6d" Workload="ci--4372.0.1--n--bd48696324-k8s-coredns--668d6bf9bc--995v6-eth0" Jul 16 00:49:44.648422 containerd[1916]: 2025-07-16 00:49:44.622 [INFO][5980] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="cb387f623f56fd9f9b7ef380ed722f573f217b72a6174f47a4fb8a15e6014e6d" HandleID="k8s-pod-network.cb387f623f56fd9f9b7ef380ed722f573f217b72a6174f47a4fb8a15e6014e6d" Workload="ci--4372.0.1--n--bd48696324-k8s-coredns--668d6bf9bc--995v6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f850), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4372.0.1-n-bd48696324", "pod":"coredns-668d6bf9bc-995v6", "timestamp":"2025-07-16 00:49:44.622297541 +0000 UTC"}, Hostname:"ci-4372.0.1-n-bd48696324", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 16 00:49:44.648422 containerd[1916]: 2025-07-16 00:49:44.622 [INFO][5980] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 16 00:49:44.648422 containerd[1916]: 2025-07-16 00:49:44.622 [INFO][5980] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 16 00:49:44.648422 containerd[1916]: 2025-07-16 00:49:44.622 [INFO][5980] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4372.0.1-n-bd48696324' Jul 16 00:49:44.648422 containerd[1916]: 2025-07-16 00:49:44.626 [INFO][5980] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.cb387f623f56fd9f9b7ef380ed722f573f217b72a6174f47a4fb8a15e6014e6d" host="ci-4372.0.1-n-bd48696324" Jul 16 00:49:44.648422 containerd[1916]: 2025-07-16 00:49:44.630 [INFO][5980] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4372.0.1-n-bd48696324" Jul 16 00:49:44.648422 containerd[1916]: 2025-07-16 00:49:44.632 [INFO][5980] ipam/ipam.go 511: Trying affinity for 192.168.82.128/26 host="ci-4372.0.1-n-bd48696324" Jul 16 00:49:44.648422 containerd[1916]: 2025-07-16 00:49:44.633 [INFO][5980] ipam/ipam.go 158: Attempting to load block cidr=192.168.82.128/26 host="ci-4372.0.1-n-bd48696324" Jul 16 00:49:44.648422 containerd[1916]: 2025-07-16 00:49:44.635 [INFO][5980] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.82.128/26 host="ci-4372.0.1-n-bd48696324" Jul 16 00:49:44.648422 containerd[1916]: 2025-07-16 00:49:44.635 [INFO][5980] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.82.128/26 handle="k8s-pod-network.cb387f623f56fd9f9b7ef380ed722f573f217b72a6174f47a4fb8a15e6014e6d" host="ci-4372.0.1-n-bd48696324" Jul 16 00:49:44.648422 containerd[1916]: 2025-07-16 00:49:44.636 [INFO][5980] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.cb387f623f56fd9f9b7ef380ed722f573f217b72a6174f47a4fb8a15e6014e6d Jul 16 00:49:44.648422 containerd[1916]: 2025-07-16 00:49:44.639 [INFO][5980] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.82.128/26 handle="k8s-pod-network.cb387f623f56fd9f9b7ef380ed722f573f217b72a6174f47a4fb8a15e6014e6d" host="ci-4372.0.1-n-bd48696324" Jul 16 00:49:44.648422 containerd[1916]: 2025-07-16 00:49:44.641 [INFO][5980] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.82.134/26] block=192.168.82.128/26 handle="k8s-pod-network.cb387f623f56fd9f9b7ef380ed722f573f217b72a6174f47a4fb8a15e6014e6d" host="ci-4372.0.1-n-bd48696324" Jul 16 00:49:44.648422 containerd[1916]: 2025-07-16 00:49:44.641 [INFO][5980] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.82.134/26] handle="k8s-pod-network.cb387f623f56fd9f9b7ef380ed722f573f217b72a6174f47a4fb8a15e6014e6d" host="ci-4372.0.1-n-bd48696324" Jul 16 00:49:44.648422 containerd[1916]: 2025-07-16 00:49:44.641 [INFO][5980] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 16 00:49:44.648422 containerd[1916]: 2025-07-16 00:49:44.641 [INFO][5980] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.82.134/26] IPv6=[] ContainerID="cb387f623f56fd9f9b7ef380ed722f573f217b72a6174f47a4fb8a15e6014e6d" HandleID="k8s-pod-network.cb387f623f56fd9f9b7ef380ed722f573f217b72a6174f47a4fb8a15e6014e6d" Workload="ci--4372.0.1--n--bd48696324-k8s-coredns--668d6bf9bc--995v6-eth0" Jul 16 00:49:44.648825 containerd[1916]: 2025-07-16 00:49:44.642 [INFO][5932] cni-plugin/k8s.go 418: Populated endpoint ContainerID="cb387f623f56fd9f9b7ef380ed722f573f217b72a6174f47a4fb8a15e6014e6d" Namespace="kube-system" Pod="coredns-668d6bf9bc-995v6" WorkloadEndpoint="ci--4372.0.1--n--bd48696324-k8s-coredns--668d6bf9bc--995v6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.0.1--n--bd48696324-k8s-coredns--668d6bf9bc--995v6-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"1340497f-ae28-4eda-920e-f4de075e0d82", ResourceVersion:"788", Generation:0, CreationTimestamp:time.Date(2025, time.July, 16, 0, 49, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.0.1-n-bd48696324", ContainerID:"", Pod:"coredns-668d6bf9bc-995v6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.82.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calicebb77cf32a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 16 00:49:44.648825 containerd[1916]: 2025-07-16 00:49:44.642 [INFO][5932] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.82.134/32] ContainerID="cb387f623f56fd9f9b7ef380ed722f573f217b72a6174f47a4fb8a15e6014e6d" Namespace="kube-system" Pod="coredns-668d6bf9bc-995v6" WorkloadEndpoint="ci--4372.0.1--n--bd48696324-k8s-coredns--668d6bf9bc--995v6-eth0" Jul 16 00:49:44.648825 containerd[1916]: 2025-07-16 00:49:44.642 [INFO][5932] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calicebb77cf32a ContainerID="cb387f623f56fd9f9b7ef380ed722f573f217b72a6174f47a4fb8a15e6014e6d" Namespace="kube-system" Pod="coredns-668d6bf9bc-995v6" WorkloadEndpoint="ci--4372.0.1--n--bd48696324-k8s-coredns--668d6bf9bc--995v6-eth0" Jul 16 00:49:44.648825 containerd[1916]: 2025-07-16 00:49:44.643 [INFO][5932] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="cb387f623f56fd9f9b7ef380ed722f573f217b72a6174f47a4fb8a15e6014e6d" Namespace="kube-system" Pod="coredns-668d6bf9bc-995v6" WorkloadEndpoint="ci--4372.0.1--n--bd48696324-k8s-coredns--668d6bf9bc--995v6-eth0" Jul 16 00:49:44.648825 containerd[1916]: 2025-07-16 00:49:44.643 [INFO][5932] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="cb387f623f56fd9f9b7ef380ed722f573f217b72a6174f47a4fb8a15e6014e6d" Namespace="kube-system" Pod="coredns-668d6bf9bc-995v6" WorkloadEndpoint="ci--4372.0.1--n--bd48696324-k8s-coredns--668d6bf9bc--995v6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.0.1--n--bd48696324-k8s-coredns--668d6bf9bc--995v6-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"1340497f-ae28-4eda-920e-f4de075e0d82", ResourceVersion:"788", Generation:0, CreationTimestamp:time.Date(2025, time.July, 16, 0, 49, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.0.1-n-bd48696324", ContainerID:"cb387f623f56fd9f9b7ef380ed722f573f217b72a6174f47a4fb8a15e6014e6d", Pod:"coredns-668d6bf9bc-995v6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.82.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calicebb77cf32a", MAC:"da:1c:5b:17:bb:09", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 16 00:49:44.648825 containerd[1916]: 2025-07-16 00:49:44.647 [INFO][5932] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="cb387f623f56fd9f9b7ef380ed722f573f217b72a6174f47a4fb8a15e6014e6d" Namespace="kube-system" Pod="coredns-668d6bf9bc-995v6" WorkloadEndpoint="ci--4372.0.1--n--bd48696324-k8s-coredns--668d6bf9bc--995v6-eth0" Jul 16 00:49:44.655710 containerd[1916]: time="2025-07-16T00:49:44.655682497Z" level=info msg="connecting to shim cb387f623f56fd9f9b7ef380ed722f573f217b72a6174f47a4fb8a15e6014e6d" address="unix:///run/containerd/s/497daf24f63306fb98f9d3759722087f8f20645043160efca9807421dfbd418c" namespace=k8s.io protocol=ttrpc version=3 Jul 16 00:49:44.672002 systemd[1]: Started cri-containerd-cb387f623f56fd9f9b7ef380ed722f573f217b72a6174f47a4fb8a15e6014e6d.scope - libcontainer container cb387f623f56fd9f9b7ef380ed722f573f217b72a6174f47a4fb8a15e6014e6d. 
Jul 16 00:49:44.701333 containerd[1916]: time="2025-07-16T00:49:44.701311275Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-995v6,Uid:1340497f-ae28-4eda-920e-f4de075e0d82,Namespace:kube-system,Attempt:0,} returns sandbox id \"cb387f623f56fd9f9b7ef380ed722f573f217b72a6174f47a4fb8a15e6014e6d\"" Jul 16 00:49:44.701895 systemd-networkd[1829]: cali496a6a27f40: Gained IPv6LL Jul 16 00:49:44.702437 containerd[1916]: time="2025-07-16T00:49:44.702423818Z" level=info msg="CreateContainer within sandbox \"cb387f623f56fd9f9b7ef380ed722f573f217b72a6174f47a4fb8a15e6014e6d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 16 00:49:44.705792 containerd[1916]: time="2025-07-16T00:49:44.705777630Z" level=info msg="Container 9183763af681ecf397d8dc1f6ee7f3f405efa8c52a1c35cfbaef4ad26ae0a1c0: CDI devices from CRI Config.CDIDevices: []" Jul 16 00:49:44.709253 containerd[1916]: time="2025-07-16T00:49:44.709232749Z" level=info msg="CreateContainer within sandbox \"cb387f623f56fd9f9b7ef380ed722f573f217b72a6174f47a4fb8a15e6014e6d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9183763af681ecf397d8dc1f6ee7f3f405efa8c52a1c35cfbaef4ad26ae0a1c0\"" Jul 16 00:49:44.709493 containerd[1916]: time="2025-07-16T00:49:44.709483092Z" level=info msg="StartContainer for \"9183763af681ecf397d8dc1f6ee7f3f405efa8c52a1c35cfbaef4ad26ae0a1c0\"" Jul 16 00:49:44.710007 containerd[1916]: time="2025-07-16T00:49:44.709969758Z" level=info msg="connecting to shim 9183763af681ecf397d8dc1f6ee7f3f405efa8c52a1c35cfbaef4ad26ae0a1c0" address="unix:///run/containerd/s/497daf24f63306fb98f9d3759722087f8f20645043160efca9807421dfbd418c" protocol=ttrpc version=3 Jul 16 00:49:44.730976 systemd[1]: Started cri-containerd-9183763af681ecf397d8dc1f6ee7f3f405efa8c52a1c35cfbaef4ad26ae0a1c0.scope - libcontainer container 9183763af681ecf397d8dc1f6ee7f3f405efa8c52a1c35cfbaef4ad26ae0a1c0. 
Jul 16 00:49:44.745804 containerd[1916]: time="2025-07-16T00:49:44.745776978Z" level=info msg="StartContainer for \"9183763af681ecf397d8dc1f6ee7f3f405efa8c52a1c35cfbaef4ad26ae0a1c0\" returns successfully" Jul 16 00:49:44.751005 systemd-networkd[1829]: calib0862642ae8: Link UP Jul 16 00:49:44.751153 systemd-networkd[1829]: calib0862642ae8: Gained carrier Jul 16 00:49:44.758036 containerd[1916]: 2025-07-16 00:49:44.608 [INFO][5934] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4372.0.1--n--bd48696324-k8s-goldmane--768f4c5c69--lxlf2-eth0 goldmane-768f4c5c69- calico-system b5d72f59-207b-442f-b9d9-abe3a39737ed 791 0 2025-07-16 00:49:19 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:768f4c5c69 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4372.0.1-n-bd48696324 goldmane-768f4c5c69-lxlf2 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calib0862642ae8 [] [] }} ContainerID="2d1f0a909d792319ac2e9975d7f22e5fedd05165db9a3213def39c3732dd7ca0" Namespace="calico-system" Pod="goldmane-768f4c5c69-lxlf2" WorkloadEndpoint="ci--4372.0.1--n--bd48696324-k8s-goldmane--768f4c5c69--lxlf2-" Jul 16 00:49:44.758036 containerd[1916]: 2025-07-16 00:49:44.608 [INFO][5934] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2d1f0a909d792319ac2e9975d7f22e5fedd05165db9a3213def39c3732dd7ca0" Namespace="calico-system" Pod="goldmane-768f4c5c69-lxlf2" WorkloadEndpoint="ci--4372.0.1--n--bd48696324-k8s-goldmane--768f4c5c69--lxlf2-eth0" Jul 16 00:49:44.758036 containerd[1916]: 2025-07-16 00:49:44.622 [INFO][5978] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2d1f0a909d792319ac2e9975d7f22e5fedd05165db9a3213def39c3732dd7ca0" HandleID="k8s-pod-network.2d1f0a909d792319ac2e9975d7f22e5fedd05165db9a3213def39c3732dd7ca0" 
Workload="ci--4372.0.1--n--bd48696324-k8s-goldmane--768f4c5c69--lxlf2-eth0" Jul 16 00:49:44.758036 containerd[1916]: 2025-07-16 00:49:44.622 [INFO][5978] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2d1f0a909d792319ac2e9975d7f22e5fedd05165db9a3213def39c3732dd7ca0" HandleID="k8s-pod-network.2d1f0a909d792319ac2e9975d7f22e5fedd05165db9a3213def39c3732dd7ca0" Workload="ci--4372.0.1--n--bd48696324-k8s-goldmane--768f4c5c69--lxlf2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002e7330), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4372.0.1-n-bd48696324", "pod":"goldmane-768f4c5c69-lxlf2", "timestamp":"2025-07-16 00:49:44.62252786 +0000 UTC"}, Hostname:"ci-4372.0.1-n-bd48696324", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 16 00:49:44.758036 containerd[1916]: 2025-07-16 00:49:44.622 [INFO][5978] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 16 00:49:44.758036 containerd[1916]: 2025-07-16 00:49:44.641 [INFO][5978] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 16 00:49:44.758036 containerd[1916]: 2025-07-16 00:49:44.641 [INFO][5978] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4372.0.1-n-bd48696324' Jul 16 00:49:44.758036 containerd[1916]: 2025-07-16 00:49:44.727 [INFO][5978] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2d1f0a909d792319ac2e9975d7f22e5fedd05165db9a3213def39c3732dd7ca0" host="ci-4372.0.1-n-bd48696324" Jul 16 00:49:44.758036 containerd[1916]: 2025-07-16 00:49:44.730 [INFO][5978] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4372.0.1-n-bd48696324" Jul 16 00:49:44.758036 containerd[1916]: 2025-07-16 00:49:44.732 [INFO][5978] ipam/ipam.go 511: Trying affinity for 192.168.82.128/26 host="ci-4372.0.1-n-bd48696324" Jul 16 00:49:44.758036 containerd[1916]: 2025-07-16 00:49:44.733 [INFO][5978] ipam/ipam.go 158: Attempting to load block cidr=192.168.82.128/26 host="ci-4372.0.1-n-bd48696324" Jul 16 00:49:44.758036 containerd[1916]: 2025-07-16 00:49:44.734 [INFO][5978] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.82.128/26 host="ci-4372.0.1-n-bd48696324" Jul 16 00:49:44.758036 containerd[1916]: 2025-07-16 00:49:44.734 [INFO][5978] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.82.128/26 handle="k8s-pod-network.2d1f0a909d792319ac2e9975d7f22e5fedd05165db9a3213def39c3732dd7ca0" host="ci-4372.0.1-n-bd48696324" Jul 16 00:49:44.758036 containerd[1916]: 2025-07-16 00:49:44.735 [INFO][5978] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.2d1f0a909d792319ac2e9975d7f22e5fedd05165db9a3213def39c3732dd7ca0 Jul 16 00:49:44.758036 containerd[1916]: 2025-07-16 00:49:44.745 [INFO][5978] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.82.128/26 handle="k8s-pod-network.2d1f0a909d792319ac2e9975d7f22e5fedd05165db9a3213def39c3732dd7ca0" host="ci-4372.0.1-n-bd48696324" Jul 16 00:49:44.758036 containerd[1916]: 2025-07-16 00:49:44.748 [INFO][5978] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.82.135/26] block=192.168.82.128/26 handle="k8s-pod-network.2d1f0a909d792319ac2e9975d7f22e5fedd05165db9a3213def39c3732dd7ca0" host="ci-4372.0.1-n-bd48696324" Jul 16 00:49:44.758036 containerd[1916]: 2025-07-16 00:49:44.749 [INFO][5978] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.82.135/26] handle="k8s-pod-network.2d1f0a909d792319ac2e9975d7f22e5fedd05165db9a3213def39c3732dd7ca0" host="ci-4372.0.1-n-bd48696324" Jul 16 00:49:44.758036 containerd[1916]: 2025-07-16 00:49:44.749 [INFO][5978] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 16 00:49:44.758036 containerd[1916]: 2025-07-16 00:49:44.749 [INFO][5978] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.82.135/26] IPv6=[] ContainerID="2d1f0a909d792319ac2e9975d7f22e5fedd05165db9a3213def39c3732dd7ca0" HandleID="k8s-pod-network.2d1f0a909d792319ac2e9975d7f22e5fedd05165db9a3213def39c3732dd7ca0" Workload="ci--4372.0.1--n--bd48696324-k8s-goldmane--768f4c5c69--lxlf2-eth0" Jul 16 00:49:44.758655 containerd[1916]: 2025-07-16 00:49:44.750 [INFO][5934] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2d1f0a909d792319ac2e9975d7f22e5fedd05165db9a3213def39c3732dd7ca0" Namespace="calico-system" Pod="goldmane-768f4c5c69-lxlf2" WorkloadEndpoint="ci--4372.0.1--n--bd48696324-k8s-goldmane--768f4c5c69--lxlf2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.0.1--n--bd48696324-k8s-goldmane--768f4c5c69--lxlf2-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"b5d72f59-207b-442f-b9d9-abe3a39737ed", ResourceVersion:"791", Generation:0, CreationTimestamp:time.Date(2025, time.July, 16, 0, 49, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.0.1-n-bd48696324", ContainerID:"", Pod:"goldmane-768f4c5c69-lxlf2", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.82.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calib0862642ae8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 16 00:49:44.758655 containerd[1916]: 2025-07-16 00:49:44.750 [INFO][5934] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.82.135/32] ContainerID="2d1f0a909d792319ac2e9975d7f22e5fedd05165db9a3213def39c3732dd7ca0" Namespace="calico-system" Pod="goldmane-768f4c5c69-lxlf2" WorkloadEndpoint="ci--4372.0.1--n--bd48696324-k8s-goldmane--768f4c5c69--lxlf2-eth0" Jul 16 00:49:44.758655 containerd[1916]: 2025-07-16 00:49:44.750 [INFO][5934] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib0862642ae8 ContainerID="2d1f0a909d792319ac2e9975d7f22e5fedd05165db9a3213def39c3732dd7ca0" Namespace="calico-system" Pod="goldmane-768f4c5c69-lxlf2" WorkloadEndpoint="ci--4372.0.1--n--bd48696324-k8s-goldmane--768f4c5c69--lxlf2-eth0" Jul 16 00:49:44.758655 containerd[1916]: 2025-07-16 00:49:44.751 [INFO][5934] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2d1f0a909d792319ac2e9975d7f22e5fedd05165db9a3213def39c3732dd7ca0" Namespace="calico-system" Pod="goldmane-768f4c5c69-lxlf2" WorkloadEndpoint="ci--4372.0.1--n--bd48696324-k8s-goldmane--768f4c5c69--lxlf2-eth0" Jul 16 00:49:44.758655 containerd[1916]: 2025-07-16 00:49:44.751 [INFO][5934] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2d1f0a909d792319ac2e9975d7f22e5fedd05165db9a3213def39c3732dd7ca0" Namespace="calico-system" Pod="goldmane-768f4c5c69-lxlf2" WorkloadEndpoint="ci--4372.0.1--n--bd48696324-k8s-goldmane--768f4c5c69--lxlf2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.0.1--n--bd48696324-k8s-goldmane--768f4c5c69--lxlf2-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"b5d72f59-207b-442f-b9d9-abe3a39737ed", ResourceVersion:"791", Generation:0, CreationTimestamp:time.Date(2025, time.July, 16, 0, 49, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.0.1-n-bd48696324", ContainerID:"2d1f0a909d792319ac2e9975d7f22e5fedd05165db9a3213def39c3732dd7ca0", Pod:"goldmane-768f4c5c69-lxlf2", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.82.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calib0862642ae8", MAC:"0e:94:03:18:ac:bb", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 16 00:49:44.758655 containerd[1916]: 2025-07-16 00:49:44.756 [INFO][5934] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="2d1f0a909d792319ac2e9975d7f22e5fedd05165db9a3213def39c3732dd7ca0" Namespace="calico-system" Pod="goldmane-768f4c5c69-lxlf2" WorkloadEndpoint="ci--4372.0.1--n--bd48696324-k8s-goldmane--768f4c5c69--lxlf2-eth0" Jul 16 00:49:44.765997 containerd[1916]: time="2025-07-16T00:49:44.765971490Z" level=info msg="connecting to shim 2d1f0a909d792319ac2e9975d7f22e5fedd05165db9a3213def39c3732dd7ca0" address="unix:///run/containerd/s/66909a450b0466752c406a9f1170c8838d04b53f9ff4fe9cae3be64a031ba7a7" namespace=k8s.io protocol=ttrpc version=3 Jul 16 00:49:44.790027 systemd[1]: Started cri-containerd-2d1f0a909d792319ac2e9975d7f22e5fedd05165db9a3213def39c3732dd7ca0.scope - libcontainer container 2d1f0a909d792319ac2e9975d7f22e5fedd05165db9a3213def39c3732dd7ca0. Jul 16 00:49:44.817806 containerd[1916]: time="2025-07-16T00:49:44.817785536Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-lxlf2,Uid:b5d72f59-207b-442f-b9d9-abe3a39737ed,Namespace:calico-system,Attempt:0,} returns sandbox id \"2d1f0a909d792319ac2e9975d7f22e5fedd05165db9a3213def39c3732dd7ca0\"" Jul 16 00:49:45.419024 containerd[1916]: time="2025-07-16T00:49:45.418995775Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 16 00:49:45.419205 containerd[1916]: time="2025-07-16T00:49:45.419187559Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=51276688" Jul 16 00:49:45.419550 containerd[1916]: time="2025-07-16T00:49:45.419536213Z" level=info msg="ImageCreate event name:\"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 16 00:49:45.420376 containerd[1916]: time="2025-07-16T00:49:45.420360165Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 16 00:49:45.420807 containerd[1916]: time="2025-07-16T00:49:45.420793570Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" with image id \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\", size \"52769359\" in 2.671310603s" Jul 16 00:49:45.420858 containerd[1916]: time="2025-07-16T00:49:45.420810233Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" returns image reference \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\"" Jul 16 00:49:45.421308 containerd[1916]: time="2025-07-16T00:49:45.421297103Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 16 00:49:45.424425 containerd[1916]: time="2025-07-16T00:49:45.424405694Z" level=info msg="CreateContainer within sandbox \"441fafeef86c67001d7efcedb5efc9d43cf083134ae3c47bc5312c618b27d66d\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jul 16 00:49:45.427331 containerd[1916]: time="2025-07-16T00:49:45.427317160Z" level=info msg="Container 6af3035db27f8718f3a2fde2f9cccb134a0e6f14c30a3e1a3e3fc444aa35139e: CDI devices from CRI Config.CDIDevices: []" Jul 16 00:49:45.429879 containerd[1916]: time="2025-07-16T00:49:45.429839560Z" level=info msg="CreateContainer within sandbox \"441fafeef86c67001d7efcedb5efc9d43cf083134ae3c47bc5312c618b27d66d\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"6af3035db27f8718f3a2fde2f9cccb134a0e6f14c30a3e1a3e3fc444aa35139e\"" Jul 16 00:49:45.430109 containerd[1916]: time="2025-07-16T00:49:45.430098795Z" level=info 
msg="StartContainer for \"6af3035db27f8718f3a2fde2f9cccb134a0e6f14c30a3e1a3e3fc444aa35139e\"" Jul 16 00:49:45.430882 containerd[1916]: time="2025-07-16T00:49:45.430822304Z" level=info msg="connecting to shim 6af3035db27f8718f3a2fde2f9cccb134a0e6f14c30a3e1a3e3fc444aa35139e" address="unix:///run/containerd/s/d1903896c27da159baa21695b2fc1506f85ae953b86b3714028f9712edafd3e4" protocol=ttrpc version=3 Jul 16 00:49:45.447051 systemd[1]: Started cri-containerd-6af3035db27f8718f3a2fde2f9cccb134a0e6f14c30a3e1a3e3fc444aa35139e.scope - libcontainer container 6af3035db27f8718f3a2fde2f9cccb134a0e6f14c30a3e1a3e3fc444aa35139e. Jul 16 00:49:45.476235 containerd[1916]: time="2025-07-16T00:49:45.476179765Z" level=info msg="StartContainer for \"6af3035db27f8718f3a2fde2f9cccb134a0e6f14c30a3e1a3e3fc444aa35139e\" returns successfully" Jul 16 00:49:45.587597 containerd[1916]: time="2025-07-16T00:49:45.587570241Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-c8xm9,Uid:05542b76-a42a-41b7-a7f7-a1f97b8c0b25,Namespace:calico-system,Attempt:0,}" Jul 16 00:49:45.634648 systemd-networkd[1829]: cali90414149821: Link UP Jul 16 00:49:45.634843 systemd-networkd[1829]: cali90414149821: Gained carrier Jul 16 00:49:45.639855 containerd[1916]: 2025-07-16 00:49:45.604 [INFO][6231] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4372.0.1--n--bd48696324-k8s-csi--node--driver--c8xm9-eth0 csi-node-driver- calico-system 05542b76-a42a-41b7-a7f7-a1f97b8c0b25 668 0 2025-07-16 00:49:20 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:8967bcb6f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4372.0.1-n-bd48696324 csi-node-driver-c8xm9 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] 
cali90414149821 [] [] }} ContainerID="c892c186883043594e92579eed9af1e6a04f0e60bc49ce3e5ac496744f966bba" Namespace="calico-system" Pod="csi-node-driver-c8xm9" WorkloadEndpoint="ci--4372.0.1--n--bd48696324-k8s-csi--node--driver--c8xm9-" Jul 16 00:49:45.639855 containerd[1916]: 2025-07-16 00:49:45.604 [INFO][6231] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c892c186883043594e92579eed9af1e6a04f0e60bc49ce3e5ac496744f966bba" Namespace="calico-system" Pod="csi-node-driver-c8xm9" WorkloadEndpoint="ci--4372.0.1--n--bd48696324-k8s-csi--node--driver--c8xm9-eth0" Jul 16 00:49:45.639855 containerd[1916]: 2025-07-16 00:49:45.617 [INFO][6253] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c892c186883043594e92579eed9af1e6a04f0e60bc49ce3e5ac496744f966bba" HandleID="k8s-pod-network.c892c186883043594e92579eed9af1e6a04f0e60bc49ce3e5ac496744f966bba" Workload="ci--4372.0.1--n--bd48696324-k8s-csi--node--driver--c8xm9-eth0" Jul 16 00:49:45.639855 containerd[1916]: 2025-07-16 00:49:45.617 [INFO][6253] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c892c186883043594e92579eed9af1e6a04f0e60bc49ce3e5ac496744f966bba" HandleID="k8s-pod-network.c892c186883043594e92579eed9af1e6a04f0e60bc49ce3e5ac496744f966bba" Workload="ci--4372.0.1--n--bd48696324-k8s-csi--node--driver--c8xm9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000139620), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4372.0.1-n-bd48696324", "pod":"csi-node-driver-c8xm9", "timestamp":"2025-07-16 00:49:45.61713838 +0000 UTC"}, Hostname:"ci-4372.0.1-n-bd48696324", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 16 00:49:45.639855 containerd[1916]: 2025-07-16 00:49:45.617 [INFO][6253] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jul 16 00:49:45.639855 containerd[1916]: 2025-07-16 00:49:45.617 [INFO][6253] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 16 00:49:45.639855 containerd[1916]: 2025-07-16 00:49:45.617 [INFO][6253] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4372.0.1-n-bd48696324' Jul 16 00:49:45.639855 containerd[1916]: 2025-07-16 00:49:45.621 [INFO][6253] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c892c186883043594e92579eed9af1e6a04f0e60bc49ce3e5ac496744f966bba" host="ci-4372.0.1-n-bd48696324" Jul 16 00:49:45.639855 containerd[1916]: 2025-07-16 00:49:45.623 [INFO][6253] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4372.0.1-n-bd48696324" Jul 16 00:49:45.639855 containerd[1916]: 2025-07-16 00:49:45.625 [INFO][6253] ipam/ipam.go 511: Trying affinity for 192.168.82.128/26 host="ci-4372.0.1-n-bd48696324" Jul 16 00:49:45.639855 containerd[1916]: 2025-07-16 00:49:45.626 [INFO][6253] ipam/ipam.go 158: Attempting to load block cidr=192.168.82.128/26 host="ci-4372.0.1-n-bd48696324" Jul 16 00:49:45.639855 containerd[1916]: 2025-07-16 00:49:45.627 [INFO][6253] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.82.128/26 host="ci-4372.0.1-n-bd48696324" Jul 16 00:49:45.639855 containerd[1916]: 2025-07-16 00:49:45.627 [INFO][6253] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.82.128/26 handle="k8s-pod-network.c892c186883043594e92579eed9af1e6a04f0e60bc49ce3e5ac496744f966bba" host="ci-4372.0.1-n-bd48696324" Jul 16 00:49:45.639855 containerd[1916]: 2025-07-16 00:49:45.628 [INFO][6253] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.c892c186883043594e92579eed9af1e6a04f0e60bc49ce3e5ac496744f966bba Jul 16 00:49:45.639855 containerd[1916]: 2025-07-16 00:49:45.629 [INFO][6253] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.82.128/26 handle="k8s-pod-network.c892c186883043594e92579eed9af1e6a04f0e60bc49ce3e5ac496744f966bba" 
host="ci-4372.0.1-n-bd48696324" Jul 16 00:49:45.639855 containerd[1916]: 2025-07-16 00:49:45.632 [INFO][6253] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.82.136/26] block=192.168.82.128/26 handle="k8s-pod-network.c892c186883043594e92579eed9af1e6a04f0e60bc49ce3e5ac496744f966bba" host="ci-4372.0.1-n-bd48696324" Jul 16 00:49:45.639855 containerd[1916]: 2025-07-16 00:49:45.632 [INFO][6253] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.82.136/26] handle="k8s-pod-network.c892c186883043594e92579eed9af1e6a04f0e60bc49ce3e5ac496744f966bba" host="ci-4372.0.1-n-bd48696324" Jul 16 00:49:45.639855 containerd[1916]: 2025-07-16 00:49:45.632 [INFO][6253] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 16 00:49:45.639855 containerd[1916]: 2025-07-16 00:49:45.632 [INFO][6253] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.82.136/26] IPv6=[] ContainerID="c892c186883043594e92579eed9af1e6a04f0e60bc49ce3e5ac496744f966bba" HandleID="k8s-pod-network.c892c186883043594e92579eed9af1e6a04f0e60bc49ce3e5ac496744f966bba" Workload="ci--4372.0.1--n--bd48696324-k8s-csi--node--driver--c8xm9-eth0" Jul 16 00:49:45.640271 containerd[1916]: 2025-07-16 00:49:45.633 [INFO][6231] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c892c186883043594e92579eed9af1e6a04f0e60bc49ce3e5ac496744f966bba" Namespace="calico-system" Pod="csi-node-driver-c8xm9" WorkloadEndpoint="ci--4372.0.1--n--bd48696324-k8s-csi--node--driver--c8xm9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.0.1--n--bd48696324-k8s-csi--node--driver--c8xm9-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"05542b76-a42a-41b7-a7f7-a1f97b8c0b25", ResourceVersion:"668", Generation:0, CreationTimestamp:time.Date(2025, time.July, 16, 0, 49, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.0.1-n-bd48696324", ContainerID:"", Pod:"csi-node-driver-c8xm9", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.82.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali90414149821", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 16 00:49:45.640271 containerd[1916]: 2025-07-16 00:49:45.633 [INFO][6231] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.82.136/32] ContainerID="c892c186883043594e92579eed9af1e6a04f0e60bc49ce3e5ac496744f966bba" Namespace="calico-system" Pod="csi-node-driver-c8xm9" WorkloadEndpoint="ci--4372.0.1--n--bd48696324-k8s-csi--node--driver--c8xm9-eth0" Jul 16 00:49:45.640271 containerd[1916]: 2025-07-16 00:49:45.633 [INFO][6231] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali90414149821 ContainerID="c892c186883043594e92579eed9af1e6a04f0e60bc49ce3e5ac496744f966bba" Namespace="calico-system" Pod="csi-node-driver-c8xm9" WorkloadEndpoint="ci--4372.0.1--n--bd48696324-k8s-csi--node--driver--c8xm9-eth0" Jul 16 00:49:45.640271 containerd[1916]: 2025-07-16 00:49:45.634 [INFO][6231] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c892c186883043594e92579eed9af1e6a04f0e60bc49ce3e5ac496744f966bba" Namespace="calico-system" 
Pod="csi-node-driver-c8xm9" WorkloadEndpoint="ci--4372.0.1--n--bd48696324-k8s-csi--node--driver--c8xm9-eth0" Jul 16 00:49:45.640271 containerd[1916]: 2025-07-16 00:49:45.635 [INFO][6231] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c892c186883043594e92579eed9af1e6a04f0e60bc49ce3e5ac496744f966bba" Namespace="calico-system" Pod="csi-node-driver-c8xm9" WorkloadEndpoint="ci--4372.0.1--n--bd48696324-k8s-csi--node--driver--c8xm9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.0.1--n--bd48696324-k8s-csi--node--driver--c8xm9-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"05542b76-a42a-41b7-a7f7-a1f97b8c0b25", ResourceVersion:"668", Generation:0, CreationTimestamp:time.Date(2025, time.July, 16, 0, 49, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.0.1-n-bd48696324", ContainerID:"c892c186883043594e92579eed9af1e6a04f0e60bc49ce3e5ac496744f966bba", Pod:"csi-node-driver-c8xm9", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.82.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali90414149821", MAC:"86:8d:6b:63:f2:b0", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 16 00:49:45.640271 containerd[1916]: 2025-07-16 00:49:45.638 [INFO][6231] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c892c186883043594e92579eed9af1e6a04f0e60bc49ce3e5ac496744f966bba" Namespace="calico-system" Pod="csi-node-driver-c8xm9" WorkloadEndpoint="ci--4372.0.1--n--bd48696324-k8s-csi--node--driver--c8xm9-eth0" Jul 16 00:49:45.647284 containerd[1916]: time="2025-07-16T00:49:45.647234051Z" level=info msg="connecting to shim c892c186883043594e92579eed9af1e6a04f0e60bc49ce3e5ac496744f966bba" address="unix:///run/containerd/s/5b39e583e85c3c76caa55328215d0929f3c11639fbac3f127cc600daa8f46cfe" namespace=k8s.io protocol=ttrpc version=3 Jul 16 00:49:45.669912 systemd[1]: Started cri-containerd-c892c186883043594e92579eed9af1e6a04f0e60bc49ce3e5ac496744f966bba.scope - libcontainer container c892c186883043594e92579eed9af1e6a04f0e60bc49ce3e5ac496744f966bba. 
Jul 16 00:49:45.682431 containerd[1916]: time="2025-07-16T00:49:45.682410418Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-c8xm9,Uid:05542b76-a42a-41b7-a7f7-a1f97b8c0b25,Namespace:calico-system,Attempt:0,} returns sandbox id \"c892c186883043594e92579eed9af1e6a04f0e60bc49ce3e5ac496744f966bba\"" Jul 16 00:49:45.767449 kubelet[3434]: I0716 00:49:45.767366 3434 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-995v6" podStartSLOduration=35.767338793 podStartE2EDuration="35.767338793s" podCreationTimestamp="2025-07-16 00:49:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-16 00:49:45.766881935 +0000 UTC m=+41.243240904" watchObservedRunningTime="2025-07-16 00:49:45.767338793 +0000 UTC m=+41.243697752" Jul 16 00:49:45.779362 kubelet[3434]: I0716 00:49:45.779045 3434 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-6fb77455db-cgzrs" podStartSLOduration=23.107105289 podStartE2EDuration="25.779022566s" podCreationTimestamp="2025-07-16 00:49:20 +0000 UTC" firstStartedPulling="2025-07-16 00:49:42.74933012 +0000 UTC m=+38.225689063" lastFinishedPulling="2025-07-16 00:49:45.421247397 +0000 UTC m=+40.897606340" observedRunningTime="2025-07-16 00:49:45.778365026 +0000 UTC m=+41.254723984" watchObservedRunningTime="2025-07-16 00:49:45.779022566 +0000 UTC m=+41.255381518" Jul 16 00:49:45.804905 containerd[1916]: time="2025-07-16T00:49:45.804823338Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6af3035db27f8718f3a2fde2f9cccb134a0e6f14c30a3e1a3e3fc444aa35139e\" id:\"349ba3e8e92378c5eae9141bef7d1108289e111ac0c54e8ecd7ecf877c1d5a1c\" pid:6333 exited_at:{seconds:1752626985 nanos:804641613}" Jul 16 00:49:45.918977 systemd-networkd[1829]: calib0862642ae8: Gained IPv6LL Jul 16 00:49:46.686002 systemd-networkd[1829]: 
calicebb77cf32a: Gained IPv6LL Jul 16 00:49:46.941944 systemd-networkd[1829]: cali90414149821: Gained IPv6LL Jul 16 00:49:48.064808 containerd[1916]: time="2025-07-16T00:49:48.064758474Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 16 00:49:48.065025 containerd[1916]: time="2025-07-16T00:49:48.065000846Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=47317977" Jul 16 00:49:48.065369 containerd[1916]: time="2025-07-16T00:49:48.065324625Z" level=info msg="ImageCreate event name:\"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 16 00:49:48.066191 containerd[1916]: time="2025-07-16T00:49:48.066150019Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 16 00:49:48.066591 containerd[1916]: time="2025-07-16T00:49:48.066549220Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 2.6452364s" Jul 16 00:49:48.066591 containerd[1916]: time="2025-07-16T00:49:48.066565356Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Jul 16 00:49:48.067039 containerd[1916]: time="2025-07-16T00:49:48.067028343Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 16 00:49:48.067480 containerd[1916]: 
time="2025-07-16T00:49:48.067468876Z" level=info msg="CreateContainer within sandbox \"3691a15ca9bd50000fa253c6dfd0de4124a8f56f440c76aac9fdd7724d669fd5\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 16 00:49:48.069989 containerd[1916]: time="2025-07-16T00:49:48.069946409Z" level=info msg="Container ee42f2eaf1f9efa637bcead4febc47d1633938d1c89117ca17d024d501132d81: CDI devices from CRI Config.CDIDevices: []" Jul 16 00:49:48.090112 containerd[1916]: time="2025-07-16T00:49:48.090061506Z" level=info msg="CreateContainer within sandbox \"3691a15ca9bd50000fa253c6dfd0de4124a8f56f440c76aac9fdd7724d669fd5\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"ee42f2eaf1f9efa637bcead4febc47d1633938d1c89117ca17d024d501132d81\"" Jul 16 00:49:48.090405 containerd[1916]: time="2025-07-16T00:49:48.090361602Z" level=info msg="StartContainer for \"ee42f2eaf1f9efa637bcead4febc47d1633938d1c89117ca17d024d501132d81\"" Jul 16 00:49:48.090992 containerd[1916]: time="2025-07-16T00:49:48.090948751Z" level=info msg="connecting to shim ee42f2eaf1f9efa637bcead4febc47d1633938d1c89117ca17d024d501132d81" address="unix:///run/containerd/s/b67148f8532d4bd221ae066d28471602d8f1f4735c84a3593cab149fcdab3144" protocol=ttrpc version=3 Jul 16 00:49:48.112014 systemd[1]: Started cri-containerd-ee42f2eaf1f9efa637bcead4febc47d1633938d1c89117ca17d024d501132d81.scope - libcontainer container ee42f2eaf1f9efa637bcead4febc47d1633938d1c89117ca17d024d501132d81. 
Jul 16 00:49:48.141571 containerd[1916]: time="2025-07-16T00:49:48.141524969Z" level=info msg="StartContainer for \"ee42f2eaf1f9efa637bcead4febc47d1633938d1c89117ca17d024d501132d81\" returns successfully" Jul 16 00:49:48.484682 containerd[1916]: time="2025-07-16T00:49:48.484620684Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 16 00:49:48.484782 containerd[1916]: time="2025-07-16T00:49:48.484765123Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=77" Jul 16 00:49:48.485968 containerd[1916]: time="2025-07-16T00:49:48.485953474Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 418.910305ms" Jul 16 00:49:48.486000 containerd[1916]: time="2025-07-16T00:49:48.485972727Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Jul 16 00:49:48.487406 containerd[1916]: time="2025-07-16T00:49:48.487358770Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\"" Jul 16 00:49:48.487833 containerd[1916]: time="2025-07-16T00:49:48.487815881Z" level=info msg="CreateContainer within sandbox \"56784214c8743756634111848b0a1ddedc365e8de8feaf5b0aa16fb296f6f9e5\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 16 00:49:48.490755 containerd[1916]: time="2025-07-16T00:49:48.490734625Z" level=info msg="Container 82ed2d3604db70c003ddf42853470f82a52c732beddf2ede185114b1a828a35d: CDI devices from CRI Config.CDIDevices: []" Jul 16 00:49:48.495454 containerd[1916]: 
time="2025-07-16T00:49:48.495432729Z" level=info msg="CreateContainer within sandbox \"56784214c8743756634111848b0a1ddedc365e8de8feaf5b0aa16fb296f6f9e5\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"82ed2d3604db70c003ddf42853470f82a52c732beddf2ede185114b1a828a35d\"" Jul 16 00:49:48.495749 containerd[1916]: time="2025-07-16T00:49:48.495737004Z" level=info msg="StartContainer for \"82ed2d3604db70c003ddf42853470f82a52c732beddf2ede185114b1a828a35d\"" Jul 16 00:49:48.496296 containerd[1916]: time="2025-07-16T00:49:48.496283452Z" level=info msg="connecting to shim 82ed2d3604db70c003ddf42853470f82a52c732beddf2ede185114b1a828a35d" address="unix:///run/containerd/s/7c9e1b72b4effde36e80fd89aad8019ee36a2dfafa125d6f812f84defaf09639" protocol=ttrpc version=3 Jul 16 00:49:48.514037 systemd[1]: Started cri-containerd-82ed2d3604db70c003ddf42853470f82a52c732beddf2ede185114b1a828a35d.scope - libcontainer container 82ed2d3604db70c003ddf42853470f82a52c732beddf2ede185114b1a828a35d. 
Jul 16 00:49:48.545430 containerd[1916]: time="2025-07-16T00:49:48.545402378Z" level=info msg="StartContainer for \"82ed2d3604db70c003ddf42853470f82a52c732beddf2ede185114b1a828a35d\" returns successfully" Jul 16 00:49:48.757510 kubelet[3434]: I0716 00:49:48.757408 3434 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6c4d947c8c-gc28p" podStartSLOduration=26.510455125 podStartE2EDuration="31.75739568s" podCreationTimestamp="2025-07-16 00:49:17 +0000 UTC" firstStartedPulling="2025-07-16 00:49:42.820037454 +0000 UTC m=+38.296396396" lastFinishedPulling="2025-07-16 00:49:48.066978008 +0000 UTC m=+43.543336951" observedRunningTime="2025-07-16 00:49:48.757208623 +0000 UTC m=+44.233567568" watchObservedRunningTime="2025-07-16 00:49:48.75739568 +0000 UTC m=+44.233754621" Jul 16 00:49:48.762480 kubelet[3434]: I0716 00:49:48.762348 3434 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6c4d947c8c-mkbs4" podStartSLOduration=26.996524155 podStartE2EDuration="31.762330561s" podCreationTimestamp="2025-07-16 00:49:17 +0000 UTC" firstStartedPulling="2025-07-16 00:49:43.721394235 +0000 UTC m=+39.197753177" lastFinishedPulling="2025-07-16 00:49:48.48720064 +0000 UTC m=+43.963559583" observedRunningTime="2025-07-16 00:49:48.762067703 +0000 UTC m=+44.238426651" watchObservedRunningTime="2025-07-16 00:49:48.762330561 +0000 UTC m=+44.238689501" Jul 16 00:49:49.756024 kubelet[3434]: I0716 00:49:49.755955 3434 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 16 00:49:49.756024 kubelet[3434]: I0716 00:49:49.755992 3434 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 16 00:49:51.454496 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1797885051.mount: Deactivated successfully. 
Jul 16 00:49:51.708184 containerd[1916]: time="2025-07-16T00:49:51.708118544Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 16 00:49:51.708461 containerd[1916]: time="2025-07-16T00:49:51.708295091Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.2: active requests=0, bytes read=66352308" Jul 16 00:49:51.708692 containerd[1916]: time="2025-07-16T00:49:51.708679535Z" level=info msg="ImageCreate event name:\"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 16 00:49:51.709548 containerd[1916]: time="2025-07-16T00:49:51.709533213Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 16 00:49:51.709971 containerd[1916]: time="2025-07-16T00:49:51.709956397Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" with image id \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\", size \"66352154\" in 3.222572263s" Jul 16 00:49:51.710019 containerd[1916]: time="2025-07-16T00:49:51.709973129Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" returns image reference \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\"" Jul 16 00:49:51.710436 containerd[1916]: time="2025-07-16T00:49:51.710423639Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\"" Jul 16 00:49:51.711391 containerd[1916]: time="2025-07-16T00:49:51.711368191Z" level=info msg="CreateContainer within sandbox \"2d1f0a909d792319ac2e9975d7f22e5fedd05165db9a3213def39c3732dd7ca0\" 
for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Jul 16 00:49:51.714650 containerd[1916]: time="2025-07-16T00:49:51.714609372Z" level=info msg="Container be800ec18f7b81899758407874dcdc54a3764b6d537e5b7acde51b3b213d59f0: CDI devices from CRI Config.CDIDevices: []" Jul 16 00:49:51.717305 containerd[1916]: time="2025-07-16T00:49:51.717262661Z" level=info msg="CreateContainer within sandbox \"2d1f0a909d792319ac2e9975d7f22e5fedd05165db9a3213def39c3732dd7ca0\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"be800ec18f7b81899758407874dcdc54a3764b6d537e5b7acde51b3b213d59f0\"" Jul 16 00:49:51.717531 containerd[1916]: time="2025-07-16T00:49:51.717496961Z" level=info msg="StartContainer for \"be800ec18f7b81899758407874dcdc54a3764b6d537e5b7acde51b3b213d59f0\"" Jul 16 00:49:51.718027 containerd[1916]: time="2025-07-16T00:49:51.718015905Z" level=info msg="connecting to shim be800ec18f7b81899758407874dcdc54a3764b6d537e5b7acde51b3b213d59f0" address="unix:///run/containerd/s/66909a450b0466752c406a9f1170c8838d04b53f9ff4fe9cae3be64a031ba7a7" protocol=ttrpc version=3 Jul 16 00:49:51.734991 systemd[1]: Started cri-containerd-be800ec18f7b81899758407874dcdc54a3764b6d537e5b7acde51b3b213d59f0.scope - libcontainer container be800ec18f7b81899758407874dcdc54a3764b6d537e5b7acde51b3b213d59f0. 
Jul 16 00:49:51.764279 containerd[1916]: time="2025-07-16T00:49:51.764258227Z" level=info msg="StartContainer for \"be800ec18f7b81899758407874dcdc54a3764b6d537e5b7acde51b3b213d59f0\" returns successfully" Jul 16 00:49:52.771233 kubelet[3434]: I0716 00:49:52.771199 3434 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-768f4c5c69-lxlf2" podStartSLOduration=26.879165424 podStartE2EDuration="33.771185525s" podCreationTimestamp="2025-07-16 00:49:19 +0000 UTC" firstStartedPulling="2025-07-16 00:49:44.818347802 +0000 UTC m=+40.294706745" lastFinishedPulling="2025-07-16 00:49:51.710367904 +0000 UTC m=+47.186726846" observedRunningTime="2025-07-16 00:49:52.77083571 +0000 UTC m=+48.247194653" watchObservedRunningTime="2025-07-16 00:49:52.771185525 +0000 UTC m=+48.247544466" Jul 16 00:49:52.825413 containerd[1916]: time="2025-07-16T00:49:52.825390286Z" level=info msg="TaskExit event in podsandbox handler container_id:\"be800ec18f7b81899758407874dcdc54a3764b6d537e5b7acde51b3b213d59f0\" id:\"348e91da8137726e8c1f4609ed65e7fc073e1e272fa077db88d2171b7a6ebf49\" pid:6530 exit_status:1 exited_at:{seconds:1752626992 nanos:825184489}" Jul 16 00:49:53.080229 containerd[1916]: time="2025-07-16T00:49:53.080199190Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 16 00:49:53.080333 containerd[1916]: time="2025-07-16T00:49:53.080320728Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.2: active requests=0, bytes read=8759190" Jul 16 00:49:53.080789 containerd[1916]: time="2025-07-16T00:49:53.080778299Z" level=info msg="ImageCreate event name:\"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 16 00:49:53.081668 containerd[1916]: time="2025-07-16T00:49:53.081656833Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 16 00:49:53.082059 containerd[1916]: time="2025-07-16T00:49:53.082043425Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.2\" with image id \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\", size \"10251893\" in 1.371603193s" Jul 16 00:49:53.082059 containerd[1916]: time="2025-07-16T00:49:53.082059414Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\"" Jul 16 00:49:53.083040 containerd[1916]: time="2025-07-16T00:49:53.083025286Z" level=info msg="CreateContainer within sandbox \"c892c186883043594e92579eed9af1e6a04f0e60bc49ce3e5ac496744f966bba\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jul 16 00:49:53.086526 containerd[1916]: time="2025-07-16T00:49:53.086483814Z" level=info msg="Container e58e420b950049033f8d48f5d5ef3a151c581ea80dd2dfe7af509d827c38db06: CDI devices from CRI Config.CDIDevices: []" Jul 16 00:49:53.090249 containerd[1916]: time="2025-07-16T00:49:53.090229580Z" level=info msg="CreateContainer within sandbox \"c892c186883043594e92579eed9af1e6a04f0e60bc49ce3e5ac496744f966bba\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"e58e420b950049033f8d48f5d5ef3a151c581ea80dd2dfe7af509d827c38db06\"" Jul 16 00:49:53.090525 containerd[1916]: time="2025-07-16T00:49:53.090511371Z" level=info msg="StartContainer for \"e58e420b950049033f8d48f5d5ef3a151c581ea80dd2dfe7af509d827c38db06\"" Jul 16 00:49:53.091279 containerd[1916]: time="2025-07-16T00:49:53.091266188Z" level=info msg="connecting to shim 
e58e420b950049033f8d48f5d5ef3a151c581ea80dd2dfe7af509d827c38db06" address="unix:///run/containerd/s/5b39e583e85c3c76caa55328215d0929f3c11639fbac3f127cc600daa8f46cfe" protocol=ttrpc version=3 Jul 16 00:49:53.115116 systemd[1]: Started cri-containerd-e58e420b950049033f8d48f5d5ef3a151c581ea80dd2dfe7af509d827c38db06.scope - libcontainer container e58e420b950049033f8d48f5d5ef3a151c581ea80dd2dfe7af509d827c38db06. Jul 16 00:49:53.135405 containerd[1916]: time="2025-07-16T00:49:53.135346445Z" level=info msg="StartContainer for \"e58e420b950049033f8d48f5d5ef3a151c581ea80dd2dfe7af509d827c38db06\" returns successfully" Jul 16 00:49:53.135952 containerd[1916]: time="2025-07-16T00:49:53.135940064Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\"" Jul 16 00:49:53.876373 containerd[1916]: time="2025-07-16T00:49:53.876349977Z" level=info msg="TaskExit event in podsandbox handler container_id:\"be800ec18f7b81899758407874dcdc54a3764b6d537e5b7acde51b3b213d59f0\" id:\"5a12e2edd7418447bc06c57c0705b36c55ca36fd82e958854687f6e37b222cdc\" pid:6597 exit_status:1 exited_at:{seconds:1752626993 nanos:876180210}" Jul 16 00:49:54.677903 containerd[1916]: time="2025-07-16T00:49:54.677873077Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 16 00:49:54.678090 containerd[1916]: time="2025-07-16T00:49:54.678079115Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2: active requests=0, bytes read=14703784" Jul 16 00:49:54.678411 containerd[1916]: time="2025-07-16T00:49:54.678401190Z" level=info msg="ImageCreate event name:\"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 16 00:49:54.679248 containerd[1916]: time="2025-07-16T00:49:54.679236695Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 16 00:49:54.679576 containerd[1916]: time="2025-07-16T00:49:54.679563960Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" with image id \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\", size \"16196439\" in 1.543605916s" Jul 16 00:49:54.679600 containerd[1916]: time="2025-07-16T00:49:54.679579421Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\"" Jul 16 00:49:54.680538 containerd[1916]: time="2025-07-16T00:49:54.680527391Z" level=info msg="CreateContainer within sandbox \"c892c186883043594e92579eed9af1e6a04f0e60bc49ce3e5ac496744f966bba\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jul 16 00:49:54.683834 containerd[1916]: time="2025-07-16T00:49:54.683813548Z" level=info msg="Container 78bcb39aa5be2a45d8c44060c2626f8a6f5f30a9bcfd3eb1cb83c0fb1af38b14: CDI devices from CRI Config.CDIDevices: []" Jul 16 00:49:54.688198 containerd[1916]: time="2025-07-16T00:49:54.688178168Z" level=info msg="CreateContainer within sandbox \"c892c186883043594e92579eed9af1e6a04f0e60bc49ce3e5ac496744f966bba\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"78bcb39aa5be2a45d8c44060c2626f8a6f5f30a9bcfd3eb1cb83c0fb1af38b14\"" Jul 16 00:49:54.688455 containerd[1916]: time="2025-07-16T00:49:54.688440859Z" level=info msg="StartContainer for \"78bcb39aa5be2a45d8c44060c2626f8a6f5f30a9bcfd3eb1cb83c0fb1af38b14\"" Jul 16 00:49:54.689206 
containerd[1916]: time="2025-07-16T00:49:54.689193104Z" level=info msg="connecting to shim 78bcb39aa5be2a45d8c44060c2626f8a6f5f30a9bcfd3eb1cb83c0fb1af38b14" address="unix:///run/containerd/s/5b39e583e85c3c76caa55328215d0929f3c11639fbac3f127cc600daa8f46cfe" protocol=ttrpc version=3 Jul 16 00:49:54.712200 systemd[1]: Started cri-containerd-78bcb39aa5be2a45d8c44060c2626f8a6f5f30a9bcfd3eb1cb83c0fb1af38b14.scope - libcontainer container 78bcb39aa5be2a45d8c44060c2626f8a6f5f30a9bcfd3eb1cb83c0fb1af38b14. Jul 16 00:49:54.766326 containerd[1916]: time="2025-07-16T00:49:54.766299400Z" level=info msg="StartContainer for \"78bcb39aa5be2a45d8c44060c2626f8a6f5f30a9bcfd3eb1cb83c0fb1af38b14\" returns successfully" Jul 16 00:49:54.780196 kubelet[3434]: I0716 00:49:54.780163 3434 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-c8xm9" podStartSLOduration=25.783169104 podStartE2EDuration="34.780152488s" podCreationTimestamp="2025-07-16 00:49:20 +0000 UTC" firstStartedPulling="2025-07-16 00:49:45.682975913 +0000 UTC m=+41.159334861" lastFinishedPulling="2025-07-16 00:49:54.679959301 +0000 UTC m=+50.156318245" observedRunningTime="2025-07-16 00:49:54.780063272 +0000 UTC m=+50.256422224" watchObservedRunningTime="2025-07-16 00:49:54.780152488 +0000 UTC m=+50.256511430" Jul 16 00:49:55.643802 kubelet[3434]: I0716 00:49:55.643732 3434 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jul 16 00:49:55.644061 kubelet[3434]: I0716 00:49:55.643857 3434 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jul 16 00:50:00.903884 containerd[1916]: time="2025-07-16T00:50:00.903818046Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6af3035db27f8718f3a2fde2f9cccb134a0e6f14c30a3e1a3e3fc444aa35139e\" 
id:\"9e2b1b21e9c3188421bcfde39ff6c100ce9993b5cfee05148604917af28c391b\" pid:6682 exited_at:{seconds:1752627000 nanos:903624201}" Jul 16 00:50:07.747599 containerd[1916]: time="2025-07-16T00:50:07.747576831Z" level=info msg="TaskExit event in podsandbox handler container_id:\"18dab54ef0693535b1092e84ba72a8e49c139c132335e0dd0f6d5c4f2227eef4\" id:\"77db9f49675d66c302c0f5f883cec42043743fbf90515d6ac2828fe12c37df77\" pid:6707 exited_at:{seconds:1752627007 nanos:747387809}" Jul 16 00:50:08.008316 kubelet[3434]: I0716 00:50:08.008092 3434 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 16 00:50:09.283659 containerd[1916]: time="2025-07-16T00:50:09.283631326Z" level=info msg="TaskExit event in podsandbox handler container_id:\"be800ec18f7b81899758407874dcdc54a3764b6d537e5b7acde51b3b213d59f0\" id:\"0fc7cdeedf3d71d862ab95ba8bc811c7ac16d08c65c7102e685af5271959df62\" pid:6746 exited_at:{seconds:1752627009 nanos:283462548}" Jul 16 00:50:15.782265 containerd[1916]: time="2025-07-16T00:50:15.782239541Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6af3035db27f8718f3a2fde2f9cccb134a0e6f14c30a3e1a3e3fc444aa35139e\" id:\"d68f260e87ef712cf6faae8c8108bd052eacb1e4e93dd84b9f9f60955b353500\" pid:6782 exited_at:{seconds:1752627015 nanos:782079343}" Jul 16 00:50:15.873735 systemd[1]: Started sshd@63-147.75.90.137:22-144.126.219.123:44106.service - OpenSSH per-connection server daemon (144.126.219.123:44106). Jul 16 00:50:15.990626 sshd[6793]: Received disconnect from 144.126.219.123 port 44106:11: Bye Bye [preauth] Jul 16 00:50:15.990626 sshd[6793]: Disconnected from authenticating user root 144.126.219.123 port 44106 [preauth] Jul 16 00:50:15.991916 systemd[1]: sshd@63-147.75.90.137:22-144.126.219.123:44106.service: Deactivated successfully. 
Jul 16 00:50:16.617716 kubelet[3434]: I0716 00:50:16.617637 3434 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 16 00:50:23.842802 containerd[1916]: time="2025-07-16T00:50:23.842780964Z" level=info msg="TaskExit event in podsandbox handler container_id:\"be800ec18f7b81899758407874dcdc54a3764b6d537e5b7acde51b3b213d59f0\" id:\"10ff6e8ecffbd07ee7c0f74672c1d0685f80c490da54949c7845d33031b3a49d\" pid:6816 exited_at:{seconds:1752627023 nanos:842632770}" Jul 16 00:50:37.760826 containerd[1916]: time="2025-07-16T00:50:37.760796384Z" level=info msg="TaskExit event in podsandbox handler container_id:\"18dab54ef0693535b1092e84ba72a8e49c139c132335e0dd0f6d5c4f2227eef4\" id:\"c51a1fb037bbcf58221700f3ea242aa011b616bed73153911fcd7cbc219fc4de\" pid:6855 exited_at:{seconds:1752627037 nanos:760578603}" Jul 16 00:50:45.795119 containerd[1916]: time="2025-07-16T00:50:45.795089669Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6af3035db27f8718f3a2fde2f9cccb134a0e6f14c30a3e1a3e3fc444aa35139e\" id:\"a7a254474a27403ffe61535e922e18cfed804b59dd10ac352fb0e2bed8e49f59\" pid:6894 exited_at:{seconds:1752627045 nanos:794963022}" Jul 16 00:50:53.891325 containerd[1916]: time="2025-07-16T00:50:53.891297150Z" level=info msg="TaskExit event in podsandbox handler container_id:\"be800ec18f7b81899758407874dcdc54a3764b6d537e5b7acde51b3b213d59f0\" id:\"bcc998754d2b4e15e91213e9d4753226fe54c47ecb157faa28b9ca23fcaf6ce1\" pid:6914 exited_at:{seconds:1752627053 nanos:891085565}" Jul 16 00:50:55.869753 systemd[1]: Started sshd@64-147.75.90.137:22-203.55.224.216:56068.service - OpenSSH per-connection server daemon (203.55.224.216:56068). 
Jul 16 00:50:57.971564 sshd[6935]: Received disconnect from 203.55.224.216 port 56068:11: Bye Bye [preauth] Jul 16 00:50:57.971564 sshd[6935]: Disconnected from authenticating user root 203.55.224.216 port 56068 [preauth] Jul 16 00:50:57.976402 systemd[1]: sshd@64-147.75.90.137:22-203.55.224.216:56068.service: Deactivated successfully. Jul 16 00:51:00.914691 containerd[1916]: time="2025-07-16T00:51:00.914660211Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6af3035db27f8718f3a2fde2f9cccb134a0e6f14c30a3e1a3e3fc444aa35139e\" id:\"d752d46f644040f1160f6b6a43263ff7ea35dda8c84f2be8fcdef748d50eee3b\" pid:6957 exited_at:{seconds:1752627060 nanos:914532076}" Jul 16 00:51:07.759363 containerd[1916]: time="2025-07-16T00:51:07.759281581Z" level=info msg="TaskExit event in podsandbox handler container_id:\"18dab54ef0693535b1092e84ba72a8e49c139c132335e0dd0f6d5c4f2227eef4\" id:\"7267b1c7148ea950a11b36cd6f8b7c45239f72a1c2dcd5bab23fce897a30ce4b\" pid:6985 exited_at:{seconds:1752627067 nanos:759024937}" Jul 16 00:51:09.272572 containerd[1916]: time="2025-07-16T00:51:09.272549533Z" level=info msg="TaskExit event in podsandbox handler container_id:\"be800ec18f7b81899758407874dcdc54a3764b6d537e5b7acde51b3b213d59f0\" id:\"3628915908bde3387d65055bf637b655a465228dc06b880292c6e4bfaeedfcc6\" pid:7020 exited_at:{seconds:1752627069 nanos:272263170}" Jul 16 00:51:15.487547 systemd[1]: sshd@62-147.75.90.137:22-203.55.224.216:46044.service: Deactivated successfully. Jul 16 00:51:15.851014 containerd[1916]: time="2025-07-16T00:51:15.850968880Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6af3035db27f8718f3a2fde2f9cccb134a0e6f14c30a3e1a3e3fc444aa35139e\" id:\"a7f740e6f7d8ea7699142cd983e9455a30200921c8ef322a5c1e3cb860e49ee0\" pid:7079 exited_at:{seconds:1752627075 nanos:850542804}" Jul 16 00:51:20.883586 systemd[1]: Started sshd@65-147.75.90.137:22-144.126.219.123:34986.service - OpenSSH per-connection server daemon (144.126.219.123:34986). 
Jul 16 00:51:20.985181 sshd[7090]: Received disconnect from 144.126.219.123 port 34986:11: Bye Bye [preauth] Jul 16 00:51:20.985181 sshd[7090]: Disconnected from authenticating user root 144.126.219.123 port 34986 [preauth] Jul 16 00:51:20.988303 systemd[1]: sshd@65-147.75.90.137:22-144.126.219.123:34986.service: Deactivated successfully. Jul 16 00:51:23.846310 containerd[1916]: time="2025-07-16T00:51:23.846255757Z" level=info msg="TaskExit event in podsandbox handler container_id:\"be800ec18f7b81899758407874dcdc54a3764b6d537e5b7acde51b3b213d59f0\" id:\"22ff29bb2b8c8f893a6d8adf0cc5454c2300936807dc7a42f9102b8bb48860ac\" pid:7105 exited_at:{seconds:1752627083 nanos:846060245}" Jul 16 00:51:37.771759 containerd[1916]: time="2025-07-16T00:51:37.771680948Z" level=info msg="TaskExit event in podsandbox handler container_id:\"18dab54ef0693535b1092e84ba72a8e49c139c132335e0dd0f6d5c4f2227eef4\" id:\"94580b53a2a2a9243f9ca29d2a84e30441fff83acd4e8e3f7711bc4b6e44ecac\" pid:7140 exited_at:{seconds:1752627097 nanos:771513465}" Jul 16 00:51:45.842745 containerd[1916]: time="2025-07-16T00:51:45.842716223Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6af3035db27f8718f3a2fde2f9cccb134a0e6f14c30a3e1a3e3fc444aa35139e\" id:\"b4262f114e0a7c4b7ade2f9e48dcfc372fd6e96a0f9e6549665b8b1df6e2dc45\" pid:7179 exited_at:{seconds:1752627105 nanos:842557015}" Jul 16 00:51:53.835945 containerd[1916]: time="2025-07-16T00:51:53.835918484Z" level=info msg="TaskExit event in podsandbox handler container_id:\"be800ec18f7b81899758407874dcdc54a3764b6d537e5b7acde51b3b213d59f0\" id:\"3dd57378c57c13d73d307a7172931e3a8cc31ce8f0a6b7aa99bc8b244869ab83\" pid:7201 exited_at:{seconds:1752627113 nanos:835739155}" Jul 16 00:52:00.953094 containerd[1916]: time="2025-07-16T00:52:00.953066026Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6af3035db27f8718f3a2fde2f9cccb134a0e6f14c30a3e1a3e3fc444aa35139e\" id:\"e3adaa1e2b8fd49e6eba6b3b230d36a847176af0fb62d5fc07cca0ef038921f5\" 
pid:7234 exited_at:{seconds:1752627120 nanos:952939851}" Jul 16 00:52:07.752675 containerd[1916]: time="2025-07-16T00:52:07.752645536Z" level=info msg="TaskExit event in podsandbox handler container_id:\"18dab54ef0693535b1092e84ba72a8e49c139c132335e0dd0f6d5c4f2227eef4\" id:\"09804a1e48c03d0ef102fddb7d215aabd5fd805b0dd9bdaef5d8b9764d8d49df\" pid:7258 exited_at:{seconds:1752627127 nanos:752449568}" Jul 16 00:52:09.325703 containerd[1916]: time="2025-07-16T00:52:09.325672378Z" level=info msg="TaskExit event in podsandbox handler container_id:\"be800ec18f7b81899758407874dcdc54a3764b6d537e5b7acde51b3b213d59f0\" id:\"a03861e2e9fa891f00043a78e3637f6230fd5939ff6c4c4abdcbe9051017e70c\" pid:7292 exited_at:{seconds:1752627129 nanos:325511434}" Jul 16 00:52:15.808812 containerd[1916]: time="2025-07-16T00:52:15.808783298Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6af3035db27f8718f3a2fde2f9cccb134a0e6f14c30a3e1a3e3fc444aa35139e\" id:\"59c73f43ec759744e952f8bc1f831abcda390e66b65f2d95925fdd8f0db2879e\" pid:7327 exited_at:{seconds:1752627135 nanos:808674113}" Jul 16 00:52:23.847977 containerd[1916]: time="2025-07-16T00:52:23.847950235Z" level=info msg="TaskExit event in podsandbox handler container_id:\"be800ec18f7b81899758407874dcdc54a3764b6d537e5b7acde51b3b213d59f0\" id:\"aadadddcae5c35af304b82397f005aee06f5fa78d19a1536a63cb8c26e38f1ff\" pid:7356 exited_at:{seconds:1752627143 nanos:847764429}" Jul 16 00:52:25.134782 systemd[1]: Started sshd@66-147.75.90.137:22-144.126.219.123:46294.service - OpenSSH per-connection server daemon (144.126.219.123:46294). Jul 16 00:52:25.263164 sshd[7378]: Received disconnect from 144.126.219.123 port 46294:11: Bye Bye [preauth] Jul 16 00:52:25.263164 sshd[7378]: Disconnected from authenticating user root 144.126.219.123 port 46294 [preauth] Jul 16 00:52:25.265614 systemd[1]: sshd@66-147.75.90.137:22-144.126.219.123:46294.service: Deactivated successfully. 
Jul 16 00:52:37.137347 systemd[1]: Started sshd@67-147.75.90.137:22-203.55.224.216:55308.service - OpenSSH per-connection server daemon (203.55.224.216:55308). Jul 16 00:52:37.751131 containerd[1916]: time="2025-07-16T00:52:37.751070660Z" level=info msg="TaskExit event in podsandbox handler container_id:\"18dab54ef0693535b1092e84ba72a8e49c139c132335e0dd0f6d5c4f2227eef4\" id:\"41b84ec9391f8e02c1ae889904b13274e91157c1eb7b5f7538700587f2cafcca\" pid:7399 exited_at:{seconds:1752627157 nanos:750882357}" Jul 16 00:52:42.535939 systemd[1]: Started sshd@68-147.75.90.137:22-211.250.5.139:45907.service - OpenSSH per-connection server daemon (211.250.5.139:45907). Jul 16 00:52:42.707654 sshd[7425]: Connection reset by 211.250.5.139 port 45907 [preauth] Jul 16 00:52:42.710974 systemd[1]: sshd@68-147.75.90.137:22-211.250.5.139:45907.service: Deactivated successfully. Jul 16 00:52:45.803071 containerd[1916]: time="2025-07-16T00:52:45.803039175Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6af3035db27f8718f3a2fde2f9cccb134a0e6f14c30a3e1a3e3fc444aa35139e\" id:\"6243e5fae07324b9e972afee07f788a740a6e847b99346cb1f5e13252b4066c5\" pid:7440 exited_at:{seconds:1752627165 nanos:802848661}" Jul 16 00:52:53.832668 containerd[1916]: time="2025-07-16T00:52:53.832646231Z" level=info msg="TaskExit event in podsandbox handler container_id:\"be800ec18f7b81899758407874dcdc54a3764b6d537e5b7acde51b3b213d59f0\" id:\"5660047f39d16dcb2b9bf8d855afa64ccc8c360043db6b2a20246263a2a39ded\" pid:7487 exited_at:{seconds:1752627173 nanos:832491384}" Jul 16 00:53:00.914384 containerd[1916]: time="2025-07-16T00:53:00.914330983Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6af3035db27f8718f3a2fde2f9cccb134a0e6f14c30a3e1a3e3fc444aa35139e\" id:\"184e55e24b5cb56685137dd6b5fcdf563fd158d22a3534449b3715c54e0d29a5\" pid:7521 exited_at:{seconds:1752627180 nanos:914203102}" Jul 16 00:53:07.765799 containerd[1916]: time="2025-07-16T00:53:07.765746460Z" level=info msg="TaskExit 
event in podsandbox handler container_id:\"18dab54ef0693535b1092e84ba72a8e49c139c132335e0dd0f6d5c4f2227eef4\" id:\"23688ad2d9c2dd9e8d467fcf43c29d061b7f327c6c1ae755425a18996b839739\" pid:7544 exited_at:{seconds:1752627187 nanos:765284248}" Jul 16 00:53:09.317735 containerd[1916]: time="2025-07-16T00:53:09.317709306Z" level=info msg="TaskExit event in podsandbox handler container_id:\"be800ec18f7b81899758407874dcdc54a3764b6d537e5b7acde51b3b213d59f0\" id:\"28f2d72a76bf9afa2ad5bbb758c8a28afe1b7eebe57f4730caceb3bd3654e6cb\" pid:7579 exited_at:{seconds:1752627189 nanos:317534502}" Jul 16 00:53:15.807620 containerd[1916]: time="2025-07-16T00:53:15.807587292Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6af3035db27f8718f3a2fde2f9cccb134a0e6f14c30a3e1a3e3fc444aa35139e\" id:\"2c3d6160e5671e94d0e9226508aabbb856d915803426b5e9c506f28cc9940e73\" pid:7614 exited_at:{seconds:1752627195 nanos:807413022}" Jul 16 00:53:23.837089 containerd[1916]: time="2025-07-16T00:53:23.837066660Z" level=info msg="TaskExit event in podsandbox handler container_id:\"be800ec18f7b81899758407874dcdc54a3764b6d537e5b7acde51b3b213d59f0\" id:\"9c1d698872024e8c8f025c6bfa36fff1b28fb82084db7504c1170dc34f5dd085\" pid:7636 exited_at:{seconds:1752627203 nanos:836901085}" Jul 16 00:53:24.318209 systemd[1]: Started sshd@69-147.75.90.137:22-121.201.125.75:41630.service - OpenSSH per-connection server daemon (121.201.125.75:41630). Jul 16 00:53:25.371914 sshd[7657]: Received disconnect from 121.201.125.75 port 41630:11: Bye Bye [preauth] Jul 16 00:53:25.371914 sshd[7657]: Disconnected from authenticating user root 121.201.125.75 port 41630 [preauth] Jul 16 00:53:25.375121 systemd[1]: sshd@69-147.75.90.137:22-121.201.125.75:41630.service: Deactivated successfully. Jul 16 00:53:29.293691 systemd[1]: Started sshd@70-147.75.90.137:22-144.126.219.123:40462.service - OpenSSH per-connection server daemon (144.126.219.123:40462). 
Jul 16 00:53:29.403186 sshd[7662]: Received disconnect from 144.126.219.123 port 40462:11: Bye Bye [preauth] Jul 16 00:53:29.403186 sshd[7662]: Disconnected from authenticating user root 144.126.219.123 port 40462 [preauth] Jul 16 00:53:29.406636 systemd[1]: sshd@70-147.75.90.137:22-144.126.219.123:40462.service: Deactivated successfully. Jul 16 00:53:37.788804 containerd[1916]: time="2025-07-16T00:53:37.788776243Z" level=info msg="TaskExit event in podsandbox handler container_id:\"18dab54ef0693535b1092e84ba72a8e49c139c132335e0dd0f6d5c4f2227eef4\" id:\"106fb92ccba114c988548af8a98def0bb78b75cbc8719745c19a4e951b1826df\" pid:7679 exited_at:{seconds:1752627217 nanos:788601377}" Jul 16 00:53:45.840582 containerd[1916]: time="2025-07-16T00:53:45.840552240Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6af3035db27f8718f3a2fde2f9cccb134a0e6f14c30a3e1a3e3fc444aa35139e\" id:\"c34f5179c6e9b300212122c406bc0301f1bef735e5cbfe958d8549969201fcab\" pid:7715 exited_at:{seconds:1752627225 nanos:840418525}" Jul 16 00:53:53.853069 containerd[1916]: time="2025-07-16T00:53:53.853009775Z" level=info msg="TaskExit event in podsandbox handler container_id:\"be800ec18f7b81899758407874dcdc54a3764b6d537e5b7acde51b3b213d59f0\" id:\"36402fdf0d6d8f628917efc4db4586ba56539947beaef7b486d64cfdf6e7de25\" pid:7740 exited_at:{seconds:1752627233 nanos:852787710}" Jul 16 00:54:00.941380 containerd[1916]: time="2025-07-16T00:54:00.941240522Z" level=warning msg="container event discarded" container=fb01e0e29856f381adbc88c323a4920531800a77517ae21c23acf6da5b14455d type=CONTAINER_CREATED_EVENT Jul 16 00:54:00.941380 containerd[1916]: time="2025-07-16T00:54:00.941368892Z" level=warning msg="container event discarded" container=fb01e0e29856f381adbc88c323a4920531800a77517ae21c23acf6da5b14455d type=CONTAINER_STARTED_EVENT Jul 16 00:54:00.957720 containerd[1916]: time="2025-07-16T00:54:00.957660817Z" level=warning msg="container event discarded" 
container=bb59c22785f339280674436aee3feccea8604ff8bd633196a88b7ab50192618c type=CONTAINER_CREATED_EVENT Jul 16 00:54:00.957720 containerd[1916]: time="2025-07-16T00:54:00.957699741Z" level=warning msg="container event discarded" container=563ba67b118c5d58e8faa817b2ec87d0932471c6204d473d99e266d56311c3ff type=CONTAINER_CREATED_EVENT Jul 16 00:54:00.957720 containerd[1916]: time="2025-07-16T00:54:00.957718914Z" level=warning msg="container event discarded" container=563ba67b118c5d58e8faa817b2ec87d0932471c6204d473d99e266d56311c3ff type=CONTAINER_STARTED_EVENT Jul 16 00:54:00.957894 containerd[1916]: time="2025-07-16T00:54:00.957731862Z" level=warning msg="container event discarded" container=0140d556a80091973c9496ed004f86844fef6a5f598b9e1ec997ccee2cc1c37c type=CONTAINER_CREATED_EVENT Jul 16 00:54:00.966911 containerd[1916]: time="2025-07-16T00:54:00.966887420Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6af3035db27f8718f3a2fde2f9cccb134a0e6f14c30a3e1a3e3fc444aa35139e\" id:\"c4c1a60677474c97ce17ca23b0a090c1908624beefc72651f8ba4b25800a3f19\" pid:7770 exited_at:{seconds:1752627240 nanos:966735813}" Jul 16 00:54:01.004860 containerd[1916]: time="2025-07-16T00:54:01.004802382Z" level=warning msg="container event discarded" container=4d562869458bfaec66ff6c681552c8a11a2e8bd6505a5b618bc11c9956217e02 type=CONTAINER_CREATED_EVENT Jul 16 00:54:01.004860 containerd[1916]: time="2025-07-16T00:54:01.004820796Z" level=warning msg="container event discarded" container=4d562869458bfaec66ff6c681552c8a11a2e8bd6505a5b618bc11c9956217e02 type=CONTAINER_STARTED_EVENT Jul 16 00:54:01.004860 containerd[1916]: time="2025-07-16T00:54:01.004832952Z" level=warning msg="container event discarded" container=bb59c22785f339280674436aee3feccea8604ff8bd633196a88b7ab50192618c type=CONTAINER_STARTED_EVENT Jul 16 00:54:01.004860 containerd[1916]: time="2025-07-16T00:54:01.004839466Z" level=warning msg="container event discarded" 
container=0140d556a80091973c9496ed004f86844fef6a5f598b9e1ec997ccee2cc1c37c type=CONTAINER_STARTED_EVENT Jul 16 00:54:01.004860 containerd[1916]: time="2025-07-16T00:54:01.004847115Z" level=warning msg="container event discarded" container=365f66673fe2c42c36257c5b059bab7d99729c1f3bbf657a1914821a5b5d1b5f type=CONTAINER_CREATED_EVENT Jul 16 00:54:01.058224 containerd[1916]: time="2025-07-16T00:54:01.058117604Z" level=warning msg="container event discarded" container=365f66673fe2c42c36257c5b059bab7d99729c1f3bbf657a1914821a5b5d1b5f type=CONTAINER_STARTED_EVENT Jul 16 00:54:07.759819 containerd[1916]: time="2025-07-16T00:54:07.759789996Z" level=info msg="TaskExit event in podsandbox handler container_id:\"18dab54ef0693535b1092e84ba72a8e49c139c132335e0dd0f6d5c4f2227eef4\" id:\"8b779fed4c64f3a27cfbb7b305db3ffaa517601c5aee80f5f26996c4ff48cb6d\" pid:7795 exited_at:{seconds:1752627247 nanos:759632273}" Jul 16 00:54:09.286616 containerd[1916]: time="2025-07-16T00:54:09.286549087Z" level=info msg="TaskExit event in podsandbox handler container_id:\"be800ec18f7b81899758407874dcdc54a3764b6d537e5b7acde51b3b213d59f0\" id:\"57fbbd738b8460224de6bb04e27c0dec4de71ac99495320b1ad3297536e14e3a\" pid:7828 exited_at:{seconds:1752627249 nanos:286360375}" Jul 16 00:54:10.853175 containerd[1916]: time="2025-07-16T00:54:10.853003570Z" level=warning msg="container event discarded" container=feab078f5750ac72c520d655d976198feb39b3d414dbd800ff1aafa653966add type=CONTAINER_CREATED_EVENT Jul 16 00:54:10.853175 containerd[1916]: time="2025-07-16T00:54:10.853153364Z" level=warning msg="container event discarded" container=feab078f5750ac72c520d655d976198feb39b3d414dbd800ff1aafa653966add type=CONTAINER_STARTED_EVENT Jul 16 00:54:10.915785 containerd[1916]: time="2025-07-16T00:54:10.915624828Z" level=warning msg="container event discarded" container=d8bd266cfe5ba849f3b4293a276aa42c2d0b18a7d28d3a25e9bf9a9dff6f5c97 type=CONTAINER_CREATED_EVENT Jul 16 00:54:10.971255 containerd[1916]: 
time="2025-07-16T00:54:10.971094598Z" level=warning msg="container event discarded" container=d8bd266cfe5ba849f3b4293a276aa42c2d0b18a7d28d3a25e9bf9a9dff6f5c97 type=CONTAINER_STARTED_EVENT Jul 16 00:54:11.508731 containerd[1916]: time="2025-07-16T00:54:11.508617533Z" level=warning msg="container event discarded" container=8fa45907b1c4c24d80562e2cb5f3532233c87097f268c9aee7e1bb8e258f9b29 type=CONTAINER_CREATED_EVENT Jul 16 00:54:11.508731 containerd[1916]: time="2025-07-16T00:54:11.508719696Z" level=warning msg="container event discarded" container=8fa45907b1c4c24d80562e2cb5f3532233c87097f268c9aee7e1bb8e258f9b29 type=CONTAINER_STARTED_EVENT Jul 16 00:54:13.263468 containerd[1916]: time="2025-07-16T00:54:13.263266777Z" level=warning msg="container event discarded" container=bf7807d5393b265257c8e1105347553b360221c211ec86fbf7d182c59ac6b9f8 type=CONTAINER_CREATED_EVENT Jul 16 00:54:13.291028 containerd[1916]: time="2025-07-16T00:54:13.290855658Z" level=warning msg="container event discarded" container=bf7807d5393b265257c8e1105347553b360221c211ec86fbf7d182c59ac6b9f8 type=CONTAINER_STARTED_EVENT Jul 16 00:54:15.789530 containerd[1916]: time="2025-07-16T00:54:15.789505832Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6af3035db27f8718f3a2fde2f9cccb134a0e6f14c30a3e1a3e3fc444aa35139e\" id:\"0f2de7ff08344f0491c92a1db1ec393d8fce81aa1fa2b7551a860fa2a215b84f\" pid:7861 exited_at:{seconds:1752627255 nanos:789387597}" Jul 16 00:54:20.091805 systemd[1]: Started sshd@71-147.75.90.137:22-203.55.224.216:45036.service - OpenSSH per-connection server daemon (203.55.224.216:45036). 
Jul 16 00:54:20.162498 containerd[1916]: time="2025-07-16T00:54:20.162396340Z" level=warning msg="container event discarded" container=51279558ca288f874dc3637803939ff0f64ddb51a4d21c0f49b9668ad928f682 type=CONTAINER_CREATED_EVENT Jul 16 00:54:20.162498 containerd[1916]: time="2025-07-16T00:54:20.162464446Z" level=warning msg="container event discarded" container=51279558ca288f874dc3637803939ff0f64ddb51a4d21c0f49b9668ad928f682 type=CONTAINER_STARTED_EVENT Jul 16 00:54:20.478825 containerd[1916]: time="2025-07-16T00:54:20.478548139Z" level=warning msg="container event discarded" container=bdc3409baaaf10cc6b76c35b3819b158f2f529dcd7f68f7ac384c5ae516aa464 type=CONTAINER_CREATED_EVENT Jul 16 00:54:20.478825 containerd[1916]: time="2025-07-16T00:54:20.478637880Z" level=warning msg="container event discarded" container=bdc3409baaaf10cc6b76c35b3819b158f2f529dcd7f68f7ac384c5ae516aa464 type=CONTAINER_STARTED_EVENT Jul 16 00:54:22.527688 containerd[1916]: time="2025-07-16T00:54:22.527490271Z" level=warning msg="container event discarded" container=cc190c7e91b100e3a8639a5bf5b4bb33f974a0765719a577262a1937d8774a1e type=CONTAINER_CREATED_EVENT Jul 16 00:54:22.582462 containerd[1916]: time="2025-07-16T00:54:22.582294152Z" level=warning msg="container event discarded" container=cc190c7e91b100e3a8639a5bf5b4bb33f974a0765719a577262a1937d8774a1e type=CONTAINER_STARTED_EVENT Jul 16 00:54:23.839972 containerd[1916]: time="2025-07-16T00:54:23.839926428Z" level=info msg="TaskExit event in podsandbox handler container_id:\"be800ec18f7b81899758407874dcdc54a3764b6d537e5b7acde51b3b213d59f0\" id:\"4d8ee2c6b8e345864c6040695623c0d6cb80bb53f38aef10d3a5f9fb7981dc43\" pid:7885 exited_at:{seconds:1752627263 nanos:839739552}" Jul 16 00:54:24.550871 containerd[1916]: time="2025-07-16T00:54:24.550671330Z" level=warning msg="container event discarded" container=34f69abf12c3c8c319f2dee5022cbef0d297f991adbaeecef1e72ebb52c8bd04 type=CONTAINER_CREATED_EVENT Jul 16 00:54:24.589318 containerd[1916]: 
time="2025-07-16T00:54:24.589262613Z" level=warning msg="container event discarded" container=34f69abf12c3c8c319f2dee5022cbef0d297f991adbaeecef1e72ebb52c8bd04 type=CONTAINER_STARTED_EVENT Jul 16 00:54:25.503737 containerd[1916]: time="2025-07-16T00:54:25.503579753Z" level=warning msg="container event discarded" container=34f69abf12c3c8c319f2dee5022cbef0d297f991adbaeecef1e72ebb52c8bd04 type=CONTAINER_STOPPED_EVENT Jul 16 00:54:28.568781 containerd[1916]: time="2025-07-16T00:54:28.568633346Z" level=warning msg="container event discarded" container=7e317f6f0628733b12153a306c699ba77825a17ca84c9c7fe54bb0c614928361 type=CONTAINER_CREATED_EVENT Jul 16 00:54:28.603751 containerd[1916]: time="2025-07-16T00:54:28.603626364Z" level=warning msg="container event discarded" container=7e317f6f0628733b12153a306c699ba77825a17ca84c9c7fe54bb0c614928361 type=CONTAINER_STARTED_EVENT Jul 16 00:54:29.597708 containerd[1916]: time="2025-07-16T00:54:29.597619641Z" level=warning msg="container event discarded" container=7e317f6f0628733b12153a306c699ba77825a17ca84c9c7fe54bb0c614928361 type=CONTAINER_STOPPED_EVENT Jul 16 00:54:35.115270 containerd[1916]: time="2025-07-16T00:54:35.114974308Z" level=warning msg="container event discarded" container=18dab54ef0693535b1092e84ba72a8e49c139c132335e0dd0f6d5c4f2227eef4 type=CONTAINER_CREATED_EVENT Jul 16 00:54:35.168524 containerd[1916]: time="2025-07-16T00:54:35.168351851Z" level=warning msg="container event discarded" container=18dab54ef0693535b1092e84ba72a8e49c139c132335e0dd0f6d5c4f2227eef4 type=CONTAINER_STARTED_EVENT Jul 16 00:54:36.278228 containerd[1916]: time="2025-07-16T00:54:36.278111986Z" level=warning msg="container event discarded" container=490c2f0da3a913e082108b716148599d4f90718406cd95266c59f38394ba29b1 type=CONTAINER_CREATED_EVENT Jul 16 00:54:36.278228 containerd[1916]: time="2025-07-16T00:54:36.278228629Z" level=warning msg="container event discarded" container=490c2f0da3a913e082108b716148599d4f90718406cd95266c59f38394ba29b1 
type=CONTAINER_STARTED_EVENT Jul 16 00:54:37.757959 containerd[1916]: time="2025-07-16T00:54:37.757898948Z" level=info msg="TaskExit event in podsandbox handler container_id:\"18dab54ef0693535b1092e84ba72a8e49c139c132335e0dd0f6d5c4f2227eef4\" id:\"ebaf9936b389860b5289a987dcede3487fdb627d336b8db9db9892f7dbf10ac1\" pid:7941 exited_at:{seconds:1752627277 nanos:757661786}" Jul 16 00:54:37.879287 containerd[1916]: time="2025-07-16T00:54:37.879130884Z" level=warning msg="container event discarded" container=ceddf7691ad01dad4d97c4fde4f071f9741748d53b7e9ac315e35cd7a6eb8dca type=CONTAINER_CREATED_EVENT Jul 16 00:54:37.938062 containerd[1916]: time="2025-07-16T00:54:37.937927849Z" level=warning msg="container event discarded" container=ceddf7691ad01dad4d97c4fde4f071f9741748d53b7e9ac315e35cd7a6eb8dca type=CONTAINER_STARTED_EVENT Jul 16 00:54:39.688141 systemd[1]: Started sshd@72-147.75.90.137:22-144.126.219.123:36668.service - OpenSSH per-connection server daemon (144.126.219.123:36668). Jul 16 00:54:39.692802 systemd[1]: sshd@67-147.75.90.137:22-203.55.224.216:55308.service: Deactivated successfully. Jul 16 00:54:39.781077 sshd[7964]: Received disconnect from 144.126.219.123 port 36668:11: Bye Bye [preauth] Jul 16 00:54:39.781077 sshd[7964]: Disconnected from authenticating user root 144.126.219.123 port 36668 [preauth] Jul 16 00:54:39.782017 systemd[1]: sshd@72-147.75.90.137:22-144.126.219.123:36668.service: Deactivated successfully. 
Jul 16 00:54:40.333176 containerd[1916]: time="2025-07-16T00:54:40.332999695Z" level=warning msg="container event discarded" container=8528ab9e431adfbe59d22f6ab8ac28fef8a4ff38f92ac3b191e06c1bfda74c1d type=CONTAINER_CREATED_EVENT Jul 16 00:54:40.383727 containerd[1916]: time="2025-07-16T00:54:40.383545301Z" level=warning msg="container event discarded" container=8528ab9e431adfbe59d22f6ab8ac28fef8a4ff38f92ac3b191e06c1bfda74c1d type=CONTAINER_STARTED_EVENT Jul 16 00:54:40.716783 containerd[1916]: time="2025-07-16T00:54:40.716520699Z" level=warning msg="container event discarded" container=0411dae305ccec0be772ca2267337d71a309c9c196d04e55e0657efaeda4a7b1 type=CONTAINER_CREATED_EVENT Jul 16 00:54:40.716783 containerd[1916]: time="2025-07-16T00:54:40.716601742Z" level=warning msg="container event discarded" container=0411dae305ccec0be772ca2267337d71a309c9c196d04e55e0657efaeda4a7b1 type=CONTAINER_STARTED_EVENT Jul 16 00:54:40.716783 containerd[1916]: time="2025-07-16T00:54:40.716628659Z" level=warning msg="container event discarded" container=4a79c118f849a62bb623292e2d9a741c1308a1a4307796140c1724fea0929f72 type=CONTAINER_CREATED_EVENT Jul 16 00:54:40.763156 containerd[1916]: time="2025-07-16T00:54:40.763024447Z" level=warning msg="container event discarded" container=4a79c118f849a62bb623292e2d9a741c1308a1a4307796140c1724fea0929f72 type=CONTAINER_STARTED_EVENT Jul 16 00:54:42.759312 containerd[1916]: time="2025-07-16T00:54:42.759152491Z" level=warning msg="container event discarded" container=441fafeef86c67001d7efcedb5efc9d43cf083134ae3c47bc5312c618b27d66d type=CONTAINER_CREATED_EVENT Jul 16 00:54:42.759312 containerd[1916]: time="2025-07-16T00:54:42.759256274Z" level=warning msg="container event discarded" container=441fafeef86c67001d7efcedb5efc9d43cf083134ae3c47bc5312c618b27d66d type=CONTAINER_STARTED_EVENT Jul 16 00:54:42.830809 containerd[1916]: time="2025-07-16T00:54:42.830649916Z" level=warning msg="container event discarded" 
container=3691a15ca9bd50000fa253c6dfd0de4124a8f56f440c76aac9fdd7724d669fd5 type=CONTAINER_CREATED_EVENT Jul 16 00:54:42.830809 containerd[1916]: time="2025-07-16T00:54:42.830757846Z" level=warning msg="container event discarded" container=3691a15ca9bd50000fa253c6dfd0de4124a8f56f440c76aac9fdd7724d669fd5 type=CONTAINER_STARTED_EVENT Jul 16 00:54:43.731135 containerd[1916]: time="2025-07-16T00:54:43.730995847Z" level=warning msg="container event discarded" container=56784214c8743756634111848b0a1ddedc365e8de8feaf5b0aa16fb296f6f9e5 type=CONTAINER_CREATED_EVENT Jul 16 00:54:43.731135 containerd[1916]: time="2025-07-16T00:54:43.731080908Z" level=warning msg="container event discarded" container=56784214c8743756634111848b0a1ddedc365e8de8feaf5b0aa16fb296f6f9e5 type=CONTAINER_STARTED_EVENT Jul 16 00:54:44.711741 containerd[1916]: time="2025-07-16T00:54:44.711566585Z" level=warning msg="container event discarded" container=cb387f623f56fd9f9b7ef380ed722f573f217b72a6174f47a4fb8a15e6014e6d type=CONTAINER_CREATED_EVENT Jul 16 00:54:44.711741 containerd[1916]: time="2025-07-16T00:54:44.711723419Z" level=warning msg="container event discarded" container=cb387f623f56fd9f9b7ef380ed722f573f217b72a6174f47a4fb8a15e6014e6d type=CONTAINER_STARTED_EVENT Jul 16 00:54:44.712701 containerd[1916]: time="2025-07-16T00:54:44.711761813Z" level=warning msg="container event discarded" container=9183763af681ecf397d8dc1f6ee7f3f405efa8c52a1c35cfbaef4ad26ae0a1c0 type=CONTAINER_CREATED_EVENT Jul 16 00:54:44.755883 containerd[1916]: time="2025-07-16T00:54:44.755750089Z" level=warning msg="container event discarded" container=9183763af681ecf397d8dc1f6ee7f3f405efa8c52a1c35cfbaef4ad26ae0a1c0 type=CONTAINER_STARTED_EVENT Jul 16 00:54:44.828467 containerd[1916]: time="2025-07-16T00:54:44.828283166Z" level=warning msg="container event discarded" container=2d1f0a909d792319ac2e9975d7f22e5fedd05165db9a3213def39c3732dd7ca0 type=CONTAINER_CREATED_EVENT Jul 16 00:54:44.828467 containerd[1916]: 
time="2025-07-16T00:54:44.828396921Z" level=warning msg="container event discarded" container=2d1f0a909d792319ac2e9975d7f22e5fedd05165db9a3213def39c3732dd7ca0 type=CONTAINER_STARTED_EVENT Jul 16 00:54:45.440782 containerd[1916]: time="2025-07-16T00:54:45.440614236Z" level=warning msg="container event discarded" container=6af3035db27f8718f3a2fde2f9cccb134a0e6f14c30a3e1a3e3fc444aa35139e type=CONTAINER_CREATED_EVENT Jul 16 00:54:45.486606 containerd[1916]: time="2025-07-16T00:54:45.486448406Z" level=warning msg="container event discarded" container=6af3035db27f8718f3a2fde2f9cccb134a0e6f14c30a3e1a3e3fc444aa35139e type=CONTAINER_STARTED_EVENT Jul 16 00:54:45.692785 containerd[1916]: time="2025-07-16T00:54:45.692523270Z" level=warning msg="container event discarded" container=c892c186883043594e92579eed9af1e6a04f0e60bc49ce3e5ac496744f966bba type=CONTAINER_CREATED_EVENT Jul 16 00:54:45.692785 containerd[1916]: time="2025-07-16T00:54:45.692597455Z" level=warning msg="container event discarded" container=c892c186883043594e92579eed9af1e6a04f0e60bc49ce3e5ac496744f966bba type=CONTAINER_STARTED_EVENT Jul 16 00:54:45.790585 containerd[1916]: time="2025-07-16T00:54:45.790530864Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6af3035db27f8718f3a2fde2f9cccb134a0e6f14c30a3e1a3e3fc444aa35139e\" id:\"6975153a4c18833ffec113197a7b767d8597b0f0542030999b085d436f8095d0\" pid:7985 exited_at:{seconds:1752627285 nanos:790382502}" Jul 16 00:54:48.100509 containerd[1916]: time="2025-07-16T00:54:48.100451423Z" level=warning msg="container event discarded" container=ee42f2eaf1f9efa637bcead4febc47d1633938d1c89117ca17d024d501132d81 type=CONTAINER_CREATED_EVENT Jul 16 00:54:48.151084 containerd[1916]: time="2025-07-16T00:54:48.150956219Z" level=warning msg="container event discarded" container=ee42f2eaf1f9efa637bcead4febc47d1633938d1c89117ca17d024d501132d81 type=CONTAINER_STARTED_EVENT Jul 16 00:54:48.505380 containerd[1916]: time="2025-07-16T00:54:48.505105908Z" level=warning 
msg="container event discarded" container=82ed2d3604db70c003ddf42853470f82a52c732beddf2ede185114b1a828a35d type=CONTAINER_CREATED_EVENT Jul 16 00:54:48.555755 containerd[1916]: time="2025-07-16T00:54:48.555611077Z" level=warning msg="container event discarded" container=82ed2d3604db70c003ddf42853470f82a52c732beddf2ede185114b1a828a35d type=CONTAINER_STARTED_EVENT Jul 16 00:54:51.727817 containerd[1916]: time="2025-07-16T00:54:51.727657311Z" level=warning msg="container event discarded" container=be800ec18f7b81899758407874dcdc54a3764b6d537e5b7acde51b3b213d59f0 type=CONTAINER_CREATED_EVENT Jul 16 00:54:51.774211 containerd[1916]: time="2025-07-16T00:54:51.774078444Z" level=warning msg="container event discarded" container=be800ec18f7b81899758407874dcdc54a3764b6d537e5b7acde51b3b213d59f0 type=CONTAINER_STARTED_EVENT Jul 16 00:54:53.099980 containerd[1916]: time="2025-07-16T00:54:53.099902209Z" level=warning msg="container event discarded" container=e58e420b950049033f8d48f5d5ef3a151c581ea80dd2dfe7af509d827c38db06 type=CONTAINER_CREATED_EVENT Jul 16 00:54:53.145395 containerd[1916]: time="2025-07-16T00:54:53.145259711Z" level=warning msg="container event discarded" container=e58e420b950049033f8d48f5d5ef3a151c581ea80dd2dfe7af509d827c38db06 type=CONTAINER_STARTED_EVENT Jul 16 00:54:53.833519 containerd[1916]: time="2025-07-16T00:54:53.833476069Z" level=info msg="TaskExit event in podsandbox handler container_id:\"be800ec18f7b81899758407874dcdc54a3764b6d537e5b7acde51b3b213d59f0\" id:\"988ee84f94d03cbe740a72ae7dda537bc95db298274a714381edbe0d72c270d7\" pid:8007 exited_at:{seconds:1752627293 nanos:833191230}" Jul 16 00:54:54.698430 containerd[1916]: time="2025-07-16T00:54:54.698302579Z" level=warning msg="container event discarded" container=78bcb39aa5be2a45d8c44060c2626f8a6f5f30a9bcfd3eb1cb83c0fb1af38b14 type=CONTAINER_CREATED_EVENT Jul 16 00:54:54.775756 containerd[1916]: time="2025-07-16T00:54:54.775610194Z" level=warning msg="container event discarded" 
container=78bcb39aa5be2a45d8c44060c2626f8a6f5f30a9bcfd3eb1cb83c0fb1af38b14 type=CONTAINER_STARTED_EVENT Jul 16 00:55:00.916857 containerd[1916]: time="2025-07-16T00:55:00.916835074Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6af3035db27f8718f3a2fde2f9cccb134a0e6f14c30a3e1a3e3fc444aa35139e\" id:\"29b5b7bdc5f6d5eb36c6aa67b27ed8d4f54a7ffe89689a862755122614b8a276\" pid:8049 exited_at:{seconds:1752627300 nanos:916731298}" Jul 16 00:55:07.750773 containerd[1916]: time="2025-07-16T00:55:07.750749461Z" level=info msg="TaskExit event in podsandbox handler container_id:\"18dab54ef0693535b1092e84ba72a8e49c139c132335e0dd0f6d5c4f2227eef4\" id:\"02cf7412c4cc5f6dca5f9e553514acd0999d5c4eb60a9f78ee4e8bfe8413cf6e\" pid:8075 exited_at:{seconds:1752627307 nanos:750505568}" Jul 16 00:55:09.266661 containerd[1916]: time="2025-07-16T00:55:09.266593745Z" level=info msg="TaskExit event in podsandbox handler container_id:\"be800ec18f7b81899758407874dcdc54a3764b6d537e5b7acde51b3b213d59f0\" id:\"9a50a4bf85aded5286a74f751a82b76c5c2e111b0c1a2051fcf065c590181969\" pid:8111 exited_at:{seconds:1752627309 nanos:266385504}" Jul 16 00:55:15.796888 containerd[1916]: time="2025-07-16T00:55:15.796856627Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6af3035db27f8718f3a2fde2f9cccb134a0e6f14c30a3e1a3e3fc444aa35139e\" id:\"fdc2cfa844eb7cebbc52d46de6c94b6b17d284350729322e4b5f2fba804335e9\" pid:8147 exited_at:{seconds:1752627315 nanos:796680958}" Jul 16 00:55:23.877776 containerd[1916]: time="2025-07-16T00:55:23.877753589Z" level=info msg="TaskExit event in podsandbox handler container_id:\"be800ec18f7b81899758407874dcdc54a3764b6d537e5b7acde51b3b213d59f0\" id:\"c04c83c8fbc9a0760a41422937514fbdec98ad8612a4ef73768d9bf760abdde3\" pid:8171 exited_at:{seconds:1752627323 nanos:877579568}" Jul 16 00:55:26.302064 systemd[1]: Started sshd@73-147.75.90.137:22-147.75.109.163:58414.service - OpenSSH per-connection server daemon (147.75.109.163:58414). 
Jul 16 00:55:26.391011 sshd[8194]: Accepted publickey for core from 147.75.109.163 port 58414 ssh2: RSA SHA256:fZrzoayed5b4K5BhAhcpFAkec9IHprc+NMqVGejEH7o Jul 16 00:55:26.392207 sshd-session[8194]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 16 00:55:26.396526 systemd-logind[1904]: New session 12 of user core. Jul 16 00:55:26.406087 systemd[1]: Started session-12.scope - Session 12 of User core. Jul 16 00:55:26.548694 sshd[8196]: Connection closed by 147.75.109.163 port 58414 Jul 16 00:55:26.549032 sshd-session[8194]: pam_unix(sshd:session): session closed for user core Jul 16 00:55:26.552148 systemd[1]: sshd@73-147.75.90.137:22-147.75.109.163:58414.service: Deactivated successfully. Jul 16 00:55:26.554080 systemd[1]: session-12.scope: Deactivated successfully. Jul 16 00:55:26.556138 systemd-logind[1904]: Session 12 logged out. Waiting for processes to exit. Jul 16 00:55:26.557376 systemd-logind[1904]: Removed session 12. Jul 16 00:55:31.583665 systemd[1]: Started sshd@74-147.75.90.137:22-147.75.109.163:33482.service - OpenSSH per-connection server daemon (147.75.109.163:33482). Jul 16 00:55:31.678469 sshd[8223]: Accepted publickey for core from 147.75.109.163 port 33482 ssh2: RSA SHA256:fZrzoayed5b4K5BhAhcpFAkec9IHprc+NMqVGejEH7o Jul 16 00:55:31.679432 sshd-session[8223]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 16 00:55:31.683132 systemd-logind[1904]: New session 13 of user core. Jul 16 00:55:31.699345 systemd[1]: Started session-13.scope - Session 13 of User core. Jul 16 00:55:31.825853 sshd[8225]: Connection closed by 147.75.109.163 port 33482 Jul 16 00:55:31.826040 sshd-session[8223]: pam_unix(sshd:session): session closed for user core Jul 16 00:55:31.828149 systemd[1]: sshd@74-147.75.90.137:22-147.75.109.163:33482.service: Deactivated successfully. Jul 16 00:55:31.829109 systemd[1]: session-13.scope: Deactivated successfully. 
Jul 16 00:55:31.829588 systemd-logind[1904]: Session 13 logged out. Waiting for processes to exit. Jul 16 00:55:31.830200 systemd-logind[1904]: Removed session 13. Jul 16 00:55:36.860852 systemd[1]: Started sshd@75-147.75.90.137:22-147.75.109.163:33490.service - OpenSSH per-connection server daemon (147.75.109.163:33490). Jul 16 00:55:36.913931 sshd[8251]: Accepted publickey for core from 147.75.109.163 port 33490 ssh2: RSA SHA256:fZrzoayed5b4K5BhAhcpFAkec9IHprc+NMqVGejEH7o Jul 16 00:55:36.914778 sshd-session[8251]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 16 00:55:36.918759 systemd-logind[1904]: New session 14 of user core. Jul 16 00:55:36.932135 systemd[1]: Started session-14.scope - Session 14 of User core. Jul 16 00:55:37.076994 sshd[8253]: Connection closed by 147.75.109.163 port 33490 Jul 16 00:55:37.077175 sshd-session[8251]: pam_unix(sshd:session): session closed for user core Jul 16 00:55:37.107483 systemd[1]: sshd@75-147.75.90.137:22-147.75.109.163:33490.service: Deactivated successfully. Jul 16 00:55:37.112112 systemd[1]: session-14.scope: Deactivated successfully. Jul 16 00:55:37.114658 systemd-logind[1904]: Session 14 logged out. Waiting for processes to exit. Jul 16 00:55:37.121230 systemd[1]: Started sshd@76-147.75.90.137:22-147.75.109.163:33492.service - OpenSSH per-connection server daemon (147.75.109.163:33492). Jul 16 00:55:37.124199 systemd-logind[1904]: Removed session 14. Jul 16 00:55:37.204746 sshd[8279]: Accepted publickey for core from 147.75.109.163 port 33492 ssh2: RSA SHA256:fZrzoayed5b4K5BhAhcpFAkec9IHprc+NMqVGejEH7o Jul 16 00:55:37.208051 sshd-session[8279]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 16 00:55:37.220721 systemd-logind[1904]: New session 15 of user core. Jul 16 00:55:37.246251 systemd[1]: Started session-15.scope - Session 15 of User core. 
Jul 16 00:55:37.417930 sshd[8281]: Connection closed by 147.75.109.163 port 33492 Jul 16 00:55:37.418694 sshd-session[8279]: pam_unix(sshd:session): session closed for user core Jul 16 00:55:37.442785 systemd[1]: sshd@76-147.75.90.137:22-147.75.109.163:33492.service: Deactivated successfully. Jul 16 00:55:37.447461 systemd[1]: session-15.scope: Deactivated successfully. Jul 16 00:55:37.449705 systemd-logind[1904]: Session 15 logged out. Waiting for processes to exit. Jul 16 00:55:37.455063 systemd[1]: Started sshd@77-147.75.90.137:22-147.75.109.163:33498.service - OpenSSH per-connection server daemon (147.75.109.163:33498). Jul 16 00:55:37.456215 systemd-logind[1904]: Removed session 15. Jul 16 00:55:37.518423 sshd[8304]: Accepted publickey for core from 147.75.109.163 port 33498 ssh2: RSA SHA256:fZrzoayed5b4K5BhAhcpFAkec9IHprc+NMqVGejEH7o Jul 16 00:55:37.519814 sshd-session[8304]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 16 00:55:37.525585 systemd-logind[1904]: New session 16 of user core. Jul 16 00:55:37.553213 systemd[1]: Started session-16.scope - Session 16 of User core. Jul 16 00:55:37.685004 sshd[8306]: Connection closed by 147.75.109.163 port 33498 Jul 16 00:55:37.685150 sshd-session[8304]: pam_unix(sshd:session): session closed for user core Jul 16 00:55:37.686944 systemd[1]: sshd@77-147.75.90.137:22-147.75.109.163:33498.service: Deactivated successfully. Jul 16 00:55:37.687923 systemd[1]: session-16.scope: Deactivated successfully. Jul 16 00:55:37.688658 systemd-logind[1904]: Session 16 logged out. Waiting for processes to exit. Jul 16 00:55:37.689270 systemd-logind[1904]: Removed session 16. 
Jul 16 00:55:37.786905 containerd[1916]: time="2025-07-16T00:55:37.786844733Z" level=info msg="TaskExit event in podsandbox handler container_id:\"18dab54ef0693535b1092e84ba72a8e49c139c132335e0dd0f6d5c4f2227eef4\" id:\"cc8f01ad7931f68c96b126bfd55a85a0da680568a04ac9079fa1b19bde7471e6\" pid:8341 exited_at:{seconds:1752627337 nanos:786600596}"
Jul 16 00:55:42.716122 systemd[1]: Started sshd@78-147.75.90.137:22-147.75.109.163:48814.service - OpenSSH per-connection server daemon (147.75.109.163:48814).
Jul 16 00:55:42.807655 sshd[8371]: Accepted publickey for core from 147.75.109.163 port 48814 ssh2: RSA SHA256:fZrzoayed5b4K5BhAhcpFAkec9IHprc+NMqVGejEH7o
Jul 16 00:55:42.808291 sshd-session[8371]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 16 00:55:42.811058 systemd-logind[1904]: New session 17 of user core.
Jul 16 00:55:42.823108 systemd[1]: Started session-17.scope - Session 17 of User core.
Jul 16 00:55:42.912023 sshd[8373]: Connection closed by 147.75.109.163 port 48814
Jul 16 00:55:42.912230 sshd-session[8371]: pam_unix(sshd:session): session closed for user core
Jul 16 00:55:42.914544 systemd[1]: sshd@78-147.75.90.137:22-147.75.109.163:48814.service: Deactivated successfully.
Jul 16 00:55:42.915677 systemd[1]: session-17.scope: Deactivated successfully.
Jul 16 00:55:42.916290 systemd-logind[1904]: Session 17 logged out. Waiting for processes to exit.
Jul 16 00:55:42.917033 systemd-logind[1904]: Removed session 17.
Jul 16 00:55:45.834917 containerd[1916]: time="2025-07-16T00:55:45.834885031Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6af3035db27f8718f3a2fde2f9cccb134a0e6f14c30a3e1a3e3fc444aa35139e\" id:\"1231a7a3f2657554bab8afe201be6a68e5a8df6e5bae8d5d5f1f40a752cef097\" pid:8413 exited_at:{seconds:1752627345 nanos:834679187}"
Jul 16 00:55:47.936226 systemd[1]: Started sshd@79-147.75.90.137:22-147.75.109.163:48824.service - OpenSSH per-connection server daemon (147.75.109.163:48824).
Jul 16 00:55:47.981465 sshd[8424]: Accepted publickey for core from 147.75.109.163 port 48824 ssh2: RSA SHA256:fZrzoayed5b4K5BhAhcpFAkec9IHprc+NMqVGejEH7o
Jul 16 00:55:47.984724 sshd-session[8424]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 16 00:55:47.996923 systemd-logind[1904]: New session 18 of user core.
Jul 16 00:55:48.015298 systemd[1]: Started session-18.scope - Session 18 of User core.
Jul 16 00:55:48.112430 sshd[8426]: Connection closed by 147.75.109.163 port 48824
Jul 16 00:55:48.112620 sshd-session[8424]: pam_unix(sshd:session): session closed for user core
Jul 16 00:55:48.114466 systemd[1]: sshd@79-147.75.90.137:22-147.75.109.163:48824.service: Deactivated successfully.
Jul 16 00:55:48.115492 systemd[1]: session-18.scope: Deactivated successfully.
Jul 16 00:55:48.116248 systemd-logind[1904]: Session 18 logged out. Waiting for processes to exit.
Jul 16 00:55:48.116824 systemd-logind[1904]: Removed session 18.
Jul 16 00:55:50.087328 systemd[1]: Started sshd@80-147.75.90.137:22-144.126.219.123:56474.service - OpenSSH per-connection server daemon (144.126.219.123:56474).
Jul 16 00:55:50.178897 sshd[8451]: Received disconnect from 144.126.219.123 port 56474:11: Bye Bye [preauth]
Jul 16 00:55:50.178897 sshd[8451]: Disconnected from authenticating user root 144.126.219.123 port 56474 [preauth]
Jul 16 00:55:50.182327 systemd[1]: sshd@80-147.75.90.137:22-144.126.219.123:56474.service: Deactivated successfully.
Jul 16 00:55:53.142050 systemd[1]: Started sshd@81-147.75.90.137:22-147.75.109.163:54770.service - OpenSSH per-connection server daemon (147.75.109.163:54770).
Jul 16 00:55:53.228062 sshd[8457]: Accepted publickey for core from 147.75.109.163 port 54770 ssh2: RSA SHA256:fZrzoayed5b4K5BhAhcpFAkec9IHprc+NMqVGejEH7o
Jul 16 00:55:53.231302 sshd-session[8457]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 16 00:55:53.243656 systemd-logind[1904]: New session 19 of user core.
Jul 16 00:55:53.263238 systemd[1]: Started session-19.scope - Session 19 of User core.
Jul 16 00:55:53.356425 sshd[8459]: Connection closed by 147.75.109.163 port 54770
Jul 16 00:55:53.356656 sshd-session[8457]: pam_unix(sshd:session): session closed for user core
Jul 16 00:55:53.358787 systemd[1]: sshd@81-147.75.90.137:22-147.75.109.163:54770.service: Deactivated successfully.
Jul 16 00:55:53.359771 systemd[1]: session-19.scope: Deactivated successfully.
Jul 16 00:55:53.360307 systemd-logind[1904]: Session 19 logged out. Waiting for processes to exit.
Jul 16 00:55:53.361070 systemd-logind[1904]: Removed session 19.
Jul 16 00:55:53.877943 containerd[1916]: time="2025-07-16T00:55:53.877922797Z" level=info msg="TaskExit event in podsandbox handler container_id:\"be800ec18f7b81899758407874dcdc54a3764b6d537e5b7acde51b3b213d59f0\" id:\"2c9ebb5a648536104b51533fec236d2cdc5150df35185fc806b7ad5c78bf2e9c\" pid:8497 exited_at:{seconds:1752627353 nanos:877759520}"
Jul 16 00:55:58.383884 systemd[1]: Started sshd@82-147.75.90.137:22-147.75.109.163:44708.service - OpenSSH per-connection server daemon (147.75.109.163:44708).
Jul 16 00:55:58.438017 sshd[8519]: Accepted publickey for core from 147.75.109.163 port 44708 ssh2: RSA SHA256:fZrzoayed5b4K5BhAhcpFAkec9IHprc+NMqVGejEH7o
Jul 16 00:55:58.438926 sshd-session[8519]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 16 00:55:58.442919 systemd-logind[1904]: New session 20 of user core.
Jul 16 00:55:58.460125 systemd[1]: Started session-20.scope - Session 20 of User core.
Jul 16 00:55:58.551739 sshd[8521]: Connection closed by 147.75.109.163 port 44708
Jul 16 00:55:58.551950 sshd-session[8519]: pam_unix(sshd:session): session closed for user core
Jul 16 00:55:58.569720 systemd[1]: sshd@82-147.75.90.137:22-147.75.109.163:44708.service: Deactivated successfully.
Jul 16 00:55:58.573764 systemd[1]: session-20.scope: Deactivated successfully.
Jul 16 00:55:58.576021 systemd-logind[1904]: Session 20 logged out. Waiting for processes to exit.
Jul 16 00:55:58.582824 systemd[1]: Started sshd@83-147.75.90.137:22-147.75.109.163:44710.service - OpenSSH per-connection server daemon (147.75.109.163:44710).
Jul 16 00:55:58.584687 systemd-logind[1904]: Removed session 20.
Jul 16 00:55:58.668432 sshd[8546]: Accepted publickey for core from 147.75.109.163 port 44710 ssh2: RSA SHA256:fZrzoayed5b4K5BhAhcpFAkec9IHprc+NMqVGejEH7o
Jul 16 00:55:58.669584 sshd-session[8546]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 16 00:55:58.674590 systemd-logind[1904]: New session 21 of user core.
Jul 16 00:55:58.695229 systemd[1]: Started session-21.scope - Session 21 of User core.
Jul 16 00:55:58.796084 sshd[8549]: Connection closed by 147.75.109.163 port 44710
Jul 16 00:55:58.796292 sshd-session[8546]: pam_unix(sshd:session): session closed for user core
Jul 16 00:55:58.826177 systemd[1]: sshd@83-147.75.90.137:22-147.75.109.163:44710.service: Deactivated successfully.
Jul 16 00:55:58.830149 systemd[1]: session-21.scope: Deactivated successfully.
Jul 16 00:55:58.832357 systemd-logind[1904]: Session 21 logged out. Waiting for processes to exit.
Jul 16 00:55:58.838254 systemd[1]: Started sshd@84-147.75.90.137:22-147.75.109.163:44724.service - OpenSSH per-connection server daemon (147.75.109.163:44724).
Jul 16 00:55:58.840114 systemd-logind[1904]: Removed session 21.
Jul 16 00:55:58.933402 sshd[8570]: Accepted publickey for core from 147.75.109.163 port 44724 ssh2: RSA SHA256:fZrzoayed5b4K5BhAhcpFAkec9IHprc+NMqVGejEH7o
Jul 16 00:55:58.936810 sshd-session[8570]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 16 00:55:58.949694 systemd-logind[1904]: New session 22 of user core.
Jul 16 00:55:58.967244 systemd[1]: Started session-22.scope - Session 22 of User core.
Jul 16 00:55:59.700056 sshd[8572]: Connection closed by 147.75.109.163 port 44724
Jul 16 00:55:59.700518 sshd-session[8570]: pam_unix(sshd:session): session closed for user core
Jul 16 00:55:59.716890 systemd[1]: sshd@84-147.75.90.137:22-147.75.109.163:44724.service: Deactivated successfully.
Jul 16 00:55:59.719283 systemd[1]: session-22.scope: Deactivated successfully.
Jul 16 00:55:59.720548 systemd-logind[1904]: Session 22 logged out. Waiting for processes to exit.
Jul 16 00:55:59.724283 systemd[1]: Started sshd@85-147.75.90.137:22-147.75.109.163:44740.service - OpenSSH per-connection server daemon (147.75.109.163:44740).
Jul 16 00:55:59.725353 systemd-logind[1904]: Removed session 22.
Jul 16 00:55:59.810241 sshd[8602]: Accepted publickey for core from 147.75.109.163 port 44740 ssh2: RSA SHA256:fZrzoayed5b4K5BhAhcpFAkec9IHprc+NMqVGejEH7o
Jul 16 00:55:59.811568 sshd-session[8602]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 16 00:55:59.816873 systemd-logind[1904]: New session 23 of user core.
Jul 16 00:55:59.829250 systemd[1]: Started session-23.scope - Session 23 of User core.
Jul 16 00:56:00.017475 sshd[8605]: Connection closed by 147.75.109.163 port 44740
Jul 16 00:56:00.017647 sshd-session[8602]: pam_unix(sshd:session): session closed for user core
Jul 16 00:56:00.041465 systemd[1]: sshd@85-147.75.90.137:22-147.75.109.163:44740.service: Deactivated successfully.
Jul 16 00:56:00.045487 systemd[1]: session-23.scope: Deactivated successfully.
Jul 16 00:56:00.047699 systemd-logind[1904]: Session 23 logged out. Waiting for processes to exit.
Jul 16 00:56:00.054190 systemd[1]: Started sshd@86-147.75.90.137:22-147.75.109.163:44742.service - OpenSSH per-connection server daemon (147.75.109.163:44742).
Jul 16 00:56:00.056074 systemd-logind[1904]: Removed session 23.
Jul 16 00:56:00.153990 sshd[8629]: Accepted publickey for core from 147.75.109.163 port 44742 ssh2: RSA SHA256:fZrzoayed5b4K5BhAhcpFAkec9IHprc+NMqVGejEH7o
Jul 16 00:56:00.155304 sshd-session[8629]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 16 00:56:00.159819 systemd-logind[1904]: New session 24 of user core.
Jul 16 00:56:00.176318 systemd[1]: Started session-24.scope - Session 24 of User core.
Jul 16 00:56:00.308411 sshd[8637]: Connection closed by 147.75.109.163 port 44742
Jul 16 00:56:00.308610 sshd-session[8629]: pam_unix(sshd:session): session closed for user core
Jul 16 00:56:00.310659 systemd[1]: sshd@86-147.75.90.137:22-147.75.109.163:44742.service: Deactivated successfully.
Jul 16 00:56:00.311552 systemd[1]: session-24.scope: Deactivated successfully.
Jul 16 00:56:00.312043 systemd-logind[1904]: Session 24 logged out. Waiting for processes to exit.
Jul 16 00:56:00.312679 systemd-logind[1904]: Removed session 24.
Jul 16 00:56:00.909329 containerd[1916]: time="2025-07-16T00:56:00.909296860Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6af3035db27f8718f3a2fde2f9cccb134a0e6f14c30a3e1a3e3fc444aa35139e\" id:\"19ec73454af71b3d14061d71f4b6e7feccdac7f3d10191df8215f901f7169543\" pid:8675 exited_at:{seconds:1752627360 nanos:909144732}"
Jul 16 00:56:03.232362 systemd[1]: Started sshd@87-147.75.90.137:22-203.55.224.216:56308.service - OpenSSH per-connection server daemon (203.55.224.216:56308).
Jul 16 00:56:05.331297 sshd[8688]: Received disconnect from 203.55.224.216 port 56308:11: Bye Bye [preauth]
Jul 16 00:56:05.331297 sshd[8688]: Disconnected from authenticating user root 203.55.224.216 port 56308 [preauth]
Jul 16 00:56:05.334609 systemd[1]: sshd@87-147.75.90.137:22-203.55.224.216:56308.service: Deactivated successfully.
Jul 16 00:56:05.343063 systemd[1]: Started sshd@88-147.75.90.137:22-147.75.109.163:44748.service - OpenSSH per-connection server daemon (147.75.109.163:44748).
Jul 16 00:56:05.433020 sshd[8713]: Accepted publickey for core from 147.75.109.163 port 44748 ssh2: RSA SHA256:fZrzoayed5b4K5BhAhcpFAkec9IHprc+NMqVGejEH7o
Jul 16 00:56:05.434199 sshd-session[8713]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 16 00:56:05.438918 systemd-logind[1904]: New session 25 of user core.
Jul 16 00:56:05.452079 systemd[1]: Started session-25.scope - Session 25 of User core.
Jul 16 00:56:05.542522 sshd[8715]: Connection closed by 147.75.109.163 port 44748
Jul 16 00:56:05.542712 sshd-session[8713]: pam_unix(sshd:session): session closed for user core
Jul 16 00:56:05.544741 systemd[1]: sshd@88-147.75.90.137:22-147.75.109.163:44748.service: Deactivated successfully.
Jul 16 00:56:05.545653 systemd[1]: session-25.scope: Deactivated successfully.
Jul 16 00:56:05.546166 systemd-logind[1904]: Session 25 logged out. Waiting for processes to exit.
Jul 16 00:56:05.546823 systemd-logind[1904]: Removed session 25.
Jul 16 00:56:07.756140 containerd[1916]: time="2025-07-16T00:56:07.756108494Z" level=info msg="TaskExit event in podsandbox handler container_id:\"18dab54ef0693535b1092e84ba72a8e49c139c132335e0dd0f6d5c4f2227eef4\" id:\"a5f7d4c274d2464d5164d6afd5bb919cf819a012d6d19a508a90f31e10e0f283\" pid:8751 exited_at:{seconds:1752627367 nanos:755872889}"
Jul 16 00:56:09.271244 containerd[1916]: time="2025-07-16T00:56:09.271209408Z" level=info msg="TaskExit event in podsandbox handler container_id:\"be800ec18f7b81899758407874dcdc54a3764b6d537e5b7acde51b3b213d59f0\" id:\"827fcda7e4188d9763f6d3011d1c800623c470a2ac25aa6fd5323726181b65aa\" pid:8785 exited_at:{seconds:1752627369 nanos:270978377}"
Jul 16 00:56:10.571441 systemd[1]: Started sshd@89-147.75.90.137:22-147.75.109.163:36056.service - OpenSSH per-connection server daemon (147.75.109.163:36056).
Jul 16 00:56:10.628284 sshd[8809]: Accepted publickey for core from 147.75.109.163 port 36056 ssh2: RSA SHA256:fZrzoayed5b4K5BhAhcpFAkec9IHprc+NMqVGejEH7o
Jul 16 00:56:10.629020 sshd-session[8809]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 16 00:56:10.632344 systemd-logind[1904]: New session 26 of user core.
Jul 16 00:56:10.647089 systemd[1]: Started session-26.scope - Session 26 of User core.
Jul 16 00:56:10.738325 sshd[8811]: Connection closed by 147.75.109.163 port 36056
Jul 16 00:56:10.738560 sshd-session[8809]: pam_unix(sshd:session): session closed for user core
Jul 16 00:56:10.740456 systemd[1]: sshd@89-147.75.90.137:22-147.75.109.163:36056.service: Deactivated successfully.
Jul 16 00:56:10.741500 systemd[1]: session-26.scope: Deactivated successfully.
Jul 16 00:56:10.742245 systemd-logind[1904]: Session 26 logged out. Waiting for processes to exit.
Jul 16 00:56:10.742884 systemd-logind[1904]: Removed session 26.
Jul 16 00:56:15.759517 systemd[1]: Started sshd@90-147.75.90.137:22-147.75.109.163:36064.service - OpenSSH per-connection server daemon (147.75.109.163:36064).
Jul 16 00:56:15.802623 containerd[1916]: time="2025-07-16T00:56:15.802593968Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6af3035db27f8718f3a2fde2f9cccb134a0e6f14c30a3e1a3e3fc444aa35139e\" id:\"d10d4e4d82fafa2508f1bf6b9db06a92966a7624d28750f502c7d1bd82081257\" pid:8849 exited_at:{seconds:1752627375 nanos:802408583}"
Jul 16 00:56:15.815472 sshd[8843]: Accepted publickey for core from 147.75.109.163 port 36064 ssh2: RSA SHA256:fZrzoayed5b4K5BhAhcpFAkec9IHprc+NMqVGejEH7o
Jul 16 00:56:15.816966 sshd-session[8843]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 16 00:56:15.822365 systemd-logind[1904]: New session 27 of user core.
Jul 16 00:56:15.836985 systemd[1]: Started session-27.scope - Session 27 of User core.
Jul 16 00:56:15.916177 sshd[8860]: Connection closed by 147.75.109.163 port 36064
Jul 16 00:56:15.916395 sshd-session[8843]: pam_unix(sshd:session): session closed for user core
Jul 16 00:56:15.918589 systemd[1]: sshd@90-147.75.90.137:22-147.75.109.163:36064.service: Deactivated successfully.
Jul 16 00:56:15.919495 systemd[1]: session-27.scope: Deactivated successfully.
Jul 16 00:56:15.919886 systemd-logind[1904]: Session 27 logged out. Waiting for processes to exit.
Jul 16 00:56:15.920447 systemd-logind[1904]: Removed session 27.