Sep 12 17:54:06.001740 kernel: Linux version 6.6.106-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Sep 12 16:05:08 -00 2025
Sep 12 17:54:06.001754 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=1ff9ec556ac80c67ae2340139aa421bf26af13357ec9e72632b4878e9945dc9a
Sep 12 17:54:06.001762 kernel: BIOS-provided physical RAM map:
Sep 12 17:54:06.001766 kernel: BIOS-e820: [mem 0x0000000000000000-0x00000000000997ff] usable
Sep 12 17:54:06.001770 kernel: BIOS-e820: [mem 0x0000000000099800-0x000000000009ffff] reserved
Sep 12 17:54:06.001774 kernel: BIOS-e820: [mem 0x00000000000e0000-0x00000000000fffff] reserved
Sep 12 17:54:06.001778 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003fffffff] usable
Sep 12 17:54:06.001782 kernel: BIOS-e820: [mem 0x0000000040000000-0x00000000403fffff] reserved
Sep 12 17:54:06.001786 kernel: BIOS-e820: [mem 0x0000000040400000-0x00000000819c3fff] usable
Sep 12 17:54:06.001790 kernel: BIOS-e820: [mem 0x00000000819c4000-0x00000000819c4fff] ACPI NVS
Sep 12 17:54:06.001794 kernel: BIOS-e820: [mem 0x00000000819c5000-0x00000000819c5fff] reserved
Sep 12 17:54:06.001799 kernel: BIOS-e820: [mem 0x00000000819c6000-0x000000008afcdfff] usable
Sep 12 17:54:06.001804 kernel: BIOS-e820: [mem 0x000000008afce000-0x000000008c0b2fff] reserved
Sep 12 17:54:06.001808 kernel: BIOS-e820: [mem 0x000000008c0b3000-0x000000008c23bfff] usable
Sep 12 17:54:06.001813 kernel: BIOS-e820: [mem 0x000000008c23c000-0x000000008c66dfff] ACPI NVS
Sep 12 17:54:06.001818 kernel: BIOS-e820: [mem 0x000000008c66e000-0x000000008eefefff] reserved
Sep 12 17:54:06.001823 kernel: BIOS-e820: [mem 0x000000008eeff000-0x000000008eefffff] usable
Sep 12 17:54:06.001828 kernel: BIOS-e820: [mem 0x000000008ef00000-0x000000008fffffff] reserved
Sep 12 17:54:06.001832 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Sep 12 17:54:06.001837 kernel: BIOS-e820: [mem 0x00000000fe000000-0x00000000fe010fff] reserved
Sep 12 17:54:06.001841 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec00fff] reserved
Sep 12 17:54:06.001846 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved
Sep 12 17:54:06.001850 kernel: BIOS-e820: [mem 0x00000000ff000000-0x00000000ffffffff] reserved
Sep 12 17:54:06.001855 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000086effffff] usable
Sep 12 17:54:06.001860 kernel: NX (Execute Disable) protection: active
Sep 12 17:54:06.001864 kernel: APIC: Static calls initialized
Sep 12 17:54:06.001869 kernel: SMBIOS 3.2.1 present.
Sep 12 17:54:06.001873 kernel: DMI: Supermicro PIO-519C-MR-PH004/X11SCH-F, BIOS 2.6 12/05/2024
Sep 12 17:54:06.001879 kernel: tsc: Detected 3400.000 MHz processor
Sep 12 17:54:06.001883 kernel: tsc: Detected 3399.906 MHz TSC
Sep 12 17:54:06.001888 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Sep 12 17:54:06.001893 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Sep 12 17:54:06.001898 kernel: last_pfn = 0x86f000 max_arch_pfn = 0x400000000
Sep 12 17:54:06.001903 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 23), built from 10 variable MTRRs
Sep 12 17:54:06.001907 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Sep 12 17:54:06.001912 kernel: last_pfn = 0x8ef00 max_arch_pfn = 0x400000000
Sep 12 17:54:06.001917 kernel: Using GB pages for direct mapping
Sep 12 17:54:06.001922 kernel: ACPI: Early table checksum verification disabled
Sep 12 17:54:06.001927 kernel: ACPI: RSDP 0x00000000000F05B0 000024 (v02 SUPERM)
Sep 12 17:54:06.001932 kernel: ACPI: XSDT 0x000000008C54F0C8 00010C (v01 SUPERM SUPERM 01072009 AMI 00010013)
Sep 12 17:54:06.001939 kernel: ACPI: FACP 0x000000008C58B5F0 000114 (v06 01072009 AMI 00010013)
Sep 12 17:54:06.001944 kernel: ACPI: DSDT 0x000000008C54F268 03C386 (v02 SUPERM SMCI--MB 01072009 INTL 20160527)
Sep 12 17:54:06.001949 kernel: ACPI: FACS 0x000000008C66DF80 000040
Sep 12 17:54:06.001954 kernel: ACPI: APIC 0x000000008C58B708 00012C (v04 01072009 AMI 00010013)
Sep 12 17:54:06.001960 kernel: ACPI: FPDT 0x000000008C58B838 000044 (v01 01072009 AMI 00010013)
Sep 12 17:54:06.001965 kernel: ACPI: FIDT 0x000000008C58B880 00009C (v01 SUPERM SMCI--MB 01072009 AMI 00010013)
Sep 12 17:54:06.001970 kernel: ACPI: MCFG 0x000000008C58B920 00003C (v01 SUPERM SMCI--MB 01072009 MSFT 00000097)
Sep 12 17:54:06.001975 kernel: ACPI: SPMI 0x000000008C58B960 000041 (v05 SUPERM SMCI--MB 00000000 AMI. 00000000)
Sep 12 17:54:06.001980 kernel: ACPI: SSDT 0x000000008C58B9A8 001B1C (v02 CpuRef CpuSsdt 00003000 INTL 20160527)
Sep 12 17:54:06.001985 kernel: ACPI: SSDT 0x000000008C58D4C8 0031C6 (v02 SaSsdt SaSsdt 00003000 INTL 20160527)
Sep 12 17:54:06.001990 kernel: ACPI: SSDT 0x000000008C590690 00232B (v02 PegSsd PegSsdt 00001000 INTL 20160527)
Sep 12 17:54:06.001996 kernel: ACPI: HPET 0x000000008C5929C0 000038 (v01 SUPERM SMCI--MB 00000002 01000013)
Sep 12 17:54:06.002001 kernel: ACPI: SSDT 0x000000008C5929F8 000FAE (v02 SUPERM Ther_Rvp 00001000 INTL 20160527)
Sep 12 17:54:06.002006 kernel: ACPI: SSDT 0x000000008C5939A8 0008F7 (v02 INTEL xh_mossb 00000000 INTL 20160527)
Sep 12 17:54:06.002010 kernel: ACPI: UEFI 0x000000008C5942A0 000042 (v01 SUPERM SMCI--MB 00000002 01000013)
Sep 12 17:54:06.002015 kernel: ACPI: LPIT 0x000000008C5942E8 000094 (v01 SUPERM SMCI--MB 00000002 01000013)
Sep 12 17:54:06.002020 kernel: ACPI: SSDT 0x000000008C594380 0027DE (v02 SUPERM PtidDevc 00001000 INTL 20160527)
Sep 12 17:54:06.002025 kernel: ACPI: SSDT 0x000000008C596B60 0014E2 (v02 SUPERM TbtTypeC 00000000 INTL 20160527)
Sep 12 17:54:06.002030 kernel: ACPI: DBGP 0x000000008C598048 000034 (v01 SUPERM SMCI--MB 00000002 01000013)
Sep 12 17:54:06.002036 kernel: ACPI: DBG2 0x000000008C598080 000054 (v00 SUPERM SMCI--MB 00000002 01000013)
Sep 12 17:54:06.002041 kernel: ACPI: SSDT 0x000000008C5980D8 001B67 (v02 SUPERM UsbCTabl 00001000 INTL 20160527)
Sep 12 17:54:06.002046 kernel: ACPI: DMAR 0x000000008C599C40 000070 (v01 INTEL EDK2 00000002 01000013)
Sep 12 17:54:06.002051 kernel: ACPI: SSDT 0x000000008C599CB0 000144 (v02 Intel ADebTabl 00001000 INTL 20160527)
Sep 12 17:54:06.002056 kernel: ACPI: TPM2 0x000000008C599DF8 000034 (v04 SUPERM SMCI--MB 00000001 AMI 00000000)
Sep 12 17:54:06.002061 kernel: ACPI: SSDT 0x000000008C599E30 000D8F (v02 INTEL SpsNm 00000002 INTL 20160527)
Sep 12 17:54:06.002066 kernel: ACPI: WSMT 0x000000008C59ABC0 000028 (v01 SUPERM 01072009 AMI 00010013)
Sep 12 17:54:06.002071 kernel: ACPI: EINJ 0x000000008C59ABE8 000130 (v01 AMI AMI.EINJ 00000000 AMI. 00000000)
Sep 12 17:54:06.002076 kernel: ACPI: ERST 0x000000008C59AD18 000230 (v01 AMIER AMI.ERST 00000000 AMI. 00000000)
Sep 12 17:54:06.002082 kernel: ACPI: BERT 0x000000008C59AF48 000030 (v01 AMI AMI.BERT 00000000 AMI. 00000000)
Sep 12 17:54:06.002087 kernel: ACPI: HEST 0x000000008C59AF78 00027C (v01 AMI AMI.HEST 00000000 AMI. 00000000)
Sep 12 17:54:06.002092 kernel: ACPI: SSDT 0x000000008C59B1F8 000162 (v01 SUPERM SMCCDN 00000000 INTL 20181221)
Sep 12 17:54:06.002097 kernel: ACPI: Reserving FACP table memory at [mem 0x8c58b5f0-0x8c58b703]
Sep 12 17:54:06.002102 kernel: ACPI: Reserving DSDT table memory at [mem 0x8c54f268-0x8c58b5ed]
Sep 12 17:54:06.002107 kernel: ACPI: Reserving FACS table memory at [mem 0x8c66df80-0x8c66dfbf]
Sep 12 17:54:06.002112 kernel: ACPI: Reserving APIC table memory at [mem 0x8c58b708-0x8c58b833]
Sep 12 17:54:06.002117 kernel: ACPI: Reserving FPDT table memory at [mem 0x8c58b838-0x8c58b87b]
Sep 12 17:54:06.002122 kernel: ACPI: Reserving FIDT table memory at [mem 0x8c58b880-0x8c58b91b]
Sep 12 17:54:06.002128 kernel: ACPI: Reserving MCFG table memory at [mem 0x8c58b920-0x8c58b95b]
Sep 12 17:54:06.002133 kernel: ACPI: Reserving SPMI table memory at [mem 0x8c58b960-0x8c58b9a0]
Sep 12 17:54:06.002138 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58b9a8-0x8c58d4c3]
Sep 12 17:54:06.002143 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58d4c8-0x8c59068d]
Sep 12 17:54:06.002147 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c590690-0x8c5929ba]
Sep 12 17:54:06.002152 kernel: ACPI: Reserving HPET table memory at [mem 0x8c5929c0-0x8c5929f7]
Sep 12 17:54:06.002157 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c5929f8-0x8c5939a5]
Sep 12 17:54:06.002162 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c5939a8-0x8c59429e]
Sep 12 17:54:06.002167 kernel: ACPI: Reserving UEFI table memory at [mem 0x8c5942a0-0x8c5942e1]
Sep 12 17:54:06.002173 kernel: ACPI: Reserving LPIT table memory at [mem 0x8c5942e8-0x8c59437b]
Sep 12 17:54:06.002178 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c594380-0x8c596b5d]
Sep 12 17:54:06.002183 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c596b60-0x8c598041]
Sep 12 17:54:06.002188 kernel: ACPI: Reserving DBGP table memory at [mem 0x8c598048-0x8c59807b]
Sep 12 17:54:06.002200 kernel: ACPI: Reserving DBG2 table memory at [mem 0x8c598080-0x8c5980d3]
Sep 12 17:54:06.002206 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c5980d8-0x8c599c3e]
Sep 12 17:54:06.002229 kernel: ACPI: Reserving DMAR table memory at [mem 0x8c599c40-0x8c599caf]
Sep 12 17:54:06.002234 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c599cb0-0x8c599df3]
Sep 12 17:54:06.002254 kernel: ACPI: Reserving TPM2 table memory at [mem 0x8c599df8-0x8c599e2b]
Sep 12 17:54:06.002260 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c599e30-0x8c59abbe]
Sep 12 17:54:06.002265 kernel: ACPI: Reserving WSMT table memory at [mem 0x8c59abc0-0x8c59abe7]
Sep 12 17:54:06.002270 kernel: ACPI: Reserving EINJ table memory at [mem 0x8c59abe8-0x8c59ad17]
Sep 12 17:54:06.002275 kernel: ACPI: Reserving ERST table memory at [mem 0x8c59ad18-0x8c59af47]
Sep 12 17:54:06.002280 kernel: ACPI: Reserving BERT table memory at [mem 0x8c59af48-0x8c59af77]
Sep 12 17:54:06.002285 kernel: ACPI: Reserving HEST table memory at [mem 0x8c59af78-0x8c59b1f3]
Sep 12 17:54:06.002290 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c59b1f8-0x8c59b359]
Sep 12 17:54:06.002294 kernel: No NUMA configuration found
Sep 12 17:54:06.002299 kernel: Faking a node at [mem 0x0000000000000000-0x000000086effffff]
Sep 12 17:54:06.002304 kernel: NODE_DATA(0) allocated [mem 0x86effa000-0x86effffff]
Sep 12 17:54:06.002310 kernel: Zone ranges:
Sep 12 17:54:06.002315 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Sep 12 17:54:06.002320 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Sep 12 17:54:06.002325 kernel: Normal [mem 0x0000000100000000-0x000000086effffff]
Sep 12 17:54:06.002330 kernel: Movable zone start for each node
Sep 12 17:54:06.002335 kernel: Early memory node ranges
Sep 12 17:54:06.002340 kernel: node 0: [mem 0x0000000000001000-0x0000000000098fff]
Sep 12 17:54:06.002345 kernel: node 0: [mem 0x0000000000100000-0x000000003fffffff]
Sep 12 17:54:06.002350 kernel: node 0: [mem 0x0000000040400000-0x00000000819c3fff]
Sep 12 17:54:06.002356 kernel: node 0: [mem 0x00000000819c6000-0x000000008afcdfff]
Sep 12 17:54:06.002361 kernel: node 0: [mem 0x000000008c0b3000-0x000000008c23bfff]
Sep 12 17:54:06.002366 kernel: node 0: [mem 0x000000008eeff000-0x000000008eefffff]
Sep 12 17:54:06.002371 kernel: node 0: [mem 0x0000000100000000-0x000000086effffff]
Sep 12 17:54:06.002380 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000086effffff]
Sep 12 17:54:06.002385 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 12 17:54:06.002391 kernel: On node 0, zone DMA: 103 pages in unavailable ranges
Sep 12 17:54:06.002396 kernel: On node 0, zone DMA32: 1024 pages in unavailable ranges
Sep 12 17:54:06.002402 kernel: On node 0, zone DMA32: 2 pages in unavailable ranges
Sep 12 17:54:06.002408 kernel: On node 0, zone DMA32: 4325 pages in unavailable ranges
Sep 12 17:54:06.002413 kernel: On node 0, zone DMA32: 11459 pages in unavailable ranges
Sep 12 17:54:06.002418 kernel: On node 0, zone Normal: 4352 pages in unavailable ranges
Sep 12 17:54:06.002424 kernel: On node 0, zone Normal: 4096 pages in unavailable ranges
Sep 12 17:54:06.002429 kernel: ACPI: PM-Timer IO Port: 0x1808
Sep 12 17:54:06.002434 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1])
Sep 12 17:54:06.002440 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1])
Sep 12 17:54:06.002445 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1])
Sep 12 17:54:06.002451 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1])
Sep 12 17:54:06.002456 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1])
Sep 12 17:54:06.002462 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1])
Sep 12 17:54:06.002467 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1])
Sep 12 17:54:06.002472 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1])
Sep 12 17:54:06.002477 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1])
Sep 12 17:54:06.002482 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1])
Sep 12 17:54:06.002488 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1])
Sep 12 17:54:06.002493 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1])
Sep 12 17:54:06.002499 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1])
Sep 12 17:54:06.002505 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1])
Sep 12 17:54:06.002510 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1])
Sep 12 17:54:06.002515 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1])
Sep 12 17:54:06.002520 kernel: IOAPIC[0]: apic_id 2, version 32, address 0xfec00000, GSI 0-119
Sep 12 17:54:06.002526 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Sep 12 17:54:06.002531 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Sep 12 17:54:06.002536 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Sep 12 17:54:06.002542 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Sep 12 17:54:06.002548 kernel: TSC deadline timer available
Sep 12 17:54:06.002553 kernel: smpboot: Allowing 16 CPUs, 0 hotplug CPUs
Sep 12 17:54:06.002558 kernel: [mem 0x90000000-0xdfffffff] available for PCI devices
Sep 12 17:54:06.002564 kernel: Booting paravirtualized kernel on bare hardware
Sep 12 17:54:06.002569 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Sep 12 17:54:06.002575 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:16 nr_cpu_ids:16 nr_node_ids:1
Sep 12 17:54:06.002580 kernel: percpu: Embedded 58 pages/cpu s197160 r8192 d32216 u262144
Sep 12 17:54:06.002585 kernel: pcpu-alloc: s197160 r8192 d32216 u262144 alloc=1*2097152
Sep 12 17:54:06.002591 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15
Sep 12 17:54:06.002597 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=1ff9ec556ac80c67ae2340139aa421bf26af13357ec9e72632b4878e9945dc9a
Sep 12 17:54:06.002603 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 12 17:54:06.002608 kernel: random: crng init done
Sep 12 17:54:06.002614 kernel: Dentry cache hash table entries: 4194304 (order: 13, 33554432 bytes, linear)
Sep 12 17:54:06.002619 kernel: Inode-cache hash table entries: 2097152 (order: 12, 16777216 bytes, linear)
Sep 12 17:54:06.002624 kernel: Fallback order for Node 0: 0
Sep 12 17:54:06.002630 kernel: Built 1 zonelists, mobility grouping on. Total pages: 8232416
Sep 12 17:54:06.002635 kernel: Policy zone: Normal
Sep 12 17:54:06.002641 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 12 17:54:06.002646 kernel: software IO TLB: area num 16.
Sep 12 17:54:06.002652 kernel: Memory: 32720308K/33452984K available (12288K kernel code, 2293K rwdata, 22744K rodata, 42884K init, 2312K bss, 732416K reserved, 0K cma-reserved)
Sep 12 17:54:06.002657 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1
Sep 12 17:54:06.002663 kernel: ftrace: allocating 37974 entries in 149 pages
Sep 12 17:54:06.002668 kernel: ftrace: allocated 149 pages with 4 groups
Sep 12 17:54:06.002673 kernel: Dynamic Preempt: voluntary
Sep 12 17:54:06.002679 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 12 17:54:06.002684 kernel: rcu: RCU event tracing is enabled.
Sep 12 17:54:06.002691 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16.
Sep 12 17:54:06.002696 kernel: Trampoline variant of Tasks RCU enabled.
Sep 12 17:54:06.002702 kernel: Rude variant of Tasks RCU enabled.
Sep 12 17:54:06.002707 kernel: Tracing variant of Tasks RCU enabled.
Sep 12 17:54:06.002712 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 12 17:54:06.002718 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16
Sep 12 17:54:06.002723 kernel: NR_IRQS: 33024, nr_irqs: 2184, preallocated irqs: 16
Sep 12 17:54:06.002728 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 12 17:54:06.002733 kernel: Console: colour dummy device 80x25
Sep 12 17:54:06.002739 kernel: printk: console [tty0] enabled
Sep 12 17:54:06.002745 kernel: printk: console [ttyS1] enabled
Sep 12 17:54:06.002751 kernel: ACPI: Core revision 20230628
Sep 12 17:54:06.002756 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 79635855245 ns
Sep 12 17:54:06.002762 kernel: APIC: Switch to symmetric I/O mode setup
Sep 12 17:54:06.002767 kernel: DMAR: Host address width 39
Sep 12 17:54:06.002772 kernel: DMAR: DRHD base: 0x000000fed91000 flags: 0x1
Sep 12 17:54:06.002778 kernel: DMAR: dmar0: reg_base_addr fed91000 ver 1:0 cap d2008c40660462 ecap f050da
Sep 12 17:54:06.002783 kernel: DMAR: RMRR base: 0x0000008cf19000 end: 0x0000008d162fff
Sep 12 17:54:06.002788 kernel: DMAR-IR: IOAPIC id 2 under DRHD base 0xfed91000 IOMMU 0
Sep 12 17:54:06.002795 kernel: DMAR-IR: HPET id 0 under DRHD base 0xfed91000
Sep 12 17:54:06.002800 kernel: DMAR-IR: Queued invalidation will be enabled to support x2apic and Intr-remapping.
Sep 12 17:54:06.002805 kernel: DMAR-IR: Enabled IRQ remapping in x2apic mode
Sep 12 17:54:06.002811 kernel: x2apic enabled
Sep 12 17:54:06.002816 kernel: APIC: Switched APIC routing to: cluster x2apic
Sep 12 17:54:06.002821 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Sep 12 17:54:06.002827 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x3101f59f5e6, max_idle_ns: 440795259996 ns
Sep 12 17:54:06.002832 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 6799.81 BogoMIPS (lpj=3399906)
Sep 12 17:54:06.002838 kernel: CPU0: Thermal monitoring enabled (TM1)
Sep 12 17:54:06.002844 kernel: process: using mwait in idle threads
Sep 12 17:54:06.002849 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Sep 12 17:54:06.002854 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4
Sep 12 17:54:06.002860 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Sep 12 17:54:06.002865 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit
Sep 12 17:54:06.002870 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall
Sep 12 17:54:06.002876 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS
Sep 12 17:54:06.002881 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT
Sep 12 17:54:06.002887 kernel: RETBleed: Mitigation: Enhanced IBRS
Sep 12 17:54:06.002893 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Sep 12 17:54:06.002898 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Sep 12 17:54:06.002903 kernel: TAA: Mitigation: TSX disabled
Sep 12 17:54:06.002909 kernel: MMIO Stale Data: Mitigation: Clear CPU buffers
Sep 12 17:54:06.002914 kernel: SRBDS: Mitigation: Microcode
Sep 12 17:54:06.002920 kernel: GDS: Mitigation: Microcode
Sep 12 17:54:06.002925 kernel: active return thunk: its_return_thunk
Sep 12 17:54:06.002930 kernel: ITS: Mitigation: Aligned branch/return thunks
Sep 12 17:54:06.002935 kernel: VMSCAPE: Mitigation: IBPB before exit to userspace
Sep 12 17:54:06.002942 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Sep 12 17:54:06.002947 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Sep 12 17:54:06.002952 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Sep 12 17:54:06.002958 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Sep 12 17:54:06.002963 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Sep 12 17:54:06.002968 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Sep 12 17:54:06.002973 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64
Sep 12 17:54:06.002979 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64
Sep 12 17:54:06.002984 kernel: x86/fpu: Enabled xstate features 0x1f, context size is 960 bytes, using 'compacted' format.
Sep 12 17:54:06.002990 kernel: Freeing SMP alternatives memory: 32K
Sep 12 17:54:06.002996 kernel: pid_max: default: 32768 minimum: 301
Sep 12 17:54:06.003001 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Sep 12 17:54:06.003006 kernel: landlock: Up and running.
Sep 12 17:54:06.003012 kernel: SELinux: Initializing.
Sep 12 17:54:06.003017 kernel: Mount-cache hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 12 17:54:06.003022 kernel: Mountpoint-cache hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 12 17:54:06.003027 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd)
Sep 12 17:54:06.003034 kernel: RCU Tasks: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Sep 12 17:54:06.003039 kernel: RCU Tasks Rude: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Sep 12 17:54:06.003045 kernel: RCU Tasks Trace: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Sep 12 17:54:06.003050 kernel: Performance Events: PEBS fmt3+, Skylake events, 32-deep LBR, full-width counters, Intel PMU driver.
Sep 12 17:54:06.003056 kernel: ... version: 4
Sep 12 17:54:06.003061 kernel: ... bit width: 48
Sep 12 17:54:06.003066 kernel: ... generic registers: 4
Sep 12 17:54:06.003071 kernel: ... value mask: 0000ffffffffffff
Sep 12 17:54:06.003077 kernel: ... max period: 00007fffffffffff
Sep 12 17:54:06.003083 kernel: ... fixed-purpose events: 3
Sep 12 17:54:06.003088 kernel: ... event mask: 000000070000000f
Sep 12 17:54:06.003094 kernel: signal: max sigframe size: 2032
Sep 12 17:54:06.003099 kernel: Estimated ratio of average max frequency by base frequency (times 1024): 1445
Sep 12 17:54:06.003104 kernel: rcu: Hierarchical SRCU implementation.
Sep 12 17:54:06.003110 kernel: rcu: Max phase no-delay instances is 400.
Sep 12 17:54:06.003115 kernel: NMI watchdog: Enabled. Permanently consumes one hw-PMU counter.
Sep 12 17:54:06.003120 kernel: smp: Bringing up secondary CPUs ...
Sep 12 17:54:06.003126 kernel: smpboot: x86: Booting SMP configuration:
Sep 12 17:54:06.003132 kernel: .... node #0, CPUs: #1 #2 #3 #4 #5 #6 #7 #8 #9 #10 #11 #12 #13 #14 #15
Sep 12 17:54:06.003138 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Sep 12 17:54:06.003143 kernel: smp: Brought up 1 node, 16 CPUs
Sep 12 17:54:06.003148 kernel: smpboot: Max logical packages: 1
Sep 12 17:54:06.003154 kernel: smpboot: Total of 16 processors activated (108796.99 BogoMIPS)
Sep 12 17:54:06.003159 kernel: devtmpfs: initialized
Sep 12 17:54:06.003164 kernel: x86/mm: Memory block size: 128MB
Sep 12 17:54:06.003170 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x819c4000-0x819c4fff] (4096 bytes)
Sep 12 17:54:06.003175 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x8c23c000-0x8c66dfff] (4399104 bytes)
Sep 12 17:54:06.003181 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 12 17:54:06.003187 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear)
Sep 12 17:54:06.003194 kernel: pinctrl core: initialized pinctrl subsystem
Sep 12 17:54:06.003200 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 12 17:54:06.003205 kernel: audit: initializing netlink subsys (disabled)
Sep 12 17:54:06.003232 kernel: audit: type=2000 audit(1757699640.119:1): state=initialized audit_enabled=0 res=1
Sep 12 17:54:06.003237 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 12 17:54:06.003258 kernel: thermal_sys: Registered thermal governor 'user_space'
Sep 12 17:54:06.003264 kernel: cpuidle: using governor menu
Sep 12 17:54:06.003270 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 12 17:54:06.003276 kernel: dca service started, version 1.12.1
Sep 12 17:54:06.003281 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000)
Sep 12 17:54:06.003286 kernel: PCI: Using configuration type 1 for base access
Sep 12 17:54:06.003292 kernel: ENERGY_PERF_BIAS: Set to 'normal', was 'performance'
Sep 12 17:54:06.003297 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Sep 12 17:54:06.003302 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 12 17:54:06.003308 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Sep 12 17:54:06.003313 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 12 17:54:06.003319 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Sep 12 17:54:06.003325 kernel: ACPI: Added _OSI(Module Device)
Sep 12 17:54:06.003330 kernel: ACPI: Added _OSI(Processor Device)
Sep 12 17:54:06.003335 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 12 17:54:06.003341 kernel: ACPI: 12 ACPI AML tables successfully acquired and loaded
Sep 12 17:54:06.003346 kernel: ACPI: Dynamic OEM Table Load:
Sep 12 17:54:06.003351 kernel: ACPI: SSDT 0xFFFF989041AF6000 000400 (v02 PmRef Cpu0Cst 00003001 INTL 20160527)
Sep 12 17:54:06.003357 kernel: ACPI: Dynamic OEM Table Load:
Sep 12 17:54:06.003362 kernel: ACPI: SSDT 0xFFFF989041AE9800 000683 (v02 PmRef Cpu0Ist 00003000 INTL 20160527)
Sep 12 17:54:06.003368 kernel: ACPI: Dynamic OEM Table Load:
Sep 12 17:54:06.003374 kernel: ACPI: SSDT 0xFFFF989040246900 0000F4 (v02 PmRef Cpu0Psd 00003000 INTL 20160527)
Sep 12 17:54:06.003379 kernel: ACPI: Dynamic OEM Table Load:
Sep 12 17:54:06.003384 kernel: ACPI: SSDT 0xFFFF989041E58800 0005FC (v02 PmRef ApIst 00003000 INTL 20160527)
Sep 12 17:54:06.003389 kernel: ACPI: Dynamic OEM Table Load:
Sep 12 17:54:06.003394 kernel: ACPI: SSDT 0xFFFF98904012B000 000AB0 (v02 PmRef ApPsd 00003000 INTL 20160527)
Sep 12 17:54:06.003400 kernel: ACPI: Dynamic OEM Table Load:
Sep 12 17:54:06.003405 kernel: ACPI: SSDT 0xFFFF989041AF0C00 00030A (v02 PmRef ApCst 00003000 INTL 20160527)
Sep 12 17:54:06.003410 kernel: ACPI: _OSC evaluated successfully for all CPUs
Sep 12 17:54:06.003417 kernel: ACPI: Interpreter enabled
Sep 12 17:54:06.003422 kernel: ACPI: PM: (supports S0 S5)
Sep 12 17:54:06.003427 kernel: ACPI: Using IOAPIC for interrupt routing
Sep 12 17:54:06.003433 kernel: HEST: Enabling Firmware First mode for corrected errors.
Sep 12 17:54:06.003438 kernel: mce: [Firmware Bug]: Ignoring request to disable invalid MCA bank 14.
Sep 12 17:54:06.003443 kernel: HEST: Table parsing has been initialized.
Sep 12 17:54:06.003448 kernel: GHES: APEI firmware first mode is enabled by APEI bit and WHEA _OSC.
Sep 12 17:54:06.003454 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Sep 12 17:54:06.003459 kernel: PCI: Ignoring E820 reservations for host bridge windows
Sep 12 17:54:06.003465 kernel: ACPI: Enabled 9 GPEs in block 00 to 7F
Sep 12 17:54:06.003471 kernel: ACPI: \_SB_.PCI0.XDCI.USBC: New power resource
Sep 12 17:54:06.003476 kernel: ACPI: \_SB_.PCI0.SAT0.VOL0.V0PR: New power resource
Sep 12 17:54:06.003482 kernel: ACPI: \_SB_.PCI0.SAT0.VOL1.V1PR: New power resource
Sep 12 17:54:06.003487 kernel: ACPI: \_SB_.PCI0.SAT0.VOL2.V2PR: New power resource
Sep 12 17:54:06.003492 kernel: ACPI: \_SB_.PCI0.CNVW.WRST: New power resource
Sep 12 17:54:06.003498 kernel: ACPI: \_TZ_.FN00: New power resource
Sep 12 17:54:06.003503 kernel: ACPI: \_TZ_.FN01: New power resource
Sep 12 17:54:06.003509 kernel: ACPI: \_TZ_.FN02: New power resource
Sep 12 17:54:06.003514 kernel: ACPI: \_TZ_.FN03: New power resource
Sep 12 17:54:06.003520 kernel: ACPI: \_TZ_.FN04: New power resource
Sep 12 17:54:06.003526 kernel: ACPI: \PIN_: New power resource
Sep 12 17:54:06.003531 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-fe])
Sep 12 17:54:06.003609 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 12 17:54:06.003663 kernel: acpi PNP0A08:00: _OSC: platform does not support [AER]
Sep 12 17:54:06.003713 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability LTR]
Sep 12 17:54:06.003721 kernel: PCI host bridge to bus 0000:00
Sep 12 17:54:06.003776 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Sep 12 17:54:06.003821 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Sep 12 17:54:06.003864 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Sep 12 17:54:06.003909 kernel: pci_bus 0000:00: root bus resource [mem 0x90000000-0xdfffffff window]
Sep 12 17:54:06.003951 kernel: pci_bus 0000:00: root bus resource [mem 0xfc800000-0xfe7fffff window]
Sep 12 17:54:06.003993 kernel: pci_bus 0000:00: root bus resource [bus 00-fe]
Sep 12 17:54:06.004051 kernel: pci 0000:00:00.0: [8086:3e31] type 00 class 0x060000
Sep 12 17:54:06.004109 kernel: pci 0000:00:01.0: [8086:1901] type 01 class 0x060400
Sep 12 17:54:06.004158 kernel: pci 0000:00:01.0: PME# supported from D0 D3hot D3cold
Sep 12 17:54:06.004234 kernel: pci 0000:00:01.1: [8086:1905] type 01 class 0x060400
Sep 12 17:54:06.004298 kernel: pci 0000:00:01.1: PME# supported from D0 D3hot D3cold
Sep 12 17:54:06.004350 kernel: pci 0000:00:08.0: [8086:1911] type 00 class 0x088000
Sep 12 17:54:06.004402 kernel: pci 0000:00:08.0: reg 0x10: [mem 0x9551f000-0x9551ffff 64bit]
Sep 12 17:54:06.004458 kernel: pci 0000:00:12.0: [8086:a379] type 00 class 0x118000
Sep 12 17:54:06.004507 kernel: pci 0000:00:12.0: reg 0x10: [mem 0x9551e000-0x9551efff 64bit]
Sep 12 17:54:06.004560 kernel: pci 0000:00:14.0: [8086:a36d] type 00 class 0x0c0330
Sep 12 17:54:06.004609 kernel: pci 0000:00:14.0: reg 0x10: [mem 0x95500000-0x9550ffff 64bit]
Sep 12 17:54:06.004657 kernel: pci 0000:00:14.0: PME# supported from D3hot D3cold
Sep 12 17:54:06.004710 kernel: pci 0000:00:14.2: [8086:a36f] type 00 class 0x050000
Sep 12 17:54:06.004762 kernel: pci 0000:00:14.2: reg 0x10: [mem 0x95512000-0x95513fff 64bit]
Sep 12 17:54:06.004811 kernel: pci 0000:00:14.2: reg 0x18: [mem 0x9551d000-0x9551dfff 64bit]
Sep 12 17:54:06.004862 kernel: pci 0000:00:15.0: [8086:a368] type 00 class 0x0c8000
Sep 12 17:54:06.004911 kernel: pci 0000:00:15.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit]
Sep 12 17:54:06.004964 kernel: pci 0000:00:15.1: [8086:a369] type 00 class 0x0c8000
Sep 12 17:54:06.005012 kernel: pci 0000:00:15.1: reg 0x10: [mem 0x00000000-0x00000fff 64bit]
Sep 12 17:54:06.005075 kernel: pci 0000:00:16.0: [8086:a360] type 00 class 0x078000
Sep 12 17:54:06.005123 kernel: pci 0000:00:16.0: reg 0x10: [mem 0x9551a000-0x9551afff 64bit]
Sep 12 17:54:06.005171 kernel: pci 0000:00:16.0: PME# supported from D3hot
Sep 12 17:54:06.005258 kernel: pci 0000:00:16.1: [8086:a361] type 00 class 0x078000
Sep 12 17:54:06.005308 kernel: pci 0000:00:16.1: reg 0x10: [mem 0x95519000-0x95519fff 64bit]
Sep 12 17:54:06.005356 kernel: pci 0000:00:16.1: PME# supported from D3hot
Sep 12 17:54:06.005408 kernel: pci 0000:00:16.4: [8086:a364] type 00 class 0x078000
Sep 12 17:54:06.005459 kernel: pci 0000:00:16.4: reg 0x10: [mem 0x95518000-0x95518fff 64bit]
Sep 12 17:54:06.005508 kernel: pci 0000:00:16.4: PME# supported from D3hot
Sep 12 17:54:06.005562 kernel: pci 0000:00:17.0: [8086:a352] type 00 class 0x010601
Sep 12 17:54:06.005611 kernel: pci 0000:00:17.0: reg 0x10: [mem 0x95510000-0x95511fff]
Sep 12 17:54:06.005658 kernel: pci 0000:00:17.0: reg 0x14: [mem 0x95517000-0x955170ff]
Sep 12 17:54:06.005707 kernel: pci 0000:00:17.0: reg 0x18: [io 0x6050-0x6057]
Sep 12 17:54:06.005757 kernel: pci 0000:00:17.0: reg 0x1c: [io 0x6040-0x6043]
Sep 12 17:54:06.005806 kernel: pci 0000:00:17.0: reg 0x20: [io 0x6020-0x603f]
Sep 12 17:54:06.005853 kernel: pci 0000:00:17.0: reg 0x24: [mem 0x95516000-0x955167ff]
Sep 12 17:54:06.005901 kernel: pci 0000:00:17.0: PME# supported from D3hot
Sep 12 17:54:06.005954 kernel: pci 0000:00:1b.0: [8086:a340] type 01 class 0x060400
Sep 12 17:54:06.006005 kernel: pci 0000:00:1b.0: PME# supported from D0 D3hot D3cold
Sep 12 17:54:06.006059 kernel: pci 0000:00:1b.4: [8086:a32c] type 01 class 0x060400
Sep 12 17:54:06.006107 kernel: pci 0000:00:1b.4: PME# supported from D0 D3hot D3cold
Sep 12 17:54:06.006163 kernel: pci 0000:00:1b.5: [8086:a32d] type 01 class 0x060400
Sep 12 17:54:06.006253 kernel: pci 0000:00:1b.5: PME# supported from D0 D3hot D3cold
Sep 12 17:54:06.006310 kernel: pci 0000:00:1c.0: [8086:a338] type 01 class 0x060400
Sep 12 17:54:06.006360 kernel: pci 0000:00:1c.0: PME# supported from D0 D3hot D3cold
Sep 12 17:54:06.006419 kernel: pci 0000:00:1c.1: [8086:a339] type 01 class 0x060400
Sep 12 17:54:06.006469 kernel: pci 0000:00:1c.1: PME# supported from D0 D3hot D3cold
Sep 12 17:54:06.006524 kernel: pci 0000:00:1e.0: [8086:a328] type 00 class 0x078000
Sep 12 17:54:06.006572 kernel: pci 0000:00:1e.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit]
Sep 12 17:54:06.006626 kernel: pci 0000:00:1f.0: [8086:a309] type 00 class 0x060100
Sep 12 17:54:06.006680 kernel: pci 0000:00:1f.4: [8086:a323] type 00 class 0x0c0500
Sep 12 17:54:06.006729 kernel: pci 0000:00:1f.4: reg 0x10: [mem 0x95514000-0x955140ff 64bit]
Sep 12 17:54:06.006778 kernel: pci 0000:00:1f.4: reg 0x20: [io 0xefa0-0xefbf]
Sep 12 17:54:06.006830 kernel: pci 0000:00:1f.5: [8086:a324] type 00 class 0x0c8000
Sep 12 17:54:06.006879 kernel: pci 0000:00:1f.5: reg 0x10: [mem 0xfe010000-0xfe010fff]
Sep 12 17:54:06.006928 kernel: pci 0000:00:01.0: PCI bridge to [bus 01]
Sep 12 17:54:06.006987 kernel: pci 0000:02:00.0: [15b3:1015] type 00 class 0x020000
Sep 12 17:54:06.007038 kernel: pci 0000:02:00.0: reg 0x10: [mem 0x92000000-0x93ffffff 64bit pref]
Sep 12 17:54:06.007088 kernel: pci 0000:02:00.0: reg 0x30: [mem 0x95200000-0x952fffff pref]
Sep 12 17:54:06.007137 kernel: pci 0000:02:00.0: PME# supported from D3cold
Sep 12 17:54:06.007187 kernel: pci 0000:02:00.0: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref]
Sep 12 17:54:06.007274 kernel: pci 0000:02:00.0: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs)
Sep 12 17:54:06.007331 kernel: pci 0000:02:00.1: [15b3:1015] type 00 class 0x020000
Sep 12 17:54:06.007385 kernel: pci 0000:02:00.1: reg 0x10: [mem 0x90000000-0x91ffffff 64bit pref]
Sep 12 17:54:06.007434 kernel: pci 0000:02:00.1: reg 0x30: [mem 0x95100000-0x951fffff pref]
Sep 12 17:54:06.007484 kernel: pci 0000:02:00.1: PME# supported from D3cold
Sep 12 17:54:06.007533 kernel: pci 0000:02:00.1: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref]
Sep 12 17:54:06.007584 kernel: pci 0000:02:00.1: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs)
Sep 12 17:54:06.007633 kernel: pci 
0000:00:01.1: PCI bridge to [bus 02] Sep 12 17:54:06.007682 kernel: pci 0000:00:01.1: bridge window [mem 0x95100000-0x952fffff] Sep 12 17:54:06.007733 kernel: pci 0000:00:01.1: bridge window [mem 0x90000000-0x93ffffff 64bit pref] Sep 12 17:54:06.007782 kernel: pci 0000:00:1b.0: PCI bridge to [bus 03] Sep 12 17:54:06.007838 kernel: pci 0000:04:00.0: working around ROM BAR overlap defect Sep 12 17:54:06.007888 kernel: pci 0000:04:00.0: [8086:1533] type 00 class 0x020000 Sep 12 17:54:06.007938 kernel: pci 0000:04:00.0: reg 0x10: [mem 0x95400000-0x9547ffff] Sep 12 17:54:06.007986 kernel: pci 0000:04:00.0: reg 0x18: [io 0x5000-0x501f] Sep 12 17:54:06.008037 kernel: pci 0000:04:00.0: reg 0x1c: [mem 0x95480000-0x95483fff] Sep 12 17:54:06.008085 kernel: pci 0000:04:00.0: PME# supported from D0 D3hot D3cold Sep 12 17:54:06.008138 kernel: pci 0000:00:1b.4: PCI bridge to [bus 04] Sep 12 17:54:06.008186 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff] Sep 12 17:54:06.008272 kernel: pci 0000:00:1b.4: bridge window [mem 0x95400000-0x954fffff] Sep 12 17:54:06.008328 kernel: pci 0000:05:00.0: working around ROM BAR overlap defect Sep 12 17:54:06.008377 kernel: pci 0000:05:00.0: [8086:1533] type 00 class 0x020000 Sep 12 17:54:06.008427 kernel: pci 0000:05:00.0: reg 0x10: [mem 0x95300000-0x9537ffff] Sep 12 17:54:06.008476 kernel: pci 0000:05:00.0: reg 0x18: [io 0x4000-0x401f] Sep 12 17:54:06.008528 kernel: pci 0000:05:00.0: reg 0x1c: [mem 0x95380000-0x95383fff] Sep 12 17:54:06.008577 kernel: pci 0000:05:00.0: PME# supported from D0 D3hot D3cold Sep 12 17:54:06.008627 kernel: pci 0000:00:1b.5: PCI bridge to [bus 05] Sep 12 17:54:06.008678 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff] Sep 12 17:54:06.008728 kernel: pci 0000:00:1b.5: bridge window [mem 0x95300000-0x953fffff] Sep 12 17:54:06.008776 kernel: pci 0000:00:1c.0: PCI bridge to [bus 06] Sep 12 17:54:06.008830 kernel: pci 0000:07:00.0: [1a03:1150] type 01 class 0x060400 Sep 12 17:54:06.008884 kernel: pci 
0000:07:00.0: enabling Extended Tags Sep 12 17:54:06.008933 kernel: pci 0000:07:00.0: supports D1 D2 Sep 12 17:54:06.008983 kernel: pci 0000:07:00.0: PME# supported from D0 D1 D2 D3hot D3cold Sep 12 17:54:06.009031 kernel: pci 0000:00:1c.1: PCI bridge to [bus 07-08] Sep 12 17:54:06.009079 kernel: pci 0000:00:1c.1: bridge window [io 0x3000-0x3fff] Sep 12 17:54:06.009127 kernel: pci 0000:00:1c.1: bridge window [mem 0x94000000-0x950fffff] Sep 12 17:54:06.009180 kernel: pci_bus 0000:08: extended config space not accessible Sep 12 17:54:06.009284 kernel: pci 0000:08:00.0: [1a03:2000] type 00 class 0x030000 Sep 12 17:54:06.009340 kernel: pci 0000:08:00.0: reg 0x10: [mem 0x94000000-0x94ffffff] Sep 12 17:54:06.009392 kernel: pci 0000:08:00.0: reg 0x14: [mem 0x95000000-0x9501ffff] Sep 12 17:54:06.009443 kernel: pci 0000:08:00.0: reg 0x18: [io 0x3000-0x307f] Sep 12 17:54:06.009495 kernel: pci 0000:08:00.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Sep 12 17:54:06.009545 kernel: pci 0000:08:00.0: supports D1 D2 Sep 12 17:54:06.009597 kernel: pci 0000:08:00.0: PME# supported from D0 D1 D2 D3hot D3cold Sep 12 17:54:06.009648 kernel: pci 0000:07:00.0: PCI bridge to [bus 08] Sep 12 17:54:06.009699 kernel: pci 0000:07:00.0: bridge window [io 0x3000-0x3fff] Sep 12 17:54:06.009752 kernel: pci 0000:07:00.0: bridge window [mem 0x94000000-0x950fffff] Sep 12 17:54:06.009761 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 0 Sep 12 17:54:06.009767 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 1 Sep 12 17:54:06.009773 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 0 Sep 12 17:54:06.009778 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 0 Sep 12 17:54:06.009784 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 0 Sep 12 17:54:06.009790 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 0 Sep 12 17:54:06.009795 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 0 Sep 12 17:54:06.009803 kernel: ACPI: PCI: 
Interrupt link LNKH configured for IRQ 0 Sep 12 17:54:06.009809 kernel: iommu: Default domain type: Translated Sep 12 17:54:06.009814 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Sep 12 17:54:06.009820 kernel: PCI: Using ACPI for IRQ routing Sep 12 17:54:06.009826 kernel: PCI: pci_cache_line_size set to 64 bytes Sep 12 17:54:06.009831 kernel: e820: reserve RAM buffer [mem 0x00099800-0x0009ffff] Sep 12 17:54:06.009837 kernel: e820: reserve RAM buffer [mem 0x819c4000-0x83ffffff] Sep 12 17:54:06.009842 kernel: e820: reserve RAM buffer [mem 0x8afce000-0x8bffffff] Sep 12 17:54:06.009849 kernel: e820: reserve RAM buffer [mem 0x8c23c000-0x8fffffff] Sep 12 17:54:06.009854 kernel: e820: reserve RAM buffer [mem 0x8ef00000-0x8fffffff] Sep 12 17:54:06.009860 kernel: e820: reserve RAM buffer [mem 0x86f000000-0x86fffffff] Sep 12 17:54:06.009910 kernel: pci 0000:08:00.0: vgaarb: setting as boot VGA device Sep 12 17:54:06.009962 kernel: pci 0000:08:00.0: vgaarb: bridge control possible Sep 12 17:54:06.010014 kernel: pci 0000:08:00.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Sep 12 17:54:06.010022 kernel: vgaarb: loaded Sep 12 17:54:06.010028 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0 Sep 12 17:54:06.010034 kernel: hpet0: 8 comparators, 64-bit 24.000000 MHz counter Sep 12 17:54:06.010041 kernel: clocksource: Switched to clocksource tsc-early Sep 12 17:54:06.010047 kernel: VFS: Disk quotas dquot_6.6.0 Sep 12 17:54:06.010053 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Sep 12 17:54:06.010058 kernel: pnp: PnP ACPI init Sep 12 17:54:06.010111 kernel: system 00:00: [mem 0x40000000-0x403fffff] has been reserved Sep 12 17:54:06.010161 kernel: pnp 00:02: [dma 0 disabled] Sep 12 17:54:06.010236 kernel: pnp 00:03: [dma 0 disabled] Sep 12 17:54:06.010308 kernel: system 00:04: [io 0x0680-0x069f] has been reserved Sep 12 17:54:06.010353 kernel: system 00:04: [io 0x164e-0x164f] has been reserved Sep 12 
17:54:06.010401 kernel: system 00:05: [mem 0xfed10000-0xfed17fff] has been reserved Sep 12 17:54:06.010446 kernel: system 00:05: [mem 0xfed18000-0xfed18fff] has been reserved Sep 12 17:54:06.010490 kernel: system 00:05: [mem 0xfed19000-0xfed19fff] has been reserved Sep 12 17:54:06.010534 kernel: system 00:05: [mem 0xe0000000-0xefffffff] has been reserved Sep 12 17:54:06.010578 kernel: system 00:05: [mem 0xfed20000-0xfed3ffff] has been reserved Sep 12 17:54:06.010624 kernel: system 00:05: [mem 0xfed90000-0xfed93fff] could not be reserved Sep 12 17:54:06.010669 kernel: system 00:05: [mem 0xfed45000-0xfed8ffff] has been reserved Sep 12 17:54:06.010713 kernel: system 00:05: [mem 0xfee00000-0xfeefffff] could not be reserved Sep 12 17:54:06.010765 kernel: system 00:06: [io 0x1800-0x18fe] could not be reserved Sep 12 17:54:06.010813 kernel: system 00:06: [mem 0xfd000000-0xfd69ffff] has been reserved Sep 12 17:54:06.010858 kernel: system 00:06: [mem 0xfd6c0000-0xfd6cffff] has been reserved Sep 12 17:54:06.010903 kernel: system 00:06: [mem 0xfd6f0000-0xfdffffff] has been reserved Sep 12 17:54:06.010950 kernel: system 00:06: [mem 0xfe000000-0xfe01ffff] could not be reserved Sep 12 17:54:06.010994 kernel: system 00:06: [mem 0xfe200000-0xfe7fffff] has been reserved Sep 12 17:54:06.011038 kernel: system 00:06: [mem 0xff000000-0xffffffff] has been reserved Sep 12 17:54:06.011086 kernel: system 00:07: [io 0x2000-0x20fe] has been reserved Sep 12 17:54:06.011095 kernel: pnp: PnP ACPI: found 9 devices Sep 12 17:54:06.011101 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Sep 12 17:54:06.011107 kernel: NET: Registered PF_INET protocol family Sep 12 17:54:06.011114 kernel: IP idents hash table entries: 262144 (order: 9, 2097152 bytes, linear) Sep 12 17:54:06.011120 kernel: tcp_listen_portaddr_hash hash table entries: 16384 (order: 6, 262144 bytes, linear) Sep 12 17:54:06.011126 kernel: Table-perturb hash table entries: 65536 (order: 6, 
262144 bytes, linear) Sep 12 17:54:06.011131 kernel: TCP established hash table entries: 262144 (order: 9, 2097152 bytes, linear) Sep 12 17:54:06.011137 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Sep 12 17:54:06.011143 kernel: TCP: Hash tables configured (established 262144 bind 65536) Sep 12 17:54:06.011150 kernel: UDP hash table entries: 16384 (order: 7, 524288 bytes, linear) Sep 12 17:54:06.011156 kernel: UDP-Lite hash table entries: 16384 (order: 7, 524288 bytes, linear) Sep 12 17:54:06.011161 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Sep 12 17:54:06.011168 kernel: NET: Registered PF_XDP protocol family Sep 12 17:54:06.011262 kernel: pci 0000:00:15.0: BAR 0: assigned [mem 0x95515000-0x95515fff 64bit] Sep 12 17:54:06.011313 kernel: pci 0000:00:15.1: BAR 0: assigned [mem 0x9551b000-0x9551bfff 64bit] Sep 12 17:54:06.011362 kernel: pci 0000:00:1e.0: BAR 0: assigned [mem 0x9551c000-0x9551cfff 64bit] Sep 12 17:54:06.011411 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Sep 12 17:54:06.011462 kernel: pci 0000:02:00.0: BAR 7: no space for [mem size 0x00800000 64bit pref] Sep 12 17:54:06.011513 kernel: pci 0000:02:00.0: BAR 7: failed to assign [mem size 0x00800000 64bit pref] Sep 12 17:54:06.011563 kernel: pci 0000:02:00.1: BAR 7: no space for [mem size 0x00800000 64bit pref] Sep 12 17:54:06.011616 kernel: pci 0000:02:00.1: BAR 7: failed to assign [mem size 0x00800000 64bit pref] Sep 12 17:54:06.011667 kernel: pci 0000:00:01.1: PCI bridge to [bus 02] Sep 12 17:54:06.011716 kernel: pci 0000:00:01.1: bridge window [mem 0x95100000-0x952fffff] Sep 12 17:54:06.011764 kernel: pci 0000:00:01.1: bridge window [mem 0x90000000-0x93ffffff 64bit pref] Sep 12 17:54:06.011812 kernel: pci 0000:00:1b.0: PCI bridge to [bus 03] Sep 12 17:54:06.011864 kernel: pci 0000:00:1b.4: PCI bridge to [bus 04] Sep 12 17:54:06.011913 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff] Sep 12 17:54:06.011962 kernel: pci 0000:00:1b.4: bridge 
window [mem 0x95400000-0x954fffff] Sep 12 17:54:06.012010 kernel: pci 0000:00:1b.5: PCI bridge to [bus 05] Sep 12 17:54:06.012059 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff] Sep 12 17:54:06.012106 kernel: pci 0000:00:1b.5: bridge window [mem 0x95300000-0x953fffff] Sep 12 17:54:06.012154 kernel: pci 0000:00:1c.0: PCI bridge to [bus 06] Sep 12 17:54:06.012249 kernel: pci 0000:07:00.0: PCI bridge to [bus 08] Sep 12 17:54:06.012302 kernel: pci 0000:07:00.0: bridge window [io 0x3000-0x3fff] Sep 12 17:54:06.012353 kernel: pci 0000:07:00.0: bridge window [mem 0x94000000-0x950fffff] Sep 12 17:54:06.012401 kernel: pci 0000:00:1c.1: PCI bridge to [bus 07-08] Sep 12 17:54:06.012450 kernel: pci 0000:00:1c.1: bridge window [io 0x3000-0x3fff] Sep 12 17:54:06.012498 kernel: pci 0000:00:1c.1: bridge window [mem 0x94000000-0x950fffff] Sep 12 17:54:06.012543 kernel: pci_bus 0000:00: Some PCI device resources are unassigned, try booting with pci=realloc Sep 12 17:54:06.012586 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Sep 12 17:54:06.012630 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Sep 12 17:54:06.012672 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Sep 12 17:54:06.012718 kernel: pci_bus 0000:00: resource 7 [mem 0x90000000-0xdfffffff window] Sep 12 17:54:06.012760 kernel: pci_bus 0000:00: resource 8 [mem 0xfc800000-0xfe7fffff window] Sep 12 17:54:06.012810 kernel: pci_bus 0000:02: resource 1 [mem 0x95100000-0x952fffff] Sep 12 17:54:06.012855 kernel: pci_bus 0000:02: resource 2 [mem 0x90000000-0x93ffffff 64bit pref] Sep 12 17:54:06.012904 kernel: pci_bus 0000:04: resource 0 [io 0x5000-0x5fff] Sep 12 17:54:06.012948 kernel: pci_bus 0000:04: resource 1 [mem 0x95400000-0x954fffff] Sep 12 17:54:06.013001 kernel: pci_bus 0000:05: resource 0 [io 0x4000-0x4fff] Sep 12 17:54:06.013046 kernel: pci_bus 0000:05: resource 1 [mem 0x95300000-0x953fffff] Sep 12 17:54:06.013094 kernel: pci_bus 0000:07: resource 0 [io 
0x3000-0x3fff] Sep 12 17:54:06.013140 kernel: pci_bus 0000:07: resource 1 [mem 0x94000000-0x950fffff] Sep 12 17:54:06.013187 kernel: pci_bus 0000:08: resource 0 [io 0x3000-0x3fff] Sep 12 17:54:06.013280 kernel: pci_bus 0000:08: resource 1 [mem 0x94000000-0x950fffff] Sep 12 17:54:06.013288 kernel: PCI: CLS 64 bytes, default 64 Sep 12 17:54:06.013296 kernel: DMAR: No ATSR found Sep 12 17:54:06.013302 kernel: DMAR: No SATC found Sep 12 17:54:06.013308 kernel: DMAR: dmar0: Using Queued invalidation Sep 12 17:54:06.013356 kernel: pci 0000:00:00.0: Adding to iommu group 0 Sep 12 17:54:06.013406 kernel: pci 0000:00:01.0: Adding to iommu group 1 Sep 12 17:54:06.013454 kernel: pci 0000:00:01.1: Adding to iommu group 1 Sep 12 17:54:06.013503 kernel: pci 0000:00:08.0: Adding to iommu group 2 Sep 12 17:54:06.013551 kernel: pci 0000:00:12.0: Adding to iommu group 3 Sep 12 17:54:06.013600 kernel: pci 0000:00:14.0: Adding to iommu group 4 Sep 12 17:54:06.013650 kernel: pci 0000:00:14.2: Adding to iommu group 4 Sep 12 17:54:06.013699 kernel: pci 0000:00:15.0: Adding to iommu group 5 Sep 12 17:54:06.013746 kernel: pci 0000:00:15.1: Adding to iommu group 5 Sep 12 17:54:06.013795 kernel: pci 0000:00:16.0: Adding to iommu group 6 Sep 12 17:54:06.013843 kernel: pci 0000:00:16.1: Adding to iommu group 6 Sep 12 17:54:06.013892 kernel: pci 0000:00:16.4: Adding to iommu group 6 Sep 12 17:54:06.013940 kernel: pci 0000:00:17.0: Adding to iommu group 7 Sep 12 17:54:06.013989 kernel: pci 0000:00:1b.0: Adding to iommu group 8 Sep 12 17:54:06.014040 kernel: pci 0000:00:1b.4: Adding to iommu group 9 Sep 12 17:54:06.014088 kernel: pci 0000:00:1b.5: Adding to iommu group 10 Sep 12 17:54:06.014137 kernel: pci 0000:00:1c.0: Adding to iommu group 11 Sep 12 17:54:06.014184 kernel: pci 0000:00:1c.1: Adding to iommu group 12 Sep 12 17:54:06.014279 kernel: pci 0000:00:1e.0: Adding to iommu group 13 Sep 12 17:54:06.014327 kernel: pci 0000:00:1f.0: Adding to iommu group 14 Sep 12 17:54:06.014375 kernel: pci 
0000:00:1f.4: Adding to iommu group 14 Sep 12 17:54:06.014423 kernel: pci 0000:00:1f.5: Adding to iommu group 14 Sep 12 17:54:06.014476 kernel: pci 0000:02:00.0: Adding to iommu group 1 Sep 12 17:54:06.014524 kernel: pci 0000:02:00.1: Adding to iommu group 1 Sep 12 17:54:06.014575 kernel: pci 0000:04:00.0: Adding to iommu group 15 Sep 12 17:54:06.014624 kernel: pci 0000:05:00.0: Adding to iommu group 16 Sep 12 17:54:06.014674 kernel: pci 0000:07:00.0: Adding to iommu group 17 Sep 12 17:54:06.014726 kernel: pci 0000:08:00.0: Adding to iommu group 17 Sep 12 17:54:06.014735 kernel: DMAR: Intel(R) Virtualization Technology for Directed I/O Sep 12 17:54:06.014741 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Sep 12 17:54:06.014749 kernel: software IO TLB: mapped [mem 0x0000000086fce000-0x000000008afce000] (64MB) Sep 12 17:54:06.014755 kernel: RAPL PMU: API unit is 2^-32 Joules, 3 fixed counters, 655360 ms ovfl timer Sep 12 17:54:06.014761 kernel: RAPL PMU: hw unit of domain pp0-core 2^-14 Joules Sep 12 17:54:06.014766 kernel: RAPL PMU: hw unit of domain package 2^-14 Joules Sep 12 17:54:06.014772 kernel: RAPL PMU: hw unit of domain dram 2^-14 Joules Sep 12 17:54:06.014826 kernel: platform rtc_cmos: registered platform RTC device (no PNP device found) Sep 12 17:54:06.014835 kernel: Initialise system trusted keyrings Sep 12 17:54:06.014841 kernel: workingset: timestamp_bits=39 max_order=23 bucket_order=0 Sep 12 17:54:06.014848 kernel: Key type asymmetric registered Sep 12 17:54:06.014854 kernel: Asymmetric key parser 'x509' registered Sep 12 17:54:06.014859 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Sep 12 17:54:06.014865 kernel: io scheduler mq-deadline registered Sep 12 17:54:06.014871 kernel: io scheduler kyber registered Sep 12 17:54:06.014876 kernel: io scheduler bfq registered Sep 12 17:54:06.014926 kernel: pcieport 0000:00:01.0: PME: Signaling with IRQ 121 Sep 12 17:54:06.014975 kernel: pcieport 0000:00:01.1: 
PME: Signaling with IRQ 122 Sep 12 17:54:06.015023 kernel: pcieport 0000:00:1b.0: PME: Signaling with IRQ 123 Sep 12 17:54:06.015074 kernel: pcieport 0000:00:1b.4: PME: Signaling with IRQ 124 Sep 12 17:54:06.015123 kernel: pcieport 0000:00:1b.5: PME: Signaling with IRQ 125 Sep 12 17:54:06.015171 kernel: pcieport 0000:00:1c.0: PME: Signaling with IRQ 126 Sep 12 17:54:06.015263 kernel: pcieport 0000:00:1c.1: PME: Signaling with IRQ 127 Sep 12 17:54:06.015316 kernel: thermal LNXTHERM:00: registered as thermal_zone0 Sep 12 17:54:06.015325 kernel: ACPI: thermal: Thermal Zone [TZ00] (28 C) Sep 12 17:54:06.015331 kernel: ERST: Error Record Serialization Table (ERST) support is initialized. Sep 12 17:54:06.015339 kernel: pstore: Using crash dump compression: deflate Sep 12 17:54:06.015344 kernel: pstore: Registered erst as persistent store backend Sep 12 17:54:06.015350 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Sep 12 17:54:06.015356 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Sep 12 17:54:06.015362 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Sep 12 17:54:06.015367 kernel: 00:03: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Sep 12 17:54:06.015417 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x1B, rev-id 16) Sep 12 17:54:06.015425 kernel: i8042: PNP: No PS/2 controller found. 
Sep 12 17:54:06.015471 kernel: rtc_cmos rtc_cmos: RTC can wake from S4 Sep 12 17:54:06.015517 kernel: rtc_cmos rtc_cmos: registered as rtc0 Sep 12 17:54:06.015562 kernel: rtc_cmos rtc_cmos: setting system clock to 2025-09-12T17:54:04 UTC (1757699644) Sep 12 17:54:06.015606 kernel: rtc_cmos rtc_cmos: alarms up to one month, y3k, 114 bytes nvram Sep 12 17:54:06.015614 kernel: intel_pstate: Intel P-state driver initializing Sep 12 17:54:06.015620 kernel: intel_pstate: Disabling energy efficiency optimization Sep 12 17:54:06.015626 kernel: intel_pstate: HWP enabled Sep 12 17:54:06.015632 kernel: vesafb: mode is 1024x768x8, linelength=1024, pages=0 Sep 12 17:54:06.015637 kernel: vesafb: scrolling: redraw Sep 12 17:54:06.015645 kernel: vesafb: Pseudocolor: size=0:8:8:8, shift=0:0:0:0 Sep 12 17:54:06.015651 kernel: vesafb: framebuffer at 0x94000000, mapped to 0x000000008f7d1357, using 768k, total 768k Sep 12 17:54:06.015656 kernel: Console: switching to colour frame buffer device 128x48 Sep 12 17:54:06.015662 kernel: fb0: VESA VGA frame buffer device Sep 12 17:54:06.015668 kernel: NET: Registered PF_INET6 protocol family Sep 12 17:54:06.015673 kernel: Segment Routing with IPv6 Sep 12 17:54:06.015679 kernel: In-situ OAM (IOAM) with IPv6 Sep 12 17:54:06.015685 kernel: NET: Registered PF_PACKET protocol family Sep 12 17:54:06.015690 kernel: Key type dns_resolver registered Sep 12 17:54:06.015697 kernel: microcode: Current revision: 0x00000102 Sep 12 17:54:06.015703 kernel: microcode: Microcode Update Driver: v2.2. 
Sep 12 17:54:06.015708 kernel: IPI shorthand broadcast: enabled Sep 12 17:54:06.015714 kernel: sched_clock: Marking stable (1661000723, 1374259454)->(4470775919, -1435515742) Sep 12 17:54:06.015720 kernel: registered taskstats version 1 Sep 12 17:54:06.015725 kernel: Loading compiled-in X.509 certificates Sep 12 17:54:06.015731 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.106-flatcar: 449ba23cbe21e08b3bddb674b4885682335ee1f9' Sep 12 17:54:06.015737 kernel: Key type .fscrypt registered Sep 12 17:54:06.015742 kernel: Key type fscrypt-provisioning registered Sep 12 17:54:06.015749 kernel: ima: Allocated hash algorithm: sha1 Sep 12 17:54:06.015754 kernel: ima: No architecture policies found Sep 12 17:54:06.015760 kernel: clk: Disabling unused clocks Sep 12 17:54:06.015766 kernel: Freeing unused kernel image (initmem) memory: 42884K Sep 12 17:54:06.015771 kernel: Write protecting the kernel read-only data: 36864k Sep 12 17:54:06.015777 kernel: Freeing unused kernel image (rodata/data gap) memory: 1832K Sep 12 17:54:06.015783 kernel: Run /init as init process Sep 12 17:54:06.015788 kernel: with arguments: Sep 12 17:54:06.015794 kernel: /init Sep 12 17:54:06.015801 kernel: with environment: Sep 12 17:54:06.015806 kernel: HOME=/ Sep 12 17:54:06.015812 kernel: TERM=linux Sep 12 17:54:06.015817 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Sep 12 17:54:06.015824 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Sep 12 17:54:06.015831 systemd[1]: Detected architecture x86-64. Sep 12 17:54:06.015837 systemd[1]: Running in initrd. Sep 12 17:54:06.015844 systemd[1]: No hostname configured, using default hostname. Sep 12 17:54:06.015850 systemd[1]: Hostname set to <localhost>. 
Sep 12 17:54:06.015856 systemd[1]: Initializing machine ID from random generator. Sep 12 17:54:06.015862 systemd[1]: Queued start job for default target initrd.target. Sep 12 17:54:06.015867 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 12 17:54:06.015873 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 12 17:54:06.015880 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Sep 12 17:54:06.015886 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 12 17:54:06.015893 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Sep 12 17:54:06.015899 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Sep 12 17:54:06.015905 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Sep 12 17:54:06.015911 kernel: tsc: Refined TSC clocksource calibration: 3408.043 MHz Sep 12 17:54:06.015917 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311ffc74570, max_idle_ns: 440795256509 ns Sep 12 17:54:06.015923 kernel: clocksource: Switched to clocksource tsc Sep 12 17:54:06.015929 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Sep 12 17:54:06.015936 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 12 17:54:06.015942 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 12 17:54:06.015948 systemd[1]: Reached target paths.target - Path Units. Sep 12 17:54:06.015954 systemd[1]: Reached target slices.target - Slice Units. Sep 12 17:54:06.015960 systemd[1]: Reached target swap.target - Swaps. Sep 12 17:54:06.015966 systemd[1]: Reached target timers.target - Timer Units. 
Sep 12 17:54:06.015971 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Sep 12 17:54:06.015977 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 12 17:54:06.015983 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Sep 12 17:54:06.015990 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Sep 12 17:54:06.015996 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 12 17:54:06.016002 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 12 17:54:06.016008 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 12 17:54:06.016014 systemd[1]: Reached target sockets.target - Socket Units. Sep 12 17:54:06.016020 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Sep 12 17:54:06.016026 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 12 17:54:06.016032 systemd[1]: Finished network-cleanup.service - Network Cleanup. Sep 12 17:54:06.016038 systemd[1]: Starting systemd-fsck-usr.service... Sep 12 17:54:06.016044 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 12 17:54:06.016061 systemd-journald[267]: Collecting audit messages is disabled. Sep 12 17:54:06.016075 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 12 17:54:06.016082 systemd-journald[267]: Journal started Sep 12 17:54:06.016095 systemd-journald[267]: Runtime Journal (/run/log/journal/6465932ce0234e749983575eaeccbb8d) is 8.0M, max 639.9M, 631.9M free. Sep 12 17:54:06.050212 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 17:54:06.050411 systemd-modules-load[269]: Inserted module 'overlay' Sep 12 17:54:06.080174 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Sep 12 17:54:06.080289 systemd[1]: Started systemd-journald.service - Journal Service. 
Sep 12 17:54:06.080280 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 12 17:54:06.080368 systemd[1]: Finished systemd-fsck-usr.service. Sep 12 17:54:06.081222 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 12 17:54:06.081632 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 12 17:54:06.123198 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Sep 12 17:54:06.140817 systemd-modules-load[269]: Inserted module 'br_netfilter' Sep 12 17:54:06.189517 kernel: Bridge firewalling registered Sep 12 17:54:06.141199 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 12 17:54:06.206715 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 17:54:06.227638 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 12 17:54:06.248960 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 12 17:54:06.292504 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 12 17:54:06.292941 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 12 17:54:06.293389 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 12 17:54:06.298518 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 12 17:54:06.298666 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 12 17:54:06.299766 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 12 17:54:06.318427 systemd-resolved[303]: Positive Trust Anchors: Sep 12 17:54:06.318432 systemd-resolved[303]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 12 17:54:06.318456 systemd-resolved[303]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 12 17:54:06.320020 systemd-resolved[303]: Defaulting to hostname 'linux'. Sep 12 17:54:06.320469 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 12 17:54:06.350509 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 12 17:54:06.413917 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 12 17:54:06.447477 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Sep 12 17:54:06.501308 dracut-cmdline[305]: dracut-dracut-053 Sep 12 17:54:06.508428 dracut-cmdline[305]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=1ff9ec556ac80c67ae2340139aa421bf26af13357ec9e72632b4878e9945dc9a Sep 12 17:54:06.701239 kernel: SCSI subsystem initialized Sep 12 17:54:06.725224 kernel: Loading iSCSI transport class v2.0-870. 
Sep 12 17:54:06.748231 kernel: iscsi: registered transport (tcp) Sep 12 17:54:06.780896 kernel: iscsi: registered transport (qla4xxx) Sep 12 17:54:06.780914 kernel: QLogic iSCSI HBA Driver Sep 12 17:54:06.813470 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Sep 12 17:54:06.839513 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Sep 12 17:54:06.896884 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Sep 12 17:54:06.896907 kernel: device-mapper: uevent: version 1.0.3 Sep 12 17:54:06.916528 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Sep 12 17:54:06.975229 kernel: raid6: avx2x4 gen() 53431 MB/s Sep 12 17:54:07.007268 kernel: raid6: avx2x2 gen() 53929 MB/s Sep 12 17:54:07.043614 kernel: raid6: avx2x1 gen() 45273 MB/s Sep 12 17:54:07.043631 kernel: raid6: using algorithm avx2x2 gen() 53929 MB/s Sep 12 17:54:07.090670 kernel: raid6: .... xor() 30689 MB/s, rmw enabled Sep 12 17:54:07.090687 kernel: raid6: using avx2x2 recovery algorithm Sep 12 17:54:07.131196 kernel: xor: automatically using best checksumming function avx Sep 12 17:54:07.245245 kernel: Btrfs loaded, zoned=no, fsverity=no Sep 12 17:54:07.251414 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Sep 12 17:54:07.270465 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 12 17:54:07.291737 systemd-udevd[491]: Using default interface naming scheme 'v255'. Sep 12 17:54:07.294329 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 12 17:54:07.332429 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Sep 12 17:54:07.352391 dracut-pre-trigger[504]: rd.md=0: removing MD RAID activation Sep 12 17:54:07.361573 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. 
Sep 12 17:54:07.379597 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 12 17:54:07.507862 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 12 17:54:07.533225 kernel: pps_core: LinuxPPS API ver. 1 registered Sep 12 17:54:07.533264 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Sep 12 17:54:07.542122 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Sep 12 17:54:07.556480 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 12 17:54:07.583472 kernel: PTP clock support registered Sep 12 17:54:07.583488 kernel: cryptd: max_cpu_qlen set to 1000 Sep 12 17:54:07.556514 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 12 17:54:07.608246 kernel: ACPI: bus type USB registered Sep 12 17:54:07.608264 kernel: usbcore: registered new interface driver usbfs Sep 12 17:54:07.624460 kernel: usbcore: registered new interface driver hub Sep 12 17:54:07.624437 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 12 17:54:07.728955 kernel: usbcore: registered new device driver usb Sep 12 17:54:07.728971 kernel: libata version 3.00 loaded. Sep 12 17:54:07.728979 kernel: ahci 0000:00:17.0: version 3.0 Sep 12 17:54:07.729075 kernel: AVX2 version of gcm_enc/dec engaged. Sep 12 17:54:07.729084 kernel: ahci 0000:00:17.0: AHCI 0001.0301 32 slots 8 ports 6 Gbps 0xff impl SATA mode Sep 12 17:54:07.729149 kernel: ahci 0000:00:17.0: flags: 64bit ncq sntf clo only pio slum part ems deso sadm sds apst Sep 12 17:54:07.729219 kernel: scsi host0: ahci Sep 12 17:54:07.729290 kernel: AES CTR mode by8 optimization enabled Sep 12 17:54:07.729298 kernel: scsi host1: ahci Sep 12 17:54:07.661827 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Sep 12 17:54:07.765476 kernel: scsi host2: ahci Sep 12 17:54:07.765590 kernel: scsi host3: ahci Sep 12 17:54:07.661874 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 17:54:07.834146 kernel: igb: Intel(R) Gigabit Ethernet Network Driver Sep 12 17:54:07.834162 kernel: scsi host4: ahci Sep 12 17:54:07.834283 kernel: igb: Copyright (c) 2007-2014 Intel Corporation. Sep 12 17:54:07.834294 kernel: scsi host5: ahci Sep 12 17:54:07.834362 kernel: scsi host6: ahci Sep 12 17:54:07.765309 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 17:54:08.150190 kernel: igb 0000:04:00.0: added PHC on eth0 Sep 12 17:54:08.150347 kernel: scsi host7: ahci Sep 12 17:54:08.150472 kernel: igb 0000:04:00.0: Intel(R) Gigabit Ethernet Network Connection Sep 12 17:54:08.150592 kernel: ata1: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516100 irq 128 Sep 12 17:54:08.150613 kernel: igb 0000:04:00.0: eth0: (PCIe:2.5Gb/s:Width x1) 3c:ec:ef:73:1d:d4 Sep 12 17:54:08.150729 kernel: ata2: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516180 irq 128 Sep 12 17:54:08.150738 kernel: igb 0000:04:00.0: eth0: PBA No: 010000-000 Sep 12 17:54:08.150847 kernel: ata3: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516200 irq 128 Sep 12 17:54:08.150867 kernel: igb 0000:04:00.0: Using MSI-X interrupts. 
4 rx queue(s), 4 tx queue(s) Sep 12 17:54:08.150998 kernel: ata4: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516280 irq 128 Sep 12 17:54:08.151012 kernel: igb 0000:05:00.0: added PHC on eth1 Sep 12 17:54:08.151134 kernel: ata5: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516300 irq 128 Sep 12 17:54:08.151152 kernel: igb 0000:05:00.0: Intel(R) Gigabit Ethernet Network Connection Sep 12 17:54:08.151279 kernel: ata6: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516380 irq 128 Sep 12 17:54:08.151294 kernel: igb 0000:05:00.0: eth1: (PCIe:2.5Gb/s:Width x1) 3c:ec:ef:73:1d:d5 Sep 12 17:54:08.151400 kernel: ata7: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516400 irq 128 Sep 12 17:54:08.151415 kernel: igb 0000:05:00.0: eth1: PBA No: 010000-000 Sep 12 17:54:08.151484 kernel: ata8: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516480 irq 128 Sep 12 17:54:08.151493 kernel: igb 0000:05:00.0: Using MSI-X interrupts. 4 rx queue(s), 4 tx queue(s) Sep 12 17:54:07.882297 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 17:54:08.200316 kernel: mlx5_core 0000:02:00.0: firmware version: 14.31.1014 Sep 12 17:54:08.200408 kernel: mlx5_core 0000:02:00.0: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link) Sep 12 17:54:08.184418 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Sep 12 17:54:08.200911 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Sep 12 17:54:08.200937 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 12 17:54:08.200963 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 12 17:54:08.214329 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Sep 12 17:54:08.278479 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 17:54:08.299496 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. 
Sep 12 17:54:08.353239 kernel: ata5: SATA link down (SStatus 0 SControl 300) Sep 12 17:54:08.353254 kernel: ata1: SATA link up 6.0 Gbps (SStatus 133 SControl 300) Sep 12 17:54:08.353264 kernel: ata8: SATA link down (SStatus 0 SControl 300) Sep 12 17:54:08.357197 kernel: ata3: SATA link down (SStatus 0 SControl 300) Sep 12 17:54:08.373196 kernel: ata6: SATA link down (SStatus 0 SControl 300) Sep 12 17:54:08.388195 kernel: ata2: SATA link up 6.0 Gbps (SStatus 133 SControl 300) Sep 12 17:54:08.388207 kernel: ata4: SATA link down (SStatus 0 SControl 300) Sep 12 17:54:08.418225 kernel: ata7: SATA link down (SStatus 0 SControl 300) Sep 12 17:54:08.433234 kernel: ata1.00: ATA-11: Micron_5300_MTFDDAK480TDT, D3MU001, max UDMA/133 Sep 12 17:54:08.450224 kernel: ata2.00: ATA-11: Micron_5300_MTFDDAK480TDT, D3MU001, max UDMA/133 Sep 12 17:54:08.466265 kernel: mlx5_core 0000:02:00.0: E-Switch: Total vports 10, per vport: max uc(128) max mc(2048) Sep 12 17:54:08.470303 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 12 17:54:08.534706 kernel: ata1.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA Sep 12 17:54:08.534718 kernel: mlx5_core 0000:02:00.0: Port module event: module 0, Cable plugged Sep 12 17:54:08.534804 kernel: ata2.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA Sep 12 17:54:08.545669 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Sep 12 17:54:08.585584 kernel: ata1.00: Features: NCQ-prio Sep 12 17:54:08.585596 kernel: ata2.00: Features: NCQ-prio Sep 12 17:54:08.585603 kernel: ata1.00: configured for UDMA/133 Sep 12 17:54:08.595198 kernel: ata2.00: configured for UDMA/133 Sep 12 17:54:08.595221 kernel: scsi 0:0:0:0: Direct-Access ATA Micron_5300_MTFD U001 PQ: 0 ANSI: 5 Sep 12 17:54:08.626197 kernel: scsi 1:0:0:0: Direct-Access ATA Micron_5300_MTFD U001 PQ: 0 ANSI: 5 Sep 12 17:54:08.656182 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller Sep 12 17:54:08.656310 kernel: igb 0000:04:00.0 eno1: renamed from eth0 Sep 12 17:54:08.656395 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 1 Sep 12 17:54:08.719174 kernel: xhci_hcd 0000:00:14.0: hcc params 0x200077c1 hci version 0x110 quirks 0x0000000000009810 Sep 12 17:54:08.719322 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller Sep 12 17:54:08.736724 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 2 Sep 12 17:54:08.754243 kernel: xhci_hcd 0000:00:14.0: Host supports USB 3.1 Enhanced SuperSpeed Sep 12 17:54:08.768567 kernel: hub 1-0:1.0: USB hub found Sep 12 17:54:08.768767 kernel: mlx5_core 0000:02:00.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Sep 12 17:54:08.768859 kernel: igb 0000:05:00.0 eno2: renamed from eth1 Sep 12 17:54:08.772252 kernel: mlx5_core 0000:02:00.1: firmware version: 14.31.1014 Sep 12 17:54:08.772344 kernel: mlx5_core 0000:02:00.1: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link) Sep 12 17:54:08.779197 kernel: hub 1-0:1.0: 16 ports detected Sep 12 17:54:08.881736 kernel: hub 2-0:1.0: USB hub found Sep 12 17:54:08.881827 kernel: hub 2-0:1.0: 10 ports detected Sep 12 17:54:08.892235 kernel: ata1.00: Enabling discard_zeroes_data Sep 12 17:54:08.906143 kernel: ata2.00: Enabling discard_zeroes_data Sep 12 17:54:08.906160 kernel: sd 0:0:0:0: [sda] 937703088 512-byte logical blocks: (480 GB/447 GiB) Sep 12 17:54:08.910864 kernel: sd 1:0:0:0: [sdb] 
937703088 512-byte logical blocks: (480 GB/447 GiB) Sep 12 17:54:08.925815 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Sep 12 17:54:08.925901 kernel: sd 1:0:0:0: [sdb] 4096-byte physical blocks Sep 12 17:54:08.931041 kernel: sd 0:0:0:0: [sda] Write Protect is off Sep 12 17:54:08.936276 kernel: sd 1:0:0:0: [sdb] Write Protect is off Sep 12 17:54:08.941058 kernel: sd 0:0:0:0: [sda] Mode Sense: 00 3a 00 00 Sep 12 17:54:08.945851 kernel: sd 1:0:0:0: [sdb] Mode Sense: 00 3a 00 00 Sep 12 17:54:08.955247 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Sep 12 17:54:08.964740 kernel: sd 1:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Sep 12 17:54:08.973790 kernel: sd 0:0:0:0: [sda] Preferred minimum I/O size 4096 bytes Sep 12 17:54:08.982840 kernel: sd 1:0:0:0: [sdb] Preferred minimum I/O size 4096 bytes Sep 12 17:54:09.069635 kernel: ata1.00: Enabling discard_zeroes_data Sep 12 17:54:09.069654 kernel: mlx5_core 0000:02:00.1: E-Switch: Total vports 10, per vport: max uc(128) max mc(2048) Sep 12 17:54:09.069739 kernel: ata2.00: Enabling discard_zeroes_data Sep 12 17:54:09.115550 kernel: usb 1-14: new high-speed USB device number 2 using xhci_hcd Sep 12 17:54:09.116240 kernel: mlx5_core 0000:02:00.1: Port module event: module 1, Cable plugged Sep 12 17:54:09.116333 kernel: sd 1:0:0:0: [sdb] Attached SCSI disk Sep 12 17:54:09.181874 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Sep 12 17:54:09.181891 kernel: GPT:9289727 != 937703087 Sep 12 17:54:09.196799 kernel: GPT:Alternate GPT header not at the end of the disk. Sep 12 17:54:09.209319 kernel: GPT:9289727 != 937703087 Sep 12 17:54:09.223442 kernel: GPT: Use GNU Parted to correct GPT errors. 
Sep 12 17:54:09.237282 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 12 17:54:09.260200 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Sep 12 17:54:09.260312 kernel: hub 1-14:1.0: USB hub found Sep 12 17:54:09.276745 kernel: hub 1-14:1.0: 4 ports detected Sep 12 17:54:09.293658 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Micron_5300_MTFDDAK480TDT ROOT. Sep 12 17:54:09.347340 kernel: BTRFS: device fsid 6dad227e-2c0d-42e6-b0d2-5c756384bc19 devid 1 transid 34 /dev/sda3 scanned by (udev-worker) (541) Sep 12 17:54:09.347357 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by (udev-worker) (563) Sep 12 17:54:09.327333 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Micron_5300_MTFDDAK480TDT EFI-SYSTEM. Sep 12 17:54:09.384852 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Micron_5300_MTFDDAK480TDT USR-A. Sep 12 17:54:09.396356 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Micron_5300_MTFDDAK480TDT USR-A. Sep 12 17:54:09.438666 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Micron_5300_MTFDDAK480TDT OEM. Sep 12 17:54:09.464471 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Sep 12 17:54:09.520326 kernel: mlx5_core 0000:02:00.1: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Sep 12 17:54:09.520423 kernel: ata1.00: Enabling discard_zeroes_data Sep 12 17:54:09.520432 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 12 17:54:09.520440 disk-uuid[732]: Primary Header is updated. Sep 12 17:54:09.520440 disk-uuid[732]: Secondary Entries is updated. Sep 12 17:54:09.520440 disk-uuid[732]: Secondary Header is updated. 
Sep 12 17:54:09.591468 kernel: ata1.00: Enabling discard_zeroes_data Sep 12 17:54:09.591482 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 12 17:54:09.591491 kernel: ata1.00: Enabling discard_zeroes_data Sep 12 17:54:09.591497 kernel: mlx5_core 0000:02:00.0 enp2s0f0np0: renamed from eth0 Sep 12 17:54:09.591628 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 12 17:54:09.591677 kernel: usb 1-14.1: new low-speed USB device number 3 using xhci_hcd Sep 12 17:54:09.591733 kernel: mlx5_core 0000:02:00.1 enp2s0f1np1: renamed from eth1 Sep 12 17:54:09.702199 kernel: hid: raw HID events driver (C) Jiri Kosina Sep 12 17:54:09.724027 kernel: usbcore: registered new interface driver usbhid Sep 12 17:54:09.724063 kernel: usbhid: USB HID core driver Sep 12 17:54:09.767379 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.0/0003:0557:2419.0001/input/input0 Sep 12 17:54:09.871454 kernel: hid-generic 0003:0557:2419.0001: input,hidraw0: USB HID v1.00 Keyboard [HID 0557:2419] on usb-0000:00:14.0-14.1/input0 Sep 12 17:54:09.871603 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.1/0003:0557:2419.0002/input/input1 Sep 12 17:54:09.904868 kernel: hid-generic 0003:0557:2419.0002: input,hidraw1: USB HID v1.00 Mouse [HID 0557:2419] on usb-0000:00:14.0-14.1/input1 Sep 12 17:54:10.575711 kernel: ata1.00: Enabling discard_zeroes_data Sep 12 17:54:10.595198 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 12 17:54:10.595727 disk-uuid[733]: The operation has completed successfully. Sep 12 17:54:10.638009 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 12 17:54:10.638073 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Sep 12 17:54:10.693413 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... 
Sep 12 17:54:10.723306 sh[754]: Success Sep 12 17:54:10.733299 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Sep 12 17:54:10.781060 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Sep 12 17:54:10.792211 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Sep 12 17:54:10.800184 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Sep 12 17:54:10.853068 kernel: BTRFS info (device dm-0): first mount of filesystem 6dad227e-2c0d-42e6-b0d2-5c756384bc19 Sep 12 17:54:10.853089 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Sep 12 17:54:10.874798 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Sep 12 17:54:10.894256 kernel: BTRFS info (device dm-0): disabling log replay at mount time Sep 12 17:54:10.912560 kernel: BTRFS info (device dm-0): using free space tree Sep 12 17:54:10.952230 kernel: BTRFS info (device dm-0): enabling ssd optimizations Sep 12 17:54:10.954833 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Sep 12 17:54:10.964622 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Sep 12 17:54:10.970310 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Sep 12 17:54:11.087311 kernel: BTRFS info (device sda6): first mount of filesystem 4080f51d-d3f2-4545-8f59-3798077218dc Sep 12 17:54:11.087324 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Sep 12 17:54:11.087406 kernel: BTRFS info (device sda6): using free space tree Sep 12 17:54:11.087414 kernel: BTRFS info (device sda6): enabling ssd optimizations Sep 12 17:54:11.087424 kernel: BTRFS info (device sda6): auto enabling async discard Sep 12 17:54:11.074627 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Sep 12 17:54:11.123345 kernel: BTRFS info (device sda6): last unmount of filesystem 4080f51d-d3f2-4545-8f59-3798077218dc Sep 12 17:54:11.113132 systemd[1]: Finished ignition-setup.service - Ignition (setup). Sep 12 17:54:11.133429 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 12 17:54:11.160466 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Sep 12 17:54:11.176039 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 12 17:54:11.220359 systemd-networkd[936]: lo: Link UP Sep 12 17:54:11.220362 systemd-networkd[936]: lo: Gained carrier Sep 12 17:54:11.222791 systemd-networkd[936]: Enumeration completed Sep 12 17:54:11.233722 ignition[935]: Ignition 2.19.0 Sep 12 17:54:11.222874 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 12 17:54:11.233726 ignition[935]: Stage: fetch-offline Sep 12 17:54:11.223490 systemd-networkd[936]: eno1: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 12 17:54:11.233749 ignition[935]: no configs at "/usr/lib/ignition/base.d" Sep 12 17:54:11.227505 systemd[1]: Reached target network.target - Network. Sep 12 17:54:11.233755 ignition[935]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Sep 12 17:54:11.235755 unknown[935]: fetched base config from "system" Sep 12 17:54:11.233810 ignition[935]: parsed url from cmdline: "" Sep 12 17:54:11.235760 unknown[935]: fetched user config from "system" Sep 12 17:54:11.233811 ignition[935]: no config URL provided Sep 12 17:54:11.251135 systemd-networkd[936]: eno2: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 12 17:54:11.233814 ignition[935]: reading system config file "/usr/lib/ignition/user.ign" Sep 12 17:54:11.257530 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). 
Sep 12 17:54:11.233837 ignition[935]: parsing config with SHA512: 345fbfc4f08650a3230a1b9335d73de33178afd80340f9e47a8c758f53ea8201b48485ea617fcc689b4a0b6c31c2bb3dad5fd699c03d58d4f0b52d90c9515c6e Sep 12 17:54:11.279251 systemd-networkd[936]: enp2s0f0np0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 12 17:54:11.235974 ignition[935]: fetch-offline: fetch-offline passed Sep 12 17:54:11.282683 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Sep 12 17:54:11.235976 ignition[935]: POST message to Packet Timeline Sep 12 17:54:11.292368 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Sep 12 17:54:11.235978 ignition[935]: POST Status error: resource requires networking Sep 12 17:54:11.236014 ignition[935]: Ignition finished successfully Sep 12 17:54:11.340982 ignition[950]: Ignition 2.19.0 Sep 12 17:54:11.341000 ignition[950]: Stage: kargs Sep 12 17:54:11.341526 ignition[950]: no configs at "/usr/lib/ignition/base.d" Sep 12 17:54:11.341568 ignition[950]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Sep 12 17:54:11.506355 kernel: mlx5_core 0000:02:00.0 enp2s0f0np0: Link up Sep 12 17:54:11.500202 systemd-networkd[936]: enp2s0f1np1: Configuring with /usr/lib/systemd/network/zz-default.network. 
Sep 12 17:54:11.344404 ignition[950]: kargs: kargs passed Sep 12 17:54:11.344417 ignition[950]: POST message to Packet Timeline Sep 12 17:54:11.344452 ignition[950]: GET https://metadata.packet.net/metadata: attempt #1 Sep 12 17:54:11.346308 ignition[950]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:46537->[::1]:53: read: connection refused Sep 12 17:54:11.546783 ignition[950]: GET https://metadata.packet.net/metadata: attempt #2 Sep 12 17:54:11.547813 ignition[950]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:48313->[::1]:53: read: connection refused Sep 12 17:54:11.738320 kernel: mlx5_core 0000:02:00.1 enp2s0f1np1: Link up Sep 12 17:54:11.741127 systemd-networkd[936]: eno1: Link UP Sep 12 17:54:11.741299 systemd-networkd[936]: eno2: Link UP Sep 12 17:54:11.741415 systemd-networkd[936]: enp2s0f0np0: Link UP Sep 12 17:54:11.741549 systemd-networkd[936]: enp2s0f0np0: Gained carrier Sep 12 17:54:11.756431 systemd-networkd[936]: enp2s0f1np1: Link UP Sep 12 17:54:11.782378 systemd-networkd[936]: enp2s0f0np0: DHCPv4 address 139.178.94.21/31, gateway 139.178.94.20 acquired from 145.40.83.140 Sep 12 17:54:11.948279 ignition[950]: GET https://metadata.packet.net/metadata: attempt #3 Sep 12 17:54:11.949321 ignition[950]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:35276->[::1]:53: read: connection refused Sep 12 17:54:12.551818 systemd-networkd[936]: enp2s0f1np1: Gained carrier Sep 12 17:54:12.749749 ignition[950]: GET https://metadata.packet.net/metadata: attempt #4 Sep 12 17:54:12.750791 ignition[950]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:37193->[::1]:53: read: connection refused Sep 12 17:54:13.319681 systemd-networkd[936]: enp2s0f0np0: Gained IPv6LL Sep 12 
17:54:14.279727 systemd-networkd[936]: enp2s0f1np1: Gained IPv6LL Sep 12 17:54:14.352462 ignition[950]: GET https://metadata.packet.net/metadata: attempt #5 Sep 12 17:54:14.353946 ignition[950]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:60894->[::1]:53: read: connection refused Sep 12 17:54:17.554277 ignition[950]: GET https://metadata.packet.net/metadata: attempt #6 Sep 12 17:54:18.691722 ignition[950]: GET result: OK Sep 12 17:54:19.176384 ignition[950]: Ignition finished successfully Sep 12 17:54:19.182229 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Sep 12 17:54:19.206509 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Sep 12 17:54:19.212818 ignition[968]: Ignition 2.19.0 Sep 12 17:54:19.212822 ignition[968]: Stage: disks Sep 12 17:54:19.212934 ignition[968]: no configs at "/usr/lib/ignition/base.d" Sep 12 17:54:19.212941 ignition[968]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Sep 12 17:54:19.213467 ignition[968]: disks: disks passed Sep 12 17:54:19.213469 ignition[968]: POST message to Packet Timeline Sep 12 17:54:19.213478 ignition[968]: GET https://metadata.packet.net/metadata: attempt #1 Sep 12 17:54:20.364113 ignition[968]: GET result: OK Sep 12 17:54:20.950312 ignition[968]: Ignition finished successfully Sep 12 17:54:20.953678 systemd[1]: Finished ignition-disks.service - Ignition (disks). Sep 12 17:54:20.969463 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Sep 12 17:54:20.987481 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Sep 12 17:54:21.008471 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 12 17:54:21.029542 systemd[1]: Reached target sysinit.target - System Initialization. Sep 12 17:54:21.049536 systemd[1]: Reached target basic.target - Basic System. 
Sep 12 17:54:21.082464 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Sep 12 17:54:21.108124 systemd-fsck[984]: ROOT: clean, 14/553520 files, 52654/553472 blocks Sep 12 17:54:21.118581 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Sep 12 17:54:21.127373 systemd[1]: Mounting sysroot.mount - /sysroot... Sep 12 17:54:21.265261 kernel: EXT4-fs (sda9): mounted filesystem 791ad691-63ae-4dbc-8ce3-6c8819e56736 r/w with ordered data mode. Quota mode: none. Sep 12 17:54:21.265715 systemd[1]: Mounted sysroot.mount - /sysroot. Sep 12 17:54:21.275654 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Sep 12 17:54:21.311369 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 12 17:54:21.320747 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Sep 12 17:54:21.445671 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by mount (993) Sep 12 17:54:21.445691 kernel: BTRFS info (device sda6): first mount of filesystem 4080f51d-d3f2-4545-8f59-3798077218dc Sep 12 17:54:21.445705 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Sep 12 17:54:21.445718 kernel: BTRFS info (device sda6): using free space tree Sep 12 17:54:21.445731 kernel: BTRFS info (device sda6): enabling ssd optimizations Sep 12 17:54:21.445747 kernel: BTRFS info (device sda6): auto enabling async discard Sep 12 17:54:21.364827 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Sep 12 17:54:21.446082 systemd[1]: Starting flatcar-static-network.service - Flatcar Static Network Agent... Sep 12 17:54:21.483275 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 12 17:54:21.483303 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. 
Sep 12 17:54:21.543406 coreos-metadata[995]: Sep 12 17:54:21.526 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Sep 12 17:54:21.507199 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 12 17:54:21.584345 coreos-metadata[1011]: Sep 12 17:54:21.526 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Sep 12 17:54:21.533588 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Sep 12 17:54:21.563425 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Sep 12 17:54:21.618291 initrd-setup-root[1025]: cut: /sysroot/etc/passwd: No such file or directory Sep 12 17:54:21.628274 initrd-setup-root[1032]: cut: /sysroot/etc/group: No such file or directory Sep 12 17:54:21.639308 initrd-setup-root[1039]: cut: /sysroot/etc/shadow: No such file or directory Sep 12 17:54:21.649304 initrd-setup-root[1046]: cut: /sysroot/etc/gshadow: No such file or directory Sep 12 17:54:21.660662 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Sep 12 17:54:21.682399 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Sep 12 17:54:21.724406 kernel: BTRFS info (device sda6): last unmount of filesystem 4080f51d-d3f2-4545-8f59-3798077218dc Sep 12 17:54:21.702374 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Sep 12 17:54:21.734014 systemd[1]: sysroot-oem.mount: Deactivated successfully. Sep 12 17:54:21.756712 ignition[1115]: INFO : Ignition 2.19.0 Sep 12 17:54:21.756712 ignition[1115]: INFO : Stage: mount Sep 12 17:54:21.761280 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Sep 12 17:54:21.790377 ignition[1115]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 12 17:54:21.790377 ignition[1115]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Sep 12 17:54:21.790377 ignition[1115]: INFO : mount: mount passed Sep 12 17:54:21.790377 ignition[1115]: INFO : POST message to Packet Timeline Sep 12 17:54:21.790377 ignition[1115]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Sep 12 17:54:22.475682 coreos-metadata[995]: Sep 12 17:54:22.475 INFO Fetch successful Sep 12 17:54:22.552470 coreos-metadata[1011]: Sep 12 17:54:22.552 INFO Fetch successful Sep 12 17:54:22.562312 coreos-metadata[995]: Sep 12 17:54:22.555 INFO wrote hostname ci-4081.3.6-a-7e79e463ed to /sysroot/etc/hostname Sep 12 17:54:22.557579 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Sep 12 17:54:22.584465 systemd[1]: flatcar-static-network.service: Deactivated successfully. Sep 12 17:54:22.584510 systemd[1]: Finished flatcar-static-network.service - Flatcar Static Network Agent. Sep 12 17:54:23.342586 ignition[1115]: INFO : GET result: OK Sep 12 17:54:23.740087 ignition[1115]: INFO : Ignition finished successfully Sep 12 17:54:23.743119 systemd[1]: Finished ignition-mount.service - Ignition (mount). Sep 12 17:54:23.768319 systemd[1]: Starting ignition-files.service - Ignition (files)... Sep 12 17:54:23.782684 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Sep 12 17:54:23.859311 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (1141) Sep 12 17:54:23.859339 kernel: BTRFS info (device sda6): first mount of filesystem 4080f51d-d3f2-4545-8f59-3798077218dc Sep 12 17:54:23.879819 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Sep 12 17:54:23.898145 kernel: BTRFS info (device sda6): using free space tree Sep 12 17:54:23.938385 kernel: BTRFS info (device sda6): enabling ssd optimizations Sep 12 17:54:23.938432 kernel: BTRFS info (device sda6): auto enabling async discard Sep 12 17:54:23.953033 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 12 17:54:23.978895 ignition[1158]: INFO : Ignition 2.19.0 Sep 12 17:54:23.978895 ignition[1158]: INFO : Stage: files Sep 12 17:54:23.994419 ignition[1158]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 12 17:54:23.994419 ignition[1158]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Sep 12 17:54:23.994419 ignition[1158]: DEBUG : files: compiled without relabeling support, skipping Sep 12 17:54:23.994419 ignition[1158]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 12 17:54:23.994419 ignition[1158]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 12 17:54:23.994419 ignition[1158]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 12 17:54:23.994419 ignition[1158]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 12 17:54:23.994419 ignition[1158]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 12 17:54:23.994419 ignition[1158]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Sep 12 17:54:23.994419 ignition[1158]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Sep 12 
17:54:23.982061 unknown[1158]: wrote ssh authorized keys file for user: core Sep 12 17:54:24.185955 ignition[1158]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Sep 12 17:54:24.347296 ignition[1158]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Sep 12 17:54:24.364384 ignition[1158]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Sep 12 17:54:24.364384 ignition[1158]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Sep 12 17:54:24.364384 ignition[1158]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Sep 12 17:54:24.364384 ignition[1158]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Sep 12 17:54:24.364384 ignition[1158]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 12 17:54:24.364384 ignition[1158]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 12 17:54:24.364384 ignition[1158]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 12 17:54:24.364384 ignition[1158]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 12 17:54:24.364384 ignition[1158]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 12 17:54:24.364384 ignition[1158]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 12 17:54:24.364384 ignition[1158]: INFO : files: createFilesystemsFiles: createFiles: op(9): 
[started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 12 17:54:24.364384 ignition[1158]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 12 17:54:24.364384 ignition[1158]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 12 17:54:24.364384 ignition[1158]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1 Sep 12 17:54:26.244709 ignition[1158]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Sep 12 17:54:26.551906 ignition[1158]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 12 17:54:26.551906 ignition[1158]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Sep 12 17:54:26.584414 ignition[1158]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 12 17:54:26.584414 ignition[1158]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 12 17:54:26.584414 ignition[1158]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Sep 12 17:54:26.584414 ignition[1158]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Sep 12 17:54:26.584414 ignition[1158]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Sep 12 17:54:26.584414 ignition[1158]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 12 17:54:26.584414 
ignition[1158]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 12 17:54:26.584414 ignition[1158]: INFO : files: files passed Sep 12 17:54:26.584414 ignition[1158]: INFO : POST message to Packet Timeline Sep 12 17:54:26.584414 ignition[1158]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Sep 12 17:54:28.150390 ignition[1158]: INFO : GET result: OK Sep 12 17:54:28.907685 ignition[1158]: INFO : Ignition finished successfully Sep 12 17:54:28.910723 systemd[1]: Finished ignition-files.service - Ignition (files). Sep 12 17:54:28.941462 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Sep 12 17:54:28.941944 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Sep 12 17:54:28.970770 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 12 17:54:28.970852 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Sep 12 17:54:29.001039 initrd-setup-root-after-ignition[1197]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 12 17:54:29.001039 initrd-setup-root-after-ignition[1197]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Sep 12 17:54:29.047422 initrd-setup-root-after-ignition[1201]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 12 17:54:29.008270 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 12 17:54:29.035607 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Sep 12 17:54:29.071450 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Sep 12 17:54:29.113780 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 12 17:54:29.113894 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. 
Sep 12 17:54:29.134327 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Sep 12 17:54:29.154474 systemd[1]: Reached target initrd.target - Initrd Default Target. Sep 12 17:54:29.174671 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Sep 12 17:54:29.192438 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Sep 12 17:54:29.241664 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 12 17:54:29.271653 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Sep 12 17:54:29.301256 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Sep 12 17:54:29.312677 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 12 17:54:29.333886 systemd[1]: Stopped target timers.target - Timer Units. Sep 12 17:54:29.352943 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 12 17:54:29.353389 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 12 17:54:29.391699 systemd[1]: Stopped target initrd.target - Initrd Default Target. Sep 12 17:54:29.401822 systemd[1]: Stopped target basic.target - Basic System. Sep 12 17:54:29.421963 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Sep 12 17:54:29.441942 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Sep 12 17:54:29.462817 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Sep 12 17:54:29.483818 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Sep 12 17:54:29.503959 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Sep 12 17:54:29.525983 systemd[1]: Stopped target sysinit.target - System Initialization. Sep 12 17:54:29.547832 systemd[1]: Stopped target local-fs.target - Local File Systems. 
Sep 12 17:54:29.567953 systemd[1]: Stopped target swap.target - Swaps. Sep 12 17:54:29.585668 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 12 17:54:29.586069 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Sep 12 17:54:29.611930 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Sep 12 17:54:29.631974 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 12 17:54:29.653684 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Sep 12 17:54:29.654149 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 12 17:54:29.676835 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 12 17:54:29.677261 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Sep 12 17:54:29.708786 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 12 17:54:29.709266 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Sep 12 17:54:29.729010 systemd[1]: Stopped target paths.target - Path Units. Sep 12 17:54:29.748677 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 12 17:54:29.749185 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 12 17:54:29.769826 systemd[1]: Stopped target slices.target - Slice Units. Sep 12 17:54:29.788946 systemd[1]: Stopped target sockets.target - Socket Units. Sep 12 17:54:29.807933 systemd[1]: iscsid.socket: Deactivated successfully. Sep 12 17:54:29.808271 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Sep 12 17:54:29.827974 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 12 17:54:29.828308 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 12 17:54:29.850858 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. 
Sep 12 17:54:29.851281 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 12 17:54:29.980413 ignition[1223]: INFO : Ignition 2.19.0 Sep 12 17:54:29.980413 ignition[1223]: INFO : Stage: umount Sep 12 17:54:29.980413 ignition[1223]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 12 17:54:29.980413 ignition[1223]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Sep 12 17:54:29.980413 ignition[1223]: INFO : umount: umount passed Sep 12 17:54:29.980413 ignition[1223]: INFO : POST message to Packet Timeline Sep 12 17:54:29.980413 ignition[1223]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Sep 12 17:54:29.870893 systemd[1]: ignition-files.service: Deactivated successfully. Sep 12 17:54:29.871297 systemd[1]: Stopped ignition-files.service - Ignition (files). Sep 12 17:54:29.888856 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Sep 12 17:54:29.889268 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Sep 12 17:54:29.921321 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Sep 12 17:54:29.931267 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 12 17:54:29.931466 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Sep 12 17:54:29.957393 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Sep 12 17:54:29.969317 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 12 17:54:29.969560 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Sep 12 17:54:29.991608 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 12 17:54:29.991706 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Sep 12 17:54:30.016388 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 12 17:54:30.017110 systemd[1]: sysroot-boot.service: Deactivated successfully. 
Sep 12 17:54:30.017232 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Sep 12 17:54:30.052480 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 12 17:54:30.052838 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Sep 12 17:54:31.398053 ignition[1223]: INFO : GET result: OK Sep 12 17:54:32.365256 ignition[1223]: INFO : Ignition finished successfully Sep 12 17:54:32.368524 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 12 17:54:32.368813 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Sep 12 17:54:32.385618 systemd[1]: Stopped target network.target - Network. Sep 12 17:54:32.400452 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 12 17:54:32.400632 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Sep 12 17:54:32.418545 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 12 17:54:32.418683 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Sep 12 17:54:32.436750 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 12 17:54:32.436912 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Sep 12 17:54:32.455748 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Sep 12 17:54:32.455923 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Sep 12 17:54:32.475566 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 12 17:54:32.475737 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Sep 12 17:54:32.493999 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Sep 12 17:54:32.504350 systemd-networkd[936]: enp2s0f0np0: DHCPv6 lease lost Sep 12 17:54:32.512440 systemd-networkd[936]: enp2s0f1np1: DHCPv6 lease lost Sep 12 17:54:32.513669 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Sep 12 17:54:32.533470 systemd[1]: systemd-resolved.service: Deactivated successfully. 
Sep 12 17:54:32.533779 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Sep 12 17:54:32.552378 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 12 17:54:32.552724 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Sep 12 17:54:32.574042 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 12 17:54:32.574169 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Sep 12 17:54:32.606352 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Sep 12 17:54:32.632338 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 12 17:54:32.632381 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 12 17:54:32.652539 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 12 17:54:32.652644 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 12 17:54:32.671615 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 12 17:54:32.671785 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Sep 12 17:54:32.692594 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Sep 12 17:54:32.692762 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 12 17:54:32.712818 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 12 17:54:32.735672 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 12 17:54:32.736051 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 12 17:54:32.769254 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 12 17:54:32.769406 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Sep 12 17:54:32.775725 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. 
Sep 12 17:54:32.775837 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Sep 12 17:54:32.803454 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 12 17:54:32.803598 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Sep 12 17:54:32.847486 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 12 17:54:32.847680 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Sep 12 17:54:32.875727 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 12 17:54:32.875904 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 12 17:54:32.933328 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Sep 12 17:54:32.947404 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 12 17:54:32.947444 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 12 17:54:32.977387 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 12 17:54:33.190450 systemd-journald[267]: Received SIGTERM from PID 1 (systemd). Sep 12 17:54:32.977468 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 17:54:32.999582 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 12 17:54:32.999829 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Sep 12 17:54:33.069833 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 12 17:54:33.070090 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Sep 12 17:54:33.088764 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Sep 12 17:54:33.121528 systemd[1]: Starting initrd-switch-root.service - Switch Root... Sep 12 17:54:33.142384 systemd[1]: Switching root. 
Sep 12 17:54:33.264412 systemd-journald[267]: Journal stopped Sep 12 17:54:06.001939 kernel: ACPI: FACP 0x000000008C58B5F0 000114 (v06 01072009 AMI 00010013) Sep 12 17:54:06.001944 kernel: ACPI: DSDT 0x000000008C54F268 03C386 (v02 SUPERM SMCI--MB 01072009 INTL 20160527) Sep 12 17:54:06.001949 kernel: ACPI: FACS 0x000000008C66DF80 000040 Sep 12 17:54:06.001954 kernel: ACPI: APIC 0x000000008C58B708 00012C (v04 01072009 AMI 00010013) Sep 12 17:54:06.001960 kernel: ACPI: FPDT 0x000000008C58B838 000044 (v01 01072009 AMI 00010013) Sep 12 17:54:06.001965 kernel: ACPI: FIDT 0x000000008C58B880 00009C (v01 SUPERM SMCI--MB 01072009 AMI 00010013) Sep 12 17:54:06.001970 kernel: ACPI: MCFG 0x000000008C58B920 00003C (v01 SUPERM SMCI--MB 01072009 MSFT 00000097) Sep 12 17:54:06.001975 kernel: ACPI: SPMI 0x000000008C58B960 000041 (v05 SUPERM SMCI--MB 00000000 AMI. 00000000) Sep 12 17:54:06.001980 kernel: ACPI: SSDT 0x000000008C58B9A8 001B1C (v02 CpuRef CpuSsdt 00003000 INTL 20160527) Sep 12 17:54:06.001985 kernel: ACPI: SSDT 0x000000008C58D4C8 0031C6 (v02 SaSsdt SaSsdt 00003000 INTL 20160527) Sep 12 17:54:06.001990 kernel: ACPI: SSDT 0x000000008C590690 00232B (v02 PegSsd PegSsdt 00001000 INTL 20160527) Sep 12 17:54:06.001996 kernel: ACPI: HPET 0x000000008C5929C0 000038 (v01 SUPERM SMCI--MB 00000002 01000013) Sep 12 17:54:06.002001 kernel: ACPI: SSDT 0x000000008C5929F8 000FAE (v02 SUPERM Ther_Rvp 00001000 INTL 20160527) Sep 12 17:54:06.002006 kernel: ACPI: SSDT 0x000000008C5939A8 0008F7 (v02 INTEL xh_mossb 00000000 INTL 20160527) Sep 12 17:54:06.002010 kernel: ACPI: UEFI 0x000000008C5942A0 000042 (v01 SUPERM SMCI--MB 00000002 01000013) Sep 12 17:54:06.002015 kernel: ACPI: LPIT 0x000000008C5942E8 000094 (v01 SUPERM SMCI--MB 00000002 01000013) Sep 12 17:54:06.002020 kernel: ACPI: SSDT 0x000000008C594380 0027DE (v02 SUPERM PtidDevc 00001000 INTL 20160527) Sep 12 17:54:06.002025 kernel: ACPI: SSDT 0x000000008C596B60 0014E2 (v02 SUPERM TbtTypeC 00000000 INTL 20160527) Sep 
12 17:54:06.002030 kernel: ACPI: DBGP 0x000000008C598048 000034 (v01 SUPERM SMCI--MB 00000002 01000013) Sep 12 17:54:06.002036 kernel: ACPI: DBG2 0x000000008C598080 000054 (v00 SUPERM SMCI--MB 00000002 01000013) Sep 12 17:54:06.002041 kernel: ACPI: SSDT 0x000000008C5980D8 001B67 (v02 SUPERM UsbCTabl 00001000 INTL 20160527) Sep 12 17:54:06.002046 kernel: ACPI: DMAR 0x000000008C599C40 000070 (v01 INTEL EDK2 00000002 01000013) Sep 12 17:54:06.002051 kernel: ACPI: SSDT 0x000000008C599CB0 000144 (v02 Intel ADebTabl 00001000 INTL 20160527) Sep 12 17:54:06.002056 kernel: ACPI: TPM2 0x000000008C599DF8 000034 (v04 SUPERM SMCI--MB 00000001 AMI 00000000) Sep 12 17:54:06.002061 kernel: ACPI: SSDT 0x000000008C599E30 000D8F (v02 INTEL SpsNm 00000002 INTL 20160527) Sep 12 17:54:06.002066 kernel: ACPI: WSMT 0x000000008C59ABC0 000028 (v01 SUPERM 01072009 AMI 00010013) Sep 12 17:54:06.002071 kernel: ACPI: EINJ 0x000000008C59ABE8 000130 (v01 AMI AMI.EINJ 00000000 AMI. 00000000) Sep 12 17:54:06.002076 kernel: ACPI: ERST 0x000000008C59AD18 000230 (v01 AMIER AMI.ERST 00000000 AMI. 00000000) Sep 12 17:54:06.002082 kernel: ACPI: BERT 0x000000008C59AF48 000030 (v01 AMI AMI.BERT 00000000 AMI. 00000000) Sep 12 17:54:06.002087 kernel: ACPI: HEST 0x000000008C59AF78 00027C (v01 AMI AMI.HEST 00000000 AMI. 
00000000) Sep 12 17:54:06.002092 kernel: ACPI: SSDT 0x000000008C59B1F8 000162 (v01 SUPERM SMCCDN 00000000 INTL 20181221) Sep 12 17:54:06.002097 kernel: ACPI: Reserving FACP table memory at [mem 0x8c58b5f0-0x8c58b703] Sep 12 17:54:06.002102 kernel: ACPI: Reserving DSDT table memory at [mem 0x8c54f268-0x8c58b5ed] Sep 12 17:54:06.002107 kernel: ACPI: Reserving FACS table memory at [mem 0x8c66df80-0x8c66dfbf] Sep 12 17:54:06.002112 kernel: ACPI: Reserving APIC table memory at [mem 0x8c58b708-0x8c58b833] Sep 12 17:54:06.002117 kernel: ACPI: Reserving FPDT table memory at [mem 0x8c58b838-0x8c58b87b] Sep 12 17:54:06.002122 kernel: ACPI: Reserving FIDT table memory at [mem 0x8c58b880-0x8c58b91b] Sep 12 17:54:06.002128 kernel: ACPI: Reserving MCFG table memory at [mem 0x8c58b920-0x8c58b95b] Sep 12 17:54:06.002133 kernel: ACPI: Reserving SPMI table memory at [mem 0x8c58b960-0x8c58b9a0] Sep 12 17:54:06.002138 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58b9a8-0x8c58d4c3] Sep 12 17:54:06.002143 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58d4c8-0x8c59068d] Sep 12 17:54:06.002147 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c590690-0x8c5929ba] Sep 12 17:54:06.002152 kernel: ACPI: Reserving HPET table memory at [mem 0x8c5929c0-0x8c5929f7] Sep 12 17:54:06.002157 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c5929f8-0x8c5939a5] Sep 12 17:54:06.002162 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c5939a8-0x8c59429e] Sep 12 17:54:06.002167 kernel: ACPI: Reserving UEFI table memory at [mem 0x8c5942a0-0x8c5942e1] Sep 12 17:54:06.002173 kernel: ACPI: Reserving LPIT table memory at [mem 0x8c5942e8-0x8c59437b] Sep 12 17:54:06.002178 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c594380-0x8c596b5d] Sep 12 17:54:06.002183 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c596b60-0x8c598041] Sep 12 17:54:06.002188 kernel: ACPI: Reserving DBGP table memory at [mem 0x8c598048-0x8c59807b] Sep 12 17:54:06.002200 kernel: ACPI: Reserving DBG2 
table memory at [mem 0x8c598080-0x8c5980d3] Sep 12 17:54:06.002206 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c5980d8-0x8c599c3e] Sep 12 17:54:06.002229 kernel: ACPI: Reserving DMAR table memory at [mem 0x8c599c40-0x8c599caf] Sep 12 17:54:06.002234 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c599cb0-0x8c599df3] Sep 12 17:54:06.002254 kernel: ACPI: Reserving TPM2 table memory at [mem 0x8c599df8-0x8c599e2b] Sep 12 17:54:06.002260 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c599e30-0x8c59abbe] Sep 12 17:54:06.002265 kernel: ACPI: Reserving WSMT table memory at [mem 0x8c59abc0-0x8c59abe7] Sep 12 17:54:06.002270 kernel: ACPI: Reserving EINJ table memory at [mem 0x8c59abe8-0x8c59ad17] Sep 12 17:54:06.002275 kernel: ACPI: Reserving ERST table memory at [mem 0x8c59ad18-0x8c59af47] Sep 12 17:54:06.002280 kernel: ACPI: Reserving BERT table memory at [mem 0x8c59af48-0x8c59af77] Sep 12 17:54:06.002285 kernel: ACPI: Reserving HEST table memory at [mem 0x8c59af78-0x8c59b1f3] Sep 12 17:54:06.002290 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c59b1f8-0x8c59b359] Sep 12 17:54:06.002294 kernel: No NUMA configuration found Sep 12 17:54:06.002299 kernel: Faking a node at [mem 0x0000000000000000-0x000000086effffff] Sep 12 17:54:06.002304 kernel: NODE_DATA(0) allocated [mem 0x86effa000-0x86effffff] Sep 12 17:54:06.002310 kernel: Zone ranges: Sep 12 17:54:06.002315 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Sep 12 17:54:06.002320 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Sep 12 17:54:06.002325 kernel: Normal [mem 0x0000000100000000-0x000000086effffff] Sep 12 17:54:06.002330 kernel: Movable zone start for each node Sep 12 17:54:06.002335 kernel: Early memory node ranges Sep 12 17:54:06.002340 kernel: node 0: [mem 0x0000000000001000-0x0000000000098fff] Sep 12 17:54:06.002345 kernel: node 0: [mem 0x0000000000100000-0x000000003fffffff] Sep 12 17:54:06.002350 kernel: node 0: [mem 0x0000000040400000-0x00000000819c3fff] Sep 12 
17:54:06.002356 kernel: node 0: [mem 0x00000000819c6000-0x000000008afcdfff]
Sep 12 17:54:06.002361 kernel: node 0: [mem 0x000000008c0b3000-0x000000008c23bfff]
Sep 12 17:54:06.002366 kernel: node 0: [mem 0x000000008eeff000-0x000000008eefffff]
Sep 12 17:54:06.002371 kernel: node 0: [mem 0x0000000100000000-0x000000086effffff]
Sep 12 17:54:06.002380 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000086effffff]
Sep 12 17:54:06.002385 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 12 17:54:06.002391 kernel: On node 0, zone DMA: 103 pages in unavailable ranges
Sep 12 17:54:06.002396 kernel: On node 0, zone DMA32: 1024 pages in unavailable ranges
Sep 12 17:54:06.002402 kernel: On node 0, zone DMA32: 2 pages in unavailable ranges
Sep 12 17:54:06.002408 kernel: On node 0, zone DMA32: 4325 pages in unavailable ranges
Sep 12 17:54:06.002413 kernel: On node 0, zone DMA32: 11459 pages in unavailable ranges
Sep 12 17:54:06.002418 kernel: On node 0, zone Normal: 4352 pages in unavailable ranges
Sep 12 17:54:06.002424 kernel: On node 0, zone Normal: 4096 pages in unavailable ranges
Sep 12 17:54:06.002429 kernel: ACPI: PM-Timer IO Port: 0x1808
Sep 12 17:54:06.002434 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1])
Sep 12 17:54:06.002440 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1])
Sep 12 17:54:06.002445 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1])
Sep 12 17:54:06.002451 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1])
Sep 12 17:54:06.002456 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1])
Sep 12 17:54:06.002462 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1])
Sep 12 17:54:06.002467 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1])
Sep 12 17:54:06.002472 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1])
Sep 12 17:54:06.002477 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1])
Sep 12 17:54:06.002482 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1])
Sep 12 17:54:06.002488 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1])
Sep 12 17:54:06.002493 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1])
Sep 12 17:54:06.002499 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1])
Sep 12 17:54:06.002505 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1])
Sep 12 17:54:06.002510 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1])
Sep 12 17:54:06.002515 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1])
Sep 12 17:54:06.002520 kernel: IOAPIC[0]: apic_id 2, version 32, address 0xfec00000, GSI 0-119
Sep 12 17:54:06.002526 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Sep 12 17:54:06.002531 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Sep 12 17:54:06.002536 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Sep 12 17:54:06.002542 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Sep 12 17:54:06.002548 kernel: TSC deadline timer available
Sep 12 17:54:06.002553 kernel: smpboot: Allowing 16 CPUs, 0 hotplug CPUs
Sep 12 17:54:06.002558 kernel: [mem 0x90000000-0xdfffffff] available for PCI devices
Sep 12 17:54:06.002564 kernel: Booting paravirtualized kernel on bare hardware
Sep 12 17:54:06.002569 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Sep 12 17:54:06.002575 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:16 nr_cpu_ids:16 nr_node_ids:1
Sep 12 17:54:06.002580 kernel: percpu: Embedded 58 pages/cpu s197160 r8192 d32216 u262144
Sep 12 17:54:06.002585 kernel: pcpu-alloc: s197160 r8192 d32216 u262144 alloc=1*2097152
Sep 12 17:54:06.002591 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15
Sep 12 17:54:06.002597 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=1ff9ec556ac80c67ae2340139aa421bf26af13357ec9e72632b4878e9945dc9a
Sep 12 17:54:06.002603 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 12 17:54:06.002608 kernel: random: crng init done
Sep 12 17:54:06.002614 kernel: Dentry cache hash table entries: 4194304 (order: 13, 33554432 bytes, linear)
Sep 12 17:54:06.002619 kernel: Inode-cache hash table entries: 2097152 (order: 12, 16777216 bytes, linear)
Sep 12 17:54:06.002624 kernel: Fallback order for Node 0: 0
Sep 12 17:54:06.002630 kernel: Built 1 zonelists, mobility grouping on. Total pages: 8232416
Sep 12 17:54:06.002635 kernel: Policy zone: Normal
Sep 12 17:54:06.002641 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 12 17:54:06.002646 kernel: software IO TLB: area num 16.
Sep 12 17:54:06.002652 kernel: Memory: 32720308K/33452984K available (12288K kernel code, 2293K rwdata, 22744K rodata, 42884K init, 2312K bss, 732416K reserved, 0K cma-reserved)
Sep 12 17:54:06.002657 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1
Sep 12 17:54:06.002663 kernel: ftrace: allocating 37974 entries in 149 pages
Sep 12 17:54:06.002668 kernel: ftrace: allocated 149 pages with 4 groups
Sep 12 17:54:06.002673 kernel: Dynamic Preempt: voluntary
Sep 12 17:54:06.002679 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 12 17:54:06.002684 kernel: rcu: RCU event tracing is enabled.
Sep 12 17:54:06.002691 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16.
Sep 12 17:54:06.002696 kernel: Trampoline variant of Tasks RCU enabled.
Sep 12 17:54:06.002702 kernel: Rude variant of Tasks RCU enabled.
Sep 12 17:54:06.002707 kernel: Tracing variant of Tasks RCU enabled.
Sep 12 17:54:06.002712 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 12 17:54:06.002718 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16
Sep 12 17:54:06.002723 kernel: NR_IRQS: 33024, nr_irqs: 2184, preallocated irqs: 16
Sep 12 17:54:06.002728 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 12 17:54:06.002733 kernel: Console: colour dummy device 80x25
Sep 12 17:54:06.002739 kernel: printk: console [tty0] enabled
Sep 12 17:54:06.002745 kernel: printk: console [ttyS1] enabled
Sep 12 17:54:06.002751 kernel: ACPI: Core revision 20230628
Sep 12 17:54:06.002756 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 79635855245 ns
Sep 12 17:54:06.002762 kernel: APIC: Switch to symmetric I/O mode setup
Sep 12 17:54:06.002767 kernel: DMAR: Host address width 39
Sep 12 17:54:06.002772 kernel: DMAR: DRHD base: 0x000000fed91000 flags: 0x1
Sep 12 17:54:06.002778 kernel: DMAR: dmar0: reg_base_addr fed91000 ver 1:0 cap d2008c40660462 ecap f050da
Sep 12 17:54:06.002783 kernel: DMAR: RMRR base: 0x0000008cf19000 end: 0x0000008d162fff
Sep 12 17:54:06.002788 kernel: DMAR-IR: IOAPIC id 2 under DRHD base 0xfed91000 IOMMU 0
Sep 12 17:54:06.002795 kernel: DMAR-IR: HPET id 0 under DRHD base 0xfed91000
Sep 12 17:54:06.002800 kernel: DMAR-IR: Queued invalidation will be enabled to support x2apic and Intr-remapping.
Sep 12 17:54:06.002805 kernel: DMAR-IR: Enabled IRQ remapping in x2apic mode
Sep 12 17:54:06.002811 kernel: x2apic enabled
Sep 12 17:54:06.002816 kernel: APIC: Switched APIC routing to: cluster x2apic
Sep 12 17:54:06.002821 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Sep 12 17:54:06.002827 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x3101f59f5e6, max_idle_ns: 440795259996 ns
Sep 12 17:54:06.002832 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 6799.81 BogoMIPS (lpj=3399906)
Sep 12 17:54:06.002838 kernel: CPU0: Thermal monitoring enabled (TM1)
Sep 12 17:54:06.002844 kernel: process: using mwait in idle threads
Sep 12 17:54:06.002849 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Sep 12 17:54:06.002854 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4
Sep 12 17:54:06.002860 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Sep 12 17:54:06.002865 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit
Sep 12 17:54:06.002870 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall
Sep 12 17:54:06.002876 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS
Sep 12 17:54:06.002881 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT
Sep 12 17:54:06.002887 kernel: RETBleed: Mitigation: Enhanced IBRS
Sep 12 17:54:06.002893 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Sep 12 17:54:06.002898 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Sep 12 17:54:06.002903 kernel: TAA: Mitigation: TSX disabled
Sep 12 17:54:06.002909 kernel: MMIO Stale Data: Mitigation: Clear CPU buffers
Sep 12 17:54:06.002914 kernel: SRBDS: Mitigation: Microcode
Sep 12 17:54:06.002920 kernel: GDS: Mitigation: Microcode
Sep 12 17:54:06.002925 kernel: active return thunk: its_return_thunk
Sep 12 17:54:06.002930 kernel: ITS: Mitigation: Aligned branch/return thunks
Sep 12 17:54:06.002935 kernel: VMSCAPE: Mitigation: IBPB before exit to userspace
Sep 12 17:54:06.002942 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Sep 12 17:54:06.002947 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Sep 12 17:54:06.002952 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Sep 12 17:54:06.002958 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Sep 12 17:54:06.002963 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Sep 12 17:54:06.002968 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Sep 12 17:54:06.002973 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64
Sep 12 17:54:06.002979 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64
Sep 12 17:54:06.002984 kernel: x86/fpu: Enabled xstate features 0x1f, context size is 960 bytes, using 'compacted' format.
Sep 12 17:54:06.002990 kernel: Freeing SMP alternatives memory: 32K
Sep 12 17:54:06.002996 kernel: pid_max: default: 32768 minimum: 301
Sep 12 17:54:06.003001 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Sep 12 17:54:06.003006 kernel: landlock: Up and running.
Sep 12 17:54:06.003012 kernel: SELinux: Initializing.
Sep 12 17:54:06.003017 kernel: Mount-cache hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 12 17:54:06.003022 kernel: Mountpoint-cache hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 12 17:54:06.003027 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd)
Sep 12 17:54:06.003034 kernel: RCU Tasks: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Sep 12 17:54:06.003039 kernel: RCU Tasks Rude: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Sep 12 17:54:06.003045 kernel: RCU Tasks Trace: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Sep 12 17:54:06.003050 kernel: Performance Events: PEBS fmt3+, Skylake events, 32-deep LBR, full-width counters, Intel PMU driver.
Sep 12 17:54:06.003056 kernel: ... version: 4
Sep 12 17:54:06.003061 kernel: ... bit width: 48
Sep 12 17:54:06.003066 kernel: ... generic registers: 4
Sep 12 17:54:06.003071 kernel: ... value mask: 0000ffffffffffff
Sep 12 17:54:06.003077 kernel: ... max period: 00007fffffffffff
Sep 12 17:54:06.003083 kernel: ... fixed-purpose events: 3
Sep 12 17:54:06.003088 kernel: ... event mask: 000000070000000f
Sep 12 17:54:06.003094 kernel: signal: max sigframe size: 2032
Sep 12 17:54:06.003099 kernel: Estimated ratio of average max frequency by base frequency (times 1024): 1445
Sep 12 17:54:06.003104 kernel: rcu: Hierarchical SRCU implementation.
Sep 12 17:54:06.003110 kernel: rcu: Max phase no-delay instances is 400.
Sep 12 17:54:06.003115 kernel: NMI watchdog: Enabled. Permanently consumes one hw-PMU counter.
Sep 12 17:54:06.003120 kernel: smp: Bringing up secondary CPUs ...
Sep 12 17:54:06.003126 kernel: smpboot: x86: Booting SMP configuration:
Sep 12 17:54:06.003132 kernel: .... node #0, CPUs: #1 #2 #3 #4 #5 #6 #7 #8 #9 #10 #11 #12 #13 #14 #15
Sep 12 17:54:06.003138 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Sep 12 17:54:06.003143 kernel: smp: Brought up 1 node, 16 CPUs
Sep 12 17:54:06.003148 kernel: smpboot: Max logical packages: 1
Sep 12 17:54:06.003154 kernel: smpboot: Total of 16 processors activated (108796.99 BogoMIPS)
Sep 12 17:54:06.003159 kernel: devtmpfs: initialized
Sep 12 17:54:06.003164 kernel: x86/mm: Memory block size: 128MB
Sep 12 17:54:06.003170 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x819c4000-0x819c4fff] (4096 bytes)
Sep 12 17:54:06.003175 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x8c23c000-0x8c66dfff] (4399104 bytes)
Sep 12 17:54:06.003181 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 12 17:54:06.003187 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear)
Sep 12 17:54:06.003194 kernel: pinctrl core: initialized pinctrl subsystem
Sep 12 17:54:06.003200 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 12 17:54:06.003205 kernel: audit: initializing netlink subsys (disabled)
Sep 12 17:54:06.003232 kernel: audit: type=2000 audit(1757699640.119:1): state=initialized audit_enabled=0 res=1
Sep 12 17:54:06.003237 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 12 17:54:06.003258 kernel: thermal_sys: Registered thermal governor 'user_space'
Sep 12 17:54:06.003264 kernel: cpuidle: using governor menu
Sep 12 17:54:06.003270 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 12 17:54:06.003276 kernel: dca service started, version 1.12.1
Sep 12 17:54:06.003281 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000)
Sep 12 17:54:06.003286 kernel: PCI: Using configuration type 1 for base access
Sep 12 17:54:06.003292 kernel: ENERGY_PERF_BIAS: Set to 'normal', was 'performance'
Sep 12 17:54:06.003297 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Sep 12 17:54:06.003302 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 12 17:54:06.003308 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Sep 12 17:54:06.003313 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 12 17:54:06.003319 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Sep 12 17:54:06.003325 kernel: ACPI: Added _OSI(Module Device)
Sep 12 17:54:06.003330 kernel: ACPI: Added _OSI(Processor Device)
Sep 12 17:54:06.003335 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 12 17:54:06.003341 kernel: ACPI: 12 ACPI AML tables successfully acquired and loaded
Sep 12 17:54:06.003346 kernel: ACPI: Dynamic OEM Table Load:
Sep 12 17:54:06.003351 kernel: ACPI: SSDT 0xFFFF989041AF6000 000400 (v02 PmRef Cpu0Cst 00003001 INTL 20160527)
Sep 12 17:54:06.003357 kernel: ACPI: Dynamic OEM Table Load:
Sep 12 17:54:06.003362 kernel: ACPI: SSDT 0xFFFF989041AE9800 000683 (v02 PmRef Cpu0Ist 00003000 INTL 20160527)
Sep 12 17:54:06.003368 kernel: ACPI: Dynamic OEM Table Load:
Sep 12 17:54:06.003374 kernel: ACPI: SSDT 0xFFFF989040246900 0000F4 (v02 PmRef Cpu0Psd 00003000 INTL 20160527)
Sep 12 17:54:06.003379 kernel: ACPI: Dynamic OEM Table Load:
Sep 12 17:54:06.003384 kernel: ACPI: SSDT 0xFFFF989041E58800 0005FC (v02 PmRef ApIst 00003000 INTL 20160527)
Sep 12 17:54:06.003389 kernel: ACPI: Dynamic OEM Table Load:
Sep 12 17:54:06.003394 kernel: ACPI: SSDT 0xFFFF98904012B000 000AB0 (v02 PmRef ApPsd 00003000 INTL 20160527)
Sep 12 17:54:06.003400 kernel: ACPI: Dynamic OEM Table Load:
Sep 12 17:54:06.003405 kernel: ACPI: SSDT 0xFFFF989041AF0C00 00030A (v02 PmRef ApCst 00003000 INTL 20160527)
Sep 12 17:54:06.003410 kernel: ACPI: _OSC evaluated successfully for all CPUs
Sep 12 17:54:06.003417 kernel: ACPI: Interpreter enabled
Sep 12 17:54:06.003422 kernel: ACPI: PM: (supports S0 S5)
Sep 12 17:54:06.003427 kernel: ACPI: Using IOAPIC for interrupt routing
Sep 12 17:54:06.003433 kernel: HEST: Enabling Firmware First mode for corrected errors.
Sep 12 17:54:06.003438 kernel: mce: [Firmware Bug]: Ignoring request to disable invalid MCA bank 14.
Sep 12 17:54:06.003443 kernel: HEST: Table parsing has been initialized.
Sep 12 17:54:06.003448 kernel: GHES: APEI firmware first mode is enabled by APEI bit and WHEA _OSC.
Sep 12 17:54:06.003454 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Sep 12 17:54:06.003459 kernel: PCI: Ignoring E820 reservations for host bridge windows
Sep 12 17:54:06.003465 kernel: ACPI: Enabled 9 GPEs in block 00 to 7F
Sep 12 17:54:06.003471 kernel: ACPI: \_SB_.PCI0.XDCI.USBC: New power resource
Sep 12 17:54:06.003476 kernel: ACPI: \_SB_.PCI0.SAT0.VOL0.V0PR: New power resource
Sep 12 17:54:06.003482 kernel: ACPI: \_SB_.PCI0.SAT0.VOL1.V1PR: New power resource
Sep 12 17:54:06.003487 kernel: ACPI: \_SB_.PCI0.SAT0.VOL2.V2PR: New power resource
Sep 12 17:54:06.003492 kernel: ACPI: \_SB_.PCI0.CNVW.WRST: New power resource
Sep 12 17:54:06.003498 kernel: ACPI: \_TZ_.FN00: New power resource
Sep 12 17:54:06.003503 kernel: ACPI: \_TZ_.FN01: New power resource
Sep 12 17:54:06.003509 kernel: ACPI: \_TZ_.FN02: New power resource
Sep 12 17:54:06.003514 kernel: ACPI: \_TZ_.FN03: New power resource
Sep 12 17:54:06.003520 kernel: ACPI: \_TZ_.FN04: New power resource
Sep 12 17:54:06.003526 kernel: ACPI: \PIN_: New power resource
Sep 12 17:54:06.003531 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-fe])
Sep 12 17:54:06.003609 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 12 17:54:06.003663 kernel: acpi PNP0A08:00: _OSC: platform does not support [AER]
Sep 12 17:54:06.003713 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability LTR]
Sep 12 17:54:06.003721 kernel: PCI host bridge to bus 0000:00
Sep 12 17:54:06.003776 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Sep 12 17:54:06.003821 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Sep 12 17:54:06.003864 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Sep 12 17:54:06.003909 kernel: pci_bus 0000:00: root bus resource [mem 0x90000000-0xdfffffff window]
Sep 12 17:54:06.003951 kernel: pci_bus 0000:00: root bus resource [mem 0xfc800000-0xfe7fffff window]
Sep 12 17:54:06.003993 kernel: pci_bus 0000:00: root bus resource [bus 00-fe]
Sep 12 17:54:06.004051 kernel: pci 0000:00:00.0: [8086:3e31] type 00 class 0x060000
Sep 12 17:54:06.004109 kernel: pci 0000:00:01.0: [8086:1901] type 01 class 0x060400
Sep 12 17:54:06.004158 kernel: pci 0000:00:01.0: PME# supported from D0 D3hot D3cold
Sep 12 17:54:06.004234 kernel: pci 0000:00:01.1: [8086:1905] type 01 class 0x060400
Sep 12 17:54:06.004298 kernel: pci 0000:00:01.1: PME# supported from D0 D3hot D3cold
Sep 12 17:54:06.004350 kernel: pci 0000:00:08.0: [8086:1911] type 00 class 0x088000
Sep 12 17:54:06.004402 kernel: pci 0000:00:08.0: reg 0x10: [mem 0x9551f000-0x9551ffff 64bit]
Sep 12 17:54:06.004458 kernel: pci 0000:00:12.0: [8086:a379] type 00 class 0x118000
Sep 12 17:54:06.004507 kernel: pci 0000:00:12.0: reg 0x10: [mem 0x9551e000-0x9551efff 64bit]
Sep 12 17:54:06.004560 kernel: pci 0000:00:14.0: [8086:a36d] type 00 class 0x0c0330
Sep 12 17:54:06.004609 kernel: pci 0000:00:14.0: reg 0x10: [mem 0x95500000-0x9550ffff 64bit]
Sep 12 17:54:06.004657 kernel: pci 0000:00:14.0: PME# supported from D3hot D3cold
Sep 12 17:54:06.004710 kernel: pci 0000:00:14.2: [8086:a36f] type 00 class 0x050000
Sep 12 17:54:06.004762 kernel: pci 0000:00:14.2: reg 0x10: [mem 0x95512000-0x95513fff 64bit]
Sep 12 17:54:06.004811 kernel: pci 0000:00:14.2: reg 0x18: [mem 0x9551d000-0x9551dfff 64bit]
Sep 12 17:54:06.004862 kernel: pci 0000:00:15.0: [8086:a368] type 00 class 0x0c8000
Sep 12 17:54:06.004911 kernel: pci 0000:00:15.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit]
Sep 12 17:54:06.004964 kernel: pci 0000:00:15.1: [8086:a369] type 00 class 0x0c8000
Sep 12 17:54:06.005012 kernel: pci 0000:00:15.1: reg 0x10: [mem 0x00000000-0x00000fff 64bit]
Sep 12 17:54:06.005075 kernel: pci 0000:00:16.0: [8086:a360] type 00 class 0x078000
Sep 12 17:54:06.005123 kernel: pci 0000:00:16.0: reg 0x10: [mem 0x9551a000-0x9551afff 64bit]
Sep 12 17:54:06.005171 kernel: pci 0000:00:16.0: PME# supported from D3hot
Sep 12 17:54:06.005258 kernel: pci 0000:00:16.1: [8086:a361] type 00 class 0x078000
Sep 12 17:54:06.005308 kernel: pci 0000:00:16.1: reg 0x10: [mem 0x95519000-0x95519fff 64bit]
Sep 12 17:54:06.005356 kernel: pci 0000:00:16.1: PME# supported from D3hot
Sep 12 17:54:06.005408 kernel: pci 0000:00:16.4: [8086:a364] type 00 class 0x078000
Sep 12 17:54:06.005459 kernel: pci 0000:00:16.4: reg 0x10: [mem 0x95518000-0x95518fff 64bit]
Sep 12 17:54:06.005508 kernel: pci 0000:00:16.4: PME# supported from D3hot
Sep 12 17:54:06.005562 kernel: pci 0000:00:17.0: [8086:a352] type 00 class 0x010601
Sep 12 17:54:06.005611 kernel: pci 0000:00:17.0: reg 0x10: [mem 0x95510000-0x95511fff]
Sep 12 17:54:06.005658 kernel: pci 0000:00:17.0: reg 0x14: [mem 0x95517000-0x955170ff]
Sep 12 17:54:06.005707 kernel: pci 0000:00:17.0: reg 0x18: [io 0x6050-0x6057]
Sep 12 17:54:06.005757 kernel: pci 0000:00:17.0: reg 0x1c: [io 0x6040-0x6043]
Sep 12 17:54:06.005806 kernel: pci 0000:00:17.0: reg 0x20: [io 0x6020-0x603f]
Sep 12 17:54:06.005853 kernel: pci 0000:00:17.0: reg 0x24: [mem 0x95516000-0x955167ff]
Sep 12 17:54:06.005901 kernel: pci 0000:00:17.0: PME# supported from D3hot
Sep 12 17:54:06.005954 kernel: pci 0000:00:1b.0: [8086:a340] type 01 class 0x060400
Sep 12 17:54:06.006005 kernel: pci 0000:00:1b.0: PME# supported from D0 D3hot D3cold
Sep 12 17:54:06.006059 kernel: pci 0000:00:1b.4: [8086:a32c] type 01 class 0x060400
Sep 12 17:54:06.006107 kernel: pci 0000:00:1b.4: PME# supported from D0 D3hot D3cold
Sep 12 17:54:06.006163 kernel: pci 0000:00:1b.5: [8086:a32d] type 01 class 0x060400
Sep 12 17:54:06.006253 kernel: pci 0000:00:1b.5: PME# supported from D0 D3hot D3cold
Sep 12 17:54:06.006310 kernel: pci 0000:00:1c.0: [8086:a338] type 01 class 0x060400
Sep 12 17:54:06.006360 kernel: pci 0000:00:1c.0: PME# supported from D0 D3hot D3cold
Sep 12 17:54:06.006419 kernel: pci 0000:00:1c.1: [8086:a339] type 01 class 0x060400
Sep 12 17:54:06.006469 kernel: pci 0000:00:1c.1: PME# supported from D0 D3hot D3cold
Sep 12 17:54:06.006524 kernel: pci 0000:00:1e.0: [8086:a328] type 00 class 0x078000
Sep 12 17:54:06.006572 kernel: pci 0000:00:1e.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit]
Sep 12 17:54:06.006626 kernel: pci 0000:00:1f.0: [8086:a309] type 00 class 0x060100
Sep 12 17:54:06.006680 kernel: pci 0000:00:1f.4: [8086:a323] type 00 class 0x0c0500
Sep 12 17:54:06.006729 kernel: pci 0000:00:1f.4: reg 0x10: [mem 0x95514000-0x955140ff 64bit]
Sep 12 17:54:06.006778 kernel: pci 0000:00:1f.4: reg 0x20: [io 0xefa0-0xefbf]
Sep 12 17:54:06.006830 kernel: pci 0000:00:1f.5: [8086:a324] type 00 class 0x0c8000
Sep 12 17:54:06.006879 kernel: pci 0000:00:1f.5: reg 0x10: [mem 0xfe010000-0xfe010fff]
Sep 12 17:54:06.006928 kernel: pci 0000:00:01.0: PCI bridge to [bus 01]
Sep 12 17:54:06.006987 kernel: pci 0000:02:00.0: [15b3:1015] type 00 class 0x020000
Sep 12 17:54:06.007038 kernel: pci 0000:02:00.0: reg 0x10: [mem 0x92000000-0x93ffffff 64bit pref]
Sep 12 17:54:06.007088 kernel: pci 0000:02:00.0: reg 0x30: [mem 0x95200000-0x952fffff pref]
Sep 12 17:54:06.007137 kernel: pci 0000:02:00.0: PME# supported from D3cold
Sep 12 17:54:06.007187 kernel: pci 0000:02:00.0: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref]
Sep 12 17:54:06.007274 kernel: pci 0000:02:00.0: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs)
Sep 12 17:54:06.007331 kernel: pci 0000:02:00.1: [15b3:1015] type 00 class 0x020000
Sep 12 17:54:06.007385 kernel: pci 0000:02:00.1: reg 0x10: [mem 0x90000000-0x91ffffff 64bit pref]
Sep 12 17:54:06.007434 kernel: pci 0000:02:00.1: reg 0x30: [mem 0x95100000-0x951fffff pref]
Sep 12 17:54:06.007484 kernel: pci 0000:02:00.1: PME# supported from D3cold
Sep 12 17:54:06.007533 kernel: pci 0000:02:00.1: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref]
Sep 12 17:54:06.007584 kernel: pci 0000:02:00.1: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs)
Sep 12 17:54:06.007633 kernel: pci 0000:00:01.1: PCI bridge to [bus 02]
Sep 12 17:54:06.007682 kernel: pci 0000:00:01.1: bridge window [mem 0x95100000-0x952fffff]
Sep 12 17:54:06.007733 kernel: pci 0000:00:01.1: bridge window [mem 0x90000000-0x93ffffff 64bit pref]
Sep 12 17:54:06.007782 kernel: pci 0000:00:1b.0: PCI bridge to [bus 03]
Sep 12 17:54:06.007838 kernel: pci 0000:04:00.0: working around ROM BAR overlap defect
Sep 12 17:54:06.007888 kernel: pci 0000:04:00.0: [8086:1533] type 00 class 0x020000
Sep 12 17:54:06.007938 kernel: pci 0000:04:00.0: reg 0x10: [mem 0x95400000-0x9547ffff]
Sep 12 17:54:06.007986 kernel: pci 0000:04:00.0: reg 0x18: [io 0x5000-0x501f]
Sep 12 17:54:06.008037 kernel: pci 0000:04:00.0: reg 0x1c: [mem 0x95480000-0x95483fff]
Sep 12 17:54:06.008085 kernel: pci 0000:04:00.0: PME# supported from D0 D3hot D3cold
Sep 12 17:54:06.008138 kernel: pci 0000:00:1b.4: PCI bridge to [bus 04]
Sep 12 17:54:06.008186 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff]
Sep 12 17:54:06.008272 kernel: pci 0000:00:1b.4: bridge window [mem 0x95400000-0x954fffff]
Sep 12 17:54:06.008328 kernel: pci 0000:05:00.0: working around ROM BAR overlap defect
Sep 12 17:54:06.008377 kernel: pci 0000:05:00.0: [8086:1533] type 00 class 0x020000
Sep 12 17:54:06.008427 kernel: pci 0000:05:00.0: reg 0x10: [mem 0x95300000-0x9537ffff]
Sep 12 17:54:06.008476 kernel: pci 0000:05:00.0: reg 0x18: [io 0x4000-0x401f]
Sep 12 17:54:06.008528 kernel: pci 0000:05:00.0: reg 0x1c: [mem 0x95380000-0x95383fff]
Sep 12 17:54:06.008577 kernel: pci 0000:05:00.0: PME# supported from D0 D3hot D3cold
Sep 12 17:54:06.008627 kernel: pci 0000:00:1b.5: PCI bridge to [bus 05]
Sep 12 17:54:06.008678 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff]
Sep 12 17:54:06.008728 kernel: pci 0000:00:1b.5: bridge window [mem 0x95300000-0x953fffff]
Sep 12 17:54:06.008776 kernel: pci 0000:00:1c.0: PCI bridge to [bus 06]
Sep 12 17:54:06.008830 kernel: pci 0000:07:00.0: [1a03:1150] type 01 class 0x060400
Sep 12 17:54:06.008884 kernel: pci 0000:07:00.0: enabling Extended Tags
Sep 12 17:54:06.008933 kernel: pci 0000:07:00.0: supports D1 D2
Sep 12 17:54:06.008983 kernel: pci 0000:07:00.0: PME# supported from D0 D1 D2 D3hot D3cold
Sep 12 17:54:06.009031 kernel: pci 0000:00:1c.1: PCI bridge to [bus 07-08]
Sep 12 17:54:06.009079 kernel: pci 0000:00:1c.1: bridge window [io 0x3000-0x3fff]
Sep 12 17:54:06.009127 kernel: pci 0000:00:1c.1: bridge window [mem 0x94000000-0x950fffff]
Sep 12 17:54:06.009180 kernel: pci_bus 0000:08: extended config space not accessible
Sep 12 17:54:06.009284 kernel: pci 0000:08:00.0: [1a03:2000] type 00 class 0x030000
Sep 12 17:54:06.009340 kernel: pci 0000:08:00.0: reg 0x10: [mem 0x94000000-0x94ffffff]
Sep 12 17:54:06.009392 kernel: pci 0000:08:00.0: reg 0x14: [mem 0x95000000-0x9501ffff]
Sep 12 17:54:06.009443 kernel: pci 0000:08:00.0: reg 0x18: [io 0x3000-0x307f]
Sep 12 17:54:06.009495 kernel: pci 0000:08:00.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Sep 12 17:54:06.009545 kernel: pci 0000:08:00.0: supports D1 D2
Sep 12 17:54:06.009597 kernel: pci 0000:08:00.0: PME# supported from D0 D1 D2 D3hot D3cold
Sep 12 17:54:06.009648 kernel: pci 0000:07:00.0: PCI bridge to [bus 08]
Sep 12 17:54:06.009699 kernel: pci 0000:07:00.0: bridge window [io 0x3000-0x3fff]
Sep 12 17:54:06.009752 kernel: pci 0000:07:00.0: bridge window [mem 0x94000000-0x950fffff]
Sep 12 17:54:06.009761 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 0
Sep 12 17:54:06.009767 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 1
Sep 12 17:54:06.009773 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 0
Sep 12 17:54:06.009778 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 0
Sep 12 17:54:06.009784 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 0
Sep 12 17:54:06.009790 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 0
Sep 12 17:54:06.009795 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 0
Sep 12 17:54:06.009803 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 0
Sep 12 17:54:06.009809 kernel: iommu: Default domain type: Translated
Sep 12 17:54:06.009814 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Sep 12 17:54:06.009820 kernel: PCI: Using ACPI for IRQ routing
Sep 12 17:54:06.009826 kernel: PCI: pci_cache_line_size set to 64 bytes
Sep 12 17:54:06.009831 kernel: e820: reserve RAM buffer [mem 0x00099800-0x0009ffff]
Sep 12 17:54:06.009837 kernel: e820: reserve RAM buffer [mem 0x819c4000-0x83ffffff]
Sep 12 17:54:06.009842 kernel: e820: reserve RAM buffer [mem 0x8afce000-0x8bffffff]
Sep 12 17:54:06.009849 kernel: e820: reserve RAM buffer [mem 0x8c23c000-0x8fffffff]
Sep 12 17:54:06.009854 kernel: e820: reserve RAM buffer [mem 0x8ef00000-0x8fffffff]
Sep 12 17:54:06.009860 kernel: e820: reserve RAM buffer [mem 0x86f000000-0x86fffffff]
Sep 12 17:54:06.009910 kernel: pci 0000:08:00.0: vgaarb: setting as boot VGA device
Sep 12 17:54:06.009962 kernel: pci 0000:08:00.0: vgaarb: bridge control possible
Sep 12 17:54:06.010014 kernel: pci 0000:08:00.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Sep 12 17:54:06.010022 kernel: vgaarb: loaded
Sep 12 17:54:06.010028 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0
Sep 12 17:54:06.010034 kernel: hpet0: 8 comparators, 64-bit 24.000000 MHz counter
Sep 12 17:54:06.010041 kernel: clocksource: Switched to clocksource tsc-early
Sep 12 17:54:06.010047 kernel: VFS: Disk quotas dquot_6.6.0
Sep 12 17:54:06.010053 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 12 17:54:06.010058 kernel: pnp: PnP ACPI init
Sep 12 17:54:06.010111 kernel: system 00:00: [mem 0x40000000-0x403fffff] has been reserved
Sep 12 17:54:06.010161 kernel: pnp 00:02: [dma 0 disabled]
Sep 12 17:54:06.010236 kernel: pnp 00:03: [dma 0 disabled]
Sep 12 17:54:06.010308 kernel: system 00:04: [io 0x0680-0x069f] has been reserved
Sep 12 17:54:06.010353 kernel: system 00:04: [io 0x164e-0x164f] has been reserved
Sep 12 17:54:06.010401 kernel: system 00:05: [mem 0xfed10000-0xfed17fff] has been reserved
Sep 12 17:54:06.010446 kernel: system 00:05: [mem 0xfed18000-0xfed18fff] has been reserved
Sep 12 17:54:06.010490 kernel: system 00:05: [mem 0xfed19000-0xfed19fff] has been reserved
Sep 12 17:54:06.010534 kernel: system 00:05: [mem 0xe0000000-0xefffffff] has been reserved
Sep 12 17:54:06.010578 kernel: system 00:05: [mem 0xfed20000-0xfed3ffff] has been reserved
Sep 12 17:54:06.010624 kernel: system 00:05: [mem 0xfed90000-0xfed93fff] could not be reserved
Sep 12 17:54:06.010669 kernel: system 00:05: [mem 0xfed45000-0xfed8ffff] has been reserved
Sep 12 17:54:06.010713 kernel: system 00:05: [mem 0xfee00000-0xfeefffff] could not be reserved
Sep 12 17:54:06.010765 kernel: system 00:06: [io 0x1800-0x18fe] could not be reserved
Sep 12 17:54:06.010813 kernel: system 00:06: [mem 0xfd000000-0xfd69ffff] has been reserved
Sep 12 17:54:06.010858 kernel: system 00:06: [mem 0xfd6c0000-0xfd6cffff] has been reserved
Sep 12 17:54:06.010903 kernel: system 00:06: [mem 0xfd6f0000-0xfdffffff] has been reserved
Sep 12 17:54:06.010950 kernel: system 00:06: [mem 0xfe000000-0xfe01ffff] could not be reserved
Sep 12 17:54:06.010994 kernel: system 00:06: [mem 0xfe200000-0xfe7fffff] has been reserved
Sep 12 17:54:06.011038 kernel: system 00:06: [mem 0xff000000-0xffffffff] has been reserved
Sep 12 17:54:06.011086 kernel: system 00:07: [io 0x2000-0x20fe] has been reserved
Sep 12 17:54:06.011095 kernel: pnp: PnP ACPI: found 9 devices
Sep 12 17:54:06.011101 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Sep 12 17:54:06.011107 kernel: NET: Registered PF_INET protocol family
Sep 12 17:54:06.011114 kernel: IP idents hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 12 17:54:06.011120 kernel: tcp_listen_portaddr_hash hash table entries: 16384 (order: 6, 262144 bytes, linear)
Sep 12 17:54:06.011126 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 12 17:54:06.011131 kernel: TCP established hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 12 17:54:06.011137 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
Sep 12 17:54:06.011143 kernel: TCP: Hash tables configured (established 262144 bind 65536)
Sep 12 17:54:06.011150 kernel: UDP hash table entries: 16384 (order: 7, 524288 bytes, linear)
Sep 12 17:54:06.011156 kernel: UDP-Lite hash table entries: 16384 (order: 7, 524288 bytes, linear)
Sep 12 17:54:06.011161 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 12 17:54:06.011168 kernel: NET: Registered PF_XDP protocol family
Sep 12 17:54:06.011262 kernel: pci 0000:00:15.0: BAR 0: assigned [mem 0x95515000-0x95515fff 64bit]
Sep 12 17:54:06.011313 kernel: pci 0000:00:15.1: BAR 0: assigned [mem 0x9551b000-0x9551bfff 64bit]
Sep 12 17:54:06.011362 kernel: pci 0000:00:1e.0: BAR 0: assigned [mem 0x9551c000-0x9551cfff 64bit]
Sep 12 17:54:06.011411 kernel: pci 0000:00:01.0: PCI bridge to [bus 01]
Sep 12 17:54:06.011462 kernel: pci 0000:02:00.0: BAR 7: no space for [mem size 0x00800000 64bit pref]
Sep 12 17:54:06.011513 kernel: pci 0000:02:00.0: BAR 7: failed to assign [mem size 0x00800000 64bit pref]
Sep 12 17:54:06.011563 kernel: pci 0000:02:00.1: BAR 7: no space for [mem size 0x00800000 64bit pref]
Sep 12 17:54:06.011616 kernel: pci 0000:02:00.1: BAR 7: failed to assign [mem size 0x00800000 64bit pref]
Sep 12 17:54:06.011667 kernel: pci 0000:00:01.1: PCI bridge to [bus 02]
Sep 12 17:54:06.011716 kernel: pci 0000:00:01.1: bridge window [mem 0x95100000-0x952fffff]
Sep 12 17:54:06.011764 kernel: pci 0000:00:01.1: bridge window [mem 0x90000000-0x93ffffff 64bit pref]
Sep 12 17:54:06.011812 kernel: pci 0000:00:1b.0: PCI bridge to [bus 03]
Sep 12 17:54:06.011864 kernel: pci 0000:00:1b.4: PCI bridge to [bus 04]
Sep 12 17:54:06.011913 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff]
Sep 12 17:54:06.011962 kernel: pci 0000:00:1b.4: bridge window [mem 0x95400000-0x954fffff]
Sep 12 17:54:06.012010 kernel: pci 0000:00:1b.5: PCI bridge to [bus 05]
Sep 12 17:54:06.012059 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff]
Sep 12 17:54:06.012106 kernel: pci 0000:00:1b.5: bridge window [mem 0x95300000-0x953fffff]
Sep 12 17:54:06.012154 kernel: pci 0000:00:1c.0: PCI bridge to [bus 06]
Sep 12 17:54:06.012249 kernel: pci 0000:07:00.0: PCI bridge to [bus 08]
Sep 12 17:54:06.012302 kernel: pci 0000:07:00.0: bridge window [io 0x3000-0x3fff]
Sep 12 17:54:06.012353 kernel: pci 0000:07:00.0: bridge window [mem 0x94000000-0x950fffff]
Sep 12 17:54:06.012401 kernel: pci 0000:00:1c.1: PCI bridge to [bus 07-08]
Sep 12 17:54:06.012450 kernel: pci 0000:00:1c.1: bridge window [io 0x3000-0x3fff]
Sep 12 17:54:06.012498 kernel: pci 0000:00:1c.1: bridge window [mem 0x94000000-0x950fffff]
Sep 12 17:54:06.012543 kernel: pci_bus 0000:00: Some PCI device resources are unassigned, try booting with pci=realloc
Sep 12 17:54:06.012586 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Sep 12 17:54:06.012630 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Sep 12 17:54:06.012672 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Sep 12 17:54:06.012718 kernel: pci_bus 0000:00: resource 7 [mem 0x90000000-0xdfffffff window]
Sep 12 17:54:06.012760 kernel: pci_bus 0000:00: resource 8 [mem 0xfc800000-0xfe7fffff window]
Sep 12 17:54:06.012810 kernel: pci_bus 0000:02: resource 1 [mem 0x95100000-0x952fffff]
Sep 12 17:54:06.012855 kernel: pci_bus 0000:02: resource 2 [mem 0x90000000-0x93ffffff 64bit pref]
Sep 12 17:54:06.012904 kernel: pci_bus 0000:04: resource 0 [io 0x5000-0x5fff]
Sep 12 17:54:06.012948 kernel: pci_bus 0000:04: resource 1 [mem 0x95400000-0x954fffff]
Sep 12 17:54:06.013001 kernel: pci_bus 0000:05: resource 0 [io 0x4000-0x4fff]
Sep 12 17:54:06.013046 kernel: pci_bus 0000:05: resource 1 [mem 0x95300000-0x953fffff]
Sep 12 17:54:06.013094 kernel: pci_bus 0000:07: resource 0 [io 0x3000-0x3fff]
Sep 12 17:54:06.013140 kernel: pci_bus 0000:07: resource 1 [mem 0x94000000-0x950fffff]
Sep 12 17:54:06.013187 kernel: pci_bus 0000:08: resource 0 [io 0x3000-0x3fff]
Sep 12 17:54:06.013280 kernel: pci_bus 0000:08: resource 1 [mem 0x94000000-0x950fffff]
Sep 12 17:54:06.013288 kernel: PCI: CLS 64 bytes, default 64
Sep 12 17:54:06.013296 kernel: DMAR: No ATSR found
Sep 12 17:54:06.013302 kernel: DMAR: No SATC found
Sep 12 17:54:06.013308 kernel: DMAR: dmar0: Using Queued invalidation
Sep 12 17:54:06.013356 kernel: pci 0000:00:00.0: Adding to iommu group 0
Sep 12 17:54:06.013406 kernel: pci 0000:00:01.0: Adding to iommu group 1
Sep 12 17:54:06.013454 kernel: pci 0000:00:01.1: Adding to iommu group 1
Sep 12 17:54:06.013503 kernel: pci 0000:00:08.0: Adding to iommu group 2
Sep 12 17:54:06.013551 kernel: pci 0000:00:12.0: Adding to iommu group 3
Sep 12 17:54:06.013600 kernel: pci 0000:00:14.0: Adding to iommu group 4
Sep 12 17:54:06.013650 kernel: pci 0000:00:14.2: Adding to iommu group 4
Sep 12 17:54:06.013699 kernel: pci 0000:00:15.0: Adding to iommu group 5
Sep 12 17:54:06.013746 kernel: pci 0000:00:15.1: Adding to iommu group 5
Sep 12 17:54:06.013795 kernel: pci 0000:00:16.0: Adding to iommu group 6
Sep 12 17:54:06.013843 kernel: pci 0000:00:16.1: Adding to iommu group 6
Sep 12 17:54:06.013892 kernel: pci 0000:00:16.4: Adding to iommu group 6
Sep 12 17:54:06.013940 kernel: pci 0000:00:17.0: Adding to iommu group 7
Sep 12 17:54:06.013989 kernel: pci 0000:00:1b.0: Adding to iommu group 8
Sep 12 17:54:06.014040 kernel: pci 0000:00:1b.4: Adding to iommu group 9
Sep 12 17:54:06.014088 kernel: pci 0000:00:1b.5: Adding to iommu group 10
Sep 12 17:54:06.014137 kernel: pci 0000:00:1c.0: Adding to iommu group 11
Sep 12 17:54:06.014184 kernel: pci 0000:00:1c.1: Adding to iommu group 12
Sep 12 17:54:06.014279 kernel: pci 0000:00:1e.0: Adding to iommu group 13
Sep 12 17:54:06.014327 kernel: pci 0000:00:1f.0: Adding to iommu group 14
Sep 12 17:54:06.014375 kernel: pci
0000:00:1f.4: Adding to iommu group 14 Sep 12 17:54:06.014423 kernel: pci 0000:00:1f.5: Adding to iommu group 14 Sep 12 17:54:06.014476 kernel: pci 0000:02:00.0: Adding to iommu group 1 Sep 12 17:54:06.014524 kernel: pci 0000:02:00.1: Adding to iommu group 1 Sep 12 17:54:06.014575 kernel: pci 0000:04:00.0: Adding to iommu group 15 Sep 12 17:54:06.014624 kernel: pci 0000:05:00.0: Adding to iommu group 16 Sep 12 17:54:06.014674 kernel: pci 0000:07:00.0: Adding to iommu group 17 Sep 12 17:54:06.014726 kernel: pci 0000:08:00.0: Adding to iommu group 17 Sep 12 17:54:06.014735 kernel: DMAR: Intel(R) Virtualization Technology for Directed I/O Sep 12 17:54:06.014741 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Sep 12 17:54:06.014749 kernel: software IO TLB: mapped [mem 0x0000000086fce000-0x000000008afce000] (64MB) Sep 12 17:54:06.014755 kernel: RAPL PMU: API unit is 2^-32 Joules, 3 fixed counters, 655360 ms ovfl timer Sep 12 17:54:06.014761 kernel: RAPL PMU: hw unit of domain pp0-core 2^-14 Joules Sep 12 17:54:06.014766 kernel: RAPL PMU: hw unit of domain package 2^-14 Joules Sep 12 17:54:06.014772 kernel: RAPL PMU: hw unit of domain dram 2^-14 Joules Sep 12 17:54:06.014826 kernel: platform rtc_cmos: registered platform RTC device (no PNP device found) Sep 12 17:54:06.014835 kernel: Initialise system trusted keyrings Sep 12 17:54:06.014841 kernel: workingset: timestamp_bits=39 max_order=23 bucket_order=0 Sep 12 17:54:06.014848 kernel: Key type asymmetric registered Sep 12 17:54:06.014854 kernel: Asymmetric key parser 'x509' registered Sep 12 17:54:06.014859 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Sep 12 17:54:06.014865 kernel: io scheduler mq-deadline registered Sep 12 17:54:06.014871 kernel: io scheduler kyber registered Sep 12 17:54:06.014876 kernel: io scheduler bfq registered Sep 12 17:54:06.014926 kernel: pcieport 0000:00:01.0: PME: Signaling with IRQ 121 Sep 12 17:54:06.014975 kernel: pcieport 0000:00:01.1: 
PME: Signaling with IRQ 122 Sep 12 17:54:06.015023 kernel: pcieport 0000:00:1b.0: PME: Signaling with IRQ 123 Sep 12 17:54:06.015074 kernel: pcieport 0000:00:1b.4: PME: Signaling with IRQ 124 Sep 12 17:54:06.015123 kernel: pcieport 0000:00:1b.5: PME: Signaling with IRQ 125 Sep 12 17:54:06.015171 kernel: pcieport 0000:00:1c.0: PME: Signaling with IRQ 126 Sep 12 17:54:06.015263 kernel: pcieport 0000:00:1c.1: PME: Signaling with IRQ 127 Sep 12 17:54:06.015316 kernel: thermal LNXTHERM:00: registered as thermal_zone0 Sep 12 17:54:06.015325 kernel: ACPI: thermal: Thermal Zone [TZ00] (28 C) Sep 12 17:54:06.015331 kernel: ERST: Error Record Serialization Table (ERST) support is initialized. Sep 12 17:54:06.015339 kernel: pstore: Using crash dump compression: deflate Sep 12 17:54:06.015344 kernel: pstore: Registered erst as persistent store backend Sep 12 17:54:06.015350 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Sep 12 17:54:06.015356 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Sep 12 17:54:06.015362 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Sep 12 17:54:06.015367 kernel: 00:03: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Sep 12 17:54:06.015417 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x1B, rev-id 16) Sep 12 17:54:06.015425 kernel: i8042: PNP: No PS/2 controller found. 
Sep 12 17:54:06.015471 kernel: rtc_cmos rtc_cmos: RTC can wake from S4 Sep 12 17:54:06.015517 kernel: rtc_cmos rtc_cmos: registered as rtc0 Sep 12 17:54:06.015562 kernel: rtc_cmos rtc_cmos: setting system clock to 2025-09-12T17:54:04 UTC (1757699644) Sep 12 17:54:06.015606 kernel: rtc_cmos rtc_cmos: alarms up to one month, y3k, 114 bytes nvram Sep 12 17:54:06.015614 kernel: intel_pstate: Intel P-state driver initializing Sep 12 17:54:06.015620 kernel: intel_pstate: Disabling energy efficiency optimization Sep 12 17:54:06.015626 kernel: intel_pstate: HWP enabled Sep 12 17:54:06.015632 kernel: vesafb: mode is 1024x768x8, linelength=1024, pages=0 Sep 12 17:54:06.015637 kernel: vesafb: scrolling: redraw Sep 12 17:54:06.015645 kernel: vesafb: Pseudocolor: size=0:8:8:8, shift=0:0:0:0 Sep 12 17:54:06.015651 kernel: vesafb: framebuffer at 0x94000000, mapped to 0x000000008f7d1357, using 768k, total 768k Sep 12 17:54:06.015656 kernel: Console: switching to colour frame buffer device 128x48 Sep 12 17:54:06.015662 kernel: fb0: VESA VGA frame buffer device Sep 12 17:54:06.015668 kernel: NET: Registered PF_INET6 protocol family Sep 12 17:54:06.015673 kernel: Segment Routing with IPv6 Sep 12 17:54:06.015679 kernel: In-situ OAM (IOAM) with IPv6 Sep 12 17:54:06.015685 kernel: NET: Registered PF_PACKET protocol family Sep 12 17:54:06.015690 kernel: Key type dns_resolver registered Sep 12 17:54:06.015697 kernel: microcode: Current revision: 0x00000102 Sep 12 17:54:06.015703 kernel: microcode: Microcode Update Driver: v2.2. 
Sep 12 17:54:06.015708 kernel: IPI shorthand broadcast: enabled Sep 12 17:54:06.015714 kernel: sched_clock: Marking stable (1661000723, 1374259454)->(4470775919, -1435515742) Sep 12 17:54:06.015720 kernel: registered taskstats version 1 Sep 12 17:54:06.015725 kernel: Loading compiled-in X.509 certificates Sep 12 17:54:06.015731 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.106-flatcar: 449ba23cbe21e08b3bddb674b4885682335ee1f9' Sep 12 17:54:06.015737 kernel: Key type .fscrypt registered Sep 12 17:54:06.015742 kernel: Key type fscrypt-provisioning registered Sep 12 17:54:06.015749 kernel: ima: Allocated hash algorithm: sha1 Sep 12 17:54:06.015754 kernel: ima: No architecture policies found Sep 12 17:54:06.015760 kernel: clk: Disabling unused clocks Sep 12 17:54:06.015766 kernel: Freeing unused kernel image (initmem) memory: 42884K Sep 12 17:54:06.015771 kernel: Write protecting the kernel read-only data: 36864k Sep 12 17:54:06.015777 kernel: Freeing unused kernel image (rodata/data gap) memory: 1832K Sep 12 17:54:06.015783 kernel: Run /init as init process Sep 12 17:54:06.015788 kernel: with arguments: Sep 12 17:54:06.015794 kernel: /init Sep 12 17:54:06.015801 kernel: with environment: Sep 12 17:54:06.015806 kernel: HOME=/ Sep 12 17:54:06.015812 kernel: TERM=linux Sep 12 17:54:06.015817 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Sep 12 17:54:06.015824 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Sep 12 17:54:06.015831 systemd[1]: Detected architecture x86-64. Sep 12 17:54:06.015837 systemd[1]: Running in initrd. Sep 12 17:54:06.015844 systemd[1]: No hostname configured, using default hostname. Sep 12 17:54:06.015850 systemd[1]: Hostname set to . 
Sep 12 17:54:06.015856 systemd[1]: Initializing machine ID from random generator. Sep 12 17:54:06.015862 systemd[1]: Queued start job for default target initrd.target. Sep 12 17:54:06.015867 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 12 17:54:06.015873 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 12 17:54:06.015880 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Sep 12 17:54:06.015886 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 12 17:54:06.015893 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Sep 12 17:54:06.015899 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Sep 12 17:54:06.015905 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Sep 12 17:54:06.015911 kernel: tsc: Refined TSC clocksource calibration: 3408.043 MHz Sep 12 17:54:06.015917 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311ffc74570, max_idle_ns: 440795256509 ns Sep 12 17:54:06.015923 kernel: clocksource: Switched to clocksource tsc Sep 12 17:54:06.015929 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Sep 12 17:54:06.015936 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 12 17:54:06.015942 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 12 17:54:06.015948 systemd[1]: Reached target paths.target - Path Units. Sep 12 17:54:06.015954 systemd[1]: Reached target slices.target - Slice Units. Sep 12 17:54:06.015960 systemd[1]: Reached target swap.target - Swaps. Sep 12 17:54:06.015966 systemd[1]: Reached target timers.target - Timer Units. 
Sep 12 17:54:06.015971 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Sep 12 17:54:06.015977 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 12 17:54:06.015983 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Sep 12 17:54:06.015990 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Sep 12 17:54:06.015996 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 12 17:54:06.016002 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 12 17:54:06.016008 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 12 17:54:06.016014 systemd[1]: Reached target sockets.target - Socket Units. Sep 12 17:54:06.016020 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Sep 12 17:54:06.016026 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 12 17:54:06.016032 systemd[1]: Finished network-cleanup.service - Network Cleanup. Sep 12 17:54:06.016038 systemd[1]: Starting systemd-fsck-usr.service... Sep 12 17:54:06.016044 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 12 17:54:06.016061 systemd-journald[267]: Collecting audit messages is disabled. Sep 12 17:54:06.016075 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 12 17:54:06.016082 systemd-journald[267]: Journal started Sep 12 17:54:06.016095 systemd-journald[267]: Runtime Journal (/run/log/journal/6465932ce0234e749983575eaeccbb8d) is 8.0M, max 639.9M, 631.9M free. Sep 12 17:54:06.050212 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 17:54:06.050411 systemd-modules-load[269]: Inserted module 'overlay' Sep 12 17:54:06.080174 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Sep 12 17:54:06.080289 systemd[1]: Started systemd-journald.service - Journal Service. 
Sep 12 17:54:06.080280 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 12 17:54:06.080368 systemd[1]: Finished systemd-fsck-usr.service. Sep 12 17:54:06.081222 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 12 17:54:06.081632 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 12 17:54:06.123198 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Sep 12 17:54:06.140817 systemd-modules-load[269]: Inserted module 'br_netfilter' Sep 12 17:54:06.189517 kernel: Bridge firewalling registered Sep 12 17:54:06.141199 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 12 17:54:06.206715 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 17:54:06.227638 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 12 17:54:06.248960 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 12 17:54:06.292504 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 12 17:54:06.292941 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 12 17:54:06.293389 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 12 17:54:06.298518 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 12 17:54:06.298666 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 12 17:54:06.299766 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 12 17:54:06.318427 systemd-resolved[303]: Positive Trust Anchors: Sep 12 17:54:06.318432 systemd-resolved[303]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 12 17:54:06.318456 systemd-resolved[303]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 12 17:54:06.320020 systemd-resolved[303]: Defaulting to hostname 'linux'. Sep 12 17:54:06.320469 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 12 17:54:06.350509 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 12 17:54:06.413917 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 12 17:54:06.447477 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Sep 12 17:54:06.501308 dracut-cmdline[305]: dracut-dracut-053 Sep 12 17:54:06.508428 dracut-cmdline[305]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=1ff9ec556ac80c67ae2340139aa421bf26af13357ec9e72632b4878e9945dc9a Sep 12 17:54:06.701239 kernel: SCSI subsystem initialized Sep 12 17:54:06.725224 kernel: Loading iSCSI transport class v2.0-870. 
Sep 12 17:54:06.748231 kernel: iscsi: registered transport (tcp) Sep 12 17:54:06.780896 kernel: iscsi: registered transport (qla4xxx) Sep 12 17:54:06.780914 kernel: QLogic iSCSI HBA Driver Sep 12 17:54:06.813470 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Sep 12 17:54:06.839513 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Sep 12 17:54:06.896884 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Sep 12 17:54:06.896907 kernel: device-mapper: uevent: version 1.0.3 Sep 12 17:54:06.916528 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Sep 12 17:54:06.975229 kernel: raid6: avx2x4 gen() 53431 MB/s Sep 12 17:54:07.007268 kernel: raid6: avx2x2 gen() 53929 MB/s Sep 12 17:54:07.043614 kernel: raid6: avx2x1 gen() 45273 MB/s Sep 12 17:54:07.043631 kernel: raid6: using algorithm avx2x2 gen() 53929 MB/s Sep 12 17:54:07.090670 kernel: raid6: .... xor() 30689 MB/s, rmw enabled Sep 12 17:54:07.090687 kernel: raid6: using avx2x2 recovery algorithm Sep 12 17:54:07.131196 kernel: xor: automatically using best checksumming function avx Sep 12 17:54:07.245245 kernel: Btrfs loaded, zoned=no, fsverity=no Sep 12 17:54:07.251414 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Sep 12 17:54:07.270465 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 12 17:54:07.291737 systemd-udevd[491]: Using default interface naming scheme 'v255'. Sep 12 17:54:07.294329 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 12 17:54:07.332429 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Sep 12 17:54:07.352391 dracut-pre-trigger[504]: rd.md=0: removing MD RAID activation Sep 12 17:54:07.361573 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. 
Sep 12 17:54:07.379597 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 12 17:54:07.507862 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 12 17:54:07.533225 kernel: pps_core: LinuxPPS API ver. 1 registered Sep 12 17:54:07.533264 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Sep 12 17:54:07.542122 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Sep 12 17:54:07.556480 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 12 17:54:07.583472 kernel: PTP clock support registered Sep 12 17:54:07.583488 kernel: cryptd: max_cpu_qlen set to 1000 Sep 12 17:54:07.556514 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 12 17:54:07.608246 kernel: ACPI: bus type USB registered Sep 12 17:54:07.608264 kernel: usbcore: registered new interface driver usbfs Sep 12 17:54:07.624460 kernel: usbcore: registered new interface driver hub Sep 12 17:54:07.624437 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 12 17:54:07.728955 kernel: usbcore: registered new device driver usb Sep 12 17:54:07.728971 kernel: libata version 3.00 loaded. Sep 12 17:54:07.728979 kernel: ahci 0000:00:17.0: version 3.0 Sep 12 17:54:07.729075 kernel: AVX2 version of gcm_enc/dec engaged. Sep 12 17:54:07.729084 kernel: ahci 0000:00:17.0: AHCI 0001.0301 32 slots 8 ports 6 Gbps 0xff impl SATA mode Sep 12 17:54:07.729149 kernel: ahci 0000:00:17.0: flags: 64bit ncq sntf clo only pio slum part ems deso sadm sds apst Sep 12 17:54:07.729219 kernel: scsi host0: ahci Sep 12 17:54:07.729290 kernel: AES CTR mode by8 optimization enabled Sep 12 17:54:07.729298 kernel: scsi host1: ahci Sep 12 17:54:07.661827 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Sep 12 17:54:07.765476 kernel: scsi host2: ahci Sep 12 17:54:07.765590 kernel: scsi host3: ahci Sep 12 17:54:07.661874 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 17:54:07.834146 kernel: igb: Intel(R) Gigabit Ethernet Network Driver Sep 12 17:54:07.834162 kernel: scsi host4: ahci Sep 12 17:54:07.834283 kernel: igb: Copyright (c) 2007-2014 Intel Corporation. Sep 12 17:54:07.834294 kernel: scsi host5: ahci Sep 12 17:54:07.834362 kernel: scsi host6: ahci Sep 12 17:54:07.765309 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 17:54:08.150190 kernel: igb 0000:04:00.0: added PHC on eth0 Sep 12 17:54:08.150347 kernel: scsi host7: ahci Sep 12 17:54:08.150472 kernel: igb 0000:04:00.0: Intel(R) Gigabit Ethernet Network Connection Sep 12 17:54:08.150592 kernel: ata1: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516100 irq 128 Sep 12 17:54:08.150613 kernel: igb 0000:04:00.0: eth0: (PCIe:2.5Gb/s:Width x1) 3c:ec:ef:73:1d:d4 Sep 12 17:54:08.150729 kernel: ata2: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516180 irq 128 Sep 12 17:54:08.150738 kernel: igb 0000:04:00.0: eth0: PBA No: 010000-000 Sep 12 17:54:08.150847 kernel: ata3: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516200 irq 128 Sep 12 17:54:08.150867 kernel: igb 0000:04:00.0: Using MSI-X interrupts. 
4 rx queue(s), 4 tx queue(s) Sep 12 17:54:08.150998 kernel: ata4: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516280 irq 128 Sep 12 17:54:08.151012 kernel: igb 0000:05:00.0: added PHC on eth1 Sep 12 17:54:08.151134 kernel: ata5: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516300 irq 128 Sep 12 17:54:08.151152 kernel: igb 0000:05:00.0: Intel(R) Gigabit Ethernet Network Connection Sep 12 17:54:08.151279 kernel: ata6: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516380 irq 128 Sep 12 17:54:08.151294 kernel: igb 0000:05:00.0: eth1: (PCIe:2.5Gb/s:Width x1) 3c:ec:ef:73:1d:d5 Sep 12 17:54:08.151400 kernel: ata7: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516400 irq 128 Sep 12 17:54:08.151415 kernel: igb 0000:05:00.0: eth1: PBA No: 010000-000 Sep 12 17:54:08.151484 kernel: ata8: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516480 irq 128 Sep 12 17:54:08.151493 kernel: igb 0000:05:00.0: Using MSI-X interrupts. 4 rx queue(s), 4 tx queue(s) Sep 12 17:54:07.882297 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 17:54:08.200316 kernel: mlx5_core 0000:02:00.0: firmware version: 14.31.1014 Sep 12 17:54:08.200408 kernel: mlx5_core 0000:02:00.0: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link) Sep 12 17:54:08.184418 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Sep 12 17:54:08.200911 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Sep 12 17:54:08.200937 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 12 17:54:08.200963 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 12 17:54:08.214329 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Sep 12 17:54:08.278479 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 17:54:08.299496 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. 
Sep 12 17:54:08.353239 kernel: ata5: SATA link down (SStatus 0 SControl 300) Sep 12 17:54:08.353254 kernel: ata1: SATA link up 6.0 Gbps (SStatus 133 SControl 300) Sep 12 17:54:08.353264 kernel: ata8: SATA link down (SStatus 0 SControl 300) Sep 12 17:54:08.357197 kernel: ata3: SATA link down (SStatus 0 SControl 300) Sep 12 17:54:08.373196 kernel: ata6: SATA link down (SStatus 0 SControl 300) Sep 12 17:54:08.388195 kernel: ata2: SATA link up 6.0 Gbps (SStatus 133 SControl 300) Sep 12 17:54:08.388207 kernel: ata4: SATA link down (SStatus 0 SControl 300) Sep 12 17:54:08.418225 kernel: ata7: SATA link down (SStatus 0 SControl 300) Sep 12 17:54:08.433234 kernel: ata1.00: ATA-11: Micron_5300_MTFDDAK480TDT, D3MU001, max UDMA/133 Sep 12 17:54:08.450224 kernel: ata2.00: ATA-11: Micron_5300_MTFDDAK480TDT, D3MU001, max UDMA/133 Sep 12 17:54:08.466265 kernel: mlx5_core 0000:02:00.0: E-Switch: Total vports 10, per vport: max uc(128) max mc(2048) Sep 12 17:54:08.470303 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 12 17:54:08.534706 kernel: ata1.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA Sep 12 17:54:08.534718 kernel: mlx5_core 0000:02:00.0: Port module event: module 0, Cable plugged Sep 12 17:54:08.534804 kernel: ata2.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA Sep 12 17:54:08.545669 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Sep 12 17:54:08.585584 kernel: ata1.00: Features: NCQ-prio Sep 12 17:54:08.585596 kernel: ata2.00: Features: NCQ-prio Sep 12 17:54:08.585603 kernel: ata1.00: configured for UDMA/133 Sep 12 17:54:08.595198 kernel: ata2.00: configured for UDMA/133 Sep 12 17:54:08.595221 kernel: scsi 0:0:0:0: Direct-Access ATA Micron_5300_MTFD U001 PQ: 0 ANSI: 5 Sep 12 17:54:08.626197 kernel: scsi 1:0:0:0: Direct-Access ATA Micron_5300_MTFD U001 PQ: 0 ANSI: 5 Sep 12 17:54:08.656182 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller Sep 12 17:54:08.656310 kernel: igb 0000:04:00.0 eno1: renamed from eth0 Sep 12 17:54:08.656395 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 1 Sep 12 17:54:08.719174 kernel: xhci_hcd 0000:00:14.0: hcc params 0x200077c1 hci version 0x110 quirks 0x0000000000009810 Sep 12 17:54:08.719322 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller Sep 12 17:54:08.736724 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 2 Sep 12 17:54:08.754243 kernel: xhci_hcd 0000:00:14.0: Host supports USB 3.1 Enhanced SuperSpeed Sep 12 17:54:08.768567 kernel: hub 1-0:1.0: USB hub found Sep 12 17:54:08.768767 kernel: mlx5_core 0000:02:00.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Sep 12 17:54:08.768859 kernel: igb 0000:05:00.0 eno2: renamed from eth1 Sep 12 17:54:08.772252 kernel: mlx5_core 0000:02:00.1: firmware version: 14.31.1014 Sep 12 17:54:08.772344 kernel: mlx5_core 0000:02:00.1: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link) Sep 12 17:54:08.779197 kernel: hub 1-0:1.0: 16 ports detected Sep 12 17:54:08.881736 kernel: hub 2-0:1.0: USB hub found Sep 12 17:54:08.881827 kernel: hub 2-0:1.0: 10 ports detected Sep 12 17:54:08.892235 kernel: ata1.00: Enabling discard_zeroes_data Sep 12 17:54:08.906143 kernel: ata2.00: Enabling discard_zeroes_data Sep 12 17:54:08.906160 kernel: sd 0:0:0:0: [sda] 937703088 512-byte logical blocks: (480 GB/447 GiB) Sep 12 17:54:08.910864 kernel: sd 1:0:0:0: [sdb] 
937703088 512-byte logical blocks: (480 GB/447 GiB) Sep 12 17:54:08.925815 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Sep 12 17:54:08.925901 kernel: sd 1:0:0:0: [sdb] 4096-byte physical blocks Sep 12 17:54:08.931041 kernel: sd 0:0:0:0: [sda] Write Protect is off Sep 12 17:54:08.936276 kernel: sd 1:0:0:0: [sdb] Write Protect is off Sep 12 17:54:08.941058 kernel: sd 0:0:0:0: [sda] Mode Sense: 00 3a 00 00 Sep 12 17:54:08.945851 kernel: sd 1:0:0:0: [sdb] Mode Sense: 00 3a 00 00 Sep 12 17:54:08.955247 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Sep 12 17:54:08.964740 kernel: sd 1:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Sep 12 17:54:08.973790 kernel: sd 0:0:0:0: [sda] Preferred minimum I/O size 4096 bytes Sep 12 17:54:08.982840 kernel: sd 1:0:0:0: [sdb] Preferred minimum I/O size 4096 bytes Sep 12 17:54:09.069635 kernel: ata1.00: Enabling discard_zeroes_data Sep 12 17:54:09.069654 kernel: mlx5_core 0000:02:00.1: E-Switch: Total vports 10, per vport: max uc(128) max mc(2048) Sep 12 17:54:09.069739 kernel: ata2.00: Enabling discard_zeroes_data Sep 12 17:54:09.115550 kernel: usb 1-14: new high-speed USB device number 2 using xhci_hcd Sep 12 17:54:09.116240 kernel: mlx5_core 0000:02:00.1: Port module event: module 1, Cable plugged Sep 12 17:54:09.116333 kernel: sd 1:0:0:0: [sdb] Attached SCSI disk Sep 12 17:54:09.181874 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Sep 12 17:54:09.181891 kernel: GPT:9289727 != 937703087 Sep 12 17:54:09.196799 kernel: GPT:Alternate GPT header not at the end of the disk. Sep 12 17:54:09.209319 kernel: GPT:9289727 != 937703087 Sep 12 17:54:09.223442 kernel: GPT: Use GNU Parted to correct GPT errors. 
Sep 12 17:54:09.237282 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 12 17:54:09.260200 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Sep 12 17:54:09.260312 kernel: hub 1-14:1.0: USB hub found Sep 12 17:54:09.276745 kernel: hub 1-14:1.0: 4 ports detected Sep 12 17:54:09.293658 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Micron_5300_MTFDDAK480TDT ROOT. Sep 12 17:54:09.347340 kernel: BTRFS: device fsid 6dad227e-2c0d-42e6-b0d2-5c756384bc19 devid 1 transid 34 /dev/sda3 scanned by (udev-worker) (541) Sep 12 17:54:09.347357 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by (udev-worker) (563) Sep 12 17:54:09.327333 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Micron_5300_MTFDDAK480TDT EFI-SYSTEM. Sep 12 17:54:09.384852 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Micron_5300_MTFDDAK480TDT USR-A. Sep 12 17:54:09.396356 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Micron_5300_MTFDDAK480TDT USR-A. Sep 12 17:54:09.438666 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Micron_5300_MTFDDAK480TDT OEM. Sep 12 17:54:09.464471 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Sep 12 17:54:09.520326 kernel: mlx5_core 0000:02:00.1: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Sep 12 17:54:09.520423 kernel: ata1.00: Enabling discard_zeroes_data Sep 12 17:54:09.520432 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 12 17:54:09.520440 disk-uuid[732]: Primary Header is updated. Sep 12 17:54:09.520440 disk-uuid[732]: Secondary Entries is updated. Sep 12 17:54:09.520440 disk-uuid[732]: Secondary Header is updated. 
Sep 12 17:54:09.591468 kernel: ata1.00: Enabling discard_zeroes_data Sep 12 17:54:09.591482 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 12 17:54:09.591491 kernel: ata1.00: Enabling discard_zeroes_data Sep 12 17:54:09.591497 kernel: mlx5_core 0000:02:00.0 enp2s0f0np0: renamed from eth0 Sep 12 17:54:09.591628 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 12 17:54:09.591677 kernel: usb 1-14.1: new low-speed USB device number 3 using xhci_hcd Sep 12 17:54:09.591733 kernel: mlx5_core 0000:02:00.1 enp2s0f1np1: renamed from eth1 Sep 12 17:54:09.702199 kernel: hid: raw HID events driver (C) Jiri Kosina Sep 12 17:54:09.724027 kernel: usbcore: registered new interface driver usbhid Sep 12 17:54:09.724063 kernel: usbhid: USB HID core driver Sep 12 17:54:09.767379 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.0/0003:0557:2419.0001/input/input0 Sep 12 17:54:09.871454 kernel: hid-generic 0003:0557:2419.0001: input,hidraw0: USB HID v1.00 Keyboard [HID 0557:2419] on usb-0000:00:14.0-14.1/input0 Sep 12 17:54:09.871603 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.1/0003:0557:2419.0002/input/input1 Sep 12 17:54:09.904868 kernel: hid-generic 0003:0557:2419.0002: input,hidraw1: USB HID v1.00 Mouse [HID 0557:2419] on usb-0000:00:14.0-14.1/input1 Sep 12 17:54:10.575711 kernel: ata1.00: Enabling discard_zeroes_data Sep 12 17:54:10.595198 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 12 17:54:10.595727 disk-uuid[733]: The operation has completed successfully. Sep 12 17:54:10.638009 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 12 17:54:10.638073 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Sep 12 17:54:10.693413 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... 
Sep 12 17:54:10.723306 sh[754]: Success Sep 12 17:54:10.733299 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Sep 12 17:54:10.781060 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Sep 12 17:54:10.792211 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Sep 12 17:54:10.800184 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Sep 12 17:54:10.853068 kernel: BTRFS info (device dm-0): first mount of filesystem 6dad227e-2c0d-42e6-b0d2-5c756384bc19 Sep 12 17:54:10.853089 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Sep 12 17:54:10.874798 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Sep 12 17:54:10.894256 kernel: BTRFS info (device dm-0): disabling log replay at mount time Sep 12 17:54:10.912560 kernel: BTRFS info (device dm-0): using free space tree Sep 12 17:54:10.952230 kernel: BTRFS info (device dm-0): enabling ssd optimizations Sep 12 17:54:10.954833 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Sep 12 17:54:10.964622 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Sep 12 17:54:10.970310 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Sep 12 17:54:11.087311 kernel: BTRFS info (device sda6): first mount of filesystem 4080f51d-d3f2-4545-8f59-3798077218dc Sep 12 17:54:11.087324 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Sep 12 17:54:11.087406 kernel: BTRFS info (device sda6): using free space tree Sep 12 17:54:11.087414 kernel: BTRFS info (device sda6): enabling ssd optimizations Sep 12 17:54:11.087424 kernel: BTRFS info (device sda6): auto enabling async discard Sep 12 17:54:11.074627 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Sep 12 17:54:11.123345 kernel: BTRFS info (device sda6): last unmount of filesystem 4080f51d-d3f2-4545-8f59-3798077218dc Sep 12 17:54:11.113132 systemd[1]: Finished ignition-setup.service - Ignition (setup). Sep 12 17:54:11.133429 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 12 17:54:11.160466 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Sep 12 17:54:11.176039 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 12 17:54:11.220359 systemd-networkd[936]: lo: Link UP Sep 12 17:54:11.220362 systemd-networkd[936]: lo: Gained carrier Sep 12 17:54:11.222791 systemd-networkd[936]: Enumeration completed Sep 12 17:54:11.233722 ignition[935]: Ignition 2.19.0 Sep 12 17:54:11.222874 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 12 17:54:11.233726 ignition[935]: Stage: fetch-offline Sep 12 17:54:11.223490 systemd-networkd[936]: eno1: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 12 17:54:11.233749 ignition[935]: no configs at "/usr/lib/ignition/base.d" Sep 12 17:54:11.227505 systemd[1]: Reached target network.target - Network. Sep 12 17:54:11.233755 ignition[935]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Sep 12 17:54:11.235755 unknown[935]: fetched base config from "system" Sep 12 17:54:11.233810 ignition[935]: parsed url from cmdline: "" Sep 12 17:54:11.235760 unknown[935]: fetched user config from "system" Sep 12 17:54:11.233811 ignition[935]: no config URL provided Sep 12 17:54:11.251135 systemd-networkd[936]: eno2: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 12 17:54:11.233814 ignition[935]: reading system config file "/usr/lib/ignition/user.ign" Sep 12 17:54:11.257530 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). 
Sep 12 17:54:11.233837 ignition[935]: parsing config with SHA512: 345fbfc4f08650a3230a1b9335d73de33178afd80340f9e47a8c758f53ea8201b48485ea617fcc689b4a0b6c31c2bb3dad5fd699c03d58d4f0b52d90c9515c6e Sep 12 17:54:11.279251 systemd-networkd[936]: enp2s0f0np0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 12 17:54:11.235974 ignition[935]: fetch-offline: fetch-offline passed Sep 12 17:54:11.282683 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Sep 12 17:54:11.235976 ignition[935]: POST message to Packet Timeline Sep 12 17:54:11.292368 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Sep 12 17:54:11.235978 ignition[935]: POST Status error: resource requires networking Sep 12 17:54:11.236014 ignition[935]: Ignition finished successfully Sep 12 17:54:11.340982 ignition[950]: Ignition 2.19.0 Sep 12 17:54:11.341000 ignition[950]: Stage: kargs Sep 12 17:54:11.341526 ignition[950]: no configs at "/usr/lib/ignition/base.d" Sep 12 17:54:11.341568 ignition[950]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Sep 12 17:54:11.506355 kernel: mlx5_core 0000:02:00.0 enp2s0f0np0: Link up Sep 12 17:54:11.500202 systemd-networkd[936]: enp2s0f1np1: Configuring with /usr/lib/systemd/network/zz-default.network. 
Sep 12 17:54:11.344404 ignition[950]: kargs: kargs passed Sep 12 17:54:11.344417 ignition[950]: POST message to Packet Timeline Sep 12 17:54:11.344452 ignition[950]: GET https://metadata.packet.net/metadata: attempt #1 Sep 12 17:54:11.346308 ignition[950]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:46537->[::1]:53: read: connection refused Sep 12 17:54:11.546783 ignition[950]: GET https://metadata.packet.net/metadata: attempt #2 Sep 12 17:54:11.547813 ignition[950]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:48313->[::1]:53: read: connection refused Sep 12 17:54:11.738320 kernel: mlx5_core 0000:02:00.1 enp2s0f1np1: Link up Sep 12 17:54:11.741127 systemd-networkd[936]: eno1: Link UP Sep 12 17:54:11.741299 systemd-networkd[936]: eno2: Link UP Sep 12 17:54:11.741415 systemd-networkd[936]: enp2s0f0np0: Link UP Sep 12 17:54:11.741549 systemd-networkd[936]: enp2s0f0np0: Gained carrier Sep 12 17:54:11.756431 systemd-networkd[936]: enp2s0f1np1: Link UP Sep 12 17:54:11.782378 systemd-networkd[936]: enp2s0f0np0: DHCPv4 address 139.178.94.21/31, gateway 139.178.94.20 acquired from 145.40.83.140 Sep 12 17:54:11.948279 ignition[950]: GET https://metadata.packet.net/metadata: attempt #3 Sep 12 17:54:11.949321 ignition[950]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:35276->[::1]:53: read: connection refused Sep 12 17:54:12.551818 systemd-networkd[936]: enp2s0f1np1: Gained carrier Sep 12 17:54:12.749749 ignition[950]: GET https://metadata.packet.net/metadata: attempt #4 Sep 12 17:54:12.750791 ignition[950]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:37193->[::1]:53: read: connection refused Sep 12 17:54:13.319681 systemd-networkd[936]: enp2s0f0np0: Gained IPv6LL Sep 12 
17:54:14.279727 systemd-networkd[936]: enp2s0f1np1: Gained IPv6LL Sep 12 17:54:14.352462 ignition[950]: GET https://metadata.packet.net/metadata: attempt #5 Sep 12 17:54:14.353946 ignition[950]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:60894->[::1]:53: read: connection refused Sep 12 17:54:17.554277 ignition[950]: GET https://metadata.packet.net/metadata: attempt #6 Sep 12 17:54:18.691722 ignition[950]: GET result: OK Sep 12 17:54:19.176384 ignition[950]: Ignition finished successfully Sep 12 17:54:19.182229 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Sep 12 17:54:19.206509 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Sep 12 17:54:19.212818 ignition[968]: Ignition 2.19.0 Sep 12 17:54:19.212822 ignition[968]: Stage: disks Sep 12 17:54:19.212934 ignition[968]: no configs at "/usr/lib/ignition/base.d" Sep 12 17:54:19.212941 ignition[968]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Sep 12 17:54:19.213467 ignition[968]: disks: disks passed Sep 12 17:54:19.213469 ignition[968]: POST message to Packet Timeline Sep 12 17:54:19.213478 ignition[968]: GET https://metadata.packet.net/metadata: attempt #1 Sep 12 17:54:20.364113 ignition[968]: GET result: OK Sep 12 17:54:20.950312 ignition[968]: Ignition finished successfully Sep 12 17:54:20.953678 systemd[1]: Finished ignition-disks.service - Ignition (disks). Sep 12 17:54:20.969463 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Sep 12 17:54:20.987481 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Sep 12 17:54:21.008471 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 12 17:54:21.029542 systemd[1]: Reached target sysinit.target - System Initialization. Sep 12 17:54:21.049536 systemd[1]: Reached target basic.target - Basic System. 
Sep 12 17:54:21.082464 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Sep 12 17:54:21.108124 systemd-fsck[984]: ROOT: clean, 14/553520 files, 52654/553472 blocks Sep 12 17:54:21.118581 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Sep 12 17:54:21.127373 systemd[1]: Mounting sysroot.mount - /sysroot... Sep 12 17:54:21.265261 kernel: EXT4-fs (sda9): mounted filesystem 791ad691-63ae-4dbc-8ce3-6c8819e56736 r/w with ordered data mode. Quota mode: none. Sep 12 17:54:21.265715 systemd[1]: Mounted sysroot.mount - /sysroot. Sep 12 17:54:21.275654 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Sep 12 17:54:21.311369 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 12 17:54:21.320747 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Sep 12 17:54:21.445671 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by mount (993) Sep 12 17:54:21.445691 kernel: BTRFS info (device sda6): first mount of filesystem 4080f51d-d3f2-4545-8f59-3798077218dc Sep 12 17:54:21.445705 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Sep 12 17:54:21.445718 kernel: BTRFS info (device sda6): using free space tree Sep 12 17:54:21.445731 kernel: BTRFS info (device sda6): enabling ssd optimizations Sep 12 17:54:21.445747 kernel: BTRFS info (device sda6): auto enabling async discard Sep 12 17:54:21.364827 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Sep 12 17:54:21.446082 systemd[1]: Starting flatcar-static-network.service - Flatcar Static Network Agent... Sep 12 17:54:21.483275 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 12 17:54:21.483303 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. 
Sep 12 17:54:21.543406 coreos-metadata[995]: Sep 12 17:54:21.526 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Sep 12 17:54:21.507199 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 12 17:54:21.584345 coreos-metadata[1011]: Sep 12 17:54:21.526 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Sep 12 17:54:21.533588 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Sep 12 17:54:21.563425 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Sep 12 17:54:21.618291 initrd-setup-root[1025]: cut: /sysroot/etc/passwd: No such file or directory Sep 12 17:54:21.628274 initrd-setup-root[1032]: cut: /sysroot/etc/group: No such file or directory Sep 12 17:54:21.639308 initrd-setup-root[1039]: cut: /sysroot/etc/shadow: No such file or directory Sep 12 17:54:21.649304 initrd-setup-root[1046]: cut: /sysroot/etc/gshadow: No such file or directory Sep 12 17:54:21.660662 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Sep 12 17:54:21.682399 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Sep 12 17:54:21.724406 kernel: BTRFS info (device sda6): last unmount of filesystem 4080f51d-d3f2-4545-8f59-3798077218dc Sep 12 17:54:21.702374 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Sep 12 17:54:21.734014 systemd[1]: sysroot-oem.mount: Deactivated successfully. Sep 12 17:54:21.756712 ignition[1115]: INFO : Ignition 2.19.0 Sep 12 17:54:21.756712 ignition[1115]: INFO : Stage: mount Sep 12 17:54:21.761280 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Sep 12 17:54:21.790377 ignition[1115]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 12 17:54:21.790377 ignition[1115]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Sep 12 17:54:21.790377 ignition[1115]: INFO : mount: mount passed Sep 12 17:54:21.790377 ignition[1115]: INFO : POST message to Packet Timeline Sep 12 17:54:21.790377 ignition[1115]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Sep 12 17:54:22.475682 coreos-metadata[995]: Sep 12 17:54:22.475 INFO Fetch successful Sep 12 17:54:22.552470 coreos-metadata[1011]: Sep 12 17:54:22.552 INFO Fetch successful Sep 12 17:54:22.562312 coreos-metadata[995]: Sep 12 17:54:22.555 INFO wrote hostname ci-4081.3.6-a-7e79e463ed to /sysroot/etc/hostname Sep 12 17:54:22.557579 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Sep 12 17:54:22.584465 systemd[1]: flatcar-static-network.service: Deactivated successfully. Sep 12 17:54:22.584510 systemd[1]: Finished flatcar-static-network.service - Flatcar Static Network Agent. Sep 12 17:54:23.342586 ignition[1115]: INFO : GET result: OK Sep 12 17:54:23.740087 ignition[1115]: INFO : Ignition finished successfully Sep 12 17:54:23.743119 systemd[1]: Finished ignition-mount.service - Ignition (mount). Sep 12 17:54:23.768319 systemd[1]: Starting ignition-files.service - Ignition (files)... Sep 12 17:54:23.782684 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Sep 12 17:54:23.859311 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (1141) Sep 12 17:54:23.859339 kernel: BTRFS info (device sda6): first mount of filesystem 4080f51d-d3f2-4545-8f59-3798077218dc Sep 12 17:54:23.879819 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Sep 12 17:54:23.898145 kernel: BTRFS info (device sda6): using free space tree Sep 12 17:54:23.938385 kernel: BTRFS info (device sda6): enabling ssd optimizations Sep 12 17:54:23.938432 kernel: BTRFS info (device sda6): auto enabling async discard Sep 12 17:54:23.953033 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 12 17:54:23.978895 ignition[1158]: INFO : Ignition 2.19.0 Sep 12 17:54:23.978895 ignition[1158]: INFO : Stage: files Sep 12 17:54:23.994419 ignition[1158]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 12 17:54:23.994419 ignition[1158]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Sep 12 17:54:23.994419 ignition[1158]: DEBUG : files: compiled without relabeling support, skipping Sep 12 17:54:23.994419 ignition[1158]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 12 17:54:23.994419 ignition[1158]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 12 17:54:23.994419 ignition[1158]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 12 17:54:23.994419 ignition[1158]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 12 17:54:23.994419 ignition[1158]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 12 17:54:23.994419 ignition[1158]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Sep 12 17:54:23.994419 ignition[1158]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Sep 12 
17:54:23.982061 unknown[1158]: wrote ssh authorized keys file for user: core Sep 12 17:54:24.185955 ignition[1158]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Sep 12 17:54:24.347296 ignition[1158]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Sep 12 17:54:24.364384 ignition[1158]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Sep 12 17:54:24.364384 ignition[1158]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Sep 12 17:54:24.364384 ignition[1158]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Sep 12 17:54:24.364384 ignition[1158]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Sep 12 17:54:24.364384 ignition[1158]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 12 17:54:24.364384 ignition[1158]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 12 17:54:24.364384 ignition[1158]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 12 17:54:24.364384 ignition[1158]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 12 17:54:24.364384 ignition[1158]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 12 17:54:24.364384 ignition[1158]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 12 17:54:24.364384 ignition[1158]: INFO : files: createFilesystemsFiles: createFiles: op(9): 
[started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 12 17:54:24.364384 ignition[1158]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 12 17:54:24.364384 ignition[1158]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 12 17:54:24.364384 ignition[1158]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1 Sep 12 17:54:26.244709 ignition[1158]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Sep 12 17:54:26.551906 ignition[1158]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 12 17:54:26.551906 ignition[1158]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Sep 12 17:54:26.584414 ignition[1158]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 12 17:54:26.584414 ignition[1158]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 12 17:54:26.584414 ignition[1158]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Sep 12 17:54:26.584414 ignition[1158]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Sep 12 17:54:26.584414 ignition[1158]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Sep 12 17:54:26.584414 ignition[1158]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 12 17:54:26.584414 
ignition[1158]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 12 17:54:26.584414 ignition[1158]: INFO : files: files passed Sep 12 17:54:26.584414 ignition[1158]: INFO : POST message to Packet Timeline Sep 12 17:54:26.584414 ignition[1158]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Sep 12 17:54:28.150390 ignition[1158]: INFO : GET result: OK Sep 12 17:54:28.907685 ignition[1158]: INFO : Ignition finished successfully Sep 12 17:54:28.910723 systemd[1]: Finished ignition-files.service - Ignition (files). Sep 12 17:54:28.941462 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Sep 12 17:54:28.941944 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Sep 12 17:54:28.970770 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 12 17:54:28.970852 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Sep 12 17:54:29.001039 initrd-setup-root-after-ignition[1197]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 12 17:54:29.001039 initrd-setup-root-after-ignition[1197]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Sep 12 17:54:29.047422 initrd-setup-root-after-ignition[1201]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 12 17:54:29.008270 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 12 17:54:29.035607 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Sep 12 17:54:29.071450 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Sep 12 17:54:29.113780 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 12 17:54:29.113894 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. 
Sep 12 17:54:29.134327 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Sep 12 17:54:29.154474 systemd[1]: Reached target initrd.target - Initrd Default Target. Sep 12 17:54:29.174671 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Sep 12 17:54:29.192438 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Sep 12 17:54:29.241664 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 12 17:54:29.271653 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Sep 12 17:54:29.301256 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Sep 12 17:54:29.312677 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 12 17:54:29.333886 systemd[1]: Stopped target timers.target - Timer Units. Sep 12 17:54:29.352943 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 12 17:54:29.353389 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 12 17:54:29.391699 systemd[1]: Stopped target initrd.target - Initrd Default Target. Sep 12 17:54:29.401822 systemd[1]: Stopped target basic.target - Basic System. Sep 12 17:54:29.421963 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Sep 12 17:54:29.441942 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Sep 12 17:54:29.462817 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Sep 12 17:54:29.483818 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Sep 12 17:54:29.503959 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Sep 12 17:54:29.525983 systemd[1]: Stopped target sysinit.target - System Initialization. Sep 12 17:54:29.547832 systemd[1]: Stopped target local-fs.target - Local File Systems. 
Sep 12 17:54:29.567953 systemd[1]: Stopped target swap.target - Swaps. Sep 12 17:54:29.585668 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 12 17:54:29.586069 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Sep 12 17:54:29.611930 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Sep 12 17:54:29.631974 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 12 17:54:29.653684 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Sep 12 17:54:29.654149 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 12 17:54:29.676835 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 12 17:54:29.677261 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Sep 12 17:54:29.708786 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 12 17:54:29.709266 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Sep 12 17:54:29.729010 systemd[1]: Stopped target paths.target - Path Units. Sep 12 17:54:29.748677 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 12 17:54:29.749185 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 12 17:54:29.769826 systemd[1]: Stopped target slices.target - Slice Units. Sep 12 17:54:29.788946 systemd[1]: Stopped target sockets.target - Socket Units. Sep 12 17:54:29.807933 systemd[1]: iscsid.socket: Deactivated successfully. Sep 12 17:54:29.808271 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Sep 12 17:54:29.827974 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 12 17:54:29.828308 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 12 17:54:29.850858 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. 
Sep 12 17:54:29.851281 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 12 17:54:29.980413 ignition[1223]: INFO : Ignition 2.19.0 Sep 12 17:54:29.980413 ignition[1223]: INFO : Stage: umount Sep 12 17:54:29.980413 ignition[1223]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 12 17:54:29.980413 ignition[1223]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Sep 12 17:54:29.980413 ignition[1223]: INFO : umount: umount passed Sep 12 17:54:29.980413 ignition[1223]: INFO : POST message to Packet Timeline Sep 12 17:54:29.980413 ignition[1223]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Sep 12 17:54:29.870893 systemd[1]: ignition-files.service: Deactivated successfully. Sep 12 17:54:29.871297 systemd[1]: Stopped ignition-files.service - Ignition (files). Sep 12 17:54:29.888856 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Sep 12 17:54:29.889268 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Sep 12 17:54:29.921321 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Sep 12 17:54:29.931267 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 12 17:54:29.931466 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Sep 12 17:54:29.957393 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Sep 12 17:54:29.969317 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 12 17:54:29.969560 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Sep 12 17:54:29.991608 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 12 17:54:29.991706 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Sep 12 17:54:30.016388 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 12 17:54:30.017110 systemd[1]: sysroot-boot.service: Deactivated successfully. 
Sep 12 17:54:30.017232 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Sep 12 17:54:30.052480 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 12 17:54:30.052838 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Sep 12 17:54:31.398053 ignition[1223]: INFO : GET result: OK Sep 12 17:54:32.365256 ignition[1223]: INFO : Ignition finished successfully Sep 12 17:54:32.368524 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 12 17:54:32.368813 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Sep 12 17:54:32.385618 systemd[1]: Stopped target network.target - Network. Sep 12 17:54:32.400452 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 12 17:54:32.400632 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Sep 12 17:54:32.418545 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 12 17:54:32.418683 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Sep 12 17:54:32.436750 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 12 17:54:32.436912 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Sep 12 17:54:32.455748 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Sep 12 17:54:32.455923 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Sep 12 17:54:32.475566 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 12 17:54:32.475737 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Sep 12 17:54:32.493999 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Sep 12 17:54:32.504350 systemd-networkd[936]: enp2s0f0np0: DHCPv6 lease lost Sep 12 17:54:32.512440 systemd-networkd[936]: enp2s0f1np1: DHCPv6 lease lost Sep 12 17:54:32.513669 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Sep 12 17:54:32.533470 systemd[1]: systemd-resolved.service: Deactivated successfully. 
Sep 12 17:54:32.533779 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Sep 12 17:54:32.552378 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 12 17:54:32.552724 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Sep 12 17:54:32.574042 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 12 17:54:32.574169 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Sep 12 17:54:32.606352 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Sep 12 17:54:32.632338 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 12 17:54:32.632381 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 12 17:54:32.652539 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 12 17:54:32.652644 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 12 17:54:32.671615 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 12 17:54:32.671785 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Sep 12 17:54:32.692594 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Sep 12 17:54:32.692762 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 12 17:54:32.712818 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 12 17:54:32.735672 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 12 17:54:32.736051 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 12 17:54:32.769254 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 12 17:54:32.769406 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Sep 12 17:54:32.775725 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. 
Sep 12 17:54:32.775837 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Sep 12 17:54:32.803454 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 12 17:54:32.803598 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Sep 12 17:54:32.847486 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 12 17:54:32.847680 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Sep 12 17:54:32.875727 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 12 17:54:32.875904 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 12 17:54:32.933328 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Sep 12 17:54:32.947404 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 12 17:54:32.947444 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 12 17:54:32.977387 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 12 17:54:33.190450 systemd-journald[267]: Received SIGTERM from PID 1 (systemd). Sep 12 17:54:32.977468 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 17:54:32.999582 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 12 17:54:32.999829 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Sep 12 17:54:33.069833 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 12 17:54:33.070090 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Sep 12 17:54:33.088764 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Sep 12 17:54:33.121528 systemd[1]: Starting initrd-switch-root.service - Switch Root... Sep 12 17:54:33.142384 systemd[1]: Switching root. 
Sep 12 17:54:33.264412 systemd-journald[267]: Journal stopped
Sep 12 17:54:36.042929 kernel: SELinux: policy capability network_peer_controls=1
Sep 12 17:54:36.042944 kernel: SELinux: policy capability open_perms=1
Sep 12 17:54:36.042951 kernel: SELinux: policy capability extended_socket_class=1
Sep 12 17:54:36.042957 kernel: SELinux: policy capability always_check_network=0
Sep 12 17:54:36.042963 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 12 17:54:36.042968 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 12 17:54:36.042974 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 12 17:54:36.042979 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 12 17:54:36.042985 kernel: audit: type=1403 audit(1757699673.650:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 12 17:54:36.042991 systemd[1]: Successfully loaded SELinux policy in 169.398ms.
Sep 12 17:54:36.042999 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 7.253ms.
Sep 12 17:54:36.043006 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Sep 12 17:54:36.043012 systemd[1]: Detected architecture x86-64.
Sep 12 17:54:36.043018 systemd[1]: Detected first boot.
Sep 12 17:54:36.043025 systemd[1]: Hostname set to .
Sep 12 17:54:36.043032 systemd[1]: Initializing machine ID from random generator.
Sep 12 17:54:36.043039 zram_generator::config[1273]: No configuration found.
Sep 12 17:54:36.043046 systemd[1]: Populated /etc with preset unit settings.
Sep 12 17:54:36.043052 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Sep 12 17:54:36.043058 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Sep 12 17:54:36.043065 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Sep 12 17:54:36.043071 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Sep 12 17:54:36.043078 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Sep 12 17:54:36.043085 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Sep 12 17:54:36.043091 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Sep 12 17:54:36.043098 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Sep 12 17:54:36.043104 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Sep 12 17:54:36.043111 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Sep 12 17:54:36.043118 systemd[1]: Created slice user.slice - User and Session Slice.
Sep 12 17:54:36.043125 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 12 17:54:36.043132 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 12 17:54:36.043139 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Sep 12 17:54:36.043145 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Sep 12 17:54:36.043152 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Sep 12 17:54:36.043158 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 12 17:54:36.043165 systemd[1]: Expecting device dev-ttyS1.device - /dev/ttyS1...
Sep 12 17:54:36.043171 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 12 17:54:36.043181 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Sep 12 17:54:36.043187 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Sep 12 17:54:36.043205 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Sep 12 17:54:36.043214 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Sep 12 17:54:36.043221 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 12 17:54:36.043228 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 12 17:54:36.043234 systemd[1]: Reached target slices.target - Slice Units.
Sep 12 17:54:36.043242 systemd[1]: Reached target swap.target - Swaps.
Sep 12 17:54:36.043249 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Sep 12 17:54:36.043255 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Sep 12 17:54:36.043262 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 12 17:54:36.043269 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 12 17:54:36.043275 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 12 17:54:36.043283 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Sep 12 17:54:36.043290 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Sep 12 17:54:36.043297 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Sep 12 17:54:36.043304 systemd[1]: Mounting media.mount - External Media Directory...
Sep 12 17:54:36.043310 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 12 17:54:36.043317 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Sep 12 17:54:36.043324 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Sep 12 17:54:36.043332 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Sep 12 17:54:36.043339 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 12 17:54:36.043346 systemd[1]: Reached target machines.target - Containers.
Sep 12 17:54:36.043353 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Sep 12 17:54:36.043359 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 12 17:54:36.043366 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 12 17:54:36.043373 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Sep 12 17:54:36.043380 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 12 17:54:36.043387 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 12 17:54:36.043394 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 12 17:54:36.043401 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Sep 12 17:54:36.043408 kernel: ACPI: bus type drm_connector registered
Sep 12 17:54:36.043414 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 12 17:54:36.043421 kernel: fuse: init (API version 7.39)
Sep 12 17:54:36.043427 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 12 17:54:36.043434 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Sep 12 17:54:36.043441 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Sep 12 17:54:36.043448 kernel: loop: module loaded
Sep 12 17:54:36.043455 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Sep 12 17:54:36.043462 systemd[1]: Stopped systemd-fsck-usr.service.
Sep 12 17:54:36.043468 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 12 17:54:36.043483 systemd-journald[1378]: Collecting audit messages is disabled.
Sep 12 17:54:36.043501 systemd-journald[1378]: Journal started
Sep 12 17:54:36.043515 systemd-journald[1378]: Runtime Journal (/run/log/journal/15e54400d94f42888a9d30a877382931) is 8.0M, max 639.9M, 631.9M free.
Sep 12 17:54:34.205821 systemd[1]: Queued start job for default target multi-user.target.
Sep 12 17:54:34.230636 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Sep 12 17:54:34.230923 systemd[1]: systemd-journald.service: Deactivated successfully.
Sep 12 17:54:36.071235 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 12 17:54:36.105271 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 12 17:54:36.148380 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Sep 12 17:54:36.196247 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 12 17:54:36.230551 systemd[1]: verity-setup.service: Deactivated successfully.
Sep 12 17:54:36.230572 systemd[1]: Stopped verity-setup.service.
Sep 12 17:54:36.294240 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 12 17:54:36.315383 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 12 17:54:36.325766 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Sep 12 17:54:36.336460 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Sep 12 17:54:36.346456 systemd[1]: Mounted media.mount - External Media Directory.
Sep 12 17:54:36.357437 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Sep 12 17:54:36.368446 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Sep 12 17:54:36.378437 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Sep 12 17:54:36.389593 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Sep 12 17:54:36.401025 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 12 17:54:36.413040 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 12 17:54:36.413434 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Sep 12 17:54:36.425049 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 12 17:54:36.425443 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 12 17:54:36.437065 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 12 17:54:36.437463 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 12 17:54:36.448064 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 12 17:54:36.448449 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 12 17:54:36.460073 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 12 17:54:36.460469 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Sep 12 17:54:36.472043 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 12 17:54:36.472431 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 12 17:54:36.484077 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 12 17:54:36.495055 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 12 17:54:36.507337 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Sep 12 17:54:36.519105 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 12 17:54:36.554329 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 12 17:54:36.578648 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Sep 12 17:54:36.591420 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Sep 12 17:54:36.601459 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 12 17:54:36.601560 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 12 17:54:36.614186 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Sep 12 17:54:36.639645 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Sep 12 17:54:36.653156 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Sep 12 17:54:36.663679 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 12 17:54:36.667191 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Sep 12 17:54:36.677887 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Sep 12 17:54:36.688310 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 12 17:54:36.689018 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Sep 12 17:54:36.692642 systemd-journald[1378]: Time spent on flushing to /var/log/journal/15e54400d94f42888a9d30a877382931 is 14.059ms for 1378 entries.
Sep 12 17:54:36.692642 systemd-journald[1378]: System Journal (/var/log/journal/15e54400d94f42888a9d30a877382931) is 8.0M, max 195.6M, 187.6M free.
Sep 12 17:54:36.730534 systemd-journald[1378]: Received client request to flush runtime journal.
Sep 12 17:54:36.707427 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 12 17:54:36.709117 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 12 17:54:36.715511 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Sep 12 17:54:36.725150 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Sep 12 17:54:36.736152 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Sep 12 17:54:36.762200 kernel: loop0: detected capacity change from 0 to 221472
Sep 12 17:54:36.762665 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Sep 12 17:54:36.789442 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Sep 12 17:54:36.797226 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Sep 12 17:54:36.808435 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Sep 12 17:54:36.819423 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Sep 12 17:54:36.830421 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Sep 12 17:54:36.847431 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 12 17:54:36.858255 kernel: loop1: detected capacity change from 0 to 8
Sep 12 17:54:36.867490 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Sep 12 17:54:36.880328 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Sep 12 17:54:36.911236 kernel: loop2: detected capacity change from 0 to 142488
Sep 12 17:54:36.917450 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Sep 12 17:54:36.929124 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 12 17:54:36.940819 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Sep 12 17:54:36.941256 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Sep 12 17:54:36.952775 udevadm[1414]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Sep 12 17:54:36.961336 systemd-tmpfiles[1427]: ACLs are not supported, ignoring.
Sep 12 17:54:36.961346 systemd-tmpfiles[1427]: ACLs are not supported, ignoring.
Sep 12 17:54:36.963892 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 12 17:54:36.991274 kernel: loop3: detected capacity change from 0 to 140768
Sep 12 17:54:37.048059 ldconfig[1404]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Sep 12 17:54:37.049327 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Sep 12 17:54:37.067260 kernel: loop4: detected capacity change from 0 to 221472
Sep 12 17:54:37.129238 kernel: loop5: detected capacity change from 0 to 8
Sep 12 17:54:37.131265 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Sep 12 17:54:37.148197 kernel: loop6: detected capacity change from 0 to 142488
Sep 12 17:54:37.181246 kernel: loop7: detected capacity change from 0 to 140768
Sep 12 17:54:37.182312 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 12 17:54:37.195891 systemd-udevd[1436]: Using default interface naming scheme 'v255'.
Sep 12 17:54:37.196055 (sd-merge)[1433]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-packet'.
Sep 12 17:54:37.196329 (sd-merge)[1433]: Merged extensions into '/usr'.
Sep 12 17:54:37.198707 systemd[1]: Reloading requested from client PID 1409 ('systemd-sysext') (unit systemd-sysext.service)...
Sep 12 17:54:37.198713 systemd[1]: Reloading...
Sep 12 17:54:37.232205 zram_generator::config[1468]: No configuration found.
Sep 12 17:54:37.254296 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input2
Sep 12 17:54:37.254364 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 34 scanned by (udev-worker) (1511)
Sep 12 17:54:37.264211 kernel: ACPI: button: Sleep Button [SLPB]
Sep 12 17:54:37.293259 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Sep 12 17:54:37.329205 kernel: ACPI: button: Power Button [PWRF]
Sep 12 17:54:37.334198 kernel: IPMI message handler: version 39.2
Sep 12 17:54:37.334228 kernel: mousedev: PS/2 mouse device common for all mice
Sep 12 17:54:37.364200 kernel: ipmi device interface
Sep 12 17:54:37.384233 kernel: mei_me 0000:00:16.0: Device doesn't have valid ME Interface
Sep 12 17:54:37.384520 kernel: mei_me 0000:00:16.4: Device doesn't have valid ME Interface
Sep 12 17:54:37.388670 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 12 17:54:37.419279 kernel: i801_smbus 0000:00:1f.4: SPD Write Disable is set
Sep 12 17:54:37.441739 kernel: i801_smbus 0000:00:1f.4: SMBus using PCI interrupt
Sep 12 17:54:37.444086 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Micron_5300_MTFDDAK480TDT OEM.
Sep 12 17:54:37.469333 systemd[1]: Condition check resulted in dev-ttyS1.device - /dev/ttyS1 being skipped.
Sep 12 17:54:37.469606 systemd[1]: Reloading finished in 270 ms.
Sep 12 17:54:37.471198 kernel: i2c i2c-0: 2/4 memory slots populated (from DMI)
Sep 12 17:54:37.471287 kernel: ipmi_si: IPMI System Interface driver
Sep 12 17:54:37.492678 kernel: ipmi_si dmi-ipmi-si.0: ipmi_platform: probing via SMBIOS
Sep 12 17:54:37.510736 kernel: ipmi_platform: ipmi_si: SMBIOS: io 0xca2 regsize 1 spacing 1 irq 0
Sep 12 17:54:37.527468 kernel: ipmi_si: Adding SMBIOS-specified kcs state machine
Sep 12 17:54:37.538255 kernel: ipmi_si IPI0001:00: ipmi_platform: probing via ACPI
Sep 12 17:54:37.562829 kernel: ipmi_si IPI0001:00: ipmi_platform: [io 0x0ca2] regsize 1 spacing 1 irq 0
Sep 12 17:54:37.582292 kernel: ipmi_si dmi-ipmi-si.0: Removing SMBIOS-specified kcs state machine in favor of ACPI
Sep 12 17:54:37.598415 kernel: ipmi_si: Adding ACPI-specified kcs state machine
Sep 12 17:54:37.618599 kernel: ipmi_si: Trying ACPI-specified kcs state machine at i/o address 0xca2, slave address 0x20, irq 0
Sep 12 17:54:37.659255 kernel: iTCO_vendor_support: vendor-support=0
Sep 12 17:54:37.659281 kernel: ipmi_si IPI0001:00: The BMC does not support clearing the recv irq bit, compensating, but the BMC needs to be fixed.
Sep 12 17:54:37.718200 kernel: iTCO_wdt iTCO_wdt: unable to reset NO_REBOOT flag, device disabled by hardware/BIOS
Sep 12 17:54:37.718347 kernel: ipmi_si IPI0001:00: IPMI message handler: Found new BMC (man_id: 0x002a7c, prod_id: 0x1b11, dev_id: 0x20)
Sep 12 17:54:37.746598 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 12 17:54:37.764445 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Sep 12 17:54:37.769196 kernel: intel_rapl_common: Found RAPL domain package
Sep 12 17:54:37.769222 kernel: intel_rapl_common: Found RAPL domain core
Sep 12 17:54:37.769241 kernel: intel_rapl_common: Found RAPL domain dram
Sep 12 17:54:37.817227 kernel: ipmi_si IPI0001:00: IPMI kcs interface initialized
Sep 12 17:54:37.834198 kernel: ipmi_ssif: IPMI SSIF Interface driver
Sep 12 17:54:37.853391 systemd[1]: Starting ensure-sysext.service...
Sep 12 17:54:37.860834 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Sep 12 17:54:37.872128 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 12 17:54:37.881910 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 12 17:54:37.882573 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 12 17:54:37.882840 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Sep 12 17:54:37.884722 systemd[1]: Reloading requested from client PID 1614 ('systemctl') (unit ensure-sysext.service)...
Sep 12 17:54:37.884729 systemd[1]: Reloading...
Sep 12 17:54:37.920200 zram_generator::config[1646]: No configuration found.
Sep 12 17:54:37.938291 systemd-tmpfiles[1619]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Sep 12 17:54:37.938626 systemd-tmpfiles[1619]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Sep 12 17:54:37.939515 systemd-tmpfiles[1619]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Sep 12 17:54:37.939817 systemd-tmpfiles[1619]: ACLs are not supported, ignoring.
Sep 12 17:54:37.939879 systemd-tmpfiles[1619]: ACLs are not supported, ignoring.
Sep 12 17:54:37.942187 systemd-tmpfiles[1619]: Detected autofs mount point /boot during canonicalization of boot.
Sep 12 17:54:37.942306 systemd-tmpfiles[1619]: Skipping /boot
Sep 12 17:54:37.948982 systemd-tmpfiles[1619]: Detected autofs mount point /boot during canonicalization of boot.
Sep 12 17:54:37.948989 systemd-tmpfiles[1619]: Skipping /boot
Sep 12 17:54:37.985613 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 12 17:54:38.039931 systemd[1]: Reloading finished in 155 ms.
Sep 12 17:54:38.073412 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Sep 12 17:54:38.084448 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 12 17:54:38.095416 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 17:54:38.118559 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Sep 12 17:54:38.129556 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Sep 12 17:54:38.136996 augenrules[1729]: No rules
Sep 12 17:54:38.141000 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Sep 12 17:54:38.153164 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Sep 12 17:54:38.160306 lvm[1734]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Sep 12 17:54:38.165608 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 12 17:54:38.176168 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Sep 12 17:54:38.194780 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Sep 12 17:54:38.204868 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Sep 12 17:54:38.214506 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Sep 12 17:54:38.228309 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Sep 12 17:54:38.238648 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Sep 12 17:54:38.249611 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Sep 12 17:54:38.265260 systemd-networkd[1617]: lo: Link UP
Sep 12 17:54:38.265265 systemd-networkd[1617]: lo: Gained carrier
Sep 12 17:54:38.268362 systemd-networkd[1617]: bond0: netdev ready
Sep 12 17:54:38.269288 systemd-networkd[1617]: Enumeration completed
Sep 12 17:54:38.279536 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Sep 12 17:54:38.287487 systemd-networkd[1617]: enp2s0f0np0: Configuring with /etc/systemd/network/10-b8:ce:f6:07:a6:3a.network.
Sep 12 17:54:38.290454 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 12 17:54:38.301292 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 12 17:54:38.307985 systemd-resolved[1736]: Positive Trust Anchors:
Sep 12 17:54:38.307992 systemd-resolved[1736]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 12 17:54:38.308016 systemd-resolved[1736]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 12 17:54:38.310790 systemd-resolved[1736]: Using system hostname 'ci-4081.3.6-a-7e79e463ed'.
Sep 12 17:54:38.311357 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 12 17:54:38.311478 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 12 17:54:38.321401 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Sep 12 17:54:38.323554 lvm[1753]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Sep 12 17:54:38.332917 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 12 17:54:38.342890 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 12 17:54:38.354864 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 12 17:54:38.364330 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 12 17:54:38.381706 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Sep 12 17:54:38.392841 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Sep 12 17:54:38.403225 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 12 17:54:38.403290 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 12 17:54:38.404123 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Sep 12 17:54:38.415487 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 12 17:54:38.415565 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 12 17:54:38.426445 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 12 17:54:38.426517 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 12 17:54:38.438457 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 12 17:54:38.438533 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 12 17:54:38.448442 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Sep 12 17:54:38.460913 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 12 17:54:38.461053 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 12 17:54:38.471392 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 12 17:54:38.481850 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 12 17:54:38.503412 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 12 17:54:38.514856 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 12 17:54:38.524330 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 12 17:54:38.524411 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 12 17:54:38.524467 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 12 17:54:38.533492 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 12 17:54:38.533567 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 12 17:54:38.544517 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 12 17:54:38.544586 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 12 17:54:38.554508 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 12 17:54:38.554576 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 12 17:54:38.565468 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 12 17:54:38.565534 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 12 17:54:38.576160 systemd[1]: Finished ensure-sysext.service.
Sep 12 17:54:38.585678 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 12 17:54:38.585708 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 12 17:54:38.592394 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Sep 12 17:54:38.634659 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Sep 12 17:54:38.645335 systemd[1]: Reached target time-set.target - System Time Set.
Sep 12 17:54:38.959234 kernel: mlx5_core 0000:02:00.0 enp2s0f0np0: Link up
Sep 12 17:54:38.982640 systemd-networkd[1617]: enp2s0f1np1: Configuring with /etc/systemd/network/10-b8:ce:f6:07:a6:3b.network.
Sep 12 17:54:38.983197 kernel: bond0: (slave enp2s0f0np0): Enslaving as a backup interface with an up link
Sep 12 17:54:39.216216 kernel: mlx5_core 0000:02:00.1 enp2s0f1np1: Link up
Sep 12 17:54:39.238248 kernel: bond0: (slave enp2s0f1np1): Enslaving as a backup interface with an up link
Sep 12 17:54:39.238344 systemd-networkd[1617]: bond0: Configuring with /etc/systemd/network/05-bond0.network.
Sep 12 17:54:39.240082 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 12 17:54:39.240225 systemd-networkd[1617]: enp2s0f0np0: Link UP Sep 12 17:54:39.240836 systemd-networkd[1617]: enp2s0f0np0: Gained carrier Sep 12 17:54:39.260252 kernel: bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond Sep 12 17:54:39.270457 systemd[1]: Reached target network.target - Network. Sep 12 17:54:39.279283 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 12 17:54:39.280135 systemd-networkd[1617]: enp2s0f1np1: Reconfiguring with /etc/systemd/network/10-b8:ce:f6:07:a6:3a.network. Sep 12 17:54:39.280509 systemd-networkd[1617]: enp2s0f1np1: Link UP Sep 12 17:54:39.280891 systemd-networkd[1617]: enp2s0f1np1: Gained carrier Sep 12 17:54:39.290388 systemd[1]: Reached target sysinit.target - System Initialization. Sep 12 17:54:39.296670 systemd-networkd[1617]: bond0: Link UP Sep 12 17:54:39.297257 systemd-networkd[1617]: bond0: Gained carrier Sep 12 17:54:39.297710 systemd-timesyncd[1773]: Network configuration changed, trying to establish connection. Sep 12 17:54:39.301641 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 12 17:54:39.313464 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 12 17:54:39.324730 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 12 17:54:39.334608 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 12 17:54:39.353156 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 12 17:54:39.377734 kernel: bond0: (slave enp2s0f0np0): link status definitely up, 10000 Mbps full duplex Sep 12 17:54:39.377791 kernel: bond0: active interface up! Sep 12 17:54:39.388296 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). 
Sep 12 17:54:39.388337 systemd[1]: Reached target paths.target - Path Units. Sep 12 17:54:39.397272 systemd[1]: Reached target timers.target - Timer Units. Sep 12 17:54:39.405914 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 12 17:54:39.415992 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 12 17:54:39.428098 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 12 17:54:39.437575 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 12 17:54:39.447342 systemd[1]: Reached target sockets.target - Socket Units. Sep 12 17:54:39.457281 systemd[1]: Reached target basic.target - Basic System. Sep 12 17:54:39.465311 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 12 17:54:39.465327 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 12 17:54:39.471306 systemd[1]: Starting containerd.service - containerd container runtime... Sep 12 17:54:39.482067 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Sep 12 17:54:39.491836 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 12 17:54:39.504928 coreos-metadata[1778]: Sep 12 17:54:39.504 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Sep 12 17:54:39.509888 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 12 17:54:39.517145 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 12 17:54:39.517265 kernel: bond0: (slave enp2s0f1np1): link status definitely up, 10000 Mbps full duplex Sep 12 17:54:39.529901 jq[1783]: false Sep 12 17:54:39.538284 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). 
Sep 12 17:54:39.538894 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 12 17:54:39.539706 dbus-daemon[1779]: [system] SELinux support is enabled Sep 12 17:54:39.547119 extend-filesystems[1784]: Found loop4 Sep 12 17:54:39.547119 extend-filesystems[1784]: Found loop5 Sep 12 17:54:39.601319 kernel: EXT4-fs (sda9): resizing filesystem from 553472 to 116605649 blocks Sep 12 17:54:39.601345 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 34 scanned by (udev-worker) (1562) Sep 12 17:54:39.549919 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 12 17:54:39.601451 extend-filesystems[1784]: Found loop6 Sep 12 17:54:39.601451 extend-filesystems[1784]: Found loop7 Sep 12 17:54:39.601451 extend-filesystems[1784]: Found sda Sep 12 17:54:39.601451 extend-filesystems[1784]: Found sda1 Sep 12 17:54:39.601451 extend-filesystems[1784]: Found sda2 Sep 12 17:54:39.601451 extend-filesystems[1784]: Found sda3 Sep 12 17:54:39.601451 extend-filesystems[1784]: Found usr Sep 12 17:54:39.601451 extend-filesystems[1784]: Found sda4 Sep 12 17:54:39.601451 extend-filesystems[1784]: Found sda6 Sep 12 17:54:39.601451 extend-filesystems[1784]: Found sda7 Sep 12 17:54:39.601451 extend-filesystems[1784]: Found sda9 Sep 12 17:54:39.601451 extend-filesystems[1784]: Checking size of /dev/sda9 Sep 12 17:54:39.601451 extend-filesystems[1784]: Resized partition /dev/sda9 Sep 12 17:54:39.622345 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 12 17:54:39.729453 extend-filesystems[1792]: resize2fs 1.47.1 (20-May-2024) Sep 12 17:54:39.668376 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 12 17:54:39.696300 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 12 17:54:39.705358 systemd[1]: Starting tcsd.service - TCG Core Services Daemon... 
Sep 12 17:54:39.729499 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 12 17:54:39.729853 systemd[1]: Starting update-engine.service - Update Engine... Sep 12 17:54:39.736616 systemd-logind[1804]: Watching system buttons on /dev/input/event3 (Power Button) Sep 12 17:54:39.736626 systemd-logind[1804]: Watching system buttons on /dev/input/event2 (Sleep Button) Sep 12 17:54:39.736636 systemd-logind[1804]: Watching system buttons on /dev/input/event0 (HID 0557:2419) Sep 12 17:54:39.736793 systemd-logind[1804]: New seat seat0. Sep 12 17:54:39.745946 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 12 17:54:39.753621 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 12 17:54:39.765583 systemd[1]: Started systemd-logind.service - User Login Management. Sep 12 17:54:39.772186 update_engine[1809]: I20250912 17:54:39.772125 1809 main.cc:92] Flatcar Update Engine starting Sep 12 17:54:39.773005 update_engine[1809]: I20250912 17:54:39.772961 1809 update_check_scheduler.cc:74] Next update check in 3m14s Sep 12 17:54:39.775024 jq[1810]: true Sep 12 17:54:39.782859 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 12 17:54:39.782950 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 12 17:54:39.783104 systemd[1]: motdgen.service: Deactivated successfully. Sep 12 17:54:39.783195 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 12 17:54:39.793661 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 12 17:54:39.793757 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Sep 12 17:54:39.800875 sshd_keygen[1807]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 12 17:54:39.807286 (ntainerd)[1819]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 12 17:54:39.808814 jq[1818]: true Sep 12 17:54:39.811653 dbus-daemon[1779]: [system] Successfully activated service 'org.freedesktop.systemd1' Sep 12 17:54:39.812775 tar[1812]: linux-amd64/helm Sep 12 17:54:39.814576 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 12 17:54:39.824513 systemd[1]: tcsd.service: Skipped due to 'exec-condition'. Sep 12 17:54:39.824647 systemd[1]: Condition check resulted in tcsd.service - TCG Core Services Daemon being skipped. Sep 12 17:54:39.831870 systemd[1]: Started update-engine.service - Update Engine. Sep 12 17:54:39.852373 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 12 17:54:39.859987 bash[1848]: Updated "/home/core/.ssh/authorized_keys" Sep 12 17:54:39.860276 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 12 17:54:39.860398 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 12 17:54:39.871303 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 12 17:54:39.871400 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 12 17:54:39.893348 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 12 17:54:39.905398 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. 
Sep 12 17:54:39.916043 locksmithd[1857]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 12 17:54:39.916564 systemd[1]: issuegen.service: Deactivated successfully. Sep 12 17:54:39.916665 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 12 17:54:39.939388 systemd[1]: Starting sshkeys.service... Sep 12 17:54:39.946982 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 12 17:54:39.959403 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Sep 12 17:54:39.970644 containerd[1819]: time="2025-09-12T17:54:39.970589822Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Sep 12 17:54:39.971295 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Sep 12 17:54:39.982567 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 12 17:54:39.983504 containerd[1819]: time="2025-09-12T17:54:39.983482975Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Sep 12 17:54:39.984206 containerd[1819]: time="2025-09-12T17:54:39.984183385Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.106-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Sep 12 17:54:39.984238 containerd[1819]: time="2025-09-12T17:54:39.984206002Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Sep 12 17:54:39.984238 containerd[1819]: time="2025-09-12T17:54:39.984217567Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." 
type=io.containerd.internal.v1 Sep 12 17:54:39.984312 containerd[1819]: time="2025-09-12T17:54:39.984302769Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Sep 12 17:54:39.984342 containerd[1819]: time="2025-09-12T17:54:39.984313020Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Sep 12 17:54:39.984367 containerd[1819]: time="2025-09-12T17:54:39.984348847Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Sep 12 17:54:39.984367 containerd[1819]: time="2025-09-12T17:54:39.984357375Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Sep 12 17:54:39.984458 containerd[1819]: time="2025-09-12T17:54:39.984447290Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 12 17:54:39.984458 containerd[1819]: time="2025-09-12T17:54:39.984456616Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Sep 12 17:54:39.984509 containerd[1819]: time="2025-09-12T17:54:39.984464162Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Sep 12 17:54:39.984509 containerd[1819]: time="2025-09-12T17:54:39.984469862Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Sep 12 17:54:39.984559 containerd[1819]: time="2025-09-12T17:54:39.984512839Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." 
type=io.containerd.snapshotter.v1 Sep 12 17:54:39.984646 containerd[1819]: time="2025-09-12T17:54:39.984637873Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Sep 12 17:54:39.984701 containerd[1819]: time="2025-09-12T17:54:39.984691510Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 12 17:54:39.984701 containerd[1819]: time="2025-09-12T17:54:39.984700199Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Sep 12 17:54:39.984756 containerd[1819]: time="2025-09-12T17:54:39.984743859Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Sep 12 17:54:39.984783 containerd[1819]: time="2025-09-12T17:54:39.984772977Z" level=info msg="metadata content store policy set" policy=shared Sep 12 17:54:39.994936 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 12 17:54:39.996454 containerd[1819]: time="2025-09-12T17:54:39.996437726Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Sep 12 17:54:39.996491 containerd[1819]: time="2025-09-12T17:54:39.996467714Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Sep 12 17:54:39.996491 containerd[1819]: time="2025-09-12T17:54:39.996477661Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Sep 12 17:54:39.996670 containerd[1819]: time="2025-09-12T17:54:39.996597178Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Sep 12 17:54:39.996700 containerd[1819]: time="2025-09-12T17:54:39.996682854Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." 
type=io.containerd.runtime.v1 Sep 12 17:54:39.996847 containerd[1819]: time="2025-09-12T17:54:39.996832530Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Sep 12 17:54:39.997081 containerd[1819]: time="2025-09-12T17:54:39.997069692Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Sep 12 17:54:39.997143 containerd[1819]: time="2025-09-12T17:54:39.997135112Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Sep 12 17:54:39.997170 containerd[1819]: time="2025-09-12T17:54:39.997145995Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Sep 12 17:54:39.997170 containerd[1819]: time="2025-09-12T17:54:39.997154225Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Sep 12 17:54:39.997170 containerd[1819]: time="2025-09-12T17:54:39.997162226Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Sep 12 17:54:39.997251 containerd[1819]: time="2025-09-12T17:54:39.997169716Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Sep 12 17:54:39.997251 containerd[1819]: time="2025-09-12T17:54:39.997176743Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Sep 12 17:54:39.997251 containerd[1819]: time="2025-09-12T17:54:39.997184482Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Sep 12 17:54:39.997251 containerd[1819]: time="2025-09-12T17:54:39.997198619Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." 
type=io.containerd.service.v1 Sep 12 17:54:39.997251 containerd[1819]: time="2025-09-12T17:54:39.997208715Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Sep 12 17:54:39.997251 containerd[1819]: time="2025-09-12T17:54:39.997215809Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Sep 12 17:54:39.997251 containerd[1819]: time="2025-09-12T17:54:39.997222492Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Sep 12 17:54:39.997251 containerd[1819]: time="2025-09-12T17:54:39.997233801Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Sep 12 17:54:39.997251 containerd[1819]: time="2025-09-12T17:54:39.997241606Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Sep 12 17:54:39.997251 containerd[1819]: time="2025-09-12T17:54:39.997248760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Sep 12 17:54:39.997491 containerd[1819]: time="2025-09-12T17:54:39.997256399Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Sep 12 17:54:39.997491 containerd[1819]: time="2025-09-12T17:54:39.997263523Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Sep 12 17:54:39.997491 containerd[1819]: time="2025-09-12T17:54:39.997270561Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Sep 12 17:54:39.997491 containerd[1819]: time="2025-09-12T17:54:39.997277015Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Sep 12 17:54:39.997491 containerd[1819]: time="2025-09-12T17:54:39.997283903Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." 
type=io.containerd.grpc.v1 Sep 12 17:54:39.997491 containerd[1819]: time="2025-09-12T17:54:39.997290717Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Sep 12 17:54:39.997491 containerd[1819]: time="2025-09-12T17:54:39.997298448Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Sep 12 17:54:39.997491 containerd[1819]: time="2025-09-12T17:54:39.997304956Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Sep 12 17:54:39.997491 containerd[1819]: time="2025-09-12T17:54:39.997311629Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Sep 12 17:54:39.997491 containerd[1819]: time="2025-09-12T17:54:39.997318396Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Sep 12 17:54:39.997491 containerd[1819]: time="2025-09-12T17:54:39.997326867Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Sep 12 17:54:39.997491 containerd[1819]: time="2025-09-12T17:54:39.997339226Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Sep 12 17:54:39.997491 containerd[1819]: time="2025-09-12T17:54:39.997349483Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Sep 12 17:54:39.997491 containerd[1819]: time="2025-09-12T17:54:39.997356122Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Sep 12 17:54:39.997819 containerd[1819]: time="2025-09-12T17:54:39.997379636Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Sep 12 17:54:39.997819 containerd[1819]: time="2025-09-12T17:54:39.997389895Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Sep 12 17:54:39.997819 containerd[1819]: time="2025-09-12T17:54:39.997396302Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Sep 12 17:54:39.997819 containerd[1819]: time="2025-09-12T17:54:39.997403791Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Sep 12 17:54:39.997819 containerd[1819]: time="2025-09-12T17:54:39.997409321Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Sep 12 17:54:39.997819 containerd[1819]: time="2025-09-12T17:54:39.997416012Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Sep 12 17:54:39.997819 containerd[1819]: time="2025-09-12T17:54:39.997421729Z" level=info msg="NRI interface is disabled by configuration." Sep 12 17:54:39.997819 containerd[1819]: time="2025-09-12T17:54:39.997429293Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Sep 12 17:54:39.998016 containerd[1819]: time="2025-09-12T17:54:39.997584908Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false 
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Sep 12 17:54:39.998016 containerd[1819]: time="2025-09-12T17:54:39.997618354Z" level=info msg="Connect containerd service" Sep 12 17:54:39.998016 containerd[1819]: time="2025-09-12T17:54:39.997636353Z" level=info msg="using legacy CRI server" Sep 12 17:54:39.998016 containerd[1819]: time="2025-09-12T17:54:39.997640818Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 12 17:54:39.998016 containerd[1819]: time="2025-09-12T17:54:39.997691972Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Sep 12 17:54:39.998016 containerd[1819]: time="2025-09-12T17:54:39.997981038Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 12 17:54:39.998257 containerd[1819]: time="2025-09-12T17:54:39.998067691Z" level=info msg="Start subscribing containerd event" Sep 12 17:54:39.998257 containerd[1819]: time="2025-09-12T17:54:39.998100513Z" level=info msg="Start recovering state" Sep 12 17:54:39.998257 containerd[1819]: time="2025-09-12T17:54:39.998145483Z" level=info msg="Start event monitor" Sep 12 17:54:39.998257 containerd[1819]: time="2025-09-12T17:54:39.998149379Z" level=info 
msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 12 17:54:39.998257 containerd[1819]: time="2025-09-12T17:54:39.998155894Z" level=info msg="Start snapshots syncer" Sep 12 17:54:39.998257 containerd[1819]: time="2025-09-12T17:54:39.998166196Z" level=info msg="Start cni network conf syncer for default" Sep 12 17:54:39.998257 containerd[1819]: time="2025-09-12T17:54:39.998170465Z" level=info msg="Start streaming server" Sep 12 17:54:39.998257 containerd[1819]: time="2025-09-12T17:54:39.998177609Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 12 17:54:39.998257 containerd[1819]: time="2025-09-12T17:54:39.998210401Z" level=info msg="containerd successfully booted in 0.028139s" Sep 12 17:54:40.001698 coreos-metadata[1877]: Sep 12 17:54:40.001 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Sep 12 17:54:40.004119 systemd[1]: Started serial-getty@ttyS1.service - Serial Getty on ttyS1. Sep 12 17:54:40.013413 systemd[1]: Reached target getty.target - Login Prompts. Sep 12 17:54:40.021593 systemd[1]: Started containerd.service - containerd container runtime. Sep 12 17:54:40.081149 tar[1812]: linux-amd64/LICENSE Sep 12 17:54:40.081212 tar[1812]: linux-amd64/README.md Sep 12 17:54:40.096227 kernel: EXT4-fs (sda9): resized filesystem to 116605649 Sep 12 17:54:40.122691 extend-filesystems[1792]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Sep 12 17:54:40.122691 extend-filesystems[1792]: old_desc_blocks = 1, new_desc_blocks = 56 Sep 12 17:54:40.122691 extend-filesystems[1792]: The filesystem on /dev/sda9 is now 116605649 (4k) blocks long. Sep 12 17:54:40.162278 extend-filesystems[1784]: Resized filesystem in /dev/sda9 Sep 12 17:54:40.162278 extend-filesystems[1784]: Found sdb Sep 12 17:54:40.123410 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 12 17:54:40.123508 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. 
Sep 12 17:54:40.170486 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 12 17:54:41.223510 systemd-networkd[1617]: bond0: Gained IPv6LL Sep 12 17:54:41.225290 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 12 17:54:41.236660 systemd[1]: Reached target network-online.target - Network is Online. Sep 12 17:54:41.258448 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:54:41.269003 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 12 17:54:41.286915 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 12 17:54:42.102895 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:54:42.113749 (kubelet)[1914]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 17:54:42.460495 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 12 17:54:42.479429 systemd[1]: Started sshd@0-139.178.94.21:22-147.75.109.163:48304.service - OpenSSH per-connection server daemon (147.75.109.163:48304). Sep 12 17:54:42.533578 sshd[1926]: Accepted publickey for core from 147.75.109.163 port 48304 ssh2: RSA SHA256:6p41YxiFESxrZTnbcy95UBNnL9kP2MVm2sysusSZqw8 Sep 12 17:54:42.534631 sshd[1926]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:54:42.540272 systemd-logind[1804]: New session 1 of user core. Sep 12 17:54:42.541345 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 12 17:54:42.565462 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 12 17:54:42.578138 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. 
Sep 12 17:54:42.586590 kubelet[1914]: E0912 17:54:42.586574 1914 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 17:54:42.589524 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 17:54:42.589602 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 12 17:54:42.609432 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 12 17:54:42.619098 (systemd)[1937]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 12 17:54:42.696090 systemd[1937]: Queued start job for default target default.target. Sep 12 17:54:42.706765 systemd[1937]: Created slice app.slice - User Application Slice. Sep 12 17:54:42.706779 systemd[1937]: Reached target paths.target - Paths. Sep 12 17:54:42.706788 systemd[1937]: Reached target timers.target - Timers. Sep 12 17:54:42.707446 systemd[1937]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 12 17:54:42.713224 systemd[1937]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 12 17:54:42.713254 systemd[1937]: Reached target sockets.target - Sockets. Sep 12 17:54:42.713263 systemd[1937]: Reached target basic.target - Basic System. Sep 12 17:54:42.713283 systemd[1937]: Reached target default.target - Main User Target. Sep 12 17:54:42.713299 systemd[1937]: Startup finished in 90ms. Sep 12 17:54:42.713393 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 12 17:54:42.743280 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 12 17:54:43.271217 systemd-resolved[1736]: Clock change detected. Flushing caches. Sep 12 17:54:43.271268 systemd-timesyncd[1773]: Contacted time server 23.186.168.125:123 (0.flatcar.pool.ntp.org). 
Sep 12 17:54:43.271317 systemd-timesyncd[1773]: Initial clock synchronization to Fri 2025-09-12 17:54:43.271137 UTC. Sep 12 17:54:43.281195 systemd[1]: Started sshd@1-139.178.94.21:22-147.75.109.163:48312.service - OpenSSH per-connection server daemon (147.75.109.163:48312). Sep 12 17:54:43.320226 sshd[1948]: Accepted publickey for core from 147.75.109.163 port 48312 ssh2: RSA SHA256:6p41YxiFESxrZTnbcy95UBNnL9kP2MVm2sysusSZqw8 Sep 12 17:54:43.320944 sshd[1948]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:54:43.323389 systemd-logind[1804]: New session 2 of user core. Sep 12 17:54:43.333625 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 12 17:54:43.390694 sshd[1948]: pam_unix(sshd:session): session closed for user core Sep 12 17:54:43.414125 systemd[1]: sshd@1-139.178.94.21:22-147.75.109.163:48312.service: Deactivated successfully. Sep 12 17:54:43.417944 systemd[1]: session-2.scope: Deactivated successfully. Sep 12 17:54:43.421262 systemd-logind[1804]: Session 2 logged out. Waiting for processes to exit. Sep 12 17:54:43.434533 kernel: mlx5_core 0000:02:00.0: lag map: port 1:1 port 2:2 Sep 12 17:54:43.434656 kernel: mlx5_core 0000:02:00.0: shared_fdb:0 mode:queue_affinity Sep 12 17:54:43.456824 systemd[1]: Started sshd@2-139.178.94.21:22-147.75.109.163:48324.service - OpenSSH per-connection server daemon (147.75.109.163:48324). Sep 12 17:54:43.468277 systemd-logind[1804]: Removed session 2. Sep 12 17:54:43.485421 sshd[1955]: Accepted publickey for core from 147.75.109.163 port 48324 ssh2: RSA SHA256:6p41YxiFESxrZTnbcy95UBNnL9kP2MVm2sysusSZqw8 Sep 12 17:54:43.488817 sshd[1955]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:54:43.499959 systemd-logind[1804]: New session 3 of user core. Sep 12 17:54:43.522018 systemd[1]: Started session-3.scope - Session 3 of User core. 
Sep 12 17:54:43.602518 sshd[1955]: pam_unix(sshd:session): session closed for user core Sep 12 17:54:43.610332 systemd[1]: sshd@2-139.178.94.21:22-147.75.109.163:48324.service: Deactivated successfully. Sep 12 17:54:43.614360 systemd[1]: session-3.scope: Deactivated successfully. Sep 12 17:54:43.616270 systemd-logind[1804]: Session 3 logged out. Waiting for processes to exit. Sep 12 17:54:43.618645 systemd-logind[1804]: Removed session 3. Sep 12 17:54:43.877543 coreos-metadata[1877]: Sep 12 17:54:43.877 INFO Fetch successful Sep 12 17:54:43.909837 unknown[1877]: wrote ssh authorized keys file for user: core Sep 12 17:54:43.929345 update-ssh-keys[1963]: Updated "/home/core/.ssh/authorized_keys" Sep 12 17:54:43.929941 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Sep 12 17:54:43.941429 systemd[1]: Finished sshkeys.service. Sep 12 17:54:43.942259 coreos-metadata[1778]: Sep 12 17:54:43.942 INFO Fetch successful Sep 12 17:54:43.994569 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Sep 12 17:54:44.005654 systemd[1]: Starting packet-phone-home.service - Report Success to Packet... Sep 12 17:54:44.653565 systemd[1]: Finished packet-phone-home.service - Report Success to Packet. Sep 12 17:54:44.666446 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 12 17:54:44.678235 systemd[1]: Startup finished in 1.858s (kernel) + 28.620s (initrd) + 10.726s (userspace) = 41.205s. Sep 12 17:54:44.701844 login[1892]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Sep 12 17:54:44.705482 systemd-logind[1804]: New session 4 of user core. Sep 12 17:54:44.726786 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 12 17:54:44.735357 login[1885]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Sep 12 17:54:44.738165 systemd-logind[1804]: New session 5 of user core. 
Sep 12 17:54:44.738924 systemd[1]: Started session-5.scope - Session 5 of User core. Sep 12 17:54:53.264911 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 12 17:54:53.274689 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:54:53.535277 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:54:53.537411 (kubelet)[2007]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 17:54:53.560559 kubelet[2007]: E0912 17:54:53.560532 2007 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 17:54:53.562455 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 17:54:53.562536 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 12 17:54:53.618585 systemd[1]: Started sshd@3-139.178.94.21:22-147.75.109.163:48614.service - OpenSSH per-connection server daemon (147.75.109.163:48614). Sep 12 17:54:53.681577 sshd[2024]: Accepted publickey for core from 147.75.109.163 port 48614 ssh2: RSA SHA256:6p41YxiFESxrZTnbcy95UBNnL9kP2MVm2sysusSZqw8 Sep 12 17:54:53.682207 sshd[2024]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:54:53.684660 systemd-logind[1804]: New session 6 of user core. Sep 12 17:54:53.693911 systemd[1]: Started session-6.scope - Session 6 of User core. Sep 12 17:54:53.753413 sshd[2024]: pam_unix(sshd:session): session closed for user core Sep 12 17:54:53.766131 systemd[1]: sshd@3-139.178.94.21:22-147.75.109.163:48614.service: Deactivated successfully. Sep 12 17:54:53.766942 systemd[1]: session-6.scope: Deactivated successfully. 
Sep 12 17:54:53.767693 systemd-logind[1804]: Session 6 logged out. Waiting for processes to exit. Sep 12 17:54:53.768488 systemd[1]: Started sshd@4-139.178.94.21:22-147.75.109.163:48620.service - OpenSSH per-connection server daemon (147.75.109.163:48620). Sep 12 17:54:53.769050 systemd-logind[1804]: Removed session 6. Sep 12 17:54:53.802954 sshd[2031]: Accepted publickey for core from 147.75.109.163 port 48620 ssh2: RSA SHA256:6p41YxiFESxrZTnbcy95UBNnL9kP2MVm2sysusSZqw8 Sep 12 17:54:53.803892 sshd[2031]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:54:53.807382 systemd-logind[1804]: New session 7 of user core. Sep 12 17:54:53.824861 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 12 17:54:53.879333 sshd[2031]: pam_unix(sshd:session): session closed for user core Sep 12 17:54:53.896839 systemd[1]: sshd@4-139.178.94.21:22-147.75.109.163:48620.service: Deactivated successfully. Sep 12 17:54:53.900572 systemd[1]: session-7.scope: Deactivated successfully. Sep 12 17:54:53.904080 systemd-logind[1804]: Session 7 logged out. Waiting for processes to exit. Sep 12 17:54:53.918204 systemd[1]: Started sshd@5-139.178.94.21:22-147.75.109.163:48632.service - OpenSSH per-connection server daemon (147.75.109.163:48632). Sep 12 17:54:53.920951 systemd-logind[1804]: Removed session 7. Sep 12 17:54:53.976949 sshd[2038]: Accepted publickey for core from 147.75.109.163 port 48632 ssh2: RSA SHA256:6p41YxiFESxrZTnbcy95UBNnL9kP2MVm2sysusSZqw8 Sep 12 17:54:53.977973 sshd[2038]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:54:53.981655 systemd-logind[1804]: New session 8 of user core. Sep 12 17:54:53.994685 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 12 17:54:54.059993 sshd[2038]: pam_unix(sshd:session): session closed for user core Sep 12 17:54:54.083232 systemd[1]: sshd@5-139.178.94.21:22-147.75.109.163:48632.service: Deactivated successfully. 
Sep 12 17:54:54.084224 systemd[1]: session-8.scope: Deactivated successfully. Sep 12 17:54:54.084927 systemd-logind[1804]: Session 8 logged out. Waiting for processes to exit. Sep 12 17:54:54.085449 systemd[1]: Started sshd@6-139.178.94.21:22-147.75.109.163:48634.service - OpenSSH per-connection server daemon (147.75.109.163:48634). Sep 12 17:54:54.085971 systemd-logind[1804]: Removed session 8. Sep 12 17:54:54.115447 sshd[2045]: Accepted publickey for core from 147.75.109.163 port 48634 ssh2: RSA SHA256:6p41YxiFESxrZTnbcy95UBNnL9kP2MVm2sysusSZqw8 Sep 12 17:54:54.116092 sshd[2045]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:54:54.118676 systemd-logind[1804]: New session 9 of user core. Sep 12 17:54:54.138724 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 12 17:54:54.202328 sudo[2048]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 12 17:54:54.202486 sudo[2048]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 17:54:54.223328 sudo[2048]: pam_unix(sudo:session): session closed for user root Sep 12 17:54:54.224582 sshd[2045]: pam_unix(sshd:session): session closed for user core Sep 12 17:54:54.250672 systemd[1]: sshd@6-139.178.94.21:22-147.75.109.163:48634.service: Deactivated successfully. Sep 12 17:54:54.251349 systemd[1]: session-9.scope: Deactivated successfully. Sep 12 17:54:54.252055 systemd-logind[1804]: Session 9 logged out. Waiting for processes to exit. Sep 12 17:54:54.265766 systemd[1]: Started sshd@7-139.178.94.21:22-147.75.109.163:48642.service - OpenSSH per-connection server daemon (147.75.109.163:48642). Sep 12 17:54:54.266419 systemd-logind[1804]: Removed session 9. 
Sep 12 17:54:54.296935 sshd[2053]: Accepted publickey for core from 147.75.109.163 port 48642 ssh2: RSA SHA256:6p41YxiFESxrZTnbcy95UBNnL9kP2MVm2sysusSZqw8 Sep 12 17:54:54.297926 sshd[2053]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:54:54.301354 systemd-logind[1804]: New session 10 of user core. Sep 12 17:54:54.319753 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 12 17:54:54.377999 sudo[2057]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 12 17:54:54.378152 sudo[2057]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 17:54:54.380222 sudo[2057]: pam_unix(sudo:session): session closed for user root Sep 12 17:54:54.382834 sudo[2056]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Sep 12 17:54:54.382981 sudo[2056]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 17:54:54.402755 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Sep 12 17:54:54.404014 auditctl[2060]: No rules Sep 12 17:54:54.404252 systemd[1]: audit-rules.service: Deactivated successfully. Sep 12 17:54:54.404390 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Sep 12 17:54:54.406089 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Sep 12 17:54:54.432476 augenrules[2078]: No rules Sep 12 17:54:54.432817 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Sep 12 17:54:54.433338 sudo[2056]: pam_unix(sudo:session): session closed for user root Sep 12 17:54:54.434588 sshd[2053]: pam_unix(sshd:session): session closed for user core Sep 12 17:54:54.444243 systemd[1]: sshd@7-139.178.94.21:22-147.75.109.163:48642.service: Deactivated successfully. Sep 12 17:54:54.445040 systemd[1]: session-10.scope: Deactivated successfully. 
Sep 12 17:54:54.445821 systemd-logind[1804]: Session 10 logged out. Waiting for processes to exit. Sep 12 17:54:54.446685 systemd[1]: Started sshd@8-139.178.94.21:22-147.75.109.163:48646.service - OpenSSH per-connection server daemon (147.75.109.163:48646). Sep 12 17:54:54.447237 systemd-logind[1804]: Removed session 10. Sep 12 17:54:54.477553 sshd[2086]: Accepted publickey for core from 147.75.109.163 port 48646 ssh2: RSA SHA256:6p41YxiFESxrZTnbcy95UBNnL9kP2MVm2sysusSZqw8 Sep 12 17:54:54.478255 sshd[2086]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:54:54.480914 systemd-logind[1804]: New session 11 of user core. Sep 12 17:54:54.500730 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 12 17:54:54.553859 sudo[2089]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 12 17:54:54.554009 sudo[2089]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 17:54:54.816790 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 12 17:54:54.816846 (dockerd)[2115]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 12 17:54:55.064953 dockerd[2115]: time="2025-09-12T17:54:55.064886220Z" level=info msg="Starting up" Sep 12 17:54:55.132815 dockerd[2115]: time="2025-09-12T17:54:55.132734452Z" level=info msg="Loading containers: start." Sep 12 17:54:55.215476 kernel: Initializing XFRM netlink socket Sep 12 17:54:55.278873 systemd-networkd[1617]: docker0: Link UP Sep 12 17:54:55.317559 dockerd[2115]: time="2025-09-12T17:54:55.317504594Z" level=info msg="Loading containers: done." 
Sep 12 17:54:55.326139 dockerd[2115]: time="2025-09-12T17:54:55.326090454Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 12 17:54:55.326210 dockerd[2115]: time="2025-09-12T17:54:55.326144478Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Sep 12 17:54:55.326210 dockerd[2115]: time="2025-09-12T17:54:55.326198437Z" level=info msg="Daemon has completed initialization" Sep 12 17:54:55.326151 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck757353606-merged.mount: Deactivated successfully. Sep 12 17:54:55.341508 dockerd[2115]: time="2025-09-12T17:54:55.341456629Z" level=info msg="API listen on /run/docker.sock" Sep 12 17:54:55.341620 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 12 17:54:56.193394 containerd[1819]: time="2025-09-12T17:54:56.193348091Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.13\"" Sep 12 17:54:56.722120 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3332967850.mount: Deactivated successfully. 
Sep 12 17:54:57.454006 containerd[1819]: time="2025-09-12T17:54:57.453982526Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:54:57.454213 containerd[1819]: time="2025-09-12T17:54:57.454147715Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.13: active requests=0, bytes read=28117124" Sep 12 17:54:57.454602 containerd[1819]: time="2025-09-12T17:54:57.454591955Z" level=info msg="ImageCreate event name:\"sha256:368da3301bb03f4bef9f7dc2084f5fc5954b0ac1bf1e49ca502e3a7604011e54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:54:57.456619 containerd[1819]: time="2025-09-12T17:54:57.456582493Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:9abeb8a2d3e53e356d1f2e5d5dc2081cf28f23242651b0552c9e38f4a7ae960e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:54:57.457118 containerd[1819]: time="2025-09-12T17:54:57.457078547Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.13\" with image id \"sha256:368da3301bb03f4bef9f7dc2084f5fc5954b0ac1bf1e49ca502e3a7604011e54\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.13\", repo digest \"registry.k8s.io/kube-apiserver@sha256:9abeb8a2d3e53e356d1f2e5d5dc2081cf28f23242651b0552c9e38f4a7ae960e\", size \"28113723\" in 1.263708904s" Sep 12 17:54:57.457118 containerd[1819]: time="2025-09-12T17:54:57.457099149Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.13\" returns image reference \"sha256:368da3301bb03f4bef9f7dc2084f5fc5954b0ac1bf1e49ca502e3a7604011e54\"" Sep 12 17:54:57.457398 containerd[1819]: time="2025-09-12T17:54:57.457386580Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.13\"" Sep 12 17:54:58.447216 containerd[1819]: time="2025-09-12T17:54:58.447161860Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.13\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:54:58.447378 containerd[1819]: time="2025-09-12T17:54:58.447328763Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.13: active requests=0, bytes read=24716632" Sep 12 17:54:58.447818 containerd[1819]: time="2025-09-12T17:54:58.447773241Z" level=info msg="ImageCreate event name:\"sha256:cbd19105c6bcbedf394f51c8bb963def5195c300fc7d04bc39d48d14d23c0ff0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:54:58.450077 containerd[1819]: time="2025-09-12T17:54:58.450034703Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:facc91288697a288a691520949fe4eec40059ef065c89da8e10481d14e131b09\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:54:58.450665 containerd[1819]: time="2025-09-12T17:54:58.450649707Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.13\" with image id \"sha256:cbd19105c6bcbedf394f51c8bb963def5195c300fc7d04bc39d48d14d23c0ff0\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.13\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:facc91288697a288a691520949fe4eec40059ef065c89da8e10481d14e131b09\", size \"26351311\" in 993.244411ms" Sep 12 17:54:58.450698 containerd[1819]: time="2025-09-12T17:54:58.450666637Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.13\" returns image reference \"sha256:cbd19105c6bcbedf394f51c8bb963def5195c300fc7d04bc39d48d14d23c0ff0\"" Sep 12 17:54:58.450941 containerd[1819]: time="2025-09-12T17:54:58.450928038Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.13\"" Sep 12 17:54:59.298044 containerd[1819]: time="2025-09-12T17:54:59.298020151Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:54:59.298259 containerd[1819]: 
time="2025-09-12T17:54:59.298237974Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.13: active requests=0, bytes read=18787698" Sep 12 17:54:59.298703 containerd[1819]: time="2025-09-12T17:54:59.298659519Z" level=info msg="ImageCreate event name:\"sha256:d019d989e2b1f0b08ea7eebd4dd7673bdd6ba2218a3c5a6bd53f6848d5fc1af6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:54:59.300206 containerd[1819]: time="2025-09-12T17:54:59.300164990Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:c5ce150dcce2419fdef9f9875fef43014355ccebf937846ed3a2971953f9b241\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:54:59.301279 containerd[1819]: time="2025-09-12T17:54:59.301237439Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.13\" with image id \"sha256:d019d989e2b1f0b08ea7eebd4dd7673bdd6ba2218a3c5a6bd53f6848d5fc1af6\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.13\", repo digest \"registry.k8s.io/kube-scheduler@sha256:c5ce150dcce2419fdef9f9875fef43014355ccebf937846ed3a2971953f9b241\", size \"20422395\" in 850.291947ms" Sep 12 17:54:59.301279 containerd[1819]: time="2025-09-12T17:54:59.301254137Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.13\" returns image reference \"sha256:d019d989e2b1f0b08ea7eebd4dd7673bdd6ba2218a3c5a6bd53f6848d5fc1af6\"" Sep 12 17:54:59.301531 containerd[1819]: time="2025-09-12T17:54:59.301489520Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.13\"" Sep 12 17:55:00.031420 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2232918209.mount: Deactivated successfully. 
Sep 12 17:55:00.214757 containerd[1819]: time="2025-09-12T17:55:00.214731835Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:55:00.214916 containerd[1819]: time="2025-09-12T17:55:00.214894230Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.13: active requests=0, bytes read=30410252" Sep 12 17:55:00.215188 containerd[1819]: time="2025-09-12T17:55:00.215176405Z" level=info msg="ImageCreate event name:\"sha256:21d97a49eeb0b08ecaba421a84a79ca44cf2bc57773c085bbfda537488790ad7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:55:00.216714 containerd[1819]: time="2025-09-12T17:55:00.216690917Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:a39637326e88d128d38da6ff2b2ceb4e856475887bfcb5f7a55734d4f63d9fae\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:55:00.217200 containerd[1819]: time="2025-09-12T17:55:00.217156806Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.13\" with image id \"sha256:21d97a49eeb0b08ecaba421a84a79ca44cf2bc57773c085bbfda537488790ad7\", repo tag \"registry.k8s.io/kube-proxy:v1.31.13\", repo digest \"registry.k8s.io/kube-proxy@sha256:a39637326e88d128d38da6ff2b2ceb4e856475887bfcb5f7a55734d4f63d9fae\", size \"30409271\" in 915.647624ms" Sep 12 17:55:00.217200 containerd[1819]: time="2025-09-12T17:55:00.217175110Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.13\" returns image reference \"sha256:21d97a49eeb0b08ecaba421a84a79ca44cf2bc57773c085bbfda537488790ad7\"" Sep 12 17:55:00.217451 containerd[1819]: time="2025-09-12T17:55:00.217412969Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Sep 12 17:55:00.731101 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount896598027.mount: Deactivated successfully. 
Sep 12 17:55:01.271103 containerd[1819]: time="2025-09-12T17:55:01.271042496Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:55:01.271323 containerd[1819]: time="2025-09-12T17:55:01.271234281Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Sep 12 17:55:01.271731 containerd[1819]: time="2025-09-12T17:55:01.271689675Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:55:01.273391 containerd[1819]: time="2025-09-12T17:55:01.273342179Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:55:01.274029 containerd[1819]: time="2025-09-12T17:55:01.273985742Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.056557765s" Sep 12 17:55:01.274029 containerd[1819]: time="2025-09-12T17:55:01.274003009Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Sep 12 17:55:01.274340 containerd[1819]: time="2025-09-12T17:55:01.274303494Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 12 17:55:01.684634 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4290535596.mount: Deactivated successfully. 
Sep 12 17:55:01.685831 containerd[1819]: time="2025-09-12T17:55:01.685812919Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:55:01.685972 containerd[1819]: time="2025-09-12T17:55:01.685949790Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Sep 12 17:55:01.686490 containerd[1819]: time="2025-09-12T17:55:01.686477465Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:55:01.687694 containerd[1819]: time="2025-09-12T17:55:01.687681611Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:55:01.688470 containerd[1819]: time="2025-09-12T17:55:01.688430894Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 414.113016ms" Sep 12 17:55:01.688517 containerd[1819]: time="2025-09-12T17:55:01.688473920Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Sep 12 17:55:01.688804 containerd[1819]: time="2025-09-12T17:55:01.688793832Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Sep 12 17:55:02.268933 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1695582424.mount: Deactivated successfully. 
Sep 12 17:55:03.323763 containerd[1819]: time="2025-09-12T17:55:03.323706784Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:55:03.323975 containerd[1819]: time="2025-09-12T17:55:03.323912790Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56910709" Sep 12 17:55:03.324362 containerd[1819]: time="2025-09-12T17:55:03.324324278Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:55:03.326120 containerd[1819]: time="2025-09-12T17:55:03.326102933Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:55:03.326859 containerd[1819]: time="2025-09-12T17:55:03.326842796Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 1.63803363s" Sep 12 17:55:03.326895 containerd[1819]: time="2025-09-12T17:55:03.326862683Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Sep 12 17:55:03.765018 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 12 17:55:03.781567 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:55:04.039357 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 12 17:55:04.041586 (kubelet)[2520]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 17:55:04.062310 kubelet[2520]: E0912 17:55:04.062232 2520 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 17:55:04.063282 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 17:55:04.063359 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 12 17:55:04.965755 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:55:04.976822 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:55:04.992061 systemd[1]: Reloading requested from client PID 2545 ('systemctl') (unit session-11.scope)... Sep 12 17:55:04.992069 systemd[1]: Reloading... Sep 12 17:55:05.037572 zram_generator::config[2584]: No configuration found. Sep 12 17:55:05.105661 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 12 17:55:05.166523 systemd[1]: Reloading finished in 174 ms. Sep 12 17:55:05.201068 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 12 17:55:05.201111 systemd[1]: kubelet.service: Failed with result 'signal'. Sep 12 17:55:05.201221 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:55:05.221377 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:55:05.476416 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 12 17:55:05.478924 (kubelet)[2649]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 12 17:55:05.500313 kubelet[2649]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 12 17:55:05.500313 kubelet[2649]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 12 17:55:05.500313 kubelet[2649]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 12 17:55:05.500518 kubelet[2649]: I0912 17:55:05.500321 2649 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 12 17:55:05.822883 kubelet[2649]: I0912 17:55:05.822838 2649 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 12 17:55:05.822883 kubelet[2649]: I0912 17:55:05.822853 2649 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 12 17:55:05.823060 kubelet[2649]: I0912 17:55:05.823025 2649 server.go:934] "Client rotation is on, will bootstrap in background" Sep 12 17:55:05.844978 kubelet[2649]: E0912 17:55:05.844936 2649 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://139.178.94.21:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 139.178.94.21:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:55:05.845594 kubelet[2649]: 
I0912 17:55:05.845536 2649 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 12 17:55:05.851283 kubelet[2649]: E0912 17:55:05.851270 2649 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 12 17:55:05.851319 kubelet[2649]: I0912 17:55:05.851284 2649 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 12 17:55:05.862144 kubelet[2649]: I0912 17:55:05.862104 2649 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 12 17:55:05.863007 kubelet[2649]: I0912 17:55:05.862966 2649 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 12 17:55:05.863097 kubelet[2649]: I0912 17:55:05.863049 2649 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 12 17:55:05.863194 kubelet[2649]: I0912 17:55:05.863065 2649 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"ci-4081.3.6-a-7e79e463ed","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 12 17:55:05.863194 kubelet[2649]: I0912 17:55:05.863177 2649 topology_manager.go:138] "Creating topology manager with none policy" Sep 12 17:55:05.863194 kubelet[2649]: I0912 17:55:05.863185 2649 container_manager_linux.go:300] "Creating device plugin manager" Sep 12 17:55:05.863289 kubelet[2649]: I0912 17:55:05.863246 2649 state_mem.go:36] "Initialized new in-memory state store" Sep 12 17:55:05.865966 kubelet[2649]: I0912 17:55:05.865930 2649 
kubelet.go:408] "Attempting to sync node with API server" Sep 12 17:55:05.865966 kubelet[2649]: I0912 17:55:05.865942 2649 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 12 17:55:05.865966 kubelet[2649]: I0912 17:55:05.865966 2649 kubelet.go:314] "Adding apiserver pod source" Sep 12 17:55:05.866066 kubelet[2649]: I0912 17:55:05.865979 2649 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 12 17:55:05.868419 kubelet[2649]: I0912 17:55:05.868406 2649 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Sep 12 17:55:05.868804 kubelet[2649]: I0912 17:55:05.868766 2649 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 12 17:55:05.869440 kubelet[2649]: W0912 17:55:05.869427 2649 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 12 17:55:05.870392 kubelet[2649]: W0912 17:55:05.870343 2649 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://139.178.94.21:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 139.178.94.21:6443: connect: connection refused Sep 12 17:55:05.870392 kubelet[2649]: E0912 17:55:05.870379 2649 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://139.178.94.21:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 139.178.94.21:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:55:05.871284 kubelet[2649]: I0912 17:55:05.871237 2649 server.go:1274] "Started kubelet" Sep 12 17:55:05.871350 kubelet[2649]: I0912 17:55:05.871333 2649 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 12 
17:55:05.871371 kubelet[2649]: I0912 17:55:05.871359 2649 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 12 17:55:05.871496 kubelet[2649]: I0912 17:55:05.871487 2649 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 12 17:55:05.871670 kubelet[2649]: W0912 17:55:05.871619 2649 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://139.178.94.21:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.6-a-7e79e463ed&limit=500&resourceVersion=0": dial tcp 139.178.94.21:6443: connect: connection refused Sep 12 17:55:05.871670 kubelet[2649]: E0912 17:55:05.871660 2649 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://139.178.94.21:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.6-a-7e79e463ed&limit=500&resourceVersion=0\": dial tcp 139.178.94.21:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:55:05.872029 kubelet[2649]: I0912 17:55:05.872021 2649 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 12 17:55:05.872065 kubelet[2649]: I0912 17:55:05.872030 2649 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 12 17:55:05.872065 kubelet[2649]: I0912 17:55:05.872047 2649 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 12 17:55:05.872126 kubelet[2649]: E0912 17:55:05.872077 2649 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.6-a-7e79e463ed\" not found" Sep 12 17:55:05.872126 kubelet[2649]: I0912 17:55:05.872106 2649 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 12 17:55:05.872126 kubelet[2649]: I0912 17:55:05.872123 2649 reconciler.go:26] "Reconciler: start to sync state" Sep 12 17:55:05.872209 
kubelet[2649]: I0912 17:55:05.872123 2649 server.go:449] "Adding debug handlers to kubelet server" Sep 12 17:55:05.875490 kubelet[2649]: E0912 17:55:05.875453 2649 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.94.21:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-a-7e79e463ed?timeout=10s\": dial tcp 139.178.94.21:6443: connect: connection refused" interval="200ms" Sep 12 17:55:05.875648 kubelet[2649]: W0912 17:55:05.875623 2649 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://139.178.94.21:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.94.21:6443: connect: connection refused Sep 12 17:55:05.875872 kubelet[2649]: E0912 17:55:05.875855 2649 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 12 17:55:05.875913 kubelet[2649]: I0912 17:55:05.875879 2649 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 12 17:55:05.876054 kubelet[2649]: E0912 17:55:05.875721 2649 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://139.178.94.21:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 139.178.94.21:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:55:05.877810 kubelet[2649]: I0912 17:55:05.877786 2649 factory.go:221] Registration of the containerd container factory successfully Sep 12 17:55:05.877810 kubelet[2649]: I0912 17:55:05.877809 2649 factory.go:221] Registration of the systemd container factory successfully Sep 12 17:55:05.878679 kubelet[2649]: E0912 17:55:05.877677 2649 event.go:368] "Unable to write event (may 
retry after sleeping)" err="Post \"https://139.178.94.21:6443/api/v1/namespaces/default/events\": dial tcp 139.178.94.21:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.6-a-7e79e463ed.18649a91de434d59 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.6-a-7e79e463ed,UID:ci-4081.3.6-a-7e79e463ed,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.6-a-7e79e463ed,},FirstTimestamp:2025-09-12 17:55:05.871224153 +0000 UTC m=+0.390379970,LastTimestamp:2025-09-12 17:55:05.871224153 +0000 UTC m=+0.390379970,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.6-a-7e79e463ed,}" Sep 12 17:55:05.884373 kubelet[2649]: I0912 17:55:05.884347 2649 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 12 17:55:05.884911 kubelet[2649]: I0912 17:55:05.884900 2649 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 12 17:55:05.884940 kubelet[2649]: I0912 17:55:05.884915 2649 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 12 17:55:05.884940 kubelet[2649]: I0912 17:55:05.884927 2649 kubelet.go:2321] "Starting kubelet main sync loop" Sep 12 17:55:05.884975 kubelet[2649]: E0912 17:55:05.884947 2649 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 12 17:55:05.885196 kubelet[2649]: W0912 17:55:05.885176 2649 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://139.178.94.21:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.94.21:6443: connect: connection refused Sep 12 17:55:05.885222 kubelet[2649]: E0912 17:55:05.885203 2649 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://139.178.94.21:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 139.178.94.21:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:55:05.973349 kubelet[2649]: E0912 17:55:05.973265 2649 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.6-a-7e79e463ed\" not found" Sep 12 17:55:05.985862 kubelet[2649]: E0912 17:55:05.985748 2649 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 12 17:55:06.021197 kubelet[2649]: I0912 17:55:06.021096 2649 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 12 17:55:06.021197 kubelet[2649]: I0912 17:55:06.021140 2649 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 12 17:55:06.021197 kubelet[2649]: I0912 17:55:06.021185 2649 state_mem.go:36] "Initialized new in-memory state store" Sep 12 17:55:06.023445 kubelet[2649]: I0912 17:55:06.023394 
2649 policy_none.go:49] "None policy: Start" Sep 12 17:55:06.023940 kubelet[2649]: I0912 17:55:06.023884 2649 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 12 17:55:06.023940 kubelet[2649]: I0912 17:55:06.023913 2649 state_mem.go:35] "Initializing new in-memory state store" Sep 12 17:55:06.027716 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 12 17:55:06.040729 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 12 17:55:06.042315 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Sep 12 17:55:06.055090 kubelet[2649]: I0912 17:55:06.055043 2649 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 12 17:55:06.055178 kubelet[2649]: I0912 17:55:06.055134 2649 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 12 17:55:06.055178 kubelet[2649]: I0912 17:55:06.055141 2649 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 12 17:55:06.055289 kubelet[2649]: I0912 17:55:06.055281 2649 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 12 17:55:06.055802 kubelet[2649]: E0912 17:55:06.055789 2649 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.3.6-a-7e79e463ed\" not found" Sep 12 17:55:06.077377 kubelet[2649]: E0912 17:55:06.077180 2649 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.94.21:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-a-7e79e463ed?timeout=10s\": dial tcp 139.178.94.21:6443: connect: connection refused" interval="400ms" Sep 12 17:55:06.157275 kubelet[2649]: I0912 17:55:06.157213 2649 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.6-a-7e79e463ed" Sep 12 17:55:06.157545 
kubelet[2649]: E0912 17:55:06.157491 2649 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://139.178.94.21:6443/api/v1/nodes\": dial tcp 139.178.94.21:6443: connect: connection refused" node="ci-4081.3.6-a-7e79e463ed" Sep 12 17:55:06.211328 systemd[1]: Created slice kubepods-burstable-podeebba37c6ded1ba507a807f945383753.slice - libcontainer container kubepods-burstable-podeebba37c6ded1ba507a807f945383753.slice. Sep 12 17:55:06.243566 systemd[1]: Created slice kubepods-burstable-pode01c38fd02563e133ad7b03f7ab9a92a.slice - libcontainer container kubepods-burstable-pode01c38fd02563e133ad7b03f7ab9a92a.slice. Sep 12 17:55:06.265481 systemd[1]: Created slice kubepods-burstable-pod0996564293bb2268f41eca8670d91052.slice - libcontainer container kubepods-burstable-pod0996564293bb2268f41eca8670d91052.slice. Sep 12 17:55:06.276175 kubelet[2649]: I0912 17:55:06.276071 2649 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/eebba37c6ded1ba507a807f945383753-ca-certs\") pod \"kube-apiserver-ci-4081.3.6-a-7e79e463ed\" (UID: \"eebba37c6ded1ba507a807f945383753\") " pod="kube-system/kube-apiserver-ci-4081.3.6-a-7e79e463ed" Sep 12 17:55:06.276175 kubelet[2649]: I0912 17:55:06.276159 2649 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/eebba37c6ded1ba507a807f945383753-k8s-certs\") pod \"kube-apiserver-ci-4081.3.6-a-7e79e463ed\" (UID: \"eebba37c6ded1ba507a807f945383753\") " pod="kube-system/kube-apiserver-ci-4081.3.6-a-7e79e463ed" Sep 12 17:55:06.276509 kubelet[2649]: I0912 17:55:06.276239 2649 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e01c38fd02563e133ad7b03f7ab9a92a-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.6-a-7e79e463ed\" (UID: 
\"e01c38fd02563e133ad7b03f7ab9a92a\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-a-7e79e463ed" Sep 12 17:55:06.276509 kubelet[2649]: I0912 17:55:06.276341 2649 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e01c38fd02563e133ad7b03f7ab9a92a-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.6-a-7e79e463ed\" (UID: \"e01c38fd02563e133ad7b03f7ab9a92a\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-a-7e79e463ed" Sep 12 17:55:06.276509 kubelet[2649]: I0912 17:55:06.276423 2649 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/eebba37c6ded1ba507a807f945383753-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.6-a-7e79e463ed\" (UID: \"eebba37c6ded1ba507a807f945383753\") " pod="kube-system/kube-apiserver-ci-4081.3.6-a-7e79e463ed" Sep 12 17:55:06.276800 kubelet[2649]: I0912 17:55:06.276515 2649 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e01c38fd02563e133ad7b03f7ab9a92a-ca-certs\") pod \"kube-controller-manager-ci-4081.3.6-a-7e79e463ed\" (UID: \"e01c38fd02563e133ad7b03f7ab9a92a\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-a-7e79e463ed" Sep 12 17:55:06.276800 kubelet[2649]: I0912 17:55:06.276563 2649 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e01c38fd02563e133ad7b03f7ab9a92a-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.6-a-7e79e463ed\" (UID: \"e01c38fd02563e133ad7b03f7ab9a92a\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-a-7e79e463ed" Sep 12 17:55:06.276800 kubelet[2649]: I0912 17:55:06.276611 2649 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e01c38fd02563e133ad7b03f7ab9a92a-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.6-a-7e79e463ed\" (UID: \"e01c38fd02563e133ad7b03f7ab9a92a\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-a-7e79e463ed" Sep 12 17:55:06.276800 kubelet[2649]: I0912 17:55:06.276661 2649 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0996564293bb2268f41eca8670d91052-kubeconfig\") pod \"kube-scheduler-ci-4081.3.6-a-7e79e463ed\" (UID: \"0996564293bb2268f41eca8670d91052\") " pod="kube-system/kube-scheduler-ci-4081.3.6-a-7e79e463ed" Sep 12 17:55:06.362699 kubelet[2649]: I0912 17:55:06.362497 2649 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.6-a-7e79e463ed" Sep 12 17:55:06.363383 kubelet[2649]: E0912 17:55:06.363305 2649 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://139.178.94.21:6443/api/v1/nodes\": dial tcp 139.178.94.21:6443: connect: connection refused" node="ci-4081.3.6-a-7e79e463ed" Sep 12 17:55:06.478467 kubelet[2649]: E0912 17:55:06.478295 2649 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.94.21:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-a-7e79e463ed?timeout=10s\": dial tcp 139.178.94.21:6443: connect: connection refused" interval="800ms" Sep 12 17:55:06.538463 containerd[1819]: time="2025-09-12T17:55:06.538302516Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.6-a-7e79e463ed,Uid:eebba37c6ded1ba507a807f945383753,Namespace:kube-system,Attempt:0,}" Sep 12 17:55:06.560529 containerd[1819]: time="2025-09-12T17:55:06.560396284Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.6-a-7e79e463ed,Uid:e01c38fd02563e133ad7b03f7ab9a92a,Namespace:kube-system,Attempt:0,}" 
Sep 12 17:55:06.570905 containerd[1819]: time="2025-09-12T17:55:06.570863725Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.6-a-7e79e463ed,Uid:0996564293bb2268f41eca8670d91052,Namespace:kube-system,Attempt:0,}" Sep 12 17:55:06.771076 kubelet[2649]: I0912 17:55:06.771014 2649 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.6-a-7e79e463ed" Sep 12 17:55:06.772029 kubelet[2649]: E0912 17:55:06.771799 2649 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://139.178.94.21:6443/api/v1/nodes\": dial tcp 139.178.94.21:6443: connect: connection refused" node="ci-4081.3.6-a-7e79e463ed" Sep 12 17:55:06.838283 kubelet[2649]: W0912 17:55:06.838241 2649 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://139.178.94.21:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.94.21:6443: connect: connection refused Sep 12 17:55:06.838376 kubelet[2649]: E0912 17:55:06.838287 2649 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://139.178.94.21:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 139.178.94.21:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:55:07.006266 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3637513835.mount: Deactivated successfully. 
Sep 12 17:55:07.008195 containerd[1819]: time="2025-09-12T17:55:07.008177536Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 17:55:07.008481 containerd[1819]: time="2025-09-12T17:55:07.008440645Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 12 17:55:07.008807 containerd[1819]: time="2025-09-12T17:55:07.008790610Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 17:55:07.009209 containerd[1819]: time="2025-09-12T17:55:07.009189683Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 12 17:55:07.009244 containerd[1819]: time="2025-09-12T17:55:07.009200741Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 17:55:07.009542 containerd[1819]: time="2025-09-12T17:55:07.009524760Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Sep 12 17:55:07.009767 containerd[1819]: time="2025-09-12T17:55:07.009752380Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 17:55:07.011677 containerd[1819]: time="2025-09-12T17:55:07.011662064Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 17:55:07.012414 
containerd[1819]: time="2025-09-12T17:55:07.012400390Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 451.853767ms" Sep 12 17:55:07.012771 containerd[1819]: time="2025-09-12T17:55:07.012758473Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 474.240364ms" Sep 12 17:55:07.014281 containerd[1819]: time="2025-09-12T17:55:07.014268794Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 443.376728ms" Sep 12 17:55:07.109670 containerd[1819]: time="2025-09-12T17:55:07.109578226Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:55:07.109670 containerd[1819]: time="2025-09-12T17:55:07.109611587Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:55:07.109781 containerd[1819]: time="2025-09-12T17:55:07.109658818Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:55:07.109832 containerd[1819]: time="2025-09-12T17:55:07.109627285Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:55:07.109886 containerd[1819]: time="2025-09-12T17:55:07.109874228Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:55:07.109921 containerd[1819]: time="2025-09-12T17:55:07.109900923Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:55:07.109950 containerd[1819]: time="2025-09-12T17:55:07.109915943Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:55:07.109980 containerd[1819]: time="2025-09-12T17:55:07.109948403Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:55:07.109980 containerd[1819]: time="2025-09-12T17:55:07.109957847Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:55:07.109980 containerd[1819]: time="2025-09-12T17:55:07.109960066Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:55:07.110063 containerd[1819]: time="2025-09-12T17:55:07.110017308Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:55:07.110063 containerd[1819]: time="2025-09-12T17:55:07.110014881Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:55:07.143770 systemd[1]: Started cri-containerd-59b0009a3e29e900736f58d050b215339b7d3595927a9ff3dc0627a59a80e5ff.scope - libcontainer container 59b0009a3e29e900736f58d050b215339b7d3595927a9ff3dc0627a59a80e5ff. 
Sep 12 17:55:07.144447 systemd[1]: Started cri-containerd-5a0908d074bc6da92e54bd4ff40cf7b03683c57a787f052aae406b02c1b57ed6.scope - libcontainer container 5a0908d074bc6da92e54bd4ff40cf7b03683c57a787f052aae406b02c1b57ed6. Sep 12 17:55:07.145155 systemd[1]: Started cri-containerd-6b267ce840d7d3b395868bd32171e0c454a258e5ebb97d64ca215c24d493239f.scope - libcontainer container 6b267ce840d7d3b395868bd32171e0c454a258e5ebb97d64ca215c24d493239f. Sep 12 17:55:07.166303 containerd[1819]: time="2025-09-12T17:55:07.166275909Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.6-a-7e79e463ed,Uid:0996564293bb2268f41eca8670d91052,Namespace:kube-system,Attempt:0,} returns sandbox id \"59b0009a3e29e900736f58d050b215339b7d3595927a9ff3dc0627a59a80e5ff\"" Sep 12 17:55:07.166470 containerd[1819]: time="2025-09-12T17:55:07.166453536Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.6-a-7e79e463ed,Uid:e01c38fd02563e133ad7b03f7ab9a92a,Namespace:kube-system,Attempt:0,} returns sandbox id \"5a0908d074bc6da92e54bd4ff40cf7b03683c57a787f052aae406b02c1b57ed6\"" Sep 12 17:55:07.167818 containerd[1819]: time="2025-09-12T17:55:07.167802599Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.6-a-7e79e463ed,Uid:eebba37c6ded1ba507a807f945383753,Namespace:kube-system,Attempt:0,} returns sandbox id \"6b267ce840d7d3b395868bd32171e0c454a258e5ebb97d64ca215c24d493239f\"" Sep 12 17:55:07.167895 containerd[1819]: time="2025-09-12T17:55:07.167883362Z" level=info msg="CreateContainer within sandbox \"5a0908d074bc6da92e54bd4ff40cf7b03683c57a787f052aae406b02c1b57ed6\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 12 17:55:07.167954 containerd[1819]: time="2025-09-12T17:55:07.167943442Z" level=info msg="CreateContainer within sandbox \"59b0009a3e29e900736f58d050b215339b7d3595927a9ff3dc0627a59a80e5ff\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 
12 17:55:07.168638 containerd[1819]: time="2025-09-12T17:55:07.168626222Z" level=info msg="CreateContainer within sandbox \"6b267ce840d7d3b395868bd32171e0c454a258e5ebb97d64ca215c24d493239f\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 12 17:55:07.189706 containerd[1819]: time="2025-09-12T17:55:07.189655910Z" level=info msg="CreateContainer within sandbox \"5a0908d074bc6da92e54bd4ff40cf7b03683c57a787f052aae406b02c1b57ed6\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"c9708256ed45e4c66ebb4e6437ce5f937793469d8ea45c687237b392a73a1171\"" Sep 12 17:55:07.189972 containerd[1819]: time="2025-09-12T17:55:07.189932299Z" level=info msg="StartContainer for \"c9708256ed45e4c66ebb4e6437ce5f937793469d8ea45c687237b392a73a1171\"" Sep 12 17:55:07.191035 containerd[1819]: time="2025-09-12T17:55:07.190995605Z" level=info msg="CreateContainer within sandbox \"6b267ce840d7d3b395868bd32171e0c454a258e5ebb97d64ca215c24d493239f\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"f65af9fea7a1d461b90c38f8f30ede2ac72a0dd3cc32a4537437e7d8e791403a\"" Sep 12 17:55:07.191140 containerd[1819]: time="2025-09-12T17:55:07.191103394Z" level=info msg="CreateContainer within sandbox \"59b0009a3e29e900736f58d050b215339b7d3595927a9ff3dc0627a59a80e5ff\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"f1123bd60f5de363b8cd5ae6d052e86176daeeee6b09015f8de5fefd22660cc5\"" Sep 12 17:55:07.191166 containerd[1819]: time="2025-09-12T17:55:07.191155349Z" level=info msg="StartContainer for \"f65af9fea7a1d461b90c38f8f30ede2ac72a0dd3cc32a4537437e7d8e791403a\"" Sep 12 17:55:07.191309 containerd[1819]: time="2025-09-12T17:55:07.191275036Z" level=info msg="StartContainer for \"f1123bd60f5de363b8cd5ae6d052e86176daeeee6b09015f8de5fefd22660cc5\"" Sep 12 17:55:07.215741 systemd[1]: Started cri-containerd-c9708256ed45e4c66ebb4e6437ce5f937793469d8ea45c687237b392a73a1171.scope - libcontainer container 
c9708256ed45e4c66ebb4e6437ce5f937793469d8ea45c687237b392a73a1171. Sep 12 17:55:07.217805 systemd[1]: Started cri-containerd-f1123bd60f5de363b8cd5ae6d052e86176daeeee6b09015f8de5fefd22660cc5.scope - libcontainer container f1123bd60f5de363b8cd5ae6d052e86176daeeee6b09015f8de5fefd22660cc5. Sep 12 17:55:07.218348 systemd[1]: Started cri-containerd-f65af9fea7a1d461b90c38f8f30ede2ac72a0dd3cc32a4537437e7d8e791403a.scope - libcontainer container f65af9fea7a1d461b90c38f8f30ede2ac72a0dd3cc32a4537437e7d8e791403a. Sep 12 17:55:07.218920 kubelet[2649]: W0912 17:55:07.218878 2649 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://139.178.94.21:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.6-a-7e79e463ed&limit=500&resourceVersion=0": dial tcp 139.178.94.21:6443: connect: connection refused Sep 12 17:55:07.218983 kubelet[2649]: E0912 17:55:07.218934 2649 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://139.178.94.21:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.6-a-7e79e463ed&limit=500&resourceVersion=0\": dial tcp 139.178.94.21:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:55:07.240466 containerd[1819]: time="2025-09-12T17:55:07.240437097Z" level=info msg="StartContainer for \"c9708256ed45e4c66ebb4e6437ce5f937793469d8ea45c687237b392a73a1171\" returns successfully" Sep 12 17:55:07.241456 containerd[1819]: time="2025-09-12T17:55:07.241423027Z" level=info msg="StartContainer for \"f1123bd60f5de363b8cd5ae6d052e86176daeeee6b09015f8de5fefd22660cc5\" returns successfully" Sep 12 17:55:07.241537 containerd[1819]: time="2025-09-12T17:55:07.241424201Z" level=info msg="StartContainer for \"f65af9fea7a1d461b90c38f8f30ede2ac72a0dd3cc32a4537437e7d8e791403a\" returns successfully" Sep 12 17:55:07.573589 kubelet[2649]: I0912 17:55:07.573574 2649 kubelet_node_status.go:72] "Attempting to register node" 
node="ci-4081.3.6-a-7e79e463ed" Sep 12 17:55:07.796534 kubelet[2649]: E0912 17:55:07.796512 2649 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081.3.6-a-7e79e463ed\" not found" node="ci-4081.3.6-a-7e79e463ed" Sep 12 17:55:07.898891 kubelet[2649]: I0912 17:55:07.898835 2649 kubelet_node_status.go:75] "Successfully registered node" node="ci-4081.3.6-a-7e79e463ed" Sep 12 17:55:07.898891 kubelet[2649]: E0912 17:55:07.898856 2649 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"ci-4081.3.6-a-7e79e463ed\": node \"ci-4081.3.6-a-7e79e463ed\" not found" Sep 12 17:55:07.903585 kubelet[2649]: E0912 17:55:07.903573 2649 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.6-a-7e79e463ed\" not found" Sep 12 17:55:08.004491 kubelet[2649]: E0912 17:55:08.004475 2649 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.6-a-7e79e463ed\" not found" Sep 12 17:55:08.105319 kubelet[2649]: E0912 17:55:08.105281 2649 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.6-a-7e79e463ed\" not found" Sep 12 17:55:08.206581 kubelet[2649]: E0912 17:55:08.206475 2649 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.6-a-7e79e463ed\" not found" Sep 12 17:55:08.307306 kubelet[2649]: E0912 17:55:08.307180 2649 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.6-a-7e79e463ed\" not found" Sep 12 17:55:08.408104 kubelet[2649]: E0912 17:55:08.407980 2649 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.6-a-7e79e463ed\" not found" Sep 12 17:55:08.509295 kubelet[2649]: E0912 17:55:08.509057 2649 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.6-a-7e79e463ed\" not found" Sep 12 17:55:08.610313 
kubelet[2649]: E0912 17:55:08.610188 2649 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.6-a-7e79e463ed\" not found" Sep 12 17:55:08.711483 kubelet[2649]: E0912 17:55:08.711353 2649 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.6-a-7e79e463ed\" not found" Sep 12 17:55:08.812690 kubelet[2649]: E0912 17:55:08.812590 2649 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.6-a-7e79e463ed\" not found" Sep 12 17:55:08.913659 kubelet[2649]: E0912 17:55:08.913562 2649 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.6-a-7e79e463ed\" not found" Sep 12 17:55:09.014689 kubelet[2649]: E0912 17:55:09.014607 2649 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.6-a-7e79e463ed\" not found" Sep 12 17:55:09.115793 kubelet[2649]: E0912 17:55:09.115614 2649 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.6-a-7e79e463ed\" not found" Sep 12 17:55:09.216795 kubelet[2649]: E0912 17:55:09.216698 2649 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.6-a-7e79e463ed\" not found" Sep 12 17:55:09.317656 kubelet[2649]: E0912 17:55:09.317552 2649 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.6-a-7e79e463ed\" not found" Sep 12 17:55:09.418589 kubelet[2649]: E0912 17:55:09.418373 2649 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.6-a-7e79e463ed\" not found" Sep 12 17:55:09.519228 kubelet[2649]: E0912 17:55:09.519130 2649 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.6-a-7e79e463ed\" not found" Sep 12 17:55:09.619782 kubelet[2649]: E0912 17:55:09.619706 2649 kubelet_node_status.go:453] "Error getting the current node from lister" 
err="node \"ci-4081.3.6-a-7e79e463ed\" not found" Sep 12 17:55:09.868043 kubelet[2649]: I0912 17:55:09.867930 2649 apiserver.go:52] "Watching apiserver" Sep 12 17:55:09.872674 kubelet[2649]: I0912 17:55:09.872580 2649 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 12 17:55:09.906586 kubelet[2649]: W0912 17:55:09.906533 2649 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 12 17:55:10.311970 kubelet[2649]: W0912 17:55:10.311912 2649 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 12 17:55:10.426830 systemd[1]: Reloading requested from client PID 2967 ('systemctl') (unit session-11.scope)... Sep 12 17:55:10.426838 systemd[1]: Reloading... Sep 12 17:55:10.476567 zram_generator::config[3006]: No configuration found. Sep 12 17:55:10.543051 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 12 17:55:10.612378 systemd[1]: Reloading finished in 185 ms. Sep 12 17:55:10.644015 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:55:10.652955 systemd[1]: kubelet.service: Deactivated successfully. Sep 12 17:55:10.653061 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:55:10.672807 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:55:10.908926 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 12 17:55:10.913425 (kubelet)[3070]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 12 17:55:10.934717 kubelet[3070]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 12 17:55:10.934717 kubelet[3070]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 12 17:55:10.934717 kubelet[3070]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 12 17:55:10.934938 kubelet[3070]: I0912 17:55:10.934757 3070 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 12 17:55:10.937924 kubelet[3070]: I0912 17:55:10.937885 3070 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 12 17:55:10.937924 kubelet[3070]: I0912 17:55:10.937896 3070 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 12 17:55:10.938058 kubelet[3070]: I0912 17:55:10.938020 3070 server.go:934] "Client rotation is on, will bootstrap in background" Sep 12 17:55:10.938760 kubelet[3070]: I0912 17:55:10.938751 3070 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Sep 12 17:55:10.939893 kubelet[3070]: I0912 17:55:10.939884 3070 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 12 17:55:10.941839 kubelet[3070]: E0912 17:55:10.941825 3070 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 12 17:55:10.941839 kubelet[3070]: I0912 17:55:10.941839 3070 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 12 17:55:10.949423 kubelet[3070]: I0912 17:55:10.949398 3070 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 12 17:55:10.949470 kubelet[3070]: I0912 17:55:10.949456 3070 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 12 17:55:10.949560 kubelet[3070]: I0912 17:55:10.949517 3070 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 12 17:55:10.949662 kubelet[3070]: I0912 17:55:10.949531 3070 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"ci-4081.3.6-a-7e79e463ed","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 12 17:55:10.949662 kubelet[3070]: I0912 17:55:10.949629 3070 topology_manager.go:138] "Creating topology manager with none policy" Sep 12 17:55:10.949662 kubelet[3070]: I0912 17:55:10.949636 3070 container_manager_linux.go:300] "Creating device plugin manager" Sep 12 17:55:10.949662 kubelet[3070]: I0912 17:55:10.949654 3070 state_mem.go:36] "Initialized new in-memory state store" Sep 12 17:55:10.949797 kubelet[3070]: I0912 17:55:10.949705 3070 
kubelet.go:408] "Attempting to sync node with API server" Sep 12 17:55:10.949797 kubelet[3070]: I0912 17:55:10.949712 3070 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 12 17:55:10.949797 kubelet[3070]: I0912 17:55:10.949738 3070 kubelet.go:314] "Adding apiserver pod source" Sep 12 17:55:10.949797 kubelet[3070]: I0912 17:55:10.949744 3070 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 12 17:55:10.950061 kubelet[3070]: I0912 17:55:10.950044 3070 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Sep 12 17:55:10.950459 kubelet[3070]: I0912 17:55:10.950425 3070 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 12 17:55:10.950685 kubelet[3070]: I0912 17:55:10.950675 3070 server.go:1274] "Started kubelet" Sep 12 17:55:10.950733 kubelet[3070]: I0912 17:55:10.950699 3070 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 12 17:55:10.950772 kubelet[3070]: I0912 17:55:10.950729 3070 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 12 17:55:10.950907 kubelet[3070]: I0912 17:55:10.950897 3070 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 12 17:55:10.952126 kubelet[3070]: I0912 17:55:10.952107 3070 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 12 17:55:10.952126 kubelet[3070]: I0912 17:55:10.952115 3070 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 12 17:55:10.952216 kubelet[3070]: E0912 17:55:10.952180 3070 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.6-a-7e79e463ed\" not found" Sep 12 17:55:10.952328 kubelet[3070]: I0912 17:55:10.952190 3070 volume_manager.go:289] 
"Starting Kubelet Volume Manager" Sep 12 17:55:10.952389 kubelet[3070]: I0912 17:55:10.952367 3070 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 12 17:55:10.952519 kubelet[3070]: I0912 17:55:10.952510 3070 reconciler.go:26] "Reconciler: start to sync state" Sep 12 17:55:10.952794 kubelet[3070]: E0912 17:55:10.952669 3070 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 12 17:55:10.952794 kubelet[3070]: I0912 17:55:10.952672 3070 server.go:449] "Adding debug handlers to kubelet server" Sep 12 17:55:10.954161 kubelet[3070]: I0912 17:55:10.954149 3070 factory.go:221] Registration of the containerd container factory successfully Sep 12 17:55:10.954161 kubelet[3070]: I0912 17:55:10.954160 3070 factory.go:221] Registration of the systemd container factory successfully Sep 12 17:55:10.954232 kubelet[3070]: I0912 17:55:10.954216 3070 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 12 17:55:10.957934 kubelet[3070]: I0912 17:55:10.957904 3070 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 12 17:55:10.958494 kubelet[3070]: I0912 17:55:10.958483 3070 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 12 17:55:10.958541 kubelet[3070]: I0912 17:55:10.958499 3070 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 12 17:55:10.958541 kubelet[3070]: I0912 17:55:10.958513 3070 kubelet.go:2321] "Starting kubelet main sync loop" Sep 12 17:55:10.958594 kubelet[3070]: E0912 17:55:10.958542 3070 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 12 17:55:10.968212 kubelet[3070]: I0912 17:55:10.968175 3070 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 12 17:55:10.968212 kubelet[3070]: I0912 17:55:10.968184 3070 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 12 17:55:10.968212 kubelet[3070]: I0912 17:55:10.968194 3070 state_mem.go:36] "Initialized new in-memory state store" Sep 12 17:55:10.968315 kubelet[3070]: I0912 17:55:10.968276 3070 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 12 17:55:10.968315 kubelet[3070]: I0912 17:55:10.968283 3070 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 12 17:55:10.968315 kubelet[3070]: I0912 17:55:10.968294 3070 policy_none.go:49] "None policy: Start" Sep 12 17:55:10.968552 kubelet[3070]: I0912 17:55:10.968517 3070 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 12 17:55:10.968552 kubelet[3070]: I0912 17:55:10.968526 3070 state_mem.go:35] "Initializing new in-memory state store" Sep 12 17:55:10.968604 kubelet[3070]: I0912 17:55:10.968587 3070 state_mem.go:75] "Updated machine memory state" Sep 12 17:55:10.970407 kubelet[3070]: I0912 17:55:10.970399 3070 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 12 17:55:10.970502 kubelet[3070]: I0912 17:55:10.970495 3070 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 12 17:55:10.970526 kubelet[3070]: I0912 17:55:10.970503 3070 container_log_manager.go:189] 
"Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 12 17:55:10.970587 kubelet[3070]: I0912 17:55:10.970581 3070 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 12 17:55:11.066999 kubelet[3070]: W0912 17:55:11.066934 3070 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 12 17:55:11.068060 kubelet[3070]: W0912 17:55:11.068008 3070 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 12 17:55:11.068060 kubelet[3070]: W0912 17:55:11.068023 3070 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 12 17:55:11.068366 kubelet[3070]: E0912 17:55:11.068180 3070 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4081.3.6-a-7e79e463ed\" already exists" pod="kube-system/kube-apiserver-ci-4081.3.6-a-7e79e463ed" Sep 12 17:55:11.068366 kubelet[3070]: E0912 17:55:11.068180 3070 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-4081.3.6-a-7e79e463ed\" already exists" pod="kube-system/kube-controller-manager-ci-4081.3.6-a-7e79e463ed" Sep 12 17:55:11.078163 kubelet[3070]: I0912 17:55:11.078110 3070 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.6-a-7e79e463ed" Sep 12 17:55:11.087145 kubelet[3070]: I0912 17:55:11.087092 3070 kubelet_node_status.go:111] "Node was previously registered" node="ci-4081.3.6-a-7e79e463ed" Sep 12 17:55:11.087326 kubelet[3070]: I0912 17:55:11.087251 3070 kubelet_node_status.go:75] "Successfully registered node" node="ci-4081.3.6-a-7e79e463ed" Sep 12 17:55:11.253973 kubelet[3070]: I0912 17:55:11.253719 3070 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e01c38fd02563e133ad7b03f7ab9a92a-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.6-a-7e79e463ed\" (UID: \"e01c38fd02563e133ad7b03f7ab9a92a\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-a-7e79e463ed" Sep 12 17:55:11.253973 kubelet[3070]: I0912 17:55:11.253825 3070 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e01c38fd02563e133ad7b03f7ab9a92a-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.6-a-7e79e463ed\" (UID: \"e01c38fd02563e133ad7b03f7ab9a92a\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-a-7e79e463ed" Sep 12 17:55:11.253973 kubelet[3070]: I0912 17:55:11.253897 3070 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e01c38fd02563e133ad7b03f7ab9a92a-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.6-a-7e79e463ed\" (UID: \"e01c38fd02563e133ad7b03f7ab9a92a\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-a-7e79e463ed" Sep 12 17:55:11.253973 kubelet[3070]: I0912 17:55:11.253952 3070 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0996564293bb2268f41eca8670d91052-kubeconfig\") pod \"kube-scheduler-ci-4081.3.6-a-7e79e463ed\" (UID: \"0996564293bb2268f41eca8670d91052\") " pod="kube-system/kube-scheduler-ci-4081.3.6-a-7e79e463ed" Sep 12 17:55:11.254696 kubelet[3070]: I0912 17:55:11.254003 3070 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/eebba37c6ded1ba507a807f945383753-k8s-certs\") pod \"kube-apiserver-ci-4081.3.6-a-7e79e463ed\" (UID: \"eebba37c6ded1ba507a807f945383753\") " pod="kube-system/kube-apiserver-ci-4081.3.6-a-7e79e463ed" Sep 12 
17:55:11.254696 kubelet[3070]: I0912 17:55:11.254053 3070 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/eebba37c6ded1ba507a807f945383753-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.6-a-7e79e463ed\" (UID: \"eebba37c6ded1ba507a807f945383753\") " pod="kube-system/kube-apiserver-ci-4081.3.6-a-7e79e463ed" Sep 12 17:55:11.254696 kubelet[3070]: I0912 17:55:11.254181 3070 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e01c38fd02563e133ad7b03f7ab9a92a-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.6-a-7e79e463ed\" (UID: \"e01c38fd02563e133ad7b03f7ab9a92a\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-a-7e79e463ed" Sep 12 17:55:11.254696 kubelet[3070]: I0912 17:55:11.254311 3070 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/eebba37c6ded1ba507a807f945383753-ca-certs\") pod \"kube-apiserver-ci-4081.3.6-a-7e79e463ed\" (UID: \"eebba37c6ded1ba507a807f945383753\") " pod="kube-system/kube-apiserver-ci-4081.3.6-a-7e79e463ed" Sep 12 17:55:11.254696 kubelet[3070]: I0912 17:55:11.254412 3070 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e01c38fd02563e133ad7b03f7ab9a92a-ca-certs\") pod \"kube-controller-manager-ci-4081.3.6-a-7e79e463ed\" (UID: \"e01c38fd02563e133ad7b03f7ab9a92a\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-a-7e79e463ed" Sep 12 17:55:11.950560 kubelet[3070]: I0912 17:55:11.950542 3070 apiserver.go:52] "Watching apiserver" Sep 12 17:55:11.952808 kubelet[3070]: I0912 17:55:11.952800 3070 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 12 17:55:11.964911 
kubelet[3070]: W0912 17:55:11.964893 3070 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 12 17:55:11.965008 kubelet[3070]: E0912 17:55:11.964934 3070 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-4081.3.6-a-7e79e463ed\" already exists" pod="kube-system/kube-scheduler-ci-4081.3.6-a-7e79e463ed" Sep 12 17:55:11.965042 kubelet[3070]: W0912 17:55:11.965008 3070 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 12 17:55:11.965042 kubelet[3070]: E0912 17:55:11.965033 3070 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-4081.3.6-a-7e79e463ed\" already exists" pod="kube-system/kube-controller-manager-ci-4081.3.6-a-7e79e463ed" Sep 12 17:55:11.976996 kubelet[3070]: I0912 17:55:11.976950 3070 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081.3.6-a-7e79e463ed" podStartSLOduration=0.976925069 podStartE2EDuration="976.925069ms" podCreationTimestamp="2025-09-12 17:55:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:55:11.976922314 +0000 UTC m=+1.060865657" watchObservedRunningTime="2025-09-12 17:55:11.976925069 +0000 UTC m=+1.060868410" Sep 12 17:55:11.977189 kubelet[3070]: I0912 17:55:11.977040 3070 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081.3.6-a-7e79e463ed" podStartSLOduration=2.977032797 podStartE2EDuration="2.977032797s" podCreationTimestamp="2025-09-12 17:55:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:55:11.973101446 +0000 UTC 
m=+1.057044787" watchObservedRunningTime="2025-09-12 17:55:11.977032797 +0000 UTC m=+1.060976134" Sep 12 17:55:11.985224 kubelet[3070]: I0912 17:55:11.985168 3070 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081.3.6-a-7e79e463ed" podStartSLOduration=1.985155782 podStartE2EDuration="1.985155782s" podCreationTimestamp="2025-09-12 17:55:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:55:11.980739672 +0000 UTC m=+1.064683014" watchObservedRunningTime="2025-09-12 17:55:11.985155782 +0000 UTC m=+1.069099120" Sep 12 17:55:15.741131 kubelet[3070]: I0912 17:55:15.741001 3070 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 12 17:55:15.742027 containerd[1819]: time="2025-09-12T17:55:15.741778076Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 12 17:55:15.742723 kubelet[3070]: I0912 17:55:15.742160 3070 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 12 17:55:16.640227 systemd[1]: Created slice kubepods-besteffort-poda96f0982_bba6_40b2_8efa_944350e66724.slice - libcontainer container kubepods-besteffort-poda96f0982_bba6_40b2_8efa_944350e66724.slice. 
Sep 12 17:55:16.693630 kubelet[3070]: I0912 17:55:16.693508 3070 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a96f0982-bba6-40b2-8efa-944350e66724-xtables-lock\") pod \"kube-proxy-k69rh\" (UID: \"a96f0982-bba6-40b2-8efa-944350e66724\") " pod="kube-system/kube-proxy-k69rh" Sep 12 17:55:16.693630 kubelet[3070]: I0912 17:55:16.693630 3070 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a96f0982-bba6-40b2-8efa-944350e66724-lib-modules\") pod \"kube-proxy-k69rh\" (UID: \"a96f0982-bba6-40b2-8efa-944350e66724\") " pod="kube-system/kube-proxy-k69rh" Sep 12 17:55:16.693997 kubelet[3070]: I0912 17:55:16.693714 3070 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tt7wr\" (UniqueName: \"kubernetes.io/projected/a96f0982-bba6-40b2-8efa-944350e66724-kube-api-access-tt7wr\") pod \"kube-proxy-k69rh\" (UID: \"a96f0982-bba6-40b2-8efa-944350e66724\") " pod="kube-system/kube-proxy-k69rh" Sep 12 17:55:16.693997 kubelet[3070]: I0912 17:55:16.693801 3070 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/a96f0982-bba6-40b2-8efa-944350e66724-kube-proxy\") pod \"kube-proxy-k69rh\" (UID: \"a96f0982-bba6-40b2-8efa-944350e66724\") " pod="kube-system/kube-proxy-k69rh" Sep 12 17:55:16.868778 systemd[1]: Created slice kubepods-besteffort-pod5e64be07_cae2_4024_b00b_231ce6950a8d.slice - libcontainer container kubepods-besteffort-pod5e64be07_cae2_4024_b00b_231ce6950a8d.slice. 
Sep 12 17:55:16.895010 kubelet[3070]: I0912 17:55:16.894758 3070 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-blgnm\" (UniqueName: \"kubernetes.io/projected/5e64be07-cae2-4024-b00b-231ce6950a8d-kube-api-access-blgnm\") pod \"tigera-operator-58fc44c59b-x82h2\" (UID: \"5e64be07-cae2-4024-b00b-231ce6950a8d\") " pod="tigera-operator/tigera-operator-58fc44c59b-x82h2" Sep 12 17:55:16.895010 kubelet[3070]: I0912 17:55:16.894951 3070 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/5e64be07-cae2-4024-b00b-231ce6950a8d-var-lib-calico\") pod \"tigera-operator-58fc44c59b-x82h2\" (UID: \"5e64be07-cae2-4024-b00b-231ce6950a8d\") " pod="tigera-operator/tigera-operator-58fc44c59b-x82h2" Sep 12 17:55:16.963092 containerd[1819]: time="2025-09-12T17:55:16.962945333Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-k69rh,Uid:a96f0982-bba6-40b2-8efa-944350e66724,Namespace:kube-system,Attempt:0,}" Sep 12 17:55:16.973500 containerd[1819]: time="2025-09-12T17:55:16.973459462Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:55:16.973500 containerd[1819]: time="2025-09-12T17:55:16.973489604Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:55:16.973500 containerd[1819]: time="2025-09-12T17:55:16.973497266Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:55:16.973613 containerd[1819]: time="2025-09-12T17:55:16.973541251Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:55:16.990755 systemd[1]: Started cri-containerd-4a8846b145fbe44d491b682cf4b286042045cc336f30938c549aac00ba25db18.scope - libcontainer container 4a8846b145fbe44d491b682cf4b286042045cc336f30938c549aac00ba25db18. Sep 12 17:55:17.004714 containerd[1819]: time="2025-09-12T17:55:17.004653916Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-k69rh,Uid:a96f0982-bba6-40b2-8efa-944350e66724,Namespace:kube-system,Attempt:0,} returns sandbox id \"4a8846b145fbe44d491b682cf4b286042045cc336f30938c549aac00ba25db18\"" Sep 12 17:55:17.006749 containerd[1819]: time="2025-09-12T17:55:17.006727004Z" level=info msg="CreateContainer within sandbox \"4a8846b145fbe44d491b682cf4b286042045cc336f30938c549aac00ba25db18\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 12 17:55:17.013419 containerd[1819]: time="2025-09-12T17:55:17.013376704Z" level=info msg="CreateContainer within sandbox \"4a8846b145fbe44d491b682cf4b286042045cc336f30938c549aac00ba25db18\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"20faac494a374782954ea3614f542722720ae297c45540a8ee583beec6eaf6d7\"" Sep 12 17:55:17.013644 containerd[1819]: time="2025-09-12T17:55:17.013608753Z" level=info msg="StartContainer for \"20faac494a374782954ea3614f542722720ae297c45540a8ee583beec6eaf6d7\"" Sep 12 17:55:17.037636 systemd[1]: Started cri-containerd-20faac494a374782954ea3614f542722720ae297c45540a8ee583beec6eaf6d7.scope - libcontainer container 20faac494a374782954ea3614f542722720ae297c45540a8ee583beec6eaf6d7. 
Sep 12 17:55:17.054923 containerd[1819]: time="2025-09-12T17:55:17.054891549Z" level=info msg="StartContainer for \"20faac494a374782954ea3614f542722720ae297c45540a8ee583beec6eaf6d7\" returns successfully" Sep 12 17:55:17.174973 containerd[1819]: time="2025-09-12T17:55:17.174743517Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-58fc44c59b-x82h2,Uid:5e64be07-cae2-4024-b00b-231ce6950a8d,Namespace:tigera-operator,Attempt:0,}" Sep 12 17:55:17.185298 containerd[1819]: time="2025-09-12T17:55:17.185256465Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:55:17.185298 containerd[1819]: time="2025-09-12T17:55:17.185287079Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:55:17.185298 containerd[1819]: time="2025-09-12T17:55:17.185294314Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:55:17.185413 containerd[1819]: time="2025-09-12T17:55:17.185377885Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:55:17.202781 systemd[1]: Started cri-containerd-448c556070d9e7264fb967af25aa11cf3c4d5f6be6bdbecf848856946aef01f5.scope - libcontainer container 448c556070d9e7264fb967af25aa11cf3c4d5f6be6bdbecf848856946aef01f5. 
Sep 12 17:55:17.227181 containerd[1819]: time="2025-09-12T17:55:17.227129086Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-58fc44c59b-x82h2,Uid:5e64be07-cae2-4024-b00b-231ce6950a8d,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"448c556070d9e7264fb967af25aa11cf3c4d5f6be6bdbecf848856946aef01f5\"" Sep 12 17:55:17.227903 containerd[1819]: time="2025-09-12T17:55:17.227890567Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\"" Sep 12 17:55:17.984093 kubelet[3070]: I0912 17:55:17.984035 3070 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-k69rh" podStartSLOduration=1.984024038 podStartE2EDuration="1.984024038s" podCreationTimestamp="2025-09-12 17:55:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:55:17.983979472 +0000 UTC m=+7.067922819" watchObservedRunningTime="2025-09-12 17:55:17.984024038 +0000 UTC m=+7.067967380" Sep 12 17:55:18.924074 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3423557798.mount: Deactivated successfully. 
Sep 12 17:55:19.425783 containerd[1819]: time="2025-09-12T17:55:19.425762990Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:55:19.426016 containerd[1819]: time="2025-09-12T17:55:19.425995838Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.6: active requests=0, bytes read=25062609" Sep 12 17:55:19.426543 containerd[1819]: time="2025-09-12T17:55:19.426519680Z" level=info msg="ImageCreate event name:\"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:55:19.428155 containerd[1819]: time="2025-09-12T17:55:19.428114805Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:55:19.428606 containerd[1819]: time="2025-09-12T17:55:19.428560077Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.6\" with image id \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\", repo tag \"quay.io/tigera/operator:v1.38.6\", repo digest \"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\", size \"25058604\" in 2.200650956s" Sep 12 17:55:19.428606 containerd[1819]: time="2025-09-12T17:55:19.428575348Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\"" Sep 12 17:55:19.429471 containerd[1819]: time="2025-09-12T17:55:19.429459781Z" level=info msg="CreateContainer within sandbox \"448c556070d9e7264fb967af25aa11cf3c4d5f6be6bdbecf848856946aef01f5\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Sep 12 17:55:19.433359 containerd[1819]: time="2025-09-12T17:55:19.433343412Z" level=info msg="CreateContainer within sandbox 
\"448c556070d9e7264fb967af25aa11cf3c4d5f6be6bdbecf848856946aef01f5\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"599a48eba03791b7e879e54449e9f93bdde565c0c5c209cd233d9d5434fee086\"" Sep 12 17:55:19.433590 containerd[1819]: time="2025-09-12T17:55:19.433577708Z" level=info msg="StartContainer for \"599a48eba03791b7e879e54449e9f93bdde565c0c5c209cd233d9d5434fee086\"" Sep 12 17:55:19.478912 systemd[1]: Started cri-containerd-599a48eba03791b7e879e54449e9f93bdde565c0c5c209cd233d9d5434fee086.scope - libcontainer container 599a48eba03791b7e879e54449e9f93bdde565c0c5c209cd233d9d5434fee086. Sep 12 17:55:19.531581 containerd[1819]: time="2025-09-12T17:55:19.531543569Z" level=info msg="StartContainer for \"599a48eba03791b7e879e54449e9f93bdde565c0c5c209cd233d9d5434fee086\" returns successfully" Sep 12 17:55:19.997011 kubelet[3070]: I0912 17:55:19.996927 3070 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-58fc44c59b-x82h2" podStartSLOduration=1.795639302 podStartE2EDuration="3.996901953s" podCreationTimestamp="2025-09-12 17:55:16 +0000 UTC" firstStartedPulling="2025-09-12 17:55:17.227673812 +0000 UTC m=+6.311617154" lastFinishedPulling="2025-09-12 17:55:19.428936464 +0000 UTC m=+8.512879805" observedRunningTime="2025-09-12 17:55:19.996860004 +0000 UTC m=+9.080803345" watchObservedRunningTime="2025-09-12 17:55:19.996901953 +0000 UTC m=+9.080845291" Sep 12 17:55:24.036453 sudo[2089]: pam_unix(sudo:session): session closed for user root Sep 12 17:55:24.037535 sshd[2086]: pam_unix(sshd:session): session closed for user core Sep 12 17:55:24.040107 systemd[1]: sshd@8-139.178.94.21:22-147.75.109.163:48646.service: Deactivated successfully. Sep 12 17:55:24.041512 systemd[1]: session-11.scope: Deactivated successfully. Sep 12 17:55:24.041663 systemd[1]: session-11.scope: Consumed 3.139s CPU time, 165.3M memory peak, 0B memory swap peak. 
Sep 12 17:55:24.042173 systemd-logind[1804]: Session 11 logged out. Waiting for processes to exit. Sep 12 17:55:24.042684 systemd-logind[1804]: Removed session 11. Sep 12 17:55:25.474493 update_engine[1809]: I20250912 17:55:25.474453 1809 update_attempter.cc:509] Updating boot flags... Sep 12 17:55:25.501484 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 34 scanned by (udev-worker) (3603) Sep 12 17:55:25.529477 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 34 scanned by (udev-worker) (3599) Sep 12 17:55:25.559445 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 34 scanned by (udev-worker) (3599) Sep 12 17:55:26.464669 systemd[1]: Created slice kubepods-besteffort-pod556acb13_9509_41b9_a1d5_963b71e9fac1.slice - libcontainer container kubepods-besteffort-pod556acb13_9509_41b9_a1d5_963b71e9fac1.slice. Sep 12 17:55:26.556802 kubelet[3070]: I0912 17:55:26.556722 3070 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/556acb13-9509-41b9-a1d5-963b71e9fac1-tigera-ca-bundle\") pod \"calico-typha-5d988495cb-5r2rf\" (UID: \"556acb13-9509-41b9-a1d5-963b71e9fac1\") " pod="calico-system/calico-typha-5d988495cb-5r2rf" Sep 12 17:55:26.557946 kubelet[3070]: I0912 17:55:26.556825 3070 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zbcfc\" (UniqueName: \"kubernetes.io/projected/556acb13-9509-41b9-a1d5-963b71e9fac1-kube-api-access-zbcfc\") pod \"calico-typha-5d988495cb-5r2rf\" (UID: \"556acb13-9509-41b9-a1d5-963b71e9fac1\") " pod="calico-system/calico-typha-5d988495cb-5r2rf" Sep 12 17:55:26.557946 kubelet[3070]: I0912 17:55:26.556974 3070 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/556acb13-9509-41b9-a1d5-963b71e9fac1-typha-certs\") pod 
\"calico-typha-5d988495cb-5r2rf\" (UID: \"556acb13-9509-41b9-a1d5-963b71e9fac1\") " pod="calico-system/calico-typha-5d988495cb-5r2rf" Sep 12 17:55:26.769942 containerd[1819]: time="2025-09-12T17:55:26.769866309Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5d988495cb-5r2rf,Uid:556acb13-9509-41b9-a1d5-963b71e9fac1,Namespace:calico-system,Attempt:0,}" Sep 12 17:55:26.781205 containerd[1819]: time="2025-09-12T17:55:26.781156639Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:55:26.781205 containerd[1819]: time="2025-09-12T17:55:26.781190371Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:55:26.781205 containerd[1819]: time="2025-09-12T17:55:26.781197997Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:55:26.781338 containerd[1819]: time="2025-09-12T17:55:26.781244897Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:55:26.807561 systemd[1]: Started cri-containerd-bc94d3161a7a37a7398e2a7ae02f47980ffbdd84538bc477bec69b8bc9117aa6.scope - libcontainer container bc94d3161a7a37a7398e2a7ae02f47980ffbdd84538bc477bec69b8bc9117aa6. Sep 12 17:55:26.809124 systemd[1]: Created slice kubepods-besteffort-podc1131fbc_e41c_472e_a92f_7909cc36ac42.slice - libcontainer container kubepods-besteffort-podc1131fbc_e41c_472e_a92f_7909cc36ac42.slice. 
Sep 12 17:55:26.832454 containerd[1819]: time="2025-09-12T17:55:26.832417852Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5d988495cb-5r2rf,Uid:556acb13-9509-41b9-a1d5-963b71e9fac1,Namespace:calico-system,Attempt:0,} returns sandbox id \"bc94d3161a7a37a7398e2a7ae02f47980ffbdd84538bc477bec69b8bc9117aa6\"" Sep 12 17:55:26.833101 containerd[1819]: time="2025-09-12T17:55:26.833089088Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\"" Sep 12 17:55:26.858977 kubelet[3070]: I0912 17:55:26.858931 3070 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c1131fbc-e41c-472e-a92f-7909cc36ac42-tigera-ca-bundle\") pod \"calico-node-s9slr\" (UID: \"c1131fbc-e41c-472e-a92f-7909cc36ac42\") " pod="calico-system/calico-node-s9slr" Sep 12 17:55:26.858977 kubelet[3070]: I0912 17:55:26.858965 3070 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/c1131fbc-e41c-472e-a92f-7909cc36ac42-cni-net-dir\") pod \"calico-node-s9slr\" (UID: \"c1131fbc-e41c-472e-a92f-7909cc36ac42\") " pod="calico-system/calico-node-s9slr" Sep 12 17:55:26.859070 kubelet[3070]: I0912 17:55:26.858986 3070 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/c1131fbc-e41c-472e-a92f-7909cc36ac42-flexvol-driver-host\") pod \"calico-node-s9slr\" (UID: \"c1131fbc-e41c-472e-a92f-7909cc36ac42\") " pod="calico-system/calico-node-s9slr" Sep 12 17:55:26.859070 kubelet[3070]: I0912 17:55:26.859009 3070 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/c1131fbc-e41c-472e-a92f-7909cc36ac42-var-run-calico\") pod \"calico-node-s9slr\" (UID: \"c1131fbc-e41c-472e-a92f-7909cc36ac42\") " 
pod="calico-system/calico-node-s9slr" Sep 12 17:55:26.859070 kubelet[3070]: I0912 17:55:26.859039 3070 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cd5ws\" (UniqueName: \"kubernetes.io/projected/c1131fbc-e41c-472e-a92f-7909cc36ac42-kube-api-access-cd5ws\") pod \"calico-node-s9slr\" (UID: \"c1131fbc-e41c-472e-a92f-7909cc36ac42\") " pod="calico-system/calico-node-s9slr" Sep 12 17:55:26.859070 kubelet[3070]: I0912 17:55:26.859067 3070 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/c1131fbc-e41c-472e-a92f-7909cc36ac42-node-certs\") pod \"calico-node-s9slr\" (UID: \"c1131fbc-e41c-472e-a92f-7909cc36ac42\") " pod="calico-system/calico-node-s9slr" Sep 12 17:55:26.859173 kubelet[3070]: I0912 17:55:26.859089 3070 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c1131fbc-e41c-472e-a92f-7909cc36ac42-xtables-lock\") pod \"calico-node-s9slr\" (UID: \"c1131fbc-e41c-472e-a92f-7909cc36ac42\") " pod="calico-system/calico-node-s9slr" Sep 12 17:55:26.859173 kubelet[3070]: I0912 17:55:26.859110 3070 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c1131fbc-e41c-472e-a92f-7909cc36ac42-lib-modules\") pod \"calico-node-s9slr\" (UID: \"c1131fbc-e41c-472e-a92f-7909cc36ac42\") " pod="calico-system/calico-node-s9slr" Sep 12 17:55:26.859173 kubelet[3070]: I0912 17:55:26.859123 3070 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/c1131fbc-e41c-472e-a92f-7909cc36ac42-policysync\") pod \"calico-node-s9slr\" (UID: \"c1131fbc-e41c-472e-a92f-7909cc36ac42\") " pod="calico-system/calico-node-s9slr" Sep 12 17:55:26.859173 kubelet[3070]: 
I0912 17:55:26.859135 3070 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/c1131fbc-e41c-472e-a92f-7909cc36ac42-var-lib-calico\") pod \"calico-node-s9slr\" (UID: \"c1131fbc-e41c-472e-a92f-7909cc36ac42\") " pod="calico-system/calico-node-s9slr" Sep 12 17:55:26.859173 kubelet[3070]: I0912 17:55:26.859146 3070 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/c1131fbc-e41c-472e-a92f-7909cc36ac42-cni-log-dir\") pod \"calico-node-s9slr\" (UID: \"c1131fbc-e41c-472e-a92f-7909cc36ac42\") " pod="calico-system/calico-node-s9slr" Sep 12 17:55:26.859274 kubelet[3070]: I0912 17:55:26.859156 3070 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/c1131fbc-e41c-472e-a92f-7909cc36ac42-cni-bin-dir\") pod \"calico-node-s9slr\" (UID: \"c1131fbc-e41c-472e-a92f-7909cc36ac42\") " pod="calico-system/calico-node-s9slr" Sep 12 17:55:26.963125 kubelet[3070]: E0912 17:55:26.963070 3070 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:55:26.963125 kubelet[3070]: W0912 17:55:26.963113 3070 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:55:26.963600 kubelet[3070]: E0912 17:55:26.963157 3070 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:55:26.967536 kubelet[3070]: E0912 17:55:26.967483 3070 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:55:26.967536 kubelet[3070]: W0912 17:55:26.967525 3070 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:55:26.967807 kubelet[3070]: E0912 17:55:26.967563 3070 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:55:26.977732 kubelet[3070]: E0912 17:55:26.977636 3070 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:55:26.977732 kubelet[3070]: W0912 17:55:26.977676 3070 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:55:26.977732 kubelet[3070]: E0912 17:55:26.977715 3070 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:55:27.104122 kubelet[3070]: E0912 17:55:27.103913 3070 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mxk5v" podUID="a8e8f24e-2173-44ef-a6cc-5168890274e3" Sep 12 17:55:27.111896 containerd[1819]: time="2025-09-12T17:55:27.111821246Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-s9slr,Uid:c1131fbc-e41c-472e-a92f-7909cc36ac42,Namespace:calico-system,Attempt:0,}" Sep 12 17:55:27.122409 containerd[1819]: time="2025-09-12T17:55:27.122372484Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:55:27.122409 containerd[1819]: time="2025-09-12T17:55:27.122400684Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:55:27.122409 containerd[1819]: time="2025-09-12T17:55:27.122407888Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:55:27.122524 containerd[1819]: time="2025-09-12T17:55:27.122453357Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:55:27.137548 systemd[1]: Started cri-containerd-4592359dec07b06343494117cc58f21ed9706c1e4c1cc0e71e660fb4a2266f5d.scope - libcontainer container 4592359dec07b06343494117cc58f21ed9706c1e4c1cc0e71e660fb4a2266f5d. 
Sep 12 17:55:27.147441 containerd[1819]: time="2025-09-12T17:55:27.147419110Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-s9slr,Uid:c1131fbc-e41c-472e-a92f-7909cc36ac42,Namespace:calico-system,Attempt:0,} returns sandbox id \"4592359dec07b06343494117cc58f21ed9706c1e4c1cc0e71e660fb4a2266f5d\"" Sep 12 17:55:27.156666 kubelet[3070]: E0912 17:55:27.156649 3070 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:55:27.156666 kubelet[3070]: W0912 17:55:27.156663 3070 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:55:27.156770 kubelet[3070]: E0912 17:55:27.156678 3070 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:55:27.156844 kubelet[3070]: E0912 17:55:27.156830 3070 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:55:27.156844 kubelet[3070]: W0912 17:55:27.156839 3070 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:55:27.156917 kubelet[3070]: E0912 17:55:27.156849 3070 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:55:27.156974 kubelet[3070]: E0912 17:55:27.156967 3070 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:55:27.157010 kubelet[3070]: W0912 17:55:27.156974 3070 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:55:27.157010 kubelet[3070]: E0912 17:55:27.156986 3070 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:55:27.157107 kubelet[3070]: E0912 17:55:27.157100 3070 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:55:27.157143 kubelet[3070]: W0912 17:55:27.157107 3070 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:55:27.157143 kubelet[3070]: E0912 17:55:27.157115 3070 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:55:27.157227 kubelet[3070]: E0912 17:55:27.157220 3070 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:55:27.157227 kubelet[3070]: W0912 17:55:27.157226 3070 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:55:27.157292 kubelet[3070]: E0912 17:55:27.157234 3070 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:55:27.157334 kubelet[3070]: E0912 17:55:27.157327 3070 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:55:27.157367 kubelet[3070]: W0912 17:55:27.157334 3070 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:55:27.157367 kubelet[3070]: E0912 17:55:27.157342 3070 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:55:27.157443 kubelet[3070]: E0912 17:55:27.157439 3070 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:55:27.157476 kubelet[3070]: W0912 17:55:27.157446 3070 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:55:27.157476 kubelet[3070]: E0912 17:55:27.157454 3070 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:55:27.157558 kubelet[3070]: E0912 17:55:27.157552 3070 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:55:27.157598 kubelet[3070]: W0912 17:55:27.157558 3070 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:55:27.157598 kubelet[3070]: E0912 17:55:27.157566 3070 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:55:27.157683 kubelet[3070]: E0912 17:55:27.157677 3070 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:55:27.157731 kubelet[3070]: W0912 17:55:27.157684 3070 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:55:27.157731 kubelet[3070]: E0912 17:55:27.157692 3070 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:55:27.157807 kubelet[3070]: E0912 17:55:27.157800 3070 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:55:27.157807 kubelet[3070]: W0912 17:55:27.157807 3070 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:55:27.157873 kubelet[3070]: E0912 17:55:27.157814 3070 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:55:27.157916 kubelet[3070]: E0912 17:55:27.157909 3070 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:55:27.157952 kubelet[3070]: W0912 17:55:27.157915 3070 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:55:27.157952 kubelet[3070]: E0912 17:55:27.157923 3070 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:55:27.158019 kubelet[3070]: E0912 17:55:27.158012 3070 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:55:27.158052 kubelet[3070]: W0912 17:55:27.158018 3070 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:55:27.158052 kubelet[3070]: E0912 17:55:27.158026 3070 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:55:27.158126 kubelet[3070]: E0912 17:55:27.158119 3070 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:55:27.158126 kubelet[3070]: W0912 17:55:27.158126 3070 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:55:27.158186 kubelet[3070]: E0912 17:55:27.158133 3070 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:55:27.158228 kubelet[3070]: E0912 17:55:27.158222 3070 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:55:27.158265 kubelet[3070]: W0912 17:55:27.158228 3070 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:55:27.158265 kubelet[3070]: E0912 17:55:27.158236 3070 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:55:27.158330 kubelet[3070]: E0912 17:55:27.158323 3070 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:55:27.158367 kubelet[3070]: W0912 17:55:27.158330 3070 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:55:27.158367 kubelet[3070]: E0912 17:55:27.158338 3070 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:55:27.158441 kubelet[3070]: E0912 17:55:27.158427 3070 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:55:27.158475 kubelet[3070]: W0912 17:55:27.158441 3070 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:55:27.158475 kubelet[3070]: E0912 17:55:27.158450 3070 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:55:27.158553 kubelet[3070]: E0912 17:55:27.158546 3070 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:55:27.158591 kubelet[3070]: W0912 17:55:27.158554 3070 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:55:27.158591 kubelet[3070]: E0912 17:55:27.158562 3070 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:55:27.158658 kubelet[3070]: E0912 17:55:27.158651 3070 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:55:27.158695 kubelet[3070]: W0912 17:55:27.158658 3070 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:55:27.158695 kubelet[3070]: E0912 17:55:27.158666 3070 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Sep 12 17:55:27.158767 kubelet[3070]: E0912 17:55:27.158761 3070 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:55:27.158800 kubelet[3070]: W0912 17:55:27.158767 3070 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:55:27.158800 kubelet[3070]: E0912 17:55:27.158775 3070 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:55:27.162205 kubelet[3070]: I0912 17:55:27.162158 3070 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zhv8v\" (UniqueName: \"kubernetes.io/projected/a8e8f24e-2173-44ef-a6cc-5168890274e3-kube-api-access-zhv8v\") pod \"csi-node-driver-mxk5v\" (UID: \"a8e8f24e-2173-44ef-a6cc-5168890274e3\") " pod="calico-system/csi-node-driver-mxk5v"
Sep 12 17:55:27.162334 kubelet[3070]: I0912 17:55:27.162323 3070 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/a8e8f24e-2173-44ef-a6cc-5168890274e3-socket-dir\") pod \"csi-node-driver-mxk5v\" (UID: \"a8e8f24e-2173-44ef-a6cc-5168890274e3\") " pod="calico-system/csi-node-driver-mxk5v"
Sep 12 17:55:27.162520 kubelet[3070]: I0912 17:55:27.162474 3070 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a8e8f24e-2173-44ef-a6cc-5168890274e3-kubelet-dir\") pod \"csi-node-driver-mxk5v\" (UID: \"a8e8f24e-2173-44ef-a6cc-5168890274e3\") " pod="calico-system/csi-node-driver-mxk5v"
Sep 12 17:55:27.162636 kubelet[3070]: I0912 17:55:27.162592 3070 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/a8e8f24e-2173-44ef-a6cc-5168890274e3-registration-dir\") pod \"csi-node-driver-mxk5v\" (UID: \"a8e8f24e-2173-44ef-a6cc-5168890274e3\") " pod="calico-system/csi-node-driver-mxk5v"
Sep 12 17:55:27.163536 kubelet[3070]: I0912 17:55:27.163477 3070 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/a8e8f24e-2173-44ef-a6cc-5168890274e3-varrun\") pod \"csi-node-driver-mxk5v\" (UID: \"a8e8f24e-2173-44ef-a6cc-5168890274e3\") " pod="calico-system/csi-node-driver-mxk5v"
Sep 12 17:55:27.265457 kubelet[3070]: E0912 17:55:27.265350 3070 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:55:27.265457 kubelet[3070]: W0912 17:55:27.265400 3070 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:55:27.265457 kubelet[3070]: E0912 17:55:27.265463 3070 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Sep 12 17:55:27.282001 kubelet[3070]: E0912 17:55:27.281955 3070 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:55:27.282134 kubelet[3070]: W0912 17:55:27.282000 3070 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:55:27.282134 kubelet[3070]: E0912 17:55:27.282051 3070 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Sep 12 17:55:28.560161 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2663069757.mount: Deactivated successfully.
Sep 12 17:55:28.959171 kubelet[3070]: E0912 17:55:28.959108 3070 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mxk5v" podUID="a8e8f24e-2173-44ef-a6cc-5168890274e3"
Sep 12 17:55:29.191208 containerd[1819]: time="2025-09-12T17:55:29.191182905Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:55:29.191467 containerd[1819]: time="2025-09-12T17:55:29.191380929Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.3: active requests=0, bytes read=35237389"
Sep 12 17:55:29.191755 containerd[1819]: time="2025-09-12T17:55:29.191719337Z" level=info msg="ImageCreate event name:\"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:55:29.192682 containerd[1819]: time="2025-09-12T17:55:29.192664837Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:55:29.193399 containerd[1819]: time="2025-09-12T17:55:29.193385751Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.3\" with image id \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\", size \"35237243\" in 2.360278598s"
Sep 12 17:55:29.193431 containerd[1819]: time="2025-09-12T17:55:29.193400388Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\""
Sep 12 17:55:29.193873 containerd[1819]: time="2025-09-12T17:55:29.193862457Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\""
Sep 12 17:55:29.196730 containerd[1819]: time="2025-09-12T17:55:29.196716014Z" level=info msg="CreateContainer within sandbox \"bc94d3161a7a37a7398e2a7ae02f47980ffbdd84538bc477bec69b8bc9117aa6\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Sep 12 17:55:29.201134 containerd[1819]: time="2025-09-12T17:55:29.201076157Z" level=info msg="CreateContainer within sandbox \"bc94d3161a7a37a7398e2a7ae02f47980ffbdd84538bc477bec69b8bc9117aa6\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"75319c9081ed66f2879d2a44cf11cde80aa8d4d12fcd23dddb4b353752ee07be\""
Sep 12 17:55:29.201303 containerd[1819]: time="2025-09-12T17:55:29.201292751Z" level=info msg="StartContainer for \"75319c9081ed66f2879d2a44cf11cde80aa8d4d12fcd23dddb4b353752ee07be\""
Sep 12 17:55:29.226737 systemd[1]: Started cri-containerd-75319c9081ed66f2879d2a44cf11cde80aa8d4d12fcd23dddb4b353752ee07be.scope - libcontainer container 75319c9081ed66f2879d2a44cf11cde80aa8d4d12fcd23dddb4b353752ee07be.
Sep 12 17:55:29.265721 containerd[1819]: time="2025-09-12T17:55:29.265690903Z" level=info msg="StartContainer for \"75319c9081ed66f2879d2a44cf11cde80aa8d4d12fcd23dddb4b353752ee07be\" returns successfully"
Sep 12 17:55:30.029628 kubelet[3070]: I0912 17:55:30.029505 3070 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-5d988495cb-5r2rf" podStartSLOduration=1.6686090390000001 podStartE2EDuration="4.029462349s" podCreationTimestamp="2025-09-12 17:55:26 +0000 UTC" firstStartedPulling="2025-09-12 17:55:26.832949081 +0000 UTC m=+15.916892424" lastFinishedPulling="2025-09-12 17:55:29.193802394 +0000 UTC m=+18.277745734" observedRunningTime="2025-09-12 17:55:30.028673295 +0000 UTC m=+19.112616706" watchObservedRunningTime="2025-09-12 17:55:30.029462349 +0000 UTC m=+19.113405739"
Sep 12 17:55:30.080360 kubelet[3070]: E0912 17:55:30.080266 3070 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:55:30.080360 kubelet[3070]: W0912 17:55:30.080324 3070 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:55:30.080818 kubelet[3070]: E0912 17:55:30.080385 3070 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Sep 12 17:55:30.960323 kubelet[3070]: E0912 17:55:30.960230 3070 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mxk5v" podUID="a8e8f24e-2173-44ef-a6cc-5168890274e3"
Sep 12 17:55:31.011635 kubelet[3070]: I0912 17:55:31.011578 3070 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 12 17:55:31.096795 kubelet[3070]: E0912 17:55:31.096694 3070 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:55:31.096795 kubelet[3070]: W0912 17:55:31.096748 3070 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:55:31.096795 kubelet[3070]: E0912 17:55:31.096793 3070 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:55:31.097884 kubelet[3070]: E0912 17:55:31.097482 3070 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:55:31.097884 kubelet[3070]: W0912 17:55:31.097519 3070 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:55:31.097884 kubelet[3070]: E0912 17:55:31.097562 3070 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:55:31.108033 kubelet[3070]: E0912 17:55:31.107996 3070 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:55:31.108033 kubelet[3070]: W0912 17:55:31.108027 3070 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:55:31.108412 kubelet[3070]: E0912 17:55:31.108068 3070 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:55:31.108568 kubelet[3070]: E0912 17:55:31.108513 3070 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:55:31.108568 kubelet[3070]: W0912 17:55:31.108540 3070 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:55:31.108779 kubelet[3070]: E0912 17:55:31.108605 3070 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:55:31.109061 kubelet[3070]: E0912 17:55:31.109022 3070 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:55:31.109061 kubelet[3070]: W0912 17:55:31.109052 3070 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:55:31.109303 kubelet[3070]: E0912 17:55:31.109102 3070 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:55:31.109475 kubelet[3070]: E0912 17:55:31.109424 3070 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:55:31.109475 kubelet[3070]: W0912 17:55:31.109464 3070 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:55:31.109731 kubelet[3070]: E0912 17:55:31.109558 3070 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:55:31.109917 kubelet[3070]: E0912 17:55:31.109869 3070 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:55:31.109917 kubelet[3070]: W0912 17:55:31.109898 3070 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:55:31.110108 kubelet[3070]: E0912 17:55:31.109929 3070 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:55:31.110430 kubelet[3070]: E0912 17:55:31.110388 3070 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:55:31.110598 kubelet[3070]: W0912 17:55:31.110428 3070 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:55:31.110598 kubelet[3070]: E0912 17:55:31.110495 3070 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:55:31.110940 kubelet[3070]: E0912 17:55:31.110913 3070 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:55:31.111055 kubelet[3070]: W0912 17:55:31.110944 3070 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:55:31.111055 kubelet[3070]: E0912 17:55:31.110986 3070 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:55:31.111417 kubelet[3070]: E0912 17:55:31.111389 3070 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:55:31.111535 kubelet[3070]: W0912 17:55:31.111418 3070 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:55:31.111535 kubelet[3070]: E0912 17:55:31.111498 3070 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:55:31.111915 kubelet[3070]: E0912 17:55:31.111858 3070 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:55:31.111915 kubelet[3070]: W0912 17:55:31.111889 3070 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:55:31.112094 kubelet[3070]: E0912 17:55:31.111940 3070 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:55:31.112259 kubelet[3070]: E0912 17:55:31.112234 3070 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:55:31.112259 kubelet[3070]: W0912 17:55:31.112253 3070 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:55:31.112426 kubelet[3070]: E0912 17:55:31.112307 3070 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:55:31.112680 kubelet[3070]: E0912 17:55:31.112643 3070 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:55:31.112782 kubelet[3070]: W0912 17:55:31.112675 3070 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:55:31.112782 kubelet[3070]: E0912 17:55:31.112718 3070 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:55:31.113161 kubelet[3070]: E0912 17:55:31.113131 3070 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:55:31.113161 kubelet[3070]: W0912 17:55:31.113157 3070 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:55:31.113308 kubelet[3070]: E0912 17:55:31.113183 3070 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:55:31.113637 kubelet[3070]: E0912 17:55:31.113607 3070 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:55:31.113637 kubelet[3070]: W0912 17:55:31.113629 3070 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:55:31.113864 kubelet[3070]: E0912 17:55:31.113657 3070 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:55:31.114059 kubelet[3070]: E0912 17:55:31.114037 3070 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:55:31.114059 kubelet[3070]: W0912 17:55:31.114055 3070 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:55:31.114174 kubelet[3070]: E0912 17:55:31.114077 3070 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:55:31.114353 kubelet[3070]: E0912 17:55:31.114338 3070 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:55:31.114420 kubelet[3070]: W0912 17:55:31.114354 3070 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:55:31.114420 kubelet[3070]: E0912 17:55:31.114369 3070 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:55:31.217048 containerd[1819]: time="2025-09-12T17:55:31.216961663Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:55:31.217245 containerd[1819]: time="2025-09-12T17:55:31.217208771Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=4446660" Sep 12 17:55:31.217617 containerd[1819]: time="2025-09-12T17:55:31.217572941Z" level=info msg="ImageCreate event name:\"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:55:31.218586 containerd[1819]: time="2025-09-12T17:55:31.218545186Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:55:31.219019 containerd[1819]: time="2025-09-12T17:55:31.218976746Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\", repo tag 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5939323\" in 2.025099671s" Sep 12 17:55:31.219019 containerd[1819]: time="2025-09-12T17:55:31.218993911Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\"" Sep 12 17:55:31.219912 containerd[1819]: time="2025-09-12T17:55:31.219871705Z" level=info msg="CreateContainer within sandbox \"4592359dec07b06343494117cc58f21ed9706c1e4c1cc0e71e660fb4a2266f5d\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Sep 12 17:55:31.224817 containerd[1819]: time="2025-09-12T17:55:31.224769221Z" level=info msg="CreateContainer within sandbox \"4592359dec07b06343494117cc58f21ed9706c1e4c1cc0e71e660fb4a2266f5d\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"9044a6ae12f5175606e91275e949a141dee21e309c028a2fdc45b072063a1751\"" Sep 12 17:55:31.224996 containerd[1819]: time="2025-09-12T17:55:31.224979190Z" level=info msg="StartContainer for \"9044a6ae12f5175606e91275e949a141dee21e309c028a2fdc45b072063a1751\"" Sep 12 17:55:31.253690 systemd[1]: Started cri-containerd-9044a6ae12f5175606e91275e949a141dee21e309c028a2fdc45b072063a1751.scope - libcontainer container 9044a6ae12f5175606e91275e949a141dee21e309c028a2fdc45b072063a1751. Sep 12 17:55:31.269375 containerd[1819]: time="2025-09-12T17:55:31.269319095Z" level=info msg="StartContainer for \"9044a6ae12f5175606e91275e949a141dee21e309c028a2fdc45b072063a1751\" returns successfully" Sep 12 17:55:31.277061 systemd[1]: cri-containerd-9044a6ae12f5175606e91275e949a141dee21e309c028a2fdc45b072063a1751.scope: Deactivated successfully. 
Sep 12 17:55:31.296845 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9044a6ae12f5175606e91275e949a141dee21e309c028a2fdc45b072063a1751-rootfs.mount: Deactivated successfully. Sep 12 17:55:31.768906 containerd[1819]: time="2025-09-12T17:55:31.768872396Z" level=info msg="shim disconnected" id=9044a6ae12f5175606e91275e949a141dee21e309c028a2fdc45b072063a1751 namespace=k8s.io Sep 12 17:55:31.769007 containerd[1819]: time="2025-09-12T17:55:31.768903960Z" level=warning msg="cleaning up after shim disconnected" id=9044a6ae12f5175606e91275e949a141dee21e309c028a2fdc45b072063a1751 namespace=k8s.io Sep 12 17:55:31.769007 containerd[1819]: time="2025-09-12T17:55:31.768913983Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 17:55:32.019567 containerd[1819]: time="2025-09-12T17:55:32.019310261Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\"" Sep 12 17:55:32.959341 kubelet[3070]: E0912 17:55:32.959289 3070 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mxk5v" podUID="a8e8f24e-2173-44ef-a6cc-5168890274e3" Sep 12 17:55:34.959691 kubelet[3070]: E0912 17:55:34.959663 3070 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mxk5v" podUID="a8e8f24e-2173-44ef-a6cc-5168890274e3" Sep 12 17:55:35.318198 containerd[1819]: time="2025-09-12T17:55:35.318169747Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:55:35.318406 containerd[1819]: time="2025-09-12T17:55:35.318354604Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes read=70440613" Sep 12 17:55:35.318699 containerd[1819]: time="2025-09-12T17:55:35.318658541Z" level=info msg="ImageCreate event name:\"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:55:35.319780 containerd[1819]: time="2025-09-12T17:55:35.319739572Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:55:35.320179 containerd[1819]: time="2025-09-12T17:55:35.320139157Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.3\" with image id \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"71933316\" in 3.300764948s" Sep 12 17:55:35.320179 containerd[1819]: time="2025-09-12T17:55:35.320153163Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\"" Sep 12 17:55:35.321161 containerd[1819]: time="2025-09-12T17:55:35.321148026Z" level=info msg="CreateContainer within sandbox \"4592359dec07b06343494117cc58f21ed9706c1e4c1cc0e71e660fb4a2266f5d\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Sep 12 17:55:35.325950 containerd[1819]: time="2025-09-12T17:55:35.325902189Z" level=info msg="CreateContainer within sandbox \"4592359dec07b06343494117cc58f21ed9706c1e4c1cc0e71e660fb4a2266f5d\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"6e531aa931926891a4bc0de9cc57e5eb46627f5d803c50d3b04f5a2f587b9891\"" Sep 12 17:55:35.326156 containerd[1819]: time="2025-09-12T17:55:35.326110410Z" level=info msg="StartContainer 
for \"6e531aa931926891a4bc0de9cc57e5eb46627f5d803c50d3b04f5a2f587b9891\"" Sep 12 17:55:35.348732 systemd[1]: Started cri-containerd-6e531aa931926891a4bc0de9cc57e5eb46627f5d803c50d3b04f5a2f587b9891.scope - libcontainer container 6e531aa931926891a4bc0de9cc57e5eb46627f5d803c50d3b04f5a2f587b9891. Sep 12 17:55:35.361988 containerd[1819]: time="2025-09-12T17:55:35.361963984Z" level=info msg="StartContainer for \"6e531aa931926891a4bc0de9cc57e5eb46627f5d803c50d3b04f5a2f587b9891\" returns successfully" Sep 12 17:55:35.954248 containerd[1819]: time="2025-09-12T17:55:35.954223115Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 12 17:55:35.955095 systemd[1]: cri-containerd-6e531aa931926891a4bc0de9cc57e5eb46627f5d803c50d3b04f5a2f587b9891.scope: Deactivated successfully. Sep 12 17:55:35.956397 kubelet[3070]: I0912 17:55:35.956384 3070 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Sep 12 17:55:35.965385 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6e531aa931926891a4bc0de9cc57e5eb46627f5d803c50d3b04f5a2f587b9891-rootfs.mount: Deactivated successfully. Sep 12 17:55:35.969978 systemd[1]: Created slice kubepods-besteffort-pod36fe7463_201c_480a_92fa_70e3b0e14442.slice - libcontainer container kubepods-besteffort-pod36fe7463_201c_480a_92fa_70e3b0e14442.slice. Sep 12 17:55:35.972647 systemd[1]: Created slice kubepods-burstable-pod99abd614_f027_426b_a5d7_84601fcd4b39.slice - libcontainer container kubepods-burstable-pod99abd614_f027_426b_a5d7_84601fcd4b39.slice. Sep 12 17:55:35.975728 systemd[1]: Created slice kubepods-besteffort-podf5372128_8327_44f1_8a1c_68eda7b4a892.slice - libcontainer container kubepods-besteffort-podf5372128_8327_44f1_8a1c_68eda7b4a892.slice. 
Sep 12 17:55:35.978345 systemd[1]: Created slice kubepods-besteffort-podf0e34300_52fa_4b2c_a580_7e7738d631f0.slice - libcontainer container kubepods-besteffort-podf0e34300_52fa_4b2c_a580_7e7738d631f0.slice. Sep 12 17:55:35.980800 systemd[1]: Created slice kubepods-burstable-pod054fb7c1_c456_47f2_811b_49f3435a8e35.slice - libcontainer container kubepods-burstable-pod054fb7c1_c456_47f2_811b_49f3435a8e35.slice. Sep 12 17:55:35.983552 systemd[1]: Created slice kubepods-besteffort-podb34914f4_887a_4ea2_b72a_10c982892d18.slice - libcontainer container kubepods-besteffort-podb34914f4_887a_4ea2_b72a_10c982892d18.slice. Sep 12 17:55:35.985921 systemd[1]: Created slice kubepods-besteffort-poda444f57a_b1d6_4798_858d_e3a3c511da85.slice - libcontainer container kubepods-besteffort-poda444f57a_b1d6_4798_858d_e3a3c511da85.slice. Sep 12 17:55:36.147979 kubelet[3070]: I0912 17:55:36.147860 3070 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f0e34300-52fa-4b2c-a580-7e7738d631f0-goldmane-ca-bundle\") pod \"goldmane-7988f88666-87fsc\" (UID: \"f0e34300-52fa-4b2c-a580-7e7738d631f0\") " pod="calico-system/goldmane-7988f88666-87fsc" Sep 12 17:55:36.147979 kubelet[3070]: I0912 17:55:36.147979 3070 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/f0e34300-52fa-4b2c-a580-7e7738d631f0-goldmane-key-pair\") pod \"goldmane-7988f88666-87fsc\" (UID: \"f0e34300-52fa-4b2c-a580-7e7738d631f0\") " pod="calico-system/goldmane-7988f88666-87fsc" Sep 12 17:55:36.161514 kubelet[3070]: I0912 17:55:36.148040 3070 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fqlgg\" (UniqueName: \"kubernetes.io/projected/f5372128-8327-44f1-8a1c-68eda7b4a892-kube-api-access-fqlgg\") pod \"calico-apiserver-79f6dbf598-j6bmc\" (UID: 
\"f5372128-8327-44f1-8a1c-68eda7b4a892\") " pod="calico-apiserver/calico-apiserver-79f6dbf598-j6bmc" Sep 12 17:55:36.161514 kubelet[3070]: I0912 17:55:36.148138 3070 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b34914f4-887a-4ea2-b72a-10c982892d18-whisker-ca-bundle\") pod \"whisker-5b8df47b84-wskdw\" (UID: \"b34914f4-887a-4ea2-b72a-10c982892d18\") " pod="calico-system/whisker-5b8df47b84-wskdw" Sep 12 17:55:36.161514 kubelet[3070]: I0912 17:55:36.148197 3070 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/36fe7463-201c-480a-92fa-70e3b0e14442-tigera-ca-bundle\") pod \"calico-kube-controllers-7cf7c9b989-n5gtx\" (UID: \"36fe7463-201c-480a-92fa-70e3b0e14442\") " pod="calico-system/calico-kube-controllers-7cf7c9b989-n5gtx" Sep 12 17:55:36.161514 kubelet[3070]: I0912 17:55:36.148250 3070 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f0e34300-52fa-4b2c-a580-7e7738d631f0-config\") pod \"goldmane-7988f88666-87fsc\" (UID: \"f0e34300-52fa-4b2c-a580-7e7738d631f0\") " pod="calico-system/goldmane-7988f88666-87fsc" Sep 12 17:55:36.161514 kubelet[3070]: I0912 17:55:36.148304 3070 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zmvb4\" (UniqueName: \"kubernetes.io/projected/b34914f4-887a-4ea2-b72a-10c982892d18-kube-api-access-zmvb4\") pod \"whisker-5b8df47b84-wskdw\" (UID: \"b34914f4-887a-4ea2-b72a-10c982892d18\") " pod="calico-system/whisker-5b8df47b84-wskdw" Sep 12 17:55:36.162113 kubelet[3070]: I0912 17:55:36.148361 3070 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: 
\"kubernetes.io/secret/f5372128-8327-44f1-8a1c-68eda7b4a892-calico-apiserver-certs\") pod \"calico-apiserver-79f6dbf598-j6bmc\" (UID: \"f5372128-8327-44f1-8a1c-68eda7b4a892\") " pod="calico-apiserver/calico-apiserver-79f6dbf598-j6bmc" Sep 12 17:55:36.162113 kubelet[3070]: I0912 17:55:36.148483 3070 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/a444f57a-b1d6-4798-858d-e3a3c511da85-calico-apiserver-certs\") pod \"calico-apiserver-79f6dbf598-7fz7j\" (UID: \"a444f57a-b1d6-4798-858d-e3a3c511da85\") " pod="calico-apiserver/calico-apiserver-79f6dbf598-7fz7j" Sep 12 17:55:36.162113 kubelet[3070]: I0912 17:55:36.148541 3070 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zdk8c\" (UniqueName: \"kubernetes.io/projected/f0e34300-52fa-4b2c-a580-7e7738d631f0-kube-api-access-zdk8c\") pod \"goldmane-7988f88666-87fsc\" (UID: \"f0e34300-52fa-4b2c-a580-7e7738d631f0\") " pod="calico-system/goldmane-7988f88666-87fsc" Sep 12 17:55:36.162113 kubelet[3070]: I0912 17:55:36.148594 3070 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6w6f2\" (UniqueName: \"kubernetes.io/projected/a444f57a-b1d6-4798-858d-e3a3c511da85-kube-api-access-6w6f2\") pod \"calico-apiserver-79f6dbf598-7fz7j\" (UID: \"a444f57a-b1d6-4798-858d-e3a3c511da85\") " pod="calico-apiserver/calico-apiserver-79f6dbf598-7fz7j" Sep 12 17:55:36.162113 kubelet[3070]: I0912 17:55:36.148651 3070 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/b34914f4-887a-4ea2-b72a-10c982892d18-whisker-backend-key-pair\") pod \"whisker-5b8df47b84-wskdw\" (UID: \"b34914f4-887a-4ea2-b72a-10c982892d18\") " pod="calico-system/whisker-5b8df47b84-wskdw" Sep 12 17:55:36.162647 kubelet[3070]: I0912 
17:55:36.148745 3070 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cjqqs\" (UniqueName: \"kubernetes.io/projected/054fb7c1-c456-47f2-811b-49f3435a8e35-kube-api-access-cjqqs\") pod \"coredns-7c65d6cfc9-kbj9g\" (UID: \"054fb7c1-c456-47f2-811b-49f3435a8e35\") " pod="kube-system/coredns-7c65d6cfc9-kbj9g" Sep 12 17:55:36.162647 kubelet[3070]: I0912 17:55:36.148888 3070 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/99abd614-f027-426b-a5d7-84601fcd4b39-config-volume\") pod \"coredns-7c65d6cfc9-nkll2\" (UID: \"99abd614-f027-426b-a5d7-84601fcd4b39\") " pod="kube-system/coredns-7c65d6cfc9-nkll2" Sep 12 17:55:36.162647 kubelet[3070]: I0912 17:55:36.148980 3070 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jrmd6\" (UniqueName: \"kubernetes.io/projected/36fe7463-201c-480a-92fa-70e3b0e14442-kube-api-access-jrmd6\") pod \"calico-kube-controllers-7cf7c9b989-n5gtx\" (UID: \"36fe7463-201c-480a-92fa-70e3b0e14442\") " pod="calico-system/calico-kube-controllers-7cf7c9b989-n5gtx" Sep 12 17:55:36.162647 kubelet[3070]: I0912 17:55:36.149081 3070 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f6r2h\" (UniqueName: \"kubernetes.io/projected/99abd614-f027-426b-a5d7-84601fcd4b39-kube-api-access-f6r2h\") pod \"coredns-7c65d6cfc9-nkll2\" (UID: \"99abd614-f027-426b-a5d7-84601fcd4b39\") " pod="kube-system/coredns-7c65d6cfc9-nkll2" Sep 12 17:55:36.162647 kubelet[3070]: I0912 17:55:36.149182 3070 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/054fb7c1-c456-47f2-811b-49f3435a8e35-config-volume\") pod \"coredns-7c65d6cfc9-kbj9g\" (UID: \"054fb7c1-c456-47f2-811b-49f3435a8e35\") " 
pod="kube-system/coredns-7c65d6cfc9-kbj9g" Sep 12 17:55:36.278684 containerd[1819]: time="2025-09-12T17:55:36.278575116Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-79f6dbf598-j6bmc,Uid:f5372128-8327-44f1-8a1c-68eda7b4a892,Namespace:calico-apiserver,Attempt:0,}" Sep 12 17:55:36.280495 containerd[1819]: time="2025-09-12T17:55:36.280420775Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-87fsc,Uid:f0e34300-52fa-4b2c-a580-7e7738d631f0,Namespace:calico-system,Attempt:0,}" Sep 12 17:55:36.283151 containerd[1819]: time="2025-09-12T17:55:36.283065048Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-kbj9g,Uid:054fb7c1-c456-47f2-811b-49f3435a8e35,Namespace:kube-system,Attempt:0,}" Sep 12 17:55:36.285890 containerd[1819]: time="2025-09-12T17:55:36.285822485Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5b8df47b84-wskdw,Uid:b34914f4-887a-4ea2-b72a-10c982892d18,Namespace:calico-system,Attempt:0,}" Sep 12 17:55:36.287666 containerd[1819]: time="2025-09-12T17:55:36.287623675Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-79f6dbf598-7fz7j,Uid:a444f57a-b1d6-4798-858d-e3a3c511da85,Namespace:calico-apiserver,Attempt:0,}" Sep 12 17:55:36.364238 containerd[1819]: time="2025-09-12T17:55:36.364206471Z" level=info msg="shim disconnected" id=6e531aa931926891a4bc0de9cc57e5eb46627f5d803c50d3b04f5a2f587b9891 namespace=k8s.io Sep 12 17:55:36.364238 containerd[1819]: time="2025-09-12T17:55:36.364233944Z" level=warning msg="cleaning up after shim disconnected" id=6e531aa931926891a4bc0de9cc57e5eb46627f5d803c50d3b04f5a2f587b9891 namespace=k8s.io Sep 12 17:55:36.364238 containerd[1819]: time="2025-09-12T17:55:36.364239202Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 17:55:36.399567 containerd[1819]: time="2025-09-12T17:55:36.399533985Z" level=error msg="Failed to destroy network for sandbox 
\"343772909f7d6b025ce26cede40cd47e76fb9d22582426ab6317c448faa0df45\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:55:36.399790 containerd[1819]: time="2025-09-12T17:55:36.399738636Z" level=error msg="encountered an error cleaning up failed sandbox \"343772909f7d6b025ce26cede40cd47e76fb9d22582426ab6317c448faa0df45\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:55:36.399790 containerd[1819]: time="2025-09-12T17:55:36.399770611Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-79f6dbf598-j6bmc,Uid:f5372128-8327-44f1-8a1c-68eda7b4a892,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"343772909f7d6b025ce26cede40cd47e76fb9d22582426ab6317c448faa0df45\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:55:36.399865 containerd[1819]: time="2025-09-12T17:55:36.399814378Z" level=error msg="Failed to destroy network for sandbox \"dfb574919c28ab1efb54d8ea13068134d7292c57082e3e447322ec3a0606f40c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:55:36.399941 kubelet[3070]: E0912 17:55:36.399920 3070 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"343772909f7d6b025ce26cede40cd47e76fb9d22582426ab6317c448faa0df45\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:55:36.399992 kubelet[3070]: E0912 17:55:36.399971 3070 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"343772909f7d6b025ce26cede40cd47e76fb9d22582426ab6317c448faa0df45\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-79f6dbf598-j6bmc" Sep 12 17:55:36.399992 kubelet[3070]: E0912 17:55:36.399985 3070 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"343772909f7d6b025ce26cede40cd47e76fb9d22582426ab6317c448faa0df45\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-79f6dbf598-j6bmc" Sep 12 17:55:36.400034 kubelet[3070]: E0912 17:55:36.400014 3070 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-79f6dbf598-j6bmc_calico-apiserver(f5372128-8327-44f1-8a1c-68eda7b4a892)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-79f6dbf598-j6bmc_calico-apiserver(f5372128-8327-44f1-8a1c-68eda7b4a892)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"343772909f7d6b025ce26cede40cd47e76fb9d22582426ab6317c448faa0df45\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-79f6dbf598-j6bmc" podUID="f5372128-8327-44f1-8a1c-68eda7b4a892" Sep 12 17:55:36.400081 containerd[1819]: 
time="2025-09-12T17:55:36.399981882Z" level=error msg="encountered an error cleaning up failed sandbox \"dfb574919c28ab1efb54d8ea13068134d7292c57082e3e447322ec3a0606f40c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:55:36.400081 containerd[1819]: time="2025-09-12T17:55:36.400012223Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-87fsc,Uid:f0e34300-52fa-4b2c-a580-7e7738d631f0,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"dfb574919c28ab1efb54d8ea13068134d7292c57082e3e447322ec3a0606f40c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:55:36.400137 kubelet[3070]: E0912 17:55:36.400089 3070 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dfb574919c28ab1efb54d8ea13068134d7292c57082e3e447322ec3a0606f40c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:55:36.400137 kubelet[3070]: E0912 17:55:36.400121 3070 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dfb574919c28ab1efb54d8ea13068134d7292c57082e3e447322ec3a0606f40c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7988f88666-87fsc" Sep 12 17:55:36.400182 kubelet[3070]: E0912 17:55:36.400138 3070 kuberuntime_manager.go:1170] "CreatePodSandbox for 
pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dfb574919c28ab1efb54d8ea13068134d7292c57082e3e447322ec3a0606f40c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7988f88666-87fsc" Sep 12 17:55:36.400202 kubelet[3070]: E0912 17:55:36.400168 3070 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7988f88666-87fsc_calico-system(f0e34300-52fa-4b2c-a580-7e7738d631f0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7988f88666-87fsc_calico-system(f0e34300-52fa-4b2c-a580-7e7738d631f0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"dfb574919c28ab1efb54d8ea13068134d7292c57082e3e447322ec3a0606f40c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7988f88666-87fsc" podUID="f0e34300-52fa-4b2c-a580-7e7738d631f0" Sep 12 17:55:36.401113 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-dfb574919c28ab1efb54d8ea13068134d7292c57082e3e447322ec3a0606f40c-shm.mount: Deactivated successfully. Sep 12 17:55:36.401182 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-343772909f7d6b025ce26cede40cd47e76fb9d22582426ab6317c448faa0df45-shm.mount: Deactivated successfully. 
Sep 12 17:55:36.401469 containerd[1819]: time="2025-09-12T17:55:36.401439404Z" level=error msg="Failed to destroy network for sandbox \"d8b205dcbf77e1a4c353db85981bb8086e08cd6f925e60d2e60b158a00395104\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:55:36.401638 containerd[1819]: time="2025-09-12T17:55:36.401623869Z" level=error msg="encountered an error cleaning up failed sandbox \"d8b205dcbf77e1a4c353db85981bb8086e08cd6f925e60d2e60b158a00395104\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:55:36.401684 containerd[1819]: time="2025-09-12T17:55:36.401647255Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-79f6dbf598-7fz7j,Uid:a444f57a-b1d6-4798-858d-e3a3c511da85,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d8b205dcbf77e1a4c353db85981bb8086e08cd6f925e60d2e60b158a00395104\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:55:36.401757 kubelet[3070]: E0912 17:55:36.401739 3070 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d8b205dcbf77e1a4c353db85981bb8086e08cd6f925e60d2e60b158a00395104\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:55:36.401798 kubelet[3070]: E0912 17:55:36.401766 3070 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = 
failed to setup network for sandbox \"d8b205dcbf77e1a4c353db85981bb8086e08cd6f925e60d2e60b158a00395104\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-79f6dbf598-7fz7j" Sep 12 17:55:36.401798 kubelet[3070]: E0912 17:55:36.401778 3070 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d8b205dcbf77e1a4c353db85981bb8086e08cd6f925e60d2e60b158a00395104\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-79f6dbf598-7fz7j" Sep 12 17:55:36.401863 kubelet[3070]: E0912 17:55:36.401800 3070 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-79f6dbf598-7fz7j_calico-apiserver(a444f57a-b1d6-4798-858d-e3a3c511da85)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-79f6dbf598-7fz7j_calico-apiserver(a444f57a-b1d6-4798-858d-e3a3c511da85)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d8b205dcbf77e1a4c353db85981bb8086e08cd6f925e60d2e60b158a00395104\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-79f6dbf598-7fz7j" podUID="a444f57a-b1d6-4798-858d-e3a3c511da85" Sep 12 17:55:36.402013 containerd[1819]: time="2025-09-12T17:55:36.402000510Z" level=error msg="Failed to destroy network for sandbox \"bf431ee9e5c083308abc4502a5513996d6af5612617ff83dd1c658460898efde\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:55:36.402151 containerd[1819]: time="2025-09-12T17:55:36.402140069Z" level=error msg="encountered an error cleaning up failed sandbox \"bf431ee9e5c083308abc4502a5513996d6af5612617ff83dd1c658460898efde\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:55:36.402173 containerd[1819]: time="2025-09-12T17:55:36.402162675Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5b8df47b84-wskdw,Uid:b34914f4-887a-4ea2-b72a-10c982892d18,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"bf431ee9e5c083308abc4502a5513996d6af5612617ff83dd1c658460898efde\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:55:36.402231 kubelet[3070]: E0912 17:55:36.402220 3070 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bf431ee9e5c083308abc4502a5513996d6af5612617ff83dd1c658460898efde\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:55:36.402258 kubelet[3070]: E0912 17:55:36.402239 3070 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bf431ee9e5c083308abc4502a5513996d6af5612617ff83dd1c658460898efde\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5b8df47b84-wskdw" 
Sep 12 17:55:36.402258 kubelet[3070]: E0912 17:55:36.402248 3070 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bf431ee9e5c083308abc4502a5513996d6af5612617ff83dd1c658460898efde\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5b8df47b84-wskdw" Sep 12 17:55:36.402295 kubelet[3070]: E0912 17:55:36.402266 3070 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-5b8df47b84-wskdw_calico-system(b34914f4-887a-4ea2-b72a-10c982892d18)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-5b8df47b84-wskdw_calico-system(b34914f4-887a-4ea2-b72a-10c982892d18)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bf431ee9e5c083308abc4502a5513996d6af5612617ff83dd1c658460898efde\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5b8df47b84-wskdw" podUID="b34914f4-887a-4ea2-b72a-10c982892d18" Sep 12 17:55:36.403072 containerd[1819]: time="2025-09-12T17:55:36.403057579Z" level=error msg="Failed to destroy network for sandbox \"0e00b1316e65a0ab2121a1fedfa7d34db24df1a488ef755812289b35c2a1b23f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:55:36.403196 containerd[1819]: time="2025-09-12T17:55:36.403184748Z" level=error msg="encountered an error cleaning up failed sandbox \"0e00b1316e65a0ab2121a1fedfa7d34db24df1a488ef755812289b35c2a1b23f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:55:36.403219 containerd[1819]: time="2025-09-12T17:55:36.403207501Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-kbj9g,Uid:054fb7c1-c456-47f2-811b-49f3435a8e35,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0e00b1316e65a0ab2121a1fedfa7d34db24df1a488ef755812289b35c2a1b23f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:55:36.403281 kubelet[3070]: E0912 17:55:36.403270 3070 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0e00b1316e65a0ab2121a1fedfa7d34db24df1a488ef755812289b35c2a1b23f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:55:36.403303 kubelet[3070]: E0912 17:55:36.403290 3070 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0e00b1316e65a0ab2121a1fedfa7d34db24df1a488ef755812289b35c2a1b23f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-kbj9g" Sep 12 17:55:36.403322 kubelet[3070]: E0912 17:55:36.403302 3070 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0e00b1316e65a0ab2121a1fedfa7d34db24df1a488ef755812289b35c2a1b23f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-kbj9g" Sep 12 17:55:36.403344 kubelet[3070]: E0912 17:55:36.403321 3070 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-kbj9g_kube-system(054fb7c1-c456-47f2-811b-49f3435a8e35)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-kbj9g_kube-system(054fb7c1-c456-47f2-811b-49f3435a8e35)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0e00b1316e65a0ab2121a1fedfa7d34db24df1a488ef755812289b35c2a1b23f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-kbj9g" podUID="054fb7c1-c456-47f2-811b-49f3435a8e35" Sep 12 17:55:36.573664 containerd[1819]: time="2025-09-12T17:55:36.573409339Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7cf7c9b989-n5gtx,Uid:36fe7463-201c-480a-92fa-70e3b0e14442,Namespace:calico-system,Attempt:0,}" Sep 12 17:55:36.575108 containerd[1819]: time="2025-09-12T17:55:36.575092812Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-nkll2,Uid:99abd614-f027-426b-a5d7-84601fcd4b39,Namespace:kube-system,Attempt:0,}" Sep 12 17:55:36.604176 containerd[1819]: time="2025-09-12T17:55:36.604132275Z" level=error msg="Failed to destroy network for sandbox \"4e77419d6912559c146407cf9873996d588d29dba4ea9a9c58375cd81c0883e5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:55:36.604341 containerd[1819]: time="2025-09-12T17:55:36.604327749Z" level=error msg="Failed to destroy network for sandbox \"19da77914d12ba26342852c78603efced5e265b6cbdaa5cd0c8a202dc855357b\"" 
error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:55:36.604401 containerd[1819]: time="2025-09-12T17:55:36.604389639Z" level=error msg="encountered an error cleaning up failed sandbox \"4e77419d6912559c146407cf9873996d588d29dba4ea9a9c58375cd81c0883e5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:55:36.604430 containerd[1819]: time="2025-09-12T17:55:36.604420670Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-nkll2,Uid:99abd614-f027-426b-a5d7-84601fcd4b39,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4e77419d6912559c146407cf9873996d588d29dba4ea9a9c58375cd81c0883e5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:55:36.604591 containerd[1819]: time="2025-09-12T17:55:36.604519167Z" level=error msg="encountered an error cleaning up failed sandbox \"19da77914d12ba26342852c78603efced5e265b6cbdaa5cd0c8a202dc855357b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:55:36.604631 containerd[1819]: time="2025-09-12T17:55:36.604598872Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7cf7c9b989-n5gtx,Uid:36fe7463-201c-480a-92fa-70e3b0e14442,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox 
\"19da77914d12ba26342852c78603efced5e265b6cbdaa5cd0c8a202dc855357b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:55:36.604730 kubelet[3070]: E0912 17:55:36.604612 3070 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4e77419d6912559c146407cf9873996d588d29dba4ea9a9c58375cd81c0883e5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:55:36.604730 kubelet[3070]: E0912 17:55:36.604666 3070 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4e77419d6912559c146407cf9873996d588d29dba4ea9a9c58375cd81c0883e5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-nkll2" Sep 12 17:55:36.604730 kubelet[3070]: E0912 17:55:36.604698 3070 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4e77419d6912559c146407cf9873996d588d29dba4ea9a9c58375cd81c0883e5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-nkll2" Sep 12 17:55:36.604730 kubelet[3070]: E0912 17:55:36.604720 3070 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"19da77914d12ba26342852c78603efced5e265b6cbdaa5cd0c8a202dc855357b\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:55:36.604841 kubelet[3070]: E0912 17:55:36.604760 3070 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"19da77914d12ba26342852c78603efced5e265b6cbdaa5cd0c8a202dc855357b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7cf7c9b989-n5gtx" Sep 12 17:55:36.604841 kubelet[3070]: E0912 17:55:36.604760 3070 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-nkll2_kube-system(99abd614-f027-426b-a5d7-84601fcd4b39)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-nkll2_kube-system(99abd614-f027-426b-a5d7-84601fcd4b39)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4e77419d6912559c146407cf9873996d588d29dba4ea9a9c58375cd81c0883e5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-nkll2" podUID="99abd614-f027-426b-a5d7-84601fcd4b39" Sep 12 17:55:36.604841 kubelet[3070]: E0912 17:55:36.604770 3070 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"19da77914d12ba26342852c78603efced5e265b6cbdaa5cd0c8a202dc855357b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7cf7c9b989-n5gtx" Sep 12 17:55:36.604910 kubelet[3070]: E0912 
17:55:36.604788 3070 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7cf7c9b989-n5gtx_calico-system(36fe7463-201c-480a-92fa-70e3b0e14442)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7cf7c9b989-n5gtx_calico-system(36fe7463-201c-480a-92fa-70e3b0e14442)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"19da77914d12ba26342852c78603efced5e265b6cbdaa5cd0c8a202dc855357b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7cf7c9b989-n5gtx" podUID="36fe7463-201c-480a-92fa-70e3b0e14442" Sep 12 17:55:36.963326 systemd[1]: Created slice kubepods-besteffort-poda8e8f24e_2173_44ef_a6cc_5168890274e3.slice - libcontainer container kubepods-besteffort-poda8e8f24e_2173_44ef_a6cc_5168890274e3.slice. 
Sep 12 17:55:36.965188 containerd[1819]: time="2025-09-12T17:55:36.965124660Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mxk5v,Uid:a8e8f24e-2173-44ef-a6cc-5168890274e3,Namespace:calico-system,Attempt:0,}" Sep 12 17:55:36.995553 containerd[1819]: time="2025-09-12T17:55:36.995489561Z" level=error msg="Failed to destroy network for sandbox \"73323739d2792e53194858201f12a9f0d4326f8ca00846cb4d5ef25649980a8b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:55:36.995751 containerd[1819]: time="2025-09-12T17:55:36.995708739Z" level=error msg="encountered an error cleaning up failed sandbox \"73323739d2792e53194858201f12a9f0d4326f8ca00846cb4d5ef25649980a8b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:55:36.995751 containerd[1819]: time="2025-09-12T17:55:36.995744495Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mxk5v,Uid:a8e8f24e-2173-44ef-a6cc-5168890274e3,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"73323739d2792e53194858201f12a9f0d4326f8ca00846cb4d5ef25649980a8b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:55:36.995966 kubelet[3070]: E0912 17:55:36.995923 3070 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"73323739d2792e53194858201f12a9f0d4326f8ca00846cb4d5ef25649980a8b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:55:36.996009 kubelet[3070]: E0912 17:55:36.995968 3070 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"73323739d2792e53194858201f12a9f0d4326f8ca00846cb4d5ef25649980a8b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-mxk5v" Sep 12 17:55:36.996009 kubelet[3070]: E0912 17:55:36.995985 3070 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"73323739d2792e53194858201f12a9f0d4326f8ca00846cb4d5ef25649980a8b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-mxk5v" Sep 12 17:55:36.996060 kubelet[3070]: E0912 17:55:36.996017 3070 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-mxk5v_calico-system(a8e8f24e-2173-44ef-a6cc-5168890274e3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-mxk5v_calico-system(a8e8f24e-2173-44ef-a6cc-5168890274e3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"73323739d2792e53194858201f12a9f0d4326f8ca00846cb4d5ef25649980a8b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-mxk5v" podUID="a8e8f24e-2173-44ef-a6cc-5168890274e3" Sep 12 17:55:37.032262 kubelet[3070]: I0912 17:55:37.032218 3070 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="19da77914d12ba26342852c78603efced5e265b6cbdaa5cd0c8a202dc855357b" Sep 12 17:55:37.033544 containerd[1819]: time="2025-09-12T17:55:37.033464974Z" level=info msg="StopPodSandbox for \"19da77914d12ba26342852c78603efced5e265b6cbdaa5cd0c8a202dc855357b\"" Sep 12 17:55:37.033822 kubelet[3070]: I0912 17:55:37.033774 3070 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bf431ee9e5c083308abc4502a5513996d6af5612617ff83dd1c658460898efde" Sep 12 17:55:37.034077 containerd[1819]: time="2025-09-12T17:55:37.034019680Z" level=info msg="Ensure that sandbox 19da77914d12ba26342852c78603efced5e265b6cbdaa5cd0c8a202dc855357b in task-service has been cleanup successfully" Sep 12 17:55:37.035023 containerd[1819]: time="2025-09-12T17:55:37.034955435Z" level=info msg="StopPodSandbox for \"bf431ee9e5c083308abc4502a5513996d6af5612617ff83dd1c658460898efde\"" Sep 12 17:55:37.035403 containerd[1819]: time="2025-09-12T17:55:37.035341121Z" level=info msg="Ensure that sandbox bf431ee9e5c083308abc4502a5513996d6af5612617ff83dd1c658460898efde in task-service has been cleanup successfully" Sep 12 17:55:37.035845 kubelet[3070]: I0912 17:55:37.035797 3070 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0e00b1316e65a0ab2121a1fedfa7d34db24df1a488ef755812289b35c2a1b23f" Sep 12 17:55:37.037019 containerd[1819]: time="2025-09-12T17:55:37.036937485Z" level=info msg="StopPodSandbox for \"0e00b1316e65a0ab2121a1fedfa7d34db24df1a488ef755812289b35c2a1b23f\"" Sep 12 17:55:37.037499 containerd[1819]: time="2025-09-12T17:55:37.037411815Z" level=info msg="Ensure that sandbox 0e00b1316e65a0ab2121a1fedfa7d34db24df1a488ef755812289b35c2a1b23f in task-service has been cleanup successfully" Sep 12 17:55:37.042552 containerd[1819]: time="2025-09-12T17:55:37.042461530Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\"" Sep 12 17:55:37.042802 kubelet[3070]: I0912 17:55:37.042746 3070 pod_container_deletor.go:80] "Container not 
found in pod's containers" containerID="343772909f7d6b025ce26cede40cd47e76fb9d22582426ab6317c448faa0df45" Sep 12 17:55:37.043613 containerd[1819]: time="2025-09-12T17:55:37.043595418Z" level=info msg="StopPodSandbox for \"343772909f7d6b025ce26cede40cd47e76fb9d22582426ab6317c448faa0df45\"" Sep 12 17:55:37.043731 containerd[1819]: time="2025-09-12T17:55:37.043719242Z" level=info msg="Ensure that sandbox 343772909f7d6b025ce26cede40cd47e76fb9d22582426ab6317c448faa0df45 in task-service has been cleanup successfully" Sep 12 17:55:37.044034 kubelet[3070]: I0912 17:55:37.044020 3070 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4e77419d6912559c146407cf9873996d588d29dba4ea9a9c58375cd81c0883e5" Sep 12 17:55:37.044348 containerd[1819]: time="2025-09-12T17:55:37.044323184Z" level=info msg="StopPodSandbox for \"4e77419d6912559c146407cf9873996d588d29dba4ea9a9c58375cd81c0883e5\"" Sep 12 17:55:37.044487 containerd[1819]: time="2025-09-12T17:55:37.044471973Z" level=info msg="Ensure that sandbox 4e77419d6912559c146407cf9873996d588d29dba4ea9a9c58375cd81c0883e5 in task-service has been cleanup successfully" Sep 12 17:55:37.046222 kubelet[3070]: I0912 17:55:37.046204 3070 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="73323739d2792e53194858201f12a9f0d4326f8ca00846cb4d5ef25649980a8b" Sep 12 17:55:37.046600 containerd[1819]: time="2025-09-12T17:55:37.046573485Z" level=info msg="StopPodSandbox for \"73323739d2792e53194858201f12a9f0d4326f8ca00846cb4d5ef25649980a8b\"" Sep 12 17:55:37.046726 containerd[1819]: time="2025-09-12T17:55:37.046713839Z" level=info msg="Ensure that sandbox 73323739d2792e53194858201f12a9f0d4326f8ca00846cb4d5ef25649980a8b in task-service has been cleanup successfully" Sep 12 17:55:37.046784 kubelet[3070]: I0912 17:55:37.046767 3070 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dfb574919c28ab1efb54d8ea13068134d7292c57082e3e447322ec3a0606f40c" Sep 12 
17:55:37.047069 containerd[1819]: time="2025-09-12T17:55:37.047053954Z" level=info msg="StopPodSandbox for \"dfb574919c28ab1efb54d8ea13068134d7292c57082e3e447322ec3a0606f40c\"" Sep 12 17:55:37.047154 containerd[1819]: time="2025-09-12T17:55:37.047143773Z" level=info msg="Ensure that sandbox dfb574919c28ab1efb54d8ea13068134d7292c57082e3e447322ec3a0606f40c in task-service has been cleanup successfully" Sep 12 17:55:37.047279 kubelet[3070]: I0912 17:55:37.047268 3070 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d8b205dcbf77e1a4c353db85981bb8086e08cd6f925e60d2e60b158a00395104" Sep 12 17:55:37.047573 containerd[1819]: time="2025-09-12T17:55:37.047551914Z" level=info msg="StopPodSandbox for \"d8b205dcbf77e1a4c353db85981bb8086e08cd6f925e60d2e60b158a00395104\"" Sep 12 17:55:37.047670 containerd[1819]: time="2025-09-12T17:55:37.047659757Z" level=info msg="Ensure that sandbox d8b205dcbf77e1a4c353db85981bb8086e08cd6f925e60d2e60b158a00395104 in task-service has been cleanup successfully" Sep 12 17:55:37.059656 containerd[1819]: time="2025-09-12T17:55:37.059624317Z" level=error msg="StopPodSandbox for \"0e00b1316e65a0ab2121a1fedfa7d34db24df1a488ef755812289b35c2a1b23f\" failed" error="failed to destroy network for sandbox \"0e00b1316e65a0ab2121a1fedfa7d34db24df1a488ef755812289b35c2a1b23f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:55:37.059769 containerd[1819]: time="2025-09-12T17:55:37.059631323Z" level=error msg="StopPodSandbox for \"19da77914d12ba26342852c78603efced5e265b6cbdaa5cd0c8a202dc855357b\" failed" error="failed to destroy network for sandbox \"19da77914d12ba26342852c78603efced5e265b6cbdaa5cd0c8a202dc855357b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Sep 12 17:55:37.059852 kubelet[3070]: E0912 17:55:37.059825 3070 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"19da77914d12ba26342852c78603efced5e265b6cbdaa5cd0c8a202dc855357b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="19da77914d12ba26342852c78603efced5e265b6cbdaa5cd0c8a202dc855357b" Sep 12 17:55:37.059915 kubelet[3070]: E0912 17:55:37.059874 3070 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"19da77914d12ba26342852c78603efced5e265b6cbdaa5cd0c8a202dc855357b"} Sep 12 17:55:37.059951 kubelet[3070]: E0912 17:55:37.059935 3070 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"36fe7463-201c-480a-92fa-70e3b0e14442\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"19da77914d12ba26342852c78603efced5e265b6cbdaa5cd0c8a202dc855357b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 17:55:37.060020 kubelet[3070]: E0912 17:55:37.059955 3070 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"36fe7463-201c-480a-92fa-70e3b0e14442\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"19da77914d12ba26342852c78603efced5e265b6cbdaa5cd0c8a202dc855357b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7cf7c9b989-n5gtx" podUID="36fe7463-201c-480a-92fa-70e3b0e14442" Sep 
12 17:55:37.060020 kubelet[3070]: E0912 17:55:37.059825 3070 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"0e00b1316e65a0ab2121a1fedfa7d34db24df1a488ef755812289b35c2a1b23f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0e00b1316e65a0ab2121a1fedfa7d34db24df1a488ef755812289b35c2a1b23f" Sep 12 17:55:37.060020 kubelet[3070]: E0912 17:55:37.059986 3070 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"0e00b1316e65a0ab2121a1fedfa7d34db24df1a488ef755812289b35c2a1b23f"} Sep 12 17:55:37.060020 kubelet[3070]: E0912 17:55:37.060009 3070 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"054fb7c1-c456-47f2-811b-49f3435a8e35\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0e00b1316e65a0ab2121a1fedfa7d34db24df1a488ef755812289b35c2a1b23f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 17:55:37.060178 kubelet[3070]: E0912 17:55:37.060022 3070 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"054fb7c1-c456-47f2-811b-49f3435a8e35\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0e00b1316e65a0ab2121a1fedfa7d34db24df1a488ef755812289b35c2a1b23f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-kbj9g" podUID="054fb7c1-c456-47f2-811b-49f3435a8e35" Sep 12 17:55:37.060233 containerd[1819]: 
time="2025-09-12T17:55:37.060139150Z" level=error msg="StopPodSandbox for \"bf431ee9e5c083308abc4502a5513996d6af5612617ff83dd1c658460898efde\" failed" error="failed to destroy network for sandbox \"bf431ee9e5c083308abc4502a5513996d6af5612617ff83dd1c658460898efde\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:55:37.060597 kubelet[3070]: E0912 17:55:37.060405 3070 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"bf431ee9e5c083308abc4502a5513996d6af5612617ff83dd1c658460898efde\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="bf431ee9e5c083308abc4502a5513996d6af5612617ff83dd1c658460898efde" Sep 12 17:55:37.060597 kubelet[3070]: E0912 17:55:37.060426 3070 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"bf431ee9e5c083308abc4502a5513996d6af5612617ff83dd1c658460898efde"} Sep 12 17:55:37.060597 kubelet[3070]: E0912 17:55:37.060452 3070 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b34914f4-887a-4ea2-b72a-10c982892d18\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"bf431ee9e5c083308abc4502a5513996d6af5612617ff83dd1c658460898efde\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 17:55:37.060597 kubelet[3070]: E0912 17:55:37.060464 3070 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b34914f4-887a-4ea2-b72a-10c982892d18\" with KillPodSandboxError: \"rpc error: 
code = Unknown desc = failed to destroy network for sandbox \\\"bf431ee9e5c083308abc4502a5513996d6af5612617ff83dd1c658460898efde\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5b8df47b84-wskdw" podUID="b34914f4-887a-4ea2-b72a-10c982892d18" Sep 12 17:55:37.061345 containerd[1819]: time="2025-09-12T17:55:37.061306055Z" level=error msg="StopPodSandbox for \"4e77419d6912559c146407cf9873996d588d29dba4ea9a9c58375cd81c0883e5\" failed" error="failed to destroy network for sandbox \"4e77419d6912559c146407cf9873996d588d29dba4ea9a9c58375cd81c0883e5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:55:37.061499 kubelet[3070]: E0912 17:55:37.061475 3070 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4e77419d6912559c146407cf9873996d588d29dba4ea9a9c58375cd81c0883e5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4e77419d6912559c146407cf9873996d588d29dba4ea9a9c58375cd81c0883e5" Sep 12 17:55:37.061545 kubelet[3070]: E0912 17:55:37.061510 3070 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4e77419d6912559c146407cf9873996d588d29dba4ea9a9c58375cd81c0883e5"} Sep 12 17:55:37.061585 kubelet[3070]: E0912 17:55:37.061540 3070 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"99abd614-f027-426b-a5d7-84601fcd4b39\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"4e77419d6912559c146407cf9873996d588d29dba4ea9a9c58375cd81c0883e5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 17:55:37.061585 kubelet[3070]: E0912 17:55:37.061563 3070 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"99abd614-f027-426b-a5d7-84601fcd4b39\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4e77419d6912559c146407cf9873996d588d29dba4ea9a9c58375cd81c0883e5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-nkll2" podUID="99abd614-f027-426b-a5d7-84601fcd4b39" Sep 12 17:55:37.061821 containerd[1819]: time="2025-09-12T17:55:37.061804029Z" level=error msg="StopPodSandbox for \"343772909f7d6b025ce26cede40cd47e76fb9d22582426ab6317c448faa0df45\" failed" error="failed to destroy network for sandbox \"343772909f7d6b025ce26cede40cd47e76fb9d22582426ab6317c448faa0df45\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:55:37.061902 kubelet[3070]: E0912 17:55:37.061883 3070 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"343772909f7d6b025ce26cede40cd47e76fb9d22582426ab6317c448faa0df45\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="343772909f7d6b025ce26cede40cd47e76fb9d22582426ab6317c448faa0df45" Sep 12 17:55:37.061929 kubelet[3070]: E0912 17:55:37.061908 3070 
kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"343772909f7d6b025ce26cede40cd47e76fb9d22582426ab6317c448faa0df45"} Sep 12 17:55:37.061952 kubelet[3070]: E0912 17:55:37.061929 3070 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f5372128-8327-44f1-8a1c-68eda7b4a892\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"343772909f7d6b025ce26cede40cd47e76fb9d22582426ab6317c448faa0df45\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 17:55:37.061991 kubelet[3070]: E0912 17:55:37.061946 3070 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f5372128-8327-44f1-8a1c-68eda7b4a892\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"343772909f7d6b025ce26cede40cd47e76fb9d22582426ab6317c448faa0df45\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-79f6dbf598-j6bmc" podUID="f5372128-8327-44f1-8a1c-68eda7b4a892" Sep 12 17:55:37.062677 containerd[1819]: time="2025-09-12T17:55:37.062660682Z" level=error msg="StopPodSandbox for \"d8b205dcbf77e1a4c353db85981bb8086e08cd6f925e60d2e60b158a00395104\" failed" error="failed to destroy network for sandbox \"d8b205dcbf77e1a4c353db85981bb8086e08cd6f925e60d2e60b158a00395104\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:55:37.062743 kubelet[3070]: E0912 17:55:37.062730 3070 log.go:32] "StopPodSandbox from runtime service failed" err="rpc 
error: code = Unknown desc = failed to destroy network for sandbox \"d8b205dcbf77e1a4c353db85981bb8086e08cd6f925e60d2e60b158a00395104\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d8b205dcbf77e1a4c353db85981bb8086e08cd6f925e60d2e60b158a00395104" Sep 12 17:55:37.062769 kubelet[3070]: E0912 17:55:37.062749 3070 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d8b205dcbf77e1a4c353db85981bb8086e08cd6f925e60d2e60b158a00395104"} Sep 12 17:55:37.062793 kubelet[3070]: E0912 17:55:37.062767 3070 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a444f57a-b1d6-4798-858d-e3a3c511da85\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d8b205dcbf77e1a4c353db85981bb8086e08cd6f925e60d2e60b158a00395104\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 17:55:37.062793 kubelet[3070]: E0912 17:55:37.062779 3070 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a444f57a-b1d6-4798-858d-e3a3c511da85\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d8b205dcbf77e1a4c353db85981bb8086e08cd6f925e60d2e60b158a00395104\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-79f6dbf598-7fz7j" podUID="a444f57a-b1d6-4798-858d-e3a3c511da85" Sep 12 17:55:37.063164 containerd[1819]: time="2025-09-12T17:55:37.063151274Z" level=error msg="StopPodSandbox for 
\"73323739d2792e53194858201f12a9f0d4326f8ca00846cb4d5ef25649980a8b\" failed" error="failed to destroy network for sandbox \"73323739d2792e53194858201f12a9f0d4326f8ca00846cb4d5ef25649980a8b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:55:37.063226 kubelet[3070]: E0912 17:55:37.063215 3070 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"73323739d2792e53194858201f12a9f0d4326f8ca00846cb4d5ef25649980a8b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="73323739d2792e53194858201f12a9f0d4326f8ca00846cb4d5ef25649980a8b" Sep 12 17:55:37.063251 kubelet[3070]: E0912 17:55:37.063229 3070 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"73323739d2792e53194858201f12a9f0d4326f8ca00846cb4d5ef25649980a8b"} Sep 12 17:55:37.063251 kubelet[3070]: E0912 17:55:37.063242 3070 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a8e8f24e-2173-44ef-a6cc-5168890274e3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"73323739d2792e53194858201f12a9f0d4326f8ca00846cb4d5ef25649980a8b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 17:55:37.063299 kubelet[3070]: E0912 17:55:37.063252 3070 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a8e8f24e-2173-44ef-a6cc-5168890274e3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"73323739d2792e53194858201f12a9f0d4326f8ca00846cb4d5ef25649980a8b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-mxk5v" podUID="a8e8f24e-2173-44ef-a6cc-5168890274e3" Sep 12 17:55:37.063886 containerd[1819]: time="2025-09-12T17:55:37.063870284Z" level=error msg="StopPodSandbox for \"dfb574919c28ab1efb54d8ea13068134d7292c57082e3e447322ec3a0606f40c\" failed" error="failed to destroy network for sandbox \"dfb574919c28ab1efb54d8ea13068134d7292c57082e3e447322ec3a0606f40c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:55:37.063968 kubelet[3070]: E0912 17:55:37.063955 3070 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"dfb574919c28ab1efb54d8ea13068134d7292c57082e3e447322ec3a0606f40c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="dfb574919c28ab1efb54d8ea13068134d7292c57082e3e447322ec3a0606f40c" Sep 12 17:55:37.063998 kubelet[3070]: E0912 17:55:37.063971 3070 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"dfb574919c28ab1efb54d8ea13068134d7292c57082e3e447322ec3a0606f40c"} Sep 12 17:55:37.063998 kubelet[3070]: E0912 17:55:37.063985 3070 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f0e34300-52fa-4b2c-a580-7e7738d631f0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"dfb574919c28ab1efb54d8ea13068134d7292c57082e3e447322ec3a0606f40c\\\": plugin type=\\\"calico\\\" failed 
(delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 17:55:37.064078 kubelet[3070]: E0912 17:55:37.063995 3070 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f0e34300-52fa-4b2c-a580-7e7738d631f0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"dfb574919c28ab1efb54d8ea13068134d7292c57082e3e447322ec3a0606f40c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7988f88666-87fsc" podUID="f0e34300-52fa-4b2c-a580-7e7738d631f0" Sep 12 17:55:37.334810 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d8b205dcbf77e1a4c353db85981bb8086e08cd6f925e60d2e60b158a00395104-shm.mount: Deactivated successfully. Sep 12 17:55:37.335071 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-bf431ee9e5c083308abc4502a5513996d6af5612617ff83dd1c658460898efde-shm.mount: Deactivated successfully. Sep 12 17:55:37.335265 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0e00b1316e65a0ab2121a1fedfa7d34db24df1a488ef755812289b35c2a1b23f-shm.mount: Deactivated successfully. Sep 12 17:55:37.459495 kubelet[3070]: I0912 17:55:37.459442 3070 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 12 17:55:40.321037 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2938932828.mount: Deactivated successfully. 
Sep 12 17:55:40.338496 containerd[1819]: time="2025-09-12T17:55:40.338446242Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:55:40.338758 containerd[1819]: time="2025-09-12T17:55:40.338713419Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=157078339" Sep 12 17:55:40.339070 containerd[1819]: time="2025-09-12T17:55:40.339025824Z" level=info msg="ImageCreate event name:\"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:55:40.339907 containerd[1819]: time="2025-09-12T17:55:40.339866467Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:55:40.340283 containerd[1819]: time="2025-09-12T17:55:40.340237131Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size \"157078201\" in 3.297685735s" Sep 12 17:55:40.340283 containerd[1819]: time="2025-09-12T17:55:40.340251963Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\"" Sep 12 17:55:40.343735 containerd[1819]: time="2025-09-12T17:55:40.343687301Z" level=info msg="CreateContainer within sandbox \"4592359dec07b06343494117cc58f21ed9706c1e4c1cc0e71e660fb4a2266f5d\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 12 17:55:40.349030 containerd[1819]: time="2025-09-12T17:55:40.349012822Z" level=info 
msg="CreateContainer within sandbox \"4592359dec07b06343494117cc58f21ed9706c1e4c1cc0e71e660fb4a2266f5d\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"84ae3e2c771e415ae8b152d2d6a172351b73bd6b752ebc5b58e930b612a702f2\"" Sep 12 17:55:40.349274 containerd[1819]: time="2025-09-12T17:55:40.349236311Z" level=info msg="StartContainer for \"84ae3e2c771e415ae8b152d2d6a172351b73bd6b752ebc5b58e930b612a702f2\"" Sep 12 17:55:40.369609 systemd[1]: Started cri-containerd-84ae3e2c771e415ae8b152d2d6a172351b73bd6b752ebc5b58e930b612a702f2.scope - libcontainer container 84ae3e2c771e415ae8b152d2d6a172351b73bd6b752ebc5b58e930b612a702f2. Sep 12 17:55:40.382985 containerd[1819]: time="2025-09-12T17:55:40.382962506Z" level=info msg="StartContainer for \"84ae3e2c771e415ae8b152d2d6a172351b73bd6b752ebc5b58e930b612a702f2\" returns successfully" Sep 12 17:55:40.444226 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 12 17:55:40.444275 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Sep 12 17:55:40.480973 containerd[1819]: time="2025-09-12T17:55:40.480945057Z" level=info msg="StopPodSandbox for \"bf431ee9e5c083308abc4502a5513996d6af5612617ff83dd1c658460898efde\"" Sep 12 17:55:40.573895 containerd[1819]: 2025-09-12 17:55:40.518 [INFO][4692] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="bf431ee9e5c083308abc4502a5513996d6af5612617ff83dd1c658460898efde" Sep 12 17:55:40.573895 containerd[1819]: 2025-09-12 17:55:40.519 [INFO][4692] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="bf431ee9e5c083308abc4502a5513996d6af5612617ff83dd1c658460898efde" iface="eth0" netns="/var/run/netns/cni-df7c1a34-ac7c-6a6c-269b-bda1a0d33501" Sep 12 17:55:40.573895 containerd[1819]: 2025-09-12 17:55:40.519 [INFO][4692] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="bf431ee9e5c083308abc4502a5513996d6af5612617ff83dd1c658460898efde" iface="eth0" netns="/var/run/netns/cni-df7c1a34-ac7c-6a6c-269b-bda1a0d33501" Sep 12 17:55:40.573895 containerd[1819]: 2025-09-12 17:55:40.520 [INFO][4692] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="bf431ee9e5c083308abc4502a5513996d6af5612617ff83dd1c658460898efde" iface="eth0" netns="/var/run/netns/cni-df7c1a34-ac7c-6a6c-269b-bda1a0d33501" Sep 12 17:55:40.573895 containerd[1819]: 2025-09-12 17:55:40.520 [INFO][4692] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="bf431ee9e5c083308abc4502a5513996d6af5612617ff83dd1c658460898efde" Sep 12 17:55:40.573895 containerd[1819]: 2025-09-12 17:55:40.520 [INFO][4692] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bf431ee9e5c083308abc4502a5513996d6af5612617ff83dd1c658460898efde" Sep 12 17:55:40.573895 containerd[1819]: 2025-09-12 17:55:40.562 [INFO][4721] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="bf431ee9e5c083308abc4502a5513996d6af5612617ff83dd1c658460898efde" HandleID="k8s-pod-network.bf431ee9e5c083308abc4502a5513996d6af5612617ff83dd1c658460898efde" Workload="ci--4081.3.6--a--7e79e463ed-k8s-whisker--5b8df47b84--wskdw-eth0" Sep 12 17:55:40.573895 containerd[1819]: 2025-09-12 17:55:40.562 [INFO][4721] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:55:40.573895 containerd[1819]: 2025-09-12 17:55:40.562 [INFO][4721] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:55:40.573895 containerd[1819]: 2025-09-12 17:55:40.568 [WARNING][4721] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="bf431ee9e5c083308abc4502a5513996d6af5612617ff83dd1c658460898efde" HandleID="k8s-pod-network.bf431ee9e5c083308abc4502a5513996d6af5612617ff83dd1c658460898efde" Workload="ci--4081.3.6--a--7e79e463ed-k8s-whisker--5b8df47b84--wskdw-eth0" Sep 12 17:55:40.573895 containerd[1819]: 2025-09-12 17:55:40.568 [INFO][4721] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="bf431ee9e5c083308abc4502a5513996d6af5612617ff83dd1c658460898efde" HandleID="k8s-pod-network.bf431ee9e5c083308abc4502a5513996d6af5612617ff83dd1c658460898efde" Workload="ci--4081.3.6--a--7e79e463ed-k8s-whisker--5b8df47b84--wskdw-eth0" Sep 12 17:55:40.573895 containerd[1819]: 2025-09-12 17:55:40.569 [INFO][4721] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:55:40.573895 containerd[1819]: 2025-09-12 17:55:40.572 [INFO][4692] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="bf431ee9e5c083308abc4502a5513996d6af5612617ff83dd1c658460898efde" Sep 12 17:55:40.574350 containerd[1819]: time="2025-09-12T17:55:40.573938583Z" level=info msg="TearDown network for sandbox \"bf431ee9e5c083308abc4502a5513996d6af5612617ff83dd1c658460898efde\" successfully" Sep 12 17:55:40.574350 containerd[1819]: time="2025-09-12T17:55:40.573966240Z" level=info msg="StopPodSandbox for \"bf431ee9e5c083308abc4502a5513996d6af5612617ff83dd1c658460898efde\" returns successfully" Sep 12 17:55:40.778720 kubelet[3070]: I0912 17:55:40.778607 3070 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b34914f4-887a-4ea2-b72a-10c982892d18-whisker-ca-bundle\") pod \"b34914f4-887a-4ea2-b72a-10c982892d18\" (UID: \"b34914f4-887a-4ea2-b72a-10c982892d18\") " Sep 12 17:55:40.779596 kubelet[3070]: I0912 17:55:40.778776 3070 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: 
\"kubernetes.io/secret/b34914f4-887a-4ea2-b72a-10c982892d18-whisker-backend-key-pair\") pod \"b34914f4-887a-4ea2-b72a-10c982892d18\" (UID: \"b34914f4-887a-4ea2-b72a-10c982892d18\") " Sep 12 17:55:40.779596 kubelet[3070]: I0912 17:55:40.778857 3070 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zmvb4\" (UniqueName: \"kubernetes.io/projected/b34914f4-887a-4ea2-b72a-10c982892d18-kube-api-access-zmvb4\") pod \"b34914f4-887a-4ea2-b72a-10c982892d18\" (UID: \"b34914f4-887a-4ea2-b72a-10c982892d18\") " Sep 12 17:55:40.779806 kubelet[3070]: I0912 17:55:40.779713 3070 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b34914f4-887a-4ea2-b72a-10c982892d18-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "b34914f4-887a-4ea2-b72a-10c982892d18" (UID: "b34914f4-887a-4ea2-b72a-10c982892d18"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 12 17:55:40.784827 kubelet[3070]: I0912 17:55:40.784709 3070 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b34914f4-887a-4ea2-b72a-10c982892d18-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "b34914f4-887a-4ea2-b72a-10c982892d18" (UID: "b34914f4-887a-4ea2-b72a-10c982892d18"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 12 17:55:40.785033 kubelet[3070]: I0912 17:55:40.784826 3070 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b34914f4-887a-4ea2-b72a-10c982892d18-kube-api-access-zmvb4" (OuterVolumeSpecName: "kube-api-access-zmvb4") pod "b34914f4-887a-4ea2-b72a-10c982892d18" (UID: "b34914f4-887a-4ea2-b72a-10c982892d18"). InnerVolumeSpecName "kube-api-access-zmvb4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 12 17:55:40.879835 kubelet[3070]: I0912 17:55:40.879587 3070 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b34914f4-887a-4ea2-b72a-10c982892d18-whisker-ca-bundle\") on node \"ci-4081.3.6-a-7e79e463ed\" DevicePath \"\"" Sep 12 17:55:40.879835 kubelet[3070]: I0912 17:55:40.879669 3070 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/b34914f4-887a-4ea2-b72a-10c982892d18-whisker-backend-key-pair\") on node \"ci-4081.3.6-a-7e79e463ed\" DevicePath \"\"" Sep 12 17:55:40.879835 kubelet[3070]: I0912 17:55:40.879706 3070 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zmvb4\" (UniqueName: \"kubernetes.io/projected/b34914f4-887a-4ea2-b72a-10c982892d18-kube-api-access-zmvb4\") on node \"ci-4081.3.6-a-7e79e463ed\" DevicePath \"\"" Sep 12 17:55:40.970986 systemd[1]: Removed slice kubepods-besteffort-podb34914f4_887a_4ea2_b72a_10c982892d18.slice - libcontainer container kubepods-besteffort-podb34914f4_887a_4ea2_b72a_10c982892d18.slice. Sep 12 17:55:41.065911 kubelet[3070]: I0912 17:55:41.065880 3070 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-s9slr" podStartSLOduration=1.8732080020000001 podStartE2EDuration="15.065868422s" podCreationTimestamp="2025-09-12 17:55:26 +0000 UTC" firstStartedPulling="2025-09-12 17:55:27.147933311 +0000 UTC m=+16.231876650" lastFinishedPulling="2025-09-12 17:55:40.340593729 +0000 UTC m=+29.424537070" observedRunningTime="2025-09-12 17:55:41.065499226 +0000 UTC m=+30.149442569" watchObservedRunningTime="2025-09-12 17:55:41.065868422 +0000 UTC m=+30.149811761" Sep 12 17:55:41.116271 systemd[1]: Created slice kubepods-besteffort-pod21afbe59_321c_466d_bae5_b0878b5aaf7c.slice - libcontainer container kubepods-besteffort-pod21afbe59_321c_466d_bae5_b0878b5aaf7c.slice. 
Sep 12 17:55:41.283651 kubelet[3070]: I0912 17:55:41.283543 3070 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/21afbe59-321c-466d-bae5-b0878b5aaf7c-whisker-ca-bundle\") pod \"whisker-955d99699-gshfb\" (UID: \"21afbe59-321c-466d-bae5-b0878b5aaf7c\") " pod="calico-system/whisker-955d99699-gshfb" Sep 12 17:55:41.283651 kubelet[3070]: I0912 17:55:41.283649 3070 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/21afbe59-321c-466d-bae5-b0878b5aaf7c-whisker-backend-key-pair\") pod \"whisker-955d99699-gshfb\" (UID: \"21afbe59-321c-466d-bae5-b0878b5aaf7c\") " pod="calico-system/whisker-955d99699-gshfb" Sep 12 17:55:41.284079 kubelet[3070]: I0912 17:55:41.283709 3070 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tcbnf\" (UniqueName: \"kubernetes.io/projected/21afbe59-321c-466d-bae5-b0878b5aaf7c-kube-api-access-tcbnf\") pod \"whisker-955d99699-gshfb\" (UID: \"21afbe59-321c-466d-bae5-b0878b5aaf7c\") " pod="calico-system/whisker-955d99699-gshfb" Sep 12 17:55:41.329475 systemd[1]: run-netns-cni\x2ddf7c1a34\x2dac7c\x2d6a6c\x2d269b\x2dbda1a0d33501.mount: Deactivated successfully. Sep 12 17:55:41.329706 systemd[1]: var-lib-kubelet-pods-b34914f4\x2d887a\x2d4ea2\x2db72a\x2d10c982892d18-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dzmvb4.mount: Deactivated successfully. Sep 12 17:55:41.329899 systemd[1]: var-lib-kubelet-pods-b34914f4\x2d887a\x2d4ea2\x2db72a\x2d10c982892d18-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Sep 12 17:55:41.420845 containerd[1819]: time="2025-09-12T17:55:41.420766084Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-955d99699-gshfb,Uid:21afbe59-321c-466d-bae5-b0878b5aaf7c,Namespace:calico-system,Attempt:0,}" Sep 12 17:55:41.485284 systemd-networkd[1617]: cali1ea50686a90: Link UP Sep 12 17:55:41.485445 systemd-networkd[1617]: cali1ea50686a90: Gained carrier Sep 12 17:55:41.492911 containerd[1819]: 2025-09-12 17:55:41.437 [INFO][4754] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 12 17:55:41.492911 containerd[1819]: 2025-09-12 17:55:41.443 [INFO][4754] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--a--7e79e463ed-k8s-whisker--955d99699--gshfb-eth0 whisker-955d99699- calico-system 21afbe59-321c-466d-bae5-b0878b5aaf7c 858 0 2025-09-12 17:55:41 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:955d99699 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4081.3.6-a-7e79e463ed whisker-955d99699-gshfb eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali1ea50686a90 [] [] }} ContainerID="07068d2b5a209aa48ed1480b83b87460de77e30a5288b0ae978ca2613b468521" Namespace="calico-system" Pod="whisker-955d99699-gshfb" WorkloadEndpoint="ci--4081.3.6--a--7e79e463ed-k8s-whisker--955d99699--gshfb-" Sep 12 17:55:41.492911 containerd[1819]: 2025-09-12 17:55:41.443 [INFO][4754] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="07068d2b5a209aa48ed1480b83b87460de77e30a5288b0ae978ca2613b468521" Namespace="calico-system" Pod="whisker-955d99699-gshfb" WorkloadEndpoint="ci--4081.3.6--a--7e79e463ed-k8s-whisker--955d99699--gshfb-eth0" Sep 12 17:55:41.492911 containerd[1819]: 2025-09-12 17:55:41.455 [INFO][4778] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="07068d2b5a209aa48ed1480b83b87460de77e30a5288b0ae978ca2613b468521" HandleID="k8s-pod-network.07068d2b5a209aa48ed1480b83b87460de77e30a5288b0ae978ca2613b468521" Workload="ci--4081.3.6--a--7e79e463ed-k8s-whisker--955d99699--gshfb-eth0" Sep 12 17:55:41.492911 containerd[1819]: 2025-09-12 17:55:41.455 [INFO][4778] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="07068d2b5a209aa48ed1480b83b87460de77e30a5288b0ae978ca2613b468521" HandleID="k8s-pod-network.07068d2b5a209aa48ed1480b83b87460de77e30a5288b0ae978ca2613b468521" Workload="ci--4081.3.6--a--7e79e463ed-k8s-whisker--955d99699--gshfb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00044f310), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-a-7e79e463ed", "pod":"whisker-955d99699-gshfb", "timestamp":"2025-09-12 17:55:41.455830103 +0000 UTC"}, Hostname:"ci-4081.3.6-a-7e79e463ed", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:55:41.492911 containerd[1819]: 2025-09-12 17:55:41.455 [INFO][4778] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:55:41.492911 containerd[1819]: 2025-09-12 17:55:41.455 [INFO][4778] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 17:55:41.492911 containerd[1819]: 2025-09-12 17:55:41.455 [INFO][4778] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-a-7e79e463ed' Sep 12 17:55:41.492911 containerd[1819]: 2025-09-12 17:55:41.459 [INFO][4778] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.07068d2b5a209aa48ed1480b83b87460de77e30a5288b0ae978ca2613b468521" host="ci-4081.3.6-a-7e79e463ed" Sep 12 17:55:41.492911 containerd[1819]: 2025-09-12 17:55:41.462 [INFO][4778] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-a-7e79e463ed" Sep 12 17:55:41.492911 containerd[1819]: 2025-09-12 17:55:41.465 [INFO][4778] ipam/ipam.go 511: Trying affinity for 192.168.18.64/26 host="ci-4081.3.6-a-7e79e463ed" Sep 12 17:55:41.492911 containerd[1819]: 2025-09-12 17:55:41.466 [INFO][4778] ipam/ipam.go 158: Attempting to load block cidr=192.168.18.64/26 host="ci-4081.3.6-a-7e79e463ed" Sep 12 17:55:41.492911 containerd[1819]: 2025-09-12 17:55:41.467 [INFO][4778] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.18.64/26 host="ci-4081.3.6-a-7e79e463ed" Sep 12 17:55:41.492911 containerd[1819]: 2025-09-12 17:55:41.467 [INFO][4778] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.18.64/26 handle="k8s-pod-network.07068d2b5a209aa48ed1480b83b87460de77e30a5288b0ae978ca2613b468521" host="ci-4081.3.6-a-7e79e463ed" Sep 12 17:55:41.492911 containerd[1819]: 2025-09-12 17:55:41.468 [INFO][4778] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.07068d2b5a209aa48ed1480b83b87460de77e30a5288b0ae978ca2613b468521 Sep 12 17:55:41.492911 containerd[1819]: 2025-09-12 17:55:41.475 [INFO][4778] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.18.64/26 handle="k8s-pod-network.07068d2b5a209aa48ed1480b83b87460de77e30a5288b0ae978ca2613b468521" host="ci-4081.3.6-a-7e79e463ed" Sep 12 17:55:41.492911 containerd[1819]: 2025-09-12 17:55:41.478 [INFO][4778] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.18.65/26] block=192.168.18.64/26 handle="k8s-pod-network.07068d2b5a209aa48ed1480b83b87460de77e30a5288b0ae978ca2613b468521" host="ci-4081.3.6-a-7e79e463ed" Sep 12 17:55:41.492911 containerd[1819]: 2025-09-12 17:55:41.478 [INFO][4778] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.18.65/26] handle="k8s-pod-network.07068d2b5a209aa48ed1480b83b87460de77e30a5288b0ae978ca2613b468521" host="ci-4081.3.6-a-7e79e463ed" Sep 12 17:55:41.492911 containerd[1819]: 2025-09-12 17:55:41.478 [INFO][4778] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:55:41.492911 containerd[1819]: 2025-09-12 17:55:41.478 [INFO][4778] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.18.65/26] IPv6=[] ContainerID="07068d2b5a209aa48ed1480b83b87460de77e30a5288b0ae978ca2613b468521" HandleID="k8s-pod-network.07068d2b5a209aa48ed1480b83b87460de77e30a5288b0ae978ca2613b468521" Workload="ci--4081.3.6--a--7e79e463ed-k8s-whisker--955d99699--gshfb-eth0" Sep 12 17:55:41.493410 containerd[1819]: 2025-09-12 17:55:41.480 [INFO][4754] cni-plugin/k8s.go 418: Populated endpoint ContainerID="07068d2b5a209aa48ed1480b83b87460de77e30a5288b0ae978ca2613b468521" Namespace="calico-system" Pod="whisker-955d99699-gshfb" WorkloadEndpoint="ci--4081.3.6--a--7e79e463ed-k8s-whisker--955d99699--gshfb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--a--7e79e463ed-k8s-whisker--955d99699--gshfb-eth0", GenerateName:"whisker-955d99699-", Namespace:"calico-system", SelfLink:"", UID:"21afbe59-321c-466d-bae5-b0878b5aaf7c", ResourceVersion:"858", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 55, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"955d99699", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-a-7e79e463ed", ContainerID:"", Pod:"whisker-955d99699-gshfb", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.18.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali1ea50686a90", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:55:41.493410 containerd[1819]: 2025-09-12 17:55:41.480 [INFO][4754] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.18.65/32] ContainerID="07068d2b5a209aa48ed1480b83b87460de77e30a5288b0ae978ca2613b468521" Namespace="calico-system" Pod="whisker-955d99699-gshfb" WorkloadEndpoint="ci--4081.3.6--a--7e79e463ed-k8s-whisker--955d99699--gshfb-eth0" Sep 12 17:55:41.493410 containerd[1819]: 2025-09-12 17:55:41.480 [INFO][4754] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1ea50686a90 ContainerID="07068d2b5a209aa48ed1480b83b87460de77e30a5288b0ae978ca2613b468521" Namespace="calico-system" Pod="whisker-955d99699-gshfb" WorkloadEndpoint="ci--4081.3.6--a--7e79e463ed-k8s-whisker--955d99699--gshfb-eth0" Sep 12 17:55:41.493410 containerd[1819]: 2025-09-12 17:55:41.485 [INFO][4754] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="07068d2b5a209aa48ed1480b83b87460de77e30a5288b0ae978ca2613b468521" Namespace="calico-system" Pod="whisker-955d99699-gshfb" WorkloadEndpoint="ci--4081.3.6--a--7e79e463ed-k8s-whisker--955d99699--gshfb-eth0" Sep 12 17:55:41.493410 containerd[1819]: 2025-09-12 17:55:41.485 [INFO][4754] cni-plugin/k8s.go 446: Added 
Mac, interface name, and active container ID to endpoint ContainerID="07068d2b5a209aa48ed1480b83b87460de77e30a5288b0ae978ca2613b468521" Namespace="calico-system" Pod="whisker-955d99699-gshfb" WorkloadEndpoint="ci--4081.3.6--a--7e79e463ed-k8s-whisker--955d99699--gshfb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--a--7e79e463ed-k8s-whisker--955d99699--gshfb-eth0", GenerateName:"whisker-955d99699-", Namespace:"calico-system", SelfLink:"", UID:"21afbe59-321c-466d-bae5-b0878b5aaf7c", ResourceVersion:"858", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 55, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"955d99699", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-a-7e79e463ed", ContainerID:"07068d2b5a209aa48ed1480b83b87460de77e30a5288b0ae978ca2613b468521", Pod:"whisker-955d99699-gshfb", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.18.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali1ea50686a90", MAC:"22:7c:84:cb:e7:ad", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:55:41.493410 containerd[1819]: 2025-09-12 17:55:41.491 [INFO][4754] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="07068d2b5a209aa48ed1480b83b87460de77e30a5288b0ae978ca2613b468521" 
Namespace="calico-system" Pod="whisker-955d99699-gshfb" WorkloadEndpoint="ci--4081.3.6--a--7e79e463ed-k8s-whisker--955d99699--gshfb-eth0" Sep 12 17:55:41.501654 containerd[1819]: time="2025-09-12T17:55:41.501563124Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:55:41.501654 containerd[1819]: time="2025-09-12T17:55:41.501615111Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:55:41.501654 containerd[1819]: time="2025-09-12T17:55:41.501622745Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:55:41.501790 containerd[1819]: time="2025-09-12T17:55:41.501668555Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:55:41.528934 systemd[1]: Started cri-containerd-07068d2b5a209aa48ed1480b83b87460de77e30a5288b0ae978ca2613b468521.scope - libcontainer container 07068d2b5a209aa48ed1480b83b87460de77e30a5288b0ae978ca2613b468521. 
Sep 12 17:55:41.600669 containerd[1819]: time="2025-09-12T17:55:41.600642379Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-955d99699-gshfb,Uid:21afbe59-321c-466d-bae5-b0878b5aaf7c,Namespace:calico-system,Attempt:0,} returns sandbox id \"07068d2b5a209aa48ed1480b83b87460de77e30a5288b0ae978ca2613b468521\"" Sep 12 17:55:41.601499 containerd[1819]: time="2025-09-12T17:55:41.601483094Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\"" Sep 12 17:55:41.689523 kernel: bpftool[4993]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Sep 12 17:55:41.847657 systemd-networkd[1617]: vxlan.calico: Link UP Sep 12 17:55:41.847661 systemd-networkd[1617]: vxlan.calico: Gained carrier Sep 12 17:55:42.941783 systemd-networkd[1617]: cali1ea50686a90: Gained IPv6LL Sep 12 17:55:42.966039 kubelet[3070]: I0912 17:55:42.965957 3070 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b34914f4-887a-4ea2-b72a-10c982892d18" path="/var/lib/kubelet/pods/b34914f4-887a-4ea2-b72a-10c982892d18/volumes" Sep 12 17:55:43.261642 systemd-networkd[1617]: vxlan.calico: Gained IPv6LL Sep 12 17:55:43.313663 containerd[1819]: time="2025-09-12T17:55:43.313641282Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:55:43.313908 containerd[1819]: time="2025-09-12T17:55:43.313886530Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.3: active requests=0, bytes read=4661291" Sep 12 17:55:43.314229 containerd[1819]: time="2025-09-12T17:55:43.314219460Z" level=info msg="ImageCreate event name:\"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:55:43.315324 containerd[1819]: time="2025-09-12T17:55:43.315312295Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:55:43.315794 containerd[1819]: time="2025-09-12T17:55:43.315756511Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.3\" with image id \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\", size \"6153986\" in 1.714249455s" Sep 12 17:55:43.315794 containerd[1819]: time="2025-09-12T17:55:43.315773516Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\"" Sep 12 17:55:43.316764 containerd[1819]: time="2025-09-12T17:55:43.316754156Z" level=info msg="CreateContainer within sandbox \"07068d2b5a209aa48ed1480b83b87460de77e30a5288b0ae978ca2613b468521\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Sep 12 17:55:43.320982 containerd[1819]: time="2025-09-12T17:55:43.320938911Z" level=info msg="CreateContainer within sandbox \"07068d2b5a209aa48ed1480b83b87460de77e30a5288b0ae978ca2613b468521\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"bf342ac25a705b7260a64279137be5146d6f039b54009bbc2bb6c9cf7cd6b0c6\"" Sep 12 17:55:43.321266 containerd[1819]: time="2025-09-12T17:55:43.321250290Z" level=info msg="StartContainer for \"bf342ac25a705b7260a64279137be5146d6f039b54009bbc2bb6c9cf7cd6b0c6\"" Sep 12 17:55:43.350614 systemd[1]: Started cri-containerd-bf342ac25a705b7260a64279137be5146d6f039b54009bbc2bb6c9cf7cd6b0c6.scope - libcontainer container bf342ac25a705b7260a64279137be5146d6f039b54009bbc2bb6c9cf7cd6b0c6. 
Sep 12 17:55:43.380781 containerd[1819]: time="2025-09-12T17:55:43.380732329Z" level=info msg="StartContainer for \"bf342ac25a705b7260a64279137be5146d6f039b54009bbc2bb6c9cf7cd6b0c6\" returns successfully" Sep 12 17:55:43.381387 containerd[1819]: time="2025-09-12T17:55:43.381369009Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\"" Sep 12 17:55:45.415539 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4107216631.mount: Deactivated successfully. Sep 12 17:55:45.420043 containerd[1819]: time="2025-09-12T17:55:45.419994644Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:55:45.420261 containerd[1819]: time="2025-09-12T17:55:45.420218071Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.3: active requests=0, bytes read=33085545" Sep 12 17:55:45.420628 containerd[1819]: time="2025-09-12T17:55:45.420587746Z" level=info msg="ImageCreate event name:\"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:55:45.421651 containerd[1819]: time="2025-09-12T17:55:45.421611888Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:55:45.422424 containerd[1819]: time="2025-09-12T17:55:45.422384226Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" with image id \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\", size \"33085375\" in 2.040994972s" Sep 12 17:55:45.422424 containerd[1819]: 
time="2025-09-12T17:55:45.422399129Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\"" Sep 12 17:55:45.423355 containerd[1819]: time="2025-09-12T17:55:45.423311696Z" level=info msg="CreateContainer within sandbox \"07068d2b5a209aa48ed1480b83b87460de77e30a5288b0ae978ca2613b468521\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Sep 12 17:55:45.427242 containerd[1819]: time="2025-09-12T17:55:45.427191783Z" level=info msg="CreateContainer within sandbox \"07068d2b5a209aa48ed1480b83b87460de77e30a5288b0ae978ca2613b468521\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"0f0ba5050dee0f380facfbeea0cb6ca7107112d4c116de359c9ae140a89f1c34\"" Sep 12 17:55:45.427426 containerd[1819]: time="2025-09-12T17:55:45.427414850Z" level=info msg="StartContainer for \"0f0ba5050dee0f380facfbeea0cb6ca7107112d4c116de359c9ae140a89f1c34\"" Sep 12 17:55:45.459731 systemd[1]: Started cri-containerd-0f0ba5050dee0f380facfbeea0cb6ca7107112d4c116de359c9ae140a89f1c34.scope - libcontainer container 0f0ba5050dee0f380facfbeea0cb6ca7107112d4c116de359c9ae140a89f1c34. 
Sep 12 17:55:45.495204 containerd[1819]: time="2025-09-12T17:55:45.495170470Z" level=info msg="StartContainer for \"0f0ba5050dee0f380facfbeea0cb6ca7107112d4c116de359c9ae140a89f1c34\" returns successfully" Sep 12 17:55:46.085031 kubelet[3070]: I0912 17:55:46.084992 3070 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-955d99699-gshfb" podStartSLOduration=1.263587182 podStartE2EDuration="5.084979529s" podCreationTimestamp="2025-09-12 17:55:41 +0000 UTC" firstStartedPulling="2025-09-12 17:55:41.601317363 +0000 UTC m=+30.685260709" lastFinishedPulling="2025-09-12 17:55:45.422709716 +0000 UTC m=+34.506653056" observedRunningTime="2025-09-12 17:55:46.084781207 +0000 UTC m=+35.168724556" watchObservedRunningTime="2025-09-12 17:55:46.084979529 +0000 UTC m=+35.168922871" Sep 12 17:55:48.960217 containerd[1819]: time="2025-09-12T17:55:48.960125515Z" level=info msg="StopPodSandbox for \"4e77419d6912559c146407cf9873996d588d29dba4ea9a9c58375cd81c0883e5\"" Sep 12 17:55:49.056151 containerd[1819]: 2025-09-12 17:55:49.027 [INFO][5302] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4e77419d6912559c146407cf9873996d588d29dba4ea9a9c58375cd81c0883e5" Sep 12 17:55:49.056151 containerd[1819]: 2025-09-12 17:55:49.027 [INFO][5302] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="4e77419d6912559c146407cf9873996d588d29dba4ea9a9c58375cd81c0883e5" iface="eth0" netns="/var/run/netns/cni-9fa16b2e-4a02-facb-06c5-5df645891164" Sep 12 17:55:49.056151 containerd[1819]: 2025-09-12 17:55:49.027 [INFO][5302] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="4e77419d6912559c146407cf9873996d588d29dba4ea9a9c58375cd81c0883e5" iface="eth0" netns="/var/run/netns/cni-9fa16b2e-4a02-facb-06c5-5df645891164" Sep 12 17:55:49.056151 containerd[1819]: 2025-09-12 17:55:49.028 [INFO][5302] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="4e77419d6912559c146407cf9873996d588d29dba4ea9a9c58375cd81c0883e5" iface="eth0" netns="/var/run/netns/cni-9fa16b2e-4a02-facb-06c5-5df645891164" Sep 12 17:55:49.056151 containerd[1819]: 2025-09-12 17:55:49.028 [INFO][5302] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4e77419d6912559c146407cf9873996d588d29dba4ea9a9c58375cd81c0883e5" Sep 12 17:55:49.056151 containerd[1819]: 2025-09-12 17:55:49.028 [INFO][5302] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4e77419d6912559c146407cf9873996d588d29dba4ea9a9c58375cd81c0883e5" Sep 12 17:55:49.056151 containerd[1819]: 2025-09-12 17:55:49.045 [INFO][5319] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4e77419d6912559c146407cf9873996d588d29dba4ea9a9c58375cd81c0883e5" HandleID="k8s-pod-network.4e77419d6912559c146407cf9873996d588d29dba4ea9a9c58375cd81c0883e5" Workload="ci--4081.3.6--a--7e79e463ed-k8s-coredns--7c65d6cfc9--nkll2-eth0" Sep 12 17:55:49.056151 containerd[1819]: 2025-09-12 17:55:49.045 [INFO][5319] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:55:49.056151 containerd[1819]: 2025-09-12 17:55:49.045 [INFO][5319] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:55:49.056151 containerd[1819]: 2025-09-12 17:55:49.051 [WARNING][5319] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4e77419d6912559c146407cf9873996d588d29dba4ea9a9c58375cd81c0883e5" HandleID="k8s-pod-network.4e77419d6912559c146407cf9873996d588d29dba4ea9a9c58375cd81c0883e5" Workload="ci--4081.3.6--a--7e79e463ed-k8s-coredns--7c65d6cfc9--nkll2-eth0" Sep 12 17:55:49.056151 containerd[1819]: 2025-09-12 17:55:49.052 [INFO][5319] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4e77419d6912559c146407cf9873996d588d29dba4ea9a9c58375cd81c0883e5" HandleID="k8s-pod-network.4e77419d6912559c146407cf9873996d588d29dba4ea9a9c58375cd81c0883e5" Workload="ci--4081.3.6--a--7e79e463ed-k8s-coredns--7c65d6cfc9--nkll2-eth0" Sep 12 17:55:49.056151 containerd[1819]: 2025-09-12 17:55:49.053 [INFO][5319] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:55:49.056151 containerd[1819]: 2025-09-12 17:55:49.054 [INFO][5302] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4e77419d6912559c146407cf9873996d588d29dba4ea9a9c58375cd81c0883e5" Sep 12 17:55:49.056976 containerd[1819]: time="2025-09-12T17:55:49.056249369Z" level=info msg="TearDown network for sandbox \"4e77419d6912559c146407cf9873996d588d29dba4ea9a9c58375cd81c0883e5\" successfully" Sep 12 17:55:49.056976 containerd[1819]: time="2025-09-12T17:55:49.056277476Z" level=info msg="StopPodSandbox for \"4e77419d6912559c146407cf9873996d588d29dba4ea9a9c58375cd81c0883e5\" returns successfully" Sep 12 17:55:49.056976 containerd[1819]: time="2025-09-12T17:55:49.056818893Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-nkll2,Uid:99abd614-f027-426b-a5d7-84601fcd4b39,Namespace:kube-system,Attempt:1,}" Sep 12 17:55:49.058263 systemd[1]: run-netns-cni\x2d9fa16b2e\x2d4a02\x2dfacb\x2d06c5\x2d5df645891164.mount: Deactivated successfully. 
Sep 12 17:55:49.106222 systemd-networkd[1617]: cali2efe8f351db: Link UP Sep 12 17:55:49.106374 systemd-networkd[1617]: cali2efe8f351db: Gained carrier Sep 12 17:55:49.112500 containerd[1819]: 2025-09-12 17:55:49.076 [INFO][5335] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--a--7e79e463ed-k8s-coredns--7c65d6cfc9--nkll2-eth0 coredns-7c65d6cfc9- kube-system 99abd614-f027-426b-a5d7-84601fcd4b39 898 0 2025-09-12 17:55:16 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.6-a-7e79e463ed coredns-7c65d6cfc9-nkll2 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali2efe8f351db [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="13c3dc06f11a28e170faf61abade719dd077d19faa16e2556536c1ef617c5b03" Namespace="kube-system" Pod="coredns-7c65d6cfc9-nkll2" WorkloadEndpoint="ci--4081.3.6--a--7e79e463ed-k8s-coredns--7c65d6cfc9--nkll2-" Sep 12 17:55:49.112500 containerd[1819]: 2025-09-12 17:55:49.076 [INFO][5335] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="13c3dc06f11a28e170faf61abade719dd077d19faa16e2556536c1ef617c5b03" Namespace="kube-system" Pod="coredns-7c65d6cfc9-nkll2" WorkloadEndpoint="ci--4081.3.6--a--7e79e463ed-k8s-coredns--7c65d6cfc9--nkll2-eth0" Sep 12 17:55:49.112500 containerd[1819]: 2025-09-12 17:55:49.088 [INFO][5358] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="13c3dc06f11a28e170faf61abade719dd077d19faa16e2556536c1ef617c5b03" HandleID="k8s-pod-network.13c3dc06f11a28e170faf61abade719dd077d19faa16e2556536c1ef617c5b03" Workload="ci--4081.3.6--a--7e79e463ed-k8s-coredns--7c65d6cfc9--nkll2-eth0" Sep 12 17:55:49.112500 containerd[1819]: 2025-09-12 17:55:49.088 [INFO][5358] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="13c3dc06f11a28e170faf61abade719dd077d19faa16e2556536c1ef617c5b03" HandleID="k8s-pod-network.13c3dc06f11a28e170faf61abade719dd077d19faa16e2556536c1ef617c5b03" Workload="ci--4081.3.6--a--7e79e463ed-k8s-coredns--7c65d6cfc9--nkll2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000121540), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.6-a-7e79e463ed", "pod":"coredns-7c65d6cfc9-nkll2", "timestamp":"2025-09-12 17:55:49.088658853 +0000 UTC"}, Hostname:"ci-4081.3.6-a-7e79e463ed", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:55:49.112500 containerd[1819]: 2025-09-12 17:55:49.088 [INFO][5358] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:55:49.112500 containerd[1819]: 2025-09-12 17:55:49.088 [INFO][5358] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 17:55:49.112500 containerd[1819]: 2025-09-12 17:55:49.088 [INFO][5358] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-a-7e79e463ed' Sep 12 17:55:49.112500 containerd[1819]: 2025-09-12 17:55:49.092 [INFO][5358] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.13c3dc06f11a28e170faf61abade719dd077d19faa16e2556536c1ef617c5b03" host="ci-4081.3.6-a-7e79e463ed" Sep 12 17:55:49.112500 containerd[1819]: 2025-09-12 17:55:49.094 [INFO][5358] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-a-7e79e463ed" Sep 12 17:55:49.112500 containerd[1819]: 2025-09-12 17:55:49.097 [INFO][5358] ipam/ipam.go 511: Trying affinity for 192.168.18.64/26 host="ci-4081.3.6-a-7e79e463ed" Sep 12 17:55:49.112500 containerd[1819]: 2025-09-12 17:55:49.098 [INFO][5358] ipam/ipam.go 158: Attempting to load block cidr=192.168.18.64/26 host="ci-4081.3.6-a-7e79e463ed" Sep 12 17:55:49.112500 containerd[1819]: 2025-09-12 17:55:49.099 [INFO][5358] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.18.64/26 host="ci-4081.3.6-a-7e79e463ed" Sep 12 17:55:49.112500 containerd[1819]: 2025-09-12 17:55:49.099 [INFO][5358] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.18.64/26 handle="k8s-pod-network.13c3dc06f11a28e170faf61abade719dd077d19faa16e2556536c1ef617c5b03" host="ci-4081.3.6-a-7e79e463ed" Sep 12 17:55:49.112500 containerd[1819]: 2025-09-12 17:55:49.100 [INFO][5358] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.13c3dc06f11a28e170faf61abade719dd077d19faa16e2556536c1ef617c5b03 Sep 12 17:55:49.112500 containerd[1819]: 2025-09-12 17:55:49.101 [INFO][5358] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.18.64/26 handle="k8s-pod-network.13c3dc06f11a28e170faf61abade719dd077d19faa16e2556536c1ef617c5b03" host="ci-4081.3.6-a-7e79e463ed" Sep 12 17:55:49.112500 containerd[1819]: 2025-09-12 17:55:49.104 [INFO][5358] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.18.66/26] block=192.168.18.64/26 handle="k8s-pod-network.13c3dc06f11a28e170faf61abade719dd077d19faa16e2556536c1ef617c5b03" host="ci-4081.3.6-a-7e79e463ed" Sep 12 17:55:49.112500 containerd[1819]: 2025-09-12 17:55:49.104 [INFO][5358] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.18.66/26] handle="k8s-pod-network.13c3dc06f11a28e170faf61abade719dd077d19faa16e2556536c1ef617c5b03" host="ci-4081.3.6-a-7e79e463ed" Sep 12 17:55:49.112500 containerd[1819]: 2025-09-12 17:55:49.104 [INFO][5358] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:55:49.112500 containerd[1819]: 2025-09-12 17:55:49.104 [INFO][5358] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.18.66/26] IPv6=[] ContainerID="13c3dc06f11a28e170faf61abade719dd077d19faa16e2556536c1ef617c5b03" HandleID="k8s-pod-network.13c3dc06f11a28e170faf61abade719dd077d19faa16e2556536c1ef617c5b03" Workload="ci--4081.3.6--a--7e79e463ed-k8s-coredns--7c65d6cfc9--nkll2-eth0" Sep 12 17:55:49.113106 containerd[1819]: 2025-09-12 17:55:49.105 [INFO][5335] cni-plugin/k8s.go 418: Populated endpoint ContainerID="13c3dc06f11a28e170faf61abade719dd077d19faa16e2556536c1ef617c5b03" Namespace="kube-system" Pod="coredns-7c65d6cfc9-nkll2" WorkloadEndpoint="ci--4081.3.6--a--7e79e463ed-k8s-coredns--7c65d6cfc9--nkll2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--a--7e79e463ed-k8s-coredns--7c65d6cfc9--nkll2-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"99abd614-f027-426b-a5d7-84601fcd4b39", ResourceVersion:"898", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 55, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-a-7e79e463ed", ContainerID:"", Pod:"coredns-7c65d6cfc9-nkll2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.18.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2efe8f351db", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:55:49.113106 containerd[1819]: 2025-09-12 17:55:49.105 [INFO][5335] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.18.66/32] ContainerID="13c3dc06f11a28e170faf61abade719dd077d19faa16e2556536c1ef617c5b03" Namespace="kube-system" Pod="coredns-7c65d6cfc9-nkll2" WorkloadEndpoint="ci--4081.3.6--a--7e79e463ed-k8s-coredns--7c65d6cfc9--nkll2-eth0" Sep 12 17:55:49.113106 containerd[1819]: 2025-09-12 17:55:49.105 [INFO][5335] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2efe8f351db ContainerID="13c3dc06f11a28e170faf61abade719dd077d19faa16e2556536c1ef617c5b03" Namespace="kube-system" Pod="coredns-7c65d6cfc9-nkll2" WorkloadEndpoint="ci--4081.3.6--a--7e79e463ed-k8s-coredns--7c65d6cfc9--nkll2-eth0" Sep 12 17:55:49.113106 containerd[1819]: 2025-09-12 17:55:49.106 [INFO][5335] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="13c3dc06f11a28e170faf61abade719dd077d19faa16e2556536c1ef617c5b03" Namespace="kube-system" Pod="coredns-7c65d6cfc9-nkll2" WorkloadEndpoint="ci--4081.3.6--a--7e79e463ed-k8s-coredns--7c65d6cfc9--nkll2-eth0" Sep 12 17:55:49.113106 containerd[1819]: 2025-09-12 17:55:49.106 [INFO][5335] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="13c3dc06f11a28e170faf61abade719dd077d19faa16e2556536c1ef617c5b03" Namespace="kube-system" Pod="coredns-7c65d6cfc9-nkll2" WorkloadEndpoint="ci--4081.3.6--a--7e79e463ed-k8s-coredns--7c65d6cfc9--nkll2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--a--7e79e463ed-k8s-coredns--7c65d6cfc9--nkll2-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"99abd614-f027-426b-a5d7-84601fcd4b39", ResourceVersion:"898", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 55, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-a-7e79e463ed", ContainerID:"13c3dc06f11a28e170faf61abade719dd077d19faa16e2556536c1ef617c5b03", Pod:"coredns-7c65d6cfc9-nkll2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.18.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2efe8f351db", 
MAC:"52:a9:2a:53:26:65", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:55:49.113106 containerd[1819]: 2025-09-12 17:55:49.111 [INFO][5335] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="13c3dc06f11a28e170faf61abade719dd077d19faa16e2556536c1ef617c5b03" Namespace="kube-system" Pod="coredns-7c65d6cfc9-nkll2" WorkloadEndpoint="ci--4081.3.6--a--7e79e463ed-k8s-coredns--7c65d6cfc9--nkll2-eth0" Sep 12 17:55:49.133747 containerd[1819]: time="2025-09-12T17:55:49.133703332Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:55:49.133970 containerd[1819]: time="2025-09-12T17:55:49.133954528Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:55:49.133991 containerd[1819]: time="2025-09-12T17:55:49.133971248Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:55:49.134025 containerd[1819]: time="2025-09-12T17:55:49.134015717Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:55:49.155626 systemd[1]: Started cri-containerd-13c3dc06f11a28e170faf61abade719dd077d19faa16e2556536c1ef617c5b03.scope - libcontainer container 13c3dc06f11a28e170faf61abade719dd077d19faa16e2556536c1ef617c5b03. 
Sep 12 17:55:49.185995 containerd[1819]: time="2025-09-12T17:55:49.185967505Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-nkll2,Uid:99abd614-f027-426b-a5d7-84601fcd4b39,Namespace:kube-system,Attempt:1,} returns sandbox id \"13c3dc06f11a28e170faf61abade719dd077d19faa16e2556536c1ef617c5b03\"" Sep 12 17:55:49.187567 containerd[1819]: time="2025-09-12T17:55:49.187517406Z" level=info msg="CreateContainer within sandbox \"13c3dc06f11a28e170faf61abade719dd077d19faa16e2556536c1ef617c5b03\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 12 17:55:49.192151 containerd[1819]: time="2025-09-12T17:55:49.192135987Z" level=info msg="CreateContainer within sandbox \"13c3dc06f11a28e170faf61abade719dd077d19faa16e2556536c1ef617c5b03\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"cdcfd8f841993f5531e44aaa6fe59e2dcf62286497d46b4088558270145ecc4c\"" Sep 12 17:55:49.192335 containerd[1819]: time="2025-09-12T17:55:49.192322489Z" level=info msg="StartContainer for \"cdcfd8f841993f5531e44aaa6fe59e2dcf62286497d46b4088558270145ecc4c\"" Sep 12 17:55:49.215638 systemd[1]: Started cri-containerd-cdcfd8f841993f5531e44aaa6fe59e2dcf62286497d46b4088558270145ecc4c.scope - libcontainer container cdcfd8f841993f5531e44aaa6fe59e2dcf62286497d46b4088558270145ecc4c. 
Sep 12 17:55:49.228445 containerd[1819]: time="2025-09-12T17:55:49.228412921Z" level=info msg="StartContainer for \"cdcfd8f841993f5531e44aaa6fe59e2dcf62286497d46b4088558270145ecc4c\" returns successfully" Sep 12 17:55:49.961069 containerd[1819]: time="2025-09-12T17:55:49.960945928Z" level=info msg="StopPodSandbox for \"d8b205dcbf77e1a4c353db85981bb8086e08cd6f925e60d2e60b158a00395104\"" Sep 12 17:55:50.010786 containerd[1819]: 2025-09-12 17:55:49.994 [INFO][5485] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d8b205dcbf77e1a4c353db85981bb8086e08cd6f925e60d2e60b158a00395104" Sep 12 17:55:50.010786 containerd[1819]: 2025-09-12 17:55:49.994 [INFO][5485] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d8b205dcbf77e1a4c353db85981bb8086e08cd6f925e60d2e60b158a00395104" iface="eth0" netns="/var/run/netns/cni-3bea40ac-57a4-75d3-b926-63a127830164" Sep 12 17:55:50.010786 containerd[1819]: 2025-09-12 17:55:49.994 [INFO][5485] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d8b205dcbf77e1a4c353db85981bb8086e08cd6f925e60d2e60b158a00395104" iface="eth0" netns="/var/run/netns/cni-3bea40ac-57a4-75d3-b926-63a127830164" Sep 12 17:55:50.010786 containerd[1819]: 2025-09-12 17:55:49.994 [INFO][5485] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="d8b205dcbf77e1a4c353db85981bb8086e08cd6f925e60d2e60b158a00395104" iface="eth0" netns="/var/run/netns/cni-3bea40ac-57a4-75d3-b926-63a127830164" Sep 12 17:55:50.010786 containerd[1819]: 2025-09-12 17:55:49.994 [INFO][5485] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d8b205dcbf77e1a4c353db85981bb8086e08cd6f925e60d2e60b158a00395104" Sep 12 17:55:50.010786 containerd[1819]: 2025-09-12 17:55:49.994 [INFO][5485] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d8b205dcbf77e1a4c353db85981bb8086e08cd6f925e60d2e60b158a00395104" Sep 12 17:55:50.010786 containerd[1819]: 2025-09-12 17:55:50.004 [INFO][5500] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d8b205dcbf77e1a4c353db85981bb8086e08cd6f925e60d2e60b158a00395104" HandleID="k8s-pod-network.d8b205dcbf77e1a4c353db85981bb8086e08cd6f925e60d2e60b158a00395104" Workload="ci--4081.3.6--a--7e79e463ed-k8s-calico--apiserver--79f6dbf598--7fz7j-eth0" Sep 12 17:55:50.010786 containerd[1819]: 2025-09-12 17:55:50.004 [INFO][5500] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:55:50.010786 containerd[1819]: 2025-09-12 17:55:50.004 [INFO][5500] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:55:50.010786 containerd[1819]: 2025-09-12 17:55:50.008 [WARNING][5500] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d8b205dcbf77e1a4c353db85981bb8086e08cd6f925e60d2e60b158a00395104" HandleID="k8s-pod-network.d8b205dcbf77e1a4c353db85981bb8086e08cd6f925e60d2e60b158a00395104" Workload="ci--4081.3.6--a--7e79e463ed-k8s-calico--apiserver--79f6dbf598--7fz7j-eth0" Sep 12 17:55:50.010786 containerd[1819]: 2025-09-12 17:55:50.008 [INFO][5500] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d8b205dcbf77e1a4c353db85981bb8086e08cd6f925e60d2e60b158a00395104" HandleID="k8s-pod-network.d8b205dcbf77e1a4c353db85981bb8086e08cd6f925e60d2e60b158a00395104" Workload="ci--4081.3.6--a--7e79e463ed-k8s-calico--apiserver--79f6dbf598--7fz7j-eth0" Sep 12 17:55:50.010786 containerd[1819]: 2025-09-12 17:55:50.009 [INFO][5500] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:55:50.010786 containerd[1819]: 2025-09-12 17:55:50.009 [INFO][5485] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d8b205dcbf77e1a4c353db85981bb8086e08cd6f925e60d2e60b158a00395104" Sep 12 17:55:50.011101 containerd[1819]: time="2025-09-12T17:55:50.010840662Z" level=info msg="TearDown network for sandbox \"d8b205dcbf77e1a4c353db85981bb8086e08cd6f925e60d2e60b158a00395104\" successfully" Sep 12 17:55:50.011101 containerd[1819]: time="2025-09-12T17:55:50.010855881Z" level=info msg="StopPodSandbox for \"d8b205dcbf77e1a4c353db85981bb8086e08cd6f925e60d2e60b158a00395104\" returns successfully" Sep 12 17:55:50.011368 containerd[1819]: time="2025-09-12T17:55:50.011337228Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-79f6dbf598-7fz7j,Uid:a444f57a-b1d6-4798-858d-e3a3c511da85,Namespace:calico-apiserver,Attempt:1,}" Sep 12 17:55:50.059632 systemd[1]: run-netns-cni\x2d3bea40ac\x2d57a4\x2d75d3\x2db926\x2d63a127830164.mount: Deactivated successfully. 
Sep 12 17:55:50.070347 systemd-networkd[1617]: calidb9e57173dd: Link UP Sep 12 17:55:50.070559 systemd-networkd[1617]: calidb9e57173dd: Gained carrier Sep 12 17:55:50.077910 containerd[1819]: 2025-09-12 17:55:50.033 [INFO][5516] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--a--7e79e463ed-k8s-calico--apiserver--79f6dbf598--7fz7j-eth0 calico-apiserver-79f6dbf598- calico-apiserver a444f57a-b1d6-4798-858d-e3a3c511da85 908 0 2025-09-12 17:55:24 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:79f6dbf598 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.6-a-7e79e463ed calico-apiserver-79f6dbf598-7fz7j eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calidb9e57173dd [] [] }} ContainerID="1ff0307bc64a59cf0c646ce05a9723db8596739d1b9bc99c9cd977876f5026aa" Namespace="calico-apiserver" Pod="calico-apiserver-79f6dbf598-7fz7j" WorkloadEndpoint="ci--4081.3.6--a--7e79e463ed-k8s-calico--apiserver--79f6dbf598--7fz7j-" Sep 12 17:55:50.077910 containerd[1819]: 2025-09-12 17:55:50.033 [INFO][5516] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1ff0307bc64a59cf0c646ce05a9723db8596739d1b9bc99c9cd977876f5026aa" Namespace="calico-apiserver" Pod="calico-apiserver-79f6dbf598-7fz7j" WorkloadEndpoint="ci--4081.3.6--a--7e79e463ed-k8s-calico--apiserver--79f6dbf598--7fz7j-eth0" Sep 12 17:55:50.077910 containerd[1819]: 2025-09-12 17:55:50.046 [INFO][5538] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1ff0307bc64a59cf0c646ce05a9723db8596739d1b9bc99c9cd977876f5026aa" HandleID="k8s-pod-network.1ff0307bc64a59cf0c646ce05a9723db8596739d1b9bc99c9cd977876f5026aa" Workload="ci--4081.3.6--a--7e79e463ed-k8s-calico--apiserver--79f6dbf598--7fz7j-eth0" Sep 12 
17:55:50.077910 containerd[1819]: 2025-09-12 17:55:50.046 [INFO][5538] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1ff0307bc64a59cf0c646ce05a9723db8596739d1b9bc99c9cd977876f5026aa" HandleID="k8s-pod-network.1ff0307bc64a59cf0c646ce05a9723db8596739d1b9bc99c9cd977876f5026aa" Workload="ci--4081.3.6--a--7e79e463ed-k8s-calico--apiserver--79f6dbf598--7fz7j-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000345890), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.6-a-7e79e463ed", "pod":"calico-apiserver-79f6dbf598-7fz7j", "timestamp":"2025-09-12 17:55:50.046674099 +0000 UTC"}, Hostname:"ci-4081.3.6-a-7e79e463ed", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:55:50.077910 containerd[1819]: 2025-09-12 17:55:50.046 [INFO][5538] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:55:50.077910 containerd[1819]: 2025-09-12 17:55:50.046 [INFO][5538] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 17:55:50.077910 containerd[1819]: 2025-09-12 17:55:50.046 [INFO][5538] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-a-7e79e463ed' Sep 12 17:55:50.077910 containerd[1819]: 2025-09-12 17:55:50.051 [INFO][5538] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1ff0307bc64a59cf0c646ce05a9723db8596739d1b9bc99c9cd977876f5026aa" host="ci-4081.3.6-a-7e79e463ed" Sep 12 17:55:50.077910 containerd[1819]: 2025-09-12 17:55:50.055 [INFO][5538] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-a-7e79e463ed" Sep 12 17:55:50.077910 containerd[1819]: 2025-09-12 17:55:50.059 [INFO][5538] ipam/ipam.go 511: Trying affinity for 192.168.18.64/26 host="ci-4081.3.6-a-7e79e463ed" Sep 12 17:55:50.077910 containerd[1819]: 2025-09-12 17:55:50.060 [INFO][5538] ipam/ipam.go 158: Attempting to load block cidr=192.168.18.64/26 host="ci-4081.3.6-a-7e79e463ed" Sep 12 17:55:50.077910 containerd[1819]: 2025-09-12 17:55:50.061 [INFO][5538] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.18.64/26 host="ci-4081.3.6-a-7e79e463ed" Sep 12 17:55:50.077910 containerd[1819]: 2025-09-12 17:55:50.061 [INFO][5538] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.18.64/26 handle="k8s-pod-network.1ff0307bc64a59cf0c646ce05a9723db8596739d1b9bc99c9cd977876f5026aa" host="ci-4081.3.6-a-7e79e463ed" Sep 12 17:55:50.077910 containerd[1819]: 2025-09-12 17:55:50.062 [INFO][5538] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.1ff0307bc64a59cf0c646ce05a9723db8596739d1b9bc99c9cd977876f5026aa Sep 12 17:55:50.077910 containerd[1819]: 2025-09-12 17:55:50.064 [INFO][5538] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.18.64/26 handle="k8s-pod-network.1ff0307bc64a59cf0c646ce05a9723db8596739d1b9bc99c9cd977876f5026aa" host="ci-4081.3.6-a-7e79e463ed" Sep 12 17:55:50.077910 containerd[1819]: 2025-09-12 17:55:50.067 [INFO][5538] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.18.67/26] block=192.168.18.64/26 handle="k8s-pod-network.1ff0307bc64a59cf0c646ce05a9723db8596739d1b9bc99c9cd977876f5026aa" host="ci-4081.3.6-a-7e79e463ed" Sep 12 17:55:50.077910 containerd[1819]: 2025-09-12 17:55:50.067 [INFO][5538] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.18.67/26] handle="k8s-pod-network.1ff0307bc64a59cf0c646ce05a9723db8596739d1b9bc99c9cd977876f5026aa" host="ci-4081.3.6-a-7e79e463ed" Sep 12 17:55:50.077910 containerd[1819]: 2025-09-12 17:55:50.067 [INFO][5538] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:55:50.077910 containerd[1819]: 2025-09-12 17:55:50.067 [INFO][5538] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.18.67/26] IPv6=[] ContainerID="1ff0307bc64a59cf0c646ce05a9723db8596739d1b9bc99c9cd977876f5026aa" HandleID="k8s-pod-network.1ff0307bc64a59cf0c646ce05a9723db8596739d1b9bc99c9cd977876f5026aa" Workload="ci--4081.3.6--a--7e79e463ed-k8s-calico--apiserver--79f6dbf598--7fz7j-eth0" Sep 12 17:55:50.078540 containerd[1819]: 2025-09-12 17:55:50.069 [INFO][5516] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1ff0307bc64a59cf0c646ce05a9723db8596739d1b9bc99c9cd977876f5026aa" Namespace="calico-apiserver" Pod="calico-apiserver-79f6dbf598-7fz7j" WorkloadEndpoint="ci--4081.3.6--a--7e79e463ed-k8s-calico--apiserver--79f6dbf598--7fz7j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--a--7e79e463ed-k8s-calico--apiserver--79f6dbf598--7fz7j-eth0", GenerateName:"calico-apiserver-79f6dbf598-", Namespace:"calico-apiserver", SelfLink:"", UID:"a444f57a-b1d6-4798-858d-e3a3c511da85", ResourceVersion:"908", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 55, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", 
"app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"79f6dbf598", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-a-7e79e463ed", ContainerID:"", Pod:"calico-apiserver-79f6dbf598-7fz7j", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.18.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calidb9e57173dd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:55:50.078540 containerd[1819]: 2025-09-12 17:55:50.069 [INFO][5516] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.18.67/32] ContainerID="1ff0307bc64a59cf0c646ce05a9723db8596739d1b9bc99c9cd977876f5026aa" Namespace="calico-apiserver" Pod="calico-apiserver-79f6dbf598-7fz7j" WorkloadEndpoint="ci--4081.3.6--a--7e79e463ed-k8s-calico--apiserver--79f6dbf598--7fz7j-eth0" Sep 12 17:55:50.078540 containerd[1819]: 2025-09-12 17:55:50.069 [INFO][5516] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidb9e57173dd ContainerID="1ff0307bc64a59cf0c646ce05a9723db8596739d1b9bc99c9cd977876f5026aa" Namespace="calico-apiserver" Pod="calico-apiserver-79f6dbf598-7fz7j" WorkloadEndpoint="ci--4081.3.6--a--7e79e463ed-k8s-calico--apiserver--79f6dbf598--7fz7j-eth0" Sep 12 17:55:50.078540 containerd[1819]: 2025-09-12 17:55:50.070 [INFO][5516] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1ff0307bc64a59cf0c646ce05a9723db8596739d1b9bc99c9cd977876f5026aa" Namespace="calico-apiserver" 
Pod="calico-apiserver-79f6dbf598-7fz7j" WorkloadEndpoint="ci--4081.3.6--a--7e79e463ed-k8s-calico--apiserver--79f6dbf598--7fz7j-eth0" Sep 12 17:55:50.078540 containerd[1819]: 2025-09-12 17:55:50.071 [INFO][5516] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1ff0307bc64a59cf0c646ce05a9723db8596739d1b9bc99c9cd977876f5026aa" Namespace="calico-apiserver" Pod="calico-apiserver-79f6dbf598-7fz7j" WorkloadEndpoint="ci--4081.3.6--a--7e79e463ed-k8s-calico--apiserver--79f6dbf598--7fz7j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--a--7e79e463ed-k8s-calico--apiserver--79f6dbf598--7fz7j-eth0", GenerateName:"calico-apiserver-79f6dbf598-", Namespace:"calico-apiserver", SelfLink:"", UID:"a444f57a-b1d6-4798-858d-e3a3c511da85", ResourceVersion:"908", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 55, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"79f6dbf598", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-a-7e79e463ed", ContainerID:"1ff0307bc64a59cf0c646ce05a9723db8596739d1b9bc99c9cd977876f5026aa", Pod:"calico-apiserver-79f6dbf598-7fz7j", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.18.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, 
InterfaceName:"calidb9e57173dd", MAC:"e2:4d:cf:74:0d:c1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:55:50.078540 containerd[1819]: 2025-09-12 17:55:50.076 [INFO][5516] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1ff0307bc64a59cf0c646ce05a9723db8596739d1b9bc99c9cd977876f5026aa" Namespace="calico-apiserver" Pod="calico-apiserver-79f6dbf598-7fz7j" WorkloadEndpoint="ci--4081.3.6--a--7e79e463ed-k8s-calico--apiserver--79f6dbf598--7fz7j-eth0" Sep 12 17:55:50.091023 kubelet[3070]: I0912 17:55:50.090965 3070 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-nkll2" podStartSLOduration=34.090946705 podStartE2EDuration="34.090946705s" podCreationTimestamp="2025-09-12 17:55:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:55:50.090805126 +0000 UTC m=+39.174748466" watchObservedRunningTime="2025-09-12 17:55:50.090946705 +0000 UTC m=+39.174890043" Sep 12 17:55:50.094428 containerd[1819]: time="2025-09-12T17:55:50.094378570Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:55:50.094428 containerd[1819]: time="2025-09-12T17:55:50.094414129Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:55:50.094428 containerd[1819]: time="2025-09-12T17:55:50.094421294Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:55:50.094560 containerd[1819]: time="2025-09-12T17:55:50.094478738Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:55:50.117592 systemd[1]: Started cri-containerd-1ff0307bc64a59cf0c646ce05a9723db8596739d1b9bc99c9cd977876f5026aa.scope - libcontainer container 1ff0307bc64a59cf0c646ce05a9723db8596739d1b9bc99c9cd977876f5026aa. Sep 12 17:55:50.142820 containerd[1819]: time="2025-09-12T17:55:50.142769824Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-79f6dbf598-7fz7j,Uid:a444f57a-b1d6-4798-858d-e3a3c511da85,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"1ff0307bc64a59cf0c646ce05a9723db8596739d1b9bc99c9cd977876f5026aa\"" Sep 12 17:55:50.143518 containerd[1819]: time="2025-09-12T17:55:50.143481477Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 12 17:55:50.301762 systemd-networkd[1617]: cali2efe8f351db: Gained IPv6LL Sep 12 17:55:50.959613 containerd[1819]: time="2025-09-12T17:55:50.959587791Z" level=info msg="StopPodSandbox for \"73323739d2792e53194858201f12a9f0d4326f8ca00846cb4d5ef25649980a8b\"" Sep 12 17:55:50.959708 containerd[1819]: time="2025-09-12T17:55:50.959615324Z" level=info msg="StopPodSandbox for \"19da77914d12ba26342852c78603efced5e265b6cbdaa5cd0c8a202dc855357b\"" Sep 12 17:55:50.997923 containerd[1819]: 2025-09-12 17:55:50.981 [INFO][5636] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="19da77914d12ba26342852c78603efced5e265b6cbdaa5cd0c8a202dc855357b" Sep 12 17:55:50.997923 containerd[1819]: 2025-09-12 17:55:50.981 [INFO][5636] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="19da77914d12ba26342852c78603efced5e265b6cbdaa5cd0c8a202dc855357b" iface="eth0" netns="/var/run/netns/cni-1ecc5a9a-8261-7ffa-5c9e-9ac02826e8bf" Sep 12 17:55:50.997923 containerd[1819]: 2025-09-12 17:55:50.981 [INFO][5636] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="19da77914d12ba26342852c78603efced5e265b6cbdaa5cd0c8a202dc855357b" iface="eth0" netns="/var/run/netns/cni-1ecc5a9a-8261-7ffa-5c9e-9ac02826e8bf" Sep 12 17:55:50.997923 containerd[1819]: 2025-09-12 17:55:50.981 [INFO][5636] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="19da77914d12ba26342852c78603efced5e265b6cbdaa5cd0c8a202dc855357b" iface="eth0" netns="/var/run/netns/cni-1ecc5a9a-8261-7ffa-5c9e-9ac02826e8bf" Sep 12 17:55:50.997923 containerd[1819]: 2025-09-12 17:55:50.981 [INFO][5636] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="19da77914d12ba26342852c78603efced5e265b6cbdaa5cd0c8a202dc855357b" Sep 12 17:55:50.997923 containerd[1819]: 2025-09-12 17:55:50.981 [INFO][5636] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="19da77914d12ba26342852c78603efced5e265b6cbdaa5cd0c8a202dc855357b" Sep 12 17:55:50.997923 containerd[1819]: 2025-09-12 17:55:50.991 [INFO][5671] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="19da77914d12ba26342852c78603efced5e265b6cbdaa5cd0c8a202dc855357b" HandleID="k8s-pod-network.19da77914d12ba26342852c78603efced5e265b6cbdaa5cd0c8a202dc855357b" Workload="ci--4081.3.6--a--7e79e463ed-k8s-calico--kube--controllers--7cf7c9b989--n5gtx-eth0" Sep 12 17:55:50.997923 containerd[1819]: 2025-09-12 17:55:50.991 [INFO][5671] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:55:50.997923 containerd[1819]: 2025-09-12 17:55:50.991 [INFO][5671] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:55:50.997923 containerd[1819]: 2025-09-12 17:55:50.995 [WARNING][5671] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="19da77914d12ba26342852c78603efced5e265b6cbdaa5cd0c8a202dc855357b" HandleID="k8s-pod-network.19da77914d12ba26342852c78603efced5e265b6cbdaa5cd0c8a202dc855357b" Workload="ci--4081.3.6--a--7e79e463ed-k8s-calico--kube--controllers--7cf7c9b989--n5gtx-eth0" Sep 12 17:55:50.997923 containerd[1819]: 2025-09-12 17:55:50.995 [INFO][5671] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="19da77914d12ba26342852c78603efced5e265b6cbdaa5cd0c8a202dc855357b" HandleID="k8s-pod-network.19da77914d12ba26342852c78603efced5e265b6cbdaa5cd0c8a202dc855357b" Workload="ci--4081.3.6--a--7e79e463ed-k8s-calico--kube--controllers--7cf7c9b989--n5gtx-eth0" Sep 12 17:55:50.997923 containerd[1819]: 2025-09-12 17:55:50.996 [INFO][5671] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:55:50.997923 containerd[1819]: 2025-09-12 17:55:50.997 [INFO][5636] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="19da77914d12ba26342852c78603efced5e265b6cbdaa5cd0c8a202dc855357b" Sep 12 17:55:50.998387 containerd[1819]: time="2025-09-12T17:55:50.998021938Z" level=info msg="TearDown network for sandbox \"19da77914d12ba26342852c78603efced5e265b6cbdaa5cd0c8a202dc855357b\" successfully" Sep 12 17:55:50.998387 containerd[1819]: time="2025-09-12T17:55:50.998047049Z" level=info msg="StopPodSandbox for \"19da77914d12ba26342852c78603efced5e265b6cbdaa5cd0c8a202dc855357b\" returns successfully" Sep 12 17:55:50.998503 containerd[1819]: time="2025-09-12T17:55:50.998487581Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7cf7c9b989-n5gtx,Uid:36fe7463-201c-480a-92fa-70e3b0e14442,Namespace:calico-system,Attempt:1,}" Sep 12 17:55:50.999852 systemd[1]: run-netns-cni\x2d1ecc5a9a\x2d8261\x2d7ffa\x2d5c9e\x2d9ac02826e8bf.mount: Deactivated successfully. 
Sep 12 17:55:51.002355 containerd[1819]: 2025-09-12 17:55:50.980 [INFO][5637] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="73323739d2792e53194858201f12a9f0d4326f8ca00846cb4d5ef25649980a8b" Sep 12 17:55:51.002355 containerd[1819]: 2025-09-12 17:55:50.980 [INFO][5637] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="73323739d2792e53194858201f12a9f0d4326f8ca00846cb4d5ef25649980a8b" iface="eth0" netns="/var/run/netns/cni-15c7006c-0270-c7bd-9148-071d95321e40" Sep 12 17:55:51.002355 containerd[1819]: 2025-09-12 17:55:50.981 [INFO][5637] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="73323739d2792e53194858201f12a9f0d4326f8ca00846cb4d5ef25649980a8b" iface="eth0" netns="/var/run/netns/cni-15c7006c-0270-c7bd-9148-071d95321e40" Sep 12 17:55:51.002355 containerd[1819]: 2025-09-12 17:55:50.981 [INFO][5637] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="73323739d2792e53194858201f12a9f0d4326f8ca00846cb4d5ef25649980a8b" iface="eth0" netns="/var/run/netns/cni-15c7006c-0270-c7bd-9148-071d95321e40" Sep 12 17:55:51.002355 containerd[1819]: 2025-09-12 17:55:50.981 [INFO][5637] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="73323739d2792e53194858201f12a9f0d4326f8ca00846cb4d5ef25649980a8b" Sep 12 17:55:51.002355 containerd[1819]: 2025-09-12 17:55:50.981 [INFO][5637] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="73323739d2792e53194858201f12a9f0d4326f8ca00846cb4d5ef25649980a8b" Sep 12 17:55:51.002355 containerd[1819]: 2025-09-12 17:55:50.991 [INFO][5669] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="73323739d2792e53194858201f12a9f0d4326f8ca00846cb4d5ef25649980a8b" HandleID="k8s-pod-network.73323739d2792e53194858201f12a9f0d4326f8ca00846cb4d5ef25649980a8b" Workload="ci--4081.3.6--a--7e79e463ed-k8s-csi--node--driver--mxk5v-eth0" Sep 12 17:55:51.002355 containerd[1819]: 2025-09-12 17:55:50.991 [INFO][5669] 
ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:55:51.002355 containerd[1819]: 2025-09-12 17:55:50.996 [INFO][5669] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:55:51.002355 containerd[1819]: 2025-09-12 17:55:50.999 [WARNING][5669] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="73323739d2792e53194858201f12a9f0d4326f8ca00846cb4d5ef25649980a8b" HandleID="k8s-pod-network.73323739d2792e53194858201f12a9f0d4326f8ca00846cb4d5ef25649980a8b" Workload="ci--4081.3.6--a--7e79e463ed-k8s-csi--node--driver--mxk5v-eth0" Sep 12 17:55:51.002355 containerd[1819]: 2025-09-12 17:55:50.999 [INFO][5669] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="73323739d2792e53194858201f12a9f0d4326f8ca00846cb4d5ef25649980a8b" HandleID="k8s-pod-network.73323739d2792e53194858201f12a9f0d4326f8ca00846cb4d5ef25649980a8b" Workload="ci--4081.3.6--a--7e79e463ed-k8s-csi--node--driver--mxk5v-eth0" Sep 12 17:55:51.002355 containerd[1819]: 2025-09-12 17:55:51.000 [INFO][5669] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:55:51.002355 containerd[1819]: 2025-09-12 17:55:51.001 [INFO][5637] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="73323739d2792e53194858201f12a9f0d4326f8ca00846cb4d5ef25649980a8b" Sep 12 17:55:51.002705 containerd[1819]: time="2025-09-12T17:55:51.002412789Z" level=info msg="TearDown network for sandbox \"73323739d2792e53194858201f12a9f0d4326f8ca00846cb4d5ef25649980a8b\" successfully" Sep 12 17:55:51.002705 containerd[1819]: time="2025-09-12T17:55:51.002426053Z" level=info msg="StopPodSandbox for \"73323739d2792e53194858201f12a9f0d4326f8ca00846cb4d5ef25649980a8b\" returns successfully" Sep 12 17:55:51.002884 containerd[1819]: time="2025-09-12T17:55:51.002843678Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mxk5v,Uid:a8e8f24e-2173-44ef-a6cc-5168890274e3,Namespace:calico-system,Attempt:1,}" Sep 12 17:55:51.056521 systemd-networkd[1617]: cali48cd5100b63: Link UP Sep 12 17:55:51.056665 systemd-networkd[1617]: cali48cd5100b63: Gained carrier Sep 12 17:55:51.060008 systemd[1]: run-netns-cni\x2d15c7006c\x2d0270\x2dc7bd\x2d9148\x2d071d95321e40.mount: Deactivated successfully. 
Sep 12 17:55:51.078235 containerd[1819]: 2025-09-12 17:55:51.020 [INFO][5704] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--a--7e79e463ed-k8s-calico--kube--controllers--7cf7c9b989--n5gtx-eth0 calico-kube-controllers-7cf7c9b989- calico-system 36fe7463-201c-480a-92fa-70e3b0e14442 926 0 2025-09-12 17:55:27 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7cf7c9b989 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081.3.6-a-7e79e463ed calico-kube-controllers-7cf7c9b989-n5gtx eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali48cd5100b63 [] [] }} ContainerID="c2b9f963a2fd0f80e6b4edb4eb76bf07bd2dbfa7281cfc5fb18e1bb0a4e6bd01" Namespace="calico-system" Pod="calico-kube-controllers-7cf7c9b989-n5gtx" WorkloadEndpoint="ci--4081.3.6--a--7e79e463ed-k8s-calico--kube--controllers--7cf7c9b989--n5gtx-" Sep 12 17:55:51.078235 containerd[1819]: 2025-09-12 17:55:51.020 [INFO][5704] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c2b9f963a2fd0f80e6b4edb4eb76bf07bd2dbfa7281cfc5fb18e1bb0a4e6bd01" Namespace="calico-system" Pod="calico-kube-controllers-7cf7c9b989-n5gtx" WorkloadEndpoint="ci--4081.3.6--a--7e79e463ed-k8s-calico--kube--controllers--7cf7c9b989--n5gtx-eth0" Sep 12 17:55:51.078235 containerd[1819]: 2025-09-12 17:55:51.034 [INFO][5749] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c2b9f963a2fd0f80e6b4edb4eb76bf07bd2dbfa7281cfc5fb18e1bb0a4e6bd01" HandleID="k8s-pod-network.c2b9f963a2fd0f80e6b4edb4eb76bf07bd2dbfa7281cfc5fb18e1bb0a4e6bd01" Workload="ci--4081.3.6--a--7e79e463ed-k8s-calico--kube--controllers--7cf7c9b989--n5gtx-eth0" Sep 12 17:55:51.078235 containerd[1819]: 2025-09-12 17:55:51.034 [INFO][5749] ipam/ipam_plugin.go 265: 
Auto assigning IP ContainerID="c2b9f963a2fd0f80e6b4edb4eb76bf07bd2dbfa7281cfc5fb18e1bb0a4e6bd01" HandleID="k8s-pod-network.c2b9f963a2fd0f80e6b4edb4eb76bf07bd2dbfa7281cfc5fb18e1bb0a4e6bd01" Workload="ci--4081.3.6--a--7e79e463ed-k8s-calico--kube--controllers--7cf7c9b989--n5gtx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f9c0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-a-7e79e463ed", "pod":"calico-kube-controllers-7cf7c9b989-n5gtx", "timestamp":"2025-09-12 17:55:51.034054893 +0000 UTC"}, Hostname:"ci-4081.3.6-a-7e79e463ed", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:55:51.078235 containerd[1819]: 2025-09-12 17:55:51.034 [INFO][5749] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:55:51.078235 containerd[1819]: 2025-09-12 17:55:51.034 [INFO][5749] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 17:55:51.078235 containerd[1819]: 2025-09-12 17:55:51.034 [INFO][5749] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-a-7e79e463ed' Sep 12 17:55:51.078235 containerd[1819]: 2025-09-12 17:55:51.038 [INFO][5749] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c2b9f963a2fd0f80e6b4edb4eb76bf07bd2dbfa7281cfc5fb18e1bb0a4e6bd01" host="ci-4081.3.6-a-7e79e463ed" Sep 12 17:55:51.078235 containerd[1819]: 2025-09-12 17:55:51.042 [INFO][5749] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-a-7e79e463ed" Sep 12 17:55:51.078235 containerd[1819]: 2025-09-12 17:55:51.045 [INFO][5749] ipam/ipam.go 511: Trying affinity for 192.168.18.64/26 host="ci-4081.3.6-a-7e79e463ed" Sep 12 17:55:51.078235 containerd[1819]: 2025-09-12 17:55:51.046 [INFO][5749] ipam/ipam.go 158: Attempting to load block cidr=192.168.18.64/26 host="ci-4081.3.6-a-7e79e463ed" Sep 12 17:55:51.078235 containerd[1819]: 2025-09-12 17:55:51.047 [INFO][5749] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.18.64/26 host="ci-4081.3.6-a-7e79e463ed" Sep 12 17:55:51.078235 containerd[1819]: 2025-09-12 17:55:51.047 [INFO][5749] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.18.64/26 handle="k8s-pod-network.c2b9f963a2fd0f80e6b4edb4eb76bf07bd2dbfa7281cfc5fb18e1bb0a4e6bd01" host="ci-4081.3.6-a-7e79e463ed" Sep 12 17:55:51.078235 containerd[1819]: 2025-09-12 17:55:51.049 [INFO][5749] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.c2b9f963a2fd0f80e6b4edb4eb76bf07bd2dbfa7281cfc5fb18e1bb0a4e6bd01 Sep 12 17:55:51.078235 containerd[1819]: 2025-09-12 17:55:51.051 [INFO][5749] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.18.64/26 handle="k8s-pod-network.c2b9f963a2fd0f80e6b4edb4eb76bf07bd2dbfa7281cfc5fb18e1bb0a4e6bd01" host="ci-4081.3.6-a-7e79e463ed" Sep 12 17:55:51.078235 containerd[1819]: 2025-09-12 17:55:51.054 [INFO][5749] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.18.68/26] block=192.168.18.64/26 handle="k8s-pod-network.c2b9f963a2fd0f80e6b4edb4eb76bf07bd2dbfa7281cfc5fb18e1bb0a4e6bd01" host="ci-4081.3.6-a-7e79e463ed" Sep 12 17:55:51.078235 containerd[1819]: 2025-09-12 17:55:51.054 [INFO][5749] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.18.68/26] handle="k8s-pod-network.c2b9f963a2fd0f80e6b4edb4eb76bf07bd2dbfa7281cfc5fb18e1bb0a4e6bd01" host="ci-4081.3.6-a-7e79e463ed" Sep 12 17:55:51.078235 containerd[1819]: 2025-09-12 17:55:51.054 [INFO][5749] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:55:51.078235 containerd[1819]: 2025-09-12 17:55:51.054 [INFO][5749] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.18.68/26] IPv6=[] ContainerID="c2b9f963a2fd0f80e6b4edb4eb76bf07bd2dbfa7281cfc5fb18e1bb0a4e6bd01" HandleID="k8s-pod-network.c2b9f963a2fd0f80e6b4edb4eb76bf07bd2dbfa7281cfc5fb18e1bb0a4e6bd01" Workload="ci--4081.3.6--a--7e79e463ed-k8s-calico--kube--controllers--7cf7c9b989--n5gtx-eth0" Sep 12 17:55:51.078662 containerd[1819]: 2025-09-12 17:55:51.055 [INFO][5704] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c2b9f963a2fd0f80e6b4edb4eb76bf07bd2dbfa7281cfc5fb18e1bb0a4e6bd01" Namespace="calico-system" Pod="calico-kube-controllers-7cf7c9b989-n5gtx" WorkloadEndpoint="ci--4081.3.6--a--7e79e463ed-k8s-calico--kube--controllers--7cf7c9b989--n5gtx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--a--7e79e463ed-k8s-calico--kube--controllers--7cf7c9b989--n5gtx-eth0", GenerateName:"calico-kube-controllers-7cf7c9b989-", Namespace:"calico-system", SelfLink:"", UID:"36fe7463-201c-480a-92fa-70e3b0e14442", ResourceVersion:"926", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 55, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7cf7c9b989", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-a-7e79e463ed", ContainerID:"", Pod:"calico-kube-controllers-7cf7c9b989-n5gtx", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.18.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali48cd5100b63", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:55:51.078662 containerd[1819]: 2025-09-12 17:55:51.055 [INFO][5704] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.18.68/32] ContainerID="c2b9f963a2fd0f80e6b4edb4eb76bf07bd2dbfa7281cfc5fb18e1bb0a4e6bd01" Namespace="calico-system" Pod="calico-kube-controllers-7cf7c9b989-n5gtx" WorkloadEndpoint="ci--4081.3.6--a--7e79e463ed-k8s-calico--kube--controllers--7cf7c9b989--n5gtx-eth0" Sep 12 17:55:51.078662 containerd[1819]: 2025-09-12 17:55:51.055 [INFO][5704] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali48cd5100b63 ContainerID="c2b9f963a2fd0f80e6b4edb4eb76bf07bd2dbfa7281cfc5fb18e1bb0a4e6bd01" Namespace="calico-system" Pod="calico-kube-controllers-7cf7c9b989-n5gtx" WorkloadEndpoint="ci--4081.3.6--a--7e79e463ed-k8s-calico--kube--controllers--7cf7c9b989--n5gtx-eth0" Sep 12 17:55:51.078662 containerd[1819]: 2025-09-12 17:55:51.056 [INFO][5704] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="c2b9f963a2fd0f80e6b4edb4eb76bf07bd2dbfa7281cfc5fb18e1bb0a4e6bd01" Namespace="calico-system" Pod="calico-kube-controllers-7cf7c9b989-n5gtx" WorkloadEndpoint="ci--4081.3.6--a--7e79e463ed-k8s-calico--kube--controllers--7cf7c9b989--n5gtx-eth0" Sep 12 17:55:51.078662 containerd[1819]: 2025-09-12 17:55:51.056 [INFO][5704] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c2b9f963a2fd0f80e6b4edb4eb76bf07bd2dbfa7281cfc5fb18e1bb0a4e6bd01" Namespace="calico-system" Pod="calico-kube-controllers-7cf7c9b989-n5gtx" WorkloadEndpoint="ci--4081.3.6--a--7e79e463ed-k8s-calico--kube--controllers--7cf7c9b989--n5gtx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--a--7e79e463ed-k8s-calico--kube--controllers--7cf7c9b989--n5gtx-eth0", GenerateName:"calico-kube-controllers-7cf7c9b989-", Namespace:"calico-system", SelfLink:"", UID:"36fe7463-201c-480a-92fa-70e3b0e14442", ResourceVersion:"926", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 55, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7cf7c9b989", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-a-7e79e463ed", ContainerID:"c2b9f963a2fd0f80e6b4edb4eb76bf07bd2dbfa7281cfc5fb18e1bb0a4e6bd01", Pod:"calico-kube-controllers-7cf7c9b989-n5gtx", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.18.68/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali48cd5100b63", MAC:"8e:e1:94:21:db:5e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:55:51.078662 containerd[1819]: 2025-09-12 17:55:51.077 [INFO][5704] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c2b9f963a2fd0f80e6b4edb4eb76bf07bd2dbfa7281cfc5fb18e1bb0a4e6bd01" Namespace="calico-system" Pod="calico-kube-controllers-7cf7c9b989-n5gtx" WorkloadEndpoint="ci--4081.3.6--a--7e79e463ed-k8s-calico--kube--controllers--7cf7c9b989--n5gtx-eth0" Sep 12 17:55:51.086740 containerd[1819]: time="2025-09-12T17:55:51.086700746Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:55:51.086740 containerd[1819]: time="2025-09-12T17:55:51.086732529Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:55:51.086844 containerd[1819]: time="2025-09-12T17:55:51.086743652Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:55:51.087018 containerd[1819]: time="2025-09-12T17:55:51.087005790Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:55:51.112735 systemd[1]: Started cri-containerd-c2b9f963a2fd0f80e6b4edb4eb76bf07bd2dbfa7281cfc5fb18e1bb0a4e6bd01.scope - libcontainer container c2b9f963a2fd0f80e6b4edb4eb76bf07bd2dbfa7281cfc5fb18e1bb0a4e6bd01. 
Sep 12 17:55:51.143887 containerd[1819]: time="2025-09-12T17:55:51.143865760Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7cf7c9b989-n5gtx,Uid:36fe7463-201c-480a-92fa-70e3b0e14442,Namespace:calico-system,Attempt:1,} returns sandbox id \"c2b9f963a2fd0f80e6b4edb4eb76bf07bd2dbfa7281cfc5fb18e1bb0a4e6bd01\"" Sep 12 17:55:51.152854 systemd-networkd[1617]: calie49132c06ff: Link UP Sep 12 17:55:51.153235 systemd-networkd[1617]: calie49132c06ff: Gained carrier Sep 12 17:55:51.158904 containerd[1819]: 2025-09-12 17:55:51.022 [INFO][5714] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--a--7e79e463ed-k8s-csi--node--driver--mxk5v-eth0 csi-node-driver- calico-system a8e8f24e-2173-44ef-a6cc-5168890274e3 925 0 2025-09-12 17:55:27 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:856c6b598f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081.3.6-a-7e79e463ed csi-node-driver-mxk5v eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calie49132c06ff [] [] }} ContainerID="c6f6e953cc722ca714590ad72dd0b501ea929f2ac0ef7c83e43e52952eb9017b" Namespace="calico-system" Pod="csi-node-driver-mxk5v" WorkloadEndpoint="ci--4081.3.6--a--7e79e463ed-k8s-csi--node--driver--mxk5v-" Sep 12 17:55:51.158904 containerd[1819]: 2025-09-12 17:55:51.023 [INFO][5714] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c6f6e953cc722ca714590ad72dd0b501ea929f2ac0ef7c83e43e52952eb9017b" Namespace="calico-system" Pod="csi-node-driver-mxk5v" WorkloadEndpoint="ci--4081.3.6--a--7e79e463ed-k8s-csi--node--driver--mxk5v-eth0" Sep 12 17:55:51.158904 containerd[1819]: 2025-09-12 17:55:51.035 [INFO][5755] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 
IPv6=0 ContainerID="c6f6e953cc722ca714590ad72dd0b501ea929f2ac0ef7c83e43e52952eb9017b" HandleID="k8s-pod-network.c6f6e953cc722ca714590ad72dd0b501ea929f2ac0ef7c83e43e52952eb9017b" Workload="ci--4081.3.6--a--7e79e463ed-k8s-csi--node--driver--mxk5v-eth0" Sep 12 17:55:51.158904 containerd[1819]: 2025-09-12 17:55:51.035 [INFO][5755] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c6f6e953cc722ca714590ad72dd0b501ea929f2ac0ef7c83e43e52952eb9017b" HandleID="k8s-pod-network.c6f6e953cc722ca714590ad72dd0b501ea929f2ac0ef7c83e43e52952eb9017b" Workload="ci--4081.3.6--a--7e79e463ed-k8s-csi--node--driver--mxk5v-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002e7220), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-a-7e79e463ed", "pod":"csi-node-driver-mxk5v", "timestamp":"2025-09-12 17:55:51.035620997 +0000 UTC"}, Hostname:"ci-4081.3.6-a-7e79e463ed", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:55:51.158904 containerd[1819]: 2025-09-12 17:55:51.035 [INFO][5755] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:55:51.158904 containerd[1819]: 2025-09-12 17:55:51.054 [INFO][5755] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 17:55:51.158904 containerd[1819]: 2025-09-12 17:55:51.054 [INFO][5755] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-a-7e79e463ed' Sep 12 17:55:51.158904 containerd[1819]: 2025-09-12 17:55:51.139 [INFO][5755] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c6f6e953cc722ca714590ad72dd0b501ea929f2ac0ef7c83e43e52952eb9017b" host="ci-4081.3.6-a-7e79e463ed" Sep 12 17:55:51.158904 containerd[1819]: 2025-09-12 17:55:51.142 [INFO][5755] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-a-7e79e463ed" Sep 12 17:55:51.158904 containerd[1819]: 2025-09-12 17:55:51.144 [INFO][5755] ipam/ipam.go 511: Trying affinity for 192.168.18.64/26 host="ci-4081.3.6-a-7e79e463ed" Sep 12 17:55:51.158904 containerd[1819]: 2025-09-12 17:55:51.145 [INFO][5755] ipam/ipam.go 158: Attempting to load block cidr=192.168.18.64/26 host="ci-4081.3.6-a-7e79e463ed" Sep 12 17:55:51.158904 containerd[1819]: 2025-09-12 17:55:51.146 [INFO][5755] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.18.64/26 host="ci-4081.3.6-a-7e79e463ed" Sep 12 17:55:51.158904 containerd[1819]: 2025-09-12 17:55:51.146 [INFO][5755] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.18.64/26 handle="k8s-pod-network.c6f6e953cc722ca714590ad72dd0b501ea929f2ac0ef7c83e43e52952eb9017b" host="ci-4081.3.6-a-7e79e463ed" Sep 12 17:55:51.158904 containerd[1819]: 2025-09-12 17:55:51.147 [INFO][5755] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.c6f6e953cc722ca714590ad72dd0b501ea929f2ac0ef7c83e43e52952eb9017b Sep 12 17:55:51.158904 containerd[1819]: 2025-09-12 17:55:51.148 [INFO][5755] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.18.64/26 handle="k8s-pod-network.c6f6e953cc722ca714590ad72dd0b501ea929f2ac0ef7c83e43e52952eb9017b" host="ci-4081.3.6-a-7e79e463ed" Sep 12 17:55:51.158904 containerd[1819]: 2025-09-12 17:55:51.151 [INFO][5755] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.18.69/26] block=192.168.18.64/26 handle="k8s-pod-network.c6f6e953cc722ca714590ad72dd0b501ea929f2ac0ef7c83e43e52952eb9017b" host="ci-4081.3.6-a-7e79e463ed" Sep 12 17:55:51.158904 containerd[1819]: 2025-09-12 17:55:51.151 [INFO][5755] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.18.69/26] handle="k8s-pod-network.c6f6e953cc722ca714590ad72dd0b501ea929f2ac0ef7c83e43e52952eb9017b" host="ci-4081.3.6-a-7e79e463ed" Sep 12 17:55:51.158904 containerd[1819]: 2025-09-12 17:55:51.151 [INFO][5755] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:55:51.158904 containerd[1819]: 2025-09-12 17:55:51.151 [INFO][5755] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.18.69/26] IPv6=[] ContainerID="c6f6e953cc722ca714590ad72dd0b501ea929f2ac0ef7c83e43e52952eb9017b" HandleID="k8s-pod-network.c6f6e953cc722ca714590ad72dd0b501ea929f2ac0ef7c83e43e52952eb9017b" Workload="ci--4081.3.6--a--7e79e463ed-k8s-csi--node--driver--mxk5v-eth0" Sep 12 17:55:51.159306 containerd[1819]: 2025-09-12 17:55:51.152 [INFO][5714] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c6f6e953cc722ca714590ad72dd0b501ea929f2ac0ef7c83e43e52952eb9017b" Namespace="calico-system" Pod="csi-node-driver-mxk5v" WorkloadEndpoint="ci--4081.3.6--a--7e79e463ed-k8s-csi--node--driver--mxk5v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--a--7e79e463ed-k8s-csi--node--driver--mxk5v-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a8e8f24e-2173-44ef-a6cc-5168890274e3", ResourceVersion:"925", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 55, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", 
"name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-a-7e79e463ed", ContainerID:"", Pod:"csi-node-driver-mxk5v", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.18.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie49132c06ff", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:55:51.159306 containerd[1819]: 2025-09-12 17:55:51.152 [INFO][5714] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.18.69/32] ContainerID="c6f6e953cc722ca714590ad72dd0b501ea929f2ac0ef7c83e43e52952eb9017b" Namespace="calico-system" Pod="csi-node-driver-mxk5v" WorkloadEndpoint="ci--4081.3.6--a--7e79e463ed-k8s-csi--node--driver--mxk5v-eth0" Sep 12 17:55:51.159306 containerd[1819]: 2025-09-12 17:55:51.152 [INFO][5714] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie49132c06ff ContainerID="c6f6e953cc722ca714590ad72dd0b501ea929f2ac0ef7c83e43e52952eb9017b" Namespace="calico-system" Pod="csi-node-driver-mxk5v" WorkloadEndpoint="ci--4081.3.6--a--7e79e463ed-k8s-csi--node--driver--mxk5v-eth0" Sep 12 17:55:51.159306 containerd[1819]: 2025-09-12 17:55:51.153 [INFO][5714] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c6f6e953cc722ca714590ad72dd0b501ea929f2ac0ef7c83e43e52952eb9017b" Namespace="calico-system" Pod="csi-node-driver-mxk5v" WorkloadEndpoint="ci--4081.3.6--a--7e79e463ed-k8s-csi--node--driver--mxk5v-eth0" Sep 12 17:55:51.159306 
containerd[1819]: 2025-09-12 17:55:51.153 [INFO][5714] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c6f6e953cc722ca714590ad72dd0b501ea929f2ac0ef7c83e43e52952eb9017b" Namespace="calico-system" Pod="csi-node-driver-mxk5v" WorkloadEndpoint="ci--4081.3.6--a--7e79e463ed-k8s-csi--node--driver--mxk5v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--a--7e79e463ed-k8s-csi--node--driver--mxk5v-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a8e8f24e-2173-44ef-a6cc-5168890274e3", ResourceVersion:"925", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 55, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-a-7e79e463ed", ContainerID:"c6f6e953cc722ca714590ad72dd0b501ea929f2ac0ef7c83e43e52952eb9017b", Pod:"csi-node-driver-mxk5v", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.18.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie49132c06ff", MAC:"fa:e5:8c:7e:c8:b8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:55:51.159306 containerd[1819]: 
2025-09-12 17:55:51.158 [INFO][5714] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c6f6e953cc722ca714590ad72dd0b501ea929f2ac0ef7c83e43e52952eb9017b" Namespace="calico-system" Pod="csi-node-driver-mxk5v" WorkloadEndpoint="ci--4081.3.6--a--7e79e463ed-k8s-csi--node--driver--mxk5v-eth0" Sep 12 17:55:51.166541 containerd[1819]: time="2025-09-12T17:55:51.166460640Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:55:51.166541 containerd[1819]: time="2025-09-12T17:55:51.166495943Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:55:51.166690 containerd[1819]: time="2025-09-12T17:55:51.166675879Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:55:51.166730 containerd[1819]: time="2025-09-12T17:55:51.166719606Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:55:51.185610 systemd[1]: Started cri-containerd-c6f6e953cc722ca714590ad72dd0b501ea929f2ac0ef7c83e43e52952eb9017b.scope - libcontainer container c6f6e953cc722ca714590ad72dd0b501ea929f2ac0ef7c83e43e52952eb9017b. 
Sep 12 17:55:51.197333 containerd[1819]: time="2025-09-12T17:55:51.197307733Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mxk5v,Uid:a8e8f24e-2173-44ef-a6cc-5168890274e3,Namespace:calico-system,Attempt:1,} returns sandbox id \"c6f6e953cc722ca714590ad72dd0b501ea929f2ac0ef7c83e43e52952eb9017b\"" Sep 12 17:55:51.959982 containerd[1819]: time="2025-09-12T17:55:51.959947113Z" level=info msg="StopPodSandbox for \"343772909f7d6b025ce26cede40cd47e76fb9d22582426ab6317c448faa0df45\"" Sep 12 17:55:51.960109 containerd[1819]: time="2025-09-12T17:55:51.959947786Z" level=info msg="StopPodSandbox for \"0e00b1316e65a0ab2121a1fedfa7d34db24df1a488ef755812289b35c2a1b23f\"" Sep 12 17:55:51.960172 containerd[1819]: time="2025-09-12T17:55:51.959956321Z" level=info msg="StopPodSandbox for \"dfb574919c28ab1efb54d8ea13068134d7292c57082e3e447322ec3a0606f40c\"" Sep 12 17:55:52.001108 containerd[1819]: 2025-09-12 17:55:51.984 [INFO][5918] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="343772909f7d6b025ce26cede40cd47e76fb9d22582426ab6317c448faa0df45" Sep 12 17:55:52.001108 containerd[1819]: 2025-09-12 17:55:51.984 [INFO][5918] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="343772909f7d6b025ce26cede40cd47e76fb9d22582426ab6317c448faa0df45" iface="eth0" netns="/var/run/netns/cni-9cd5a2ac-0bb5-6b3d-7072-519be8796a16" Sep 12 17:55:52.001108 containerd[1819]: 2025-09-12 17:55:51.984 [INFO][5918] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="343772909f7d6b025ce26cede40cd47e76fb9d22582426ab6317c448faa0df45" iface="eth0" netns="/var/run/netns/cni-9cd5a2ac-0bb5-6b3d-7072-519be8796a16" Sep 12 17:55:52.001108 containerd[1819]: 2025-09-12 17:55:51.984 [INFO][5918] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="343772909f7d6b025ce26cede40cd47e76fb9d22582426ab6317c448faa0df45" iface="eth0" netns="/var/run/netns/cni-9cd5a2ac-0bb5-6b3d-7072-519be8796a16" Sep 12 17:55:52.001108 containerd[1819]: 2025-09-12 17:55:51.984 [INFO][5918] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="343772909f7d6b025ce26cede40cd47e76fb9d22582426ab6317c448faa0df45" Sep 12 17:55:52.001108 containerd[1819]: 2025-09-12 17:55:51.984 [INFO][5918] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="343772909f7d6b025ce26cede40cd47e76fb9d22582426ab6317c448faa0df45" Sep 12 17:55:52.001108 containerd[1819]: 2025-09-12 17:55:51.995 [INFO][5966] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="343772909f7d6b025ce26cede40cd47e76fb9d22582426ab6317c448faa0df45" HandleID="k8s-pod-network.343772909f7d6b025ce26cede40cd47e76fb9d22582426ab6317c448faa0df45" Workload="ci--4081.3.6--a--7e79e463ed-k8s-calico--apiserver--79f6dbf598--j6bmc-eth0" Sep 12 17:55:52.001108 containerd[1819]: 2025-09-12 17:55:51.995 [INFO][5966] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:55:52.001108 containerd[1819]: 2025-09-12 17:55:51.995 [INFO][5966] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:55:52.001108 containerd[1819]: 2025-09-12 17:55:51.999 [WARNING][5966] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="343772909f7d6b025ce26cede40cd47e76fb9d22582426ab6317c448faa0df45" HandleID="k8s-pod-network.343772909f7d6b025ce26cede40cd47e76fb9d22582426ab6317c448faa0df45" Workload="ci--4081.3.6--a--7e79e463ed-k8s-calico--apiserver--79f6dbf598--j6bmc-eth0" Sep 12 17:55:52.001108 containerd[1819]: 2025-09-12 17:55:51.999 [INFO][5966] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="343772909f7d6b025ce26cede40cd47e76fb9d22582426ab6317c448faa0df45" HandleID="k8s-pod-network.343772909f7d6b025ce26cede40cd47e76fb9d22582426ab6317c448faa0df45" Workload="ci--4081.3.6--a--7e79e463ed-k8s-calico--apiserver--79f6dbf598--j6bmc-eth0" Sep 12 17:55:52.001108 containerd[1819]: 2025-09-12 17:55:51.999 [INFO][5966] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:55:52.001108 containerd[1819]: 2025-09-12 17:55:52.000 [INFO][5918] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="343772909f7d6b025ce26cede40cd47e76fb9d22582426ab6317c448faa0df45" Sep 12 17:55:52.001534 containerd[1819]: time="2025-09-12T17:55:52.001183552Z" level=info msg="TearDown network for sandbox \"343772909f7d6b025ce26cede40cd47e76fb9d22582426ab6317c448faa0df45\" successfully" Sep 12 17:55:52.001534 containerd[1819]: time="2025-09-12T17:55:52.001199574Z" level=info msg="StopPodSandbox for \"343772909f7d6b025ce26cede40cd47e76fb9d22582426ab6317c448faa0df45\" returns successfully" Sep 12 17:55:52.001567 containerd[1819]: time="2025-09-12T17:55:52.001536012Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-79f6dbf598-j6bmc,Uid:f5372128-8327-44f1-8a1c-68eda7b4a892,Namespace:calico-apiserver,Attempt:1,}" Sep 12 17:55:52.005796 containerd[1819]: 2025-09-12 17:55:51.985 [INFO][5916] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0e00b1316e65a0ab2121a1fedfa7d34db24df1a488ef755812289b35c2a1b23f" Sep 12 17:55:52.005796 containerd[1819]: 2025-09-12 17:55:51.985 [INFO][5916] cni-plugin/dataplane_linux.go 559: Deleting workload's 
device in netns. ContainerID="0e00b1316e65a0ab2121a1fedfa7d34db24df1a488ef755812289b35c2a1b23f" iface="eth0" netns="/var/run/netns/cni-a8eb2a9c-5df5-f5e5-faea-944c05689e6b" Sep 12 17:55:52.005796 containerd[1819]: 2025-09-12 17:55:51.985 [INFO][5916] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="0e00b1316e65a0ab2121a1fedfa7d34db24df1a488ef755812289b35c2a1b23f" iface="eth0" netns="/var/run/netns/cni-a8eb2a9c-5df5-f5e5-faea-944c05689e6b" Sep 12 17:55:52.005796 containerd[1819]: 2025-09-12 17:55:51.985 [INFO][5916] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="0e00b1316e65a0ab2121a1fedfa7d34db24df1a488ef755812289b35c2a1b23f" iface="eth0" netns="/var/run/netns/cni-a8eb2a9c-5df5-f5e5-faea-944c05689e6b" Sep 12 17:55:52.005796 containerd[1819]: 2025-09-12 17:55:51.985 [INFO][5916] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0e00b1316e65a0ab2121a1fedfa7d34db24df1a488ef755812289b35c2a1b23f" Sep 12 17:55:52.005796 containerd[1819]: 2025-09-12 17:55:51.985 [INFO][5916] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0e00b1316e65a0ab2121a1fedfa7d34db24df1a488ef755812289b35c2a1b23f" Sep 12 17:55:52.005796 containerd[1819]: 2025-09-12 17:55:51.997 [INFO][5968] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0e00b1316e65a0ab2121a1fedfa7d34db24df1a488ef755812289b35c2a1b23f" HandleID="k8s-pod-network.0e00b1316e65a0ab2121a1fedfa7d34db24df1a488ef755812289b35c2a1b23f" Workload="ci--4081.3.6--a--7e79e463ed-k8s-coredns--7c65d6cfc9--kbj9g-eth0" Sep 12 17:55:52.005796 containerd[1819]: 2025-09-12 17:55:51.997 [INFO][5968] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:55:52.005796 containerd[1819]: 2025-09-12 17:55:51.999 [INFO][5968] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 17:55:52.005796 containerd[1819]: 2025-09-12 17:55:52.003 [WARNING][5968] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="0e00b1316e65a0ab2121a1fedfa7d34db24df1a488ef755812289b35c2a1b23f" HandleID="k8s-pod-network.0e00b1316e65a0ab2121a1fedfa7d34db24df1a488ef755812289b35c2a1b23f" Workload="ci--4081.3.6--a--7e79e463ed-k8s-coredns--7c65d6cfc9--kbj9g-eth0" Sep 12 17:55:52.005796 containerd[1819]: 2025-09-12 17:55:52.003 [INFO][5968] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0e00b1316e65a0ab2121a1fedfa7d34db24df1a488ef755812289b35c2a1b23f" HandleID="k8s-pod-network.0e00b1316e65a0ab2121a1fedfa7d34db24df1a488ef755812289b35c2a1b23f" Workload="ci--4081.3.6--a--7e79e463ed-k8s-coredns--7c65d6cfc9--kbj9g-eth0" Sep 12 17:55:52.005796 containerd[1819]: 2025-09-12 17:55:52.004 [INFO][5968] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:55:52.005796 containerd[1819]: 2025-09-12 17:55:52.005 [INFO][5916] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="0e00b1316e65a0ab2121a1fedfa7d34db24df1a488ef755812289b35c2a1b23f" Sep 12 17:55:52.006068 containerd[1819]: time="2025-09-12T17:55:52.005862746Z" level=info msg="TearDown network for sandbox \"0e00b1316e65a0ab2121a1fedfa7d34db24df1a488ef755812289b35c2a1b23f\" successfully" Sep 12 17:55:52.006068 containerd[1819]: time="2025-09-12T17:55:52.005876931Z" level=info msg="StopPodSandbox for \"0e00b1316e65a0ab2121a1fedfa7d34db24df1a488ef755812289b35c2a1b23f\" returns successfully" Sep 12 17:55:52.006252 containerd[1819]: time="2025-09-12T17:55:52.006237227Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-kbj9g,Uid:054fb7c1-c456-47f2-811b-49f3435a8e35,Namespace:kube-system,Attempt:1,}" Sep 12 17:55:52.010955 containerd[1819]: 2025-09-12 17:55:51.985 [INFO][5917] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="dfb574919c28ab1efb54d8ea13068134d7292c57082e3e447322ec3a0606f40c" Sep 12 17:55:52.010955 containerd[1819]: 2025-09-12 17:55:51.985 [INFO][5917] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="dfb574919c28ab1efb54d8ea13068134d7292c57082e3e447322ec3a0606f40c" iface="eth0" netns="/var/run/netns/cni-edb74be8-8692-62d6-a8fc-879d140e7ba3" Sep 12 17:55:52.010955 containerd[1819]: 2025-09-12 17:55:51.985 [INFO][5917] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="dfb574919c28ab1efb54d8ea13068134d7292c57082e3e447322ec3a0606f40c" iface="eth0" netns="/var/run/netns/cni-edb74be8-8692-62d6-a8fc-879d140e7ba3" Sep 12 17:55:52.010955 containerd[1819]: 2025-09-12 17:55:51.985 [INFO][5917] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="dfb574919c28ab1efb54d8ea13068134d7292c57082e3e447322ec3a0606f40c" iface="eth0" netns="/var/run/netns/cni-edb74be8-8692-62d6-a8fc-879d140e7ba3" Sep 12 17:55:52.010955 containerd[1819]: 2025-09-12 17:55:51.985 [INFO][5917] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="dfb574919c28ab1efb54d8ea13068134d7292c57082e3e447322ec3a0606f40c" Sep 12 17:55:52.010955 containerd[1819]: 2025-09-12 17:55:51.985 [INFO][5917] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="dfb574919c28ab1efb54d8ea13068134d7292c57082e3e447322ec3a0606f40c" Sep 12 17:55:52.010955 containerd[1819]: 2025-09-12 17:55:51.998 [INFO][5970] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="dfb574919c28ab1efb54d8ea13068134d7292c57082e3e447322ec3a0606f40c" HandleID="k8s-pod-network.dfb574919c28ab1efb54d8ea13068134d7292c57082e3e447322ec3a0606f40c" Workload="ci--4081.3.6--a--7e79e463ed-k8s-goldmane--7988f88666--87fsc-eth0" Sep 12 17:55:52.010955 containerd[1819]: 2025-09-12 17:55:51.998 [INFO][5970] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:55:52.010955 containerd[1819]: 2025-09-12 17:55:52.004 [INFO][5970] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:55:52.010955 containerd[1819]: 2025-09-12 17:55:52.007 [WARNING][5970] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="dfb574919c28ab1efb54d8ea13068134d7292c57082e3e447322ec3a0606f40c" HandleID="k8s-pod-network.dfb574919c28ab1efb54d8ea13068134d7292c57082e3e447322ec3a0606f40c" Workload="ci--4081.3.6--a--7e79e463ed-k8s-goldmane--7988f88666--87fsc-eth0" Sep 12 17:55:52.010955 containerd[1819]: 2025-09-12 17:55:52.007 [INFO][5970] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="dfb574919c28ab1efb54d8ea13068134d7292c57082e3e447322ec3a0606f40c" HandleID="k8s-pod-network.dfb574919c28ab1efb54d8ea13068134d7292c57082e3e447322ec3a0606f40c" Workload="ci--4081.3.6--a--7e79e463ed-k8s-goldmane--7988f88666--87fsc-eth0" Sep 12 17:55:52.010955 containerd[1819]: 2025-09-12 17:55:52.009 [INFO][5970] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:55:52.010955 containerd[1819]: 2025-09-12 17:55:52.009 [INFO][5917] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="dfb574919c28ab1efb54d8ea13068134d7292c57082e3e447322ec3a0606f40c" Sep 12 17:55:52.011197 containerd[1819]: time="2025-09-12T17:55:52.011009133Z" level=info msg="TearDown network for sandbox \"dfb574919c28ab1efb54d8ea13068134d7292c57082e3e447322ec3a0606f40c\" successfully" Sep 12 17:55:52.011197 containerd[1819]: time="2025-09-12T17:55:52.011021447Z" level=info msg="StopPodSandbox for \"dfb574919c28ab1efb54d8ea13068134d7292c57082e3e447322ec3a0606f40c\" returns successfully" Sep 12 17:55:52.011319 containerd[1819]: time="2025-09-12T17:55:52.011304994Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-87fsc,Uid:f0e34300-52fa-4b2c-a580-7e7738d631f0,Namespace:calico-system,Attempt:1,}" Sep 12 17:55:52.060322 systemd[1]: run-netns-cni\x2da8eb2a9c\x2d5df5\x2df5e5\x2dfaea\x2d944c05689e6b.mount: Deactivated successfully. Sep 12 17:55:52.060377 systemd[1]: run-netns-cni\x2dedb74be8\x2d8692\x2d62d6\x2da8fc\x2d879d140e7ba3.mount: Deactivated successfully. 
Sep 12 17:55:52.060411 systemd[1]: run-netns-cni\x2d9cd5a2ac\x2d0bb5\x2d6b3d\x2d7072\x2d519be8796a16.mount: Deactivated successfully. Sep 12 17:55:52.069335 systemd-networkd[1617]: calia3a56b3ce65: Link UP Sep 12 17:55:52.069678 systemd-networkd[1617]: calia3a56b3ce65: Gained carrier Sep 12 17:55:52.074531 containerd[1819]: 2025-09-12 17:55:52.031 [INFO][6007] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--a--7e79e463ed-k8s-calico--apiserver--79f6dbf598--j6bmc-eth0 calico-apiserver-79f6dbf598- calico-apiserver f5372128-8327-44f1-8a1c-68eda7b4a892 940 0 2025-09-12 17:55:24 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:79f6dbf598 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.6-a-7e79e463ed calico-apiserver-79f6dbf598-j6bmc eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calia3a56b3ce65 [] [] }} ContainerID="3bd04e2053f37a5b21ef2ae1d5314d702747eaf220b1ca4b60d3478bd0bbf498" Namespace="calico-apiserver" Pod="calico-apiserver-79f6dbf598-j6bmc" WorkloadEndpoint="ci--4081.3.6--a--7e79e463ed-k8s-calico--apiserver--79f6dbf598--j6bmc-" Sep 12 17:55:52.074531 containerd[1819]: 2025-09-12 17:55:52.031 [INFO][6007] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3bd04e2053f37a5b21ef2ae1d5314d702747eaf220b1ca4b60d3478bd0bbf498" Namespace="calico-apiserver" Pod="calico-apiserver-79f6dbf598-j6bmc" WorkloadEndpoint="ci--4081.3.6--a--7e79e463ed-k8s-calico--apiserver--79f6dbf598--j6bmc-eth0" Sep 12 17:55:52.074531 containerd[1819]: 2025-09-12 17:55:52.046 [INFO][6076] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3bd04e2053f37a5b21ef2ae1d5314d702747eaf220b1ca4b60d3478bd0bbf498" 
HandleID="k8s-pod-network.3bd04e2053f37a5b21ef2ae1d5314d702747eaf220b1ca4b60d3478bd0bbf498" Workload="ci--4081.3.6--a--7e79e463ed-k8s-calico--apiserver--79f6dbf598--j6bmc-eth0" Sep 12 17:55:52.074531 containerd[1819]: 2025-09-12 17:55:52.046 [INFO][6076] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3bd04e2053f37a5b21ef2ae1d5314d702747eaf220b1ca4b60d3478bd0bbf498" HandleID="k8s-pod-network.3bd04e2053f37a5b21ef2ae1d5314d702747eaf220b1ca4b60d3478bd0bbf498" Workload="ci--4081.3.6--a--7e79e463ed-k8s-calico--apiserver--79f6dbf598--j6bmc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000786a70), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.6-a-7e79e463ed", "pod":"calico-apiserver-79f6dbf598-j6bmc", "timestamp":"2025-09-12 17:55:52.046188616 +0000 UTC"}, Hostname:"ci-4081.3.6-a-7e79e463ed", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:55:52.074531 containerd[1819]: 2025-09-12 17:55:52.046 [INFO][6076] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:55:52.074531 containerd[1819]: 2025-09-12 17:55:52.046 [INFO][6076] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 17:55:52.074531 containerd[1819]: 2025-09-12 17:55:52.046 [INFO][6076] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-a-7e79e463ed' Sep 12 17:55:52.074531 containerd[1819]: 2025-09-12 17:55:52.050 [INFO][6076] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3bd04e2053f37a5b21ef2ae1d5314d702747eaf220b1ca4b60d3478bd0bbf498" host="ci-4081.3.6-a-7e79e463ed" Sep 12 17:55:52.074531 containerd[1819]: 2025-09-12 17:55:52.053 [INFO][6076] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-a-7e79e463ed" Sep 12 17:55:52.074531 containerd[1819]: 2025-09-12 17:55:52.058 [INFO][6076] ipam/ipam.go 511: Trying affinity for 192.168.18.64/26 host="ci-4081.3.6-a-7e79e463ed" Sep 12 17:55:52.074531 containerd[1819]: 2025-09-12 17:55:52.059 [INFO][6076] ipam/ipam.go 158: Attempting to load block cidr=192.168.18.64/26 host="ci-4081.3.6-a-7e79e463ed" Sep 12 17:55:52.074531 containerd[1819]: 2025-09-12 17:55:52.061 [INFO][6076] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.18.64/26 host="ci-4081.3.6-a-7e79e463ed" Sep 12 17:55:52.074531 containerd[1819]: 2025-09-12 17:55:52.061 [INFO][6076] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.18.64/26 handle="k8s-pod-network.3bd04e2053f37a5b21ef2ae1d5314d702747eaf220b1ca4b60d3478bd0bbf498" host="ci-4081.3.6-a-7e79e463ed" Sep 12 17:55:52.074531 containerd[1819]: 2025-09-12 17:55:52.062 [INFO][6076] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.3bd04e2053f37a5b21ef2ae1d5314d702747eaf220b1ca4b60d3478bd0bbf498 Sep 12 17:55:52.074531 containerd[1819]: 2025-09-12 17:55:52.064 [INFO][6076] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.18.64/26 handle="k8s-pod-network.3bd04e2053f37a5b21ef2ae1d5314d702747eaf220b1ca4b60d3478bd0bbf498" host="ci-4081.3.6-a-7e79e463ed" Sep 12 17:55:52.074531 containerd[1819]: 2025-09-12 17:55:52.067 [INFO][6076] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.18.70/26] block=192.168.18.64/26 handle="k8s-pod-network.3bd04e2053f37a5b21ef2ae1d5314d702747eaf220b1ca4b60d3478bd0bbf498" host="ci-4081.3.6-a-7e79e463ed" Sep 12 17:55:52.074531 containerd[1819]: 2025-09-12 17:55:52.067 [INFO][6076] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.18.70/26] handle="k8s-pod-network.3bd04e2053f37a5b21ef2ae1d5314d702747eaf220b1ca4b60d3478bd0bbf498" host="ci-4081.3.6-a-7e79e463ed" Sep 12 17:55:52.074531 containerd[1819]: 2025-09-12 17:55:52.067 [INFO][6076] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:55:52.074531 containerd[1819]: 2025-09-12 17:55:52.067 [INFO][6076] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.18.70/26] IPv6=[] ContainerID="3bd04e2053f37a5b21ef2ae1d5314d702747eaf220b1ca4b60d3478bd0bbf498" HandleID="k8s-pod-network.3bd04e2053f37a5b21ef2ae1d5314d702747eaf220b1ca4b60d3478bd0bbf498" Workload="ci--4081.3.6--a--7e79e463ed-k8s-calico--apiserver--79f6dbf598--j6bmc-eth0" Sep 12 17:55:52.074981 containerd[1819]: 2025-09-12 17:55:52.068 [INFO][6007] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3bd04e2053f37a5b21ef2ae1d5314d702747eaf220b1ca4b60d3478bd0bbf498" Namespace="calico-apiserver" Pod="calico-apiserver-79f6dbf598-j6bmc" WorkloadEndpoint="ci--4081.3.6--a--7e79e463ed-k8s-calico--apiserver--79f6dbf598--j6bmc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--a--7e79e463ed-k8s-calico--apiserver--79f6dbf598--j6bmc-eth0", GenerateName:"calico-apiserver-79f6dbf598-", Namespace:"calico-apiserver", SelfLink:"", UID:"f5372128-8327-44f1-8a1c-68eda7b4a892", ResourceVersion:"940", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 55, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", 
"app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"79f6dbf598", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-a-7e79e463ed", ContainerID:"", Pod:"calico-apiserver-79f6dbf598-j6bmc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.18.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia3a56b3ce65", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:55:52.074981 containerd[1819]: 2025-09-12 17:55:52.068 [INFO][6007] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.18.70/32] ContainerID="3bd04e2053f37a5b21ef2ae1d5314d702747eaf220b1ca4b60d3478bd0bbf498" Namespace="calico-apiserver" Pod="calico-apiserver-79f6dbf598-j6bmc" WorkloadEndpoint="ci--4081.3.6--a--7e79e463ed-k8s-calico--apiserver--79f6dbf598--j6bmc-eth0" Sep 12 17:55:52.074981 containerd[1819]: 2025-09-12 17:55:52.068 [INFO][6007] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia3a56b3ce65 ContainerID="3bd04e2053f37a5b21ef2ae1d5314d702747eaf220b1ca4b60d3478bd0bbf498" Namespace="calico-apiserver" Pod="calico-apiserver-79f6dbf598-j6bmc" WorkloadEndpoint="ci--4081.3.6--a--7e79e463ed-k8s-calico--apiserver--79f6dbf598--j6bmc-eth0" Sep 12 17:55:52.074981 containerd[1819]: 2025-09-12 17:55:52.069 [INFO][6007] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3bd04e2053f37a5b21ef2ae1d5314d702747eaf220b1ca4b60d3478bd0bbf498" Namespace="calico-apiserver" 
Pod="calico-apiserver-79f6dbf598-j6bmc" WorkloadEndpoint="ci--4081.3.6--a--7e79e463ed-k8s-calico--apiserver--79f6dbf598--j6bmc-eth0" Sep 12 17:55:52.074981 containerd[1819]: 2025-09-12 17:55:52.069 [INFO][6007] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3bd04e2053f37a5b21ef2ae1d5314d702747eaf220b1ca4b60d3478bd0bbf498" Namespace="calico-apiserver" Pod="calico-apiserver-79f6dbf598-j6bmc" WorkloadEndpoint="ci--4081.3.6--a--7e79e463ed-k8s-calico--apiserver--79f6dbf598--j6bmc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--a--7e79e463ed-k8s-calico--apiserver--79f6dbf598--j6bmc-eth0", GenerateName:"calico-apiserver-79f6dbf598-", Namespace:"calico-apiserver", SelfLink:"", UID:"f5372128-8327-44f1-8a1c-68eda7b4a892", ResourceVersion:"940", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 55, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"79f6dbf598", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-a-7e79e463ed", ContainerID:"3bd04e2053f37a5b21ef2ae1d5314d702747eaf220b1ca4b60d3478bd0bbf498", Pod:"calico-apiserver-79f6dbf598-j6bmc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.18.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, 
InterfaceName:"calia3a56b3ce65", MAC:"3a:79:d3:54:44:63", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:55:52.074981 containerd[1819]: 2025-09-12 17:55:52.073 [INFO][6007] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3bd04e2053f37a5b21ef2ae1d5314d702747eaf220b1ca4b60d3478bd0bbf498" Namespace="calico-apiserver" Pod="calico-apiserver-79f6dbf598-j6bmc" WorkloadEndpoint="ci--4081.3.6--a--7e79e463ed-k8s-calico--apiserver--79f6dbf598--j6bmc-eth0" Sep 12 17:55:52.082746 containerd[1819]: time="2025-09-12T17:55:52.082674838Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:55:52.082746 containerd[1819]: time="2025-09-12T17:55:52.082739340Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:55:52.082845 containerd[1819]: time="2025-09-12T17:55:52.082755264Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:55:52.082845 containerd[1819]: time="2025-09-12T17:55:52.082802391Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:55:52.093495 systemd-networkd[1617]: calidb9e57173dd: Gained IPv6LL Sep 12 17:55:52.098587 systemd[1]: Started cri-containerd-3bd04e2053f37a5b21ef2ae1d5314d702747eaf220b1ca4b60d3478bd0bbf498.scope - libcontainer container 3bd04e2053f37a5b21ef2ae1d5314d702747eaf220b1ca4b60d3478bd0bbf498. 
Sep 12 17:55:52.122378 containerd[1819]: time="2025-09-12T17:55:52.122357584Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-79f6dbf598-j6bmc,Uid:f5372128-8327-44f1-8a1c-68eda7b4a892,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"3bd04e2053f37a5b21ef2ae1d5314d702747eaf220b1ca4b60d3478bd0bbf498\"" Sep 12 17:55:52.169147 systemd-networkd[1617]: calied429ea2600: Link UP Sep 12 17:55:52.169321 systemd-networkd[1617]: calied429ea2600: Gained carrier Sep 12 17:55:52.176060 containerd[1819]: 2025-09-12 17:55:52.031 [INFO][6014] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--a--7e79e463ed-k8s-coredns--7c65d6cfc9--kbj9g-eth0 coredns-7c65d6cfc9- kube-system 054fb7c1-c456-47f2-811b-49f3435a8e35 941 0 2025-09-12 17:55:16 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.6-a-7e79e463ed coredns-7c65d6cfc9-kbj9g eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calied429ea2600 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="3f9d7bc7829f55e336fd94894942198b2f903b432250bedead833f7820e3510f" Namespace="kube-system" Pod="coredns-7c65d6cfc9-kbj9g" WorkloadEndpoint="ci--4081.3.6--a--7e79e463ed-k8s-coredns--7c65d6cfc9--kbj9g-" Sep 12 17:55:52.176060 containerd[1819]: 2025-09-12 17:55:52.031 [INFO][6014] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3f9d7bc7829f55e336fd94894942198b2f903b432250bedead833f7820e3510f" Namespace="kube-system" Pod="coredns-7c65d6cfc9-kbj9g" WorkloadEndpoint="ci--4081.3.6--a--7e79e463ed-k8s-coredns--7c65d6cfc9--kbj9g-eth0" Sep 12 17:55:52.176060 containerd[1819]: 2025-09-12 17:55:52.046 [INFO][6074] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="3f9d7bc7829f55e336fd94894942198b2f903b432250bedead833f7820e3510f" HandleID="k8s-pod-network.3f9d7bc7829f55e336fd94894942198b2f903b432250bedead833f7820e3510f" Workload="ci--4081.3.6--a--7e79e463ed-k8s-coredns--7c65d6cfc9--kbj9g-eth0" Sep 12 17:55:52.176060 containerd[1819]: 2025-09-12 17:55:52.046 [INFO][6074] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3f9d7bc7829f55e336fd94894942198b2f903b432250bedead833f7820e3510f" HandleID="k8s-pod-network.3f9d7bc7829f55e336fd94894942198b2f903b432250bedead833f7820e3510f" Workload="ci--4081.3.6--a--7e79e463ed-k8s-coredns--7c65d6cfc9--kbj9g-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f7d0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.6-a-7e79e463ed", "pod":"coredns-7c65d6cfc9-kbj9g", "timestamp":"2025-09-12 17:55:52.046191382 +0000 UTC"}, Hostname:"ci-4081.3.6-a-7e79e463ed", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:55:52.176060 containerd[1819]: 2025-09-12 17:55:52.046 [INFO][6074] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:55:52.176060 containerd[1819]: 2025-09-12 17:55:52.067 [INFO][6074] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 17:55:52.176060 containerd[1819]: 2025-09-12 17:55:52.067 [INFO][6074] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-a-7e79e463ed' Sep 12 17:55:52.176060 containerd[1819]: 2025-09-12 17:55:52.151 [INFO][6074] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3f9d7bc7829f55e336fd94894942198b2f903b432250bedead833f7820e3510f" host="ci-4081.3.6-a-7e79e463ed" Sep 12 17:55:52.176060 containerd[1819]: 2025-09-12 17:55:52.154 [INFO][6074] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-a-7e79e463ed" Sep 12 17:55:52.176060 containerd[1819]: 2025-09-12 17:55:52.158 [INFO][6074] ipam/ipam.go 511: Trying affinity for 192.168.18.64/26 host="ci-4081.3.6-a-7e79e463ed" Sep 12 17:55:52.176060 containerd[1819]: 2025-09-12 17:55:52.159 [INFO][6074] ipam/ipam.go 158: Attempting to load block cidr=192.168.18.64/26 host="ci-4081.3.6-a-7e79e463ed" Sep 12 17:55:52.176060 containerd[1819]: 2025-09-12 17:55:52.161 [INFO][6074] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.18.64/26 host="ci-4081.3.6-a-7e79e463ed" Sep 12 17:55:52.176060 containerd[1819]: 2025-09-12 17:55:52.161 [INFO][6074] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.18.64/26 handle="k8s-pod-network.3f9d7bc7829f55e336fd94894942198b2f903b432250bedead833f7820e3510f" host="ci-4081.3.6-a-7e79e463ed" Sep 12 17:55:52.176060 containerd[1819]: 2025-09-12 17:55:52.162 [INFO][6074] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.3f9d7bc7829f55e336fd94894942198b2f903b432250bedead833f7820e3510f Sep 12 17:55:52.176060 containerd[1819]: 2025-09-12 17:55:52.164 [INFO][6074] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.18.64/26 handle="k8s-pod-network.3f9d7bc7829f55e336fd94894942198b2f903b432250bedead833f7820e3510f" host="ci-4081.3.6-a-7e79e463ed" Sep 12 17:55:52.176060 containerd[1819]: 2025-09-12 17:55:52.167 [INFO][6074] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.18.71/26] block=192.168.18.64/26 handle="k8s-pod-network.3f9d7bc7829f55e336fd94894942198b2f903b432250bedead833f7820e3510f" host="ci-4081.3.6-a-7e79e463ed" Sep 12 17:55:52.176060 containerd[1819]: 2025-09-12 17:55:52.167 [INFO][6074] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.18.71/26] handle="k8s-pod-network.3f9d7bc7829f55e336fd94894942198b2f903b432250bedead833f7820e3510f" host="ci-4081.3.6-a-7e79e463ed" Sep 12 17:55:52.176060 containerd[1819]: 2025-09-12 17:55:52.167 [INFO][6074] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:55:52.176060 containerd[1819]: 2025-09-12 17:55:52.167 [INFO][6074] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.18.71/26] IPv6=[] ContainerID="3f9d7bc7829f55e336fd94894942198b2f903b432250bedead833f7820e3510f" HandleID="k8s-pod-network.3f9d7bc7829f55e336fd94894942198b2f903b432250bedead833f7820e3510f" Workload="ci--4081.3.6--a--7e79e463ed-k8s-coredns--7c65d6cfc9--kbj9g-eth0" Sep 12 17:55:52.176472 containerd[1819]: 2025-09-12 17:55:52.168 [INFO][6014] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3f9d7bc7829f55e336fd94894942198b2f903b432250bedead833f7820e3510f" Namespace="kube-system" Pod="coredns-7c65d6cfc9-kbj9g" WorkloadEndpoint="ci--4081.3.6--a--7e79e463ed-k8s-coredns--7c65d6cfc9--kbj9g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--a--7e79e463ed-k8s-coredns--7c65d6cfc9--kbj9g-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"054fb7c1-c456-47f2-811b-49f3435a8e35", ResourceVersion:"941", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 55, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-a-7e79e463ed", ContainerID:"", Pod:"coredns-7c65d6cfc9-kbj9g", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.18.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calied429ea2600", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:55:52.176472 containerd[1819]: 2025-09-12 17:55:52.168 [INFO][6014] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.18.71/32] ContainerID="3f9d7bc7829f55e336fd94894942198b2f903b432250bedead833f7820e3510f" Namespace="kube-system" Pod="coredns-7c65d6cfc9-kbj9g" WorkloadEndpoint="ci--4081.3.6--a--7e79e463ed-k8s-coredns--7c65d6cfc9--kbj9g-eth0" Sep 12 17:55:52.176472 containerd[1819]: 2025-09-12 17:55:52.168 [INFO][6014] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calied429ea2600 ContainerID="3f9d7bc7829f55e336fd94894942198b2f903b432250bedead833f7820e3510f" Namespace="kube-system" Pod="coredns-7c65d6cfc9-kbj9g" WorkloadEndpoint="ci--4081.3.6--a--7e79e463ed-k8s-coredns--7c65d6cfc9--kbj9g-eth0" Sep 12 17:55:52.176472 containerd[1819]: 2025-09-12 17:55:52.169 [INFO][6014] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3f9d7bc7829f55e336fd94894942198b2f903b432250bedead833f7820e3510f" Namespace="kube-system" Pod="coredns-7c65d6cfc9-kbj9g" WorkloadEndpoint="ci--4081.3.6--a--7e79e463ed-k8s-coredns--7c65d6cfc9--kbj9g-eth0" Sep 12 17:55:52.176472 containerd[1819]: 2025-09-12 17:55:52.169 [INFO][6014] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3f9d7bc7829f55e336fd94894942198b2f903b432250bedead833f7820e3510f" Namespace="kube-system" Pod="coredns-7c65d6cfc9-kbj9g" WorkloadEndpoint="ci--4081.3.6--a--7e79e463ed-k8s-coredns--7c65d6cfc9--kbj9g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--a--7e79e463ed-k8s-coredns--7c65d6cfc9--kbj9g-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"054fb7c1-c456-47f2-811b-49f3435a8e35", ResourceVersion:"941", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 55, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-a-7e79e463ed", ContainerID:"3f9d7bc7829f55e336fd94894942198b2f903b432250bedead833f7820e3510f", Pod:"coredns-7c65d6cfc9-kbj9g", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.18.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calied429ea2600", 
MAC:"32:0e:7e:7c:2d:15", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:55:52.176472 containerd[1819]: 2025-09-12 17:55:52.175 [INFO][6014] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3f9d7bc7829f55e336fd94894942198b2f903b432250bedead833f7820e3510f" Namespace="kube-system" Pod="coredns-7c65d6cfc9-kbj9g" WorkloadEndpoint="ci--4081.3.6--a--7e79e463ed-k8s-coredns--7c65d6cfc9--kbj9g-eth0" Sep 12 17:55:52.184862 containerd[1819]: time="2025-09-12T17:55:52.184647758Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:55:52.184862 containerd[1819]: time="2025-09-12T17:55:52.184853380Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:55:52.184862 containerd[1819]: time="2025-09-12T17:55:52.184860975Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:55:52.184957 containerd[1819]: time="2025-09-12T17:55:52.184901053Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:55:52.205550 systemd[1]: Started cri-containerd-3f9d7bc7829f55e336fd94894942198b2f903b432250bedead833f7820e3510f.scope - libcontainer container 3f9d7bc7829f55e336fd94894942198b2f903b432250bedead833f7820e3510f. 
Sep 12 17:55:52.228075 containerd[1819]: time="2025-09-12T17:55:52.228051300Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-kbj9g,Uid:054fb7c1-c456-47f2-811b-49f3435a8e35,Namespace:kube-system,Attempt:1,} returns sandbox id \"3f9d7bc7829f55e336fd94894942198b2f903b432250bedead833f7820e3510f\"" Sep 12 17:55:52.229198 containerd[1819]: time="2025-09-12T17:55:52.229181405Z" level=info msg="CreateContainer within sandbox \"3f9d7bc7829f55e336fd94894942198b2f903b432250bedead833f7820e3510f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 12 17:55:52.243036 containerd[1819]: time="2025-09-12T17:55:52.242990238Z" level=info msg="CreateContainer within sandbox \"3f9d7bc7829f55e336fd94894942198b2f903b432250bedead833f7820e3510f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f9984149e13bcc11d829618dca92bc1949fa1ab15620386ec89900d4b2fe22f3\"" Sep 12 17:55:52.243283 containerd[1819]: time="2025-09-12T17:55:52.243245518Z" level=info msg="StartContainer for \"f9984149e13bcc11d829618dca92bc1949fa1ab15620386ec89900d4b2fe22f3\"" Sep 12 17:55:52.267970 systemd[1]: Started cri-containerd-f9984149e13bcc11d829618dca92bc1949fa1ab15620386ec89900d4b2fe22f3.scope - libcontainer container f9984149e13bcc11d829618dca92bc1949fa1ab15620386ec89900d4b2fe22f3. 
Sep 12 17:55:52.280399 systemd-networkd[1617]: cali97390b45778: Link UP Sep 12 17:55:52.280529 systemd-networkd[1617]: cali97390b45778: Gained carrier Sep 12 17:55:52.282301 containerd[1819]: time="2025-09-12T17:55:52.282279231Z" level=info msg="StartContainer for \"f9984149e13bcc11d829618dca92bc1949fa1ab15620386ec89900d4b2fe22f3\" returns successfully" Sep 12 17:55:52.286233 containerd[1819]: 2025-09-12 17:55:52.037 [INFO][6038] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--a--7e79e463ed-k8s-goldmane--7988f88666--87fsc-eth0 goldmane-7988f88666- calico-system f0e34300-52fa-4b2c-a580-7e7738d631f0 942 0 2025-09-12 17:55:26 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7988f88666 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4081.3.6-a-7e79e463ed goldmane-7988f88666-87fsc eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali97390b45778 [] [] }} ContainerID="ddf375cfe54640b46095ae00ee22c70a14e451b58c71a203c25f34b14fbcd7ca" Namespace="calico-system" Pod="goldmane-7988f88666-87fsc" WorkloadEndpoint="ci--4081.3.6--a--7e79e463ed-k8s-goldmane--7988f88666--87fsc-" Sep 12 17:55:52.286233 containerd[1819]: 2025-09-12 17:55:52.037 [INFO][6038] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ddf375cfe54640b46095ae00ee22c70a14e451b58c71a203c25f34b14fbcd7ca" Namespace="calico-system" Pod="goldmane-7988f88666-87fsc" WorkloadEndpoint="ci--4081.3.6--a--7e79e463ed-k8s-goldmane--7988f88666--87fsc-eth0" Sep 12 17:55:52.286233 containerd[1819]: 2025-09-12 17:55:52.050 [INFO][6086] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ddf375cfe54640b46095ae00ee22c70a14e451b58c71a203c25f34b14fbcd7ca" HandleID="k8s-pod-network.ddf375cfe54640b46095ae00ee22c70a14e451b58c71a203c25f34b14fbcd7ca" 
Workload="ci--4081.3.6--a--7e79e463ed-k8s-goldmane--7988f88666--87fsc-eth0" Sep 12 17:55:52.286233 containerd[1819]: 2025-09-12 17:55:52.050 [INFO][6086] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ddf375cfe54640b46095ae00ee22c70a14e451b58c71a203c25f34b14fbcd7ca" HandleID="k8s-pod-network.ddf375cfe54640b46095ae00ee22c70a14e451b58c71a203c25f34b14fbcd7ca" Workload="ci--4081.3.6--a--7e79e463ed-k8s-goldmane--7988f88666--87fsc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000139720), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-a-7e79e463ed", "pod":"goldmane-7988f88666-87fsc", "timestamp":"2025-09-12 17:55:52.050678032 +0000 UTC"}, Hostname:"ci-4081.3.6-a-7e79e463ed", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:55:52.286233 containerd[1819]: 2025-09-12 17:55:52.050 [INFO][6086] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:55:52.286233 containerd[1819]: 2025-09-12 17:55:52.167 [INFO][6086] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 17:55:52.286233 containerd[1819]: 2025-09-12 17:55:52.167 [INFO][6086] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-a-7e79e463ed' Sep 12 17:55:52.286233 containerd[1819]: 2025-09-12 17:55:52.251 [INFO][6086] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ddf375cfe54640b46095ae00ee22c70a14e451b58c71a203c25f34b14fbcd7ca" host="ci-4081.3.6-a-7e79e463ed" Sep 12 17:55:52.286233 containerd[1819]: 2025-09-12 17:55:52.254 [INFO][6086] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-a-7e79e463ed" Sep 12 17:55:52.286233 containerd[1819]: 2025-09-12 17:55:52.259 [INFO][6086] ipam/ipam.go 511: Trying affinity for 192.168.18.64/26 host="ci-4081.3.6-a-7e79e463ed" Sep 12 17:55:52.286233 containerd[1819]: 2025-09-12 17:55:52.268 [INFO][6086] ipam/ipam.go 158: Attempting to load block cidr=192.168.18.64/26 host="ci-4081.3.6-a-7e79e463ed" Sep 12 17:55:52.286233 containerd[1819]: 2025-09-12 17:55:52.270 [INFO][6086] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.18.64/26 host="ci-4081.3.6-a-7e79e463ed" Sep 12 17:55:52.286233 containerd[1819]: 2025-09-12 17:55:52.270 [INFO][6086] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.18.64/26 handle="k8s-pod-network.ddf375cfe54640b46095ae00ee22c70a14e451b58c71a203c25f34b14fbcd7ca" host="ci-4081.3.6-a-7e79e463ed" Sep 12 17:55:52.286233 containerd[1819]: 2025-09-12 17:55:52.271 [INFO][6086] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.ddf375cfe54640b46095ae00ee22c70a14e451b58c71a203c25f34b14fbcd7ca Sep 12 17:55:52.286233 containerd[1819]: 2025-09-12 17:55:52.274 [INFO][6086] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.18.64/26 handle="k8s-pod-network.ddf375cfe54640b46095ae00ee22c70a14e451b58c71a203c25f34b14fbcd7ca" host="ci-4081.3.6-a-7e79e463ed" Sep 12 17:55:52.286233 containerd[1819]: 2025-09-12 17:55:52.277 [INFO][6086] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.18.72/26] block=192.168.18.64/26 handle="k8s-pod-network.ddf375cfe54640b46095ae00ee22c70a14e451b58c71a203c25f34b14fbcd7ca" host="ci-4081.3.6-a-7e79e463ed" Sep 12 17:55:52.286233 containerd[1819]: 2025-09-12 17:55:52.277 [INFO][6086] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.18.72/26] handle="k8s-pod-network.ddf375cfe54640b46095ae00ee22c70a14e451b58c71a203c25f34b14fbcd7ca" host="ci-4081.3.6-a-7e79e463ed" Sep 12 17:55:52.286233 containerd[1819]: 2025-09-12 17:55:52.277 [INFO][6086] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:55:52.286233 containerd[1819]: 2025-09-12 17:55:52.277 [INFO][6086] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.18.72/26] IPv6=[] ContainerID="ddf375cfe54640b46095ae00ee22c70a14e451b58c71a203c25f34b14fbcd7ca" HandleID="k8s-pod-network.ddf375cfe54640b46095ae00ee22c70a14e451b58c71a203c25f34b14fbcd7ca" Workload="ci--4081.3.6--a--7e79e463ed-k8s-goldmane--7988f88666--87fsc-eth0" Sep 12 17:55:52.286947 containerd[1819]: 2025-09-12 17:55:52.279 [INFO][6038] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ddf375cfe54640b46095ae00ee22c70a14e451b58c71a203c25f34b14fbcd7ca" Namespace="calico-system" Pod="goldmane-7988f88666-87fsc" WorkloadEndpoint="ci--4081.3.6--a--7e79e463ed-k8s-goldmane--7988f88666--87fsc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--a--7e79e463ed-k8s-goldmane--7988f88666--87fsc-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"f0e34300-52fa-4b2c-a580-7e7738d631f0", ResourceVersion:"942", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 55, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-a-7e79e463ed", ContainerID:"", Pod:"goldmane-7988f88666-87fsc", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.18.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali97390b45778", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:55:52.286947 containerd[1819]: 2025-09-12 17:55:52.279 [INFO][6038] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.18.72/32] ContainerID="ddf375cfe54640b46095ae00ee22c70a14e451b58c71a203c25f34b14fbcd7ca" Namespace="calico-system" Pod="goldmane-7988f88666-87fsc" WorkloadEndpoint="ci--4081.3.6--a--7e79e463ed-k8s-goldmane--7988f88666--87fsc-eth0" Sep 12 17:55:52.286947 containerd[1819]: 2025-09-12 17:55:52.279 [INFO][6038] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali97390b45778 ContainerID="ddf375cfe54640b46095ae00ee22c70a14e451b58c71a203c25f34b14fbcd7ca" Namespace="calico-system" Pod="goldmane-7988f88666-87fsc" WorkloadEndpoint="ci--4081.3.6--a--7e79e463ed-k8s-goldmane--7988f88666--87fsc-eth0" Sep 12 17:55:52.286947 containerd[1819]: 2025-09-12 17:55:52.280 [INFO][6038] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ddf375cfe54640b46095ae00ee22c70a14e451b58c71a203c25f34b14fbcd7ca" Namespace="calico-system" Pod="goldmane-7988f88666-87fsc" WorkloadEndpoint="ci--4081.3.6--a--7e79e463ed-k8s-goldmane--7988f88666--87fsc-eth0" Sep 12 17:55:52.286947 containerd[1819]: 2025-09-12 17:55:52.280 [INFO][6038] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ddf375cfe54640b46095ae00ee22c70a14e451b58c71a203c25f34b14fbcd7ca" Namespace="calico-system" Pod="goldmane-7988f88666-87fsc" WorkloadEndpoint="ci--4081.3.6--a--7e79e463ed-k8s-goldmane--7988f88666--87fsc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--a--7e79e463ed-k8s-goldmane--7988f88666--87fsc-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"f0e34300-52fa-4b2c-a580-7e7738d631f0", ResourceVersion:"942", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 55, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-a-7e79e463ed", ContainerID:"ddf375cfe54640b46095ae00ee22c70a14e451b58c71a203c25f34b14fbcd7ca", Pod:"goldmane-7988f88666-87fsc", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.18.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali97390b45778", MAC:"5e:77:c3:ba:cb:a2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:55:52.286947 containerd[1819]: 2025-09-12 17:55:52.285 [INFO][6038] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="ddf375cfe54640b46095ae00ee22c70a14e451b58c71a203c25f34b14fbcd7ca" Namespace="calico-system" Pod="goldmane-7988f88666-87fsc" WorkloadEndpoint="ci--4081.3.6--a--7e79e463ed-k8s-goldmane--7988f88666--87fsc-eth0" Sep 12 17:55:52.295252 containerd[1819]: time="2025-09-12T17:55:52.295206399Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:55:52.295252 containerd[1819]: time="2025-09-12T17:55:52.295240484Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:55:52.295363 containerd[1819]: time="2025-09-12T17:55:52.295253961Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:55:52.295363 containerd[1819]: time="2025-09-12T17:55:52.295302136Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:55:52.316598 systemd[1]: Started cri-containerd-ddf375cfe54640b46095ae00ee22c70a14e451b58c71a203c25f34b14fbcd7ca.scope - libcontainer container ddf375cfe54640b46095ae00ee22c70a14e451b58c71a203c25f34b14fbcd7ca. 
Sep 12 17:55:52.340745 containerd[1819]: time="2025-09-12T17:55:52.340723181Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-87fsc,Uid:f0e34300-52fa-4b2c-a580-7e7738d631f0,Namespace:calico-system,Attempt:1,} returns sandbox id \"ddf375cfe54640b46095ae00ee22c70a14e451b58c71a203c25f34b14fbcd7ca\"" Sep 12 17:55:52.413706 systemd-networkd[1617]: cali48cd5100b63: Gained IPv6LL Sep 12 17:55:52.414255 systemd-networkd[1617]: calie49132c06ff: Gained IPv6LL Sep 12 17:55:52.925471 containerd[1819]: time="2025-09-12T17:55:52.925446110Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:55:52.925610 containerd[1819]: time="2025-09-12T17:55:52.925593539Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=47333864" Sep 12 17:55:52.925929 containerd[1819]: time="2025-09-12T17:55:52.925916221Z" level=info msg="ImageCreate event name:\"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:55:52.927003 containerd[1819]: time="2025-09-12T17:55:52.926987046Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:55:52.927452 containerd[1819]: time="2025-09-12T17:55:52.927431406Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 2.783929949s" Sep 12 17:55:52.927476 containerd[1819]: time="2025-09-12T17:55:52.927457244Z" level=info 
msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 12 17:55:52.927970 containerd[1819]: time="2025-09-12T17:55:52.927959971Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\"" Sep 12 17:55:52.928503 containerd[1819]: time="2025-09-12T17:55:52.928489443Z" level=info msg="CreateContainer within sandbox \"1ff0307bc64a59cf0c646ce05a9723db8596739d1b9bc99c9cd977876f5026aa\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 12 17:55:52.932478 containerd[1819]: time="2025-09-12T17:55:52.932462184Z" level=info msg="CreateContainer within sandbox \"1ff0307bc64a59cf0c646ce05a9723db8596739d1b9bc99c9cd977876f5026aa\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"5ab10d845494d94ff82cf4d4e8478418ab48ea1725ad058256b1f5ab7d0bc9e2\"" Sep 12 17:55:52.932668 containerd[1819]: time="2025-09-12T17:55:52.932656225Z" level=info msg="StartContainer for \"5ab10d845494d94ff82cf4d4e8478418ab48ea1725ad058256b1f5ab7d0bc9e2\"" Sep 12 17:55:52.953763 systemd[1]: Started cri-containerd-5ab10d845494d94ff82cf4d4e8478418ab48ea1725ad058256b1f5ab7d0bc9e2.scope - libcontainer container 5ab10d845494d94ff82cf4d4e8478418ab48ea1725ad058256b1f5ab7d0bc9e2. 
Sep 12 17:55:52.977780 containerd[1819]: time="2025-09-12T17:55:52.977757854Z" level=info msg="StartContainer for \"5ab10d845494d94ff82cf4d4e8478418ab48ea1725ad058256b1f5ab7d0bc9e2\" returns successfully" Sep 12 17:55:53.098422 kubelet[3070]: I0912 17:55:53.098386 3070 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-kbj9g" podStartSLOduration=37.098373972 podStartE2EDuration="37.098373972s" podCreationTimestamp="2025-09-12 17:55:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:55:53.098196889 +0000 UTC m=+42.182140232" watchObservedRunningTime="2025-09-12 17:55:53.098373972 +0000 UTC m=+42.182317354" Sep 12 17:55:53.103507 kubelet[3070]: I0912 17:55:53.103454 3070 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-79f6dbf598-7fz7j" podStartSLOduration=26.318857678 podStartE2EDuration="29.103441004s" podCreationTimestamp="2025-09-12 17:55:24 +0000 UTC" firstStartedPulling="2025-09-12 17:55:50.143325374 +0000 UTC m=+39.227268717" lastFinishedPulling="2025-09-12 17:55:52.927908703 +0000 UTC m=+42.011852043" observedRunningTime="2025-09-12 17:55:53.103249349 +0000 UTC m=+42.187192694" watchObservedRunningTime="2025-09-12 17:55:53.103441004 +0000 UTC m=+42.187384343" Sep 12 17:55:53.437623 systemd-networkd[1617]: calia3a56b3ce65: Gained IPv6LL Sep 12 17:55:53.501577 systemd-networkd[1617]: cali97390b45778: Gained IPv6LL Sep 12 17:55:53.885819 systemd-networkd[1617]: calied429ea2600: Gained IPv6LL Sep 12 17:55:54.096092 kubelet[3070]: I0912 17:55:54.096044 3070 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 12 17:55:55.504210 containerd[1819]: time="2025-09-12T17:55:55.504155051Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" 
Sep 12 17:55:55.504423 containerd[1819]: time="2025-09-12T17:55:55.504348251Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.3: active requests=0, bytes read=51277746" Sep 12 17:55:55.504771 containerd[1819]: time="2025-09-12T17:55:55.504731318Z" level=info msg="ImageCreate event name:\"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:55:55.505670 containerd[1819]: time="2025-09-12T17:55:55.505628619Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:55:55.506106 containerd[1819]: time="2025-09-12T17:55:55.506065979Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" with image id \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\", size \"52770417\" in 2.578091845s" Sep 12 17:55:55.506106 containerd[1819]: time="2025-09-12T17:55:55.506080988Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\"" Sep 12 17:55:55.506555 containerd[1819]: time="2025-09-12T17:55:55.506544420Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\"" Sep 12 17:55:55.509478 containerd[1819]: time="2025-09-12T17:55:55.509459045Z" level=info msg="CreateContainer within sandbox \"c2b9f963a2fd0f80e6b4edb4eb76bf07bd2dbfa7281cfc5fb18e1bb0a4e6bd01\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Sep 12 17:55:55.530964 containerd[1819]: time="2025-09-12T17:55:55.530916326Z" level=info 
msg="CreateContainer within sandbox \"c2b9f963a2fd0f80e6b4edb4eb76bf07bd2dbfa7281cfc5fb18e1bb0a4e6bd01\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"3ad81613fcc1474e60323d01c4c4514cf55cf06fefc122d903614cedac73de8e\"" Sep 12 17:55:55.531155 containerd[1819]: time="2025-09-12T17:55:55.531125595Z" level=info msg="StartContainer for \"3ad81613fcc1474e60323d01c4c4514cf55cf06fefc122d903614cedac73de8e\"" Sep 12 17:55:55.554664 systemd[1]: Started cri-containerd-3ad81613fcc1474e60323d01c4c4514cf55cf06fefc122d903614cedac73de8e.scope - libcontainer container 3ad81613fcc1474e60323d01c4c4514cf55cf06fefc122d903614cedac73de8e. Sep 12 17:55:55.585453 containerd[1819]: time="2025-09-12T17:55:55.585418299Z" level=info msg="StartContainer for \"3ad81613fcc1474e60323d01c4c4514cf55cf06fefc122d903614cedac73de8e\" returns successfully" Sep 12 17:55:56.177673 kubelet[3070]: I0912 17:55:56.177634 3070 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-7cf7c9b989-n5gtx" podStartSLOduration=24.815626625 podStartE2EDuration="29.177615658s" podCreationTimestamp="2025-09-12 17:55:27 +0000 UTC" firstStartedPulling="2025-09-12 17:55:51.144497811 +0000 UTC m=+40.228441151" lastFinishedPulling="2025-09-12 17:55:55.506486844 +0000 UTC m=+44.590430184" observedRunningTime="2025-09-12 17:55:56.114861133 +0000 UTC m=+45.198804504" watchObservedRunningTime="2025-09-12 17:55:56.177615658 +0000 UTC m=+45.261559001" Sep 12 17:55:57.066105 containerd[1819]: time="2025-09-12T17:55:57.066082271Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:55:57.066350 containerd[1819]: time="2025-09-12T17:55:57.066287367Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes read=8760527" Sep 12 17:55:57.066674 containerd[1819]: time="2025-09-12T17:55:57.066663608Z" level=info 
msg="ImageCreate event name:\"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:55:57.067606 containerd[1819]: time="2025-09-12T17:55:57.067596069Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:55:57.068335 containerd[1819]: time="2025-09-12T17:55:57.068299315Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"10253230\" in 1.561738488s" Sep 12 17:55:57.068335 containerd[1819]: time="2025-09-12T17:55:57.068315169Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\"" Sep 12 17:55:57.068787 containerd[1819]: time="2025-09-12T17:55:57.068777758Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 12 17:55:57.069390 containerd[1819]: time="2025-09-12T17:55:57.069376551Z" level=info msg="CreateContainer within sandbox \"c6f6e953cc722ca714590ad72dd0b501ea929f2ac0ef7c83e43e52952eb9017b\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Sep 12 17:55:57.074940 containerd[1819]: time="2025-09-12T17:55:57.074895267Z" level=info msg="CreateContainer within sandbox \"c6f6e953cc722ca714590ad72dd0b501ea929f2ac0ef7c83e43e52952eb9017b\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"4e44a5f52ab93157aea587d641eb6a54632e721c3b5fcd0033d1797dab872a2f\"" Sep 12 17:55:57.075123 containerd[1819]: time="2025-09-12T17:55:57.075085427Z" level=info msg="StartContainer for 
\"4e44a5f52ab93157aea587d641eb6a54632e721c3b5fcd0033d1797dab872a2f\"" Sep 12 17:55:57.103510 systemd[1]: Started cri-containerd-4e44a5f52ab93157aea587d641eb6a54632e721c3b5fcd0033d1797dab872a2f.scope - libcontainer container 4e44a5f52ab93157aea587d641eb6a54632e721c3b5fcd0033d1797dab872a2f. Sep 12 17:55:57.116371 containerd[1819]: time="2025-09-12T17:55:57.116347518Z" level=info msg="StartContainer for \"4e44a5f52ab93157aea587d641eb6a54632e721c3b5fcd0033d1797dab872a2f\" returns successfully" Sep 12 17:55:57.473353 containerd[1819]: time="2025-09-12T17:55:57.473265659Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:55:57.473526 containerd[1819]: time="2025-09-12T17:55:57.473462344Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77" Sep 12 17:55:57.474910 containerd[1819]: time="2025-09-12T17:55:57.474868662Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 406.077236ms" Sep 12 17:55:57.474910 containerd[1819]: time="2025-09-12T17:55:57.474884481Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 12 17:55:57.476412 containerd[1819]: time="2025-09-12T17:55:57.476122517Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\"" Sep 12 17:55:57.476579 containerd[1819]: time="2025-09-12T17:55:57.476525871Z" level=info msg="CreateContainer within sandbox \"3bd04e2053f37a5b21ef2ae1d5314d702747eaf220b1ca4b60d3478bd0bbf498\" for container 
&ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 12 17:55:57.480950 containerd[1819]: time="2025-09-12T17:55:57.480931891Z" level=info msg="CreateContainer within sandbox \"3bd04e2053f37a5b21ef2ae1d5314d702747eaf220b1ca4b60d3478bd0bbf498\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"f7b1156e30542c1248cd39950c3f6ee7860da71d3ca955f41a669e112e8dbba2\"" Sep 12 17:55:57.481271 containerd[1819]: time="2025-09-12T17:55:57.481235686Z" level=info msg="StartContainer for \"f7b1156e30542c1248cd39950c3f6ee7860da71d3ca955f41a669e112e8dbba2\"" Sep 12 17:55:57.504757 systemd[1]: Started cri-containerd-f7b1156e30542c1248cd39950c3f6ee7860da71d3ca955f41a669e112e8dbba2.scope - libcontainer container f7b1156e30542c1248cd39950c3f6ee7860da71d3ca955f41a669e112e8dbba2. Sep 12 17:55:57.534453 containerd[1819]: time="2025-09-12T17:55:57.534419423Z" level=info msg="StartContainer for \"f7b1156e30542c1248cd39950c3f6ee7860da71d3ca955f41a669e112e8dbba2\" returns successfully" Sep 12 17:55:58.114090 kubelet[3070]: I0912 17:55:58.114028 3070 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-79f6dbf598-j6bmc" podStartSLOduration=28.761518443 podStartE2EDuration="34.114017257s" podCreationTimestamp="2025-09-12 17:55:24 +0000 UTC" firstStartedPulling="2025-09-12 17:55:52.122939602 +0000 UTC m=+41.206882942" lastFinishedPulling="2025-09-12 17:55:57.475438415 +0000 UTC m=+46.559381756" observedRunningTime="2025-09-12 17:55:58.113755039 +0000 UTC m=+47.197698382" watchObservedRunningTime="2025-09-12 17:55:58.114017257 +0000 UTC m=+47.197960596" Sep 12 17:55:59.351459 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount946818937.mount: Deactivated successfully. 
Sep 12 17:56:00.172730 containerd[1819]: time="2025-09-12T17:56:00.172675423Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:56:00.172943 containerd[1819]: time="2025-09-12T17:56:00.172911938Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.3: active requests=0, bytes read=66357526" Sep 12 17:56:00.173272 containerd[1819]: time="2025-09-12T17:56:00.173234031Z" level=info msg="ImageCreate event name:\"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:56:00.174344 containerd[1819]: time="2025-09-12T17:56:00.174303466Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:56:00.174799 containerd[1819]: time="2025-09-12T17:56:00.174761563Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" with image id \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\", size \"66357372\" in 2.698617896s" Sep 12 17:56:00.174799 containerd[1819]: time="2025-09-12T17:56:00.174777366Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\"" Sep 12 17:56:00.175323 containerd[1819]: time="2025-09-12T17:56:00.175312177Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\"" Sep 12 17:56:00.175879 containerd[1819]: time="2025-09-12T17:56:00.175866388Z" level=info msg="CreateContainer within sandbox 
\"ddf375cfe54640b46095ae00ee22c70a14e451b58c71a203c25f34b14fbcd7ca\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Sep 12 17:56:00.180059 containerd[1819]: time="2025-09-12T17:56:00.180044720Z" level=info msg="CreateContainer within sandbox \"ddf375cfe54640b46095ae00ee22c70a14e451b58c71a203c25f34b14fbcd7ca\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"48120d1b54707cbd6d87ff3095ad5d21159e5c66859a6304badf8ffc03c7dbf6\"" Sep 12 17:56:00.180304 containerd[1819]: time="2025-09-12T17:56:00.180292434Z" level=info msg="StartContainer for \"48120d1b54707cbd6d87ff3095ad5d21159e5c66859a6304badf8ffc03c7dbf6\"" Sep 12 17:56:00.203709 systemd[1]: Started cri-containerd-48120d1b54707cbd6d87ff3095ad5d21159e5c66859a6304badf8ffc03c7dbf6.scope - libcontainer container 48120d1b54707cbd6d87ff3095ad5d21159e5c66859a6304badf8ffc03c7dbf6. Sep 12 17:56:00.226917 containerd[1819]: time="2025-09-12T17:56:00.226860622Z" level=info msg="StartContainer for \"48120d1b54707cbd6d87ff3095ad5d21159e5c66859a6304badf8ffc03c7dbf6\" returns successfully" Sep 12 17:56:01.125819 kubelet[3070]: I0912 17:56:01.125747 3070 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-7988f88666-87fsc" podStartSLOduration=27.291761476 podStartE2EDuration="35.125728147s" podCreationTimestamp="2025-09-12 17:55:26 +0000 UTC" firstStartedPulling="2025-09-12 17:55:52.34128939 +0000 UTC m=+41.425232736" lastFinishedPulling="2025-09-12 17:56:00.175256067 +0000 UTC m=+49.259199407" observedRunningTime="2025-09-12 17:56:01.125108083 +0000 UTC m=+50.209051447" watchObservedRunningTime="2025-09-12 17:56:01.125728147 +0000 UTC m=+50.209671503" Sep 12 17:56:01.840139 containerd[1819]: time="2025-09-12T17:56:01.840114181Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:56:01.840396 containerd[1819]: 
time="2025-09-12T17:56:01.840333206Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=14698542" Sep 12 17:56:01.840790 containerd[1819]: time="2025-09-12T17:56:01.840774674Z" level=info msg="ImageCreate event name:\"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:56:01.841761 containerd[1819]: time="2025-09-12T17:56:01.841746692Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:56:01.842137 containerd[1819]: time="2025-09-12T17:56:01.842122528Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"16191197\" in 1.666794558s" Sep 12 17:56:01.842181 containerd[1819]: time="2025-09-12T17:56:01.842138844Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\"" Sep 12 17:56:01.843365 containerd[1819]: time="2025-09-12T17:56:01.843330316Z" level=info msg="CreateContainer within sandbox \"c6f6e953cc722ca714590ad72dd0b501ea929f2ac0ef7c83e43e52952eb9017b\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Sep 12 17:56:01.848570 containerd[1819]: time="2025-09-12T17:56:01.848522438Z" level=info msg="CreateContainer within sandbox \"c6f6e953cc722ca714590ad72dd0b501ea929f2ac0ef7c83e43e52952eb9017b\" for 
&ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"983e01bf2a140e140d18a0123b763b0e334d8a8400797e6ce757499424764eec\"" Sep 12 17:56:01.848800 containerd[1819]: time="2025-09-12T17:56:01.848788182Z" level=info msg="StartContainer for \"983e01bf2a140e140d18a0123b763b0e334d8a8400797e6ce757499424764eec\"" Sep 12 17:56:01.879776 systemd[1]: Started cri-containerd-983e01bf2a140e140d18a0123b763b0e334d8a8400797e6ce757499424764eec.scope - libcontainer container 983e01bf2a140e140d18a0123b763b0e334d8a8400797e6ce757499424764eec. Sep 12 17:56:01.894069 containerd[1819]: time="2025-09-12T17:56:01.894016204Z" level=info msg="StartContainer for \"983e01bf2a140e140d18a0123b763b0e334d8a8400797e6ce757499424764eec\" returns successfully" Sep 12 17:56:02.002307 kubelet[3070]: I0912 17:56:02.002250 3070 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Sep 12 17:56:02.002611 kubelet[3070]: I0912 17:56:02.002341 3070 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Sep 12 17:56:02.219923 kubelet[3070]: I0912 17:56:02.219837 3070 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-mxk5v" podStartSLOduration=24.575195174 podStartE2EDuration="35.219823656s" podCreationTimestamp="2025-09-12 17:55:27 +0000 UTC" firstStartedPulling="2025-09-12 17:55:51.197972611 +0000 UTC m=+40.281915954" lastFinishedPulling="2025-09-12 17:56:01.842601096 +0000 UTC m=+50.926544436" observedRunningTime="2025-09-12 17:56:02.129285661 +0000 UTC m=+51.213229033" watchObservedRunningTime="2025-09-12 17:56:02.219823656 +0000 UTC m=+51.303766994" Sep 12 17:56:10.968260 containerd[1819]: time="2025-09-12T17:56:10.968232825Z" level=info msg="StopPodSandbox for 
\"343772909f7d6b025ce26cede40cd47e76fb9d22582426ab6317c448faa0df45\"" Sep 12 17:56:11.007511 containerd[1819]: 2025-09-12 17:56:10.989 [WARNING][6773] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="343772909f7d6b025ce26cede40cd47e76fb9d22582426ab6317c448faa0df45" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--a--7e79e463ed-k8s-calico--apiserver--79f6dbf598--j6bmc-eth0", GenerateName:"calico-apiserver-79f6dbf598-", Namespace:"calico-apiserver", SelfLink:"", UID:"f5372128-8327-44f1-8a1c-68eda7b4a892", ResourceVersion:"1002", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 55, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"79f6dbf598", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-a-7e79e463ed", ContainerID:"3bd04e2053f37a5b21ef2ae1d5314d702747eaf220b1ca4b60d3478bd0bbf498", Pod:"calico-apiserver-79f6dbf598-j6bmc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.18.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia3a56b3ce65", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:56:11.007511 containerd[1819]: 2025-09-12 17:56:10.990 
[INFO][6773] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="343772909f7d6b025ce26cede40cd47e76fb9d22582426ab6317c448faa0df45" Sep 12 17:56:11.007511 containerd[1819]: 2025-09-12 17:56:10.990 [INFO][6773] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="343772909f7d6b025ce26cede40cd47e76fb9d22582426ab6317c448faa0df45" iface="eth0" netns="" Sep 12 17:56:11.007511 containerd[1819]: 2025-09-12 17:56:10.990 [INFO][6773] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="343772909f7d6b025ce26cede40cd47e76fb9d22582426ab6317c448faa0df45" Sep 12 17:56:11.007511 containerd[1819]: 2025-09-12 17:56:10.990 [INFO][6773] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="343772909f7d6b025ce26cede40cd47e76fb9d22582426ab6317c448faa0df45" Sep 12 17:56:11.007511 containerd[1819]: 2025-09-12 17:56:11.000 [INFO][6792] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="343772909f7d6b025ce26cede40cd47e76fb9d22582426ab6317c448faa0df45" HandleID="k8s-pod-network.343772909f7d6b025ce26cede40cd47e76fb9d22582426ab6317c448faa0df45" Workload="ci--4081.3.6--a--7e79e463ed-k8s-calico--apiserver--79f6dbf598--j6bmc-eth0" Sep 12 17:56:11.007511 containerd[1819]: 2025-09-12 17:56:11.000 [INFO][6792] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:56:11.007511 containerd[1819]: 2025-09-12 17:56:11.000 [INFO][6792] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:56:11.007511 containerd[1819]: 2025-09-12 17:56:11.004 [WARNING][6792] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="343772909f7d6b025ce26cede40cd47e76fb9d22582426ab6317c448faa0df45" HandleID="k8s-pod-network.343772909f7d6b025ce26cede40cd47e76fb9d22582426ab6317c448faa0df45" Workload="ci--4081.3.6--a--7e79e463ed-k8s-calico--apiserver--79f6dbf598--j6bmc-eth0" Sep 12 17:56:11.007511 containerd[1819]: 2025-09-12 17:56:11.004 [INFO][6792] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="343772909f7d6b025ce26cede40cd47e76fb9d22582426ab6317c448faa0df45" HandleID="k8s-pod-network.343772909f7d6b025ce26cede40cd47e76fb9d22582426ab6317c448faa0df45" Workload="ci--4081.3.6--a--7e79e463ed-k8s-calico--apiserver--79f6dbf598--j6bmc-eth0" Sep 12 17:56:11.007511 containerd[1819]: 2025-09-12 17:56:11.005 [INFO][6792] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:56:11.007511 containerd[1819]: 2025-09-12 17:56:11.006 [INFO][6773] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="343772909f7d6b025ce26cede40cd47e76fb9d22582426ab6317c448faa0df45" Sep 12 17:56:11.007860 containerd[1819]: time="2025-09-12T17:56:11.007509857Z" level=info msg="TearDown network for sandbox \"343772909f7d6b025ce26cede40cd47e76fb9d22582426ab6317c448faa0df45\" successfully" Sep 12 17:56:11.007860 containerd[1819]: time="2025-09-12T17:56:11.007528328Z" level=info msg="StopPodSandbox for \"343772909f7d6b025ce26cede40cd47e76fb9d22582426ab6317c448faa0df45\" returns successfully" Sep 12 17:56:11.007860 containerd[1819]: time="2025-09-12T17:56:11.007850988Z" level=info msg="RemovePodSandbox for \"343772909f7d6b025ce26cede40cd47e76fb9d22582426ab6317c448faa0df45\"" Sep 12 17:56:11.007916 containerd[1819]: time="2025-09-12T17:56:11.007868788Z" level=info msg="Forcibly stopping sandbox \"343772909f7d6b025ce26cede40cd47e76fb9d22582426ab6317c448faa0df45\"" Sep 12 17:56:11.042221 containerd[1819]: 2025-09-12 17:56:11.026 [WARNING][6818] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="343772909f7d6b025ce26cede40cd47e76fb9d22582426ab6317c448faa0df45" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--a--7e79e463ed-k8s-calico--apiserver--79f6dbf598--j6bmc-eth0", GenerateName:"calico-apiserver-79f6dbf598-", Namespace:"calico-apiserver", SelfLink:"", UID:"f5372128-8327-44f1-8a1c-68eda7b4a892", ResourceVersion:"1002", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 55, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"79f6dbf598", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-a-7e79e463ed", ContainerID:"3bd04e2053f37a5b21ef2ae1d5314d702747eaf220b1ca4b60d3478bd0bbf498", Pod:"calico-apiserver-79f6dbf598-j6bmc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.18.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia3a56b3ce65", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:56:11.042221 containerd[1819]: 2025-09-12 17:56:11.026 [INFO][6818] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="343772909f7d6b025ce26cede40cd47e76fb9d22582426ab6317c448faa0df45" Sep 12 17:56:11.042221 containerd[1819]: 2025-09-12 17:56:11.026 [INFO][6818] cni-plugin/dataplane_linux.go 555: 
CleanUpNamespace called with no netns name, ignoring. ContainerID="343772909f7d6b025ce26cede40cd47e76fb9d22582426ab6317c448faa0df45" iface="eth0" netns="" Sep 12 17:56:11.042221 containerd[1819]: 2025-09-12 17:56:11.026 [INFO][6818] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="343772909f7d6b025ce26cede40cd47e76fb9d22582426ab6317c448faa0df45" Sep 12 17:56:11.042221 containerd[1819]: 2025-09-12 17:56:11.026 [INFO][6818] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="343772909f7d6b025ce26cede40cd47e76fb9d22582426ab6317c448faa0df45" Sep 12 17:56:11.042221 containerd[1819]: 2025-09-12 17:56:11.035 [INFO][6837] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="343772909f7d6b025ce26cede40cd47e76fb9d22582426ab6317c448faa0df45" HandleID="k8s-pod-network.343772909f7d6b025ce26cede40cd47e76fb9d22582426ab6317c448faa0df45" Workload="ci--4081.3.6--a--7e79e463ed-k8s-calico--apiserver--79f6dbf598--j6bmc-eth0" Sep 12 17:56:11.042221 containerd[1819]: 2025-09-12 17:56:11.035 [INFO][6837] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:56:11.042221 containerd[1819]: 2025-09-12 17:56:11.035 [INFO][6837] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:56:11.042221 containerd[1819]: 2025-09-12 17:56:11.039 [WARNING][6837] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="343772909f7d6b025ce26cede40cd47e76fb9d22582426ab6317c448faa0df45" HandleID="k8s-pod-network.343772909f7d6b025ce26cede40cd47e76fb9d22582426ab6317c448faa0df45" Workload="ci--4081.3.6--a--7e79e463ed-k8s-calico--apiserver--79f6dbf598--j6bmc-eth0" Sep 12 17:56:11.042221 containerd[1819]: 2025-09-12 17:56:11.039 [INFO][6837] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="343772909f7d6b025ce26cede40cd47e76fb9d22582426ab6317c448faa0df45" HandleID="k8s-pod-network.343772909f7d6b025ce26cede40cd47e76fb9d22582426ab6317c448faa0df45" Workload="ci--4081.3.6--a--7e79e463ed-k8s-calico--apiserver--79f6dbf598--j6bmc-eth0" Sep 12 17:56:11.042221 containerd[1819]: 2025-09-12 17:56:11.040 [INFO][6837] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:56:11.042221 containerd[1819]: 2025-09-12 17:56:11.041 [INFO][6818] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="343772909f7d6b025ce26cede40cd47e76fb9d22582426ab6317c448faa0df45" Sep 12 17:56:11.042532 containerd[1819]: time="2025-09-12T17:56:11.042248186Z" level=info msg="TearDown network for sandbox \"343772909f7d6b025ce26cede40cd47e76fb9d22582426ab6317c448faa0df45\" successfully" Sep 12 17:56:11.044547 containerd[1819]: time="2025-09-12T17:56:11.044529877Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"343772909f7d6b025ce26cede40cd47e76fb9d22582426ab6317c448faa0df45\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 12 17:56:11.044598 containerd[1819]: time="2025-09-12T17:56:11.044575907Z" level=info msg="RemovePodSandbox \"343772909f7d6b025ce26cede40cd47e76fb9d22582426ab6317c448faa0df45\" returns successfully" Sep 12 17:56:11.044932 containerd[1819]: time="2025-09-12T17:56:11.044919426Z" level=info msg="StopPodSandbox for \"4e77419d6912559c146407cf9873996d588d29dba4ea9a9c58375cd81c0883e5\"" Sep 12 17:56:11.079352 containerd[1819]: 2025-09-12 17:56:11.062 [WARNING][6861] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="4e77419d6912559c146407cf9873996d588d29dba4ea9a9c58375cd81c0883e5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--a--7e79e463ed-k8s-coredns--7c65d6cfc9--nkll2-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"99abd614-f027-426b-a5d7-84601fcd4b39", ResourceVersion:"915", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 55, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-a-7e79e463ed", ContainerID:"13c3dc06f11a28e170faf61abade719dd077d19faa16e2556536c1ef617c5b03", Pod:"coredns-7c65d6cfc9-nkll2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.18.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2efe8f351db", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:56:11.079352 containerd[1819]: 2025-09-12 17:56:11.062 [INFO][6861] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4e77419d6912559c146407cf9873996d588d29dba4ea9a9c58375cd81c0883e5" Sep 12 17:56:11.079352 containerd[1819]: 2025-09-12 17:56:11.062 [INFO][6861] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4e77419d6912559c146407cf9873996d588d29dba4ea9a9c58375cd81c0883e5" iface="eth0" netns="" Sep 12 17:56:11.079352 containerd[1819]: 2025-09-12 17:56:11.062 [INFO][6861] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4e77419d6912559c146407cf9873996d588d29dba4ea9a9c58375cd81c0883e5" Sep 12 17:56:11.079352 containerd[1819]: 2025-09-12 17:56:11.062 [INFO][6861] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4e77419d6912559c146407cf9873996d588d29dba4ea9a9c58375cd81c0883e5" Sep 12 17:56:11.079352 containerd[1819]: 2025-09-12 17:56:11.072 [INFO][6876] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4e77419d6912559c146407cf9873996d588d29dba4ea9a9c58375cd81c0883e5" HandleID="k8s-pod-network.4e77419d6912559c146407cf9873996d588d29dba4ea9a9c58375cd81c0883e5" Workload="ci--4081.3.6--a--7e79e463ed-k8s-coredns--7c65d6cfc9--nkll2-eth0" Sep 12 17:56:11.079352 containerd[1819]: 2025-09-12 17:56:11.072 [INFO][6876] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Sep 12 17:56:11.079352 containerd[1819]: 2025-09-12 17:56:11.072 [INFO][6876] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:56:11.079352 containerd[1819]: 2025-09-12 17:56:11.076 [WARNING][6876] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="4e77419d6912559c146407cf9873996d588d29dba4ea9a9c58375cd81c0883e5" HandleID="k8s-pod-network.4e77419d6912559c146407cf9873996d588d29dba4ea9a9c58375cd81c0883e5" Workload="ci--4081.3.6--a--7e79e463ed-k8s-coredns--7c65d6cfc9--nkll2-eth0" Sep 12 17:56:11.079352 containerd[1819]: 2025-09-12 17:56:11.076 [INFO][6876] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4e77419d6912559c146407cf9873996d588d29dba4ea9a9c58375cd81c0883e5" HandleID="k8s-pod-network.4e77419d6912559c146407cf9873996d588d29dba4ea9a9c58375cd81c0883e5" Workload="ci--4081.3.6--a--7e79e463ed-k8s-coredns--7c65d6cfc9--nkll2-eth0" Sep 12 17:56:11.079352 containerd[1819]: 2025-09-12 17:56:11.077 [INFO][6876] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:56:11.079352 containerd[1819]: 2025-09-12 17:56:11.078 [INFO][6861] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="4e77419d6912559c146407cf9873996d588d29dba4ea9a9c58375cd81c0883e5" Sep 12 17:56:11.079705 containerd[1819]: time="2025-09-12T17:56:11.079375608Z" level=info msg="TearDown network for sandbox \"4e77419d6912559c146407cf9873996d588d29dba4ea9a9c58375cd81c0883e5\" successfully" Sep 12 17:56:11.079705 containerd[1819]: time="2025-09-12T17:56:11.079392505Z" level=info msg="StopPodSandbox for \"4e77419d6912559c146407cf9873996d588d29dba4ea9a9c58375cd81c0883e5\" returns successfully" Sep 12 17:56:11.079705 containerd[1819]: time="2025-09-12T17:56:11.079656801Z" level=info msg="RemovePodSandbox for \"4e77419d6912559c146407cf9873996d588d29dba4ea9a9c58375cd81c0883e5\"" Sep 12 17:56:11.079705 containerd[1819]: time="2025-09-12T17:56:11.079674308Z" level=info msg="Forcibly stopping sandbox \"4e77419d6912559c146407cf9873996d588d29dba4ea9a9c58375cd81c0883e5\"" Sep 12 17:56:11.114865 containerd[1819]: 2025-09-12 17:56:11.097 [WARNING][6901] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4e77419d6912559c146407cf9873996d588d29dba4ea9a9c58375cd81c0883e5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--a--7e79e463ed-k8s-coredns--7c65d6cfc9--nkll2-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"99abd614-f027-426b-a5d7-84601fcd4b39", ResourceVersion:"915", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 55, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-a-7e79e463ed", ContainerID:"13c3dc06f11a28e170faf61abade719dd077d19faa16e2556536c1ef617c5b03", Pod:"coredns-7c65d6cfc9-nkll2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.18.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2efe8f351db", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:56:11.114865 containerd[1819]: 
2025-09-12 17:56:11.098 [INFO][6901] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4e77419d6912559c146407cf9873996d588d29dba4ea9a9c58375cd81c0883e5" Sep 12 17:56:11.114865 containerd[1819]: 2025-09-12 17:56:11.098 [INFO][6901] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4e77419d6912559c146407cf9873996d588d29dba4ea9a9c58375cd81c0883e5" iface="eth0" netns="" Sep 12 17:56:11.114865 containerd[1819]: 2025-09-12 17:56:11.098 [INFO][6901] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4e77419d6912559c146407cf9873996d588d29dba4ea9a9c58375cd81c0883e5" Sep 12 17:56:11.114865 containerd[1819]: 2025-09-12 17:56:11.098 [INFO][6901] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4e77419d6912559c146407cf9873996d588d29dba4ea9a9c58375cd81c0883e5" Sep 12 17:56:11.114865 containerd[1819]: 2025-09-12 17:56:11.108 [INFO][6918] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4e77419d6912559c146407cf9873996d588d29dba4ea9a9c58375cd81c0883e5" HandleID="k8s-pod-network.4e77419d6912559c146407cf9873996d588d29dba4ea9a9c58375cd81c0883e5" Workload="ci--4081.3.6--a--7e79e463ed-k8s-coredns--7c65d6cfc9--nkll2-eth0" Sep 12 17:56:11.114865 containerd[1819]: 2025-09-12 17:56:11.108 [INFO][6918] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:56:11.114865 containerd[1819]: 2025-09-12 17:56:11.108 [INFO][6918] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:56:11.114865 containerd[1819]: 2025-09-12 17:56:11.112 [WARNING][6918] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4e77419d6912559c146407cf9873996d588d29dba4ea9a9c58375cd81c0883e5" HandleID="k8s-pod-network.4e77419d6912559c146407cf9873996d588d29dba4ea9a9c58375cd81c0883e5" Workload="ci--4081.3.6--a--7e79e463ed-k8s-coredns--7c65d6cfc9--nkll2-eth0" Sep 12 17:56:11.114865 containerd[1819]: 2025-09-12 17:56:11.112 [INFO][6918] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4e77419d6912559c146407cf9873996d588d29dba4ea9a9c58375cd81c0883e5" HandleID="k8s-pod-network.4e77419d6912559c146407cf9873996d588d29dba4ea9a9c58375cd81c0883e5" Workload="ci--4081.3.6--a--7e79e463ed-k8s-coredns--7c65d6cfc9--nkll2-eth0" Sep 12 17:56:11.114865 containerd[1819]: 2025-09-12 17:56:11.113 [INFO][6918] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:56:11.114865 containerd[1819]: 2025-09-12 17:56:11.114 [INFO][6901] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4e77419d6912559c146407cf9873996d588d29dba4ea9a9c58375cd81c0883e5" Sep 12 17:56:11.114865 containerd[1819]: time="2025-09-12T17:56:11.114860765Z" level=info msg="TearDown network for sandbox \"4e77419d6912559c146407cf9873996d588d29dba4ea9a9c58375cd81c0883e5\" successfully" Sep 12 17:56:11.116437 containerd[1819]: time="2025-09-12T17:56:11.116393304Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4e77419d6912559c146407cf9873996d588d29dba4ea9a9c58375cd81c0883e5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 12 17:56:11.116437 containerd[1819]: time="2025-09-12T17:56:11.116422611Z" level=info msg="RemovePodSandbox \"4e77419d6912559c146407cf9873996d588d29dba4ea9a9c58375cd81c0883e5\" returns successfully" Sep 12 17:56:11.116726 containerd[1819]: time="2025-09-12T17:56:11.116690286Z" level=info msg="StopPodSandbox for \"73323739d2792e53194858201f12a9f0d4326f8ca00846cb4d5ef25649980a8b\"" Sep 12 17:56:11.151251 containerd[1819]: 2025-09-12 17:56:11.134 [WARNING][6941] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="73323739d2792e53194858201f12a9f0d4326f8ca00846cb4d5ef25649980a8b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--a--7e79e463ed-k8s-csi--node--driver--mxk5v-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a8e8f24e-2173-44ef-a6cc-5168890274e3", ResourceVersion:"1028", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 55, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-a-7e79e463ed", ContainerID:"c6f6e953cc722ca714590ad72dd0b501ea929f2ac0ef7c83e43e52952eb9017b", Pod:"csi-node-driver-mxk5v", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.18.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie49132c06ff", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:56:11.151251 containerd[1819]: 2025-09-12 17:56:11.134 [INFO][6941] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="73323739d2792e53194858201f12a9f0d4326f8ca00846cb4d5ef25649980a8b" Sep 12 17:56:11.151251 containerd[1819]: 2025-09-12 17:56:11.134 [INFO][6941] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="73323739d2792e53194858201f12a9f0d4326f8ca00846cb4d5ef25649980a8b" iface="eth0" netns="" Sep 12 17:56:11.151251 containerd[1819]: 2025-09-12 17:56:11.134 [INFO][6941] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="73323739d2792e53194858201f12a9f0d4326f8ca00846cb4d5ef25649980a8b" Sep 12 17:56:11.151251 containerd[1819]: 2025-09-12 17:56:11.134 [INFO][6941] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="73323739d2792e53194858201f12a9f0d4326f8ca00846cb4d5ef25649980a8b" Sep 12 17:56:11.151251 containerd[1819]: 2025-09-12 17:56:11.145 [INFO][6956] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="73323739d2792e53194858201f12a9f0d4326f8ca00846cb4d5ef25649980a8b" HandleID="k8s-pod-network.73323739d2792e53194858201f12a9f0d4326f8ca00846cb4d5ef25649980a8b" Workload="ci--4081.3.6--a--7e79e463ed-k8s-csi--node--driver--mxk5v-eth0" Sep 12 17:56:11.151251 containerd[1819]: 2025-09-12 17:56:11.145 [INFO][6956] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:56:11.151251 containerd[1819]: 2025-09-12 17:56:11.145 [INFO][6956] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:56:11.151251 containerd[1819]: 2025-09-12 17:56:11.148 [WARNING][6956] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="73323739d2792e53194858201f12a9f0d4326f8ca00846cb4d5ef25649980a8b" HandleID="k8s-pod-network.73323739d2792e53194858201f12a9f0d4326f8ca00846cb4d5ef25649980a8b" Workload="ci--4081.3.6--a--7e79e463ed-k8s-csi--node--driver--mxk5v-eth0" Sep 12 17:56:11.151251 containerd[1819]: 2025-09-12 17:56:11.148 [INFO][6956] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="73323739d2792e53194858201f12a9f0d4326f8ca00846cb4d5ef25649980a8b" HandleID="k8s-pod-network.73323739d2792e53194858201f12a9f0d4326f8ca00846cb4d5ef25649980a8b" Workload="ci--4081.3.6--a--7e79e463ed-k8s-csi--node--driver--mxk5v-eth0" Sep 12 17:56:11.151251 containerd[1819]: 2025-09-12 17:56:11.149 [INFO][6956] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:56:11.151251 containerd[1819]: 2025-09-12 17:56:11.150 [INFO][6941] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="73323739d2792e53194858201f12a9f0d4326f8ca00846cb4d5ef25649980a8b" Sep 12 17:56:11.151577 containerd[1819]: time="2025-09-12T17:56:11.151264216Z" level=info msg="TearDown network for sandbox \"73323739d2792e53194858201f12a9f0d4326f8ca00846cb4d5ef25649980a8b\" successfully" Sep 12 17:56:11.151577 containerd[1819]: time="2025-09-12T17:56:11.151289295Z" level=info msg="StopPodSandbox for \"73323739d2792e53194858201f12a9f0d4326f8ca00846cb4d5ef25649980a8b\" returns successfully" Sep 12 17:56:11.151577 containerd[1819]: time="2025-09-12T17:56:11.151562178Z" level=info msg="RemovePodSandbox for \"73323739d2792e53194858201f12a9f0d4326f8ca00846cb4d5ef25649980a8b\"" Sep 12 17:56:11.151631 containerd[1819]: time="2025-09-12T17:56:11.151582725Z" level=info msg="Forcibly stopping sandbox \"73323739d2792e53194858201f12a9f0d4326f8ca00846cb4d5ef25649980a8b\"" Sep 12 17:56:11.186401 containerd[1819]: 2025-09-12 17:56:11.169 [WARNING][6984] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="73323739d2792e53194858201f12a9f0d4326f8ca00846cb4d5ef25649980a8b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--a--7e79e463ed-k8s-csi--node--driver--mxk5v-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a8e8f24e-2173-44ef-a6cc-5168890274e3", ResourceVersion:"1028", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 55, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-a-7e79e463ed", ContainerID:"c6f6e953cc722ca714590ad72dd0b501ea929f2ac0ef7c83e43e52952eb9017b", Pod:"csi-node-driver-mxk5v", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.18.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie49132c06ff", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:56:11.186401 containerd[1819]: 2025-09-12 17:56:11.169 [INFO][6984] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="73323739d2792e53194858201f12a9f0d4326f8ca00846cb4d5ef25649980a8b" Sep 12 17:56:11.186401 containerd[1819]: 2025-09-12 17:56:11.169 [INFO][6984] cni-plugin/dataplane_linux.go 555: CleanUpNamespace 
called with no netns name, ignoring. ContainerID="73323739d2792e53194858201f12a9f0d4326f8ca00846cb4d5ef25649980a8b" iface="eth0" netns="" Sep 12 17:56:11.186401 containerd[1819]: 2025-09-12 17:56:11.169 [INFO][6984] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="73323739d2792e53194858201f12a9f0d4326f8ca00846cb4d5ef25649980a8b" Sep 12 17:56:11.186401 containerd[1819]: 2025-09-12 17:56:11.169 [INFO][6984] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="73323739d2792e53194858201f12a9f0d4326f8ca00846cb4d5ef25649980a8b" Sep 12 17:56:11.186401 containerd[1819]: 2025-09-12 17:56:11.179 [INFO][6999] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="73323739d2792e53194858201f12a9f0d4326f8ca00846cb4d5ef25649980a8b" HandleID="k8s-pod-network.73323739d2792e53194858201f12a9f0d4326f8ca00846cb4d5ef25649980a8b" Workload="ci--4081.3.6--a--7e79e463ed-k8s-csi--node--driver--mxk5v-eth0" Sep 12 17:56:11.186401 containerd[1819]: 2025-09-12 17:56:11.179 [INFO][6999] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:56:11.186401 containerd[1819]: 2025-09-12 17:56:11.179 [INFO][6999] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:56:11.186401 containerd[1819]: 2025-09-12 17:56:11.183 [WARNING][6999] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="73323739d2792e53194858201f12a9f0d4326f8ca00846cb4d5ef25649980a8b" HandleID="k8s-pod-network.73323739d2792e53194858201f12a9f0d4326f8ca00846cb4d5ef25649980a8b" Workload="ci--4081.3.6--a--7e79e463ed-k8s-csi--node--driver--mxk5v-eth0" Sep 12 17:56:11.186401 containerd[1819]: 2025-09-12 17:56:11.183 [INFO][6999] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="73323739d2792e53194858201f12a9f0d4326f8ca00846cb4d5ef25649980a8b" HandleID="k8s-pod-network.73323739d2792e53194858201f12a9f0d4326f8ca00846cb4d5ef25649980a8b" Workload="ci--4081.3.6--a--7e79e463ed-k8s-csi--node--driver--mxk5v-eth0" Sep 12 17:56:11.186401 containerd[1819]: 2025-09-12 17:56:11.184 [INFO][6999] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:56:11.186401 containerd[1819]: 2025-09-12 17:56:11.185 [INFO][6984] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="73323739d2792e53194858201f12a9f0d4326f8ca00846cb4d5ef25649980a8b" Sep 12 17:56:11.186713 containerd[1819]: time="2025-09-12T17:56:11.186426930Z" level=info msg="TearDown network for sandbox \"73323739d2792e53194858201f12a9f0d4326f8ca00846cb4d5ef25649980a8b\" successfully" Sep 12 17:56:11.187895 containerd[1819]: time="2025-09-12T17:56:11.187855205Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"73323739d2792e53194858201f12a9f0d4326f8ca00846cb4d5ef25649980a8b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 12 17:56:11.187895 containerd[1819]: time="2025-09-12T17:56:11.187884092Z" level=info msg="RemovePodSandbox \"73323739d2792e53194858201f12a9f0d4326f8ca00846cb4d5ef25649980a8b\" returns successfully" Sep 12 17:56:11.188183 containerd[1819]: time="2025-09-12T17:56:11.188143414Z" level=info msg="StopPodSandbox for \"bf431ee9e5c083308abc4502a5513996d6af5612617ff83dd1c658460898efde\"" Sep 12 17:56:11.221532 containerd[1819]: 2025-09-12 17:56:11.204 [WARNING][7025] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="bf431ee9e5c083308abc4502a5513996d6af5612617ff83dd1c658460898efde" WorkloadEndpoint="ci--4081.3.6--a--7e79e463ed-k8s-whisker--5b8df47b84--wskdw-eth0" Sep 12 17:56:11.221532 containerd[1819]: 2025-09-12 17:56:11.204 [INFO][7025] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="bf431ee9e5c083308abc4502a5513996d6af5612617ff83dd1c658460898efde" Sep 12 17:56:11.221532 containerd[1819]: 2025-09-12 17:56:11.204 [INFO][7025] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="bf431ee9e5c083308abc4502a5513996d6af5612617ff83dd1c658460898efde" iface="eth0" netns="" Sep 12 17:56:11.221532 containerd[1819]: 2025-09-12 17:56:11.204 [INFO][7025] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="bf431ee9e5c083308abc4502a5513996d6af5612617ff83dd1c658460898efde" Sep 12 17:56:11.221532 containerd[1819]: 2025-09-12 17:56:11.205 [INFO][7025] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bf431ee9e5c083308abc4502a5513996d6af5612617ff83dd1c658460898efde" Sep 12 17:56:11.221532 containerd[1819]: 2025-09-12 17:56:11.215 [INFO][7041] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="bf431ee9e5c083308abc4502a5513996d6af5612617ff83dd1c658460898efde" HandleID="k8s-pod-network.bf431ee9e5c083308abc4502a5513996d6af5612617ff83dd1c658460898efde" Workload="ci--4081.3.6--a--7e79e463ed-k8s-whisker--5b8df47b84--wskdw-eth0" Sep 12 17:56:11.221532 containerd[1819]: 2025-09-12 17:56:11.215 [INFO][7041] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:56:11.221532 containerd[1819]: 2025-09-12 17:56:11.215 [INFO][7041] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:56:11.221532 containerd[1819]: 2025-09-12 17:56:11.218 [WARNING][7041] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="bf431ee9e5c083308abc4502a5513996d6af5612617ff83dd1c658460898efde" HandleID="k8s-pod-network.bf431ee9e5c083308abc4502a5513996d6af5612617ff83dd1c658460898efde" Workload="ci--4081.3.6--a--7e79e463ed-k8s-whisker--5b8df47b84--wskdw-eth0" Sep 12 17:56:11.221532 containerd[1819]: 2025-09-12 17:56:11.218 [INFO][7041] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="bf431ee9e5c083308abc4502a5513996d6af5612617ff83dd1c658460898efde" HandleID="k8s-pod-network.bf431ee9e5c083308abc4502a5513996d6af5612617ff83dd1c658460898efde" Workload="ci--4081.3.6--a--7e79e463ed-k8s-whisker--5b8df47b84--wskdw-eth0" Sep 12 17:56:11.221532 containerd[1819]: 2025-09-12 17:56:11.220 [INFO][7041] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:56:11.221532 containerd[1819]: 2025-09-12 17:56:11.220 [INFO][7025] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="bf431ee9e5c083308abc4502a5513996d6af5612617ff83dd1c658460898efde" Sep 12 17:56:11.221532 containerd[1819]: time="2025-09-12T17:56:11.221499650Z" level=info msg="TearDown network for sandbox \"bf431ee9e5c083308abc4502a5513996d6af5612617ff83dd1c658460898efde\" successfully" Sep 12 17:56:11.221532 containerd[1819]: time="2025-09-12T17:56:11.221515777Z" level=info msg="StopPodSandbox for \"bf431ee9e5c083308abc4502a5513996d6af5612617ff83dd1c658460898efde\" returns successfully" Sep 12 17:56:11.221860 containerd[1819]: time="2025-09-12T17:56:11.221798262Z" level=info msg="RemovePodSandbox for \"bf431ee9e5c083308abc4502a5513996d6af5612617ff83dd1c658460898efde\"" Sep 12 17:56:11.221860 containerd[1819]: time="2025-09-12T17:56:11.221816167Z" level=info msg="Forcibly stopping sandbox \"bf431ee9e5c083308abc4502a5513996d6af5612617ff83dd1c658460898efde\"" Sep 12 17:56:11.312116 containerd[1819]: 2025-09-12 17:56:11.250 [WARNING][7063] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up 
ContainerID="bf431ee9e5c083308abc4502a5513996d6af5612617ff83dd1c658460898efde" WorkloadEndpoint="ci--4081.3.6--a--7e79e463ed-k8s-whisker--5b8df47b84--wskdw-eth0" Sep 12 17:56:11.312116 containerd[1819]: 2025-09-12 17:56:11.250 [INFO][7063] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="bf431ee9e5c083308abc4502a5513996d6af5612617ff83dd1c658460898efde" Sep 12 17:56:11.312116 containerd[1819]: 2025-09-12 17:56:11.251 [INFO][7063] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="bf431ee9e5c083308abc4502a5513996d6af5612617ff83dd1c658460898efde" iface="eth0" netns="" Sep 12 17:56:11.312116 containerd[1819]: 2025-09-12 17:56:11.251 [INFO][7063] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="bf431ee9e5c083308abc4502a5513996d6af5612617ff83dd1c658460898efde" Sep 12 17:56:11.312116 containerd[1819]: 2025-09-12 17:56:11.251 [INFO][7063] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bf431ee9e5c083308abc4502a5513996d6af5612617ff83dd1c658460898efde" Sep 12 17:56:11.312116 containerd[1819]: 2025-09-12 17:56:11.298 [INFO][7080] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="bf431ee9e5c083308abc4502a5513996d6af5612617ff83dd1c658460898efde" HandleID="k8s-pod-network.bf431ee9e5c083308abc4502a5513996d6af5612617ff83dd1c658460898efde" Workload="ci--4081.3.6--a--7e79e463ed-k8s-whisker--5b8df47b84--wskdw-eth0" Sep 12 17:56:11.312116 containerd[1819]: 2025-09-12 17:56:11.298 [INFO][7080] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:56:11.312116 containerd[1819]: 2025-09-12 17:56:11.298 [INFO][7080] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:56:11.312116 containerd[1819]: 2025-09-12 17:56:11.306 [WARNING][7080] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="bf431ee9e5c083308abc4502a5513996d6af5612617ff83dd1c658460898efde" HandleID="k8s-pod-network.bf431ee9e5c083308abc4502a5513996d6af5612617ff83dd1c658460898efde" Workload="ci--4081.3.6--a--7e79e463ed-k8s-whisker--5b8df47b84--wskdw-eth0" Sep 12 17:56:11.312116 containerd[1819]: 2025-09-12 17:56:11.306 [INFO][7080] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="bf431ee9e5c083308abc4502a5513996d6af5612617ff83dd1c658460898efde" HandleID="k8s-pod-network.bf431ee9e5c083308abc4502a5513996d6af5612617ff83dd1c658460898efde" Workload="ci--4081.3.6--a--7e79e463ed-k8s-whisker--5b8df47b84--wskdw-eth0" Sep 12 17:56:11.312116 containerd[1819]: 2025-09-12 17:56:11.308 [INFO][7080] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:56:11.312116 containerd[1819]: 2025-09-12 17:56:11.310 [INFO][7063] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="bf431ee9e5c083308abc4502a5513996d6af5612617ff83dd1c658460898efde" Sep 12 17:56:11.312802 containerd[1819]: time="2025-09-12T17:56:11.312135218Z" level=info msg="TearDown network for sandbox \"bf431ee9e5c083308abc4502a5513996d6af5612617ff83dd1c658460898efde\" successfully" Sep 12 17:56:11.314931 containerd[1819]: time="2025-09-12T17:56:11.314916283Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"bf431ee9e5c083308abc4502a5513996d6af5612617ff83dd1c658460898efde\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 12 17:56:11.314969 containerd[1819]: time="2025-09-12T17:56:11.314949668Z" level=info msg="RemovePodSandbox \"bf431ee9e5c083308abc4502a5513996d6af5612617ff83dd1c658460898efde\" returns successfully" Sep 12 17:56:11.315212 containerd[1819]: time="2025-09-12T17:56:11.315202435Z" level=info msg="StopPodSandbox for \"19da77914d12ba26342852c78603efced5e265b6cbdaa5cd0c8a202dc855357b\"" Sep 12 17:56:11.346445 containerd[1819]: 2025-09-12 17:56:11.330 [WARNING][7106] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="19da77914d12ba26342852c78603efced5e265b6cbdaa5cd0c8a202dc855357b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--a--7e79e463ed-k8s-calico--kube--controllers--7cf7c9b989--n5gtx-eth0", GenerateName:"calico-kube-controllers-7cf7c9b989-", Namespace:"calico-system", SelfLink:"", UID:"36fe7463-201c-480a-92fa-70e3b0e14442", ResourceVersion:"983", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 55, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7cf7c9b989", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-a-7e79e463ed", ContainerID:"c2b9f963a2fd0f80e6b4edb4eb76bf07bd2dbfa7281cfc5fb18e1bb0a4e6bd01", Pod:"calico-kube-controllers-7cf7c9b989-n5gtx", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.18.68/32"}, IPNATs:[]v3.IPNAT(nil), 
IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali48cd5100b63", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:56:11.346445 containerd[1819]: 2025-09-12 17:56:11.331 [INFO][7106] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="19da77914d12ba26342852c78603efced5e265b6cbdaa5cd0c8a202dc855357b" Sep 12 17:56:11.346445 containerd[1819]: 2025-09-12 17:56:11.331 [INFO][7106] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="19da77914d12ba26342852c78603efced5e265b6cbdaa5cd0c8a202dc855357b" iface="eth0" netns="" Sep 12 17:56:11.346445 containerd[1819]: 2025-09-12 17:56:11.331 [INFO][7106] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="19da77914d12ba26342852c78603efced5e265b6cbdaa5cd0c8a202dc855357b" Sep 12 17:56:11.346445 containerd[1819]: 2025-09-12 17:56:11.331 [INFO][7106] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="19da77914d12ba26342852c78603efced5e265b6cbdaa5cd0c8a202dc855357b" Sep 12 17:56:11.346445 containerd[1819]: 2025-09-12 17:56:11.341 [INFO][7122] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="19da77914d12ba26342852c78603efced5e265b6cbdaa5cd0c8a202dc855357b" HandleID="k8s-pod-network.19da77914d12ba26342852c78603efced5e265b6cbdaa5cd0c8a202dc855357b" Workload="ci--4081.3.6--a--7e79e463ed-k8s-calico--kube--controllers--7cf7c9b989--n5gtx-eth0" Sep 12 17:56:11.346445 containerd[1819]: 2025-09-12 17:56:11.341 [INFO][7122] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:56:11.346445 containerd[1819]: 2025-09-12 17:56:11.341 [INFO][7122] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:56:11.346445 containerd[1819]: 2025-09-12 17:56:11.344 [WARNING][7122] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="19da77914d12ba26342852c78603efced5e265b6cbdaa5cd0c8a202dc855357b" HandleID="k8s-pod-network.19da77914d12ba26342852c78603efced5e265b6cbdaa5cd0c8a202dc855357b" Workload="ci--4081.3.6--a--7e79e463ed-k8s-calico--kube--controllers--7cf7c9b989--n5gtx-eth0" Sep 12 17:56:11.346445 containerd[1819]: 2025-09-12 17:56:11.344 [INFO][7122] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="19da77914d12ba26342852c78603efced5e265b6cbdaa5cd0c8a202dc855357b" HandleID="k8s-pod-network.19da77914d12ba26342852c78603efced5e265b6cbdaa5cd0c8a202dc855357b" Workload="ci--4081.3.6--a--7e79e463ed-k8s-calico--kube--controllers--7cf7c9b989--n5gtx-eth0" Sep 12 17:56:11.346445 containerd[1819]: 2025-09-12 17:56:11.345 [INFO][7122] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:56:11.346445 containerd[1819]: 2025-09-12 17:56:11.345 [INFO][7106] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="19da77914d12ba26342852c78603efced5e265b6cbdaa5cd0c8a202dc855357b" Sep 12 17:56:11.346751 containerd[1819]: time="2025-09-12T17:56:11.346469374Z" level=info msg="TearDown network for sandbox \"19da77914d12ba26342852c78603efced5e265b6cbdaa5cd0c8a202dc855357b\" successfully" Sep 12 17:56:11.346751 containerd[1819]: time="2025-09-12T17:56:11.346485833Z" level=info msg="StopPodSandbox for \"19da77914d12ba26342852c78603efced5e265b6cbdaa5cd0c8a202dc855357b\" returns successfully" Sep 12 17:56:11.346785 containerd[1819]: time="2025-09-12T17:56:11.346765763Z" level=info msg="RemovePodSandbox for \"19da77914d12ba26342852c78603efced5e265b6cbdaa5cd0c8a202dc855357b\"" Sep 12 17:56:11.346805 containerd[1819]: time="2025-09-12T17:56:11.346785516Z" level=info msg="Forcibly stopping sandbox \"19da77914d12ba26342852c78603efced5e265b6cbdaa5cd0c8a202dc855357b\"" Sep 12 17:56:11.378973 containerd[1819]: 2025-09-12 17:56:11.362 [WARNING][7148] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="19da77914d12ba26342852c78603efced5e265b6cbdaa5cd0c8a202dc855357b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--a--7e79e463ed-k8s-calico--kube--controllers--7cf7c9b989--n5gtx-eth0", GenerateName:"calico-kube-controllers-7cf7c9b989-", Namespace:"calico-system", SelfLink:"", UID:"36fe7463-201c-480a-92fa-70e3b0e14442", ResourceVersion:"983", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 55, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7cf7c9b989", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-a-7e79e463ed", ContainerID:"c2b9f963a2fd0f80e6b4edb4eb76bf07bd2dbfa7281cfc5fb18e1bb0a4e6bd01", Pod:"calico-kube-controllers-7cf7c9b989-n5gtx", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.18.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali48cd5100b63", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:56:11.378973 containerd[1819]: 2025-09-12 17:56:11.362 [INFO][7148] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="19da77914d12ba26342852c78603efced5e265b6cbdaa5cd0c8a202dc855357b" Sep 12 17:56:11.378973 containerd[1819]: 2025-09-12 17:56:11.362 [INFO][7148] 
cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="19da77914d12ba26342852c78603efced5e265b6cbdaa5cd0c8a202dc855357b" iface="eth0" netns="" Sep 12 17:56:11.378973 containerd[1819]: 2025-09-12 17:56:11.362 [INFO][7148] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="19da77914d12ba26342852c78603efced5e265b6cbdaa5cd0c8a202dc855357b" Sep 12 17:56:11.378973 containerd[1819]: 2025-09-12 17:56:11.362 [INFO][7148] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="19da77914d12ba26342852c78603efced5e265b6cbdaa5cd0c8a202dc855357b" Sep 12 17:56:11.378973 containerd[1819]: 2025-09-12 17:56:11.373 [INFO][7163] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="19da77914d12ba26342852c78603efced5e265b6cbdaa5cd0c8a202dc855357b" HandleID="k8s-pod-network.19da77914d12ba26342852c78603efced5e265b6cbdaa5cd0c8a202dc855357b" Workload="ci--4081.3.6--a--7e79e463ed-k8s-calico--kube--controllers--7cf7c9b989--n5gtx-eth0" Sep 12 17:56:11.378973 containerd[1819]: 2025-09-12 17:56:11.373 [INFO][7163] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:56:11.378973 containerd[1819]: 2025-09-12 17:56:11.373 [INFO][7163] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:56:11.378973 containerd[1819]: 2025-09-12 17:56:11.376 [WARNING][7163] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="19da77914d12ba26342852c78603efced5e265b6cbdaa5cd0c8a202dc855357b" HandleID="k8s-pod-network.19da77914d12ba26342852c78603efced5e265b6cbdaa5cd0c8a202dc855357b" Workload="ci--4081.3.6--a--7e79e463ed-k8s-calico--kube--controllers--7cf7c9b989--n5gtx-eth0" Sep 12 17:56:11.378973 containerd[1819]: 2025-09-12 17:56:11.376 [INFO][7163] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="19da77914d12ba26342852c78603efced5e265b6cbdaa5cd0c8a202dc855357b" HandleID="k8s-pod-network.19da77914d12ba26342852c78603efced5e265b6cbdaa5cd0c8a202dc855357b" Workload="ci--4081.3.6--a--7e79e463ed-k8s-calico--kube--controllers--7cf7c9b989--n5gtx-eth0" Sep 12 17:56:11.378973 containerd[1819]: 2025-09-12 17:56:11.377 [INFO][7163] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:56:11.378973 containerd[1819]: 2025-09-12 17:56:11.378 [INFO][7148] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="19da77914d12ba26342852c78603efced5e265b6cbdaa5cd0c8a202dc855357b" Sep 12 17:56:11.379269 containerd[1819]: time="2025-09-12T17:56:11.378998188Z" level=info msg="TearDown network for sandbox \"19da77914d12ba26342852c78603efced5e265b6cbdaa5cd0c8a202dc855357b\" successfully" Sep 12 17:56:11.380543 containerd[1819]: time="2025-09-12T17:56:11.380506233Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"19da77914d12ba26342852c78603efced5e265b6cbdaa5cd0c8a202dc855357b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 12 17:56:11.380543 containerd[1819]: time="2025-09-12T17:56:11.380536015Z" level=info msg="RemovePodSandbox \"19da77914d12ba26342852c78603efced5e265b6cbdaa5cd0c8a202dc855357b\" returns successfully" Sep 12 17:56:11.380843 containerd[1819]: time="2025-09-12T17:56:11.380815205Z" level=info msg="StopPodSandbox for \"0e00b1316e65a0ab2121a1fedfa7d34db24df1a488ef755812289b35c2a1b23f\"" Sep 12 17:56:11.414021 containerd[1819]: 2025-09-12 17:56:11.397 [WARNING][7189] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="0e00b1316e65a0ab2121a1fedfa7d34db24df1a488ef755812289b35c2a1b23f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--a--7e79e463ed-k8s-coredns--7c65d6cfc9--kbj9g-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"054fb7c1-c456-47f2-811b-49f3435a8e35", ResourceVersion:"968", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 55, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-a-7e79e463ed", ContainerID:"3f9d7bc7829f55e336fd94894942198b2f903b432250bedead833f7820e3510f", Pod:"coredns-7c65d6cfc9-kbj9g", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.18.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calied429ea2600", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:56:11.414021 containerd[1819]: 2025-09-12 17:56:11.397 [INFO][7189] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0e00b1316e65a0ab2121a1fedfa7d34db24df1a488ef755812289b35c2a1b23f" Sep 12 17:56:11.414021 containerd[1819]: 2025-09-12 17:56:11.397 [INFO][7189] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0e00b1316e65a0ab2121a1fedfa7d34db24df1a488ef755812289b35c2a1b23f" iface="eth0" netns="" Sep 12 17:56:11.414021 containerd[1819]: 2025-09-12 17:56:11.397 [INFO][7189] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0e00b1316e65a0ab2121a1fedfa7d34db24df1a488ef755812289b35c2a1b23f" Sep 12 17:56:11.414021 containerd[1819]: 2025-09-12 17:56:11.397 [INFO][7189] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0e00b1316e65a0ab2121a1fedfa7d34db24df1a488ef755812289b35c2a1b23f" Sep 12 17:56:11.414021 containerd[1819]: 2025-09-12 17:56:11.408 [INFO][7204] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0e00b1316e65a0ab2121a1fedfa7d34db24df1a488ef755812289b35c2a1b23f" HandleID="k8s-pod-network.0e00b1316e65a0ab2121a1fedfa7d34db24df1a488ef755812289b35c2a1b23f" Workload="ci--4081.3.6--a--7e79e463ed-k8s-coredns--7c65d6cfc9--kbj9g-eth0" Sep 12 17:56:11.414021 containerd[1819]: 2025-09-12 17:56:11.408 [INFO][7204] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Sep 12 17:56:11.414021 containerd[1819]: 2025-09-12 17:56:11.408 [INFO][7204] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:56:11.414021 containerd[1819]: 2025-09-12 17:56:11.411 [WARNING][7204] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="0e00b1316e65a0ab2121a1fedfa7d34db24df1a488ef755812289b35c2a1b23f" HandleID="k8s-pod-network.0e00b1316e65a0ab2121a1fedfa7d34db24df1a488ef755812289b35c2a1b23f" Workload="ci--4081.3.6--a--7e79e463ed-k8s-coredns--7c65d6cfc9--kbj9g-eth0" Sep 12 17:56:11.414021 containerd[1819]: 2025-09-12 17:56:11.411 [INFO][7204] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0e00b1316e65a0ab2121a1fedfa7d34db24df1a488ef755812289b35c2a1b23f" HandleID="k8s-pod-network.0e00b1316e65a0ab2121a1fedfa7d34db24df1a488ef755812289b35c2a1b23f" Workload="ci--4081.3.6--a--7e79e463ed-k8s-coredns--7c65d6cfc9--kbj9g-eth0" Sep 12 17:56:11.414021 containerd[1819]: 2025-09-12 17:56:11.412 [INFO][7204] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:56:11.414021 containerd[1819]: 2025-09-12 17:56:11.413 [INFO][7189] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="0e00b1316e65a0ab2121a1fedfa7d34db24df1a488ef755812289b35c2a1b23f" Sep 12 17:56:11.414409 containerd[1819]: time="2025-09-12T17:56:11.414048027Z" level=info msg="TearDown network for sandbox \"0e00b1316e65a0ab2121a1fedfa7d34db24df1a488ef755812289b35c2a1b23f\" successfully" Sep 12 17:56:11.414409 containerd[1819]: time="2025-09-12T17:56:11.414069265Z" level=info msg="StopPodSandbox for \"0e00b1316e65a0ab2121a1fedfa7d34db24df1a488ef755812289b35c2a1b23f\" returns successfully" Sep 12 17:56:11.414409 containerd[1819]: time="2025-09-12T17:56:11.414373940Z" level=info msg="RemovePodSandbox for \"0e00b1316e65a0ab2121a1fedfa7d34db24df1a488ef755812289b35c2a1b23f\"" Sep 12 17:56:11.414409 containerd[1819]: time="2025-09-12T17:56:11.414391423Z" level=info msg="Forcibly stopping sandbox \"0e00b1316e65a0ab2121a1fedfa7d34db24df1a488ef755812289b35c2a1b23f\"" Sep 12 17:56:11.447262 containerd[1819]: 2025-09-12 17:56:11.431 [WARNING][7231] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0e00b1316e65a0ab2121a1fedfa7d34db24df1a488ef755812289b35c2a1b23f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--a--7e79e463ed-k8s-coredns--7c65d6cfc9--kbj9g-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"054fb7c1-c456-47f2-811b-49f3435a8e35", ResourceVersion:"968", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 55, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-a-7e79e463ed", ContainerID:"3f9d7bc7829f55e336fd94894942198b2f903b432250bedead833f7820e3510f", Pod:"coredns-7c65d6cfc9-kbj9g", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.18.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calied429ea2600", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:56:11.447262 containerd[1819]: 
2025-09-12 17:56:11.431 [INFO][7231] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0e00b1316e65a0ab2121a1fedfa7d34db24df1a488ef755812289b35c2a1b23f" Sep 12 17:56:11.447262 containerd[1819]: 2025-09-12 17:56:11.431 [INFO][7231] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0e00b1316e65a0ab2121a1fedfa7d34db24df1a488ef755812289b35c2a1b23f" iface="eth0" netns="" Sep 12 17:56:11.447262 containerd[1819]: 2025-09-12 17:56:11.431 [INFO][7231] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0e00b1316e65a0ab2121a1fedfa7d34db24df1a488ef755812289b35c2a1b23f" Sep 12 17:56:11.447262 containerd[1819]: 2025-09-12 17:56:11.431 [INFO][7231] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0e00b1316e65a0ab2121a1fedfa7d34db24df1a488ef755812289b35c2a1b23f" Sep 12 17:56:11.447262 containerd[1819]: 2025-09-12 17:56:11.441 [INFO][7250] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0e00b1316e65a0ab2121a1fedfa7d34db24df1a488ef755812289b35c2a1b23f" HandleID="k8s-pod-network.0e00b1316e65a0ab2121a1fedfa7d34db24df1a488ef755812289b35c2a1b23f" Workload="ci--4081.3.6--a--7e79e463ed-k8s-coredns--7c65d6cfc9--kbj9g-eth0" Sep 12 17:56:11.447262 containerd[1819]: 2025-09-12 17:56:11.441 [INFO][7250] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:56:11.447262 containerd[1819]: 2025-09-12 17:56:11.441 [INFO][7250] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:56:11.447262 containerd[1819]: 2025-09-12 17:56:11.444 [WARNING][7250] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0e00b1316e65a0ab2121a1fedfa7d34db24df1a488ef755812289b35c2a1b23f" HandleID="k8s-pod-network.0e00b1316e65a0ab2121a1fedfa7d34db24df1a488ef755812289b35c2a1b23f" Workload="ci--4081.3.6--a--7e79e463ed-k8s-coredns--7c65d6cfc9--kbj9g-eth0" Sep 12 17:56:11.447262 containerd[1819]: 2025-09-12 17:56:11.444 [INFO][7250] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0e00b1316e65a0ab2121a1fedfa7d34db24df1a488ef755812289b35c2a1b23f" HandleID="k8s-pod-network.0e00b1316e65a0ab2121a1fedfa7d34db24df1a488ef755812289b35c2a1b23f" Workload="ci--4081.3.6--a--7e79e463ed-k8s-coredns--7c65d6cfc9--kbj9g-eth0" Sep 12 17:56:11.447262 containerd[1819]: 2025-09-12 17:56:11.445 [INFO][7250] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:56:11.447262 containerd[1819]: 2025-09-12 17:56:11.446 [INFO][7231] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0e00b1316e65a0ab2121a1fedfa7d34db24df1a488ef755812289b35c2a1b23f" Sep 12 17:56:11.447590 containerd[1819]: time="2025-09-12T17:56:11.447288141Z" level=info msg="TearDown network for sandbox \"0e00b1316e65a0ab2121a1fedfa7d34db24df1a488ef755812289b35c2a1b23f\" successfully" Sep 12 17:56:11.448706 containerd[1819]: time="2025-09-12T17:56:11.448658236Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0e00b1316e65a0ab2121a1fedfa7d34db24df1a488ef755812289b35c2a1b23f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 12 17:56:11.448706 containerd[1819]: time="2025-09-12T17:56:11.448686990Z" level=info msg="RemovePodSandbox \"0e00b1316e65a0ab2121a1fedfa7d34db24df1a488ef755812289b35c2a1b23f\" returns successfully" Sep 12 17:56:11.448939 containerd[1819]: time="2025-09-12T17:56:11.448927210Z" level=info msg="StopPodSandbox for \"dfb574919c28ab1efb54d8ea13068134d7292c57082e3e447322ec3a0606f40c\"" Sep 12 17:56:11.481056 containerd[1819]: 2025-09-12 17:56:11.464 [WARNING][7275] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="dfb574919c28ab1efb54d8ea13068134d7292c57082e3e447322ec3a0606f40c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--a--7e79e463ed-k8s-goldmane--7988f88666--87fsc-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"f0e34300-52fa-4b2c-a580-7e7738d631f0", ResourceVersion:"1033", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 55, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-a-7e79e463ed", ContainerID:"ddf375cfe54640b46095ae00ee22c70a14e451b58c71a203c25f34b14fbcd7ca", Pod:"goldmane-7988f88666-87fsc", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.18.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, 
InterfaceName:"cali97390b45778", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:56:11.481056 containerd[1819]: 2025-09-12 17:56:11.465 [INFO][7275] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="dfb574919c28ab1efb54d8ea13068134d7292c57082e3e447322ec3a0606f40c" Sep 12 17:56:11.481056 containerd[1819]: 2025-09-12 17:56:11.465 [INFO][7275] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="dfb574919c28ab1efb54d8ea13068134d7292c57082e3e447322ec3a0606f40c" iface="eth0" netns="" Sep 12 17:56:11.481056 containerd[1819]: 2025-09-12 17:56:11.465 [INFO][7275] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="dfb574919c28ab1efb54d8ea13068134d7292c57082e3e447322ec3a0606f40c" Sep 12 17:56:11.481056 containerd[1819]: 2025-09-12 17:56:11.465 [INFO][7275] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="dfb574919c28ab1efb54d8ea13068134d7292c57082e3e447322ec3a0606f40c" Sep 12 17:56:11.481056 containerd[1819]: 2025-09-12 17:56:11.475 [INFO][7294] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="dfb574919c28ab1efb54d8ea13068134d7292c57082e3e447322ec3a0606f40c" HandleID="k8s-pod-network.dfb574919c28ab1efb54d8ea13068134d7292c57082e3e447322ec3a0606f40c" Workload="ci--4081.3.6--a--7e79e463ed-k8s-goldmane--7988f88666--87fsc-eth0" Sep 12 17:56:11.481056 containerd[1819]: 2025-09-12 17:56:11.475 [INFO][7294] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:56:11.481056 containerd[1819]: 2025-09-12 17:56:11.475 [INFO][7294] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:56:11.481056 containerd[1819]: 2025-09-12 17:56:11.478 [WARNING][7294] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="dfb574919c28ab1efb54d8ea13068134d7292c57082e3e447322ec3a0606f40c" HandleID="k8s-pod-network.dfb574919c28ab1efb54d8ea13068134d7292c57082e3e447322ec3a0606f40c" Workload="ci--4081.3.6--a--7e79e463ed-k8s-goldmane--7988f88666--87fsc-eth0" Sep 12 17:56:11.481056 containerd[1819]: 2025-09-12 17:56:11.478 [INFO][7294] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="dfb574919c28ab1efb54d8ea13068134d7292c57082e3e447322ec3a0606f40c" HandleID="k8s-pod-network.dfb574919c28ab1efb54d8ea13068134d7292c57082e3e447322ec3a0606f40c" Workload="ci--4081.3.6--a--7e79e463ed-k8s-goldmane--7988f88666--87fsc-eth0" Sep 12 17:56:11.481056 containerd[1819]: 2025-09-12 17:56:11.479 [INFO][7294] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:56:11.481056 containerd[1819]: 2025-09-12 17:56:11.480 [INFO][7275] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="dfb574919c28ab1efb54d8ea13068134d7292c57082e3e447322ec3a0606f40c" Sep 12 17:56:11.481056 containerd[1819]: time="2025-09-12T17:56:11.481042682Z" level=info msg="TearDown network for sandbox \"dfb574919c28ab1efb54d8ea13068134d7292c57082e3e447322ec3a0606f40c\" successfully" Sep 12 17:56:11.481435 containerd[1819]: time="2025-09-12T17:56:11.481062763Z" level=info msg="StopPodSandbox for \"dfb574919c28ab1efb54d8ea13068134d7292c57082e3e447322ec3a0606f40c\" returns successfully" Sep 12 17:56:11.481435 containerd[1819]: time="2025-09-12T17:56:11.481370420Z" level=info msg="RemovePodSandbox for \"dfb574919c28ab1efb54d8ea13068134d7292c57082e3e447322ec3a0606f40c\"" Sep 12 17:56:11.481435 containerd[1819]: time="2025-09-12T17:56:11.481390504Z" level=info msg="Forcibly stopping sandbox \"dfb574919c28ab1efb54d8ea13068134d7292c57082e3e447322ec3a0606f40c\"" Sep 12 17:56:11.512238 containerd[1819]: 2025-09-12 17:56:11.496 [WARNING][7316] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="dfb574919c28ab1efb54d8ea13068134d7292c57082e3e447322ec3a0606f40c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--a--7e79e463ed-k8s-goldmane--7988f88666--87fsc-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"f0e34300-52fa-4b2c-a580-7e7738d631f0", ResourceVersion:"1033", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 55, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-a-7e79e463ed", ContainerID:"ddf375cfe54640b46095ae00ee22c70a14e451b58c71a203c25f34b14fbcd7ca", Pod:"goldmane-7988f88666-87fsc", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.18.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali97390b45778", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:56:11.512238 containerd[1819]: 2025-09-12 17:56:11.497 [INFO][7316] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="dfb574919c28ab1efb54d8ea13068134d7292c57082e3e447322ec3a0606f40c" Sep 12 17:56:11.512238 containerd[1819]: 2025-09-12 17:56:11.497 [INFO][7316] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="dfb574919c28ab1efb54d8ea13068134d7292c57082e3e447322ec3a0606f40c" iface="eth0" netns="" Sep 12 17:56:11.512238 containerd[1819]: 2025-09-12 17:56:11.497 [INFO][7316] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="dfb574919c28ab1efb54d8ea13068134d7292c57082e3e447322ec3a0606f40c" Sep 12 17:56:11.512238 containerd[1819]: 2025-09-12 17:56:11.497 [INFO][7316] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="dfb574919c28ab1efb54d8ea13068134d7292c57082e3e447322ec3a0606f40c" Sep 12 17:56:11.512238 containerd[1819]: 2025-09-12 17:56:11.506 [INFO][7331] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="dfb574919c28ab1efb54d8ea13068134d7292c57082e3e447322ec3a0606f40c" HandleID="k8s-pod-network.dfb574919c28ab1efb54d8ea13068134d7292c57082e3e447322ec3a0606f40c" Workload="ci--4081.3.6--a--7e79e463ed-k8s-goldmane--7988f88666--87fsc-eth0" Sep 12 17:56:11.512238 containerd[1819]: 2025-09-12 17:56:11.506 [INFO][7331] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:56:11.512238 containerd[1819]: 2025-09-12 17:56:11.506 [INFO][7331] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:56:11.512238 containerd[1819]: 2025-09-12 17:56:11.510 [WARNING][7331] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="dfb574919c28ab1efb54d8ea13068134d7292c57082e3e447322ec3a0606f40c" HandleID="k8s-pod-network.dfb574919c28ab1efb54d8ea13068134d7292c57082e3e447322ec3a0606f40c" Workload="ci--4081.3.6--a--7e79e463ed-k8s-goldmane--7988f88666--87fsc-eth0" Sep 12 17:56:11.512238 containerd[1819]: 2025-09-12 17:56:11.510 [INFO][7331] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="dfb574919c28ab1efb54d8ea13068134d7292c57082e3e447322ec3a0606f40c" HandleID="k8s-pod-network.dfb574919c28ab1efb54d8ea13068134d7292c57082e3e447322ec3a0606f40c" Workload="ci--4081.3.6--a--7e79e463ed-k8s-goldmane--7988f88666--87fsc-eth0" Sep 12 17:56:11.512238 containerd[1819]: 2025-09-12 17:56:11.510 [INFO][7331] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:56:11.512238 containerd[1819]: 2025-09-12 17:56:11.511 [INFO][7316] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="dfb574919c28ab1efb54d8ea13068134d7292c57082e3e447322ec3a0606f40c" Sep 12 17:56:11.512684 containerd[1819]: time="2025-09-12T17:56:11.512268186Z" level=info msg="TearDown network for sandbox \"dfb574919c28ab1efb54d8ea13068134d7292c57082e3e447322ec3a0606f40c\" successfully" Sep 12 17:56:11.513912 containerd[1819]: time="2025-09-12T17:56:11.513898414Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"dfb574919c28ab1efb54d8ea13068134d7292c57082e3e447322ec3a0606f40c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 12 17:56:11.513959 containerd[1819]: time="2025-09-12T17:56:11.513936128Z" level=info msg="RemovePodSandbox \"dfb574919c28ab1efb54d8ea13068134d7292c57082e3e447322ec3a0606f40c\" returns successfully" Sep 12 17:56:11.514241 containerd[1819]: time="2025-09-12T17:56:11.514227886Z" level=info msg="StopPodSandbox for \"d8b205dcbf77e1a4c353db85981bb8086e08cd6f925e60d2e60b158a00395104\"" Sep 12 17:56:11.545215 containerd[1819]: 2025-09-12 17:56:11.530 [WARNING][7354] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="d8b205dcbf77e1a4c353db85981bb8086e08cd6f925e60d2e60b158a00395104" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--a--7e79e463ed-k8s-calico--apiserver--79f6dbf598--7fz7j-eth0", GenerateName:"calico-apiserver-79f6dbf598-", Namespace:"calico-apiserver", SelfLink:"", UID:"a444f57a-b1d6-4798-858d-e3a3c511da85", ResourceVersion:"965", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 55, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"79f6dbf598", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-a-7e79e463ed", ContainerID:"1ff0307bc64a59cf0c646ce05a9723db8596739d1b9bc99c9cd977876f5026aa", Pod:"calico-apiserver-79f6dbf598-7fz7j", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.18.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calidb9e57173dd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:56:11.545215 containerd[1819]: 2025-09-12 17:56:11.530 [INFO][7354] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d8b205dcbf77e1a4c353db85981bb8086e08cd6f925e60d2e60b158a00395104" Sep 12 17:56:11.545215 containerd[1819]: 2025-09-12 17:56:11.530 [INFO][7354] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d8b205dcbf77e1a4c353db85981bb8086e08cd6f925e60d2e60b158a00395104" iface="eth0" netns="" Sep 12 17:56:11.545215 containerd[1819]: 2025-09-12 17:56:11.530 [INFO][7354] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d8b205dcbf77e1a4c353db85981bb8086e08cd6f925e60d2e60b158a00395104" Sep 12 17:56:11.545215 containerd[1819]: 2025-09-12 17:56:11.530 [INFO][7354] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d8b205dcbf77e1a4c353db85981bb8086e08cd6f925e60d2e60b158a00395104" Sep 12 17:56:11.545215 containerd[1819]: 2025-09-12 17:56:11.539 [INFO][7371] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d8b205dcbf77e1a4c353db85981bb8086e08cd6f925e60d2e60b158a00395104" HandleID="k8s-pod-network.d8b205dcbf77e1a4c353db85981bb8086e08cd6f925e60d2e60b158a00395104" Workload="ci--4081.3.6--a--7e79e463ed-k8s-calico--apiserver--79f6dbf598--7fz7j-eth0" Sep 12 17:56:11.545215 containerd[1819]: 2025-09-12 17:56:11.540 [INFO][7371] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:56:11.545215 containerd[1819]: 2025-09-12 17:56:11.540 [INFO][7371] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:56:11.545215 containerd[1819]: 2025-09-12 17:56:11.543 [WARNING][7371] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d8b205dcbf77e1a4c353db85981bb8086e08cd6f925e60d2e60b158a00395104" HandleID="k8s-pod-network.d8b205dcbf77e1a4c353db85981bb8086e08cd6f925e60d2e60b158a00395104" Workload="ci--4081.3.6--a--7e79e463ed-k8s-calico--apiserver--79f6dbf598--7fz7j-eth0" Sep 12 17:56:11.545215 containerd[1819]: 2025-09-12 17:56:11.543 [INFO][7371] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d8b205dcbf77e1a4c353db85981bb8086e08cd6f925e60d2e60b158a00395104" HandleID="k8s-pod-network.d8b205dcbf77e1a4c353db85981bb8086e08cd6f925e60d2e60b158a00395104" Workload="ci--4081.3.6--a--7e79e463ed-k8s-calico--apiserver--79f6dbf598--7fz7j-eth0" Sep 12 17:56:11.545215 containerd[1819]: 2025-09-12 17:56:11.543 [INFO][7371] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:56:11.545215 containerd[1819]: 2025-09-12 17:56:11.544 [INFO][7354] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d8b205dcbf77e1a4c353db85981bb8086e08cd6f925e60d2e60b158a00395104" Sep 12 17:56:11.545528 containerd[1819]: time="2025-09-12T17:56:11.545240594Z" level=info msg="TearDown network for sandbox \"d8b205dcbf77e1a4c353db85981bb8086e08cd6f925e60d2e60b158a00395104\" successfully" Sep 12 17:56:11.545528 containerd[1819]: time="2025-09-12T17:56:11.545258512Z" level=info msg="StopPodSandbox for \"d8b205dcbf77e1a4c353db85981bb8086e08cd6f925e60d2e60b158a00395104\" returns successfully" Sep 12 17:56:11.545568 containerd[1819]: time="2025-09-12T17:56:11.545552041Z" level=info msg="RemovePodSandbox for \"d8b205dcbf77e1a4c353db85981bb8086e08cd6f925e60d2e60b158a00395104\"" Sep 12 17:56:11.545588 containerd[1819]: time="2025-09-12T17:56:11.545569385Z" level=info msg="Forcibly stopping sandbox \"d8b205dcbf77e1a4c353db85981bb8086e08cd6f925e60d2e60b158a00395104\"" Sep 12 17:56:11.577690 containerd[1819]: 2025-09-12 17:56:11.561 [WARNING][7393] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d8b205dcbf77e1a4c353db85981bb8086e08cd6f925e60d2e60b158a00395104" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--a--7e79e463ed-k8s-calico--apiserver--79f6dbf598--7fz7j-eth0", GenerateName:"calico-apiserver-79f6dbf598-", Namespace:"calico-apiserver", SelfLink:"", UID:"a444f57a-b1d6-4798-858d-e3a3c511da85", ResourceVersion:"965", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 55, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"79f6dbf598", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-a-7e79e463ed", ContainerID:"1ff0307bc64a59cf0c646ce05a9723db8596739d1b9bc99c9cd977876f5026aa", Pod:"calico-apiserver-79f6dbf598-7fz7j", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.18.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calidb9e57173dd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:56:11.577690 containerd[1819]: 2025-09-12 17:56:11.561 [INFO][7393] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d8b205dcbf77e1a4c353db85981bb8086e08cd6f925e60d2e60b158a00395104" Sep 12 17:56:11.577690 containerd[1819]: 2025-09-12 17:56:11.561 [INFO][7393] cni-plugin/dataplane_linux.go 555: 
CleanUpNamespace called with no netns name, ignoring. ContainerID="d8b205dcbf77e1a4c353db85981bb8086e08cd6f925e60d2e60b158a00395104" iface="eth0" netns="" Sep 12 17:56:11.577690 containerd[1819]: 2025-09-12 17:56:11.561 [INFO][7393] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d8b205dcbf77e1a4c353db85981bb8086e08cd6f925e60d2e60b158a00395104" Sep 12 17:56:11.577690 containerd[1819]: 2025-09-12 17:56:11.561 [INFO][7393] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d8b205dcbf77e1a4c353db85981bb8086e08cd6f925e60d2e60b158a00395104" Sep 12 17:56:11.577690 containerd[1819]: 2025-09-12 17:56:11.572 [INFO][7412] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d8b205dcbf77e1a4c353db85981bb8086e08cd6f925e60d2e60b158a00395104" HandleID="k8s-pod-network.d8b205dcbf77e1a4c353db85981bb8086e08cd6f925e60d2e60b158a00395104" Workload="ci--4081.3.6--a--7e79e463ed-k8s-calico--apiserver--79f6dbf598--7fz7j-eth0" Sep 12 17:56:11.577690 containerd[1819]: 2025-09-12 17:56:11.572 [INFO][7412] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:56:11.577690 containerd[1819]: 2025-09-12 17:56:11.572 [INFO][7412] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:56:11.577690 containerd[1819]: 2025-09-12 17:56:11.575 [WARNING][7412] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d8b205dcbf77e1a4c353db85981bb8086e08cd6f925e60d2e60b158a00395104" HandleID="k8s-pod-network.d8b205dcbf77e1a4c353db85981bb8086e08cd6f925e60d2e60b158a00395104" Workload="ci--4081.3.6--a--7e79e463ed-k8s-calico--apiserver--79f6dbf598--7fz7j-eth0" Sep 12 17:56:11.577690 containerd[1819]: 2025-09-12 17:56:11.575 [INFO][7412] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d8b205dcbf77e1a4c353db85981bb8086e08cd6f925e60d2e60b158a00395104" HandleID="k8s-pod-network.d8b205dcbf77e1a4c353db85981bb8086e08cd6f925e60d2e60b158a00395104" Workload="ci--4081.3.6--a--7e79e463ed-k8s-calico--apiserver--79f6dbf598--7fz7j-eth0" Sep 12 17:56:11.577690 containerd[1819]: 2025-09-12 17:56:11.576 [INFO][7412] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:56:11.577690 containerd[1819]: 2025-09-12 17:56:11.576 [INFO][7393] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d8b205dcbf77e1a4c353db85981bb8086e08cd6f925e60d2e60b158a00395104" Sep 12 17:56:11.577690 containerd[1819]: time="2025-09-12T17:56:11.577682837Z" level=info msg="TearDown network for sandbox \"d8b205dcbf77e1a4c353db85981bb8086e08cd6f925e60d2e60b158a00395104\" successfully" Sep 12 17:56:11.579266 containerd[1819]: time="2025-09-12T17:56:11.579210662Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d8b205dcbf77e1a4c353db85981bb8086e08cd6f925e60d2e60b158a00395104\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 12 17:56:11.579266 containerd[1819]: time="2025-09-12T17:56:11.579239297Z" level=info msg="RemovePodSandbox \"d8b205dcbf77e1a4c353db85981bb8086e08cd6f925e60d2e60b158a00395104\" returns successfully" Sep 12 17:56:33.557819 kubelet[3070]: I0912 17:56:33.557740 3070 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 12 17:57:54.521952 update_engine[1809]: I20250912 17:57:54.521709 1809 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Sep 12 17:57:54.521952 update_engine[1809]: I20250912 17:57:54.521809 1809 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Sep 12 17:57:54.523079 update_engine[1809]: I20250912 17:57:54.522186 1809 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Sep 12 17:57:54.523345 update_engine[1809]: I20250912 17:57:54.523275 1809 omaha_request_params.cc:62] Current group set to lts Sep 12 17:57:54.523635 update_engine[1809]: I20250912 17:57:54.523542 1809 update_attempter.cc:499] Already updated boot flags. Skipping. Sep 12 17:57:54.523635 update_engine[1809]: I20250912 17:57:54.523576 1809 update_attempter.cc:643] Scheduling an action processor start. 
Sep 12 17:57:54.523635 update_engine[1809]: I20250912 17:57:54.523615 1809 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Sep 12 17:57:54.524008 update_engine[1809]: I20250912 17:57:54.523694 1809 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Sep 12 17:57:54.524008 update_engine[1809]: I20250912 17:57:54.523850 1809 omaha_request_action.cc:271] Posting an Omaha request to disabled Sep 12 17:57:54.524008 update_engine[1809]: I20250912 17:57:54.523885 1809 omaha_request_action.cc:272] Request: Sep 12 17:57:54.524008 update_engine[1809]: Sep 12 17:57:54.524008 update_engine[1809]: Sep 12 17:57:54.524008 update_engine[1809]: Sep 12 17:57:54.524008 update_engine[1809]: Sep 12 17:57:54.524008 update_engine[1809]: Sep 12 17:57:54.524008 update_engine[1809]: Sep 12 17:57:54.524008 update_engine[1809]: Sep 12 17:57:54.524008 update_engine[1809]: Sep 12 17:57:54.524008 update_engine[1809]: I20250912 17:57:54.523902 1809 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Sep 12 17:57:54.525092 locksmithd[1857]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Sep 12 17:57:54.526869 update_engine[1809]: I20250912 17:57:54.526858 1809 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Sep 12 17:57:54.527062 update_engine[1809]: I20250912 17:57:54.527022 1809 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Sep 12 17:57:54.528176 update_engine[1809]: E20250912 17:57:54.528133 1809 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Sep 12 17:57:54.528176 update_engine[1809]: I20250912 17:57:54.528164 1809 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Sep 12 17:58:04.481838 update_engine[1809]: I20250912 17:58:04.481678 1809 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Sep 12 17:58:04.482965 update_engine[1809]: I20250912 17:58:04.482225 1809 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Sep 12 17:58:04.482965 update_engine[1809]: I20250912 17:58:04.482776 1809 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Sep 12 17:58:04.483543 update_engine[1809]: E20250912 17:58:04.483462 1809 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Sep 12 17:58:04.483768 update_engine[1809]: I20250912 17:58:04.483633 1809 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Sep 12 17:58:14.473489 update_engine[1809]: I20250912 17:58:14.473289 1809 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Sep 12 17:58:14.474507 update_engine[1809]: I20250912 17:58:14.473896 1809 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Sep 12 17:58:14.474507 update_engine[1809]: I20250912 17:58:14.474455 1809 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Sep 12 17:58:14.475427 update_engine[1809]: E20250912 17:58:14.475299 1809 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Sep 12 17:58:14.475641 update_engine[1809]: I20250912 17:58:14.475474 1809 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Sep 12 17:58:24.476042 update_engine[1809]: I20250912 17:58:24.475885 1809 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Sep 12 17:58:24.477076 update_engine[1809]: I20250912 17:58:24.476478 1809 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Sep 12 17:58:24.477076 update_engine[1809]: I20250912 17:58:24.477012 1809 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Sep 12 17:58:24.478125 update_engine[1809]: E20250912 17:58:24.478010 1809 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Sep 12 17:58:24.478337 update_engine[1809]: I20250912 17:58:24.478152 1809 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Sep 12 17:58:24.478337 update_engine[1809]: I20250912 17:58:24.478183 1809 omaha_request_action.cc:617] Omaha request response: Sep 12 17:58:24.478618 update_engine[1809]: E20250912 17:58:24.478345 1809 omaha_request_action.cc:636] Omaha request network transfer failed. Sep 12 17:58:24.478618 update_engine[1809]: I20250912 17:58:24.478395 1809 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Sep 12 17:58:24.478618 update_engine[1809]: I20250912 17:58:24.478415 1809 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Sep 12 17:58:24.478618 update_engine[1809]: I20250912 17:58:24.478430 1809 update_attempter.cc:306] Processing Done. Sep 12 17:58:24.478618 update_engine[1809]: E20250912 17:58:24.478506 1809 update_attempter.cc:619] Update failed. 
Sep 12 17:58:24.478618 update_engine[1809]: I20250912 17:58:24.478525 1809 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Sep 12 17:58:24.478618 update_engine[1809]: I20250912 17:58:24.478541 1809 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Sep 12 17:58:24.478618 update_engine[1809]: I20250912 17:58:24.478559 1809 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. Sep 12 17:58:24.479325 update_engine[1809]: I20250912 17:58:24.478729 1809 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Sep 12 17:58:24.479325 update_engine[1809]: I20250912 17:58:24.478793 1809 omaha_request_action.cc:271] Posting an Omaha request to disabled Sep 12 17:58:24.479325 update_engine[1809]: I20250912 17:58:24.478814 1809 omaha_request_action.cc:272] Request: Sep 12 17:58:24.479325 update_engine[1809]: Sep 12 17:58:24.479325 update_engine[1809]: Sep 12 17:58:24.479325 update_engine[1809]: Sep 12 17:58:24.479325 update_engine[1809]: Sep 12 17:58:24.479325 update_engine[1809]: Sep 12 17:58:24.479325 update_engine[1809]: Sep 12 17:58:24.479325 update_engine[1809]: I20250912 17:58:24.478832 1809 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Sep 12 17:58:24.480243 update_engine[1809]: I20250912 17:58:24.479328 1809 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Sep 12 17:58:24.480243 update_engine[1809]: I20250912 17:58:24.479843 1809 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Sep 12 17:58:24.480450 locksmithd[1857]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Sep 12 17:58:24.481151 update_engine[1809]: E20250912 17:58:24.480777 1809 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Sep 12 17:58:24.481151 update_engine[1809]: I20250912 17:58:24.480906 1809 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Sep 12 17:58:24.481151 update_engine[1809]: I20250912 17:58:24.480934 1809 omaha_request_action.cc:617] Omaha request response: Sep 12 17:58:24.481151 update_engine[1809]: I20250912 17:58:24.480953 1809 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Sep 12 17:58:24.481151 update_engine[1809]: I20250912 17:58:24.480970 1809 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Sep 12 17:58:24.481151 update_engine[1809]: I20250912 17:58:24.480985 1809 update_attempter.cc:306] Processing Done. Sep 12 17:58:24.481151 update_engine[1809]: I20250912 17:58:24.481001 1809 update_attempter.cc:310] Error event sent. Sep 12 17:58:24.481151 update_engine[1809]: I20250912 17:58:24.481025 1809 update_check_scheduler.cc:74] Next update check in 48m8s Sep 12 17:58:24.481904 locksmithd[1857]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Sep 12 18:01:19.606148 systemd[1]: Started sshd@9-139.178.94.21:22-147.75.109.163:58984.service - OpenSSH per-connection server daemon (147.75.109.163:58984). Sep 12 18:01:19.642201 sshd[8689]: Accepted publickey for core from 147.75.109.163 port 58984 ssh2: RSA SHA256:6p41YxiFESxrZTnbcy95UBNnL9kP2MVm2sysusSZqw8 Sep 12 18:01:19.643253 sshd[8689]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 18:01:19.646853 systemd-logind[1804]: New session 12 of user core. 
Sep 12 18:01:19.659676 systemd[1]: Started session-12.scope - Session 12 of User core.
Sep 12 18:01:19.815473 sshd[8689]: pam_unix(sshd:session): session closed for user core
Sep 12 18:01:19.817546 systemd[1]: sshd@9-139.178.94.21:22-147.75.109.163:58984.service: Deactivated successfully.
Sep 12 18:01:19.818740 systemd[1]: session-12.scope: Deactivated successfully.
Sep 12 18:01:19.819701 systemd-logind[1804]: Session 12 logged out. Waiting for processes to exit.
Sep 12 18:01:19.820443 systemd-logind[1804]: Removed session 12.
Sep 12 18:01:24.834475 systemd[1]: Started sshd@10-139.178.94.21:22-147.75.109.163:52772.service - OpenSSH per-connection server daemon (147.75.109.163:52772).
Sep 12 18:01:24.899719 sshd[8769]: Accepted publickey for core from 147.75.109.163 port 52772 ssh2: RSA SHA256:6p41YxiFESxrZTnbcy95UBNnL9kP2MVm2sysusSZqw8
Sep 12 18:01:24.900389 sshd[8769]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 18:01:24.902903 systemd-logind[1804]: New session 13 of user core.
Sep 12 18:01:24.912753 systemd[1]: Started session-13.scope - Session 13 of User core.
Sep 12 18:01:25.058132 sshd[8769]: pam_unix(sshd:session): session closed for user core
Sep 12 18:01:25.060104 systemd[1]: sshd@10-139.178.94.21:22-147.75.109.163:52772.service: Deactivated successfully.
Sep 12 18:01:25.061045 systemd[1]: session-13.scope: Deactivated successfully.
Sep 12 18:01:25.061402 systemd-logind[1804]: Session 13 logged out. Waiting for processes to exit.
Sep 12 18:01:25.061990 systemd-logind[1804]: Removed session 13.
Sep 12 18:01:30.072943 systemd[1]: Started sshd@11-139.178.94.21:22-147.75.109.163:36190.service - OpenSSH per-connection server daemon (147.75.109.163:36190).
Sep 12 18:01:30.106218 sshd[8799]: Accepted publickey for core from 147.75.109.163 port 36190 ssh2: RSA SHA256:6p41YxiFESxrZTnbcy95UBNnL9kP2MVm2sysusSZqw8
Sep 12 18:01:30.106925 sshd[8799]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 18:01:30.109428 systemd-logind[1804]: New session 14 of user core.
Sep 12 18:01:30.119682 systemd[1]: Started session-14.scope - Session 14 of User core.
Sep 12 18:01:30.207224 sshd[8799]: pam_unix(sshd:session): session closed for user core
Sep 12 18:01:30.208873 systemd[1]: sshd@11-139.178.94.21:22-147.75.109.163:36190.service: Deactivated successfully.
Sep 12 18:01:30.209897 systemd[1]: session-14.scope: Deactivated successfully.
Sep 12 18:01:30.210723 systemd-logind[1804]: Session 14 logged out. Waiting for processes to exit.
Sep 12 18:01:30.211348 systemd-logind[1804]: Removed session 14.
Sep 12 18:01:35.225411 systemd[1]: Started sshd@12-139.178.94.21:22-147.75.109.163:36194.service - OpenSSH per-connection server daemon (147.75.109.163:36194).
Sep 12 18:01:35.255603 sshd[8858]: Accepted publickey for core from 147.75.109.163 port 36194 ssh2: RSA SHA256:6p41YxiFESxrZTnbcy95UBNnL9kP2MVm2sysusSZqw8
Sep 12 18:01:35.256318 sshd[8858]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 18:01:35.258971 systemd-logind[1804]: New session 15 of user core.
Sep 12 18:01:35.277660 systemd[1]: Started session-15.scope - Session 15 of User core.
Sep 12 18:01:35.402826 sshd[8858]: pam_unix(sshd:session): session closed for user core
Sep 12 18:01:35.425100 systemd[1]: sshd@12-139.178.94.21:22-147.75.109.163:36194.service: Deactivated successfully.
Sep 12 18:01:35.425876 systemd[1]: session-15.scope: Deactivated successfully.
Sep 12 18:01:35.426604 systemd-logind[1804]: Session 15 logged out. Waiting for processes to exit.
Sep 12 18:01:35.427154 systemd[1]: Started sshd@13-139.178.94.21:22-147.75.109.163:36206.service - OpenSSH per-connection server daemon (147.75.109.163:36206).
Sep 12 18:01:35.427667 systemd-logind[1804]: Removed session 15.
Sep 12 18:01:35.457167 sshd[8885]: Accepted publickey for core from 147.75.109.163 port 36206 ssh2: RSA SHA256:6p41YxiFESxrZTnbcy95UBNnL9kP2MVm2sysusSZqw8
Sep 12 18:01:35.457994 sshd[8885]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 18:01:35.460722 systemd-logind[1804]: New session 16 of user core.
Sep 12 18:01:35.484672 systemd[1]: Started session-16.scope - Session 16 of User core.
Sep 12 18:01:35.585286 sshd[8885]: pam_unix(sshd:session): session closed for user core
Sep 12 18:01:35.599104 systemd[1]: sshd@13-139.178.94.21:22-147.75.109.163:36206.service: Deactivated successfully.
Sep 12 18:01:35.599948 systemd[1]: session-16.scope: Deactivated successfully.
Sep 12 18:01:35.600678 systemd-logind[1804]: Session 16 logged out. Waiting for processes to exit.
Sep 12 18:01:35.601229 systemd[1]: Started sshd@14-139.178.94.21:22-147.75.109.163:36222.service - OpenSSH per-connection server daemon (147.75.109.163:36222).
Sep 12 18:01:35.601686 systemd-logind[1804]: Removed session 16.
Sep 12 18:01:35.630791 sshd[8911]: Accepted publickey for core from 147.75.109.163 port 36222 ssh2: RSA SHA256:6p41YxiFESxrZTnbcy95UBNnL9kP2MVm2sysusSZqw8
Sep 12 18:01:35.631477 sshd[8911]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 18:01:35.633967 systemd-logind[1804]: New session 17 of user core.
Sep 12 18:01:35.649611 systemd[1]: Started session-17.scope - Session 17 of User core.
Sep 12 18:01:35.797781 sshd[8911]: pam_unix(sshd:session): session closed for user core
Sep 12 18:01:35.800284 systemd[1]: sshd@14-139.178.94.21:22-147.75.109.163:36222.service: Deactivated successfully.
Sep 12 18:01:35.801802 systemd[1]: session-17.scope: Deactivated successfully.
Sep 12 18:01:35.803034 systemd-logind[1804]: Session 17 logged out. Waiting for processes to exit.
Sep 12 18:01:35.804026 systemd-logind[1804]: Removed session 17.
Sep 12 18:01:40.839746 systemd[1]: Started sshd@15-139.178.94.21:22-147.75.109.163:53484.service - OpenSSH per-connection server daemon (147.75.109.163:53484).
Sep 12 18:01:40.867903 sshd[8943]: Accepted publickey for core from 147.75.109.163 port 53484 ssh2: RSA SHA256:6p41YxiFESxrZTnbcy95UBNnL9kP2MVm2sysusSZqw8
Sep 12 18:01:40.868716 sshd[8943]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 18:01:40.871505 systemd-logind[1804]: New session 18 of user core.
Sep 12 18:01:40.885614 systemd[1]: Started session-18.scope - Session 18 of User core.
Sep 12 18:01:40.966307 sshd[8943]: pam_unix(sshd:session): session closed for user core
Sep 12 18:01:40.967910 systemd[1]: sshd@15-139.178.94.21:22-147.75.109.163:53484.service: Deactivated successfully.
Sep 12 18:01:40.968836 systemd[1]: session-18.scope: Deactivated successfully.
Sep 12 18:01:40.969428 systemd-logind[1804]: Session 18 logged out. Waiting for processes to exit.
Sep 12 18:01:40.970066 systemd-logind[1804]: Removed session 18.
Sep 12 18:01:45.984807 systemd[1]: Started sshd@16-139.178.94.21:22-147.75.109.163:53494.service - OpenSSH per-connection server daemon (147.75.109.163:53494).
Sep 12 18:01:46.018900 sshd[8968]: Accepted publickey for core from 147.75.109.163 port 53494 ssh2: RSA SHA256:6p41YxiFESxrZTnbcy95UBNnL9kP2MVm2sysusSZqw8
Sep 12 18:01:46.019574 sshd[8968]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 18:01:46.021883 systemd-logind[1804]: New session 19 of user core.
Sep 12 18:01:46.041681 systemd[1]: Started session-19.scope - Session 19 of User core.
Sep 12 18:01:46.127222 sshd[8968]: pam_unix(sshd:session): session closed for user core
Sep 12 18:01:46.128956 systemd[1]: sshd@16-139.178.94.21:22-147.75.109.163:53494.service: Deactivated successfully.
Sep 12 18:01:46.129995 systemd[1]: session-19.scope: Deactivated successfully.
Sep 12 18:01:46.130823 systemd-logind[1804]: Session 19 logged out. Waiting for processes to exit.
Sep 12 18:01:46.131500 systemd-logind[1804]: Removed session 19.
Sep 12 18:01:51.170680 systemd[1]: Started sshd@17-139.178.94.21:22-147.75.109.163:42804.service - OpenSSH per-connection server daemon (147.75.109.163:42804).
Sep 12 18:01:51.199481 sshd[8996]: Accepted publickey for core from 147.75.109.163 port 42804 ssh2: RSA SHA256:6p41YxiFESxrZTnbcy95UBNnL9kP2MVm2sysusSZqw8
Sep 12 18:01:51.200245 sshd[8996]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 18:01:51.203058 systemd-logind[1804]: New session 20 of user core.
Sep 12 18:01:51.220693 systemd[1]: Started session-20.scope - Session 20 of User core.
Sep 12 18:01:51.311519 sshd[8996]: pam_unix(sshd:session): session closed for user core
Sep 12 18:01:51.313282 systemd[1]: sshd@17-139.178.94.21:22-147.75.109.163:42804.service: Deactivated successfully.
Sep 12 18:01:51.314269 systemd[1]: session-20.scope: Deactivated successfully.
Sep 12 18:01:51.315036 systemd-logind[1804]: Session 20 logged out. Waiting for processes to exit.
Sep 12 18:01:51.315663 systemd-logind[1804]: Removed session 20.
Sep 12 18:01:56.341207 systemd[1]: Started sshd@18-139.178.94.21:22-147.75.109.163:42814.service - OpenSSH per-connection server daemon (147.75.109.163:42814).
Sep 12 18:01:56.412959 sshd[9073]: Accepted publickey for core from 147.75.109.163 port 42814 ssh2: RSA SHA256:6p41YxiFESxrZTnbcy95UBNnL9kP2MVm2sysusSZqw8
Sep 12 18:01:56.413928 sshd[9073]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 18:01:56.417339 systemd-logind[1804]: New session 21 of user core.
Sep 12 18:01:56.429591 systemd[1]: Started session-21.scope - Session 21 of User core.
Sep 12 18:01:56.570182 sshd[9073]: pam_unix(sshd:session): session closed for user core
Sep 12 18:01:56.579548 systemd[1]: sshd@18-139.178.94.21:22-147.75.109.163:42814.service: Deactivated successfully.
Sep 12 18:01:56.580626 systemd[1]: session-21.scope: Deactivated successfully.
Sep 12 18:01:56.581442 systemd-logind[1804]: Session 21 logged out. Waiting for processes to exit.
Sep 12 18:01:56.582285 systemd[1]: Started sshd@19-139.178.94.21:22-147.75.109.163:42816.service - OpenSSH per-connection server daemon (147.75.109.163:42816).
Sep 12 18:01:56.582870 systemd-logind[1804]: Removed session 21.
Sep 12 18:01:56.616383 sshd[9100]: Accepted publickey for core from 147.75.109.163 port 42816 ssh2: RSA SHA256:6p41YxiFESxrZTnbcy95UBNnL9kP2MVm2sysusSZqw8
Sep 12 18:01:56.617190 sshd[9100]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 18:01:56.620190 systemd-logind[1804]: New session 22 of user core.
Sep 12 18:01:56.633719 systemd[1]: Started session-22.scope - Session 22 of User core.
Sep 12 18:01:56.762698 sshd[9100]: pam_unix(sshd:session): session closed for user core
Sep 12 18:01:56.785995 systemd[1]: sshd@19-139.178.94.21:22-147.75.109.163:42816.service: Deactivated successfully.
Sep 12 18:01:56.787609 systemd[1]: session-22.scope: Deactivated successfully.
Sep 12 18:01:56.788881 systemd-logind[1804]: Session 22 logged out. Waiting for processes to exit.
Sep 12 18:01:56.790333 systemd[1]: Started sshd@20-139.178.94.21:22-147.75.109.163:42826.service - OpenSSH per-connection server daemon (147.75.109.163:42826).
Sep 12 18:01:56.791354 systemd-logind[1804]: Removed session 22.
Sep 12 18:01:56.836947 sshd[9124]: Accepted publickey for core from 147.75.109.163 port 42826 ssh2: RSA SHA256:6p41YxiFESxrZTnbcy95UBNnL9kP2MVm2sysusSZqw8
Sep 12 18:01:56.838150 sshd[9124]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 18:01:56.842219 systemd-logind[1804]: New session 23 of user core.
Sep 12 18:01:56.868638 systemd[1]: Started session-23.scope - Session 23 of User core.
Sep 12 18:01:57.882140 sshd[9124]: pam_unix(sshd:session): session closed for user core
Sep 12 18:01:57.900280 systemd[1]: sshd@20-139.178.94.21:22-147.75.109.163:42826.service: Deactivated successfully.
Sep 12 18:01:57.901172 systemd[1]: session-23.scope: Deactivated successfully.
Sep 12 18:01:57.901908 systemd-logind[1804]: Session 23 logged out. Waiting for processes to exit.
Sep 12 18:01:57.902598 systemd[1]: Started sshd@21-139.178.94.21:22-147.75.109.163:42830.service - OpenSSH per-connection server daemon (147.75.109.163:42830).
Sep 12 18:01:57.903163 systemd-logind[1804]: Removed session 23.
Sep 12 18:01:57.934615 sshd[9166]: Accepted publickey for core from 147.75.109.163 port 42830 ssh2: RSA SHA256:6p41YxiFESxrZTnbcy95UBNnL9kP2MVm2sysusSZqw8
Sep 12 18:01:57.935452 sshd[9166]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 18:01:57.938248 systemd-logind[1804]: New session 24 of user core.
Sep 12 18:01:57.948649 systemd[1]: Started session-24.scope - Session 24 of User core.
Sep 12 18:01:58.150221 sshd[9166]: pam_unix(sshd:session): session closed for user core
Sep 12 18:01:58.167039 systemd[1]: sshd@21-139.178.94.21:22-147.75.109.163:42830.service: Deactivated successfully.
Sep 12 18:01:58.171161 systemd[1]: session-24.scope: Deactivated successfully.
Sep 12 18:01:58.174366 systemd-logind[1804]: Session 24 logged out. Waiting for processes to exit.
Sep 12 18:01:58.189203 systemd[1]: Started sshd@22-139.178.94.21:22-147.75.109.163:42836.service - OpenSSH per-connection server daemon (147.75.109.163:42836).
Sep 12 18:01:58.192397 systemd-logind[1804]: Removed session 24.
Sep 12 18:01:58.247090 sshd[9191]: Accepted publickey for core from 147.75.109.163 port 42836 ssh2: RSA SHA256:6p41YxiFESxrZTnbcy95UBNnL9kP2MVm2sysusSZqw8
Sep 12 18:01:58.248257 sshd[9191]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 18:01:58.252026 systemd-logind[1804]: New session 25 of user core.
Sep 12 18:01:58.275670 systemd[1]: Started session-25.scope - Session 25 of User core.
Sep 12 18:01:58.404664 sshd[9191]: pam_unix(sshd:session): session closed for user core
Sep 12 18:01:58.406317 systemd[1]: sshd@22-139.178.94.21:22-147.75.109.163:42836.service: Deactivated successfully.
Sep 12 18:01:58.407262 systemd[1]: session-25.scope: Deactivated successfully.
Sep 12 18:01:58.408011 systemd-logind[1804]: Session 25 logged out. Waiting for processes to exit.
Sep 12 18:01:58.408566 systemd-logind[1804]: Removed session 25.
Sep 12 18:02:03.421271 systemd[1]: Started sshd@23-139.178.94.21:22-147.75.109.163:46484.service - OpenSSH per-connection server daemon (147.75.109.163:46484).
Sep 12 18:02:03.477550 sshd[9289]: Accepted publickey for core from 147.75.109.163 port 46484 ssh2: RSA SHA256:6p41YxiFESxrZTnbcy95UBNnL9kP2MVm2sysusSZqw8
Sep 12 18:02:03.481149 sshd[9289]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 18:02:03.491694 systemd-logind[1804]: New session 26 of user core.
Sep 12 18:02:03.513888 systemd[1]: Started session-26.scope - Session 26 of User core.
Sep 12 18:02:03.606587 sshd[9289]: pam_unix(sshd:session): session closed for user core
Sep 12 18:02:03.608359 systemd[1]: sshd@23-139.178.94.21:22-147.75.109.163:46484.service: Deactivated successfully.
Sep 12 18:02:03.609365 systemd[1]: session-26.scope: Deactivated successfully.
Sep 12 18:02:03.610096 systemd-logind[1804]: Session 26 logged out. Waiting for processes to exit.
Sep 12 18:02:03.610746 systemd-logind[1804]: Removed session 26.
Sep 12 18:02:08.640661 systemd[1]: Started sshd@24-139.178.94.21:22-147.75.109.163:46500.service - OpenSSH per-connection server daemon (147.75.109.163:46500).
Sep 12 18:02:08.670008 sshd[9315]: Accepted publickey for core from 147.75.109.163 port 46500 ssh2: RSA SHA256:6p41YxiFESxrZTnbcy95UBNnL9kP2MVm2sysusSZqw8
Sep 12 18:02:08.670914 sshd[9315]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 18:02:08.674033 systemd-logind[1804]: New session 27 of user core.
Sep 12 18:02:08.674750 systemd[1]: Started session-27.scope - Session 27 of User core.
Sep 12 18:02:08.758403 sshd[9315]: pam_unix(sshd:session): session closed for user core
Sep 12 18:02:08.759894 systemd[1]: sshd@24-139.178.94.21:22-147.75.109.163:46500.service: Deactivated successfully.
Sep 12 18:02:08.760831 systemd[1]: session-27.scope: Deactivated successfully.
Sep 12 18:02:08.761450 systemd-logind[1804]: Session 27 logged out. Waiting for processes to exit.
Sep 12 18:02:08.762009 systemd-logind[1804]: Removed session 27.
Sep 12 18:02:13.796751 systemd[1]: Started sshd@25-139.178.94.21:22-147.75.109.163:34588.service - OpenSSH per-connection server daemon (147.75.109.163:34588).
Sep 12 18:02:13.824912 sshd[9343]: Accepted publickey for core from 147.75.109.163 port 34588 ssh2: RSA SHA256:6p41YxiFESxrZTnbcy95UBNnL9kP2MVm2sysusSZqw8
Sep 12 18:02:13.825582 sshd[9343]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 18:02:13.828026 systemd-logind[1804]: New session 28 of user core.
Sep 12 18:02:13.847719 systemd[1]: Started session-28.scope - Session 28 of User core.
Sep 12 18:02:13.933992 sshd[9343]: pam_unix(sshd:session): session closed for user core
Sep 12 18:02:13.935625 systemd[1]: sshd@25-139.178.94.21:22-147.75.109.163:34588.service: Deactivated successfully.
Sep 12 18:02:13.936697 systemd[1]: session-28.scope: Deactivated successfully.
Sep 12 18:02:13.937491 systemd-logind[1804]: Session 28 logged out. Waiting for processes to exit.
Sep 12 18:02:13.938285 systemd-logind[1804]: Removed session 28.