### Test report for 4081.3.6+nightly-20260424-2100 / amd64

**Platforms tested**: azure

🟢 ok **bpf.ig**; Succeeded: azure (2); Failed: azure (1)

<details>

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for azure, run 1</summary>

  ```
  bpf.ig/ig (0.32s)
  ig.go:50: creating node: creating public ip: PUT https://management.azure.com/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kola-cluster-image-9fd5883bba/providers/Microsoft.Network/publicIPAddresses/ip-62438ccf95
  --------------------------------------------------------------------------------
  RESPONSE 409: 409 Conflict
  ERROR CODE: ResourceGroupBeingDeleted
  --------------------------------------------------------------------------------
  {
    "error": {
      "code": "ResourceGroupBeingDeleted",
      "message": "The resource group 'kola-cluster-image-9fd5883bba' is in deprovisioning state and cannot perform this operation."
    }
  }
  --------------------------------------------------------------------------------
  ```


</details>


🟢 ok **cl.basic**; Succeeded: azure (1)

🟢 ok **cl.cloudinit.basic**; Succeeded: azure (2); Failed: azure (1)

<details>

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for azure, run 1</summary>

  ```
  Error: harness.go:646: Cluster failed starting machines: creating public ip: PUT https://management.azure.com/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kola-cluster-image-9fd5883bba/providers/Microsoft.Network/publicIPAddresses/ip-7739bd3878
  --------------------------------------------------------------------------------
  RESPONSE 409: 409 Conflict
  ERROR CODE: ResourceGroupBeingDeleted
  --------------------------------------------------------------------------------
  {
    "error": {
      "code": "ResourceGroupBeingDeleted",
      "message": "The resource group 'kola-cluster-image-9fd5883bba' is in deprovisioning state and cannot perform this operation."
    }
  }
  --------------------------------------------------------------------------------
  ```


</details>


🟢 ok **cl.cloudinit.multipart-mime**; Succeeded: azure (2); Failed: azure (1)

<details>

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for azure, run 1</summary>

  ```
  Error: harness.go:646: Cluster failed starting machines: creating public ip: PUT https://management.azure.com/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kola-cluster-image-9fd5883bba/providers/Microsoft.Network/publicIPAddresses/ip-b9addca590
  --------------------------------------------------------------------------------
  RESPONSE 409: 409 Conflict
  ERROR CODE: ResourceGroupBeingDeleted
  --------------------------------------------------------------------------------
  {
    "error": {
      "code": "ResourceGroupBeingDeleted",
      "message": "The resource group 'kola-cluster-image-9fd5883bba' is in deprovisioning state and cannot perform this operation."
    }
  }
  --------------------------------------------------------------------------------
  ```


</details>


🟢 ok **cl.cloudinit.script**; Succeeded: azure (1)

🟢 ok **cl.etcd-member.discovery**; Succeeded: azure (1)

🟢 ok **cl.etcd-member.etcdctlv3**; Succeeded: azure (1)

🟢 ok **cl.etcd-member.v2-backup-restore**; Succeeded: azure (1)

🟢 ok **cl.flannel.udp**; Succeeded: azure (1)

🟢 ok **cl.flannel.vxlan**; Succeeded: azure (2); Failed: azure (1)

<details>

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for azure, run 1</summary>

  ```
  Error: harness.go:646: Cluster failed starting machines: creating public ip: PUT https://management.azure.com/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kola-cluster-image-9fd5883bba/providers/Microsoft.Network/publicIPAddresses/ip-0da04e85c4
  --------------------------------------------------------------------------------
  RESPONSE 409: 409 Conflict
  ERROR CODE: ResourceGroupBeingDeleted
  --------------------------------------------------------------------------------
  {
    "error": {
      "code": "ResourceGroupBeingDeleted",
      "message": "The resource group 'kola-cluster-image-9fd5883bba' is in deprovisioning state and cannot perform this operation."
    }
  }
  --------------------------------------------------------------------------------
  ```


</details>


🟢 ok **cl.ignition.kargs**; Succeeded: azure (1)

🟢 ok **cl.ignition.luks**; Succeeded: azure (2); Failed: azure (1)

<details>

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for azure, run 1</summary>

  ```
  Error: harness.go:646: Cluster failed starting machines: creating public ip: PUT https://management.azure.com/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kola-cluster-image-9fd5883bba/providers/Microsoft.Network/publicIPAddresses/ip-f9b4ede94e
  --------------------------------------------------------------------------------
  RESPONSE 409: 409 Conflict
  ERROR CODE: ResourceGroupBeingDeleted
  --------------------------------------------------------------------------------
  {
    "error": {
      "code": "ResourceGroupBeingDeleted",
      "message": "The resource group 'kola-cluster-image-9fd5883bba' is in deprovisioning state and cannot perform this operation."
    }
  }
  --------------------------------------------------------------------------------
  ```


</details>


🟢 ok **cl.ignition.misc.empty**; Succeeded: azure (2); Failed: azure (1)

<details>

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for azure, run 1</summary>

  ```
  Error: harness.go:646: Cluster failed starting machines: creating public ip: PUT https://management.azure.com/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kola-cluster-image-9fd5883bba/providers/Microsoft.Network/publicIPAddresses/ip-0e4daf0580
  --------------------------------------------------------------------------------
  RESPONSE 409: 409 Conflict
  ERROR CODE: ResourceGroupBeingDeleted
  --------------------------------------------------------------------------------
  {
    "error": {
      "code": "ResourceGroupBeingDeleted",
      "message": "The resource group 'kola-cluster-image-9fd5883bba' is in deprovisioning state and cannot perform this operation."
    }
  }
  --------------------------------------------------------------------------------
  ```


</details>


🟢 ok **cl.ignition.symlink**; Succeeded: azure (2); Failed: azure (1)

<details>

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for azure, run 1</summary>

  ```
  Error: harness.go:646: Cluster failed starting machines: creating public ip: PUT https://management.azure.com/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kola-cluster-image-9fd5883bba/providers/Microsoft.Network/publicIPAddresses/ip-04e044f16d
  --------------------------------------------------------------------------------
  RESPONSE 409: 409 Conflict
  ERROR CODE: ResourceGroupBeingDeleted
  --------------------------------------------------------------------------------
  {
    "error": {
      "code": "ResourceGroupBeingDeleted",
      "message": "The resource group 'kola-cluster-image-9fd5883bba' is in deprovisioning state and cannot perform this operation."
    }
  }
  --------------------------------------------------------------------------------
  ```


</details>


🟢 ok **cl.ignition.translation**; Succeeded: azure (1)

🟢 ok **cl.ignition.v1.btrfsroot**; Succeeded: azure (2); Failed: azure (1)

<details>

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for azure, run 1</summary>

  ```
  Error: harness.go:646: Cluster failed starting machines: waiting for machine to become active: GET https://management.azure.com/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kola-cluster-image-9fd5883bba/providers/Microsoft.Compute/virtualMachines/ci-4081.3.6-n-d2cc968ba4
  --------------------------------------------------------------------------------
  RESPONSE 404: 404 Not Found
  ERROR CODE: NotFound
  --------------------------------------------------------------------------------
  {
    "error": {
      "code": "NotFound",
      "message": "The entity was not found in this Azure location."
    }
  }
  --------------------------------------------------------------------------------
  ```


</details>


🟢 ok **cl.ignition.v1.ext4root**; Succeeded: azure (2); Failed: azure (1)

<details>

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for azure, run 1</summary>

  ```
  Error: harness.go:646: Cluster failed starting machines: PollUntilDone(ci-4081.3.6-n-f96be78ed2): GET https://management.azure.com/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/providers/Microsoft.Compute/locations/northeurope/operations/2dd4bf5a-d803-4457-b3fe-99aa07cb4d44
  --------------------------------------------------------------------------------
  RESPONSE 200: 200 OK
  ERROR CODE: OperationPreempted
  --------------------------------------------------------------------------------
  {
    "startTime": "2026-04-24T23:32:59.3080908+00:00",
    "endTime": "2026-04-24T23:33:33.7910001+00:00",
    "status": "Canceled",
    "error": {
      "code": "OperationPreempted",
      "message": "Operation execution has been preempted by a more recent operation."
    },
    "name": "2dd4bf5a-d803-4457-b3fe-99aa07cb4d44"
  }
  --------------------------------------------------------------------------------
  ```


</details>


🟢 ok **cl.ignition.v1.groups**; Succeeded: azure (2); Failed: azure (1)

<details>

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for azure, run 1</summary>

  ```
  Error: harness.go:646: Cluster failed starting machines: creating public ip: PUT https://management.azure.com/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kola-cluster-image-9fd5883bba/providers/Microsoft.Network/publicIPAddresses/ip-3b2dd083f2
  --------------------------------------------------------------------------------
  RESPONSE 409: 409 Conflict
  ERROR CODE: ResourceGroupBeingDeleted
  --------------------------------------------------------------------------------
  {
    "error": {
      "code": "ResourceGroupBeingDeleted",
      "message": "The resource group 'kola-cluster-image-9fd5883bba' is in deprovisioning state and cannot perform this operation."
    }
  }
  --------------------------------------------------------------------------------
  ```


</details>


🟢 ok **cl.ignition.v1.noop**; Succeeded: azure (2); Failed: azure (1)

<details>

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for azure, run 1</summary>

  ```
  Error: harness.go:646: Cluster failed starting machines: creating public ip: PUT https://management.azure.com/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kola-cluster-image-9fd5883bba/providers/Microsoft.Network/publicIPAddresses/ip-209d00dfde
  --------------------------------------------------------------------------------
  RESPONSE 409: 409 Conflict
  ERROR CODE: ResourceGroupBeingDeleted
  --------------------------------------------------------------------------------
  {
    "error": {
      "code": "ResourceGroupBeingDeleted",
      "message": "The resource group 'kola-cluster-image-9fd5883bba' is in deprovisioning state and cannot perform this operation."
    }
  }
  --------------------------------------------------------------------------------
  ```


</details>


🟢 ok **cl.ignition.v1.once**; Succeeded: azure (2); Failed: azure (1)

<details>

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for azure, run 1</summary>

  ```
  Error: harness.go:646: Cluster failed starting machines: creating public ip: PUT https://management.azure.com/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kola-cluster-image-9fd5883bba/providers/Microsoft.Network/publicIPAddresses/ip-0ce7d25342
  --------------------------------------------------------------------------------
  RESPONSE 409: 409 Conflict
  ERROR CODE: ResourceGroupBeingDeleted
  --------------------------------------------------------------------------------
  {
    "error": {
      "code": "ResourceGroupBeingDeleted",
      "message": "The resource group 'kola-cluster-image-9fd5883bba' is in deprovisioning state and cannot perform this operation."
    }
  }
  --------------------------------------------------------------------------------
  ```


</details>


🟢 ok **cl.ignition.v1.users**; Succeeded: azure (1)

🟢 ok **cl.ignition.v1.xfsroot**; Succeeded: azure (2); Failed: azure (1)

<details>

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for azure, run 1</summary>

  ```
  Error: harness.go:646: Cluster failed starting machines: creating public ip: PUT https://management.azure.com/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kola-cluster-image-9fd5883bba/providers/Microsoft.Network/publicIPAddresses/ip-727b7fc6ee
  --------------------------------------------------------------------------------
  RESPONSE 409: 409 Conflict
  ERROR CODE: ResourceGroupBeingDeleted
  --------------------------------------------------------------------------------
  {
    "error": {
      "code": "ResourceGroupBeingDeleted",
      "message": "The resource group 'kola-cluster-image-9fd5883bba' is in deprovisioning state and cannot perform this operation."
    }
  }
  --------------------------------------------------------------------------------
  ```


</details>


🟢 ok **cl.ignition.v2.btrfsroot**; Succeeded: azure (2); Failed: azure (1)

<details>

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for azure, run 1</summary>

  ```
  Error: harness.go:646: Cluster failed starting machines: creating public ip: PUT https://management.azure.com/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kola-cluster-image-9fd5883bba/providers/Microsoft.Network/publicIPAddresses/ip-456882cb2a
  --------------------------------------------------------------------------------
  RESPONSE 409: 409 Conflict
  ERROR CODE: ResourceGroupBeingDeleted
  --------------------------------------------------------------------------------
  {
    "error": {
      "code": "ResourceGroupBeingDeleted",
      "message": "The resource group 'kola-cluster-image-9fd5883bba' is in deprovisioning state and cannot perform this operation."
    }
  }
  --------------------------------------------------------------------------------
  ```


</details>


🟢 ok **cl.ignition.v2.ext4root**; Succeeded: azure (2); Failed: azure (1)

<details>

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for azure, run 1</summary>

  ```
  Error: harness.go:646: Cluster failed starting machines: creating public ip: PUT https://management.azure.com/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kola-cluster-image-9fd5883bba/providers/Microsoft.Network/publicIPAddresses/ip-e4dfc1a968
  --------------------------------------------------------------------------------
  RESPONSE 409: 409 Conflict
  ERROR CODE: ResourceGroupBeingDeleted
  --------------------------------------------------------------------------------
  {
    "error": {
      "code": "ResourceGroupBeingDeleted",
      "message": "The resource group 'kola-cluster-image-9fd5883bba' is in deprovisioning state and cannot perform this operation."
    }
  }
  --------------------------------------------------------------------------------
  ```


</details>


🟢 ok **cl.ignition.v2.noop**; Succeeded: azure (1)

🟢 ok **cl.ignition.v2.users**; Succeeded: azure (2); Failed: azure (1)

<details>

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for azure, run 1</summary>

  ```
  Error: harness.go:646: Cluster failed starting machines: creating public ip: PUT https://management.azure.com/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kola-cluster-image-9fd5883bba/providers/Microsoft.Network/publicIPAddresses/ip-eda7ee5f04
  --------------------------------------------------------------------------------
  RESPONSE 409: 409 Conflict
  ERROR CODE: ResourceGroupBeingDeleted
  --------------------------------------------------------------------------------
  {
    "error": {
      "code": "ResourceGroupBeingDeleted",
      "message": "The resource group 'kola-cluster-image-9fd5883bba' is in deprovisioning state and cannot perform this operation."
    }
  }
  --------------------------------------------------------------------------------
  ```


</details>


🟢 ok **cl.ignition.v2.xfsroot**; Succeeded: azure (2); Failed: azure (1)

<details>

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for azure, run 1</summary>

  ```
  Error: harness.go:646: Cluster failed starting machines: creating public ip: PUT https://management.azure.com/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kola-cluster-image-9fd5883bba/providers/Microsoft.Network/publicIPAddresses/ip-9770e9bf1e
  --------------------------------------------------------------------------------
  RESPONSE 409: 409 Conflict
  ERROR CODE: ResourceGroupBeingDeleted
  --------------------------------------------------------------------------------
  {
    "error": {
      "code": "ResourceGroupBeingDeleted",
      "message": "The resource group 'kola-cluster-image-9fd5883bba' is in deprovisioning state and cannot perform this operation."
    }
  }
  --------------------------------------------------------------------------------
  ```


</details>


🟢 ok **cl.ignition.v2_1.ext4checkexisting**; Succeeded: azure (2); Failed: azure (1)

<details>

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for azure, run 1</summary>

  ```
  Error: harness.go:646: Cluster failed starting machines: creating public ip: PUT https://management.azure.com/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kola-cluster-image-9fd5883bba/providers/Microsoft.Network/publicIPAddresses/ip-c010603d03
  --------------------------------------------------------------------------------
  RESPONSE 409: 409 Conflict
  ERROR CODE: ResourceGroupBeingDeleted
  --------------------------------------------------------------------------------
  {
    "error": {
      "code": "ResourceGroupBeingDeleted",
      "message": "The resource group 'kola-cluster-image-9fd5883bba' is in deprovisioning state and cannot perform this operation."
    }
  }
  --------------------------------------------------------------------------------
  ```


</details>


🟢 ok **cl.ignition.v2_1.swap**; Succeeded: azure (1)

🟢 ok **cl.ignition.v2_1.vfat**; Succeeded: azure (2); Failed: azure (1)

<details>

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for azure, run 1</summary>

  ```
  Error: harness.go:646: Cluster failed starting machines: creating public ip: PUT https://management.azure.com/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kola-cluster-image-9fd5883bba/providers/Microsoft.Network/publicIPAddresses/ip-a2c0ae037a
  --------------------------------------------------------------------------------
  RESPONSE 409: 409 Conflict
  ERROR CODE: ResourceGroupBeingDeleted
  --------------------------------------------------------------------------------
  {
    "error": {
      "code": "ResourceGroupBeingDeleted",
      "message": "The resource group 'kola-cluster-image-9fd5883bba' is in deprovisioning state and cannot perform this operation."
    }
  }
  --------------------------------------------------------------------------------
  ```


</details>


🟢 ok **cl.internet**; Succeeded: azure (1)

🟢 ok **cl.locksmith.cluster**; Succeeded: azure (2); Failed: azure (1)

<details>

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for azure, run 1</summary>

  ```
  Error: harness.go:646: Cluster failed starting machines: PollUntilDone(ci-4081.3.6-n-de543dddb3): GET https://management.azure.com/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/providers/Microsoft.Compute/locations/northeurope/operations/dd14dbbe-2cbf-4df2-b14f-8ec2fa38ab5b
  --------------------------------------------------------------------------------
  RESPONSE 200: 200 OK
  ERROR CODE: OperationPreempted
  --------------------------------------------------------------------------------
  {
    "startTime": "2026-04-24T23:32:51.5539245+00:00",
    "endTime": "2026-04-24T23:33:33.5436583+00:00",
    "status": "Canceled",
    "error": {
      "code": "OperationPreempted",
      "message": "Operation execution has been preempted by a more recent operation."
    },
    "name": "dd14dbbe-2cbf-4df2-b14f-8ec2fa38ab5b"
  }
  --------------------------------------------------------------------------------
  ```


</details>


🟢 ok **cl.metadata.azure**; Succeeded: azure (2); Failed: azure (1)

<details>

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for azure, run 1</summary>

  ```
  Error: harness.go:646: Cluster failed starting machines: PollUntilDone(ci-4081.3.6-n-6c5b1a9603): GET https://management.azure.com/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/providers/Microsoft.Compute/locations/northeurope/operations/eae90a70-d05a-454a-a6e0-3cfea1df3d9d
  --------------------------------------------------------------------------------
  RESPONSE 200: 200 OK
  ERROR CODE: OperationPreempted
  --------------------------------------------------------------------------------
  {
    "startTime": "2026-04-24T23:32:51.127317+00:00",
    "endTime": "2026-04-24T23:33:33.6535924+00:00",
    "status": "Canceled",
    "error": {
      "code": "OperationPreempted",
      "message": "Operation execution has been preempted by a more recent operation."
    },
    "name": "eae90a70-d05a-454a-a6e0-3cfea1df3d9d"
  }
  --------------------------------------------------------------------------------
  ```


</details>


🟢 ok **cl.network.initramfs.second-boot**; Succeeded: azure (1)

❌ not ok **cl.network.iptables**; Failed: azure (1, 2, 3, 4, 5)

<details>

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for azure, run 5</summary>

  ```
  Error: cluster.go:152: + sudo nft --json list ruleset | jq '.nftables[] | select(.rule) | .rule.expr[0].match.right'
  cluster.go:156: cmd sudo nft --json list ruleset | jq '.nftables[] | select(.rule) | .rule.expr[0].match.right' did not output 80
  ```

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for azure, run 4</summary>

  ```
  Error: cluster.go:152: + sudo nft --json list ruleset | jq '.nftables[] | select(.rule) | .rule.expr[0].match.right'
  cluster.go:156: cmd sudo nft --json list ruleset | jq '.nftables[] | select(.rule) | .rule.expr[0].match.right' did not output 80
  ```

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for azure, run 3</summary>

  ```
  Error: cluster.go:152: + sudo nft --json list ruleset | jq '.nftables[] | select(.rule) | .rule.expr[0].match.right'
  cluster.go:156: cmd sudo nft --json list ruleset | jq '.nftables[] | select(.rule) | .rule.expr[0].match.right' did not output 80
  ```

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for azure, run 2</summary>

  ```
  Error: cluster.go:152: + sudo nft --json list ruleset | jq '.nftables[] | select(.rule) | .rule.expr[0].match.right'
  cluster.go:156: cmd sudo nft --json list ruleset | jq '.nftables[] | select(.rule) | .rule.expr[0].match.right' did not output 80
  ```

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for azure, run 1</summary>

  ```
  Error: harness.go:646: Cluster failed starting machines: PollUntilDone(ci-4081.3.6-n-e2f6f89649): GET https://management.azure.com/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/providers/Microsoft.Compute/locations/northeurope/operations/b40fd4d3-318d-490a-b60d-537956a35956
  --------------------------------------------------------------------------------
  RESPONSE 200: 200 OK
  ERROR CODE: OperationPreempted
  --------------------------------------------------------------------------------
  {
    "startTime": "2026-04-24T23:32:26.1686291+00:00",
    "endTime": "2026-04-24T23:33:33.5436583+00:00",
    "status": "Canceled",
    "error": {
      "code": "OperationPreempted",
      "message": "Operation execution has been preempted by a more recent operation."
    },
    "name": "b40fd4d3-318d-490a-b60d-537956a35956"
  }
  --------------------------------------------------------------------------------
  ```


</details>


🟢 ok **cl.network.wireguard**; Succeeded: azure (1)

🟢 ok **cl.osreset.ignition-rerun**; Succeeded: azure (1)

🟢 ok **cl.overlay.cleanup**; Succeeded: azure (2); Failed: azure (1)

<details>

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for azure, run 1</summary>

  ```
  Error: harness.go:646: Cluster failed starting machines: creating public ip: PUT https://management.azure.com/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kola-cluster-image-9fd5883bba/providers/Microsoft.Network/publicIPAddresses/ip-b3e7936f3a
  --------------------------------------------------------------------------------
  RESPONSE 409: 409 Conflict
  ERROR CODE: ResourceGroupBeingDeleted
  --------------------------------------------------------------------------------
  {
    "error": {
      "code": "ResourceGroupBeingDeleted",
      "message": "The resource group 'kola-cluster-image-9fd5883bba' is in deprovisioning state and cannot perform this operation."
    }
  }
  --------------------------------------------------------------------------------
  ```


</details>


🟢 ok **cl.swap_activation**; Succeeded: azure (1)

🟢 ok **cl.toolbox.dnf-install**; Succeeded: azure (2); Failed: azure (1)

<details>

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for azure, run 1</summary>

  ```
  Error: harness.go:646: Cluster failed starting machines: creating public ip: PUT https://management.azure.com/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kola-cluster-image-9fd5883bba/providers/Microsoft.Network/publicIPAddresses/ip-987541d0f6
  --------------------------------------------------------------------------------
  RESPONSE 409: 409 Conflict
  ERROR CODE: ResourceGroupBeingDeleted
  --------------------------------------------------------------------------------
  {
    "error": {
      "code": "ResourceGroupBeingDeleted",
      "message": "The resource group 'kola-cluster-image-9fd5883bba' is in deprovisioning state and cannot perform this operation."
    }
  }
  --------------------------------------------------------------------------------
  ```


</details>


🟢 ok **cl.update.badverity**; Succeeded: azure (2); Failed: azure (1)

<details>

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for azure, run 1</summary>

  ```
      L1: " Error: _harness.go:646: Cluster failed starting machines: creating nic: PUT https://management.azure.com/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kola-cluster-image-9fd5883bba?/providers/Microsoft.Network/networkInterfaces/nic-02176ca18e"
      L2: "--------------------------------------------------------------------------------"
      L3: "RESPONSE 409: 409 Conflict"
      L4: "ERROR CODE: ResourceGroupBeingDeleted"
      L5: "--------------------------------------------------------------------------------"
      L6: "{"
      L7: "__error__: {"
      L8: "__code__: __ResourceGroupBeingDeleted__,"
      L9: "__message__: __The resource group _kola-cluster-image-9fd5883bba_ is in deprovisioning state and cannot perform this operation.__"
      L10: "}"
      L11: "}"
      L12: "--------------------------------------------------------------------------------_"
      L13: " "
  ```


</details>


🟢 ok **cl.update.reboot**; Succeeded: azure (1)

🟢 ok **cl.users.shells**; Succeeded: azure (1)

🟢 ok **cl.verity**; Succeeded: azure (3); Failed: azure (1, 2)

<details>

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for azure, run 2</summary>

  ```
      L1: " Error: _harness.go:646: Cluster failed starting machines: PUT https://management.azure.com/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kola-cluster-image-b94fddb2b9/providers/Mic?rosoft.Compute/virtualMachines/ci-4081.3.6-n-37f19bd511"
      L2: "--------------------------------------------------------------------------------"
      L3: "RESPONSE 409: 409 Conflict"
      L4: "ERROR CODE: OperationNotAllowed"
      L5: "--------------------------------------------------------------------------------"
      L6: "{"
      L7: "__error__: {"
      L8: "__code__: __OperationNotAllowed__,"
      L9: "__message__: __Operation could not be completed as it results in exceeding approved standardDSv4Family Cores quota. Additional details - Deployment Model: Resource Manager, Location: northeurope, Curr?ent Limit: 65, Current Usage: 64, Additional Required: 2, (Minimum) New Limit Required: 66. Setup Alerts when Quota reaches threshold. Learn more at https://aka.ms/quotamonitoringalerting . Submit a r?equest for Quota increase at https://aka.ms/ProdportalCRP/#blade/Microsoft_Azure_Capacity/UsageAndQuota.ReactView/Parameters/%7B%22subscriptionId%22:%220e46bd28-a80f-4d3a-8200-d9eb8d80cb2e%22,%22comma?nd%22:%22openQuotaApprovalBlade%22,%22quotas%22:[%7B%22location%22:%22northeurope%22,%22providerId%22:%22Microsoft.Compute%22,%22resourceName%22:%22standardDSv4Family%22,%22quotaRequest%22:%7B%22prope?rties%22:%7B%22limit%22:66,%22unit%22:%22Count%22,%22name%22:%7B%22value%22:%22standardDSv4Family%22%7D%7D%7D%7D]%7D by specifying parameters listed in the ???Details??? section for deployment to succ?eed. Please read more about quota limits at https://docs.microsoft.com/en-us/azure/azure-supportability/per-vm-quota-requests__"
      L10: "}"
      L11: "}"
      L12: "--------------------------------------------------------------------------------_"
      L13: " "
  ```

</details>

<details>

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for azure, run 1</summary>

  ```
      L1: " Error: _harness.go:646: Cluster failed starting machines: PollUntilDone(ci-4081.3.6-n-ea108ec1f8): GET https://management.azure.com/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/providers/Microso?ft.Compute/locations/northeurope/operations/f51612f3-7fbe-483e-a6fe-8cae135552f9"
      L2: "--------------------------------------------------------------------------------"
      L3: "RESPONSE 200: 200 OK"
      L4: "ERROR CODE: OperationPreempted"
      L5: "--------------------------------------------------------------------------------"
      L6: "{"
      L7: "__startTime__: __2026-04-24T23:33:05.7564893+00:00__,"
      L8: "__endTime__: __2026-04-24T23:33:33.7910001+00:00__,"
      L9: "__status__: __Canceled__,"
      L10: "__error__: {"
      L11: "__code__: __OperationPreempted__,"
      L12: "__message__: __Operation execution has been preempted by a more recent operation.__"
      L13: "},"
      L14: "__name__: __f51612f3-7fbe-483e-a6fe-8cae135552f9__"
      L15: "}"
      L16: "--------------------------------------------------------------------------------_"
      L17: " "
  ```


</details>


🟢 ok **coreos.auth.verify**; Succeeded: azure (1)

🟢 ok **coreos.ignition.groups**; Succeeded: azure (1)

🟢 ok **coreos.ignition.once**; Succeeded: azure (2); Failed: azure (1)

<details>

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for azure, run 1</summary>

  ```
      L1: " Error: _harness.go:646: Cluster failed starting machines: creating public ip: PUT https://management.azure.com/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kola-cluster-image-9fd5?883bba/providers/Microsoft.Network/publicIPAddresses/ip-8d5a61bb1d"
      L2: "--------------------------------------------------------------------------------"
      L3: "RESPONSE 409: 409 Conflict"
      L4: "ERROR CODE: ResourceGroupBeingDeleted"
      L5: "--------------------------------------------------------------------------------"
      L6: "{"
      L7: "__error__: {"
      L8: "__code__: __ResourceGroupBeingDeleted__,"
      L9: "__message__: __The resource group _kola-cluster-image-9fd5883bba_ is in deprovisioning state and cannot perform this operation.__"
      L10: "}"
      L11: "}"
      L12: "--------------------------------------------------------------------------------_"
      L13: " "
  ```


</details>


🟢 ok **coreos.ignition.resource.local**; Succeeded: azure (2); Failed: azure (1)

<details>

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for azure, run 1</summary>

  ```
      L1: " Error: _harness.go:646: Cluster failed starting machines: creating public ip: PUT https://management.azure.com/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kola-cluster-image-9fd5?883bba/providers/Microsoft.Network/publicIPAddresses/ip-c50d512dda"
      L2: "--------------------------------------------------------------------------------"
      L3: "RESPONSE 409: 409 Conflict"
      L4: "ERROR CODE: ResourceGroupBeingDeleted"
      L5: "--------------------------------------------------------------------------------"
      L6: "{"
      L7: "__error__: {"
      L8: "__code__: __ResourceGroupBeingDeleted__,"
      L9: "__message__: __The resource group _kola-cluster-image-9fd5883bba_ is in deprovisioning state and cannot perform this operation.__"
      L10: "}"
      L11: "}"
      L12: "--------------------------------------------------------------------------------_"
      L13: " "
  ```


</details>


🟢 ok **coreos.ignition.resource.remote**; Succeeded: azure (2); Failed: azure (1)

<details>

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for azure, run 1</summary>

  ```
      L1: " Error: _harness.go:646: Cluster failed starting machines: creating public ip: PUT https://management.azure.com/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kola-cluster-image-9fd5?883bba/providers/Microsoft.Network/publicIPAddresses/ip-39aaedc2fa"
      L2: "--------------------------------------------------------------------------------"
      L3: "RESPONSE 409: 409 Conflict"
      L4: "ERROR CODE: ResourceGroupBeingDeleted"
      L5: "--------------------------------------------------------------------------------"
      L6: "{"
      L7: "__error__: {"
      L8: "__code__: __ResourceGroupBeingDeleted__,"
      L9: "__message__: __The resource group _kola-cluster-image-9fd5883bba_ is in deprovisioning state and cannot perform this operation.__"
      L10: "}"
      L11: "}"
      L12: "--------------------------------------------------------------------------------_"
      L13: " "
  ```


</details>


🟢 ok **coreos.ignition.security.tls**; Succeeded: azure (1)

❌ not ok **coreos.ignition.ssh.key**; Failed: azure (1, 2, 3, 4, 5)

<details>

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for azure, run 5</summary>

  ```
      L1: " Error: _harness.go:646: Cluster failed starting machines: machine __ci-4081.3.6-n-400e960a26__ failed to start: ssh journalctl failed: time limit exceeded: ssh: handshake failed: ssh: unable to authen?ticate, attempted methods [none publickey], no supported methods remain_"
      L2: " "
  ```

</details>

<details>

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for azure, run 4</summary>

  ```
      L1: " Error: _harness.go:646: Cluster failed starting machines: machine __ci-4081.3.6-n-b0c4bed929__ failed to start: ssh journalctl failed: time limit exceeded: dial tcp 104.46.14.188:22: i/o timeout_"
      L2: " "
  ```

</details>

<details>

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for azure, run 3</summary>

  ```
      L1: " Error: _harness.go:646: Cluster failed starting machines: machine __ci-4081.3.6-n-ca82212820__ failed to start: ssh journalctl failed: time limit exceeded: ssh: handshake failed: ssh: unable to authen?ticate, attempted methods [none publickey], no supported methods remain_"
      L2: " "
  ```

</details>

<details>

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for azure, run 2</summary>

  ```
      L1: " Error: _harness.go:646: Cluster failed starting machines: machine __ci-4081.3.6-n-ae2c12a044__ failed to start: ssh journalctl failed: time limit exceeded: ssh: handshake failed: ssh: unable to authen?ticate, attempted methods [none publickey], no supported methods remain_"
      L2: " "
  ```

</details>

<details>

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for azure, run 1</summary>

  ```
      L1: " Error: _harness.go:646: Cluster failed starting machines: creating public ip: PUT https://management.azure.com/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kola-cluster-image-9fd5?883bba/providers/Microsoft.Network/publicIPAddresses/ip-b2e2230eaf"
      L2: "--------------------------------------------------------------------------------"
      L3: "RESPONSE 409: 409 Conflict"
      L4: "ERROR CODE: ResourceGroupBeingDeleted"
      L5: "--------------------------------------------------------------------------------"
      L6: "{"
      L7: "__error__: {"
      L8: "__code__: __ResourceGroupBeingDeleted__,"
      L9: "__message__: __The resource group _kola-cluster-image-9fd5883bba_ is in deprovisioning state and cannot perform this operation.__"
      L10: "}"
      L11: "}"
      L12: "--------------------------------------------------------------------------------_"
      L13: " "
  ```


</details>


🟢 ok **coreos.ignition.systemd.enable-service**; Succeeded: azure (2); Failed: azure (1)

<details>

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for azure, run 1</summary>

  ```
      L1: " Error: _harness.go:646: Cluster failed starting machines: creating public ip: PUT https://management.azure.com/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kola-cluster-image-9fd5?883bba/providers/Microsoft.Network/publicIPAddresses/ip-06833f7850"
      L2: "--------------------------------------------------------------------------------"
      L3: "RESPONSE 409: 409 Conflict"
      L4: "ERROR CODE: ResourceGroupBeingDeleted"
      L5: "--------------------------------------------------------------------------------"
      L6: "{"
      L7: "__error__: {"
      L8: "__code__: __ResourceGroupBeingDeleted__,"
      L9: "__message__: __The resource group _kola-cluster-image-9fd5883bba_ is in deprovisioning state and cannot perform this operation.__"
      L10: "}"
      L11: "}"
      L12: "--------------------------------------------------------------------------------_"
      L13: " "
  ```


</details>


🟢 ok **coreos.locksmith.reboot**; Succeeded: azure (2); Failed: azure (1)

<details>

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for azure, run 1</summary>

  ```
      L1: " Error: _locksmith.go:141: failed to check rebooted machine: ssh unreachable or system not ready: context deadline exceeded_"
      L2: " "
  ```


</details>


🟢 ok **coreos.locksmith.tls**; Succeeded: azure (1)

🟢 ok **coreos.selinux.boolean**; Succeeded: azure (2); Failed: azure (1)

<details>

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for azure, run 1</summary>

  ```
      L1: " Error: _harness.go:646: Cluster failed starting machines: creating public ip: PUT https://management.azure.com/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kola-cluster-image-9fd5?883bba/providers/Microsoft.Network/publicIPAddresses/ip-e9fbc97bb2"
      L2: "--------------------------------------------------------------------------------"
      L3: "RESPONSE 409: 409 Conflict"
      L4: "ERROR CODE: ResourceGroupBeingDeleted"
      L5: "--------------------------------------------------------------------------------"
      L6: "{"
      L7: "__error__: {"
      L8: "__code__: __ResourceGroupBeingDeleted__,"
      L9: "__message__: __The resource group _kola-cluster-image-9fd5883bba_ is in deprovisioning state and cannot perform this operation.__"
      L10: "}"
      L11: "}"
      L12: "--------------------------------------------------------------------------------_"
      L13: " "
  ```


</details>


🟢 ok **coreos.selinux.enforce**; Succeeded: azure (1)

🟢 ok **coreos.tls.fetch-urls**; Succeeded: azure (1)

🟢 ok **coreos.update.badusr**; Succeeded: azure (3); Failed: azure (1, 2)

<details>

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for azure, run 2</summary>

  ```
      L1: " Error: _harness.go:646: Cluster failed starting machines: PUT https://management.azure.com/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kola-cluster-image-b94fddb2b9/providers/Mic?rosoft.Compute/virtualMachines/ci-4081.3.6-n-7a111a839b"
      L2: "--------------------------------------------------------------------------------"
      L3: "RESPONSE 409: 409 Conflict"
      L4: "ERROR CODE: OperationNotAllowed"
      L5: "--------------------------------------------------------------------------------"
      L6: "{"
      L7: "__error__: {"
      L8: "__code__: __OperationNotAllowed__,"
      L9: "__message__: __Operation could not be completed as it results in exceeding approved standardDSv4Family Cores quota. Additional details - Deployment Model: Resource Manager, Location: northeurope, Curr?ent Limit: 65, Current Usage: 64, Additional Required: 2, (Minimum) New Limit Required: 66. Setup Alerts when Quota reaches threshold. Learn more at https://aka.ms/quotamonitoringalerting . Submit a r?equest for Quota increase at https://aka.ms/ProdportalCRP/#blade/Microsoft_Azure_Capacity/UsageAndQuota.ReactView/Parameters/%7B%22subscriptionId%22:%220e46bd28-a80f-4d3a-8200-d9eb8d80cb2e%22,%22comma?nd%22:%22openQuotaApprovalBlade%22,%22quotas%22:[%7B%22location%22:%22northeurope%22,%22providerId%22:%22Microsoft.Compute%22,%22resourceName%22:%22standardDSv4Family%22,%22quotaRequest%22:%7B%22prope?rties%22:%7B%22limit%22:66,%22unit%22:%22Count%22,%22name%22:%7B%22value%22:%22standardDSv4Family%22%7D%7D%7D%7D]%7D by specifying parameters listed in the ???Details??? section for deployment to succ?eed. Please read more about quota limits at https://docs.microsoft.com/en-us/azure/azure-supportability/per-vm-quota-requests__"
      L10: "}"
      L11: "}"
      L12: "--------------------------------------------------------------------------------_"
      L13: " "
  ```

</details>

<details>

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for azure, run 1</summary>

  ```
      L1: " Error: _harness.go:646: Cluster failed starting machines: PollUntilDone(ci-4081.3.6-n-475e858fd9): GET https://management.azure.com/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/providers/Microso?ft.Compute/locations/northeurope/operations/1ab2daf9-5c2f-4b53-a247-6587c1dd55fe"
      L2: "--------------------------------------------------------------------------------"
      L3: "RESPONSE 200: 200 OK"
      L4: "ERROR CODE: OperationPreempted"
      L5: "--------------------------------------------------------------------------------"
      L6: "{"
      L7: "__startTime__: __2026-04-24T23:32:20.6885893+00:00__,"
      L8: "__endTime__: __2026-04-24T23:33:33.7910001+00:00__,"
      L9: "__status__: __Canceled__,"
      L10: "__error__: {"
      L11: "__code__: __OperationPreempted__,"
      L12: "__message__: __Operation execution has been preempted by a more recent operation.__"
      L13: "},"
      L14: "__name__: __1ab2daf9-5c2f-4b53-a247-6587c1dd55fe__"
      L15: "}"
      L16: "--------------------------------------------------------------------------------_"
      L17: " "
  ```


</details>


🟢 ok **docker.base**; Succeeded: azure (2); Failed: azure (1)

<details>

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for azure, run 1</summary>

  ```
      L1: " _docker.base/networks-reliably (52.12s)"
      L2: "cluster.go:125: #1 [internal] load build definition from Dockerfile"
      L3: "cluster.go:125: #1 transferring dockerfile: 108B done"
      L4: "cluster.go:125: #1 DONE 0.0s"
      L5: "cluster.go:125: "
      L6: "cluster.go:125: #2 [internal] load .dockerignore"
      L7: "cluster.go:125: #2 transferring context: 2B done"
      L8: "cluster.go:125: #2 DONE 0.1s"
      L9: "cluster.go:125: "
      L10: "cluster.go:125: #3 [internal] load build context"
      L11: "cluster.go:125: #3 transferring context: 4.04MB 0.0s done"
      L12: "cluster.go:125: #3 DONE 0.1s"
      L13: "cluster.go:125: "
      L14: "cluster.go:125: #4 [1/1] COPY . /"
      L15: "cluster.go:125: #4 DONE 0.0s"
      L16: "cluster.go:125: "
      L17: "cluster.go:125: #5 exporting to image"
      L18: "cluster.go:125: #5 exporting layers 0.1s done"
      L19: "cluster.go:125: #5 writing image sha256:69e33aa7989053e019c65e4c6420bc418f23d216ff8d41c38b49dfe1c43bc03d done"
      L20: "cluster.go:125: #5 naming to docker.io/library/ping 0.0s done"
      L21: "cluster.go:125: #5 DONE 0.1s"
      L22: "cluster.go:145: __for i in $(seq 1 100); do_n_t_techo -n ___$i: ____n_t_tdocker run --rm ping sh -c _ping -i 0.2 172.17.0.1 -w 1 _/dev/null && echo PASS || echo FAIL__n_tdone__ failed: output 1: PASS"
      L23: "2: PASS"
      L24: "3: PASS"
      L25: "4: PASS"
      L26: "5: PASS"
      L27: "6: PASS"
      L28: "7: PASS"
      L29: "8: PASS"
      L30: "9: PASS"
      L31: "10: PASS"
      L32: "11: PASS"
      L33: "12: PASS"
      L34: "13: PASS"
      L35: "14: PASS"
      L36: "15: PASS"
      L37: "16: PASS"
      L38: "17: PASS"
      L39: "18: PASS"
      L40: "19: PASS"
      L41: "20: PASS"
      L42: "21: PASS"
      L43: "22: PASS"
      L44: "23: PASS"
      L45: "24: PASS"
      L46: "25: PASS"
      L47: "26: PASS"
      L48: "27: PASS"
      L49: "28: PASS"
      L50: "29: PASS"
      L51: "30: PASS"
      L52: "31:, status wait: remote command exited without exit status or exit signal"
      L53: "--- FAIL: docker.base/user-no-caps (0.05s)"
      L54: "cluster.go:145: __tmpdir=$(mktemp -d); cd $tmpdir; echo -e ___FROM scratch__nCOPY . /___ _ Dockerfile;_n_t        b=$(which capsh sh grep cat ls); libs=$(sudo ldd $b | grep -o /lib_[^ ]*_ | sort -u);_?n_t        sudo rsync -av --relative --copy-links $b $libs ./;_n_t        sudo docker build -t captest .__ failed: output , status ssh: handshake failed: read tcp 10.200.1.4:33802-_20.223.150.222:22: ?read: connection reset by peer"
      L55: "--- FAIL: docker.base/ownership (0.05s)"
      L56: "cluster.go:145: __docker run --name ownership ghcr.io/flatcar/nginx stat -c ___%u/%g___ /etc/shadow__ failed: output , status ssh: handshake failed: read tcp 10.200.1.4:33816-_20.223.150.222:22: read:? connection reset by peer_"
      L57: " "
  ```


</details>


🟢 ok **docker.btrfs-storage**; Succeeded: azure (2); Failed: azure (1)

<details>

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for azure, run 1</summary>

  ```
      L1: " Error: _harness.go:646: Cluster failed starting machines: creating public ip: PUT https://management.azure.com/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kola-cluster-image-9fd5?883bba/providers/Microsoft.Network/publicIPAddresses/ip-90edf3f85b"
      L2: "--------------------------------------------------------------------------------"
      L3: "RESPONSE 409: 409 Conflict"
      L4: "ERROR CODE: ResourceGroupBeingDeleted"
      L5: "--------------------------------------------------------------------------------"
      L6: "{"
      L7: "__error__: {"
      L8: "__code__: __ResourceGroupBeingDeleted__,"
      L9: "__message__: __The resource group _kola-cluster-image-9fd5883bba_ is in deprovisioning state and cannot perform this operation.__"
      L10: "}"
      L11: "}"
      L12: "--------------------------------------------------------------------------------_"
      L13: " "
  ```


</details>


🟢 ok **docker.containerd-restart**; Succeeded: azure (2); Failed: azure (1)

<details>

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for azure, run 1</summary>

  ```
      L1: " Error: _harness.go:646: Cluster failed starting machines: creating public ip: PUT https://management.azure.com/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kola-cluster-image-9fd5?883bba/providers/Microsoft.Network/publicIPAddresses/ip-90d1b0a928"
      L2: "--------------------------------------------------------------------------------"
      L3: "RESPONSE 409: 409 Conflict"
      L4: "ERROR CODE: ResourceGroupBeingDeleted"
      L5: "--------------------------------------------------------------------------------"
      L6: "{"
      L7: "__error__: {"
      L8: "__code__: __ResourceGroupBeingDeleted__,"
      L9: "__message__: __The resource group _kola-cluster-image-9fd5883bba_ is in deprovisioning state and cannot perform this operation.__"
      L10: "}"
      L11: "}"
      L12: "--------------------------------------------------------------------------------_"
      L13: " "
  ```


</details>


🟢 ok **docker.enable-service.sysext**; Succeeded: azure (2); Failed: azure (1)

<details>

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for azure, run 1</summary>

  ```
      L1: " Error: _harness.go:646: Cluster failed starting machines: waiting for machine to become active: GET https://management.azure.com/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kola-?cluster-image-9fd5883bba/providers/Microsoft.Compute/virtualMachines/ci-4081.3.6-n-0f610b0e3e"
      L2: "--------------------------------------------------------------------------------"
      L3: "RESPONSE 404: 404 Not Found"
      L4: "ERROR CODE: NotFound"
      L5: "--------------------------------------------------------------------------------"
      L6: "{"
      L7: "__error__: {"
      L8: "__code__: __NotFound__,"
      L9: "__message__: __The entity was not found in this Azure location.__"
      L10: "}"
      L11: "}"
      L12: "--------------------------------------------------------------------------------_"
      L13: " "
  ```


</details>


🟢 ok **docker.lib-coreos-dockerd-compat**; Succeeded: azure (2); Failed: azure (1)

<details>

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for azure, run 1</summary>

  ```
      L1: " Error: _harness.go:646: Cluster failed starting machines: PollUntilDone(ci-4081.3.6-n-42506c0512): GET https://management.azure.com/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/providers/Microso?ft.Compute/locations/northeurope/operations/0ea5c857-2561-4bc0-ac20-02a79968476f"
      L2: "--------------------------------------------------------------------------------"
      L3: "RESPONSE 200: 200 OK"
      L4: "ERROR CODE: OperationPreempted"
      L5: "--------------------------------------------------------------------------------"
      L6: "{"
      L7: "__startTime__: __2026-04-24T23:32:52.702866+00:00__,"
      L8: "__endTime__: __2026-04-24T23:33:40.3628852+00:00__,"
      L9: "__status__: __Canceled__,"
      L10: "__error__: {"
      L11: "__code__: __OperationPreempted__,"
      L12: "__message__: __Operation execution has been preempted by a more recent operation.__"
      L13: "},"
      L14: "__name__: __0ea5c857-2561-4bc0-ac20-02a79968476f__"
      L15: "}"
      L16: "--------------------------------------------------------------------------------_"
      L17: " "
  ```


</details>


🟢 ok **docker.network-openbsd-nc**; Succeeded: azure (2); Failed: azure (1)

<details>

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for azure, run 1</summary>

  ```
      L1: " Error: _harness.go:646: Cluster failed starting machines: creating public ip: PUT https://management.azure.com/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kola-cluster-image-9fd5?883bba/providers/Microsoft.Network/publicIPAddresses/ip-1e7845b6ee"
      L2: "--------------------------------------------------------------------------------"
      L3: "RESPONSE 409: 409 Conflict"
      L4: "ERROR CODE: ResourceGroupBeingDeleted"
      L5: "--------------------------------------------------------------------------------"
      L6: "{"
      L7: "__error__: {"
      L8: "__code__: __ResourceGroupBeingDeleted__,"
      L9: "__message__: __The resource group _kola-cluster-image-9fd5883bba_ is in deprovisioning state and cannot perform this operation.__"
      L10: "}"
      L11: "}"
      L12: "--------------------------------------------------------------------------------_"
      L13: " "
  ```


</details>


🟢 ok **docker.selinux**; Succeeded: azure (1)

🟢 ok **docker.userns**; Succeeded: azure (2); Failed: azure (1)

<details>

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for azure, run 1</summary>

  ```
      L1: " Error: _harness.go:646: Cluster failed starting machines: creating public ip: PUT https://management.azure.com/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kola-cluster-image-9fd5?883bba/providers/Microsoft.Network/publicIPAddresses/ip-edc1cc55d5"
      L2: "--------------------------------------------------------------------------------"
      L3: "RESPONSE 409: 409 Conflict"
      L4: "ERROR CODE: ResourceGroupBeingDeleted"
      L5: "--------------------------------------------------------------------------------"
      L6: "{"
      L7: "__error__: {"
      L8: "__code__: __ResourceGroupBeingDeleted__,"
      L9: "__message__: __The resource group _kola-cluster-image-9fd5883bba_ is in deprovisioning state and cannot perform this operation.__"
      L10: "}"
      L11: "}"
      L12: "--------------------------------------------------------------------------------_"
      L13: " "
  ```


</details>


❌ not ok **extra-test.[Standard_NC6s_v3].cl.internet**; Failed: azure (1, 2, 3, 4, 5)

<details>

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for azure, run 5</summary>

  ```
      L1: " Error: _harness.go:646: Cluster failed starting machines: PUT https://management.azure.com/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kola-cluster-image-f2e738b3b2/providers/Mic?rosoft.Compute/virtualMachines/ci-4081.3.6-n-0fd9ea5453"
      L2: "--------------------------------------------------------------------------------"
      L3: "RESPONSE 409: 409 Conflict"
      L4: "ERROR CODE: OperationNotAllowed"
      L5: "--------------------------------------------------------------------------------"
      L6: "{"
      L7: "__error__: {"
      L8: "__code__: __OperationNotAllowed__,"
      L9: "__message__: __Operation could not be completed as it results in exceeding approved standardNCSv3Family Cores quota. Additional details - Deployment Model: Resource Manager, Location: northeurope, Cur?rent Limit: 0, Current Usage: 0, Additional Required: 6, (Minimum) New Limit Required: 6. Setup Alerts when Quota reaches threshold. Learn more at https://aka.ms/quotamonitoringalerting . Submit a req?uest for Quota increase at https://aka.ms/ProdportalCRP/#blade/Microsoft_Azure_Capacity/UsageAndQuota.ReactView/Parameters/%7B%22subscriptionId%22:%220e46bd28-a80f-4d3a-8200-d9eb8d80cb2e%22,%22command?%22:%22openQuotaApprovalBlade%22,%22quotas%22:[%7B%22location%22:%22northeurope%22,%22providerId%22:%22Microsoft.Compute%22,%22resourceName%22:%22standardNCSv3Family%22,%22quotaRequest%22:%7B%22proper?ties%22:%7B%22limit%22:6,%22unit%22:%22Count%22,%22name%22:%7B%22value%22:%22standardNCSv3Family%22%7D%7D%7D%7D]%7D by specifying parameters listed in the ???Details??? section for deployment to succe?ed. Please read more about quota limits at https://docs.microsoft.com/en-us/azure/azure-supportability/per-vm-quota-requests__"
      L10: "}"
      L11: "}"
      L12: "--------------------------------------------------------------------------------_"
      L13: " "
  ```

</details>

<details>

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for azure, run 4</summary>

  ```
      L1: " Error: _harness.go:646: Cluster failed starting machines: PUT https://management.azure.com/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kola-cluster-image-b56b597dcf/providers/Mic?rosoft.Compute/virtualMachines/ci-4081.3.6-n-530d997c88"
      L2: "--------------------------------------------------------------------------------"
      L3: "RESPONSE 409: 409 Conflict"
      L4: "ERROR CODE: OperationNotAllowed"
      L5: "--------------------------------------------------------------------------------"
      L6: "{"
      L7: "__error__: {"
      L8: "__code__: __OperationNotAllowed__,"
      L9: "__message__: __Operation could not be completed as it results in exceeding approved standardNCSv3Family Cores quota. Additional details - Deployment Model: Resource Manager, Location: northeurope, Cur?rent Limit: 0, Current Usage: 0, Additional Required: 6, (Minimum) New Limit Required: 6. Setup Alerts when Quota reaches threshold. Learn more at https://aka.ms/quotamonitoringalerting . Submit a req?uest for Quota increase at https://aka.ms/ProdportalCRP/#blade/Microsoft_Azure_Capacity/UsageAndQuota.ReactView/Parameters/%7B%22subscriptionId%22:%220e46bd28-a80f-4d3a-8200-d9eb8d80cb2e%22,%22command?%22:%22openQuotaApprovalBlade%22,%22quotas%22:[%7B%22location%22:%22northeurope%22,%22providerId%22:%22Microsoft.Compute%22,%22resourceName%22:%22standardNCSv3Family%22,%22quotaRequest%22:%7B%22proper?ties%22:%7B%22limit%22:6,%22unit%22:%22Count%22,%22name%22:%7B%22value%22:%22standardNCSv3Family%22%7D%7D%7D%7D]%7D by specifying parameters listed in the ???Details??? section for deployment to succe?ed. Please read more about quota limits at https://docs.microsoft.com/en-us/azure/azure-supportability/per-vm-quota-requests__"
      L10: "}"
      L11: "}"
      L12: "--------------------------------------------------------------------------------_"
      L13: " "
  ```

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for azure, run 3</summary>

  ```
      L1: " Error: _harness.go:646: Cluster failed starting machines: PUT https://management.azure.com/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kola-cluster-image-752cee28f7/providers/Microsoft.Compute/virtualMachines/ci-4081.3.6-n-77d8e9f51c"
      L2: "--------------------------------------------------------------------------------"
      L3: "RESPONSE 409: 409 Conflict"
      L4: "ERROR CODE: OperationNotAllowed"
      L5: "--------------------------------------------------------------------------------"
      L6: "{"
      L7: "__error__: {"
      L8: "__code__: __OperationNotAllowed__,"
      L9: "__message__: __Operation could not be completed as it results in exceeding approved standardNCSv3Family Cores quota. Additional details - Deployment Model: Resource Manager, Location: northeurope, Current Limit: 0, Current Usage: 0, Additional Required: 6, (Minimum) New Limit Required: 6. Setup Alerts when Quota reaches threshold. Learn more at https://aka.ms/quotamonitoringalerting . Submit a request for Quota increase at https://aka.ms/ProdportalCRP/#blade/Microsoft_Azure_Capacity/UsageAndQuota.ReactView/Parameters/%7B%22subscriptionId%22:%220e46bd28-a80f-4d3a-8200-d9eb8d80cb2e%22,%22command%22:%22openQuotaApprovalBlade%22,%22quotas%22:[%7B%22location%22:%22northeurope%22,%22providerId%22:%22Microsoft.Compute%22,%22resourceName%22:%22standardNCSv3Family%22,%22quotaRequest%22:%7B%22properties%22:%7B%22limit%22:6,%22unit%22:%22Count%22,%22name%22:%7B%22value%22:%22standardNCSv3Family%22%7D%7D%7D%7D]%7D by specifying parameters listed in the _Details_ section for deployment to succeed. Please read more about quota limits at https://docs.microsoft.com/en-us/azure/azure-supportability/per-vm-quota-requests__"
      L10: "}"
      L11: "}"
      L12: "--------------------------------------------------------------------------------_"
      L13: " "
  ```

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for azure, run 2</summary>

  ```
      L1: " Error: _harness.go:646: Cluster failed starting machines: PUT https://management.azure.com/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kola-cluster-image-050487394b/providers/Microsoft.Compute/virtualMachines/ci-4081.3.6-n-d4b7705682"
      L2: "--------------------------------------------------------------------------------"
      L3: "RESPONSE 409: 409 Conflict"
      L4: "ERROR CODE: OperationNotAllowed"
      L5: "--------------------------------------------------------------------------------"
      L6: "{"
      L7: "__error__: {"
      L8: "__code__: __OperationNotAllowed__,"
      L9: "__message__: __Operation could not be completed as it results in exceeding approved standardNCSv3Family Cores quota. Additional details - Deployment Model: Resource Manager, Location: northeurope, Current Limit: 0, Current Usage: 0, Additional Required: 6, (Minimum) New Limit Required: 6. Setup Alerts when Quota reaches threshold. Learn more at https://aka.ms/quotamonitoringalerting . Submit a request for Quota increase at https://aka.ms/ProdportalCRP/#blade/Microsoft_Azure_Capacity/UsageAndQuota.ReactView/Parameters/%7B%22subscriptionId%22:%220e46bd28-a80f-4d3a-8200-d9eb8d80cb2e%22,%22command%22:%22openQuotaApprovalBlade%22,%22quotas%22:[%7B%22location%22:%22northeurope%22,%22providerId%22:%22Microsoft.Compute%22,%22resourceName%22:%22standardNCSv3Family%22,%22quotaRequest%22:%7B%22properties%22:%7B%22limit%22:6,%22unit%22:%22Count%22,%22name%22:%7B%22value%22:%22standardNCSv3Family%22%7D%7D%7D%7D]%7D by specifying parameters listed in the _Details_ section for deployment to succeed. Please read more about quota limits at https://docs.microsoft.com/en-us/azure/azure-supportability/per-vm-quota-requests__"
      L10: "}"
      L11: "}"
      L12: "--------------------------------------------------------------------------------_"
      L13: " "
  ```

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for azure, run 1</summary>

  ```
      L1: " Error: _harness.go:646: Cluster failed starting machines: PUT https://management.azure.com/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kola-cluster-image-771dfd7a34/providers/Microsoft.Compute/virtualMachines/ci-4081.3.6-n-4ecf9d5a1a"
      L2: "--------------------------------------------------------------------------------"
      L3: "RESPONSE 409: 409 Conflict"
      L4: "ERROR CODE: OperationNotAllowed"
      L5: "--------------------------------------------------------------------------------"
      L6: "{"
      L7: "__error__: {"
      L8: "__code__: __OperationNotAllowed__,"
      L9: "__message__: __Operation could not be completed as it results in exceeding approved standardNCSv3Family Cores quota. Additional details - Deployment Model: Resource Manager, Location: northeurope, Current Limit: 0, Current Usage: 0, Additional Required: 6, (Minimum) New Limit Required: 6. Setup Alerts when Quota reaches threshold. Learn more at https://aka.ms/quotamonitoringalerting . Submit a request for Quota increase at https://aka.ms/ProdportalCRP/#blade/Microsoft_Azure_Capacity/UsageAndQuota.ReactView/Parameters/%7B%22subscriptionId%22:%220e46bd28-a80f-4d3a-8200-d9eb8d80cb2e%22,%22command%22:%22openQuotaApprovalBlade%22,%22quotas%22:[%7B%22location%22:%22northeurope%22,%22providerId%22:%22Microsoft.Compute%22,%22resourceName%22:%22standardNCSv3Family%22,%22quotaRequest%22:%7B%22properties%22:%7B%22limit%22:6,%22unit%22:%22Count%22,%22name%22:%7B%22value%22:%22standardNCSv3Family%22%7D%7D%7D%7D]%7D by specifying parameters listed in the _Details_ section for deployment to succeed. Please read more about quota limits at https://docs.microsoft.com/en-us/azure/azure-supportability/per-vm-quota-requests__"
      L10: "}"
      L11: "}"
      L12: "--------------------------------------------------------------------------------_"
      L13: " "
  ```


</details>


❌ not ok **extra-test.[Standard_NC6s_v3].cl.misc.nvidia**; Failed: azure (1, 2, 3, 4, 5)

<details>

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for azure, run 5</summary>

  ```
      L1: " Error: _nvidia.go:192: PUT https://management.azure.com/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kola-cluster-image-f2e738b3b2/providers/Microsoft.Compute/virtualMachines/ci-4081.3.6-n-1d916312a9"
      L2: "--------------------------------------------------------------------------------"
      L3: "RESPONSE 409: 409 Conflict"
      L4: "ERROR CODE: OperationNotAllowed"
      L5: "--------------------------------------------------------------------------------"
      L6: "{"
      L7: "__error__: {"
      L8: "__code__: __OperationNotAllowed__,"
      L9: "__message__: __Operation could not be completed as it results in exceeding approved standardNCSv3Family Cores quota. Additional details - Deployment Model: Resource Manager, Location: northeurope, Current Limit: 0, Current Usage: 0, Additional Required: 6, (Minimum) New Limit Required: 6. Setup Alerts when Quota reaches threshold. Learn more at https://aka.ms/quotamonitoringalerting . Submit a request for Quota increase at https://aka.ms/ProdportalCRP/#blade/Microsoft_Azure_Capacity/UsageAndQuota.ReactView/Parameters/%7B%22subscriptionId%22:%220e46bd28-a80f-4d3a-8200-d9eb8d80cb2e%22,%22command%22:%22openQuotaApprovalBlade%22,%22quotas%22:[%7B%22location%22:%22northeurope%22,%22providerId%22:%22Microsoft.Compute%22,%22resourceName%22:%22standardNCSv3Family%22,%22quotaRequest%22:%7B%22properties%22:%7B%22limit%22:6,%22unit%22:%22Count%22,%22name%22:%7B%22value%22:%22standardNCSv3Family%22%7D%7D%7D%7D]%7D by specifying parameters listed in the _Details_ section for deployment to succeed. Please read more about quota limits at https://docs.microsoft.com/en-us/azure/azure-supportability/per-vm-quota-requests__"
      L10: "}"
      L11: "}"
      L12: "--------------------------------------------------------------------------------_"
      L13: " "
  ```

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for azure, run 4</summary>

  ```
      L1: " Error: _nvidia.go:192: PUT https://management.azure.com/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kola-cluster-image-b56b597dcf/providers/Microsoft.Compute/virtualMachines/ci-4081.3.6-n-a72a9e9dbe"
      L2: "--------------------------------------------------------------------------------"
      L3: "RESPONSE 409: 409 Conflict"
      L4: "ERROR CODE: OperationNotAllowed"
      L5: "--------------------------------------------------------------------------------"
      L6: "{"
      L7: "__error__: {"
      L8: "__code__: __OperationNotAllowed__,"
      L9: "__message__: __Operation could not be completed as it results in exceeding approved standardNCSv3Family Cores quota. Additional details - Deployment Model: Resource Manager, Location: northeurope, Current Limit: 0, Current Usage: 0, Additional Required: 6, (Minimum) New Limit Required: 6. Setup Alerts when Quota reaches threshold. Learn more at https://aka.ms/quotamonitoringalerting . Submit a request for Quota increase at https://aka.ms/ProdportalCRP/#blade/Microsoft_Azure_Capacity/UsageAndQuota.ReactView/Parameters/%7B%22subscriptionId%22:%220e46bd28-a80f-4d3a-8200-d9eb8d80cb2e%22,%22command%22:%22openQuotaApprovalBlade%22,%22quotas%22:[%7B%22location%22:%22northeurope%22,%22providerId%22:%22Microsoft.Compute%22,%22resourceName%22:%22standardNCSv3Family%22,%22quotaRequest%22:%7B%22properties%22:%7B%22limit%22:6,%22unit%22:%22Count%22,%22name%22:%7B%22value%22:%22standardNCSv3Family%22%7D%7D%7D%7D]%7D by specifying parameters listed in the _Details_ section for deployment to succeed. Please read more about quota limits at https://docs.microsoft.com/en-us/azure/azure-supportability/per-vm-quota-requests__"
      L10: "}"
      L11: "}"
      L12: "--------------------------------------------------------------------------------_"
      L13: " "
  ```

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for azure, run 3</summary>

  ```
      L1: " Error: _nvidia.go:192: PUT https://management.azure.com/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kola-cluster-image-752cee28f7/providers/Microsoft.Compute/virtualMachines/ci-4081.3.6-n-4198b0f963"
      L2: "--------------------------------------------------------------------------------"
      L3: "RESPONSE 409: 409 Conflict"
      L4: "ERROR CODE: OperationNotAllowed"
      L5: "--------------------------------------------------------------------------------"
      L6: "{"
      L7: "__error__: {"
      L8: "__code__: __OperationNotAllowed__,"
      L9: "__message__: __Operation could not be completed as it results in exceeding approved standardNCSv3Family Cores quota. Additional details - Deployment Model: Resource Manager, Location: northeurope, Current Limit: 0, Current Usage: 0, Additional Required: 6, (Minimum) New Limit Required: 6. Setup Alerts when Quota reaches threshold. Learn more at https://aka.ms/quotamonitoringalerting . Submit a request for Quota increase at https://aka.ms/ProdportalCRP/#blade/Microsoft_Azure_Capacity/UsageAndQuota.ReactView/Parameters/%7B%22subscriptionId%22:%220e46bd28-a80f-4d3a-8200-d9eb8d80cb2e%22,%22command%22:%22openQuotaApprovalBlade%22,%22quotas%22:[%7B%22location%22:%22northeurope%22,%22providerId%22:%22Microsoft.Compute%22,%22resourceName%22:%22standardNCSv3Family%22,%22quotaRequest%22:%7B%22properties%22:%7B%22limit%22:6,%22unit%22:%22Count%22,%22name%22:%7B%22value%22:%22standardNCSv3Family%22%7D%7D%7D%7D]%7D by specifying parameters listed in the _Details_ section for deployment to succeed. Please read more about quota limits at https://docs.microsoft.com/en-us/azure/azure-supportability/per-vm-quota-requests__"
      L10: "}"
      L11: "}"
      L12: "--------------------------------------------------------------------------------_"
      L13: " "
  ```

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for azure, run 2</summary>

  ```
      L1: " Error: _nvidia.go:192: PUT https://management.azure.com/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kola-cluster-image-050487394b/providers/Microsoft.Compute/virtualMachines/ci-4081.3.6-n-4f7f708fd9"
      L2: "--------------------------------------------------------------------------------"
      L3: "RESPONSE 409: 409 Conflict"
      L4: "ERROR CODE: OperationNotAllowed"
      L5: "--------------------------------------------------------------------------------"
      L6: "{"
      L7: "__error__: {"
      L8: "__code__: __OperationNotAllowed__,"
      L9: "__message__: __Operation could not be completed as it results in exceeding approved standardNCSv3Family Cores quota. Additional details - Deployment Model: Resource Manager, Location: northeurope, Current Limit: 0, Current Usage: 0, Additional Required: 6, (Minimum) New Limit Required: 6. Setup Alerts when Quota reaches threshold. Learn more at https://aka.ms/quotamonitoringalerting . Submit a request for Quota increase at https://aka.ms/ProdportalCRP/#blade/Microsoft_Azure_Capacity/UsageAndQuota.ReactView/Parameters/%7B%22subscriptionId%22:%220e46bd28-a80f-4d3a-8200-d9eb8d80cb2e%22,%22command%22:%22openQuotaApprovalBlade%22,%22quotas%22:[%7B%22location%22:%22northeurope%22,%22providerId%22:%22Microsoft.Compute%22,%22resourceName%22:%22standardNCSv3Family%22,%22quotaRequest%22:%7B%22properties%22:%7B%22limit%22:6,%22unit%22:%22Count%22,%22name%22:%7B%22value%22:%22standardNCSv3Family%22%7D%7D%7D%7D]%7D by specifying parameters listed in the _Details_ section for deployment to succeed. Please read more about quota limits at https://docs.microsoft.com/en-us/azure/azure-supportability/per-vm-quota-requests__"
      L10: "}"
      L11: "}"
      L12: "--------------------------------------------------------------------------------_"
      L13: " "
  ```

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for azure, run 1</summary>

  ```
      L1: " Error: _nvidia.go:192: PUT https://management.azure.com/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kola-cluster-image-771dfd7a34/providers/Microsoft.Compute/virtualMachines/ci-4081.3.6-n-92d76b5054"
      L2: "--------------------------------------------------------------------------------"
      L3: "RESPONSE 409: 409 Conflict"
      L4: "ERROR CODE: OperationNotAllowed"
      L5: "--------------------------------------------------------------------------------"
      L6: "{"
      L7: "__error__: {"
      L8: "__code__: __OperationNotAllowed__,"
      L9: "__message__: __Operation could not be completed as it results in exceeding approved standardNCSv3Family Cores quota. Additional details - Deployment Model: Resource Manager, Location: northeurope, Current Limit: 0, Current Usage: 0, Additional Required: 6, (Minimum) New Limit Required: 6. Setup Alerts when Quota reaches threshold. Learn more at https://aka.ms/quotamonitoringalerting . Submit a request for Quota increase at https://aka.ms/ProdportalCRP/#blade/Microsoft_Azure_Capacity/UsageAndQuota.ReactView/Parameters/%7B%22subscriptionId%22:%220e46bd28-a80f-4d3a-8200-d9eb8d80cb2e%22,%22command%22:%22openQuotaApprovalBlade%22,%22quotas%22:[%7B%22location%22:%22northeurope%22,%22providerId%22:%22Microsoft.Compute%22,%22resourceName%22:%22standardNCSv3Family%22,%22quotaRequest%22:%7B%22properties%22:%7B%22limit%22:6,%22unit%22:%22Count%22,%22name%22:%7B%22value%22:%22standardNCSv3Family%22%7D%7D%7D%7D]%7D by specifying parameters listed in the _Details_ section for deployment to succeed. Please read more about quota limits at https://docs.microsoft.com/en-us/azure/azure-supportability/per-vm-quota-requests__"
      L10: "}"
      L11: "}"
      L12: "--------------------------------------------------------------------------------_"
      L13: " "
  ```


</details>


🟢 ok **extra-test.[V1].cl.internet**; Succeeded: azure (1)

🟢 ok **kubeadm.v1.33.8.calico.base**; Succeeded: azure (2); Failed: azure (1)

<details>

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for azure, run 1</summary>

  ```
      L1: " Error: _cluster.go:125: I0424 23:33:14.343846    2388 version.go:261] remote version is much newer: v1.36.0; falling back to: stable-1.33"
      L2: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-apiserver:v1.33.11"
      L3: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-controller-manager:v1.33.11"
      L4: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-scheduler:v1.33.11"
      L5: "kubeadm.go:197: unable to setup cluster: unable to run master script: wait: remote command exited without exit status or exit signal_"
      L6: " "
  ```


</details>


🟢 ok **kubeadm.v1.33.8.calico.cgroupv1.base**; Succeeded: azure (2); Failed: azure (1)

<details>

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for azure, run 1</summary>

  ```
      L1: " Error: _kubeadm.go:197: unable to setup cluster: unable to create etcd node: creating public ip: PUT https://management.azure.com/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kola-cluster-image-9fd5883bba/providers/Microsoft.Network/publicIPAddresses/ip-eddf3b77d9"
      L2: "--------------------------------------------------------------------------------"
      L3: "RESPONSE 409: 409 Conflict"
      L4: "ERROR CODE: ResourceGroupBeingDeleted"
      L5: "--------------------------------------------------------------------------------"
      L6: "{"
      L7: "__error__: {"
      L8: "__code__: __ResourceGroupBeingDeleted__,"
      L9: "__message__: __The resource group _kola-cluster-image-9fd5883bba_ is in deprovisioning state and cannot perform this operation.__"
      L10: "}"
      L11: "}"
      L12: "--------------------------------------------------------------------------------_"
      L13: " "
  ```


</details>


🟢 ok **kubeadm.v1.33.8.cilium.base**; Succeeded: azure (3); Failed: azure (1, 2)

<details>

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for azure, run 2</summary>

  ```
      L1: " Error: _kubeadm.go:197: unable to setup cluster: unable to create etcd node: PUT https://management.azure.com/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kola-cluster-image-b94fddb2b9/providers/Microsoft.Compute/virtualMachines/ci-4081.3.6-n-de85244b61"
      L2: "--------------------------------------------------------------------------------"
      L3: "RESPONSE 409: 409 Conflict"
      L4: "ERROR CODE: OperationNotAllowed"
      L5: "--------------------------------------------------------------------------------"
      L6: "{"
      L7: "__error__: {"
      L8: "__code__: __OperationNotAllowed__,"
      L9: "__message__: __Operation could not be completed as it results in exceeding approved standardDSv4Family Cores quota. Additional details - Deployment Model: Resource Manager, Location: northeurope, Current Limit: 65, Current Usage: 64, Additional Required: 2, (Minimum) New Limit Required: 66. Setup Alerts when Quota reaches threshold. Learn more at https://aka.ms/quotamonitoringalerting . Submit a request for Quota increase at https://aka.ms/ProdportalCRP/#blade/Microsoft_Azure_Capacity/UsageAndQuota.ReactView/Parameters/%7B%22subscriptionId%22:%220e46bd28-a80f-4d3a-8200-d9eb8d80cb2e%22,%22command%22:%22openQuotaApprovalBlade%22,%22quotas%22:[%7B%22location%22:%22northeurope%22,%22providerId%22:%22Microsoft.Compute%22,%22resourceName%22:%22standardDSv4Family%22,%22quotaRequest%22:%7B%22properties%22:%7B%22limit%22:66,%22unit%22:%22Count%22,%22name%22:%7B%22value%22:%22standardDSv4Family%22%7D%7D%7D%7D]%7D by specifying parameters listed in the _Details_ section for deployment to succeed. Please read more about quota limits at https://docs.microsoft.com/en-us/azure/azure-supportability/per-vm-quota-requests__"
      L10: "}"
      L11: "}"
      L12: "--------------------------------------------------------------------------------_"
      L13: " "
  ```

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for azure, run 1</summary>

  ```
      L1: " Error: _kubeadm.go:197: unable to setup cluster: unable to create etcd node: creating public ip: PUT https://management.azure.com/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kola-cluster-image-9fd5883bba/providers/Microsoft.Network/publicIPAddresses/ip-8108c0fb48"
      L2: "--------------------------------------------------------------------------------"
      L3: "RESPONSE 409: 409 Conflict"
      L4: "ERROR CODE: ResourceGroupBeingDeleted"
      L5: "--------------------------------------------------------------------------------"
      L6: "{"
      L7: "__error__: {"
      L8: "__code__: __ResourceGroupBeingDeleted__,"
      L9: "__message__: __The resource group _kola-cluster-image-9fd5883bba_ is in deprovisioning state and cannot perform this operation.__"
      L10: "}"
      L11: "}"
      L12: "--------------------------------------------------------------------------------_"
      L13: " "
  ```


</details>


🟢 ok **kubeadm.v1.33.8.cilium.cgroupv1.base**; Succeeded: azure (3); Failed: azure (1, 2)

<details>

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for azure, run 2</summary>

  ```
      L1: " Error: _cluster.go:125: I0424 23:56:17.487552    2674 version.go:261] remote version is much newer: v1.36.0; falling back to: stable-1.33"
      L2: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-apiserver:v1.33.11"
      L3: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-controller-manager:v1.33.11"
      L4: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-scheduler:v1.33.11"
      L5: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-proxy:v1.33.11"
      L6: "cluster.go:125: [config/images] Pulled registry.k8s.io/coredns/coredns:v1.12.0"
      L7: "cluster.go:125: [config/images] Pulled registry.k8s.io/pause:3.10"
      L8: "cluster.go:125: [config/images] Pulled registry.k8s.io/etcd:3.5.24-0"
      L9: "cluster.go:125: I0424 23:56:30.257789    2894 version.go:261] remote version is much newer: v1.36.0; falling back to: stable-1.33"
      L10: "cluster.go:125: [init] Using Kubernetes version: v1.33.11"
      L11: "cluster.go:125: [preflight] Running pre-flight checks"
      L12: "cluster.go:125:  [WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2"
      L13: "cluster.go:125:  [WARNING Hostname]: hostname __ci-4081.3.6-n-a6922b52a8__ could not be reached"
      L14: "cluster.go:125:  [WARNING Hostname]: hostname __ci-4081.3.6-n-a6922b52a8__: lookup ci-4081.3.6-n-a6922b52a8 on 168.63.129.16:53: no such host"
      L15: "cluster.go:125:  [WARNING Service-Kubelet]: kubelet service is not enabled, please run _systemctl enable kubelet.service_"
      L16: "cluster.go:125: [preflight] Pulling images required for setting up a Kubernetes cluster"
      L17: "cluster.go:125: [preflight] This might take a minute or two, depending on the speed of your internet connection"
      L18: "cluster.go:125: [preflight] You can also perform this action beforehand using _kubeadm config images pull_"
      L19: "cluster.go:125: W0424 23:56:30.445053    2894 checks.go:843] detected that the sandbox image __registry.k8s.io/pause:3.8__ of the container runtime is inconsistent with that used by kubeadm. It is recommended to use __registry.k8s.io/pause:3.10__ as the CRI sandbox image."
      L20: "cluster.go:125: [certs] Using certificateDir folder __/etc/kubernetes/pki__"
      L21: "cluster.go:125: [certs] Generating __ca__ certificate and key"
      L22: "cluster.go:125: [certs] Generating __apiserver__ certificate and key"
      L23: "cluster.go:125: [certs] apiserver serving cert is signed for DNS names [ci-4081.3.6-n-a6922b52a8 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.0.0.30]"
      L24: "cluster.go:125: [certs] Generating __apiserver-kubelet-client__ certificate and key"
      L25: "cluster.go:125: [certs] Generating __front-proxy-ca__ certificate and key"
      L26: "cluster.go:125: [certs] Generating __front-proxy-client__ certificate and key"
      L27: "cluster.go:125: [certs] External etcd mode: Skipping etcd/ca certificate authority generation"
      L28: "cluster.go:125: [certs] External etcd mode: Skipping etcd/server certificate generation"
      L29: "cluster.go:125: [certs] External etcd mode: Skipping etcd/peer certificate generation"
      L30: "cluster.go:125: [certs] External etcd mode: Skipping etcd/healthcheck-client certificate generation"
      L31: "cluster.go:125: [certs] External etcd mode: Skipping apiserver-etcd-client certificate generation"
      L32: "cluster.go:125: [certs] Generating __sa__ key and public key"
      L33: "cluster.go:125: [kubeconfig] Using kubeconfig folder __/etc/kubernetes__"
      L34: "cluster.go:125: [kubeconfig] Writing __admin.conf__ kubeconfig file"
      L35: "cluster.go:125: [kubeconfig] Writing __super-admin.conf__ kubeconfig file"
      L36: "cluster.go:125: [kubeconfig] Writing __kubelet.conf__ kubeconfig file"
      L37: "cluster.go:125: [kubeconfig] Writing __controller-manager.conf__ kubeconfig file"
      L38: "cluster.go:125: [kubeconfig] Writing __scheduler.conf__ kubeconfig file"
      L39: "cluster.go:125: [control-plane] Using manifest folder __/etc/kubernetes/manifests__"
      L40: "cluster.go:125: [control-plane] Creating static Pod manifest for __kube-apiserver__"
      L41: "cluster.go:125: [control-plane] Creating static Pod manifest for __kube-controller-manager__"
      L42: "cluster.go:125: [control-plane] Creating static Pod manifest for __kube-scheduler__"
      L43: "cluster.go:125: [kubelet-start] Writing kubelet environment file with flags to file __/var/lib/kubelet/kubeadm-flags.env__"
      L44: "cluster.go:125: [kubelet-start] Writing kubelet configuration to file __/var/lib/kubelet/config.yaml__"
      L45: "cluster.go:125: [kubelet-start] Starting the kubelet"
      L46: "cluster.go:125: [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory __/etc/kubernetes/manifests__"
      L47: "cluster.go:125: [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s"
      L48: "cluster.go:125: [kubelet-check] The kubelet is healthy after 1.001774369s"
      L49: "cluster.go:125: [control-plane-check] Waiting for healthy control plane components. This can take up to 30m0s"
      L50: "cluster.go:125: [control-plane-check] Checking kube-apiserver at https://10.0.0.30:6443/livez"
      L51: "cluster.go:125: [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz"
      L52: "cluster.go:125: [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez"
      L53: "cluster.go:125: [control-plane-check] kube-controller-manager is healthy after 2.400173078s"
      L54: "cluster.go:125: [control-plane-check] kube-scheduler is healthy after 3.269334781s"
      L55: "cluster.go:125: [control-plane-check] kube-apiserver is healthy after 5.002447211s"
      L56: "cluster.go:125: [upload-config] Storing the configuration used in ConfigMap __kubeadm-config__ in the __kube-system__ Namespace"
      L57: "cluster.go:125: [kubelet] Creating a ConfigMap __kubelet-config__ in namespace kube-system with the configuration for the kubelets in the cluster"
      L58: "cluster.go:125: [upload-certs] Skipping phase. Please see --upload-certs"
      L59: "cluster.go:125: [mark-control-plane] Marking the node ci-4081.3.6-n-a6922b52a8 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]"
      L60: "cluster.go:125: [mark-control-plane] Marking the node ci-4081.3.6-n-a6922b52a8 as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]"
      L61: "cluster.go:125: [bootstrap-token] Using token: 1y67bk.mhdwgt15hry4wz9h"
      L62: "cluster.go:125: [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles"
      L63: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes"
      L64: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials"
      L65: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token"
      L66: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster"
      L67: "cluster.go:125: [bootstrap-token] Creating the __cluster-info__ ConfigMap in the __kube-public__ namespace"
      L68: "cluster.go:125: [kubelet-finalize] Updating __/etc/kubernetes/kubelet.conf__ to point to a rotatable kubelet client certificate and key"
      L69: "cluster.go:125: [addons] Applied essential addon: CoreDNS"
      L70: "cluster.go:125: [addons] Applied essential addon: kube-proxy"
      L71: "cluster.go:125: "
      L72: "cluster.go:125: Your Kubernetes control-plane has initialized successfully!"
      L73: "cluster.go:125: "
      L74: "cluster.go:125: To start using your cluster, you need to run the following as a regular user:"
      L75: "cluster.go:125: "
      L76: "cluster.go:125:   mkdir -p $HOME/.kube"
      L77: "cluster.go:125:   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config"
      L78: "cluster.go:125:   sudo chown $(id -u):$(id -g) $HOME/.kube/config"
      L79: "cluster.go:125: "
      L80: "cluster.go:125: Alternatively, if you are the root user, you can run:"
      L81: "cluster.go:125: "
      L82: "cluster.go:125:   export KUBECONFIG=/etc/kubernetes/admin.conf"
      L83: "cluster.go:125: "
      L84: "cluster.go:125: You should now deploy a pod network to the cluster."
      L85: "cluster.go:125: Run __kubectl apply -f [podnetwork].yaml__ with one of the options listed at:"
      L86: "cluster.go:125:   https://kubernetes.io/docs/concepts/cluster-administration/addons/"
      L87: "cluster.go:125: "
      L88: "cluster.go:125: Then you can join any number of worker nodes by running the following on each as root:"
      L89: "cluster.go:125: "
      L90: "cluster.go:125: kubeadm join 10.0.0.30:6443 --token 1y67bk.mhdwgt15hry4wz9h _"
      L91: "cluster.go:125:  --discovery-token-ca-cert-hash sha256:8a5a137e18c44b34b4f73bba3248c0b714d06efdfdada4ef427026033e811d13 "
      L92: "cluster.go:125: i  Using Cilium version 1.12.5"
      L93: "cluster.go:125: ? Auto-detected cluster name: kubernetes"
      L94: "cluster.go:125: ? Auto-detected datapath mode: tunnel"
      L95: "cluster.go:125: ? Auto-detected kube-proxy has been installed"
      L96: "cluster.go:125: i  helm template --namespace kube-system cilium cilium/cilium --version 1.12.5 --set cluster.id=0,cluster.name=kubernetes,encryption.nodeEncryption=false,extraConfig.cluster-pool-ipv4-?cidr=192.168.0.0/17,extraConfig.enable-endpoint-routes=true,kubeProxyReplacement=disabled,operator.replicas=1,serviceAccounts.cilium.name=cilium,serviceAccounts.operator.name=cilium-operator,tunnel=vx?lan"
      L97: "cluster.go:125: i  Storing helm values file in kube-system/cilium-cli-helm-values Secret"
      L98: "cluster.go:125: ? Created CA in secret cilium-ca"
      L99: "cluster.go:125: ? Generating certificates for Hubble..."
      L100: "cluster.go:125: ? Creating Service accounts..."
      L101: "cluster.go:125: ? Creating Cluster roles..."
      L102: "cluster.go:125: ? Creating ConfigMap for Cilium version 1.12.5..."
      L103: "cluster.go:125: i  Manual overwrite in ConfigMap: enable-endpoint-routes=true"
      L104: "cluster.go:125: i  Manual overwrite in ConfigMap: cluster-pool-ipv4-cidr=192.168.0.0/17"
      L105: "cluster.go:125: ? Creating Agent DaemonSet..."
      L106: "cluster.go:125: level=warning msg=__spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/mount-cgroup]: deprecated since v1.30; use the ___appArmorProfile___ field instead?__ subsys=klog"
      L107: "cluster.go:125: level=warning msg=__spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/apply-sysctl-overwrites]: deprecated since v1.30; use the ___appArmorProfile___ fi?eld instead__ subsys=klog"
      L108: "cluster.go:125: level=warning msg=__spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/clean-cilium-state]: deprecated since v1.30; use the ___appArmorProfile___ field i?nstead__ subsys=klog"
      L109: "cluster.go:125: level=warning msg=__spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/cilium-agent]: deprecated since v1.30; use the ___appArmorProfile___ field instead?__ subsys=klog"
      L110: "cluster.go:125: ? Creating Operator Deployment..."
      L111: "cluster.go:125: ? Waiting for Cilium to be installed and ready..."
      L112: "cluster.go:125: ? Cilium was successfully installed! Run _cilium status_ to view installation health"
      L113: "cluster.go:125: ?[33m    /??_"
      L114: "cluster.go:125: ?[36m /???[33m___/?[32m??_?[0m    Cilium:         ?[32mOK?[0m"
      L115: "cluster.go:125: ?[36m ___?[31m/??_?[32m__/?[0m    Operator:       ?[32mOK?[0m"
      L116: "cluster.go:125: ?[32m /???[31m___/?[35m??_?[0m    Hubble:         ?[36mdisabled?[0m"
      L117: "cluster.go:125: ?[32m ___?[34m/??_?[35m__/?[0m    ClusterMesh:    ?[36mdisabled?[0m"
      L118: "cluster.go:125: ?[34m    ___/"
      L119: "cluster.go:125: ?[0m"
      L120: "cluster.go:125: Deployment       cilium-operator    "
      L121: "cluster.go:125: DaemonSet        cilium             "
      L122: "cluster.go:125: Containers:      cilium             "
      L123: "cluster.go:125:                  cilium-operator    "
      L124: "cluster.go:125: Cluster Pods:    0/0 managed by Cilium"
      L125: "kubeadm.go:197: unable to setup cluster: unable to create worker node: PUT https://management.azure.com/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kola-cluster-image-b94fddb2b9/?providers/Microsoft.Compute/virtualMachines/ci-4081.3.6-n-1e44dbb4c3"
      L126: "--------------------------------------------------------------------------------"
      L127: "RESPONSE 409: 409 Conflict"
      L128: "ERROR CODE: OperationNotAllowed"
      L129: "--------------------------------------------------------------------------------"
      L130: "{"
      L131: "__error__: {"
      L132: "__code__: __OperationNotAllowed__,"
      L133: "__message__: __Operation could not be completed as it results in exceeding approved standardDSv4Family Cores quota. Additional details - Deployment Model: Resource Manager, Location: northeurope, Curr?ent Limit: 65, Current Usage: 64, Additional Required: 2, (Minimum) New Limit Required: 66. Setup Alerts when Quota reaches threshold. Learn more at https://aka.ms/quotamonitoringalerting . Submit a r?equest for Quota increase at https://aka.ms/ProdportalCRP/#blade/Microsoft_Azure_Capacity/UsageAndQuota.ReactView/Parameters/%7B%22subscriptionId%22:%220e46bd28-a80f-4d3a-8200-d9eb8d80cb2e%22,%22comma?nd%22:%22openQuotaApprovalBlade%22,%22quotas%22:[%7B%22location%22:%22northeurope%22,%22providerId%22:%22Microsoft.Compute%22,%22resourceName%22:%22standardDSv4Family%22,%22quotaRequest%22:%7B%22prope?rties%22:%7B%22limit%22:66,%22unit%22:%22Count%22,%22name%22:%7B%22value%22:%22standardDSv4Family%22%7D%7D%7D%7D]%7D by specifying parameters listed in the ???Details??? section for deployment to succ?eed. Please read more about quota limits at https://docs.microsoft.com/en-us/azure/azure-supportability/per-vm-quota-requests__"
      L134: "}"
      L135: "}"
      L136: "--------------------------------------------------------------------------------_"
      L137: " "
  ```

</details>

<details>

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for azure, run 1</summary>

  ```
      L1: " Error: _cluster.go:125: I0424 23:32:20.875527    2643 version.go:261] remote version is much newer: v1.36.0; falling back to: stable-1.33"
      L2: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-apiserver:v1.33.11"
      L3: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-controller-manager:v1.33.11"
      L4: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-scheduler:v1.33.11"
      L5: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-proxy:v1.33.11"
      L6: "cluster.go:125: [config/images] Pulled registry.k8s.io/coredns/coredns:v1.12.0"
      L7: "cluster.go:125: [config/images] Pulled registry.k8s.io/pause:3.10"
      L8: "cluster.go:125: [config/images] Pulled registry.k8s.io/etcd:3.5.24-0"
      L9: "cluster.go:125: I0424 23:32:34.874823    2865 version.go:261] remote version is much newer: v1.36.0; falling back to: stable-1.33"
      L10: "cluster.go:125: [init] Using Kubernetes version: v1.33.11"
      L11: "cluster.go:125: [preflight] Running pre-flight checks"
      L12: "cluster.go:125:  [WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2"
      L13: "cluster.go:125:  [WARNING Hostname]: hostname __ci-4081.3.6-n-495a71b429__ could not be reached"
      L14: "cluster.go:125:  [WARNING Hostname]: hostname __ci-4081.3.6-n-495a71b429__: lookup ci-4081.3.6-n-495a71b429 on 168.63.129.16:53: no such host"
      L15: "cluster.go:125:  [WARNING Service-Kubelet]: kubelet service is not enabled, please run _systemctl enable kubelet.service_"
      L16: "cluster.go:125: [preflight] Pulling images required for setting up a Kubernetes cluster"
      L17: "cluster.go:125: [preflight] This might take a minute or two, depending on the speed of your internet connection"
      L18: "cluster.go:125: [preflight] You can also perform this action beforehand using _kubeadm config images pull_"
      L19: "cluster.go:125: W0424 23:32:35.067401    2865 checks.go:843] detected that the sandbox image __registry.k8s.io/pause:3.8__ of the container runtime is inconsistent with that used by kubeadm.It is reco?mmended to use __registry.k8s.io/pause:3.10__ as the CRI sandbox image."
      L20: "cluster.go:125: [certs] Using certificateDir folder __/etc/kubernetes/pki__"
      L21: "cluster.go:125: [certs] Generating __ca__ certificate and key"
      L22: "cluster.go:125: [certs] Generating __apiserver__ certificate and key"
      L23: "cluster.go:125: [certs] apiserver serving cert is signed for DNS names [ci-4081.3.6-n-495a71b429 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.?96.0.1 10.0.0.27]"
      L24: "cluster.go:125: [certs] Generating __apiserver-kubelet-client__ certificate and key"
      L25: "cluster.go:125: [certs] Generating __front-proxy-ca__ certificate and key"
      L26: "cluster.go:125: [certs] Generating __front-proxy-client__ certificate and key"
      L27: "cluster.go:125: [certs] External etcd mode: Skipping etcd/ca certificate authority generation"
      L28: "cluster.go:125: [certs] External etcd mode: Skipping etcd/server certificate generation"
      L29: "cluster.go:125: [certs] External etcd mode: Skipping etcd/peer certificate generation"
      L30: "cluster.go:125: [certs] External etcd mode: Skipping etcd/healthcheck-client certificate generation"
      L31: "cluster.go:125: [certs] External etcd mode: Skipping apiserver-etcd-client certificate generation"
      L32: "cluster.go:125: [certs] Generating __sa__ key and public key"
      L33: "cluster.go:125: [kubeconfig] Using kubeconfig folder __/etc/kubernetes__"
      L34: "cluster.go:125: [kubeconfig] Writing __admin.conf__ kubeconfig file"
      L35: "cluster.go:125: [kubeconfig] Writing __super-admin.conf__ kubeconfig file"
      L36: "cluster.go:125: [kubeconfig] Writing __kubelet.conf__ kubeconfig file"
      L37: "cluster.go:125: [kubeconfig] Writing __controller-manager.conf__ kubeconfig file"
      L38: "cluster.go:125: [kubeconfig] Writing __scheduler.conf__ kubeconfig file"
      L39: "cluster.go:125: [control-plane] Using manifest folder __/etc/kubernetes/manifests__"
      L40: "cluster.go:125: [control-plane] Creating static Pod manifest for __kube-apiserver__"
      L41: "cluster.go:125: [control-plane] Creating static Pod manifest for __kube-controller-manager__"
      L42: "cluster.go:125: [control-plane] Creating static Pod manifest for __kube-scheduler__"
      L43: "cluster.go:125: [kubelet-start] Writing kubelet environment file with flags to file __/var/lib/kubelet/kubeadm-flags.env__"
      L44: "cluster.go:125: [kubelet-start] Writing kubelet configuration to file __/var/lib/kubelet/config.yaml__"
      L45: "cluster.go:125: [kubelet-start] Starting the kubelet"
      L46: "cluster.go:125: [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory __/etc/kubernetes/manifests__"
      L47: "cluster.go:125: [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s"
      L48: "cluster.go:125: [kubelet-check] The kubelet is healthy after 501.937545ms"
      L49: "cluster.go:125: [control-plane-check] Waiting for healthy control plane components. This can take up to 30m0s"
      L50: "cluster.go:125: [control-plane-check] Checking kube-apiserver at https://10.0.0.27:6443/livez"
      L51: "cluster.go:125: [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz"
      L52: "cluster.go:125: [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez"
      L53: "cluster.go:125: [control-plane-check] kube-controller-manager is healthy after 2.54856253s"
      L54: "cluster.go:125: [control-plane-check] kube-scheduler is healthy after 3.314370322s"
      L55: "cluster.go:125: [control-plane-check] kube-apiserver is healthy after 5.001777498s"
      L56: "cluster.go:125: [upload-config] Storing the configuration used in ConfigMap __kubeadm-config__ in the __kube-system__ Namespace"
      L57: "cluster.go:125: [kubelet] Creating a ConfigMap __kubelet-config__ in namespace kube-system with the configuration for the kubelets in the cluster"
      L58: "cluster.go:125: [upload-certs] Skipping phase. Please see --upload-certs"
      L59: "cluster.go:125: [mark-control-plane] Marking the node ci-4081.3.6-n-495a71b429 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-lo?ad-balancers]"
      L60: "cluster.go:125: [mark-control-plane] Marking the node ci-4081.3.6-n-495a71b429 as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]"
      L61: "cluster.go:125: [bootstrap-token] Using token: rono8k.usr39dgmdee4j2jm"
      L62: "cluster.go:125: [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles"
      L63: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes"
      L64: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials"
      L65: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token"
      L66: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster"
      L67: "cluster.go:125: [bootstrap-token] Creating the __cluster-info__ ConfigMap in the __kube-public__ namespace"
      L68: "cluster.go:125: [kubelet-finalize] Updating __/etc/kubernetes/kubelet.conf__ to point to a rotatable kubelet client certificate and key"
      L69: "cluster.go:125: [addons] Applied essential addon: CoreDNS"
      L70: "cluster.go:125: [addons] Applied essential addon: kube-proxy"
      L71: "cluster.go:125: "
      L72: "cluster.go:125: Your Kubernetes control-plane has initialized successfully!"
      L73: "cluster.go:125: "
      L74: "cluster.go:125: To start using your cluster, you need to run the following as a regular user:"
      L75: "cluster.go:125: "
      L76: "cluster.go:125:   mkdir -p $HOME/.kube"
      L77: "cluster.go:125:   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config"
      L78: "cluster.go:125:   sudo chown $(id -u):$(id -g) $HOME/.kube/config"
      L79: "cluster.go:125: "
      L80: "cluster.go:125: Alternatively, if you are the root user, you can run:"
      L81: "cluster.go:125: "
      L82: "cluster.go:125:   export KUBECONFIG=/etc/kubernetes/admin.conf"
      L83: "cluster.go:125: "
      L84: "cluster.go:125: You should now deploy a pod network to the cluster."
      L85: "cluster.go:125: Run __kubectl apply -f [podnetwork].yaml__ with one of the options listed at:"
      L86: "cluster.go:125:   https://kubernetes.io/docs/concepts/cluster-administration/addons/"
      L87: "cluster.go:125: "
      L88: "cluster.go:125: Then you can join any number of worker nodes by running the following on each as root:"
      L89: "cluster.go:125: "
      L90: "cluster.go:125: kubeadm join 10.0.0.27:6443 --token rono8k.usr39dgmdee4j2jm _"
      L91: "cluster.go:125:  --discovery-token-ca-cert-hash sha256:317b87545802334ee17d022683d7342dd48a78187263c5ff6c91b02792f5ce29 "
      L92: "cluster.go:125: i  Using Cilium version 1.12.5"
      L93: "cluster.go:125: ? Auto-detected cluster name: kubernetes"
      L94: "cluster.go:125: ? Auto-detected datapath mode: tunnel"
      L95: "cluster.go:125: ? Auto-detected kube-proxy has been installed"
      L96: "cluster.go:125: i  helm template --namespace kube-system cilium cilium/cilium --version 1.12.5 --set cluster.id=0,cluster.name=kubernetes,encryption.nodeEncryption=false,extraConfig.cluster-pool-ipv4-?cidr=192.168.0.0/17,extraConfig.enable-endpoint-routes=true,kubeProxyReplacement=disabled,operator.replicas=1,serviceAccounts.cilium.name=cilium,serviceAccounts.operator.name=cilium-operator,tunnel=vx?lan"
      L97: "cluster.go:125: i  Storing helm values file in kube-system/cilium-cli-helm-values Secret"
      L98: "cluster.go:125: ? Created CA in secret cilium-ca"
      L99: "cluster.go:125: ? Generating certificates for Hubble..."
      L100: "cluster.go:125: ? Creating Service accounts..."
      L101: "cluster.go:125: ? Creating Cluster roles..."
      L102: "cluster.go:125: ? Creating ConfigMap for Cilium version 1.12.5..."
      L103: "cluster.go:125: i  Manual overwrite in ConfigMap: enable-endpoint-routes=true"
      L104: "cluster.go:125: i  Manual overwrite in ConfigMap: cluster-pool-ipv4-cidr=192.168.0.0/17"
      L105: "cluster.go:125: ? Creating Agent DaemonSet..."
      L106: "cluster.go:125: level=warning msg=__spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/mount-cgroup]: deprecated since v1.30; use the ___appArmorProfile___ field instead?__ subsys=klog"
      L107: "cluster.go:125: level=warning msg=__spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/apply-sysctl-overwrites]: deprecated since v1.30; use the ___appArmorProfile___ fi?eld instead__ subsys=klog"
      L108: "cluster.go:125: level=warning msg=__spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/clean-cilium-state]: deprecated since v1.30; use the ___appArmorProfile___ field i?nstead__ subsys=klog"
      L109: "cluster.go:125: level=warning msg=__spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/cilium-agent]: deprecated since v1.30; use the ___appArmorProfile___ field instead?__ subsys=klog"
      L110: "cluster.go:125: ? Creating Operator Deployment..."
      L111: "cluster.go:125: ? Waiting for Cilium to be installed and ready..."
      L112: "cluster.go:125: ? Cilium was successfully installed! Run _cilium status_ to view installation health"
      L113: "cluster.go:125: ?[33m    /??_"
      L114: "cluster.go:125: ?[36m /???[33m___/?[32m??_?[0m    Cilium:         ?[32mOK?[0m"
      L115: "cluster.go:125: ?[36m ___?[31m/??_?[32m__/?[0m    Operator:       ?[32mOK?[0m"
      L116: "cluster.go:125: ?[32m /???[31m___/?[35m??_?[0m    Hubble:         ?[36mdisabled?[0m"
      L117: "cluster.go:125: ?[32m ___?[34m/??_?[35m__/?[0m    ClusterMesh:    ?[36mdisabled?[0m"
      L118: "cluster.go:125: ?[34m    ___/"
      L119: "cluster.go:125: ?[0m"
      L120: "cluster.go:125: Deployment       cilium-operator    "
      L121: "cluster.go:125: DaemonSet        cilium             "
      L122: "cluster.go:125: Containers:      cilium             "
      L123: "cluster.go:125:                  cilium-operator    "
      L124: "cluster.go:125: Cluster Pods:    0/0 managed by Cilium"
      L125: "kubeadm.go:197: unable to setup cluster: unable to create worker node: PollUntilDone(ci-4081.3.6-n-845761880b): GET https://management.azure.com/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/prov?iders/Microsoft.Compute/locations/northeurope/operations/b23e45a9-8b16-4414-a86e-7f9b8efbba6f"
      L126: "--------------------------------------------------------------------------------"
      L127: "RESPONSE 200: 200 OK"
      L128: "ERROR CODE: OperationPreempted"
      L129: "--------------------------------------------------------------------------------"
      L130: "{"
      L131: "__startTime__: __2026-04-24T23:32:50.9058504+00:00__,"
      L132: "__endTime__: __2026-04-24T23:33:33.4293582+00:00__,"
      L133: "__status__: __Canceled__,"
      L134: "__error__: {"
      L135: "__code__: __OperationPreempted__,"
      L136: "__message__: __Operation execution has been preempted by a more recent operation.__"
      L137: "},"
      L138: "__name__: __b23e45a9-8b16-4414-a86e-7f9b8efbba6f__"
      L139: "}"
      L140: "--------------------------------------------------------------------------------_"
      L141: " "
  ```


</details>


🟢 ok **kubeadm.v1.33.8.flannel.base**; Succeeded: azure (2); Failed: azure (1)

<details>

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for azure, run 1</summary>

  ```
      L1: " Error: _kubeadm.go:197: unable to setup cluster: unable to create etcd node: creating public ip: PUT https://management.azure.com/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kola?-cluster-image-9fd5883bba/providers/Microsoft.Network/publicIPAddresses/ip-eef5811f9a"
      L2: "--------------------------------------------------------------------------------"
      L3: "RESPONSE 409: 409 Conflict"
      L4: "ERROR CODE: ResourceGroupBeingDeleted"
      L5: "--------------------------------------------------------------------------------"
      L6: "{"
      L7: "__error__: {"
      L8: "__code__: __ResourceGroupBeingDeleted__,"
      L9: "__message__: __The resource group _kola-cluster-image-9fd5883bba_ is in deprovisioning state and cannot perform this operation.__"
      L10: "}"
      L11: "}"
      L12: "--------------------------------------------------------------------------------_"
      L13: " "
  ```


</details>


🟢 ok **kubeadm.v1.33.8.flannel.cgroupv1.base**; Succeeded: azure (3); Failed: azure (1, 2)

<details>

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for azure, run 2</summary>

  ```
      L1: " Error: _cluster.go:125: I0424 23:56:15.496531    2590 version.go:261] remote version is much newer: v1.36.0; falling back to: stable-1.33"
      L2: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-apiserver:v1.33.11"
      L3: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-controller-manager:v1.33.11"
      L4: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-scheduler:v1.33.11"
      L5: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-proxy:v1.33.11"
      L6: "cluster.go:125: [config/images] Pulled registry.k8s.io/coredns/coredns:v1.12.0"
      L7: "cluster.go:125: [config/images] Pulled registry.k8s.io/pause:3.10"
      L8: "cluster.go:125: [config/images] Pulled registry.k8s.io/etcd:3.5.24-0"
      L9: "cluster.go:125: I0424 23:56:29.378830    2810 version.go:261] remote version is much newer: v1.36.0; falling back to: stable-1.33"
      L10: "cluster.go:125: [init] Using Kubernetes version: v1.33.11"
      L11: "cluster.go:125: [preflight] Running pre-flight checks"
      L12: "cluster.go:125:  [WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2"
      L13: "cluster.go:125:  [WARNING Hostname]: hostname __ci-4081.3.6-n-5f57914aa6__ could not be reached"
      L14: "cluster.go:125:  [WARNING Hostname]: hostname __ci-4081.3.6-n-5f57914aa6__: lookup ci-4081.3.6-n-5f57914aa6 on 168.63.129.16:53: no such host"
      L15: "cluster.go:125:  [WARNING Service-Kubelet]: kubelet service is not enabled, please run _systemctl enable kubelet.service_"
      L16: "cluster.go:125: [preflight] Pulling images required for setting up a Kubernetes cluster"
      L17: "cluster.go:125: [preflight] This might take a minute or two, depending on the speed of your internet connection"
      L18: "cluster.go:125: [preflight] You can also perform this action beforehand using _kubeadm config images pull_"
      L19: "cluster.go:125: W0424 23:56:29.560773    2810 checks.go:843] detected that the sandbox image __registry.k8s.io/pause:3.8__ of the container runtime is inconsistent with that used by kubeadm.It is reco?mmended to use __registry.k8s.io/pause:3.10__ as the CRI sandbox image."
      L20: "cluster.go:125: [certs] Using certificateDir folder __/etc/kubernetes/pki__"
      L21: "cluster.go:125: [certs] Generating __ca__ certificate and key"
      L22: "cluster.go:125: [certs] Generating __apiserver__ certificate and key"
      L23: "cluster.go:125: [certs] apiserver serving cert is signed for DNS names [ci-4081.3.6-n-5f57914aa6 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.?96.0.1 10.0.0.26]"
      L24: "cluster.go:125: [certs] Generating __apiserver-kubelet-client__ certificate and key"
      L25: "cluster.go:125: [certs] Generating __front-proxy-ca__ certificate and key"
      L26: "cluster.go:125: [certs] Generating __front-proxy-client__ certificate and key"
      L27: "cluster.go:125: [certs] External etcd mode: Skipping etcd/ca certificate authority generation"
      L28: "cluster.go:125: [certs] External etcd mode: Skipping etcd/server certificate generation"
      L29: "cluster.go:125: [certs] External etcd mode: Skipping etcd/peer certificate generation"
      L30: "cluster.go:125: [certs] External etcd mode: Skipping etcd/healthcheck-client certificate generation"
      L31: "cluster.go:125: [certs] External etcd mode: Skipping apiserver-etcd-client certificate generation"
      L32: "cluster.go:125: [certs] Generating __sa__ key and public key"
      L33: "cluster.go:125: [kubeconfig] Using kubeconfig folder __/etc/kubernetes__"
      L34: "cluster.go:125: [kubeconfig] Writing __admin.conf__ kubeconfig file"
      L35: "cluster.go:125: [kubeconfig] Writing __super-admin.conf__ kubeconfig file"
      L36: "cluster.go:125: [kubeconfig] Writing __kubelet.conf__ kubeconfig file"
      L37: "cluster.go:125: [kubeconfig] Writing __controller-manager.conf__ kubeconfig file"
      L38: "cluster.go:125: [kubeconfig] Writing __scheduler.conf__ kubeconfig file"
      L39: "cluster.go:125: [control-plane] Using manifest folder __/etc/kubernetes/manifests__"
      L40: "cluster.go:125: [control-plane] Creating static Pod manifest for __kube-apiserver__"
      L41: "cluster.go:125: [control-plane] Creating static Pod manifest for __kube-controller-manager__"
      L42: "cluster.go:125: [control-plane] Creating static Pod manifest for __kube-scheduler__"
      L43: "cluster.go:125: [kubelet-start] Writing kubelet environment file with flags to file __/var/lib/kubelet/kubeadm-flags.env__"
      L44: "cluster.go:125: [kubelet-start] Writing kubelet configuration to file __/var/lib/kubelet/config.yaml__"
      L45: "cluster.go:125: [kubelet-start] Starting the kubelet"
      L46: "cluster.go:125: [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory __/etc/kubernetes/manifests__"
      L47: "cluster.go:125: [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s"
      L48: "cluster.go:125: [kubelet-check] The kubelet is healthy after 500.792039ms"
      L49: "cluster.go:125: [control-plane-check] Waiting for healthy control plane components. This can take up to 30m0s"
      L50: "cluster.go:125: [control-plane-check] Checking kube-apiserver at https://10.0.0.26:6443/livez"
      L51: "cluster.go:125: [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz"
      L52: "cluster.go:125: [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez"
      L53: "cluster.go:125: [control-plane-check] kube-controller-manager is healthy after 2.838772162s"
      L54: "cluster.go:125: [control-plane-check] kube-scheduler is healthy after 3.575988689s"
      L55: "cluster.go:125: [control-plane-check] kube-apiserver is healthy after 5.501450889s"
      L56: "cluster.go:125: [upload-config] Storing the configuration used in ConfigMap __kubeadm-config__ in the __kube-system__ Namespace"
      L57: "cluster.go:125: [kubelet] Creating a ConfigMap __kubelet-config__ in namespace kube-system with the configuration for the kubelets in the cluster"
      L58: "cluster.go:125: [upload-certs] Skipping phase. Please see --upload-certs"
      L59: "cluster.go:125: [mark-control-plane] Marking the node ci-4081.3.6-n-5f57914aa6 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-lo?ad-balancers]"
      L60: "cluster.go:125: [mark-control-plane] Marking the node ci-4081.3.6-n-5f57914aa6 as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]"
      L61: "cluster.go:125: [bootstrap-token] Using token: poh0zz.ez7th1oqxbt6es54"
      L62: "cluster.go:125: [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles"
      L63: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes"
      L64: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials"
      L65: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token"
      L66: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster"
      L67: "cluster.go:125: [bootstrap-token] Creating the __cluster-info__ ConfigMap in the __kube-public__ namespace"
      L68: "cluster.go:125: [kubelet-finalize] Updating __/etc/kubernetes/kubelet.conf__ to point to a rotatable kubelet client certificate and key"
      L69: "cluster.go:125: [addons] Applied essential addon: CoreDNS"
      L70: "cluster.go:125: [addons] Applied essential addon: kube-proxy"
      L71: "cluster.go:125: "
      L72: "cluster.go:125: Your Kubernetes control-plane has initialized successfully!"
      L73: "cluster.go:125: "
      L74: "cluster.go:125: To start using your cluster, you need to run the following as a regular user:"
      L75: "cluster.go:125: "
      L76: "cluster.go:125:   mkdir -p $HOME/.kube"
      L77: "cluster.go:125:   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config"
      L78: "cluster.go:125:   sudo chown $(id -u):$(id -g) $HOME/.kube/config"
      L79: "cluster.go:125: "
      L80: "cluster.go:125: Alternatively, if you are the root user, you can run:"
      L81: "cluster.go:125: "
      L82: "cluster.go:125:   export KUBECONFIG=/etc/kubernetes/admin.conf"
      L83: "cluster.go:125: "
      L84: "cluster.go:125: You should now deploy a pod network to the cluster."
      L85: "cluster.go:125: Run __kubectl apply -f [podnetwork].yaml__ with one of the options listed at:"
      L86: "cluster.go:125:   https://kubernetes.io/docs/concepts/cluster-administration/addons/"
      L87: "cluster.go:125: "
      L88: "cluster.go:125: Then you can join any number of worker nodes by running the following on each as root:"
      L89: "cluster.go:125: "
      L90: "cluster.go:125: kubeadm join 10.0.0.26:6443 --token poh0zz.ez7th1oqxbt6es54 _"
      L91: "cluster.go:125:  --discovery-token-ca-cert-hash sha256:af76cf721556d337636c97e43b7f5fdf4fb5dac5ff9348c6c52eb04ca1446fc9 "
      L92: "cluster.go:125: namespace/kube-flannel created"
      L93: "cluster.go:125: clusterrole.rbac.authorization.k8s.io/flannel created"
      L94: "cluster.go:125: clusterrolebinding.rbac.authorization.k8s.io/flannel created"
      L95: "cluster.go:125: serviceaccount/flannel created"
      L96: "cluster.go:125: configmap/kube-flannel-cfg created"
      L97: "cluster.go:125: daemonset.apps/kube-flannel-ds created"
      L98: "kubeadm.go:197: unable to setup cluster: unable to create worker node: PUT https://management.azure.com/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kola-cluster-image-b94fddb2b9/?providers/Microsoft.Compute/virtualMachines/ci-4081.3.6-n-46f458eb1a"
      L99: "--------------------------------------------------------------------------------"
      L100: "RESPONSE 409: 409 Conflict"
      L101: "ERROR CODE: OperationNotAllowed"
      L102: "--------------------------------------------------------------------------------"
      L103: "{"
      L104: "__error__: {"
      L105: "__code__: __OperationNotAllowed__,"
      L106: "__message__: __Operation could not be completed as it results in exceeding approved standardDSv4Family Cores quota. Additional details - Deployment Model: Resource Manager, Location: northeurope, Curr?ent Limit: 65, Current Usage: 64, Additional Required: 2, (Minimum) New Limit Required: 66. Setup Alerts when Quota reaches threshold. Learn more at https://aka.ms/quotamonitoringalerting . Submit a r?equest for Quota increase at https://aka.ms/ProdportalCRP/#blade/Microsoft_Azure_Capacity/UsageAndQuota.ReactView/Parameters/%7B%22subscriptionId%22:%220e46bd28-a80f-4d3a-8200-d9eb8d80cb2e%22,%22comma?nd%22:%22openQuotaApprovalBlade%22,%22quotas%22:[%7B%22location%22:%22northeurope%22,%22providerId%22:%22Microsoft.Compute%22,%22resourceName%22:%22standardDSv4Family%22,%22quotaRequest%22:%7B%22prope?rties%22:%7B%22limit%22:66,%22unit%22:%22Count%22,%22name%22:%7B%22value%22:%22standardDSv4Family%22%7D%7D%7D%7D]%7D by specifying parameters listed in the ???Details??? section for deployment to succ?eed. Please read more about quota limits at https://docs.microsoft.com/en-us/azure/azure-supportability/per-vm-quota-requests__"
      L107: "}"
      L108: "}"
      L109: "--------------------------------------------------------------------------------_"
      L110: " "
  ```

</details>

<details>

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for azure, run 1</summary>

  ```
      L1: " Error: _kubeadm.go:197: unable to setup cluster: unable to create etcd node: creating public ip: PUT https://management.azure.com/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kola?-cluster-image-9fd5883bba/providers/Microsoft.Network/publicIPAddresses/ip-5a5369fa66"
      L2: "--------------------------------------------------------------------------------"
      L3: "RESPONSE 409: 409 Conflict"
      L4: "ERROR CODE: ResourceGroupBeingDeleted"
      L5: "--------------------------------------------------------------------------------"
      L6: "{"
      L7: "__error__: {"
      L8: "__code__: __ResourceGroupBeingDeleted__,"
      L9: "__message__: __The resource group _kola-cluster-image-9fd5883bba_ is in deprovisioning state and cannot perform this operation.__"
      L10: "}"
      L11: "}"
      L12: "--------------------------------------------------------------------------------_"
      L13: " "
  ```


</details>


🟢 ok **kubeadm.v1.34.4.calico.base**; Succeeded: azure (2); Failed: azure (1)

<details>

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for azure, run 1</summary>

  ```
      L1: " Error: _kubeadm.go:197: unable to setup cluster: unable to create etcd node: PollUntilDone(ci-4081.3.6-n-923ad2bf3e): GET https://management.azure.com/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2?e/providers/Microsoft.Compute/locations/northeurope/operations/815ff40b-77a5-46a8-8d58-092f881707a3"
      L2: "--------------------------------------------------------------------------------"
      L3: "RESPONSE 200: 200 OK"
      L4: "ERROR CODE: OperationPreempted"
      L5: "--------------------------------------------------------------------------------"
      L6: "{"
      L7: "__startTime__: __2026-04-24T23:33:09.9287572+00:00__,"
      L8: "__endTime__: __2026-04-24T23:33:33.3057873+00:00__,"
      L9: "__status__: __Canceled__,"
      L10: "__error__: {"
      L11: "__code__: __OperationPreempted__,"
      L12: "__message__: __Operation execution has been preempted by a more recent operation.__"
      L13: "},"
      L14: "__name__: __815ff40b-77a5-46a8-8d58-092f881707a3__"
      L15: "}"
      L16: "--------------------------------------------------------------------------------_"
      L17: " "
  ```


</details>


🟢 ok **kubeadm.v1.34.4.cilium.base**; Succeeded: azure (3); Failed: azure (1, 2)

<details>

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for azure, run 2</summary>

  ```
      L1: " Error: _cluster.go:125: I0424 23:56:17.113717    2503 version.go:260] remote version is much newer: v1.36.0; falling back to: stable-1.34"
      L2: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-apiserver:v1.34.7"
      L3: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-controller-manager:v1.34.7"
      L4: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-scheduler:v1.34.7"
      L5: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-proxy:v1.34.7"
      L6: "cluster.go:125: [config/images] Pulled registry.k8s.io/coredns/coredns:v1.12.1"
      L7: "cluster.go:125: [config/images] Pulled registry.k8s.io/pause:3.10.1"
      L8: "cluster.go:125: [config/images] Pulled registry.k8s.io/etcd:3.6.5-0"
      L9: "cluster.go:125: I0424 23:56:28.872826    2718 version.go:260] remote version is much newer: v1.36.0; falling back to: stable-1.34"
      L10: "cluster.go:125: [init] Using Kubernetes version: v1.34.7"
      L11: "cluster.go:125: [preflight] Running pre-flight checks"
      L12: "cluster.go:125:  [WARNING Hostname]: hostname __ci-4081.3.6-n-dfa893db8f__ could not be reached"
      L13: "cluster.go:125:  [WARNING Hostname]: hostname __ci-4081.3.6-n-dfa893db8f__: lookup ci-4081.3.6-n-dfa893db8f on 168.63.129.16:53: no such host"
      L14: "cluster.go:125:  [WARNING Service-Kubelet]: kubelet service is not enabled, please run _systemctl enable kubelet.service_"
      L15: "cluster.go:125: [preflight] Pulling images required for setting up a Kubernetes cluster"
      L16: "cluster.go:125: [preflight] This might take a minute or two, depending on the speed of your internet connection"
      L17: "cluster.go:125: [preflight] You can also perform this action beforehand using _kubeadm config images pull_"
      L18: "cluster.go:125: W0424 23:56:29.026207    2718 checks.go:827] detected that the sandbox image __registry.k8s.io/pause:3.8__ of the container runtime is inconsistent with that used by kubeadm. It is rec?ommended to use __registry.k8s.io/pause:3.10.1__ as the CRI sandbox image."
      L19: "cluster.go:125: [certs] Using certificateDir folder __/etc/kubernetes/pki__"
      L20: "cluster.go:125: [certs] Generating __ca__ certificate and key"
      L21: "cluster.go:125: [certs] Generating __apiserver__ certificate and key"
      L22: "cluster.go:125: [certs] apiserver serving cert is signed for DNS names [ci-4081.3.6-n-dfa893db8f kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.?96.0.1 10.0.0.27]"
      L23: "cluster.go:125: [certs] Generating __apiserver-kubelet-client__ certificate and key"
      L24: "cluster.go:125: [certs] Generating __front-proxy-ca__ certificate and key"
      L25: "cluster.go:125: [certs] Generating __front-proxy-client__ certificate and key"
      L26: "cluster.go:125: [certs] External etcd mode: Skipping etcd/ca certificate authority generation"
      L27: "cluster.go:125: [certs] External etcd mode: Skipping etcd/server certificate generation"
      L28: "cluster.go:125: [certs] External etcd mode: Skipping etcd/peer certificate generation"
      L29: "cluster.go:125: [certs] External etcd mode: Skipping etcd/healthcheck-client certificate generation"
      L30: "cluster.go:125: [certs] External etcd mode: Skipping apiserver-etcd-client certificate generation"
      L31: "cluster.go:125: [certs] Generating __sa__ key and public key"
      L32: "cluster.go:125: [kubeconfig] Using kubeconfig folder __/etc/kubernetes__"
      L33: "cluster.go:125: [kubeconfig] Writing __admin.conf__ kubeconfig file"
      L34: "cluster.go:125: [kubeconfig] Writing __super-admin.conf__ kubeconfig file"
      L35: "cluster.go:125: [kubeconfig] Writing __kubelet.conf__ kubeconfig file"
      L36: "cluster.go:125: [kubeconfig] Writing __controller-manager.conf__ kubeconfig file"
      L37: "cluster.go:125: [kubeconfig] Writing __scheduler.conf__ kubeconfig file"
      L38: "cluster.go:125: [control-plane] Using manifest folder __/etc/kubernetes/manifests__"
      L39: "cluster.go:125: [control-plane] Creating static Pod manifest for __kube-apiserver__"
      L40: "cluster.go:125: [control-plane] Creating static Pod manifest for __kube-controller-manager__"
      L41: "cluster.go:125: [control-plane] Creating static Pod manifest for __kube-scheduler__"
      L42: "cluster.go:125: [kubelet-start] Writing kubelet environment file with flags to file __/var/lib/kubelet/kubeadm-flags.env__"
      L43: "cluster.go:125: [kubelet-start] Writing kubelet configuration to file __/var/lib/kubelet/instance-config.yaml__"
      L44: "cluster.go:125: [patches] Applied patch of type __application/strategic-merge-patch+json__ to target __kubeletconfiguration__"
      L45: "cluster.go:125: [kubelet-start] Writing kubelet configuration to file __/var/lib/kubelet/config.yaml__"
      L46: "cluster.go:125: [kubelet-start] Starting the kubelet"
      L47: "cluster.go:125: [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory __/etc/kubernetes/manifests__"
      L48: "cluster.go:125: [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s"
      L49: "cluster.go:125: [kubelet-check] The kubelet is healthy after 500.969634ms"
      L50: "cluster.go:125: [control-plane-check] Waiting for healthy control plane components. This can take up to 30m0s"
      L51: "cluster.go:125: [control-plane-check] Checking kube-apiserver at https://10.0.0.27:6443/livez"
      L52: "cluster.go:125: [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz"
      L53: "cluster.go:125: [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez"
      L54: "cluster.go:125: [control-plane-check] kube-controller-manager is healthy after 2.929541468s"
      L55: "cluster.go:125: [control-plane-check] kube-scheduler is healthy after 3.852750453s"
      L56: "cluster.go:125: [control-plane-check] kube-apiserver is healthy after 5.50193911s"
      L57: "cluster.go:125: [upload-config] Storing the configuration used in ConfigMap __kubeadm-config__ in the __kube-system__ Namespace"
      L58: "cluster.go:125: [kubelet] Creating a ConfigMap __kubelet-config__ in namespace kube-system with the configuration for the kubelets in the cluster"
      L59: "cluster.go:125: [upload-certs] Skipping phase. Please see --upload-certs"
      L60: "cluster.go:125: [mark-control-plane] Marking the node ci-4081.3.6-n-dfa893db8f as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-lo?ad-balancers]"
      L61: "cluster.go:125: [mark-control-plane] Marking the node ci-4081.3.6-n-dfa893db8f as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]"
      L62: "cluster.go:125: [bootstrap-token] Using token: jyz7om.8sk5lw2dyjw658x6"
      L63: "cluster.go:125: [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles"
      L64: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes"
      L65: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials"
      L66: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token"
      L67: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster"
      L68: "cluster.go:125: [bootstrap-token] Creating the __cluster-info__ ConfigMap in the __kube-public__ namespace"
      L69: "cluster.go:125: [kubelet-finalize] Updating __/etc/kubernetes/kubelet.conf__ to point to a rotatable kubelet client certificate and key"
      L70: "cluster.go:125: [addons] Applied essential addon: CoreDNS"
      L71: "cluster.go:125: [addons] Applied essential addon: kube-proxy"
      L72: "cluster.go:125: "
      L73: "cluster.go:125: Your Kubernetes control-plane has initialized successfully!"
      L74: "cluster.go:125: "
      L75: "cluster.go:125: To start using your cluster, you need to run the following as a regular user:"
      L76: "cluster.go:125: "
      L77: "cluster.go:125:   mkdir -p $HOME/.kube"
      L78: "cluster.go:125:   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config"
      L79: "cluster.go:125:   sudo chown $(id -u):$(id -g) $HOME/.kube/config"
      L80: "cluster.go:125: "
      L81: "cluster.go:125: Alternatively, if you are the root user, you can run:"
      L82: "cluster.go:125: "
      L83: "cluster.go:125:   export KUBECONFIG=/etc/kubernetes/admin.conf"
      L84: "cluster.go:125: "
      L85: "cluster.go:125: You should now deploy a pod network to the cluster."
      L86: "cluster.go:125: Run __kubectl apply -f [podnetwork].yaml__ with one of the options listed at:"
      L87: "cluster.go:125:   https://kubernetes.io/docs/concepts/cluster-administration/addons/"
      L88: "cluster.go:125: "
      L89: "cluster.go:125: Then you can join any number of worker nodes by running the following on each as root:"
      L90: "cluster.go:125: "
      L91: "cluster.go:125: kubeadm join 10.0.0.27:6443 --token jyz7om.8sk5lw2dyjw658x6 _"
      L92: "cluster.go:125:  --discovery-token-ca-cert-hash sha256:ab40a76819f4a61f5df53afa83f8c026cb1018a501ca9fa0e2489f74a45f77d8 "
      L93: "cluster.go:125: i  Using Cilium version 1.12.5"
      L94: "cluster.go:125: ? Auto-detected cluster name: kubernetes"
      L95: "cluster.go:125: ? Auto-detected datapath mode: tunnel"
      L96: "cluster.go:125: ? Auto-detected kube-proxy has been installed"
      L97: "cluster.go:125: i  helm template --namespace kube-system cilium cilium/cilium --version 1.12.5 --set cluster.id=0,cluster.name=kubernetes,encryption.nodeEncryption=false,extraConfig.cluster-pool-ipv4-?cidr=192.168.0.0/17,extraConfig.enable-endpoint-routes=true,kubeProxyReplacement=disabled,operator.replicas=1,serviceAccounts.cilium.name=cilium,serviceAccounts.operator.name=cilium-operator,tunnel=vx?lan"
      L98: "cluster.go:125: i  Storing helm values file in kube-system/cilium-cli-helm-values Secret"
      L99: "cluster.go:125: ? Created CA in secret cilium-ca"
      L100: "cluster.go:125: ? Generating certificates for Hubble..."
      L101: "cluster.go:125: ? Creating Service accounts..."
      L102: "cluster.go:125: ? Creating Cluster roles..."
      L103: "cluster.go:125: ? Creating ConfigMap for Cilium version 1.12.5..."
      L104: "cluster.go:125: i  Manual overwrite in ConfigMap: enable-endpoint-routes=true"
      L105: "cluster.go:125: i  Manual overwrite in ConfigMap: cluster-pool-ipv4-cidr=192.168.0.0/17"
      L106: "cluster.go:125: ? Creating Agent DaemonSet..."
      L107: "cluster.go:125: level=warning msg=__spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/mount-cgroup]: deprecated since v1.30; use the ___appArmorProfile___ field instead?__ subsys=klog"
      L108: "cluster.go:125: level=warning msg=__spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/apply-sysctl-overwrites]: deprecated since v1.30; use the ___appArmorProfile___ fi?eld instead__ subsys=klog"
      L109: "cluster.go:125: level=warning msg=__spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/clean-cilium-state]: deprecated since v1.30; use the ___appArmorProfile___ field i?nstead__ subsys=klog"
      L110: "cluster.go:125: level=warning msg=__spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/cilium-agent]: deprecated since v1.30; use the ___appArmorProfile___ field instead?__ subsys=klog"
      L111: "cluster.go:125: ? Creating Operator Deployment..."
      L112: "cluster.go:125: ? Waiting for Cilium to be installed and ready..."
      L113: "cluster.go:125: ? Cilium was successfully installed! Run _cilium status_ to view installation health"
      L114: "cluster.go:125: ?[33m    /??_"
      L115: "cluster.go:125: ?[36m /???[33m___/?[32m??_?[0m    Cilium:         ?[32mOK?[0m"
      L116: "cluster.go:125: ?[36m ___?[31m/??_?[32m__/?[0m    Operator:       ?[32mOK?[0m"
      L117: "cluster.go:125: ?[32m /???[31m___/?[35m??_?[0m    Hubble:         ?[36mdisabled?[0m"
      L118: "cluster.go:125: ?[32m ___?[34m/??_?[35m__/?[0m    ClusterMesh:    ?[36mdisabled?[0m"
      L119: "cluster.go:125: ?[34m    ___/"
      L120: "cluster.go:125: ?[0m"
      L121: "cluster.go:125: Deployment       cilium-operator    "
      L122: "cluster.go:125: DaemonSet        cilium             "
      L123: "cluster.go:125: Containers:      cilium             "
      L124: "cluster.go:125:                  cilium-operator    "
      L125: "cluster.go:125: Cluster Pods:    0/0 managed by Cilium"
      L126: "kubeadm.go:197: unable to setup cluster: unable to create worker node: PUT https://management.azure.com/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kola-cluster-image-b94fddb2b9/?providers/Microsoft.Compute/virtualMachines/ci-4081.3.6-n-f7c5a743cf"
      L127: "--------------------------------------------------------------------------------"
      L128: "RESPONSE 409: 409 Conflict"
      L129: "ERROR CODE: OperationNotAllowed"
      L130: "--------------------------------------------------------------------------------"
      L131: "{"
      L132: "__error__: {"
      L133: "__code__: __OperationNotAllowed__,"
      L134: "__message__: __Operation could not be completed as it results in exceeding approved standardDSv4Family Cores quota. Additional details - Deployment Model: Resource Manager, Location: northeurope, Curr?ent Limit: 65, Current Usage: 64, Additional Required: 2, (Minimum) New Limit Required: 66. Setup Alerts when Quota reaches threshold. Learn more at https://aka.ms/quotamonitoringalerting . Submit a r?equest for Quota increase at https://aka.ms/ProdportalCRP/#blade/Microsoft_Azure_Capacity/UsageAndQuota.ReactView/Parameters/%7B%22subscriptionId%22:%220e46bd28-a80f-4d3a-8200-d9eb8d80cb2e%22,%22comma?nd%22:%22openQuotaApprovalBlade%22,%22quotas%22:[%7B%22location%22:%22northeurope%22,%22providerId%22:%22Microsoft.Compute%22,%22resourceName%22:%22standardDSv4Family%22,%22quotaRequest%22:%7B%22prope?rties%22:%7B%22limit%22:66,%22unit%22:%22Count%22,%22name%22:%7B%22value%22:%22standardDSv4Family%22%7D%7D%7D%7D]%7D by specifying parameters listed in the ???Details??? section for deployment to succ?eed. Please read more about quota limits at https://docs.microsoft.com/en-us/azure/azure-supportability/per-vm-quota-requests__"
      L135: "}"
      L136: "}"
      L137: "--------------------------------------------------------------------------------_"
      L138: " "
  ```

</details>

<details>

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for azure, run 1</summary>

  ```
      L1: " Error: _cluster.go:125: I0424 23:32:21.613174    2529 version.go:260] remote version is much newer: v1.36.0; falling back to: stable-1.34"
      L2: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-apiserver:v1.34.7"
      L3: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-controller-manager:v1.34.7"
      L4: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-scheduler:v1.34.7"
      L5: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-proxy:v1.34.7"
      L6: "cluster.go:125: [config/images] Pulled registry.k8s.io/coredns/coredns:v1.12.1"
      L7: "cluster.go:125: [config/images] Pulled registry.k8s.io/pause:3.10.1"
      L8: "cluster.go:125: [config/images] Pulled registry.k8s.io/etcd:3.6.5-0"
      L9: "cluster.go:125: I0424 23:32:34.820768    2747 version.go:260] remote version is much newer: v1.36.0; falling back to: stable-1.34"
      L10: "cluster.go:125: [init] Using Kubernetes version: v1.34.7"
      L11: "cluster.go:125: [preflight] Running pre-flight checks"
      L12: "cluster.go:125:  [WARNING Hostname]: hostname __ci-4081.3.6-n-1c86d6ea5a__ could not be reached"
      L13: "cluster.go:125:  [WARNING Hostname]: hostname __ci-4081.3.6-n-1c86d6ea5a__: lookup ci-4081.3.6-n-1c86d6ea5a on 168.63.129.16:53: no such host"
      L14: "cluster.go:125:  [WARNING Service-Kubelet]: kubelet service is not enabled, please run _systemctl enable kubelet.service_"
      L15: "cluster.go:125: [preflight] Pulling images required for setting up a Kubernetes cluster"
      L16: "cluster.go:125: [preflight] This might take a minute or two, depending on the speed of your internet connection"
      L17: "cluster.go:125: [preflight] You can also perform this action beforehand using _kubeadm config images pull_"
      L18: "cluster.go:125: W0424 23:32:35.362973    2747 checks.go:827] detected that the sandbox image __registry.k8s.io/pause:3.8__ of the container runtime is inconsistent with that used by kubeadm. It is rec?ommended to use __registry.k8s.io/pause:3.10.1__ as the CRI sandbox image."
      L19: "cluster.go:125: [certs] Using certificateDir folder __/etc/kubernetes/pki__"
      L20: "cluster.go:125: [certs] Generating __ca__ certificate and key"
      L21: "cluster.go:125: [certs] Generating __apiserver__ certificate and key"
      L22: "cluster.go:125: [certs] apiserver serving cert is signed for DNS names [ci-4081.3.6-n-1c86d6ea5a kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.?96.0.1 10.0.0.28]"
      L23: "cluster.go:125: [certs] Generating __apiserver-kubelet-client__ certificate and key"
      L24: "cluster.go:125: [certs] Generating __front-proxy-ca__ certificate and key"
      L25: "cluster.go:125: [certs] Generating __front-proxy-client__ certificate and key"
      L26: "cluster.go:125: [certs] External etcd mode: Skipping etcd/ca certificate authority generation"
      L27: "cluster.go:125: [certs] External etcd mode: Skipping etcd/server certificate generation"
      L28: "cluster.go:125: [certs] External etcd mode: Skipping etcd/peer certificate generation"
      L29: "cluster.go:125: [certs] External etcd mode: Skipping etcd/healthcheck-client certificate generation"
      L30: "cluster.go:125: [certs] External etcd mode: Skipping apiserver-etcd-client certificate generation"
      L31: "cluster.go:125: [certs] Generating __sa__ key and public key"
      L32: "cluster.go:125: [kubeconfig] Using kubeconfig folder __/etc/kubernetes__"
      L33: "cluster.go:125: [kubeconfig] Writing __admin.conf__ kubeconfig file"
      L34: "cluster.go:125: [kubeconfig] Writing __super-admin.conf__ kubeconfig file"
      L35: "cluster.go:125: [kubeconfig] Writing __kubelet.conf__ kubeconfig file"
      L36: "cluster.go:125: [kubeconfig] Writing __controller-manager.conf__ kubeconfig file"
      L37: "cluster.go:125: [kubeconfig] Writing __scheduler.conf__ kubeconfig file"
      L38: "cluster.go:125: [control-plane] Using manifest folder __/etc/kubernetes/manifests__"
      L39: "cluster.go:125: [control-plane] Creating static Pod manifest for __kube-apiserver__"
      L40: "cluster.go:125: [control-plane] Creating static Pod manifest for __kube-controller-manager__"
      L41: "cluster.go:125: [control-plane] Creating static Pod manifest for __kube-scheduler__"
      L42: "cluster.go:125: [kubelet-start] Writing kubelet environment file with flags to file __/var/lib/kubelet/kubeadm-flags.env__"
      L43: "cluster.go:125: [kubelet-start] Writing kubelet configuration to file __/var/lib/kubelet/instance-config.yaml__"
      L44: "cluster.go:125: [patches] Applied patch of type __application/strategic-merge-patch+json__ to target __kubeletconfiguration__"
      L45: "cluster.go:125: [kubelet-start] Writing kubelet configuration to file __/var/lib/kubelet/config.yaml__"
      L46: "cluster.go:125: [kubelet-start] Starting the kubelet"
      L47: "cluster.go:125: [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory __/etc/kubernetes/manifests__"
      L48: "cluster.go:125: [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s"
      L49: "cluster.go:125: [kubelet-check] The kubelet is healthy after 501.344917ms"
      L50: "cluster.go:125: [control-plane-check] Waiting for healthy control plane components. This can take up to 30m0s"
      L51: "cluster.go:125: [control-plane-check] Checking kube-apiserver at https://10.0.0.28:6443/livez"
      L52: "cluster.go:125: [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz"
      L53: "cluster.go:125: [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez"
      L54: "cluster.go:125: [control-plane-check] kube-controller-manager is healthy after 2.511250582s"
      L55: "cluster.go:125: [control-plane-check] kube-scheduler is healthy after 3.269972539s"
      L56: "cluster.go:125: [control-plane-check] kube-apiserver is healthy after 5.002378282s"
      L57: "cluster.go:125: [upload-config] Storing the configuration used in ConfigMap __kubeadm-config__ in the __kube-system__ Namespace"
      L58: "cluster.go:125: [kubelet] Creating a ConfigMap __kubelet-config__ in namespace kube-system with the configuration for the kubelets in the cluster"
      L59: "cluster.go:125: [upload-certs] Skipping phase. Please see --upload-certs"
      L60: "cluster.go:125: [mark-control-plane] Marking the node ci-4081.3.6-n-1c86d6ea5a as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-lo?ad-balancers]"
      L61: "cluster.go:125: [mark-control-plane] Marking the node ci-4081.3.6-n-1c86d6ea5a as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]"
      L62: "cluster.go:125: [bootstrap-token] Using token: 5ry9t4.0ira480x1n5p1k5p"
      L63: "cluster.go:125: [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles"
      L64: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes"
      L65: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials"
      L66: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token"
      L67: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster"
      L68: "cluster.go:125: [bootstrap-token] Creating the __cluster-info__ ConfigMap in the __kube-public__ namespace"
      L69: "cluster.go:125: [kubelet-finalize] Updating __/etc/kubernetes/kubelet.conf__ to point to a rotatable kubelet client certificate and key"
      L70: "cluster.go:125: [addons] Applied essential addon: CoreDNS"
      L71: "cluster.go:125: [addons] Applied essential addon: kube-proxy"
      L72: "cluster.go:125: "
      L73: "cluster.go:125: Your Kubernetes control-plane has initialized successfully!"
      L74: "cluster.go:125: "
      L75: "cluster.go:125: To start using your cluster, you need to run the following as a regular user:"
      L76: "cluster.go:125: "
      L77: "cluster.go:125:   mkdir -p $HOME/.kube"
      L78: "cluster.go:125:   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config"
      L79: "cluster.go:125:   sudo chown $(id -u):$(id -g) $HOME/.kube/config"
      L80: "cluster.go:125: "
      L81: "cluster.go:125: Alternatively, if you are the root user, you can run:"
      L82: "cluster.go:125: "
      L83: "cluster.go:125:   export KUBECONFIG=/etc/kubernetes/admin.conf"
      L84: "cluster.go:125: "
      L85: "cluster.go:125: You should now deploy a pod network to the cluster."
      L86: "cluster.go:125: Run __kubectl apply -f [podnetwork].yaml__ with one of the options listed at:"
      L87: "cluster.go:125:   https://kubernetes.io/docs/concepts/cluster-administration/addons/"
      L88: "cluster.go:125: "
      L89: "cluster.go:125: Then you can join any number of worker nodes by running the following on each as root:"
      L90: "cluster.go:125: "
      L91: "cluster.go:125: kubeadm join 10.0.0.28:6443 --token 5ry9t4.0ira480x1n5p1k5p _"
      L92: "cluster.go:125:  --discovery-token-ca-cert-hash sha256:818930c77557909277c09b3a750ff8ce53866ee001c751c2c199cca7dc91ac86 "
      L93: "cluster.go:125: i  Using Cilium version 1.12.5"
      L94: "cluster.go:125: ? Auto-detected cluster name: kubernetes"
      L95: "cluster.go:125: ? Auto-detected datapath mode: tunnel"
      L96: "cluster.go:125: ? Auto-detected kube-proxy has been installed"
      L97: "cluster.go:125: i  helm template --namespace kube-system cilium cilium/cilium --version 1.12.5 --set cluster.id=0,cluster.name=kubernetes,encryption.nodeEncryption=false,extraConfig.cluster-pool-ipv4-?cidr=192.168.0.0/17,extraConfig.enable-endpoint-routes=true,kubeProxyReplacement=disabled,operator.replicas=1,serviceAccounts.cilium.name=cilium,serviceAccounts.operator.name=cilium-operator,tunnel=vx?lan"
      L98: "cluster.go:125: i  Storing helm values file in kube-system/cilium-cli-helm-values Secret"
      L99: "cluster.go:125: ? Created CA in secret cilium-ca"
      L100: "cluster.go:125: ? Generating certificates for Hubble..."
      L101: "cluster.go:125: ? Creating Service accounts..."
      L102: "cluster.go:125: ? Creating Cluster roles..."
      L103: "cluster.go:125: ? Creating ConfigMap for Cilium version 1.12.5..."
      L104: "cluster.go:125: i  Manual overwrite in ConfigMap: enable-endpoint-routes=true"
      L105: "cluster.go:125: i  Manual overwrite in ConfigMap: cluster-pool-ipv4-cidr=192.168.0.0/17"
      L106: "cluster.go:125: ? Creating Agent DaemonSet..."
      L107: "cluster.go:125: level=warning msg=__spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/mount-cgroup]: deprecated since v1.30; use the ___appArmorProfile___ field instead?__ subsys=klog"
      L108: "cluster.go:125: level=warning msg=__spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/apply-sysctl-overwrites]: deprecated since v1.30; use the ___appArmorProfile___ fi?eld instead__ subsys=klog"
      L109: "cluster.go:125: level=warning msg=__spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/clean-cilium-state]: deprecated since v1.30; use the ___appArmorProfile___ field i?nstead__ subsys=klog"
      L110: "cluster.go:125: level=warning msg=__spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/cilium-agent]: deprecated since v1.30; use the ___appArmorProfile___ field instead?__ subsys=klog"
      L111: "cluster.go:125: ? Creating Operator Deployment..."
      L112: "cluster.go:125: ? Waiting for Cilium to be installed and ready..."
      L113: "cluster.go:125: ? Cilium was successfully installed! Run _cilium status_ to view installation health"
      L114: "cluster.go:125: ?[33m    /??_"
      L115: "cluster.go:125: ?[36m /???[33m___/?[32m??_?[0m    Cilium:         ?[32mOK?[0m"
      L116: "cluster.go:125: ?[36m ___?[31m/??_?[32m__/?[0m    Operator:       ?[32mOK?[0m"
      L117: "cluster.go:125: ?[32m /???[31m___/?[35m??_?[0m    Hubble:         ?[36mdisabled?[0m"
      L118: "cluster.go:125: ?[32m ___?[34m/??_?[35m__/?[0m    ClusterMesh:    ?[36mdisabled?[0m"
      L119: "cluster.go:125: ?[34m    ___/"
      L120: "cluster.go:125: ?[0m"
      L121: "cluster.go:125: Deployment       cilium-operator    "
      L122: "cluster.go:125: DaemonSet        cilium             "
      L123: "cluster.go:125: Containers:      cilium-operator    "
      L124: "cluster.go:125:                  cilium             "
      L125: "cluster.go:125: Cluster Pods:    0/0 managed by Cilium"
      L126: "kubeadm.go:197: unable to setup cluster: unable to create worker node: PollUntilDone(ci-4081.3.6-n-afe3eba3dd): GET https://management.azure.com/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/prov?iders/Microsoft.Compute/locations/northeurope/operations/01772464-e2df-4916-b768-4ad6e1b7e5a3"
      L127: "--------------------------------------------------------------------------------"
      L128: "RESPONSE 200: 200 OK"
      L129: "ERROR CODE: OperationPreempted"
      L130: "--------------------------------------------------------------------------------"
      L131: "{"
      L132: "__startTime__: __2026-04-24T23:32:54.1479419+00:00__,"
      L133: "__endTime__: __2026-04-24T23:33:33.7910001+00:00__,"
      L134: "__status__: __Canceled__,"
      L135: "__error__: {"
      L136: "__code__: __OperationPreempted__,"
      L137: "__message__: __Operation execution has been preempted by a more recent operation.__"
      L138: "},"
      L139: "__name__: __01772464-e2df-4916-b768-4ad6e1b7e5a3__"
      L140: "}"
      L141: "--------------------------------------------------------------------------------_"
      L142: " "
  ```


</details>


🟢 ok **kubeadm.v1.34.4.flannel.base**; Succeeded: azure (2); Failed: azure (1)

<details>

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for azure, run 1</summary>

  ```
      L1: " Error: _kubeadm.go:197: unable to setup cluster: unable to create etcd node: creating public ip: PUT https://management.azure.com/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kola-cluster-image-9fd5883bba/providers/Microsoft.Network/publicIPAddresses/ip-b37d7d1359"
      L2: "--------------------------------------------------------------------------------"
      L3: "RESPONSE 409: 409 Conflict"
      L4: "ERROR CODE: ResourceGroupBeingDeleted"
      L5: "--------------------------------------------------------------------------------"
      L6: "{"
      L7: "__error__: {"
      L8: "__code__: __ResourceGroupBeingDeleted__,"
      L9: "__message__: __The resource group _kola-cluster-image-9fd5883bba_ is in deprovisioning state and cannot perform this operation.__"
      L10: "}"
      L11: "}"
      L12: "--------------------------------------------------------------------------------_"
      L13: " "
  ```


</details>


🟢 ok **kubeadm.v1.35.1.calico.base**; Succeeded: azure (2); Failed: azure (1)

<details>

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for azure, run 1</summary>

  ```
      L1: " Error: _kubeadm.go:197: unable to setup cluster: unable to create etcd node: PollUntilDone(ci-4081.3.6-n-685414ff41): GET https://management.azure.com/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/providers/Microsoft.Compute/locations/northeurope/operations/87ecbc40-db87-4b64-a439-c602ecfc1f19"
      L2: "--------------------------------------------------------------------------------"
      L3: "RESPONSE 200: 200 OK"
      L4: "ERROR CODE: OperationPreempted"
      L5: "--------------------------------------------------------------------------------"
      L6: "{"
      L7: "__startTime__: __2026-04-24T23:32:52.6621443+00:00__,"
      L8: "__endTime__: __2026-04-24T23:33:33.7910001+00:00__,"
      L9: "__status__: __Canceled__,"
      L10: "__error__: {"
      L11: "__code__: __OperationPreempted__,"
      L12: "__message__: __Operation execution has been preempted by a more recent operation.__"
      L13: "},"
      L14: "__name__: __87ecbc40-db87-4b64-a439-c602ecfc1f19__"
      L15: "}"
      L16: "--------------------------------------------------------------------------------_"
      L17: " "
  ```


</details>


🟢 ok **kubeadm.v1.35.1.cilium.base**; Succeeded: azure (2); Failed: azure (1)

<details>

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for azure, run 1</summary>

  ```
      L1: " Error: _kubeadm.go:197: unable to setup cluster: unable to create etcd node: creating public ip: PUT https://management.azure.com/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kola-cluster-image-9fd5883bba/providers/Microsoft.Network/publicIPAddresses/ip-c021567e70"
      L2: "--------------------------------------------------------------------------------"
      L3: "RESPONSE 409: 409 Conflict"
      L4: "ERROR CODE: ResourceGroupBeingDeleted"
      L5: "--------------------------------------------------------------------------------"
      L6: "{"
      L7: "__error__: {"
      L8: "__code__: __ResourceGroupBeingDeleted__,"
      L9: "__message__: __The resource group _kola-cluster-image-9fd5883bba_ is in deprovisioning state and cannot perform this operation.__"
      L10: "}"
      L11: "}"
      L12: "--------------------------------------------------------------------------------_"
      L13: " "
  ```


</details>


🟢 ok **kubeadm.v1.35.1.flannel.base**; Succeeded: azure (3); Failed: azure (1, 2)

<details>

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for azure, run 2</summary>

  ```
      L1: " Error: _kubeadm.go:197: unable to setup cluster: unable to create etcd node: PUT https://management.azure.com/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kola-cluster-image-b94fddb2b9/providers/Microsoft.Compute/virtualMachines/ci-4081.3.6-n-6d538f6717"
      L2: "--------------------------------------------------------------------------------"
      L3: "RESPONSE 409: 409 Conflict"
      L4: "ERROR CODE: OperationNotAllowed"
      L5: "--------------------------------------------------------------------------------"
      L6: "{"
      L7: "__error__: {"
      L8: "__code__: __OperationNotAllowed__,"
      L9: "__message__: __Operation could not be completed as it results in exceeding approved standardDSv4Family Cores quota. Additional details - Deployment Model: Resource Manager, Location: northeurope, Current Limit: 65, Current Usage: 64, Additional Required: 2, (Minimum) New Limit Required: 66. Setup Alerts when Quota reaches threshold. Learn more at https://aka.ms/quotamonitoringalerting . Submit a request for Quota increase at https://aka.ms/ProdportalCRP/#blade/Microsoft_Azure_Capacity/UsageAndQuota.ReactView/Parameters/%7B%22subscriptionId%22:%220e46bd28-a80f-4d3a-8200-d9eb8d80cb2e%22,%22command%22:%22openQuotaApprovalBlade%22,%22quotas%22:[%7B%22location%22:%22northeurope%22,%22providerId%22:%22Microsoft.Compute%22,%22resourceName%22:%22standardDSv4Family%22,%22quotaRequest%22:%7B%22properties%22:%7B%22limit%22:66,%22unit%22:%22Count%22,%22name%22:%7B%22value%22:%22standardDSv4Family%22%7D%7D%7D%7D]%7D by specifying parameters listed in the _Details_ section for deployment to succeed. Please read more about quota limits at https://docs.microsoft.com/en-us/azure/azure-supportability/per-vm-quota-requests__"
      L10: "}"
      L11: "}"
      L12: "--------------------------------------------------------------------------------_"
      L13: " "
  ```

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for azure, run 1</summary>

  ```
      L1: " Error: _cluster.go:125: I0424 23:32:19.021434    2461 version.go:260] remote version is much newer: v1.36.0; falling back to: stable-1.35"
      L2: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-apiserver:v1.35.4"
      L3: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-controller-manager:v1.35.4"
      L4: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-scheduler:v1.35.4"
      L5: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-proxy:v1.35.4"
      L6: "cluster.go:125: [config/images] Pulled registry.k8s.io/coredns/coredns:v1.13.1"
      L7: "cluster.go:125: [config/images] Pulled registry.k8s.io/pause:3.10.1"
      L8: "cluster.go:125: [config/images] Pulled registry.k8s.io/etcd:3.6.6-0"
      L9: "cluster.go:125: I0424 23:32:32.420102    2671 version.go:260] remote version is much newer: v1.36.0; falling back to: stable-1.35"
      L10: "cluster.go:125: [init] Using Kubernetes version: v1.35.4"
      L11: "cluster.go:125: [preflight] Running pre-flight checks"
      L12: "cluster.go:125:  [WARNING ContainerRuntimeVersion]: You must update your container runtime to a version that supports the CRI method RuntimeConfig. Falling back to using cgroupDriver from kubelet config will be removed in 1.36. For more information, see https://git.k8s.io/enhancements/keps/sig-node/4033-group-driver-detection-over-cri"
      L13: "cluster.go:125:  [WARNING Hostname]: hostname __ci-4081.3.6-n-7db0bcb296__ could not be reached"
      L14: "cluster.go:125:  [WARNING Hostname]: hostname __ci-4081.3.6-n-7db0bcb296__: lookup ci-4081.3.6-n-7db0bcb296 on 168.63.129.16:53: no such host"
      L15: "cluster.go:125:  [WARNING Service-kubelet]: kubelet service is not enabled, please run _systemctl enable kubelet.service_"
      L16: "cluster.go:125: [preflight] Pulling images required for setting up a Kubernetes cluster"
      L17: "cluster.go:125: [preflight] This might take a minute or two, depending on the speed of your internet connection"
      L18: "cluster.go:125: [preflight] You can also perform this action beforehand using _kubeadm config images pull_"
      L19: "cluster.go:125: W0424 23:32:32.611497    2671 checks.go:906] detected that the sandbox image __registry.k8s.io/pause:3.8__ of the container runtime is inconsistent with that used by kubeadm. It is recommended to use __registry.k8s.io/pause:3.10.1__ as the CRI sandbox image."
      L20: "cluster.go:125: [certs] Using certificateDir folder __/etc/kubernetes/pki__"
      L21: "cluster.go:125: [certs] Generating __ca__ certificate and key"
      L22: "cluster.go:125: [certs] Generating __apiserver__ certificate and key"
      L23: "cluster.go:125: [certs] apiserver serving cert is signed for DNS names [ci-4081.3.6-n-7db0bcb296 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.0.0.26]"
      L24: "cluster.go:125: [certs] Generating __apiserver-kubelet-client__ certificate and key"
      L25: "cluster.go:125: [certs] Generating __front-proxy-ca__ certificate and key"
      L26: "cluster.go:125: [certs] Generating __front-proxy-client__ certificate and key"
      L27: "cluster.go:125: [certs] External etcd mode: Skipping etcd/ca certificate authority generation"
      L28: "cluster.go:125: [certs] External etcd mode: Skipping etcd/server certificate generation"
      L29: "cluster.go:125: [certs] External etcd mode: Skipping etcd/peer certificate generation"
      L30: "cluster.go:125: [certs] External etcd mode: Skipping etcd/healthcheck-client certificate generation"
      L31: "cluster.go:125: [certs] External etcd mode: Skipping apiserver-etcd-client certificate generation"
      L32: "cluster.go:125: [certs] Generating __sa__ key and public key"
      L33: "cluster.go:125: [kubeconfig] Using kubeconfig folder __/etc/kubernetes__"
      L34: "cluster.go:125: [kubeconfig] Writing __admin.conf__ kubeconfig file"
      L35: "cluster.go:125: [kubeconfig] Writing __super-admin.conf__ kubeconfig file"
      L36: "cluster.go:125: [kubeconfig] Writing __kubelet.conf__ kubeconfig file"
      L37: "cluster.go:125: [kubeconfig] Writing __controller-manager.conf__ kubeconfig file"
      L38: "cluster.go:125: [kubeconfig] Writing __scheduler.conf__ kubeconfig file"
      L39: "cluster.go:125: [control-plane] Using manifest folder __/etc/kubernetes/manifests__"
      L40: "cluster.go:125: [control-plane] Creating static Pod manifest for __kube-apiserver__"
      L41: "cluster.go:125: [control-plane] Creating static Pod manifest for __kube-controller-manager__"
      L42: "cluster.go:125: [control-plane] Creating static Pod manifest for __kube-scheduler__"
      L43: "cluster.go:125: [kubelet-start] Writing kubelet environment file with flags to file __/var/lib/kubelet/kubeadm-flags.env__"
      L44: "cluster.go:125: [kubelet-start] Writing kubelet configuration to file __/var/lib/kubelet/instance-config.yaml__"
      L45: "cluster.go:125: [patches] Applied patch of type __application/strategic-merge-patch+json__ to target __kubeletconfiguration__"
      L46: "cluster.go:125: [kubelet-start] Writing kubelet configuration to file __/var/lib/kubelet/config.yaml__"
      L47: "cluster.go:125: [kubelet-start] Starting the kubelet"
      L48: "cluster.go:125: [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory __/etc/kubernetes/manifests__"
      L49: "cluster.go:125: [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s"
      L50: "cluster.go:125: [kubelet-check] The kubelet is healthy after 500.839287ms"
      L51: "cluster.go:125: [control-plane-check] Waiting for healthy control plane components. This can take up to 30m0s"
      L52: "cluster.go:125: [control-plane-check] Checking kube-apiserver at https://10.0.0.26:6443/livez"
      L53: "cluster.go:125: [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz"
      L54: "cluster.go:125: [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez"
      L55: "cluster.go:125: [control-plane-check] kube-controller-manager is healthy after 4.508526281s"
      L56: "cluster.go:125: [control-plane-check] kube-scheduler is healthy after 5.187866927s"
      L57: "cluster.go:125: [control-plane-check] kube-apiserver is healthy after 7.001485914s"
      L58: "cluster.go:125: [upload-config] Storing the configuration used in ConfigMap __kubeadm-config__ in the __kube-system__ Namespace"
      L59: "cluster.go:125: [kubelet] Creating a ConfigMap __kubelet-config__ in namespace kube-system with the configuration for the kubelets in the cluster"
      L60: "cluster.go:125: [upload-certs] Skipping phase. Please see --upload-certs"
      L61: "cluster.go:125: [mark-control-plane] Marking the node ci-4081.3.6-n-7db0bcb296 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]"
      L62: "cluster.go:125: [mark-control-plane] Marking the node ci-4081.3.6-n-7db0bcb296 as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]"
      L63: "cluster.go:125: [bootstrap-token] Using token: ddpijz.0rvt9z36frgn5q49"
      L64: "cluster.go:125: [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles"
      L65: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes"
      L66: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials"
      L67: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token"
      L68: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster"
      L69: "cluster.go:125: [bootstrap-token] Creating the __cluster-info__ ConfigMap in the __kube-public__ namespace"
      L70: "cluster.go:125: [kubelet-finalize] Updating __/etc/kubernetes/kubelet.conf__ to point to a rotatable kubelet client certificate and key"
      L71: "cluster.go:125: [addons] Applied essential addon: CoreDNS"
      L72: "cluster.go:125: [addons] Applied essential addon: kube-proxy"
      L73: "cluster.go:125: "
      L74: "cluster.go:125: Your Kubernetes control-plane has initialized successfully!"
      L75: "cluster.go:125: "
      L76: "cluster.go:125: To start using your cluster, you need to run the following as a regular user:"
      L77: "cluster.go:125: "
      L78: "cluster.go:125:   mkdir -p $HOME/.kube"
      L79: "cluster.go:125:   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config"
      L80: "cluster.go:125:   sudo chown $(id -u):$(id -g) $HOME/.kube/config"
      L81: "cluster.go:125: "
      L82: "cluster.go:125: Alternatively, if you are the root user, you can run:"
      L83: "cluster.go:125: "
      L84: "cluster.go:125:   export KUBECONFIG=/etc/kubernetes/admin.conf"
      L85: "cluster.go:125: "
      L86: "cluster.go:125: You should now deploy a pod network to the cluster."
      L87: "cluster.go:125: Run __kubectl apply -f [podnetwork].yaml__ with one of the options listed at:"
      L88: "cluster.go:125:   https://kubernetes.io/docs/concepts/cluster-administration/addons/"
      L89: "cluster.go:125: "
      L90: "cluster.go:125: Then you can join any number of worker nodes by running the following on each as root:"
      L91: "cluster.go:125: "
      L92: "cluster.go:125: kubeadm join 10.0.0.26:6443 --token ddpijz.0rvt9z36frgn5q49 _"
      L93: "cluster.go:125:  --discovery-token-ca-cert-hash sha256:c3f4a6a907020887014711a34e45e1a30569b2363477ae29b18470eeaff880cd "
      L94: "cluster.go:125: namespace/kube-flannel created"
      L95: "cluster.go:125: clusterrole.rbac.authorization.k8s.io/flannel created"
      L96: "cluster.go:125: clusterrolebinding.rbac.authorization.k8s.io/flannel created"
      L97: "cluster.go:125: serviceaccount/flannel created"
      L98: "cluster.go:125: configmap/kube-flannel-cfg created"
      L99: "cluster.go:125: daemonset.apps/kube-flannel-ds created"
      L100: "kubeadm.go:197: unable to setup cluster: unable to create worker node: PollUntilDone(ci-4081.3.6-n-24ffd5f9a9): GET https://management.azure.com/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/providers/Microsoft.Compute/locations/northeurope/operations/5b3a10c6-e2b1-488c-8897-06b17344c072"
      L101: "--------------------------------------------------------------------------------"
      L102: "RESPONSE 200: 200 OK"
      L103: "ERROR CODE: OperationPreempted"
      L104: "--------------------------------------------------------------------------------"
      L105: "{"
      L106: "__startTime__: __2026-04-24T23:32:47.5930289+00:00__,"
      L107: "__endTime__: __2026-04-24T23:33:33.3057873+00:00__,"
      L108: "__status__: __Canceled__,"
      L109: "__error__: {"
      L110: "__code__: __OperationPreempted__,"
      L111: "__message__: __Operation execution has been preempted by a more recent operation.__"
      L112: "},"
      L113: "__name__: __5b3a10c6-e2b1-488c-8897-06b17344c072__"
      L114: "}"
      L115: "--------------------------------------------------------------------------------_"
      L116: " "
  ```


</details>


🟢 ok **sysext.custom-docker.sysext**; Succeeded: azure (2); Failed: azure (1)

<details>

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for azure, run 1</summary>

  ```
      L1: " Error: _harness.go:646: Cluster failed starting machines: creating public ip: PUT https://management.azure.com/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kola-cluster-image-9fd5883bba/providers/Microsoft.Network/publicIPAddresses/ip-77e9fe3f16"
      L2: "--------------------------------------------------------------------------------"
      L3: "RESPONSE 409: 409 Conflict"
      L4: "ERROR CODE: ResourceGroupBeingDeleted"
      L5: "--------------------------------------------------------------------------------"
      L6: "{"
      L7: "__error__: {"
      L8: "__code__: __ResourceGroupBeingDeleted__,"
      L9: "__message__: __The resource group _kola-cluster-image-9fd5883bba_ is in deprovisioning state and cannot perform this operation.__"
      L10: "}"
      L11: "}"
      L12: "--------------------------------------------------------------------------------_"
      L13: " "
  ```


</details>


🟢 ok **sysext.disable-containerd**; Succeeded: azure (1)

🟢 ok **sysext.disable-docker**; Succeeded: azure (1)

🟢 ok **sysext.simple**; Succeeded: azure (1)

🟢 ok **systemd.journal.user**; Succeeded: azure (2); Failed: azure (1)

<details>

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for azure, run 1</summary>

  ```
      L1: " Error: _harness.go:646: Cluster failed starting machines: creating public ip: PUT https://management.azure.com/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kola-cluster-image-9fd5883bba/providers/Microsoft.Network/publicIPAddresses/ip-475e0fb30b"
      L2: "--------------------------------------------------------------------------------"
      L3: "RESPONSE 409: 409 Conflict"
      L4: "ERROR CODE: ResourceGroupBeingDeleted"
      L5: "--------------------------------------------------------------------------------"
      L6: "{"
      L7: "__error__: {"
      L8: "__code__: __ResourceGroupBeingDeleted__,"
      L9: "__message__: __The resource group _kola-cluster-image-9fd5883bba_ is in deprovisioning state and cannot perform this operation.__"
      L10: "}"
      L11: "}"
      L12: "--------------------------------------------------------------------------------_"
      L13: " "
  ```


</details>


🟢 ok **systemd.sysusers.gshadow**; Succeeded: azure (1)
