### Test report for 4081.3.7 / amd64

**Platforms tested**: azure

🟢 ok **bpf.ig**; Succeeded: azure (1)

🟢 ok **cl.basic**; Succeeded: azure (1)

🟢 ok **cl.cloudinit.basic**; Succeeded: azure (1)

🟢 ok **cl.cloudinit.multipart-mime**; Succeeded: azure (1)

🟢 ok **cl.cloudinit.script**; Succeeded: azure (1)

🟢 ok **cl.etcd-member.discovery**; Succeeded: azure (1)

🟢 ok **cl.etcd-member.etcdctlv3**; Succeeded: azure (1)

🟢 ok **cl.etcd-member.v2-backup-restore**; Succeeded: azure (1)

🟢 ok **cl.flannel.udp**; Succeeded: azure (1)

🟢 ok **cl.flannel.vxlan**; Succeeded: azure (2); Failed: azure (1)

<details>

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for azure, run 1</summary>

  ```
  Error: harness.go:646: Cluster failed starting machines: PUT https://management.azure.com/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kola-cluster-image-a8f2c786d2/providers/Microsoft.Compute/virtualMachines/ci-4081.3.7-a-969c92ec46
  --------------------------------------------------------------------------------
  RESPONSE 409: 409 Conflict
  ERROR CODE: OperationNotAllowed
  --------------------------------------------------------------------------------
  {
    "error": {
      "code": "OperationNotAllowed",
      "message": "Operation could not be completed as it results in exceeding approved standardDSv4Family Cores quota. Additional details - Deployment Model: Resource Manager, Location: northeurope, Current Limit: 65, Current Usage: 64, Additional Required: 2, (Minimum) New Limit Required: 66. Setup Alerts when Quota reaches threshold. Learn more at https://aka.ms/quotamonitoringalerting . Submit a request for Quota increase at https://aka.ms/ProdportalCRP/#blade/Microsoft_Azure_Capacity/UsageAndQuota.ReactView/Parameters/%7B%22subscriptionId%22:%220e46bd28-a80f-4d3a-8200-d9eb8d80cb2e%22,%22command%22:%22openQuotaApprovalBlade%22,%22quotas%22:[%7B%22location%22:%22northeurope%22,%22providerId%22:%22Microsoft.Compute%22,%22resourceName%22:%22standardDSv4Family%22,%22quotaRequest%22:%7B%22properties%22:%7B%22limit%22:66,%22unit%22:%22Count%22,%22name%22:%7B%22value%22:%22standardDSv4Family%22%7D%7D%7D%7D]%7D by specifying parameters listed in the 'Details' section for deployment to succeed. Please read more about quota limits at https://docs.microsoft.com/en-us/azure/azure-supportability/per-vm-quota-requests"
    }
  }
  --------------------------------------------------------------------------------
  ```


</details>
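
Many of the run-1 failures in this report are this same Azure capacity error rather than a product regression: the kola cluster asked for 2 more standardDSv4Family cores than the subscription's quota in northeurope allows, and the affected tests passed on retry. A quick sanity check of the figures quoted in the diagnostic above:

```python
# Quota figures copied from the OperationNotAllowed response above.
current_limit = 65        # approved standardDSv4Family core quota
current_usage = 64        # cores already in use in northeurope
additional_required = 2   # cores the test cluster tried to add

new_total = current_usage + additional_required
# The request would push usage past the limit, so Azure answers
# 409 Conflict and reports new_total as the minimum new limit.
assert new_total > current_limit
print("minimum new limit required:", new_total)
```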


🟢 ok **cl.ignition.kargs**; Succeeded: azure (1)

🟢 ok **cl.ignition.luks**; Succeeded: azure (1)

🟢 ok **cl.ignition.misc.empty**; Succeeded: azure (1)

🟢 ok **cl.ignition.symlink**; Succeeded: azure (1)

🟢 ok **cl.ignition.translation**; Succeeded: azure (1)

🟢 ok **cl.ignition.v1.btrfsroot**; Succeeded: azure (1)

🟢 ok **cl.ignition.v1.ext4root**; Succeeded: azure (1)

🟢 ok **cl.ignition.v1.groups**; Succeeded: azure (1)

🟢 ok **cl.ignition.v1.noop**; Succeeded: azure (1)

🟢 ok **cl.ignition.v1.once**; Succeeded: azure (1)

🟢 ok **cl.ignition.v1.users**; Succeeded: azure (1)

🟢 ok **cl.ignition.v1.xfsroot**; Succeeded: azure (1)

🟢 ok **cl.ignition.v2.btrfsroot**; Succeeded: azure (2); Failed: azure (1)

<details>

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for azure, run 1</summary>

  ```
  Error: harness.go:646: Cluster failed starting machines: PUT https://management.azure.com/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kola-cluster-image-a8f2c786d2/providers/Microsoft.Compute/virtualMachines/ci-4081.3.7-a-275293be23
  --------------------------------------------------------------------------------
  RESPONSE 409: 409 Conflict
  ERROR CODE: OperationNotAllowed
  --------------------------------------------------------------------------------
  {
    "error": {
      "code": "OperationNotAllowed",
      "message": "Operation could not be completed as it results in exceeding approved standardDSv4Family Cores quota. Additional details - Deployment Model: Resource Manager, Location: northeurope, Current Limit: 65, Current Usage: 64, Additional Required: 2, (Minimum) New Limit Required: 66. Setup Alerts when Quota reaches threshold. Learn more at https://aka.ms/quotamonitoringalerting . Submit a request for Quota increase at https://aka.ms/ProdportalCRP/#blade/Microsoft_Azure_Capacity/UsageAndQuota.ReactView/Parameters/%7B%22subscriptionId%22:%220e46bd28-a80f-4d3a-8200-d9eb8d80cb2e%22,%22command%22:%22openQuotaApprovalBlade%22,%22quotas%22:[%7B%22location%22:%22northeurope%22,%22providerId%22:%22Microsoft.Compute%22,%22resourceName%22:%22standardDSv4Family%22,%22quotaRequest%22:%7B%22properties%22:%7B%22limit%22:66,%22unit%22:%22Count%22,%22name%22:%7B%22value%22:%22standardDSv4Family%22%7D%7D%7D%7D]%7D by specifying parameters listed in the 'Details' section for deployment to succeed. Please read more about quota limits at https://docs.microsoft.com/en-us/azure/azure-supportability/per-vm-quota-requests"
    }
  }
  --------------------------------------------------------------------------------
  ```


</details>


🟢 ok **cl.ignition.v2.ext4root**; Succeeded: azure (1)

🟢 ok **cl.ignition.v2.noop**; Succeeded: azure (2); Failed: azure (1)

<details>

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for azure, run 1</summary>

  ```
  Error: harness.go:646: Cluster failed starting machines: PUT https://management.azure.com/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kola-cluster-image-a8f2c786d2/providers/Microsoft.Compute/virtualMachines/ci-4081.3.7-a-a2e8218059
  --------------------------------------------------------------------------------
  RESPONSE 409: 409 Conflict
  ERROR CODE: OperationNotAllowed
  --------------------------------------------------------------------------------
  {
    "error": {
      "code": "OperationNotAllowed",
      "message": "Operation could not be completed as it results in exceeding approved standardDSv4Family Cores quota. Additional details - Deployment Model: Resource Manager, Location: northeurope, Current Limit: 65, Current Usage: 64, Additional Required: 2, (Minimum) New Limit Required: 66. Setup Alerts when Quota reaches threshold. Learn more at https://aka.ms/quotamonitoringalerting . Submit a request for Quota increase at https://aka.ms/ProdportalCRP/#blade/Microsoft_Azure_Capacity/UsageAndQuota.ReactView/Parameters/%7B%22subscriptionId%22:%220e46bd28-a80f-4d3a-8200-d9eb8d80cb2e%22,%22command%22:%22openQuotaApprovalBlade%22,%22quotas%22:[%7B%22location%22:%22northeurope%22,%22providerId%22:%22Microsoft.Compute%22,%22resourceName%22:%22standardDSv4Family%22,%22quotaRequest%22:%7B%22properties%22:%7B%22limit%22:66,%22unit%22:%22Count%22,%22name%22:%7B%22value%22:%22standardDSv4Family%22%7D%7D%7D%7D]%7D by specifying parameters listed in the 'Details' section for deployment to succeed. Please read more about quota limits at https://docs.microsoft.com/en-us/azure/azure-supportability/per-vm-quota-requests"
    }
  }
  --------------------------------------------------------------------------------
  ```


</details>


🟢 ok **cl.ignition.v2.users**; Succeeded: azure (1)

🟢 ok **cl.ignition.v2.xfsroot**; Succeeded: azure (1)

🟢 ok **cl.ignition.v2_1.ext4checkexisting**; Succeeded: azure (1)

🟢 ok **cl.ignition.v2_1.swap**; Succeeded: azure (1)

🟢 ok **cl.ignition.v2_1.vfat**; Succeeded: azure (1)

🟢 ok **cl.internet**; Succeeded: azure (1)

🟢 ok **cl.locksmith.cluster**; Succeeded: azure (3); Failed: azure (1, 2)

<details>

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for azure, run 2</summary>

  ```
  Error: locksmith.go:184: [0] ssh unreachable or system not ready: context deadline exceeded [1] ssh unreachable or system not ready: context deadline exceeded [2] ssh unreachable or system not ready: context deadline exceeded
  ```

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for azure, run 1</summary>

  ```
  Error: harness.go:646: Cluster failed starting machines: PUT https://management.azure.com/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kola-cluster-image-a8f2c786d2/providers/Microsoft.Compute/virtualMachines/ci-4081.3.7-a-2366a0aff5
  --------------------------------------------------------------------------------
  RESPONSE 409: 409 Conflict
  ERROR CODE: OperationNotAllowed
  --------------------------------------------------------------------------------
  {
    "error": {
      "code": "OperationNotAllowed",
      "message": "Operation could not be completed as it results in exceeding approved standardDSv4Family Cores quota. Additional details - Deployment Model: Resource Manager, Location: northeurope, Current Limit: 65, Current Usage: 64, Additional Required: 2, (Minimum) New Limit Required: 66. Setup Alerts when Quota reaches threshold. Learn more at https://aka.ms/quotamonitoringalerting . Submit a request for Quota increase at https://aka.ms/ProdportalCRP/#blade/Microsoft_Azure_Capacity/UsageAndQuota.ReactView/Parameters/%7B%22subscriptionId%22:%220e46bd28-a80f-4d3a-8200-d9eb8d80cb2e%22,%22command%22:%22openQuotaApprovalBlade%22,%22quotas%22:[%7B%22location%22:%22northeurope%22,%22providerId%22:%22Microsoft.Compute%22,%22resourceName%22:%22standardDSv4Family%22,%22quotaRequest%22:%7B%22properties%22:%7B%22limit%22:66,%22unit%22:%22Count%22,%22name%22:%7B%22value%22:%22standardDSv4Family%22%7D%7D%7D%7D]%7D by specifying parameters listed in the 'Details' section for deployment to succeed. Please read more about quota limits at https://docs.microsoft.com/en-us/azure/azure-supportability/per-vm-quota-requests"
    }
  }
  --------------------------------------------------------------------------------
  ```


</details>


🟢 ok **cl.metadata.azure**; Succeeded: azure (2); Failed: azure (1)

<details>

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for azure, run 1</summary>

  ```
  Error: harness.go:646: Cluster failed starting machines: PUT https://management.azure.com/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kola-cluster-image-a8f2c786d2/providers/Microsoft.Compute/virtualMachines/ci-4081.3.7-a-23f186f8c6
  --------------------------------------------------------------------------------
  RESPONSE 409: 409 Conflict
  ERROR CODE: OperationNotAllowed
  --------------------------------------------------------------------------------
  {
    "error": {
      "code": "OperationNotAllowed",
      "message": "Operation could not be completed as it results in exceeding approved standardDSv4Family Cores quota. Additional details - Deployment Model: Resource Manager, Location: northeurope, Current Limit: 65, Current Usage: 64, Additional Required: 2, (Minimum) New Limit Required: 66. Setup Alerts when Quota reaches threshold. Learn more at https://aka.ms/quotamonitoringalerting . Submit a request for Quota increase at https://aka.ms/ProdportalCRP/#blade/Microsoft_Azure_Capacity/UsageAndQuota.ReactView/Parameters/%7B%22subscriptionId%22:%220e46bd28-a80f-4d3a-8200-d9eb8d80cb2e%22,%22command%22:%22openQuotaApprovalBlade%22,%22quotas%22:[%7B%22location%22:%22northeurope%22,%22providerId%22:%22Microsoft.Compute%22,%22resourceName%22:%22standardDSv4Family%22,%22quotaRequest%22:%7B%22properties%22:%7B%22limit%22:66,%22unit%22:%22Count%22,%22name%22:%7B%22value%22:%22standardDSv4Family%22%7D%7D%7D%7D]%7D by specifying parameters listed in the 'Details' section for deployment to succeed. Please read more about quota limits at https://docs.microsoft.com/en-us/azure/azure-supportability/per-vm-quota-requests"
    }
  }
  --------------------------------------------------------------------------------
  ```


</details>


🟢 ok **cl.network.initramfs.second-boot**; Succeeded: azure (1)

❌ not ok **cl.network.iptables**; Failed: azure (1, 2, 3, 4, 5)

<details>

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for azure, run 5</summary>

  ```
  Error: cluster.go:152: + sudo nft --json list ruleset | jq '.nftables[] | select(.rule) | .rule.expr[0].match.right'
  cluster.go:156: cmd sudo nft --json list ruleset | jq '.nftables[] | select(.rule) | .rule.expr[0].match.right' did not output 80
  ```

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for azure, run 4</summary>

  ```
  Error: cluster.go:152: + sudo nft --json list ruleset | jq '.nftables[] | select(.rule) | .rule.expr[0].match.right'
  cluster.go:156: cmd sudo nft --json list ruleset | jq '.nftables[] | select(.rule) | .rule.expr[0].match.right' did not output 80
  ```

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for azure, run 3</summary>

  ```
  Error: cluster.go:152: + sudo nft --json list ruleset | jq '.nftables[] | select(.rule) | .rule.expr[0].match.right'
  cluster.go:156: cmd sudo nft --json list ruleset | jq '.nftables[] | select(.rule) | .rule.expr[0].match.right' did not output 80
  ```

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for azure, run 2</summary>

  ```
  Error: cluster.go:152: + sudo nft --json list ruleset | jq '.nftables[] | select(.rule) | .rule.expr[0].match.right'
  cluster.go:156: cmd sudo nft --json list ruleset | jq '.nftables[] | select(.rule) | .rule.expr[0].match.right' did not output 80
  ```

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for azure, run 1</summary>

  ```
  Error: harness.go:646: Cluster failed starting machines: PUT https://management.azure.com/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kola-cluster-image-a8f2c786d2/providers/Microsoft.Compute/virtualMachines/ci-4081.3.7-a-cdb1bbb3a7
  --------------------------------------------------------------------------------
  RESPONSE 409: 409 Conflict
  ERROR CODE: OperationNotAllowed
  --------------------------------------------------------------------------------
  {
    "error": {
      "code": "OperationNotAllowed",
      "message": "Operation could not be completed as it results in exceeding approved standardDSv4Family Cores quota. Additional details - Deployment Model: Resource Manager, Location: northeurope, Current Limit: 65, Current Usage: 64, Additional Required: 2, (Minimum) New Limit Required: 66. Setup Alerts when Quota reaches threshold. Learn more at https://aka.ms/quotamonitoringalerting . Submit a request for Quota increase at https://aka.ms/ProdportalCRP/#blade/Microsoft_Azure_Capacity/UsageAndQuota.ReactView/Parameters/%7B%22subscriptionId%22:%220e46bd28-a80f-4d3a-8200-d9eb8d80cb2e%22,%22command%22:%22openQuotaApprovalBlade%22,%22quotas%22:[%7B%22location%22:%22northeurope%22,%22providerId%22:%22Microsoft.Compute%22,%22resourceName%22:%22standardDSv4Family%22,%22quotaRequest%22:%7B%22properties%22:%7B%22limit%22:66,%22unit%22:%22Count%22,%22name%22:%7B%22value%22:%22standardDSv4Family%22%7D%7D%7D%7D]%7D by specifying parameters listed in the 'Details' section for deployment to succeed. Please read more about quota limits at https://docs.microsoft.com/en-us/azure/azure-supportability/per-vm-quota-requests"
    }
  }
  --------------------------------------------------------------------------------
  ```


</details>
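
The cl.network.iptables diagnostics above show the harness check itself: it dumps the ruleset with `nft --json list ruleset` and pipes it through the jq filter `.nftables[] | select(.rule) | .rule.expr[0].match.right`, expecting `80` (the destination port of the rule under test) in the output. A Python sketch of what that filter extracts, using a hypothetical minimal ruleset excerpt (the real test inspects the live ruleset on the booted machine):

```python
import json

# Hypothetical minimal excerpt of `nft --json list ruleset` output,
# for illustration only; field layout follows the libnftables JSON form.
ruleset = json.loads("""
{"nftables": [
  {"metainfo": {"json_schema_version": 1}},
  {"rule": {"family": "inet", "table": "filter", "chain": "INPUT",
            "expr": [{"match": {
              "left": {"payload": {"protocol": "tcp", "field": "dport"}},
              "op": "==",
              "right": 80}}]}}
]}
""")

# Python equivalent of the failing jq filter:
#   .nftables[] | select(.rule) | .rule.expr[0].match.right
matches = [
    entry["rule"]["expr"][0]["match"]["right"]
    for entry in ruleset["nftables"]
    if "rule" in entry
]
print(matches)  # the kola check expects 80 to appear here
```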


🟢 ok **cl.network.wireguard**; Succeeded: azure (1)

🟢 ok **cl.osreset.ignition-rerun**; Succeeded: azure (1)

🟢 ok **cl.overlay.cleanup**; Succeeded: azure (2); Failed: azure (1)

<details>

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for azure, run 1</summary>

  ```
  Error: harness.go:646: Cluster failed starting machines: PUT https://management.azure.com/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kola-cluster-image-a8f2c786d2/providers/Microsoft.Compute/virtualMachines/ci-4081.3.7-a-ac18a0daf3
  --------------------------------------------------------------------------------
  RESPONSE 409: 409 Conflict
  ERROR CODE: OperationNotAllowed
  --------------------------------------------------------------------------------
  {
    "error": {
      "code": "OperationNotAllowed",
      "message": "Operation could not be completed as it results in exceeding approved standardDSv4Family Cores quota. Additional details - Deployment Model: Resource Manager, Location: northeurope, Current Limit: 65, Current Usage: 64, Additional Required: 2, (Minimum) New Limit Required: 66. Setup Alerts when Quota reaches threshold. Learn more at https://aka.ms/quotamonitoringalerting . Submit a request for Quota increase at https://aka.ms/ProdportalCRP/#blade/Microsoft_Azure_Capacity/UsageAndQuota.ReactView/Parameters/%7B%22subscriptionId%22:%220e46bd28-a80f-4d3a-8200-d9eb8d80cb2e%22,%22command%22:%22openQuotaApprovalBlade%22,%22quotas%22:[%7B%22location%22:%22northeurope%22,%22providerId%22:%22Microsoft.Compute%22,%22resourceName%22:%22standardDSv4Family%22,%22quotaRequest%22:%7B%22properties%22:%7B%22limit%22:66,%22unit%22:%22Count%22,%22name%22:%7B%22value%22:%22standardDSv4Family%22%7D%7D%7D%7D]%7D by specifying parameters listed in the 'Details' section for deployment to succeed. Please read more about quota limits at https://docs.microsoft.com/en-us/azure/azure-supportability/per-vm-quota-requests"
    }
  }
  --------------------------------------------------------------------------------
  ```


</details>


🟢 ok **cl.swap_activation**; Succeeded: azure (1)

🟢 ok **cl.toolbox.dnf-install**; Succeeded: azure (1)

🟢 ok **cl.update.badverity**; Succeeded: azure (1)

🟢 ok **cl.update.reboot**; Succeeded: azure (1)

🟢 ok **cl.users.shells**; Succeeded: azure (1)

🟢 ok **cl.verity**; Succeeded: azure (1)

🟢 ok **coreos.auth.verify**; Succeeded: azure (1)

🟢 ok **coreos.ignition.groups**; Succeeded: azure (1)

🟢 ok **coreos.ignition.once**; Succeeded: azure (2); Failed: azure (1)

<details>

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for azure, run 1</summary>

  ```
  Error: harness.go:646: Cluster failed starting machines: PUT https://management.azure.com/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kola-cluster-image-a8f2c786d2/providers/Microsoft.Compute/virtualMachines/ci-4081.3.7-a-0033b3a93d
  --------------------------------------------------------------------------------
  RESPONSE 409: 409 Conflict
  ERROR CODE: OperationNotAllowed
  --------------------------------------------------------------------------------
  {
    "error": {
      "code": "OperationNotAllowed",
      "message": "Operation could not be completed as it results in exceeding approved standardDSv4Family Cores quota. Additional details - Deployment Model: Resource Manager, Location: northeurope, Current Limit: 65, Current Usage: 64, Additional Required: 2, (Minimum) New Limit Required: 66. Setup Alerts when Quota reaches threshold. Learn more at https://aka.ms/quotamonitoringalerting . Submit a request for Quota increase at https://aka.ms/ProdportalCRP/#blade/Microsoft_Azure_Capacity/UsageAndQuota.ReactView/Parameters/%7B%22subscriptionId%22:%220e46bd28-a80f-4d3a-8200-d9eb8d80cb2e%22,%22command%22:%22openQuotaApprovalBlade%22,%22quotas%22:[%7B%22location%22:%22northeurope%22,%22providerId%22:%22Microsoft.Compute%22,%22resourceName%22:%22standardDSv4Family%22,%22quotaRequest%22:%7B%22properties%22:%7B%22limit%22:66,%22unit%22:%22Count%22,%22name%22:%7B%22value%22:%22standardDSv4Family%22%7D%7D%7D%7D]%7D by specifying parameters listed in the 'Details' section for deployment to succeed. Please read more about quota limits at https://docs.microsoft.com/en-us/azure/azure-supportability/per-vm-quota-requests"
    }
  }
  --------------------------------------------------------------------------------
  ```


</details>


🟢 ok **coreos.ignition.resource.local**; Succeeded: azure (1)

🟢 ok **coreos.ignition.resource.remote**; Succeeded: azure (1)

🟢 ok **coreos.ignition.security.tls**; Succeeded: azure (2); Failed: azure (1)

<details>

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for azure, run 1</summary>

  ```
  Error: harness.go:646: Cluster failed starting machines: PUT https://management.azure.com/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kola-cluster-image-a8f2c786d2/providers/Microsoft.Compute/virtualMachines/ci-4081.3.7-a-6c92e41651
  --------------------------------------------------------------------------------
  RESPONSE 409: 409 Conflict
  ERROR CODE: OperationNotAllowed
  --------------------------------------------------------------------------------
  {
    "error": {
      "code": "OperationNotAllowed",
      "message": "Operation could not be completed as it results in exceeding approved standardDSv4Family Cores quota. Additional details - Deployment Model: Resource Manager, Location: northeurope, Current Limit: 65, Current Usage: 64, Additional Required: 2, (Minimum) New Limit Required: 66. Setup Alerts when Quota reaches threshold. Learn more at https://aka.ms/quotamonitoringalerting . Submit a request for Quota increase at https://aka.ms/ProdportalCRP/#blade/Microsoft_Azure_Capacity/UsageAndQuota.ReactView/Parameters/%7B%22subscriptionId%22:%220e46bd28-a80f-4d3a-8200-d9eb8d80cb2e%22,%22command%22:%22openQuotaApprovalBlade%22,%22quotas%22:[%7B%22location%22:%22northeurope%22,%22providerId%22:%22Microsoft.Compute%22,%22resourceName%22:%22standardDSv4Family%22,%22quotaRequest%22:%7B%22properties%22:%7B%22limit%22:66,%22unit%22:%22Count%22,%22name%22:%7B%22value%22:%22standardDSv4Family%22%7D%7D%7D%7D]%7D by specifying parameters listed in the 'Details' section for deployment to succeed. Please read more about quota limits at https://docs.microsoft.com/en-us/azure/azure-supportability/per-vm-quota-requests"
    }
  }
  --------------------------------------------------------------------------------
  ```


</details>


❌ not ok **coreos.ignition.ssh.key**; Failed: azure (1, 2, 3, 4, 5)

<details>

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for azure, run 5</summary>

  ```
  Error: harness.go:646: Cluster failed starting machines: machine "ci-4081.3.7-a-0e0011fdcb" failed to start: ssh journalctl failed: time limit exceeded: dial tcp 20.234.106.215:22: i/o timeout
  ```

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for azure, run 4</summary>

  ```
  Error: harness.go:646: Cluster failed starting machines: machine "ci-4081.3.7-a-f5324c58b0" failed to start: ssh journalctl failed: time limit exceeded: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
  ```

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for azure, run 3</summary>

  ```
  Error: harness.go:646: Cluster failed starting machines: machine "ci-4081.3.7-a-82eda5532f" failed to start: ssh journalctl failed: time limit exceeded: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
  ```

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for azure, run 2</summary>

  ```
  Error: harness.go:646: Cluster failed starting machines: machine "ci-4081.3.7-a-a0dd95bf5a" failed to start: ssh journalctl failed: time limit exceeded: dial tcp 40.127.197.50:22: i/o timeout
  ```

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for azure, run 1</summary>

  ```
  Error: harness.go:646: Cluster failed starting machines: machine "ci-4081.3.7-a-e9dfc428f4" failed to start: ssh journalctl failed: time limit exceeded: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
  ```


</details>


🟢 ok **coreos.ignition.systemd.enable-service**; Succeeded: azure (2); Failed: azure (1)

<details>

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for azure, run 1</summary>

  ```
  Error: harness.go:646: Cluster failed starting machines: PUT https://management.azure.com/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kola-cluster-image-a8f2c786d2/providers/Microsoft.Compute/virtualMachines/ci-4081.3.7-a-caec8e44c9
  --------------------------------------------------------------------------------
  RESPONSE 409: 409 Conflict
  ERROR CODE: OperationNotAllowed
  --------------------------------------------------------------------------------
  {
    "error": {
      "code": "OperationNotAllowed",
      "message": "Operation could not be completed as it results in exceeding approved standardDSv4Family Cores quota. Additional details - Deployment Model: Resource Manager, Location: northeurope, Current Limit: 65, Current Usage: 64, Additional Required: 2, (Minimum) New Limit Required: 66. Setup Alerts when Quota reaches threshold. Learn more at https://aka.ms/quotamonitoringalerting . Submit a request for Quota increase at https://aka.ms/ProdportalCRP/#blade/Microsoft_Azure_Capacity/UsageAndQuota.ReactView/Parameters/%7B%22subscriptionId%22:%220e46bd28-a80f-4d3a-8200-d9eb8d80cb2e%22,%22command%22:%22openQuotaApprovalBlade%22,%22quotas%22:[%7B%22location%22:%22northeurope%22,%22providerId%22:%22Microsoft.Compute%22,%22resourceName%22:%22standardDSv4Family%22,%22quotaRequest%22:%7B%22properties%22:%7B%22limit%22:66,%22unit%22:%22Count%22,%22name%22:%7B%22value%22:%22standardDSv4Family%22%7D%7D%7D%7D]%7D by specifying parameters listed in the 'Details' section for deployment to succeed. Please read more about quota limits at https://docs.microsoft.com/en-us/azure/azure-supportability/per-vm-quota-requests"
    }
  }
  --------------------------------------------------------------------------------
  ```


</details>


🟢 ok **coreos.locksmith.reboot**; Succeeded: azure (3); Failed: azure (1, 2)

<details>

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for azure, run 2</summary>

  ```
  Error: locksmith.go:141: failed to check rebooted machine: ssh unreachable or system not ready: context deadline exceeded
  ```

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for azure, run 1</summary>

  ```
  Error: harness.go:646: Cluster failed starting machines: PUT https://management.azure.com/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kola-cluster-image-a8f2c786d2/providers/Microsoft.Compute/virtualMachines/ci-4081.3.7-a-0361812c42
  --------------------------------------------------------------------------------
  RESPONSE 409: 409 Conflict
  ERROR CODE: OperationNotAllowed
  --------------------------------------------------------------------------------
  {
    "error": {
      "code": "OperationNotAllowed",
      "message": "Operation could not be completed as it results in exceeding approved standardDSv4Family Cores quota. Additional details - Deployment Model: Resource Manager, Location: northeurope, Current Limit: 65, Current Usage: 64, Additional Required: 2, (Minimum) New Limit Required: 66. Setup Alerts when Quota reaches threshold. Learn more at https://aka.ms/quotamonitoringalerting . Submit a request for Quota increase at https://aka.ms/ProdportalCRP/#blade/Microsoft_Azure_Capacity/UsageAndQuota.ReactView/Parameters/%7B%22subscriptionId%22:%220e46bd28-a80f-4d3a-8200-d9eb8d80cb2e%22,%22command%22:%22openQuotaApprovalBlade%22,%22quotas%22:[%7B%22location%22:%22northeurope%22,%22providerId%22:%22Microsoft.Compute%22,%22resourceName%22:%22standardDSv4Family%22,%22quotaRequest%22:%7B%22properties%22:%7B%22limit%22:66,%22unit%22:%22Count%22,%22name%22:%7B%22value%22:%22standardDSv4Family%22%7D%7D%7D%7D]%7D by specifying parameters listed in the 'Details' section for deployment to succeed. Please read more about quota limits at https://docs.microsoft.com/en-us/azure/azure-supportability/per-vm-quota-requests"
    }
  }
  --------------------------------------------------------------------------------
  ```


</details>


🟢 ok **coreos.locksmith.tls**; Succeeded: azure (1)

🟢 ok **coreos.selinux.boolean**; Succeeded: azure (1)

🟢 ok **coreos.selinux.enforce**; Succeeded: azure (1)

🟢 ok **coreos.tls.fetch-urls**; Succeeded: azure (1)

🟢 ok **coreos.update.badusr**; Succeeded: azure (3); Failed: azure (1, 2)

<details>

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for azure, run 2</summary>

  ```
  Error: update.go:168: ssh unreachable or system not ready: failure checking if machine is running: systemctl is-system-running returned stdout: "", stderr: "", err: dial tcp 40.115.97.188:22: i/o timeout, systemctl list-jobs returned stdout: "", stderr: "", err: dial tcp 40.115.97.188:22: i/o timeout
  ```

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for azure, run 1</summary>

  ```
      L1: " Error: _harness.go:646: Cluster failed starting machines: PUT https://management.azure.com/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kola-cluster-image-a8f2c786d2/providers/Mic?rosoft.Compute/virtualMachines/ci-4081.3.7-a-d27abccc12"
      L2: "--------------------------------------------------------------------------------"
      L3: "RESPONSE 409: 409 Conflict"
      L4: "ERROR CODE: OperationNotAllowed"
      L5: "--------------------------------------------------------------------------------"
      L6: "{"
      L7: "__error__: {"
      L8: "__code__: __OperationNotAllowed__,"
      L9: "__message__: __Operation could not be completed as it results in exceeding approved standardDSv4Family Cores quota. Additional details - Deployment Model: Resource Manager, Location: northeurope, Current Limit: 65, Current Usage: 64, Additional Required: 2, (Minimum) New Limit Required: 66. Setup Alerts when Quota reaches threshold. Learn more at https://aka.ms/quotamonitoringalerting . Submit a request for Quota increase at https://aka.ms/ProdportalCRP/#blade/Microsoft_Azure_Capacity/UsageAndQuota.ReactView/Parameters/%7B%22subscriptionId%22:%220e46bd28-a80f-4d3a-8200-d9eb8d80cb2e%22,%22command%22:%22openQuotaApprovalBlade%22,%22quotas%22:[%7B%22location%22:%22northeurope%22,%22providerId%22:%22Microsoft.Compute%22,%22resourceName%22:%22standardDSv4Family%22,%22quotaRequest%22:%7B%22properties%22:%7B%22limit%22:66,%22unit%22:%22Count%22,%22name%22:%7B%22value%22:%22standardDSv4Family%22%7D%7D%7D%7D]%7D by specifying parameters listed in the 'Details' section for deployment to succeed. Please read more about quota limits at https://docs.microsoft.com/en-us/azure/azure-supportability/per-vm-quota-requests__"
      L10: "}"
      L11: "}"
      L12: "--------------------------------------------------------------------------------_"
      L13: " "
  ```


</details>


🟢 ok **docker.base**; Succeeded: azure (1)

🟢 ok **docker.btrfs-storage**; Succeeded: azure (1)

🟢 ok **docker.containerd-restart**; Succeeded: azure (1)

🟢 ok **docker.enable-service.sysext**; Succeeded: azure (2); Failed: azure (1)

<details>

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for azure, run 1</summary>

  ```
      L1: " Error: _harness.go:646: Cluster failed starting machines: PUT https://management.azure.com/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kola-cluster-image-a8f2c786d2/providers/Microsoft.Compute/virtualMachines/ci-4081.3.7-a-7692fd079e"
      L2: "--------------------------------------------------------------------------------"
      L3: "RESPONSE 409: 409 Conflict"
      L4: "ERROR CODE: OperationNotAllowed"
      L5: "--------------------------------------------------------------------------------"
      L6: "{"
      L7: "__error__: {"
      L8: "__code__: __OperationNotAllowed__,"
      L9: "__message__: __Operation could not be completed as it results in exceeding approved standardDSv4Family Cores quota. Additional details - Deployment Model: Resource Manager, Location: northeurope, Current Limit: 65, Current Usage: 64, Additional Required: 2, (Minimum) New Limit Required: 66. Setup Alerts when Quota reaches threshold. Learn more at https://aka.ms/quotamonitoringalerting . Submit a request for Quota increase at https://aka.ms/ProdportalCRP/#blade/Microsoft_Azure_Capacity/UsageAndQuota.ReactView/Parameters/%7B%22subscriptionId%22:%220e46bd28-a80f-4d3a-8200-d9eb8d80cb2e%22,%22command%22:%22openQuotaApprovalBlade%22,%22quotas%22:[%7B%22location%22:%22northeurope%22,%22providerId%22:%22Microsoft.Compute%22,%22resourceName%22:%22standardDSv4Family%22,%22quotaRequest%22:%7B%22properties%22:%7B%22limit%22:66,%22unit%22:%22Count%22,%22name%22:%7B%22value%22:%22standardDSv4Family%22%7D%7D%7D%7D]%7D by specifying parameters listed in the 'Details' section for deployment to succeed. Please read more about quota limits at https://docs.microsoft.com/en-us/azure/azure-supportability/per-vm-quota-requests__"
      L10: "}"
      L11: "}"
      L12: "--------------------------------------------------------------------------------_"
      L13: " "
  ```


</details>


🟢 ok **docker.lib-coreos-dockerd-compat**; Succeeded: azure (1)

🟢 ok **docker.network-openbsd-nc**; Succeeded: azure (2); Failed: azure (1)

<details>

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for azure, run 1</summary>

  ```
      L1: " Error: _harness.go:646: Cluster failed starting machines: PUT https://management.azure.com/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kola-cluster-image-a8f2c786d2/providers/Microsoft.Compute/virtualMachines/ci-4081.3.7-a-b53e2f0c5c"
      L2: "--------------------------------------------------------------------------------"
      L3: "RESPONSE 409: 409 Conflict"
      L4: "ERROR CODE: OperationNotAllowed"
      L5: "--------------------------------------------------------------------------------"
      L6: "{"
      L7: "__error__: {"
      L8: "__code__: __OperationNotAllowed__,"
      L9: "__message__: __Operation could not be completed as it results in exceeding approved standardDSv4Family Cores quota. Additional details - Deployment Model: Resource Manager, Location: northeurope, Current Limit: 65, Current Usage: 64, Additional Required: 2, (Minimum) New Limit Required: 66. Setup Alerts when Quota reaches threshold. Learn more at https://aka.ms/quotamonitoringalerting . Submit a request for Quota increase at https://aka.ms/ProdportalCRP/#blade/Microsoft_Azure_Capacity/UsageAndQuota.ReactView/Parameters/%7B%22subscriptionId%22:%220e46bd28-a80f-4d3a-8200-d9eb8d80cb2e%22,%22command%22:%22openQuotaApprovalBlade%22,%22quotas%22:[%7B%22location%22:%22northeurope%22,%22providerId%22:%22Microsoft.Compute%22,%22resourceName%22:%22standardDSv4Family%22,%22quotaRequest%22:%7B%22properties%22:%7B%22limit%22:66,%22unit%22:%22Count%22,%22name%22:%7B%22value%22:%22standardDSv4Family%22%7D%7D%7D%7D]%7D by specifying parameters listed in the 'Details' section for deployment to succeed. Please read more about quota limits at https://docs.microsoft.com/en-us/azure/azure-supportability/per-vm-quota-requests__"
      L10: "}"
      L11: "}"
      L12: "--------------------------------------------------------------------------------_"
      L13: " "
  ```


</details>


🟢 ok **docker.selinux**; Succeeded: azure (2); Failed: azure (1)

<details>

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for azure, run 1</summary>

  ```
      L1: " Error: _harness.go:646: Cluster failed starting machines: PUT https://management.azure.com/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kola-cluster-image-a8f2c786d2/providers/Microsoft.Compute/virtualMachines/ci-4081.3.7-a-5f96cb528e"
      L2: "--------------------------------------------------------------------------------"
      L3: "RESPONSE 409: 409 Conflict"
      L4: "ERROR CODE: OperationNotAllowed"
      L5: "--------------------------------------------------------------------------------"
      L6: "{"
      L7: "__error__: {"
      L8: "__code__: __OperationNotAllowed__,"
      L9: "__message__: __Operation could not be completed as it results in exceeding approved standardDSv4Family Cores quota. Additional details - Deployment Model: Resource Manager, Location: northeurope, Current Limit: 65, Current Usage: 64, Additional Required: 2, (Minimum) New Limit Required: 66. Setup Alerts when Quota reaches threshold. Learn more at https://aka.ms/quotamonitoringalerting . Submit a request for Quota increase at https://aka.ms/ProdportalCRP/#blade/Microsoft_Azure_Capacity/UsageAndQuota.ReactView/Parameters/%7B%22subscriptionId%22:%220e46bd28-a80f-4d3a-8200-d9eb8d80cb2e%22,%22command%22:%22openQuotaApprovalBlade%22,%22quotas%22:[%7B%22location%22:%22northeurope%22,%22providerId%22:%22Microsoft.Compute%22,%22resourceName%22:%22standardDSv4Family%22,%22quotaRequest%22:%7B%22properties%22:%7B%22limit%22:66,%22unit%22:%22Count%22,%22name%22:%7B%22value%22:%22standardDSv4Family%22%7D%7D%7D%7D]%7D by specifying parameters listed in the 'Details' section for deployment to succeed. Please read more about quota limits at https://docs.microsoft.com/en-us/azure/azure-supportability/per-vm-quota-requests__"
      L10: "}"
      L11: "}"
      L12: "--------------------------------------------------------------------------------_"
      L13: " "
  ```


</details>


🟢 ok **docker.userns**; Succeeded: azure (2); Failed: azure (1)

<details>

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for azure, run 1</summary>

  ```
      L1: " Error: _harness.go:646: Cluster failed starting machines: PUT https://management.azure.com/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kola-cluster-image-a8f2c786d2/providers/Microsoft.Compute/virtualMachines/ci-4081.3.7-a-318f32fcd2"
      L2: "--------------------------------------------------------------------------------"
      L3: "RESPONSE 409: 409 Conflict"
      L4: "ERROR CODE: OperationNotAllowed"
      L5: "--------------------------------------------------------------------------------"
      L6: "{"
      L7: "__error__: {"
      L8: "__code__: __OperationNotAllowed__,"
      L9: "__message__: __Operation could not be completed as it results in exceeding approved standardDSv4Family Cores quota. Additional details - Deployment Model: Resource Manager, Location: northeurope, Current Limit: 65, Current Usage: 64, Additional Required: 2, (Minimum) New Limit Required: 66. Setup Alerts when Quota reaches threshold. Learn more at https://aka.ms/quotamonitoringalerting . Submit a request for Quota increase at https://aka.ms/ProdportalCRP/#blade/Microsoft_Azure_Capacity/UsageAndQuota.ReactView/Parameters/%7B%22subscriptionId%22:%220e46bd28-a80f-4d3a-8200-d9eb8d80cb2e%22,%22command%22:%22openQuotaApprovalBlade%22,%22quotas%22:[%7B%22location%22:%22northeurope%22,%22providerId%22:%22Microsoft.Compute%22,%22resourceName%22:%22standardDSv4Family%22,%22quotaRequest%22:%7B%22properties%22:%7B%22limit%22:66,%22unit%22:%22Count%22,%22name%22:%7B%22value%22:%22standardDSv4Family%22%7D%7D%7D%7D]%7D by specifying parameters listed in the 'Details' section for deployment to succeed. Please read more about quota limits at https://docs.microsoft.com/en-us/azure/azure-supportability/per-vm-quota-requests__"
      L10: "}"
      L11: "}"
      L12: "--------------------------------------------------------------------------------_"
      L13: " "
  ```


</details>


❌ not ok **extra-test.[Standard_NC6s_v3].cl.internet**; Failed: azure (1, 2, 3, 4, 5)

<details>

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for azure, run 5</summary>

  ```
      L1: " Error: _harness.go:646: Cluster failed starting machines: PUT https://management.azure.com/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kola-cluster-image-925d7d3577/providers/Microsoft.Compute/virtualMachines/ci-4081.3.7-a-162ee4a450"
      L2: "--------------------------------------------------------------------------------"
      L3: "RESPONSE 409: 409 Conflict"
      L4: "ERROR CODE: OperationNotAllowed"
      L5: "--------------------------------------------------------------------------------"
      L6: "{"
      L7: "__error__: {"
      L8: "__code__: __OperationNotAllowed__,"
      L9: "__message__: __Operation could not be completed as it results in exceeding approved standardNCSv3Family Cores quota. Additional details - Deployment Model: Resource Manager, Location: northeurope, Current Limit: 0, Current Usage: 0, Additional Required: 6, (Minimum) New Limit Required: 6. Setup Alerts when Quota reaches threshold. Learn more at https://aka.ms/quotamonitoringalerting . Submit a request for Quota increase at https://aka.ms/ProdportalCRP/#blade/Microsoft_Azure_Capacity/UsageAndQuota.ReactView/Parameters/%7B%22subscriptionId%22:%220e46bd28-a80f-4d3a-8200-d9eb8d80cb2e%22,%22command%22:%22openQuotaApprovalBlade%22,%22quotas%22:[%7B%22location%22:%22northeurope%22,%22providerId%22:%22Microsoft.Compute%22,%22resourceName%22:%22standardNCSv3Family%22,%22quotaRequest%22:%7B%22properties%22:%7B%22limit%22:6,%22unit%22:%22Count%22,%22name%22:%7B%22value%22:%22standardNCSv3Family%22%7D%7D%7D%7D]%7D by specifying parameters listed in the 'Details' section for deployment to succeed. Please read more about quota limits at https://docs.microsoft.com/en-us/azure/azure-supportability/per-vm-quota-requests__"
      L10: "}"
      L11: "}"
      L12: "--------------------------------------------------------------------------------_"
      L13: " "
  ```

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for azure, run 4</summary>

  ```
      L1: " Error: _harness.go:646: Cluster failed starting machines: PUT https://management.azure.com/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kola-cluster-image-be203d8827/providers/Microsoft.Compute/virtualMachines/ci-4081.3.7-a-34c3f858ce"
      L2: "--------------------------------------------------------------------------------"
      L3: "RESPONSE 409: 409 Conflict"
      L4: "ERROR CODE: OperationNotAllowed"
      L5: "--------------------------------------------------------------------------------"
      L6: "{"
      L7: "__error__: {"
      L8: "__code__: __OperationNotAllowed__,"
      L9: "__message__: __Operation could not be completed as it results in exceeding approved standardNCSv3Family Cores quota. Additional details - Deployment Model: Resource Manager, Location: northeurope, Current Limit: 0, Current Usage: 0, Additional Required: 6, (Minimum) New Limit Required: 6. Setup Alerts when Quota reaches threshold. Learn more at https://aka.ms/quotamonitoringalerting . Submit a request for Quota increase at https://aka.ms/ProdportalCRP/#blade/Microsoft_Azure_Capacity/UsageAndQuota.ReactView/Parameters/%7B%22subscriptionId%22:%220e46bd28-a80f-4d3a-8200-d9eb8d80cb2e%22,%22command%22:%22openQuotaApprovalBlade%22,%22quotas%22:[%7B%22location%22:%22northeurope%22,%22providerId%22:%22Microsoft.Compute%22,%22resourceName%22:%22standardNCSv3Family%22,%22quotaRequest%22:%7B%22properties%22:%7B%22limit%22:6,%22unit%22:%22Count%22,%22name%22:%7B%22value%22:%22standardNCSv3Family%22%7D%7D%7D%7D]%7D by specifying parameters listed in the 'Details' section for deployment to succeed. Please read more about quota limits at https://docs.microsoft.com/en-us/azure/azure-supportability/per-vm-quota-requests__"
      L10: "}"
      L11: "}"
      L12: "--------------------------------------------------------------------------------_"
      L13: " "
  ```

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for azure, run 3</summary>

  ```
      L1: " Error: _harness.go:646: Cluster failed starting machines: PUT https://management.azure.com/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kola-cluster-image-7058e03df8/providers/Microsoft.Compute/virtualMachines/ci-4081.3.7-a-b39ffd09bb"
      L2: "--------------------------------------------------------------------------------"
      L3: "RESPONSE 409: 409 Conflict"
      L4: "ERROR CODE: OperationNotAllowed"
      L5: "--------------------------------------------------------------------------------"
      L6: "{"
      L7: "__error__: {"
      L8: "__code__: __OperationNotAllowed__,"
      L9: "__message__: __Operation could not be completed as it results in exceeding approved standardNCSv3Family Cores quota. Additional details - Deployment Model: Resource Manager, Location: northeurope, Current Limit: 0, Current Usage: 0, Additional Required: 6, (Minimum) New Limit Required: 6. Setup Alerts when Quota reaches threshold. Learn more at https://aka.ms/quotamonitoringalerting . Submit a request for Quota increase at https://aka.ms/ProdportalCRP/#blade/Microsoft_Azure_Capacity/UsageAndQuota.ReactView/Parameters/%7B%22subscriptionId%22:%220e46bd28-a80f-4d3a-8200-d9eb8d80cb2e%22,%22command%22:%22openQuotaApprovalBlade%22,%22quotas%22:[%7B%22location%22:%22northeurope%22,%22providerId%22:%22Microsoft.Compute%22,%22resourceName%22:%22standardNCSv3Family%22,%22quotaRequest%22:%7B%22properties%22:%7B%22limit%22:6,%22unit%22:%22Count%22,%22name%22:%7B%22value%22:%22standardNCSv3Family%22%7D%7D%7D%7D]%7D by specifying parameters listed in the 'Details' section for deployment to succeed. Please read more about quota limits at https://docs.microsoft.com/en-us/azure/azure-supportability/per-vm-quota-requests__"
      L10: "}"
      L11: "}"
      L12: "--------------------------------------------------------------------------------_"
      L13: " "
  ```

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for azure, run 2</summary>

  ```
      L1: " Error: _harness.go:646: Cluster failed starting machines: PUT https://management.azure.com/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kola-cluster-image-95a9cc598b/providers/Microsoft.Compute/virtualMachines/ci-4081.3.7-a-dc034dcaa2"
      L2: "--------------------------------------------------------------------------------"
      L3: "RESPONSE 409: 409 Conflict"
      L4: "ERROR CODE: OperationNotAllowed"
      L5: "--------------------------------------------------------------------------------"
      L6: "{"
      L7: "__error__: {"
      L8: "__code__: __OperationNotAllowed__,"
      L9: "__message__: __Operation could not be completed as it results in exceeding approved standardNCSv3Family Cores quota. Additional details - Deployment Model: Resource Manager, Location: northeurope, Current Limit: 0, Current Usage: 0, Additional Required: 6, (Minimum) New Limit Required: 6. Setup Alerts when Quota reaches threshold. Learn more at https://aka.ms/quotamonitoringalerting . Submit a request for Quota increase at https://aka.ms/ProdportalCRP/#blade/Microsoft_Azure_Capacity/UsageAndQuota.ReactView/Parameters/%7B%22subscriptionId%22:%220e46bd28-a80f-4d3a-8200-d9eb8d80cb2e%22,%22command%22:%22openQuotaApprovalBlade%22,%22quotas%22:[%7B%22location%22:%22northeurope%22,%22providerId%22:%22Microsoft.Compute%22,%22resourceName%22:%22standardNCSv3Family%22,%22quotaRequest%22:%7B%22properties%22:%7B%22limit%22:6,%22unit%22:%22Count%22,%22name%22:%7B%22value%22:%22standardNCSv3Family%22%7D%7D%7D%7D]%7D by specifying parameters listed in the 'Details' section for deployment to succeed. Please read more about quota limits at https://docs.microsoft.com/en-us/azure/azure-supportability/per-vm-quota-requests__"
      L10: "}"
      L11: "}"
      L12: "--------------------------------------------------------------------------------_"
      L13: " "
  ```

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for azure, run 1</summary>

  ```
      L1: " Error: _harness.go:646: Cluster failed starting machines: PUT https://management.azure.com/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kola-cluster-image-b49ff382f3/providers/Microsoft.Compute/virtualMachines/ci-4081.3.7-a-7d4acdbc22"
      L2: "--------------------------------------------------------------------------------"
      L3: "RESPONSE 409: 409 Conflict"
      L4: "ERROR CODE: OperationNotAllowed"
      L5: "--------------------------------------------------------------------------------"
      L6: "{"
      L7: "__error__: {"
      L8: "__code__: __OperationNotAllowed__,"
      L9: "__message__: __Operation could not be completed as it results in exceeding approved standardNCSv3Family Cores quota. Additional details - Deployment Model: Resource Manager, Location: northeurope, Current Limit: 0, Current Usage: 0, Additional Required: 6, (Minimum) New Limit Required: 6. Setup Alerts when Quota reaches threshold. Learn more at https://aka.ms/quotamonitoringalerting . Submit a request for Quota increase at https://aka.ms/ProdportalCRP/#blade/Microsoft_Azure_Capacity/UsageAndQuota.ReactView/Parameters/%7B%22subscriptionId%22:%220e46bd28-a80f-4d3a-8200-d9eb8d80cb2e%22,%22command%22:%22openQuotaApprovalBlade%22,%22quotas%22:[%7B%22location%22:%22northeurope%22,%22providerId%22:%22Microsoft.Compute%22,%22resourceName%22:%22standardNCSv3Family%22,%22quotaRequest%22:%7B%22properties%22:%7B%22limit%22:6,%22unit%22:%22Count%22,%22name%22:%7B%22value%22:%22standardNCSv3Family%22%7D%7D%7D%7D]%7D by specifying parameters listed in the 'Details' section for deployment to succeed. Please read more about quota limits at https://docs.microsoft.com/en-us/azure/azure-supportability/per-vm-quota-requests__"
      L10: "}"
      L11: "}"
      L12: "--------------------------------------------------------------------------------_"
      L13: " "
  ```


</details>
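Every failure above carries the same `OperationNotAllowed` quota message, which already encodes the exact shortfall. As an illustration only (not part of the kola report tooling), the figures can be pulled out of that message text with a small script; the sample string below is copied from the diagnostics and the function name is hypothetical:

```python
import re

# Sample message, abridged from the diagnostic output above.
message = (
    "Operation could not be completed as it results in exceeding approved "
    "standardNCSv3Family Cores quota. Additional details - Deployment Model: "
    "Resource Manager, Location: northeurope, Current Limit: 0, Current Usage: 0, "
    "Additional Required: 6, (Minimum) New Limit Required: 6."
)

def parse_quota_error(msg: str) -> dict:
    """Extract the core-quota figures from an Azure OperationNotAllowed message."""
    fields = {
        "limit": r"Current Limit: (\d+)",
        "usage": r"Current Usage: (\d+)",
        "required": r"Additional Required: (\d+)",
        "new_limit": r"New Limit Required: (\d+)",
    }
    return {key: int(re.search(pattern, msg).group(1))
            for key, pattern in fields.items()}

print(parse_quota_error(message))
# {'limit': 0, 'usage': 0, 'required': 6, 'new_limit': 6}
```

For the NCSv3 failures this confirms the subscription has a limit of 0 cores in northeurope, so every run of a `Standard_NC6s_v3` test will fail until a quota increase is granted.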


❌ not ok **extra-test.[Standard_NC6s_v3].cl.misc.nvidia**; Failed: azure (1, 2, 3, 4, 5)

<details>

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for azure, run 5</summary>

  ```
      L1: " Error: _nvidia.go:192: PUT https://management.azure.com/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kola-cluster-image-925d7d3577/providers/Microsoft.Compute/virtualMachines/ci-4081.3.7-a-572425669d"
      L2: "--------------------------------------------------------------------------------"
      L3: "RESPONSE 409: 409 Conflict"
      L4: "ERROR CODE: OperationNotAllowed"
      L5: "--------------------------------------------------------------------------------"
      L6: "{"
      L7: "__error__: {"
      L8: "__code__: __OperationNotAllowed__,"
      L9: "__message__: __Operation could not be completed as it results in exceeding approved standardNCSv3Family Cores quota. Additional details - Deployment Model: Resource Manager, Location: northeurope, Current Limit: 0, Current Usage: 0, Additional Required: 6, (Minimum) New Limit Required: 6. Setup Alerts when Quota reaches threshold. Learn more at https://aka.ms/quotamonitoringalerting . Submit a request for Quota increase at https://aka.ms/ProdportalCRP/#blade/Microsoft_Azure_Capacity/UsageAndQuota.ReactView/Parameters/%7B%22subscriptionId%22:%220e46bd28-a80f-4d3a-8200-d9eb8d80cb2e%22,%22command%22:%22openQuotaApprovalBlade%22,%22quotas%22:[%7B%22location%22:%22northeurope%22,%22providerId%22:%22Microsoft.Compute%22,%22resourceName%22:%22standardNCSv3Family%22,%22quotaRequest%22:%7B%22properties%22:%7B%22limit%22:6,%22unit%22:%22Count%22,%22name%22:%7B%22value%22:%22standardNCSv3Family%22%7D%7D%7D%7D]%7D by specifying parameters listed in the 'Details' section for deployment to succeed. Please read more about quota limits at https://docs.microsoft.com/en-us/azure/azure-supportability/per-vm-quota-requests__"
      L10: "}"
      L11: "}"
      L12: "--------------------------------------------------------------------------------_"
      L13: " "
  ```

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for azure, run 4</summary>

  ```
      L1: " Error: _nvidia.go:192: PUT https://management.azure.com/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kola-cluster-image-be203d8827/providers/Microsoft.Compute/virtualMachines/ci-4081.3.7-a-a1b911b143"
      L2: "--------------------------------------------------------------------------------"
      L3: "RESPONSE 409: 409 Conflict"
      L4: "ERROR CODE: OperationNotAllowed"
      L5: "--------------------------------------------------------------------------------"
      L6: "{"
      L7: "__error__: {"
      L8: "__code__: __OperationNotAllowed__,"
      L9: "__message__: __Operation could not be completed as it results in exceeding approved standardNCSv3Family Cores quota. Additional details - Deployment Model: Resource Manager, Location: northeurope, Current Limit: 0, Current Usage: 0, Additional Required: 6, (Minimum) New Limit Required: 6. Setup Alerts when Quota reaches threshold. Learn more at https://aka.ms/quotamonitoringalerting . Submit a request for Quota increase at https://aka.ms/ProdportalCRP/#blade/Microsoft_Azure_Capacity/UsageAndQuota.ReactView/Parameters/%7B%22subscriptionId%22:%220e46bd28-a80f-4d3a-8200-d9eb8d80cb2e%22,%22command%22:%22openQuotaApprovalBlade%22,%22quotas%22:[%7B%22location%22:%22northeurope%22,%22providerId%22:%22Microsoft.Compute%22,%22resourceName%22:%22standardNCSv3Family%22,%22quotaRequest%22:%7B%22properties%22:%7B%22limit%22:6,%22unit%22:%22Count%22,%22name%22:%7B%22value%22:%22standardNCSv3Family%22%7D%7D%7D%7D]%7D by specifying parameters listed in the 'Details' section for deployment to succeed. Please read more about quota limits at https://docs.microsoft.com/en-us/azure/azure-supportability/per-vm-quota-requests__"
      L10: "}"
      L11: "}"
      L12: "--------------------------------------------------------------------------------_"
      L13: " "
  ```

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for azure, run 3</summary>

  ```
      L1: " Error: _nvidia.go:192: PUT https://management.azure.com/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kola-cluster-image-7058e03df8/providers/Microsoft.Compute/virtualMachines/ci-4081.3.7-a-9de6a34448"
      L2: "--------------------------------------------------------------------------------"
      L3: "RESPONSE 409: 409 Conflict"
      L4: "ERROR CODE: OperationNotAllowed"
      L5: "--------------------------------------------------------------------------------"
      L6: "{"
      L7: "__error__: {"
      L8: "__code__: __OperationNotAllowed__,"
      L9: "__message__: __Operation could not be completed as it results in exceeding approved standardNCSv3Family Cores quota. Additional details - Deployment Model: Resource Manager, Location: northeurope, Current Limit: 0, Current Usage: 0, Additional Required: 6, (Minimum) New Limit Required: 6. Setup Alerts when Quota reaches threshold. Learn more at https://aka.ms/quotamonitoringalerting . Submit a request for Quota increase at https://aka.ms/ProdportalCRP/#blade/Microsoft_Azure_Capacity/UsageAndQuota.ReactView/Parameters/%7B%22subscriptionId%22:%220e46bd28-a80f-4d3a-8200-d9eb8d80cb2e%22,%22command%22:%22openQuotaApprovalBlade%22,%22quotas%22:[%7B%22location%22:%22northeurope%22,%22providerId%22:%22Microsoft.Compute%22,%22resourceName%22:%22standardNCSv3Family%22,%22quotaRequest%22:%7B%22properties%22:%7B%22limit%22:6,%22unit%22:%22Count%22,%22name%22:%7B%22value%22:%22standardNCSv3Family%22%7D%7D%7D%7D]%7D by specifying parameters listed in the 'Details' section for deployment to succeed. Please read more about quota limits at https://docs.microsoft.com/en-us/azure/azure-supportability/per-vm-quota-requests__"
      L10: "}"
      L11: "}"
      L12: "--------------------------------------------------------------------------------_"
      L13: " "
  ```

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for azure, run 2</summary>

  ```
      L1: " Error: _nvidia.go:192: PUT https://management.azure.com/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kola-cluster-image-95a9cc598b/providers/Microsoft.Compute/virtualMachines/ci-4081.3.7-a-d9f5025337"
      L2: "--------------------------------------------------------------------------------"
      L3: "RESPONSE 409: 409 Conflict"
      L4: "ERROR CODE: OperationNotAllowed"
      L5: "--------------------------------------------------------------------------------"
      L6: "{"
      L7: "__error__: {"
      L8: "__code__: __OperationNotAllowed__,"
      L9: "__message__: __Operation could not be completed as it results in exceeding approved standardNCSv3Family Cores quota. Additional details - Deployment Model: Resource Manager, Location: northeurope, Current Limit: 0, Current Usage: 0, Additional Required: 6, (Minimum) New Limit Required: 6. Setup Alerts when Quota reaches threshold. Learn more at https://aka.ms/quotamonitoringalerting . Submit a request for Quota increase at https://aka.ms/ProdportalCRP/#blade/Microsoft_Azure_Capacity/UsageAndQuota.ReactView/Parameters/%7B%22subscriptionId%22:%220e46bd28-a80f-4d3a-8200-d9eb8d80cb2e%22,%22command%22:%22openQuotaApprovalBlade%22,%22quotas%22:[%7B%22location%22:%22northeurope%22,%22providerId%22:%22Microsoft.Compute%22,%22resourceName%22:%22standardNCSv3Family%22,%22quotaRequest%22:%7B%22properties%22:%7B%22limit%22:6,%22unit%22:%22Count%22,%22name%22:%7B%22value%22:%22standardNCSv3Family%22%7D%7D%7D%7D]%7D by specifying parameters listed in the 'Details' section for deployment to succeed. Please read more about quota limits at https://docs.microsoft.com/en-us/azure/azure-supportability/per-vm-quota-requests__"
      L10: "}"
      L11: "}"
      L12: "--------------------------------------------------------------------------------_"
      L13: " "
  ```

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for azure, run 1</summary>

  ```
      L1: " Error: _nvidia.go:192: PUT https://management.azure.com/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kola-cluster-image-b49ff382f3/providers/Microsoft.Compute/virtualMachines/ci-4081.3.7-a-52ecda0847"
      L2: "--------------------------------------------------------------------------------"
      L3: "RESPONSE 409: 409 Conflict"
      L4: "ERROR CODE: OperationNotAllowed"
      L5: "--------------------------------------------------------------------------------"
      L6: "{"
      L7: "__error__: {"
      L8: "__code__: __OperationNotAllowed__,"
      L9: "__message__: __Operation could not be completed as it results in exceeding approved standardNCSv3Family Cores quota. Additional details - Deployment Model: Resource Manager, Location: northeurope, Current Limit: 0, Current Usage: 0, Additional Required: 6, (Minimum) New Limit Required: 6. Setup Alerts when Quota reaches threshold. Learn more at https://aka.ms/quotamonitoringalerting . Submit a request for Quota increase at https://aka.ms/ProdportalCRP/#blade/Microsoft_Azure_Capacity/UsageAndQuota.ReactView/Parameters/%7B%22subscriptionId%22:%220e46bd28-a80f-4d3a-8200-d9eb8d80cb2e%22,%22command%22:%22openQuotaApprovalBlade%22,%22quotas%22:[%7B%22location%22:%22northeurope%22,%22providerId%22:%22Microsoft.Compute%22,%22resourceName%22:%22standardNCSv3Family%22,%22quotaRequest%22:%7B%22properties%22:%7B%22limit%22:6,%22unit%22:%22Count%22,%22name%22:%7B%22value%22:%22standardNCSv3Family%22%7D%7D%7D%7D]%7D by specifying parameters listed in the 'Details' section for deployment to succeed. Please read more about quota limits at https://docs.microsoft.com/en-us/azure/azure-supportability/per-vm-quota-requests__"
      L10: "}"
      L11: "}"
      L12: "--------------------------------------------------------------------------------_"
      L13: " "
  ```


</details>


🟢 ok **extra-test.[V1].cl.internet**; Succeeded: azure (1)

🟢 ok **kubeadm.v1.33.8.calico.base**; Succeeded: azure (1)

🟢 ok **kubeadm.v1.33.8.calico.cgroupv1.base**; Succeeded: azure (3); Failed: azure (1, 2)

<details>

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for azure, run 2</summary>

  ```
      L1: " Error: _cluster.go:125: I0421 10:32:17.748312    2639 version.go:261] remote version is much newer: v1.35.4; falling back to: stable-1.33"
      L2: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-apiserver:v1.33.11"
      L3: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-controller-manager:v1.33.11"
      L4: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-scheduler:v1.33.11"
      L5: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-proxy:v1.33.11"
      L6: "cluster.go:125: [config/images] Pulled registry.k8s.io/coredns/coredns:v1.12.0"
      L7: "cluster.go:125: [config/images] Pulled registry.k8s.io/pause:3.10"
      L8: "cluster.go:125: [config/images] Pulled registry.k8s.io/etcd:3.5.24-0"
      L9: "cluster.go:125: I0421 10:32:43.701547    2879 version.go:261] remote version is much newer: v1.35.4; falling back to: stable-1.33"
      L10: "cluster.go:125: [init] Using Kubernetes version: v1.33.11"
      L11: "cluster.go:125: [preflight] Running pre-flight checks"
      L12: "cluster.go:125:  [WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2"
      L13: "cluster.go:125:  [WARNING Hostname]: hostname __ci-4081.3.7-a-081c9bf51b__ could not be reached"
      L14: "cluster.go:125:  [WARNING Hostname]: hostname __ci-4081.3.7-a-081c9bf51b__: lookup ci-4081.3.7-a-081c9bf51b on 168.63.129.16:53: no such host"
      L15: "cluster.go:125:  [WARNING Service-Kubelet]: kubelet service is not enabled, please run _systemctl enable kubelet.service_"
      L16: "cluster.go:125: [preflight] Pulling images required for setting up a Kubernetes cluster"
      L17: "cluster.go:125: [preflight] This might take a minute or two, depending on the speed of your internet connection"
      L18: "cluster.go:125: [preflight] You can also perform this action beforehand using _kubeadm config images pull_"
      L19: "cluster.go:125: W0421 10:32:43.860926    2879 checks.go:843] detected that the sandbox image __registry.k8s.io/pause:3.8__ of the container runtime is inconsistent with that used by kubeadm.It is recommended to use __registry.k8s.io/pause:3.10__ as the CRI sandbox image."
      L20: "cluster.go:125: [certs] Using certificateDir folder __/etc/kubernetes/pki__"
      L21: "cluster.go:125: [certs] Generating __ca__ certificate and key"
      L22: "cluster.go:125: [certs] Generating __apiserver__ certificate and key"
      L23: "cluster.go:125: [certs] apiserver serving cert is signed for DNS names [ci-4081.3.7-a-081c9bf51b kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.0.0.27]"
      L24: "cluster.go:125: [certs] Generating __apiserver-kubelet-client__ certificate and key"
      L25: "cluster.go:125: [certs] Generating __front-proxy-ca__ certificate and key"
      L26: "cluster.go:125: [certs] Generating __front-proxy-client__ certificate and key"
      L27: "cluster.go:125: [certs] External etcd mode: Skipping etcd/ca certificate authority generation"
      L28: "cluster.go:125: [certs] External etcd mode: Skipping etcd/server certificate generation"
      L29: "cluster.go:125: [certs] External etcd mode: Skipping etcd/peer certificate generation"
      L30: "cluster.go:125: [certs] External etcd mode: Skipping etcd/healthcheck-client certificate generation"
      L31: "cluster.go:125: [certs] External etcd mode: Skipping apiserver-etcd-client certificate generation"
      L32: "cluster.go:125: [certs] Generating __sa__ key and public key"
      L33: "cluster.go:125: [kubeconfig] Using kubeconfig folder __/etc/kubernetes__"
      L34: "cluster.go:125: [kubeconfig] Writing __admin.conf__ kubeconfig file"
      L35: "cluster.go:125: [kubeconfig] Writing __super-admin.conf__ kubeconfig file"
      L36: "cluster.go:125: [kubeconfig] Writing __kubelet.conf__ kubeconfig file"
      L37: "cluster.go:125: [kubeconfig] Writing __controller-manager.conf__ kubeconfig file"
      L38: "cluster.go:125: [kubeconfig] Writing __scheduler.conf__ kubeconfig file"
      L39: "cluster.go:125: [control-plane] Using manifest folder __/etc/kubernetes/manifests__"
      L40: "cluster.go:125: [control-plane] Creating static Pod manifest for __kube-apiserver__"
      L41: "cluster.go:125: [control-plane] Creating static Pod manifest for __kube-controller-manager__"
      L42: "cluster.go:125: [control-plane] Creating static Pod manifest for __kube-scheduler__"
      L43: "cluster.go:125: [kubelet-start] Writing kubelet environment file with flags to file __/var/lib/kubelet/kubeadm-flags.env__"
      L44: "cluster.go:125: [kubelet-start] Writing kubelet configuration to file __/var/lib/kubelet/config.yaml__"
      L45: "cluster.go:125: [kubelet-start] Starting the kubelet"
      L46: "cluster.go:125: [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory __/etc/kubernetes/manifests__"
      L47: "cluster.go:125: [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s"
      L48: "cluster.go:125: [kubelet-check] The kubelet is healthy after 501.638115ms"
      L49: "cluster.go:125: [control-plane-check] Waiting for healthy control plane components. This can take up to 30m0s"
      L50: "cluster.go:125: [control-plane-check] Checking kube-apiserver at https://10.0.0.27:6443/livez"
      L51: "cluster.go:125: [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz"
      L52: "cluster.go:125: [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez"
      L53: "cluster.go:125: [control-plane-check] kube-controller-manager is healthy after 3.249213478s"
      L54: "cluster.go:125: [control-plane-check] kube-scheduler is healthy after 3.719069263s"
      L55: "cluster.go:125: [control-plane-check] kube-apiserver is healthy after 5.501825118s"
      L56: "cluster.go:125: [upload-config] Storing the configuration used in ConfigMap __kubeadm-config__ in the __kube-system__ Namespace"
      L57: "cluster.go:125: [kubelet] Creating a ConfigMap __kubelet-config__ in namespace kube-system with the configuration for the kubelets in the cluster"
      L58: "cluster.go:125: [upload-certs] Skipping phase. Please see --upload-certs"
      L59: "cluster.go:125: [mark-control-plane] Marking the node ci-4081.3.7-a-081c9bf51b as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]"
      L60: "cluster.go:125: [mark-control-plane] Marking the node ci-4081.3.7-a-081c9bf51b as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]"
      L61: "cluster.go:125: [bootstrap-token] Using token: hz3sfb.n4rmhs7r57mpq8kb"
      L62: "cluster.go:125: [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles"
      L63: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes"
      L64: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials"
      L65: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token"
      L66: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster"
      L67: "cluster.go:125: [bootstrap-token] Creating the __cluster-info__ ConfigMap in the __kube-public__ namespace"
      L68: "cluster.go:125: [kubelet-finalize] Updating __/etc/kubernetes/kubelet.conf__ to point to a rotatable kubelet client certificate and key"
      L69: "cluster.go:125: [addons] Applied essential addon: CoreDNS"
      L70: "cluster.go:125: [addons] Applied essential addon: kube-proxy"
      L71: "cluster.go:125: "
      L72: "cluster.go:125: Your Kubernetes control-plane has initialized successfully!"
      L73: "cluster.go:125: "
      L74: "cluster.go:125: To start using your cluster, you need to run the following as a regular user:"
      L75: "cluster.go:125: "
      L76: "cluster.go:125:   mkdir -p $HOME/.kube"
      L77: "cluster.go:125:   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config"
      L78: "cluster.go:125:   sudo chown $(id -u):$(id -g) $HOME/.kube/config"
      L79: "cluster.go:125: "
      L80: "cluster.go:125: Alternatively, if you are the root user, you can run:"
      L81: "cluster.go:125: "
      L82: "cluster.go:125:   export KUBECONFIG=/etc/kubernetes/admin.conf"
      L83: "cluster.go:125: "
      L84: "cluster.go:125: You should now deploy a pod network to the cluster."
      L85: "cluster.go:125: Run __kubectl apply -f [podnetwork].yaml__ with one of the options listed at:"
      L86: "cluster.go:125:   https://kubernetes.io/docs/concepts/cluster-administration/addons/"
      L87: "cluster.go:125: "
      L88: "cluster.go:125: Then you can join any number of worker nodes by running the following on each as root:"
      L89: "cluster.go:125: "
      L90: "cluster.go:125: kubeadm join 10.0.0.27:6443 --token hz3sfb.n4rmhs7r57mpq8kb _"
      L91: "cluster.go:125:  --discovery-token-ca-cert-hash sha256:1e00c763d45fda4e99848f2f061ed1b512d502c2503e2131b245c4f3c59e0de2"
      L92: "kubeadm.go:197: unable to setup cluster: unable to run master script: wait: remote command exited without exit status or exit signal_"
      L93: " "
  ```

</details>

<details>

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for azure, run 1</summary>

  ```
      L1: " Error: _cluster.go:125: I0421 10:12:08.592884    2505 version.go:261] remote version is much newer: v1.35.4; falling back to: stable-1.33"
      L2: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-apiserver:v1.33.11"
      L3: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-controller-manager:v1.33.11"
      L4: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-scheduler:v1.33.11"
      L5: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-proxy:v1.33.11"
      L6: "cluster.go:125: [config/images] Pulled registry.k8s.io/coredns/coredns:v1.12.0"
      L7: "cluster.go:125: [config/images] Pulled registry.k8s.io/pause:3.10"
      L8: "cluster.go:125: [config/images] Pulled registry.k8s.io/etcd:3.5.24-0"
      L9: "cluster.go:125: I0421 10:12:21.555624    2725 version.go:261] remote version is much newer: v1.35.4; falling back to: stable-1.33"
      L10: "cluster.go:125: [init] Using Kubernetes version: v1.33.11"
      L11: "cluster.go:125: [preflight] Running pre-flight checks"
      L12: "cluster.go:125:  [WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2"
      L13: "cluster.go:125:  [WARNING Hostname]: hostname __ci-4081.3.7-a-eed17feea0__ could not be reached"
      L14: "cluster.go:125:  [WARNING Hostname]: hostname __ci-4081.3.7-a-eed17feea0__: lookup ci-4081.3.7-a-eed17feea0 on 168.63.129.16:53: no such host"
      L15: "cluster.go:125:  [WARNING Service-Kubelet]: kubelet service is not enabled, please run _systemctl enable kubelet.service_"
      L16: "cluster.go:125: [preflight] Pulling images required for setting up a Kubernetes cluster"
      L17: "cluster.go:125: [preflight] This might take a minute or two, depending on the speed of your internet connection"
      L18: "cluster.go:125: [preflight] You can also perform this action beforehand using _kubeadm config images pull_"
      L19: "cluster.go:125: W0421 10:12:21.739796    2725 checks.go:843] detected that the sandbox image __registry.k8s.io/pause:3.8__ of the container runtime is inconsistent with that used by kubeadm.It is recommended to use __registry.k8s.io/pause:3.10__ as the CRI sandbox image."
      L20: "cluster.go:125: [certs] Using certificateDir folder __/etc/kubernetes/pki__"
      L21: "cluster.go:125: [certs] Generating __ca__ certificate and key"
      L22: "cluster.go:125: [certs] Generating __apiserver__ certificate and key"
      L23: "cluster.go:125: [certs] apiserver serving cert is signed for DNS names [ci-4081.3.7-a-eed17feea0 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.0.0.20]"
      L24: "cluster.go:125: [certs] Generating __apiserver-kubelet-client__ certificate and key"
      L25: "cluster.go:125: [certs] Generating __front-proxy-ca__ certificate and key"
      L26: "cluster.go:125: [certs] Generating __front-proxy-client__ certificate and key"
      L27: "cluster.go:125: [certs] External etcd mode: Skipping etcd/ca certificate authority generation"
      L28: "cluster.go:125: [certs] External etcd mode: Skipping etcd/server certificate generation"
      L29: "cluster.go:125: [certs] External etcd mode: Skipping etcd/peer certificate generation"
      L30: "cluster.go:125: [certs] External etcd mode: Skipping etcd/healthcheck-client certificate generation"
      L31: "cluster.go:125: [certs] External etcd mode: Skipping apiserver-etcd-client certificate generation"
      L32: "cluster.go:125: [certs] Generating __sa__ key and public key"
      L33: "cluster.go:125: [kubeconfig] Using kubeconfig folder __/etc/kubernetes__"
      L34: "cluster.go:125: [kubeconfig] Writing __admin.conf__ kubeconfig file"
      L35: "cluster.go:125: [kubeconfig] Writing __super-admin.conf__ kubeconfig file"
      L36: "cluster.go:125: [kubeconfig] Writing __kubelet.conf__ kubeconfig file"
      L37: "cluster.go:125: [kubeconfig] Writing __controller-manager.conf__ kubeconfig file"
      L38: "cluster.go:125: [kubeconfig] Writing __scheduler.conf__ kubeconfig file"
      L39: "cluster.go:125: [control-plane] Using manifest folder __/etc/kubernetes/manifests__"
      L40: "cluster.go:125: [control-plane] Creating static Pod manifest for __kube-apiserver__"
      L41: "cluster.go:125: [control-plane] Creating static Pod manifest for __kube-controller-manager__"
      L42: "cluster.go:125: [control-plane] Creating static Pod manifest for __kube-scheduler__"
      L43: "cluster.go:125: [kubelet-start] Writing kubelet environment file with flags to file __/var/lib/kubelet/kubeadm-flags.env__"
      L44: "cluster.go:125: [kubelet-start] Writing kubelet configuration to file __/var/lib/kubelet/config.yaml__"
      L45: "cluster.go:125: [kubelet-start] Starting the kubelet"
      L46: "cluster.go:125: [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory __/etc/kubernetes/manifests__"
      L47: "cluster.go:125: [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s"
      L48: "cluster.go:125: [kubelet-check] The kubelet is healthy after 500.611313ms"
      L49: "cluster.go:125: [control-plane-check] Waiting for healthy control plane components. This can take up to 30m0s"
      L50: "cluster.go:125: [control-plane-check] Checking kube-apiserver at https://10.0.0.20:6443/livez"
      L51: "cluster.go:125: [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz"
      L52: "cluster.go:125: [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez"
      L53: "cluster.go:125: [control-plane-check] kube-controller-manager is healthy after 2.501160073s"
      L54: "cluster.go:125: [control-plane-check] kube-scheduler is healthy after 3.072829692s"
      L55: "cluster.go:125: [control-plane-check] kube-apiserver is healthy after 5.001836238s"
      L56: "cluster.go:125: [upload-config] Storing the configuration used in ConfigMap __kubeadm-config__ in the __kube-system__ Namespace"
      L57: "cluster.go:125: [kubelet] Creating a ConfigMap __kubelet-config__ in namespace kube-system with the configuration for the kubelets in the cluster"
      L58: "cluster.go:125: [upload-certs] Skipping phase. Please see --upload-certs"
      L59: "cluster.go:125: [mark-control-plane] Marking the node ci-4081.3.7-a-eed17feea0 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]"
      L60: "cluster.go:125: [mark-control-plane] Marking the node ci-4081.3.7-a-eed17feea0 as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]"
      L61: "cluster.go:125: [bootstrap-token] Using token: gxuw43.wcw11otbxlkgw21q"
      L62: "cluster.go:125: [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles"
      L63: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes"
      L64: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials"
      L65: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token"
      L66: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster"
      L67: "cluster.go:125: [bootstrap-token] Creating the __cluster-info__ ConfigMap in the __kube-public__ namespace"
      L68: "cluster.go:125: [kubelet-finalize] Updating __/etc/kubernetes/kubelet.conf__ to point to a rotatable kubelet client certificate and key"
      L69: "cluster.go:125: [addons] Applied essential addon: CoreDNS"
      L70: "cluster.go:125: [addons] Applied essential addon: kube-proxy"
      L71: "cluster.go:125: "
      L72: "cluster.go:125: Your Kubernetes control-plane has initialized successfully!"
      L73: "cluster.go:125: "
      L74: "cluster.go:125: To start using your cluster, you need to run the following as a regular user:"
      L75: "cluster.go:125: "
      L76: "cluster.go:125:   mkdir -p $HOME/.kube"
      L77: "cluster.go:125:   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config"
      L78: "cluster.go:125:   sudo chown $(id -u):$(id -g) $HOME/.kube/config"
      L79: "cluster.go:125: "
      L80: "cluster.go:125: Alternatively, if you are the root user, you can run:"
      L81: "cluster.go:125: "
      L82: "cluster.go:125:   export KUBECONFIG=/etc/kubernetes/admin.conf"
      L83: "cluster.go:125: "
      L84: "cluster.go:125: You should now deploy a pod network to the cluster."
      L85: "cluster.go:125: Run __kubectl apply -f [podnetwork].yaml__ with one of the options listed at:"
      L86: "cluster.go:125:   https://kubernetes.io/docs/concepts/cluster-administration/addons/"
      L87: "cluster.go:125: "
      L88: "cluster.go:125: Then you can join any number of worker nodes by running the following on each as root:"
      L89: "cluster.go:125: "
      L90: "cluster.go:125: kubeadm join 10.0.0.20:6443 --token gxuw43.wcw11otbxlkgw21q _"
      L91: "cluster.go:125:  --discovery-token-ca-cert-hash sha256:d420a36af93137949554be38d45d99c4b7919e95d76055b564b7005440b81ab9 "
      L92: "cluster.go:125: namespace/tigera-operator created"
      L93: "cluster.go:125: serviceaccount/tigera-operator created"
      L94: "cluster.go:125: clusterrole.rbac.authorization.k8s.io/tigera-operator-secrets created"
      L95: "cluster.go:125: clusterrole.rbac.authorization.k8s.io/tigera-operator created"
      L96: "cluster.go:125: clusterrolebinding.rbac.authorization.k8s.io/tigera-operator created"
      L97: "cluster.go:125: rolebinding.rbac.authorization.k8s.io/tigera-operator-secrets created"
      L98: "cluster.go:125: deployment.apps/tigera-operator created"
      L99: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/installations.operator.tigera.io condition met"
      L100: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/installations.operator.tigera.io condition met"
      L101: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/apiservers.operator.tigera.io condition met"
      L102: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/apiservers.operator.tigera.io condition met"
      L103: "cluster.go:125: installation.operator.tigera.io/default created"
      L104: "cluster.go:125: apiserver.operator.tigera.io/default created"
      L105: "cluster.go:125: goldmane.operator.tigera.io/default created"
      L106: "cluster.go:125: whisker.operator.tigera.io/default created"
      L107: "kubeadm.go:197: unable to setup cluster: unable to create worker node: PUT https://management.azure.com/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kola-cluster-image-a8f2c786d2/providers/Microsoft.Compute/virtualMachines/ci-4081.3.7-a-bde07d4ab2"
      L108: "--------------------------------------------------------------------------------"
      L109: "RESPONSE 409: 409 Conflict"
      L110: "ERROR CODE: OperationNotAllowed"
      L111: "--------------------------------------------------------------------------------"
      L112: "{"
      L113: "__error__: {"
      L114: "__code__: __OperationNotAllowed__,"
      L115: "__message__: __Operation could not be completed as it results in exceeding approved standardDSv4Family Cores quota. Additional details - Deployment Model: Resource Manager, Location: northeurope, Current Limit: 65, Current Usage: 64, Additional Required: 2, (Minimum) New Limit Required: 66. Setup Alerts when Quota reaches threshold. Learn more at https://aka.ms/quotamonitoringalerting . Submit a request for Quota increase at https://aka.ms/ProdportalCRP/#blade/Microsoft_Azure_Capacity/UsageAndQuota.ReactView/Parameters/%7B%22subscriptionId%22:%220e46bd28-a80f-4d3a-8200-d9eb8d80cb2e%22,%22command%22:%22openQuotaApprovalBlade%22,%22quotas%22:[%7B%22location%22:%22northeurope%22,%22providerId%22:%22Microsoft.Compute%22,%22resourceName%22:%22standardDSv4Family%22,%22quotaRequest%22:%7B%22properties%22:%7B%22limit%22:66,%22unit%22:%22Count%22,%22name%22:%7B%22value%22:%22standardDSv4Family%22%7D%7D%7D%7D]%7D by specifying parameters listed in the ???Details??? section for deployment to succeed. Please read more about quota limits at https://docs.microsoft.com/en-us/azure/azure-supportability/per-vm-quota-requests__"
      L116: "}"
      L117: "}"
      L118: "--------------------------------------------------------------------------------_"
      L119: " "
  ```


</details>


🟢 ok **kubeadm.v1.33.8.cilium.base**; Succeeded: azure (3); Failed: azure (1, 2)

<details>

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for azure, run 2</summary>

  ```
      L1: " Error: _cluster.go:125: I0421 10:32:17.969496    2502 version.go:261] remote version is much newer: v1.35.4; falling back to: stable-1.33"
      L2: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-apiserver:v1.33.11"
      L3: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-controller-manager:v1.33.11"
      L4: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-scheduler:v1.33.11"
      L5: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-proxy:v1.33.11"
      L6: "cluster.go:125: [config/images] Pulled registry.k8s.io/coredns/coredns:v1.12.0"
      L7: "cluster.go:125: [config/images] Pulled registry.k8s.io/pause:3.10"
      L8: "cluster.go:125: [config/images] Pulled registry.k8s.io/etcd:3.5.24-0"
      L9: "cluster.go:125: I0421 10:32:31.528936    2717 version.go:261] remote version is much newer: v1.35.4; falling back to: stable-1.33"
      L10: "cluster.go:125: [init] Using Kubernetes version: v1.33.11"
      L11: "cluster.go:125: [preflight] Running pre-flight checks"
      L12: "cluster.go:125:  [WARNING Hostname]: hostname __ci-4081.3.7-a-d2863b7f19__ could not be reached"
      L13: "cluster.go:125:  [WARNING Hostname]: hostname __ci-4081.3.7-a-d2863b7f19__: lookup ci-4081.3.7-a-d2863b7f19 on 168.63.129.16:53: no such host"
      L14: "cluster.go:125:  [WARNING Service-Kubelet]: kubelet service is not enabled, please run _systemctl enable kubelet.service_"
      L15: "cluster.go:125: [preflight] Pulling images required for setting up a Kubernetes cluster"
      L16: "cluster.go:125: [preflight] This might take a minute or two, depending on the speed of your internet connection"
      L17: "cluster.go:125: [preflight] You can also perform this action beforehand using _kubeadm config images pull_"
      L18: "cluster.go:125: W0421 10:32:31.733938    2717 checks.go:843] detected that the sandbox image __registry.k8s.io/pause:3.8__ of the container runtime is inconsistent with that used by kubeadm.It is recommended to use __registry.k8s.io/pause:3.10__ as the CRI sandbox image."
      L19: "cluster.go:125: [certs] Using certificateDir folder __/etc/kubernetes/pki__"
      L20: "cluster.go:125: [certs] Generating __ca__ certificate and key"
      L21: "cluster.go:125: [certs] Generating __apiserver__ certificate and key"
      L22: "cluster.go:125: [certs] apiserver serving cert is signed for DNS names [ci-4081.3.7-a-d2863b7f19 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.0.0.28]"
      L23: "cluster.go:125: [certs] Generating __apiserver-kubelet-client__ certificate and key"
      L24: "cluster.go:125: [certs] Generating __front-proxy-ca__ certificate and key"
      L25: "cluster.go:125: [certs] Generating __front-proxy-client__ certificate and key"
      L26: "cluster.go:125: [certs] External etcd mode: Skipping etcd/ca certificate authority generation"
      L27: "cluster.go:125: [certs] External etcd mode: Skipping etcd/server certificate generation"
      L28: "cluster.go:125: [certs] External etcd mode: Skipping etcd/peer certificate generation"
      L29: "cluster.go:125: [certs] External etcd mode: Skipping etcd/healthcheck-client certificate generation"
      L30: "cluster.go:125: [certs] External etcd mode: Skipping apiserver-etcd-client certificate generation"
      L31: "cluster.go:125: [certs] Generating __sa__ key and public key"
      L32: "cluster.go:125: [kubeconfig] Using kubeconfig folder __/etc/kubernetes__"
      L33: "cluster.go:125: [kubeconfig] Writing __admin.conf__ kubeconfig file"
      L34: "cluster.go:125: [kubeconfig] Writing __super-admin.conf__ kubeconfig file"
      L35: "cluster.go:125: [kubeconfig] Writing __kubelet.conf__ kubeconfig file"
      L36: "cluster.go:125: [kubeconfig] Writing __controller-manager.conf__ kubeconfig file"
      L37: "cluster.go:125: [kubeconfig] Writing __scheduler.conf__ kubeconfig file"
      L38: "cluster.go:125: [control-plane] Using manifest folder __/etc/kubernetes/manifests__"
      L39: "cluster.go:125: [control-plane] Creating static Pod manifest for __kube-apiserver__"
      L40: "cluster.go:125: [control-plane] Creating static Pod manifest for __kube-controller-manager__"
      L41: "cluster.go:125: [control-plane] Creating static Pod manifest for __kube-scheduler__"
      L42: "cluster.go:125: [kubelet-start] Writing kubelet environment file with flags to file __/var/lib/kubelet/kubeadm-flags.env__"
      L43: "cluster.go:125: [kubelet-start] Writing kubelet configuration to file __/var/lib/kubelet/config.yaml__"
      L44: "cluster.go:125: [kubelet-start] Starting the kubelet"
      L45: "cluster.go:125: [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory __/etc/kubernetes/manifests__"
      L46: "cluster.go:125: [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s"
      L47: "cluster.go:125: [kubelet-check] The kubelet is healthy after 505.268093ms"
      L48: "cluster.go:125: [control-plane-check] Waiting for healthy control plane components. This can take up to 30m0s"
      L49: "cluster.go:125: [control-plane-check] Checking kube-apiserver at https://10.0.0.28:6443/livez"
      L50: "cluster.go:125: [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz"
      L51: "cluster.go:125: [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez"
      L52: "cluster.go:125: [control-plane-check] kube-controller-manager is healthy after 3.510170502s"
      L53: "cluster.go:125: [control-plane-check] kube-scheduler is healthy after 4.26983143s"
      L54: "cluster.go:125: [control-plane-check] kube-apiserver is healthy after 6.001978466s"
      L55: "cluster.go:125: [upload-config] Storing the configuration used in ConfigMap __kubeadm-config__ in the __kube-system__ Namespace"
      L56: "cluster.go:125: [kubelet] Creating a ConfigMap __kubelet-config__ in namespace kube-system with the configuration for the kubelets in the cluster"
      L57: "cluster.go:125: [upload-certs] Skipping phase. Please see --upload-certs"
      L58: "cluster.go:125: [mark-control-plane] Marking the node ci-4081.3.7-a-d2863b7f19 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]"
      L59: "cluster.go:125: [mark-control-plane] Marking the node ci-4081.3.7-a-d2863b7f19 as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]"
      L60: "cluster.go:125: [bootstrap-token] Using token: i6pwds.5d5b7mk38jh6r0ut"
      L61: "cluster.go:125: [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles"
      L62: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes"
      L63: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials"
      L64: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token"
      L65: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster"
      L66: "cluster.go:125: [bootstrap-token] Creating the __cluster-info__ ConfigMap in the __kube-public__ namespace"
      L67: "cluster.go:125: [kubelet-finalize] Updating __/etc/kubernetes/kubelet.conf__ to point to a rotatable kubelet client certificate and key"
      L68: "cluster.go:125: [addons] Applied essential addon: CoreDNS"
      L69: "cluster.go:125: [addons] Applied essential addon: kube-proxy"
      L70: "cluster.go:125: "
      L71: "cluster.go:125: Your Kubernetes control-plane has initialized successfully!"
      L72: "cluster.go:125: "
      L73: "cluster.go:125: To start using your cluster, you need to run the following as a regular user:"
      L74: "cluster.go:125: "
      L75: "cluster.go:125:   mkdir -p $HOME/.kube"
      L76: "cluster.go:125:   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config"
      L77: "cluster.go:125:   sudo chown $(id -u):$(id -g) $HOME/.kube/config"
      L78: "cluster.go:125: "
      L79: "cluster.go:125: Alternatively, if you are the root user, you can run:"
      L80: "cluster.go:125: "
      L81: "cluster.go:125:   export KUBECONFIG=/etc/kubernetes/admin.conf"
      L82: "cluster.go:125: "
      L83: "cluster.go:125: You should now deploy a pod network to the cluster."
      L84: "cluster.go:125: Run __kubectl apply -f [podnetwork].yaml__ with one of the options listed at:"
      L85: "cluster.go:125:   https://kubernetes.io/docs/concepts/cluster-administration/addons/"
      L86: "cluster.go:125: "
      L87: "cluster.go:125: Then you can join any number of worker nodes by running the following on each as root:"
      L88: "cluster.go:125: "
      L89: "cluster.go:125: kubeadm join 10.0.0.28:6443 --token i6pwds.5d5b7mk38jh6r0ut _"
      L90: "cluster.go:125:  --discovery-token-ca-cert-hash sha256:b4afaa7eca975f0885d242cc9e8af00f02231f36931ee227a99d308f8fcb9dee"
      L91: "kubeadm.go:197: unable to setup cluster: unable to run master script: wait: remote command exited without exit status or exit signal_"
      L92: " "
  ```

</details>

<details>

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for azure, run 1</summary>

  ```
      L1: " Error: _cluster.go:125: I0421 10:11:43.205143    2469 version.go:261] remote version is much newer: v1.35.4; falling back to: stable-1.33"
      L2: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-apiserver:v1.33.11"
      L3: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-controller-manager:v1.33.11"
      L4: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-scheduler:v1.33.11"
      L5: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-proxy:v1.33.11"
      L6: "cluster.go:125: [config/images] Pulled registry.k8s.io/coredns/coredns:v1.12.0"
      L7: "cluster.go:125: [config/images] Pulled registry.k8s.io/pause:3.10"
      L8: "cluster.go:125: [config/images] Pulled registry.k8s.io/etcd:3.5.24-0"
      L9: "cluster.go:125: I0421 10:12:10.758423    2715 version.go:261] remote version is much newer: v1.35.4; falling back to: stable-1.33"
      L10: "cluster.go:125: [init] Using Kubernetes version: v1.33.11"
      L11: "cluster.go:125: [preflight] Running pre-flight checks"
      L12: "cluster.go:125:  [WARNING Hostname]: hostname __ci-4081.3.7-a-f5e4f53418__ could not be reached"
      L13: "cluster.go:125:  [WARNING Hostname]: hostname __ci-4081.3.7-a-f5e4f53418__: lookup ci-4081.3.7-a-f5e4f53418 on 168.63.129.16:53: no such host"
      L14: "cluster.go:125:  [WARNING Service-Kubelet]: kubelet service is not enabled, please run _systemctl enable kubelet.service_"
      L15: "cluster.go:125: [preflight] Pulling images required for setting up a Kubernetes cluster"
      L16: "cluster.go:125: [preflight] This might take a minute or two, depending on the speed of your internet connection"
      L17: "cluster.go:125: [preflight] You can also perform this action beforehand using _kubeadm config images pull_"
      L18: "cluster.go:125: W0421 10:12:10.903724    2715 checks.go:843] detected that the sandbox image __registry.k8s.io/pause:3.8__ of the container runtime is inconsistent with that used by kubeadm.It is recommended to use __registry.k8s.io/pause:3.10__ as the CRI sandbox image."
      L19: "cluster.go:125: [certs] Using certificateDir folder __/etc/kubernetes/pki__"
      L20: "cluster.go:125: [certs] Generating __ca__ certificate and key"
      L21: "cluster.go:125: [certs] Generating __apiserver__ certificate and key"
      L22: "cluster.go:125: [certs] apiserver serving cert is signed for DNS names [ci-4081.3.7-a-f5e4f53418 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.0.0.30]"
      L23: "cluster.go:125: [certs] Generating __apiserver-kubelet-client__ certificate and key"
      L24: "cluster.go:125: [certs] Generating __front-proxy-ca__ certificate and key"
      L25: "cluster.go:125: [certs] Generating __front-proxy-client__ certificate and key"
      L26: "cluster.go:125: [certs] External etcd mode: Skipping etcd/ca certificate authority generation"
      L27: "cluster.go:125: [certs] External etcd mode: Skipping etcd/server certificate generation"
      L28: "cluster.go:125: [certs] External etcd mode: Skipping etcd/peer certificate generation"
      L29: "cluster.go:125: [certs] External etcd mode: Skipping etcd/healthcheck-client certificate generation"
      L30: "cluster.go:125: [certs] External etcd mode: Skipping apiserver-etcd-client certificate generation"
      L31: "cluster.go:125: [certs] Generating __sa__ key and public key"
      L32: "cluster.go:125: [kubeconfig] Using kubeconfig folder __/etc/kubernetes__"
      L33: "cluster.go:125: [kubeconfig] Writing __admin.conf__ kubeconfig file"
      L34: "cluster.go:125: [kubeconfig] Writing __super-admin.conf__ kubeconfig file"
      L35: "cluster.go:125: [kubeconfig] Writing __kubelet.conf__ kubeconfig file"
      L36: "cluster.go:125: [kubeconfig] Writing __controller-manager.conf__ kubeconfig file"
      L37: "cluster.go:125: [kubeconfig] Writing __scheduler.conf__ kubeconfig file"
      L38: "cluster.go:125: [control-plane] Using manifest folder __/etc/kubernetes/manifests__"
      L39: "cluster.go:125: [control-plane] Creating static Pod manifest for __kube-apiserver__"
      L40: "cluster.go:125: [control-plane] Creating static Pod manifest for __kube-controller-manager__"
      L41: "cluster.go:125: [control-plane] Creating static Pod manifest for __kube-scheduler__"
      L42: "cluster.go:125: [kubelet-start] Writing kubelet environment file with flags to file __/var/lib/kubelet/kubeadm-flags.env__"
      L43: "cluster.go:125: [kubelet-start] Writing kubelet configuration to file __/var/lib/kubelet/config.yaml__"
      L44: "cluster.go:125: [kubelet-start] Starting the kubelet"
      L45: "cluster.go:125: [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory __/etc/kubernetes/manifests__"
      L46: "cluster.go:125: [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s"
      L47: "cluster.go:125: [kubelet-check] The kubelet is healthy after 500.903514ms"
      L48: "cluster.go:125: [control-plane-check] Waiting for healthy control plane components. This can take up to 30m0s"
      L49: "cluster.go:125: [control-plane-check] Checking kube-apiserver at https://10.0.0.30:6443/livez"
      L50: "cluster.go:125: [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz"
      L51: "cluster.go:125: [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez"
      L52: "cluster.go:125: [control-plane-check] kube-controller-manager is healthy after 2.73661827s"
      L53: "cluster.go:125: [control-plane-check] kube-scheduler is healthy after 3.418727539s"
      L54: "cluster.go:125: [control-plane-check] kube-apiserver is healthy after 5.00173743s"
      L55: "cluster.go:125: [upload-config] Storing the configuration used in ConfigMap __kubeadm-config__ in the __kube-system__ Namespace"
      L56: "cluster.go:125: [kubelet] Creating a ConfigMap __kubelet-config__ in namespace kube-system with the configuration for the kubelets in the cluster"
      L57: "cluster.go:125: [upload-certs] Skipping phase. Please see --upload-certs"
      L58: "cluster.go:125: [mark-control-plane] Marking the node ci-4081.3.7-a-f5e4f53418 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]"
      L59: "cluster.go:125: [mark-control-plane] Marking the node ci-4081.3.7-a-f5e4f53418 as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]"
      L60: "cluster.go:125: [bootstrap-token] Using token: u2zfun.xe6ftkwqk2503jd6"
      L61: "cluster.go:125: [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles"
      L62: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes"
      L63: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials"
      L64: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token"
      L65: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster"
      L66: "cluster.go:125: [bootstrap-token] Creating the __cluster-info__ ConfigMap in the __kube-public__ namespace"
      L67: "cluster.go:125: [kubelet-finalize] Updating __/etc/kubernetes/kubelet.conf__ to point to a rotatable kubelet client certificate and key"
      L68: "cluster.go:125: [addons] Applied essential addon: CoreDNS"
      L69: "cluster.go:125: [addons] Applied essential addon: kube-proxy"
      L70: "cluster.go:125: "
      L71: "cluster.go:125: Your Kubernetes control-plane has initialized successfully!"
      L72: "cluster.go:125: "
      L73: "cluster.go:125: To start using your cluster, you need to run the following as a regular user:"
      L74: "cluster.go:125: "
      L75: "cluster.go:125:   mkdir -p $HOME/.kube"
      L76: "cluster.go:125:   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config"
      L77: "cluster.go:125:   sudo chown $(id -u):$(id -g) $HOME/.kube/config"
      L78: "cluster.go:125: "
      L79: "cluster.go:125: Alternatively, if you are the root user, you can run:"
      L80: "cluster.go:125: "
      L81: "cluster.go:125:   export KUBECONFIG=/etc/kubernetes/admin.conf"
      L82: "cluster.go:125: "
      L83: "cluster.go:125: You should now deploy a pod network to the cluster."
      L84: "cluster.go:125: Run __kubectl apply -f [podnetwork].yaml__ with one of the options listed at:"
      L85: "cluster.go:125:   https://kubernetes.io/docs/concepts/cluster-administration/addons/"
      L86: "cluster.go:125: "
      L87: "cluster.go:125: Then you can join any number of worker nodes by running the following on each as root:"
      L88: "cluster.go:125: "
      L89: "cluster.go:125: kubeadm join 10.0.0.30:6443 --token u2zfun.xe6ftkwqk2503jd6 _"
      L90: "cluster.go:125:  --discovery-token-ca-cert-hash sha256:2305510cf366d403a9117e3db3e24560e9bc1ea9615595f69bd0b8cfdb7af58c "
      L91: "cluster.go:125: i  Using Cilium version 1.12.5"
      L92: "cluster.go:125: ? Auto-detected cluster name: kubernetes"
      L93: "cluster.go:125: ? Auto-detected datapath mode: tunnel"
      L94: "cluster.go:125: ? Auto-detected kube-proxy has been installed"
      L95: "cluster.go:125: i  helm template --namespace kube-system cilium cilium/cilium --version 1.12.5 --set cluster.id=0,cluster.name=kubernetes,encryption.nodeEncryption=false,extraConfig.cluster-pool-ipv4-cidr=192.168.0.0/17,extraConfig.enable-endpoint-routes=true,kubeProxyReplacement=disabled,operator.replicas=1,serviceAccounts.cilium.name=cilium,serviceAccounts.operator.name=cilium-operator,tunnel=vxlan"
      L96: "cluster.go:125: i  Storing helm values file in kube-system/cilium-cli-helm-values Secret"
      L97: "cluster.go:125: ? Created CA in secret cilium-ca"
      L98: "cluster.go:125: ? Generating certificates for Hubble..."
      L99: "cluster.go:125: ? Creating Service accounts..."
      L100: "cluster.go:125: ? Creating Cluster roles..."
      L101: "cluster.go:125: ? Creating ConfigMap for Cilium version 1.12.5..."
      L102: "cluster.go:125: i  Manual overwrite in ConfigMap: enable-endpoint-routes=true"
      L103: "cluster.go:125: i  Manual overwrite in ConfigMap: cluster-pool-ipv4-cidr=192.168.0.0/17"
      L104: "cluster.go:125: ? Creating Agent DaemonSet..."
      L105: "cluster.go:125: level=warning msg=__spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/mount-cgroup]: deprecated since v1.30; use the ___appArmorProfile___ field instead__ subsys=klog"
      L106: "cluster.go:125: level=warning msg=__spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/apply-sysctl-overwrites]: deprecated since v1.30; use the ___appArmorProfile___ field instead__ subsys=klog"
      L107: "cluster.go:125: level=warning msg=__spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/clean-cilium-state]: deprecated since v1.30; use the ___appArmorProfile___ field instead__ subsys=klog"
      L108: "cluster.go:125: level=warning msg=__spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/cilium-agent]: deprecated since v1.30; use the ___appArmorProfile___ field instead__ subsys=klog"
      L109: "cluster.go:125: ? Creating Operator Deployment..."
      L110: "cluster.go:125: ? Waiting for Cilium to be installed and ready..."
      L111: "cluster.go:125: ? Cilium was successfully installed! Run _cilium status_ to view installation health"
      L112: "cluster.go:125: (Cilium CLI ASCII-art logo with garbled ANSI color escapes omitted)"
      L113: "cluster.go:125:     Cilium:         OK"
      L114: "cluster.go:125:     Operator:       OK"
      L115: "cluster.go:125:     Hubble:         disabled"
      L116: "cluster.go:125:     ClusterMesh:    disabled"
      L117: "cluster.go:125: "
      L118: "cluster.go:125: "
      L119: "cluster.go:125: Deployment       cilium-operator    "
      L120: "cluster.go:125: DaemonSet        cilium             "
      L121: "cluster.go:125: Containers:      cilium             "
      L122: "cluster.go:125:                  cilium-operator    "
      L123: "cluster.go:125: Cluster Pods:    0/0 managed by Cilium"
      L124: "kubeadm.go:197: unable to setup cluster: unable to create worker node: PUT https://management.azure.com/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kola-cluster-image-a8f2c786d2/providers/Microsoft.Compute/virtualMachines/ci-4081.3.7-a-6f4e22fbf8"
      L125: "--------------------------------------------------------------------------------"
      L126: "RESPONSE 409: 409 Conflict"
      L127: "ERROR CODE: OperationNotAllowed"
      L128: "--------------------------------------------------------------------------------"
      L129: "{"
      L130: "__error__: {"
      L131: "__code__: __OperationNotAllowed__,"
      L132: "__message__: __Operation could not be completed as it results in exceeding approved standardDSv4Family Cores quota. Additional details - Deployment Model: Resource Manager, Location: northeurope, Current Limit: 65, Current Usage: 64, Additional Required: 2, (Minimum) New Limit Required: 66. Setup Alerts when Quota reaches threshold. Learn more at https://aka.ms/quotamonitoringalerting . Submit a request for Quota increase at https://aka.ms/ProdportalCRP/#blade/Microsoft_Azure_Capacity/UsageAndQuota.ReactView/Parameters/%7B%22subscriptionId%22:%220e46bd28-a80f-4d3a-8200-d9eb8d80cb2e%22,%22command%22:%22openQuotaApprovalBlade%22,%22quotas%22:[%7B%22location%22:%22northeurope%22,%22providerId%22:%22Microsoft.Compute%22,%22resourceName%22:%22standardDSv4Family%22,%22quotaRequest%22:%7B%22properties%22:%7B%22limit%22:66,%22unit%22:%22Count%22,%22name%22:%7B%22value%22:%22standardDSv4Family%22%7D%7D%7D%7D]%7D by specifying parameters listed in the _Details_ section for deployment to succeed. Please read more about quota limits at https://docs.microsoft.com/en-us/azure/azure-supportability/per-vm-quota-requests__"
      L133: "}"
      L134: "}"
      L135: "--------------------------------------------------------------------------------_"
      L136: " "
  ```


</details>
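
Both diagnostics above fail on the same Azure 409 (`OperationNotAllowed`): the `standardDSv4Family` cores quota in `northeurope` has limit 65 with 64 cores already in use, so a VM needing 2 more cores cannot be placed. A minimal sketch of that arithmetic, using only the numbers reported in the error message (the helper names here are illustrative, not part of any Azure SDK):

```python
# Quota check as reported by the 409 above:
# Current Limit: 65, Current Usage: 64, Additional Required: 2.

def fits_quota(limit: int, usage: int, required: int) -> bool:
    """True if `required` extra cores fit under the subscription `limit`."""
    return usage + required <= limit

def minimum_new_limit(usage: int, required: int) -> int:
    """Smallest limit that would let the request succeed."""
    return usage + required

print(fits_quota(65, 64, 2))     # False -> the PUT fails with OperationNotAllowed
print(minimum_new_limit(64, 2))  # 66, matching "(Minimum) New Limit Required: 66"
```

This is why the error suggests requesting a limit of 66: any concurrent kola runs in the same subscription and region share this pool, so the failure depends on what else is running at the time.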


🟢 ok **kubeadm.v1.33.8.cilium.cgroupv1.base**; Succeeded: azure (1)

🟢 ok **kubeadm.v1.33.8.flannel.base**; Succeeded: azure (3); Failed: azure (1, 2)

<details>

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for azure, run 2</summary>

  ```
      L1: " Error: _cluster.go:125: I0421 10:32:15.225964    2350 version.go:261] remote version is much newer: v1.35.4; falling back to: stable-1.33"
      L2: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-apiserver:v1.33.11"
      L3: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-controller-manager:v1.33.11"
      L4: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-scheduler:v1.33.11"
      L5: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-proxy:v1.33.11"
      L6: "cluster.go:125: [config/images] Pulled registry.k8s.io/coredns/coredns:v1.12.0"
      L7: "cluster.go:125: [config/images] Pulled registry.k8s.io/pause:3.10"
      L8: "cluster.go:125: [config/images] Pulled registry.k8s.io/etcd:3.5.24-0"
      L9: "cluster.go:125: I0421 10:32:30.243013    2672 version.go:261] remote version is much newer: v1.35.4; falling back to: stable-1.33"
      L10: "cluster.go:125: [init] Using Kubernetes version: v1.33.11"
      L11: "cluster.go:125: [preflight] Running pre-flight checks"
      L12: "cluster.go:125:  [WARNING Hostname]: hostname __ci-4081.3.7-a-5ce9f36940__ could not be reached"
      L13: "cluster.go:125:  [WARNING Hostname]: hostname __ci-4081.3.7-a-5ce9f36940__: lookup ci-4081.3.7-a-5ce9f36940 on 168.63.129.16:53: no such host"
      L14: "cluster.go:125:  [WARNING Service-Kubelet]: kubelet service is not enabled, please run _systemctl enable kubelet.service_"
      L15: "cluster.go:125: [preflight] Pulling images required for setting up a Kubernetes cluster"
      L16: "cluster.go:125: [preflight] This might take a minute or two, depending on the speed of your internet connection"
      L17: "cluster.go:125: [preflight] You can also perform this action beforehand using _kubeadm config images pull_"
      L18: "cluster.go:125: W0421 10:32:30.443177    2672 checks.go:843] detected that the sandbox image __registry.k8s.io/pause:3.8__ of the container runtime is inconsistent with that used by kubeadm.It is recommended to use __registry.k8s.io/pause:3.10__ as the CRI sandbox image."
      L19: "cluster.go:125: [certs] Using certificateDir folder __/etc/kubernetes/pki__"
      L20: "cluster.go:125: [certs] Generating __ca__ certificate and key"
      L21: "cluster.go:125: [certs] Generating __apiserver__ certificate and key"
      L22: "cluster.go:125: [certs] apiserver serving cert is signed for DNS names [ci-4081.3.7-a-5ce9f36940 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.0.0.17]"
      L23: "cluster.go:125: [certs] Generating __apiserver-kubelet-client__ certificate and key"
      L24: "cluster.go:125: [certs] Generating __front-proxy-ca__ certificate and key"
      L25: "cluster.go:125: [certs] Generating __front-proxy-client__ certificate and key"
      L26: "cluster.go:125: [certs] External etcd mode: Skipping etcd/ca certificate authority generation"
      L27: "cluster.go:125: [certs] External etcd mode: Skipping etcd/server certificate generation"
      L28: "cluster.go:125: [certs] External etcd mode: Skipping etcd/peer certificate generation"
      L29: "cluster.go:125: [certs] External etcd mode: Skipping etcd/healthcheck-client certificate generation"
      L30: "cluster.go:125: [certs] External etcd mode: Skipping apiserver-etcd-client certificate generation"
      L31: "cluster.go:125: [certs] Generating __sa__ key and public key"
      L32: "cluster.go:125: [kubeconfig] Using kubeconfig folder __/etc/kubernetes__"
      L33: "cluster.go:125: [kubeconfig] Writing __admin.conf__ kubeconfig file"
      L34: "cluster.go:125: [kubeconfig] Writing __super-admin.conf__ kubeconfig file"
      L35: "cluster.go:125: [kubeconfig] Writing __kubelet.conf__ kubeconfig file"
      L36: "cluster.go:125: [kubeconfig] Writing __controller-manager.conf__ kubeconfig file"
      L37: "cluster.go:125: [kubeconfig] Writing __scheduler.conf__ kubeconfig file"
      L38: "cluster.go:125: [control-plane] Using manifest folder __/etc/kubernetes/manifests__"
      L39: "cluster.go:125: [control-plane] Creating static Pod manifest for __kube-apiserver__"
      L40: "cluster.go:125: [control-plane] Creating static Pod manifest for __kube-controller-manager__"
      L41: "cluster.go:125: [control-plane] Creating static Pod manifest for __kube-scheduler__"
      L42: "cluster.go:125: [kubelet-start] Writing kubelet environment file with flags to file __/var/lib/kubelet/kubeadm-flags.env__"
      L43: "cluster.go:125: [kubelet-start] Writing kubelet configuration to file __/var/lib/kubelet/config.yaml__"
      L44: "cluster.go:125: [kubelet-start] Starting the kubelet"
      L45: "cluster.go:125: [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory __/etc/kubernetes/manifests__"
      L46: "cluster.go:125: [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s"
      L47: "cluster.go:125: [kubelet-check] The kubelet is healthy after 500.816856ms"
      L48: "cluster.go:125: [control-plane-check] Waiting for healthy control plane components. This can take up to 30m0s"
      L49: "cluster.go:125: [control-plane-check] Checking kube-apiserver at https://10.0.0.17:6443/livez"
      L50: "cluster.go:125: [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz"
      L51: "cluster.go:125: [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez"
      L52: "cluster.go:125: [control-plane-check] kube-controller-manager is healthy after 6.00923981s"
      L53: "cluster.go:125: [control-plane-check] kube-scheduler is healthy after 6.67743407s"
      L54: "cluster.go:125: [control-plane-check] kube-apiserver is healthy after 8.501834204s"
      L55: "cluster.go:125: [upload-config] Storing the configuration used in ConfigMap __kubeadm-config__ in the __kube-system__ Namespace"
      L56: "cluster.go:125: [kubelet] Creating a ConfigMap __kubelet-config__ in namespace kube-system with the configuration for the kubelets in the cluster"
      L57: "cluster.go:125: [upload-certs] Skipping phase. Please see --upload-certs"
      L58: "cluster.go:125: [mark-control-plane] Marking the node ci-4081.3.7-a-5ce9f36940 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]"
      L59: "cluster.go:125: [mark-control-plane] Marking the node ci-4081.3.7-a-5ce9f36940 as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]"
      L60: "cluster.go:125: [bootstrap-token] Using token: mnd6zl.s7u3krok6q0gnisi"
      L61: "cluster.go:125: [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles"
      L62: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes"
      L63: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials"
      L64: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token"
      L65: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster"
      L66: "cluster.go:125: [bootstrap-token] Creating the __cluster-info__ ConfigMap in the __kube-public__ namespace"
      L67: "cluster.go:125: [kubelet-finalize] Updating __/etc/kubernetes/kubelet.conf__ to point to a rotatable kubelet client certificate and key"
      L68: "cluster.go:125: [addons] Applied essential addon: CoreDNS"
      L69: "cluster.go:125: [addons] Applied essential addon: kube-proxy"
      L70: "cluster.go:125: "
      L71: "cluster.go:125: Your Kubernetes control-plane has initialized successfully!"
      L72: "cluster.go:125: "
      L73: "cluster.go:125: To start using your cluster, you need to run the following as a regular user:"
      L74: "cluster.go:125: "
      L75: "cluster.go:125:   mkdir -p $HOME/.kube"
      L76: "cluster.go:125:   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config"
      L77: "cluster.go:125:   sudo chown $(id -u):$(id -g) $HOME/.kube/config"
      L78: "cluster.go:125: "
      L79: "cluster.go:125: Alternatively, if you are the root user, you can run:"
      L80: "cluster.go:125: "
      L81: "cluster.go:125:   export KUBECONFIG=/etc/kubernetes/admin.conf"
      L82: "cluster.go:125: "
      L83: "cluster.go:125: You should now deploy a pod network to the cluster."
      L84: "cluster.go:125: Run __kubectl apply -f [podnetwork].yaml__ with one of the options listed at:"
      L85: "cluster.go:125:   https://kubernetes.io/docs/concepts/cluster-administration/addons/"
      L86: "cluster.go:125: "
      L87: "cluster.go:125: Then you can join any number of worker nodes by running the following on each as root:"
      L88: "cluster.go:125: "
      L89: "cluster.go:125: kubeadm join 10.0.0.17:6443 --token mnd6zl.s7u3krok6q0gnisi _"
      L90: "cluster.go:125:  --discovery-token-ca-cert-hash sha256:b9f94ede72aaa1b505ce96f517e913defd9447d417930d417528ba3f6b2afcc6 "
      L91: "cluster.go:125: namespace/kube-flannel created"
      L92: "cluster.go:125: clusterrole.rbac.authorization.k8s.io/flannel created"
      L93: "cluster.go:125: clusterrolebinding.rbac.authorization.k8s.io/flannel created"
      L94: "cluster.go:125: serviceaccount/flannel created"
      L95: "cluster.go:125: configmap/kube-flannel-cfg created"
      L96: "cluster.go:125: daemonset.apps/kube-flannel-ds created"
      L97: "kubeadm.go:197: unable to setup cluster: unable to create worker node: PollUntilDone(ci-4081.3.7-a-15fc7e0361): GET https://management.azure.com/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/providers/Microsoft.Compute/locations/northeurope/operations/cb3f49ea-2dac-4c8b-bf86-65170d6cb2f6"
      L98: "--------------------------------------------------------------------------------"
      L99: "RESPONSE 200: 200 OK"
      L100: "ERROR CODE: ResourceGroupBeingDeleted"
      L101: "--------------------------------------------------------------------------------"
      L102: "{"
      L103: "__startTime__: __2026-04-21T10:32:49.4293017+00:00__,"
      L104: "__endTime__: __2026-04-21T10:32:51.6870313+00:00__,"
      L105: "__status__: __Failed__,"
      L106: "__error__: {"
      L107: "__code__: __ResourceGroupBeingDeleted__,"
      L108: "__message__: __The resource group _KOLA-CLUSTER-IMAGE-19C3C8E121_ is in deprovisioning state and cannot perform this operation.  Target: _/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kola-cluster-image-19c3c8e121/providers/Microsoft.Compute/disks/ci-4081.3.7-a-15fc7e0361_disk1_412cb5326bd0478b901b26d7a581dcec_.__"
      L109: "},"
      L110: "__name__: __cb3f49ea-2dac-4c8b-bf86-65170d6cb2f6__"
      L111: "}"
      L112: "--------------------------------------------------------------------------------_"
      L113: " "
  ```

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for azure, run 1</summary>

  ```
      L1: " Error: _kubeadm.go:197: unable to setup cluster: unable to create etcd node: PUT https://management.azure.com/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kola-cluster-image-a8f2c786d2/providers/Microsoft.Compute/virtualMachines/ci-4081.3.7-a-08d1983581"
      L2: "--------------------------------------------------------------------------------"
      L3: "RESPONSE 409: 409 Conflict"
      L4: "ERROR CODE: OperationNotAllowed"
      L5: "--------------------------------------------------------------------------------"
      L6: "{"
      L7: "__error__: {"
      L8: "__code__: __OperationNotAllowed__,"
      L9: "__message__: __Operation could not be completed as it results in exceeding approved standardDSv4Family Cores quota. Additional details - Deployment Model: Resource Manager, Location: northeurope, Current Limit: 65, Current Usage: 64, Additional Required: 2, (Minimum) New Limit Required: 66. Setup Alerts when Quota reaches threshold. Learn more at https://aka.ms/quotamonitoringalerting . Submit a request for Quota increase at https://aka.ms/ProdportalCRP/#blade/Microsoft_Azure_Capacity/UsageAndQuota.ReactView/Parameters/%7B%22subscriptionId%22:%220e46bd28-a80f-4d3a-8200-d9eb8d80cb2e%22,%22command%22:%22openQuotaApprovalBlade%22,%22quotas%22:[%7B%22location%22:%22northeurope%22,%22providerId%22:%22Microsoft.Compute%22,%22resourceName%22:%22standardDSv4Family%22,%22quotaRequest%22:%7B%22properties%22:%7B%22limit%22:66,%22unit%22:%22Count%22,%22name%22:%7B%22value%22:%22standardDSv4Family%22%7D%7D%7D%7D]%7D by specifying parameters listed in the _Details_ section for deployment to succeed. Please read more about quota limits at https://docs.microsoft.com/en-us/azure/azure-supportability/per-vm-quota-requests__"
      L10: "}"
      L11: "}"
      L12: "--------------------------------------------------------------------------------_"
      L13: " "
  ```


</details>


🟢 ok **kubeadm.v1.33.8.flannel.cgroupv1.base**; Succeeded: azure (1)

🟢 ok **kubeadm.v1.34.4.calico.base**; Succeeded: azure (3); Failed: azure (1, 2)

<details>

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for azure, run 2</summary>

  ```
      L1: " Error: _cluster.go:125: I0421 10:32:30.104479    2508 version.go:260] remote version is much newer: v1.35.4; falling back to: stable-1.34"
      L2: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-apiserver:v1.34.7"
      L3: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-controller-manager:v1.34.7"
      L4: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-scheduler:v1.34.7"
      L5: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-proxy:v1.34.7"
      L6: "cluster.go:125: [config/images] Pulled registry.k8s.io/coredns/coredns:v1.12.1"
      L7: "cluster.go:125: [config/images] Pulled registry.k8s.io/pause:3.10.1"
      L8: "cluster.go:125: [config/images] Pulled registry.k8s.io/etcd:3.6.5-0"
      L9: "cluster.go:125: I0421 10:32:43.198136    2723 version.go:260] remote version is much newer: v1.35.4; falling back to: stable-1.34"
      L10: "cluster.go:125: [init] Using Kubernetes version: v1.34.7"
      L11: "cluster.go:125: [preflight] Running pre-flight checks"
      L12: "cluster.go:125:  [WARNING Hostname]: hostname __ci-4081.3.7-a-225db887a9__ could not be reached"
      L13: "cluster.go:125:  [WARNING Hostname]: hostname __ci-4081.3.7-a-225db887a9__: lookup ci-4081.3.7-a-225db887a9 on 168.63.129.16:53: no such host"
      L14: "cluster.go:125:  [WARNING Service-Kubelet]: kubelet service is not enabled, please run _systemctl enable kubelet.service_"
      L15: "cluster.go:125: [preflight] Pulling images required for setting up a Kubernetes cluster"
      L16: "cluster.go:125: [preflight] This might take a minute or two, depending on the speed of your internet connection"
      L17: "cluster.go:125: [preflight] You can also perform this action beforehand using _kubeadm config images pull_"
      L18: "cluster.go:125: W0421 10:32:43.371015    2723 checks.go:827] detected that the sandbox image __registry.k8s.io/pause:3.8__ of the container runtime is inconsistent with that used by kubeadm. It is recommended to use __registry.k8s.io/pause:3.10.1__ as the CRI sandbox image."
      L19: "cluster.go:125: [certs] Using certificateDir folder __/etc/kubernetes/pki__"
      L20: "cluster.go:125: [certs] Generating __ca__ certificate and key"
      L21: "cluster.go:125: [certs] Generating __apiserver__ certificate and key"
      L22: "cluster.go:125: [certs] apiserver serving cert is signed for DNS names [ci-4081.3.7-a-225db887a9 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.0.0.11]"
      L23: "cluster.go:125: [certs] Generating __apiserver-kubelet-client__ certificate and key"
      L24: "cluster.go:125: [certs] Generating __front-proxy-ca__ certificate and key"
      L25: "cluster.go:125: [certs] Generating __front-proxy-client__ certificate and key"
      L26: "cluster.go:125: [certs] External etcd mode: Skipping etcd/ca certificate authority generation"
      L27: "cluster.go:125: [certs] External etcd mode: Skipping etcd/server certificate generation"
      L28: "cluster.go:125: [certs] External etcd mode: Skipping etcd/peer certificate generation"
      L29: "cluster.go:125: [certs] External etcd mode: Skipping etcd/healthcheck-client certificate generation"
      L30: "cluster.go:125: [certs] External etcd mode: Skipping apiserver-etcd-client certificate generation"
      L31: "cluster.go:125: [certs] Generating __sa__ key and public key"
      L32: "cluster.go:125: [kubeconfig] Using kubeconfig folder __/etc/kubernetes__"
      L33: "cluster.go:125: [kubeconfig] Writing __admin.conf__ kubeconfig file"
      L34: "cluster.go:125: [kubeconfig] Writing __super-admin.conf__ kubeconfig file"
      L35: "cluster.go:125: [kubeconfig] Writing __kubelet.conf__ kubeconfig file"
      L36: "cluster.go:125: [kubeconfig] Writing __controller-manager.conf__ kubeconfig file"
      L37: "cluster.go:125: [kubeconfig] Writing __scheduler.conf__ kubeconfig file"
      L38: "cluster.go:125: [control-plane] Using manifest folder __/etc/kubernetes/manifests__"
      L39: "cluster.go:125: [control-plane] Creating static Pod manifest for __kube-apiserver__"
      L40: "cluster.go:125: [control-plane] Creating static Pod manifest for __kube-controller-manager__"
      L41: "cluster.go:125: [control-plane] Creating static Pod manifest for __kube-scheduler__"
      L42: "cluster.go:125: [kubelet-start] Writing kubelet environment file with flags to file __/var/lib/kubelet/kubeadm-flags.env__"
      L43: "cluster.go:125: [kubelet-start] Writing kubelet configuration to file __/var/lib/kubelet/instance-config.yaml__"
      L44: "cluster.go:125: [patches] Applied patch of type __application/strategic-merge-patch+json__ to target __kubeletconfiguration__"
      L45: "cluster.go:125: [kubelet-start] Writing kubelet configuration to file __/var/lib/kubelet/config.yaml__"
      L46: "cluster.go:125: [kubelet-start] Starting the kubelet"
      L47: "cluster.go:125: [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory __/etc/kubernetes/manifests__"
      L48: "cluster.go:125: [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s"
      L49: "cluster.go:125: [kubelet-check] The kubelet is healthy after 1.001114289s"
      L50: "cluster.go:125: [control-plane-check] Waiting for healthy control plane components. This can take up to 30m0s"
      L51: "cluster.go:125: [control-plane-check] Checking kube-apiserver at https://10.0.0.11:6443/livez"
      L52: "cluster.go:125: [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz"
      L53: "cluster.go:125: [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez"
      L54: "cluster.go:125: [control-plane-check] kube-controller-manager is healthy after 3.369811522s"
      L55: "cluster.go:125: [control-plane-check] kube-scheduler is healthy after 3.970747708s"
      L56: "cluster.go:125: [control-plane-check] kube-apiserver is healthy after 8.001947223s"
      L57: "cluster.go:125: [upload-config] Storing the configuration used in ConfigMap __kubeadm-config__ in the __kube-system__ Namespace"
      L58: "cluster.go:125: [kubelet] Creating a ConfigMap __kubelet-config__ in namespace kube-system with the configuration for the kubelets in the cluster"
      L59: "cluster.go:125: [upload-certs] Skipping phase. Please see --upload-certs"
      L60: "cluster.go:125: [mark-control-plane] Marking the node ci-4081.3.7-a-225db887a9 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]"
      L61: "cluster.go:125: [mark-control-plane] Marking the node ci-4081.3.7-a-225db887a9 as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]"
      L62: "cluster.go:125: [bootstrap-token] Using token: 5b8wlz.pe0myan1j49r863n"
      L63: "cluster.go:125: [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles"
      L64: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes"
      L65: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials"
      L66: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token"
      L67: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster"
      L68: "cluster.go:125: [bootstrap-token] Creating the __cluster-info__ ConfigMap in the __kube-public__ namespace"
      L69: "cluster.go:125: [kubelet-finalize] Updating __/etc/kubernetes/kubelet.conf__ to point to a rotatable kubelet client certificate and key"
      L70: "kubeadm.go:197: unable to setup cluster: unable to run master script: wait: remote command exited without exit status or exit signal_"
      L71: " "
  ```

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for azure, run 1</summary>

  ```
      L1: " Error: _kubeadm.go:197: unable to setup cluster: unable to create master node: PUT https://management.azure.com/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kola-cluster-image-a8f2c786d2/providers/Microsoft.Compute/virtualMachines/ci-4081.3.7-a-892e565a07"
      L2: "--------------------------------------------------------------------------------"
      L3: "RESPONSE 409: 409 Conflict"
      L4: "ERROR CODE: OperationNotAllowed"
      L5: "--------------------------------------------------------------------------------"
      L6: "{"
      L7: "__error__: {"
      L8: "__code__: __OperationNotAllowed__,"
      L9: "__message__: __Operation could not be completed as it results in exceeding approved standardDSv4Family Cores quota. Additional details - Deployment Model: Resource Manager, Location: northeurope, Current Limit: 65, Current Usage: 64, Additional Required: 2, (Minimum) New Limit Required: 66. Setup Alerts when Quota reaches threshold. Learn more at https://aka.ms/quotamonitoringalerting . Submit a request for Quota increase at https://aka.ms/ProdportalCRP/#blade/Microsoft_Azure_Capacity/UsageAndQuota.ReactView/Parameters/%7B%22subscriptionId%22:%220e46bd28-a80f-4d3a-8200-d9eb8d80cb2e%22,%22command%22:%22openQuotaApprovalBlade%22,%22quotas%22:[%7B%22location%22:%22northeurope%22,%22providerId%22:%22Microsoft.Compute%22,%22resourceName%22:%22standardDSv4Family%22,%22quotaRequest%22:%7B%22properties%22:%7B%22limit%22:66,%22unit%22:%22Count%22,%22name%22:%7B%22value%22:%22standardDSv4Family%22%7D%7D%7D%7D]%7D by specifying parameters listed in the 'Details' section for deployment to succeed. Please read more about quota limits at https://docs.microsoft.com/en-us/azure/azure-supportability/per-vm-quota-requests__"
      L10: "}"
      L11: "}"
      L12: "--------------------------------------------------------------------------------_"
      L13: " "
  ```


</details>


🟢 ok **kubeadm.v1.34.4.cilium.base**; Succeeded: azure (1)

🟢 ok **kubeadm.v1.34.4.flannel.base**; Succeeded: azure (3); Failed: azure (1, 2)

<details>

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for azure, run 2</summary>

  ```
      L1: " Error: _cluster.go:125: I0421 10:32:44.627769    2407 version.go:260] remote version is much newer: v1.35.4; falling back to: stable-1.34"
      L2: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-apiserver:v1.34.7"
      L3: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-controller-manager:v1.34.7"
      L4: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-scheduler:v1.34.7"
      L5: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-proxy:v1.34.7"
      L6: "cluster.go:125: [config/images] Pulled registry.k8s.io/coredns/coredns:v1.12.1"
      L7: "cluster.go:125: [config/images] Pulled registry.k8s.io/pause:3.10.1"
      L8: "cluster.go:125: [config/images] Pulled registry.k8s.io/etcd:3.6.5-0"
      L9: "cluster.go:125: I0421 10:32:56.820519    2620 version.go:260] remote version is much newer: v1.35.4; falling back to: stable-1.34"
      L10: "cluster.go:125: [init] Using Kubernetes version: v1.34.7"
      L11: "cluster.go:125: [preflight] Running pre-flight checks"
      L12: "cluster.go:125:  [WARNING Hostname]: hostname __ci-4081.3.7-a-3e5f237984__ could not be reached"
      L13: "cluster.go:125:  [WARNING Hostname]: hostname __ci-4081.3.7-a-3e5f237984__: lookup ci-4081.3.7-a-3e5f237984 on 168.63.129.16:53: no such host"
      L14: "cluster.go:125:  [WARNING Service-Kubelet]: kubelet service is not enabled, please run _systemctl enable kubelet.service_"
      L15: "cluster.go:125: [preflight] Pulling images required for setting up a Kubernetes cluster"
      L16: "cluster.go:125: [preflight] This might take a minute or two, depending on the speed of your internet connection"
      L17: "cluster.go:125: [preflight] You can also perform this action beforehand using _kubeadm config images pull_"
      L18: "cluster.go:125: W0421 10:32:57.035132    2620 checks.go:827] detected that the sandbox image __registry.k8s.io/pause:3.8__ of the container runtime is inconsistent with that used by kubeadm. It is recommended to use __registry.k8s.io/pause:3.10.1__ as the CRI sandbox image."
      L19: "cluster.go:125: [certs] Using certificateDir folder __/etc/kubernetes/pki__"
      L20: "cluster.go:125: [certs] Generating __ca__ certificate and key"
      L21: "cluster.go:125: [certs] Generating __apiserver__ certificate and key"
      L22: "cluster.go:125: [certs] apiserver serving cert is signed for DNS names [ci-4081.3.7-a-3e5f237984 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.0.0.14]"
      L23: "cluster.go:125: [certs] Generating __apiserver-kubelet-client__ certificate and key"
      L24: "cluster.go:125: [certs] Generating __front-proxy-ca__ certificate and key"
      L25: "cluster.go:125: [certs] Generating __front-proxy-client__ certificate and key"
      L26: "cluster.go:125: [certs] External etcd mode: Skipping etcd/ca certificate authority generation"
      L27: "cluster.go:125: [certs] External etcd mode: Skipping etcd/server certificate generation"
      L28: "cluster.go:125: [certs] External etcd mode: Skipping etcd/peer certificate generation"
      L29: "cluster.go:125: [certs] External etcd mode: Skipping etcd/healthcheck-client certificate generation"
      L30: "cluster.go:125: [certs] External etcd mode: Skipping apiserver-etcd-client certificate generation"
      L31: "cluster.go:125: [certs] Generating __sa__ key and public key"
      L32: "cluster.go:125: [kubeconfig] Using kubeconfig folder __/etc/kubernetes__"
      L33: "cluster.go:125: [kubeconfig] Writing __admin.conf__ kubeconfig file"
      L34: "kubeadm.go:197: unable to setup cluster: unable to run master script: wait: remote command exited without exit status or exit signal_"
      L35: " "
  ```

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for azure, run 1</summary>

  ```
      L1: " Error: _cluster.go:125: I0421 10:14:00.343071    2433 version.go:260] remote version is much newer: v1.35.4; falling back to: stable-1.34"
      L2: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-apiserver:v1.34.7"
      L3: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-controller-manager:v1.34.7"
      L4: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-scheduler:v1.34.7"
      L5: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-proxy:v1.34.7"
      L6: "cluster.go:125: [config/images] Pulled registry.k8s.io/coredns/coredns:v1.12.1"
      L7: "cluster.go:125: [config/images] Pulled registry.k8s.io/pause:3.10.1"
      L8: "cluster.go:125: [config/images] Pulled registry.k8s.io/etcd:3.6.5-0"
      L9: "cluster.go:125: I0421 10:14:14.890009    2646 version.go:260] remote version is much newer: v1.35.4; falling back to: stable-1.34"
      L10: "cluster.go:125: [init] Using Kubernetes version: v1.34.7"
      L11: "cluster.go:125: [preflight] Running pre-flight checks"
      L12: "cluster.go:125:  [WARNING Hostname]: hostname __ci-4081.3.7-a-ac88dd84b3__ could not be reached"
      L13: "cluster.go:125:  [WARNING Hostname]: hostname __ci-4081.3.7-a-ac88dd84b3__: lookup ci-4081.3.7-a-ac88dd84b3 on 168.63.129.16:53: no such host"
      L14: "cluster.go:125:  [WARNING Service-Kubelet]: kubelet service is not enabled, please run _systemctl enable kubelet.service_"
      L15: "cluster.go:125: [preflight] Pulling images required for setting up a Kubernetes cluster"
      L16: "cluster.go:125: [preflight] This might take a minute or two, depending on the speed of your internet connection"
      L17: "cluster.go:125: [preflight] You can also perform this action beforehand using _kubeadm config images pull_"
      L18: "cluster.go:125: W0421 10:14:15.044208    2646 checks.go:827] detected that the sandbox image __registry.k8s.io/pause:3.8__ of the container runtime is inconsistent with that used by kubeadm. It is recommended to use __registry.k8s.io/pause:3.10.1__ as the CRI sandbox image."
      L19: "cluster.go:125: [certs] Using certificateDir folder __/etc/kubernetes/pki__"
      L20: "cluster.go:125: [certs] Generating __ca__ certificate and key"
      L21: "cluster.go:125: [certs] Generating __apiserver__ certificate and key"
      L22: "cluster.go:125: [certs] apiserver serving cert is signed for DNS names [ci-4081.3.7-a-ac88dd84b3 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.0.0.14]"
      L23: "cluster.go:125: [certs] Generating __apiserver-kubelet-client__ certificate and key"
      L24: "cluster.go:125: [certs] Generating __front-proxy-ca__ certificate and key"
      L25: "cluster.go:125: [certs] Generating __front-proxy-client__ certificate and key"
      L26: "cluster.go:125: [certs] External etcd mode: Skipping etcd/ca certificate authority generation"
      L27: "cluster.go:125: [certs] External etcd mode: Skipping etcd/server certificate generation"
      L28: "cluster.go:125: [certs] External etcd mode: Skipping etcd/peer certificate generation"
      L29: "cluster.go:125: [certs] External etcd mode: Skipping etcd/healthcheck-client certificate generation"
      L30: "cluster.go:125: [certs] External etcd mode: Skipping apiserver-etcd-client certificate generation"
      L31: "cluster.go:125: [certs] Generating __sa__ key and public key"
      L32: "cluster.go:125: [kubeconfig] Using kubeconfig folder __/etc/kubernetes__"
      L33: "cluster.go:125: [kubeconfig] Writing __admin.conf__ kubeconfig file"
      L34: "cluster.go:125: [kubeconfig] Writing __super-admin.conf__ kubeconfig file"
      L35: "cluster.go:125: [kubeconfig] Writing __kubelet.conf__ kubeconfig file"
      L36: "cluster.go:125: [kubeconfig] Writing __controller-manager.conf__ kubeconfig file"
      L37: "cluster.go:125: [kubeconfig] Writing __scheduler.conf__ kubeconfig file"
      L38: "cluster.go:125: [control-plane] Using manifest folder __/etc/kubernetes/manifests__"
      L39: "cluster.go:125: [control-plane] Creating static Pod manifest for __kube-apiserver__"
      L40: "cluster.go:125: [control-plane] Creating static Pod manifest for __kube-controller-manager__"
      L41: "cluster.go:125: [control-plane] Creating static Pod manifest for __kube-scheduler__"
      L42: "cluster.go:125: [kubelet-start] Writing kubelet environment file with flags to file __/var/lib/kubelet/kubeadm-flags.env__"
      L43: "cluster.go:125: [kubelet-start] Writing kubelet configuration to file __/var/lib/kubelet/instance-config.yaml__"
      L44: "cluster.go:125: [patches] Applied patch of type __application/strategic-merge-patch+json__ to target __kubeletconfiguration__"
      L45: "cluster.go:125: [kubelet-start] Writing kubelet configuration to file __/var/lib/kubelet/config.yaml__"
      L46: "cluster.go:125: [kubelet-start] Starting the kubelet"
      L47: "cluster.go:125: [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory __/etc/kubernetes/manifests__"
      L48: "cluster.go:125: [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s"
      L49: "cluster.go:125: [kubelet-check] The kubelet is healthy after 502.049745ms"
      L50: "cluster.go:125: [control-plane-check] Waiting for healthy control plane components. This can take up to 30m0s"
      L51: "cluster.go:125: [control-plane-check] Checking kube-apiserver at https://10.0.0.14:6443/livez"
      L52: "cluster.go:125: [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz"
      L53: "cluster.go:125: [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez"
      L54: "cluster.go:125: [control-plane-check] kube-controller-manager is healthy after 3.16742313s"
      L55: "cluster.go:125: [control-plane-check] kube-scheduler is healthy after 3.851036629s"
      L56: "cluster.go:125: [control-plane-check] kube-apiserver is healthy after 6.001995605s"
      L57: "cluster.go:125: [upload-config] Storing the configuration used in ConfigMap __kubeadm-config__ in the __kube-system__ Namespace"
      L58: "cluster.go:125: [kubelet] Creating a ConfigMap __kubelet-config__ in namespace kube-system with the configuration for the kubelets in the cluster"
      L59: "cluster.go:125: [upload-certs] Skipping phase. Please see --upload-certs"
      L60: "cluster.go:125: [mark-control-plane] Marking the node ci-4081.3.7-a-ac88dd84b3 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]"
      L61: "cluster.go:125: [mark-control-plane] Marking the node ci-4081.3.7-a-ac88dd84b3 as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]"
      L62: "cluster.go:125: [bootstrap-token] Using token: hg2hxb.xz7drqg47z7c3mup"
      L63: "cluster.go:125: [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles"
      L64: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes"
      L65: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials"
      L66: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token"
      L67: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster"
      L68: "cluster.go:125: [bootstrap-token] Creating the __cluster-info__ ConfigMap in the __kube-public__ namespace"
      L69: "cluster.go:125: [kubelet-finalize] Updating __/etc/kubernetes/kubelet.conf__ to point to a rotatable kubelet client certificate and key"
      L70: "cluster.go:125: [addons] Applied essential addon: CoreDNS"
      L71: "cluster.go:125: [addons] Applied essential addon: kube-proxy"
      L72: "cluster.go:125: "
      L73: "cluster.go:125: Your Kubernetes control-plane has initialized successfully!"
      L74: "cluster.go:125: "
      L75: "cluster.go:125: To start using your cluster, you need to run the following as a regular user:"
      L76: "cluster.go:125: "
      L77: "cluster.go:125:   mkdir -p $HOME/.kube"
      L78: "cluster.go:125:   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config"
      L79: "cluster.go:125:   sudo chown $(id -u):$(id -g) $HOME/.kube/config"
      L80: "cluster.go:125: "
      L81: "cluster.go:125: Alternatively, if you are the root user, you can run:"
      L82: "cluster.go:125: "
      L83: "cluster.go:125:   export KUBECONFIG=/etc/kubernetes/admin.conf"
      L84: "cluster.go:125: "
      L85: "cluster.go:125: You should now deploy a pod network to the cluster."
      L86: "cluster.go:125: Run __kubectl apply -f [podnetwork].yaml__ with one of the options listed at:"
      L87: "cluster.go:125:   https://kubernetes.io/docs/concepts/cluster-administration/addons/"
      L88: "cluster.go:125: "
      L89: "cluster.go:125: Then you can join any number of worker nodes by running the following on each as root:"
      L90: "cluster.go:125: "
      L91: "cluster.go:125: kubeadm join 10.0.0.14:6443 --token hg2hxb.xz7drqg47z7c3mup _"
      L92: "cluster.go:125:  --discovery-token-ca-cert-hash sha256:11589c34cac5dd3e3b66e75fbc562fd9eb03762185a40e633ada3e642d5bde16 "
      L93: "cluster.go:125: namespace/kube-flannel created"
      L94: "cluster.go:125: clusterrole.rbac.authorization.k8s.io/flannel created"
      L95: "cluster.go:125: clusterrolebinding.rbac.authorization.k8s.io/flannel created"
      L96: "cluster.go:125: serviceaccount/flannel created"
      L97: "cluster.go:125: configmap/kube-flannel-cfg created"
      L98: "cluster.go:125: daemonset.apps/kube-flannel-ds created"
      L99: "kubeadm.go:197: unable to setup cluster: unable to create worker node: PUT https://management.azure.com/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kola-cluster-image-a8f2c786d2/providers/Microsoft.Compute/virtualMachines/ci-4081.3.7-a-bab281968c"
      L100: "--------------------------------------------------------------------------------"
      L101: "RESPONSE 409: 409 Conflict"
      L102: "ERROR CODE: OperationNotAllowed"
      L103: "--------------------------------------------------------------------------------"
      L104: "{"
      L105: "__error__: {"
      L106: "__code__: __OperationNotAllowed__,"
      L107: "__message__: __Operation could not be completed as it results in exceeding approved standardDSv4Family Cores quota. Additional details - Deployment Model: Resource Manager, Location: northeurope, Current Limit: 65, Current Usage: 64, Additional Required: 2, (Minimum) New Limit Required: 66. Setup Alerts when Quota reaches threshold. Learn more at https://aka.ms/quotamonitoringalerting . Submit a request for Quota increase at https://aka.ms/ProdportalCRP/#blade/Microsoft_Azure_Capacity/UsageAndQuota.ReactView/Parameters/%7B%22subscriptionId%22:%220e46bd28-a80f-4d3a-8200-d9eb8d80cb2e%22,%22command%22:%22openQuotaApprovalBlade%22,%22quotas%22:[%7B%22location%22:%22northeurope%22,%22providerId%22:%22Microsoft.Compute%22,%22resourceName%22:%22standardDSv4Family%22,%22quotaRequest%22:%7B%22properties%22:%7B%22limit%22:66,%22unit%22:%22Count%22,%22name%22:%7B%22value%22:%22standardDSv4Family%22%7D%7D%7D%7D]%7D by specifying parameters listed in the 'Details' section for deployment to succeed. Please read more about quota limits at https://docs.microsoft.com/en-us/azure/azure-supportability/per-vm-quota-requests__"
      L108: "}"
      L109: "}"
      L110: "--------------------------------------------------------------------------------_"
      L111: " "
  ```


</details>
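The `OperationNotAllowed` failures above all come down to the same arithmetic in Azure's quota check: the subscription's `standardDSv4Family` limit is 65 cores, 64 are in use, and each kola VM needs 2 more. A minimal sketch of that check (the function name is hypothetical; the values are taken verbatim from the diagnostic messages):

```python
def quota_allows(limit: int, usage: int, required: int) -> bool:
    """Return True if `required` additional cores still fit under the quota limit."""
    return usage + required <= limit

# Values from the 409 responses: Current Limit 65, Current Usage 64, Additional Required 2.
print(quota_allows(65, 64, 2))  # → False: 64 + 2 = 66 exceeds 65, so the VM PUT is rejected
print(quota_allows(65, 64, 1))  # → True: a 1-core request would still fit
```

This is why the reruns eventually pass: once earlier clusters are torn down, usage drops back below the limit and the same request succeeds.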


🟢 ok **kubeadm.v1.35.1.calico.base**; Succeeded: azure (1)

🟢 ok **kubeadm.v1.35.1.cilium.base**; Succeeded: azure (1)

🟢 ok **kubeadm.v1.35.1.flannel.base**; Succeeded: azure (1)

🟢 ok **sysext.custom-docker.sysext**; Succeeded: azure (1)

🟢 ok **sysext.disable-containerd**; Succeeded: azure (1)

🟢 ok **sysext.disable-docker**; Succeeded: azure (1)

🟢 ok **sysext.simple**; Succeeded: azure (1)

🟢 ok **systemd.journal.user**; Succeeded: azure (1)

🟢 ok **systemd.sysusers.gshadow**; Succeeded: azure (1)
