### Test report for 4669.0.0+nightly-20260428-2100 / amd64

**Platforms tested**: stackit

🟢 ok **cl.basic**; Succeeded: stackit (2); Failed: stackit (1)

<details>

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for stackit, run 1</summary>

  ```
      L1: " Error: _harness.go:608: Cluster failed: creating network for cluster: WaitWithContext() has timed out: context deadline exceeded_"
      L2: " "
  ```


</details>


🟢 ok **cl.cloudinit.basic**; Succeeded: stackit (2); Failed: stackit (1)

<details>

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for stackit, run 1</summary>

  ```
      L1: " Error: _harness.go:646: Cluster failed starting machines: machine __fb72b6d5-d9c5-4474-a726-dd8afda08070__ failed to start: ssh journalctl failed: time limit exceeded: dial tcp 188.34.65.206:22: i/o timeout_"
      L2: " "
  ```


</details>


❌ not ok **cl.etcd-member.discovery**; Failed: stackit (1, 2, 3, 4, 5)

<details>

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for stackit, run 5</summary>

  ```
      L1: " Error: _harness.go:646: Cluster failed starting machines: machine __f5445145-f2cb-44a9-af15-a5646115dd00__ failed to start: ssh journalctl failed: time limit exceeded: dial tcp 188.34.118.19:22: i/o timeout_"
      L2: " "
  ```

</details>

<details>

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for stackit, run 4</summary>

  ```
      L1: " Error: _harness.go:646: Cluster failed starting machines: machine __d08f4f6a-7e40-4dab-b511-f335eb291331__ failed to start: ssh journalctl failed: time limit exceeded: dial tcp 188.34.86.8:22: i/o timeout_"
      L2: " "
  ```

</details>

<details>

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for stackit, run 3</summary>

  ```
      L1: " Error: _harness.go:646: Cluster failed starting machines: machine __59398749-95e3-4906-bf7d-c70ef291d5b3__ failed to start: ssh journalctl failed: time limit exceeded: dial tcp 188.34.118.6:22: i/o timeout_"
      L2: " "
  ```

</details>

<details>

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for stackit, run 2</summary>

  ```
      L1: " Error: _harness.go:646: Cluster failed starting machines: error creating IP address: 504 Gateway Timeout, status code 504, Body: upstream request timeout_"
      L2: " "
  ```

</details>

<details>

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for stackit, run 1</summary>

  ```
      L1: " Error: _harness.go:646: Cluster failed starting machines: machine __6d12a6d0-316d-4f75-89fa-3bffd00aed8c__ failed to start: ssh journalctl failed: time limit exceeded: dial tcp 188.34.118.250:22: i/o timeout_"
      L2: " "
  ```


</details>


❌ not ok **cl.flannel.udp**; Failed: stackit (1, 2, 3, 4, 5)

<details>

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for stackit, run 5</summary>

  ```
      L1: " Error: _harness.go:646: Cluster failed starting machines: machine __616f24a0-590c-41ae-b351-b53e7ef385f7__ failed to start: ssh journalctl failed: time limit exceeded: dial tcp 188.34.66.134:22: i/o timeout_"
      L2: " "
  ```

</details>

<details>

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for stackit, run 4</summary>

  ```
      L1: " Error: _harness.go:646: Cluster failed starting machines: machine __e8b7e161-c921-47bb-99c0-d9871f16a7e3__ failed to start: ssh journalctl failed: time limit exceeded: dial tcp 188.34.86.111:22: i/o timeout_"
      L2: " "
  ```

</details>

<details>

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for stackit, run 3</summary>

  ```
      L1: " Error: _harness.go:646: Cluster failed starting machines: error creating IP address: 504 Gateway Timeout, status code 504, Body: <html> "
      L2: "<head><title>504 Gateway Time-out</title></head> "
      L3: "<body> "
      L4: "<center><h1>504 Gateway Time-out</h1></center> "
      L5: "<hr><center>nginx</center> "
      L6: "</body> "
      L7: "</html> _"
      L8: " "
  ```

</details>

<details>

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for stackit, run 2</summary>

  ```
      L1: " Error: _harness.go:646: Cluster failed starting machines: machine __3bef76d5-edf7-48c5-a616-6c2bec99840c__ failed to start: ssh journalctl failed: time limit exceeded: dial tcp 188.34.67.149:22: i/o timeout_"
      L2: " "
  ```

</details>

<details>

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for stackit, run 1</summary>

  ```
      L1: " Error: _harness.go:608: Cluster failed: creating network for cluster: WaitWithContext() has timed out: context deadline exceeded_"
      L2: " "
  ```


</details>


❌ not ok **cl.flannel.vxlan**; Failed: stackit (1, 2, 3, 4, 5)

<details>

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for stackit, run 5</summary>

  ```
      L1: " Error: _harness.go:646: Cluster failed starting machines: machine __e3486ae6-edd4-4c1f-a52b-7f8e55618986__ failed to start: ssh journalctl failed: time limit exceeded: dial tcp 188.34.110.6:22: i/o timeout_"
      L2: " "
  ```

</details>

<details>

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for stackit, run 4</summary>

  ```
      L1: " Error: _harness.go:646: Cluster failed starting machines: machine __3c64ede3-4e09-4bd6-a534-7e01c92646a8__ failed to start: ssh journalctl failed: time limit exceeded: dial tcp 188.34.86.226:22: i/o timeout_"
      L2: " "
  ```

</details>

<details>

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for stackit, run 3</summary>

  ```
      L1: " Error: _harness.go:646: Cluster failed starting machines: error creating IP address: 504 Gateway Timeout, status code 504, Body: upstream request timeout_"
      L2: " "
  ```

</details>

<details>

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for stackit, run 2</summary>

  ```
      L1: " Error: _harness.go:646: Cluster failed starting machines: machine __e826cb91-c8a9-48ec-bccb-c4ebd4007bf9__ failed to start: ssh journalctl failed: time limit exceeded: dial tcp 188.34.67.48:22: i/o timeout_"
      L2: " "
  ```

</details>

<details>

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for stackit, run 1</summary>

  ```
      L1: " Error: _harness.go:646: Cluster failed starting machines: machine __dfaa3920-dd43-44fe-a058-a5e452d43412__ failed basic checks: ssh unreachable or system not ready: failure checking if machine is running: systemctl is-system-running returned stdout: __starting__, stderr: ____, err: Process exited with status 1, systemctl list-jobs returned stdout: __JOB  UNIT                        TYPE  STATE_n1075 etcd-member.service         start running_n875  multi-user.target           start waiting_n1027 flannel-docker-opts.service start waiting_n1026 flanneld.service            start waiting_n_n4 jobs listed.__, stderr: ____, err: <nil__"
      L2: " "
  ```


</details>


🟢 ok **cl.ignition.kargs**; Succeeded: stackit (1)

🟢 ok **cl.ignition.misc.empty**; Succeeded: stackit (2); Failed: stackit (1)

<details>

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for stackit, run 1</summary>

  ```
      L1: " Error: _harness.go:608: Cluster failed: creating network for cluster: WaitWithContext() has timed out: context deadline exceeded_"
      L2: " "
  ```


</details>


🟢 ok **cl.ignition.v1.noop**; Succeeded: stackit (2); Failed: stackit (1)

<details>

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for stackit, run 1</summary>

  ```
      L1: " Error: _harness.go:646: Cluster failed starting machines: error creating IP address: 504 Gateway Timeout, status code 504, Body: upstream request timeout_"
      L2: " "
  ```


</details>


🟢 ok **cl.ignition.v2.btrfsroot**; Succeeded: stackit (3); Failed: stackit (1, 2)

<details>

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for stackit, run 2</summary>

  ```
      L1: " Error: _harness.go:608: Cluster failed: creating network for cluster: WaitWithContext() has timed out: context deadline exceeded_"
      L2: " "
  ```

</details>

<details>

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for stackit, run 1</summary>

  ```
      L1: " Error: _harness.go:646: Cluster failed starting machines: machine __06201608-7237-476b-b4cd-a4ae978739be__ failed to start: ssh journalctl failed: time limit exceeded: dial tcp 188.34.118.19:22: i/o timeout_"
      L2: " "
  ```


</details>


🟢 ok **cl.ignition.v2.noop**; Succeeded: stackit (5); Failed: stackit (1, 2, 3, 4)

<details>

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for stackit, run 4</summary>

  ```
      L1: " Error: _harness.go:646: Cluster failed starting machines: machine __acdd35ec-418c-4c2e-949f-f58f0bcad163__ failed to start: ssh journalctl failed: time limit exceeded: dial tcp 188.34.64.198:22: i/o timeout_"
      L2: " "
  ```

</details>

<details>

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for stackit, run 3</summary>

  ```
      L1: " Error: _harness.go:646: Cluster failed starting machines: machine __a6a4fc45-d63d-4849-b58a-ba19415c6d75__ failed to start: ssh journalctl failed: time limit exceeded: dial tcp 188.34.108.152:22: i/o timeout_"
      L2: " "
  ```

</details>

<details>

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for stackit, run 2</summary>

  ```
      L1: " Error: _harness.go:646: Cluster failed starting machines: machine __aab02e03-f311-4c35-8c58-26de276163d4__ failed to start: ssh journalctl failed: time limit exceeded: dial tcp 188.34.84.220:22: i/o timeout_"
      L2: " "
  ```

</details>

<details>

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for stackit, run 1</summary>

  ```
      L1: " Error: _harness.go:646: Cluster failed starting machines: machine __526c7bfd-98ee-4800-ac0c-d4ec4e96e3ce__ failed to start: ssh journalctl failed: time limit exceeded: dial tcp 188.34.66.248:22: i/o timeout_"
      L2: " "
  ```


</details>


❌ not ok **cl.install.cloudinit**; Failed: stackit (1, 2, 3, 4, 5)

<details>

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for stackit, run 5</summary>

  ```
      L1: " Error: _harness.go:646: Cluster failed starting machines: machine __990632f5-4f1e-4a9f-87bc-0a3d3f7c27ae__ failed to start: ssh journalctl failed: time limit exceeded: dial tcp 188.34.65.136:22: i/o timeout_"
      L2: " "
  ```

</details>

<details>

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for stackit, run 4</summary>

  ```
      L1: " Error: _harness.go:608: Cluster failed: creating network for cluster: WaitWithContext() has timed out: context deadline exceeded_"
      L2: " "
  ```

</details>

<details>

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for stackit, run 3</summary>

  ```
      L1: " Error: _harness.go:646: Cluster failed starting machines: machine __f0cbcd70-c25d-4aa1-a4a8-db6bc2a3f21d__ failed to start: ssh journalctl failed: time limit exceeded: dial tcp 188.34.116.43:22: i/o timeout_"
      L2: " "
  ```

</details>

<details>

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for stackit, run 2</summary>

  ```
      L1: " Error: _harness.go:646: Cluster failed starting machines: machine __941ff51a-7915-4c70-889d-3868954dbca6__ failed to start: ssh journalctl failed: time limit exceeded: dial tcp 188.34.86.124:22: i/o timeout_"
      L2: " "
  ```

</details>

<details>

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for stackit, run 1</summary>

  ```
      L1: " Error: _harness.go:646: Cluster failed starting machines: error creating IP address: 504 Gateway Timeout, status code 504, Body: upstream request timeout_"
      L2: " "
  ```


</details>


🟢 ok **cl.internet**; Succeeded: stackit (3); Failed: stackit (1, 2)

<details>

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for stackit, run 2</summary>

  ```
      L1: " Error: _harness.go:608: Cluster failed: creating network for cluster: WaitWithContext() has timed out: context deadline exceeded_"
      L2: " "
  ```

</details>

<details>

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for stackit, run 1</summary>

  ```
      L1: " Error: _harness.go:608: Cluster failed: creating network for cluster: WaitWithContext() has timed out: context deadline exceeded_"
      L2: " "
  ```


</details>


❌ not ok **cl.network.initramfs.second-boot**; Failed: stackit (1, 2, 3, 4, 5)

<details>

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for stackit, run 5</summary>

  ```
      L1: " Error: _harness.go:646: Cluster failed starting machines: machine __f0724474-81f4-49a3-8d58-9dc36fbfa925__ failed to start: ssh journalctl failed: time limit exceeded: dial tcp 188.34.86.96:22: i/o timeout_"
      L2: " "
  ```

</details>

<details>

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for stackit, run 4</summary>

  ```
      L1: " Error: _harness.go:646: Cluster failed starting machines: machine __77d26799-55ca-4bb0-8a4f-d246527def27__ failed to start: ssh journalctl failed: time limit exceeded: dial tcp 188.34.84.146:22: i/o timeout_"
      L2: " "
  ```

</details>

<details>

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for stackit, run 3</summary>

  ```
      L1: " Error: _harness.go:646: Cluster failed starting machines: error creating IP address: 504 Gateway Timeout, status code 504, Body: upstream request timeout_"
      L2: " "
  ```

</details>

<details>

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for stackit, run 2</summary>

  ```
      L1: " Error: _harness.go:646: Cluster failed starting machines: machine __75719175-6042-4d25-aef3-efda36eb0c68__ failed to start: ssh journalctl failed: time limit exceeded: dial tcp 188.34.85.89:22: i/o timeout_"
      L2: " "
  ```

</details>

<details>

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for stackit, run 1</summary>

  ```
      L1: " Error: _harness.go:646: Cluster failed starting machines: machine __d1f7f801-0526-465e-93e3-d64107327bfd__ failed to start: ssh journalctl failed: time limit exceeded: dial tcp 188.34.85.62:22: i/o timeout_"
      L2: " "
  ```


</details>


🟢 ok **coreos.ignition.once**; Succeeded: stackit (1)

🟢 ok **coreos.ignition.resource.local**; Succeeded: stackit (4); Failed: stackit (1, 2, 3)

<details>

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for stackit, run 3</summary>

  ```
      L1: " Error: _harness.go:646: Cluster failed starting machines: machine __b37f5a93-4020-4c8d-9182-d1d118637665__ failed to start: ssh journalctl failed: time limit exceeded: dial tcp 188.34.65.139:22: i/o timeout_"
      L2: " "
  ```

</details>

<details>

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for stackit, run 2</summary>

  ```
      L1: " Error: _harness.go:646: Cluster failed starting machines: machine __f2cc0445-42c4-4b2b-ad96-7fec0e054071__ failed to start: ssh journalctl failed: time limit exceeded: dial tcp 188.34.84.90:22: i/o timeout_"
      L2: " "
  ```

</details>

<details>

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for stackit, run 1</summary>

  ```
      L1: " Error: _harness.go:646: Cluster failed starting machines: machine __7bdf6eb2-4c50-42f7-b3c2-c063caa69bad__ failed to start: ssh journalctl failed: time limit exceeded: dial tcp 188.34.64.198:22: i/o timeout_"
      L2: " "
  ```


</details>


❌ not ok **coreos.ignition.resource.remote**; Failed: stackit (1, 2, 3, 4, 5)

<details>

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for stackit, run 5</summary>

  ```
      L1: " Error: _harness.go:608: Cluster failed: creating network for cluster: WaitWithContext() has timed out: context deadline exceeded_"
      L2: " "
  ```

</details>

<details>

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for stackit, run 4</summary>

  ```
      L1: " Error: _harness.go:646: Cluster failed starting machines: machine __2b779bcd-205c-47c0-8869-293ba66fbe1c__ failed to start: ssh journalctl failed: time limit exceeded: dial tcp 188.34.85.133:22: i/o timeout_"
      L2: " "
  ```

</details>

<details>

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for stackit, run 3</summary>

  ```
      L1: " Error: _harness.go:646: Cluster failed starting machines: machine __b5228530-e5f8-49d7-8b5b-c697ee0cf5d7__ failed to start: ssh journalctl failed: time limit exceeded: dial tcp 188.34.111.253:22: i/o timeout_"
      L2: " "
  ```

</details>

<details>

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for stackit, run 2</summary>

  ```
      L1: " Error: _harness.go:646: Cluster failed starting machines: machine __c4ef2f27-b9d1-4c39-ac75-2f336e319020__ failed to start: ssh journalctl failed: time limit exceeded: dial tcp 188.34.87.116:22: i/o timeout_"
      L2: " "
  ```

</details>

<details>

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for stackit, run 1</summary>

  ```
      L1: " Error: _harness.go:646: Cluster failed starting machines: machine __40312796-0e41-47cd-ab19-80f59692d1d8__ failed to start: ssh journalctl failed: time limit exceeded: dial tcp 188.34.65.136:22: i/o timeout_"
      L2: " "
  ```


</details>


❌ not ok **coreos.ignition.security.tls**; Failed: stackit (1, 2, 3, 4, 5)

<details>

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for stackit, run 5</summary>

  ```
      L1: " Error: _harness.go:646: Cluster failed starting machines: machine __e0a8fb0e-e472-45f7-a552-1b5ae6f4b8e3__ failed to start: ssh journalctl failed: time limit exceeded: dial tcp 188.34.86.37:22: i/o timeout_"
      L2: " "
  ```

</details>

<details>

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for stackit, run 4</summary>

  ```
      L1: " Error: _harness.go:646: Cluster failed starting machines: machine __6d492a77-f0f6-4965-83c0-7c1b8cd8a63a__ failed to start: ssh journalctl failed: time limit exceeded: dial tcp 188.34.87.135:22: i/o timeout_"
      L2: " "
  ```

</details>

<details>

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for stackit, run 3</summary>

  ```
      L1: " Error: _harness.go:646: Cluster failed starting machines: machine __b6a3922e-50a6-4229-9970-3e6623d9f2f7__ failed to start: ssh journalctl failed: time limit exceeded: dial tcp 188.34.118.19:22: i/o timeout_"
      L2: " "
  ```

</details>

<details>

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for stackit, run 2</summary>

  ```
      L1: " Error: _harness.go:646: Cluster failed starting machines: machine __97de5c30-aa05-4388-98ec-cc7146ba2d8d__ failed to start: ssh journalctl failed: time limit exceeded: dial tcp 188.34.86.237:22: i/o timeout_"
      L2: " "
  ```

</details>

<details>

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for stackit, run 1</summary>

  ```
      L1: " Error: _harness.go:646: Cluster failed starting machines: machine __85bc6e60-98de-4a45-8dca-7e3e3daf6396__ failed to start: ssh journalctl failed: time limit exceeded: dial tcp 188.34.66.153:22: i/o timeout_"
      L2: " "
  ```


</details>


🟢 ok **coreos.ignition.sethostname**; Succeeded: stackit (2); Failed: stackit (1)

<details>

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for stackit, run 1</summary>

  ```
      L1: " Error: _harness.go:646: Cluster failed starting machines: machine __36f80320-44c5-469d-850f-0075393f73da__ failed to start: ssh journalctl failed: time limit exceeded: dial tcp 188.34.116.123:22: i/o timeout_"
      L2: " "
  ```


</details>


🟢 ok **coreos.ignition.ssh.key**; Succeeded: stackit (3); Failed: stackit (1, 2)

<details>

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for stackit, run 2</summary>

  ```
      L1: " Error: _harness.go:646: Cluster failed starting machines: machine __f452745d-83f5-4e67-8c90-8e257c4d60d3__ failed to start: ssh journalctl failed: time limit exceeded: dial tcp 188.34.86.204:22: i/o timeout_"
      L2: " "
  ```

</details>

<details>

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for stackit, run 1</summary>

  ```
      L1: " Error: _harness.go:646: Cluster failed starting machines: machine __8d6d2d80-5dfd-4332-a427-630ee18ba27a__ failed to start: ssh journalctl failed: time limit exceeded: dial tcp 188.34.67.227:22: i/o timeout_"
      L2: " "
  ```


</details>


❌ not ok **docker.network-openbsd-nc**; Failed: stackit (1, 2, 3, 4, 5)

<details>

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for stackit, run 5</summary>

  ```
      L1: " Error: _harness.go:608: Cluster failed: creating network for cluster: WaitWithContext() has timed out: context deadline exceeded_"
      L2: " "
  ```

</details>

<details>

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for stackit, run 4</summary>

  ```
      L1: " Error: _harness.go:608: Cluster failed: creating network for cluster: WaitWithContext() has timed out: context deadline exceeded_"
      L2: " "
  ```

</details>

<details>

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for stackit, run 3</summary>

  ```
      L1: " Error: _harness.go:646: Cluster failed starting machines: machine __3e330d0c-65a7-45c6-9af2-cf8bc9d686bb__ failed to start: ssh journalctl failed: time limit exceeded: dial tcp 188.34.67.149:22: i/o timeout_"
      L2: " "
  ```

</details>

<details>

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for stackit, run 2</summary>

  ```
      L1: " Error: _harness.go:646: Cluster failed starting machines: machine __3f774d3a-d0cb-458a-9859-bbbb387f002c__ failed to start: ssh journalctl failed: time limit exceeded: dial tcp 188.34.85.187:22: i/o timeout_"
      L2: " "
  ```

</details>

<details>

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for stackit, run 1</summary>

  ```
      L1: " Error: _harness.go:646: Cluster failed starting machines: machine __5867850d-13c6-4bab-86da-cec65add373c__ failed to start: ssh journalctl failed: time limit exceeded: dial tcp 188.34.84.220:22: i/o timeout_"
      L2: " "
  ```


</details>


❌ not ok **kubeadm.v1.33.8.calico.base**; Failed: stackit (1, 2, 3, 4, 5)

<details>

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for stackit, run 5</summary>

  ```
      L1: " Error: _harness.go:608: Cluster failed: creating network for cluster: WaitWithContext() has timed out: context deadline exceeded_"
      L2: " "
  ```

</details>

<details>

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for stackit, run 4</summary>

  ```
      L1: " Error: _kubeadm.go:197: unable to setup cluster: unable to create etcd node: machine __26899678-02f9-4a47-8d34-184782595ea1__ failed to start: ssh journalctl failed: time limit exceeded: dial tcp 188.34.85.32:22: i/o timeout_"
      L2: " "
  ```

</details>

<details>

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for stackit, run 3</summary>

  ```
      L1: " Error: _kubeadm.go:197: unable to setup cluster: unable to create etcd node: machine __cea9cfb8-7519-44cd-9f24-0b0a1d0570d7__ failed to start: ssh journalctl failed: time limit exceeded: dial tcp 188.34.66.153:22: i/o timeout_"
      L2: " "
  ```

</details>

<details>

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for stackit, run 2</summary>

  ```
      L1: " Error: _harness.go:608: Cluster failed: creating network for cluster: WaitWithContext() has timed out: context deadline exceeded_"
      L2: " "
  ```

</details>

<details>

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for stackit, run 1</summary>

  ```
      L1: " Error: _kubeadm.go:197: unable to setup cluster: unable to create etcd node: error creating IP address: 504 Gateway Timeout, status code 504, Body: upstream request timeout_"
      L2: " "
  ```


</details>


🟢 ok **kubeadm.v1.33.8.cilium.base**; Succeeded: stackit (2); Failed: stackit (1)

<details>

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for stackit, run 1</summary>

  ```
      L1: " Error: _cluster.go:125: I0429 00:14:50.753627    2200 version.go:261] remote version is much newer: v1.36.0; falling back to: stable-1.33"
      L2: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-apiserver:v1.33.11"
      L3: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-controller-manager:v1.33.11"
      L4: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-scheduler:v1.33.11"
      L5: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-proxy:v1.33.11"
      L6: "cluster.go:125: [config/images] Pulled registry.k8s.io/coredns/coredns:v1.12.0"
      L7: "cluster.go:125: [config/images] Pulled registry.k8s.io/pause:3.10"
      L8: "cluster.go:125: [config/images] Pulled registry.k8s.io/etcd:3.5.24-0"
      L9: "cluster.go:125: I0429 00:15:01.590399    2432 version.go:261] remote version is much newer: v1.36.0; falling back to: stable-1.33"
      L10: "cluster.go:125: [init] Using Kubernetes version: v1.33.11"
      L11: "cluster.go:125: [preflight] Running pre-flight checks"
      L12: "cluster.go:125:  [WARNING Hostname]: hostname __ci-4669-0-0-n-d7ff634c54__ could not be reached"
      L13: "cluster.go:125:  [WARNING Hostname]: hostname __ci-4669-0-0-n-d7ff634c54__: lookup ci-4669-0-0-n-d7ff634c54 on 1.1.1.1:53: no such host"
      L14: "cluster.go:125:  [WARNING Service-Kubelet]: kubelet service is not enabled, please run _systemctl enable kubelet.service_"
      L15: "cluster.go:125: [preflight] Pulling images required for setting up a Kubernetes cluster"
      L16: "cluster.go:125: [preflight] This might take a minute or two, depending on the speed of your internet connection"
      L17: "cluster.go:125: [preflight] You can also perform this action beforehand using _kubeadm config images pull_"
      L18: "cluster.go:125: [certs] Using certificateDir folder __/etc/kubernetes/pki__"
      L19: "cluster.go:125: [certs] Generating __ca__ certificate and key"
      L20: "cluster.go:125: [certs] Generating __apiserver__ certificate and key"
      L21: "cluster.go:125: [certs] apiserver serving cert is signed for DNS names [ci-4669-0-0-n-d7ff634c54 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.0.0.164]"
      L22: "cluster.go:125: [certs] Generating __apiserver-kubelet-client__ certificate and key"
      L23: "cluster.go:125: [certs] Generating __front-proxy-ca__ certificate and key"
      L24: "cluster.go:125: [certs] Generating __front-proxy-client__ certificate and key"
      L25: "cluster.go:125: [certs] External etcd mode: Skipping etcd/ca certificate authority generation"
      L26: "cluster.go:125: [certs] External etcd mode: Skipping etcd/server certificate generation"
      L27: "cluster.go:125: [certs] External etcd mode: Skipping etcd/peer certificate generation"
      L28: "cluster.go:125: [certs] External etcd mode: Skipping etcd/healthcheck-client certificate generation"
      L29: "cluster.go:125: [certs] External etcd mode: Skipping apiserver-etcd-client certificate generation"
      L30: "cluster.go:125: [certs] Generating __sa__ key and public key"
      L31: "cluster.go:125: [kubeconfig] Using kubeconfig folder __/etc/kubernetes__"
      L32: "cluster.go:125: [kubeconfig] Writing __admin.conf__ kubeconfig file"
      L33: "cluster.go:125: [kubeconfig] Writing __super-admin.conf__ kubeconfig file"
      L34: "cluster.go:125: [kubeconfig] Writing __kubelet.conf__ kubeconfig file"
      L35: "cluster.go:125: [kubeconfig] Writing __controller-manager.conf__ kubeconfig file"
      L36: "cluster.go:125: [kubeconfig] Writing __scheduler.conf__ kubeconfig file"
      L37: "cluster.go:125: [control-plane] Using manifest folder __/etc/kubernetes/manifests__"
      L38: "cluster.go:125: [control-plane] Creating static Pod manifest for __kube-apiserver__"
      L39: "cluster.go:125: [control-plane] Creating static Pod manifest for __kube-controller-manager__"
      L40: "cluster.go:125: [control-plane] Creating static Pod manifest for __kube-scheduler__"
      L41: "cluster.go:125: [kubelet-start] Writing kubelet environment file with flags to file __/var/lib/kubelet/kubeadm-flags.env__"
      L42: "cluster.go:125: [kubelet-start] Writing kubelet configuration to file __/var/lib/kubelet/config.yaml__"
      L43: "cluster.go:125: [kubelet-start] Starting the kubelet"
      L44: "cluster.go:125: [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory __/etc/kubernetes/manifests__"
      L45: "cluster.go:125: [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s"
      L46: "cluster.go:125: [kubelet-check] The kubelet is healthy after 1.501469633s"
      L47: "cluster.go:125: [control-plane-check] Waiting for healthy control plane components. This can take up to 30m0s"
      L48: "cluster.go:125: [control-plane-check] Checking kube-apiserver at https://10.0.0.164:6443/livez"
      L49: "cluster.go:125: [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz"
      L50: "cluster.go:125: [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez"
      L51: "cluster.go:125: [control-plane-check] kube-controller-manager is healthy after 2.188951226s"
      L52: "cluster.go:125: [control-plane-check] kube-scheduler is healthy after 2.421793482s"
      L53: "cluster.go:125: [control-plane-check] kube-apiserver is healthy after 4.002104822s"
      L54: "cluster.go:125: [upload-config] Storing the configuration used in ConfigMap __kubeadm-config__ in the __kube-system__ Namespace"
      L55: "cluster.go:125: [kubelet] Creating a ConfigMap __kubelet-config__ in namespace kube-system with the configuration for the kubelets in the cluster"
      L56: "cluster.go:125: [upload-certs] Skipping phase. Please see --upload-certs"
      L57: "cluster.go:125: [mark-control-plane] Marking the node ci-4669-0-0-n-d7ff634c54 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-lo?ad-balancers]"
      L58: "cluster.go:125: [mark-control-plane] Marking the node ci-4669-0-0-n-d7ff634c54 as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]"
      L59: "cluster.go:125: [bootstrap-token] Using token: 3v67pr.nhobemod24n9w2qx"
      L60: "cluster.go:125: [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles"
      L61: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes"
      L62: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials"
      L63: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token"
      L64: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster"
      L65: "cluster.go:125: [bootstrap-token] Creating the __cluster-info__ ConfigMap in the __kube-public__ namespace"
      L66: "cluster.go:125: [kubelet-finalize] Updating __/etc/kubernetes/kubelet.conf__ to point to a rotatable kubelet client certificate and key"
      L67: "cluster.go:125: [addons] Applied essential addon: CoreDNS"
      L68: "cluster.go:125: [addons] Applied essential addon: kube-proxy"
      L69: "cluster.go:125: "
      L70: "cluster.go:125: Your Kubernetes control-plane has initialized successfully!"
      L71: "cluster.go:125: "
      L72: "cluster.go:125: To start using your cluster, you need to run the following as a regular user:"
      L73: "cluster.go:125: "
      L74: "cluster.go:125:   mkdir -p $HOME/.kube"
      L75: "cluster.go:125:   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config"
      L76: "cluster.go:125:   sudo chown $(id -u):$(id -g) $HOME/.kube/config"
      L77: "cluster.go:125: "
      L78: "cluster.go:125: Alternatively, if you are the root user, you can run:"
      L79: "cluster.go:125: "
      L80: "cluster.go:125:   export KUBECONFIG=/etc/kubernetes/admin.conf"
      L81: "cluster.go:125: "
      L82: "cluster.go:125: You should now deploy a pod network to the cluster."
      L83: "cluster.go:125: Run __kubectl apply -f [podnetwork].yaml__ with one of the options listed at:"
      L84: "cluster.go:125:   https://kubernetes.io/docs/concepts/cluster-administration/addons/"
      L85: "cluster.go:125: "
      L86: "cluster.go:125: Then you can join any number of worker nodes by running the following on each as root:"
      L87: "cluster.go:125: "
      L88: "cluster.go:125: kubeadm join 10.0.0.164:6443 --token 3v67pr.nhobemod24n9w2qx _"
      L89: "cluster.go:125:  --discovery-token-ca-cert-hash sha256:6d3b249e85ffe7d67d41cecf6fc0fc0c6035df426e1d84a5c86681a2f7027716 "
      L90: "cluster.go:125: i  Using Cilium version 1.12.5"
      L91: "cluster.go:125: ? Auto-detected cluster name: kubernetes"
      L92: "cluster.go:125: ? Auto-detected datapath mode: tunnel"
      L93: "cluster.go:125: ? Auto-detected kube-proxy has been installed"
      L94: "cluster.go:125: i  helm template --namespace kube-system cilium cilium/cilium --version 1.12.5 --set cluster.id=0,cluster.name=kubernetes,encryption.nodeEncryption=false,extraConfig.cluster-pool-ipv4-?cidr=192.168.0.0/17,extraConfig.enable-endpoint-routes=true,kubeProxyReplacement=disabled,operator.replicas=1,serviceAccounts.cilium.name=cilium,serviceAccounts.operator.name=cilium-operator,tunnel=vx?lan"
      L95: "cluster.go:125: i  Storing helm values file in kube-system/cilium-cli-helm-values Secret"
      L96: "cluster.go:125: ? Created CA in secret cilium-ca"
      L97: "cluster.go:125: ? Generating certificates for Hubble..."
      L98: "cluster.go:125: ? Creating Service accounts..."
      L99: "cluster.go:125: ? Creating Cluster roles..."
      L100: "cluster.go:125: ? Creating ConfigMap for Cilium version 1.12.5..."
      L101: "cluster.go:125: i  Manual overwrite in ConfigMap: enable-endpoint-routes=true"
      L102: "cluster.go:125: i  Manual overwrite in ConfigMap: cluster-pool-ipv4-cidr=192.168.0.0/17"
      L103: "cluster.go:125: ? Creating Agent DaemonSet..."
      L104: "cluster.go:125: level=warning msg=__spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/mount-cgroup]: deprecated since v1.30; use the ___appArmorProfile___ field instead?__ subsys=klog"
      L105: "cluster.go:125: level=warning msg=__spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/apply-sysctl-overwrites]: deprecated since v1.30; use the ___appArmorProfile___ fi?eld instead__ subsys=klog"
      L106: "cluster.go:125: level=warning msg=__spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/clean-cilium-state]: deprecated since v1.30; use the ___appArmorProfile___ field i?nstead__ subsys=klog"
      L107: "cluster.go:125: level=warning msg=__spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/cilium-agent]: deprecated since v1.30; use the ___appArmorProfile___ field instead?__ subsys=klog"
      L108: "cluster.go:125: ? Creating Operator Deployment..."
      L109: "cluster.go:125: ? Waiting for Cilium to be installed and ready..."
      L110: "cluster.go:125: ? Cilium was successfully installed! Run _cilium status_ to view installation health"
      L111: "cluster.go:125: ?[33m    /??_"
      L112: "cluster.go:125: ?[36m /???[33m___/?[32m??_?[0m    Cilium:         ?[32mOK?[0m"
      L113: "cluster.go:125: ?[36m ___?[31m/??_?[32m__/?[0m    Operator:       ?[32mOK?[0m"
      L114: "cluster.go:125: ?[32m /???[31m___/?[35m??_?[0m    Hubble:         ?[36mdisabled?[0m"
      L115: "cluster.go:125: ?[32m ___?[34m/??_?[35m__/?[0m    ClusterMesh:    ?[36mdisabled?[0m"
      L116: "cluster.go:125: ?[34m    ___/"
      L117: "cluster.go:125: ?[0m"
      L118: "cluster.go:125: Deployment       cilium-operator    "
      L119: "cluster.go:125: DaemonSet        cilium             "
      L120: "cluster.go:125: Containers:      cilium             "
      L121: "cluster.go:125:                  cilium-operator    "
      L122: "cluster.go:125: Cluster Pods:    0/0 managed by Cilium"
      L123: "kubeadm.go:197: unable to setup cluster: unable to create worker node: machine __205090b2-e277-4bff-9331-16b269be5539__ failed to start: ssh journalctl failed: time limit exceeded: dial tcp 188.34.67.?227:22: i/o timeout_"
      L124: " "
  ```


</details>


🟢 ok **kubeadm.v1.33.8.flannel.base**; Succeeded: stackit (1)

❌ not ok **kubeadm.v1.34.4.calico.base**; Failed: stackit (1, 2, 3, 4, 5)

<details>

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for stackit, run 5</summary>

  ```
      L1: " Error: _kubeadm.go:197: unable to setup cluster: unable to create etcd node: machine __264db434-985d-4001-a4cf-5fd93566289a__ failed to start: ssh journalctl failed: time limit exceeded: dial tcp 188.?34.85.198:22: i/o timeout_"
      L2: " "
  ```

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for stackit, run 4</summary>

  ```
      L1: " Error: _kubeadm.go:197: unable to setup cluster: unable to create master node: machine __e4d807ec-f741-4e40-8582-341b4689e145__ failed to start: ssh journalctl failed: time limit exceeded: dial tcp 18?8.34.109.136:22: i/o timeout_"
      L2: " "
  ```

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for stackit, run 3</summary>

  ```
      L1: " Error: _kubeadm.go:197: unable to setup cluster: unable to create etcd node: error creating IP address: 504 Gateway Timeout, status code 504, Body: upstream request timeout_"
      L2: " "
  ```

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for stackit, run 2</summary>

  ```
      L1: " Error: _kubeadm.go:197: unable to setup cluster: unable to create master node: machine __cc0cebbb-2460-4bfe-882b-95f7fd7a14ea__ failed to start: ssh journalctl failed: time limit exceeded: dial tcp 18?8.34.95.56:22: i/o timeout_"
      L2: " "
  ```

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for stackit, run 1</summary>

  ```
      L1: " Error: _harness.go:608: Cluster failed: creating network for cluster: WaitWithContext() has timed out: context deadline exceeded_"
      L2: " "
  ```


</details>


🟢 ok **kubeadm.v1.34.4.cilium.base**; Succeeded: stackit (2); Failed: stackit (1)

<details>

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for stackit, run 1</summary>

  ```
      L1: " Error: _kubeadm.go:197: unable to setup cluster: unable to create etcd node: machine __835ab3fa-12d9-47eb-bf7f-f5ee1d81a3b9__ failed to start: ssh journalctl failed: time limit exceeded: dial tcp 188.?34.110.6:22: i/o timeout_"
      L2: " "
  ```


</details>


❌ not ok **kubeadm.v1.34.4.flannel.base**; Failed: stackit (1, 2, 3, 4, 5)

<details>

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for stackit, run 5</summary>

  ```
      L1: " Error: _kubeadm.go:197: unable to setup cluster: unable to create etcd node: machine __d6cabdc3-4899-486d-bdf1-f0e9786f5f37__ failed to start: ssh journalctl failed: time limit exceeded: dial tcp 188.?34.86.108:22: i/o timeout_"
      L2: " "
  ```

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for stackit, run 4</summary>

  ```
      L1: " Error: _kubeadm.go:197: unable to setup cluster: unable to create etcd node: machine __0a37ab71-f743-41f8-8824-b88068bbe937__ failed to start: ssh journalctl failed: time limit exceeded: dial tcp 188.?34.65.206:22: i/o timeout_"
      L2: " "
  ```

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for stackit, run 3</summary>

  ```
      L1: " Error: _kubeadm.go:197: unable to setup cluster: unable to create etcd node: machine __a2075871-c204-4e0d-9309-b44ef53f9fc7__ failed to start: ssh journalctl failed: time limit exceeded: dial tcp 188.?34.66.167:22: i/o timeout_"
      L2: " "
  ```

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for stackit, run 2</summary>

  ```
      L1: " Error: _cluster.go:125: I0429 00:44:44.477086    2184 version.go:260] remote version is much newer: v1.36.0; falling back to: stable-1.34"
      L2: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-apiserver:v1.34.7"
      L3: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-controller-manager:v1.34.7"
      L4: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-scheduler:v1.34.7"
      L5: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-proxy:v1.34.7"
      L6: "cluster.go:125: [config/images] Pulled registry.k8s.io/coredns/coredns:v1.12.1"
      L7: "cluster.go:125: [config/images] Pulled registry.k8s.io/pause:3.10.1"
      L8: "cluster.go:125: [config/images] Pulled registry.k8s.io/etcd:3.6.5-0"
      L9: "cluster.go:125: I0429 00:44:52.597278    2383 version.go:260] remote version is much newer: v1.36.0; falling back to: stable-1.34"
      L10: "cluster.go:125: [init] Using Kubernetes version: v1.34.7"
      L11: "cluster.go:125: [preflight] Running pre-flight checks"
      L12: "cluster.go:125:  [WARNING Hostname]: hostname __ci-4669-0-0-n-699122ba62__ could not be reached"
      L13: "cluster.go:125:  [WARNING Hostname]: hostname __ci-4669-0-0-n-699122ba62__: lookup ci-4669-0-0-n-699122ba62 on 1.1.1.1:53: no such host"
      L14: "cluster.go:125:  [WARNING Service-Kubelet]: kubelet service is not enabled, please run _systemctl enable kubelet.service_"
      L15: "cluster.go:125: [preflight] Pulling images required for setting up a Kubernetes cluster"
      L16: "cluster.go:125: [preflight] This might take a minute or two, depending on the speed of your internet connection"
      L17: "cluster.go:125: [preflight] You can also perform this action beforehand using _kubeadm config images pull_"
      L18: "cluster.go:125: [certs] Using certificateDir folder __/etc/kubernetes/pki__"
      L19: "cluster.go:125: [certs] Generating __ca__ certificate and key"
      L20: "cluster.go:125: [certs] Generating __apiserver__ certificate and key"
      L21: "cluster.go:125: [certs] apiserver serving cert is signed for DNS names [ci-4669-0-0-n-699122ba62 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.?96.0.1 10.0.2.93]"
      L22: "cluster.go:125: [certs] Generating __apiserver-kubelet-client__ certificate and key"
      L23: "cluster.go:125: [certs] Generating __front-proxy-ca__ certificate and key"
      L24: "cluster.go:125: [certs] Generating __front-proxy-client__ certificate and key"
      L25: "cluster.go:125: [certs] External etcd mode: Skipping etcd/ca certificate authority generation"
      L26: "cluster.go:125: [certs] External etcd mode: Skipping etcd/server certificate generation"
      L27: "cluster.go:125: [certs] External etcd mode: Skipping etcd/peer certificate generation"
      L28: "cluster.go:125: [certs] External etcd mode: Skipping etcd/healthcheck-client certificate generation"
      L29: "cluster.go:125: [certs] External etcd mode: Skipping apiserver-etcd-client certificate generation"
      L30: "cluster.go:125: [certs] Generating __sa__ key and public key"
      L31: "cluster.go:125: [kubeconfig] Using kubeconfig folder __/etc/kubernetes__"
      L32: "cluster.go:125: [kubeconfig] Writing __admin.conf__ kubeconfig file"
      L33: "cluster.go:125: [kubeconfig] Writing __super-admin.conf__ kubeconfig file"
      L34: "cluster.go:125: [kubeconfig] Writing __kubelet.conf__ kubeconfig file"
      L35: "cluster.go:125: [kubeconfig] Writing __controller-manager.conf__ kubeconfig file"
      L36: "cluster.go:125: [kubeconfig] Writing __scheduler.conf__ kubeconfig file"
      L37: "cluster.go:125: [control-plane] Using manifest folder __/etc/kubernetes/manifests__"
      L38: "cluster.go:125: [control-plane] Creating static Pod manifest for __kube-apiserver__"
      L39: "cluster.go:125: [control-plane] Creating static Pod manifest for __kube-controller-manager__"
      L40: "cluster.go:125: [control-plane] Creating static Pod manifest for __kube-scheduler__"
      L41: "cluster.go:125: [kubelet-start] Writing kubelet environment file with flags to file __/var/lib/kubelet/kubeadm-flags.env__"
      L42: "cluster.go:125: [kubelet-start] Writing kubelet configuration to file __/var/lib/kubelet/instance-config.yaml__"
      L43: "cluster.go:125: [patches] Applied patch of type __application/strategic-merge-patch+json__ to target __kubeletconfiguration__"
      L44: "cluster.go:125: [kubelet-start] Writing kubelet configuration to file __/var/lib/kubelet/config.yaml__"
      L45: "cluster.go:125: [kubelet-start] Starting the kubelet"
      L46: "cluster.go:125: [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory __/etc/kubernetes/manifests__"
      L47: "cluster.go:125: [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s"
      L48: "cluster.go:125: [kubelet-check] The kubelet is healthy after 1.002282063s"
      L49: "cluster.go:125: [control-plane-check] Waiting for healthy control plane components. This can take up to 30m0s"
      L50: "cluster.go:125: [control-plane-check] Checking kube-apiserver at https://10.0.2.93:6443/livez"
      L51: "cluster.go:125: [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz"
      L52: "cluster.go:125: [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez"
      L53: "cluster.go:125: [control-plane-check] kube-controller-manager is healthy after 1.505579533s"
      L54: "cluster.go:125: [control-plane-check] kube-scheduler is healthy after 1.911779063s"
      L55: "cluster.go:125: [control-plane-check] kube-apiserver is healthy after 4.501259653s"
      L56: "cluster.go:125: [upload-config] Storing the configuration used in ConfigMap __kubeadm-config__ in the __kube-system__ Namespace"
      L57: "cluster.go:125: [kubelet] Creating a ConfigMap __kubelet-config__ in namespace kube-system with the configuration for the kubelets in the cluster"
      L58: "cluster.go:125: [upload-certs] Skipping phase. Please see --upload-certs"
      L59: "cluster.go:125: [mark-control-plane] Marking the node ci-4669-0-0-n-699122ba62 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-lo?ad-balancers]"
      L60: "cluster.go:125: [mark-control-plane] Marking the node ci-4669-0-0-n-699122ba62 as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]"
      L61: "cluster.go:125: [bootstrap-token] Using token: 0qdsg8.ow08nj78rrqxv3og"
      L62: "cluster.go:125: [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles"
      L63: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes"
      L64: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials"
      L65: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token"
      L66: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster"
      L67: "cluster.go:125: [bootstrap-token] Creating the __cluster-info__ ConfigMap in the __kube-public__ namespace"
      L68: "cluster.go:125: [kubelet-finalize] Updating __/etc/kubernetes/kubelet.conf__ to point to a rotatable kubelet client certificate and key"
      L69: "cluster.go:125: [addons] Applied essential addon: CoreDNS"
      L70: "cluster.go:125: [addons] Applied essential addon: kube-proxy"
      L71: "cluster.go:125: "
      L72: "cluster.go:125: Your Kubernetes control-plane has initialized successfully!"
      L73: "cluster.go:125: "
      L74: "cluster.go:125: To start using your cluster, you need to run the following as a regular user:"
      L75: "cluster.go:125: "
      L76: "cluster.go:125:   mkdir -p $HOME/.kube"
      L77: "cluster.go:125:   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config"
      L78: "cluster.go:125:   sudo chown $(id -u):$(id -g) $HOME/.kube/config"
      L79: "cluster.go:125: "
      L80: "cluster.go:125: Alternatively, if you are the root user, you can run:"
      L81: "cluster.go:125: "
      L82: "cluster.go:125:   export KUBECONFIG=/etc/kubernetes/admin.conf"
      L83: "cluster.go:125: "
      L84: "cluster.go:125: You should now deploy a pod network to the cluster."
      L85: "cluster.go:125: Run __kubectl apply -f [podnetwork].yaml__ with one of the options listed at:"
      L86: "cluster.go:125:   https://kubernetes.io/docs/concepts/cluster-administration/addons/"
      L87: "cluster.go:125: "
      L88: "cluster.go:125: Then you can join any number of worker nodes by running the following on each as root:"
      L89: "cluster.go:125: "
      L90: "cluster.go:125: kubeadm join 10.0.2.93:6443 --token 0qdsg8.ow08nj78rrqxv3og _"
      L91: "cluster.go:125:  --discovery-token-ca-cert-hash sha256:c6c117bbef83f37726b2a6ba7aab26efe36b0c9a68a727fff5baf39bd08226ef "
      L92: "cluster.go:125: namespace/kube-flannel created"
      L93: "cluster.go:125: clusterrole.rbac.authorization.k8s.io/flannel created"
      L94: "cluster.go:125: clusterrolebinding.rbac.authorization.k8s.io/flannel created"
      L95: "cluster.go:125: serviceaccount/flannel created"
      L96: "cluster.go:125: configmap/kube-flannel-cfg created"
      L97: "cluster.go:125: daemonset.apps/kube-flannel-ds created"
      L98: "kubeadm.go:197: unable to setup cluster: unable to create worker node: machine __f9574c19-3809-4650-a1d3-6922db2d7607__ failed to start: ssh journalctl failed: time limit exceeded: dial tcp 188.34.118?.6:22: i/o timeout_"
      L99: " "
  ```

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for stackit, run 1</summary>

  ```
      L1: " Error: _kubeadm.go:197: unable to setup cluster: unable to create etcd node: machine __e6184e8c-9480-4504-bf7d-bff7fead7c66__ failed to start: ssh journalctl failed: time limit exceeded: dial tcp 188.?34.66.167:22: i/o timeout_"
      L2: " "
  ```


</details>


🟢 ok **kubeadm.v1.35.1.calico.base**; Succeeded: stackit (2); Failed: stackit (1)

<details>

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for stackit, run 1</summary>

  ```
      L1: " Error: _kubeadm.go:197: unable to setup cluster: unable to create master node: machine __3bb47dc2-70f4-4c8a-8075-8a2a180c7a53__ failed to start: ssh journalctl failed: time limit exceeded: dial tcp 18?8.34.67.197:22: i/o timeout_"
      L2: " "
  ```


</details>


❌ not ok **kubeadm.v1.35.1.cilium.base**; Failed: stackit (1, 2, 3, 4, 5)

<details>

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for stackit, run 5</summary>

  ```
      L1: " Error: _harness.go:608: Cluster failed: creating network for cluster: WaitWithContext() has timed out: context deadline exceeded_"
      L2: " "
  ```

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for stackit, run 4</summary>

  ```
      L1: " Error: _kubeadm.go:197: unable to setup cluster: unable to create etcd node: machine __a074b71c-be56-4fb2-ac9a-acd60ee92b7b__ failed to start: ssh journalctl failed: time limit exceeded: dial tcp 188.?34.87.142:22: i/o timeout_"
      L2: " "
  ```

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for stackit, run 3</summary>

  ```
      L1: " Error: _kubeadm.go:197: unable to setup cluster: unable to create master node: machine __f9b7d738-661d-4336-bc0a-f969903725f8__ failed to start: ssh journalctl failed: time limit exceeded: dial tcp 18?8.34.67.48:22: i/o timeout_"
      L2: " "
  ```

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for stackit, run 2</summary>

  ```
      L1: " Error: _kubeadm.go:197: unable to setup cluster: unable to create master node: machine __1cf0969d-0cbf-4e56-84c7-a8ffb9a9f1d6__ failed to start: ssh journalctl failed: time limit exceeded: dial tcp 18?8.34.64.198:22: i/o timeout_"
      L2: " "
  ```

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for stackit, run 1</summary>

  ```
      L1: " Error: _kubeadm.go:197: unable to setup cluster: unable to create master node: machine __f70eb779-914e-4058-aa33-10d5215a4d5b__ failed to start: ssh journalctl failed: time limit exceeded: dial tcp 18?8.34.65.7:22: i/o timeout_"
      L2: " "
  ```


</details>


❌ not ok **kubeadm.v1.35.1.flannel.base**; Failed: stackit (1, 2, 3, 4, 5)

<details>

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for stackit, run 5</summary>

  ```
      L1: " Error: _kubeadm.go:197: unable to setup cluster: unable to create etcd node: machine __0ef8c016-b4d4-4dc1-8c91-ec0c52c3d0b8__ failed to start: ssh journalctl failed: time limit exceeded: dial tcp 188.?34.87.188:22: i/o timeout_"
      L2: " "
  ```

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for stackit, run 4</summary>

  ```
      L1: " Error: _kubeadm.go:197: unable to setup cluster: unable to create etcd node: machine __05f756a5-b9a0-420b-ae43-0777360df6d8__ failed to start: ssh journalctl failed: time limit exceeded: dial tcp 188.?34.86.79:22: i/o timeout_"
      L2: " "
  ```

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for stackit, run 3</summary>

  ```
      L1: " Error: _kubeadm.go:197: unable to setup cluster: unable to create etcd node: machine __ec639d53-2e29-4189-94ad-b2af5c03f85c__ failed to start: ssh journalctl failed: time limit exceeded: dial tcp 188.?34.111.63:22: i/o timeout_"
      L2: " "
  ```

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for stackit, run 2</summary>

  ```
      L1: " Error: _kubeadm.go:197: unable to setup cluster: unable to create etcd node: machine __7dd13daa-88fe-435f-8505-2e581af1a5ca__ failed to start: ssh journalctl failed: time limit exceeded: dial tcp 188.?34.108.152:22: i/o timeout_"
      L2: " "
  ```

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for stackit, run 1</summary>

  ```
      L1: " Error: _kubeadm.go:197: unable to setup cluster: unable to create etcd node: error creating IP address: 504 Gateway Timeout, status code 504, Body: upstream request timeout_"
      L2: " "
  ```


</details>


❌ not ok **linux.nfs.v3**; Failed: stackit (1, 2, 3, 4, 5)

<details>

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for stackit, run 5</summary>

  ```
      L1: " Error: _nfs.go:77: Cluster.NewMachine: machine __53f05675-cd17-474a-92d7-a7bd060bbea7__ failed to start: ssh journalctl failed: time limit exceeded: dial tcp 188.34.87.189:22: i/o timeout_"
      L2: " "
  ```

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for stackit, run 4</summary>

  ```
      L1: " Error: _nfs.go:77: Cluster.NewMachine: machine __6f0e7cf1-bc17-4bbc-8dd1-d31a871142ca__ failed to start: ssh journalctl failed: time limit exceeded: dial tcp 188.34.85.226:22: i/o timeout_"
      L2: " "
  ```

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for stackit, run 3</summary>

  ```
      L1: " Error: _nfs.go:77: Cluster.NewMachine: machine __541f15b5-c07e-43ef-abf8-8d6493bf4d7e__ failed to start: ssh journalctl failed: time limit exceeded: dial tcp 188.34.67.227:22: i/o timeout_"
      L2: " "
  ```

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for stackit, run 2</summary>

  ```
      L1: " Error: _nfs.go:80: NFS server booted."
      L2: "nfs.go:85: Test file __/tmp/tmp.2j7xKA5cp1__ created on server."
      L3: "nfs.go:122: Cluster.NewMachine: machine __2a1938e7-8ee4-48fb-9f68-6ffae742b15b__ failed to start: ssh journalctl failed: time limit exceeded: dial tcp 188.34.66.68:22: i/o timeout_"
      L4: " "
  ```

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for stackit, run 1</summary>

  ```
      L1: " Error: _nfs.go:77: Cluster.NewMachine: machine __262b4c61-822d-4010-bcb9-0144ddfe8974__ failed to start: ssh journalctl failed: time limit exceeded: dial tcp 188.34.66.68:22: i/o timeout_"
      L2: " "
  ```


</details>


❌ not ok **linux.nfs.v4**; Failed: stackit (1, 2, 3, 4, 5)

<details>

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for stackit, run 5</summary>

  ```
      L1: " Error: _nfs.go:77: Cluster.NewMachine: machine __24b2a685-38e9-4938-b350-41495557c3e9__ failed to start: ssh journalctl failed: time limit exceeded: dial tcp 188.34.67.48:22: i/o timeout_"
      L2: " "
  ```

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for stackit, run 4</summary>

  ```
      L1: " Error: _nfs.go:77: Cluster.NewMachine: machine __e73f8e07-d9d5-44e3-933f-04402de5da86__ failed to start: ssh journalctl failed: time limit exceeded: dial tcp 188.34.85.128:22: i/o timeout_"
      L2: " "
  ```

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for stackit, run 3</summary>

  ```
      L1: " Error: _nfs.go:80: NFS server booted."
      L2: "nfs.go:85: Test file __/tmp/tmp.ohxzEP73gf__ created on server."
      L3: "nfs.go:122: Cluster.NewMachine: machine __a83c3d09-e60d-477e-92ae-c410b7e22afe__ failed to start: ssh journalctl failed: time limit exceeded: dial tcp 188.34.92.150:22: i/o timeout_"
      L4: " "
  ```

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for stackit, run 2</summary>

  ```
      L1: " Error: _nfs.go:80: NFS server booted."
      L2: "nfs.go:85: Test file __/tmp/tmp.dJK34mWVwi__ created on server."
      L3: "nfs.go:122: Cluster.NewMachine: machine __8acb0c76-7889-416f-a053-21e60d838b9b__ failed to start: ssh journalctl failed: time limit exceeded: dial tcp 192.214.179.40:22: i/o timeout_"
      L4: " "
  ```

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for stackit, run 1</summary>

  ```
      L1: " Error: _nfs.go:77: Cluster.NewMachine: machine __0d2f9451-531d-4dd3-bcff-11c8f0b7b855__ failed to start: ssh journalctl failed: time limit exceeded: dial tcp 188.34.64.198:22: i/o timeout_"
      L2: " "
  ```


</details>

