### Test report for 4669.0.0 / arm64

**Platforms tested**: hetzner

🟢 ok **cl.basic**; Succeeded: hetzner (1)

🟢 ok **cl.etcd-member.discovery**; Succeeded: hetzner (1)

🟢 ok **cl.flannel.vxlan**; Succeeded: hetzner (1)

🟢 ok **cl.ignition.kargs**; Succeeded: hetzner (1)

🟢 ok **cl.ignition.misc.empty**; Succeeded: hetzner (1)

🟢 ok **cl.ignition.v1.noop**; Succeeded: hetzner (1)

🟢 ok **cl.ignition.v2.btrfsroot**; Succeeded: hetzner (1)

🟢 ok **cl.ignition.v2.noop**; Succeeded: hetzner (1)

🟢 ok **cl.install.cloudinit**; Succeeded: hetzner (1)

🟢 ok **cl.internet**; Succeeded: hetzner (1)

🟢 ok **cl.network.initramfs.second-boot**; Succeeded: hetzner (1)

🟢 ok **coreos.ignition.once**; Succeeded: hetzner (1)

🟢 ok **coreos.ignition.resource.local**; Succeeded: hetzner (1)

🟢 ok **coreos.ignition.resource.remote**; Succeeded: hetzner (1)

🟢 ok **coreos.ignition.security.tls**; Succeeded: hetzner (1)

🟢 ok **coreos.ignition.sethostname**; Succeeded: hetzner (2); Failed: hetzner (1)

<details>

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for hetzner, run 1</summary>

  ```
      L1: " Error: _harness.go:646: Cluster failed starting machines: failed to request new server: error during placement (resource_unavailable, e4a93da96a8ccad620b04761dcb40c91)_"
      L2: " "
  ```


</details>


🟢 ok **coreos.ignition.ssh.key**; Succeeded: hetzner (1)

🟢 ok **docker.network-openbsd-nc**; Succeeded: hetzner (1)

🟢 ok **kubeadm.v1.33.8.calico.base**; Succeeded: hetzner (1)

🟢 ok **kubeadm.v1.33.8.cilium.base**; Succeeded: hetzner (2); Failed: hetzner (1)

<details>

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for hetzner, run 1</summary>

  ```
      L1: " Error: _kubeadm.go:197: unable to setup cluster: unable to create master node: failed to request new server: error during placement (resource_unavailable, 55a01d17d2cb56d6106e6a85f1a21183)_"
      L2: " "
  ```


</details>


🟢 ok **kubeadm.v1.33.8.flannel.base**; Succeeded: hetzner (2); Failed: hetzner (1)

<details>

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for hetzner, run 1</summary>

  ```
      L1: " Error: _cluster.go:125: I0420 17:54:48.154331    2039 version.go:261] remote version is much newer: v1.35.4; falling back to: stable-1.33"
      L2: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-apiserver:v1.33.11"
      L3: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-controller-manager:v1.33.11"
      L4: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-scheduler:v1.33.11"
      L5: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-proxy:v1.33.11"
      L6: "cluster.go:125: [config/images] Pulled registry.k8s.io/coredns/coredns:v1.12.0"
      L7: "cluster.go:125: [config/images] Pulled registry.k8s.io/pause:3.10"
      L8: "cluster.go:125: [config/images] Pulled registry.k8s.io/etcd:3.5.24-0"
      L9: "cluster.go:125: I0420 17:54:56.227994    2252 version.go:261] remote version is much newer: v1.35.4; falling back to: stable-1.33"
      L10: "cluster.go:125: [init] Using Kubernetes version: v1.33.11"
      L11: "cluster.go:125: [preflight] Running pre-flight checks"
      L12: "cluster.go:125:  [WARNING Hostname]: hostname __ci-4669-0-0-8-455cb1d789__ could not be reached"
      L13: "cluster.go:125:  [WARNING Hostname]: hostname __ci-4669-0-0-8-455cb1d789__: lookup ci-4669-0-0-8-455cb1d789 on 185.12.64.2:53: no such host"
      L14: "cluster.go:125:  [WARNING Service-Kubelet]: kubelet service is not enabled, please run _systemctl enable kubelet.service_"
      L15: "cluster.go:125: [preflight] Pulling images required for setting up a Kubernetes cluster"
      L16: "cluster.go:125: [preflight] This might take a minute or two, depending on the speed of your internet connection"
      L17: "cluster.go:125: [preflight] You can also perform this action beforehand using _kubeadm config images pull_"
      L18: "cluster.go:125: [certs] Using certificateDir folder __/etc/kubernetes/pki__"
      L19: "cluster.go:125: [certs] Generating __ca__ certificate and key"
      L20: "cluster.go:125: [certs] Generating __apiserver__ certificate and key"
      L21: "cluster.go:125: [certs] apiserver serving cert is signed for DNS names [ci-4669-0-0-8-455cb1d789 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 178.104.184.30]"
      L22: "cluster.go:125: [certs] Generating __apiserver-kubelet-client__ certificate and key"
      L23: "cluster.go:125: [certs] Generating __front-proxy-ca__ certificate and key"
      L24: "cluster.go:125: [certs] Generating __front-proxy-client__ certificate and key"
      L25: "cluster.go:125: [certs] External etcd mode: Skipping etcd/ca certificate authority generation"
      L26: "cluster.go:125: [certs] External etcd mode: Skipping etcd/server certificate generation"
      L27: "cluster.go:125: [certs] External etcd mode: Skipping etcd/peer certificate generation"
      L28: "cluster.go:125: [certs] External etcd mode: Skipping etcd/healthcheck-client certificate generation"
      L29: "cluster.go:125: [certs] External etcd mode: Skipping apiserver-etcd-client certificate generation"
      L30: "cluster.go:125: [certs] Generating __sa__ key and public key"
      L31: "cluster.go:125: [kubeconfig] Using kubeconfig folder __/etc/kubernetes__"
      L32: "cluster.go:125: [kubeconfig] Writing __admin.conf__ kubeconfig file"
      L33: "cluster.go:125: [kubeconfig] Writing __super-admin.conf__ kubeconfig file"
      L34: "cluster.go:125: [kubeconfig] Writing __kubelet.conf__ kubeconfig file"
      L35: "cluster.go:125: [kubeconfig] Writing __controller-manager.conf__ kubeconfig file"
      L36: "cluster.go:125: [kubeconfig] Writing __scheduler.conf__ kubeconfig file"
      L37: "cluster.go:125: [control-plane] Using manifest folder __/etc/kubernetes/manifests__"
      L38: "cluster.go:125: [control-plane] Creating static Pod manifest for __kube-apiserver__"
      L39: "cluster.go:125: [control-plane] Creating static Pod manifest for __kube-controller-manager__"
      L40: "cluster.go:125: [control-plane] Creating static Pod manifest for __kube-scheduler__"
      L41: "cluster.go:125: [kubelet-start] Writing kubelet environment file with flags to file __/var/lib/kubelet/kubeadm-flags.env__"
      L42: "cluster.go:125: [kubelet-start] Writing kubelet configuration to file __/var/lib/kubelet/config.yaml__"
      L43: "cluster.go:125: [kubelet-start] Starting the kubelet"
      L44: "cluster.go:125: [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory __/etc/kubernetes/manifests__"
      L45: "cluster.go:125: [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s"
      L46: "cluster.go:125: [kubelet-check] The kubelet is healthy after 1.502172821s"
      L47: "cluster.go:125: [control-plane-check] Waiting for healthy control plane components. This can take up to 30m0s"
      L48: "cluster.go:125: [control-plane-check] Checking kube-apiserver at https://178.104.184.30:6443/livez"
      L49: "cluster.go:125: [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz"
      L50: "cluster.go:125: [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez"
      L51: "cluster.go:125: [control-plane-check] kube-controller-manager is healthy after 2.622050883s"
      L52: "cluster.go:125: [control-plane-check] kube-scheduler is healthy after 3.146890188s"
      L53: "cluster.go:125: [control-plane-check] kube-apiserver is healthy after 5.004105631s"
      L54: "cluster.go:125: [upload-config] Storing the configuration used in ConfigMap __kubeadm-config__ in the __kube-system__ Namespace"
      L55: "cluster.go:125: [kubelet] Creating a ConfigMap __kubelet-config__ in namespace kube-system with the configuration for the kubelets in the cluster"
      L56: "cluster.go:125: [upload-certs] Skipping phase. Please see --upload-certs"
      L57: "cluster.go:125: [mark-control-plane] Marking the node ci-4669-0-0-8-455cb1d789 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]"
      L58: "cluster.go:125: [mark-control-plane] Marking the node ci-4669-0-0-8-455cb1d789 as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]"
      L59: "cluster.go:125: [bootstrap-token] Using token: pggljz.u1ucsr2epwvjb37g"
      L60: "cluster.go:125: [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles"
      L61: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes"
      L62: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials"
      L63: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token"
      L64: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster"
      L65: "cluster.go:125: [bootstrap-token] Creating the __cluster-info__ ConfigMap in the __kube-public__ namespace"
      L66: "cluster.go:125: [kubelet-finalize] Updating __/etc/kubernetes/kubelet.conf__ to point to a rotatable kubelet client certificate and key"
      L67: "cluster.go:125: [addons] Applied essential addon: CoreDNS"
      L68: "cluster.go:125: [addons] Applied essential addon: kube-proxy"
      L69: "cluster.go:125: "
      L70: "cluster.go:125: Your Kubernetes control-plane has initialized successfully!"
      L71: "cluster.go:125: "
      L72: "cluster.go:125: To start using your cluster, you need to run the following as a regular user:"
      L73: "cluster.go:125: "
      L74: "cluster.go:125:   mkdir -p $HOME/.kube"
      L75: "cluster.go:125:   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config"
      L76: "cluster.go:125:   sudo chown $(id -u):$(id -g) $HOME/.kube/config"
      L77: "cluster.go:125: "
      L78: "cluster.go:125: Alternatively, if you are the root user, you can run:"
      L79: "cluster.go:125: "
      L80: "cluster.go:125:   export KUBECONFIG=/etc/kubernetes/admin.conf"
      L81: "cluster.go:125: "
      L82: "cluster.go:125: You should now deploy a pod network to the cluster."
      L83: "cluster.go:125: Run __kubectl apply -f [podnetwork].yaml__ with one of the options listed at:"
      L84: "cluster.go:125:   https://kubernetes.io/docs/concepts/cluster-administration/addons/"
      L85: "cluster.go:125: "
      L86: "cluster.go:125: Then you can join any number of worker nodes by running the following on each as root:"
      L87: "cluster.go:125: "
      L88: "cluster.go:125: kubeadm join 178.104.184.30:6443 --token pggljz.u1ucsr2epwvjb37g _"
      L89: "cluster.go:125:  --discovery-token-ca-cert-hash sha256:dbb23866b993ffb3c8a08398d723c5507141e9b6d841050584beed329b3d3811 "
      L90: "cluster.go:125: namespace/kube-flannel created"
      L91: "cluster.go:125: clusterrole.rbac.authorization.k8s.io/flannel created"
      L92: "cluster.go:125: clusterrolebinding.rbac.authorization.k8s.io/flannel created"
      L93: "cluster.go:125: serviceaccount/flannel created"
      L94: "cluster.go:125: configmap/kube-flannel-cfg created"
      L95: "cluster.go:125: daemonset.apps/kube-flannel-ds created"
      L96: "kubeadm.go:197: unable to setup cluster: unable to create worker node: failed to request new server: error during placement (resource_unavailable, c0fe6fc0b2cbb05065d35250de870a34)_"
      L97: " "
  ```


</details>


🟢 ok **kubeadm.v1.34.4.calico.base**; Succeeded: hetzner (1)

🟢 ok **kubeadm.v1.34.4.cilium.base**; Succeeded: hetzner (2); Failed: hetzner (1)

<details>

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for hetzner, run 1</summary>

  ```
      L1: " Error: _cluster.go:125: I0420 17:54:48.944384    2098 version.go:260] remote version is much newer: v1.35.4; falling back to: stable-1.34"
      L2: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-apiserver:v1.34.7"
      L3: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-controller-manager:v1.34.7"
      L4: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-scheduler:v1.34.7"
      L5: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-proxy:v1.34.7"
      L6: "cluster.go:125: [config/images] Pulled registry.k8s.io/coredns/coredns:v1.12.1"
      L7: "cluster.go:125: [config/images] Pulled registry.k8s.io/pause:3.10.1"
      L8: "cluster.go:125: [config/images] Pulled registry.k8s.io/etcd:3.6.5-0"
      L9: "cluster.go:125: I0420 17:54:56.638703    2298 version.go:260] remote version is much newer: v1.35.4; falling back to: stable-1.34"
      L10: "cluster.go:125: [init] Using Kubernetes version: v1.34.7"
      L11: "cluster.go:125: [preflight] Running pre-flight checks"
      L12: "cluster.go:125:  [WARNING Hostname]: hostname __ci-4669-0-0-8-43ed53e82b__ could not be reached"
      L13: "cluster.go:125:  [WARNING Hostname]: hostname __ci-4669-0-0-8-43ed53e82b__: lookup ci-4669-0-0-8-43ed53e82b on 185.12.64.1:53: no such host"
      L14: "cluster.go:125:  [WARNING Service-Kubelet]: kubelet service is not enabled, please run _systemctl enable kubelet.service_"
      L15: "cluster.go:125: [preflight] Pulling images required for setting up a Kubernetes cluster"
      L16: "cluster.go:125: [preflight] This might take a minute or two, depending on the speed of your internet connection"
      L17: "cluster.go:125: [preflight] You can also perform this action beforehand using _kubeadm config images pull_"
      L18: "cluster.go:125: [certs] Using certificateDir folder __/etc/kubernetes/pki__"
      L19: "cluster.go:125: [certs] Generating __ca__ certificate and key"
      L20: "cluster.go:125: [certs] Generating __apiserver__ certificate and key"
      L21: "cluster.go:125: [certs] apiserver serving cert is signed for DNS names [ci-4669-0-0-8-43ed53e82b kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 178.105.2.107]"
      L22: "cluster.go:125: [certs] Generating __apiserver-kubelet-client__ certificate and key"
      L23: "cluster.go:125: [certs] Generating __front-proxy-ca__ certificate and key"
      L24: "cluster.go:125: [certs] Generating __front-proxy-client__ certificate and key"
      L25: "cluster.go:125: [certs] External etcd mode: Skipping etcd/ca certificate authority generation"
      L26: "cluster.go:125: [certs] External etcd mode: Skipping etcd/server certificate generation"
      L27: "cluster.go:125: [certs] External etcd mode: Skipping etcd/peer certificate generation"
      L28: "cluster.go:125: [certs] External etcd mode: Skipping etcd/healthcheck-client certificate generation"
      L29: "cluster.go:125: [certs] External etcd mode: Skipping apiserver-etcd-client certificate generation"
      L30: "cluster.go:125: [certs] Generating __sa__ key and public key"
      L31: "cluster.go:125: [kubeconfig] Using kubeconfig folder __/etc/kubernetes__"
      L32: "cluster.go:125: [kubeconfig] Writing __admin.conf__ kubeconfig file"
      L33: "cluster.go:125: [kubeconfig] Writing __super-admin.conf__ kubeconfig file"
      L34: "cluster.go:125: [kubeconfig] Writing __kubelet.conf__ kubeconfig file"
      L35: "cluster.go:125: [kubeconfig] Writing __controller-manager.conf__ kubeconfig file"
      L36: "cluster.go:125: [kubeconfig] Writing __scheduler.conf__ kubeconfig file"
      L37: "cluster.go:125: [control-plane] Using manifest folder __/etc/kubernetes/manifests__"
      L38: "cluster.go:125: [control-plane] Creating static Pod manifest for __kube-apiserver__"
      L39: "cluster.go:125: [control-plane] Creating static Pod manifest for __kube-controller-manager__"
      L40: "cluster.go:125: [control-plane] Creating static Pod manifest for __kube-scheduler__"
      L41: "cluster.go:125: [kubelet-start] Writing kubelet environment file with flags to file __/var/lib/kubelet/kubeadm-flags.env__"
      L42: "cluster.go:125: [kubelet-start] Writing kubelet configuration to file __/var/lib/kubelet/instance-config.yaml__"
      L43: "cluster.go:125: [patches] Applied patch of type __application/strategic-merge-patch+json__ to target __kubeletconfiguration__"
      L44: "cluster.go:125: [kubelet-start] Writing kubelet configuration to file __/var/lib/kubelet/config.yaml__"
      L45: "cluster.go:125: [kubelet-start] Starting the kubelet"
      L46: "cluster.go:125: [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory __/etc/kubernetes/manifests__"
      L47: "cluster.go:125: [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s"
      L48: "cluster.go:125: [kubelet-check] The kubelet is healthy after 1.001226476s"
      L49: "cluster.go:125: [control-plane-check] Waiting for healthy control plane components. This can take up to 30m0s"
      L50: "cluster.go:125: [control-plane-check] Checking kube-apiserver at https://178.105.2.107:6443/livez"
      L51: "cluster.go:125: [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz"
      L52: "cluster.go:125: [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez"
      L53: "cluster.go:125: [control-plane-check] kube-controller-manager is healthy after 2.024970874s"
      L54: "cluster.go:125: [control-plane-check] kube-scheduler is healthy after 2.665480903s"
      L55: "cluster.go:125: [control-plane-check] kube-apiserver is healthy after 4.502802926s"
      L56: "cluster.go:125: [upload-config] Storing the configuration used in ConfigMap __kubeadm-config__ in the __kube-system__ Namespace"
      L57: "cluster.go:125: [kubelet] Creating a ConfigMap __kubelet-config__ in namespace kube-system with the configuration for the kubelets in the cluster"
      L58: "cluster.go:125: [upload-certs] Skipping phase. Please see --upload-certs"
      L59: "cluster.go:125: [mark-control-plane] Marking the node ci-4669-0-0-8-43ed53e82b as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]"
      L60: "cluster.go:125: [mark-control-plane] Marking the node ci-4669-0-0-8-43ed53e82b as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]"
      L61: "cluster.go:125: [bootstrap-token] Using token: nnbqjr.rxhn74tlgsa4ape8"
      L62: "cluster.go:125: [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles"
      L63: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes"
      L64: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials"
      L65: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token"
      L66: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster"
      L67: "cluster.go:125: [bootstrap-token] Creating the __cluster-info__ ConfigMap in the __kube-public__ namespace"
      L68: "cluster.go:125: [kubelet-finalize] Updating __/etc/kubernetes/kubelet.conf__ to point to a rotatable kubelet client certificate and key"
      L69: "cluster.go:125: [addons] Applied essential addon: CoreDNS"
      L70: "cluster.go:125: [addons] Applied essential addon: kube-proxy"
      L71: "cluster.go:125: "
      L72: "cluster.go:125: Your Kubernetes control-plane has initialized successfully!"
      L73: "cluster.go:125: "
      L74: "cluster.go:125: To start using your cluster, you need to run the following as a regular user:"
      L75: "cluster.go:125: "
      L76: "cluster.go:125:   mkdir -p $HOME/.kube"
      L77: "cluster.go:125:   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config"
      L78: "cluster.go:125:   sudo chown $(id -u):$(id -g) $HOME/.kube/config"
      L79: "cluster.go:125: "
      L80: "cluster.go:125: Alternatively, if you are the root user, you can run:"
      L81: "cluster.go:125: "
      L82: "cluster.go:125:   export KUBECONFIG=/etc/kubernetes/admin.conf"
      L83: "cluster.go:125: "
      L84: "cluster.go:125: You should now deploy a pod network to the cluster."
      L85: "cluster.go:125: Run __kubectl apply -f [podnetwork].yaml__ with one of the options listed at:"
      L86: "cluster.go:125:   https://kubernetes.io/docs/concepts/cluster-administration/addons/"
      L87: "cluster.go:125: "
      L88: "cluster.go:125: Then you can join any number of worker nodes by running the following on each as root:"
      L89: "cluster.go:125: "
      L90: "cluster.go:125: kubeadm join 178.105.2.107:6443 --token nnbqjr.rxhn74tlgsa4ape8 _"
      L91: "cluster.go:125:  --discovery-token-ca-cert-hash sha256:5e904b13616a5a55b4f53e65fb78a6fde302dcaad7dc56440ca1c30c8460d9a2 "
      L92: "cluster.go:125: i  Using Cilium version 1.12.5"
      L93: "cluster.go:125: ? Auto-detected cluster name: kubernetes"
      L94: "cluster.go:125: ? Auto-detected datapath mode: tunnel"
      L95: "cluster.go:125: ? Auto-detected kube-proxy has been installed"
      L96: "cluster.go:125: i  helm template --namespace kube-system cilium cilium/cilium --version 1.12.5 --set cluster.id=0,cluster.name=kubernetes,encryption.nodeEncryption=false,extraConfig.cluster-pool-ipv4-cidr=192.168.0.0/17,extraConfig.enable-endpoint-routes=true,kubeProxyReplacement=disabled,operator.replicas=1,serviceAccounts.cilium.name=cilium,serviceAccounts.operator.name=cilium-operator,tunnel=vxlan"
      L97: "cluster.go:125: i  Storing helm values file in kube-system/cilium-cli-helm-values Secret"
      L98: "cluster.go:125: ? Created CA in secret cilium-ca"
      L99: "cluster.go:125: ? Generating certificates for Hubble..."
      L100: "cluster.go:125: ? Creating Service accounts..."
      L101: "cluster.go:125: ? Creating Cluster roles..."
      L102: "cluster.go:125: ? Creating ConfigMap for Cilium version 1.12.5..."
      L103: "cluster.go:125: i  Manual overwrite in ConfigMap: enable-endpoint-routes=true"
      L104: "cluster.go:125: i  Manual overwrite in ConfigMap: cluster-pool-ipv4-cidr=192.168.0.0/17"
      L105: "cluster.go:125: ? Creating Agent DaemonSet..."
      L106: "cluster.go:125: level=warning msg=__spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/mount-cgroup]: deprecated since v1.30; use the ___appArmorProfile___ field instead__ subsys=klog"
      L107: "cluster.go:125: level=warning msg=__spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/apply-sysctl-overwrites]: deprecated since v1.30; use the ___appArmorProfile___ field instead__ subsys=klog"
      L108: "cluster.go:125: level=warning msg=__spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/clean-cilium-state]: deprecated since v1.30; use the ___appArmorProfile___ field instead__ subsys=klog"
      L109: "cluster.go:125: level=warning msg=__spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/cilium-agent]: deprecated since v1.30; use the ___appArmorProfile___ field instead__ subsys=klog"
      L110: "cluster.go:125: ? Creating Operator Deployment..."
      L111: "cluster.go:125: ? Waiting for Cilium to be installed and ready..."
      L112: "cluster.go:125: ? Cilium was successfully installed! Run _cilium status_ to view installation health"
      L113: "cluster.go:125: (Cilium ASCII-art status banner)"
      L114: "cluster.go:125:     Cilium:         OK"
      L115: "cluster.go:125:     Operator:       OK"
      L116: "cluster.go:125:     Hubble:         disabled"
      L117: "cluster.go:125:     ClusterMesh:    disabled"
      L118: "cluster.go:125: "
      L119: "cluster.go:125: "
      L120: "cluster.go:125: Deployment       cilium-operator    "
      L121: "cluster.go:125: DaemonSet        cilium             "
      L122: "cluster.go:125: Containers:      cilium             "
      L123: "cluster.go:125:                  cilium-operator    "
      L124: "cluster.go:125: Cluster Pods:    0/0 managed by Cilium"
      L125: "kubeadm.go:197: unable to setup cluster: unable to create worker node: failed to request new server: error during placement (resource_unavailable, 53a7dadf36bdfe0253d438293c5009fc)_"
      L126: " "
  ```


</details>


🟢 ok **kubeadm.v1.34.4.flannel.base**; Succeeded: hetzner (2); Failed: hetzner (1)

<details>

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for hetzner, run 1</summary>

  ```
      L1: " Error: _cluster.go:125: I0420 17:54:48.543470    2041 version.go:260] remote version is much newer: v1.35.4; falling back to: stable-1.34"
      L2: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-apiserver:v1.34.7"
      L3: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-controller-manager:v1.34.7"
      L4: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-scheduler:v1.34.7"
      L5: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-proxy:v1.34.7"
      L6: "cluster.go:125: [config/images] Pulled registry.k8s.io/coredns/coredns:v1.12.1"
      L7: "cluster.go:125: [config/images] Pulled registry.k8s.io/pause:3.10.1"
      L8: "cluster.go:125: [config/images] Pulled registry.k8s.io/etcd:3.6.5-0"
      L9: "cluster.go:125: I0420 17:54:56.374376    2252 version.go:260] remote version is much newer: v1.35.4; falling back to: stable-1.34"
      L10: "cluster.go:125: [init] Using Kubernetes version: v1.34.7"
      L11: "cluster.go:125: [preflight] Running pre-flight checks"
      L12: "cluster.go:125:  [WARNING Hostname]: hostname __ci-4669-0-0-b-35ecb8f306__ could not be reached"
      L13: "cluster.go:125:  [WARNING Hostname]: hostname __ci-4669-0-0-b-35ecb8f306__: lookup ci-4669-0-0-b-35ecb8f306 on 185.12.64.2:53: no such host"
      L14: "cluster.go:125:  [WARNING Service-Kubelet]: kubelet service is not enabled, please run _systemctl enable kubelet.service_"
      L15: "cluster.go:125: [preflight] Pulling images required for setting up a Kubernetes cluster"
      L16: "cluster.go:125: [preflight] This might take a minute or two, depending on the speed of your internet connection"
      L17: "cluster.go:125: [preflight] You can also perform this action beforehand using _kubeadm config images pull_"
      L18: "cluster.go:125: [certs] Using certificateDir folder __/etc/kubernetes/pki__"
      L19: "cluster.go:125: [certs] Generating __ca__ certificate and key"
      L20: "cluster.go:125: [certs] Generating __apiserver__ certificate and key"
      L21: "cluster.go:125: [certs] apiserver serving cert is signed for DNS names [ci-4669-0-0-b-35ecb8f306 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 178.104.176.47]"
      L22: "cluster.go:125: [certs] Generating __apiserver-kubelet-client__ certificate and key"
      L23: "cluster.go:125: [certs] Generating __front-proxy-ca__ certificate and key"
      L24: "cluster.go:125: [certs] Generating __front-proxy-client__ certificate and key"
      L25: "cluster.go:125: [certs] External etcd mode: Skipping etcd/ca certificate authority generation"
      L26: "cluster.go:125: [certs] External etcd mode: Skipping etcd/server certificate generation"
      L27: "cluster.go:125: [certs] External etcd mode: Skipping etcd/peer certificate generation"
      L28: "cluster.go:125: [certs] External etcd mode: Skipping etcd/healthcheck-client certificate generation"
      L29: "cluster.go:125: [certs] External etcd mode: Skipping apiserver-etcd-client certificate generation"
      L30: "cluster.go:125: [certs] Generating __sa__ key and public key"
      L31: "cluster.go:125: [kubeconfig] Using kubeconfig folder __/etc/kubernetes__"
      L32: "cluster.go:125: [kubeconfig] Writing __admin.conf__ kubeconfig file"
      L33: "cluster.go:125: [kubeconfig] Writing __super-admin.conf__ kubeconfig file"
      L34: "cluster.go:125: [kubeconfig] Writing __kubelet.conf__ kubeconfig file"
      L35: "cluster.go:125: [kubeconfig] Writing __controller-manager.conf__ kubeconfig file"
      L36: "cluster.go:125: [kubeconfig] Writing __scheduler.conf__ kubeconfig file"
      L37: "cluster.go:125: [control-plane] Using manifest folder __/etc/kubernetes/manifests__"
      L38: "cluster.go:125: [control-plane] Creating static Pod manifest for __kube-apiserver__"
      L39: "cluster.go:125: [control-plane] Creating static Pod manifest for __kube-controller-manager__"
      L40: "cluster.go:125: [control-plane] Creating static Pod manifest for __kube-scheduler__"
      L41: "cluster.go:125: [kubelet-start] Writing kubelet environment file with flags to file __/var/lib/kubelet/kubeadm-flags.env__"
      L42: "cluster.go:125: [kubelet-start] Writing kubelet configuration to file __/var/lib/kubelet/instance-config.yaml__"
      L43: "cluster.go:125: [patches] Applied patch of type __application/strategic-merge-patch+json__ to target __kubeletconfiguration__"
      L44: "cluster.go:125: [kubelet-start] Writing kubelet configuration to file __/var/lib/kubelet/config.yaml__"
      L45: "cluster.go:125: [kubelet-start] Starting the kubelet"
      L46: "cluster.go:125: [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory __/etc/kubernetes/manifests__"
      L47: "cluster.go:125: [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s"
      L48: "cluster.go:125: [kubelet-check] The kubelet is healthy after 501.543228ms"
      L49: "cluster.go:125: [control-plane-check] Waiting for healthy control plane components. This can take up to 30m0s"
      L50: "cluster.go:125: [control-plane-check] Checking kube-apiserver at https://178.104.176.47:6443/livez"
      L51: "cluster.go:125: [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz"
      L52: "cluster.go:125: [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez"
      L53: "cluster.go:125: [control-plane-check] kube-controller-manager is healthy after 2.448194523s"
      L54: "cluster.go:125: [control-plane-check] kube-scheduler is healthy after 2.831961682s"
      L55: "cluster.go:125: [control-plane-check] kube-apiserver is healthy after 5.003922189s"
      L56: "cluster.go:125: [upload-config] Storing the configuration used in ConfigMap __kubeadm-config__ in the __kube-system__ Namespace"
      L57: "cluster.go:125: [kubelet] Creating a ConfigMap __kubelet-config__ in namespace kube-system with the configuration for the kubelets in the cluster"
      L58: "cluster.go:125: [upload-certs] Skipping phase. Please see --upload-certs"
      L59: "cluster.go:125: [mark-control-plane] Marking the node ci-4669-0-0-b-35ecb8f306 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]"
      L60: "cluster.go:125: [mark-control-plane] Marking the node ci-4669-0-0-b-35ecb8f306 as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]"
      L61: "cluster.go:125: [bootstrap-token] Using token: ycr9l5.5rmdvs12y5j1lnx9"
      L62: "cluster.go:125: [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles"
      L63: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes"
      L64: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials"
      L65: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token"
      L66: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster"
      L67: "cluster.go:125: [bootstrap-token] Creating the __cluster-info__ ConfigMap in the __kube-public__ namespace"
      L68: "cluster.go:125: [kubelet-finalize] Updating __/etc/kubernetes/kubelet.conf__ to point to a rotatable kubelet client certificate and key"
      L69: "cluster.go:125: [addons] Applied essential addon: CoreDNS"
      L70: "cluster.go:125: [addons] Applied essential addon: kube-proxy"
      L71: "cluster.go:125: "
      L72: "cluster.go:125: Your Kubernetes control-plane has initialized successfully!"
      L73: "cluster.go:125: "
      L74: "cluster.go:125: To start using your cluster, you need to run the following as a regular user:"
      L75: "cluster.go:125: "
      L76: "cluster.go:125:   mkdir -p $HOME/.kube"
      L77: "cluster.go:125:   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config"
      L78: "cluster.go:125:   sudo chown $(id -u):$(id -g) $HOME/.kube/config"
      L79: "cluster.go:125: "
      L80: "cluster.go:125: Alternatively, if you are the root user, you can run:"
      L81: "cluster.go:125: "
      L82: "cluster.go:125:   export KUBECONFIG=/etc/kubernetes/admin.conf"
      L83: "cluster.go:125: "
      L84: "cluster.go:125: You should now deploy a pod network to the cluster."
      L85: "cluster.go:125: Run __kubectl apply -f [podnetwork].yaml__ with one of the options listed at:"
      L86: "cluster.go:125:   https://kubernetes.io/docs/concepts/cluster-administration/addons/"
      L87: "cluster.go:125: "
      L88: "cluster.go:125: Then you can join any number of worker nodes by running the following on each as root:"
      L89: "cluster.go:125: "
      L90: "cluster.go:125: kubeadm join 178.104.176.47:6443 --token ycr9l5.5rmdvs12y5j1lnx9 _"
      L91: "cluster.go:125:  --discovery-token-ca-cert-hash sha256:6db58564479619e9050fab294affff7e348f1f8de2c16dcc0a8c60e141d42069 "
      L92: "cluster.go:125: namespace/kube-flannel created"
      L93: "cluster.go:125: clusterrole.rbac.authorization.k8s.io/flannel created"
      L94: "cluster.go:125: clusterrolebinding.rbac.authorization.k8s.io/flannel created"
      L95: "cluster.go:125: serviceaccount/flannel created"
      L96: "cluster.go:125: configmap/kube-flannel-cfg created"
      L97: "cluster.go:125: daemonset.apps/kube-flannel-ds created"
      L98: "kubeadm.go:197: unable to setup cluster: unable to create worker node: failed to request new server: error during placement (resource_unavailable, 964beff944525d7fd1afa49ebe1a4574)_"
      L99: " "
  ```


</details>


🟢 ok **kubeadm.v1.35.1.calico.base**; Succeeded: hetzner (2); Failed: hetzner (1)

<details>

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for hetzner, run 1</summary>

  ```
      L1: " Error: _cluster.go:125: [config/images] Pulled registry.k8s.io/kube-apiserver:v1.35.4"
      L2: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-controller-manager:v1.35.4"
      L3: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-scheduler:v1.35.4"
      L4: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-proxy:v1.35.4"
      L5: "cluster.go:125: [config/images] Pulled registry.k8s.io/coredns/coredns:v1.13.1"
      L6: "cluster.go:125: [config/images] Pulled registry.k8s.io/pause:3.10.1"
      L7: "cluster.go:125: [config/images] Pulled registry.k8s.io/etcd:3.6.6-0"
      L8: "cluster.go:125: [init] Using Kubernetes version: v1.35.4"
      L9: "cluster.go:125: [preflight] Running pre-flight checks"
      L10: "cluster.go:125:  [WARNING Hostname]: hostname __ci-4669-0-0-f-95ed78d3e5__ could not be reached"
      L11: "cluster.go:125:  [WARNING Hostname]: hostname __ci-4669-0-0-f-95ed78d3e5__: lookup ci-4669-0-0-f-95ed78d3e5 on 185.12.64.2:53: no such host"
      L12: "cluster.go:125:  [WARNING Service-kubelet]: kubelet service is not enabled, please run _systemctl enable kubelet.service_"
      L13: "cluster.go:125: [preflight] Pulling images required for setting up a Kubernetes cluster"
      L14: "cluster.go:125: [preflight] This might take a minute or two, depending on the speed of your internet connection"
      L15: "cluster.go:125: [preflight] You can also perform this action beforehand using _kubeadm config images pull_"
      L16: "cluster.go:125: [certs] Using certificateDir folder __/etc/kubernetes/pki__"
      L17: "cluster.go:125: [certs] Generating __ca__ certificate and key"
      L18: "cluster.go:125: [certs] Generating __apiserver__ certificate and key"
      L19: "cluster.go:125: [certs] apiserver serving cert is signed for DNS names [ci-4669-0-0-f-95ed78d3e5 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 178.104.190.59]"
      L20: "cluster.go:125: [certs] Generating __apiserver-kubelet-client__ certificate and key"
      L21: "cluster.go:125: [certs] Generating __front-proxy-ca__ certificate and key"
      L22: "cluster.go:125: [certs] Generating __front-proxy-client__ certificate and key"
      L23: "cluster.go:125: [certs] External etcd mode: Skipping etcd/ca certificate authority generation"
      L24: "cluster.go:125: [certs] External etcd mode: Skipping etcd/server certificate generation"
      L25: "cluster.go:125: [certs] External etcd mode: Skipping etcd/peer certificate generation"
      L26: "cluster.go:125: [certs] External etcd mode: Skipping etcd/healthcheck-client certificate generation"
      L27: "cluster.go:125: [certs] External etcd mode: Skipping apiserver-etcd-client certificate generation"
      L28: "cluster.go:125: [certs] Generating __sa__ key and public key"
      L29: "cluster.go:125: [kubeconfig] Using kubeconfig folder __/etc/kubernetes__"
      L30: "cluster.go:125: [kubeconfig] Writing __admin.conf__ kubeconfig file"
      L31: "cluster.go:125: [kubeconfig] Writing __super-admin.conf__ kubeconfig file"
      L32: "cluster.go:125: [kubeconfig] Writing __kubelet.conf__ kubeconfig file"
      L33: "cluster.go:125: [kubeconfig] Writing __controller-manager.conf__ kubeconfig file"
      L34: "cluster.go:125: [kubeconfig] Writing __scheduler.conf__ kubeconfig file"
      L35: "cluster.go:125: [control-plane] Using manifest folder __/etc/kubernetes/manifests__"
      L36: "cluster.go:125: [control-plane] Creating static Pod manifest for __kube-apiserver__"
      L37: "cluster.go:125: [control-plane] Creating static Pod manifest for __kube-controller-manager__"
      L38: "cluster.go:125: [control-plane] Creating static Pod manifest for __kube-scheduler__"
      L39: "cluster.go:125: [kubelet-start] Writing kubelet environment file with flags to file __/var/lib/kubelet/kubeadm-flags.env__"
      L40: "cluster.go:125: [kubelet-start] Writing kubelet configuration to file __/var/lib/kubelet/instance-config.yaml__"
      L41: "cluster.go:125: [patches] Applied patch of type __application/strategic-merge-patch+json__ to target __kubeletconfiguration__"
      L42: "cluster.go:125: [kubelet-start] Writing kubelet configuration to file __/var/lib/kubelet/config.yaml__"
      L43: "cluster.go:125: [kubelet-start] Starting the kubelet"
      L44: "cluster.go:125: [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory __/etc/kubernetes/manifests__"
      L45: "cluster.go:125: [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s"
      L46: "cluster.go:125: [kubelet-check] The kubelet is healthy after 502.781959ms"
      L47: "cluster.go:125: [control-plane-check] Waiting for healthy control plane components. This can take up to 30m0s"
      L48: "cluster.go:125: [control-plane-check] Checking kube-apiserver at https://178.104.190.59:6443/livez"
      L49: "cluster.go:125: [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz"
      L50: "cluster.go:125: [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez"
      L51: "cluster.go:125: [control-plane-check] kube-controller-manager is healthy after 2.01169025s"
      L52: "cluster.go:125: [control-plane-check] kube-scheduler is healthy after 2.705831339s"
      L53: "cluster.go:125: [control-plane-check] kube-apiserver is healthy after 4.502431872s"
      L54: "cluster.go:125: [upload-config] Storing the configuration used in ConfigMap __kubeadm-config__ in the __kube-system__ Namespace"
      L55: "cluster.go:125: [kubelet] Creating a ConfigMap __kubelet-config__ in namespace kube-system with the configuration for the kubelets in the cluster"
      L56: "cluster.go:125: [upload-certs] Skipping phase. Please see --upload-certs"
      L57: "cluster.go:125: [mark-control-plane] Marking the node ci-4669-0-0-f-95ed78d3e5 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]"
      L58: "cluster.go:125: [mark-control-plane] Marking the node ci-4669-0-0-f-95ed78d3e5 as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]"
      L59: "cluster.go:125: [bootstrap-token] Using token: 3fut5n.ldrzn94d41d2uib2"
      L60: "cluster.go:125: [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles"
      L61: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes"
      L62: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials"
      L63: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token"
      L64: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster"
      L65: "cluster.go:125: [bootstrap-token] Creating the __cluster-info__ ConfigMap in the __kube-public__ namespace"
      L66: "cluster.go:125: [kubelet-finalize] Updating __/etc/kubernetes/kubelet.conf__ to point to a rotatable kubelet client certificate and key"
      L67: "cluster.go:125: [addons] Applied essential addon: CoreDNS"
      L68: "cluster.go:125: [addons] Applied essential addon: kube-proxy"
      L69: "cluster.go:125: "
      L70: "cluster.go:125: Your Kubernetes control-plane has initialized successfully!"
      L71: "cluster.go:125: "
      L72: "cluster.go:125: To start using your cluster, you need to run the following as a regular user:"
      L73: "cluster.go:125: "
      L74: "cluster.go:125:   mkdir -p $HOME/.kube"
      L75: "cluster.go:125:   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config"
      L76: "cluster.go:125:   sudo chown $(id -u):$(id -g) $HOME/.kube/config"
      L77: "cluster.go:125: "
      L78: "cluster.go:125: Alternatively, if you are the root user, you can run:"
      L79: "cluster.go:125: "
      L80: "cluster.go:125:   export KUBECONFIG=/etc/kubernetes/admin.conf"
      L81: "cluster.go:125: "
      L82: "cluster.go:125: You should now deploy a pod network to the cluster."
      L83: "cluster.go:125: Run __kubectl apply -f [podnetwork].yaml__ with one of the options listed at:"
      L84: "cluster.go:125:   https://kubernetes.io/docs/concepts/cluster-administration/addons/"
      L85: "cluster.go:125: "
      L86: "cluster.go:125: Then you can join any number of worker nodes by running the following on each as root:"
      L87: "cluster.go:125: "
      L88: "cluster.go:125: kubeadm join 178.104.190.59:6443 --token 3fut5n.ldrzn94d41d2uib2 _"
      L89: "cluster.go:125:  --discovery-token-ca-cert-hash sha256:764ce8f43a37ab07945cb15d9425c67b9f94f73d78fcaef25994619bb88cb6fb "
      L90: "cluster.go:125: namespace/tigera-operator created"
      L91: "cluster.go:125: serviceaccount/tigera-operator created"
      L92: "cluster.go:125: clusterrole.rbac.authorization.k8s.io/tigera-operator-secrets created"
      L93: "cluster.go:125: clusterrole.rbac.authorization.k8s.io/tigera-operator created"
      L94: "cluster.go:125: clusterrolebinding.rbac.authorization.k8s.io/tigera-operator created"
      L95: "cluster.go:125: rolebinding.rbac.authorization.k8s.io/tigera-operator-secrets created"
      L96: "cluster.go:125: deployment.apps/tigera-operator created"
      L97: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/installations.operator.tigera.io condition met"
      L98: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/installations.operator.tigera.io condition met"
      L99: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/apiservers.operator.tigera.io condition met"
      L100: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/apiservers.operator.tigera.io condition met"
      L101: "cluster.go:125: installation.operator.tigera.io/default created"
      L102: "cluster.go:125: apiserver.operator.tigera.io/default created"
      L103: "cluster.go:125: goldmane.operator.tigera.io/default created"
      L104: "cluster.go:125: whisker.operator.tigera.io/default created"
      L105: "kubeadm.go:197: unable to setup cluster: unable to create worker node: failed to request new server: error during placement (resource_unavailable, 7f7c3577fd9e810c5af6556af6e83532)_"
      L106: " "
  ```


</details>


🟢 ok **kubeadm.v1.35.1.cilium.base**; Succeeded: hetzner (1)

🟢 ok **kubeadm.v1.35.1.flannel.base**; Succeeded: hetzner (1)

🟢 ok **linux.nfs.v3**; Succeeded: hetzner (1)

🟢 ok **linux.nfs.v4**; Succeeded: hetzner (1)
