### Test report for 4593.1.0+nightly-20260422-2100 / amd64

**Platforms tested**: qemu_uefi

🟢 ok **bpf.ig**; Succeeded: qemu_uefi (1)

🟢 ok **cl.basic**; Succeeded: qemu_uefi (1)

🟢 ok **cl.cloudinit.basic**; Succeeded: qemu_uefi (1)

🟢 ok **cl.cloudinit.multipart-mime**; Succeeded: qemu_uefi (1)

🟢 ok **cl.cloudinit.script**; Succeeded: qemu_uefi (1)

🟢 ok **cl.disk.raid0.data**; Succeeded: qemu_uefi (1)

🟢 ok **cl.disk.raid0.root**; Succeeded: qemu_uefi (1)

🟢 ok **cl.disk.raid1.data**; Succeeded: qemu_uefi (1)

🟢 ok **cl.disk.raid1.root**; Succeeded: qemu_uefi (2); Failed: qemu_uefi (1)

<details>

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for qemu_uefi, run 1</summary>

  ```
      L1: " Error: _raid.go:223: machine __f3fb7087-32f3-46a3-b1bb-fd96cc5a9d14__ failed to start: ssh journalctl failed: time limit exceeded: dial tcp 10.0.0.27:22: connect: no route to host"
      L2: "harness.go:616: Found systemd dependency unit failed to start (systemd-fsck… - File System Check on /dev/disk/by-label/OEM.  ) on machine f3fb7087-32f3-46a3-b1bb-fd96cc5a9d14 console_"
      L3: " "
  ```


</details>


🟢 ok **cl.etcd-member.discovery**; Succeeded: qemu_uefi (1)

🟢 ok **cl.etcd-member.etcdctlv3**; Succeeded: qemu_uefi (1)

🟢 ok **cl.etcd-member.v2-backup-restore**; Succeeded: qemu_uefi (1)

🟢 ok **cl.filesystem**; Succeeded: qemu_uefi (1)

🟢 ok **cl.flannel.udp**; Succeeded: qemu_uefi (1)

🟢 ok **cl.flannel.vxlan**; Succeeded: qemu_uefi (1)

🟢 ok **cl.ignition.instantiated.enable-unit**; Succeeded: qemu_uefi (1)

🟢 ok **cl.ignition.kargs**; Succeeded: qemu_uefi (1)

🟢 ok **cl.ignition.luks**; Succeeded: qemu_uefi (1)

🟢 ok **cl.ignition.oem.indirect**; Succeeded: qemu_uefi (1)

🟢 ok **cl.ignition.oem.indirect.new**; Succeeded: qemu_uefi (1)

🟢 ok **cl.ignition.oem.regular**; Succeeded: qemu_uefi (1)

🟢 ok **cl.ignition.oem.regular.new**; Succeeded: qemu_uefi (1)

🟢 ok **cl.ignition.oem.reuse**; Succeeded: qemu_uefi (1)

🟢 ok **cl.ignition.oem.wipe**; Succeeded: qemu_uefi (1)

🟢 ok **cl.ignition.partition_on_boot_disk**; Succeeded: qemu_uefi (1)

🟢 ok **cl.ignition.symlink**; Succeeded: qemu_uefi (1)

🟢 ok **cl.ignition.translation**; Succeeded: qemu_uefi (1)

🟢 ok **cl.ignition.v1.btrfsroot**; Succeeded: qemu_uefi (1)

🟢 ok **cl.ignition.v1.ext4root**; Succeeded: qemu_uefi (1)

🟢 ok **cl.ignition.v1.groups**; Succeeded: qemu_uefi (1)

🟢 ok **cl.ignition.v1.once**; Succeeded: qemu_uefi (1)

🟢 ok **cl.ignition.v1.sethostname**; Succeeded: qemu_uefi (2); Failed: qemu_uefi (1)

<details>

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for qemu_uefi, run 1</summary>

  ```
      L1: " Error: _harness.go:646: Cluster failed starting machines: machine __a304eed6-b2b7-442d-8894-b5c8b74c4d7b__ failed to start: ssh journalctl failed: time limit exceeded: dial tcp 10.0.0.22:22: connect: no route to host_"
      L2: " "
  ```


</details>


🟢 ok **cl.ignition.v1.users**; Succeeded: qemu_uefi (1)

🟢 ok **cl.ignition.v1.xfsroot**; Succeeded: qemu_uefi (1)

🟢 ok **cl.ignition.v2.btrfsroot**; Succeeded: qemu_uefi (1)

🟢 ok **cl.ignition.v2.ext4root**; Succeeded: qemu_uefi (1)

🟢 ok **cl.ignition.v2.users**; Succeeded: qemu_uefi (1)

🟢 ok **cl.ignition.v2.xfsroot**; Succeeded: qemu_uefi (1)

🟢 ok **cl.ignition.v2_1.ext4checkexisting**; Succeeded: qemu_uefi (1)

🟢 ok **cl.ignition.v2_1.swap**; Succeeded: qemu_uefi (1)

🟢 ok **cl.ignition.v2_1.vfat**; Succeeded: qemu_uefi (1)

🟢 ok **cl.install.cloudinit**; Succeeded: qemu_uefi (1)

🟢 ok **cl.internet**; Succeeded: qemu_uefi (1)

🟢 ok **cl.locksmith.cluster**; Succeeded: qemu_uefi (1)

🟢 ok **cl.network.initramfs.second-boot**; Succeeded: qemu_uefi (1)

🟢 ok **cl.network.iptables**; Succeeded: qemu_uefi (2); Failed: qemu_uefi (1)

<details>

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for qemu_uefi, run 1</summary>

  ```
      L1: " Error: _harness.go:646: Cluster failed starting machines: machine __3dc8a10e-e84c-4f94-8076-e20504de30d3__ failed basic checks: some systemd units failed:"
      L2: "● systemd-hwdb-update.service loaded failed failed Rebuild Hardware Database"
      L3: "status: "
      L4: "journal:-- No entries --"
      L5: "harness.go:616: Found systemd unit failed to start (systemd-hwdb-update.service - Rebuild Hardware Database.  ) on machine 3dc8a10e-e84c-4f94-8076-e20504de30d3 console"
      L6: "harness.go:616: Found systemd dependency unit failed to start (systemd-fsck… - File System Check on /dev/disk/by-label/OEM.  ) on machine 3dc8a10e-e84c-4f94-8076-e20504de30d3 console_"
      L7: " "
  ```


</details>


🟢 ok **cl.network.listeners**; Succeeded: qemu_uefi (1)

🟢 ok **cl.network.nftables**; Succeeded: qemu_uefi (1)

🟢 ok **cl.network.wireguard**; Succeeded: qemu_uefi (1)

🟢 ok **cl.omaha.ping**; Succeeded: qemu_uefi (1)

🟢 ok **cl.osreset.ignition-rerun**; Succeeded: qemu_uefi (1)

🟢 ok **cl.overlay.cleanup**; Succeeded: qemu_uefi (1)

🟢 ok **cl.swap_activation**; Succeeded: qemu_uefi (1)

🟢 ok **cl.sysext.boot**; Succeeded: qemu_uefi (1)

🟢 ok **cl.sysext.fallbackdownload**; Succeeded: qemu_uefi (1)

🟢 ok **cl.tang.nonroot**; Succeeded: qemu_uefi (1)

🟢 ok **cl.tang.root**; Succeeded: qemu_uefi (1)

🟢 ok **cl.toolbox.dnf-install**; Succeeded: qemu_uefi (1)

🟢 ok **cl.tpm.eventlog**; Succeeded: qemu_uefi (1)

🟢 ok **cl.tpm.nonroot**; Succeeded: qemu_uefi (1)

🟢 ok **cl.tpm.root**; Succeeded: qemu_uefi (1)

🟢 ok **cl.tpm.root-cryptenroll**; Succeeded: qemu_uefi (1)

🟢 ok **cl.tpm.root-cryptenroll-pcr-noupdate**; Succeeded: qemu_uefi (1)

🟢 ok **cl.tpm.root-cryptenroll-pcr-withupdate**; Succeeded: qemu_uefi (1)

🟢 ok **cl.update.badverity**; Succeeded: qemu_uefi (1)

🟢 ok **cl.update.reboot**; Succeeded: qemu_uefi (1)

🟢 ok **cl.users.shells**; Succeeded: qemu_uefi (1)

🟢 ok **cl.verity**; Succeeded: qemu_uefi (1)

🟢 ok **confext.skiprefresh**; Succeeded: qemu_uefi (1)

🟢 ok **coreos.auth.verify**; Succeeded: qemu_uefi (1)

🟢 ok **coreos.ignition.groups**; Succeeded: qemu_uefi (1)

🟢 ok **coreos.ignition.once**; Succeeded: qemu_uefi (1)

🟢 ok **coreos.ignition.resource.local**; Succeeded: qemu_uefi (1)

🟢 ok **coreos.ignition.resource.remote**; Succeeded: qemu_uefi (1)

🟢 ok **coreos.ignition.resource.s3.versioned**; Succeeded: qemu_uefi (1)

🟢 ok **coreos.ignition.security.tls**; Succeeded: qemu_uefi (1)

🟢 ok **coreos.ignition.sethostname**; Succeeded: qemu_uefi (1)

🟢 ok **coreos.ignition.systemd.enable-service**; Succeeded: qemu_uefi (1)

🟢 ok **coreos.locksmith.reboot**; Succeeded: qemu_uefi (1)

🟢 ok **coreos.locksmith.tls**; Succeeded: qemu_uefi (1)

🟢 ok **coreos.selinux.boolean**; Succeeded: qemu_uefi (1)

🟢 ok **coreos.selinux.enforce**; Succeeded: qemu_uefi (1)

🟢 ok **coreos.tls.fetch-urls**; Succeeded: qemu_uefi (1)

🟢 ok **coreos.update.badusr**; Succeeded: qemu_uefi (1)

🟢 ok **devcontainer.docker**; Succeeded: qemu_uefi (1)

🟢 ok **devcontainer.systemd-nspawn**; Succeeded: qemu_uefi (1)

🟢 ok **docker.base**; Succeeded: qemu_uefi (1)

🟢 ok **docker.btrfs-storage**; Succeeded: qemu_uefi (1)

🟢 ok **docker.containerd-restart**; Succeeded: qemu_uefi (1)

🟢 ok **docker.enable-service.sysext**; Succeeded: qemu_uefi (1)

🟢 ok **docker.lib-coreos-dockerd-compat**; Succeeded: qemu_uefi (1)

🟢 ok **docker.network-openbsd-nc**; Succeeded: qemu_uefi (1)

🟢 ok **docker.selinux**; Succeeded: qemu_uefi (1)

🟢 ok **docker.userns**; Succeeded: qemu_uefi (1)

🟢 ok **kubeadm.v1.33.8.calico.base**; Succeeded: qemu_uefi (1)

🟢 ok **kubeadm.v1.33.8.cilium.base**; Succeeded: qemu_uefi (1)

🟢 ok **kubeadm.v1.33.8.flannel.base**; Succeeded: qemu_uefi (2); Failed: qemu_uefi (1)

<details>

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for qemu_uefi, run 1</summary>

  ```
      L1: " Error: _cluster.go:125: I0422 23:48:51.882157    2034 version.go:261] remote version is much newer: v1.36.0; falling back to: stable-1.33"
      L2: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-apiserver:v1.33.11"
      L3: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-controller-manager:v1.33.11"
      L4: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-scheduler:v1.33.11"
      L5: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-proxy:v1.33.11"
      L6: "cluster.go:125: [config/images] Pulled registry.k8s.io/coredns/coredns:v1.12.0"
      L7: "cluster.go:125: [config/images] Pulled registry.k8s.io/pause:3.10"
      L8: "cluster.go:125: [config/images] Pulled registry.k8s.io/etcd:3.5.24-0"
      L9: "cluster.go:125: I0422 23:49:34.175964    2304 version.go:261] remote version is much newer: v1.36.0; falling back to: stable-1.33"
      L10: "cluster.go:125: [init] Using Kubernetes version: v1.33.11"
      L11: "cluster.go:125: [preflight] Running pre-flight checks"
      L12: "cluster.go:125:  [WARNING Service-Kubelet]: kubelet service is not enabled, please run _systemctl enable kubelet.service_"
      L13: "cluster.go:125: [preflight] Pulling images required for setting up a Kubernetes cluster"
      L14: "cluster.go:125: [preflight] This might take a minute or two, depending on the speed of your internet connection"
      L15: "cluster.go:125: [preflight] You can also perform this action beforehand using _kubeadm config images pull_"
      L16: "cluster.go:125: [certs] Using certificateDir folder __/etc/kubernetes/pki__"
      L17: "cluster.go:125: [certs] Generating __ca__ certificate and key"
      L18: "cluster.go:125: [certs] Generating __apiserver__ certificate and key"
      L19: "cluster.go:125: [certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local localhost] and IPs [10.96.0.1 10.0.0.21]"
      L20: "cluster.go:125: [certs] Generating __apiserver-kubelet-client__ certificate and key"
      L21: "cluster.go:125: [certs] Generating __front-proxy-ca__ certificate and key"
      L22: "cluster.go:125: [certs] Generating __front-proxy-client__ certificate and key"
      L23: "cluster.go:125: [certs] External etcd mode: Skipping etcd/ca certificate authority generation"
      L24: "cluster.go:125: [certs] External etcd mode: Skipping etcd/server certificate generation"
      L25: "cluster.go:125: [certs] External etcd mode: Skipping etcd/peer certificate generation"
      L26: "cluster.go:125: [certs] External etcd mode: Skipping etcd/healthcheck-client certificate generation"
      L27: "cluster.go:125: [certs] External etcd mode: Skipping apiserver-etcd-client certificate generation"
      L28: "cluster.go:125: [certs] Generating __sa__ key and public key"
      L29: "cluster.go:125: [kubeconfig] Using kubeconfig folder __/etc/kubernetes__"
      L30: "cluster.go:125: [kubeconfig] Writing __admin.conf__ kubeconfig file"
      L31: "cluster.go:125: [kubeconfig] Writing __super-admin.conf__ kubeconfig file"
      L32: "cluster.go:125: [kubeconfig] Writing __kubelet.conf__ kubeconfig file"
      L33: "cluster.go:125: [kubeconfig] Writing __controller-manager.conf__ kubeconfig file"
      L34: "cluster.go:125: [kubeconfig] Writing __scheduler.conf__ kubeconfig file"
      L35: "cluster.go:125: [control-plane] Using manifest folder __/etc/kubernetes/manifests__"
      L36: "cluster.go:125: [control-plane] Creating static Pod manifest for __kube-apiserver__"
      L37: "cluster.go:125: [control-plane] Creating static Pod manifest for __kube-controller-manager__"
      L38: "cluster.go:125: [control-plane] Creating static Pod manifest for __kube-scheduler__"
      L39: "cluster.go:125: [kubelet-start] Writing kubelet environment file with flags to file __/var/lib/kubelet/kubeadm-flags.env__"
      L40: "cluster.go:125: [kubelet-start] Writing kubelet configuration to file __/var/lib/kubelet/config.yaml__"
      L41: "cluster.go:125: [kubelet-start] Starting the kubelet"
      L42: "cluster.go:125: [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory __/etc/kubernetes/manifests__"
      L43: "cluster.go:125: [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s"
      L44: "cluster.go:125: [kubelet-check] The kubelet is healthy after 3.511549111s"
      L45: "cluster.go:125: [control-plane-check] Waiting for healthy control plane components. This can take up to 30m0s"
      L46: "cluster.go:125: [control-plane-check] Checking kube-apiserver at https://10.0.0.21:6443/livez"
      L47: "cluster.go:125: [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz"
      L48: "cluster.go:125: [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez"
      L49: "cluster.go:125: [control-plane-check] kube-controller-manager is healthy after 7.541044383s"
      L50: "cluster.go:125: [control-plane-check] kube-scheduler is healthy after 28.261504926s"
      L51: "cluster.go:125: [control-plane-check] kube-apiserver is healthy after 1m15.437829982s"
      L52: "cluster.go:125: [upload-config] Storing the configuration used in ConfigMap __kubeadm-config__ in the __kube-system__ Namespace"
      L53: "cluster.go:125: [kubelet] Creating a ConfigMap __kubelet-config__ in namespace kube-system with the configuration for the kubelets in the cluster"
      L54: "cluster.go:125: [upload-certs] Skipping phase. Please see --upload-certs"
      L55: "cluster.go:125: [mark-control-plane] Marking the node localhost as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]"
      L56: "cluster.go:125: [mark-control-plane] Marking the node localhost as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]"
      L57: "cluster.go:125: [bootstrap-token] Using token: kihpqc.7ewxwnts1nenw9d7"
      L58: "cluster.go:125: [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles"
      L59: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes"
      L60: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials"
      L61: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token"
      L62: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster"
      L63: "cluster.go:125: [bootstrap-token] Creating the __cluster-info__ ConfigMap in the __kube-public__ namespace"
      L64: "cluster.go:125: [kubelet-finalize] Updating __/etc/kubernetes/kubelet.conf__ to point to a rotatable kubelet client certificate and key"
      L65: "cluster.go:125: [addons] Applied essential addon: CoreDNS"
      L66: "cluster.go:125: [addons] Applied essential addon: kube-proxy"
      L67: "cluster.go:125: "
      L68: "cluster.go:125: Your Kubernetes control-plane has initialized successfully!"
      L69: "cluster.go:125: "
      L70: "cluster.go:125: To start using your cluster, you need to run the following as a regular user:"
      L71: "cluster.go:125: "
      L72: "cluster.go:125:   mkdir -p $HOME/.kube"
      L73: "cluster.go:125:   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config"
      L74: "cluster.go:125:   sudo chown $(id -u):$(id -g) $HOME/.kube/config"
      L75: "cluster.go:125: "
      L76: "cluster.go:125: Alternatively, if you are the root user, you can run:"
      L77: "cluster.go:125: "
      L78: "cluster.go:125:   export KUBECONFIG=/etc/kubernetes/admin.conf"
      L79: "cluster.go:125: "
      L80: "cluster.go:125: You should now deploy a pod network to the cluster."
      L81: "cluster.go:125: Run __kubectl apply -f [podnetwork].yaml__ with one of the options listed at:"
      L82: "cluster.go:125:   https://kubernetes.io/docs/concepts/cluster-administration/addons/"
      L83: "cluster.go:125: "
      L84: "cluster.go:125: Then you can join any number of worker nodes by running the following on each as root:"
      L85: "cluster.go:125: "
      L86: "cluster.go:125: kubeadm join 10.0.0.21:6443 --token kihpqc.7ewxwnts1nenw9d7 _"
      L87: "cluster.go:125:  --discovery-token-ca-cert-hash sha256:ab967f57628211bb2a1fb4e0b681b7df15dbbba6e840d6ccd70a8a0fb30aa6de "
      L88: "cluster.go:125: namespace/kube-flannel created"
      L89: "cluster.go:125: clusterrole.rbac.authorization.k8s.io/flannel created"
      L90: "cluster.go:125: clusterrolebinding.rbac.authorization.k8s.io/flannel created"
      L91: "cluster.go:125: serviceaccount/flannel created"
      L92: "cluster.go:125: configmap/kube-flannel-cfg created"
      L93: "cluster.go:125: daemonset.apps/kube-flannel-ds created"
      L94: "kubeadm.go:197: unable to setup cluster: unable to create worker node with large disk: machine __d247f0fb-b62c-4228-81cc-b58b748394fb__ failed to start: ssh journalctl failed: time limit exceeded: dial tcp 10.0.0.23:22: connect: no route to host_"
      L95: " "
  ```


</details>


🟢 ok **kubeadm.v1.34.4.calico.base**; Succeeded: qemu_uefi (2); Failed: qemu_uefi (1)

<details>

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for qemu_uefi, run 1</summary>

  ```
      L1: " Error: _cluster.go:125: I0423 00:14:00.038655    2106 version.go:260] remote version is much newer: v1.36.0; falling back to: stable-1.34"
      L2: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-apiserver:v1.34.7"
      L3: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-controller-manager:v1.34.7"
      L4: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-scheduler:v1.34.7"
      L5: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-proxy:v1.34.7"
      L6: "cluster.go:125: [config/images] Pulled registry.k8s.io/coredns/coredns:v1.12.1"
      L7: "cluster.go:125: [config/images] Pulled registry.k8s.io/pause:3.10.1"
      L8: "cluster.go:125: [config/images] Pulled registry.k8s.io/etcd:3.6.5-0"
      L9: "cluster.go:125: I0423 00:15:00.248761    2394 version.go:260] remote version is much newer: v1.36.0; falling back to: stable-1.34"
      L10: "cluster.go:125: [init] Using Kubernetes version: v1.34.7"
      L11: "cluster.go:125: [preflight] Running pre-flight checks"
      L12: "cluster.go:125:  [WARNING Service-Kubelet]: kubelet service is not enabled, please run _systemctl enable kubelet.service_"
      L13: "cluster.go:125: [preflight] Pulling images required for setting up a Kubernetes cluster"
      L14: "cluster.go:125: [preflight] This might take a minute or two, depending on the speed of your internet connection"
      L15: "cluster.go:125: [preflight] You can also perform this action beforehand using _kubeadm config images pull_"
      L16: "cluster.go:125: [certs] Using certificateDir folder __/etc/kubernetes/pki__"
      L17: "cluster.go:125: [certs] Generating __ca__ certificate and key"
      L18: "cluster.go:125: [certs] Generating __apiserver__ certificate and key"
      L19: "cluster.go:125: [certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local localhost] and IPs [10.96.0.1 10.0.0.32]"
      L20: "cluster.go:125: [certs] Generating __apiserver-kubelet-client__ certificate and key"
      L21: "cluster.go:125: [certs] Generating __front-proxy-ca__ certificate and key"
      L22: "cluster.go:125: [certs] Generating __front-proxy-client__ certificate and key"
      L23: "cluster.go:125: [certs] External etcd mode: Skipping etcd/ca certificate authority generation"
      L24: "cluster.go:125: [certs] External etcd mode: Skipping etcd/server certificate generation"
      L25: "cluster.go:125: [certs] External etcd mode: Skipping etcd/peer certificate generation"
      L26: "cluster.go:125: [certs] External etcd mode: Skipping etcd/healthcheck-client certificate generation"
      L27: "cluster.go:125: [certs] External etcd mode: Skipping apiserver-etcd-client certificate generation"
      L28: "cluster.go:125: [certs] Generating __sa__ key and public key"
      L29: "cluster.go:125: [kubeconfig] Using kubeconfig folder __/etc/kubernetes__"
      L30: "cluster.go:125: [kubeconfig] Writing __admin.conf__ kubeconfig file"
      L31: "cluster.go:125: [kubeconfig] Writing __super-admin.conf__ kubeconfig file"
      L32: "cluster.go:125: [kubeconfig] Writing __kubelet.conf__ kubeconfig file"
      L33: "cluster.go:125: [kubeconfig] Writing __controller-manager.conf__ kubeconfig file"
      L34: "cluster.go:125: [kubeconfig] Writing __scheduler.conf__ kubeconfig file"
      L35: "cluster.go:125: [control-plane] Using manifest folder __/etc/kubernetes/manifests__"
      L36: "cluster.go:125: [control-plane] Creating static Pod manifest for __kube-apiserver__"
      L37: "cluster.go:125: [control-plane] Creating static Pod manifest for __kube-controller-manager__"
      L38: "cluster.go:125: [control-plane] Creating static Pod manifest for __kube-scheduler__"
      L39: "cluster.go:125: [kubelet-start] Writing kubelet environment file with flags to file __/var/lib/kubelet/kubeadm-flags.env__"
      L40: "cluster.go:125: [kubelet-start] Writing kubelet configuration to file __/var/lib/kubelet/instance-config.yaml__"
      L41: "cluster.go:125: [patches] Applied patch of type __application/strategic-merge-patch+json__ to target __kubeletconfiguration__"
      L42: "cluster.go:125: [kubelet-start] Writing kubelet configuration to file __/var/lib/kubelet/config.yaml__"
      L43: "cluster.go:125: [kubelet-start] Starting the kubelet"
      L44: "cluster.go:125: [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory __/etc/kubernetes/manifests__"
      L45: "cluster.go:125: [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s"
      L46: "cluster.go:125: [kubelet-check] The kubelet is healthy after 2.507881634s"
      L47: "cluster.go:125: [control-plane-check] Waiting for healthy control plane components. This can take up to 30m0s"
      L48: "cluster.go:125: [control-plane-check] Checking kube-apiserver at https://10.0.0.32:6443/livez"
      L49: "cluster.go:125: [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz"
      L50: "cluster.go:125: [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez"
      L51: "cluster.go:125: [control-plane-check] kube-controller-manager is healthy after 9.52696119s"
      L52: "cluster.go:125: [control-plane-check] kube-scheduler is healthy after 31.654990348s"
      L53: "cluster.go:125: [control-plane-check] kube-apiserver is healthy after 55.593091472s"
      L54: "cluster.go:125: [upload-config] Storing the configuration used in ConfigMap __kubeadm-config__ in the __kube-system__ Namespace"
      L55: "cluster.go:125: [kubelet] Creating a ConfigMap __kubelet-config__ in namespace kube-system with the configuration for the kubelets in the cluster"
      L56: "cluster.go:125: [upload-certs] Skipping phase. Please see --upload-certs"
      L57: "cluster.go:125: [mark-control-plane] Marking the node localhost as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]"
      L58: "cluster.go:125: [mark-control-plane] Marking the node localhost as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]"
      L59: "cluster.go:125: [bootstrap-token] Using token: 4uievq.hk8gkl31ob86k3v9"
      L60: "cluster.go:125: [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles"
      L61: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes"
      L62: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials"
      L63: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token"
      L64: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster"
      L65: "cluster.go:125: [bootstrap-token] Creating the __cluster-info__ ConfigMap in the __kube-public__ namespace"
      L66: "cluster.go:125: [kubelet-finalize] Updating __/etc/kubernetes/kubelet.conf__ to point to a rotatable kubelet client certificate and key"
      L67: "cluster.go:125: [addons] Applied essential addon: CoreDNS"
      L68: "cluster.go:125: [addons] Applied essential addon: kube-proxy"
      L69: "cluster.go:125: "
      L70: "cluster.go:125: Your Kubernetes control-plane has initialized successfully!"
      L71: "cluster.go:125: "
      L72: "cluster.go:125: To start using your cluster, you need to run the following as a regular user:"
      L73: "cluster.go:125: "
      L74: "cluster.go:125:   mkdir -p $HOME/.kube"
      L75: "cluster.go:125:   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config"
      L76: "cluster.go:125:   sudo chown $(id -u):$(id -g) $HOME/.kube/config"
      L77: "cluster.go:125: "
      L78: "cluster.go:125: Alternatively, if you are the root user, you can run:"
      L79: "cluster.go:125: "
      L80: "cluster.go:125:   export KUBECONFIG=/etc/kubernetes/admin.conf"
      L81: "cluster.go:125: "
      L82: "cluster.go:125: You should now deploy a pod network to the cluster."
      L83: "cluster.go:125: Run __kubectl apply -f [podnetwork].yaml__ with one of the options listed at:"
      L84: "cluster.go:125:   https://kubernetes.io/docs/concepts/cluster-administration/addons/"
      L85: "cluster.go:125: "
      L86: "cluster.go:125: Then you can join any number of worker nodes by running the following on each as root:"
      L87: "cluster.go:125: "
      L88: "cluster.go:125: kubeadm join 10.0.0.32:6443 --token 4uievq.hk8gkl31ob86k3v9 _"
      L89: "cluster.go:125:  --discovery-token-ca-cert-hash sha256:41f14baf85efaba684add7006a08ee22892ff43b3583c1409e95a4c7402befcf "
      L90: "cluster.go:125: namespace/tigera-operator created"
      L91: "cluster.go:125: serviceaccount/tigera-operator created"
      L92: "cluster.go:125: clusterrole.rbac.authorization.k8s.io/tigera-operator-secrets created"
      L93: "cluster.go:125: clusterrole.rbac.authorization.k8s.io/tigera-operator created"
      L94: "cluster.go:125: clusterrolebinding.rbac.authorization.k8s.io/tigera-operator created"
      L95: "cluster.go:125: rolebinding.rbac.authorization.k8s.io/tigera-operator-secrets created"
      L96: "cluster.go:125: deployment.apps/tigera-operator created"
      L97: "cluster.go:125: error: timed out waiting for the condition"
      L98: "kubeadm.go:197: unable to setup cluster: unable to run master script: Process exited with status 1_"
      L99: " "
  ```


</details>


🟢 ok **kubeadm.v1.34.4.cilium.base**; Succeeded: qemu_uefi (1)

🟢 ok **kubeadm.v1.34.4.flannel.base**; Succeeded: qemu_uefi (1)

🟢 ok **kubeadm.v1.35.1.calico.base**; Succeeded: qemu_uefi (1)

🟢 ok **kubeadm.v1.35.1.cilium.base**; Succeeded: qemu_uefi (1)

🟢 ok **kubeadm.v1.35.1.flannel.base**; Succeeded: qemu_uefi (1)

🟢 ok **linux.nfs.v3**; Succeeded: qemu_uefi (1)

🟢 ok **linux.nfs.v4**; Succeeded: qemu_uefi (1)

🟢 ok **linux.ntp**; Succeeded: qemu_uefi (1)

🟢 ok **misc.fips**; Succeeded: qemu_uefi (1)

🟢 ok **packages**; Succeeded: qemu_uefi (1)

🟢 ok **sysext.custom-docker.sysext**; Succeeded: qemu_uefi (1)

🟢 ok **sysext.custom-oem**; Succeeded: qemu_uefi (1)

🟢 ok **sysext.disable-containerd**; Succeeded: qemu_uefi (1)

🟢 ok **sysext.disable-docker**; Succeeded: qemu_uefi (1)

🟢 ok **sysext.simple**; Succeeded: qemu_uefi (1)

🟢 ok **systemd.journal.remote**; Succeeded: qemu_uefi (1)

🟢 ok **systemd.journal.user**; Succeeded: qemu_uefi (1)

🟢 ok **systemd.sysusers.gshadow**; Succeeded: qemu_uefi (1)
