### Test report for 4081.3.6+nightly-20260415-2100 / amd64

**Platforms tested**: qemu

🟢 ok **bpf.ig**; Succeeded: qemu (1)

🟢 ok **cl.basic**; Succeeded: qemu (1)

🟢 ok **cl.cgroupv1**; Succeeded: qemu (1)

🟢 ok **cl.cloudinit.basic**; Succeeded: qemu (1)

🟢 ok **cl.cloudinit.multipart-mime**; Succeeded: qemu (1)

🟢 ok **cl.cloudinit.script**; Succeeded: qemu (1)

🟢 ok **cl.disk.raid0.data**; Succeeded: qemu (1)

🟢 ok **cl.disk.raid0.root**; Succeeded: qemu (1)

🟢 ok **cl.disk.raid1.data**; Succeeded: qemu (1)

🟢 ok **cl.disk.raid1.root**; Succeeded: qemu (1)

🟢 ok **cl.etcd-member.discovery**; Succeeded: qemu (1)

🟢 ok **cl.etcd-member.etcdctlv3**; Succeeded: qemu (1)

🟢 ok **cl.etcd-member.v2-backup-restore**; Succeeded: qemu (1)

🟢 ok **cl.filesystem**; Succeeded: qemu (1)

🟢 ok **cl.flannel.udp**; Succeeded: qemu (1)

🟢 ok **cl.flannel.vxlan**; Succeeded: qemu (1)

🟢 ok **cl.ignition.instantiated.enable-unit**; Succeeded: qemu (1)

🟢 ok **cl.ignition.kargs**; Succeeded: qemu (1)

🟢 ok **cl.ignition.luks**; Succeeded: qemu (1)

🟢 ok **cl.ignition.oem.indirect**; Succeeded: qemu (1)

🟢 ok **cl.ignition.oem.indirect.new**; Succeeded: qemu (1)

🟢 ok **cl.ignition.oem.regular**; Succeeded: qemu (1)

🟢 ok **cl.ignition.oem.regular.new**; Succeeded: qemu (1)

🟢 ok **cl.ignition.oem.reuse**; Succeeded: qemu (1)

🟢 ok **cl.ignition.oem.wipe**; Succeeded: qemu (1)

🟢 ok **cl.ignition.partition_on_boot_disk**; Succeeded: qemu (1)

🟢 ok **cl.ignition.symlink**; Succeeded: qemu (1)

🟢 ok **cl.ignition.translation**; Succeeded: qemu (1)

🟢 ok **cl.ignition.v1.btrfsroot**; Succeeded: qemu (1)

🟢 ok **cl.ignition.v1.ext4root**; Succeeded: qemu (1)

🟢 ok **cl.ignition.v1.groups**; Succeeded: qemu (1)

🟢 ok **cl.ignition.v1.once**; Succeeded: qemu (1)

🟢 ok **cl.ignition.v1.sethostname**; Succeeded: qemu (1)

🟢 ok **cl.ignition.v1.users**; Succeeded: qemu (1)

🟢 ok **cl.ignition.v1.xfsroot**; Succeeded: qemu (1)

🟢 ok **cl.ignition.v2.btrfsroot**; Succeeded: qemu (1)

🟢 ok **cl.ignition.v2.ext4root**; Succeeded: qemu (1)

🟢 ok **cl.ignition.v2.users**; Succeeded: qemu (1)

🟢 ok **cl.ignition.v2.xfsroot**; Succeeded: qemu (1)

🟢 ok **cl.ignition.v2_1.ext4checkexisting**; Succeeded: qemu (1)

🟢 ok **cl.ignition.v2_1.swap**; Succeeded: qemu (1)

🟢 ok **cl.ignition.v2_1.vfat**; Succeeded: qemu (1)

🟢 ok **cl.install.cloudinit**; Succeeded: qemu (1)

🟢 ok **cl.internet**; Succeeded: qemu (1)

🟢 ok **cl.locksmith.cluster**; Succeeded: qemu (1)

🟢 ok **cl.misc.falco**; Succeeded: qemu (1)

🟢 ok **cl.network.initramfs.second-boot**; Succeeded: qemu (1)

🟢 ok **cl.network.iptables**; Succeeded: qemu (1)

🟢 ok **cl.network.listeners**; Succeeded: qemu (1)

🟢 ok **cl.network.wireguard**; Succeeded: qemu (1)

🟢 ok **cl.omaha.ping**; Succeeded: qemu (1)

🟢 ok **cl.osreset.ignition-rerun**; Succeeded: qemu (1)

🟢 ok **cl.overlay.cleanup**; Succeeded: qemu (1)

🟢 ok **cl.swap_activation**; Succeeded: qemu (1)

🟢 ok **cl.sysext.boot**; Succeeded: qemu (1)

🟢 ok **cl.sysext.fallbackdownload**; Succeeded: qemu (1)

🟢 ok **cl.tang.nonroot**; Succeeded: qemu (1)

🟢 ok **cl.tang.root**; Succeeded: qemu (1)

🟢 ok **cl.toolbox.dnf-install**; Succeeded: qemu (1)

🟢 ok **cl.tpm.nonroot**; Succeeded: qemu (1)

🟢 ok **cl.tpm.root**; Succeeded: qemu (1)

🟢 ok **cl.tpm.root-cryptenroll**; Succeeded: qemu (1)

🟢 ok **cl.tpm.root-cryptenroll-pcr-noupdate**; Succeeded: qemu (1)

🟢 ok **cl.tpm.root-cryptenroll-pcr-withupdate**; Succeeded: qemu (2); Failed: qemu (1)

<details>

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for qemu, run 1</summary>

  ```
      L1: " Error: _cluster.go:125: Warning: keyslot operation could fail as it requires more than available memory."
      L2: "cluster.go:125: New TPM2 token enrolled as key slot 1."
      L3: "cluster.go:125: Wiped slot 2."
      L4: "tpm.go:354: could not reboot machine: machine __5697396b-5d5c-465e-837e-379773a1f153__ failed basic checks: some systemd units failed:"
      L5: "??? ldconfig.service loaded failed failed Rebuild Dynamic Linker Cache"
      L6: "status: "
      L7: "journal:-- No entries --"
      L8: "harness.go:616: Found systemd unit failed to start (?[0;1;39mldconfig.service?[0m - Rebuild Dynamic Linker Cache.  ) on machine 5697396b-5d5c-465e-837e-379773a1f153 console_"
      L9: " "
  ```


</details>


🟢 ok **cl.update.badverity**; Succeeded: qemu (1)

🟢 ok **cl.update.reboot**; Succeeded: qemu (1)

🟢 ok **cl.users.shells**; Succeeded: qemu (1)

🟢 ok **cl.verity**; Succeeded: qemu (1)

🟢 ok **coreos.auth.verify**; Succeeded: qemu (1)

🟢 ok **coreos.ignition.groups**; Succeeded: qemu (1)

🟢 ok **coreos.ignition.once**; Succeeded: qemu (2); Failed: qemu (1)

<details>

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for qemu, run 1</summary>

  ```
      L1: " Error: _execution.go:140: Couldn_t reboot machine: machine __b9ca227e-60b4-4057-94c1-17ec4189ddc2__ failed basic checks: some systemd units failed:"
      L2: "??? ldconfig.service loaded failed failed Rebuild Dynamic Linker Cache"
      L3: "status: "
      L4: "journal:-- No entries --"
      L5: "harness.go:616: Found systemd unit failed to start (?[0;1;39mldconfig.service?[0m - Rebuild Dynamic Linker Cache.  ) on machine b9ca227e-60b4-4057-94c1-17ec4189ddc2 console_"
      L6: " "
  ```


</details>


🟢 ok **coreos.ignition.resource.local**; Succeeded: qemu (1)

🟢 ok **coreos.ignition.resource.remote**; Succeeded: qemu (1)

🟢 ok **coreos.ignition.resource.s3.versioned**; Succeeded: qemu (1)

🟢 ok **coreos.ignition.security.tls**; Succeeded: qemu (1)

🟢 ok **coreos.ignition.sethostname**; Succeeded: qemu (1)

🟢 ok **coreos.ignition.systemd.enable-service**; Succeeded: qemu (1)

🟢 ok **coreos.locksmith.reboot**; Succeeded: qemu (2); Failed: qemu (1)

<details>

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for qemu, run 1</summary>

  ```
      L1: " Error: _locksmith.go:141: failed to check rebooted machine: some systemd units failed:"
      L2: "??? ldconfig.service loaded failed failed Rebuild Dynamic Linker Cache"
      L3: "status: "
      L4: "journal:-- No entries --"
      L5: "harness.go:616: Found systemd unit failed to start (?[0;1;39mldconfig.service?[0m - Rebuild Dynamic Linker Cache.  ) on machine af62486b-3be0-482d-b736-f6b81ed26c9f console_"
      L6: " "
  ```


</details>


🟢 ok **coreos.locksmith.tls**; Succeeded: qemu (1)

🟢 ok **coreos.selinux.boolean**; Succeeded: qemu (1)

🟢 ok **coreos.selinux.enforce**; Succeeded: qemu (1)

🟢 ok **coreos.tls.fetch-urls**; Succeeded: qemu (1)

🟢 ok **coreos.update.badusr**; Succeeded: qemu (1)

🟢 ok **devcontainer.docker**; Succeeded: qemu (1)

🟢 ok **devcontainer.systemd-nspawn**; Succeeded: qemu (1)

🟢 ok **docker.base**; Succeeded: qemu (1)

🟢 ok **docker.btrfs-storage**; Succeeded: qemu (1)

🟢 ok **docker.containerd-restart**; Succeeded: qemu (1)

🟢 ok **docker.enable-service.sysext**; Succeeded: qemu (1)

🟢 ok **docker.lib-coreos-dockerd-compat**; Succeeded: qemu (1)

🟢 ok **docker.network-openbsd-nc**; Succeeded: qemu (1)

🟢 ok **docker.selinux**; Succeeded: qemu (1)

🟢 ok **docker.userns**; Succeeded: qemu (1)

🟢 ok **kubeadm.v1.33.8.calico.base**; Succeeded: qemu (2); Failed: qemu (1)

<details>

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for qemu, run 1</summary>

  ```
      L1: " Error: _cluster.go:125: W0416 02:05:43.106020    1833 version.go:109] could not fetch a Kubernetes version from the internet: unable to get URL __https://dl.k8s.io/release/stable-1.txt__: Get __https:?//dl.k8s.io/release/stable-1.txt__: context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
      L2: "cluster.go:125: W0416 02:05:43.109092    1833 version.go:110] falling back to the local client version: v1.33.8"
      L3: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-apiserver:v1.33.8"
      L4: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-controller-manager:v1.33.8"
      L5: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-scheduler:v1.33.8"
      L6: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-proxy:v1.33.8"
      L7: "cluster.go:125: [config/images] Pulled registry.k8s.io/coredns/coredns:v1.12.0"
      L8: "cluster.go:125: [config/images] Pulled registry.k8s.io/pause:3.10"
      L9: "cluster.go:125: [config/images] Pulled registry.k8s.io/etcd:3.5.24-0"
      L10: "cluster.go:125: I0416 02:06:31.989280    2135 version.go:261] remote version is much newer: v1.35.4; falling back to: stable-1.33"
      L11: "cluster.go:125: [init] Using Kubernetes version: v1.33.11"
      L12: "cluster.go:125: [preflight] Running pre-flight checks"
      L13: "cluster.go:125:  [WARNING Service-Kubelet]: kubelet service is not enabled, please run _systemctl enable kubelet.service_"
      L14: "cluster.go:125: [preflight] Pulling images required for setting up a Kubernetes cluster"
      L15: "cluster.go:125: [preflight] This might take a minute or two, depending on the speed of your internet connection"
      L16: "cluster.go:125: [preflight] You can also perform this action beforehand using _kubeadm config images pull_"
      L17: "cluster.go:125: W0416 02:06:33.914338    2135 checks.go:843] detected that the sandbox image __registry.k8s.io/pause:3.8__ of the container runtime is inconsistent with that used by kubeadm.It is reco?mmended to use __registry.k8s.io/pause:3.10__ as the CRI sandbox image."
      L18: "cluster.go:125: [certs] Using certificateDir folder __/etc/kubernetes/pki__"
      L19: "cluster.go:125: [certs] Generating __ca__ certificate and key"
      L20: "cluster.go:125: [certs] Generating __apiserver__ certificate and key"
      L21: "cluster.go:125: [certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local localhost] and IPs [10.96.0.1 10.0.0.1?57]"
      L22: "cluster.go:125: [certs] Generating __apiserver-kubelet-client__ certificate and key"
      L23: "cluster.go:125: [certs] Generating __front-proxy-ca__ certificate and key"
      L24: "cluster.go:125: [certs] Generating __front-proxy-client__ certificate and key"
      L25: "cluster.go:125: [certs] External etcd mode: Skipping etcd/ca certificate authority generation"
      L26: "cluster.go:125: [certs] External etcd mode: Skipping etcd/server certificate generation"
      L27: "cluster.go:125: [certs] External etcd mode: Skipping etcd/peer certificate generation"
      L28: "cluster.go:125: [certs] External etcd mode: Skipping etcd/healthcheck-client certificate generation"
      L29: "cluster.go:125: [certs] External etcd mode: Skipping apiserver-etcd-client certificate generation"
      L30: "cluster.go:125: [certs] Generating __sa__ key and public key"
      L31: "cluster.go:125: [kubeconfig] Using kubeconfig folder __/etc/kubernetes__"
      L32: "cluster.go:125: [kubeconfig] Writing __admin.conf__ kubeconfig file"
      L33: "cluster.go:125: [kubeconfig] Writing __super-admin.conf__ kubeconfig file"
      L34: "cluster.go:125: [kubeconfig] Writing __kubelet.conf__ kubeconfig file"
      L35: "cluster.go:125: [kubeconfig] Writing __controller-manager.conf__ kubeconfig file"
      L36: "cluster.go:125: [kubeconfig] Writing __scheduler.conf__ kubeconfig file"
      L37: "cluster.go:125: [control-plane] Using manifest folder __/etc/kubernetes/manifests__"
      L38: "cluster.go:125: [control-plane] Creating static Pod manifest for __kube-apiserver__"
      L39: "cluster.go:125: [control-plane] Creating static Pod manifest for __kube-controller-manager__"
      L40: "cluster.go:125: [control-plane] Creating static Pod manifest for __kube-scheduler__"
      L41: "cluster.go:125: [kubelet-start] Writing kubelet environment file with flags to file __/var/lib/kubelet/kubeadm-flags.env__"
      L42: "cluster.go:125: [kubelet-start] Writing kubelet configuration to file __/var/lib/kubelet/config.yaml__"
      L43: "cluster.go:125: [kubelet-start] Starting the kubelet"
      L44: "cluster.go:125: [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory __/etc/kubernetes/manifests__"
      L45: "cluster.go:125: [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s"
      L46: "cluster.go:125: [kubelet-check] The kubelet is healthy after 7.595054829s"
      L47: "cluster.go:125: [control-plane-check] Waiting for healthy control plane components. This can take up to 30m0s"
      L48: "cluster.go:125: [control-plane-check] Checking kube-apiserver at https://10.0.0.157:6443/livez"
      L49: "cluster.go:125: [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz"
      L50: "cluster.go:125: [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez"
      L51: "cluster.go:125: [control-plane-check] kube-controller-manager is healthy after 6.937608346s"
      L52: "cluster.go:125: [control-plane-check] kube-scheduler is healthy after 26.906450598s"
      L53: "cluster.go:125: [control-plane-check] kube-apiserver is healthy after 1m23.025765338s"
      L54: "cluster.go:125: [upload-config] Storing the configuration used in ConfigMap __kubeadm-config__ in the __kube-system__ Namespace"
      L55: "cluster.go:125: [kubelet] Creating a ConfigMap __kubelet-config__ in namespace kube-system with the configuration for the kubelets in the cluster"
      L56: "cluster.go:125: [upload-certs] Skipping phase. Please see --upload-certs"
      L57: "cluster.go:125: [mark-control-plane] Marking the node localhost as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]"
      L58: "cluster.go:125: [mark-control-plane] Marking the node localhost as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]"
      L59: "cluster.go:125: [bootstrap-token] Using token: 2ziff0.rt6qk85u0rbbwrie"
      L60: "cluster.go:125: [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles"
      L61: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes"
      L62: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials"
      L63: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token"
      L64: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster"
      L65: "cluster.go:125: [bootstrap-token] Creating the __cluster-info__ ConfigMap in the __kube-public__ namespace"
      L66: "cluster.go:125: [kubelet-finalize] Updating __/etc/kubernetes/kubelet.conf__ to point to a rotatable kubelet client certificate and key"
      L67: "cluster.go:125: [addons] Applied essential addon: CoreDNS"
      L68: "cluster.go:125: [addons] Applied essential addon: kube-proxy"
      L69: "cluster.go:125: "
      L70: "cluster.go:125: Your Kubernetes control-plane has initialized successfully!"
      L71: "cluster.go:125: "
      L72: "cluster.go:125: To start using your cluster, you need to run the following as a regular user:"
      L73: "cluster.go:125: "
      L74: "cluster.go:125:   mkdir -p $HOME/.kube"
      L75: "cluster.go:125:   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config"
      L76: "cluster.go:125:   sudo chown $(id -u):$(id -g) $HOME/.kube/config"
      L77: "cluster.go:125: "
      L78: "cluster.go:125: Alternatively, if you are the root user, you can run:"
      L79: "cluster.go:125: "
      L80: "cluster.go:125:   export KUBECONFIG=/etc/kubernetes/admin.conf"
      L81: "cluster.go:125: "
      L82: "cluster.go:125: You should now deploy a pod network to the cluster."
      L83: "cluster.go:125: Run __kubectl apply -f [podnetwork].yaml__ with one of the options listed at:"
      L84: "cluster.go:125:   https://kubernetes.io/docs/concepts/cluster-administration/addons/"
      L85: "cluster.go:125: "
      L86: "cluster.go:125: Then you can join any number of worker nodes by running the following on each as root:"
      L87: "cluster.go:125: "
      L88: "cluster.go:125: kubeadm join 10.0.0.157:6443 --token 2ziff0.rt6qk85u0rbbwrie _"
      L89: "cluster.go:125:  --discovery-token-ca-cert-hash sha256:4c9dc9645b0b21118b21adb3e5d457652f66ca4ea8e3795336dce63592482cf7 "
      L90: "cluster.go:125: namespace/tigera-operator created"
      L91: "cluster.go:125: serviceaccount/tigera-operator created"
      L92: "cluster.go:125: clusterrole.rbac.authorization.k8s.io/tigera-operator-secrets created"
      L93: "cluster.go:125: clusterrole.rbac.authorization.k8s.io/tigera-operator created"
      L94: "cluster.go:125: clusterrolebinding.rbac.authorization.k8s.io/tigera-operator created"
      L95: "cluster.go:125: rolebinding.rbac.authorization.k8s.io/tigera-operator-secrets created"
      L96: "cluster.go:125: deployment.apps/tigera-operator created"
      L97: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/installations.operator.tigera.io condition met"
      L98: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/installations.operator.tigera.io condition met"
      L99: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/apiservers.operator.tigera.io condition met"
      L100: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/apiservers.operator.tigera.io condition met"
      L101: "cluster.go:125: installation.operator.tigera.io/default created"
      L102: "cluster.go:125: apiserver.operator.tigera.io/default created"
      L103: "cluster.go:125: goldmane.operator.tigera.io/default created"
      L104: "cluster.go:125: whisker.operator.tigera.io/default created"
      L105: "cluster.go:125: W0416 02:18:51.842594    1675 joinconfiguration.go:113] [config] WARNING: Ignored configuration document with GroupVersionKind kubelet.config.k8s.io/v1beta1, Kind=KubeletConfiguration"
      L106: "cluster.go:125:  [WARNING Service-Kubelet]: kubelet service is not enabled, please run _systemctl enable kubelet.service_"
      L107: "cluster.go:125: jq: error (at <stdin_:122): Cannot iterate over null (null)"
      L108: "cluster.go:125: jq: error (at <stdin_:122): Cannot iterate over null (null)"
      L109: "cluster.go:125: jq: error (at <stdin_:122): Cannot iterate over null (null)"
      L110: "harness.go:616: Found systemd dependency unit failed to start (?[0;1;39msystemd-fsck????[0mem Check on /dev/disk/by-label/OEM.  ) on machine 09fb20e0-36ec-4d6d-af66-ec99cb9a615e console_"
      L111: " "
  ```


</details>


❌ not ok **kubeadm.v1.33.8.calico.cgroupv1.base**; Failed: qemu (1, 2, 3, 4, 5)

<details>

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for qemu, run 5</summary>

  ```
      L1: " Error: _cluster.go:125: I0416 04:19:49.617556    2012 version.go:261] remote version is much newer: v1.35.4; falling back to: stable-1.33"
      L2: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-apiserver:v1.33.11"
      L3: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-controller-manager:v1.33.11"
      L4: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-scheduler:v1.33.11"
      L5: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-proxy:v1.33.11"
      L6: "cluster.go:125: [config/images] Pulled registry.k8s.io/coredns/coredns:v1.12.0"
      L7: "cluster.go:125: [config/images] Pulled registry.k8s.io/pause:3.10"
      L8: "cluster.go:125: [config/images] Pulled registry.k8s.io/etcd:3.5.24-0"
      L9: "cluster.go:125: I0416 04:28:05.498163    2621 version.go:261] remote version is much newer: v1.35.4; falling back to: stable-1.33"
      L10: "cluster.go:125: [init] Using Kubernetes version: v1.33.11"
      L11: "cluster.go:125: [preflight] Running pre-flight checks"
      L12: "cluster.go:125:  [WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2"
      L13: "cluster.go:125:  [WARNING Service-Kubelet]: kubelet service is not enabled, please run _systemctl enable kubelet.service_"
      L14: "cluster.go:125: [preflight] Pulling images required for setting up a Kubernetes cluster"
      L15: "cluster.go:125: [preflight] This might take a minute or two, depending on the speed of your internet connection"
      L16: "cluster.go:125: [preflight] You can also perform this action beforehand using _kubeadm config images pull_"
      L17: "cluster.go:125: W0416 04:28:18.243298    2621 checks.go:843] detected that the sandbox image __registry.k8s.io/pause:3.8__ of the container runtime is inconsistent with that used by kubeadm.It is reco?mmended to use __registry.k8s.io/pause:3.10__ as the CRI sandbox image."
      L18: "cluster.go:125: [certs] Using certificateDir folder __/etc/kubernetes/pki__"
      L19: "cluster.go:125: [certs] Generating __ca__ certificate and key"
      L20: "cluster.go:125: [certs] Generating __apiserver__ certificate and key"
      L21: "cluster.go:125: [certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local localhost] and IPs [10.96.0.1 10.0.0.6?]"
      L22: "cluster.go:125: [certs] Generating __apiserver-kubelet-client__ certificate and key"
      L23: "cluster.go:125: [certs] Generating __front-proxy-ca__ certificate and key"
      L24: "cluster.go:125: [certs] Generating __front-proxy-client__ certificate and key"
      L25: "cluster.go:125: [certs] External etcd mode: Skipping etcd/ca certificate authority generation"
      L26: "cluster.go:125: [certs] External etcd mode: Skipping etcd/server certificate generation"
      L27: "cluster.go:125: [certs] External etcd mode: Skipping etcd/peer certificate generation"
      L28: "cluster.go:125: [certs] External etcd mode: Skipping etcd/healthcheck-client certificate generation"
      L29: "cluster.go:125: [certs] External etcd mode: Skipping apiserver-etcd-client certificate generation"
      L30: "cluster.go:125: [certs] Generating __sa__ key and public key"
      L31: "cluster.go:125: [kubeconfig] Using kubeconfig folder __/etc/kubernetes__"
      L32: "cluster.go:125: [kubeconfig] Writing __admin.conf__ kubeconfig file"
      L33: "cluster.go:125: [kubeconfig] Writing __super-admin.conf__ kubeconfig file"
      L34: "cluster.go:125: [kubeconfig] Writing __kubelet.conf__ kubeconfig file"
      L35: "cluster.go:125: [kubeconfig] Writing __controller-manager.conf__ kubeconfig file"
      L36: "cluster.go:125: [kubeconfig] Writing __scheduler.conf__ kubeconfig file"
      L37: "cluster.go:125: [control-plane] Using manifest folder __/etc/kubernetes/manifests__"
      L38: "cluster.go:125: [control-plane] Creating static Pod manifest for __kube-apiserver__"
      L39: "cluster.go:125: [control-plane] Creating static Pod manifest for __kube-controller-manager__"
      L40: "cluster.go:125: [control-plane] Creating static Pod manifest for __kube-scheduler__"
      L41: "cluster.go:125: [kubelet-start] Writing kubelet environment file with flags to file __/var/lib/kubelet/kubeadm-flags.env__"
      L42: "cluster.go:125: [kubelet-start] Writing kubelet configuration to file __/var/lib/kubelet/config.yaml__"
      L43: "cluster.go:125: [kubelet-start] Starting the kubelet"
      L44: "cluster.go:125: [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory __/etc/kubernetes/manifests__"
      L45: "cluster.go:125: [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s"
      L46: "cluster.go:125: [kubelet-check] The kubelet is healthy after 4.195424796s"
      L47: "cluster.go:125: [control-plane-check] Waiting for healthy control plane components. This can take up to 30m0s"
      L48: "cluster.go:125: [control-plane-check] Checking kube-apiserver at https://10.0.0.6:6443/livez"
      L49: "cluster.go:125: [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz"
      L50: "cluster.go:125: [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez"
      L51: "cluster.go:125: [control-plane-check] kube-controller-manager is healthy after 17.46111324s"
      L52: "cluster.go:125: [control-plane-check] kube-scheduler is healthy after 30.280126895s"
      L53: "cluster.go:125: [control-plane-check] kube-apiserver is healthy after 1m7.540542259s"
      L54: "cluster.go:125: [upload-config] Storing the configuration used in ConfigMap __kubeadm-config__ in the __kube-system__ Namespace"
      L55: "cluster.go:125: [kubelet] Creating a ConfigMap __kubelet-config__ in namespace kube-system with the configuration for the kubelets in the cluster"
      L56: "cluster.go:125: [upload-certs] Skipping phase. Please see --upload-certs"
      L57: "cluster.go:125: [mark-control-plane] Marking the node localhost as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]"
      L58: "cluster.go:125: [mark-control-plane] Marking the node localhost as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]"
      L59: "cluster.go:125: [bootstrap-token] Using token: 9ycu6l.arbaaeu3kk083zgq"
      L60: "cluster.go:125: [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles"
      L61: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes"
      L62: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials"
      L63: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token"
      L64: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster"
      L65: "cluster.go:125: [bootstrap-token] Creating the __cluster-info__ ConfigMap in the __kube-public__ namespace"
      L66: "cluster.go:125: [kubelet-finalize] Updating __/etc/kubernetes/kubelet.conf__ to point to a rotatable kubelet client certificate and key"
      L67: "cluster.go:125: [addons] Applied essential addon: CoreDNS"
      L68: "cluster.go:125: [addons] Applied essential addon: kube-proxy"
      L69: "cluster.go:125: "
      L70: "cluster.go:125: Your Kubernetes control-plane has initialized successfully!"
      L71: "cluster.go:125: "
      L72: "cluster.go:125: To start using your cluster, you need to run the following as a regular user:"
      L73: "cluster.go:125: "
      L74: "cluster.go:125:   mkdir -p $HOME/.kube"
      L75: "cluster.go:125:   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config"
      L76: "cluster.go:125:   sudo chown $(id -u):$(id -g) $HOME/.kube/config"
      L77: "cluster.go:125: "
      L78: "cluster.go:125: Alternatively, if you are the root user, you can run:"
      L79: "cluster.go:125: "
      L80: "cluster.go:125:   export KUBECONFIG=/etc/kubernetes/admin.conf"
      L81: "cluster.go:125: "
      L82: "cluster.go:125: You should now deploy a pod network to the cluster."
      L83: "cluster.go:125: Run __kubectl apply -f [podnetwork].yaml__ with one of the options listed at:"
      L84: "cluster.go:125:   https://kubernetes.io/docs/concepts/cluster-administration/addons/"
      L85: "cluster.go:125: "
      L86: "cluster.go:125: Then you can join any number of worker nodes by running the following on each as root:"
      L87: "cluster.go:125: "
      L88: "cluster.go:125: kubeadm join 10.0.0.6:6443 --token 9ycu6l.arbaaeu3kk083zgq _"
      L89: "cluster.go:125:  --discovery-token-ca-cert-hash sha256:1a9fe070b7c99ab0a106bbdd603d65150bcea6c99b47533f5aaf6ceb730a6919 "
      L90: "cluster.go:125: namespace/tigera-operator created"
      L91: "cluster.go:125: serviceaccount/tigera-operator created"
      L92: "cluster.go:125: clusterrole.rbac.authorization.k8s.io/tigera-operator-secrets created"
      L93: "cluster.go:125: clusterrole.rbac.authorization.k8s.io/tigera-operator created"
      L94: "cluster.go:125: clusterrolebinding.rbac.authorization.k8s.io/tigera-operator created"
      L95: "cluster.go:125: rolebinding.rbac.authorization.k8s.io/tigera-operator-secrets created"
      L96: "cluster.go:125: deployment.apps/tigera-operator created"
      L97: "cluster.go:125: error: timed out waiting for the condition"
      L98: "kubeadm.go:197: unable to setup cluster: unable to run master script: Process exited with status 1_"
      L99: " "
  ```

</details>


<details>

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for qemu, run 4</summary>

  ```
      L1: " Error: _cluster.go:125: I0416 03:39:20.058118    1927 version.go:261] remote version is much newer: v1.35.4; falling back to: stable-1.33"
      L2: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-apiserver:v1.33.11"
      L3: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-controller-manager:v1.33.11"
      L4: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-scheduler:v1.33.11"
      L5: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-proxy:v1.33.11"
      L6: "cluster.go:125: [config/images] Pulled registry.k8s.io/coredns/coredns:v1.12.0"
      L7: "cluster.go:125: [config/images] Pulled registry.k8s.io/pause:3.10"
      L8: "cluster.go:125: [config/images] Pulled registry.k8s.io/etcd:3.5.24-0"
      L9: "cluster.go:125: I0416 03:39:54.466363    2215 version.go:261] remote version is much newer: v1.35.4; falling back to: stable-1.33"
      L10: "cluster.go:125: [init] Using Kubernetes version: v1.33.11"
      L11: "cluster.go:125: [preflight] Running pre-flight checks"
      L12: "cluster.go:125:  [WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2"
      L13: "cluster.go:125:  [WARNING Service-Kubelet]: kubelet service is not enabled, please run _systemctl enable kubelet.service_"
      L14: "cluster.go:125: [preflight] Pulling images required for setting up a Kubernetes cluster"
      L15: "cluster.go:125: [preflight] This might take a minute or two, depending on the speed of your internet connection"
      L16: "cluster.go:125: [preflight] You can also perform this action beforehand using _kubeadm config images pull_"
      L17: "cluster.go:125: W0416 03:39:55.180870    2215 checks.go:843] detected that the sandbox image __registry.k8s.io/pause:3.8__ of the container runtime is inconsistent with that used by kubeadm.It is reco?mmended to use __registry.k8s.io/pause:3.10__ as the CRI sandbox image."
      L18: "cluster.go:125: [certs] Using certificateDir folder __/etc/kubernetes/pki__"
      L19: "cluster.go:125: [certs] Generating __ca__ certificate and key"
      L20: "cluster.go:125: [certs] Generating __apiserver__ certificate and key"
      L21: "cluster.go:125: [certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local localhost] and IPs [10.96.0.1 10.0.0.7?]"
      L22: "cluster.go:125: [certs] Generating __apiserver-kubelet-client__ certificate and key"
      L23: "cluster.go:125: [certs] Generating __front-proxy-ca__ certificate and key"
      L24: "cluster.go:125: [certs] Generating __front-proxy-client__ certificate and key"
      L25: "cluster.go:125: [certs] External etcd mode: Skipping etcd/ca certificate authority generation"
      L26: "cluster.go:125: [certs] External etcd mode: Skipping etcd/server certificate generation"
      L27: "cluster.go:125: [certs] External etcd mode: Skipping etcd/peer certificate generation"
      L28: "cluster.go:125: [certs] External etcd mode: Skipping etcd/healthcheck-client certificate generation"
      L29: "cluster.go:125: [certs] External etcd mode: Skipping apiserver-etcd-client certificate generation"
      L30: "cluster.go:125: [certs] Generating __sa__ key and public key"
      L31: "cluster.go:125: [kubeconfig] Using kubeconfig folder __/etc/kubernetes__"
      L32: "cluster.go:125: [kubeconfig] Writing __admin.conf__ kubeconfig file"
      L33: "cluster.go:125: [kubeconfig] Writing __super-admin.conf__ kubeconfig file"
      L34: "cluster.go:125: [kubeconfig] Writing __kubelet.conf__ kubeconfig file"
      L35: "cluster.go:125: [kubeconfig] Writing __controller-manager.conf__ kubeconfig file"
      L36: "cluster.go:125: [kubeconfig] Writing __scheduler.conf__ kubeconfig file"
      L37: "cluster.go:125: [control-plane] Using manifest folder __/etc/kubernetes/manifests__"
      L38: "cluster.go:125: [control-plane] Creating static Pod manifest for __kube-apiserver__"
      L39: "cluster.go:125: [control-plane] Creating static Pod manifest for __kube-controller-manager__"
      L40: "cluster.go:125: [control-plane] Creating static Pod manifest for __kube-scheduler__"
      L41: "cluster.go:125: [kubelet-start] Writing kubelet environment file with flags to file __/var/lib/kubelet/kubeadm-flags.env__"
      L42: "cluster.go:125: [kubelet-start] Writing kubelet configuration to file __/var/lib/kubelet/config.yaml__"
      L43: "cluster.go:125: [kubelet-start] Starting the kubelet"
      L44: "cluster.go:125: [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory __/etc/kubernetes/manifests__"
      L45: "cluster.go:125: [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s"
      L46: "cluster.go:125: [kubelet-check] The kubelet is healthy after 1.512317865s"
      L47: "cluster.go:125: [control-plane-check] Waiting for healthy control plane components. This can take up to 30m0s"
      L48: "cluster.go:125: [control-plane-check] Checking kube-apiserver at https://10.0.0.7:6443/livez"
      L49: "cluster.go:125: [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz"
      L50: "cluster.go:125: [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez"
      L51: "cluster.go:125: [control-plane-check] kube-controller-manager is healthy after 5.872943341s"
      L52: "cluster.go:125: [control-plane-check] kube-scheduler is healthy after 16.644575388s"
      L53: "cluster.go:125: [control-plane-check] kube-apiserver is healthy after 1m0.523933756s"
      L54: "cluster.go:125: [upload-config] Storing the configuration used in ConfigMap __kubeadm-config__ in the __kube-system__ Namespace"
      L55: "cluster.go:125: [kubelet] Creating a ConfigMap __kubelet-config__ in namespace kube-system with the configuration for the kubelets in the cluster"
      L56: "cluster.go:125: [upload-certs] Skipping phase. Please see --upload-certs"
      L57: "cluster.go:125: [mark-control-plane] Marking the node localhost as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]"
      L58: "cluster.go:125: [mark-control-plane] Marking the node localhost as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]"
      L59: "cluster.go:125: [bootstrap-token] Using token: 8kn1bk.3421atc42i0inh0d"
      L60: "cluster.go:125: [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles"
      L61: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes"
      L62: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials"
      L63: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token"
      L64: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster"
      L65: "cluster.go:125: [bootstrap-token] Creating the __cluster-info__ ConfigMap in the __kube-public__ namespace"
      L66: "cluster.go:125: [kubelet-finalize] Updating __/etc/kubernetes/kubelet.conf__ to point to a rotatable kubelet client certificate and key"
      L67: "cluster.go:125: [addons] Applied essential addon: CoreDNS"
      L68: "cluster.go:125: [addons] Applied essential addon: kube-proxy"
      L69: "cluster.go:125: "
      L70: "cluster.go:125: Your Kubernetes control-plane has initialized successfully!"
      L71: "cluster.go:125: "
      L72: "cluster.go:125: To start using your cluster, you need to run the following as a regular user:"
      L73: "cluster.go:125: "
      L74: "cluster.go:125:   mkdir -p $HOME/.kube"
      L75: "cluster.go:125:   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config"
      L76: "cluster.go:125:   sudo chown $(id -u):$(id -g) $HOME/.kube/config"
      L77: "cluster.go:125: "
      L78: "cluster.go:125: Alternatively, if you are the root user, you can run:"
      L79: "cluster.go:125: "
      L80: "cluster.go:125:   export KUBECONFIG=/etc/kubernetes/admin.conf"
      L81: "cluster.go:125: "
      L82: "cluster.go:125: You should now deploy a pod network to the cluster."
      L83: "cluster.go:125: Run __kubectl apply -f [podnetwork].yaml__ with one of the options listed at:"
      L84: "cluster.go:125:   https://kubernetes.io/docs/concepts/cluster-administration/addons/"
      L85: "cluster.go:125: "
      L86: "cluster.go:125: Then you can join any number of worker nodes by running the following on each as root:"
      L87: "cluster.go:125: "
      L88: "cluster.go:125: kubeadm join 10.0.0.7:6443 --token 8kn1bk.3421atc42i0inh0d _"
      L89: "cluster.go:125:  --discovery-token-ca-cert-hash sha256:58a149265f26b67db12c6ef6a95998857f89d1c58003bd3b3897ffccbc9c67e4 "
      L90: "cluster.go:125: namespace/tigera-operator created"
      L91: "cluster.go:125: serviceaccount/tigera-operator created"
      L92: "cluster.go:125: clusterrole.rbac.authorization.k8s.io/tigera-operator-secrets created"
      L93: "cluster.go:125: clusterrole.rbac.authorization.k8s.io/tigera-operator created"
      L94: "cluster.go:125: clusterrolebinding.rbac.authorization.k8s.io/tigera-operator created"
      L95: "cluster.go:125: rolebinding.rbac.authorization.k8s.io/tigera-operator-secrets created"
      L96: "cluster.go:125: deployment.apps/tigera-operator created"
      L97: "cluster.go:125: error: timed out waiting for the condition"
      L98: "kubeadm.go:197: unable to setup cluster: unable to run master script: Process exited with status 1_"
      L99: " "
  ```

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for qemu, run 3</summary>

  ```
      L1: " Error: _cluster.go:125: I0416 03:14:00.698011    1948 version.go:261] remote version is much newer: v1.35.4; falling back to: stable-1.33"
      L2: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-apiserver:v1.33.11"
      L3: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-controller-manager:v1.33.11"
      L4: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-scheduler:v1.33.11"
      L5: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-proxy:v1.33.11"
      L6: "cluster.go:125: [config/images] Pulled registry.k8s.io/coredns/coredns:v1.12.0"
      L7: "cluster.go:125: [config/images] Pulled registry.k8s.io/pause:3.10"
      L8: "cluster.go:125: [config/images] Pulled registry.k8s.io/etcd:3.5.24-0"
      L9: "cluster.go:125: I0416 03:15:33.216920    2337 version.go:261] remote version is much newer: v1.35.4; falling back to: stable-1.33"
      L10: "cluster.go:125: [init] Using Kubernetes version: v1.33.11"
      L11: "cluster.go:125: [preflight] Running pre-flight checks"
      L12: "cluster.go:125:  [WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2"
      L13: "cluster.go:125:  [WARNING Service-Kubelet]: kubelet service is not enabled, please run _systemctl enable kubelet.service_"
      L14: "cluster.go:125: [preflight] Pulling images required for setting up a Kubernetes cluster"
      L15: "cluster.go:125: [preflight] This might take a minute or two, depending on the speed of your internet connection"
      L16: "cluster.go:125: [preflight] You can also perform this action beforehand using _kubeadm config images pull_"
      L17: "cluster.go:125: W0416 03:15:34.014073    2337 checks.go:843] detected that the sandbox image __registry.k8s.io/pause:3.8__ of the container runtime is inconsistent with that used by kubeadm.It is recommended to use __registry.k8s.io/pause:3.10__ as the CRI sandbox image."
      L18: "cluster.go:125: [certs] Using certificateDir folder __/etc/kubernetes/pki__"
      L19: "cluster.go:125: [certs] Generating __ca__ certificate and key"
      L20: "cluster.go:125: [certs] Generating __apiserver__ certificate and key"
      L21: "cluster.go:125: [certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local localhost] and IPs [10.96.0.1 10.0.0.7]"
      L22: "cluster.go:125: [certs] Generating __apiserver-kubelet-client__ certificate and key"
      L23: "cluster.go:125: [certs] Generating __front-proxy-ca__ certificate and key"
      L24: "cluster.go:125: [certs] Generating __front-proxy-client__ certificate and key"
      L25: "cluster.go:125: [certs] External etcd mode: Skipping etcd/ca certificate authority generation"
      L26: "cluster.go:125: [certs] External etcd mode: Skipping etcd/server certificate generation"
      L27: "cluster.go:125: [certs] External etcd mode: Skipping etcd/peer certificate generation"
      L28: "cluster.go:125: [certs] External etcd mode: Skipping etcd/healthcheck-client certificate generation"
      L29: "cluster.go:125: [certs] External etcd mode: Skipping apiserver-etcd-client certificate generation"
      L30: "cluster.go:125: [certs] Generating __sa__ key and public key"
      L31: "cluster.go:125: [kubeconfig] Using kubeconfig folder __/etc/kubernetes__"
      L32: "cluster.go:125: [kubeconfig] Writing __admin.conf__ kubeconfig file"
      L33: "cluster.go:125: [kubeconfig] Writing __super-admin.conf__ kubeconfig file"
      L34: "cluster.go:125: [kubeconfig] Writing __kubelet.conf__ kubeconfig file"
      L35: "cluster.go:125: [kubeconfig] Writing __controller-manager.conf__ kubeconfig file"
      L36: "cluster.go:125: [kubeconfig] Writing __scheduler.conf__ kubeconfig file"
      L37: "cluster.go:125: [control-plane] Using manifest folder __/etc/kubernetes/manifests__"
      L38: "cluster.go:125: [control-plane] Creating static Pod manifest for __kube-apiserver__"
      L39: "cluster.go:125: [control-plane] Creating static Pod manifest for __kube-controller-manager__"
      L40: "cluster.go:125: [control-plane] Creating static Pod manifest for __kube-scheduler__"
      L41: "cluster.go:125: [kubelet-start] Writing kubelet environment file with flags to file __/var/lib/kubelet/kubeadm-flags.env__"
      L42: "cluster.go:125: [kubelet-start] Writing kubelet configuration to file __/var/lib/kubelet/config.yaml__"
      L43: "cluster.go:125: [kubelet-start] Starting the kubelet"
      L44: "cluster.go:125: [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory __/etc/kubernetes/manifests__"
      L45: "cluster.go:125: [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s"
      L46: "cluster.go:125: [kubelet-check] The kubelet is healthy after 8.709577339s"
      L47: "cluster.go:125: [control-plane-check] Waiting for healthy control plane components. This can take up to 30m0s"
      L48: "cluster.go:125: [control-plane-check] Checking kube-apiserver at https://10.0.0.7:6443/livez"
      L49: "cluster.go:125: [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz"
      L50: "cluster.go:125: [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez"
      L51: "cluster.go:125: [control-plane-check] kube-controller-manager is healthy after 24.119606657s"
      L52: "cluster.go:125: [control-plane-check] kube-scheduler is healthy after 45.696550623s"
      L53: "cluster.go:125: [control-plane-check] kube-apiserver is healthy after 3m33.503618039s"
      L54: "cluster.go:125: [upload-config] Storing the configuration used in ConfigMap __kubeadm-config__ in the __kube-system__ Namespace"
      L55: "cluster.go:125: [kubelet] Creating a ConfigMap __kubelet-config__ in namespace kube-system with the configuration for the kubelets in the cluster"
      L56: "cluster.go:125: [upload-certs] Skipping phase. Please see --upload-certs"
      L57: "cluster.go:125: [mark-control-plane] Marking the node localhost as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]"
      L58: "cluster.go:125: [mark-control-plane] Marking the node localhost as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]"
      L59: "cluster.go:125: [bootstrap-token] Using token: 23xh21.72u3oc5he53iwgql"
      L60: "cluster.go:125: [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles"
      L61: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes"
      L62: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials"
      L63: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token"
      L64: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster"
      L65: "cluster.go:125: [bootstrap-token] Creating the __cluster-info__ ConfigMap in the __kube-public__ namespace"
      L66: "cluster.go:125: [kubelet-finalize] Updating __/etc/kubernetes/kubelet.conf__ to point to a rotatable kubelet client certificate and key"
      L67: "cluster.go:125: [addons] Applied essential addon: CoreDNS"
      L68: "cluster.go:125: [addons] Applied essential addon: kube-proxy"
      L69: "cluster.go:125: "
      L70: "cluster.go:125: Your Kubernetes control-plane has initialized successfully!"
      L71: "cluster.go:125: "
      L72: "cluster.go:125: To start using your cluster, you need to run the following as a regular user:"
      L73: "cluster.go:125: "
      L74: "cluster.go:125:   mkdir -p $HOME/.kube"
      L75: "cluster.go:125:   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config"
      L76: "cluster.go:125:   sudo chown $(id -u):$(id -g) $HOME/.kube/config"
      L77: "cluster.go:125: "
      L78: "cluster.go:125: Alternatively, if you are the root user, you can run:"
      L79: "cluster.go:125: "
      L80: "cluster.go:125:   export KUBECONFIG=/etc/kubernetes/admin.conf"
      L81: "cluster.go:125: "
      L82: "cluster.go:125: You should now deploy a pod network to the cluster."
      L83: "cluster.go:125: Run __kubectl apply -f [podnetwork].yaml__ with one of the options listed at:"
      L84: "cluster.go:125:   https://kubernetes.io/docs/concepts/cluster-administration/addons/"
      L85: "cluster.go:125: "
      L86: "cluster.go:125: Then you can join any number of worker nodes by running the following on each as root:"
      L87: "cluster.go:125: "
      L88: "cluster.go:125: kubeadm join 10.0.0.7:6443 --token 23xh21.72u3oc5he53iwgql _"
      L89: "cluster.go:125:  --discovery-token-ca-cert-hash sha256:a63b2475f5535153794741932b043449c18fbac5a7a1d03ca49441065d5b2a84 "
      L90: "cluster.go:125: namespace/tigera-operator created"
      L91: "cluster.go:125: serviceaccount/tigera-operator created"
      L92: "cluster.go:125: clusterrole.rbac.authorization.k8s.io/tigera-operator-secrets created"
      L93: "cluster.go:125: clusterrole.rbac.authorization.k8s.io/tigera-operator created"
      L94: "cluster.go:125: clusterrolebinding.rbac.authorization.k8s.io/tigera-operator created"
      L95: "cluster.go:125: rolebinding.rbac.authorization.k8s.io/tigera-operator-secrets created"
      L96: "cluster.go:125: deployment.apps/tigera-operator created"
      L97: "cluster.go:125: error: timed out waiting for the condition"
      L98: "kubeadm.go:197: unable to setup cluster: unable to run master script: Process exited with status 1_"
      L99: " "
  ```

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for qemu, run 2</summary>

  ```
      L1: " Error: _cluster.go:125: I0416 02:33:32.928202    1921 version.go:261] remote version is much newer: v1.35.4; falling back to: stable-1.33"
      L2: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-apiserver:v1.33.11"
      L3: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-controller-manager:v1.33.11"
      L4: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-scheduler:v1.33.11"
      L5: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-proxy:v1.33.11"
      L6: "cluster.go:125: [config/images] Pulled registry.k8s.io/coredns/coredns:v1.12.0"
      L7: "cluster.go:125: [config/images] Pulled registry.k8s.io/pause:3.10"
      L8: "cluster.go:125: [config/images] Pulled registry.k8s.io/etcd:3.5.24-0"
      L9: "cluster.go:125: I0416 02:33:56.282885    2172 version.go:261] remote version is much newer: v1.35.4; falling back to: stable-1.33"
      L10: "cluster.go:125: [init] Using Kubernetes version: v1.33.11"
      L11: "cluster.go:125: [preflight] Running pre-flight checks"
      L12: "cluster.go:125:  [WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2"
      L13: "cluster.go:125:  [WARNING Service-Kubelet]: kubelet service is not enabled, please run _systemctl enable kubelet.service_"
      L14: "cluster.go:125: [preflight] Pulling images required for setting up a Kubernetes cluster"
      L15: "cluster.go:125: [preflight] This might take a minute or two, depending on the speed of your internet connection"
      L16: "cluster.go:125: [preflight] You can also perform this action beforehand using _kubeadm config images pull_"
      L17: "cluster.go:125: W0416 02:33:56.873415    2172 checks.go:843] detected that the sandbox image __registry.k8s.io/pause:3.8__ of the container runtime is inconsistent with that used by kubeadm.It is recommended to use __registry.k8s.io/pause:3.10__ as the CRI sandbox image."
      L18: "cluster.go:125: [certs] Using certificateDir folder __/etc/kubernetes/pki__"
      L19: "cluster.go:125: [certs] Generating __ca__ certificate and key"
      L20: "cluster.go:125: [certs] Generating __apiserver__ certificate and key"
      L21: "cluster.go:125: [certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local localhost] and IPs [10.96.0.1 10.0.0.5]"
      L22: "cluster.go:125: [certs] Generating __apiserver-kubelet-client__ certificate and key"
      L23: "cluster.go:125: [certs] Generating __front-proxy-ca__ certificate and key"
      L24: "cluster.go:125: [certs] Generating __front-proxy-client__ certificate and key"
      L25: "cluster.go:125: [certs] External etcd mode: Skipping etcd/ca certificate authority generation"
      L26: "cluster.go:125: [certs] External etcd mode: Skipping etcd/server certificate generation"
      L27: "cluster.go:125: [certs] External etcd mode: Skipping etcd/peer certificate generation"
      L28: "cluster.go:125: [certs] External etcd mode: Skipping etcd/healthcheck-client certificate generation"
      L29: "cluster.go:125: [certs] External etcd mode: Skipping apiserver-etcd-client certificate generation"
      L30: "cluster.go:125: [certs] Generating __sa__ key and public key"
      L31: "cluster.go:125: [kubeconfig] Using kubeconfig folder __/etc/kubernetes__"
      L32: "cluster.go:125: [kubeconfig] Writing __admin.conf__ kubeconfig file"
      L33: "cluster.go:125: [kubeconfig] Writing __super-admin.conf__ kubeconfig file"
      L34: "cluster.go:125: [kubeconfig] Writing __kubelet.conf__ kubeconfig file"
      L35: "cluster.go:125: [kubeconfig] Writing __controller-manager.conf__ kubeconfig file"
      L36: "cluster.go:125: [kubeconfig] Writing __scheduler.conf__ kubeconfig file"
      L37: "cluster.go:125: [control-plane] Using manifest folder __/etc/kubernetes/manifests__"
      L38: "cluster.go:125: [control-plane] Creating static Pod manifest for __kube-apiserver__"
      L39: "cluster.go:125: [control-plane] Creating static Pod manifest for __kube-controller-manager__"
      L40: "cluster.go:125: [control-plane] Creating static Pod manifest for __kube-scheduler__"
      L41: "cluster.go:125: [kubelet-start] Writing kubelet environment file with flags to file __/var/lib/kubelet/kubeadm-flags.env__"
      L42: "cluster.go:125: [kubelet-start] Writing kubelet configuration to file __/var/lib/kubelet/config.yaml__"
      L43: "cluster.go:125: [kubelet-start] Starting the kubelet"
      L44: "cluster.go:125: [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory __/etc/kubernetes/manifests__"
      L45: "cluster.go:125: [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s"
      L46: "cluster.go:125: [kubelet-check] The kubelet is healthy after 5.781096401s"
      L47: "cluster.go:125: [control-plane-check] Waiting for healthy control plane components. This can take up to 30m0s"
      L48: "cluster.go:125: [control-plane-check] Checking kube-apiserver at https://10.0.0.5:6443/livez"
      L49: "cluster.go:125: [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz"
      L50: "cluster.go:125: [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez"
      L51: "cluster.go:125: [control-plane-check] kube-controller-manager is healthy after 7.193281385s"
      L52: "cluster.go:125: [control-plane-check] kube-scheduler is healthy after 23.407070183s"
      L53: "cluster.go:125: [control-plane-check] kube-apiserver is healthy after 31.021815728s"
      L54: "cluster.go:125: [upload-config] Storing the configuration used in ConfigMap __kubeadm-config__ in the __kube-system__ Namespace"
      L55: "cluster.go:125: [kubelet] Creating a ConfigMap __kubelet-config__ in namespace kube-system with the configuration for the kubelets in the cluster"
      L56: "cluster.go:125: [upload-certs] Skipping phase. Please see --upload-certs"
      L57: "cluster.go:125: [mark-control-plane] Marking the node localhost as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]"
      L58: "cluster.go:125: [mark-control-plane] Marking the node localhost as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]"
      L59: "cluster.go:125: [bootstrap-token] Using token: ixegca.3qqbjienwy0oeslg"
      L60: "cluster.go:125: [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles"
      L61: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes"
      L62: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials"
      L63: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token"
      L64: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster"
      L65: "cluster.go:125: [bootstrap-token] Creating the __cluster-info__ ConfigMap in the __kube-public__ namespace"
      L66: "cluster.go:125: [kubelet-finalize] Updating __/etc/kubernetes/kubelet.conf__ to point to a rotatable kubelet client certificate and key"
      L67: "cluster.go:125: [addons] Applied essential addon: CoreDNS"
      L68: "cluster.go:125: [addons] Applied essential addon: kube-proxy"
      L69: "cluster.go:125: "
      L70: "cluster.go:125: Your Kubernetes control-plane has initialized successfully!"
      L71: "cluster.go:125: "
      L72: "cluster.go:125: To start using your cluster, you need to run the following as a regular user:"
      L73: "cluster.go:125: "
      L74: "cluster.go:125:   mkdir -p $HOME/.kube"
      L75: "cluster.go:125:   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config"
      L76: "cluster.go:125:   sudo chown $(id -u):$(id -g) $HOME/.kube/config"
      L77: "cluster.go:125: "
      L78: "cluster.go:125: Alternatively, if you are the root user, you can run:"
      L79: "cluster.go:125: "
      L80: "cluster.go:125:   export KUBECONFIG=/etc/kubernetes/admin.conf"
      L81: "cluster.go:125: "
      L82: "cluster.go:125: You should now deploy a pod network to the cluster."
      L83: "cluster.go:125: Run __kubectl apply -f [podnetwork].yaml__ with one of the options listed at:"
      L84: "cluster.go:125:   https://kubernetes.io/docs/concepts/cluster-administration/addons/"
      L85: "cluster.go:125: "
      L86: "cluster.go:125: Then you can join any number of worker nodes by running the following on each as root:"
      L87: "cluster.go:125: "
      L88: "cluster.go:125: kubeadm join 10.0.0.5:6443 --token ixegca.3qqbjienwy0oeslg _"
      L89: "cluster.go:125:  --discovery-token-ca-cert-hash sha256:ad993afdef0053593de70fa6dbd718c278a5f496441b64bf1d1c5210b9fbc707 "
      L90: "cluster.go:125: namespace/tigera-operator created"
      L91: "cluster.go:125: serviceaccount/tigera-operator created"
      L92: "cluster.go:125: clusterrole.rbac.authorization.k8s.io/tigera-operator-secrets created"
      L93: "cluster.go:125: clusterrole.rbac.authorization.k8s.io/tigera-operator created"
      L94: "cluster.go:125: clusterrolebinding.rbac.authorization.k8s.io/tigera-operator created"
      L95: "cluster.go:125: rolebinding.rbac.authorization.k8s.io/tigera-operator-secrets created"
      L96: "cluster.go:125: deployment.apps/tigera-operator created"
      L97: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/installations.operator.tigera.io condition met"
      L98: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/installations.operator.tigera.io condition met"
      L99: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/apiservers.operator.tigera.io condition met"
      L100: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/apiservers.operator.tigera.io condition met"
      L101: "cluster.go:125: installation.operator.tigera.io/default created"
      L102: "cluster.go:125: apiserver.operator.tigera.io/default created"
      L103: "cluster.go:125: goldmane.operator.tigera.io/default created"
      L104: "cluster.go:125: whisker.operator.tigera.io/default created"
      L105: "cluster.go:125: W0416 02:36:06.198076    1800 joinconfiguration.go:113] [config] WARNING: Ignored configuration document with GroupVersionKind kubelet.config.k8s.io/v1beta1, Kind=KubeletConfiguration"
      L106: "cluster.go:125:  [WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2"
      L107: "cluster.go:125:  [WARNING Service-Kubelet]: kubelet service is not enabled, please run _systemctl enable kubelet.service_"
      L108: "--- FAIL: kubeadm.v1.33.8.calico.cgroupv1.base/node_readiness (264.42s)"
      L109: "kubeadm.go:213: nodes are not ready: ready nodes should be equal to 2: 1"
      L110: "cluster.go:125: jq: error (at <stdin_:122): Cannot iterate over null (null)"
      L111: "cluster.go:125: jq: error (at <stdin_:122): Cannot iterate over null (null)"
      L112: "cluster.go:125: jq: error (at <stdin_:122): Cannot iterate over null (null)"
      L113: "cluster.go:125: jq: error (at <stdin_:122): Cannot iterate over null (null)"
      L114: "cluster.go:125: jq: error (at <stdin_:122): Cannot iterate over null (null)"
      L115: "cluster.go:125: jq: error (at <stdin_:122): Cannot iterate over null (null)"
      L116: "cluster.go:125: jq: error (at <stdin_:122): Cannot iterate over null (null)"
      L117: "cluster.go:125: jq: error (at <stdin_:122): Cannot iterate over null (null)"
      L118: "cluster.go:125: jq: error (at <stdin_:122): Cannot iterate over null (null)"
      L119: "cluster.go:125: jq: error (at <stdin_:122): Cannot iterate over null (null)"
      L120: "cluster.go:125: jq: error (at <stdin_:122): Cannot iterate over null (null)"
      L121: "cluster.go:125: jq: error (at <stdin_:122): Cannot iterate over null (null)"
      L122: "cluster.go:125: jq: error (at <stdin_:122): Cannot iterate over null (null)"
      L123: "cluster.go:125: jq: error (at <stdin_:122): Cannot iterate over null (null)_"
      L124: " "
  ```

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for qemu, run 1</summary>

  ```
      L1: " Error: _cluster.go:125: I0416 01:26:35.526986    1959 version.go:261] remote version is much newer: v1.35.4; falling back to: stable-1.33"
      L2: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-apiserver:v1.33.11"
      L3: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-controller-manager:v1.33.11"
      L4: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-scheduler:v1.33.11"
      L5: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-proxy:v1.33.11"
      L6: "cluster.go:125: [config/images] Pulled registry.k8s.io/coredns/coredns:v1.12.0"
      L7: "cluster.go:125: [config/images] Pulled registry.k8s.io/pause:3.10"
      L8: "cluster.go:125: [config/images] Pulled registry.k8s.io/etcd:3.5.24-0"
      L9: "cluster.go:125: I0416 01:26:47.871117    2183 version.go:261] remote version is much newer: v1.35.4; falling back to: stable-1.33"
      L10: "cluster.go:125: [init] Using Kubernetes version: v1.33.11"
      L11: "cluster.go:125: [preflight] Running pre-flight checks"
      L12: "cluster.go:125:  [WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2"
      L13: "cluster.go:125:  [WARNING Service-Kubelet]: kubelet service is not enabled, please run _systemctl enable kubelet.service_"
      L14: "cluster.go:125: [preflight] Pulling images required for setting up a Kubernetes cluster"
      L15: "cluster.go:125: [preflight] This might take a minute or two, depending on the speed of your internet connection"
      L16: "cluster.go:125: [preflight] You can also perform this action beforehand using _kubeadm config images pull_"
      L17: "cluster.go:125: W0416 01:26:48.209005    2183 checks.go:843] detected that the sandbox image __registry.k8s.io/pause:3.8__ of the container runtime is inconsistent with that used by kubeadm.It is recommended to use __registry.k8s.io/pause:3.10__ as the CRI sandbox image."
      L18: "cluster.go:125: [certs] Using certificateDir folder __/etc/kubernetes/pki__"
      L19: "cluster.go:125: [certs] Generating __ca__ certificate and key"
      L20: "cluster.go:125: [certs] Generating __apiserver__ certificate and key"
      L21: "cluster.go:125: [certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local localhost] and IPs [10.96.0.1 10.0.0.102]"
      L22: "cluster.go:125: [certs] Generating __apiserver-kubelet-client__ certificate and key"
      L23: "cluster.go:125: [certs] Generating __front-proxy-ca__ certificate and key"
      L24: "cluster.go:125: [certs] Generating __front-proxy-client__ certificate and key"
      L25: "cluster.go:125: [certs] External etcd mode: Skipping etcd/ca certificate authority generation"
      L26: "cluster.go:125: [certs] External etcd mode: Skipping etcd/server certificate generation"
      L27: "cluster.go:125: [certs] External etcd mode: Skipping etcd/peer certificate generation"
      L28: "cluster.go:125: [certs] External etcd mode: Skipping etcd/healthcheck-client certificate generation"
      L29: "cluster.go:125: [certs] External etcd mode: Skipping apiserver-etcd-client certificate generation"
      L30: "cluster.go:125: [certs] Generating __sa__ key and public key"
      L31: "cluster.go:125: [kubeconfig] Using kubeconfig folder __/etc/kubernetes__"
      L32: "cluster.go:125: [kubeconfig] Writing __admin.conf__ kubeconfig file"
      L33: "cluster.go:125: [kubeconfig] Writing __super-admin.conf__ kubeconfig file"
      L34: "cluster.go:125: [kubeconfig] Writing __kubelet.conf__ kubeconfig file"
      L35: "cluster.go:125: [kubeconfig] Writing __controller-manager.conf__ kubeconfig file"
      L36: "cluster.go:125: [kubeconfig] Writing __scheduler.conf__ kubeconfig file"
      L37: "cluster.go:125: [control-plane] Using manifest folder __/etc/kubernetes/manifests__"
      L38: "cluster.go:125: [control-plane] Creating static Pod manifest for __kube-apiserver__"
      L39: "cluster.go:125: [control-plane] Creating static Pod manifest for __kube-controller-manager__"
      L40: "cluster.go:125: [control-plane] Creating static Pod manifest for __kube-scheduler__"
      L41: "cluster.go:125: [kubelet-start] Writing kubelet environment file with flags to file __/var/lib/kubelet/kubeadm-flags.env__"
      L42: "cluster.go:125: [kubelet-start] Writing kubelet configuration to file __/var/lib/kubelet/config.yaml__"
      L43: "cluster.go:125: [kubelet-start] Starting the kubelet"
      L44: "cluster.go:125: [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory __/etc/kubernetes/manifests__"
      L45: "cluster.go:125: [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s"
      L46: "cluster.go:125: [kubelet-check] The kubelet is healthy after 507.701644ms"
      L47: "cluster.go:125: [control-plane-check] Waiting for healthy control plane components. This can take up to 30m0s"
      L48: "cluster.go:125: [control-plane-check] Checking kube-apiserver at https://10.0.0.102:6443/livez"
      L49: "cluster.go:125: [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz"
      L50: "cluster.go:125: [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez"
      L51: "cluster.go:125: [control-plane-check] kube-controller-manager is healthy after 2.504739687s"
      L52: "cluster.go:125: [control-plane-check] kube-scheduler is healthy after 4.672571798s"
      L53: "cluster.go:125: [control-plane-check] kube-apiserver is healthy after 10.002939085s"
      L54: "cluster.go:125: [upload-config] Storing the configuration used in ConfigMap __kubeadm-config__ in the __kube-system__ Namespace"
      L55: "cluster.go:125: [kubelet] Creating a ConfigMap __kubelet-config__ in namespace kube-system with the configuration for the kubelets in the cluster"
      L56: "cluster.go:125: [upload-certs] Skipping phase. Please see --upload-certs"
      L57: "cluster.go:125: [mark-control-plane] Marking the node localhost as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]"
      L58: "cluster.go:125: [mark-control-plane] Marking the node localhost as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]"
      L59: "cluster.go:125: [bootstrap-token] Using token: 15go0s.dyyoy3y5kv19au34"
      L60: "cluster.go:125: [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles"
      L61: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes"
      L62: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials"
      L63: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token"
      L64: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster"
      L65: "cluster.go:125: [bootstrap-token] Creating the __cluster-info__ ConfigMap in the __kube-public__ namespace"
      L66: "cluster.go:125: [kubelet-finalize] Updating __/etc/kubernetes/kubelet.conf__ to point to a rotatable kubelet client certificate and key"
      L67: "cluster.go:125: [addons] Applied essential addon: CoreDNS"
      L68: "cluster.go:125: [addons] Applied essential addon: kube-proxy"
      L69: "cluster.go:125: "
      L70: "cluster.go:125: Your Kubernetes control-plane has initialized successfully!"
      L71: "cluster.go:125: "
      L72: "cluster.go:125: To start using your cluster, you need to run the following as a regular user:"
      L73: "cluster.go:125: "
      L74: "cluster.go:125:   mkdir -p $HOME/.kube"
      L75: "cluster.go:125:   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config"
      L76: "cluster.go:125:   sudo chown $(id -u):$(id -g) $HOME/.kube/config"
      L77: "cluster.go:125: "
      L78: "cluster.go:125: Alternatively, if you are the root user, you can run:"
      L79: "cluster.go:125: "
      L80: "cluster.go:125:   export KUBECONFIG=/etc/kubernetes/admin.conf"
      L81: "cluster.go:125: "
      L82: "cluster.go:125: You should now deploy a pod network to the cluster."
      L83: "cluster.go:125: Run __kubectl apply -f [podnetwork].yaml__ with one of the options listed at:"
      L84: "cluster.go:125:   https://kubernetes.io/docs/concepts/cluster-administration/addons/"
      L85: "cluster.go:125: "
      L86: "cluster.go:125: Then you can join any number of worker nodes by running the following on each as root:"
      L87: "cluster.go:125: "
      L88: "cluster.go:125: kubeadm join 10.0.0.102:6443 --token 15go0s.dyyoy3y5kv19au34 _"
      L89: "cluster.go:125:  --discovery-token-ca-cert-hash sha256:4b814d5315870d5ec6a9055d0d42384e828fa9169e04a587dcb18af7cf18adb2 "
      L90: "cluster.go:125: namespace/tigera-operator created"
      L91: "cluster.go:125: serviceaccount/tigera-operator created"
      L92: "cluster.go:125: clusterrole.rbac.authorization.k8s.io/tigera-operator-secrets created"
      L93: "cluster.go:125: clusterrole.rbac.authorization.k8s.io/tigera-operator created"
      L94: "cluster.go:125: clusterrolebinding.rbac.authorization.k8s.io/tigera-operator created"
      L95: "cluster.go:125: rolebinding.rbac.authorization.k8s.io/tigera-operator-secrets created"
      L96: "cluster.go:125: deployment.apps/tigera-operator created"
      L97: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/installations.operator.tigera.io condition met"
      L98: "cluster.go:125: error: .status.conditions accessor error: <nil_ is of the type <nil_, expected []interface{}"
      L99: "kubeadm.go:197: unable to setup cluster: unable to run master script: Process exited with status 1_"
      L100: " "
  ```


</details>


🟢 ok **kubeadm.v1.33.8.cilium.base**; Succeeded: qemu (1)

🟢 ok **kubeadm.v1.33.8.cilium.cgroupv1.base**; Succeeded: qemu (5); Failed: qemu (1, 2, 3, 4)

<details>

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for qemu, run 4</summary>

  ```
      L1: " Error: _cluster.go:125: W0416 03:42:31.815319    2023 version.go:109] could not fetch a Kubernetes version from the internet: unable to get URL __https://dl.k8s.io/release/stable-1.txt__: Get __https:?//dl.k8s.io/release/stable-1.txt__: context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
      L2: "cluster.go:125: W0416 03:42:31.853292    2023 version.go:110] falling back to the local client version: v1.33.8"
      L3: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-apiserver:v1.33.8"
      L4: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-controller-manager:v1.33.8"
      L5: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-scheduler:v1.33.8"
      L6: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-proxy:v1.33.8"
      L7: "cluster.go:125: [config/images] Pulled registry.k8s.io/coredns/coredns:v1.12.0"
      L8: "cluster.go:125: [config/images] Pulled registry.k8s.io/pause:3.10"
      L9: "cluster.go:125: [config/images] Pulled registry.k8s.io/etcd:3.5.24-0"
      L10: "cluster.go:125: I0416 03:45:50.514590    2540 version.go:261] remote version is much newer: v1.35.4; falling back to: stable-1.33"
      L11: "cluster.go:125: [init] Using Kubernetes version: v1.33.11"
      L12: "cluster.go:125: [preflight] Running pre-flight checks"
      L13: "cluster.go:125:  [WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2"
      L14: "cluster.go:125:  [WARNING Service-Kubelet]: kubelet service is not enabled, please run _systemctl enable kubelet.service_"
      L15: "cluster.go:125: [preflight] Pulling images required for setting up a Kubernetes cluster"
      L16: "cluster.go:125: [preflight] This might take a minute or two, depending on the speed of your internet connection"
      L17: "cluster.go:125: [preflight] You can also perform this action beforehand using _kubeadm config images pull_"
      L18: "cluster.go:125: W0416 03:46:00.626727    2540 checks.go:843] detected that the sandbox image __registry.k8s.io/pause:3.8__ of the container runtime is inconsistent with that used by kubeadm.It is reco?mmended to use __registry.k8s.io/pause:3.10__ as the CRI sandbox image."
      L19: "cluster.go:125: [certs] Using certificateDir folder __/etc/kubernetes/pki__"
      L20: "cluster.go:125: [certs] Generating __ca__ certificate and key"
      L21: "cluster.go:125: [certs] Generating __apiserver__ certificate and key"
      L22: "cluster.go:125: [certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local localhost] and IPs [10.96.0.1 10.0.0.5?]"
      L23: "cluster.go:125: [certs] Generating __apiserver-kubelet-client__ certificate and key"
      L24: "cluster.go:125: [certs] Generating __front-proxy-ca__ certificate and key"
      L25: "cluster.go:125: [certs] Generating __front-proxy-client__ certificate and key"
      L26: "cluster.go:125: [certs] External etcd mode: Skipping etcd/ca certificate authority generation"
      L27: "cluster.go:125: [certs] External etcd mode: Skipping etcd/server certificate generation"
      L28: "cluster.go:125: [certs] External etcd mode: Skipping etcd/peer certificate generation"
      L29: "cluster.go:125: [certs] External etcd mode: Skipping etcd/healthcheck-client certificate generation"
      L30: "cluster.go:125: [certs] External etcd mode: Skipping apiserver-etcd-client certificate generation"
      L31: "cluster.go:125: [certs] Generating __sa__ key and public key"
      L32: "cluster.go:125: [kubeconfig] Using kubeconfig folder __/etc/kubernetes__"
      L33: "cluster.go:125: [kubeconfig] Writing __admin.conf__ kubeconfig file"
      L34: "cluster.go:125: [kubeconfig] Writing __super-admin.conf__ kubeconfig file"
      L35: "cluster.go:125: [kubeconfig] Writing __kubelet.conf__ kubeconfig file"
      L36: "cluster.go:125: [kubeconfig] Writing __controller-manager.conf__ kubeconfig file"
      L37: "cluster.go:125: [kubeconfig] Writing __scheduler.conf__ kubeconfig file"
      L38: "cluster.go:125: [control-plane] Using manifest folder __/etc/kubernetes/manifests__"
      L39: "cluster.go:125: [control-plane] Creating static Pod manifest for __kube-apiserver__"
      L40: "cluster.go:125: [control-plane] Creating static Pod manifest for __kube-controller-manager__"
      L41: "cluster.go:125: [control-plane] Creating static Pod manifest for __kube-scheduler__"
      L42: "cluster.go:125: [kubelet-start] Writing kubelet environment file with flags to file __/var/lib/kubelet/kubeadm-flags.env__"
      L43: "cluster.go:125: [kubelet-start] Writing kubelet configuration to file __/var/lib/kubelet/config.yaml__"
      L44: "cluster.go:125: [kubelet-start] Starting the kubelet"
      L45: "cluster.go:125: [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory __/etc/kubernetes/manifests__"
      L46: "cluster.go:125: [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s"
      L47: "cluster.go:125: [kubelet-check] The kubelet is healthy after 6.069331264s"
      L48: "cluster.go:125: [control-plane-check] Waiting for healthy control plane components. This can take up to 30m0s"
      L49: "cluster.go:125: [control-plane-check] Checking kube-apiserver at https://10.0.0.5:6443/livez"
      L50: "cluster.go:125: [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz"
      L51: "cluster.go:125: [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez"
      L52: "cluster.go:125: [control-plane-check] kube-controller-manager is healthy after 12.562570678s"
      L53: "cluster.go:125: [control-plane-check] kube-scheduler is healthy after 34.147698402s"
      L54: "cluster.go:125: [control-plane-check] kube-apiserver is healthy after 1m20.911020905s"
      L55: "cluster.go:125: [upload-config] Storing the configuration used in ConfigMap __kubeadm-config__ in the __kube-system__ Namespace"
      L56: "cluster.go:125: [kubelet] Creating a ConfigMap __kubelet-config__ in namespace kube-system with the configuration for the kubelets in the cluster"
      L57: "cluster.go:125: [upload-certs] Skipping phase. Please see --upload-certs"
      L58: "cluster.go:125: [mark-control-plane] Marking the node localhost as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]"
      L59: "cluster.go:125: [mark-control-plane] Marking the node localhost as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]"
      L60: "cluster.go:125: [bootstrap-token] Using token: wz9lpi.euwrvlprfwx4e8tg"
      L61: "cluster.go:125: [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles"
      L62: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes"
      L63: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials"
      L64: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token"
      L65: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster"
      L66: "cluster.go:125: [bootstrap-token] Creating the __cluster-info__ ConfigMap in the __kube-public__ namespace"
      L67: "cluster.go:125: [kubelet-finalize] Updating __/etc/kubernetes/kubelet.conf__ to point to a rotatable kubelet client certificate and key"
      L68: "cluster.go:125: [addons] Applied essential addon: CoreDNS"
      L69: "cluster.go:125: [addons] Applied essential addon: kube-proxy"
      L70: "cluster.go:125: "
      L71: "cluster.go:125: Your Kubernetes control-plane has initialized successfully!"
      L72: "cluster.go:125: "
      L73: "cluster.go:125: To start using your cluster, you need to run the following as a regular user:"
      L74: "cluster.go:125: "
      L75: "cluster.go:125:   mkdir -p $HOME/.kube"
      L76: "cluster.go:125:   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config"
      L77: "cluster.go:125:   sudo chown $(id -u):$(id -g) $HOME/.kube/config"
      L78: "cluster.go:125: "
      L79: "cluster.go:125: Alternatively, if you are the root user, you can run:"
      L80: "cluster.go:125: "
      L81: "cluster.go:125:   export KUBECONFIG=/etc/kubernetes/admin.conf"
      L82: "cluster.go:125: "
      L83: "cluster.go:125: You should now deploy a pod network to the cluster."
      L84: "cluster.go:125: Run __kubectl apply -f [podnetwork].yaml__ with one of the options listed at:"
      L85: "cluster.go:125:   https://kubernetes.io/docs/concepts/cluster-administration/addons/"
      L86: "cluster.go:125: "
      L87: "cluster.go:125: Then you can join any number of worker nodes by running the following on each as root:"
      L88: "cluster.go:125: "
      L89: "cluster.go:125: kubeadm join 10.0.0.5:6443 --token wz9lpi.euwrvlprfwx4e8tg _"
      L90: "cluster.go:125:  --discovery-token-ca-cert-hash sha256:ce50b8da4ff65edcafed3fea07c1b586738383cfa58caffc1e0d0d58cf27baf0 "
      L91: "cluster.go:125: i  Using Cilium version 1.12.5"
      L92: "cluster.go:125: ? Auto-detected cluster name: kubernetes"
      L93: "cluster.go:125: ? Auto-detected datapath mode: tunnel"
      L94: "cluster.go:125: ? Auto-detected kube-proxy has been installed"
      L95: "cluster.go:125: i  helm template --namespace kube-system cilium cilium/cilium --version 1.12.5 --set cluster.id=0,cluster.name=kubernetes,encryption.nodeEncryption=false,extraConfig.cluster-pool-ipv4-?cidr=192.168.0.0/17,extraConfig.enable-endpoint-routes=true,kubeProxyReplacement=disabled,operator.replicas=1,serviceAccounts.cilium.name=cilium,serviceAccounts.operator.name=cilium-operator,tunnel=vx?lan"
      L96: "cluster.go:125: i  Storing helm values file in kube-system/cilium-cli-helm-values Secret"
      L97: "cluster.go:125: ? Created CA in secret cilium-ca"
      L98: "cluster.go:125: ? Generating certificates for Hubble..."
      L99: "cluster.go:125: ? Creating Service accounts..."
      L100: "cluster.go:125: ? Creating Cluster roles..."
      L101: "cluster.go:125: ? Creating ConfigMap for Cilium version 1.12.5..."
      L102: "cluster.go:125: i  Manual overwrite in ConfigMap: enable-endpoint-routes=true"
      L103: "cluster.go:125: i  Manual overwrite in ConfigMap: cluster-pool-ipv4-cidr=192.168.0.0/17"
      L104: "cluster.go:125: ? Creating Agent DaemonSet..."
      L105: "cluster.go:125: level=warning msg=__spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/mount-cgroup]: deprecated since v1.30; use the ___appArmorProfile___ field instead?__ subsys=klog"
      L106: "cluster.go:125: level=warning msg=__spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/apply-sysctl-overwrites]: deprecated since v1.30; use the ___appArmorProfile___ fi?eld instead__ subsys=klog"
      L107: "cluster.go:125: level=warning msg=__spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/clean-cilium-state]: deprecated since v1.30; use the ___appArmorProfile___ field i?nstead__ subsys=klog"
      L108: "cluster.go:125: level=warning msg=__spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/cilium-agent]: deprecated since v1.30; use the ___appArmorProfile___ field instead?__ subsys=klog"
      L109: "cluster.go:125: ? Creating Operator Deployment..."
      L110: "cluster.go:125: ? Waiting for Cilium to be installed and ready..."
      L111: "cluster.go:125: ? Cilium was successfully installed! Run _cilium status_ to view installation health"
      L112: "cluster.go:125: ?[33m    /??_"
      L113: "cluster.go:125: ?[36m /???[33m___/?[32m??_?[0m    Cilium:         ?[32mOK?[0m"
      L114: "cluster.go:125: ?[36m ___?[31m/??_?[32m__/?[0m    Operator:       ?[32mOK?[0m"
      L115: "cluster.go:125: ?[32m /???[31m___/?[35m??_?[0m    Hubble:         ?[36mdisabled?[0m"
      L116: "cluster.go:125: ?[32m ___?[34m/??_?[35m__/?[0m    ClusterMesh:    ?[36mdisabled?[0m"
      L117: "cluster.go:125: ?[34m    ___/"
      L118: "cluster.go:125: ?[0m"
      L119: "cluster.go:125: Deployment       cilium-operator    "
      L120: "cluster.go:125: DaemonSet        cilium             "
      L121: "cluster.go:125: Containers:      cilium             "
      L122: "cluster.go:125:                  cilium-operator    "
      L123: "cluster.go:125: Cluster Pods:    0/0 managed by Cilium"
      L124: "cluster.go:125: W0416 03:52:22.474051    1785 joinconfiguration.go:113] [config] WARNING: Ignored configuration document with GroupVersionKind kubelet.config.k8s.io/v1beta1, Kind=KubeletConfiguration"
      L125: "cluster.go:125:  [WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2"
      L126: "cluster.go:125:  [WARNING Service-Kubelet]: kubelet service is not enabled, please run _systemctl enable kubelet.service_"
      L127: "--- FAIL: kubeadm.v1.33.8.cilium.cgroupv1.base/IPSec_encryption (300.01s)"
      L128: "cluster.go:125: level=warning msg=__spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/mount-cgroup]: deprecated since v1.30; use the ___appArmorProfile___ field instead?__ subsys=klog"
      L129: "cluster.go:125: level=warning msg=__spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/apply-sysctl-overwrites]: deprecated since v1.30; use the ___appArmorProfile___ fi?eld instead__ subsys=klog"
      L130: "cluster.go:125: level=warning msg=__spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/clean-cilium-state]: deprecated since v1.30; use the ___appArmorProfile___ field i?nstead__ subsys=klog"
      L131: "cluster.go:125: level=warning msg=__spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/cilium-agent]: deprecated since v1.30; use the ___appArmorProfile___ field instead?__ subsys=klog"
      L132: "cluster.go:125: Error: Unable to determine status:  timeout while waiting for status to become successful: context deadline exceeded"
      L133: "cluster.go:145: __/opt/bin/cilium status --wait --wait-duration 1m__ failed: output ?[33m    /????_"
      L134: "?[36m /?????[33m___/?[32m????_?[0m    Cilium:         ?[31m3 errors?[0m"
      L135: "?[36m ___?[31m/????_?[32m__/?[0m    Operator:       ?[32mOK?[0m"
      L136: "?[32m /?????[31m___/?[35m????_?[0m    Hubble:         ?[36mdisabled?[0m"
      L137: "?[32m ___?[34m/????_?[35m__/?[0m    ClusterMesh:    ?[36mdisabled?[0m"
      L138: "?[34m    ___/"
      L139: "?[0m"
      L140: "Deployment        cilium-operator    "
      L141: "DaemonSet         cilium             "
      L142: "Containers:       cilium-operator    Running: ?[32m1?[0m"
      L143: "cilium             Running: ?[32m2?[0m"
      L144: "Cluster Pods:     5/5 managed by Cilium"
      L145: "Image versions    cilium             quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: 2"
      L146: "cilium-operator    quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: 1"
      L147: "Errors:           cilium             cilium-skp8r    controller sync-to-k8s-ciliumendpoint (259) is failing since 8s (3x): Unauthorized"
      L148: "cilium             cilium-skp8r    controller sync-to-k8s-ciliumendpoint (1247) is failing since 21s (18x): Unauthorized"
      L149: "cilium             cilium-skp8r    controller cilium-health-ep is failing since 3m55s (1x): Get __http://192.168.1.38:4240/hello__: dial tcp 192.168.1.38:4240: connect: no route to host, status Proces?s exited with status 1_"
      L150: " "
  ```
</details>


<details>

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for qemu, run 3</summary>

  ```
      L1: " Error: _cluster.go:125: I0416 03:14:01.935624    1956 version.go:261] remote version is much newer: v1.35.4; falling back to: stable-1.33"
      L2: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-apiserver:v1.33.11"
      L3: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-controller-manager:v1.33.11"
      L4: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-scheduler:v1.33.11"
      L5: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-proxy:v1.33.11"
      L6: "cluster.go:125: [config/images] Pulled registry.k8s.io/coredns/coredns:v1.12.0"
      L7: "cluster.go:125: [config/images] Pulled registry.k8s.io/pause:3.10"
      L8: "cluster.go:125: [config/images] Pulled registry.k8s.io/etcd:3.5.24-0"
      L9: "cluster.go:125: I0416 03:15:31.424289    2323 version.go:261] remote version is much newer: v1.35.4; falling back to: stable-1.33"
      L10: "cluster.go:125: [init] Using Kubernetes version: v1.33.11"
      L11: "cluster.go:125: [preflight] Running pre-flight checks"
      L12: "cluster.go:125:  [WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2"
      L13: "cluster.go:125:  [WARNING Service-Kubelet]: kubelet service is not enabled, please run _systemctl enable kubelet.service_"
      L14: "cluster.go:125: [preflight] Pulling images required for setting up a Kubernetes cluster"
      L15: "cluster.go:125: [preflight] This might take a minute or two, depending on the speed of your internet connection"
      L16: "cluster.go:125: [preflight] You can also perform this action beforehand using _kubeadm config images pull_"
      L17: "cluster.go:125: W0416 03:15:32.980249    2323 checks.go:843] detected that the sandbox image __registry.k8s.io/pause:3.8__ of the container runtime is inconsistent with that used by kubeadm.It is reco?mmended to use __registry.k8s.io/pause:3.10__ as the CRI sandbox image."
      L18: "cluster.go:125: [certs] Using certificateDir folder __/etc/kubernetes/pki__"
      L19: "cluster.go:125: [certs] Generating __ca__ certificate and key"
      L20: "cluster.go:125: [certs] Generating __apiserver__ certificate and key"
      L21: "cluster.go:125: [certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local localhost] and IPs [10.96.0.1 10.0.0.5?]"
      L22: "cluster.go:125: [certs] Generating __apiserver-kubelet-client__ certificate and key"
      L23: "cluster.go:125: [certs] Generating __front-proxy-ca__ certificate and key"
      L24: "cluster.go:125: [certs] Generating __front-proxy-client__ certificate and key"
      L25: "cluster.go:125: [certs] External etcd mode: Skipping etcd/ca certificate authority generation"
      L26: "cluster.go:125: [certs] External etcd mode: Skipping etcd/server certificate generation"
      L27: "cluster.go:125: [certs] External etcd mode: Skipping etcd/peer certificate generation"
      L28: "cluster.go:125: [certs] External etcd mode: Skipping etcd/healthcheck-client certificate generation"
      L29: "cluster.go:125: [certs] External etcd mode: Skipping apiserver-etcd-client certificate generation"
      L30: "cluster.go:125: [certs] Generating __sa__ key and public key"
      L31: "cluster.go:125: [kubeconfig] Using kubeconfig folder __/etc/kubernetes__"
      L32: "cluster.go:125: [kubeconfig] Writing __admin.conf__ kubeconfig file"
      L33: "cluster.go:125: [kubeconfig] Writing __super-admin.conf__ kubeconfig file"
      L34: "cluster.go:125: [kubeconfig] Writing __kubelet.conf__ kubeconfig file"
      L35: "cluster.go:125: [kubeconfig] Writing __controller-manager.conf__ kubeconfig file"
      L36: "cluster.go:125: [kubeconfig] Writing __scheduler.conf__ kubeconfig file"
      L37: "cluster.go:125: [control-plane] Using manifest folder __/etc/kubernetes/manifests__"
      L38: "cluster.go:125: [control-plane] Creating static Pod manifest for __kube-apiserver__"
      L39: "cluster.go:125: [control-plane] Creating static Pod manifest for __kube-controller-manager__"
      L40: "cluster.go:125: [control-plane] Creating static Pod manifest for __kube-scheduler__"
      L41: "cluster.go:125: [kubelet-start] Writing kubelet environment file with flags to file __/var/lib/kubelet/kubeadm-flags.env__"
      L42: "cluster.go:125: [kubelet-start] Writing kubelet configuration to file __/var/lib/kubelet/config.yaml__"
      L43: "cluster.go:125: [kubelet-start] Starting the kubelet"
      L44: "cluster.go:125: [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory __/etc/kubernetes/manifests__"
      L45: "cluster.go:125: [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s"
      L46: "cluster.go:125: [kubelet-check] The kubelet is healthy after 7.089999184s"
      L47: "cluster.go:125: [control-plane-check] Waiting for healthy control plane components. This can take up to 30m0s"
      L48: "cluster.go:125: [control-plane-check] Checking kube-apiserver at https://10.0.0.5:6443/livez"
      L49: "cluster.go:125: [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz"
      L50: "cluster.go:125: [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez"
      L51: "cluster.go:125: [control-plane-check] kube-controller-manager is healthy after 12.457634149s"
      L52: "cluster.go:125: [control-plane-check] kube-scheduler is healthy after 33.500680391s"
      L53: "cluster.go:125: [control-plane-check] kube-apiserver is healthy after 4m8.670738474s"
      L54: "cluster.go:125: [upload-config] Storing the configuration used in ConfigMap __kubeadm-config__ in the __kube-system__ Namespace"
      L55: "cluster.go:125: [kubelet] Creating a ConfigMap __kubelet-config__ in namespace kube-system with the configuration for the kubelets in the cluster"
      L56: "cluster.go:125: [upload-certs] Skipping phase. Please see --upload-certs"
      L57: "cluster.go:125: [mark-control-plane] Marking the node localhost as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]"
      L58: "cluster.go:125: [mark-control-plane] Marking the node localhost as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]"
      L59: "cluster.go:125: [bootstrap-token] Using token: 5nag5i.st1qt1rwzip2zbr6"
      L60: "cluster.go:125: [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles"
      L61: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes"
      L62: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials"
      L63: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token"
      L64: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster"
      L65: "cluster.go:125: [bootstrap-token] Creating the __cluster-info__ ConfigMap in the __kube-public__ namespace"
      L66: "cluster.go:125: [kubelet-finalize] Updating __/etc/kubernetes/kubelet.conf__ to point to a rotatable kubelet client certificate and key"
      L67: "cluster.go:125: [addons] Applied essential addon: CoreDNS"
      L68: "cluster.go:125: [addons] Applied essential addon: kube-proxy"
      L69: "cluster.go:125: "
      L70: "cluster.go:125: Your Kubernetes control-plane has initialized successfully!"
      L71: "cluster.go:125: "
      L72: "cluster.go:125: To start using your cluster, you need to run the following as a regular user:"
      L73: "cluster.go:125: "
      L74: "cluster.go:125:   mkdir -p $HOME/.kube"
      L75: "cluster.go:125:   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config"
      L76: "cluster.go:125:   sudo chown $(id -u):$(id -g) $HOME/.kube/config"
      L77: "cluster.go:125: "
      L78: "cluster.go:125: Alternatively, if you are the root user, you can run:"
      L79: "cluster.go:125: "
      L80: "cluster.go:125:   export KUBECONFIG=/etc/kubernetes/admin.conf"
      L81: "cluster.go:125: "
      L82: "cluster.go:125: You should now deploy a pod network to the cluster."
      L83: "cluster.go:125: Run __kubectl apply -f [podnetwork].yaml__ with one of the options listed at:"
      L84: "cluster.go:125:   https://kubernetes.io/docs/concepts/cluster-administration/addons/"
      L85: "cluster.go:125: "
      L86: "cluster.go:125: Then you can join any number of worker nodes by running the following on each as root:"
      L87: "cluster.go:125: "
      L88: "cluster.go:125: kubeadm join 10.0.0.5:6443 --token 5nag5i.st1qt1rwzip2zbr6 \"
      L89: "cluster.go:125:  --discovery-token-ca-cert-hash sha256:4932530e11775ce55378cbef4115e20c943b367034a9c11608a459fff23cf5f5 "
      L90: "cluster.go:125: i  Using Cilium version 1.12.5"
      L91: "cluster.go:125: ? Auto-detected cluster name: kubernetes"
      L92: "cluster.go:125: ? Auto-detected datapath mode: tunnel"
      L93: "cluster.go:125: ? Auto-detected kube-proxy has been installed"
      L94: "cluster.go:125: i  helm template --namespace kube-system cilium cilium/cilium --version 1.12.5 --set cluster.id=0,cluster.name=kubernetes,encryption.nodeEncryption=false,extraConfig.cluster-pool-ipv4-cidr=192.168.0.0/17,extraConfig.enable-endpoint-routes=true,kubeProxyReplacement=disabled,operator.replicas=1,serviceAccounts.cilium.name=cilium,serviceAccounts.operator.name=cilium-operator,tunnel=vxlan"
      L95: "cluster.go:125: i  Storing helm values file in kube-system/cilium-cli-helm-values Secret"
      L96: "cluster.go:125: ? Created CA in secret cilium-ca"
      L97: "cluster.go:125: ? Generating certificates for Hubble..."
      L98: "cluster.go:125: ? Creating Service accounts..."
      L99: "cluster.go:125: ? Creating Cluster roles..."
      L100: "cluster.go:125: ? Creating ConfigMap for Cilium version 1.12.5..."
      L101: "cluster.go:125: i  Manual overwrite in ConfigMap: enable-endpoint-routes=true"
      L102: "cluster.go:125: i  Manual overwrite in ConfigMap: cluster-pool-ipv4-cidr=192.168.0.0/17"
      L103: "cluster.go:125: ? Creating Agent DaemonSet..."
      L104: "cluster.go:125: level=warning msg=__spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/mount-cgroup]: deprecated since v1.30; use the ___appArmorProfile___ field instead__ subsys=klog"
      L105: "cluster.go:125: level=warning msg=__spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/apply-sysctl-overwrites]: deprecated since v1.30; use the ___appArmorProfile___ field instead__ subsys=klog"
      L106: "cluster.go:125: level=warning msg=__spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/clean-cilium-state]: deprecated since v1.30; use the ___appArmorProfile___ field instead__ subsys=klog"
      L107: "cluster.go:125: level=warning msg=__spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/cilium-agent]: deprecated since v1.30; use the ___appArmorProfile___ field instead__ subsys=klog"
      L108: "cluster.go:125: ? Creating Operator Deployment..."
      L109: "cluster.go:125: ? Waiting for Cilium to be installed and ready..."
      L110: "cluster.go:125: ? Cilium was successfully installed! Run _cilium status_ to view installation health"
      L111: "cluster.go:125: [Cilium logo]"
      L112: "cluster.go:125:     Cilium:         OK"
      L113: "cluster.go:125:     Operator:       OK"
      L114: "cluster.go:125:     Hubble:         disabled"
      L115: "cluster.go:125:     ClusterMesh:    disabled"
      L116: "cluster.go:125: "
      L117: "cluster.go:125: "
      L118: "cluster.go:125: DaemonSet         cilium             Desired: 1, Ready: 1/1, Available: 1/1"
      L119: "cluster.go:125: Deployment        cilium-operator    Desired: 1, Ready: 1/1, Available: 1/1"
      L120: "cluster.go:125: Containers:       cilium             Running: 1"
      L121: "cluster.go:125:                   cilium-operator    Running: 1"
      L122: "cluster.go:125: Cluster Pods:     2/2 managed by Cilium"
      L123: "cluster.go:125: Image versions    cilium             quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: 1"
      L124: "cluster.go:125:                   cilium-operator    quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: 1"
      L125: "cluster.go:125: W0416 03:30:09.640913    1830 joinconfiguration.go:113] [config] WARNING: Ignored configuration document with GroupVersionKind kubelet.config.k8s.io/v1beta1, Kind=KubeletConfiguration"
      L126: "cluster.go:125:  [WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2"
      L127: "cluster.go:125:  [WARNING Service-Kubelet]: kubelet service is not enabled, please run _systemctl enable kubelet.service_"
      L128: "cluster.go:125: error execution phase kubelet-wait-bootstrap: failed while waiting for TLS bootstrap: context deadline exceeded"
      L129: "cluster.go:125: To see the stack trace of this error execute with --v=5 or higher"
      L130: "kubeadm.go:197: unable to setup cluster: unable to run worker script: Process exited with status 1_"
      L131: " "
  ```

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for qemu, run 2</summary>

  ```
      L1: " Error: _cluster.go:125: I0416 02:33:56.061663    1944 version.go:261] remote version is much newer: v1.35.4; falling back to: stable-1.33"
      L2: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-apiserver:v1.33.11"
      L3: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-controller-manager:v1.33.11"
      L4: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-scheduler:v1.33.11"
      L5: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-proxy:v1.33.11"
      L6: "cluster.go:125: [config/images] Pulled registry.k8s.io/coredns/coredns:v1.12.0"
      L7: "cluster.go:125: [config/images] Pulled registry.k8s.io/pause:3.10"
      L8: "cluster.go:125: [config/images] Pulled registry.k8s.io/etcd:3.5.24-0"
      L9: "cluster.go:125: I0416 02:34:42.024341    2247 version.go:261] remote version is much newer: v1.35.4; falling back to: stable-1.33"
      L10: "cluster.go:125: [init] Using Kubernetes version: v1.33.11"
      L11: "cluster.go:125: [preflight] Running pre-flight checks"
      L12: "cluster.go:125:  [WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2"
      L13: "cluster.go:125:  [WARNING Service-Kubelet]: kubelet service is not enabled, please run _systemctl enable kubelet.service_"
      L14: "cluster.go:125: [preflight] Pulling images required for setting up a Kubernetes cluster"
      L15: "cluster.go:125: [preflight] This might take a minute or two, depending on the speed of your internet connection"
      L16: "cluster.go:125: [preflight] You can also perform this action beforehand using _kubeadm config images pull_"
      L17: "cluster.go:125: W0416 02:34:42.658928    2247 checks.go:843] detected that the sandbox image __registry.k8s.io/pause:3.8__ of the container runtime is inconsistent with that used by kubeadm.It is recommended to use __registry.k8s.io/pause:3.10__ as the CRI sandbox image."
      L18: "cluster.go:125: [certs] Using certificateDir folder __/etc/kubernetes/pki__"
      L19: "cluster.go:125: [certs] Generating __ca__ certificate and key"
      L20: "cluster.go:125: [certs] Generating __apiserver__ certificate and key"
      L21: "cluster.go:125: [certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local localhost] and IPs [10.96.0.1 10.0.0.6]"
      L22: "cluster.go:125: [certs] Generating __apiserver-kubelet-client__ certificate and key"
      L23: "cluster.go:125: [certs] Generating __front-proxy-ca__ certificate and key"
      L24: "cluster.go:125: [certs] Generating __front-proxy-client__ certificate and key"
      L25: "cluster.go:125: [certs] External etcd mode: Skipping etcd/ca certificate authority generation"
      L26: "cluster.go:125: [certs] External etcd mode: Skipping etcd/server certificate generation"
      L27: "cluster.go:125: [certs] External etcd mode: Skipping etcd/peer certificate generation"
      L28: "cluster.go:125: [certs] External etcd mode: Skipping etcd/healthcheck-client certificate generation"
      L29: "cluster.go:125: [certs] External etcd mode: Skipping apiserver-etcd-client certificate generation"
      L30: "cluster.go:125: [certs] Generating __sa__ key and public key"
      L31: "cluster.go:125: [kubeconfig] Using kubeconfig folder __/etc/kubernetes__"
      L32: "cluster.go:125: [kubeconfig] Writing __admin.conf__ kubeconfig file"
      L33: "cluster.go:125: [kubeconfig] Writing __super-admin.conf__ kubeconfig file"
      L34: "cluster.go:125: [kubeconfig] Writing __kubelet.conf__ kubeconfig file"
      L35: "cluster.go:125: [kubeconfig] Writing __controller-manager.conf__ kubeconfig file"
      L36: "cluster.go:125: [kubeconfig] Writing __scheduler.conf__ kubeconfig file"
      L37: "cluster.go:125: [control-plane] Using manifest folder __/etc/kubernetes/manifests__"
      L38: "cluster.go:125: [control-plane] Creating static Pod manifest for __kube-apiserver__"
      L39: "cluster.go:125: [control-plane] Creating static Pod manifest for __kube-controller-manager__"
      L40: "cluster.go:125: [control-plane] Creating static Pod manifest for __kube-scheduler__"
      L41: "cluster.go:125: [kubelet-start] Writing kubelet environment file with flags to file __/var/lib/kubelet/kubeadm-flags.env__"
      L42: "cluster.go:125: [kubelet-start] Writing kubelet configuration to file __/var/lib/kubelet/config.yaml__"
      L43: "cluster.go:125: [kubelet-start] Starting the kubelet"
      L44: "cluster.go:125: [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory __/etc/kubernetes/manifests__"
      L45: "cluster.go:125: [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s"
      L46: "cluster.go:125: [kubelet-check] The kubelet is healthy after 1.504775216s"
      L47: "cluster.go:125: [control-plane-check] Waiting for healthy control plane components. This can take up to 30m0s"
      L48: "cluster.go:125: [control-plane-check] Checking kube-apiserver at https://10.0.0.6:6443/livez"
      L49: "cluster.go:125: [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz"
      L50: "cluster.go:125: [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez"
      L51: "cluster.go:125: [control-plane-check] kube-controller-manager is healthy after 2.874972164s"
      L52: "cluster.go:125: [control-plane-check] kube-scheduler is healthy after 4.846042224s"
      L53: "cluster.go:125: [control-plane-check] kube-apiserver is healthy after 19.514692362s"
      L54: "cluster.go:125: [upload-config] Storing the configuration used in ConfigMap __kubeadm-config__ in the __kube-system__ Namespace"
      L55: "cluster.go:125: [kubelet] Creating a ConfigMap __kubelet-config__ in namespace kube-system with the configuration for the kubelets in the cluster"
      L56: "cluster.go:125: [upload-certs] Skipping phase. Please see --upload-certs"
      L57: "cluster.go:125: [mark-control-plane] Marking the node localhost as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]"
      L58: "cluster.go:125: [mark-control-plane] Marking the node localhost as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]"
      L59: "cluster.go:125: [bootstrap-token] Using token: 0kx8rl.zesmt8upfkiduay8"
      L60: "cluster.go:125: [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles"
      L61: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes"
      L62: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials"
      L63: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token"
      L64: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster"
      L65: "cluster.go:125: [bootstrap-token] Creating the __cluster-info__ ConfigMap in the __kube-public__ namespace"
      L66: "cluster.go:125: [kubelet-finalize] Updating __/etc/kubernetes/kubelet.conf__ to point to a rotatable kubelet client certificate and key"
      L67: "cluster.go:125: [addons] Applied essential addon: CoreDNS"
      L68: "cluster.go:125: [addons] Applied essential addon: kube-proxy"
      L69: "cluster.go:125: "
      L70: "cluster.go:125: Your Kubernetes control-plane has initialized successfully!"
      L71: "cluster.go:125: "
      L72: "cluster.go:125: To start using your cluster, you need to run the following as a regular user:"
      L73: "cluster.go:125: "
      L74: "cluster.go:125:   mkdir -p $HOME/.kube"
      L75: "cluster.go:125:   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config"
      L76: "cluster.go:125:   sudo chown $(id -u):$(id -g) $HOME/.kube/config"
      L77: "cluster.go:125: "
      L78: "cluster.go:125: Alternatively, if you are the root user, you can run:"
      L79: "cluster.go:125: "
      L80: "cluster.go:125:   export KUBECONFIG=/etc/kubernetes/admin.conf"
      L81: "cluster.go:125: "
      L82: "cluster.go:125: You should now deploy a pod network to the cluster."
      L83: "cluster.go:125: Run __kubectl apply -f [podnetwork].yaml__ with one of the options listed at:"
      L84: "cluster.go:125:   https://kubernetes.io/docs/concepts/cluster-administration/addons/"
      L85: "cluster.go:125: "
      L86: "cluster.go:125: Then you can join any number of worker nodes by running the following on each as root:"
      L87: "cluster.go:125: "
      L88: "cluster.go:125: kubeadm join 10.0.0.6:6443 --token 0kx8rl.zesmt8upfkiduay8 \"
      L89: "cluster.go:125:  --discovery-token-ca-cert-hash sha256:275aba19b8cace61f476150e24a28641ea323ce75514a0c9a6a1bcf14e76853c "
      L90: "cluster.go:125: i  Using Cilium version 1.12.5"
      L91: "cluster.go:125: ? Auto-detected cluster name: kubernetes"
      L92: "cluster.go:125: ? Auto-detected datapath mode: tunnel"
      L93: "cluster.go:125: ? Auto-detected kube-proxy has been installed"
      L94: "cluster.go:125: i  helm template --namespace kube-system cilium cilium/cilium --version 1.12.5 --set cluster.id=0,cluster.name=kubernetes,encryption.nodeEncryption=false,extraConfig.cluster-pool-ipv4-cidr=192.168.0.0/17,extraConfig.enable-endpoint-routes=true,kubeProxyReplacement=disabled,operator.replicas=1,serviceAccounts.cilium.name=cilium,serviceAccounts.operator.name=cilium-operator,tunnel=vxlan"
      L95: "cluster.go:125: i  Storing helm values file in kube-system/cilium-cli-helm-values Secret"
      L96: "cluster.go:125: ? Created CA in secret cilium-ca"
      L97: "cluster.go:125: ? Generating certificates for Hubble..."
      L98: "cluster.go:125: ? Creating Service accounts..."
      L99: "cluster.go:125: ? Creating Cluster roles..."
      L100: "cluster.go:125: ? Creating ConfigMap for Cilium version 1.12.5..."
      L101: "cluster.go:125: i  Manual overwrite in ConfigMap: enable-endpoint-routes=true"
      L102: "cluster.go:125: i  Manual overwrite in ConfigMap: cluster-pool-ipv4-cidr=192.168.0.0/17"
      L103: "cluster.go:125: ? Creating Agent DaemonSet..."
      L104: "cluster.go:125: level=warning msg=__spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/mount-cgroup]: deprecated since v1.30; use the ___appArmorProfile___ field instead__ subsys=klog"
      L105: "cluster.go:125: level=warning msg=__spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/apply-sysctl-overwrites]: deprecated since v1.30; use the ___appArmorProfile___ field instead__ subsys=klog"
      L106: "cluster.go:125: level=warning msg=__spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/clean-cilium-state]: deprecated since v1.30; use the ___appArmorProfile___ field instead__ subsys=klog"
      L107: "cluster.go:125: level=warning msg=__spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/cilium-agent]: deprecated since v1.30; use the ___appArmorProfile___ field instead__ subsys=klog"
      L108: "cluster.go:125: ? Creating Operator Deployment..."
      L109: "cluster.go:125: ? Waiting for Cilium to be installed and ready..."
      L110: "cluster.go:125: ? Cilium was successfully installed! Run _cilium status_ to view installation health"
      L111: "cluster.go:125: [Cilium logo]"
      L112: "cluster.go:125:     Cilium:         OK"
      L113: "cluster.go:125:     Operator:       OK"
      L114: "cluster.go:125:     Hubble:         disabled"
      L115: "cluster.go:125:     ClusterMesh:    disabled"
      L116: "cluster.go:125: "
      L117: "cluster.go:125: "
      L118: "cluster.go:125: Deployment        cilium-operator    Desired: 1, Ready: 1/1, Available: 1/1"
      L119: "cluster.go:125: DaemonSet         cilium             Desired: 1, Ready: 1/1, Available: 1/1"
      L120: "cluster.go:125: Containers:       cilium             Running: 1"
      L121: "cluster.go:125:                   cilium-operator    Running: 1"
      L122: "cluster.go:125: Cluster Pods:     2/2 managed by Cilium"
      L123: "cluster.go:125: Image versions    cilium             quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: 1"
      L124: "cluster.go:125:                   cilium-operator    quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: 1"
      L125: "cluster.go:125: W0416 02:44:07.469954    1795 joinconfiguration.go:113] [config] WARNING: Ignored configuration document with GroupVersionKind kubelet.config.k8s.io/v1beta1, Kind=KubeletConfiguration"
      L126: "cluster.go:125:  [WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2"
      L127: "cluster.go:125:  [WARNING Service-Kubelet]: kubelet service is not enabled, please run _systemctl enable kubelet.service_"
      L128: "--- FAIL: kubeadm.v1.33.8.cilium.cgroupv1.base/nginx_deployment (330.89s)"
      L129: "kubeadm.go:232: nginx is not deployed: ready replicas should be equal to 1: null"
      L130: "--- FAIL: kubeadm.v1.33.8.cilium.cgroupv1.base/NFS_deployment (447.77s)"
      L131: "cluster.go:125: jq: error (at <stdin>:122): Cannot iterate over null (null)"
      L132: "cluster.go:125: jq: error (at <stdin>:122): Cannot iterate over null (null)"
      L133: "cluster.go:125: jq: error (at <stdin>:122): Cannot iterate over null (null)"
      L134: "cluster.go:125: jq: error (at <stdin>:122): Cannot iterate over null (null)"
      L135: "cluster.go:125: jq: error (at <stdin>:122): Cannot iterate over null (null)"
      L136: "cluster.go:125: jq: error (at <stdin>:122): Cannot iterate over null (null)"
      L137: "cluster.go:125: jq: error (at <stdin>:122): Cannot iterate over null (null)"
      L138: "cluster.go:125: jq: error (at <stdin>:122): Cannot iterate over null (null)"
      L139: "cluster.go:125: jq: error (at <stdin>:122): Cannot iterate over null (null)"
      L140: "cluster.go:125: jq: error (at <stdin>:122): Cannot iterate over null (null)"
      L141: "cluster.go:125: jq: error (at <stdin>:122): Cannot iterate over null (null)"
      L142: "cluster.go:125: jq: error (at <stdin>:122): Cannot iterate over null (null)"
      L143: "cluster.go:125: jq: error (at <stdin>:122): Cannot iterate over null (null)"
      L144: "cluster.go:125: jq: error (at <stdin>:122): Cannot iterate over null (null)"
      L145: "cluster.go:125: jq: error (at <stdin>:122): Cannot iterate over null (null)"
      L146: "cluster.go:125: jq: error (at <stdin>:122): Cannot iterate over null (null)"
      L147: "cluster.go:125: jq: error (at <stdin>:122): Cannot iterate over null (null)"
      L148: "cluster.go:125: jq: error (at <stdin>:122): Cannot iterate over null (null)"
      L149: "cluster.go:125: jq: error (at <stdin>:122): Cannot iterate over null (null)"
      L150: "cluster.go:125: jq: error (at <stdin>:122): Cannot iterate over null (null)"
      L151: "cluster.go:125: jq: error (at <stdin>:122): Cannot iterate over null (null)"
      L152: "cluster.go:125: jq: error (at <stdin>:122): Cannot iterate over null (null)"
      L153: "cluster.go:125: jq: error (at <stdin>:122): Cannot iterate over null (null)"
      L154: "cluster.go:125: jq: error (at <stdin>:122): Cannot iterate over null (null)"
      L155: "cluster.go:125: jq: error (at <stdin>:122): Cannot iterate over null (null)"
      L156: "cluster.go:125: jq: error (at <stdin>:122): Cannot iterate over null (null)"
      L157: "cluster.go:125: jq: error (at <stdin>:122): Cannot iterate over null (null)"
      L158: "cluster.go:125: jq: error (at <stdin>:122): Cannot iterate over null (null)"
      L159: "cluster.go:125: jq: error (at <stdin>:122): Cannot iterate over null (null)"
      L160: "cluster.go:125: jq: error (at <stdin>:122): Cannot iterate over null (null)"
      L161: "cluster.go:125: jq: error (at <stdin>:122): Cannot iterate over null (null)"
      L162: "cluster.go:125: jq: error (at <stdin>:122): Cannot iterate over null (null)"
      L163: "cluster.go:125: jq: error (at <stdin>:122): Cannot iterate over null (null)"
      L164: "cluster.go:125: jq: error (at <stdin>:122): Cannot iterate over null (null)"
      L165: "cluster.go:125: jq: error (at <stdin>:122): Cannot iterate over null (null)"
      L166: "cluster.go:125: jq: error (at <stdin>:122): Cannot iterate over null (null)"
      L167: "kubeadm.go:264: nginx pod with NFS is not deployed: getting container status: Process exited with status 5"
      L168: "--- FAIL: kubeadm.v1.33.8.cilium.cgroupv1.base/IPSec_encryption (86.76s)"
      L169: "cluster.go:125: level=warning msg=__spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/mount-cgroup]: deprecated since v1.30; use the ___appArmorProfile___ field instead__ subsys=klog"
      L170: "cluster.go:125: level=warning msg=__spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/apply-sysctl-overwrites]: deprecated since v1.30; use the ___appArmorProfile___ field instead__ subsys=klog"
      L171: "cluster.go:125: level=warning msg=__spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/clean-cilium-state]: deprecated since v1.30; use the ___appArmorProfile___ field instead__ subsys=klog"
      L172: "cluster.go:125: level=warning msg=__spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/cilium-agent]: deprecated since v1.30; use the ___appArmorProfile___ field instead__ subsys=klog"
      L173: "cluster.go:125: Error: Unable to determine status:  timeout while waiting for status to become successful: context deadline exceeded"
      L174: "cluster.go:145: __/opt/bin/cilium status --wait --wait-duration 1m__ failed: output [Cilium logo]"
      L175: "    Cilium:         2 errors"
      L176: "    Operator:       1 errors, 1 warnings"
      L177: "    Hubble:         disabled"
      L178: "    ClusterMesh:    disabled"
      L179: ""
      L180: ""
      L181: "Deployment        cilium-operator    Desired: 1, Unavailable: 1/1"
      L182: "DaemonSet         cilium             Desired: 2, Ready: 1/2, Available: 1/2, Unavailable: 1/2"
      L183: "Containers:       cilium             Running: 2"
      L184: "cilium-operator    Pending: 1"
      L185: "Cluster Pods:     3/5 managed by Cilium"
      L186: "Image versions    cilium             quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: 2"
      L187: "cilium-operator    quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: 1"
      L188: "Errors:           cilium             cilium-95kf6                        unable to retrieve cilium status: command terminated with exit code 1"
      L189: "cilium             cilium                              1 pods of DaemonSet cilium are not ready"
      L190: "cilium-operator    cilium-operator                     1 pods of Deployment cilium-operator are not ready"
      L191: "Warnings:         cilium-operator    cilium-operator-6c4d7847fc-h92cg    pod is pending, status Process exited with status 1_"
      L192: " "
  ```

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for qemu, run 1</summary>

  ```
      L1: " Error: _cluster.go:125: I0416 01:39:37.810935    1934 version.go:261] remote version is much newer: v1.35.4; falling back to: stable-1.33"
      L2: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-apiserver:v1.33.11"
      L3: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-controller-manager:v1.33.11"
      L4: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-scheduler:v1.33.11"
      L5: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-proxy:v1.33.11"
      L6: "cluster.go:125: [config/images] Pulled registry.k8s.io/coredns/coredns:v1.12.0"
      L7: "cluster.go:125: [config/images] Pulled registry.k8s.io/pause:3.10"
      L8: "cluster.go:125: [config/images] Pulled registry.k8s.io/etcd:3.5.24-0"
      L9: "cluster.go:125: I0416 01:39:46.528034    2159 version.go:261] remote version is much newer: v1.35.4; falling back to: stable-1.33"
      L10: "cluster.go:125: [init] Using Kubernetes version: v1.33.11"
      L11: "cluster.go:125: [preflight] Running pre-flight checks"
      L12: "cluster.go:125:  [WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2"
      L13: "cluster.go:125:  [WARNING Service-Kubelet]: kubelet service is not enabled, please run _systemctl enable kubelet.service_"
      L14: "cluster.go:125: [preflight] Pulling images required for setting up a Kubernetes cluster"
      L15: "cluster.go:125: [preflight] This might take a minute or two, depending on the speed of your internet connection"
      L16: "cluster.go:125: [preflight] You can also perform this action beforehand using _kubeadm config images pull_"
      L17: "cluster.go:125: W0416 01:39:46.788121    2159 checks.go:843] detected that the sandbox image __registry.k8s.io/pause:3.8__ of the container runtime is inconsistent with that used by kubeadm.It is recommended to use __registry.k8s.io/pause:3.10__ as the CRI sandbox image."
      L18: "cluster.go:125: [certs] Using certificateDir folder __/etc/kubernetes/pki__"
      L19: "cluster.go:125: [certs] Generating __ca__ certificate and key"
      L20: "cluster.go:125: [certs] Generating __apiserver__ certificate and key"
      L21: "cluster.go:125: [certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local localhost] and IPs [10.96.0.1 10.0.0.140]"
      L22: "cluster.go:125: [certs] Generating __apiserver-kubelet-client__ certificate and key"
      L23: "cluster.go:125: [certs] Generating __front-proxy-ca__ certificate and key"
      L24: "cluster.go:125: [certs] Generating __front-proxy-client__ certificate and key"
      L25: "cluster.go:125: [certs] External etcd mode: Skipping etcd/ca certificate authority generation"
      L26: "cluster.go:125: [certs] External etcd mode: Skipping etcd/server certificate generation"
      L27: "cluster.go:125: [certs] External etcd mode: Skipping etcd/peer certificate generation"
      L28: "cluster.go:125: [certs] External etcd mode: Skipping etcd/healthcheck-client certificate generation"
      L29: "cluster.go:125: [certs] External etcd mode: Skipping apiserver-etcd-client certificate generation"
      L30: "cluster.go:125: [certs] Generating __sa__ key and public key"
      L31: "cluster.go:125: [kubeconfig] Using kubeconfig folder __/etc/kubernetes__"
      L32: "cluster.go:125: [kubeconfig] Writing __admin.conf__ kubeconfig file"
      L33: "cluster.go:125: [kubeconfig] Writing __super-admin.conf__ kubeconfig file"
      L34: "cluster.go:125: [kubeconfig] Writing __kubelet.conf__ kubeconfig file"
      L35: "cluster.go:125: [kubeconfig] Writing __controller-manager.conf__ kubeconfig file"
      L36: "cluster.go:125: [kubeconfig] Writing __scheduler.conf__ kubeconfig file"
      L37: "cluster.go:125: [control-plane] Using manifest folder __/etc/kubernetes/manifests__"
      L38: "cluster.go:125: [control-plane] Creating static Pod manifest for __kube-apiserver__"
      L39: "cluster.go:125: [control-plane] Creating static Pod manifest for __kube-controller-manager__"
      L40: "cluster.go:125: [control-plane] Creating static Pod manifest for __kube-scheduler__"
      L41: "cluster.go:125: [kubelet-start] Writing kubelet environment file with flags to file __/var/lib/kubelet/kubeadm-flags.env__"
      L42: "cluster.go:125: [kubelet-start] Writing kubelet configuration to file __/var/lib/kubelet/config.yaml__"
      L43: "cluster.go:125: [kubelet-start] Starting the kubelet"
      L44: "cluster.go:125: [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory __/etc/kubernetes/manifests__"
      L45: "cluster.go:125: [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s"
      L46: "cluster.go:125: [kubelet-check] The kubelet is healthy after 502.150305ms"
      L47: "cluster.go:125: [control-plane-check] Waiting for healthy control plane components. This can take up to 30m0s"
      L48: "cluster.go:125: [control-plane-check] Checking kube-apiserver at https://10.0.0.140:6443/livez"
      L49: "cluster.go:125: [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz"
      L50: "cluster.go:125: [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez"
      L51: "cluster.go:125: [control-plane-check] kube-controller-manager is healthy after 1.506186567s"
      L52: "cluster.go:125: [control-plane-check] kube-scheduler is healthy after 1.982374138s"
      L53: "cluster.go:125: [control-plane-check] kube-apiserver is healthy after 4.001826199s"
      L54: "cluster.go:125: [upload-config] Storing the configuration used in ConfigMap __kubeadm-config__ in the __kube-system__ Namespace"
      L55: "cluster.go:125: [kubelet] Creating a ConfigMap __kubelet-config__ in namespace kube-system with the configuration for the kubelets in the cluster"
      L56: "cluster.go:125: [upload-certs] Skipping phase. Please see --upload-certs"
      L57: "cluster.go:125: [mark-control-plane] Marking the node localhost as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]"
      L58: "cluster.go:125: [mark-control-plane] Marking the node localhost as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]"
      L59: "cluster.go:125: [bootstrap-token] Using token: otncwz.f4cdy4m7u9iw8jek"
      L60: "cluster.go:125: [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles"
      L61: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes"
      L62: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials"
      L63: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token"
      L64: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster"
      L65: "cluster.go:125: [bootstrap-token] Creating the __cluster-info__ ConfigMap in the __kube-public__ namespace"
      L66: "cluster.go:125: [kubelet-finalize] Updating __/etc/kubernetes/kubelet.conf__ to point to a rotatable kubelet client certificate and key"
      L67: "cluster.go:125: [addons] Applied essential addon: CoreDNS"
      L68: "cluster.go:125: [addons] Applied essential addon: kube-proxy"
      L69: "cluster.go:125: "
      L70: "cluster.go:125: Your Kubernetes control-plane has initialized successfully!"
      L71: "cluster.go:125: "
      L72: "cluster.go:125: To start using your cluster, you need to run the following as a regular user:"
      L73: "cluster.go:125: "
      L74: "cluster.go:125:   mkdir -p $HOME/.kube"
      L75: "cluster.go:125:   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config"
      L76: "cluster.go:125:   sudo chown $(id -u):$(id -g) $HOME/.kube/config"
      L77: "cluster.go:125: "
      L78: "cluster.go:125: Alternatively, if you are the root user, you can run:"
      L79: "cluster.go:125: "
      L80: "cluster.go:125:   export KUBECONFIG=/etc/kubernetes/admin.conf"
      L81: "cluster.go:125: "
      L82: "cluster.go:125: You should now deploy a pod network to the cluster."
      L83: "cluster.go:125: Run __kubectl apply -f [podnetwork].yaml__ with one of the options listed at:"
      L84: "cluster.go:125:   https://kubernetes.io/docs/concepts/cluster-administration/addons/"
      L85: "cluster.go:125: "
      L86: "cluster.go:125: Then you can join any number of worker nodes by running the following on each as root:"
      L87: "cluster.go:125: "
      L88: "cluster.go:125: kubeadm join 10.0.0.140:6443 --token otncwz.f4cdy4m7u9iw8jek _"
      L89: "cluster.go:125:  --discovery-token-ca-cert-hash sha256:d6473390c7b8221bfd2e312e396c9a4ccb4d3bbf0ee7c0b167b2f5eeff175cc8 "
      L90: "cluster.go:125: i  Using Cilium version 1.12.5"
      L91: "cluster.go:125: ? Auto-detected cluster name: kubernetes"
      L92: "cluster.go:125: ? Auto-detected datapath mode: tunnel"
      L93: "cluster.go:125: ? Auto-detected kube-proxy has been installed"
      L94: "cluster.go:125: i  helm template --namespace kube-system cilium cilium/cilium --version 1.12.5 --set cluster.id=0,cluster.name=kubernetes,encryption.nodeEncryption=false,extraConfig.cluster-pool-ipv4-cidr=192.168.0.0/17,extraConfig.enable-endpoint-routes=true,kubeProxyReplacement=disabled,operator.replicas=1,serviceAccounts.cilium.name=cilium,serviceAccounts.operator.name=cilium-operator,tunnel=vxlan"
      L95: "cluster.go:125: i  Storing helm values file in kube-system/cilium-cli-helm-values Secret"
      L96: "cluster.go:125: ? Created CA in secret cilium-ca"
      L97: "cluster.go:125: ? Generating certificates for Hubble..."
      L98: "cluster.go:125: ? Creating Service accounts..."
      L99: "cluster.go:125: ? Creating Cluster roles..."
      L100: "cluster.go:125: ? Creating ConfigMap for Cilium version 1.12.5..."
      L101: "cluster.go:125: i  Manual overwrite in ConfigMap: enable-endpoint-routes=true"
      L102: "cluster.go:125: i  Manual overwrite in ConfigMap: cluster-pool-ipv4-cidr=192.168.0.0/17"
      L103: "cluster.go:125: ? Creating Agent DaemonSet..."
      L104: "cluster.go:125: level=warning msg=__spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/mount-cgroup]: deprecated since v1.30; use the ___appArmorProfile___ field instead__ subsys=klog"
      L105: "cluster.go:125: level=warning msg=__spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/apply-sysctl-overwrites]: deprecated since v1.30; use the ___appArmorProfile___ field instead__ subsys=klog"
      L106: "cluster.go:125: level=warning msg=__spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/clean-cilium-state]: deprecated since v1.30; use the ___appArmorProfile___ field instead__ subsys=klog"
      L107: "cluster.go:125: level=warning msg=__spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/cilium-agent]: deprecated since v1.30; use the ___appArmorProfile___ field instead__ subsys=klog"
      L108: "cluster.go:125: ? Creating Operator Deployment..."
      L109: "cluster.go:125: ? Waiting for Cilium to be installed and ready..."
      L110: "cluster.go:125: ? Cilium was successfully installed! Run _cilium status_ to view installation health"
      L111: "cluster.go:125: (Cilium ASCII-art status banner; ANSI color escapes stripped)"
      L112: "cluster.go:125:     Cilium:         OK"
      L113: "cluster.go:125:     Operator:       OK"
      L114: "cluster.go:125:     Hubble:         disabled"
      L115: "cluster.go:125:     ClusterMesh:    disabled"
      L116: "cluster.go:125: "
      L117: "cluster.go:125: "
      L118: "cluster.go:125: Deployment       cilium-operator    "
      L119: "cluster.go:125: DaemonSet        cilium             "
      L120: "cluster.go:125: Containers:      cilium             "
      L121: "cluster.go:125:                  cilium-operator    "
      L122: "cluster.go:125: Cluster Pods:    0/0 managed by Cilium"
      L123: "cluster.go:125: W0416 01:40:22.815067    1773 joinconfiguration.go:113] [config] WARNING: Ignored configuration document with GroupVersionKind kubelet.config.k8s.io/v1beta1, Kind=KubeletConfiguration"
      L124: "cluster.go:125:  [WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2"
      L125: "cluster.go:125:  [WARNING Service-Kubelet]: kubelet service is not enabled, please run _systemctl enable kubelet.service_"
      L126: "--- FAIL: kubeadm.v1.33.8.cilium.cgroupv1.base/nginx_deployment (189.31s)"
      L127: "kubeadm.go:232: nginx is not deployed: ready replicas should be equal to 1: null"
      L128: "--- FAIL: kubeadm.v1.33.8.cilium.cgroupv1.base/NFS_deployment (204.82s)"
      L129: "cluster.go:125: jq: error (at <stdin>:122): Cannot iterate over null (null)"
      L130-L164: (same jq error line repeated 35 more times)
      L165: "kubeadm.go:264: nginx pod with NFS is not deployed: getting container status: Process exited with status 5"
      L166: "cluster.go:125: level=warning msg=__spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/mount-cgroup]: deprecated since v1.30; use the ___appArmorProfile___ field instead__ subsys=klog"
      L167: "cluster.go:125: level=warning msg=__spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/apply-sysctl-overwrites]: deprecated since v1.30; use the ___appArmorProfile___ field instead__ subsys=klog"
      L168: "cluster.go:125: level=warning msg=__spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/clean-cilium-state]: deprecated since v1.30; use the ___appArmorProfile___ field instead__ subsys=klog"
      L169: "cluster.go:125: level=warning msg=__spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/cilium-agent]: deprecated since v1.30; use the ___appArmorProfile___ field instead__ subsys=klog_"
      L170: " "
  ```


</details>
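The wall of `jq: error (at <stdin>:122): Cannot iterate over null (null)` lines in the NFS_deployment failure above is jq's standard runtime error when an iteration filter is applied to a field that came back null, and jq exits with status 5 on runtime errors, matching the final `Process exited with status 5`. A minimal sketch of the failure shape and a defensive rewrite (illustrative only; the test's actual jq filter is not shown in the log, and `.items[]` here is an assumed example):

```shell
# Failure shape from the log: iterating a null field is a jq runtime
# error ("Cannot iterate over null") and jq exits non-zero.
echo '{"items": null}' | jq '.items[]'

# Defensive variant: fall back to an empty array when the field is
# null, so a not-yet-populated API response yields no output rather
# than an error.
echo '{"items": null}' | jq '.items // [] | .[]'
```

jq's `//` alternative operator substitutes the right-hand value whenever the left side evaluates to null or false, which makes pipelines tolerant of resources that have not been created yet.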


🟢 ok **kubeadm.v1.33.8.flannel.base**; Succeeded: qemu (1)

🟢 ok **kubeadm.v1.33.8.flannel.cgroupv1.base**; Succeeded: qemu (1)

❌ not ok **kubeadm.v1.34.4.calico.base**; Failed: qemu (1, 2, 3, 4, 5)

<details>

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for qemu, run 5</summary>

  ```
      L1: " Error: _cluster.go:125: I0416 04:18:43.564110    1853 version.go:260] remote version is much newer: v1.35.4; falling back to: stable-1.34"
      L2: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-apiserver:v1.34.7"
      L3: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-controller-manager:v1.34.7"
      L4: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-scheduler:v1.34.7"
      L5: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-proxy:v1.34.7"
      L6: "cluster.go:125: [config/images] Pulled registry.k8s.io/coredns/coredns:v1.12.1"
      L7: "cluster.go:125: [config/images] Pulled registry.k8s.io/pause:3.10.1"
      L8: "cluster.go:125: [config/images] Pulled registry.k8s.io/etcd:3.6.5-0"
      L9: "cluster.go:125: W0416 04:24:30.756016    2345 version.go:108] could not fetch a Kubernetes version from the internet: unable to get URL __https://dl.k8s.io/release/stable-1.txt__: Get __https://dl.k8s.io/release/stable-1.txt__: context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
      L10: "cluster.go:125: W0416 04:24:30.766175    2345 version.go:109] falling back to the local client version: v1.34.4"
      L11: "cluster.go:125: [init] Using Kubernetes version: v1.34.4"
      L12: "cluster.go:125: [preflight] Running pre-flight checks"
      L13: "cluster.go:125:  [WARNING Service-Kubelet]: kubelet service is not enabled, please run _systemctl enable kubelet.service_"
      L14: "cluster.go:125: [preflight] Pulling images required for setting up a Kubernetes cluster"
      L15: "cluster.go:125: [preflight] This might take a minute or two, depending on the speed of your internet connection"
      L16: "cluster.go:125: [preflight] You can also perform this action beforehand using _kubeadm config images pull_"
      L17: "cluster.go:125: W0416 04:24:49.244709    2345 checks.go:827] detected that the sandbox image __registry.k8s.io/pause:3.8__ of the container runtime is inconsistent with that used by kubeadm. It is recommended to use __registry.k8s.io/pause:3.10.1__ as the CRI sandbox image."
      L18: "cluster.go:125: [certs] Using certificateDir folder __/etc/kubernetes/pki__"
      L19: "cluster.go:125: [certs] Generating __ca__ certificate and key"
      L20: "cluster.go:125: [certs] Generating __apiserver__ certificate and key"
      L21: "cluster.go:125: [certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local localhost] and IPs [10.96.0.1 10.0.0.7]"
      L22: "cluster.go:125: [certs] Generating __apiserver-kubelet-client__ certificate and key"
      L23: "cluster.go:125: [certs] Generating __front-proxy-ca__ certificate and key"
      L24: "cluster.go:125: [certs] Generating __front-proxy-client__ certificate and key"
      L25: "cluster.go:125: [certs] External etcd mode: Skipping etcd/ca certificate authority generation"
      L26: "cluster.go:125: [certs] External etcd mode: Skipping etcd/server certificate generation"
      L27: "cluster.go:125: [certs] External etcd mode: Skipping etcd/peer certificate generation"
      L28: "cluster.go:125: [certs] External etcd mode: Skipping etcd/healthcheck-client certificate generation"
      L29: "cluster.go:125: [certs] External etcd mode: Skipping apiserver-etcd-client certificate generation"
      L30: "cluster.go:125: [certs] Generating __sa__ key and public key"
      L31: "cluster.go:125: [kubeconfig] Using kubeconfig folder __/etc/kubernetes__"
      L32: "cluster.go:125: [kubeconfig] Writing __admin.conf__ kubeconfig file"
      L33: "cluster.go:125: [kubeconfig] Writing __super-admin.conf__ kubeconfig file"
      L34: "cluster.go:125: [kubeconfig] Writing __kubelet.conf__ kubeconfig file"
      L35: "cluster.go:125: [kubeconfig] Writing __controller-manager.conf__ kubeconfig file"
      L36: "cluster.go:125: [kubeconfig] Writing __scheduler.conf__ kubeconfig file"
      L37: "cluster.go:125: [control-plane] Using manifest folder __/etc/kubernetes/manifests__"
      L38: "cluster.go:125: [control-plane] Creating static Pod manifest for __kube-apiserver__"
      L39: "cluster.go:125: [control-plane] Creating static Pod manifest for __kube-controller-manager__"
      L40: "cluster.go:125: [control-plane] Creating static Pod manifest for __kube-scheduler__"
      L41: "cluster.go:125: [kubelet-start] Writing kubelet environment file with flags to file __/var/lib/kubelet/kubeadm-flags.env__"
      L42: "cluster.go:125: [kubelet-start] Writing kubelet configuration to file __/var/lib/kubelet/instance-config.yaml__"
      L43: "cluster.go:125: [patches] Applied patch of type __application/strategic-merge-patch+json__ to target __kubeletconfiguration__"
      L44: "cluster.go:125: [kubelet-start] Writing kubelet configuration to file __/var/lib/kubelet/config.yaml__"
      L45: "cluster.go:125: [kubelet-start] Starting the kubelet"
      L46: "cluster.go:125: [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory __/etc/kubernetes/manifests__"
      L47: "cluster.go:125: [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s"
      L48: "cluster.go:125: [kubelet-check] The kubelet is healthy after 4.030182742s"
      L49: "cluster.go:125: [control-plane-check] Waiting for healthy control plane components. This can take up to 30m0s"
      L50: "cluster.go:125: [control-plane-check] Checking kube-apiserver at https://10.0.0.7:6443/livez"
      L51: "cluster.go:125: [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz"
      L52: "cluster.go:125: [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez"
      L53: "cluster.go:125: [control-plane-check] kube-controller-manager is healthy after 20.762109113s"
      L54: "cluster.go:125: [control-plane-check] kube-scheduler is healthy after 36.818197357s"
      L55: "cluster.go:125: E0416 04:34:41.297265    2345 request.go:1196] __Unexpected error when reading response body__ err=__context deadline exceeded (Client.Timeout or context cancellation while reading body)__"
      L56: "cluster.go:125: E0416 04:45:49.077856    2345 request.go:1196] __Unexpected error when reading response body__ err=__context deadline exceeded (Client.Timeout or context cancellation while reading body)__"
      L57: "cluster.go:125: [control-plane-check] kube-apiserver is healthy after 16m46.235528367s"
      L58: "cluster.go:125: [upload-config] Storing the configuration used in ConfigMap __kubeadm-config__ in the __kube-system__ Namespace"
      L59: "cluster.go:125: [kubelet] Creating a ConfigMap __kubelet-config__ in namespace kube-system with the configuration for the kubelets in the cluster"
      L60: "cluster.go:125: [upload-certs] Skipping phase. Please see --upload-certs"
      L61: "cluster.go:125: [mark-control-plane] Marking the node localhost as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]"
      L62: "cluster.go:125: [mark-control-plane] Marking the node localhost as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]"
      L63: "cluster.go:125: [bootstrap-token] Using token: rnv0p9.wxk9p48f0a4g8h8r"
      L64: "cluster.go:125: [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles"
      L65: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes"
      L66: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials"
      L67: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token"
      L68: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster"
      L69: "cluster.go:125: [bootstrap-token] Creating the __cluster-info__ ConfigMap in the __kube-public__ namespace"
      L70: "cluster.go:125: [kubelet-finalize] Updating __/etc/kubernetes/kubelet.conf__ to point to a rotatable kubelet client certificate and key"
      L71: "cluster.go:125: [addons] Applied essential addon: CoreDNS"
      L72: "cluster.go:125: [addons] Applied essential addon: kube-proxy"
      L73: "cluster.go:125: "
      L74: "cluster.go:125: Your Kubernetes control-plane has initialized successfully!"
      L75: "cluster.go:125: "
      L76: "cluster.go:125: To start using your cluster, you need to run the following as a regular user:"
      L77: "cluster.go:125: "
      L78: "cluster.go:125:   mkdir -p $HOME/.kube"
      L79: "cluster.go:125:   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config"
      L80: "cluster.go:125:   sudo chown $(id -u):$(id -g) $HOME/.kube/config"
      L81: "cluster.go:125: "
      L82: "cluster.go:125: Alternatively, if you are the root user, you can run:"
      L83: "cluster.go:125: "
      L84: "cluster.go:125:   export KUBECONFIG=/etc/kubernetes/admin.conf"
      L85: "cluster.go:125: "
      L86: "cluster.go:125: You should now deploy a pod network to the cluster."
      L87: "cluster.go:125: Run __kubectl apply -f [podnetwork].yaml__ with one of the options listed at:"
      L88: "cluster.go:125:   https://kubernetes.io/docs/concepts/cluster-administration/addons/"
      L89: "cluster.go:125: "
      L90: "cluster.go:125: Then you can join any number of worker nodes by running the following on each as root:"
      L91: "cluster.go:125: "
      L92: "cluster.go:125: kubeadm join 10.0.0.7:6443 --token rnv0p9.wxk9p48f0a4g8h8r _"
      L93: "cluster.go:125:  --discovery-token-ca-cert-hash sha256:9f59c247516226d62313d719764bf5870190ebdcd453b8ecdf7916249158d79f "
      L94: "cluster.go:125: namespace/tigera-operator created"
      L95: "cluster.go:125: serviceaccount/tigera-operator created"
      L96: "cluster.go:125: clusterrole.rbac.authorization.k8s.io/tigera-operator-secrets created"
      L97: "cluster.go:125: clusterrole.rbac.authorization.k8s.io/tigera-operator created"
      L98: "cluster.go:125: clusterrolebinding.rbac.authorization.k8s.io/tigera-operator created"
      L99: "cluster.go:125: rolebinding.rbac.authorization.k8s.io/tigera-operator-secrets created"
      L100: "cluster.go:125: deployment.apps/tigera-operator created"
      L101: "cluster.go:125: error: timed out waiting for the condition"
      L102: "kubeadm.go:197: unable to setup cluster: unable to run master script: Process exited with status 1_"
      L103: " "
  ```

</details>

<details>

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for qemu, run 4</summary>

  ```
      L1: " Error: _cluster.go:125: W0416 03:42:08.365625    1873 version.go:108] could not fetch a Kubernetes version from the internet: unable to get URL __https://dl.k8s.io/release/stable-1.txt__: Get __https://dl.k8s.io/release/stable-1.txt__: context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
      L2: "cluster.go:125: W0416 03:42:08.369617    1873 version.go:109] falling back to the local client version: v1.34.4"
      L3: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-apiserver:v1.34.4"
      L4: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-controller-manager:v1.34.4"
      L5: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-scheduler:v1.34.4"
      L6: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-proxy:v1.34.4"
      L7: "cluster.go:125: [config/images] Pulled registry.k8s.io/coredns/coredns:v1.12.1"
      L8: "cluster.go:125: [config/images] Pulled registry.k8s.io/pause:3.10.1"
      L9: "cluster.go:125: [config/images] Pulled registry.k8s.io/etcd:3.6.5-0"
      L10: "cluster.go:125: I0416 03:45:15.100722    2313 version.go:260] remote version is much newer: v1.35.4; falling back to: stable-1.34"
      L11: "cluster.go:125: [init] Using Kubernetes version: v1.34.7"
      L12: "cluster.go:125: [preflight] Running pre-flight checks"
      L13: "cluster.go:125:  [WARNING Service-Kubelet]: kubelet service is not enabled, please run _systemctl enable kubelet.service_"
      L14: "cluster.go:125: [preflight] Pulling images required for setting up a Kubernetes cluster"
      L15: "cluster.go:125: [preflight] This might take a minute or two, depending on the speed of your internet connection"
      L16: "cluster.go:125: [preflight] You can also perform this action beforehand using _kubeadm config images pull_"
      L17: "cluster.go:125: W0416 03:45:22.815465    2313 checks.go:827] detected that the sandbox image __registry.k8s.io/pause:3.8__ of the container runtime is inconsistent with that used by kubeadm. It is recommended to use __registry.k8s.io/pause:3.10.1__ as the CRI sandbox image."
      L18: "cluster.go:125: [certs] Using certificateDir folder __/etc/kubernetes/pki__"
      L19: "cluster.go:125: [certs] Generating __ca__ certificate and key"
      L20: "cluster.go:125: [certs] Generating __apiserver__ certificate and key"
      L21: "cluster.go:125: [certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local localhost] and IPs [10.96.0.1 10.0.0.6]"
      L22: "cluster.go:125: [certs] Generating __apiserver-kubelet-client__ certificate and key"
      L23: "cluster.go:125: [certs] Generating __front-proxy-ca__ certificate and key"
      L24: "cluster.go:125: [certs] Generating __front-proxy-client__ certificate and key"
      L25: "cluster.go:125: [certs] External etcd mode: Skipping etcd/ca certificate authority generation"
      L26: "cluster.go:125: [certs] External etcd mode: Skipping etcd/server certificate generation"
      L27: "cluster.go:125: [certs] External etcd mode: Skipping etcd/peer certificate generation"
      L28: "cluster.go:125: [certs] External etcd mode: Skipping etcd/healthcheck-client certificate generation"
      L29: "cluster.go:125: [certs] External etcd mode: Skipping apiserver-etcd-client certificate generation"
      L30: "cluster.go:125: [certs] Generating __sa__ key and public key"
      L31: "cluster.go:125: [kubeconfig] Using kubeconfig folder __/etc/kubernetes__"
      L32: "cluster.go:125: [kubeconfig] Writing __admin.conf__ kubeconfig file"
      L33: "cluster.go:125: [kubeconfig] Writing __super-admin.conf__ kubeconfig file"
      L34: "cluster.go:125: [kubeconfig] Writing __kubelet.conf__ kubeconfig file"
      L35: "cluster.go:125: [kubeconfig] Writing __controller-manager.conf__ kubeconfig file"
      L36: "cluster.go:125: [kubeconfig] Writing __scheduler.conf__ kubeconfig file"
      L37: "cluster.go:125: [control-plane] Using manifest folder __/etc/kubernetes/manifests__"
      L38: "cluster.go:125: [control-plane] Creating static Pod manifest for __kube-apiserver__"
      L39: "cluster.go:125: [control-plane] Creating static Pod manifest for __kube-controller-manager__"
      L40: "cluster.go:125: [control-plane] Creating static Pod manifest for __kube-scheduler__"
      L41: "cluster.go:125: [kubelet-start] Writing kubelet environment file with flags to file __/var/lib/kubelet/kubeadm-flags.env__"
      L42: "cluster.go:125: [kubelet-start] Writing kubelet configuration to file __/var/lib/kubelet/instance-config.yaml__"
      L43: "cluster.go:125: [patches] Applied patch of type __application/strategic-merge-patch+json__ to target __kubeletconfiguration__"
      L44: "cluster.go:125: [kubelet-start] Writing kubelet configuration to file __/var/lib/kubelet/config.yaml__"
      L45: "cluster.go:125: [kubelet-start] Starting the kubelet"
      L46: "cluster.go:125: [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory __/etc/kubernetes/manifests__"
      L47: "cluster.go:125: [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s"
      L48: "cluster.go:125: [kubelet-check] The kubelet is healthy after 2.686806984s"
      L49: "cluster.go:125: [control-plane-check] Waiting for healthy control plane components. This can take up to 30m0s"
      L50: "cluster.go:125: [control-plane-check] Checking kube-apiserver at https://10.0.0.6:6443/livez"
      L51: "cluster.go:125: [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz"
      L52: "cluster.go:125: [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez"
      L53: "cluster.go:125: [control-plane-check] kube-controller-manager is healthy after 8.756008992s"
      L54: "cluster.go:125: [control-plane-check] kube-scheduler is healthy after 30.074464231s"
      L55: "cluster.go:125: [control-plane-check] kube-apiserver is healthy after 2m13.184923534s"
      L56: "cluster.go:125: [upload-config] Storing the configuration used in ConfigMap __kubeadm-config__ in the __kube-system__ Namespace"
      L57: "cluster.go:125: [kubelet] Creating a ConfigMap __kubelet-config__ in namespace kube-system with the configuration for the kubelets in the cluster"
      L58: "cluster.go:125: [upload-certs] Skipping phase. Please see --upload-certs"
      L59: "cluster.go:125: [mark-control-plane] Marking the node localhost as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]"
      L60: "cluster.go:125: [mark-control-plane] Marking the node localhost as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]"
      L61: "cluster.go:125: [bootstrap-token] Using token: fgymnu.l0j181rxa70dbofl"
      L62: "cluster.go:125: [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles"
      L63: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes"
      L64: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials"
      L65: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token"
      L66: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster"
      L67: "cluster.go:125: [bootstrap-token] Creating the __cluster-info__ ConfigMap in the __kube-public__ namespace"
      L68: "cluster.go:125: [kubelet-finalize] Updating __/etc/kubernetes/kubelet.conf__ to point to a rotatable kubelet client certificate and key"
      L69: "cluster.go:125: [addons] Applied essential addon: CoreDNS"
      L70: "cluster.go:125: [addons] Applied essential addon: kube-proxy"
      L71: "cluster.go:125: "
      L72: "cluster.go:125: Your Kubernetes control-plane has initialized successfully!"
      L73: "cluster.go:125: "
      L74: "cluster.go:125: To start using your cluster, you need to run the following as a regular user:"
      L75: "cluster.go:125: "
      L76: "cluster.go:125:   mkdir -p $HOME/.kube"
      L77: "cluster.go:125:   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config"
      L78: "cluster.go:125:   sudo chown $(id -u):$(id -g) $HOME/.kube/config"
      L79: "cluster.go:125: "
      L80: "cluster.go:125: Alternatively, if you are the root user, you can run:"
      L81: "cluster.go:125: "
      L82: "cluster.go:125:   export KUBECONFIG=/etc/kubernetes/admin.conf"
      L83: "cluster.go:125: "
      L84: "cluster.go:125: You should now deploy a pod network to the cluster."
      L85: "cluster.go:125: Run __kubectl apply -f [podnetwork].yaml__ with one of the options listed at:"
      L86: "cluster.go:125:   https://kubernetes.io/docs/concepts/cluster-administration/addons/"
      L87: "cluster.go:125: "
      L88: "cluster.go:125: Then you can join any number of worker nodes by running the following on each as root:"
      L89: "cluster.go:125: "
      L90: "cluster.go:125: kubeadm join 10.0.0.6:6443 --token fgymnu.l0j181rxa70dbofl _"
      L91: "cluster.go:125:  --discovery-token-ca-cert-hash sha256:67fc3d40953cef757dfa27a38a37597a808344bcfb50c39ea8ab4c59b9dcc8da "
      L92: "cluster.go:125: namespace/tigera-operator created"
      L93: "cluster.go:125: serviceaccount/tigera-operator created"
      L94: "cluster.go:125: clusterrole.rbac.authorization.k8s.io/tigera-operator-secrets created"
      L95: "cluster.go:125: clusterrole.rbac.authorization.k8s.io/tigera-operator created"
      L96: "cluster.go:125: clusterrolebinding.rbac.authorization.k8s.io/tigera-operator created"
      L97: "cluster.go:125: rolebinding.rbac.authorization.k8s.io/tigera-operator-secrets created"
      L98: "cluster.go:125: deployment.apps/tigera-operator created"
      L99: "cluster.go:125: error: timed out waiting for the condition"
      L100: "kubeadm.go:197: unable to setup cluster: unable to run master script: Process exited with status 1_"
      L101: " "
  ```

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for qemu, run 3</summary>

  ```
      L1: " Error: _cluster.go:125: I0416 03:14:01.716319    1816 version.go:260] remote version is much newer: v1.35.4; falling back to: stable-1.34"
      L2: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-apiserver:v1.34.7"
      L3: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-controller-manager:v1.34.7"
      L4: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-scheduler:v1.34.7"
      L5: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-proxy:v1.34.7"
      L6: "cluster.go:125: [config/images] Pulled registry.k8s.io/coredns/coredns:v1.12.1"
      L7: "cluster.go:125: [config/images] Pulled registry.k8s.io/pause:3.10.1"
      L8: "cluster.go:125: [config/images] Pulled registry.k8s.io/etcd:3.6.5-0"
      L9: "cluster.go:125: I0416 03:15:22.962296    2148 version.go:260] remote version is much newer: v1.35.4; falling back to: stable-1.34"
      L10: "cluster.go:125: [init] Using Kubernetes version: v1.34.7"
      L11: "cluster.go:125: [preflight] Running pre-flight checks"
      L12: "cluster.go:125:  [WARNING Service-Kubelet]: kubelet service is not enabled, please run _systemctl enable kubelet.service_"
      L13: "cluster.go:125: [preflight] Pulling images required for setting up a Kubernetes cluster"
      L14: "cluster.go:125: [preflight] This might take a minute or two, depending on the speed of your internet connection"
      L15: "cluster.go:125: [preflight] You can also perform this action beforehand using _kubeadm config images pull_"
      L16: "cluster.go:125: W0416 03:15:24.835540    2148 checks.go:827] detected that the sandbox image __registry.k8s.io/pause:3.8__ of the container runtime is inconsistent with that used by kubeadm. It is rec?ommended to use __registry.k8s.io/pause:3.10.1__ as the CRI sandbox image."
      L17: "cluster.go:125: [certs] Using certificateDir folder __/etc/kubernetes/pki__"
      L18: "cluster.go:125: [certs] Generating __ca__ certificate and key"
      L19: "cluster.go:125: [certs] Generating __apiserver__ certificate and key"
      L20: "cluster.go:125: [certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local localhost] and IPs [10.96.0.1 10.0.0.6?]"
      L21: "cluster.go:125: [certs] Generating __apiserver-kubelet-client__ certificate and key"
      L22: "cluster.go:125: [certs] Generating __front-proxy-ca__ certificate and key"
      L23: "cluster.go:125: [certs] Generating __front-proxy-client__ certificate and key"
      L24: "cluster.go:125: [certs] External etcd mode: Skipping etcd/ca certificate authority generation"
      L25: "cluster.go:125: [certs] External etcd mode: Skipping etcd/server certificate generation"
      L26: "cluster.go:125: [certs] External etcd mode: Skipping etcd/peer certificate generation"
      L27: "cluster.go:125: [certs] External etcd mode: Skipping etcd/healthcheck-client certificate generation"
      L28: "cluster.go:125: [certs] External etcd mode: Skipping apiserver-etcd-client certificate generation"
      L29: "cluster.go:125: [certs] Generating __sa__ key and public key"
      L30: "cluster.go:125: [kubeconfig] Using kubeconfig folder __/etc/kubernetes__"
      L31: "cluster.go:125: [kubeconfig] Writing __admin.conf__ kubeconfig file"
      L32: "cluster.go:125: [kubeconfig] Writing __super-admin.conf__ kubeconfig file"
      L33: "cluster.go:125: [kubeconfig] Writing __kubelet.conf__ kubeconfig file"
      L34: "cluster.go:125: [kubeconfig] Writing __controller-manager.conf__ kubeconfig file"
      L35: "cluster.go:125: [kubeconfig] Writing __scheduler.conf__ kubeconfig file"
      L36: "cluster.go:125: [control-plane] Using manifest folder __/etc/kubernetes/manifests__"
      L37: "cluster.go:125: [control-plane] Creating static Pod manifest for __kube-apiserver__"
      L38: "cluster.go:125: [control-plane] Creating static Pod manifest for __kube-controller-manager__"
      L39: "cluster.go:125: [control-plane] Creating static Pod manifest for __kube-scheduler__"
      L40: "cluster.go:125: [kubelet-start] Writing kubelet environment file with flags to file __/var/lib/kubelet/kubeadm-flags.env__"
      L41: "cluster.go:125: [kubelet-start] Writing kubelet configuration to file __/var/lib/kubelet/instance-config.yaml__"
      L42: "cluster.go:125: [patches] Applied patch of type __application/strategic-merge-patch+json__ to target __kubeletconfiguration__"
      L43: "cluster.go:125: [kubelet-start] Writing kubelet configuration to file __/var/lib/kubelet/config.yaml__"
      L44: "cluster.go:125: [kubelet-start] Starting the kubelet"
      L45: "cluster.go:125: [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory __/etc/kubernetes/manifests__"
      L46: "cluster.go:125: [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s"
      L47: "cluster.go:125: [kubelet-check] The kubelet is healthy after 3.071883944s"
      L48: "cluster.go:125: [control-plane-check] Waiting for healthy control plane components. This can take up to 30m0s"
      L49: "cluster.go:125: [control-plane-check] Checking kube-apiserver at https://10.0.0.6:6443/livez"
      L50: "cluster.go:125: [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz"
      L51: "cluster.go:125: [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez"
      L52: "cluster.go:125: [control-plane-check] kube-controller-manager is healthy after 13.129884143s"
      L53: "cluster.go:125: [control-plane-check] kube-scheduler is healthy after 30.515617488s"
      L54: "cluster.go:125: [control-plane-check] kube-apiserver is healthy after 4m6.232808819s"
      L55: "cluster.go:125: [upload-config] Storing the configuration used in ConfigMap __kubeadm-config__ in the __kube-system__ Namespace"
      L56: "cluster.go:125: [kubelet] Creating a ConfigMap __kubelet-config__ in namespace kube-system with the configuration for the kubelets in the cluster"
      L57: "cluster.go:125: [upload-certs] Skipping phase. Please see --upload-certs"
      L58: "cluster.go:125: [mark-control-plane] Marking the node localhost as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]"
      L59: "cluster.go:125: [mark-control-plane] Marking the node localhost as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]"
      L60: "cluster.go:125: [bootstrap-token] Using token: o1sdsf.2s75g8rq6lrxnk5l"
      L61: "cluster.go:125: [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles"
      L62: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes"
      L63: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials"
      L64: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token"
      L65: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster"
      L66: "cluster.go:125: [bootstrap-token] Creating the __cluster-info__ ConfigMap in the __kube-public__ namespace"
      L67: "cluster.go:125: [kubelet-finalize] Updating __/etc/kubernetes/kubelet.conf__ to point to a rotatable kubelet client certificate and key"
      L68: "cluster.go:125: [addons] Applied essential addon: CoreDNS"
      L69: "cluster.go:125: [addons] Applied essential addon: kube-proxy"
      L70: "cluster.go:125: "
      L71: "cluster.go:125: Your Kubernetes control-plane has initialized successfully!"
      L72: "cluster.go:125: "
      L73: "cluster.go:125: To start using your cluster, you need to run the following as a regular user:"
      L74: "cluster.go:125: "
      L75: "cluster.go:125:   mkdir -p $HOME/.kube"
      L76: "cluster.go:125:   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config"
      L77: "cluster.go:125:   sudo chown $(id -u):$(id -g) $HOME/.kube/config"
      L78: "cluster.go:125: "
      L79: "cluster.go:125: Alternatively, if you are the root user, you can run:"
      L80: "cluster.go:125: "
      L81: "cluster.go:125:   export KUBECONFIG=/etc/kubernetes/admin.conf"
      L82: "cluster.go:125: "
      L83: "cluster.go:125: You should now deploy a pod network to the cluster."
      L84: "cluster.go:125: Run __kubectl apply -f [podnetwork].yaml__ with one of the options listed at:"
      L85: "cluster.go:125:   https://kubernetes.io/docs/concepts/cluster-administration/addons/"
      L86: "cluster.go:125: "
      L87: "cluster.go:125: Then you can join any number of worker nodes by running the following on each as root:"
      L88: "cluster.go:125: "
      L89: "cluster.go:125: kubeadm join 10.0.0.6:6443 --token o1sdsf.2s75g8rq6lrxnk5l _"
      L90: "cluster.go:125:  --discovery-token-ca-cert-hash sha256:7d2bb6b23628026d9ded2f0bb65e73507a5c2407b3da47c10a89645fe8199716 "
      L91: "cluster.go:125: namespace/tigera-operator created"
      L92: "cluster.go:125: serviceaccount/tigera-operator created"
      L93: "cluster.go:125: clusterrole.rbac.authorization.k8s.io/tigera-operator-secrets created"
      L94: "cluster.go:125: clusterrole.rbac.authorization.k8s.io/tigera-operator created"
      L95: "cluster.go:125: clusterrolebinding.rbac.authorization.k8s.io/tigera-operator created"
      L96: "cluster.go:125: rolebinding.rbac.authorization.k8s.io/tigera-operator-secrets created"
      L97: "cluster.go:125: deployment.apps/tigera-operator created"
      L98: "cluster.go:125: error: timed out waiting for the condition"
      L99: "kubeadm.go:197: unable to setup cluster: unable to run master script: Process exited with status 1_"
      L100: " "
  ```

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for qemu, run 2</summary>

  ```
      L1: " Error: _cluster.go:125: I0416 03:01:33.495932    1858 version.go:260] remote version is much newer: v1.35.4; falling back to: stable-1.34"
      L2: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-apiserver:v1.34.7"
      L3: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-controller-manager:v1.34.7"
      L4: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-scheduler:v1.34.7"
      L5: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-proxy:v1.34.7"
      L6: "cluster.go:125: [config/images] Pulled registry.k8s.io/coredns/coredns:v1.12.1"
      L7: "cluster.go:125: [config/images] Pulled registry.k8s.io/pause:3.10.1"
      L8: "cluster.go:125: [config/images] Pulled registry.k8s.io/etcd:3.6.5-0"
      L9: "cluster.go:125: I0416 03:03:43.715664    2249 version.go:260] remote version is much newer: v1.35.4; falling back to: stable-1.34"
      L10: "cluster.go:125: [init] Using Kubernetes version: v1.34.7"
      L11: "cluster.go:125: [preflight] Running pre-flight checks"
      L12: "cluster.go:125:  [WARNING Service-Kubelet]: kubelet service is not enabled, please run _systemctl enable kubelet.service_"
      L13: "cluster.go:125: [preflight] Pulling images required for setting up a Kubernetes cluster"
      L14: "cluster.go:125: [preflight] This might take a minute or two, depending on the speed of your internet connection"
      L15: "cluster.go:125: [preflight] You can also perform this action beforehand using _kubeadm config images pull_"
      L16: "cluster.go:125: W0416 03:03:48.544850    2249 checks.go:827] detected that the sandbox image __registry.k8s.io/pause:3.8__ of the container runtime is inconsistent with that used by kubeadm. It is rec?ommended to use __registry.k8s.io/pause:3.10.1__ as the CRI sandbox image."
      L17: "cluster.go:125: [certs] Using certificateDir folder __/etc/kubernetes/pki__"
      L18: "cluster.go:125: [certs] Generating __ca__ certificate and key"
      L19: "cluster.go:125: [certs] Generating __apiserver__ certificate and key"
      L20: "cluster.go:125: [certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local localhost] and IPs [10.96.0.1 10.0.0.1?6]"
      L21: "cluster.go:125: [certs] Generating __apiserver-kubelet-client__ certificate and key"
      L22: "cluster.go:125: [certs] Generating __front-proxy-ca__ certificate and key"
      L23: "cluster.go:125: [certs] Generating __front-proxy-client__ certificate and key"
      L24: "cluster.go:125: [certs] External etcd mode: Skipping etcd/ca certificate authority generation"
      L25: "cluster.go:125: [certs] External etcd mode: Skipping etcd/server certificate generation"
      L26: "cluster.go:125: [certs] External etcd mode: Skipping etcd/peer certificate generation"
      L27: "cluster.go:125: [certs] External etcd mode: Skipping etcd/healthcheck-client certificate generation"
      L28: "cluster.go:125: [certs] External etcd mode: Skipping apiserver-etcd-client certificate generation"
      L29: "cluster.go:125: [certs] Generating __sa__ key and public key"
      L30: "cluster.go:125: [kubeconfig] Using kubeconfig folder __/etc/kubernetes__"
      L31: "cluster.go:125: [kubeconfig] Writing __admin.conf__ kubeconfig file"
      L32: "cluster.go:125: [kubeconfig] Writing __super-admin.conf__ kubeconfig file"
      L33: "cluster.go:125: [kubeconfig] Writing __kubelet.conf__ kubeconfig file"
      L34: "cluster.go:125: [kubeconfig] Writing __controller-manager.conf__ kubeconfig file"
      L35: "cluster.go:125: [kubeconfig] Writing __scheduler.conf__ kubeconfig file"
      L36: "cluster.go:125: [control-plane] Using manifest folder __/etc/kubernetes/manifests__"
      L37: "cluster.go:125: [control-plane] Creating static Pod manifest for __kube-apiserver__"
      L38: "cluster.go:125: [control-plane] Creating static Pod manifest for __kube-controller-manager__"
      L39: "cluster.go:125: [control-plane] Creating static Pod manifest for __kube-scheduler__"
      L40: "cluster.go:125: [kubelet-start] Writing kubelet environment file with flags to file __/var/lib/kubelet/kubeadm-flags.env__"
      L41: "cluster.go:125: [kubelet-start] Writing kubelet configuration to file __/var/lib/kubelet/instance-config.yaml__"
      L42: "cluster.go:125: [patches] Applied patch of type __application/strategic-merge-patch+json__ to target __kubeletconfiguration__"
      L43: "cluster.go:125: [kubelet-start] Writing kubelet configuration to file __/var/lib/kubelet/config.yaml__"
      L44: "cluster.go:125: [kubelet-start] Starting the kubelet"
      L45: "cluster.go:125: [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory __/etc/kubernetes/manifests__"
      L46: "cluster.go:125: [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s"
      L47: "cluster.go:125: [kubelet-check] The kubelet is healthy after 3.55387179s"
      L48: "cluster.go:125: [control-plane-check] Waiting for healthy control plane components. This can take up to 30m0s"
      L49: "cluster.go:125: [control-plane-check] Checking kube-apiserver at https://10.0.0.16:6443/livez"
      L50: "cluster.go:125: [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz"
      L51: "cluster.go:125: [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez"
      L52: "cluster.go:125: [control-plane-check] kube-scheduler is healthy after 8.329374619s"
      L53: "cluster.go:125: [control-plane-check] kube-controller-manager is healthy after 9.618069563s"
      L54: "cluster.go:125: [control-plane-check] kube-apiserver is healthy after 1m0.736617289s"
      L55: "cluster.go:125: [upload-config] Storing the configuration used in ConfigMap __kubeadm-config__ in the __kube-system__ Namespace"
      L56: "cluster.go:125: [kubelet] Creating a ConfigMap __kubelet-config__ in namespace kube-system with the configuration for the kubelets in the cluster"
      L57: "cluster.go:125: [upload-certs] Skipping phase. Please see --upload-certs"
      L58: "cluster.go:125: [mark-control-plane] Marking the node localhost as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]"
      L59: "cluster.go:125: [mark-control-plane] Marking the node localhost as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]"
      L60: "cluster.go:125: [bootstrap-token] Using token: tuing7.92gr2v2ammj94cim"
      L61: "cluster.go:125: [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles"
      L62: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes"
      L63: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials"
      L64: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token"
      L65: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster"
      L66: "cluster.go:125: [bootstrap-token] Creating the __cluster-info__ ConfigMap in the __kube-public__ namespace"
      L67: "cluster.go:125: [kubelet-finalize] Updating __/etc/kubernetes/kubelet.conf__ to point to a rotatable kubelet client certificate and key"
      L68: "cluster.go:125: [addons] Applied essential addon: CoreDNS"
      L69: "cluster.go:125: [addons] Applied essential addon: kube-proxy"
      L70: "cluster.go:125: "
      L71: "cluster.go:125: Your Kubernetes control-plane has initialized successfully!"
      L72: "cluster.go:125: "
      L73: "cluster.go:125: To start using your cluster, you need to run the following as a regular user:"
      L74: "cluster.go:125: "
      L75: "cluster.go:125:   mkdir -p $HOME/.kube"
      L76: "cluster.go:125:   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config"
      L77: "cluster.go:125:   sudo chown $(id -u):$(id -g) $HOME/.kube/config"
      L78: "cluster.go:125: "
      L79: "cluster.go:125: Alternatively, if you are the root user, you can run:"
      L80: "cluster.go:125: "
      L81: "cluster.go:125:   export KUBECONFIG=/etc/kubernetes/admin.conf"
      L82: "cluster.go:125: "
      L83: "cluster.go:125: You should now deploy a pod network to the cluster."
      L84: "cluster.go:125: Run __kubectl apply -f [podnetwork].yaml__ with one of the options listed at:"
      L85: "cluster.go:125:   https://kubernetes.io/docs/concepts/cluster-administration/addons/"
      L86: "cluster.go:125: "
      L87: "cluster.go:125: Then you can join any number of worker nodes by running the following on each as root:"
      L88: "cluster.go:125: "
      L89: "cluster.go:125: kubeadm join 10.0.0.16:6443 --token tuing7.92gr2v2ammj94cim _"
      L90: "cluster.go:125:  --discovery-token-ca-cert-hash sha256:edd88462bc46cd94f75cd540ff13fb6f486f25763a3dc61734f6562f67204b4a "
      L91: "cluster.go:125: namespace/tigera-operator created"
      L92: "cluster.go:125: serviceaccount/tigera-operator created"
      L93: "cluster.go:125: clusterrole.rbac.authorization.k8s.io/tigera-operator-secrets created"
      L94: "cluster.go:125: clusterrole.rbac.authorization.k8s.io/tigera-operator created"
      L95: "cluster.go:125: clusterrolebinding.rbac.authorization.k8s.io/tigera-operator created"
      L96: "cluster.go:125: rolebinding.rbac.authorization.k8s.io/tigera-operator-secrets created"
      L97: "cluster.go:125: deployment.apps/tigera-operator created"
      L98: "cluster.go:125: error: timed out waiting for the condition"
      L99: "kubeadm.go:197: unable to setup cluster: unable to run master script: Process exited with status 1_"
      L100: " "
  ```

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for qemu, run 1</summary>

  ```
      L1: " Error: _cluster.go:125: I0416 01:54:10.479367    1828 version.go:260] remote version is much newer: v1.35.4; falling back to: stable-1.34"
      L2: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-apiserver:v1.34.7"
      L3: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-controller-manager:v1.34.7"
      L4: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-scheduler:v1.34.7"
      L5: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-proxy:v1.34.7"
      L6: "cluster.go:125: [config/images] Pulled registry.k8s.io/coredns/coredns:v1.12.1"
      L7: "cluster.go:125: [config/images] Pulled registry.k8s.io/pause:3.10.1"
      L8: "cluster.go:125: [config/images] Pulled registry.k8s.io/etcd:3.6.5-0"
      L9: "cluster.go:125: I0416 01:57:15.165138    2229 version.go:260] remote version is much newer: v1.35.4; falling back to: stable-1.34"
      L10: "cluster.go:125: [init] Using Kubernetes version: v1.34.7"
      L11: "cluster.go:125: [preflight] Running pre-flight checks"
      L12: "cluster.go:125:  [WARNING Service-Kubelet]: kubelet service is not enabled, please run _systemctl enable kubelet.service_"
      L13: "cluster.go:125: [preflight] Pulling images required for setting up a Kubernetes cluster"
      L14: "cluster.go:125: [preflight] This might take a minute or two, depending on the speed of your internet connection"
      L15: "cluster.go:125: [preflight] You can also perform this action beforehand using _kubeadm config images pull_"
      L16: "cluster.go:125: W0416 01:57:30.045170    2229 checks.go:827] detected that the sandbox image __registry.k8s.io/pause:3.8__ of the container runtime is inconsistent with that used by kubeadm. It is rec?ommended to use __registry.k8s.io/pause:3.10.1__ as the CRI sandbox image."
      L17: "cluster.go:125: [certs] Using certificateDir folder __/etc/kubernetes/pki__"
      L18: "cluster.go:125: [certs] Generating __ca__ certificate and key"
      L19: "cluster.go:125: [certs] Generating __apiserver__ certificate and key"
      L20: "cluster.go:125: [certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local localhost] and IPs [10.96.0.1 10.0.0.1?54]"
      L21: "cluster.go:125: [certs] Generating __apiserver-kubelet-client__ certificate and key"
      L22: "cluster.go:125: [certs] Generating __front-proxy-ca__ certificate and key"
      L23: "cluster.go:125: [certs] Generating __front-proxy-client__ certificate and key"
      L24: "cluster.go:125: [certs] External etcd mode: Skipping etcd/ca certificate authority generation"
      L25: "cluster.go:125: [certs] External etcd mode: Skipping etcd/server certificate generation"
      L26: "cluster.go:125: [certs] External etcd mode: Skipping etcd/peer certificate generation"
      L27: "cluster.go:125: [certs] External etcd mode: Skipping etcd/healthcheck-client certificate generation"
      L28: "cluster.go:125: [certs] External etcd mode: Skipping apiserver-etcd-client certificate generation"
      L29: "cluster.go:125: [certs] Generating __sa__ key and public key"
      L30: "cluster.go:125: [kubeconfig] Using kubeconfig folder __/etc/kubernetes__"
      L31: "cluster.go:125: [kubeconfig] Writing __admin.conf__ kubeconfig file"
      L32: "cluster.go:125: [kubeconfig] Writing __super-admin.conf__ kubeconfig file"
      L33: "cluster.go:125: [kubeconfig] Writing __kubelet.conf__ kubeconfig file"
      L34: "cluster.go:125: [kubeconfig] Writing __controller-manager.conf__ kubeconfig file"
      L35: "cluster.go:125: [kubeconfig] Writing __scheduler.conf__ kubeconfig file"
      L36: "cluster.go:125: [control-plane] Using manifest folder __/etc/kubernetes/manifests__"
      L37: "cluster.go:125: [control-plane] Creating static Pod manifest for __kube-apiserver__"
      L38: "cluster.go:125: [control-plane] Creating static Pod manifest for __kube-controller-manager__"
      L39: "cluster.go:125: [control-plane] Creating static Pod manifest for __kube-scheduler__"
      L40: "cluster.go:125: [kubelet-start] Writing kubelet environment file with flags to file __/var/lib/kubelet/kubeadm-flags.env__"
      L41: "cluster.go:125: [kubelet-start] Writing kubelet configuration to file __/var/lib/kubelet/instance-config.yaml__"
      L42: "cluster.go:125: [patches] Applied patch of type __application/strategic-merge-patch+json__ to target __kubeletconfiguration__"
      L43: "cluster.go:125: [kubelet-start] Writing kubelet configuration to file __/var/lib/kubelet/config.yaml__"
      L44: "cluster.go:125: [kubelet-start] Starting the kubelet"
      L45: "cluster.go:125: [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory __/etc/kubernetes/manifests__"
      L46: "cluster.go:125: [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s"
      L47: "cluster.go:125: [kubelet-check] The kubelet is healthy after 16.157621194s"
      L48: "cluster.go:125: [control-plane-check] Waiting for healthy control plane components. This can take up to 30m0s"
      L49: "cluster.go:125: [control-plane-check] Checking kube-apiserver at https://10.0.0.154:6443/livez"
      L50: "cluster.go:125: [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz"
      L51: "cluster.go:125: [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez"
      L52: "cluster.go:125: [control-plane-check] kube-scheduler is healthy after 35.554339358s"
      L53: "cluster.go:125: [control-plane-check] kube-controller-manager is healthy after 37.87395973s"
      L54: "cluster.go:125: E0416 02:09:25.915280    2229 request.go:1196] __Unexpected error when reading response body__ err=__context deadline exceeded (Client.Timeout or context cancellation while reading bod?y)__"
      L55: "cluster.go:125: [control-plane-check] kube-apiserver is healthy after 18m51.592194457s"
      L56: "cluster.go:125: [upload-config] Storing the configuration used in ConfigMap __kubeadm-config__ in the __kube-system__ Namespace"
      L57: "cluster.go:125: [kubelet] Creating a ConfigMap __kubelet-config__ in namespace kube-system with the configuration for the kubelets in the cluster"
      L58: "cluster.go:125: [upload-certs] Skipping phase. Please see --upload-certs"
      L59: "cluster.go:125: [mark-control-plane] Marking the node localhost as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]"
      L60: "cluster.go:125: [mark-control-plane] Marking the node localhost as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]"
      L61: "cluster.go:125: [bootstrap-token] Using token: ad6ojm.3gac5qykommfksgo"
      L62: "cluster.go:125: [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles"
      L63: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes"
      L64: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials"
      L65: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token"
      L66: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster"
      L67: "cluster.go:125: [bootstrap-token] Creating the __cluster-info__ ConfigMap in the __kube-public__ namespace"
      L68: "cluster.go:125: [kubelet-finalize] Updating __/etc/kubernetes/kubelet.conf__ to point to a rotatable kubelet client certificate and key"
      L69: "cluster.go:125: [addons] Applied essential addon: CoreDNS"
      L70: "cluster.go:125: [addons] Applied essential addon: kube-proxy"
      L71: "cluster.go:125: "
      L72: "cluster.go:125: Your Kubernetes control-plane has initialized successfully!"
      L73: "cluster.go:125: "
      L74: "cluster.go:125: To start using your cluster, you need to run the following as a regular user:"
      L75: "cluster.go:125: "
      L76: "cluster.go:125:   mkdir -p $HOME/.kube"
      L77: "cluster.go:125:   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config"
      L78: "cluster.go:125:   sudo chown $(id -u):$(id -g) $HOME/.kube/config"
      L79: "cluster.go:125: "
      L80: "cluster.go:125: Alternatively, if you are the root user, you can run:"
      L81: "cluster.go:125: "
      L82: "cluster.go:125:   export KUBECONFIG=/etc/kubernetes/admin.conf"
      L83: "cluster.go:125: "
      L84: "cluster.go:125: You should now deploy a pod network to the cluster."
      L85: "cluster.go:125: Run __kubectl apply -f [podnetwork].yaml__ with one of the options listed at:"
      L86: "cluster.go:125:   https://kubernetes.io/docs/concepts/cluster-administration/addons/"
      L87: "cluster.go:125: "
      L88: "cluster.go:125: Then you can join any number of worker nodes by running the following on each as root:"
      L89: "cluster.go:125: "
      L90: "cluster.go:125: kubeadm join 10.0.0.154:6443 --token ad6ojm.3gac5qykommfksgo _"
      L91: "cluster.go:125:  --discovery-token-ca-cert-hash sha256:aa8c1c30e9590cc0599d670e33427fd6522690c0d59b1769728d679340581828 "
      L92: "cluster.go:125: namespace/tigera-operator created"
      L93: "cluster.go:125: serviceaccount/tigera-operator created"
      L94: "cluster.go:125: clusterrole.rbac.authorization.k8s.io/tigera-operator-secrets created"
      L95: "cluster.go:125: clusterrole.rbac.authorization.k8s.io/tigera-operator created"
      L96: "cluster.go:125: clusterrolebinding.rbac.authorization.k8s.io/tigera-operator created"
      L97: "cluster.go:125: rolebinding.rbac.authorization.k8s.io/tigera-operator-secrets created"
      L98: "cluster.go:125: deployment.apps/tigera-operator created"
      L99: "cluster.go:125: error: timed out waiting for the condition"
      L100: "kubeadm.go:197: unable to setup cluster: unable to run master script: Process exited with status 1_"
      L101: " "
  ```


</details>


🟢 ok **kubeadm.v1.34.4.cilium.base**; Succeeded: qemu (2); Failed: qemu (1)

<details>

<summary>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diagnostic output for qemu, run 1</summary>

  ```
      L1: " Error: "cluster.go:125: I0416 01:42:36.193877    1785 version.go:260] remote version is much newer: v1.35.4; falling back to: stable-1.34"
      L2: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-apiserver:v1.34.7"
      L3: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-controller-manager:v1.34.7"
      L4: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-scheduler:v1.34.7"
      L5: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-proxy:v1.34.7"
      L6: "cluster.go:125: [config/images] Pulled registry.k8s.io/coredns/coredns:v1.12.1"
      L7: "cluster.go:125: [config/images] Pulled registry.k8s.io/pause:3.10.1"
      L8: "cluster.go:125: [config/images] Pulled registry.k8s.io/etcd:3.6.5-0"
      L9: "cluster.go:125: I0416 01:43:03.992124    2038 version.go:260] remote version is much newer: v1.35.4; falling back to: stable-1.34"
      L10: "cluster.go:125: [init] Using Kubernetes version: v1.34.7"
      L11: "cluster.go:125: [preflight] Running pre-flight checks"
      L12: "cluster.go:125:  [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'"
      L13: "cluster.go:125: [preflight] Pulling images required for setting up a Kubernetes cluster"
      L14: "cluster.go:125: [preflight] This might take a minute or two, depending on the speed of your internet connection"
      L15: "cluster.go:125: [preflight] You can also perform this action beforehand using 'kubeadm config images pull'"
      L16: "cluster.go:125: W0416 01:43:04.382607    2038 checks.go:827] detected that the sandbox image "registry.k8s.io/pause:3.8" of the container runtime is inconsistent with that used by kubeadm. It is recommended to use "registry.k8s.io/pause:3.10.1" as the CRI sandbox image."
      L17: "cluster.go:125: [certs] Using certificateDir folder "/etc/kubernetes/pki""
      L18: "cluster.go:125: [certs] Generating "ca" certificate and key"
      L19: "cluster.go:125: [certs] Generating "apiserver" certificate and key"
      L20: "cluster.go:125: [certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local localhost] and IPs [10.96.0.1 10.0.0.148]"
      L21: "cluster.go:125: [certs] Generating "apiserver-kubelet-client" certificate and key"
      L22: "cluster.go:125: [certs] Generating "front-proxy-ca" certificate and key"
      L23: "cluster.go:125: [certs] Generating "front-proxy-client" certificate and key"
      L24: "cluster.go:125: [certs] External etcd mode: Skipping etcd/ca certificate authority generation"
      L25: "cluster.go:125: [certs] External etcd mode: Skipping etcd/server certificate generation"
      L26: "cluster.go:125: [certs] External etcd mode: Skipping etcd/peer certificate generation"
      L27: "cluster.go:125: [certs] External etcd mode: Skipping etcd/healthcheck-client certificate generation"
      L28: "cluster.go:125: [certs] External etcd mode: Skipping apiserver-etcd-client certificate generation"
      L29: "cluster.go:125: [certs] Generating "sa" key and public key"
      L30: "cluster.go:125: [kubeconfig] Using kubeconfig folder "/etc/kubernetes""
      L31: "cluster.go:125: [kubeconfig] Writing "admin.conf" kubeconfig file"
      L32: "cluster.go:125: [kubeconfig] Writing "super-admin.conf" kubeconfig file"
      L33: "cluster.go:125: [kubeconfig] Writing "kubelet.conf" kubeconfig file"
      L34: "cluster.go:125: [kubeconfig] Writing "controller-manager.conf" kubeconfig file"
      L35: "cluster.go:125: [kubeconfig] Writing "scheduler.conf" kubeconfig file"
      L36: "cluster.go:125: [control-plane] Using manifest folder "/etc/kubernetes/manifests""
      L37: "cluster.go:125: [control-plane] Creating static Pod manifest for "kube-apiserver""
      L38: "cluster.go:125: [control-plane] Creating static Pod manifest for "kube-controller-manager""
      L39: "cluster.go:125: [control-plane] Creating static Pod manifest for "kube-scheduler""
      L40: "cluster.go:125: [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env""
      L41: "cluster.go:125: [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml""
      L42: "cluster.go:125: [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration""
      L43: "cluster.go:125: [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml""
      L44: "cluster.go:125: [kubelet-start] Starting the kubelet"
      L45: "cluster.go:125: [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests""
      L46: "cluster.go:125: [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s"
      L47: "cluster.go:125: [kubelet-check] The kubelet is healthy after 1.504823602s"
      L48: "cluster.go:125: [control-plane-check] Waiting for healthy control plane components. This can take up to 30m0s"
      L49: "cluster.go:125: [control-plane-check] Checking kube-apiserver at https://10.0.0.148:6443/livez"
      L50: "cluster.go:125: [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz"
      L51: "cluster.go:125: [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez"
      L52: "cluster.go:125: [control-plane-check] kube-controller-manager is healthy after 2.055008206s"
      L53: "cluster.go:125: [control-plane-check] kube-scheduler is healthy after 2.994193947s"
      L54: "cluster.go:125: [control-plane-check] kube-apiserver is healthy after 6.517340443s"
      L55: "cluster.go:125: [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace"
      L56: "cluster.go:125: [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster"
      L57: "cluster.go:125: [upload-certs] Skipping phase. Please see --upload-certs"
      L58: "cluster.go:125: [mark-control-plane] Marking the node localhost as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]"
      L59: "cluster.go:125: [mark-control-plane] Marking the node localhost as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]"
      L60: "cluster.go:125: [bootstrap-token] Using token: 98i1jp.mgyl7kbj128h5ngo"
      L61: "cluster.go:125: [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles"
      L62: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes"
      L63: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials"
      L64: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token"
      L65: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster"
      L66: "cluster.go:125: [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace"
      L67: "cluster.go:125: [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key"
      L68: "cluster.go:125: [addons] Applied essential addon: CoreDNS"
      L69: "cluster.go:125: [addons] Applied essential addon: kube-proxy"
      L70: "cluster.go:125: "
      L71: "cluster.go:125: Your Kubernetes control-plane has initialized successfully!"
      L72: "cluster.go:125: "
      L73: "cluster.go:125: To start using your cluster, you need to run the following as a regular user:"
      L74: "cluster.go:125: "
      L75: "cluster.go:125:   mkdir -p $HOME/.kube"
      L76: "cluster.go:125:   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config"
      L77: "cluster.go:125:   sudo chown $(id -u):$(id -g) $HOME/.kube/config"
      L78: "cluster.go:125: "
      L79: "cluster.go:125: Alternatively, if you are the root user, you can run:"
      L80: "cluster.go:125: "
      L81: "cluster.go:125:   export KUBECONFIG=/etc/kubernetes/admin.conf"
      L82: "cluster.go:125: "
      L83: "cluster.go:125: You should now deploy a pod network to the cluster."
      L84: "cluster.go:125: Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:"
      L85: "cluster.go:125:   https://kubernetes.io/docs/concepts/cluster-administration/addons/"
      L86: "cluster.go:125: "
      L87: "cluster.go:125: Then you can join any number of worker nodes by running the following on each as root:"
      L88: "cluster.go:125: "
      L89: "cluster.go:125: kubeadm join 10.0.0.148:6443 --token 98i1jp.mgyl7kbj128h5ngo \"
      L90: "cluster.go:125:  --discovery-token-ca-cert-hash sha256:3ee238d535eebe6eb8abc04e588719a3d8597700d6cff7b4c8702bdcd52d2021 "
      L91: "cluster.go:125: i  Using Cilium version 1.12.5"
      L92: "cluster.go:125: ? Auto-detected cluster name: kubernetes"
      L93: "cluster.go:125: ? Auto-detected datapath mode: tunnel"
      L94: "cluster.go:125: ? Auto-detected kube-proxy has been installed"
      L95: "cluster.go:125: i  helm template --namespace kube-system cilium cilium/cilium --version 1.12.5 --set cluster.id=0,cluster.name=kubernetes,encryption.nodeEncryption=false,extraConfig.cluster-pool-ipv4-cidr=192.168.0.0/17,extraConfig.enable-endpoint-routes=true,kubeProxyReplacement=disabled,operator.replicas=1,serviceAccounts.cilium.name=cilium,serviceAccounts.operator.name=cilium-operator,tunnel=vxlan"
      L96: "cluster.go:125: i  Storing helm values file in kube-system/cilium-cli-helm-values Secret"
      L97: "cluster.go:125: ? Created CA in secret cilium-ca"
      L98: "cluster.go:125: ? Generating certificates for Hubble..."
      L99: "cluster.go:125: ? Creating Service accounts..."
      L100: "cluster.go:125: ? Creating Cluster roles..."
      L101: "cluster.go:125: ? Creating ConfigMap for Cilium version 1.12.5..."
      L102: "cluster.go:125: i  Manual overwrite in ConfigMap: enable-endpoint-routes=true"
      L103: "cluster.go:125: i  Manual overwrite in ConfigMap: cluster-pool-ipv4-cidr=192.168.0.0/17"
      L104: "cluster.go:125: ? Creating Agent DaemonSet..."
      L105: "cluster.go:125: level=warning msg="spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/mount-cgroup]: deprecated since v1.30; use the \"appArmorProfile\" field instead" subsys=klog"
      L106: "cluster.go:125: level=warning msg="spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/apply-sysctl-overwrites]: deprecated since v1.30; use the \"appArmorProfile\" field instead" subsys=klog"
      L107: "cluster.go:125: level=warning msg="spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/clean-cilium-state]: deprecated since v1.30; use the \"appArmorProfile\" field instead" subsys=klog"
      L108: "cluster.go:125: level=warning msg="spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/cilium-agent]: deprecated since v1.30; use the \"appArmorProfile\" field instead" subsys=klog"
      L109: "cluster.go:125: ? Creating Operator Deployment..."
      L110: "cluster.go:125: ? Waiting for Cilium to be installed and ready..."
      L111: "cluster.go:125: ? Cilium was successfully installed! Run _cilium status_ to view installation health"
      L112: "cluster.go:125: [Cilium ASCII-art logo omitted]"
      L113: "cluster.go:125:     Cilium:         OK"
      L114: "cluster.go:125:     Operator:       OK"
      L115: "cluster.go:125:     Hubble:         disabled"
      L116: "cluster.go:125:     ClusterMesh:    disabled"
      L119: "cluster.go:125: Deployment        cilium-operator    Desired: 1, Ready: 1/1, Available: 1/1"
      L120: "cluster.go:125: DaemonSet         cilium             Desired: 1, Ready: 1/1, Available: 1/1"
      L121: "cluster.go:125: Containers:       cilium             Running: 1"
      L122: "cluster.go:125:                   cilium-operator    Running: 1"
      L123: "cluster.go:125: Cluster Pods:     2/2 managed by Cilium"
      L124: "cluster.go:125: Image versions    cilium             quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: 1"
      L125: "cluster.go:125:                   cilium-operator    quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: 1"
      L126: "cluster.go:125: W0416 01:49:31.692766    1729 joinconfiguration.go:112] [config] WARNING: Ignored configuration document with GroupVersionKind kubelet.config.k8s.io/v1beta1, Kind=KubeletConfiguration"
      L127: "cluster.go:125:  [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'"
      L128: "--- FAIL: kubeadm.v1.34.4.cilium.base/NFS_deployment (446.43s)"
      L129: "cluster.go:125: jq: error (at <stdin>:123): Cannot iterate over null (null)"
      (identical jq error repeated 36 times in total, report lines L129-L164)
      L165: "kubeadm.go:264: nginx pod with NFS is not deployed: getting container status: Process exited with status 5"
      L166: "--- FAIL: kubeadm.v1.34.4.cilium.base/IPSec_encryption (109.58s)"
      L167: "cluster.go:125: level=warning msg="spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/mount-cgroup]: deprecated since v1.30; use the \"appArmorProfile\" field instead" subsys=klog"
      L168: "cluster.go:125: level=warning msg="spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/apply-sysctl-overwrites]: deprecated since v1.30; use the \"appArmorProfile\" field instead" subsys=klog"
      L169: "cluster.go:125: level=warning msg="spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/clean-cilium-state]: deprecated since v1.30; use the \"appArmorProfile\" field instead" subsys=klog"
      L170: "cluster.go:125: level=warning msg="spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/cilium-agent]: deprecated since v1.30; use the \"appArmorProfile\" field instead" subsys=klog"
      L171: "cluster.go:125: Error: Unable to determine status:  timeout while waiting for status to become successful: context deadline exceeded"
      L172: "cluster.go:145: "/opt/bin/cilium status --wait --wait-duration 1m" failed: output [Cilium ASCII-art logo omitted]"
      L173: "    Cilium:         2 errors, 1 warnings"
      L174: "    Operator:       1 errors, 1 warnings"
      L175: "    Hubble:         disabled"
      L176: "    ClusterMesh:    disabled"
      L179: "Deployment        cilium-operator    Desired: 1, Unavailable: 1/1"
      L180: "DaemonSet         cilium             Desired: 2, Unavailable: 2/2"
      L181: "Containers:       cilium             Pending: 1, Running: 1"
      L182: "cilium-operator    Pending: 1"
      L183: "Cluster Pods:     4/5 managed by Cilium"
      L184: "Image versions    cilium             quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: 2"
      L185: "cilium-operator    quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: 1"
      L186: "Errors:           cilium-operator    cilium-operator                     1 pods of Deployment cilium-operator are not ready"
      L187: "cilium             cilium                              2 pods of DaemonSet cilium are not ready"
      L188: "cilium             cilium-lshdj                        unable to retrieve cilium status: command terminated with exit code 1"
      L189: "Warnings:         cilium             cilium-d5g6h                        pod is pending"
      L190: "cilium-operator    cilium-operator-6f9c7c5859-bthd9    pod is pending, status Process exited with status 1""
      L191: " "
  ```


</details>
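
Note on the `NFS_deployment` failure above: the repeated `jq: error (at <stdin>:123): Cannot iterate over null (null)` lines are what jq emits when the `[]` iterator is applied to a field that is absent or `null` in its input, which is consistent with a status-polling loop querying a field the API server has not populated yet. The exact query used by the test is not shown; the sketch below reproduces the error on a hypothetical `.items` field and shows jq's defensive forms:

```shell
# Iterating a null field with [] is a hard error in jq:
echo '{"items": null}' | jq '.items[]' 2>&1
# emits: jq: error ... Cannot iterate over null (null)

# The optional iterator []? suppresses the error and emits nothing
# until the field is actually populated:
echo '{"items": null}' | jq '.items[]?'

# Equivalently, the // operator substitutes a default for null:
echo '{"items": null}' | jq '.items // [] | .[]'
```

Either guard lets a polling loop keep retrying on empty output instead of aborting on the error.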


🟢 ok **kubeadm.v1.34.4.flannel.base**; Succeeded: qemu (1)

🟢 ok **kubeadm.v1.35.1.calico.base**; Succeeded: qemu (1)

🟢 ok **kubeadm.v1.35.1.cilium.base**; Succeeded: qemu (1)

🟢 ok **kubeadm.v1.35.1.flannel.base**; Succeeded: qemu (1)

🟢 ok **linux.nfs.v3**; Succeeded: qemu (1)

🟢 ok **linux.nfs.v4**; Succeeded: qemu (1)

🟢 ok **linux.ntp**; Succeeded: qemu (1)

🟢 ok **misc.fips**; Succeeded: qemu (1)

🟢 ok **packages**; Succeeded: qemu (1)

🟢 ok **sysext.custom-docker.sysext**; Succeeded: qemu (1)

🟢 ok **sysext.custom-oem**; Succeeded: qemu (1)

🟢 ok **sysext.disable-containerd**; Succeeded: qemu (1)

🟢 ok **sysext.disable-docker**; Succeeded: qemu (1)

🟢 ok **sysext.simple**; Succeeded: qemu (1)

🟢 ok **systemd.journal.remote**; Succeeded: qemu (1)

🟢 ok **systemd.journal.user**; Succeeded: qemu (1)

🟢 ok **systemd.sysusers.gshadow**; Succeeded: qemu (1)
