Install drivers for GPU node groups

You can create Managed Kubernetes clusters with GPU node groups that have no pre-installed drivers. To install the driver yourself, use the NVIDIA® GPU Operator application.

  1. Connect to the cluster.
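Before proceeding, you can confirm that kubectl is pointed at the right cluster by listing its nodes; the nodes of the GPU node group should appear in the output:

```shell
# Show which cluster the current kubectl context targets.
kubectl config current-context
# List cluster nodes; the GPU node group nodes should be in the Ready state.
kubectl get nodes
```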

  2. Install the Helm package manager version 3.7.0 or higher.
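To check that the installed Helm client meets the version requirement, you can print its version:

```shell
# Print the Helm client version; it must be 3.7.0 or higher.
helm version --short
```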

  3. Add the nvidia repository to Helm:

    helm repo add nvidia https://helm.ngc.nvidia.com/nvidia
  4. Update the nvidia repository in Helm:

    helm repo update
  5. Install the NVIDIA GPU Operator, specifying the required GPU driver version:

    helm install \
    --namespace gpu-operator \
    --create-namespace \
    --set driver.version=<driver_version> \
    gpu-operator nvidia/gpu-operator

    Replace <driver_version> with the NVIDIA® driver version. You can find it in the NVIDIA GPU Driver row of the GPU Operator Component Matrix table in the NVIDIA® documentation.
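    You can also inspect the chart's default values locally to see which driver version the chart ships with (the grep pattern below assumes the driver settings sit under a top-level `driver:` key in the chart's values):

```shell
# Print the chart's default values and show the driver section,
# which includes the default driver version.
helm show values nvidia/gpu-operator | grep -A 3 '^driver:'
```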

  6. To verify that NVIDIA GPU Operator and the GPU driver are installed correctly, run a GPU application. For example, the CUDA VectorAdd vector addition application:

    cat << EOF | kubectl create -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: cuda-vectoradd
    spec:
      restartPolicy: OnFailure
      containers:
        - name: cuda-vectoradd
          image: "nvidia/samples:vectoradd-cuda11.2.1"
          resources:
            limits:
              nvidia.com/gpu: 1
    EOF
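    Before running the test workload, you can optionally confirm that the operator components have started (pod names vary by GPU Operator version):

```shell
# All pods in the gpu-operator namespace should reach the
# Running or Completed state before GPU workloads are scheduled.
kubectl get pods --namespace gpu-operator
```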
  7. Check that the CUDA VectorAdd application has completed successfully; the pod status should be Completed:

    kubectl get pods

    In the response, the cuda-vectoradd pod will have the status Completed:

    NAME             READY   STATUS      RESTARTS   AGE
    cuda-vectoradd   0/1     Completed   0          51s
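If the pod completed, you can inspect its logs to see the result of the vector addition; the NVIDIA vectoradd sample prints Test PASSED on success:

```shell
# Show the output of the finished CUDA VectorAdd sample.
kubectl logs cuda-vectoradd
```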