Install drivers for GPU node groups

You can create Managed Kubernetes clusters with GPUs without pre-installed drivers. To install the driver yourself, use the NVIDIA® GPU Operator application.

  1. Connect to the cluster.

  2. Install the Helm package manager version 3.7.0 or higher.
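    If you want to confirm from a script that the installed Helm meets the 3.7.0 minimum, one possible check is a version comparison. This is an illustration, not a Helm command: the `version_ge` helper is a sketch that relies on GNU `sort -V`, and `"3.12.0"` stands in for the real output of `helm version --template '{{.Version}}'` with the leading `v` stripped.

    ```shell
    # Illustrative helper: version_ge A B succeeds when version A >= version B.
    # Relies on version sort (sort -V) from GNU coreutils.
    version_ge() {
      [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
    }

    # "3.12.0" is a sample value standing in for the real Helm version.
    if version_ge "3.12.0" "3.7.0"; then
      echo "Helm version is sufficient"
    fi
    ```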

  3. Add the nvidia repository to Helm:

    helm repo add nvidia https://helm.ngc.nvidia.com/nvidia
  4. Update the nvidia repository in Helm:

    helm repo update
  5. Install NVIDIA GPU Operator and specify the correct version of the GPU driver:

    helm install \
    --namespace gpu-operator \
    --create-namespace \
    --set driver.version=<driver_version> \
    gpu-operator nvidia/gpu-operator

    Here `<driver_version>` is the NVIDIA® driver version. You can find it in the NVIDIA GPU Driver row of the GPU Operator Component Matrix table in the NVIDIA® documentation.
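    Equivalently, the driver version can be pinned in a values file instead of a `--set` flag. A minimal sketch, assuming the file is named `values.yaml`:

    ```yaml
    # values.yaml -- only the driver version is set; all other
    # GPU Operator values keep their chart defaults.
    driver:
      version: "<driver_version>"  # take the value from the GPU Operator Component Matrix
    ```

    Then install with `helm install --namespace gpu-operator --create-namespace -f values.yaml gpu-operator nvidia/gpu-operator`.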

  6. To verify that NVIDIA GPU Operator and the GPU driver are installed correctly, run a GPU application. For example, the CUDA VectorAdd vector addition application:

    cat << EOF | kubectl create -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: cuda-vectoradd
    spec:
      restartPolicy: OnFailure
      containers:
      - name: cuda-vectoradd
        image: "nvidia/samples:vectoradd-cuda11.2.1"
        resources:
          limits:
            nvidia.com/gpu: 1
    EOF
  7. Check that the CUDA VectorAdd application has completed successfully; the pod status should be Completed:

    kubectl get pods

    The cuda-vectoradd pod will have the Completed status in the response:

    NAME             READY   STATUS      RESTARTS   AGE
    cuda-vectoradd   0/1     Completed   0          51s
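    For a script-friendly check instead of reading the table, you can inspect the pod phase directly: `kubectl get pods` displays the `Succeeded` phase as the `Completed` status. A minimal sketch, where the sample value stands in for live cluster output:

    ```shell
    # Illustrative helper: a pod phase of "Succeeded" is what
    # `kubectl get pods` displays as STATUS Completed.
    pod_succeeded() {
      [ "$1" = "Succeeded" ]
    }

    # In a live cluster, the phase would come from:
    #   phase=$(kubectl get pod cuda-vectoradd -o jsonpath='{.status.phase}')
    phase="Succeeded"   # sample value, not real cluster output
    if pod_succeeded "$phase"; then
      echo "cuda-vectoradd completed successfully"
    fi
    ```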