Install drivers for GPU node groups
You can create Managed Kubernetes clusters with GPUs that have no pre-installed drivers. To install the driver yourself, use the NVIDIA® GPU Operator application.
- Install the Helm package manager version 3.7.0 or higher.
- Add the nvidia repository in Helm:

      helm repo add nvidia https://helm.ngc.nvidia.com/nvidia
- Update the nvidia repository in Helm:

      helm repo update
- Install NVIDIA GPU Operator, specifying the required GPU driver version:

      helm install \
        --namespace gpu-operator \
        --create-namespace \
        --set driver.version=<driver_version> \
        gpu-operator nvidia/gpu-operator

  In <driver_version>, specify the NVIDIA® driver version. You can find it in the NVIDIA GPU Driver row of the GPU Operator Component Matrix table in the NVIDIA® documentation.
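Since a mistyped driver version only fails later, inside the cluster, it can help to sanity-check the version string locally before running helm install. The sketch below is an assumption, not part of the official workflow, and the version shown is only a hypothetical example; always take the real value from the GPU Operator Component Matrix:

```shell
# Sketch: validate the driver version string format before passing it
# to helm. "535.104.05" is a hypothetical example value only; use the
# version listed in the GPU Operator Component Matrix instead.
DRIVER_VERSION="535.104.05"
if printf '%s' "$DRIVER_VERSION" | grep -Eq '^[0-9]+\.[0-9]+(\.[0-9]+)?$'; then
  echo "driver version format ok: $DRIVER_VERSION"
else
  echo "unexpected driver version format: $DRIVER_VERSION" >&2
  exit 1
fi
```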
- To verify that NVIDIA GPU Operator and the GPU driver are installed correctly, run a GPU application, for example the CUDA VectorAdd vector addition application:

      cat << EOF | kubectl create -f -
      apiVersion: v1
      kind: Pod
      metadata:
        name: cuda-vectoradd
      spec:
        restartPolicy: OnFailure
        containers:
          - name: cuda-vectoradd
            image: "nvidia/samples:vectoradd-cuda11.2.1"
            resources:
              limits:
                nvidia.com/gpu: 1
      EOF
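The same manifest can be rendered from a small script so the GPU count is parameterizable. This is a sketch: the GPU_COUNT variable is an assumption, not part of the original step, which hardcodes 1. Piping the rendered output to kubectl create -f - creates the pod exactly as above:

```shell
# Sketch: render the CUDA VectorAdd pod manifest with a configurable
# GPU count (GPU_COUNT is a hypothetical parameter; the original step
# hardcodes 1). Pipe the result to `kubectl create -f -` to apply it.
GPU_COUNT="${GPU_COUNT:-1}"
manifest=$(cat <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: cuda-vectoradd
spec:
  restartPolicy: OnFailure
  containers:
    - name: cuda-vectoradd
      image: "nvidia/samples:vectoradd-cuda11.2.1"
      resources:
        limits:
          nvidia.com/gpu: ${GPU_COUNT}
EOF
)
printf '%s\n' "$manifest"
```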
- Check that the CUDA VectorAdd application has completed successfully: the pod status should be Completed. Run:

      kubectl get pods

  The output for the cuda-vectoradd pod will show the Completed status:

      NAME             READY   STATUS      RESTARTS   AGE
      cuda-vectoradd   0/1     Completed   0          51s
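For scripting this check, the STATUS column can be extracted from the kubectl get pods output with awk. The sketch below inlines the sample output from above so it runs without a cluster; in a real environment you would pipe in the live kubectl get pods output instead:

```shell
# Sketch: extract the STATUS column for the cuda-vectoradd pod.
# The sample output is inlined so the script runs without a cluster;
# replace it with `kubectl get pods` against a real cluster.
sample='NAME             READY   STATUS      RESTARTS   AGE
cuda-vectoradd   0/1     Completed   0          51s'
status=$(printf '%s\n' "$sample" | awk '$1 == "cuda-vectoradd" {print $3}')
echo "pod status: $status"
```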