Installing CUDA, cuDNN, and PyTorch can be a daunting task, especially if you are not familiar with the different components and dependencies. In this tutorial, we will walk you through the full installation process of CUDA, cuDNN, and PyTorch for NVIDIA GPUs on a Linux system.
Step 1: Installing CUDA
CUDA (Compute Unified Device Architecture) is a parallel computing platform developed by NVIDIA. It allows software developers to use NVIDIA GPUs for general-purpose computing. To install CUDA on your Linux system, follow these steps:
1.1 Verify that your GPU is supported by CUDA by checking the CUDA compatibility list on NVIDIA’s website.
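If the NVIDIA driver is already present, nvidia-smi will show the detected GPU and the highest CUDA version that driver supports; otherwise lspci will at least confirm that the card is visible to the system:
nvidia-smi               # shows the GPU and the maximum CUDA version the driver supports
lspci | grep -i nvidia   # lists the NVIDIA card even if no driver is installed yet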
1.2 Download the CUDA Toolkit from NVIDIA’s website. Select the version that is compatible with your GPU and Linux distribution.
1.3 Once the download is complete, open a terminal window and navigate to the directory where the CUDA Toolkit was downloaded.
1.4 Make the installer executable by running the following command:
chmod +x cuda_*.run
1.5 Run the installer with the following command:
sudo ./cuda_*.run
1.6 Follow the on-screen instructions to complete the installation process. Make sure to specify the installation directory and select the components you want to install.
1.7 After the installation is complete, add the CUDA bin directory to your PATH by adding the following line to your ~/.bashrc or ~/.zshrc file:
export PATH=/usr/local/cuda/bin:$PATH
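Many CUDA applications also look for the CUDA libraries at run time, so it is common to add the library directory to LD_LIBRARY_PATH in the same file (the path below assumes the default installation location):
export LD_LIBRARY_PATH=/usr/local/cuda/lib64:$LD_LIBRARY_PATH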
1.8 Reboot your system to apply the changes.
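After rebooting, you can verify the toolkit and driver from a terminal (the exact version strings will depend on the release you installed):
nvcc --version   # prints the CUDA toolkit/compiler version
nvidia-smi       # confirms the driver can see the GPU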
Step 2: Installing cuDNN
cuDNN (CUDA Deep Neural Network) is a GPU-accelerated library for deep learning. It provides highly optimized implementations of common deep learning operations and is compatible with CUDA. To install cuDNN on your Linux system, follow these steps:
2.1 Download the cuDNN library from NVIDIA’s website. You will need to create an account to access the cuDNN download page.
2.2 Extract the downloaded archive:
tar -xzvf cudnn-*.tgz
2.3 Copy the cuDNN files to the CUDA installation directory:
sudo cp cuda/include/cudnn*.h /usr/local/cuda/include
sudo cp cuda/lib64/libcudnn* /usr/local/cuda/lib64
sudo chmod a+r /usr/local/cuda/include/cudnn*.h /usr/local/cuda/lib64/libcudnn*
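To confirm the headers are in place, you can print the cuDNN version macros. In cuDNN 8.x and later they live in cudnn_version.h; older releases kept them in cudnn.h:
grep -A 2 CUDNN_MAJOR /usr/local/cuda/include/cudnn_version.h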
Step 3: Installing PyTorch
PyTorch is an open-source machine learning library developed by Facebook. It provides a flexible and dynamic computational graph that makes it easy to build and train deep neural networks. To install PyTorch with CUDA and cuDNN support on your Linux system, follow these steps:
3.1 Install the required dependencies by running the following command:
sudo apt-get update
sudo apt-get install python3-pip python3-dev
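Optionally, you can keep the PyTorch installation isolated in a virtual environment so it does not interfere with system packages (the directory name below is arbitrary):
python3 -m venv ~/torch-env        # create the environment
source ~/torch-env/bin/activate    # activate it before running pip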
3.2 Install PyTorch using pip with CUDA and cuDNN support:
pip3 install torch torchvision torchaudio
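On Linux the default wheels from PyPI already ship with CUDA support, but if you need a build that matches a specific CUDA version, PyTorch also publishes wheels on per-version index URLs. For example (cu118 is only an illustration; pick the index that matches your toolkit on pytorch.org):
pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118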
3.3 Verify that PyTorch is installed correctly by running the following Python code:
import torch
print(torch.cuda.is_available())
If the output is True, PyTorch is installed and configured correctly with CUDA and cuDNN support.
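For a slightly more detailed check, the following one-liners print the PyTorch, CUDA, and cuDNN versions that PyTorch was built against, and the name of the detected GPU (the second command assumes the previous check printed True):
python3 -c "import torch; print(torch.__version__, torch.version.cuda, torch.backends.cudnn.version())"
python3 -c "import torch; print(torch.cuda.get_device_name(0))"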
In conclusion, installing CUDA, cuDNN, and PyTorch with GPU support on a Linux system can be a complex process, but by following this detailed tutorial, you should be able to successfully set up your development environment for deep learning applications. Remember to always refer to the official documentation for the latest installation instructions and updates.
Thank you for the video. I had a question regarding the NVIDIA driver install on 22.04: when I install the drivers via sudo install nvidia-driver-535 nvidia-dkms-535, it appears to install, but it never gets used, even after CUDA and cuDNN are installed. When I use libraries such as TensorFlow and PyTorch, they default to the CPU even when I specify otherwise. This also includes games; they default to using the CPU rather than the GPU. Can you help me with this problem?
Hello brother Saad, I am facing issues related to NVIDIA driver and CUDA toolkit compatibility. I want to install a library that requires CUDA toolkit 11.7, but the NVIDIA driver only offers CUDA version 12.2, and a custom installation of the drivers is not working and gives an error, even though the drivers are compatible with my RTX 3080 GPU. The driver's CUDA version 12.x and CUDA toolkit 11.7 also show an incompatibility while installing the required library mamba_ssm. I am installing everything on a freshly installed Ubuntu 22.04.
What is the name of your GTX? I get stuck choosing between GTX GeForce notebook and GTX 16 series (product series dropdown, 5:36).
Thank you so much bro, very good video
I've been trying for days, had some bad ideas, had some bad luck, but your tutorial had everything I needed to finally get torch.cuda.is_available() to evaluate to true. Thank you very much from the bottom of my heart.
The content is very useful. Keep it up!
I will try this out today. Thanks!
It feels like you're reading a script; I would have felt more engaged if it didn't come across that way. Making it in Hindi would also be good.