Use Google Colab's GPU with PyTorch for efficient machine learning program execution.

PyTorch is a popular deep learning framework that provides easy-to-use APIs for building and training neural networks. Google Colab is a free cloud-based service that lets you run Python code in a Jupyter notebook environment and offers free GPU access for training deep learning models. This tutorial walks you through setting up PyTorch in Google Colab with GPU support and running a machine learning program end to end.

Step 1: Open Google Colab
First, open Google Colab in your web browser by going to https://colab.research.google.com/. You will need a Google account to use the service.

Step 2: Create a new notebook
Click on the "File" menu and select "New Python 3 notebook" to create a new notebook.

Step 3: Enable GPU support
Next, go to "Runtime" menu and select "Change runtime type". In the dialog that appears, select "GPU" from the "Hardware accelerator" dropdown menu and click "Save".

Step 4: Install PyTorch
To install PyTorch, run the following command in a code cell in your notebook:

!pip install torch torchvision

This command downloads and installs the latest version of PyTorch along with torchvision, a companion library of datasets, pretrained models, and image transforms. Note that Colab usually comes with PyTorch pre-installed, so this step is often optional, but running it ensures you have a recent version.
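
You can confirm which version ended up installed with a quick check:

import torch
print(torch.__version__)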

Step 5: Verify GPU support
To verify that PyTorch is using the GPU for computation, run the following code in a code cell in your notebook:

import torch

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

print('Using device:', device)

If the output of this code cell is "Using device: cuda", then PyTorch is successfully using the GPU for computation.
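
If you want more detail about the GPU you were assigned, torch.cuda exposes a few helpers, for example:

import torch

if torch.cuda.is_available():
    print('GPU:', torch.cuda.get_device_name(0))
    print('CUDA version:', torch.version.cuda)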

Step 6: Build and train a model
Now that you have PyTorch installed with GPU support, you can start building and training deep learning models. Here is an example of building a simple neural network with PyTorch:

import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim

# Define a simple neural network
class SimpleNN(nn.Module):
    def __init__(self):
        super(SimpleNN, self).__init__()
        self.fc1 = nn.Linear(784, 128)
        self.fc2 = nn.Linear(128, 10)

    def forward(self, x):
        x = torch.flatten(x, 1)
        x = self.fc1(x)
        x = F.relu(x)
        x = self.fc2(x)
        return x

# Load the MNIST dataset, converting images to tensors so the DataLoader can batch them
import torchvision.datasets as datasets
import torchvision.transforms as transforms
mnist_trainset = datasets.MNIST(root='./data', train=True, download=True,
                                transform=transforms.ToTensor())

# Define a data loader
train_loader = torch.utils.data.DataLoader(mnist_trainset, batch_size=64, shuffle=True)

# Initialize the neural network
model = SimpleNN().to(device)

# Define the loss function and optimizer
criterion = nn.CrossEntropyLoss()
optimizer = optim.Adam(model.parameters(), lr=0.001)

# Train the model
for epoch in range(5):
    for i, (data, targets) in enumerate(train_loader):
        data, targets = data.to(device), targets.to(device)

        optimizer.zero_grad()
        outputs = model(data)
        loss = criterion(outputs, targets)
        loss.backward()
        optimizer.step()

        if i % 100 == 0:
            print(f'Epoch {epoch}, Step {i}, Loss: {loss.item():.4f}')

This code defines a simple neural network for classifying images from the MNIST dataset, loads the dataset using torchvision, sets up a data loader for training, initializes the neural network, defines the loss function and optimizer, and trains the model over 5 epochs.
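
To see what the GPU buys you, you can time a large matrix multiplication on both devices. This is only a rough illustrative comparison, assuming the GPU runtime from Step 3 is enabled; the matrix size is arbitrary:

import time
import torch

size = 4096
a = torch.randn(size, size)
b = torch.randn(size, size)

# Time the multiplication on the CPU
start = time.time()
_ = a @ b
print(f'CPU: {time.time() - start:.3f} s')

# Move the data to the GPU and time the same multiplication there
a_gpu, b_gpu = a.to(device), b.to(device)
torch.cuda.synchronize()  # wait for the transfer to finish
start = time.time()
_ = a_gpu @ b_gpu
torch.cuda.synchronize()  # wait for the kernel to finish before reading the clock
print(f'GPU: {time.time() - start:.3f} s')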

Step 7: Evaluate the model
After training the model, you can evaluate its performance by testing it on a separate test set. Here is an example of testing the model on the MNIST test set:

mnist_testset = datasets.MNIST(root='./data', train=False, download=True,
                               transform=transforms.ToTensor())
test_loader = torch.utils.data.DataLoader(mnist_testset, batch_size=64, shuffle=False)

model.eval()  # switch the model to evaluation mode
with torch.no_grad():  # disable gradient tracking for inference
    correct = 0
    total = 0
    for data, targets in test_loader:
        data, targets = data.to(device), targets.to(device)
        outputs = model(data)
        _, predicted = torch.max(outputs, 1)  # index of the highest logit
        total += targets.size(0)
        correct += (predicted == targets).sum().item()

    print(f'Accuracy on the test set: {100 * correct / total:.2f}%')

This code loads the MNIST test set, sets the model to evaluation mode, and computes the accuracy of the model on the test set.
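
As a quick spot check, you can also run the model on a single test image and compare the prediction with the true label (a minimal sketch; the index 0 is arbitrary):

image, label = mnist_testset[0]  # one (tensor, label) pair
with torch.no_grad():
    output = model(image.unsqueeze(0).to(device))  # add a batch dimension
    predicted = output.argmax(dim=1).item()
print(f'Predicted: {predicted}, actual: {label}')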

That’s it! You have set up PyTorch in Google Colab with GPU support and trained and evaluated a simple neural network on the MNIST dataset. You can now use this setup to experiment with different deep learning models and datasets with the power of GPU acceleration. Happy coding!
