Running PyTorch Models on an AMD MI300X: Is it Easy?

Running PyTorch models on an AMD MI300X is fairly straightforward. In this tutorial, I will guide you through setting up your environment and running a PyTorch model on an AMD MI300X GPU.

Step 1: Install ROCm and the ROCm build of PyTorch
First, you need AMD's ROCm software stack on your system; it provides the GPU driver and runtime libraries that PyTorch uses on AMD hardware such as the MI300X. Follow the installation instructions for your Linux distribution in AMD's ROCm documentation, then reboot your system to apply the changes.

Next, install the ROCm build of PyTorch. The default pip wheel targets CUDA, so point pip at PyTorch's ROCm wheel index instead (replace the rocm6.2 suffix with the version matching your ROCm installation; pytorch.org shows the exact command for each release):

pip install torch torchvision --index-url https://download.pytorch.org/whl/rocm6.2
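
Before moving on, you can typically confirm that the system sees the GPU with ROCm's rocm-smi utility (the exact output varies with the ROCm version); if the MI300X shows up here, the driver side is in good shape:

rocm-smi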

Step 2: Check GPU availability
To make sure that your AMD MI300X GPU is recognized by PyTorch, run the following Python snippet to check whether a GPU is available (the ROCm build of PyTorch exposes AMD GPUs through the familiar torch.cuda API):

import torch
print(torch.cuda.is_available())

If the output is True, your AMD MI300X GPU is recognized by PyTorch.
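
You can also ask PyTorch which device it found and, on ROCm builds, which HIP version it was built against. A minimal sketch (the reported name and version depend on your driver and PyTorch build):

import torch

# Number of visible GPUs and the name of the first one
print(torch.cuda.device_count())
print(torch.cuda.get_device_name(0))

# torch.version.hip is a version string on ROCm builds and None on CUDA builds
print(torch.version.hip)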

Step 3: Run a PyTorch model on the AMD MI300X GPU
Now that the GPU is recognized by PyTorch, you can run your model on the AMD MI300X. The example below trains a simple neural network on the GPU:

import torch
import torch.nn as nn
import torch.optim as optim

# Define a simple neural network
class SimpleModel(nn.Module):
    def __init__(self):
        super(SimpleModel, self).__init__()
        self.fc1 = nn.Linear(784, 128)
        self.fc2 = nn.Linear(128, 10)

    def forward(self, x):
        x = torch.flatten(x, 1)
        x = torch.relu(self.fc1(x))
        x = self.fc2(x)
        return x

# Initialize the model and move it to the GPU
model = SimpleModel().cuda()

# Define loss function and optimizer
criterion = nn.CrossEntropyLoss()
optimizer = optim.Adam(model.parameters(), lr=0.001)

# Generate random input data
inputs = torch.randn(64, 784).cuda()
targets = torch.randint(0, 10, (64,)).cuda()

# Train the model
for epoch in range(10):
    optimizer.zero_grad()
    outputs = model(inputs)
    loss = criterion(outputs, targets)
    loss.backward()
    optimizer.step()
    print(f'Epoch {epoch+1}, Loss: {loss.item()}')

In the code snippet above, we define a simple neural network called SimpleModel, initialize it, and move it to the GPU with the .cuda() method (which targets the MI300X when running on the ROCm build of PyTorch). We then define the loss function and optimizer, generate a random batch of input data, and train the model for 10 epochs on the GPU.
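
As a variation, it is common to select the device explicitly and move the model and tensors with .to(device), so the same script falls back to the CPU when no GPU is available. A minimal sketch that reuses the SimpleModel class defined above:

import torch

# Under ROCm, the AMD GPU is exposed through the torch.cuda API
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = SimpleModel().to(device)
inputs = torch.randn(64, 784, device=device)
targets = torch.randint(0, 10, (64,), device=device)

# One forward pass to confirm everything runs on the chosen device
outputs = model(inputs)
print(outputs.shape)  # torch.Size([64, 10])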

By following these steps, you can run PyTorch models on an AMD MI300X GPU and use its computational power to accelerate the training of deep learning models. Happy coding!

1 Comment
@Sam-te6np
12 days ago

Nice to see AMD still making an effort for devs; hope they bring this support to more older GPUs, since the list of supported GPUs is very small.
Did you come across any AMD-specific bugs, btw?
