Training PyTorch with a GPU on Apple Silicon, specifically the M1 series of chips, has become increasingly popular thanks to the performance these chips offer. In this tutorial, I will walk you through the steps required to set up your environment and train PyTorch models on the GPU of an Apple Silicon device using the MPS (Metal Performance Shaders) backend.
Step 1: Install the necessary dependencies
Before we can start training PyTorch models with a GPU on Apple Silicon, we need to make sure all the necessary dependencies are installed. GPU acceleration is provided by PyTorch's MPS backend, which ships with PyTorch 1.12 and later, so start by installing a recent version of PyTorch. You can do this by running the following command in your terminal:
pip install torch torchvision torchaudio
Note that there is no separate GPU package to install: the MPS backend is built into PyTorch itself. The only requirements are macOS 12.3 or later and a native arm64 build of Python; under Rosetta (x86_64), the MPS backend will not be available.
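You can quickly confirm that you are running a native arm64 Python and a recent enough PyTorch. These are optional sanity checks, not part of the installation itself:
python -c "import platform; print(platform.machine())"   # should print arm64
python -c "import torch; print(torch.__version__)"       # should be 1.12 or newer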
Step 2: Verify GPU support
To verify that GPU support is working correctly, you can run the following Python script in your terminal:
import torch
print(torch.backends.mps.is_built())      # PyTorch was compiled with MPS support
print(torch.backends.mps.is_available())  # the MPS device can actually be used
If everything is set up correctly, both lines should print True. Note that torch.cuda.is_available() will always return False on Apple Silicon, because CUDA requires an NVIDIA GPU.
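As an additional check, you can run a small tensor operation on the MPS device. This is just a quick smoke test with arbitrary tensor sizes, not something the backend requires:
import torch
x = torch.rand(1000, 1000, device='mps')  # allocate a tensor directly on the Apple GPU
y = x @ x                                 # matrix multiply executed on the GPU
print(y.device)                           # should print mps:0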
Step 3: Training Pytorch models with the GPU
Now that you have GPU support set up and verified, you can start training PyTorch models with the GPU on your Apple Silicon device. To do this, you need to move your model and data to the MPS device (not 'cuda', which refers to NVIDIA GPUs) before training. You can do this with the following code snippet:
import torch

# model, data, target, loss_function, and num_epochs are assumed to be defined
# Select the MPS device, falling back to the CPU if it is unavailable
device = torch.device('mps' if torch.backends.mps.is_available() else 'cpu')

# Move model to the GPU
model = model.to(device)

# Move data and targets to the GPU
data = data.to(device)
target = target.to(device)

# Initialize optimizer
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)

# Training loop
for epoch in range(num_epochs):
    optimizer.zero_grad()
    output = model(data)
    loss = loss_function(output, target)
    loss.backward()
    optimizer.step()
In this code snippet, we first move our model, data, and targets to the GPU using the to(device) method, where device is 'mps' whenever the backend is available. We then initialize our optimizer and run our training loop as usual.
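If you want something you can run end to end, here is a self-contained sketch using a hypothetical two-layer network and synthetic data; the layer sizes, batch size, epoch count, and loss function are illustrative choices of mine, not part of this tutorial's setup:
import torch
import torch.nn as nn

device = torch.device('mps' if torch.backends.mps.is_available() else 'cpu')

# Hypothetical model and synthetic data, just to exercise the device
model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10)).to(device)
data = torch.randn(128, 32, device=device)             # 128 random samples
target = torch.randint(0, 10, (128,), device=device)   # random class labels

loss_function = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)

for epoch in range(5):
    optimizer.zero_grad()
    output = model(data)
    loss = loss_function(output, target)
    loss.backward()
    optimizer.step()
    print(f"epoch {epoch}: loss {loss.item():.4f}")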
Step 4: Monitoring GPU usage
To monitor GPU usage while training your PyTorch models on Apple Silicon, note that nvidia-smi will not work here, since it is an NVIDIA-only tool. Instead, open Activity Monitor and choose Window > GPU History, or run the following command in your terminal:
sudo powermetrics --samplers gpu_power
This prints real-time information about GPU utilization and power draw during training.
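From within Python, newer PyTorch releases (2.0 and later, to the best of my knowledge) also expose the MPS allocator's memory counters, which is handy for spotting memory growth during training:
import torch
# Bytes currently held by live tensors on the MPS device
print(torch.mps.current_allocated_memory())
# Total bytes the Metal driver has reserved for this process
print(torch.mps.driver_allocated_memory())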
By following these steps, you should now be able to train PyTorch models on the GPU of your Apple Silicon device and take advantage of the speedup the GPU offers for deep learning workloads.
Is this relevant to fixing the PYTORCH_MPS_HIGH_WATERMARK_RATIO error by setting it to 0.0 on my MacBook Pro M3 Pro?
I tried this before, back when PyTorch support for M1 was new, along with tensorflow-metal and other Apple "suggestions", and it didn't work; I had a lot of problems. Congratulations if you can now use that processor for GPU tasks, but I don't think it compares with training on even a minimal GPU in a free Google Colab server or on a gaming PC.
OK, seems rather hopeless for my Apple MacBook Air. Anyway, for the price, sort of expected…
I ran the command with --device mps and got the error below:
RuntimeError: PyTorch is not linked with support for mps devices
torch_version: '1.12.0.dev20220520'
I installed torch using Anaconda.
Hi, nice video, can you try with a different or bigger batch size…
I tried it on ultralytics yolov5 and it doesn't work yet. They're making progress, and hopefully by the end of the year it will work on yolov5.
Great!
My torch version is 1.12.0.dev20220518, but it's still using the CPU.
Great video and so timely too!