PyTorch Mobile Runtime for Android is a powerful framework that allows you to run PyTorch models on mobile devices. In this tutorial, we will walk you through how to set up and run PyTorch Mobile on an Android device.
Step 1: Set up your development environment
Before you can start using PyTorch Mobile on Android, you need to set up your development environment. Here are the steps you need to follow:
- Install Android Studio: Android Studio is the official integrated development environment (IDE) for Android app development. You can download Android Studio from the official website and install it on your computer.
- Set up your Android device: To run PyTorch Mobile on your Android device, you need to enable Developer Options. Go to Settings > About Phone and tap Build Number seven times. Then turn on USB debugging under Settings > Developer Options so Android Studio can deploy to the device.
- Install PyTorch on your computer: you will use desktop PyTorch to prepare and export the model that the app loads. You can install it by running the following command in your terminal:
pip install torch torchvision
Step 2: Build and run a PyTorch Mobile app on Android
Once you have set up your development environment, you can now start building and running a PyTorch Mobile app on your Android device. Here are the steps you need to follow:
- Create a new Android project: Open Android Studio and create a new Android project. Select the Empty Activity template and click Finish.
- Add PyTorch Mobile to your project: To use PyTorch Mobile in your Android project, add the PyTorch Android libraries to your app-level build.gradle file. The pytorch_android_torchvision_lite artifact provides the TensorImageUtils helpers used below:
dependencies {
    implementation 'org.pytorch:pytorch_android_lite:1.9.0'
    implementation 'org.pytorch:pytorch_android_torchvision_lite:1.9.0'
}
- Load your PyTorch model: Next, copy the model file you exported for the lite interpreter (a .ptl file) into the assets folder of your Android project. The lite runtime loads a model from a file path, so the usual pattern (as in the official PyTorch Android demo app) is a small assetFilePath helper that copies the asset into the app's files directory and returns its path. With org.pytorch.LiteModuleLoader and org.pytorch.Module imported:
Module module = null;
try {
    // assetFilePath copies "model.ptl" out of assets/ and returns its absolute path
    module = LiteModuleLoader.load(assetFilePath(this, "model.ptl"));
} catch (IOException e) {
    Log.e(TAG, "Error reading model", e);
}
- Run inference with your PyTorch model: Now that the model is loaded, you can run inference by converting your input image to a tensor, passing it through the model, and reading back the scores:
// Convert the bitmap to a normalized float tensor
// (TensorImageUtils comes from org.pytorch.torchvision)
Tensor input = TensorImageUtils.bitmapToFloat32Tensor(bitmap,
        TensorImageUtils.TORCHVISION_NORM_MEAN_RGB, TensorImageUtils.TORCHVISION_NORM_STD_RGB);
Tensor output = module.forward(IValue.from(input)).toTensor();
// One score per class; the index of the largest score is the predicted class
float[] scores = output.getDataAsFloatArray();
- Run your app on your Android device: Finally, you can run your PyTorch Mobile app on your Android device. Connect your Android device to your computer, select it as the deployment target in Android Studio, and build and run your app.
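Before relying on on-device results, it can be worth checking that the exported lite-interpreter file reproduces the desktop model's outputs on the same input. A minimal sketch, assuming desktop PyTorch is installed; the tiny Sequential model here is a hypothetical stand-in for your real network:

```python
import torch
from torch.jit.mobile import _load_for_lite_interpreter

# Hypothetical stand-in for your real model
model = torch.nn.Sequential(torch.nn.Linear(4, 2)).eval()
scripted = torch.jit.script(model)
scripted._save_for_lite_interpreter("check.ptl")

# Reload the exported artifact with the lite interpreter and compare outputs
lite = _load_for_lite_interpreter("check.ptl")
x = torch.randn(1, 4)
with torch.no_grad():
    desktop_out = model(x)
lite_out = lite(x)
print(torch.allclose(desktop_out, lite_out, atol=1e-6))
```

If outputs still diverge on the device itself, the usual culprits are preprocessing differences (resize, normalization, channel order) between your desktop pipeline and the Android code, rather than the runtime.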
And that’s it! You have successfully set up and run PyTorch Mobile on your Android device. You can now start building powerful machine learning models and running them on your mobile device using PyTorch Mobile.