In this tutorial, we will walk you through the process of deploying a machine learning model using FastAPI, Docker, and Heroku.
FastAPI is a modern, fast (high-performance) web framework for building APIs with Python 3.6+ based on standard Python type hints. Docker is a platform for developing, shipping, and running applications in containers. Heroku is a cloud platform that lets you build, deploy, and scale applications quickly and easily.
By the end of this tutorial, you will have a fully functioning API that can serve predictions from your machine learning model. Let’s get started!
Step 1: Create a machine learning model
For this tutorial, we will use a simple machine learning model that predicts the sentiment of a text message (positive or negative). You can use any machine learning model of your choice.
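As a minimal sketch of such a model (the use of scikit-learn and joblib, the toy training data, and the sentiment_model.joblib filename are assumptions for illustration, not requirements), you could train and save a small text classifier like this:

# train_model.py - train a tiny sentiment classifier and save it to disk
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
import joblib

# Toy dataset; replace with your own labeled text data
texts = ["I love this", "This is great", "I hate this", "This is terrible"]
labels = ["positive", "positive", "negative", "negative"]

# Pipeline: TF-IDF features followed by a logistic regression classifier
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

# Persist the trained pipeline so the API can load it later
joblib.dump(model, "sentiment_model.joblib")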
Step 2: Create a FastAPI app
First, install FastAPI and the Uvicorn ASGI server using pip:
pip install fastapi uvicorn
Next, create a new Python file (e.g., app.py), import the necessary libraries, and load your trained model (here we assume it was saved with joblib as in the sketch above):
from fastapi import FastAPI
import joblib  # assumption: the Step 1 model was saved with joblib
app = FastAPI()
model = joblib.load("sentiment_model.joblib")  # load the trained pipeline for the route below
Next, define a route that accepts text input and returns a prediction from your machine learning model:
@app.get("/predict/{text}")
def predict_sentiment(text: str):
    # Make a prediction with the loaded model (a scikit-learn pipeline expects a list of texts)
    prediction = model.predict([text])[0]
    return {"sentiment": prediction}
Step 3: Build a Docker image
Next, create a Dockerfile in the same directory as your app.py file:
FROM tiangolo/uvicorn-gunicorn-fastapi:python3.8
COPY ./app.py /app/main.py
This Dockerfile starts from a base image that ships Python, FastAPI, Uvicorn, and Gunicorn, and copies your app.py into the container as /app/main.py, which is where the base image expects to find the application.
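In practice, the base image only ships the web-serving stack, so your model's own dependencies and the saved model file also need to go into the image. As a sketch, assuming a requirements.txt listing e.g. scikit-learn and joblib, and the sentiment_model.joblib file from the earlier example, the Dockerfile could look like this:

FROM tiangolo/uvicorn-gunicorn-fastapi:python3.8

# Install the model's dependencies (requirements.txt is assumed to list e.g. scikit-learn and joblib)
COPY ./requirements.txt /app/requirements.txt
RUN pip install --no-cache-dir -r /app/requirements.txt

# Copy the application code and the saved model into the image (paths assumed from the earlier steps)
COPY ./app.py /app/main.py
COPY ./sentiment_model.joblib /app/sentiment_model.joblib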
Build the Docker image using the following command:
docker build -t my_fastapi_app .
Step 4: Run the Docker container
Run the Docker container and map the container’s port to the host machine’s port:
docker run -d -p 8000:80 my_fastapi_app
You can now access your FastAPI app at http://localhost:8000.
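To confirm the containerized API responds, you can call the prediction route from Python with the requests library (the example text is arbitrary, and requests must be installed):

import requests

# Query the prediction endpoint exposed by the running container
response = requests.get("http://localhost:8000/predict/awesome")
print(response.json())  # e.g. {"sentiment": "positive"}, depending on your model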
Step 5: Deploy to Heroku
Create a Procfile in the root directory of your application with the following content:
web: uvicorn app:app --host=0.0.0.0 --port=$PORT
(The Procfile is used when deploying with Heroku's Python buildpack; if you push the Docker image to Heroku's container registry as shown below, the container's own start command is used instead.)
Next, create a Heroku account if you don’t already have one and install the Heroku CLI.
Log in to Heroku using the CLI:
heroku login
Create a new Heroku app (Heroku app names may contain only lowercase letters, digits, and dashes):
heroku create your-app-name
Log in to Heroku's container registry and push your Docker image:
heroku container:login
heroku container:push web -a your-app-name
Release the new version of your app:
heroku container:release web -a your-app-name
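If the app fails to start after the release, the Heroku logs are the first place to look:

heroku logs --tail -a your-app-name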
Your FastAPI app with your machine learning model is now deployed on Heroku and accessible at https://your-app-name.herokuapp.com.
That’s it! You have successfully deployed a machine learning model using FastAPI, Docker, and Heroku. Feel free to customize the app further to suit your needs. Happy coding!