How to Use TensorFlow Serving for Deep Learning – Tutorial 48 (TensorFlow, Python)

In this tutorial, we are going to cover TensorFlow Serving, a flexible, high-performance system for serving machine learning models in production environments. Its architecture makes it easy to deploy new models and experiment with them without changing your server setup or client APIs.

For this tutorial, we are going to assume that you have some basic knowledge of TensorFlow and Python. If you are new to TensorFlow, we recommend checking out some of the introductory tutorials available on the TensorFlow website.

Let’s get started with our tutorial on TensorFlow Serving!

Step 1: Installing TensorFlow Serving

The first step is to set up TensorFlow Serving on your machine. Open your terminal and run the following command to install the Python client package:

pip install tensorflow-serving-api
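
Note that this pip package provides only the Python client API for TensorFlow Serving (we will use it for gRPC requests later); the tensorflow_model_server binary used in Step 4 ships separately. The simplest way to get the server is Docker:

docker pull tensorflow/serving

On Debian/Ubuntu, you can instead install it from Google's apt repository (commands as given in the TensorFlow Serving documentation; adjust if your distribution differs):

echo "deb [arch=amd64] http://storage.googleapis.com/tensorflow-serving-apt stable tensorflow-model-server tensorflow-model-server-universal" | sudo tee /etc/apt/sources.list.d/tensorflow-serving.list
curl https://storage.googleapis.com/tensorflow-serving-apt/tensorflow-serving.release.pub.gpg | sudo apt-key add -
sudo apt-get update && sudo apt-get install tensorflow-model-server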

Step 2: Building a TensorFlow model

Next, we need to build a TensorFlow model that we want to serve using TensorFlow Serving. For this tutorial, we are going to build a simple model that classifies images of handwritten digits from the MNIST dataset. Here is the code to build and train the model:

import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Flatten, Dense

# Load the MNIST dataset
mnist = tf.keras.datasets.mnist
(x_train, y_train), (x_test, y_test) = mnist.load_data()

# Normalize the pixel values
x_train, x_test = x_train / 255.0, x_test / 255.0

# Build the model
model = Sequential([
    Flatten(input_shape=(28, 28)),
    Dense(128, activation='relu'),
    Dense(10, activation='softmax')
])

model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])

# Train the model
model.fit(x_train, y_train, epochs=5, validation_data=(x_test, y_test))

Step 3: Exporting the model for TensorFlow Serving

Once we have trained our model, we need to export it in the SavedModel format, which is what TensorFlow Serving loads. Here is the code to export the model:

import os

# Define the directory to save the model; TensorFlow Serving expects
# numbered version subdirectories, so '1' here means "version 1"
export_path = 'saved_model/1'

# Save the model in the SavedModel format
model.save(export_path)
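
A caveat on versions: with Keras 3 (the default in TensorFlow 2.16 and later), model.save expects a .keras file path and no longer writes a SavedModel directory. If that applies to your setup, export the serving artifact explicitly instead:

# Keras 3 (TensorFlow 2.16+): write a TF-Serving-compatible SavedModel
model.export(export_path)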

Step 4: Running TensorFlow Serving

Now that we have exported our model, we can start TensorFlow Serving to serve it. Open your terminal and run the following command:

tensorflow_model_server --port=8500 --rest_api_port=8501 --model_name=mnist --model_base_path=/path/to/exported_model

Make sure to replace /path/to/exported_model with the absolute path to the saved_model directory, i.e. the parent of the numbered version folders, not the 1 subdirectory itself (TensorFlow Serving requires an absolute path here and automatically loads the highest version it finds). Port 8500 serves gRPC traffic and port 8501 serves the REST API.
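
If you chose the Docker route in Step 1, a roughly equivalent invocation mounts the export directory into the container (the source path below is a placeholder for wherever you saved the model):

docker run -p 8500:8500 -p 8501:8501 --mount type=bind,source=/absolute/path/to/saved_model,target=/models/mnist -e MODEL_NAME=mnist tensorflow/serving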

Step 5: Making predictions using the served model

Once TensorFlow Serving is running, we can make predictions against the served model through its REST API. Here is example code to make predictions:

import requests
import json

# Define the input data: the first three test images from Step 2,
# serialized as nested lists in the JSON request body
data = json.dumps({"signature_name": "serving_default", "instances": x_test[0:3].tolist()})

# Make a POST request to the prediction endpoint
headers = {"content-type": "application/json"}
response = requests.post('http://localhost:8501/v1/models/mnist:predict', data=data, headers=headers)

# Print the predictions
predictions = json.loads(response.text)['predictions']
print(predictions)
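
Each prediction in the response is a vector of 10 class probabilities, one per digit. To turn those into predicted digits, take the index of the largest probability:

import numpy as np

# The index of the largest probability is the predicted digit
predicted_digits = np.argmax(predictions, axis=1)
print(predicted_digits)
print(y_test[0:3])  # compare against the ground-truth labels from Step 2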

And that’s it! You have successfully served a TensorFlow model using TensorFlow Serving and made predictions using the served model.
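
As a bonus, the tensorflow-serving-api package from Step 1 also lets you query the same server over gRPC on port 8500, which is typically faster than REST for large payloads. Below is a minimal sketch; the input key 'flatten_input' is an assumption based on the default name Keras gives the first layer, so verify yours with saved_model_cli show --dir saved_model/1 --all:

import grpc
import tensorflow as tf
from tensorflow_serving.apis import predict_pb2, prediction_service_pb2_grpc

# Connect to the gRPC port passed to tensorflow_model_server
channel = grpc.insecure_channel('localhost:8500')
stub = prediction_service_pb2_grpc.PredictionServiceStub(channel)

# Build the prediction request
request = predict_pb2.PredictRequest()
request.model_spec.name = 'mnist'
request.model_spec.signature_name = 'serving_default'
# 'flatten_input' is assumed; check your model's actual input key
request.inputs['flatten_input'].CopyFrom(
    tf.make_tensor_proto(x_test[0:3], dtype=tf.float32))

# Send the request with a 10-second timeout and print the raw response
result = stub.Predict(request, timeout=10.0)
print(result)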

In this tutorial, we covered how to install TensorFlow Serving, build a TensorFlow model, export the model for serving, run TensorFlow Serving, and make predictions using the served model. TensorFlow Serving is a powerful tool for serving machine learning models in production environments, and we hope this tutorial helped you understand how to use it.

If you have any questions or run into any issues, feel free to ask for help in the comments section. Happy coding!
