In this tutorial, we will guide you through the process of creating an executable image analysis GUI app using Python libraries such as PyQt, TensorFlow, Mediapipe, and OpenCV. This app will allow users to load an image, apply various image analysis techniques, and view the results in a user-friendly GUI interface.
Before we begin, make sure you have the following libraries installed on your system:
- PyQt5: A Python binding for the Qt application development framework.
- TensorFlow: An open-source machine learning framework developed by Google.
- Mediapipe: A cross-platform framework for building multimodal applied machine learning pipelines.
- OpenCV: An open-source computer vision and machine learning software library.
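If you don't have them yet, all four can typically be installed from PyPI with pip (the package names below are the usual PyPI names; pin versions as needed for your environment):
pip install PyQt5 tensorflow mediapipe opencv-python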
Step 1: Setting up the project structure
Create a new directory for your project and navigate to it in the terminal. Create subdirectories for your project files, such as src, assets, and models.
mkdir image_analysis_app
cd image_analysis_app
mkdir src assets models
Step 2: Designing the GUI layout
Open your favorite text editor or IDE and create a new Python file main.py inside the src directory. In this file, we will define the layout and functionality of the GUI app using PyQt5.
# main.py
import sys
import cv2
from PyQt5.QtWidgets import (QApplication, QMainWindow, QWidget, QVBoxLayout,
                             QPushButton, QFileDialog, QGraphicsScene, QGraphicsView)
from PyQt5.QtGui import QImage, QPixmap


class ImageAnalysisApp(QMainWindow):
    def __init__(self):
        super().__init__()
        self.initUI()

    def initUI(self):
        self.setWindowTitle('Image Analysis App')
        self.setGeometry(100, 100, 800, 600)

        # Scene and view that display the loaded image
        self.scene = QGraphicsScene()
        self.view = QGraphicsView(self.scene)
        self.pixmapItem = None  # handle on the pixmap item currently shown in the scene

        layout = QVBoxLayout()
        layout.addWidget(self.view)

        # QMainWindow hosts its layout through a central widget
        container = QWidget()
        container.setLayout(layout)
        self.setCentralWidget(container)

        self.show()


if __name__ == '__main__':
    app = QApplication(sys.argv)
    ex = ImageAnalysisApp()
    sys.exit(app.exec_())
This code sets up a basic PyQt5 window with a QGraphicsView (backed by a QGraphicsScene) for displaying the image, hosted in a central widget. We will populate this window with more widgets and functionality in the following steps.
Step 3: Loading and displaying an image
In this step, we will add functionality to load an image from the file system and display it in the GUI.
# main.py
    def loadImage(self):
        filename, _ = QFileDialog.getOpenFileName(self, 'Open Image', '', 'Image files (*.jpg *.png)')
        if filename:
            image = cv2.imread(filename)
            if image is None:
                return
            height, width, channel = image.shape
            bytesPerLine = 3 * width
            # OpenCV loads images in BGR order; rgbSwapped() converts to RGB for Qt
            qImg = QImage(image.data, width, height, bytesPerLine, QImage.Format_RGB888).rgbSwapped()
            pixmap = QPixmap.fromImage(qImg)
            self.scene.clear()
            self.pixmapItem = self.scene.addPixmap(pixmap)
We have added a new method loadImage to the ImageAnalysisApp class that opens a file dialog for selecting an image file and displays the selected image in the GUI. The method also stores the returned pixmap item in self.pixmapItem so that the analysis steps below can retrieve the currently displayed image.
Step 4: Integrating TensorFlow for image analysis
Now, let’s integrate TensorFlow into our app to perform image analysis tasks such as object detection or image classification.
# main.py
import numpy as np
import tensorflow as tf
from tensorflow.keras.applications.mobilenet_v2 import decode_predictions, preprocess_input
from tensorflow.keras.preprocessing import image

    def analyzeImage(self):
        if self.pixmapItem is None:
            return
        # Save the currently displayed image to a temporary file
        qImg = self.pixmapItem.pixmap().toImage()
        qImg.save('temp.jpg')
        # Load and preprocess the image for MobileNetV2 (which expects 224x224 input)
        img = image.load_img('temp.jpg', target_size=(224, 224))
        img_array = image.img_to_array(img)
        img_array = np.expand_dims(img_array, axis=0)
        img_array = preprocess_input(img_array)
        # Classify with the pretrained ImageNet model and show the top predictions
        model = tf.keras.applications.MobileNetV2(weights='imagenet')
        predictions = model.predict(img_array)
        labels = decode_predictions(predictions)
        result = ', '.join([label[1] for label in labels[0]])
        self.statusBar().showMessage(result)
This code adds a new method analyzeImage to the ImageAnalysisApp class that loads the displayed image, preprocesses it for the MobileNetV2 model, makes predictions, and displays the results in the status bar.
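decode_predictions also returns a confidence score as the third element of each prediction tuple, so if you want the status bar to show how certain the model is, you could, for example, format the result like this:
        result = ', '.join('{} ({:.2f})'.format(label[1], label[2]) for label in labels[0])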
Step 5: Implementing feature extraction with Mediapipe
Next, we will integrate Mediapipe into our app for feature extraction tasks, such as facial landmarks detection.
# main.py
import mediapipe as mp

    def extractFeatures(self):
        if self.pixmapItem is None:
            return
        qImg = self.pixmapItem.pixmap().toImage()
        qImg.save('temp.jpg')
        image_data = cv2.imread('temp.jpg')
        mp_drawing = mp.solutions.drawing_utils
        mp_face_mesh = mp.solutions.face_mesh
        with mp_face_mesh.FaceMesh(static_image_mode=True) as face_mesh:
            # Mediapipe expects RGB input, while OpenCV loads images as BGR
            results = face_mesh.process(cv2.cvtColor(image_data, cv2.COLOR_BGR2RGB))
        if not results.multi_face_landmarks:
            self.statusBar().showMessage('No face detected')
            return
        annotated_image = image_data.copy()
        mp_drawing.draw_landmarks(annotated_image, results.multi_face_landmarks[0], mp_face_mesh.FACEMESH_TESSELATION)
        height, width, channel = annotated_image.shape
        bytesPerLine = 3 * width
        qImg = QImage(annotated_image.data, width, height, bytesPerLine, QImage.Format_RGB888).rgbSwapped()
        pixmap = QPixmap.fromImage(qImg)
        self.scene.clear()
        self.pixmapItem = self.scene.addPixmap(pixmap)
This code adds a new method extractFeatures that uses the Face Mesh model from Mediapipe to detect facial landmarks in the displayed image and annotate it with the detected landmarks.
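By default FaceMesh returns at most one face. If your images may contain several faces, a small variation on the method above is to raise max_num_faces and draw every detected face (the value 4 below is an arbitrary illustrative choice):
        annotated_image = image_data.copy()
        with mp_face_mesh.FaceMesh(static_image_mode=True, max_num_faces=4) as face_mesh:
            results = face_mesh.process(cv2.cvtColor(image_data, cv2.COLOR_BGR2RGB))
        if results.multi_face_landmarks:
            for face_landmarks in results.multi_face_landmarks:
                mp_drawing.draw_landmarks(annotated_image, face_landmarks, mp_face_mesh.FACEMESH_TESSELATION)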
Step 6: Adding image enhancement with OpenCV
Finally, we will add image enhancement functionality using OpenCV, such as image smoothing or edge detection.
# main.py
    def enhanceImage(self):
        if self.pixmapItem is None:
            return
        qImg = self.pixmapItem.pixmap().toImage()
        qImg.save('temp.jpg')
        image_data = cv2.imread('temp.jpg')
        # Smooth the image with a 5x5 Gaussian kernel
        blurred_image = cv2.GaussianBlur(image_data, (5, 5), 0)
        height, width, channel = blurred_image.shape
        bytesPerLine = 3 * width
        qImg = QImage(blurred_image.data, width, height, bytesPerLine, QImage.Format_RGB888).rgbSwapped()
        pixmap = QPixmap.fromImage(qImg)
        self.scene.clear()
        self.pixmapItem = self.scene.addPixmap(pixmap)
This code adds a new method enhanceImage that blurs the displayed image using OpenCV's Gaussian blur filter.
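Step 6 also mentions edge detection as a possible enhancement. As a sketch, you could add a second method in the same style using OpenCV's Canny detector (the method name detectEdges and the thresholds 100 and 200 are our own illustrative choices):
    def detectEdges(self):
        if self.pixmapItem is None:
            return
        qImg = self.pixmapItem.pixmap().toImage()
        qImg.save('temp.jpg')
        image_data = cv2.imread('temp.jpg')
        # Canny edge detection works on grayscale images
        gray = cv2.cvtColor(image_data, cv2.COLOR_BGR2GRAY)
        edges = cv2.Canny(gray, 100, 200)
        # Convert the single-channel edge map back to 3 channels for display
        edges_bgr = cv2.cvtColor(edges, cv2.COLOR_GRAY2BGR)
        height, width, channel = edges_bgr.shape
        bytesPerLine = 3 * width
        qImg = QImage(edges_bgr.data, width, height, bytesPerLine, QImage.Format_RGB888).rgbSwapped()
        pixmap = QPixmap.fromImage(qImg)
        self.scene.clear()
        self.pixmapItem = self.scene.addPixmap(pixmap)
If you add this method, remember to wire it to its own button in initUI, as shown in Step 7.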
Step 7: Connecting actions to GUI elements
To make our app interactive, we need to connect the actions to GUI elements like buttons. Add the following code to the initUI method, after the layout is created, to create buttons for each action and connect them to their respective methods.
# main.py (inside initUI)
        loadButton = QPushButton('Load Image', self)
        loadButton.clicked.connect(self.loadImage)

        analyzeButton = QPushButton('Analyze Image', self)
        analyzeButton.clicked.connect(self.analyzeImage)

        extractButton = QPushButton('Extract Features', self)
        extractButton.clicked.connect(self.extractFeatures)

        enhanceButton = QPushButton('Enhance Image', self)
        enhanceButton.clicked.connect(self.enhanceImage)

        layout.addWidget(loadButton)
        layout.addWidget(analyzeButton)
        layout.addWidget(extractButton)
        layout.addWidget(enhanceButton)
Step 8: Running the app
To run the app, open a terminal, navigate to the project directory, and run the main.py file.
python src/main.py
This will launch the GUI window with buttons for loading an image, analyzing it with TensorFlow, extracting features with Mediapipe, and enhancing it with OpenCV.
Step 9: Packaging the app as an executable
To package the app as an executable for distribution, we can use tools like PyInstaller or cx_Freeze. These tools bundle your Python script and its required dependencies into an executable that users can run without installing Python themselves (note that builds are platform-specific, so build on the platform you are targeting).
For PyInstaller, you can install it using pip:
pip install pyinstaller
Once installed, navigate to the project directory in the terminal and run the following command to create an executable:
pyinstaller --onefile src/main.py
This will create a dist directory containing the executable file. You can distribute this file to users without requiring them to have Python or any dependencies installed.
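For a GUI app you will often also want to hide the console window and give the executable a friendlier name; as a sketch, using standard PyInstaller options:
pyinstaller --onefile --windowed --name ImageAnalysisApp src/main.py
Keep in mind that large dependencies such as TensorFlow and Mediapipe can make the bundle big and may need extra PyInstaller configuration (for example, bundling additional data files); consult the PyInstaller documentation if the packaged app fails to start.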
Step 10: Conclusion
Congratulations! You have successfully created an executable image analysis GUI app using Python libraries such as PyQt, TensorFlow, Mediapipe, and OpenCV. You can further enhance the app by adding more features, improving the UI design, or integrating other image analysis algorithms.
We hope this tutorial was helpful in guiding you through the process of creating a powerful and user-friendly image analysis tool. Feel free to explore more functionalities and customization options to tailor the app to your specific needs. Happy coding!