In this tutorial, we will build a Python facial recognition app using OpenCV and Kivy. OpenCV is a popular open-source computer vision library, while Kivy is a Python framework for developing multitouch applications. By combining these two tools, we can create an app that detects faces in real time; TensorFlow, Google's open-source machine learning framework, comes into play later if you want to train your own recognition model.
Before we begin, make sure you have Kivy and OpenCV installed on your system (TensorFlow is optional unless you plan to train your own model). You can install them using pip:
pip install kivy
pip install opencv-python
pip install tensorflow
Once you have everything installed, we can start building our facial recognition app. For face detection we will use a pre-trained Haar cascade classifier that ships with OpenCV. OpenCV is a popular computer vision library that provides pre-trained models for detecting faces, objects, and more.
First, let’s create a new Python file called facial_recognition.py and import the necessary libraries:
import cv2
import numpy as np
from kivy.app import App
from kivy.uix.boxlayout import BoxLayout
from kivy.uix.image import Image
from kivy.clock import Clock
Next, let’s define our FacialRecognition class, which inherits from Kivy's App class:
class FacialRecognition(App):
    def build(self):
        self.layout = BoxLayout()
        self.image = Image()
        self.layout.add_widget(self.image)
        Clock.schedule_interval(self.detect_faces, 1.0 / 30.0)
        return self.layout
In the FacialRecognition class, the build method creates a BoxLayout and an Image widget. We add the Image widget to the layout and use Clock.schedule_interval to call the detect_faces method every 1/30 of a second, i.e. roughly 30 frames per second.
Next, let’s define the detect_faces method, which uses the OpenCV Haar cascade to detect faces in real time:
from kivy.graphics.texture import Texture  # add this alongside the other imports at the top of the file

def detect_faces(self, dt):
    # Open the camera and load the Haar cascade once, then reuse them;
    # re-opening the camera 30 times a second is slow and can hang.
    if not hasattr(self, 'capture'):
        self.capture = cv2.VideoCapture(0)
        self.face_cascade = cv2.CascadeClassifier(
            cv2.data.haarcascades + 'haarcascade_frontalface_default.xml')
    ret, frame = self.capture.read()
    if not ret:
        return
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = self.face_cascade.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5)
    for (x, y, w, h) in faces:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (255, 0, 0), 2)
    # OpenCV frames are top-down but Kivy textures are bottom-up, so flip
    # vertically; tobytes() replaces the deprecated tostring().
    buf = cv2.flip(frame, 0).tobytes()
    texture = Texture.create(size=(frame.shape[1], frame.shape[0]), colorfmt='bgr')
    texture.blit_buffer(buf, colorfmt='bgr', bufferfmt='ubyte')
    self.image.texture = texture
In the detect_faces method, we grab a frame from the default camera, convert it to grayscale, and run the Haar cascade's detectMultiScale over it (scaleFactor=1.3 controls how much the image shrinks between detection passes, and minNeighbors=5 filters out weak detections). We then draw a rectangle around each detected face, flip the frame to match Kivy's bottom-up texture layout, and update the Image widget with the processed frame.
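The vertical flip deserves a closer look: OpenCV stores row 0 at the top of the frame, while Kivy textures are bottom-up, which is why the frame is flipped before blitting. Here is a minimal sketch of what the flip does, using a tiny dummy NumPy array in place of a real camera frame (frame[::-1] is the NumPy equivalent of cv2.flip(frame, 0)):

```python
import numpy as np

# A dummy 2x2 "frame" with 3 BGR channels, standing in for a camera frame.
frame = np.array([[[1, 1, 1], [2, 2, 2]],
                  [[3, 3, 3], [4, 4, 4]]], dtype=np.uint8)

# Flip around the horizontal axis: the bottom row becomes the first row.
flipped = frame[::-1]

print(flipped[0, 0, 0])          # 3: the old bottom-left pixel is now on top
print(len(flipped.tobytes()))    # 12: 2 x 2 pixels x 3 channels, one byte each
```

The byte string produced by tobytes() is exactly what blit_buffer expects: raw pixel data, row by row, with no header.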
Now we can run our facial recognition app by creating an instance of the FacialRecognition class and calling its run method:
if __name__ == '__main__':
    FacialRecognition().run()
That’s it! You have just built a Python face detection app using OpenCV and Kivy. You can enhance it further by training your own facial recognition model with TensorFlow, or by adding more advanced features such as face-based authentication.
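If you do train your own recognition model, verification usually means comparing embedding vectors rather than raw pixels: two images of the same person should produce embeddings that are close together. A hedged sketch of that idea, where the embedding values and the distance function are placeholders rather than outputs of any real trained network:

```python
import numpy as np

def l1_distance(a, b):
    # Sum of absolute differences between two embedding vectors;
    # a siamese network's distance layer computes the same quantity.
    return float(np.sum(np.abs(a - b)))

# Placeholder embeddings a trained model might produce.
anchor   = np.array([0.1, 0.9, 0.3])
positive = np.array([0.1, 0.8, 0.3])  # same person: close to the anchor
negative = np.array([0.9, 0.1, 0.7])  # different person: far away

print(l1_distance(anchor, positive))  # 0.1
print(l1_distance(anchor, negative))  # 2.0
```

A real system would threshold this distance (or feed it to a small classifier) to decide whether two faces match; the threshold itself has to be tuned on validation data.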
I hope you found this tutorial helpful. If you have any questions or run into any issues, feel free to ask in the comments below. Happy coding!
"The kernel for facial recognition.ipynb appears to have died. It will restart automatically." It is happening every time i try to open camera only happening on jupyter lab help me guys
LOVE YOUR VIDEO THIS IS SUCH A GOOD KICKSTART PROJECT TO LEARN COMPUTER VISION
Hey Nicholas Renotte, please make the same models as an Android application.
tensorflow-gpu is not working on my laptop and it shows an empty device list, so what should I do? @Nicholas Renotte
I am confused because it keeps saying dataset and anchor embedding are not defined, and when I check your code on GitHub it says the same. Is this a problem for anyone else?
WARNING:tensorflow:Compiled the loaded model, but the compiled metrics have yet to be built. `model.compile_metrics` will be empty until you train or evaluate the model.
y_hat = siamese_model.predict([test_input, test_val])
y_hat
Hi, when I run this line all my predictions are the same (0.5004773). How can I fix it?
array([[0.5004773],
[0.5004773],
[0.5004773],
[0.5004773],
[0.5004773],
[0.5004773],
[0.5004773],
[0.5004773],
[0.5004773],
[0.5004773],
[0.5004773],
[0.5004773],
[0.5004773],
[0.5004773],
[0.5004773],
[0.5004773]], dtype=float32)
Please make this video for the new version.
Hi, Nicholas. When I run the faceid.py file from the terminal (I am using PyCharm), it gives me this error:
ValueError: Unknown layer: 'L1Dist'. Please ensure you are using a `keras.utils.custom_object_scope` and that this object is included in the scope.
which I fixed with help from the internet, but then I got this error:
TypeError: Exception encountered when calling L1Dist.call().
Expected float32, but got embedding of type 'str'.
Arguments received by L1Dist.call():
• args=('<KerasTensor shape=(None, 4096), dtype=float32, sparse=False, name=keras_tensor_10>',)
• kwargs={'validation_embedding': ["'embedding'", '2', '0']}
Can you help me with this, or can anyone else who might know what is going on?
How can I use this build for multiple people (classes)? Say, training this model on the left and right sides of multiple people's faces and then testing it out? Hope you got my question:
1. not just one person but multiple
2. only on the left and right sides of the face
I'm getting everything right, and the true and predicted labels match after training the model, but when I test it on the webcam, verification shows false results. What could be the reason?
Hi Nicholas, thank you so much for your work; I appreciate your contribution to the community. I have a little problem and would love to know your opinion: in case I need to recognize, say, 3 faces and deny all others, how should I organize the sets? Should the anchors and positives be tripled? Wouldn't it be more convenient to custom-train an object detection model such as YOLO on these three faces?
Once again thank you so much for every video, I always watch them all.
Cheers!
Thank you, Nic, for bringing this; it is very helpful.
Hi sir, can the app be built with Buildozer into an Android mobile app? And what are the required steps for it? Really hope I can receive your reply, thanks!
Hey Nicholas, you did a great job, but I was hoping you could do this at a more basic level; I guess it's too advanced. Could you do it in VS Code and go through every step slowly?
I have just started the tutorial. Is it possible to verify multiple people? Thank you so much.
is it foolproof?
At the 1:43:55 timestamp of this video you said that if I want my model to recognize my friend as well as me, then: <add images of yourself as a positive example, so add your images as an anchor and positive, and the label for that would be 1;
add images of your best friend as an anchor and your best friend as a positive, and that label would be 1>.
Question 1:
I am a little confused about how the model will tell whether the image shown is me or my friend. What I am trying to say is: whether the image shown is me or my friend, in both cases the output will be 1. But if I also want my model to differentiate between me and my friend, how will this work?
Question 2:
I am a little confused about what our data folder should look like if we add more people to train this particular CNN. Do I need to create sub-folders for my images and my friend's images in the anchor and positive directories, or should I just keep the images as they are in those directories? I am asking because when downloading the "Labeled Faces in the Wild" dataset, I noticed that each person's images were in their own folder, with their name as the folder name. Do I need to do something like that?
I will be very grateful if you could answer these questions. Thank you @Nicholas Renotte
You are my hero, sir. Thank you so much… Love from India, that is Bharat.
Hey Nic, actually if you add from_logits=True, then while computing the exponential part it applies some numerical tweaks so that floating-point errors in the calculation are avoided. It also becomes a kind of maximum likelihood estimation.