Using YOLOv8 for Object Detection on the Android Platform

Object detection app using YOLOv8 and Android

Object detection is a popular computer vision technique that involves identifying and classifying objects in images or videos. YOLO (You Only Look Once) is one of the most widely used algorithms for object detection, and YOLOv8 is the latest version of this model.

Developers can now integrate object detection capabilities into their Android applications using YOLOv8. This allows users to point their device’s camera at objects in the real world and have the app recognize and classify them in real-time.

How does it work?

YOLOv8 is a deep neural network that divides an image into a grid and predicts bounding boxes and class probabilities for each grid cell. This allows the model to detect multiple objects in a single pass and is known for its speed and accuracy.
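Concretely, each prediction is a small vector containing a normalized center/width/height box plus scores, which the app must decode into pixel corner coordinates before drawing. A minimal sketch of that decoding step (the function name and the 640×640 input size are illustrative assumptions, not values from this article):

```python
def decode_box(cx, cy, w, h, img_w=640, img_h=640):
    """Convert a normalized center-format box (cx, cy, w, h in 0..1)
    into pixel corner coordinates (left, top, right, bottom)."""
    left = (cx - w / 2) * img_w
    top = (cy - h / 2) * img_h
    right = (cx + w / 2) * img_w
    bottom = (cy + h / 2) * img_h
    return left, top, right, bottom

# A box centered at (0.5, 0.5) covering half of a 640x640 image:
print(decode_box(0.5, 0.5, 0.5, 0.5))  # (160.0, 160.0, 480.0, 480.0)
```

The same corner-format boxes can then be scaled to the on-screen preview size before rendering.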

Android developers can use pre-trained YOLOv8 models and libraries to implement object detection in their apps. They can also fine-tune the model on a specific dataset for better performance on specific objects or scenarios.
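As a sketch of that workflow, the Ultralytics Python package can fine-tune a pre-trained checkpoint and export it to TensorFlow Lite for Android. The dataset config name and epoch count below are illustrative assumptions, not values from this article:

```python
from ultralytics import YOLO

# Start from a small pre-trained checkpoint
model = YOLO("yolov8n.pt")

# Fine-tune on a custom dataset ("custom.yaml" is a hypothetical dataset config)
model.train(data="custom.yaml", epochs=50, imgsz=640)

# Export to TensorFlow Lite for use on Android
model.export(format="tflite")
```

The exported .tflite file can then be bundled in the app's assets and run with the TensorFlow Lite Interpreter.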

Benefits of using YOLOv8 for object detection in Android apps

1. Real-time detection: YOLOv8’s speed and efficiency make it suitable for real-time object detection applications on mobile devices.

2. Accuracy: YOLOv8 has state-of-the-art performance in object detection tasks, making it a reliable choice for developers.

3. Customizable: Developers can fine-tune the model to improve performance on specific objects or scenes, making it versatile for various applications.
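Real-time use on a device also depends on cheap post-processing: raw predictions are filtered by a confidence threshold and de-duplicated with non-maximum suppression (NMS). A minimal pure-Python sketch, with illustrative threshold values:

```python
def iou(a, b):
    """Intersection-over-union of two (left, top, right, bottom) boxes."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def nms(detections, conf_thresh=0.25, iou_thresh=0.45):
    """detections: list of (box, score). Keep high-scoring boxes that
    do not overlap an already-kept box by more than iou_thresh."""
    dets = sorted((d for d in detections if d[1] >= conf_thresh),
                  key=lambda d: d[1], reverse=True)
    kept = []
    for box, score in dets:
        if all(iou(box, k[0]) < iou_thresh for k in kept):
            kept.append((box, score))
    return kept

dets = [((0, 0, 10, 10), 0.9), ((1, 1, 11, 11), 0.8), ((50, 50, 60, 60), 0.7)]
# The second box heavily overlaps the first and is suppressed:
print(nms(dets))  # [((0, 0, 10, 10), 0.9), ((50, 50, 60, 60), 0.7)]
```

On Android the same filtering is usually done in Java/Kotlin right after the interpreter run, but the logic is identical.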

Conclusion

Object detection using YOLOv8 and Android opens up a world of possibilities for developers to create innovative and interactive applications. By harnessing the power of deep learning and computer vision, developers can create intuitive and engaging user experiences that leverage real-time object recognition.

Now is the perfect time for developers to explore the potential of YOLOv8 for object detection in Android apps and create cutting-edge solutions that push the boundaries of what is possible with mobile technology.


28 Comments
@GianmarcoGoycocheaCasas
2 days ago

Dear Madam Aarohi, thank you very much for your contribution. I would like to know how I could add external annotations to the YOLO model, for example 'supervision' annotations that set up an object counter. Does the process you show also run with YOLOv9? I mean the app processing.

@luqmanabidoye7344
2 days ago

The GitHub code does not match the one shown in the video. There is no "step 2 …" line in the code.

@shantilalzanwar8687
2 days ago

Getting an error when trying to run the .ipynb file.

@fathaariyaprasetya9950
2 days ago

thankyouuuu🙏🙏🙏🙏🙏🙏

@shubhamb7272
2 days ago

Hi,
Can we train the model on Macs with M-series chips?

@cortex-technologies
2 days ago

Where did you provide the dataset?

@onewhoflutters4866
2 days ago

Thanks a lot for the video. I have a question, @CodeWithAarohi: why did you use Kotlin instead of Java, as in the YOLOv5 Android app?

@airlangpark6596
2 days ago

Could you please create a YOLOv8 Android app that uses ONNX?

@yahiryablonsky1716
2 days ago

Hi, first of all thank you for this amazing content. I want to ask you a question. I am trying to create a model based on YOLOv8 to detect the color of a traffic light. The issue is that when I use the trained model with the camera in video mode, the model takes a while but does detect the traffic light; however, when I try to detect from a single frame, the detection is null. I do not understand why sending multiple frames gives a better detection. Could you help me with this?

Thank you

@UniversalConnexions
2 days ago

I'm trying to implement the model with my custom Java code but can't seem to scale the boxes:

    private List<ObjectDetectionResult> processOutput(float[] outputData) {
        allResults = new ArrayList<>();
        int numDetections = outputData.length / (NUM_VALUES_PER_DETECTION + labels.size());
        for (int i = 0; i < numDetections; i++) {
            if (outputData[i * 8 + 4] > NMS_THRESHOLD) {
                int detectionOffset = i * (NUM_VALUES_PER_DETECTION + labels.size());
                float confidence = outputData[detectionOffset];
                if (confidence < CONFIDENCE_THRESHOLD) continue;
                float centerX = outputData[detectionOffset] * 640;
                float centerY = outputData[detectionOffset + 1] * 640;
                float width = outputData[detectionOffset + 2] * 640;
                float height = outputData[detectionOffset + 3] * 640;
                float scaleFactorX = surfaceView.getWidth() * 1f / 640;
                float scaleFactorY = surfaceView.getHeight() * 1f / 640;
                // Compute positions relative to the resized image
                float left = centerX - width / 2;
                float top = centerY - height / 2;
                float right = centerX + width / 2;
                float bottom = centerY + height / 2;
                RectF boundingBox = new RectF(left, top, right, bottom);

OR

                // Convert the model coordinates to input-image pixels (640×640)
                float centerX = outputData[detectionOffset] * 640;
                float centerY = outputData[detectionOffset + 1] * 640;
                float width = outputData[detectionOffset + 2] * 640;
                float height = outputData[detectionOffset + 3] * 640;
                // Compute positions relative to the resized image
                float left = centerX - width / 2;
                float top = centerY - height / 2;
                float right = centerX + width / 2;
                float bottom = centerY + height / 2;
                // Scale the coordinates according to the surfaceView dimensions
                left *= scaleFactorX;
                top *= scaleFactorY;
                right *= scaleFactorX;
                bottom *= scaleFactorY;
                RectF boundingBox = new RectF(left, top, right, bottom);

doesn't work

@emmanuelakpaklikwasi4300
2 days ago

You are very good at what you do… trust me! After deploying it on Android, how can I share the application, or how could someone else get it?

@New_Supra_One_
2 days ago

Miss, where is the confidence score? 🙃 I need that

@-IvanTandella
2 days ago

I am unable to run the app ("This app has stopped"). Does anyone know why this happens? I use an Android 7 device.

@wellhell8163
2 days ago

Hello, any suggestions on how to optimise the app further so that the latency drops and the detection becomes faster? And maybe how to include the int8 format?
Any links/suggestions/example projects on how to make the detection faster?
Thanks in advance for any answer.

@pinocchio200
2 days ago

Hello! I'm here again. Not related, but is it possible to build an application or to train models with gesture recognition? I already made a static sign language translator, but I want it to be dynamic, so it can identify dynamic sign language. Is that possible? If yes, then how?

@pinocchio200
2 days ago

When I try to train the model in VS Code using the GPU, why does my laptop hang?

@pinocchio200
2 days ago

You're such an amazing professional. My new idol for now! Anyway, do you have a non-custom dataset, one that is already prepared for TFLite?

@hasancansolakoglu8224
2 days ago

Does the int8 quantized TFLite file work with this app?

@handokosupeno5425
2 days ago

Thank you very much

@andremedeiros5732
2 days ago

What is your Python version? I tried 3.9 and 3.11, and an error occurred when exporting to TFLite.
