Real-time object detection Android application using OpenCV 4.1 and YOLOv3.
Author: Matteo Medioli
YOLO: https://pjreddie.com/darknet/yolo/
After importing the OpenCV module:
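A minimal sketch of how the OpenCV native libraries are typically loaded at runtime once the module has been imported; the field names used here (loaderCallback, cameraView) are illustrative assumptions, not necessarily the names used in this repo:

```java
import org.opencv.android.BaseLoaderCallback;
import org.opencv.android.LoaderCallbackInterface;
import org.opencv.android.OpenCVLoader;

// Inside the activity (names below are assumptions):
private final BaseLoaderCallback loaderCallback = new BaseLoaderCallback(this) {
    @Override
    public void onManagerConnected(int status) {
        if (status == LoaderCallbackInterface.SUCCESS) {
            cameraView.enableView();   // native libs ready: start the camera preview
        } else {
            super.onManagerConnected(status);
        }
    }
};

@Override
protected void onResume() {
    super.onResume();
    if (OpenCVLoader.initDebug()) {
        // OpenCV bundled with the APK: signal success directly.
        loaderCallback.onManagerConnected(LoaderCallbackInterface.SUCCESS);
    } else {
        // Fall back to the OpenCV Manager service.
        OpenCVLoader.initAsync(OpenCVLoader.OPENCV_VERSION, this, loaderCallback);
    }
}
```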
This activity is the core of the application and implements org.opencv.android.CameraBridgeViewBase.CvCameraViewListener2. It has two main private instance variables: a net (org.opencv.dnn.Net) and a cameraView (org.opencv.android.CameraBridgeViewBase). It has three main features:
Loads the convolutional network from the *.cfg and *.weights files and reads the label names (COCO dataset) from the assets folder when onCameraViewStarted() is called, using Dnn.readNetFromDarknet(String path_cfg, String path_weights).
NOTE: this repo doesn't contain the weights file. You have to download it from the YOLO site.
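A hedged sketch of what this loading step can look like. The asset file names and the getAssetPath() helper (which would copy an asset to a readable filesystem path, since Dnn.readNetFromDarknet expects file paths rather than raw assets) are assumptions for illustration:

```java
import org.opencv.dnn.Dnn;
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.util.ArrayList;

// Sketch of onCameraViewStarted(); file names and getAssetPath() are hypothetical.
@Override
public void onCameraViewStarted(int width, int height) {
    String cfgPath = getAssetPath("yolov3.cfg");          // hypothetical helper
    String weightsPath = getAssetPath("yolov3.weights");  // weights downloaded separately
    net = Dnn.readNetFromDarknet(cfgPath, weightsPath);

    // Read the COCO class labels, one per line.
    classNames = new ArrayList<>();
    try (BufferedReader reader = new BufferedReader(
            new InputStreamReader(getAssets().open("coco.names")))) {
        String line;
        while ((line = reader.readLine()) != null) {
            classNames.add(line.trim());
        }
    } catch (java.io.IOException e) {
        e.printStackTrace();
    }
}
```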
Iteratively grabs a frame from the CameraBridgeViewBase preview and analyzes it as an image. Real-time detection and the frame flow are managed by onCameraFrame(CvCameraViewFrame inputFrame). Each preview frame is converted into a Mat matrix and preprocessed with Dnn.blobFromImage(frame, scaleFactor, frame_size, mean, true, false). Note that frame_size is 416x416 for the YOLO model (the input dimensions are listed in the *.cfg file); it can be changed in steps of 32, and reducing the frame size increases performance but worsens accuracy. The detection phase is implemented by net.forward(List<Mat> results, List<String> outNames), which runs a forward pass and computes the output of the layers named in outNames. The method writes all detections on the preview frame into results as Mat objects; these Mat instances contain all the information, such as positions and labels, of the detected objects.
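A sketch of this per-frame flow under the parameters mentioned above; the 1/255 scale factor and zero mean are typical YOLO preprocessing values and are assumptions here, while the OpenCV calls are the ones described in this section:

```java
import org.opencv.core.Mat;
import org.opencv.core.Scalar;
import org.opencv.core.Size;
import org.opencv.dnn.Dnn;
import java.util.ArrayList;
import java.util.List;

@Override
public Mat onCameraFrame(CameraBridgeViewBase.CvCameraViewFrame inputFrame) {
    Mat frame = inputFrame.rgba();

    // Preprocess: 1/255 scale factor, 416x416 input, no mean subtraction,
    // swap R and B channels, no crop.
    Mat blob = Dnn.blobFromImage(frame, 1.0 / 255.0, new Size(416, 416),
            new Scalar(0, 0, 0), true, false);
    net.setInput(blob);

    // Forward pass through the YOLO output layers.
    List<Mat> results = new ArrayList<>();
    List<String> outNames = net.getUnconnectedOutLayersNames();
    net.forward(results, outNames);

    // results now holds one Mat per output layer; each row is a candidate
    // detection (4 box coordinates followed by scores) to be post-processed.
    return frame;
}
```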
After performing Non-Maximum Suppression on the YOLO output, the List results holds the coordinates of the optimal bounding boxes (in each detection the first 4 numbers are [center_x, center_y, width, height], followed by the class probabilities). classId is the index of the detected label in the COCO dataset list className.
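A sketch of this post-processing step, continuing from the results and frame of the previous snippet. The 0.5 confidence threshold and 0.4 NMS threshold are common defaults assumed here; class scores are read starting at column 5, after the 4 box values and the objectness score:

```java
import org.opencv.core.Core;
import org.opencv.core.MatOfFloat;
import org.opencv.core.MatOfInt;
import org.opencv.core.MatOfRect2d;
import org.opencv.core.Rect2d;

List<Rect2d> boxes = new ArrayList<>();
List<Float> confidences = new ArrayList<>();
List<Integer> classIds = new ArrayList<>();

for (Mat level : results) {
    for (int i = 0; i < level.rows(); i++) {
        Mat row = level.row(i);
        Mat scores = row.colRange(5, level.cols());
        Core.MinMaxLocResult mm = Core.minMaxLoc(scores);
        float confidence = (float) mm.maxVal;
        if (confidence > 0.5) {
            // Box values are relative to the frame size.
            double centerX = row.get(0, 0)[0] * frame.cols();
            double centerY = row.get(0, 1)[0] * frame.rows();
            double width   = row.get(0, 2)[0] * frame.cols();
            double height  = row.get(0, 3)[0] * frame.rows();
            boxes.add(new Rect2d(centerX - width / 2, centerY - height / 2, width, height));
            confidences.add(confidence);
            classIds.add((int) mm.maxLoc.x);   // index into the COCO label list
        }
    }
}

// Non-Maximum Suppression keeps only the best box among overlapping candidates.
MatOfRect2d boxesMat = new MatOfRect2d(boxes.toArray(new Rect2d[0]));
float[] confArray = new float[confidences.size()];
for (int i = 0; i < confArray.length; i++) confArray[i] = confidences.get(i);
MatOfInt indices = new MatOfInt();
Dnn.NMSBoxes(boxesMat, new MatOfFloat(confArray), 0.5f, 0.4f, indices);
// indices now lists which boxes to draw, each labelled via classNames.get(classIds.get(idx)).
```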