
AI Social Distancing Detector: Object Detection in Video Frames

7 Dec 2020 · CPOL · 2 min read
In this article, we continue learning how to use AI to build a social distancing detector.
After learning how to employ TensorFlow and a pre-trained MobileNet model, we move forward to combine our detector with a web camera. By the end of this article, you will know how to run object detection on video sequences, as shown below.

You can find the companion code here.

Camera Capture

I started by implementing the Camera class, which captures frames from the webcam (see camera.py in Part_04). To do so, I use OpenCV, specifically its VideoCapture class. I get a reference to the default webcam and store it in the camera_capture field:

Python
def __init__(self):
    # Initialize the camera capture (device 0 is the default webcam)
    # OpenCV is imported in camera.py as: import cv2 as opencv
    try:
        self.camera_capture = opencv.VideoCapture(0)
    except Exception as e:
        print(e)

To capture a frame, use the read method of the VideoCapture class instance. It returns two values:

  • status – A Boolean indicating whether the frame was captured successfully.
  • frame – The actual frame acquired from the camera.

It is good practice to check the status before using the frame. Additionally, on some devices, the first frame might appear blank. The capture_frame method from the Camera class compensates by ignoring the first frame, depending on the input parameter:

Python
def capture_frame(self, ignore_first_frame):
    # Get a frame; ignore the first one if requested
    if ignore_first_frame:
        self.camera_capture.read()

    (capture_status, current_camera_frame) = self.camera_capture.read()

    # Verify the capture status
    if capture_status:
        return current_camera_frame
    else:
        # Print error to the console
        print('Capture error')
        return None

The general flow of using the Camera class is to call the initializer once and then invoke capture_frame as needed.
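For example, a minimal usage sketch could look like the following (the Camera class comes from camera.py in Part_04; the surrounding script and variable names are hypothetical):

Python
from camera import Camera

# Create the camera once
camera_capture = Camera()

# Ignore the first (possibly blank) frame on the initial capture
first_frame = camera_capture.capture_frame(True)

# Subsequent captures can use the frames directly
next_frame = camera_capture.capture_frame(False)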

Referencing Previously Developed Modules

To proceed, we will reuse the previously developed Inference class and ImageHelper. We reference those modules in main.py. Their source code is included in the Part_03 folder and was explained in the previous article.

To reference the modules, I supplemented main.py with the following statements (I assume that the main.py file is executed from the Part_04 folder):

Python
import sys
sys.path.insert(1, '../Part_03/')
 
from inference import Inference as model
from image_helper import ImageHelper as imgHelper
 
Now we can easily access the object detector and perform inference (object detection), even though the source files are in a different folder:

Python
# Load and prepare model
model_file_path = '../Models/01_model.tflite'
labels_file_path = '../Models/02_labels.txt'
 
# Initialize model
ai_model = model(model_file_path, labels_file_path)
 
# Perform object detection
score_threshold = 0.5
results = ai_model.detect_objects(camera_frame, score_threshold)

Putting Things Together

We just need to capture the frame from the camera and pass it to the AI module. Here is the complete example (see main.py):

Python
import sys
sys.path.insert(1, '../Part_03/')
 
from inference import Inference as model
from image_helper import ImageHelper as imgHelper
 
from camera import Camera as camera
 
if __name__ == "__main__": 
    # Load and prepare model
    model_file_path = '../Models/01_model.tflite'
    labels_file_path = '../Models/02_labels.txt'
 
    # Initialize model
    ai_model = model(model_file_path, labels_file_path)
 
    # Initialize camera
    camera_capture = camera()
 
    # Capture frame and perform inference
    camera_frame = camera_capture.capture_frame(False)
        
    score_threshold = 0.5
    results = ai_model.detect_objects(camera_frame, score_threshold)
 
    # Display results
    imgHelper.display_image_with_detected_objects(camera_frame, results)

After running the above code, you will get the result shown in the introduction.

Wrapping Up

We developed a Python console application that performs object detection on a frame captured from a webcam. Although this was single-frame inference, you can extend the sample by capturing frames and running inference in a loop, continuously displaying the video stream, and invoking inference on demand (for example, by pressing a key), as sketched below. In the next article, we will perform object detection on frames from the test datasets, including a video sequence stored in a video file.
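As a starting point for that extension, here is a rough sketch (not part of the companion code) of such a loop. It reuses the Camera, Inference, and ImageHelper classes from this series and relies on OpenCV's imshow/waitKey for the preview and key handling; the 'd'/'q' key bindings and window title are arbitrary choices:

Python
import sys
sys.path.insert(1, '../Part_03/')

import cv2 as opencv

from inference import Inference as model
from image_helper import ImageHelper as imgHelper
from camera import Camera as camera

if __name__ == "__main__":
    # Load and prepare model
    ai_model = model('../Models/01_model.tflite', '../Models/02_labels.txt')

    # Initialize camera
    camera_capture = camera()
    score_threshold = 0.5

    while True:
        # Continuously preview the camera stream
        camera_frame = camera_capture.capture_frame(False)
        if camera_frame is None:
            continue

        opencv.imshow('Camera preview', camera_frame)

        key = opencv.waitKey(1) & 0xFF
        if key == ord('d'):
            # Run inference on demand and show the detections
            results = ai_model.detect_objects(camera_frame, score_threshold)
            imgHelper.display_image_with_detected_objects(camera_frame, results)
        elif key == ord('q'):
            # Quit the preview loop
            break

    opencv.destroyAllWindows()

Depending on how display_image_with_detected_objects renders the image, you may prefer to draw the detections onto camera_frame and show them in the same preview window so that the stream is not interrupted.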

This article is part of the series 'AI Social Distancing Detector'.

License

This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)


Written By
United States
Dawid Borycki is a software engineer and biomedical researcher with extensive experience in Microsoft technologies. He has completed a broad range of challenging projects involving the development of software for device prototypes (mostly medical equipment), embedded device interfacing, and desktop and mobile programming. Borycki is an author of two Microsoft Press books: “Programming for Mixed Reality (2018)” and “Programming for the Internet of Things (2017).”
