
AI Social Distancing Detector: Calculating the Center Point of Detected People

In this article, we'll calculate the center of each detected bounding box, which will serve as a base for calculating distance.
So far in the series, we have learned how to detect people and find bounding boxes that indicate their locations. In general, you could estimate distances using the closest vertices of the bounding boxes, but to keep things simple, I use the centers of the bounding boxes, as shown in the figure below. I then calculate the distance between them using the Euclidean distance formula in a plane.

Figure 1: Detected people with the centers of their bounding boxes marked on a video frame
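As a preview, here is a minimal sketch of that Euclidean distance calculation between two center points. The function name below is purely illustrative; the actual distance measurement is covered in the next article:

Python
import math

def euclidean_distance(center_a, center_b):
    # Straight-line distance between two (x, y) points in the image plane
    return math.sqrt((center_a[0] - center_b[0]) ** 2
                     + (center_a[1] - center_b[1]) ** 2)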

Calculating the Center of a Rectangle

Remember that our application returns the list of detected objects. Each element in that list provides the label, rectangle (bounding box), and recognition score. Here, we use the rectangle. Two points represent it: the top-left corner and the bottom-right corner, each given as an (x, y) coordinate pair in the image plane.

To calculate the rectangle's center, we compute its width and height, halve them, and add the results to the top-left corner's coordinates. I implemented this functionality within get_rectangle_center of the DistanceAnalyzer class (see the distance_analyzer.py file in the Part_06 folder):

Python
@staticmethod
def get_rectangle_center(rectangle):
    # Get the top-left and bottom-right corners of the rectangle
    top_left_corner = rectangle[0]
    bottom_right_corner = rectangle[1]

    # Calculate width and height of the rectangle
    width = bottom_right_corner[0] - top_left_corner[0]
    height = bottom_right_corner[1] - top_left_corner[1]

    # Calculate and return the center
    center = (int(width/2 + top_left_corner[0]), int(height/2 + top_left_corner[1]))

    return center
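As a quick sanity check of the arithmetic, here is a hypothetical example (the coordinates are made up for illustration):

Python
# A box from (100, 200) to (300, 400) has width 200 and height 200,
# so its center should be (200, 300)
rectangle = ((100, 200), (300, 400))
print(DistanceAnalyzer.get_rectangle_center(rectangle))  # (200, 300)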

As explained above, the function retrieves the top-left and bottom-right corners and then performs the calculations. Given the get_rectangle_center function, I added another one, get_rectangle_centers, which iterates over the list of detection results:

Python
@staticmethod
def get_rectangle_centers(detection_results):
    # Prepare the list
    rectangle_centers = []

    # Iterate over detection results, and determine center of each rectangle
    for detection_result in detection_results:
        rectangle = detection_result['rectangle']

        center = DistanceAnalyzer.get_rectangle_center(rectangle)

        rectangle_centers.append(center)

    # Return rectangle centers
    return rectangle_centers
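For reference, get_rectangle_centers only relies on each detection result exposing its bounding box under the 'rectangle' key; the other keys shown below (label and score) are illustrative assumptions about the dictionary layout. A minimal, hypothetical call might look like this:

Python
# Hypothetical detection results; only the 'rectangle' key is used here
detection_results = [
    {'label': 'person', 'score': 0.91, 'rectangle': ((50, 80), (150, 280))},
    {'label': 'person', 'score': 0.84, 'rectangle': ((300, 100), (420, 340))}
]

centers = DistanceAnalyzer.get_rectangle_centers(detection_results)
print(centers)  # [(100, 180), (360, 220)]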

To ensure that centers are calculated correctly, I will use OpenCV to draw those locations on the video sequence frame.

Displaying the Centers of the Bounding Boxes

Given the list of rectangle centers, I can draw them on the image using OpenCV's circle function. It works much like the rectangle function: it accepts the input image, the circle's center and radius, the color, and the thickness.

Here is an example of using the function to draw a yellow circle with a radius of 15 pixels (the constants are defined in common.py from Part_03). I set the thickness to -1 to fill the circle:

Python
@staticmethod
def draw_rectangle_centers(image, rectangle_centers):
    # Draw a filled circle at the center of each bounding box
    for center in rectangle_centers:
        opencv.circle(image,
            center,
            common.CIRCLE_RADIUS,
            common.YELLOW,
            common.THICKNESS_FILL)

The above function is implemented as a static method of the ImageHelper class in the image_helper module (see Part_03/image_helper.py).
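The drawing constants come from common.py in Part_03. Their exact values are not reproduced in this article, but based on the description above (a filled yellow circle with a 15-pixel radius), they presumably look something like this; note that OpenCV expects colors in BGR order:

Python
# Presumed values of the drawing constants in Part_03/common.py
# (inferred from the description above; check the source file for the actual values)
CIRCLE_RADIUS = 15         # circle radius in pixels
YELLOW = (0, 255, 255)     # yellow in OpenCV's BGR color order
THICKNESS_FILL = -1        # a negative thickness fills the circle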

Putting Things Together

We are now ready to put everything together. We implement the main.py file as follows:

Python
import sys
sys.path.insert(1, '../Part_03/')
sys.path.insert(1, '../Part_05/')

from inference import Inference as model
from image_helper import ImageHelper as imgHelper
from video_reader import VideoReader as videoReader
from distance_analyzer import DistanceAnalyzer as analyzer

if __name__ == "__main__": 
    # Model and labels file paths
    model_file_path = '../Models/01_model.tflite'
    labels_file_path = '../Models/02_labels.txt'

    # Initialize model
    ai_model = model(model_file_path, labels_file_path)    

    # Initialize video reader
    video_file_path = '../Videos/01.mp4'
    video_reader = videoReader(video_file_path)

    # Detection and preview parameters
    score_threshold = 0.4    
    delay_between_frames = 5

    # Perform object detection in the video sequence
    while True:
        # Get frame from the video file
        frame = video_reader.read_next_frame()

        # If frame is None, then break the loop
        if frame is None:
            break
        
        # Perform detection        
        results = ai_model.detect_people(frame, score_threshold)
        
        # Get centers of the bounding boxes (rectangle centers)
        rectangle_centers = analyzer.get_rectangle_centers(results)

        # Draw centers before displaying results
        imgHelper.draw_rectangle_centers(frame, rectangle_centers)   

        # Display detection results
        imgHelper.display_image_with_detected_objects(frame, results, delay_between_frames)

After configuring the import paths for the modules developed earlier, we initialize the AI model and perform inference to detect people. The resulting detections are then passed to the static get_rectangle_centers method of the DistanceAnalyzer class. Given the list of centers, we draw them on the video frame (draw_rectangle_centers) along with the bounding boxes and labels (display_image_with_detected_objects). After running main.py, you will get the results shown in the figure above.

Wrapping Up

In this article, we learned how to calculate the center locations of the people detected in a video sequence. In the next article, we will use those centers to estimate distances between people and indicate people that are too close.

This article is part of the series 'AI Social Distancing Detector'.

License

This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL).


Written By
United States
Dawid Borycki is a software engineer and biomedical researcher with extensive experience in Microsoft technologies. He has completed a broad range of challenging projects involving the development of software for device prototypes (mostly medical equipment), embedded device interfacing, and desktop and mobile programming. Borycki is an author of two Microsoft Press books: “Programming for Mixed Reality (2018)” and “Programming for the Internet of Things (2017).”
