
Completing the AI Traffic Speed Detector

In this article, we discuss improvements we can make to the software in terms of performance and accuracy. We also compare our homebrew open-source system to commercial Automatic Speed Enforcement (ASE) systems based on LIDAR or RADAR, and to semi-automated or manual speed detection systems like VASCAR.

Introduction

Traffic speed detection is big business. Municipalities around the world use it to deter speeders and generate revenue via speeding tickets. But conventional speed detectors, typically based on RADAR or LIDAR, are very expensive.

This article series shows you how to build a reasonably accurate traffic speed detector using nothing but Deep Learning, and run it on an edge device like a Raspberry Pi.

You are welcome to download the code for this series from the TrafficCV Git repository. We assume that you are proficient in Python and have a basic knowledge of AI and neural networks.

So far in this series, we’ve seen how to implement a vehicle "speed trap" in Python using object detection with various computer vision and Deep Learning models, coupled with a procedure for calibrating the physical constants that convert pixel distances and time intervals into speeds. This gives us an automatic way to estimate the speed of a vehicle as it passes through our camera’s field of view (FOV). Multiple vehicles can be tracked, and their speeds estimated, using our Raspberry Pi 4 + ArduCam camera hardware, along with an optional Coral USB AI Accelerator.
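As a refresher, the core calculation boils down to distance over time. Here is a minimal sketch of that conversion, where `ppm` (pixels per meter) stands for the calibration constant discussed earlier in the series; the function name and signature are illustrative rather than TrafficCV’s actual code:

```python
def estimate_speed(pixel_displacement: float, ppm: float,
                   frame_count: int, fps: float) -> float:
    """Estimate vehicle speed in km/h from tracked pixel movement.

    pixel_displacement: distance the bounding-box centroid moved, in pixels
    ppm:                pixels per meter, found during calibration
    frame_count:        number of frames over which the movement was measured
    fps:                frame rate of the video source
    """
    meters = pixel_displacement / ppm    # convert pixels to meters
    seconds = frame_count / fps          # convert frames to seconds
    return (meters / seconds) * 3.6      # m/s -> km/h


# Example: a car's centroid moves 60 px over 10 frames at 30 FPS,
# with a calibration constant of 8.8 px per meter.
print(f"{estimate_speed(60, 8.8, 10, 30):.1f} km/h")  # ~73.6 km/h
```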

In this article, we’ll talk about improving the program we’d created. We’ll also compare our system to other Automatic Speed Enforcement (ASE) systems based on LIDAR or RADAR, and semi-automated or manual speed detection systems like VASCAR.

Multi-threading

Throughout our development of the TrafficCV Python program, we’ve been limited to video processing and inferencing on a single thread. This means that reading video data from the Pi’s SD card, running object detection and object tracking on video frames, and displaying the results all happen in the same thread on a single Pi core while the other three cores remain idle. We saw earlier how this negatively affected performance when streaming video data over a network.

We can improve the detector’s performance by moving video input and output to separate threads connected by FIFO queues. A data input thread reads video data from storage, decodes it, and writes video frames to a queue. The detector loop thread reads frames from this queue, instead of directly from storage, and overlays the bounding boxes and speed estimates onto the output frames. The detector thread then places these output frames into another queue, where a data output thread picks them up and displays them, sends them across the network, or writes them to storage. In this way, three of the Pi’s cores can be utilized, with the fourth core reserved for additional tasks (like running vehicle license plate recognition).
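Here is a minimal sketch of this three-stage pipeline using the standard library’s `queue` and `threading` modules and OpenCV. The file name `traffic.mp4` and the `run_detector` stub are placeholders, not TrafficCV’s actual API:

```python
import queue
import threading

import cv2

frames_in = queue.Queue(maxsize=64)    # decoded frames awaiting detection
frames_out = queue.Queue(maxsize=64)   # annotated frames awaiting output

def run_detector(frame):
    """Placeholder for the actual detection/tracking/overlay step."""
    return frame

def reader(path):
    """Read and decode video frames, then hand them to the detector."""
    cap = cv2.VideoCapture(path)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        frames_in.put(frame)           # blocks if the detector falls behind
    cap.release()
    frames_in.put(None)                # sentinel: no more frames

def detector():
    """Overlay bounding boxes and speed estimates on each frame."""
    while True:
        frame = frames_in.get()
        if frame is None:
            frames_out.put(None)       # pass the sentinel downstream
            break
        frames_out.put(run_detector(frame))

def writer():
    """Display (or stream, or store) the annotated frames."""
    while True:
        frame = frames_out.get()
        if frame is None:
            break
        cv2.imshow("TrafficCV", frame)
        cv2.waitKey(1)

threads = [threading.Thread(target=reader, args=("traffic.mp4",)),
           threading.Thread(target=detector),
           threading.Thread(target=writer)]
for t in threads:
    t.start()
for t in threads:
    t.join()
cv2.destroyAllWindows()
```

Note that CPython’s global interpreter lock prevents pure-Python code from running in parallel, but OpenCV and the inference runtimes release the lock inside their native calls, so the three stages can genuinely overlap across cores.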

Our ArduCam camera can run at 90 FPS at a lower 640x480 resolution. We can use this higher-FPS video source to improve our accuracy when measuring time intervals, assuming our multi-threaded Python software is fast enough to keep up.
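With OpenCV’s V4L2 backend, requesting this mode might look like the sketch below. Whether the driver honors these settings depends on the camera and its firmware, so checking the values actually in effect is essential:

```python
import cv2

cap = cv2.VideoCapture(0, cv2.CAP_V4L2)   # the Pi camera as a V4L2 device
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 640)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 480)
cap.set(cv2.CAP_PROP_FPS, 90)

# The driver may silently fall back to another mode, so always verify.
print("Actual FPS:", cap.get(cv2.CAP_PROP_FPS))
```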

Correlation Tracker vs Model Inference

One key assumption we’ve made is that our dlib correlation tracker is computationally cheaper than model inference. This is the entire reason we update the object positions with the tracker every frame while running object detection inference only every 10 frames. But this assumption might be incorrect! Indeed, when using an AI accelerator like the Coral USB device, we saw the CPU pegged at 100% and the FPS limited to 13 regardless of whether inference ran on the CPU or not. The dlib tracker, which runs on the CPU every frame, is a major bottleneck that limits the performance of the entire system.

We need to add an option to our detector that disables object tracking and instead runs object detection inference on every frame. This may yield much higher FPS, especially in a multi-threaded program.
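One way such an option could look: a hypothetical `use_tracker` flag that collapses the "detect every N frames, track in between" loop into detection on every frame. The flag name and the `detect`/`update_trackers` helpers are illustrative, not TrafficCV’s actual API:

```python
DETECT_EVERY = 10   # frames between full inference runs when tracking

def process(frame, frame_idx: int, use_tracker: bool):
    """Return bounding boxes for this frame.

    With use_tracker=True, run the expensive detector only every
    DETECT_EVERY frames and let the dlib correlation tracker fill in
    the rest. With use_tracker=False, run inference on every frame.
    """
    if not use_tracker or frame_idx % DETECT_EVERY == 0:
        return detect(frame)            # hypothetical model inference
    return update_trackers(frame)       # hypothetical dlib tracker update
```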

A Robic stopwatch commonly used in manual VASCAR systems

How Does Our Speed Detector Compare to the "Norm"?

Many municipalities see ASE and other speed enforcement systems as a significant source of revenue, issuing citations and fines to speeders while reducing the number of accidents caused by excessive speed. The cheapest manual VASCAR system requires only a quality stopwatch, which can cost as little as $60, but it requires the operator to perform every task manually, including measuring distances accurately and toggling the stopwatch at exactly the right moments. VASCAR systems can only track one vehicle at a time. Moreover, speeding tickets issued based on VASCAR are the most prone to error and the most frequently challenged in court.

The limitations of VASCAR systems lead many municipalities to turn to RADAR or LIDAR speed guns. Law enforcement-grade speed guns start at around $1,000, but they can only track one vehicle at a time and still require a human operator. Fully automated standalone ASE systems start at several thousand dollars for the hardware alone, with higher-end systems able to track multiple vehicles. Deploying traditional LIDAR and RADAR ASE systems across a municipality can cost from hundreds of thousands to millions of dollars once training and personnel costs are considered. In addition, many jurisdictions outlaw RADAR- and LIDAR-based vehicle speed tracking.

In our Pi-based system, the hardware cost is less than $200. Our system uses commodity hardware and runs a standard version of Debian "Buster" Linux. Therefore, we can expect the costs of deploying and servicing, as well as training personnel to administer such a system, to be far less than the costs of the specialized hardware and software used in traditional ASE systems. TrafficCV is GPL-licensed free software that can be easily extended to tasks like transmitting videos of the monitored cars in real time, or emailing vehicle data and video to a central server and automating much of the process of issuing citations. Our system can track multiple vehicles at a time while retaining the capability to perform additional computer vision tasks like license plate recognition using the Coral USB AI accelerator.

Our speed detection system is fully automatic and does not require the intervention of a human operator at any time. Provided we carefully calibrate the constants used to calculate vehicle speed from the camera’s video, this can significantly reduce mistakes and claims of bias. It also does not rely on RADAR or LIDAR, which removes the legal barriers those technologies face in some jurisdictions.

Potential errors in a human-operated vehicle speed detection system

Next Steps?

Actually, none: we are done. In this series of articles, we’ve seen how Deep Learning on edge devices can be orders of magnitude cheaper to implement and maintain than existing traffic speed detection systems, while offering many advantages in terms of capabilities and accuracy. Happy speed trapping!

This article is part of the series "AI on the Edge: Traffic Speed Detection".

License

This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL).


Written By
Software Developer
Trinidad and Tobago
I've been programming computers as a hobby and professionally for more than 20 years. I like both Windows and Linux. My current areas of interest are computer security, machine learning, conversational user interfaces, and .NET HPC.
