
Choosing Hardware for AI Traffic Speed Detection

In this article, we select the hardware components for our AI/Pi-based solution (the Raspberry Pi 4 single-board computer, the ArduCam 5MP camera, and the Coral USB AI accelerator) and assemble them into a functional system.

Introduction

Traffic speed detection is big business. Municipalities around the world use it to deter speeders and generate revenue via speeding tickets. But conventional speed detectors, typically based on RADAR or LIDAR, are very expensive.

This article series shows you how to build a reasonably accurate traffic speed detector using nothing but Deep Learning, and run it on an edge device like a Raspberry Pi.

You are welcome to download the code for this series from the TrafficCV Git repository. We assume that you are familiar with Python and have a basic knowledge of AI and neural networks.

In the previous article, we discussed why a traffic speed detector built around commodity hardware would be a useful alternative to high-cost proprietary RADAR and LIDAR speed cameras. In this article, we’ll examine the hardware components to be used in our edge computing project and assemble these components into a system ready for software installation.


Base Hardware Platform

First, let’s have a look at the base hardware platform – the Raspberry Pi 4 Single Board Computer.

The latest iteration of the Raspberry Pi SBC packs a modern 64-bit ARMv8 computer into a $45 credit-card-sized board with up to 8GB of DDR4 RAM. It has all the I/O ports needed to develop computer vision and Machine Learning applications, and can run a more-or-less standard install of Debian 10 "Buster" Linux. This means field personnel can use their existing Linux/Unix skills to administer and secure our traffic speed detection system, with no need for specialized knowledge or equipment.

The Pi 4's small size and low power requirements mean we can be flexible about where we position our detector, and can potentially rely on alternative power sources like a rechargeable battery pack or solar power. The Raspberry Pi's sheer popularity also means it is compatible with a myriad of inexpensive accessories that can be used in any edge AI project. Another advantage of the Pi 4 for computer vision projects is that it still comes with a 4-pole 3.5mm TRRS AV output, making it easy to reuse older TVs and monitors from existing CCTV systems.

Although the Pi 4 board with 4GB of RAM costs only $45, we still need a case, a power supply, and other accessories. The LABISTS Raspberry Pi 4 kit retails for about $100 and includes everything needed for Pi-powered development: heat sinks, a case fan, and a quality USB-C power supply, all of which matter for CPU-intensive applications like computer vision.


We can easily assemble the LABISTS kit using the instructions provided. Screw the fan to the upper case and attach the heatsinks to the chips using the provided adhesive tabs. The labelled front of the fan should point down towards the Pi board.


The airflow runs from the unlabelled back of the fan towards the front, so air is pushed down onto the board and heatsinks. One point to note: attach the camera cable to the camera port on the Pi board before inserting the board into the bottom half of the case. The Pi's Camera Serial Interface (CSI) port sits in a small gap between the HDMI and TRRS ports, and there isn't enough room to work the cable in once the board is in the case.

Camera

The camera we'll use for our Pi-based project is the ArduCam Lens Board, which comes with an adjustable M12 lens module and retails for about $22 on Amazon. Although this camera has a lower still-image resolution (5MP, 2592×1944) than other Pi cameras, it offers two key features that make it a near-ideal choice for us: an adjustable-focus lens (versus the fixed focus of the other cameras), and the option to buy additional M12 lenses for it. Since we'll be detecting entire vehicles rather than examining fine details, we need the freedom to place the camera at different positions and distances from the roadway, and then refocus the lens for the sharpest recognition image.

Getting the camera installed is by far the most difficult part of our hardware setup. The camera cable is short, the screws and bolts for the provided 3D-printed mount are tiny, the mount starts peeling off layers if you fumble with it too much, and you have to grip the lens holder ring precisely both to unscrew the camera cap and to focus the lens.

Be gentle when turning the lens focus, as it's easy to damage the threading with too much force. The CSI interface the Pi uses for its camera is much faster than USB, so using a specialized Pi camera is definitely worth it.
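A convenient way to check and adjust the focus after mounting is to watch a live preview at roughly the resolution you'll use for detection. The snippet below is a minimal sketch, assuming the camera is enabled in raspi-config and exposed through V4L2 as /dev/video0, and that OpenCV for Python is installed; the script name and preview resolution are our own choices, not part of the TrafficCV code.

# focus_check.py - live preview from the ArduCam so you can adjust the M12 lens.
# Assumes the camera is enabled and exposed via V4L2 as /dev/video0,
# and that OpenCV for Python (opencv-python) is installed.
import cv2

cap = cv2.VideoCapture(0)                # open the first camera device
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 1280)  # request a 720p preview stream
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 720)

while True:
    ok, frame = cap.read()
    if not ok:
        break                            # camera not delivering frames
    cv2.imshow("Focus check (press q to quit)", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()

Turn the lens in small increments until objects at the distance where vehicles will appear look sharp in the preview.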

Image: Pi 4 with ArduCam and Coral USB attachment

Hardware AI Accelerator

The last component of our system is the hardware AI accelerator. Most edge devices are not powerful enough to run inference with neural network architectures like CNNs and RNNs, which place heavy computational demands on general-purpose CPUs.

Recently, a new class of hardware devices has emerged: AI accelerators, designed specifically to speed up inference with deep-learning models and to let edge devices run latency-sensitive Machine Learning and computer vision applications without having to communicate with a centralized cloud AI service. These accelerators either plug into host computers as co-processors or run as standalone Systems on Module (SOMs), and are built on microarchitectures such as GPUs, FPGAs, and ASICs.

The Coral USB accelerator uses Google's Edge TPU (Tensor Processing Unit) as a co-processor that plugs into a host computer via a USB 3.0 interface. The Edge TPU is an ASIC (Application-Specific Integrated Circuit) designed by Google specifically to accelerate inference with neural network models built in TensorFlow and converted to its TensorFlow Lite (TFLite) format.

We can compare the Coral accelerator with two other AI accelerators: Intel's Neural Compute Stick 2 (NCS2) and NVIDIA's Jetson Nano. The NCS2 is also a USB co-processor; it contains Intel's Movidius Myriad X VPU (Vision Processing Unit), designed to accelerate computer vision and Deep Learning inference on edge devices. Intel offers the OpenVINO toolkit for optimizing and deploying Machine Learning models on its various hardware platforms; OpenVINO supports Caffe, TensorFlow, MXNet, and other popular Machine Learning frameworks. The NCS2 likewise plugs into a USB 3.0 port and can be used on Windows, Linux, and ARM platforms.

The Jetson Nano is a standalone GPU-based AI accelerator combining a quad-core ARM Cortex-A57 CPU with an NVIDIA Maxwell-class GPU that has 128 CUDA cores. Because the Jetson series is GPU-based, it can accelerate a wide range of Deep Learning model types and computing workloads. NVIDIA's benchmarks show the Nano is quite a bit faster than a stock Raspberry Pi 4 when running models like SSD MobileNet-v1. However, when attached to a similar ARM Cortex-A-class embedded CPU as its host, the Coral accelerator is much faster than the Jetson Nano.

Benchmarks comparing the performance of all three accelerators on MobileNet models show that the Coral accelerator outperforms both competitors when running the TFLite versions of those models. Keep in mind, however, that the Coral accelerator only supports acceleration of TFLite models. This specialization has both advantages and disadvantages compared with more general-purpose processors.
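To give a sense of how such latency numbers are typically gathered, here is a minimal benchmarking sketch using the tflite_runtime Python package; 'model.tflite' is a placeholder file name, and the warm-up run and run count are our own choices, not a standard benchmark protocol.

# bench_tflite.py - rough inference latency measurement for a TFLite model.
# 'model.tflite' is a placeholder; to time the Edge TPU path, use a model
# compiled for the Edge TPU and add the delegate shown later in this article.
import time
import numpy as np
from tflite_runtime.interpreter import Interpreter

interpreter = Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]

# Dummy input matching the model's expected shape and dtype
dummy = np.zeros(inp["shape"], dtype=inp["dtype"])
interpreter.set_tensor(inp["index"], dummy)
interpreter.invoke()  # warm-up run

runs = 50
start = time.perf_counter()
for _ in range(runs):
    interpreter.set_tensor(inp["index"], dummy)
    interpreter.invoke()
elapsed = time.perf_counter() - start
print("mean latency: {:.1f} ms".format(1000 * elapsed / runs))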

Coral runs inference on TFLite models much faster than more general-purpose AI accelerators like the NVIDIA Jetson or Intel Movidius, while costing the same as or less than those accelerators. However, Jetson and Movidius are far more versatile in terms of the kinds of Deep Learning models you can use them for, via NVIDIA's JetPack SDK and Intel's OpenVINO toolkit.

For this project, we'll use the Coral accelerator, as TFLite versions of the computer vision models we want to use are available. For other projects, the accelerators from NVIDIA and Intel are viable options.

Installing the Coral accelerator is pretty simple: install the Edge TPU runtime, then plug the Coral into one of the USB 3.0 ports on the Pi. A quick way to confirm the device is usable is shown below.
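As a sanity check once the runtime is installed, you can try to load the Edge TPU delegate from Python. This is a minimal sketch, assuming the tflite_runtime package is installed; 'model_edgetpu.tflite' is a placeholder for any model compiled for the Edge TPU.

# edgetpu_check.py - verify the Coral USB accelerator is reachable from Python.
# Assumes the Edge TPU runtime (libedgetpu) and tflite_runtime are installed;
# 'model_edgetpu.tflite' is a placeholder for an Edge TPU-compiled model.
from tflite_runtime.interpreter import Interpreter, load_delegate

# load_delegate raises ValueError if the runtime or the device is missing
delegate = load_delegate("libedgetpu.so.1")

interpreter = Interpreter(model_path="model_edgetpu.tflite",
                          experimental_delegates=[delegate])
interpreter.allocate_tensors()
print("Edge TPU delegate loaded; input shape:",
      interpreter.get_input_details()[0]["shape"])

If the delegate loads and the tensors allocate without errors, the accelerator is ready to use.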

Next Step

In the next article, we'll go through installing the operating system on the Pi, securing it, and configuring it for remote access over WiFi. Stay tuned!

This article is part of the series 'AI on the Edge: Traffic Speed Detection'.

License

This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL).

