
Eliminating Pests with a Raspberry Pi

24 Dec 2020 · CPOL · 5 min read
In this article, we’ll test our detection algorithm on a Raspberry Pi 3 device and create the "scare pests away" part of our pest eliminator by playing a loud sound.
Here we modify the Python code to run the motion detector (MD) and the DNN classifier on a Pi device. Then we demonstrate moose detection on a video file and show how to trigger a connected speaker to scare the pest away by playing a loud sound. Finally, we give a short summary of the series and suggest further improvements.

Introduction

Unruly wildlife can be a pain for businesses and homeowners alike. Animals like deer, moose, and even cats can cause damage to gardens, crops, and property.

In this article series, we’ll demonstrate how to detect pests (such as a moose) in real time (or near-real time) on a Raspberry Pi and then take action to get rid of the pest. Since we don’t want to cause any harm, we’ll focus on scaring the pest away by playing a loud noise.

You are welcome to download the source code of the project. We are assuming that you are familiar with Python and have a basic understanding of how neural networks work.

In the previous article, we developed a motion detector and combined it with the trained DNN classifier. In this article, we’ll modify our Python code to perform pest detection on a Raspberry Pi device, as well as enable the resulting solution to play a loud sound to scare the pests away.

Configuring Raspberry Pi

First, we need to configure our edge device. We’ll use a Raspberry Pi 3 B+ with a 16 GB memory card, running Raspberry Pi OS (32-bit). The Python OpenCV library can be installed with pip.

To play the sound, we’ll use the Pygame library. This package comes preinstalled with most operating system images used on Raspberry Pi devices. We’ll play the sound through the 3.5 mm headphone jack, so keep the default audio output configuration on your Pi device.
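
Before writing any detection code, you can verify the setup with a quick check like the one below. This is only a sketch: it confirms that OpenCV imports and that the Pygame mixer can open the default audio output (the 3.5 mm jack).

Python
import cv2
from pygame import mixer

print("OpenCV version:", cv2.__version__)

# Open the default audio device (the 3.5 mm jack with the default Pi configuration)
mixer.init()
print("Audio mixer settings:", mixer.get_init())  # (frequency, format, channels)
mixer.quit()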

Note that this is intended as a proof of concept. If you were building a commercial pest detector, you’d likely want to use something like an NVIDIA Jetson or Google Coral board. Both vendors offer affordable prototyping boards as well as production-ready hardware, so once your product is ready, it’s easy to move into mass production.

Modifying Code

To play sound and test the detection algorithm on a Pi device, we need to write some additional code.

First, we’ll create a sound player using the Pygame mixer module:

Python
from pygame import mixer

class SoundPlayer:
    def __init__(self, sound_file):
        # 44.1 kHz, signed 16-bit samples, stereo, 2048-sample buffer
        mixer.init(44100, -16, 2, 2048)
        self.sound = mixer.Sound(sound_file)

    def play(self):
        self.sound.play()
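
To sanity-check the player on its own, you can run a few lines like these (the buzzer.wav path is just a placeholder; note that Sound.play() returns immediately, so the script has to stay alive while the sound plays):

Python
import time

player = SoundPlayer("buzzer.wav")  # placeholder path to any short .wav file
player.play()
time.sleep(2)  # keep the script running long enough for the sound to finish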

Next, we need a class for measuring our application’s frame processing speed:

Python
import time

class FPS:
    def __init__(self):
        self.frame_count = 0
        self.elapsed_time = 0

    def start(self):
        # Mark the start of one measured operation (one frame)
        self.start_time = time.time()

    def stop(self):
        # Mark the end of the operation and accumulate its duration
        self.stop_time = time.time()
        self.frame_count += 1
        self.elapsed_time += (self.stop_time - self.start_time)

    def count(self):
        return self.frame_count

    def elapsed(self):
        return self.elapsed_time

    def fps(self):
        # Average number of measured frames per second
        if self.elapsed_time == 0:
            return 0
        return self.frame_count / self.elapsed_time
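
As a quick illustration of how the class is meant to be used, here is a standalone sketch that times a dummy workload instead of the real detection step:

Python
fps = FPS()
for _ in range(100):
    fps.start()
    sum(i * i for i in range(10000))  # stand-in for the per-frame detection work
    fps.stop()

print("Frames measured:", fps.count())
print("Average FPS:", fps.fps())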

Now we can modify our VideoPD class developed in the previous article to play a sound and evaluate the detection speed:

Python
import time

import cv2

# MD, PestDetector, and Utils are the classes developed in the previous articles
class VideoPDSound:
    def __init__(self, md, pd, thresh, sp):
        self.md = md          # motion detector
        self.pd = pd          # DNN pest classifier
        self.thresh = thresh  # confidence threshold for the classifier
        self.sp = sp          # sound player
        self.fps = FPS()

    def play(self, file_path):
        capture = cv2.VideoCapture(file_path)

        md_name = 'Motion objects'
        cv2.namedWindow(md_name, cv2.WINDOW_NORMAL)
        cv2.resizeWindow(md_name, 640, 480)

        counter = 0
        play_dt = 10.0  # minimum interval (seconds) between sound alarms
        curr_time = time.time()
        play_time = curr_time - play_dt - 0.1
        detect_count = 0

        while True:
            (ret, frame) = capture.read()
            if frame is None:
                break

            counter = counter + 1
            # Uncomment to process only every third frame:
            #if (counter % 3) != 0:
            #    continue

            # Measure only the detection part (motion detection + classification)
            self.fps.start()
            self.md.process(frame)
            objects = self.md.objects()

            l = len(objects)
            pests = []
            if l > 0:
                for (i, obj) in enumerate(objects):
                    (roi, (class_num, class_conf)) = self.pd.detect(frame, obj)
                    if (class_num > 0) and (class_conf >= self.thresh):
                        pests.append(roi)

            self.fps.stop()

            if l > 0:
                Utils.draw_objects(objects, "OBJECT", (255, 0, 0), frame)

            k = len(pests)
            if k > 0:
                detect_count = detect_count + 1
                Utils.draw_objects(pests, "PEST", (0, 0, 255), frame)
                # Play the alarm at most once every play_dt seconds
                curr_time = time.time()
                dt = curr_time - play_time
                if dt > play_dt:
                    self.sp.play()
                    play_time = curr_time

            # Display the resulting frame with object rects
            cv2.imshow(md_name, frame)

            #time.sleep(0.01)

            if cv2.waitKey(1) & 0xFF == ord('q'):
                break

        capture.release()
        cv2.destroyAllWindows()

        f = self.fps.fps()
        return (detect_count, f)

This code monitors the performance of the detection process by calling the self.fps.start and self.fps.stop methods. Note that we evaluate only the execution time of the detection algorithm (motion detection and classification). The modified code also plays a sound by calling self.sp.play when a pest is detected in a frame. In real-life conditions, a moose is likely to be detected in several consecutive frames, and we don’t want to play the sound every time a moose appears in the camera view.

The goal is to emit the scary sound once, and then wait for a new pest to appear. The algorithm plays the sound not more often than once every 10 seconds. So if we’ve got a pack of malicious moose meandering through our marigolds, we’ll frighten them all away — one at a time, if necessary.

We’re also covered if a single moose decides to stick around after we’ve tried to scare it away. If our sound isn’t frightening enough for this particular moose, it will (hopefully) leave out of annoyance, since we keep playing the same sound every 10 seconds for as long as the moose stays in the camera’s view.
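
Stripped of the video handling, the cooldown amounts to the following sketch (maybe_play is a hypothetical helper, not part of the classes above; the interval is the same 10 seconds):

Python
import time

PLAY_DT = 10.0                           # minimum seconds between alarms
play_time = time.time() - PLAY_DT - 0.1  # allow an immediate first alarm

def maybe_play(player):
    # Play the alarm only if at least PLAY_DT seconds passed since the last one
    global play_time
    now = time.time()
    if now - play_time > PLAY_DT:
        player.play()
        play_time = now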

We also need to choose a sound appropriate for scaring pests. Let’s take one of the many free buzzer sounds available on the Internet, for example, buzzer.wav.

Running the Pest Eliminator

We are now ready to run our pest elimination algorithm on the Raspberry Pi device:

Python
video_file = r"/home/pi/Desktop/PI_PEST/video/moose_1.mp4"

# Motion detector and pest classifier from the previous articles
md = MD(0.05, 0.1)
proto = r"/home/pi/Desktop/PI_PEST/net/moose.prototxt"
model = r"/home/pi/Desktop/PI_PEST/net/moose.caffemodel"
pd = PestDetector(proto, model, 128)

# Sound player loaded with the buzzer file downloaded earlier
sound_file = r"/home/pi/Desktop/PI_PEST/video/buzzer.wav"
scarer = SoundPlayer(sound_file)

# Run detection with a 0.99 confidence threshold
v_pd = VideoPDSound(md, pd, 0.99, scarer)
(detect_count, fps) = v_pd.play(video_file)
print("FPS = %s" % fps)

Here is the resulting captured video:

When the video processing is finished, we show the average processing speed in the console. For this video file, it comes out to 11 to 12 FPS, an order of magnitude faster than the algorithm based on the pre-trained SSD model (remember, the speed there was about 1.25 FPS). Our custom algorithm runs at near-real-time speed without heavy optimization or parallel processing.

Next Steps

In this series of articles, we demonstrated how to use AI algorithms on a Raspberry Pi device to eliminate pests in a real-life outdoor setting. We started by developing code that makes a pre-trained SSD model detect common pests, such as cows, sheep, or even cats and dogs.

Then we assembled a dataset of image samples for an "unusual" pest: the moose. We applied several data augmentation methods to enhance the dataset. We then developed a simple, small DNN classifier model and trained it on the augmented data, reaching a rather high accuracy of 97%.

Finally, we designed a basic motion detection algorithm and combined it with the DNN classifier. The complete algorithm was tested on a video file and showed a surprisingly good performance of 11 to 12 FPS.

We hope that you found this series useful and that it gave you some pointers for developing interesting AI applications for edge devices. The main goal of the series was to show conceptual solutions to the detection problem.

If you’re looking to deploy a commercial pest elimination tool, your next step is to optimize the software stack for performance. We encourage you to build on this solution and make it work even better. Good luck!

This article is part of the series 'Real-time AI Pest Elimination on Edge Devices'.

License

This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)


Written By
Team Leader, VIPAKS
Russian Federation

EDUCATION:
Master’s degree in Mechanics. PhD degree in Mathematics and Physics.

PROFESSIONAL EXPERIENCE:
15 years’ experience in developing scientific programs (C#, C++, Delphi, Java, Fortran).

SCIENTIFIC INTERESTS:
Mathematical modeling, symbolic computer algebra, numerical methods, 3D geometry modeling, artificial intelligence, differential equations, boundary value problems.
