In 1933, a chicken keeper and amateur photographer decided to find the culprit who was stealing his eggs. Security cameras have come a long way since then and are everywhere nowadays, yet most of the so-called "smart" ones work by streaming video back to a monitor or a server so that someone, or some software, can analyze the frames and hopefully extract useful information from them. They consume a large amount of network bandwidth and power streaming video, even though ten image frames may be all we need to know who was stealing the eggs. They also face a dilemma when the network is unstable: images cannot be analyzed, and the "smart" becomes "dumb".
Edge computing is a network model in which data processing occurs at the edge of the network, where the camera is located, eliminating the need to send video to a central server for processing. Processing image data on the edge reduces system latency, the power consumed by video transmission, and the cost of bandwidth. It also improves privacy, since less information is transmitted and exposed to interception. A simple concept, so why isn't it popular yet? A simple answer: the hardware and software were not ready. Image processing is notorious for craving processing power and advanced algorithms to extract useful information. Recent advances in deep learning algorithms, followed by the emergence of affordably priced inference hardware, open the door to more advanced edge computing right on the camera.
How about making such a cool camera yourself? The final goal of this tutorial is to show you how to build a security camera that processes footage locally with an advanced object detection algorithm and filters the important images out of hours of video frames, all in real time. To build it, you need the following essential tools and hardware.
- Optional, to make the camera run on battery.
- Optional, to build an additional turret to turn the camera and cover an even wider field of view.
- Optional materials to build housing for the project: cardboard, a solid wood board, etc.
In this post, you will learn how to install the necessary software on the Raspberry Pi for Movidius NCS and do object detection on webcam frames in real-time.
The basic installation and configuration steps for the Raspberry Pi are listed in the Movidius documentation. Note that the Raspberry Pi 3 must run the latest Raspbian Stretch for NCSDK2, and an SD card of 16 GB or larger is recommended.
After installing NCSDK2, clone the NCAPI v2 branch of ncappzoo, which contains examples for the Movidius Neural Compute Stick.
git clone -b ncsdk2 https://github.com/movidius/ncappzoo.git
Then change into the apps directory and clone the demo repository there.
cd ncappzoo/apps
git clone https://github.com/Tony607/video_objects
It is necessary to install OpenCV 3 to run this demo. The official approach is to build it from source, which I have tried several times without luck. That led to my alternative solution: installing a pre-built OpenCV 3 library on the Raspberry Pi in a matter of minutes, compared to hours of building from source. Just run the following four lines in a terminal.
sudo pip3 install opencv-python==3.3.0.10
sudo apt-get update
sudo apt-get install libqtgui4
sudo apt-get install python-opencv
You just installed the pre-built OpenCV 3 Python library along with its dependencies. To check the installation, start python3 and import cv2. Expect to see the OpenCV 3 version string "3.3.0".
pi@raspberrypi:~/workspace/ncappzoo/apps/video_objects $ python3
Python 3.5.3 (default, Jan 19 2017, 14:11:04)
[GCC 6.3.0 20170124] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import cv2
>>> cv2.__version__
'3.3.0'
>>>
So, no more complaining that "my code's compiling".
With your NCS and a webcam plugged into your Raspberry Pi's USB ports, connect an HDMI monitor or use VNC to access the Pi's desktop. Let's fire up the demo. In a terminal, cd into the video_objects directory and run:
make run_cam
The first time you run this, the make script downloads the SSD MobileNet deep learning model definition file and weights, so expect some delay. After that, a new window pops up with the real-time camera feed plus an overlay of bounding boxes for detected objects. The software saves a frame to an image file only when the number of people recognized changes between two consecutive frames.
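The keyframe rule is simple enough to sketch in a few lines of Python. This is a minimal illustration of the idea, not the demo's actual code; `should_save_frame` and the list of per-frame person counts are hypothetical stand-ins for the detector's output:

```python
def should_save_frame(prev_person_count, curr_person_count):
    """Save a frame only when the number of detected people changes."""
    return curr_person_count != prev_person_count

# Simulated per-frame person counts coming out of the object detector.
counts = [0, 0, 1, 1, 2, 0]

saved_frames = []
prev = counts[0]
for frame_idx, curr in enumerate(counts[1:], start=1):
    if should_save_frame(prev, curr):
        saved_frames.append(frame_idx)  # in the real app, write the image to disk here
    prev = curr

print(saved_frames)  # prints [2, 4, 5]
```

This is what lets the camera boil hours of footage down to a handful of keyframes: frames where nothing changes are simply dropped.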
As the first post of the security camera project, this shows what it takes to build a bare-minimum "smarter" camera that detects people in real time and saves only the useful keyframes. Installing NCSDK2 on the Raspberry Pi, as illustrated in the previous section, is the toughest and most time-consuming part of this project; I hope my tips help expedite your endeavor. In the next post, I will explain the code you just ran and show you how to add an Arduino turret that turns the camera to follow people around, as shown in the video I uploaded.
One final tip: if you plan to run the NCS with NCSDK2 on an Ubuntu 16.04 PC, use a physical machine instead of a virtual machine. I ran into an issue where the NCS device is not detected inside a VM, with no solution yet.