Face recognition has countless applications in many different fields, from security to marketing, but it usually requires expensive hardware or proprietary software. In this session we describe an open software platform based on Raspberry Pi and OpenCV that covers a subset of this functionality: a face counter. It is useful when access control to closed spaces, such as rooms with limited capacity, is needed. The solution combines cameras, Raspberry Pis, OpenCV, MQTT, embedded Java, and Java SE to cover business needs, privacy constraints, scale-out needs, and much more.
The problem is that people have limited time, attention, and accuracy; they are not very good at capturing data about things in the real world. If we had computers that knew everything there was to know about things, gathering data without any help from us, we would be able to track and count everything and greatly reduce waste, loss, and cost. We would know when things needed replacing, repairing, and so on.
The Internet of Things (IoT) is a system of interrelated computing devices, mechanical and digital machines, and other objects that are provided with unique identifiers and the ability to transfer data over a network without requiring human interaction.
Broadband Internet is becoming more widely available, the cost of connecting is decreasing, more devices are being created with Wi-Fi capabilities and built-in sensors, technology costs are going down, and smartphone penetration is skyrocketing. All of these things are creating a “perfect storm” for the IoT.
Practical applications of IoT technology can be found in many industries today, including precision agriculture, building management, healthcare, energy and transportation.
OpenCV (Open Source Computer Vision) is a C/C++ library released under the BSD (Berkeley Software Distribution) license. It can be used on Windows, Linux, and macOS, and even on mobile platforms such as Android and iOS.
The library is optimized to use hardware acceleration (GPU) and multi-core processing; recently, it has even been optimized for ARM platforms.
Java bindings are provided within the current distribution: a JAR file that has to be included in the application classpath, and a DLL (dynamic-link library) on Windows (or SO, shared object, on Linux) whose path has to be set in the “java.library.path” system property.
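As a sketch of that setup, assuming the OpenCV 3 JAR and native library were installed under /opt/opencv (the paths and the version number in the JAR name are examples, not from the talk), compiling and launching an application looks like this:

```shell
# Compile against the OpenCV Java bindings (paths and version are illustrative)
javac -cp /opt/opencv/opencv-300.jar FaceDetector.java

# Run with the JAR on the classpath and java.library.path pointing
# at the directory containing the native library (.so or .dll)
java -cp /opt/opencv/opencv-300.jar:. \
     -Djava.library.path=/opt/opencv/lib \
     FaceDetector
```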
OpenCV has two versions: version 2, whose last update was in May, and version 3, which was updated in December of last year. The benefits of version 3 over version 2 are:
- Improved and extended Java, Python, and MATLAB bindings
- Cleaner APIs
- Improved Android support
A cascade classifier is an XML file that defines the object or shape to detect in a frame.
The OpenCV distribution and opencv_contrib (a module outside the official distribution) contain cascade classifiers to detect human and cat faces, eyes, noses, ears, and more.
It is also possible to generate cascade classifiers by training. OpenCV provides tools both for the training itself and for generating the set of samples, or dataset, used as its input.
There are two main types: Local Binary Patterns (LBP) and Haar. The main differences between them are computation time and accuracy. LBP is better suited for embedded or mobile devices.
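The training tools mentioned above can be sketched like this (the file names and parameter values are illustrative, not from the talk):

```shell
# Pack the positive samples listed in an annotation file into a .vec dataset
opencv_createsamples -info positives.txt -vec samples.vec -num 1000 -w 24 -h 24

# Train an LBP cascade classifier from positive and negative samples
opencv_traincascade -data out_dir -vec samples.vec -bg negatives.txt \
    -numPos 900 -numNeg 450 -numStages 15 -featureType LBP -w 24 -h 24
```

The resulting cascade.xml in out_dir is what you load into the detector; switching -featureType to HAAR trades training and detection speed for the Haar features mentioned above.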
This table shows a simple test result: 25 seconds of work with both cascade classifiers on a Raspberry Pi. The “False positives” column shows the number of wrong detections, the “Lost” column shows how many objects escaped detection, and the “Frames” column shows the total number of frames captured from the video stream during the test. As you can see, in this particular test, LBP was faster than and as accurate as Haar.
In this slide you can see the main packages and classes of the Java API.
- Core: contains the core functionality, data structures, and operations. Mat is the core data structure; it is a matrix containing a frame or picture.
- Videoio: video input/output. The main class here is VideoCapture, which captures video from files, image sequences, or cameras.
- Objdetect: as the name says, it is for object detection. The main class is CascadeClassifier.
- Imgproc: operations on images, such as dilating an image, converting from one color space to another, or calculating the integral of an image.
- Imgcodecs: has just one class with the same name. It is used to read or write images from/to files.
This diagram shows how to use the OpenCV Java API to detect an object from a camera installed in the device.
- First of all, we have to create a video capture object and an empty matrix, which is the data structure that stores each frame read from the video stream.
- The second step is, in an infinite loop, to read from the video stream and store the frame in the matrix.
- The third step is to convert the frame to gray scale and equalize its histogram, that is, to normalize the frame.
- The fourth step is the main one: we apply the cascade classifier to the matrix in order to detect the desired object in the frame. The result is an array of rectangles demarcating the object in the frame.
- Finally, as an optional step, for example for debugging purposes, you can use the Imgproc class to draw these rectangles on the frame and Imgcodecs to save the frame to a file.
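The steps above can be sketched with the OpenCV 3 Java API roughly as follows; the cascade file name and the output image path are assumptions, and running it requires the OpenCV JAR and native library described earlier:

```java
import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.core.MatOfRect;
import org.opencv.core.Rect;
import org.opencv.core.Scalar;
import org.opencv.imgcodecs.Imgcodecs;
import org.opencv.imgproc.Imgproc;
import org.opencv.objdetect.CascadeClassifier;
import org.opencv.videoio.VideoCapture;

public class FaceDetectorSketch {

    public static void main(String[] args) {
        // Load the native library; needs java.library.path set correctly
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);

        // Step 1: video capture object and empty matrices for each frame
        VideoCapture capture = new VideoCapture(0);
        Mat frame = new Mat();
        Mat gray = new Mat();
        CascadeClassifier classifier =
                new CascadeClassifier("lbpcascade_frontalface.xml");

        // Step 2: loop reading frames from the video stream
        while (capture.read(frame)) {
            // Step 3: convert to gray scale and equalize the histogram
            Imgproc.cvtColor(frame, gray, Imgproc.COLOR_BGR2GRAY);
            Imgproc.equalizeHist(gray, gray);

            // Step 4: apply the cascade classifier; the result is a set of rectangles
            MatOfRect faces = new MatOfRect();
            classifier.detectMultiScale(gray, faces);
            System.out.println("Faces in frame: " + faces.toArray().length);

            // Step 5 (optional): draw the rectangles and save the frame for debugging
            for (Rect r : faces.toArray()) {
                Imgproc.rectangle(frame, r.tl(), r.br(), new Scalar(0, 255, 0), 2);
            }
            Imgcodecs.imwrite("debug-frame.jpg", frame);
        }
        capture.release();
    }
}
```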
MQTT is a lightweight messaging protocol for small sensors and mobile devices. Similar to XMPP (which is used by WhatsApp), this protocol is used, for example, by Facebook chat. It was built as a low-overhead protocol with strong consideration for bandwidth and CPU limitations:
- Consumes little bandwidth
- Low power and energy usage
- Because of the previous two points, MQTT is perfect for IoT
- Works using a publish/subscribe mechanism; listeners subscribe to a topic and senders publish to that topic
- Can go over WebSockets; by default it goes over TCP
- You can find MQTT client libraries under the Paho project and a broker server under the Mosquitto project, both part of the Eclipse IoT project.
A very useful development tool when you use MQTT is a Chrome plugin called MQTTLens.
Quality of service is an input parameter that publishers have to set when they send a message; it defines how the broker will handle the message. Its value is between zero and two.
Value 0 is also called “fire and forget” because the broker will send the message to receivers once and won’t notify the sender whether the message was actually delivered.
With value 1, the broker guarantees that the message is delivered at least once (maybe more than once), so your application has to handle duplicated messages.
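Since QoS 1 can redeliver, a subscriber needs some guard against duplicates. A minimal sketch in plain Java, assuming each payload carries a unique message id (the id scheme is an assumption of this example, not part of MQTT itself):

```java
import java.util.HashSet;
import java.util.Set;

// Guard for QoS 1 subscribers: drop payloads whose message id was already processed.
public class DuplicateFilter {
    private final Set<String> seenIds = new HashSet<>();

    // Returns true the first time an id is seen, false for redeliveries.
    public boolean accept(String messageId) {
        return seenIds.add(messageId);
    }

    public static void main(String[] args) {
        DuplicateFilter filter = new DuplicateFilter();
        System.out.println(filter.accept("msg-1")); // true: first delivery
        System.out.println(filter.accept("msg-1")); // false: redelivery, dropped
    }
}
```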
When that is not acceptable, there is quality value 2. In this case, the broker guarantees that the message is delivered exactly once.
This is our face detector device: one Raspberry Pi, model 3 or model B+, and a video camera for the Raspberry Pi, model V1 or V2 (both are fine). All of this for sixty dollars, more or less.
This slide shows the whole face detector solution that we are going to show you in a live demo in a couple of minutes.
First of all, the Raspberry Pis with cameras send the number of faces detected in each frame to the broker, using the MQTT protocol with quality of service value 2.
The broker server is iot.eclipse.org, available on the Internet and ready to use for testing purposes.
Installed on this laptop is a web application that monitors the face detectors’ activity, connected to the same broker server.
And finally, we will connect to this web application using a web browser with WebSocket support (or any other push technology, such as server-sent events), and we will tell the web application to subscribe to the same topic where the Raspberry Pis are publishing.
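The publishing side of this setup could look roughly like this with the Eclipse Paho Java client; the topic name and payload format are assumptions of this sketch, and running it requires the Paho library and network access to the broker:

```java
import org.eclipse.paho.client.mqttv3.MqttClient;
import org.eclipse.paho.client.mqttv3.MqttException;
import org.eclipse.paho.client.mqttv3.MqttMessage;

public class FaceCountPublisher {

    public static void main(String[] args) throws MqttException {
        // Connect to the public Eclipse test broker mentioned above
        MqttClient client = new MqttClient(
                "tcp://iot.eclipse.org:1883", MqttClient.generateClientId());
        client.connect();

        // Publish the number of faces detected in a frame with QoS 2 (exactly once)
        int facesDetected = 3; // in the real device this comes from the detection loop
        MqttMessage message = new MqttMessage(
                Integer.toString(facesDetected).getBytes());
        message.setQos(2);
        client.publish("facedetector/counts", message);

        client.disconnect();
    }
}
```

The monitor web application subscribes to the same topic ("facedetector/counts" here) to receive every count exactly once.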
This is the previous slide in more detail.
On the left side are the Raspberry Pis, containing in the lowest layer the Raspbian operating system. Above that, the Raspicam driver, UV4L.
Then the OpenCV libraries (in this case, version 3).
Then the Java runtime, version 8.
And running on top of it, the face detector Java application, which uses the OpenCV and MQTT Paho libraries as dependencies.
On the other hand, the monitor web application is a Spring Boot application using Vaadin and Vaadin Charts, running on the Java 8 runtime.
Preparing a Raspberry Pi for object detection is a little complex and can take a lot of time.
The starting point is a Raspbian operating system already installed.
So the first thing to do is to install all the required packages.
Next, set the JAVA_HOME and ANT_HOME environment variables, because the OpenCV compilation process will look for them.
Then download the OpenCV source code from GitHub and unzip it.
The next step is to create a makefile using the CMake command-line tool with the Java flags enabled.
Once the makefile has been created, we can run the “make” command to start the OpenCV compilation, and then install it.
Next, install the Raspicam drivers, enable the camera interface via the raspi-config menu, update the firmware, and reboot the Raspberry Pi.
Finally, we have to transfer the faces-detector and MQTT Paho client JAR files using an FTP client tool such as FileZilla, and customize the faces-detector properties.
And that’s it! We are ready to run. At this point it is very important not to forget to set the java.library.path system property to the location of the OpenCV dynamic-link library.
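The preparation steps above could be sketched as a shell session like this; the package names, version numbers, paths, and URL are illustrative, not an exact recipe from the talk:

```shell
# Install build tools and the dependencies the OpenCV build needs
sudo apt-get update
sudo apt-get install build-essential cmake ant libgtk2.0-dev pkg-config

# The OpenCV build looks for these when generating the Java bindings
export JAVA_HOME=/usr/lib/jvm/jdk-8-oracle-arm32-vfp-hflt
export ANT_HOME=/usr/share/ant

# Download and unpack the OpenCV 3 sources from GitHub
wget https://github.com/opencv/opencv/archive/3.1.0.zip
unzip 3.1.0.zip && cd opencv-3.1.0

# Generate the makefile with CMake, then compile and install
mkdir build && cd build
cmake -D BUILD_SHARED_LIBS=OFF ..
make -j4
sudo make install

# Enable the camera interface, update the firmware, and reboot
sudo raspi-config        # Interfacing Options -> Camera -> Enable
sudo rpi-update
sudo reboot
```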
When you are going to develop a program for Raspberry Pis, you have three options. The first is to develop on the Raspberry Pi itself. The current models have enough power to run a lightweight integrated development environment such as BlueJ or Greenfoot. Also, the current Raspbian distribution already includes the Oracle JDK 8.
But the more comfortable options are the other two. In both cases you use a personal computer, where you have your preferred IDE; these are usually the most efficient ways to develop. The only difference between them is whether you use Java Micro Edition or the standard edition.
Using Java Micro Edition has some advantages, such as:
+ remote application management
+ debugging
+ profiling
+ or even emulation
And to finish, you can download all the source code from GitHub:
+ The application that runs on the detector devices
+ And the web application that monitors the detectors’ activity.