Digital Signage Technical Report
Nguyen Van Ca
I. Introduction
II. Installing and Setting Up the Environment for OpenCV
III. Gender Classification Using OpenCV
I. Introduction
Digital signage, also called dynamic signage, is a specialized form of narrowcasting in which video or multimedia content is displayed in public places for informational or advertising purposes. A digital signage system usually consists of a computer or playback device connected to a large, bright digital screen.
Digital signage is used in department stores, schools, libraries, office buildings, medical facilities, airports, train and bus stations, banks, auto dealerships, and other public venues. If the display is connected to a computer, the data on the screen can be updated in real time by means of an Internet or proprietary network connection. Data transmission and storage are streamlined by compression to minimize file size. The system can employ multiple screens if an extra-large display is desired.
There are several advantages to using digital signs instead of paper signs. Digital signs can be updated at will by remote control, while paper signs require individual replacement and physical travel to sign sites by personnel. Because digital signs require no paper or paint, they are more environmentally friendly than traditional signs. Digital signs can also be animated and can deliver sound as well as visual content.
In this project, we develop a gender classifier using OpenCV, intended for use in such a digital signage system.
II. Installing and Setting Up the Environment for OpenCV
1. Install the tools for compiling with OpenCV
1.1 Visual Studio
Download and install Visual Studio 2013 or Visual Studio 2015 with C/C++ support. It is free, and choosing all the default options will work fine.
1.2 OpenCV 2.4.12
Go to http://opencv.org/ and download OpenCV 2.4.12 for Windows. Then set "Extract to:" to your "C:\" directory.
2. Set up the environment variables for OpenCV
Right-click "My Computer" and select "Properties".
Then choose "Advanced system settings".
Then choose "Environment Variables".
Edit the "Path" field and add "C:\opencv\build\x64\vc12\bin", because we are using Visual Studio 2013 (vc12).
Click "OK" to save.
Open a Command Prompt and verify that the bin directory is now in PATH (for example, by running echo %PATH%), then reboot.
3. Configure Visual Studio and build a simple example
Start Visual Studio and choose File > New > Project.
Choose Visual C++ > Empty Project, name it as you prefer (e.g., "OpenCV_Classifier_Gender"), set your preferred location, and uncheck "Create directory for solution".
Then right-click in Solution Explorer and choose Add > New Item.
Choose "C++ File", name the file as preferred (e.g., "Main.cpp"), and click "Add".
In the Visual Studio toolbar, verify that "Solution Configurations" is set to "Debug", then change "Solution Platforms" to "x64" (make sure not to skip this step, or 32-bit vs. 64-bit errors will be encountered).
Select "OpenCV_Classifier_Gender" and click "Properties".
Choose "Configuration Manager".
In the "Platform" column, click "Win32", choose "x64", then click OK.
In "VC++ Directories", in the "Include Directories" field add "C:\opencv\build\include", and in the "Library Directories" field add "C:\opencv\build\x64\vc12\lib".
In "C/C++", at the "Additional Include Directories" field, add "C:\opencv\build\include".
In "Linker", at the "Additional Library Directories" field, add "C:\opencv\build\x64\vc12\lib".
In the "Input" field, under "Additional Dependencies", add the following libraries for the "Debug" configuration:
opencv_calib3d2412d.lib; opencv_contrib2412d.lib; opencv_core2412d.lib;
opencv_features2d2412d.lib; opencv_flann2412d.lib; opencv_gpu2412d.lib;
opencv_highgui2412d.lib; opencv_imgproc2412d.lib; opencv_legacy2412d.lib;
opencv_ml2412d.lib; opencv_nonfree2412d.lib; opencv_objdetect2412d.lib;
opencv_ocl2412d.lib; opencv_photo2412d.lib; opencv_stitching2412d.lib;
opencv_superres2412d.lib; opencv_ts2412d.lib; opencv_video2412d.lib;
opencv_videostab2412d.lib;
For the "Release" configuration, add:
opencv_calib3d2412.lib; opencv_contrib2412.lib; opencv_core2412.lib;
opencv_features2d2412.lib; opencv_flann2412.lib; opencv_gpu2412.lib;
opencv_highgui2412.lib; opencv_imgproc2412.lib; opencv_legacy2412.lib;
opencv_ml2412.lib; opencv_nonfree2412.lib; opencv_objdetect2412.lib;
opencv_ocl2412.lib; opencv_photo2412.lib; opencv_stitching2412.lib;
opencv_superres2412.lib; opencv_ts2412.lib; opencv_video2412.lib;
opencv_videostab2412.lib;
Click Apply and save.
With that, the Visual Studio environment is configured for OpenCV.
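To verify the configuration, a minimal test program can be built in the new project. This is only a sketch, and the image path below is a placeholder that you should replace with any image on your machine:
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <iostream>

int main()
{
    // Load a test image from disk (the path is a placeholder; adjust it to your machine).
    cv::Mat img = cv::imread("C:\\test.jpg");
    if (img.empty())
    {
        std::cout << "Could not load the test image." << std::endl;
        return -1;
    }
    // Display the image; if a window appears, OpenCV is linked correctly.
    cv::imshow("OpenCV test", img);
    cv::waitKey(0);
    return 0;
}
If this program builds in both Debug and Release x64 configurations and shows the image, the include paths, library paths, and dependencies above are all set correctly.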
III. Gender Classification Using OpenCV
Introduction to face recognition and face detection
Face recognition is the process of putting a label to a known face. It generally involves four main steps:
1. Face detection: This is the process of locating a face region in an image (a large rectangle near the center of the following screenshot). This step does not care who the person is, just that it is a human face.
2. Face preprocessing: This is the process of adjusting the face image to look clearer and more similar to other faces (a small grayscale face in the top center of the following screenshot).
3. Collect and learn faces: This is the process of saving many preprocessed faces (for each person who should be recognized), and then learning how to recognize them.
4. Face recognition: This is the process of checking which of the collected people is most similar to the face in the camera (a small rectangle in the top right of the following screenshot).
Step 1: Face detection
The object detector was extended in OpenCV v2.0 to also use LBP features for detection, based on work by Ahonen, Hadid, and Pietikäinen in 2006, as LBP-based detectors are potentially several times faster than Haar-based detectors and don't have the licensing issues that many Haar detectors have.
Implementing face detection using OpenCV
OpenCV v2.4 comes with various pretrained XML detectors that you can use for different purposes. The following list shows some of the most popular XML files (type of cascade classifier, followed by the XML filename):
Face detector (default): haarcascade_frontalface_default.xml
Face detector (fast Haar): haarcascade_frontalface_alt2.xml
Face detector (fast LBP): lbpcascade_frontalface.xml
Profile (side-looking) face detector: haarcascade_profileface.xml
Eye detector (separate for left and right): haarcascade_lefteye_2splits.xml
Mouth detector: haarcascade_mcs_mouth.xml
Nose detector: haarcascade_mcs_nose.xml
Whole-person detector: haarcascade_fullbody.xml
Haar-based detectors are stored in the folder data\haarcascades and LBP-based detectors are stored in the folder data\lbpcascades of the OpenCV root folder, such as C:\opencv\sources\data\lbpcascades\.
Loading a Haar or LBP detector for object or face detection
To perform object or face detection, first you must load the pretrained XML file using the CascadeClassifier::load() function:
CascadeClassifier faceDetector;
faceDetector.load(faceCascadeFilename);
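It is worth checking that the detector actually loaded before using it. The following is only a sketch; the cascade path is an example that assumes the default extraction location from Section II:
// Check that the cascade file was actually found and loaded.
// The path below is an example; adjust it to your OpenCV installation.
const std::string faceCascadeFilename =
    "C:/opencv/sources/data/lbpcascades/lbpcascade_frontalface.xml";
CascadeClassifier faceDetector;
if (!faceDetector.load(faceCascadeFilename) || faceDetector.empty())
{
    std::cerr << "ERROR: Could not load the face detector: " << faceCascadeFilename << std::endl;
    exit(1);
}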
Accessing the webcam
To grab frames from a computer's webcam or even from a video file, you can simply call the VideoCapture::open() function with the camera number or video filename, then grab the frames using the C++ stream operator.
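A minimal sketch of this (the camera index 0 and the window name are assumptions) is shown below:
// Open the first camera (index 0); pass a filename instead to read a video file.
VideoCapture camera;
camera.open(0);
if (!camera.isOpened())
{
    std::cerr << "ERROR: Could not access the camera!" << std::endl;
    exit(1);
}
Mat frame;
while (true)
{
    camera >> frame;           // grab the next frame using the C++ stream operator
    if (frame.empty())
        break;
    imshow("Camera", frame);
    if (waitKey(20) == 27)     // stop when the Esc key is pressed
        break;
}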
Detecting the face
Now that we have converted the image to grayscale, shrunk the image, and equalized the histogram, we are ready to detect the faces using the
CascadeClassifier::detectMultiScale() function! There are many parameters that we pass
to this function:
• minFeatureSize: This parameter determines the minimum face size that we care about, typically 20 x 20 or 30 x 30 pixels, but this depends on your use case and image size. If you are performing face detection on a webcam or smartphone where the face will always be very close to the camera, you could enlarge this to 80 x 80 to have much faster detections, or if you want to detect faraway faces, such as on a beach with friends, then leave this as 20 x 20.
• searchScaleFactor: This parameter determines how many different sizes of faces to look for; typically it would be 1.1 for good detection, or 1.2 for faster detection that does not find the face as often.
• minNeighbors: This parameter determines how sure the detector should be that it has detected a face; typically a value of 3, but you can set it higher if you want more reliable detections, even if some faces are then not detected.
• flags: This parameter allows you to specify whether to look for all faces (the default) or only the largest face (CASCADE_FIND_BIGGEST_OBJECT). If you only look for the largest face, it should run faster. There are several other parameters you can add to make the detection about one or two percent faster, such as CASCADE_DO_ROUGH_SEARCH or CASCADE_SCALE_IMAGE.
The output of the detectMultiScale() function will be a std::vector of cv::Rect objects. For example, if it detects two faces then it will store an array of two rectangles in the output. The detectMultiScale() function is used as follows:
int flags = CASCADE_SCALE_IMAGE;       // Search for many faces
Size minFeatureSize(20, 20);           // Smallest face size
float searchScaleFactor = 1.1f;        // How many sizes to search
int minNeighbors = 4;                  // Reliability vs many faces
// Detect objects in the small grayscale image
std::vector<Rect> faces;
faceDetector.detectMultiScale(img, faces, searchScaleFactor,
    minNeighbors, flags, minFeatureSize);
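As a small illustration that is not part of the original listing, the returned rectangles can be drawn onto the image to visualize the detections ('img' here is the same image that was passed to detectMultiScale()):
// Draw a white rectangle around each detected face (illustrative only; 'img' is grayscale).
for (size_t i = 0; i < faces.size(); i++)
{
    rectangle(img, faces[i], Scalar(255), 2);
}
imshow("Detected faces", img);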
Step 2: Face preprocessing
Face recognition is extremely vulnerable to changes in lighting conditions, face orientation, facial expression, and so on, so it is very important to reduce these differences as much as possible. Otherwise the face recognition algorithm will often think there is more similarity between the faces of two different people in the same conditions than between two faces of the same person.
The easiest form of face preprocessing is simply to apply histogram equalization using the equalizeHist() function, as we just did for face detection. This may be sufficient for some projects where the lighting and positional conditions won't change by much, but for reliability in real-world conditions we need more sophisticated techniques, including facial feature detection (for example, detecting the eyes, nose, mouth, and eyebrows).
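As a minimal sketch of this simplest preprocessing step (the variable names are illustrative), the detected face region can be converted to grayscale and equalized as follows:
// Convert the detected face region to grayscale and equalize its histogram.
// 'faceImg' is an illustrative name for the cropped color face image.
Mat gray;
cvtColor(faceImg, gray, CV_BGR2GRAY);
Mat equalizedFace;
equalizeHist(gray, equalizedFace);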
Eye detection
Eye detection can be very useful for face preprocessing, because for frontal faces you can always assume that a person's eyes should be horizontal and on opposite sides of the face, and should have a fairly standard position and size within the face, despite changes in facial expressions, lighting conditions, camera properties, distance to the camera, and so on. It is also useful for discarding false positives, where the face detector says it has detected a face but it is actually something else. It is rare that the face detector and two eye detectors will all be fooled at the same time, so if you only process images with a detected face and two detected eyes, you will not have many false positives (but will also have fewer faces to process, as the eye detectors will not work as often as the face detector). A sketch of using one of these eye cascades follows the lists below.
Eye detectors that detect open or closed eyes are as follows:
• haarcascade_mcs_lefteye.xml (and haarcascade_mcs_righteye.xml)
• haarcascade_lefteye_2splits.xml (and haarcascade_righteye_2splits.xml)
Eye detectors that detect open eyes only are as follows:
• haarcascade_eye.xml
• haarcascade_eye_tree_eyeglasses.xml
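As a hedged sketch of how one of these cascades might be used (the cascade path, and the 'gray' and 'faceRect' variables from the face detection step, are assumptions), eyes can be searched for inside the detected face region:
// Load one of the eye cascades and search only inside the detected face region.
CascadeClassifier eyeDetector;
eyeDetector.load("C:/opencv/sources/data/haarcascades/haarcascade_eye.xml");
Mat faceROI = gray(faceRect);          // crop the detected face from the grayscale image
std::vector<Rect> eyes;
eyeDetector.detectMultiScale(faceROI, eyes);
if (eyes.size() == 2)
{
    // Both eyes were found, so keep this face for alignment and preprocessing.
}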
Step 3: Collecting faces and learning from them
Collecting preprocessed faces for training
For OpenCV to recognize your faces, an accurate alignment of your image data is especially important in tasks like gender detection, where you need as much detail as possible. You don't want to do this by hand, so we prepared a tiny Python script. The code is really easy to use: to scale, rotate, and crop the face image you just need to call CropFace(image, eye_left, eye_right, offset_pct, dest_sz), where:
• eye_left is the position of the left eye
• eye_right is the position of the right eye
• offset_pct is the percent of the image you want to keep next to the eyes (horizontal, vertical direction)
• dest_sz is the size of the output image
If you are using the same offset_pct and dest_sz for all your images, they will all be aligned at the eyes. You should prepare one or more pictures with the *.jpg extension (OpenCV can detect and recognize more image types) and place them in one folder. In order to run the script CropFace.py correctly, you must provide the positions of the left eye and the right eye. I used Paint to get these positions: move your mouse to the left-eye position and the status bar of Paint will display its (x, y) position. Note this position, then do the same for the right eye and for the other images.
Imagine we are given this photo of Arnold Schwarzenegger, which is under a Public Domain license. The (x, y) position of the eyes is approximately (252, 364) for the left eye and (420, 366) for the right eye. Now you only need to define the horizontal offset, the vertical offset, and the size your scaled, rotated, and cropped face should have.
Here are some example configurations (horizontal offset_pct, vertical offset_pct, dest_sz); each produces a differently cropped, scaled, and rotated face:
0.1 (10%), 0.1 (10%), (200,200)
0.2 (20%), 0.2 (20%), (200,200)
0.3 (30%), 0.3 (30%), (200,200)
0.2 (20%), 0.2 (20%), (70,70)
When you have cropped your faces successfully, you should put all the cropped faces into one folder. Then we will create a .csv file to read the images, because a CSV file is the simplest platform-independent approach. Basically, all the CSV file needs to contain are lines composed of a filename, followed by a ";", followed by the label (as an integer number), making up a line like this:
/path/to/image.ext;0
Let's dissect the line: /path/to/image.ext is the path to an image, probably something like C:/faces/person0/image0.jpg if you are on Windows. Then there is the separator ";", and finally we assign the label 0 to the image. Think of the label as the subject (the person) this image belongs to, so the same subjects (persons) should have the same label. Please note that the CSV file must contain at least two subjects.
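For the gender classifier in this project, a minimal CSV might look like the following (the folder names and the 0 = male / 1 = female label convention are illustrative assumptions, not fixed by OpenCV):
C:/faces/male/image0.jpg;0
C:/faces/male/image1.jpg;0
C:/faces/female/image0.jpg;1
C:/faces/female/image1.jpg;1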
Training the face recognition system from collected faces
After you have collected enough faces for each person to recognize, you must train the system to learn the data using a machine-learning algorithm suited for face recognition. There are many different face recognition algorithms in the literature. Here are the three face recognition algorithms available in OpenCV v2.4.12:
• FaceRecognizer.Eigenfaces: Eigenfaces, also referred to as PCA (Principal Component Analysis), first used by Turk and Pentland in 1991
• FaceRecognizer.Fisherfaces: Fisherfaces, also referred to as LDA (Linear Discriminant Analysis), invented by Belhumeur, Hespanha, and Kriegman in 1997
• FaceRecognizer.LBPH: Local Binary Pattern Histograms, invented by Ahonen, Hadid, and Pietikäinen in 2004
These face recognition algorithms are available through the FaceRecognizer class in OpenCV's contrib module
To use one of the face recognition algorithms, we must create a FaceRecognizer object using the cv::Algorithm::create<FaceRecognizer>() function. We pass the name of the face recognition algorithm we want to use as a string to this create() function:
// The algorithm name string, for example "FaceRecognizer.Fisherfaces" (see the list above).
string facerecAlgorithm = "FaceRecognizer.Fisherfaces";
Ptr<FaceRecognizer> model;
// Use OpenCV's new FaceRecognizer in the "contrib" module:
model = Algorithm::create<FaceRecognizer>(facerecAlgorithm);
Once we have loaded the FaceRecognizer algorithm, we simply call the FaceRecognizer::train() function with our collected face data as follows:
// Do the actual training from the collected faces
model->train(preprocessedFaces, faceLabels);
This one line of code will run the whole face recognition training algorithm that you selected (for example, Eigenfaces, Fisherfaces, or potentially other algorithms). If you have just a few people with fewer than 20 faces each, then this training should return very quickly, but if you have many people with many faces, it is possible that the train() function will take several seconds or even minutes to process all the data.
Step 4: Face recognition
Face identification: Recognizing people from their face
We can identify the person in a photo simply by calling the FaceRecognizer::predict() function on a facial image as follows:
int identity = model->predict(preprocessedFace);
This identity value will be the label number that we originally used when collecting faces for training, for example, 0 for the man, 1 for the woman, and so on.
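To tie this back to the gender classifier, the predicted label can be mapped to a gender string. The following is only a sketch; the 0 = male / 1 = female convention matches the illustrative CSV above rather than anything fixed by OpenCV:
// Predict the label and also retrieve a confidence value (a distance; lower means more similar).
int identity = -1;
double confidence = 0.0;
model->predict(preprocessedFace, identity, confidence);
// Map the label back to a gender string (0 = male, 1 = female is our own convention).
std::string gender = (identity == 0) ? "male" : "female";
std::cout << "Predicted gender: " << gender << " (distance: " << confidence << ")" << std::endl;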