Added the cv::FaceRecognizer documentation (API, Face Recognition Guide, Tutorials).
@ -7,4 +7,5 @@ The module contains some recently added functionality that has not been stabiliz

.. toctree::
    :maxdepth: 2

    stereo
    FaceRecognizer Documentation <facerec/index>

@ -1,550 +0,0 @@

Face Recognition with OpenCV
=============================

.. highlight:: cpp

This document explains how to perform face recognition with the new
:ocv:class:`FaceRecognizer`, which is available from OpenCV 2.4 onwards.

FaceRecognizer
--------------

All face recognition models in OpenCV are derived from the abstract base
class :ocv:class:`FaceRecognizer`, which provides unified access to all face
recognition algorithms in OpenCV.

.. ocv:class:: FaceRecognizer : public Algorithm

.. code-block:: cpp

    class FaceRecognizer : public Algorithm
    {
    public:
        //! virtual destructor
        virtual ~FaceRecognizer() {}

        // Trains a FaceRecognizer.
        virtual void train(InputArray src, InputArray labels) = 0;

        // Gets a prediction from a FaceRecognizer.
        virtual int predict(InputArray src) const = 0;

        // Predicts the label and confidence for a given sample.
        virtual void predict(InputArray src, int &label, double &dist) const = 0;

        // Serializes this object to a given filename.
        virtual void save(const string& filename) const;

        // Deserializes this object from a given filename.
        virtual void load(const string& filename);

        // Serializes this object to a given cv::FileStorage.
        virtual void save(FileStorage& fs) const = 0;

        // Deserializes this object from a given cv::FileStorage.
        virtual void load(const FileStorage& fs) = 0;
    };

    Ptr<FaceRecognizer> createEigenFaceRecognizer(int num_components = 0, double threshold = DBL_MAX);
    Ptr<FaceRecognizer> createFisherFaceRecognizer(int num_components = 0, double threshold = DBL_MAX);
    Ptr<FaceRecognizer> createLBPHFaceRecognizer(int radius=1, int neighbors=8, int grid_x=8, int grid_y=8, double threshold = DBL_MAX);

FaceRecognizer::train
---------------------

Trains a FaceRecognizer with given data and associated labels.

.. ocv:function:: void FaceRecognizer::train(InputArray src, InputArray labels)

Every model subclassing :ocv:class:`FaceRecognizer` is able to work with
image data in ``src``, which must be given as a ``vector<Mat>``. A ``vector<Mat>``
was chosen so that no preliminary assumptions about the input samples are made.
The Local Binary Patterns method, for example, processes 2D images of differing
size, while the Eigenfaces and Fisherfaces methods reshape all images in ``src``
to a data matrix.

The associated labels in ``labels`` have to be given either in a ``Mat``
with one row/column of type ``CV_32SC1`` or simply as a ``vector<int>``.

The following example shows how to learn an Eigenfaces model with OpenCV:

.. code-block:: cpp

    // holds images and labels
    vector<Mat> images;
    vector<int> labels;
    // images for first person
    images.push_back(imread("person0/0.jpg", CV_LOAD_IMAGE_GRAYSCALE)); labels.push_back(0);
    images.push_back(imread("person0/1.jpg", CV_LOAD_IMAGE_GRAYSCALE)); labels.push_back(0);
    images.push_back(imread("person0/2.jpg", CV_LOAD_IMAGE_GRAYSCALE)); labels.push_back(0);
    // images for second person
    images.push_back(imread("person1/0.jpg", CV_LOAD_IMAGE_GRAYSCALE)); labels.push_back(1);
    images.push_back(imread("person1/1.jpg", CV_LOAD_IMAGE_GRAYSCALE)); labels.push_back(1);
    images.push_back(imread("person1/2.jpg", CV_LOAD_IMAGE_GRAYSCALE)); labels.push_back(1);
    // create a new Eigenfaces model
    Ptr<FaceRecognizer> model = createEigenFaceRecognizer();
    // and learn it
    model->train(images, labels);

FaceRecognizer::predict
-----------------------

Predicts the label for a given query image in ``src``.

.. ocv:function:: int FaceRecognizer::predict(InputArray src) const

Predicts the label for a given query image in ``src``.

.. ocv:function:: void FaceRecognizer::predict(InputArray src, int &label, double &dist) const

Predicts the label and confidence (distance) for a given query image in ``src``.

The suffix ``const`` means that prediction does not affect the internal model
state, so the method can be safely called from within different threads.

The following example shows how to get a prediction from a trained model:

.. code-block:: cpp

    int predictedLabel = model->predict(testSample);

To get the confidence of a prediction call the model with:

.. code-block:: cpp

    int predictedLabel = -1;
    double confidence = 0.0;
    model->predict(testSample, predictedLabel, confidence);

FaceRecognizer::save
--------------------

Saves a :ocv:class:`FaceRecognizer` and its model state.

.. ocv:function:: void FaceRecognizer::save(const string& filename) const
.. ocv:function:: void FaceRecognizer::save(FileStorage& fs) const

Every :ocv:class:`FaceRecognizer` overrides ``FaceRecognizer::save(FileStorage& fs)``
to save its internal model state. You can either call ``FaceRecognizer::save(FileStorage& fs)``
with an already opened ``FileStorage``, or use the convenience overload
``FaceRecognizer::save(const string& filename)``, which opens the file for you.

The suffix ``const`` means that saving does not affect the internal model
state, so the method can be safely called from within different threads.

FaceRecognizer::load
--------------------

Loads a :ocv:class:`FaceRecognizer` and its model state.

.. ocv:function:: void FaceRecognizer::load(const string& filename)
.. ocv:function:: void FaceRecognizer::load(const FileStorage& fs)

Loads a persisted model and state from a given XML or YAML file. Every
:ocv:class:`FaceRecognizer` overrides ``FaceRecognizer::load(FileStorage& fs)``
to enable loading the internal model state. ``FaceRecognizer::load(const string& filename)``
eases loading a model, so you just need to call it with the filename.
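
A minimal sketch of a save/load round trip, assuming ``model`` is the trained
model from the example above (the filename is arbitrary):

.. code-block:: cpp

    // Persist the complete model state to a YAML file:
    model->save("eigenfaces_model.yml");
    // Later, create an empty model of the same type and restore its state:
    Ptr<FaceRecognizer> model2 = createEigenFaceRecognizer();
    model2->load("eigenfaces_model.yml");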

createEigenFaceRecognizer
-------------------------

Creates an Eigenfaces model with a given number of components (if given) and threshold (if given).

.. ocv:function:: Ptr<FaceRecognizer> createEigenFaceRecognizer(int num_components = 0, double threshold = DBL_MAX)

This model implements the Eigenfaces method as described in [TP91]_.

* ``num_components`` (default 0): the number of components kept for classification.
  If no number of components is given, it is determined automatically from the
  data given to :ocv:func:`FaceRecognizer::train`. If (and only if) ``num_components`` <= 0,
  then ``num_components`` is set to (N-1) in :ocv:func:`Eigenfaces::train`, with *N* being the
  total number of samples in ``src``.

* ``threshold`` (default ``DBL_MAX``): the distance threshold applied when predicting;
  a query whose distance exceeds it is labeled as unknown (-1).

Internal model data, which can be accessed through cv::Algorithm:

* ``ncomponents``

* ``threshold``

* ``eigenvectors``

* ``eigenvalues``

* ``mean``

* ``labels``

* ``projections``
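
Since every :ocv:class:`FaceRecognizer` is an :ocv:class:`Algorithm`, the fields
listed above can be read at runtime. A short sketch, assuming ``model`` is the
trained Eigenfaces model from the examples above:

.. code-block:: cpp

    // Read the mean face and the eigenvalues learned during training:
    Mat mean = model->getMat("mean");
    Mat eigenvalues = model->getMat("eigenvalues");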

createFisherFaceRecognizer
--------------------------

Creates a Fisherfaces model for a given number of components and threshold.

.. ocv:function:: Ptr<FaceRecognizer> createFisherFaceRecognizer(int num_components = 0, double threshold = DBL_MAX)

This model implements the Fisherfaces method as described in [BHK97]_.

* ``num_components``: the number of components kept for classification. If no number
  of components is given (default 0), it is determined automatically from the data
  given to :ocv:func:`Fisherfaces::train` (model implementation). If (and only if)
  ``num_components`` <= 0, then ``num_components`` is set to (C-1) in
  :ocv:func:`train`, with *C* being the number of unique classes in ``labels``.

* ``threshold`` (default ``DBL_MAX``): the distance threshold applied when predicting;
  a query whose distance exceeds it is labeled as unknown (-1).

Internal model data, which can be accessed through cv::Algorithm:

* ``ncomponents``

* ``threshold``

* ``projections``

* ``labels``

* ``eigenvectors``

* ``eigenvalues``

* ``mean``
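
For illustration, a Fisherfaces model that keeps all components but treats any
distance above 123.0 as unknown could be created like this (the threshold value
is arbitrary, chosen just for the example):

.. code-block:: cpp

    Ptr<FaceRecognizer> model = createFisherFaceRecognizer(0, 123.0);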

createLBPHFaceRecognizer
------------------------

Implements face recognition with Local Binary Patterns Histograms as described in [Ahonen04]_.

.. ocv:function:: Ptr<FaceRecognizer> createLBPHFaceRecognizer(int radius=1, int neighbors=8, int grid_x=8, int grid_y=8, double threshold = DBL_MAX)

Internal model data, which can be accessed through cv::Algorithm:

* ``radius``

* ``neighbors``

* ``grid_x``

* ``grid_y``

* ``threshold``

* ``histograms``

* ``labels``
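
For illustration, here is how an LBPH model might be created with its parameters
spelled out (the values shown are simply the defaults from the signature above):

.. code-block:: cpp

    // 1 pixel radius, 8 neighbors, an 8x8 grid and no rejection threshold:
    Ptr<FaceRecognizer> model = createLBPHFaceRecognizer(1, 8, 8, 8, DBL_MAX);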

Example: Working with a cv::FaceRecognizer
===========================================

In this tutorial you'll see how to do face recognition with OpenCV on real image data. We'll work through a complete example, so you know how to work with it. While this example is based on Eigenfaces, it works the same for all the other available :ocv:class:`FaceRecognizer` implementations.

Getting Image Data
------------------

We are doing face recognition, so you'll need some face images first! You can decide to either create your own database or start with one of the many available datasets. `face-rec.org/databases <http://face-rec.org/databases/>`_ gives an up-to-date overview of publicly available datasets (parts of the following descriptions are quoted from there).

Three interesting databases are:

* `AT&T Facedatabase <http://www.cl.cam.ac.uk/research/dtg/attarchive/facedatabase.html>`_ The AT&T Facedatabase, sometimes also referred to as the *ORL Database of Faces*, contains ten different images of each of 40 distinct subjects. For some subjects, the images were taken at different times, varying the lighting, facial expressions (open / closed eyes, smiling / not smiling) and facial details (glasses / no glasses). All the images were taken against a dark homogeneous background with the subjects in an upright, frontal position (with tolerance for some side movement).

* `Yale Facedatabase A <http://cvc.yale.edu/projects/yalefaces/yalefaces.html>`_ The AT&T Facedatabase is good for initial tests, but it's a fairly easy database. The Eigenfaces method already achieves a 97% recognition rate on it, so you won't see any improvements with other algorithms. The Yale Facedatabase A is a more appropriate dataset for initial experiments, because the recognition problem is harder. The database consists of 15 people (14 male, 1 female), each with 11 grayscale images sized :math:`320 \times 243` pixels. There are changes in the lighting conditions (center light, left light, right light), facial expressions (happy, normal, sad, sleepy, surprised, wink) and glasses (glasses, no-glasses).

* `Extended Yale Facedatabase B <http://vision.ucsd.edu/~leekc/ExtYaleDatabase/ExtYaleB.html>`_ The Extended Yale Facedatabase B contains 2414 images of 38 different people in its cropped version. The focus of this database is on extracting features that are robust to illumination; the images have almost no variation in emotion or occlusion. I personally think that this dataset is too large for the experiments I perform in this document; you'd better use the `AT&T Facedatabase <http://www.cl.cam.ac.uk/research/dtg/attarchive/facedatabase.html>`_ for initial testing. A first version of the Yale Facedatabase B was used in [BHK97]_ to see how the Eigenfaces and Fisherfaces methods perform under heavy illumination changes. [Lee2005]_ used the same setup to take 16128 images of 28 people. The Extended Yale Facedatabase B is the merge of the two databases, which is now known as the Extended Yale Facedatabase B.

For this tutorial I am going to use the `AT&T Facedatabase <http://www.cl.cam.ac.uk/research/dtg/attarchive/facedatabase.html>`_, which is available from `http://www.cl.cam.ac.uk/research/dtg/attarchive/facedatabase.html <http://www.cl.cam.ac.uk/research/dtg/attarchive/facedatabase.html>`_. All credit for this dataset goes to *AT&T Laboratories, Cambridge*; also make sure to read the accompanying README.

Reading the Image Data
-----------------------

In the demo I have decided to read the images from a very simple CSV file. Why? Because it's the simplest platform-independent approach I can think of. However, if you know a simpler solution please ping me about it. Basically all the CSV file needs to contain are lines composed of a **filename** followed by a **;** followed by the **label** (as an integer number), making up a line like this:

.. code-block:: none

    /path/to/at/s1/1.pgm;0

Think of the **label** as the subject (the person) this image belongs to, so same subjects (persons) should have the same label. Let's make up an example. Download the AT&T Facedatabase from `http://www.cl.cam.ac.uk/research/dtg/attarchive/facedatabase.html <http://www.cl.cam.ac.uk/research/dtg/attarchive/facedatabase.html>`_ and extract it to a folder of your choice. I am referring to the path you have chosen as **/path/to** in the following listings. You'll now have a folder structure like this:

.. code-block:: none

    philipp@mango:~/path/to/at$ tree
    .
    |-- README
    |-- s1
    |   |-- 1.pgm
    |   |-- ...
    |   |-- 10.pgm
    |-- s2
    |   |-- 1.pgm
    |   |-- ...
    |   |-- 10.pgm
    ...
    |-- s40
    |   |-- 1.pgm
    |   |-- ...
    |   |-- 10.pgm

So that's actually very simple to map to a CSV file. You don't have to worry about the order of the labels or anything, just make sure the same persons have the same label:

.. code-block:: none

    /path/to/at/s1/1.pgm;0
    /path/to/at/s1/2.pgm;0
    ...
    /path/to/at/s2/1.pgm;1
    /path/to/at/s2/2.pgm;1
    ...
    /path/to/at/s40/1.pgm;39
    /path/to/at/s40/2.pgm;39

You don't need to create this file yourself for the AT&T Face Database, because there's already a template file in ``opencv/samples/cpp/facerec_at_t.txt``. You just need to replace **/path/to** with the folder where you extracted the archive to. An example: imagine I have extracted the files to D:/data/at. Then I would simply Search & Replace **/path/to** with **D:/data**. You can do that in an editor of your choice, every sufficiently advanced editor can do this! Once you have a CSV file with *valid filenames* and labels, you can run the demo by passing the path to the CSV file as a parameter.

The demo application (opencv/samples/cpp/facerec_demo.cpp)
----------------------------------------------------------

The following is the demo application, which can be found in ``opencv/samples/cpp/facerec_demo.cpp``. If you have chosen to build OpenCV with the samples, chances are good you have the executable already! However, you don't need to copy and paste this code, because it's the same as in ``opencv/samples/cpp/facerec_demo.cpp``. I am simply going to paste the source code listing here, as there is an extensive description in the comments within the file.

.. code-block:: cpp

    #include "opencv2/core/core.hpp"
    #include "opencv2/highgui/highgui.hpp"
    #include "opencv2/contrib/contrib.hpp"

    #include <iostream>
    #include <fstream>
    #include <sstream>

    using namespace cv;
    using namespace std;

    static Mat toGrayscale(InputArray _src) {
        Mat src = _src.getMat();
        // only allow one channel
        if(src.channels() != 1) {
            CV_Error(CV_StsBadArg, "Only Matrices with one channel are supported");
        }
        // create and return normalized image
        Mat dst;
        cv::normalize(_src, dst, 0, 255, NORM_MINMAX, CV_8UC1);
        return dst;
    }

    static void read_csv(const string& filename, vector<Mat>& images, vector<int>& labels, char separator = ';') {
        std::ifstream file(filename.c_str(), ifstream::in);
        if (!file) {
            string error_message = "No valid input file was given, please check the given filename.";
            CV_Error(CV_StsBadArg, error_message);
        }
        string line, path, classlabel;
        while (getline(file, line)) {
            stringstream liness(line);
            getline(liness, path, separator);
            getline(liness, classlabel);
            if(!path.empty() && !classlabel.empty()) {
                images.push_back(imread(path, 0));
                labels.push_back(atoi(classlabel.c_str()));
            }
        }
    }

    int main(int argc, const char *argv[]) {
        // Check for valid command line arguments, print usage
        // if no arguments were given.
        if (argc != 2) {
            cout << "usage: " << argv[0] << " <csv.ext>" << endl;
            exit(1);
        }
        // Get the path to your CSV.
        string fn_csv = string(argv[1]);
        // These vectors hold the images and corresponding labels.
        vector<Mat> images;
        vector<int> labels;
        // Read in the data. This can fail if no valid
        // input filename is given.
        try {
            read_csv(fn_csv, images, labels);
        } catch (cv::Exception& e) {
            cerr << "Error opening file \"" << fn_csv << "\". Reason: " << e.msg << endl;
            // nothing more we can do
            exit(1);
        }
        // Quit if there are not enough images for this demo.
        if(images.size() <= 1) {
            string error_message = "This demo needs at least 2 images to work. Please add more images to your data set!";
            CV_Error(CV_StsError, error_message);
        }
        // Get the height from the first image. We'll need this
        // later in code to reshape the images to their original
        // size:
        int height = images[0].rows;
        // The following lines simply get the last image from
        // your dataset and remove it from the vector. This is
        // done, so that the training data (which we learn the
        // cv::FaceRecognizer on) and the test data we test
        // the model with, do not overlap.
        Mat testSample = images[images.size() - 1];
        int testLabel = labels[labels.size() - 1];
        images.pop_back();
        labels.pop_back();
        // The following lines create an Eigenfaces model for
        // face recognition and train it with the images and
        // labels read from the given CSV file.
        // This here is a full PCA, if you just want to keep
        // 10 principal components (read Eigenfaces), then call
        // the factory method like this:
        //
        //      cv::createEigenFaceRecognizer(10);
        //
        // If you want to create a FaceRecognizer with a
        // confidence threshold, call it with:
        //
        //      cv::createEigenFaceRecognizer(10, 123.0);
        //
        Ptr<FaceRecognizer> model = createEigenFaceRecognizer();
        model->train(images, labels);
        // The following line predicts the label of a given
        // test image:
        int predictedLabel = model->predict(testSample);
        //
        // To get the confidence of a prediction call the model with:
        //
        //      int predictedLabel = -1;
        //      double confidence = 0.0;
        //      model->predict(testSample, predictedLabel, confidence);
        //
        string result_message = format("Predicted class = %d / Actual class = %d.", predictedLabel, testLabel);
        cout << result_message << endl;
        // Sometimes you'll need to get/set internal model data,
        // which isn't exposed by the public cv::FaceRecognizer.
        // Since each cv::FaceRecognizer is derived from a
        // cv::Algorithm, you can query the data.
        //
        // First we'll use it to set the threshold of the FaceRecognizer
        // to 0.0 without retraining the model. This can be useful if
        // you are evaluating the model:
        //
        model->set("threshold", 0.0);
        // Now the threshold of this model is set to 0.0. A prediction
        // now returns -1, as it's impossible to have a distance below
        // it.
        predictedLabel = model->predict(testSample);
        cout << "Predicted class = " << predictedLabel << endl;
        // Here is how to get the eigenvalues of this Eigenfaces model:
        Mat eigenvalues = model->getMat("eigenvalues");
        // And we can do the same to display the Eigenvectors (read Eigenfaces):
        Mat W = model->getMat("eigenvectors");
        // From this we will display the (at most) first 10 Eigenfaces:
        for (int i = 0; i < min(10, W.cols); i++) {
            string msg = format("Eigenvalue #%d = %.5f", i, eigenvalues.at<double>(i));
            cout << msg << endl;
            // get eigenvector #i
            Mat ev = W.col(i).clone();
            // Reshape to original size & normalize to [0...255] for imshow.
            Mat grayscale = toGrayscale(ev.reshape(1, height));
            // Show the image & apply a Jet colormap for better sensing.
            Mat cgrayscale;
            applyColorMap(grayscale, cgrayscale, COLORMAP_JET);
            imshow(format("%d", i), cgrayscale);
        }
        waitKey(0);

        return 0;
    }

Running the demo application
----------------------------

.. code-block:: none

    TODO
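
The exact console output is still marked TODO above; assuming the sample was
built as ``facerec_demo``, invoking it should look roughly like this (the CSV
path is an example):

.. code-block:: none

    ./facerec_demo /path/to/at.txt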

Results
-------

.. code-block:: none

    TODO

Saving and Loading a cv::FaceRecognizer
=======================================

Saving and loading a :ocv:class:`FaceRecognizer` is a very important task, because
training a :ocv:class:`FaceRecognizer` can be a very time-intense task for large
datasets (depending on your algorithm). In OpenCV you only have to call
:ocv:func:`FaceRecognizer::load` for loading, and :ocv:func:`FaceRecognizer::save`
for saving the internal state of a :ocv:class:`FaceRecognizer`.

Imagine we are using the same example as above. We want to learn the Eigenfaces of
the `AT&T Facedatabase <http://www.cl.cam.ac.uk/research/dtg/attarchive/facedatabase.html>`_,
but store the model to a YAML file so we can load it from somewhere else.

To see if everything went fine, we'll have a look at the stored data and the first 10 Eigenfaces.

Demo application
----------------

.. code-block:: none

    TODO
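
The full listing is still marked TODO above; a minimal sketch of the relevant
part, assuming ``images`` and ``labels`` were read as in the demo application
from the previous section, would be:

.. code-block:: cpp

    // Train an Eigenfaces model and persist it to a YAML file:
    Ptr<FaceRecognizer> model = createEigenFaceRecognizer();
    model->train(images, labels);
    model->save("eigenfaces_at.yml");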

Results
-------

``eigenfaces_at.yml`` contains the model state; we'll simply show the first 10
lines with ``head eigenfaces_at.yml``:

.. code-block:: none

    philipp@mango:~/github/libfacerec-build$ head eigenfaces_at.yml
    %YAML:1.0
    num_components: 399
    mean: !!opencv-matrix
       rows: 1
       cols: 10304
       dt: d
       data: [ 8.5558897243107765e+01, 8.5511278195488714e+01,
           8.5854636591478695e+01, 8.5796992481203006e+01,
           8.5952380952380949e+01, 8.6162907268170414e+01,
           8.6082706766917283e+01, 8.5776942355889716e+01,

And here are the Eigenfaces:

Credits
=======

The Database of Faces
---------------------

*Important: when using these images, please give credit to "AT&T Laboratories, Cambridge."*

The Database of Faces, formerly "The ORL Database of Faces", contains a set of face images taken between April 1992 and April 1994. The database was used in the context of a face recognition project carried out in collaboration with the Speech, Vision and Robotics Group of the Cambridge University Engineering Department.

There are ten different images of each of 40 distinct subjects. For some subjects, the images were taken at different times, varying the lighting, facial expressions (open / closed eyes, smiling / not smiling) and facial details (glasses / no glasses). All the images were taken against a dark homogeneous background with the subjects in an upright, frontal position (with tolerance for some side movement).

The files are in PGM format. The size of each image is 92x112 pixels, with 256 grey levels per pixel. The images are organised in 40 directories (one for each subject), which have names of the form sX, where X indicates the subject number (between 1 and 40). In each of these directories, there are ten different images of that subject, which have names of the form Y.pgm, where Y is the image number for that subject (between 1 and 10).

A copy of the database can be retrieved from:

`http://www.cl.cam.ac.uk/research/dtg/attarchive/pub/data/att_faces.zip <http://www.cl.cam.ac.uk/research/dtg/attarchive/pub/data/att_faces.zip>`_

Literature
==========

.. [Ahonen04] Ahonen, T., Hadid, A., and Pietikainen, M. *Face Recognition with Local Binary Patterns.* Computer Vision - ECCV 2004 (2004), 469–481.

.. [Fisher36] Fisher, R. A. *The use of multiple measurements in taxonomic problems.* Annals Eugen. 7 (1936), 179–188.

.. [BHK97] Belhumeur, P. N., Hespanha, J., and Kriegman, D. *Eigenfaces vs. Fisherfaces: Recognition Using Class Specific Linear Projection.* IEEE Transactions on Pattern Analysis and Machine Intelligence 19, 7 (1997), 711–720.

.. [TP91] Turk, M., and Pentland, A. *Eigenfaces for recognition.* Journal of Cognitive Neuroscience 3 (1991), 71–86.

.. [Tan10] Tan, X., and Triggs, B. *Enhanced local texture feature sets for face recognition under difficult lighting conditions.* IEEE Transactions on Image Processing 19 (2010), 1635–1650.

.. [Zhao03] Zhao, W., Chellappa, R., Phillips, P., and Rosenfeld, A. *Face recognition: A literature survey.* ACM Computing Surveys (CSUR) 35, 4 (2003), 399–458.

.. [Tu06] Turati, C., Macchi Cassia, V., Simion, F., and Leo, I. *Newborns' face recognition: Role of inner and outer facial features.* Child Development 77, 2 (2006), 297–311.

.. [Kanade73] Kanade, T. *Picture processing system by computer complex and recognition of human faces.* PhD thesis, Kyoto University, November 1973.

modules/contrib/doc/facerec/colormaps.rst (new file, 107 lines)
@ -0,0 +1,107 @@

ColorMaps in OpenCV
===================

applyColorMap
---------------------

Applies a GNU Octave/MATLAB equivalent colormap on a given image.

.. ocv:function:: void applyColorMap(InputArray src, OutputArray dst, int colormap)

    :param src: The source image, grayscale or colored does not matter.

    :param dst: The result is the colormapped source image. Note: :ocv:func:`Mat::create` is called on dst.

    :param colormap: The colormap to apply, see the list of available colormaps below.

Currently the following GNU Octave/MATLAB equivalent colormaps are implemented:

.. code-block:: cpp

    enum
    {
        COLORMAP_AUTUMN = 0,
        COLORMAP_BONE = 1,
        COLORMAP_JET = 2,
        COLORMAP_WINTER = 3,
        COLORMAP_RAINBOW = 4,
        COLORMAP_OCEAN = 5,
        COLORMAP_SUMMER = 6,
        COLORMAP_SPRING = 7,
        COLORMAP_COOL = 8,
        COLORMAP_HSV = 9,
        COLORMAP_PINK = 10,
        COLORMAP_HOT = 11
    }

Description
-----------

Human perception isn't built for observing fine changes in grayscale images. Human eyes are more sensitive to changes between colors, so you often need to recolor your grayscale images to get a clue about them. OpenCV now comes with various colormaps to enhance the visualization in your computer vision application.

In OpenCV 2.4 you only need :ocv:func:`applyColorMap` to apply a colormap on a given image. The following sample code reads the path to an image from the command line, applies a Jet colormap on it and shows the result:

.. code-block:: cpp

    #include <opencv2/contrib/contrib.hpp>
    #include <opencv2/core/core.hpp>
    #include <opencv2/highgui/highgui.hpp>

    using namespace cv;

    int main(int argc, const char *argv[]) {
        // Get the path to the image, if one was given
        // on the command line.
        string filename;
        if (argc > 1) {
            filename = string(argv[1]);
        }
        // The following lines show how to apply a colormap on a given image
        // and show it with cv::imshow. An exception is
        // thrown if the path to the image is invalid.
        if(!filename.empty()) {
            Mat img0 = imread(filename);
            // Throw an exception, if the image can't be read:
            if(img0.empty()) {
                CV_Error(CV_StsBadArg, "Sample image is empty. Please adjust your path, so it points to a valid input image!");
            }
            // Holds the colormap version of the image:
            Mat cm_img0;
            // Apply the colormap:
            applyColorMap(img0, cm_img0, COLORMAP_JET);
            // Show the result:
            imshow("cm_img0", cm_img0);
            waitKey(0);
        }

        return 0;
    }

And here are the color scales for each of the available colormaps:

+-----------------------+---------------------------------------------------+
| Class                 | Scale                                             |
+=======================+===================================================+
| COLORMAP_AUTUMN       | .. image:: img/colormaps/colorscale_autumn.jpg    |
+-----------------------+---------------------------------------------------+
| COLORMAP_BONE         | .. image:: img/colormaps/colorscale_bone.jpg      |
+-----------------------+---------------------------------------------------+
| COLORMAP_COOL         | .. image:: img/colormaps/colorscale_cool.jpg      |
+-----------------------+---------------------------------------------------+
| COLORMAP_HOT          | .. image:: img/colormaps/colorscale_hot.jpg       |
+-----------------------+---------------------------------------------------+
| COLORMAP_HSV          | .. image:: img/colormaps/colorscale_hsv.jpg       |
+-----------------------+---------------------------------------------------+
| COLORMAP_JET          | .. image:: img/colormaps/colorscale_jet.jpg       |
+-----------------------+---------------------------------------------------+
| COLORMAP_OCEAN        | .. image:: img/colormaps/colorscale_ocean.jpg     |
+-----------------------+---------------------------------------------------+
| COLORMAP_PINK         | .. image:: img/colormaps/colorscale_pink.jpg      |
+-----------------------+---------------------------------------------------+
| COLORMAP_RAINBOW      | .. image:: img/colormaps/colorscale_rainbow.jpg   |
+-----------------------+---------------------------------------------------+
| COLORMAP_SPRING       | .. image:: img/colormaps/colorscale_spring.jpg    |
+-----------------------+---------------------------------------------------+
| COLORMAP_SUMMER       | .. image:: img/colormaps/colorscale_summer.jpg    |
+-----------------------+---------------------------------------------------+
| COLORMAP_WINTER       | .. image:: img/colormaps/colorscale_winter.jpg    |
+-----------------------+---------------------------------------------------+
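
Any other entry of the enum works the same way as the Jet colormap in the
sample above, for example:

.. code-block:: cpp

    applyColorMap(img0, cm_img0, COLORMAP_BONE);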

modules/contrib/doc/facerec/etc/at.txt (new file, 400 lines)
@ -0,0 +1,400 @@

/home/philipp/facerec/data/at/s13/2.pgm;12
/home/philipp/facerec/data/at/s13/7.pgm;12
/home/philipp/facerec/data/at/s13/6.pgm;12
/home/philipp/facerec/data/at/s13/9.pgm;12
/home/philipp/facerec/data/at/s13/5.pgm;12
/home/philipp/facerec/data/at/s13/3.pgm;12
/home/philipp/facerec/data/at/s13/4.pgm;12
/home/philipp/facerec/data/at/s13/10.pgm;12
/home/philipp/facerec/data/at/s13/8.pgm;12
/home/philipp/facerec/data/at/s13/1.pgm;12
/home/philipp/facerec/data/at/s17/2.pgm;16
/home/philipp/facerec/data/at/s17/7.pgm;16
/home/philipp/facerec/data/at/s17/6.pgm;16
/home/philipp/facerec/data/at/s17/9.pgm;16
/home/philipp/facerec/data/at/s17/5.pgm;16
/home/philipp/facerec/data/at/s17/3.pgm;16
/home/philipp/facerec/data/at/s17/4.pgm;16
/home/philipp/facerec/data/at/s17/10.pgm;16
/home/philipp/facerec/data/at/s17/8.pgm;16
/home/philipp/facerec/data/at/s17/1.pgm;16
/home/philipp/facerec/data/at/s32/2.pgm;31
/home/philipp/facerec/data/at/s32/7.pgm;31
/home/philipp/facerec/data/at/s32/6.pgm;31
/home/philipp/facerec/data/at/s32/9.pgm;31
/home/philipp/facerec/data/at/s32/5.pgm;31
/home/philipp/facerec/data/at/s32/3.pgm;31
/home/philipp/facerec/data/at/s32/4.pgm;31
/home/philipp/facerec/data/at/s32/10.pgm;31
/home/philipp/facerec/data/at/s32/8.pgm;31
/home/philipp/facerec/data/at/s32/1.pgm;31
/home/philipp/facerec/data/at/s10/2.pgm;9
/home/philipp/facerec/data/at/s10/7.pgm;9
/home/philipp/facerec/data/at/s10/6.pgm;9
/home/philipp/facerec/data/at/s10/9.pgm;9
/home/philipp/facerec/data/at/s10/5.pgm;9
/home/philipp/facerec/data/at/s10/3.pgm;9
/home/philipp/facerec/data/at/s10/4.pgm;9
/home/philipp/facerec/data/at/s10/10.pgm;9
/home/philipp/facerec/data/at/s10/8.pgm;9
/home/philipp/facerec/data/at/s10/1.pgm;9
/home/philipp/facerec/data/at/s27/2.pgm;26
/home/philipp/facerec/data/at/s27/7.pgm;26
/home/philipp/facerec/data/at/s27/6.pgm;26
/home/philipp/facerec/data/at/s27/9.pgm;26
/home/philipp/facerec/data/at/s27/5.pgm;26
/home/philipp/facerec/data/at/s27/3.pgm;26
/home/philipp/facerec/data/at/s27/4.pgm;26
/home/philipp/facerec/data/at/s27/10.pgm;26
/home/philipp/facerec/data/at/s27/8.pgm;26
/home/philipp/facerec/data/at/s27/1.pgm;26
/home/philipp/facerec/data/at/s5/2.pgm;4
/home/philipp/facerec/data/at/s5/7.pgm;4
/home/philipp/facerec/data/at/s5/6.pgm;4
/home/philipp/facerec/data/at/s5/9.pgm;4
/home/philipp/facerec/data/at/s5/5.pgm;4
/home/philipp/facerec/data/at/s5/3.pgm;4
/home/philipp/facerec/data/at/s5/4.pgm;4
/home/philipp/facerec/data/at/s5/10.pgm;4
/home/philipp/facerec/data/at/s5/8.pgm;4
/home/philipp/facerec/data/at/s5/1.pgm;4
/home/philipp/facerec/data/at/s20/2.pgm;19
/home/philipp/facerec/data/at/s20/7.pgm;19
/home/philipp/facerec/data/at/s20/6.pgm;19
/home/philipp/facerec/data/at/s20/9.pgm;19
/home/philipp/facerec/data/at/s20/5.pgm;19
/home/philipp/facerec/data/at/s20/3.pgm;19
/home/philipp/facerec/data/at/s20/4.pgm;19
/home/philipp/facerec/data/at/s20/10.pgm;19
/home/philipp/facerec/data/at/s20/8.pgm;19
/home/philipp/facerec/data/at/s20/1.pgm;19
/home/philipp/facerec/data/at/s30/2.pgm;29
/home/philipp/facerec/data/at/s30/7.pgm;29
/home/philipp/facerec/data/at/s30/6.pgm;29
/home/philipp/facerec/data/at/s30/9.pgm;29
/home/philipp/facerec/data/at/s30/5.pgm;29
/home/philipp/facerec/data/at/s30/3.pgm;29
/home/philipp/facerec/data/at/s30/4.pgm;29
/home/philipp/facerec/data/at/s30/10.pgm;29
/home/philipp/facerec/data/at/s30/8.pgm;29
/home/philipp/facerec/data/at/s30/1.pgm;29
/home/philipp/facerec/data/at/s39/2.pgm;38
/home/philipp/facerec/data/at/s39/7.pgm;38
/home/philipp/facerec/data/at/s39/6.pgm;38
/home/philipp/facerec/data/at/s39/9.pgm;38
/home/philipp/facerec/data/at/s39/5.pgm;38
/home/philipp/facerec/data/at/s39/3.pgm;38
/home/philipp/facerec/data/at/s39/4.pgm;38
/home/philipp/facerec/data/at/s39/10.pgm;38
/home/philipp/facerec/data/at/s39/8.pgm;38
/home/philipp/facerec/data/at/s39/1.pgm;38
/home/philipp/facerec/data/at/s35/2.pgm;34
/home/philipp/facerec/data/at/s35/7.pgm;34
/home/philipp/facerec/data/at/s35/6.pgm;34
/home/philipp/facerec/data/at/s35/9.pgm;34
/home/philipp/facerec/data/at/s35/5.pgm;34
/home/philipp/facerec/data/at/s35/3.pgm;34
/home/philipp/facerec/data/at/s35/4.pgm;34
/home/philipp/facerec/data/at/s35/10.pgm;34
/home/philipp/facerec/data/at/s35/8.pgm;34
/home/philipp/facerec/data/at/s35/1.pgm;34
/home/philipp/facerec/data/at/s23/2.pgm;22
/home/philipp/facerec/data/at/s23/7.pgm;22
/home/philipp/facerec/data/at/s23/6.pgm;22
/home/philipp/facerec/data/at/s23/9.pgm;22
/home/philipp/facerec/data/at/s23/5.pgm;22
/home/philipp/facerec/data/at/s23/3.pgm;22
/home/philipp/facerec/data/at/s23/4.pgm;22
/home/philipp/facerec/data/at/s23/10.pgm;22
/home/philipp/facerec/data/at/s23/8.pgm;22
/home/philipp/facerec/data/at/s23/1.pgm;22
/home/philipp/facerec/data/at/s4/2.pgm;3
/home/philipp/facerec/data/at/s4/7.pgm;3
/home/philipp/facerec/data/at/s4/6.pgm;3
/home/philipp/facerec/data/at/s4/9.pgm;3
/home/philipp/facerec/data/at/s4/5.pgm;3
/home/philipp/facerec/data/at/s4/3.pgm;3
/home/philipp/facerec/data/at/s4/4.pgm;3
/home/philipp/facerec/data/at/s4/10.pgm;3
/home/philipp/facerec/data/at/s4/8.pgm;3
/home/philipp/facerec/data/at/s4/1.pgm;3
/home/philipp/facerec/data/at/s9/2.pgm;8
/home/philipp/facerec/data/at/s9/7.pgm;8
/home/philipp/facerec/data/at/s9/6.pgm;8
/home/philipp/facerec/data/at/s9/9.pgm;8
/home/philipp/facerec/data/at/s9/5.pgm;8
/home/philipp/facerec/data/at/s9/3.pgm;8
/home/philipp/facerec/data/at/s9/4.pgm;8
/home/philipp/facerec/data/at/s9/10.pgm;8
/home/philipp/facerec/data/at/s9/8.pgm;8
/home/philipp/facerec/data/at/s9/1.pgm;8
/home/philipp/facerec/data/at/s37/2.pgm;36
/home/philipp/facerec/data/at/s37/7.pgm;36
/home/philipp/facerec/data/at/s37/6.pgm;36
/home/philipp/facerec/data/at/s37/9.pgm;36
/home/philipp/facerec/data/at/s37/5.pgm;36
/home/philipp/facerec/data/at/s37/3.pgm;36
/home/philipp/facerec/data/at/s37/4.pgm;36
/home/philipp/facerec/data/at/s37/10.pgm;36
/home/philipp/facerec/data/at/s37/8.pgm;36
/home/philipp/facerec/data/at/s37/1.pgm;36
/home/philipp/facerec/data/at/s24/2.pgm;23
/home/philipp/facerec/data/at/s24/7.pgm;23
/home/philipp/facerec/data/at/s24/6.pgm;23
/home/philipp/facerec/data/at/s24/9.pgm;23
/home/philipp/facerec/data/at/s24/5.pgm;23
/home/philipp/facerec/data/at/s24/3.pgm;23
/home/philipp/facerec/data/at/s24/4.pgm;23
/home/philipp/facerec/data/at/s24/10.pgm;23
/home/philipp/facerec/data/at/s24/8.pgm;23
/home/philipp/facerec/data/at/s24/1.pgm;23
/home/philipp/facerec/data/at/s19/2.pgm;18
/home/philipp/facerec/data/at/s19/7.pgm;18
/home/philipp/facerec/data/at/s19/6.pgm;18
/home/philipp/facerec/data/at/s19/9.pgm;18
/home/philipp/facerec/data/at/s19/5.pgm;18
/home/philipp/facerec/data/at/s19/3.pgm;18
/home/philipp/facerec/data/at/s19/4.pgm;18
/home/philipp/facerec/data/at/s19/10.pgm;18
/home/philipp/facerec/data/at/s19/8.pgm;18
/home/philipp/facerec/data/at/s19/1.pgm;18
/home/philipp/facerec/data/at/s8/2.pgm;7
/home/philipp/facerec/data/at/s8/7.pgm;7
/home/philipp/facerec/data/at/s8/6.pgm;7
/home/philipp/facerec/data/at/s8/9.pgm;7
/home/philipp/facerec/data/at/s8/5.pgm;7
/home/philipp/facerec/data/at/s8/3.pgm;7
/home/philipp/facerec/data/at/s8/4.pgm;7
/home/philipp/facerec/data/at/s8/10.pgm;7
/home/philipp/facerec/data/at/s8/8.pgm;7
/home/philipp/facerec/data/at/s8/1.pgm;7
/home/philipp/facerec/data/at/s21/2.pgm;20
/home/philipp/facerec/data/at/s21/7.pgm;20
/home/philipp/facerec/data/at/s21/6.pgm;20
/home/philipp/facerec/data/at/s21/9.pgm;20
/home/philipp/facerec/data/at/s21/5.pgm;20
/home/philipp/facerec/data/at/s21/3.pgm;20
/home/philipp/facerec/data/at/s21/4.pgm;20
/home/philipp/facerec/data/at/s21/10.pgm;20
/home/philipp/facerec/data/at/s21/8.pgm;20
/home/philipp/facerec/data/at/s21/1.pgm;20
/home/philipp/facerec/data/at/s1/2.pgm;0
/home/philipp/facerec/data/at/s1/7.pgm;0
/home/philipp/facerec/data/at/s1/6.pgm;0
/home/philipp/facerec/data/at/s1/9.pgm;0
/home/philipp/facerec/data/at/s1/5.pgm;0
/home/philipp/facerec/data/at/s1/3.pgm;0
/home/philipp/facerec/data/at/s1/4.pgm;0
/home/philipp/facerec/data/at/s1/10.pgm;0
/home/philipp/facerec/data/at/s1/8.pgm;0
/home/philipp/facerec/data/at/s1/1.pgm;0
/home/philipp/facerec/data/at/s7/2.pgm;6
/home/philipp/facerec/data/at/s7/7.pgm;6
/home/philipp/facerec/data/at/s7/6.pgm;6
/home/philipp/facerec/data/at/s7/9.pgm;6
/home/philipp/facerec/data/at/s7/5.pgm;6
/home/philipp/facerec/data/at/s7/3.pgm;6
/home/philipp/facerec/data/at/s7/4.pgm;6
/home/philipp/facerec/data/at/s7/10.pgm;6
/home/philipp/facerec/data/at/s7/8.pgm;6
/home/philipp/facerec/data/at/s7/1.pgm;6
/home/philipp/facerec/data/at/s16/2.pgm;15
/home/philipp/facerec/data/at/s16/7.pgm;15
/home/philipp/facerec/data/at/s16/6.pgm;15
/home/philipp/facerec/data/at/s16/9.pgm;15
/home/philipp/facerec/data/at/s16/5.pgm;15
/home/philipp/facerec/data/at/s16/3.pgm;15
/home/philipp/facerec/data/at/s16/4.pgm;15
/home/philipp/facerec/data/at/s16/10.pgm;15
/home/philipp/facerec/data/at/s16/8.pgm;15
/home/philipp/facerec/data/at/s16/1.pgm;15
/home/philipp/facerec/data/at/s36/2.pgm;35
/home/philipp/facerec/data/at/s36/7.pgm;35
/home/philipp/facerec/data/at/s36/6.pgm;35
/home/philipp/facerec/data/at/s36/9.pgm;35
/home/philipp/facerec/data/at/s36/5.pgm;35
/home/philipp/facerec/data/at/s36/3.pgm;35
/home/philipp/facerec/data/at/s36/4.pgm;35
/home/philipp/facerec/data/at/s36/10.pgm;35
/home/philipp/facerec/data/at/s36/8.pgm;35
/home/philipp/facerec/data/at/s36/1.pgm;35
/home/philipp/facerec/data/at/s25/2.pgm;24
/home/philipp/facerec/data/at/s25/7.pgm;24
/home/philipp/facerec/data/at/s25/6.pgm;24
/home/philipp/facerec/data/at/s25/9.pgm;24
/home/philipp/facerec/data/at/s25/5.pgm;24
/home/philipp/facerec/data/at/s25/3.pgm;24
/home/philipp/facerec/data/at/s25/4.pgm;24
/home/philipp/facerec/data/at/s25/10.pgm;24
/home/philipp/facerec/data/at/s25/8.pgm;24
/home/philipp/facerec/data/at/s25/1.pgm;24
/home/philipp/facerec/data/at/s14/2.pgm;13
/home/philipp/facerec/data/at/s14/7.pgm;13
/home/philipp/facerec/data/at/s14/6.pgm;13
/home/philipp/facerec/data/at/s14/9.pgm;13
/home/philipp/facerec/data/at/s14/5.pgm;13
/home/philipp/facerec/data/at/s14/3.pgm;13
/home/philipp/facerec/data/at/s14/4.pgm;13
/home/philipp/facerec/data/at/s14/10.pgm;13
/home/philipp/facerec/data/at/s14/8.pgm;13
/home/philipp/facerec/data/at/s14/1.pgm;13
/home/philipp/facerec/data/at/s34/2.pgm;33
/home/philipp/facerec/data/at/s34/7.pgm;33
/home/philipp/facerec/data/at/s34/6.pgm;33
/home/philipp/facerec/data/at/s34/9.pgm;33
/home/philipp/facerec/data/at/s34/5.pgm;33
/home/philipp/facerec/data/at/s34/3.pgm;33
/home/philipp/facerec/data/at/s34/4.pgm;33
/home/philipp/facerec/data/at/s34/10.pgm;33
/home/philipp/facerec/data/at/s34/8.pgm;33
/home/philipp/facerec/data/at/s34/1.pgm;33
/home/philipp/facerec/data/at/s11/2.pgm;10
/home/philipp/facerec/data/at/s11/7.pgm;10
/home/philipp/facerec/data/at/s11/6.pgm;10
/home/philipp/facerec/data/at/s11/9.pgm;10
/home/philipp/facerec/data/at/s11/5.pgm;10
/home/philipp/facerec/data/at/s11/3.pgm;10
/home/philipp/facerec/data/at/s11/4.pgm;10
/home/philipp/facerec/data/at/s11/10.pgm;10
/home/philipp/facerec/data/at/s11/8.pgm;10
/home/philipp/facerec/data/at/s11/1.pgm;10
/home/philipp/facerec/data/at/s26/2.pgm;25
/home/philipp/facerec/data/at/s26/7.pgm;25
/home/philipp/facerec/data/at/s26/6.pgm;25
/home/philipp/facerec/data/at/s26/9.pgm;25
/home/philipp/facerec/data/at/s26/5.pgm;25
/home/philipp/facerec/data/at/s26/3.pgm;25
/home/philipp/facerec/data/at/s26/4.pgm;25
/home/philipp/facerec/data/at/s26/10.pgm;25
/home/philipp/facerec/data/at/s26/8.pgm;25
/home/philipp/facerec/data/at/s26/1.pgm;25
/home/philipp/facerec/data/at/s18/2.pgm;17
/home/philipp/facerec/data/at/s18/7.pgm;17
/home/philipp/facerec/data/at/s18/6.pgm;17
/home/philipp/facerec/data/at/s18/9.pgm;17
/home/philipp/facerec/data/at/s18/5.pgm;17
/home/philipp/facerec/data/at/s18/3.pgm;17
/home/philipp/facerec/data/at/s18/4.pgm;17
/home/philipp/facerec/data/at/s18/10.pgm;17
/home/philipp/facerec/data/at/s18/8.pgm;17
/home/philipp/facerec/data/at/s18/1.pgm;17
/home/philipp/facerec/data/at/s29/2.pgm;28
/home/philipp/facerec/data/at/s29/7.pgm;28
/home/philipp/facerec/data/at/s29/6.pgm;28
/home/philipp/facerec/data/at/s29/9.pgm;28
/home/philipp/facerec/data/at/s29/5.pgm;28
/home/philipp/facerec/data/at/s29/3.pgm;28
/home/philipp/facerec/data/at/s29/4.pgm;28
/home/philipp/facerec/data/at/s29/10.pgm;28
/home/philipp/facerec/data/at/s29/8.pgm;28
/home/philipp/facerec/data/at/s29/1.pgm;28
/home/philipp/facerec/data/at/s33/2.pgm;32
/home/philipp/facerec/data/at/s33/7.pgm;32
/home/philipp/facerec/data/at/s33/6.pgm;32
/home/philipp/facerec/data/at/s33/9.pgm;32
/home/philipp/facerec/data/at/s33/5.pgm;32
/home/philipp/facerec/data/at/s33/3.pgm;32
/home/philipp/facerec/data/at/s33/4.pgm;32
/home/philipp/facerec/data/at/s33/10.pgm;32
/home/philipp/facerec/data/at/s33/8.pgm;32
/home/philipp/facerec/data/at/s33/1.pgm;32
/home/philipp/facerec/data/at/s12/2.pgm;11
/home/philipp/facerec/data/at/s12/7.pgm;11
/home/philipp/facerec/data/at/s12/6.pgm;11
/home/philipp/facerec/data/at/s12/9.pgm;11
/home/philipp/facerec/data/at/s12/5.pgm;11
/home/philipp/facerec/data/at/s12/3.pgm;11
/home/philipp/facerec/data/at/s12/4.pgm;11
/home/philipp/facerec/data/at/s12/10.pgm;11
/home/philipp/facerec/data/at/s12/8.pgm;11
/home/philipp/facerec/data/at/s12/1.pgm;11
/home/philipp/facerec/data/at/s6/2.pgm;5
/home/philipp/facerec/data/at/s6/7.pgm;5
/home/philipp/facerec/data/at/s6/6.pgm;5
/home/philipp/facerec/data/at/s6/9.pgm;5
/home/philipp/facerec/data/at/s6/5.pgm;5
/home/philipp/facerec/data/at/s6/3.pgm;5
/home/philipp/facerec/data/at/s6/4.pgm;5
/home/philipp/facerec/data/at/s6/10.pgm;5
/home/philipp/facerec/data/at/s6/8.pgm;5
/home/philipp/facerec/data/at/s6/1.pgm;5
/home/philipp/facerec/data/at/s22/2.pgm;21
/home/philipp/facerec/data/at/s22/7.pgm;21
/home/philipp/facerec/data/at/s22/6.pgm;21
/home/philipp/facerec/data/at/s22/9.pgm;21
/home/philipp/facerec/data/at/s22/5.pgm;21
/home/philipp/facerec/data/at/s22/3.pgm;21
/home/philipp/facerec/data/at/s22/4.pgm;21
/home/philipp/facerec/data/at/s22/10.pgm;21
/home/philipp/facerec/data/at/s22/8.pgm;21
/home/philipp/facerec/data/at/s22/1.pgm;21
/home/philipp/facerec/data/at/s15/2.pgm;14
/home/philipp/facerec/data/at/s15/7.pgm;14
/home/philipp/facerec/data/at/s15/6.pgm;14
/home/philipp/facerec/data/at/s15/9.pgm;14
/home/philipp/facerec/data/at/s15/5.pgm;14
/home/philipp/facerec/data/at/s15/3.pgm;14
/home/philipp/facerec/data/at/s15/4.pgm;14
/home/philipp/facerec/data/at/s15/10.pgm;14
/home/philipp/facerec/data/at/s15/8.pgm;14
/home/philipp/facerec/data/at/s15/1.pgm;14
/home/philipp/facerec/data/at/s2/2.pgm;1
/home/philipp/facerec/data/at/s2/7.pgm;1
/home/philipp/facerec/data/at/s2/6.pgm;1
/home/philipp/facerec/data/at/s2/9.pgm;1
/home/philipp/facerec/data/at/s2/5.pgm;1
/home/philipp/facerec/data/at/s2/3.pgm;1
/home/philipp/facerec/data/at/s2/4.pgm;1
/home/philipp/facerec/data/at/s2/10.pgm;1
/home/philipp/facerec/data/at/s2/8.pgm;1
/home/philipp/facerec/data/at/s2/1.pgm;1
/home/philipp/facerec/data/at/s31/2.pgm;30
/home/philipp/facerec/data/at/s31/7.pgm;30
/home/philipp/facerec/data/at/s31/6.pgm;30
/home/philipp/facerec/data/at/s31/9.pgm;30
/home/philipp/facerec/data/at/s31/5.pgm;30
/home/philipp/facerec/data/at/s31/3.pgm;30
/home/philipp/facerec/data/at/s31/4.pgm;30
/home/philipp/facerec/data/at/s31/10.pgm;30
/home/philipp/facerec/data/at/s31/8.pgm;30
/home/philipp/facerec/data/at/s31/1.pgm;30
/home/philipp/facerec/data/at/s28/2.pgm;27
/home/philipp/facerec/data/at/s28/7.pgm;27
/home/philipp/facerec/data/at/s28/6.pgm;27
/home/philipp/facerec/data/at/s28/9.pgm;27
/home/philipp/facerec/data/at/s28/5.pgm;27
/home/philipp/facerec/data/at/s28/3.pgm;27
/home/philipp/facerec/data/at/s28/4.pgm;27
/home/philipp/facerec/data/at/s28/10.pgm;27
/home/philipp/facerec/data/at/s28/8.pgm;27
/home/philipp/facerec/data/at/s28/1.pgm;27
/home/philipp/facerec/data/at/s40/2.pgm;39
/home/philipp/facerec/data/at/s40/7.pgm;39
/home/philipp/facerec/data/at/s40/6.pgm;39
/home/philipp/facerec/data/at/s40/9.pgm;39
/home/philipp/facerec/data/at/s40/5.pgm;39
/home/philipp/facerec/data/at/s40/3.pgm;39
/home/philipp/facerec/data/at/s40/4.pgm;39
/home/philipp/facerec/data/at/s40/10.pgm;39
/home/philipp/facerec/data/at/s40/8.pgm;39
/home/philipp/facerec/data/at/s40/1.pgm;39
/home/philipp/facerec/data/at/s3/2.pgm;2
/home/philipp/facerec/data/at/s3/7.pgm;2
/home/philipp/facerec/data/at/s3/6.pgm;2
/home/philipp/facerec/data/at/s3/9.pgm;2
/home/philipp/facerec/data/at/s3/5.pgm;2
/home/philipp/facerec/data/at/s3/3.pgm;2
/home/philipp/facerec/data/at/s3/4.pgm;2
/home/philipp/facerec/data/at/s3/10.pgm;2
/home/philipp/facerec/data/at/s3/8.pgm;2
/home/philipp/facerec/data/at/s3/1.pgm;2
/home/philipp/facerec/data/at/s38/2.pgm;37
/home/philipp/facerec/data/at/s38/7.pgm;37
/home/philipp/facerec/data/at/s38/6.pgm;37
/home/philipp/facerec/data/at/s38/9.pgm;37
/home/philipp/facerec/data/at/s38/5.pgm;37
/home/philipp/facerec/data/at/s38/3.pgm;37
/home/philipp/facerec/data/at/s38/4.pgm;37
/home/philipp/facerec/data/at/s38/10.pgm;37
/home/philipp/facerec/data/at/s38/8.pgm;37
/home/philipp/facerec/data/at/s38/1.pgm;37

modules/contrib/doc/facerec/facerec_api.rst (new file, 309 lines)
@ -0,0 +1,309 @@

FaceRecognizer
==============

.. highlight:: cpp

FaceRecognizer
--------------

.. ocv:class:: FaceRecognizer

All face recognition models in OpenCV are derived from the abstract base class :ocv:class:`FaceRecognizer`, which provides
unified access to all face recognition algorithms in OpenCV. ::

    class FaceRecognizer : public Algorithm
    {
    public:
        //! virtual destructor
        virtual ~FaceRecognizer() {}

        // Trains a FaceRecognizer.
        virtual void train(InputArray src, InputArray labels) = 0;

        // Gets a prediction from a FaceRecognizer.
        virtual int predict(InputArray src) const = 0;

        // Predicts the label and confidence for a given sample.
        virtual void predict(InputArray src, int &label, double &confidence) const = 0;

        // Serializes this object to a given filename.
        virtual void save(const string& filename) const;

        // Deserializes this object from a given filename.
        virtual void load(const string& filename);

        // Serializes this object to a given cv::FileStorage.
        virtual void save(FileStorage& fs) const = 0;

        // Deserializes this object from a given cv::FileStorage.
        virtual void load(const FileStorage& fs) = 0;
    };

I'll go a bit more into detail explaining :ocv:class:`FaceRecognizer`, because it doesn't look like a powerful interface at first sight. But: Every :ocv:class:`FaceRecognizer` is an :ocv:class:`Algorithm`, so you can easily get/set all model internals (if allowed by the implementation). :ocv:class:`Algorithm` is a relatively new OpenCV concept, which is available since the 2.4 release. I suggest you take a look at its description.

:ocv:class:`Algorithm` provides the following features for all derived classes:

* So called “virtual constructor”. That is, each Algorithm derivative is registered at program start and you can get the list of registered algorithms and create an instance of a particular algorithm by its name (see :ocv:func:`Algorithm::create`, and the sketch after this list). If you plan to add your own algorithms, it is good practice to add a unique prefix to your algorithms to distinguish them from other algorithms.

* Setting/Retrieving algorithm parameters by name. If you used the video capturing functionality from the OpenCV highgui module, you are probably familiar with :ocv:cfunc:`cvSetCaptureProperty`, :ocv:cfunc:`cvGetCaptureProperty`, :ocv:func:`VideoCapture::set` and :ocv:func:`VideoCapture::get`. :ocv:class:`Algorithm` provides similar methods where instead of integer id's you specify the parameter names as text strings. See :ocv:func:`Algorithm::set` and :ocv:func:`Algorithm::get` for details.

* Reading and writing parameters from/to XML or YAML files. Every Algorithm derivative can store all its parameters and then read them back. There is no need to re-implement it each time.
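
A minimal sketch of the “virtual constructor” mechanism mentioned above. It
assumes the contrib module is initialized with ``initModule_contrib()`` and
that the Eigenfaces implementation registers itself under the name
``FaceRecognizer.Eigenfaces``; check ``Algorithm::getList()`` for the names
actually available in your build:

.. code-block:: cpp

    // Make sure the contrib algorithms are registered (assumption: OpenCV 2.4):
    cv::initModule_contrib();
    // Create an Eigenfaces model by its registered name (name is an assumption):
    Ptr<FaceRecognizer> model = Algorithm::create<FaceRecognizer>("FaceRecognizer.Eigenfaces");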

Moreover every :ocv:class:`FaceRecognizer` supports the following:

* **Training** of a :ocv:class:`FaceRecognizer` with :ocv:func:`FaceRecognizer::train` on a given set of images (your face database!).

* **Prediction** of a given sample image, that means a face. The image is given as a :ocv:class:`Mat`.

* **Loading/Saving** the model state from/to a given XML or YAML file.

Sometimes you run into the situation when you want to apply a threshold on the prediction. A common scenario in face recognition is to tell whether a face belongs to the training dataset or if it is unknown. You might wonder why there's no public API in :ocv:class:`FaceRecognizer` to set the threshold for the prediction, but rest assured: It's supported. It just means there's no generic way in an abstract class to provide an interface for setting/getting the thresholds of *every possible* :ocv:class:`FaceRecognizer` algorithm. The appropriate place to set the thresholds is in the constructor of the specific :ocv:class:`FaceRecognizer` and since every :ocv:class:`FaceRecognizer` is an :ocv:class:`Algorithm` (see above), you can get/set the thresholds at runtime!

Here is an example of setting a threshold for the Eigenfaces method, when creating the model:
.. code-block:: cpp
|
||||
|
||||
// Let's say we want to keep 10 Eigenfaces and have a threshold value of 10.0
|
||||
int num_components = 10;
|
||||
double threshold = 10.0;
|
||||
// Then if you want to have a cv::FaceRecognizer with a confidence threshold,
|
||||
// create the concrete implementation with the appropiate parameters:
|
||||
Ptr<FaceRecognizer> model = createEigenFaceRecognizer(num_components, threshold);
|
||||
|
||||
Sometimes it's impossible to train the model, just to experiment with threshold values. Thanks to :ocv:class:`Algorithm` it's possible to set internal model thresholds during runtime. Let's see how we would set/get the prediction for the Eigenface model, we've created above:

.. code-block:: cpp

    // The following line reads the threshold from the Eigenfaces model:
    double current_threshold = model->getDouble("threshold");
    // And this line sets the threshold to 0.0:
    model->set("threshold", 0.0);

If you've set the threshold to ``0.0`` as we did above, then:

.. code-block:: cpp

    // Read in a sample image:
    Mat img = imread("person1/3.jpg", CV_LOAD_IMAGE_GRAYSCALE);
    // Get a prediction from the model. Note: We've set a threshold of 0.0 above,
    // and since the distance is almost always larger than 0.0, you'll get -1 as
    // the label, which indicates this face is unknown
    int predicted_label = model->predict(img);
    // ...

is going to yield ``-1`` as the predicted label, which states this face is unknown.

FaceRecognizer::train
---------------------

Trains a FaceRecognizer with given data and associated labels.

.. ocv:function:: void FaceRecognizer::train(InputArray src, InputArray labels)

    :param src: The training images, that means the faces you want to learn. The data has to be given as a ``vector<Mat>``.

    :param labels: The labels corresponding to the images, which have to be given either as a ``vector<int>`` or a :ocv:class:`Mat` of type ``CV_32SC1``.

The following source code snippet shows you how to learn a Fisherfaces model on a given set of images. The images are read with :ocv:func:`imread` and pushed into a ``std::vector<Mat>``. The labels of each image are stored within a ``std::vector<int>`` (you could also use a :ocv:class:`Mat` of type ``CV_32SC1``). Think of the label as the subject (the person) this image belongs to, so same subjects (persons) should have the same label. For the available :ocv:class:`FaceRecognizer` implementations you don't have to pay any attention to the order of the labels, just make sure same persons have the same label:

.. code-block:: cpp

    // holds images and labels
    vector<Mat> images;
    vector<int> labels;
    // images for first person
    images.push_back(imread("person0/0.jpg", CV_LOAD_IMAGE_GRAYSCALE)); labels.push_back(0);
    images.push_back(imread("person0/1.jpg", CV_LOAD_IMAGE_GRAYSCALE)); labels.push_back(0);
    images.push_back(imread("person0/2.jpg", CV_LOAD_IMAGE_GRAYSCALE)); labels.push_back(0);
    // images for second person
    images.push_back(imread("person1/0.jpg", CV_LOAD_IMAGE_GRAYSCALE)); labels.push_back(1);
    images.push_back(imread("person1/1.jpg", CV_LOAD_IMAGE_GRAYSCALE)); labels.push_back(1);
    images.push_back(imread("person1/2.jpg", CV_LOAD_IMAGE_GRAYSCALE)); labels.push_back(1);

Now that you have read some images, we can create a new :ocv:class:`FaceRecognizer`. In this example I'll create a Fisherfaces model and decide to keep all of the possible Fisherfaces:

.. code-block:: cpp

    // Create a new Fisherfaces model and retain all available Fisherfaces,
    // this is the most common usage of this specific FaceRecognizer:
    Ptr<FaceRecognizer> model = createFisherFaceRecognizer();

And finally train it on the given dataset (the face images and labels):

.. code-block:: cpp

    // This is the common interface to train all of the available cv::FaceRecognizer
    // implementations:
    model->train(images, labels);

FaceRecognizer::predict
-----------------------

.. ocv:function:: int FaceRecognizer::predict(InputArray src) const

    Predicts a label for a given input image.

    :param src: Sample image to get a prediction from.

.. ocv:function:: void FaceRecognizer::predict(InputArray src, int& label, double& confidence) const

    Predicts a label and associated confidence (e.g. distance) for a given input image.

    :param src: Sample image to get a prediction from.

    :param label: The predicted label for the given image.

    :param confidence: Associated confidence (e.g. distance) for the predicted label.

The suffix ``const`` means that prediction does not affect the internal model
state, so the method can be safely called from within different threads.

The following example shows how to get a prediction from a trained model:

.. code-block:: cpp

    using namespace cv;
    // Do your initialization here (create the cv::FaceRecognizer model) ...
    // ...
    // Read in a sample image:
    Mat img = imread("person1/3.jpg", CV_LOAD_IMAGE_GRAYSCALE);
    // And get a prediction from the cv::FaceRecognizer:
    int predicted = model->predict(img);

Or to get a prediction and the associated confidence (e.g. distance):

.. code-block:: cpp

    using namespace cv;
    // Do your initialization here (create the cv::FaceRecognizer model) ...
    // ...
    Mat img = imread("person1/3.jpg", CV_LOAD_IMAGE_GRAYSCALE);
    // Some variables for the predicted label and associated confidence (e.g. distance):
    int predicted_label = -1;
    double predicted_confidence = 0.0;
    // Get the prediction and associated confidence from the model
    model->predict(img, predicted_label, predicted_confidence);

FaceRecognizer::save
--------------------

Saves a :ocv:class:`FaceRecognizer` and its model state.

.. ocv:function:: void FaceRecognizer::save(const string& filename) const

    Saves this model to a given filename, either as XML or YAML.

    :param filename: The filename to store this :ocv:class:`FaceRecognizer` to (either XML or YAML).

.. ocv:function:: void FaceRecognizer::save(FileStorage& fs) const

    Saves this model to a given :ocv:class:`FileStorage`.

    :param fs: The :ocv:class:`FileStorage` to store this :ocv:class:`FaceRecognizer` to.

Every :ocv:class:`FaceRecognizer` overwrites ``FaceRecognizer::save(FileStorage& fs)``
to save the internal model state. ``FaceRecognizer::save(const string& filename)`` saves
the state of a model to the given filename.
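
For example, saving a trained model is a one-liner; the filenames below are made up for illustration, the serialization format follows from the file extension:

.. code-block:: cpp

    // Save the trained model as YAML (hypothetical filename):
    model->save("eigenfaces_at.yml");
    // ... or as XML (hypothetical filename):
    model->save("eigenfaces_at.xml");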

The suffix ``const`` means that saving does not affect the internal model
state, so the method can be safely called from within different threads.

FaceRecognizer::load
--------------------

Loads a :ocv:class:`FaceRecognizer` and its model state.

.. ocv:function:: void FaceRecognizer::load(const string& filename)
.. ocv:function:: void FaceRecognizer::load(const FileStorage& fs)

Loads a persisted model and state from a given XML or YAML file. Every
:ocv:class:`FaceRecognizer` has to overwrite ``FaceRecognizer::load(FileStorage& fs)``
to enable loading the model state. ``FaceRecognizer::load(FileStorage& fs)`` in
turn gets called by ``FaceRecognizer::load(const string& filename)``, to ease
loading a model from a file.
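
A minimal sketch of restoring the model saved above (the filename is again hypothetical):

.. code-block:: cpp

    // Create an (empty) Eigenfaces model ...
    Ptr<FaceRecognizer> model = createEigenFaceRecognizer();
    // ... and deserialize the model state saved earlier:
    model->load("eigenfaces_at.yml");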

createEigenFaceRecognizer
-------------------------

.. ocv:function:: Ptr<FaceRecognizer> createEigenFaceRecognizer(int num_components = 0, double threshold = DBL_MAX)

    :param num_components: The number of components (read: Eigenfaces) kept for this Principal Component Analysis. As a hint: There's no rule how many components (read: Eigenfaces) should be kept for good reconstruction capabilities. It is based on your input data, so experiment with the number. Keeping 80 components should almost always be sufficient.

    :param threshold: The threshold applied in the prediction. If the distance to the nearest neighbor is larger than the threshold, this method returns -1.

Notes:
++++++

* Training and prediction must be done on grayscale images, use :ocv:func:`cvtColor` to convert between the color spaces.

* **THE EIGENFACES METHOD MAKES THE ASSUMPTION, THAT THE TRAINING AND TEST IMAGES ARE OF EQUAL SIZE.** (caps-lock, because I got so many mails asking for this). You have to make sure your input data has the correct shape, else a meaningful exception is thrown. Use :ocv:func:`resize` to resize the images.

Model internal data:
++++++++++++++++++++

* ``num_components`` see :ocv:func:`createEigenFaceRecognizer`.

* ``threshold`` see :ocv:func:`createEigenFaceRecognizer`.

* ``eigenvalues`` The eigenvalues for this Principal Component Analysis (ordered descending).

* ``eigenvectors`` The eigenvectors for this Principal Component Analysis (ordered by their eigenvalue).

* ``mean`` The sample mean calculated from the training data.

* ``projections`` The projections of the training data.

* ``labels`` The labels corresponding to the projections.
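
Since the model is an :ocv:class:`Algorithm`, these internals can be read at runtime through the generic getters. A minimal sketch, assuming ``model`` is a trained Eigenfaces model and assuming the internals above are exposed under exactly these parameter names:

.. code-block:: cpp

    // Read the internal model data through the cv::Algorithm interface:
    Mat eigenvalues  = model->getMat("eigenvalues");
    Mat eigenvectors = model->getMat("eigenvectors");
    Mat mean         = model->getMat("mean");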

createFisherFaceRecognizer
--------------------------

.. ocv:function:: Ptr<FaceRecognizer> createFisherFaceRecognizer(int num_components = 0, double threshold = DBL_MAX)

    :param num_components: The number of components (read: Fisherfaces) kept for this Linear Discriminant Analysis with the Fisherfaces criterion. It's useful to keep all components, that means ``(c-1)`` with ``c`` being the number of your classes (read: subjects, persons you want to recognize). If you leave this at the default (``0``) or set it to a value less-equal ``0`` or greater than ``(c-1)``, it will be set to the correct number ``(c-1)`` automatically.

    :param threshold: The threshold applied in the prediction. If the distance to the nearest neighbor is larger than the threshold, this method returns -1.

Notes:
++++++

* Training and prediction must be done on grayscale images, use :ocv:func:`cvtColor` to convert between the color spaces.

* **THE FISHERFACES METHOD MAKES THE ASSUMPTION, THAT THE TRAINING AND TEST IMAGES ARE OF EQUAL SIZE.** (caps-lock, because I got so many mails asking for this). You have to make sure your input data has the correct shape, else a meaningful exception is thrown. Use :ocv:func:`resize` to resize the images.

Model internal data:
++++++++++++++++++++

* ``num_components`` see :ocv:func:`createFisherFaceRecognizer`.

* ``threshold`` see :ocv:func:`createFisherFaceRecognizer`.

* ``eigenvalues`` The eigenvalues for this Linear Discriminant Analysis (ordered descending).

* ``eigenvectors`` The eigenvectors for this Linear Discriminant Analysis (ordered by their eigenvalue).

* ``mean`` The sample mean calculated from the training data.

* ``projections`` The projections of the training data.

* ``labels`` The labels corresponding to the projections.

createLBPHFaceRecognizer
------------------------

.. ocv:function:: Ptr<FaceRecognizer> createLBPHFaceRecognizer(int radius = 1, int neighbors = 8, int grid_x = 8, int grid_y = 8, double threshold = DBL_MAX)

    :param radius: The radius used for building the Circular Local Binary Pattern. The greater the radius, the larger the neighborhood each pattern encodes.

    :param neighbors: The number of sample points to build a Circular Local Binary Pattern from. An appropriate value is ``8`` sample points. Keep in mind: the more sample points you include, the higher the computational cost.

    :param grid_x: The number of cells in the horizontal direction, ``8`` is a common value used in publications. The more cells, the finer the grid, the higher the dimensionality of the resulting feature vector.

    :param grid_y: The number of cells in the vertical direction, ``8`` is a common value used in publications. The more cells, the finer the grid, the higher the dimensionality of the resulting feature vector.

    :param threshold: The threshold applied in the prediction. If the distance to the nearest neighbor is larger than the threshold, this method returns -1.
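
Putting the parameters together, creating an LBPH model with a non-default sampling and grid looks like this (the values are illustrative only, not a recommendation):

.. code-block:: cpp

    // Sample 16 points on a circle of radius 2 and use an 8x8 grid of cells:
    int radius = 2, neighbors = 16, grid_x = 8, grid_y = 8;
    double threshold = 123.0;
    Ptr<FaceRecognizer> model = createLBPHFaceRecognizer(radius, neighbors, grid_x, grid_y, threshold);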

Notes:
++++++

* The Circular Local Binary Patterns (used in training and prediction) expect the data given as grayscale images, use :ocv:func:`cvtColor` to convert between the color spaces.

Model internal data:
++++++++++++++++++++

* ``radius`` see :ocv:func:`createLBPHFaceRecognizer`.

* ``neighbors`` see :ocv:func:`createLBPHFaceRecognizer`.

* ``grid_x`` see :ocv:func:`createLBPHFaceRecognizer`.

* ``grid_y`` see :ocv:func:`createLBPHFaceRecognizer`.

* ``threshold`` see :ocv:func:`createLBPHFaceRecognizer`.

* ``histograms`` Local Binary Patterns Histograms calculated from the given training data (empty if none was given).

* ``labels`` Labels corresponding to the calculated Local Binary Patterns Histograms.

modules/contrib/doc/facerec/facerec_changelog.rst
@ -0,0 +1,86 @@

Changelog
=========

Release 0.05
------------

This library is now included in the official OpenCV distribution (from 2.4 on).
The cv::FaceRecognizer is now a cv::Algorithm, which better fits into the overall
OpenCV API.

To reduce the confusion on the user side and minimize my work, libfacerec and OpenCV
have been synchronized and are now based on the same interfaces and implementation.

The library now has extensive documentation:

* The API is explained in detail and with a lot of code examples.
* The face recognition guide I had written for Python and GNU Octave/MATLAB has been adapted to the new OpenCV C++ ``cv::FaceRecognizer``.
* A tutorial for gender classification with Fisherfaces.
* A tutorial for face recognition in videos (e.g. webcam).

Release highlights
++++++++++++++++++

- There's no single highlight to pick from, this release is a highlight itself.

Release 0.04
------------

This version is fully Windows-compatible and works with OpenCV 2.3.1. Several
bugfixes, but none influenced the recognition rate.

Release highlights
++++++++++++++++++

- A whole lot of exceptions with meaningful error messages.
- A tutorial for Windows users: `http://bytefish.de/blog/opencv_visual_studio_and_libfacerec <http://bytefish.de/blog/opencv_visual_studio_and_libfacerec>`_

Release 0.03
------------

Reworked the library to provide separate implementations in cpp files, because
it's the preferred way of contributing OpenCV libraries. This means the library
is not header-only anymore. Slight API changes were done, please see the
documentation for details.

Release highlights
++++++++++++++++++

- New Unit Tests (for LBP Histograms) make the library more robust.
- Added more documentation.

Release 0.02
------------

Reworked the library to provide separate implementations in cpp files, because
it's the preferred way of contributing OpenCV libraries. This means the library
is not header-only anymore. Slight API changes were done, please see the
documentation for details.

Release highlights
++++++++++++++++++

- New Unit Tests (for LBP Histograms) make the library more robust.
- Added a documentation and changelog in reStructuredText.

Release 0.01
------------

Initial release as header-only library.

Release highlights
++++++++++++++++++

- Colormaps for OpenCV to enhance the visualization.
- Face Recognition algorithms implemented:

  - Eigenfaces [TP91]_
  - Fisherfaces [Belhumeur97]_
  - Local Binary Patterns Histograms [Ahonen04]_

- Added persistence facilities to store the models with a common API.
- Unit Tests (using `gtest <http://code.google.com/p/googletest/>`_).
- Providing a CMakeLists.txt to enable easy cross-platform building.

modules/contrib/doc/facerec/facerec_tutorial.rst
@ -0,0 +1,524 @@

Face Recognition with OpenCV
############################

.. contents:: Table of Contents
    :depth: 3

Introduction
============

`OpenCV (Open Source Computer Vision) <http://opencv.willowgarage.com>`_ is a popular computer vision library started by `Intel <http://www.intel.com>`_ in 1999. The cross-platform library sets its focus on real-time image processing and includes patent-free implementations of the latest computer vision algorithms. In 2008 `Willow Garage <http://www.willowgarage.com>`_ took over support and OpenCV 2.3.1 now comes with a programming interface to C, C++, `Python <http://www.python.org>`_ and `Android <http://www.android.com>`_. OpenCV is released under a BSD license, so it is used in academic projects and commercial products alike.

OpenCV 2.4 now comes with the new :ocv:class:`FaceRecognizer` class for face recognition, so you can start experimenting with face recognition right away. This document is the guide I've wished for when I was working myself into face recognition. It shows you how to perform face recognition with :ocv:class:`FaceRecognizer` in OpenCV (with full source code listings) and gives you an introduction into the algorithms behind it. I'll also show how to create the visualizations you can find in many publications, because a lot of people asked for it.

The currently available algorithms are:

* Eigenfaces (see :ocv:func:`createEigenFaceRecognizer`)
* Fisherfaces (see :ocv:func:`createFisherFaceRecognizer`)
* Local Binary Patterns Histograms (see :ocv:func:`createLBPHFaceRecognizer`)

You don't need to copy and paste the source code examples from this page, because they are available in the ``samples/cpp`` folder coming with OpenCV. If you have built OpenCV with the samples turned on, chances are good you have them compiled already! Although it might be interesting for very advanced users, I've decided to leave the implementation details out, as I am afraid they would confuse new users.

All code in this document is released under the `BSD license <http://www.opensource.org/licenses/bsd-license>`_, so feel free to use it for your projects.

Face Recognition
================

Face recognition is an easy task for humans. Experiments in [Tu06]_ have shown that even one to three day old babies are able to distinguish between known faces. So how hard could it be for a computer? It turns out we know little about human recognition to date. Are inner features (eyes, nose, mouth) or outer features (head shape, hairline) used for a successful face recognition? How do we analyze an image and how does the brain encode it? It was shown by `David Hubel <http://en.wikipedia.org/wiki/David_H._Hubel>`_ and `Torsten Wiesel <http://en.wikipedia.org/wiki/Torsten_Wiesel>`_ that our brain has specialized nerve cells responding to specific local features of a scene, such as lines, edges, angles or movement. Since we don't see the world as scattered pieces, our visual cortex must somehow combine the different sources of information into useful patterns. Automatic face recognition is all about extracting those meaningful features from an image, putting them into a useful representation and performing some kind of classification on them.

Face recognition based on the geometric features of a face is probably the most intuitive approach to face recognition. One of the first automated face recognition systems was described in [Kanade73]_: marker points (position of eyes, ears, nose, ...) were used to build a feature vector (distance between the points, angle between them, ...). The recognition was performed by calculating the Euclidean distance between feature vectors of a probe and reference image. Such a method is robust against changes in illumination by its nature, but has a huge drawback: the accurate registration of the marker points is complicated, even with state of the art algorithms. Some of the latest work on geometric face recognition was carried out in [Bru92]_. A 22-dimensional feature vector was used and experiments on large datasets have shown that geometrical features alone may not carry enough information for face recognition.

The Eigenfaces method described in [TP91]_ took a holistic approach to face recognition: A facial image is a point from a high-dimensional image space and a lower-dimensional representation is found, where classification becomes easy. The lower-dimensional subspace is found with Principal Component Analysis, which identifies the axes with maximum variance. While this kind of transformation is optimal from a reconstruction standpoint, it doesn't take any class labels into account. Imagine a situation where the variance is generated from external sources, let it be light. The axes with maximum variance do not necessarily contain any discriminative information at all, hence a classification becomes impossible. So a class-specific projection with a Linear Discriminant Analysis was applied to face recognition in [BHK97]_. The basic idea is to minimize the variance within a class, while maximizing the variance between the classes at the same time.

Recently various methods for local feature extraction emerged. To avoid the high-dimensionality of the input data, only local regions of an image are described and the extracted features are (hopefully) more robust against partial occlusion, illumination and small sample size. Algorithms used for local feature extraction are Gabor Wavelets ([Wiskott97]_), the Discrete Cosine Transform ([Messer06]_) and Local Binary Patterns ([Ahonen04]_). It's still an open research question what's the best way to preserve spatial information when applying a local feature extraction, because spatial information is potentially useful information.

Face Database
=============

Let's get some data to experiment with first. I don't want to do a toy example here. We are doing face recognition, so you'll need some face images! You can either create your own dataset or start with one of the available face databases; `http://face-rec.org/databases/ <http://face-rec.org/databases>`_ gives you an up-to-date overview. Three interesting databases are (parts of the description are quoted from `http://face-rec.org <http://face-rec.org>`_):

* `AT&T Facedatabase <http://www.cl.cam.ac.uk/research/dtg/attarchive/facedatabase.html>`_ The AT&T Facedatabase, sometimes also referred to as *ORL Database of Faces*, contains ten different images of each of 40 distinct subjects. For some subjects, the images were taken at different times, varying the lighting, facial expressions (open / closed eyes, smiling / not smiling) and facial details (glasses / no glasses). All the images were taken against a dark homogeneous background with the subjects in an upright, frontal position (with tolerance for some side movement).

* `Yale Facedatabase A <http://cvc.yale.edu/projects/yalefaces/yalefaces.html>`_ The AT&T Facedatabase is good for initial tests, but it's a fairly easy database. The Eigenfaces method already has a 97% recognition rate on it, so you won't see any improvements with other algorithms. The Yale Facedatabase A is a more appropriate dataset for initial experiments, because the recognition problem is harder. The database consists of 15 people (14 male, 1 female) each with 11 grayscale images sized :math:`320 \times 243` pixels. There are changes in the light conditions (center light, left light, right light), facial expressions (happy, normal, sad, sleepy, surprised, wink) and glasses (glasses, no-glasses).

  Bad news is it's not available for public download anymore, because the original server seems to be down. You can find some sites mirroring it (`like the MIT <http://vismod.media.mit.edu/vismod/classes/mas622-00/datasets/>`_), but I can't make guarantees about the integrity. If you need to crop and align the images yourself, read my notes at `bytefish.de/blog/fisherfaces <http://bytefish.de/blog/fisherfaces>`_.

* `Extended Yale Facedatabase B <http://vision.ucsd.edu/~leekc/ExtYaleDatabase/ExtYaleB.html>`_ The Extended Yale Facedatabase B contains 2414 images of 38 different people in its cropped version. The focus of this database is set on extracting features that are robust to illumination; the images have almost no variation in emotion/occlusion/... . I personally think that this dataset is too large for the experiments I perform in this document, you'd better use the `AT&T Facedatabase <http://www.cl.cam.ac.uk/research/dtg/attarchive/facedatabase.html>`_ for initial testing. A first version of the Yale Facedatabase B was used in [BHK97]_ to see how the Eigenfaces and Fisherfaces method perform under heavy illumination changes. [Lee2005]_ used the same setup to take 16128 images of 28 people. The Extended Yale Facedatabase B is the merge of the two databases, which is now known as the Extended Yale Facedatabase B.

Preparing the data
------------------

Once we have acquired some data, we'll need to read it into our program. In the demo I have decided to read the images from a very simple CSV file. Why? Because it's the simplest platform-independent approach I can think of. However, if you know a simpler solution please ping me about it. Basically all the CSV file needs to contain are lines composed of a ``filename`` followed by a ``;`` followed by the ``label`` (as an *integer number*), making up a line like this:

.. code-block:: none

    /path/to/image.ext;0

Let's dissect the line. ``/path/to/image.ext`` is the path to an image, probably something like this if you are in Windows: ``C:/faces/person0/image0.jpg``. Then there is the separator ``;`` and finally we assign the label ``0`` to the image. Think of the label as the subject (the person) this image belongs to, so same subjects (persons) should have the same label.
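
If you want to parse such lines yourself, the core idea fits in a few lines of C++. This is only a sketch of the parsing step; the demos coming with OpenCV ship their own CSV reading helper:

.. code-block:: cpp

    #include <cstdlib>
    #include <sstream>
    #include <string>

    // Split one "filename;label" line into its two parts:
    void parse_line(const std::string& line, std::string& path, int& label) {
        std::stringstream ss(line);
        std::string classlabel;
        std::getline(ss, path, ';');   // everything before the ';'
        std::getline(ss, classlabel);  // everything after it
        label = std::atoi(classlabel.c_str());
    }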

Download the AT&T Facedatabase from the link given above and the corresponding CSV file ``at.txt`` (a full listing is included in the Appendix), which looks like this (the file is without the ``...`` of course):

.. code-block:: none

    ./at/s1/1.pgm;0
    ./at/s1/2.pgm;0
    ...
    ./at/s2/1.pgm;1
    ./at/s2/2.pgm;1
    ...
    ./at/s40/1.pgm;39
    ./at/s40/2.pgm;39

Imagine I have extracted the files to ``D:/data/at`` and have downloaded the CSV file to ``D:/data/at.txt``. Then you would simply need to Search & Replace ``./`` with ``D:/data/``. You can do that in an editor of your choice, every sufficiently advanced editor can do this. Once you have a CSV file with valid filenames and labels, you can run any of the demos by passing the path to the CSV file as a parameter:

.. code-block:: none

    facerec_demo.exe D:/data/at.txt

Creating the CSV File
+++++++++++++++++++++

You don't really want to create the CSV file by hand. I have prepared a little Python script ``create_csv.py`` (you find it at ``src/create_csv.py`` coming with this tutorial) that automatically creates the CSV file for you. If you have your images in a hierarchy like this (``/basepath/<subject>/<image.ext>``):

.. code-block:: none

    philipp@mango:~/facerec/data/at$ tree
    .
    |-- s1
    |   |-- 1.pgm
    |   |-- ...
    |   |-- 10.pgm
    |-- s2
    |   |-- 1.pgm
    |   |-- ...
    |   |-- 10.pgm
    ...
    |-- s40
    |   |-- 1.pgm
    |   |-- ...
    |   |-- 10.pgm

then simply call ``create_csv.py`` with the path to the folder, just like this, and you can save the output:

.. code-block:: none

    philipp@mango:~/facerec/data$ python create_csv.py
    at/s13/2.pgm;0
    at/s13/7.pgm;0
    at/s13/6.pgm;0
    at/s13/9.pgm;0
    at/s13/5.pgm;0
    at/s13/3.pgm;0
    at/s13/4.pgm;0
    at/s13/10.pgm;0
    at/s13/8.pgm;0
    at/s13/1.pgm;0
    at/s17/2.pgm;1
    at/s17/7.pgm;1
    at/s17/6.pgm;1
    at/s17/9.pgm;1
    at/s17/5.pgm;1
    at/s17/3.pgm;1
    [...]

Eigenfaces
==========

The problem with the image representation we are given is its high dimensionality. Two-dimensional :math:`p \times q` grayscale images span a :math:`m = pq`-dimensional vector space, so an image with :math:`100 \times 100` pixels lies in a :math:`10,000`-dimensional image space already. The question is: Are all dimensions equally useful for us? We can only make a decision if there's any variance in data, so what we are looking for are the components that account for most of the information. The Principal Component Analysis (PCA) was independently proposed by `Karl Pearson <http://en.wikipedia.org/wiki/Karl_Pearson>`_ (1901) and `Harold Hotelling <http://en.wikipedia.org/wiki/Harold_Hotelling>`_ (1933) to turn a set of possibly correlated variables into a smaller set of uncorrelated variables. The idea is that a high-dimensional dataset is often described by correlated variables and therefore only a few meaningful dimensions account for most of the information. The PCA method finds the directions with the greatest variance in the data, called principal components.

Algorithmic Description
-----------------------

Let :math:`X = \{ x_{1}, x_{2}, \ldots, x_{n} \}` be a random vector with observations :math:`x_i \in R^{d}`.

1. Compute the mean :math:`\mu`

   .. math::

       \mu = \frac{1}{n} \sum_{i=1}^{n} x_{i}

2. Compute the Covariance Matrix :math:`S`

   .. math::

       S = \frac{1}{n} \sum_{i=1}^{n} (x_{i} - \mu) (x_{i} - \mu)^{T}

3. Compute the eigenvalues :math:`\lambda_{i}` and eigenvectors :math:`v_{i}` of :math:`S`

   .. math::

       S v_{i} = \lambda_{i} v_{i}, i=1,2,\ldots,n

4. Order the eigenvectors descending by their eigenvalue. The :math:`k` principal components are the eigenvectors corresponding to the :math:`k` largest eigenvalues.

The :math:`k` principal components of the observed vector :math:`x` are then given by:

.. math::

    y = W^{T} (x - \mu)

where :math:`W = (v_{1}, v_{2}, \ldots, v_{k})`.

The reconstruction from the PCA basis is given by:

.. math::

    x = W y + \mu

where :math:`W = (v_{1}, v_{2}, \ldots, v_{k})`.
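
Both operations map directly to :ocv:class:`PCA` in OpenCV. The following sketch assumes a matrix ``data`` with one observation per row and an illustrative number of components:

.. code-block:: cpp

    // data holds one sample per row (filled elsewhere); keep k components:
    int k = 10; // illustrative value
    PCA pca(data, Mat(), CV_PCA_DATA_AS_ROW, k);
    // y = W^T (x - mu): project the first sample into the PCA subspace:
    Mat y = pca.project(data.row(0));
    // x = W y + mu: reconstruct it from its low-dimensional representation:
    Mat x = pca.backProject(y);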

The Eigenfaces method then performs face recognition by:

* Projecting all training samples into the PCA subspace.
* Projecting the query image into the PCA subspace.
* Finding the nearest neighbor between the projected training images and the projected query image.

Still there's one problem left to solve. Imagine we are given :math:`400` images sized :math:`100 \times 100` pixels. The Principal Component Analysis performs an eigendecomposition of the covariance matrix :math:`S = X X^{T}`, where :math:`{size}(X) = 10000 \times 400` in our example. You would end up with a :math:`10000 \times 10000` matrix, roughly :math:`0.8 GB`. Solving this problem isn't feasible, so we'll need to apply a trick. From your linear algebra lessons you know that a :math:`M \times N` matrix with :math:`M > N` can only have :math:`N - 1` non-zero eigenvalues. So it's possible to take the eigenvalue decomposition of :math:`X^{T} X` of size :math:`N \times N` instead:

.. math::

    X^{T} X v_{i} = \lambda_{i} v_{i}

and get the original eigenvectors of :math:`S = X X^{T}` with a left multiplication of the data matrix:

.. math::

    X X^{T} (X v_{i}) = \lambda_{i} (X v_{i})

The resulting eigenvectors are orthogonal; to get orthonormal eigenvectors they need to be normalized to unit length. I don't want to turn this into a publication, so please look into [Duda01]_ for the derivation and proof of the equations.
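
As a quick illustration of this trick in OpenCV, here is a sketch assuming a mean-centered data matrix ``X`` of type ``CV_64F`` with one sample per *column* (:math:`d \times n`):

.. code-block:: cpp

    // Surrogate covariance: X^T X is only n x n instead of d x d:
    Mat C = X.t() * X;
    Mat evals, evecs;
    eigen(C, evals, evecs);            // rows of evecs hold the v_i
    // Left-multiply by X to obtain the eigenvectors of X X^T (as columns):
    Mat V = X * evecs.t();
    // Normalize each eigenvector to unit length:
    for (int i = 0; i < V.cols; i++)
        normalize(V.col(i), V.col(i));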

Eigenfaces in OpenCV
--------------------

For the first source code example, I'll go through it with you. I am first giving you the whole source code listing, and after this we'll look at the most important lines in detail. Please note: every source code listing is commented in detail, so you should have no problems following it.

.. literalinclude:: src/facerec_eigenfaces.cpp
    :language: cpp
    :linenos:

I've used the jet colormap, so you can see how the grayscale values are distributed within the specific Eigenfaces. You can see that the Eigenfaces do not only encode facial features, but also the illumination in the images (see the left light in Eigenface \#4, the right light in Eigenface \#5):

.. image:: img/eigenfaces_opencv.png
    :align: center

We've already seen in the reconstruction equation above that we can reconstruct a face from its lower-dimensional approximation. So let's see how many Eigenfaces are needed for a good reconstruction. I'll do a subplot with :math:`10, 25, \ldots, 295` Eigenfaces:

.. code-block:: cpp

    // Display or save the image reconstruction at some predefined steps:
    for(int num_components = 10; num_components < 300; num_components+=15) {
        // slice the eigenvectors from the model
        Mat evs = Mat(W, Range::all(), Range(0, num_components));
        Mat projection = subspaceProject(evs, mean, images[0].reshape(1,1));
        Mat reconstruction = subspaceReconstruct(evs, mean, projection);
        // Normalize the result:
        reconstruction = norm_0_255(reconstruction.reshape(1, images[0].rows));
        // Display or save:
        if(argc == 2) {
            imshow(format("eigenface_reconstruction_%d", num_components), reconstruction);
        } else {
            imwrite(format("%s/eigenface_reconstruction_%d.png", output_folder.c_str(), num_components), reconstruction);
        }
    }

10 Eigenvectors are obviously not sufficient for a good image reconstruction, 50 Eigenvectors may already be sufficient to encode important facial features. You'll get a good reconstruction with approximately 300 Eigenvectors for the AT&T Facedatabase. There are rules of thumb how many Eigenfaces you should choose for a successful face recognition, but it heavily depends on the input data. [Zhao03]_ is the perfect point to start researching this:

.. image:: img/eigenface_reconstruction_opencv.png
    :align: center

Fisherfaces
===========

The Principal Component Analysis (PCA), which is the core of the Eigenfaces method, finds a linear combination of features that maximizes the total variance in data. While this is clearly a powerful way to represent data, it doesn't consider any classes and so a lot of discriminative information *may* be lost when throwing components away. Imagine a situation where the variance in your data is generated by an external source, let it be the light. The components identified by a PCA do not necessarily contain any discriminative information at all, so the projected samples are smeared together and a classification becomes impossible (see `http://www.bytefish.de/wiki/pca_lda_with_gnu_octave <http://www.bytefish.de/wiki/pca_lda_with_gnu_octave>`_ for an example).

The Linear Discriminant Analysis performs a class-specific dimensionality reduction and was invented by the great statistician `Sir R. A. Fisher <http://en.wikipedia.org/wiki/Ronald_Fisher>`_. He successfully used it for classifying flowers in his 1936 paper *The use of multiple measurements in taxonomic problems* [Fisher36]_. In order to find the combination of features that separates best between classes, the Linear Discriminant Analysis maximizes the ratio of between-classes to within-classes scatter, instead of maximizing the overall scatter. The idea is simple: same classes should cluster tightly together, while different classes are as far away as possible from each other in the lower-dimensional representation. This was also recognized by `Belhumeur <http://www.cs.columbia.edu/~belhumeur/>`_, `Hespanha <http://www.ece.ucsb.edu/~hespanha/>`_ and `Kriegman <http://cseweb.ucsd.edu/~kriegman/>`_ and so they applied a Discriminant Analysis to face recognition in [BHK97]_.

Algorithmic Description
-----------------------

Let :math:`X` be a random vector with samples drawn from :math:`c` classes:

.. math::
    :nowrap:

    \begin{align*}
    X & = & \{X_1,X_2,\ldots,X_c\} \\
    X_i & = & \{x_1, x_2, \ldots, x_n\}
    \end{align*}

The scatter matrices :math:`S_{B}` and :math:`S_{W}` are calculated as:

.. math::
    :nowrap:

    \begin{align*}
    S_{B} & = & \sum_{i=1}^{c} N_{i} (\mu_i - \mu)(\mu_i - \mu)^{T} \\
    S_{W} & = & \sum_{i=1}^{c} \sum_{x_{j} \in X_{i}} (x_j - \mu_i)(x_j - \mu_i)^{T}
    \end{align*}

where :math:`\mu` is the total mean:

.. math::

    \mu = \frac{1}{N} \sum_{i=1}^{N} x_i

and :math:`\mu_i` is the mean of class :math:`i \in \{1,\ldots,c\}`:

.. math::

    \mu_i = \frac{1}{|X_i|} \sum_{x_j \in X_i} x_j

Fisher's classic algorithm now looks for a projection :math:`W` that maximizes the class separability criterion:

.. math::

    W_{opt} = \operatorname{arg\,max}_{W} \frac{|W^T S_B W|}{|W^T S_W W|}

Following [BHK97]_, a solution for this optimization problem is given by solving the General Eigenvalue Problem:

.. math::
    :nowrap:

    \begin{align*}
    S_{B} v_{i} & = & \lambda_{i} S_w v_{i} \nonumber \\
    S_{W}^{-1} S_{B} v_{i} & = & \lambda_{i} v_{i}
    \end{align*}

There's one problem left to solve: The rank of :math:`S_{W}` is at most :math:`(N-c)`, with :math:`N` samples and :math:`c` classes. In pattern recognition problems the number of samples :math:`N` is almost always smaller than the dimension of the input data (the number of pixels), so the scatter matrix :math:`S_{W}` becomes singular (see [RJ91]_). In [BHK97]_ this was solved by performing a Principal Component Analysis on the data and projecting the samples into the :math:`(N-c)`-dimensional space. A Linear Discriminant Analysis was then performed on the reduced data, because :math:`S_{W}` isn't singular anymore.

The optimization problem can then be rewritten as:

.. math::
    :nowrap:

    \begin{align*}
    W_{pca} & = & \operatorname{arg\,max}_{W} |W^T S_T W| \\
    W_{fld} & = & \operatorname{arg\,max}_{W} \frac{|W^T W_{pca}^T S_{B} W_{pca} W|}{|W^T W_{pca}^T S_{W} W_{pca} W|}
    \end{align*}

The transformation matrix :math:`W` that projects a sample into the :math:`(c-1)`-dimensional space is then given by:

.. math::

    W = W_{fld}^{T} W_{pca}^{T}
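
This two-step chain can be sketched with OpenCV's :ocv:class:`PCA` and the ``LDA`` helper class from the contrib module. Treat the snippet as an illustration of the equations above under the stated assumptions (``data`` with one sample per row, a ``vector<int>`` of ``labels``, ``num_classes`` known), not as the Fisherfaces implementation itself:

.. code-block:: cpp

    // 1) PCA to (N - c) dimensions, so S_W becomes non-singular:
    int N = data.rows, c = num_classes; // num_classes is assumed to be known
    PCA pca(data, Mat(), CV_PCA_DATA_AS_ROW, N - c);
    Mat projected = pca.project(data);
    // 2) LDA on the reduced data, keeping at most (c - 1) components
    //    (assumption: the contrib LDA class takes data, labels, components):
    LDA lda(projected, labels, c - 1);
    Mat W_fld = lda.eigenvectors();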

Fisherfaces in OpenCV
---------------------

.. literalinclude:: src/facerec_fisherfaces.cpp
    :language: cpp
    :linenos:

For this example I am going to use the Yale Facedatabase A, just because the plots are nicer. Each Fisherface has the same length as an original image, thus it can be displayed as an image. The demo shows (or saves) the first, at most 16, Fisherfaces:

.. image:: img/fisherfaces_opencv.png
    :align: center

The Fisherfaces method learns a class-specific transformation matrix, so they do not capture illumination as obviously as the Eigenfaces method. The Discriminant Analysis instead finds the facial features to discriminate between the persons. It's important to mention that the performance of the Fisherfaces heavily depends on the input data as well. Practically said: if you learn the Fisherfaces for well-illuminated pictures only and you try to recognize faces in badly illuminated scenes, then the method is likely to find the wrong components (just because those features may not be predominant on badly illuminated images). This is somewhat logical, since the method had no chance to learn the illumination.

The Fisherfaces allow a reconstruction of the projected image, just like the Eigenfaces did. But since we only identified the features to distinguish between subjects, you can't expect a nice reconstruction of the original image. For the Fisherfaces method we'll project the sample image onto each of the Fisherfaces instead. So you'll have a nice visualization of which feature each of the Fisherfaces describes:

.. code-block:: cpp

    // Display or save the image reconstruction at some predefined steps:
    for(int num_component = 0; num_component < min(16, W.cols); num_component++) {
        // Slice the Fisherface from the model:
        Mat ev = W.col(num_component);
        Mat projection = subspaceProject(ev, mean, images[0].reshape(1,1));
        Mat reconstruction = subspaceReconstruct(ev, mean, projection);
        // Normalize the result:
        reconstruction = norm_0_255(reconstruction.reshape(1, images[0].rows));
        // Display or save:
        if(argc == 2) {
            imshow(format("fisherface_reconstruction_%d", num_component), reconstruction);
        } else {
            imwrite(format("%s/fisherface_reconstruction_%d.png", output_folder.c_str(), num_component), reconstruction);
        }
    }

The differences may be subtle to the human eye, but you should be able to see some:

.. image:: img/fisherface_reconstruction_opencv.png
    :align: center

Local Binary Patterns Histograms
================================

Eigenfaces and Fisherfaces take a somewhat holistic approach to face recognition. You treat your data as a vector somewhere in a high-dimensional image space. We all know high-dimensionality is bad, so a lower-dimensional subspace is identified, where (probably) useful information is preserved. The Eigenfaces approach maximizes the total scatter, which can lead to problems if the variance is generated by an external source, because components with a maximum variance over all classes aren't necessarily useful for classification (see `http://www.bytefish.de/wiki/pca_lda_with_gnu_octave <http://www.bytefish.de/wiki/pca_lda_with_gnu_octave>`_). So to preserve some discriminative information we applied a Linear Discriminant Analysis and optimized as described in the Fisherfaces method. The Fisherfaces method worked great... at least for the constrained scenario we've assumed in our model.

Now real life isn't perfect. You simply can't guarantee perfect light settings in your images or 10 different images of a person. So what if there's only one image for each person? Our covariance estimates for the subspace *may* be horribly wrong, and so will be the recognition. Remember the Eigenfaces method had a 96% recognition rate on the AT&T Facedatabase? How many images do we actually need to get such useful estimates? Here are the Rank-1 recognition rates of the Eigenfaces and Fisherfaces method on the AT&T Facedatabase, which is a fairly easy image database:

.. image:: img/at_database_small_sample_size.png
    :scale: 60%
    :align: center

So in order to get good recognition rates you'll need at least 8 (+-1) images per person, and the Fisherfaces method doesn't really help here. The above experiment is a 10-fold cross-validated result carried out with the facerec framework at `https://github.com/bytefish/facerec <https://github.com/bytefish/facerec>`_. This is not a publication, so I won't back these figures with a deep mathematical analysis. Please have a look into [KM01]_ for a detailed analysis of both methods, when it comes to small training datasets.

So some research concentrated on extracting local features from images. The idea is to not look at the whole image as a high-dimensional vector, but to describe only local features of an object. The features you extract this way will have a low dimensionality implicitly. A fine idea! But you'll soon observe that the image representation we are given doesn't only suffer from illumination variations. Think of things like scale, translation or rotation in images - your local description has to be at least a bit robust against those things. Just like :ocv:class:`SIFT`, the Local Binary Patterns methodology has its roots in 2D texture analysis. The basic idea of Local Binary Patterns is to summarize the local structure in an image by comparing each pixel with its neighborhood. Take a pixel as center and threshold its neighbors against it. If the intensity of a neighbor is greater-equal that of the center pixel, denote it with 1, and with 0 if not. You'll end up with a binary number for each pixel, just like 11001111. So with 8 surrounding pixels you'll end up with :math:`2^8` possible combinations, called *Local Binary Patterns* or sometimes referred to as *LBP codes*. The first LBP operator described in literature actually used a fixed 3 x 3 neighborhood just like this:

.. image:: img/lbp/lbp.png
    :scale: 80%
    :align: center

Algorithmic Description
-----------------------

A more formal description of the LBP operator can be given as:

.. math::

    LBP(x_c, y_c) = \sum_{p=0}^{P-1} 2^p s(i_p - i_c)

with :math:`(x_c, y_c)` as the central pixel with intensity :math:`i_c`, and :math:`i_p` being the intensity of the neighbor pixel. :math:`s` is the sign function defined as:

.. math::
    :nowrap:

    \begin{equation}
    s(x) =
    \begin{cases}
    1 & \text{if $x \geq 0$}\\
    0 & \text{else}
    \end{cases}
    \end{equation}
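
To make the definition concrete, here is a short sketch of the fixed 3 x 3 operator exactly as the formula describes it; ``src`` is assumed to be a ``CV_8UC1`` grayscale image and the function name is made up for illustration:

.. code-block:: cpp

    // Compute one LBP code per inner pixel of a grayscale image:
    void lbp_3x3(const Mat& src, Mat& dst) {
        dst = Mat::zeros(src.size(), CV_8UC1);
        for(int y = 1; y < src.rows - 1; y++) {
            for(int x = 1; x < src.cols - 1; x++) {
                uchar c = src.at<uchar>(y, x);
                uchar code = 0;
                // s(i_p - i_c) for the 8 neighbors, clockwise from top-left:
                code |= (src.at<uchar>(y-1, x-1) >= c) << 7;
                code |= (src.at<uchar>(y-1, x  ) >= c) << 6;
                code |= (src.at<uchar>(y-1, x+1) >= c) << 5;
                code |= (src.at<uchar>(y,   x+1) >= c) << 4;
                code |= (src.at<uchar>(y+1, x+1) >= c) << 3;
                code |= (src.at<uchar>(y+1, x  ) >= c) << 2;
                code |= (src.at<uchar>(y+1, x-1) >= c) << 1;
                code |= (src.at<uchar>(y,   x-1) >= c) << 0;
                dst.at<uchar>(y, x) = code;
            }
        }
    }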

This description enables you to capture very fine-grained details in images. In fact the authors were able to compete with state of the art results for texture classification. Soon after the operator was published it was noted that a fixed neighborhood fails to encode details differing in scale. So the operator was extended to use a variable neighborhood in [AHP04]_. The idea is to align an arbitrary number of neighbors on a circle with a variable radius, which enables capturing the following neighborhoods:

.. image:: img/lbp/patterns.png
    :scale: 80%
    :align: center

For a given point :math:`(x_c,y_c)` the position of the neighbor :math:`(x_p,y_p), p \in P` can be calculated by:

.. math::
    :nowrap:

    \begin{align*}
    x_{p} & = & x_c + R \cos({\frac{2\pi p}{P}})\\
    y_{p} & = & y_c - R \sin({\frac{2\pi p}{P}})
    \end{align*}

where :math:`R` is the radius of the circle and :math:`P` is the number of sample points.

The operator is an extension to the original LBP codes, so it's sometimes called *Extended LBP* (also referred to as *Circular LBP*). If a point's coordinate on the circle doesn't correspond to image coordinates, the point gets interpolated. Computer science has a bunch of clever interpolation schemes; the OpenCV implementation does a bilinear interpolation:

.. math::
    :nowrap:

    \begin{align*}
    f(x,y) \approx \begin{bmatrix}
        1-x & x \end{bmatrix} \begin{bmatrix}
        f(0,0) & f(0,1) \\
        f(1,0) & f(1,1) \end{bmatrix} \begin{bmatrix}
        1-y \\
        y \end{bmatrix}.
    \end{align*}
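
Combining the two formulas, sampling a single circular neighbor with bilinear interpolation can be sketched like this (bounds checking is omitted for brevity; the function name is made up for illustration):

.. code-block:: cpp

    // Sample the p-th of P neighbors on a circle of radius R around (xc, yc):
    float sample_neighbor(const Mat& src, int xc, int yc, double R, int P, int p) {
        double xp = xc + R * cos(2.0 * CV_PI * p / P);
        double yp = yc - R * sin(2.0 * CV_PI * p / P);
        int fx = (int)floor(xp), fy = (int)floor(yp);
        double dx = xp - fx, dy = yp - fy;
        // Bilinear weights for the four surrounding pixels:
        return (float)(
            (1 - dx) * (1 - dy) * src.at<uchar>(fy,     fx)     +
                 dx  * (1 - dy) * src.at<uchar>(fy,     fx + 1) +
            (1 - dx) *      dy  * src.at<uchar>(fy + 1, fx)     +
                 dx  *      dy  * src.at<uchar>(fy + 1, fx + 1));
    }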

By definition the LBP operator is robust against monotonic grayscale transformations. We can easily verify this by looking at the LBP image of an artificially modified image (so you see what an LBP image looks like!):

.. image:: img/lbp/lbp_yale.jpg
    :scale: 60%
    :align: center

So what's left to do is how to incorporate the spatial information in the face recognition model. The representation proposed by Ahonen et al. [AHP04]_ is to divide the LBP image into :math:`m` local regions and extract a histogram from each. The spatially enhanced feature vector is then obtained by concatenating the local histograms (**not merging them**). These histograms are called *Local Binary Patterns Histograms*.
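
A sketch of this last step, assuming ``lbp`` is an already computed ``CV_8UC1`` LBP image and ``numPatterns`` is the number of possible codes (e.g. 256 for the 3 x 3 operator); the function name is made up for illustration:

.. code-block:: cpp

    // Concatenate one histogram per grid cell into a single row vector:
    Mat spatial_histogram(const Mat& lbp, int grid_x, int grid_y, int numPatterns) {
        int cw = lbp.cols / grid_x, ch = lbp.rows / grid_y;
        Mat result;
        for(int i = 0; i < grid_y; i++) {
            for(int j = 0; j < grid_x; j++) {
                Mat cell = lbp(Rect(j * cw, i * ch, cw, ch));
                Mat hist = Mat::zeros(1, numPatterns, CV_32F);
                for(int y = 0; y < cell.rows; y++)
                    for(int x = 0; x < cell.cols; x++)
                        hist.at<float>(0, cell.at<uchar>(y, x)) += 1.0f;
                result.push_back(hist); // one row per cell
            }
        }
        return result.reshape(1, 1);    // flatten to a single row
    }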

Local Binary Patterns Histograms in OpenCV
------------------------------------------

.. literalinclude:: src/facerec_lbph.cpp
    :language: cpp
    :linenos:

Conclusion
==========

You've learned how to use the new :ocv:class:`FaceRecognizer` in real applications. After reading the document you also know how the algorithms work, so now it's time for you to experiment with the available algorithms. Use them, improve them and let the OpenCV community participate!

Credits
=======

This document wouldn't be possible without the kind permission to use the face images of the *AT&T Database of Faces* and the *Yale Facedatabase A/B*.

The Database of Faces
---------------------

**Important: when using these images, please give credit to "AT&T Laboratories, Cambridge."**

The Database of Faces, formerly *The ORL Database of Faces*, contains a set of face images taken between April 1992 and April 1994. The database was used in the context of a face recognition project carried out in collaboration with the Speech, Vision and Robotics Group of the Cambridge University Engineering Department.

There are ten different images of each of 40 distinct subjects. For some subjects, the images were taken at different times, varying the lighting, facial expressions (open / closed eyes, smiling / not smiling) and facial details (glasses / no glasses). All the images were taken against a dark homogeneous background with the subjects in an upright, frontal position (with tolerance for some side movement).

The files are in PGM format. The size of each image is 92x112 pixels, with 256 grey levels per pixel. The images are organised in 40 directories (one for each subject), which have names of the form sX, where X indicates the subject number (between 1 and 40). In each of these directories, there are ten different images of that subject, which have names of the form Y.pgm, where Y is the image number for that subject (between 1 and 10).

A copy of the database can be retrieved from: `http://www.cl.cam.ac.uk/research/dtg/attarchive/pub/data/att_faces.zip <http://www.cl.cam.ac.uk/research/dtg/attarchive/pub/data/att_faces.zip>`_.

Yale Facedatabase A
-------------------

*With the permission of the authors I am allowed to show a small number of images (say subject 1 and all the variations) and all images such as Fisherfaces and Eigenfaces from either Yale Facedatabase A or the Yale Facedatabase B.*

The Yale Face Database A (size 6.4MB) contains 165 grayscale images in GIF format of 15 individuals. There are 11 images per subject, one per different facial expression or configuration: center-light, w/glasses, happy, left-light, w/no glasses, normal, right-light, sad, sleepy, surprised, and wink. (Source: `http://cvc.yale.edu/projects/yalefaces/yalefaces.html <http://cvc.yale.edu/projects/yalefaces/yalefaces.html>`_)

Yale Facedatabase B
-------------------

*With the permission of the authors I am allowed to show a small number of images (say subject 1 and all the variations) and all images such as Fisherfaces and Eigenfaces from either Yale Facedatabase A or the Yale Facedatabase B.*

The extended Yale Face Database B contains 16128 images of 28 human subjects under 9 poses and 64 illumination conditions. The data format of this database is the same as the Yale Face Database B. Please refer to the homepage of the Yale Face Database B (or one copy of this page) for more detailed information on the data format.

You are free to use the extended Yale Face Database B for research purposes. All publications which use this database should acknowledge the use of "the Extended Yale Face Database B" and reference Athinodoros Georghiades, Peter Belhumeur, and David Kriegman's paper, "From Few to Many: Illumination Cone Models for Face Recognition under Variable Lighting and Pose", PAMI, 2001, `[bibtex] <http://vision.ucsd.edu/~leekc/ExtYaleDatabase/athosref.html>`_.

The extended database, as opposed to the original Yale Face Database B with 10 subjects, was first reported by Kuang-Chih Lee, Jeffrey Ho, and David Kriegman in "Acquiring Linear Subspaces for Face Recognition under Variable Lighting", PAMI, May, 2005 `[pdf] <http://vision.ucsd.edu/~leekc/papers/9pltsIEEE.pdf>`_. All test image data used in the experiments are manually aligned, cropped, and then re-sized to 168x192 images. If you publish your experimental results with the cropped images, please reference the PAMI 2005 paper as well. (Source: `http://vision.ucsd.edu/~leekc/ExtYaleDatabase/ExtYaleB.html <http://vision.ucsd.edu/~leekc/ExtYaleDatabase/ExtYaleB.html>`_)

Literature
==========

.. [AHP04] Ahonen, T., Hadid, A., and Pietikainen, M. *Face Recognition with Local Binary Patterns.* Computer Vision - ECCV 2004 (2004), 469–481.

.. [BHK97] Belhumeur, P. N., Hespanha, J., and Kriegman, D. *Eigenfaces vs. Fisherfaces: Recognition Using Class Specific Linear Projection.* IEEE Transactions on Pattern Analysis and Machine Intelligence 19, 7 (1997), 711–720.

.. [Bru92] Brunelli, R., and Poggio, T. *Face Recognition through Geometrical Features.* European Conference on Computer Vision (ECCV) 1992, 792–800.

.. [Duda01] Duda, Richard O., Hart, Peter E., and Stork, David G. *Pattern Classification* (2nd Edition). 2001.

.. [Fisher36] Fisher, R. A. *The use of multiple measurements in taxonomic problems.* Annals Eugen. 7 (1936), 179–188.

.. [GBK01] Georghiades, A. S., Belhumeur, P. N., and Kriegman, D. J. *From Few to Many: Illumination Cone Models for Face Recognition under Variable Lighting and Pose.* IEEE Transactions on Pattern Analysis and Machine Intelligence 23, 6 (2001), 643–660.

.. [Kanade73] Kanade, T. *Picture processing system by computer complex and recognition of human faces.* PhD thesis, Kyoto University, November 1973.

.. [KM01] Martinez, A., and Kak, A. *PCA versus LDA.* IEEE Transactions on Pattern Analysis and Machine Intelligence 23, 2 (2001), 228–233.

.. [Messer06] Messer, K., et al. *Performance Characterisation of Face Recognition Algorithms and Their Sensitivity to Severe Illumination Changes.* In: ICB, 2006, 1–11.

.. [RJ91] Raudys, S., and Jain, A. K. *Small sample size effects in statistical pattern recognition: Recommendations for practitioners.* IEEE Transactions on Pattern Analysis and Machine Intelligence 13, 3 (1991), 252–264.

.. [Tan10] Tan, X., and Triggs, B. *Enhanced local texture feature sets for face recognition under difficult lighting conditions.* IEEE Transactions on Image Processing 19 (2010), 1635–1650.

.. [TP91] Turk, M., and Pentland, A. *Eigenfaces for recognition.* Journal of Cognitive Neuroscience 3 (1991), 71–86.

.. [Tu06] Turati, C., Macchi Cassia, V., Simion, F., and Leo, I. *Newborns' face recognition: Role of inner and outer facial features.* Child Development 77, 2 (2006), 297–311.

.. [Wiskott97] Wiskott, L., Fellous, J., Krüger, N., and von der Malsburg, C. *Face Recognition By Elastic Bunch Graph Matching.* IEEE Transactions on Pattern Analysis and Machine Intelligence 19 (1997), 775–779.

.. [Zhao03] Zhao, W., Chellappa, R., Phillips, P., and Rosenfeld, A. *Face recognition: A literature survey.* ACM Computing Surveys (CSUR) 35, 4 (2003), 399–458.
|
||||
|
||||
Appendix
========

CSV for the AT&T Facedatabase
------------------------------

.. literalinclude:: etc/at.txt
   :language: none
   :linenos:
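The file follows the same ``<path>;<label>`` scheme used throughout this guide. For orientation, its first lines look roughly like this (a sketch only, assuming the AT&T Facedatabase was extracted to a folder ``./at``):

.. code-block:: none

    ./at/s1/1.pgm;0
    ./at/s1/2.pgm;0
    ...
    ./at/s2/1.pgm;1
    ./at/s2/2.pgm;1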
BIN modules/contrib/doc/facerec/img/colormaps/colorscale_autumn.jpg
BIN modules/contrib/doc/facerec/img/colormaps/colorscale_bone.jpg
BIN modules/contrib/doc/facerec/img/colormaps/colorscale_cool.jpg
BIN modules/contrib/doc/facerec/img/colormaps/colorscale_hot.jpg
BIN modules/contrib/doc/facerec/img/colormaps/colorscale_hsv.jpg
BIN modules/contrib/doc/facerec/img/colormaps/colorscale_jet.jpg
BIN modules/contrib/doc/facerec/img/colormaps/colorscale_mkpj1.jpg
BIN modules/contrib/doc/facerec/img/colormaps/colorscale_mkpj2.jpg
BIN modules/contrib/doc/facerec/img/colormaps/colorscale_ocean.jpg
BIN modules/contrib/doc/facerec/img/colormaps/colorscale_pink.jpg
BIN modules/contrib/doc/facerec/img/colormaps/colorscale_rainbow.jpg
BIN modules/contrib/doc/facerec/img/colormaps/colorscale_spring.jpg
BIN modules/contrib/doc/facerec/img/colormaps/colorscale_summer.jpg
BIN modules/contrib/doc/facerec/img/colormaps/colorscale_winter.jpg
BIN modules/contrib/doc/facerec/img/eigenfaces_opencv.png
BIN modules/contrib/doc/facerec/img/fisherfaces_opencv.png
BIN modules/contrib/doc/facerec/img/lbp/lbp.png
BIN modules/contrib/doc/facerec/img/lbp/lbp_yale.jpg
BIN modules/contrib/doc/facerec/img/lbp/patterns.png
33
modules/contrib/doc/facerec/index.rst
Normal file
@ -0,0 +1,33 @@
FaceRecognizer - Face Recognition with OpenCV
##############################################

OpenCV 2.4 now comes with the brand-new :ocv:class:`FaceRecognizer` class for face recognition. This documentation explains `the API <facerec_api.rst>`_ in detail and gives you plenty of help to get started, including full source code examples. `Face Recognition with OpenCV <facerec_tutorial.rst>`_ is the definitive guide to the new :ocv:class:`FaceRecognizer`. There is also a `tutorial on gender classification <tutorial/facerec_gender_classification.rst>`_, a `tutorial on face recognition in videos <tutorial/facerec_video_recognition.rst>`_, and a tutorial showing `how to load & save your results <tutorial/facerec_save_load.rst>`_.

These documents are the help I wished for when I was working my way into face recognition. I hope you also find the new :ocv:class:`FaceRecognizer` a useful addition to OpenCV.

Please file feature requests and/or bug reports on the official OpenCV bug tracker at:

* http://code.opencv.org/projects/opencv/issues

Contents
========

.. toctree::
   :maxdepth: 1

   FaceRecognizer API <facerec_api>
   Guide to Face Recognition with OpenCV <facerec_tutorial>
   Tutorial on Gender Classification <tutorial/facerec_gender_classification>
   Tutorial on Face Recognition in Videos <tutorial/facerec_video_recognition>
   Tutorial on Saving & Loading a FaceRecognizer <tutorial/facerec_save_load>
   How to use Colormaps in OpenCV <colormaps>
   Changelog <facerec_changelog>

Indices and tables
==================

* :ref:`genindex`
* :ref:`modindex`
* :ref:`search`
41
modules/contrib/doc/facerec/src/create_csv.py
Normal file
@ -0,0 +1,41 @@
import sys
import os.path

# This is a tiny script to help you create a CSV file from a face
# database with a similar hierarchy:
#
#  philipp@mango:~/facerec/data/at$ tree
#  .
#  |-- README
#  |-- s1
#  |   |-- 1.pgm
#  |   |-- ...
#  |   |-- 10.pgm
#  |-- s2
#  |   |-- 1.pgm
#  |   |-- ...
#  |   |-- 10.pgm
#  ...
#  |-- s40
#  |   |-- 1.pgm
#  |   |-- ...
#  |   |-- 10.pgm
#

if __name__ == "__main__":

    if len(sys.argv) != 2:
        print "usage: create_csv <base_path>"
        sys.exit(1)

    BASE_PATH = sys.argv[1]
    SEPARATOR = ";"

    label = 0
    for dirname, dirnames, filenames in os.walk(BASE_PATH):
        for subdirname in dirnames:
            subject_path = os.path.join(dirname, subdirname)
            for filename in os.listdir(subject_path):
                abs_path = "%s/%s" % (subject_path, filename)
                print "%s%s%d" % (abs_path, SEPARATOR, label)
            label = label + 1
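# A minimal usage sketch (the path below is only an example, not a shipped
# file); the script prints to stdout, so redirect it into your CSV file:
#
#   python create_csv.py /home/philipp/facerec/data/at > at.csv
#
# Each emitted line has the form "<path>;<label>", e.g.:
#
#   /home/philipp/facerec/data/at/s1/1.pgm;0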
89
modules/contrib/doc/facerec/src/crop_face.py
Executable file
@ -0,0 +1,89 @@
#!/usr/bin/env python
# Software License Agreement (BSD License)
#
# Copyright (c) 2012, Philipp Wagner
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions
# are met:
#
#  * Redistributions of source code must retain the above copyright
#    notice, this list of conditions and the following disclaimer.
#  * Redistributions in binary form must reproduce the above
#    copyright notice, this list of conditions and the following
#    disclaimer in the documentation and/or other materials provided
#    with the distribution.
#  * Neither the name of the author nor the names of its
#    contributors may be used to endorse or promote products derived
#    from this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
# COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING,
# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN
# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
# POSSIBILITY OF SUCH DAMAGE.

import sys, math, Image

def Distance(p1, p2):
    dx = p2[0] - p1[0]
    dy = p2[1] - p1[1]
    return math.sqrt(dx*dx + dy*dy)

def ScaleRotateTranslate(image, angle, center=None, new_center=None, scale=None, resample=Image.BICUBIC):
    if (scale is None) and (center is None):
        return image.rotate(angle=angle, resample=resample)
    nx, ny = x, y = center
    sx = sy = 1.0
    if new_center:
        (nx, ny) = new_center
    if scale:
        (sx, sy) = (scale, scale)
    cosine = math.cos(angle)
    sine = math.sin(angle)
    a = cosine/sx
    b = sine/sx
    c = x - nx*a - ny*b
    d = -sine/sy
    e = cosine/sy
    f = y - nx*d - ny*e
    return image.transform(image.size, Image.AFFINE, (a, b, c, d, e, f), resample=resample)

def CropFace(image, eye_left=(0,0), eye_right=(0,0), offset_pct=(0.2,0.2), dest_sz=(70,70)):
    # calculate offsets in original image
    offset_h = math.floor(float(offset_pct[0])*dest_sz[0])
    offset_v = math.floor(float(offset_pct[1])*dest_sz[1])
    # get the direction
    eye_direction = (eye_right[0] - eye_left[0], eye_right[1] - eye_left[1])
    # calc rotation angle in radians
    rotation = -math.atan2(float(eye_direction[1]), float(eye_direction[0]))
    # distance between them
    dist = Distance(eye_left, eye_right)
    # calculate the reference eye-width
    reference = dest_sz[0] - 2.0*offset_h
    # scale factor
    scale = float(dist)/float(reference)
    # rotate original around the left eye
    image = ScaleRotateTranslate(image, center=eye_left, angle=rotation)
    # crop the rotated image
    crop_xy = (eye_left[0] - scale*offset_h, eye_left[1] - scale*offset_v)
    crop_size = (dest_sz[0]*scale, dest_sz[1]*scale)
    image = image.crop((int(crop_xy[0]), int(crop_xy[1]), int(crop_xy[0]+crop_size[0]), int(crop_xy[1]+crop_size[1])))
    # resize it
    image = image.resize(dest_sz, Image.ANTIALIAS)
    return image

if __name__ == "__main__":
    image = Image.open("arnie.jpg")
    CropFace(image, eye_left=(252,364), eye_right=(420,366), offset_pct=(0.1,0.1), dest_sz=(200,200)).save("arnie_10_10_200_200.jpg")
    CropFace(image, eye_left=(252,364), eye_right=(420,366), offset_pct=(0.2,0.2), dest_sz=(200,200)).save("arnie_20_20_200_200.jpg")
    CropFace(image, eye_left=(252,364), eye_right=(420,366), offset_pct=(0.3,0.3), dest_sz=(200,200)).save("arnie_30_30_200_200.jpg")
    CropFace(image, eye_left=(252,364), eye_right=(420,366), offset_pct=(0.2,0.2)).save("arnie_20_20_70_70.jpg")
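# How the parameters above interact: offset_pct is the fraction of the
# destination size kept as margin around the eyes. With dest_sz=(200,200)
# and offset_pct=(0.2,0.2), 40 pixels are reserved left and right of the
# eyes (20% each side), so the eye-to-eye distance is scaled to fill the
# remaining 120 pixels; smaller offsets give a tighter crop. The eye
# coordinates passed to CropFace are measured manually per input image.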
169
modules/contrib/doc/facerec/src/facerec_demo.cpp
Normal file
@ -0,0 +1,169 @@
/*
 * Copyright (c) 2011. Philipp Wagner <bytefish[at]gmx[dot]de>.
 * Released to public domain under terms of the BSD Simplified license.
 *
 * Redistribution and use in source and binary forms, with or without
 * modification, are permitted provided that the following conditions are met:
 *   * Redistributions of source code must retain the above copyright
 *     notice, this list of conditions and the following disclaimer.
 *   * Redistributions in binary form must reproduce the above copyright
 *     notice, this list of conditions and the following disclaimer in the
 *     documentation and/or other materials provided with the distribution.
 *   * Neither the name of the organization nor the names of its contributors
 *     may be used to endorse or promote products derived from this software
 *     without specific prior written permission.
 *
 * See <http://www.opensource.org/licenses/bsd-license>
 */

#include "opencv2/core/core.hpp"
#include "opencv2/contrib/contrib.hpp"
#include "opencv2/highgui/highgui.hpp"

#include <iostream>
#include <fstream>
#include <sstream>

using namespace cv;
using namespace std;

static Mat norm_0_255(InputArray _src) {
    Mat src = _src.getMat();
    // Create and return normalized image:
    Mat dst;
    switch(src.channels()) {
    case 1:
        cv::normalize(_src, dst, 0, 255, NORM_MINMAX, CV_8UC1);
        break;
    case 3:
        cv::normalize(_src, dst, 0, 255, NORM_MINMAX, CV_8UC3);
        break;
    default:
        src.copyTo(dst);
        break;
    }
    return dst;
}

static void read_csv(const string& filename, vector<Mat>& images, vector<int>& labels, char separator = ';') {
    std::ifstream file(filename.c_str(), ifstream::in);
    if (!file) {
        string error_message = "No valid input file was given, please check the given filename.";
        CV_Error(CV_StsBadArg, error_message);
    }
    string line, path, classlabel;
    while (getline(file, line)) {
        stringstream liness(line);
        getline(liness, path, separator);
        getline(liness, classlabel);
        if(!path.empty() && !classlabel.empty()) {
            images.push_back(imread(path, 0));
            labels.push_back(atoi(classlabel.c_str()));
        }
    }
}

int main(int argc, const char *argv[]) {
    // Check for valid command line arguments, print usage
    // if no arguments were given.
    if (argc != 2) {
        cout << "usage: " << argv[0] << " <csv.ext>" << endl;
        exit(1);
    }
    // Get the path to your CSV.
    string fn_csv = string(argv[1]);
    // These vectors hold the images and corresponding labels.
    vector<Mat> images;
    vector<int> labels;
    // Read in the data. This can fail if no valid
    // input filename is given.
    try {
        read_csv(fn_csv, images, labels);
    } catch (cv::Exception& e) {
        cerr << "Error opening file \"" << fn_csv << "\". Reason: " << e.msg << endl;
        // nothing more we can do
        exit(1);
    }
    // Quit if there are not enough images for this demo.
    if(images.size() <= 1) {
        string error_message = "This demo needs at least 2 images to work. Please add more images to your data set!";
        CV_Error(CV_StsError, error_message);
    }
    // Get the height from the first image. We'll need this
    // later in code to reshape the images to their original
    // size:
    int height = images[0].rows;
    // The following lines simply get the last image from
    // your dataset and remove it from the vector. This is
    // done, so that the training data (which we learn the
    // cv::FaceRecognizer on) and the test data we test
    // the model with, do not overlap.
    Mat testSample = images[images.size() - 1];
    int testLabel = labels[labels.size() - 1];
    images.pop_back();
    labels.pop_back();
    // The following lines create an Eigenfaces model for
    // face recognition and train it with the images and
    // labels read from the given CSV file.
    // This here is a full PCA, if you just want to keep
    // 10 principal components (read Eigenfaces), then call
    // the factory method like this:
    //
    //      cv::createEigenFaceRecognizer(10);
    //
    // If you want to create a FaceRecognizer with a
    // confidence threshold, call it with:
    //
    //      cv::createEigenFaceRecognizer(10, 123.0);
    //
    Ptr<FaceRecognizer> model = createEigenFaceRecognizer();
    model->train(images, labels);
    // The following line predicts the label of a given
    // test image:
    int predictedLabel = model->predict(testSample);
    //
    // To get the confidence of a prediction call the model with:
    //
    //      int predictedLabel = -1;
    //      double confidence = 0.0;
    //      model->predict(testSample, predictedLabel, confidence);
    //
    string result_message = format("Predicted class = %d / Actual class = %d.", predictedLabel, testLabel);
    cout << result_message << endl;
    // Sometimes you'll need to get/set internal model data,
    // which isn't exposed by the public cv::FaceRecognizer.
    // Since each cv::FaceRecognizer is derived from a
    // cv::Algorithm, you can query the data.
    //
    // First we'll use it to set the threshold of the FaceRecognizer
    // to 0.0 without retraining the model. This can be useful if
    // you are evaluating the model:
    //
    model->set("threshold", 0.0);
    // Now the threshold of this model is set to 0.0. A prediction
    // now returns -1, as it's impossible to have a distance below
    // it.
    predictedLabel = model->predict(testSample);
    cout << "Predicted class = " << predictedLabel << endl;
    // Here is how to get the eigenvalues of this Eigenfaces model:
    Mat eigenvalues = model->getMat("eigenvalues");
    // And we can do the same to display the Eigenvectors (read Eigenfaces):
    Mat W = model->getMat("eigenvectors");
    // From this we will display the (at most) first 10 Eigenfaces:
    for (int i = 0; i < min(10, W.cols); i++) {
        string msg = format("Eigenvalue #%d = %.5f", i, eigenvalues.at<double>(i));
        cout << msg << endl;
        // get eigenvector #i
        Mat ev = W.col(i).clone();
        // Reshape to original size & normalize to [0...255] for imshow.
        Mat grayscale = norm_0_255(ev.reshape(1, height));
        // Show the image & apply a Jet colormap for better sensing.
        Mat cgrayscale;
        applyColorMap(grayscale, cgrayscale, COLORMAP_JET);
        imshow(format("%d", i), cgrayscale);
    }
    waitKey(0);

    return 0;
}
192
modules/contrib/doc/facerec/src/facerec_eigenfaces.cpp
Normal file
@ -0,0 +1,192 @@
/*
 * Copyright (c) 2011. Philipp Wagner <bytefish[at]gmx[dot]de>.
 * Released to public domain under terms of the BSD Simplified license.
 *
 * Redistribution and use in source and binary forms, with or without
 * modification, are permitted provided that the following conditions are met:
 *   * Redistributions of source code must retain the above copyright
 *     notice, this list of conditions and the following disclaimer.
 *   * Redistributions in binary form must reproduce the above copyright
 *     notice, this list of conditions and the following disclaimer in the
 *     documentation and/or other materials provided with the distribution.
 *   * Neither the name of the organization nor the names of its contributors
 *     may be used to endorse or promote products derived from this software
 *     without specific prior written permission.
 *
 * See <http://www.opensource.org/licenses/bsd-license>
 */

#include "opencv2/core/core.hpp"
#include "opencv2/contrib/contrib.hpp"
#include "opencv2/highgui/highgui.hpp"

#include <iostream>
#include <fstream>
#include <sstream>

using namespace cv;
using namespace std;

static Mat norm_0_255(InputArray _src) {
    Mat src = _src.getMat();
    // Create and return normalized image:
    Mat dst;
    switch(src.channels()) {
    case 1:
        cv::normalize(_src, dst, 0, 255, NORM_MINMAX, CV_8UC1);
        break;
    case 3:
        cv::normalize(_src, dst, 0, 255, NORM_MINMAX, CV_8UC3);
        break;
    default:
        src.copyTo(dst);
        break;
    }
    return dst;
}

static void read_csv(const string& filename, vector<Mat>& images, vector<int>& labels, char separator = ';') {
    std::ifstream file(filename.c_str(), ifstream::in);
    if (!file) {
        string error_message = "No valid input file was given, please check the given filename.";
        CV_Error(CV_StsBadArg, error_message);
    }
    string line, path, classlabel;
    while (getline(file, line)) {
        stringstream liness(line);
        getline(liness, path, separator);
        getline(liness, classlabel);
        if(!path.empty() && !classlabel.empty()) {
            images.push_back(imread(path, 0));
            labels.push_back(atoi(classlabel.c_str()));
        }
    }
}

int main(int argc, const char *argv[]) {
    // Check for valid command line arguments, print usage
    // if no arguments were given.
    if (argc < 2) {
        cout << "usage: " << argv[0] << " <csv.ext> <output_folder> " << endl;
        exit(1);
    }
    string output_folder;
    if (argc == 3) {
        output_folder = string(argv[2]);
    }
    // Get the path to your CSV.
    string fn_csv = string(argv[1]);
    // These vectors hold the images and corresponding labels.
    vector<Mat> images;
    vector<int> labels;
    // Read in the data. This can fail if no valid
    // input filename is given.
    try {
        read_csv(fn_csv, images, labels);
    } catch (cv::Exception& e) {
        cerr << "Error opening file \"" << fn_csv << "\". Reason: " << e.msg << endl;
        // nothing more we can do
        exit(1);
    }
    // Quit if there are not enough images for this demo.
    if(images.size() <= 1) {
        string error_message = "This demo needs at least 2 images to work. Please add more images to your data set!";
        CV_Error(CV_StsError, error_message);
    }
    // Get the height from the first image. We'll need this
    // later in code to reshape the images to their original
    // size:
    int height = images[0].rows;
    // The following lines simply get the last image from
    // your dataset and remove it from the vector. This is
    // done, so that the training data (which we learn the
    // cv::FaceRecognizer on) and the test data we test
    // the model with, do not overlap.
    Mat testSample = images[images.size() - 1];
    int testLabel = labels[labels.size() - 1];
    images.pop_back();
    labels.pop_back();
    // The following lines create an Eigenfaces model for
    // face recognition and train it with the images and
    // labels read from the given CSV file.
    // This here is a full PCA, if you just want to keep
    // 10 principal components (read Eigenfaces), then call
    // the factory method like this:
    //
    //      cv::createEigenFaceRecognizer(10);
    //
    // If you want to create a FaceRecognizer with a
    // confidence threshold (e.g. 123.0), call it with:
    //
    //      cv::createEigenFaceRecognizer(10, 123.0);
    //
    // If you want to use _all_ Eigenfaces and have a threshold,
    // then call the method like this:
    //
    //      cv::createEigenFaceRecognizer(0, 123.0);
    //
    Ptr<FaceRecognizer> model = createEigenFaceRecognizer();
    model->train(images, labels);
    // The following line predicts the label of a given
    // test image:
    int predictedLabel = model->predict(testSample);
    //
    // To get the confidence of a prediction call the model with:
    //
    //      int predictedLabel = -1;
    //      double confidence = 0.0;
    //      model->predict(testSample, predictedLabel, confidence);
    //
    string result_message = format("Predicted class = %d / Actual class = %d.", predictedLabel, testLabel);
    cout << result_message << endl;
    // Here is how to get the eigenvalues of this Eigenfaces model:
    Mat eigenvalues = model->getMat("eigenvalues");
    // And we can do the same to display the Eigenvectors (read Eigenfaces):
    Mat W = model->getMat("eigenvectors");
    // Get the sample mean from the training data:
    Mat mean = model->getMat("mean");
    // Display or save:
    if(argc == 2) {
        imshow("mean", norm_0_255(mean.reshape(1, images[0].rows)));
    } else {
        imwrite(format("%s/mean.png", output_folder.c_str()), norm_0_255(mean.reshape(1, images[0].rows)));
    }
    // Display or save the Eigenfaces:
    for (int i = 0; i < min(10, W.cols); i++) {
        string msg = format("Eigenvalue #%d = %.5f", i, eigenvalues.at<double>(i));
        cout << msg << endl;
        // get eigenvector #i
        Mat ev = W.col(i).clone();
        // Reshape to original size & normalize to [0...255] for imshow.
        Mat grayscale = norm_0_255(ev.reshape(1, height));
        // Show the image & apply a Jet colormap for better sensing.
        Mat cgrayscale;
        applyColorMap(grayscale, cgrayscale, COLORMAP_JET);
        // Display or save:
        if(argc == 2) {
            imshow(format("eigenface_%d", i), cgrayscale);
        } else {
            imwrite(format("%s/eigenface_%d.png", output_folder.c_str(), i), norm_0_255(cgrayscale));
        }
    }
    // Display or save the image reconstruction at some predefined steps:
    for(int num_components = 10; num_components < 300; num_components += 15) {
        // slice the eigenvectors from the model
        Mat evs = Mat(W, Range::all(), Range(0, num_components));
        Mat projection = subspaceProject(evs, mean, images[0].reshape(1, 1));
        Mat reconstruction = subspaceReconstruct(evs, mean, projection);
        // Normalize the result:
        reconstruction = norm_0_255(reconstruction.reshape(1, images[0].rows));
        // Display or save:
        if(argc == 2) {
            imshow(format("eigenface_reconstruction_%d", num_components), reconstruction);
        } else {
            imwrite(format("%s/eigenface_reconstruction_%d.png", output_folder.c_str(), num_components), reconstruction);
        }
    }
    // Display if we are not writing to an output folder:
    if(argc == 2) {
        waitKey(0);
    }
    return 0;
}
191
modules/contrib/doc/facerec/src/facerec_fisherfaces.cpp
Normal file
@ -0,0 +1,191 @@
/*
 * Copyright (c) 2011. Philipp Wagner <bytefish[at]gmx[dot]de>.
 * Released to public domain under terms of the BSD Simplified license.
 *
 * Redistribution and use in source and binary forms, with or without
 * modification, are permitted provided that the following conditions are met:
 *   * Redistributions of source code must retain the above copyright
 *     notice, this list of conditions and the following disclaimer.
 *   * Redistributions in binary form must reproduce the above copyright
 *     notice, this list of conditions and the following disclaimer in the
 *     documentation and/or other materials provided with the distribution.
 *   * Neither the name of the organization nor the names of its contributors
 *     may be used to endorse or promote products derived from this software
 *     without specific prior written permission.
 *
 * See <http://www.opensource.org/licenses/bsd-license>
 */

#include "opencv2/core/core.hpp"
#include "opencv2/contrib/contrib.hpp"
#include "opencv2/highgui/highgui.hpp"

#include <iostream>
#include <fstream>
#include <sstream>

using namespace cv;
using namespace std;

static Mat norm_0_255(InputArray _src) {
    Mat src = _src.getMat();
    // Create and return normalized image:
    Mat dst;
    switch(src.channels()) {
    case 1:
        cv::normalize(_src, dst, 0, 255, NORM_MINMAX, CV_8UC1);
        break;
    case 3:
        cv::normalize(_src, dst, 0, 255, NORM_MINMAX, CV_8UC3);
        break;
    default:
        src.copyTo(dst);
        break;
    }
    return dst;
}

static void read_csv(const string& filename, vector<Mat>& images, vector<int>& labels, char separator = ';') {
    std::ifstream file(filename.c_str(), ifstream::in);
    if (!file) {
        string error_message = "No valid input file was given, please check the given filename.";
        CV_Error(CV_StsBadArg, error_message);
    }
    string line, path, classlabel;
    while (getline(file, line)) {
        stringstream liness(line);
        getline(liness, path, separator);
        getline(liness, classlabel);
        if(!path.empty() && !classlabel.empty()) {
            images.push_back(imread(path, 0));
            labels.push_back(atoi(classlabel.c_str()));
        }
    }
}

int main(int argc, const char *argv[]) {
    // Check for valid command line arguments, print usage
    // if no arguments were given.
    if (argc < 2) {
        cout << "usage: " << argv[0] << " <csv.ext> <output_folder> " << endl;
        exit(1);
    }
    string output_folder;
    if (argc == 3) {
        output_folder = string(argv[2]);
    }
    // Get the path to your CSV.
    string fn_csv = string(argv[1]);
    // These vectors hold the images and corresponding labels.
    vector<Mat> images;
    vector<int> labels;
    // Read in the data. This can fail if no valid
    // input filename is given.
    try {
        read_csv(fn_csv, images, labels);
    } catch (cv::Exception& e) {
        cerr << "Error opening file \"" << fn_csv << "\". Reason: " << e.msg << endl;
        // nothing more we can do
        exit(1);
    }
    // Quit if there are not enough images for this demo.
    if(images.size() <= 1) {
        string error_message = "This demo needs at least 2 images to work. Please add more images to your data set!";
        CV_Error(CV_StsError, error_message);
    }
    // Get the height from the first image. We'll need this
    // later in code to reshape the images to their original
    // size:
    int height = images[0].rows;
    // The following lines simply get the last image from
    // your dataset and remove it from the vector. This is
    // done, so that the training data (which we learn the
    // cv::FaceRecognizer on) and the test data we test
    // the model with, do not overlap.
    Mat testSample = images[images.size() - 1];
    int testLabel = labels[labels.size() - 1];
    images.pop_back();
    labels.pop_back();
    // The following lines create a Fisherfaces model for
    // face recognition and train it with the images and
    // labels read from the given CSV file.
    // If you just want to keep 10 Fisherfaces, then call
    // the factory method like this:
    //
    //      cv::createFisherFaceRecognizer(10);
    //
    // However it is not useful to discard Fisherfaces! Please
    // always try to use _all_ available Fisherfaces for
    // classification.
    //
    // If you want to create a FaceRecognizer with a
    // confidence threshold (e.g. 123.0) and use _all_
    // Fisherfaces, then call it with:
    //
    //      cv::createFisherFaceRecognizer(0, 123.0);
    //
    Ptr<FaceRecognizer> model = createFisherFaceRecognizer();
    model->train(images, labels);
    // The following line predicts the label of a given
    // test image:
    int predictedLabel = model->predict(testSample);
    //
    // To get the confidence of a prediction call the model with:
    //
    //      int predictedLabel = -1;
    //      double confidence = 0.0;
    //      model->predict(testSample, predictedLabel, confidence);
    //
    string result_message = format("Predicted class = %d / Actual class = %d.", predictedLabel, testLabel);
    cout << result_message << endl;
    // Here is how to get the eigenvalues of this Fisherfaces model:
    Mat eigenvalues = model->getMat("eigenvalues");
    // And we can do the same to display the Eigenvectors (read Fisherfaces):
    Mat W = model->getMat("eigenvectors");
    // Get the sample mean from the training data:
    Mat mean = model->getMat("mean");
    // Display or save:
    if(argc == 2) {
        imshow("mean", norm_0_255(mean.reshape(1, images[0].rows)));
    } else {
        imwrite(format("%s/mean.png", output_folder.c_str()), norm_0_255(mean.reshape(1, images[0].rows)));
    }
    // Display or save the first, at most 16 Fisherfaces:
    for (int i = 0; i < min(16, W.cols); i++) {
        string msg = format("Eigenvalue #%d = %.5f", i, eigenvalues.at<double>(i));
        cout << msg << endl;
        // get eigenvector #i
        Mat ev = W.col(i).clone();
        // Reshape to original size & normalize to [0...255] for imshow.
        Mat grayscale = norm_0_255(ev.reshape(1, height));
        // Show the image & apply a Bone colormap for better sensing.
        Mat cgrayscale;
        applyColorMap(grayscale, cgrayscale, COLORMAP_BONE);
        // Display or save:
        if(argc == 2) {
            imshow(format("fisherface_%d", i), cgrayscale);
        } else {
            imwrite(format("%s/fisherface_%d.png", output_folder.c_str(), i), norm_0_255(cgrayscale));
        }
    }
    // Display or save the image reconstruction at some predefined steps:
    for(int num_component = 0; num_component < min(16, W.cols); num_component++) {
        // Slice the Fisherface from the model:
        Mat ev = W.col(num_component);
        Mat projection = subspaceProject(ev, mean, images[0].reshape(1, 1));
        Mat reconstruction = subspaceReconstruct(ev, mean, projection);
        // Normalize the result:
        reconstruction = norm_0_255(reconstruction.reshape(1, images[0].rows));
        // Display or save:
        if(argc == 2) {
            imshow(format("fisherface_reconstruction_%d", num_component), reconstruction);
        } else {
            imwrite(format("%s/fisherface_reconstruction_%d.png", output_folder.c_str(), num_component), reconstruction);
        }
    }
    // Display if we are not writing to an output folder:
    if(argc == 2) {
        waitKey(0);
    }
    return 0;
}
155
modules/contrib/doc/facerec/src/facerec_lbph.cpp
Normal file
@ -0,0 +1,155 @@
/*
 * Copyright (c) 2011. Philipp Wagner <bytefish[at]gmx[dot]de>.
 * Released to public domain under terms of the BSD Simplified license.
 *
 * Redistribution and use in source and binary forms, with or without
 * modification, are permitted provided that the following conditions are met:
 *   * Redistributions of source code must retain the above copyright
 *     notice, this list of conditions and the following disclaimer.
 *   * Redistributions in binary form must reproduce the above copyright
 *     notice, this list of conditions and the following disclaimer in the
 *     documentation and/or other materials provided with the distribution.
 *   * Neither the name of the organization nor the names of its contributors
 *     may be used to endorse or promote products derived from this software
 *     without specific prior written permission.
 *
 * See <http://www.opensource.org/licenses/bsd-license>
 */

#include "opencv2/core/core.hpp"
#include "opencv2/contrib/contrib.hpp"
#include "opencv2/highgui/highgui.hpp"

#include <iostream>
#include <fstream>
#include <sstream>

using namespace cv;
using namespace std;

static void read_csv(const string& filename, vector<Mat>& images, vector<int>& labels, char separator = ';') {
    std::ifstream file(filename.c_str(), ifstream::in);
    if (!file) {
        string error_message = "No valid input file was given, please check the given filename.";
        CV_Error(CV_StsBadArg, error_message);
    }
    string line, path, classlabel;
    while (getline(file, line)) {
        stringstream liness(line);
        getline(liness, path, separator);
        getline(liness, classlabel);
        if(!path.empty() && !classlabel.empty()) {
            images.push_back(imread(path, 0));
            labels.push_back(atoi(classlabel.c_str()));
        }
    }
}

int main(int argc, const char *argv[]) {
    // Check for valid command line arguments, print usage
    // if no arguments were given.
    if (argc != 2) {
        cout << "usage: " << argv[0] << " <csv.ext>" << endl;
        exit(1);
    }
    // Get the path to your CSV.
    string fn_csv = string(argv[1]);
    // These vectors hold the images and corresponding labels.
    vector<Mat> images;
    vector<int> labels;
    // Read in the data. This can fail if no valid
    // input filename is given.
    try {
        read_csv(fn_csv, images, labels);
    } catch (cv::Exception& e) {
        cerr << "Error opening file \"" << fn_csv << "\". Reason: " << e.msg << endl;
        // nothing more we can do
        exit(1);
    }
    // Quit if there are not enough images for this demo.
    if(images.size() <= 1) {
        string error_message = "This demo needs at least 2 images to work. Please add more images to your data set!";
        CV_Error(CV_StsError, error_message);
    }
    // The following lines simply get the last image from
    // your dataset and remove it from the vector. This is
    // done, so that the training data (which we learn the
    // cv::FaceRecognizer on) and the test data we test
    // the model with, do not overlap.
    Mat testSample = images[images.size() - 1];
    int testLabel = labels[labels.size() - 1];
    images.pop_back();
    labels.pop_back();
    // The following lines create an LBPH model for
    // face recognition and train it with the images and
    // labels read from the given CSV file.
    //
    // The LBPHFaceRecognizer uses Extended Local Binary Patterns
    // (it's probably configurable with other operators at a later
    // point), and has the following default values:
    //
    //      radius = 1
    //      neighbors = 8
    //      grid_x = 8
    //      grid_y = 8
    //
    // So if you want an LBPH FaceRecognizer using a radius of
    // 2 and 16 neighbors, call the factory method with:
    //
    //      cv::createLBPHFaceRecognizer(2, 16);
    //
    // And if you want a threshold (e.g. 123.0) with the default values:
    //
    //      cv::createLBPHFaceRecognizer(1, 8, 8, 8, 123.0);
    //
    Ptr<FaceRecognizer> model = createLBPHFaceRecognizer();
    model->train(images, labels);
    // The following line predicts the label of a given
    // test image:
    int predictedLabel = model->predict(testSample);
    //
    // To get the confidence of a prediction call the model with:
    //
    //      int predictedLabel = -1;
    //      double confidence = 0.0;
    //      model->predict(testSample, predictedLabel, confidence);
    //
    string result_message = format("Predicted class = %d / Actual class = %d.", predictedLabel, testLabel);
    cout << result_message << endl;
    // Sometimes you'll need to get/set internal model data,
    // which isn't exposed by the public cv::FaceRecognizer.
    // Since each cv::FaceRecognizer is derived from a
    // cv::Algorithm, you can query the data.
    //
    // First we'll use it to set the threshold of the FaceRecognizer
    // to 0.0 without retraining the model. This can be useful if
    // you are evaluating the model:
    //
    model->set("threshold", 0.0);
    // Now the threshold of this model is set to 0.0. A prediction
    // now returns -1, as it's impossible to have a distance below
    // it.
    predictedLabel = model->predict(testSample);
    cout << "Predicted class = " << predictedLabel << endl;
    // Print some information about the model, as there's no cool
    // model data to display as in Eigenfaces/Fisherfaces.
    // Due to efficiency reasons the LBP images are not stored
    // within the model:
    cout << "Model Information:" << endl;
    string model_info = format("\tLBPH(radius=%i, neighbors=%i, grid_x=%i, grid_y=%i, threshold=%.2f)",
            model->getInt("radius"),
            model->getInt("neighbors"),
            model->getInt("grid_x"),
            model->getInt("grid_y"),
            model->getDouble("threshold"));
    cout << model_info << endl;
    // We could get the histograms for example:
    vector<Mat> histograms = model->getMatVector("histograms");
    // But should I really visualize it? Probably the length is interesting:
    cout << "Size of the histograms: " << histograms[0].total() << endl;
    return 0;
}
200
modules/contrib/doc/facerec/src/facerec_save_load.cpp
Normal file
@ -0,0 +1,200 @@
/*
 * Copyright (c) 2011. Philipp Wagner <bytefish[at]gmx[dot]de>.
 * Released to public domain under terms of the BSD Simplified license.
 *
 * Redistribution and use in source and binary forms, with or without
 * modification, are permitted provided that the following conditions are met:
 *   * Redistributions of source code must retain the above copyright
 *     notice, this list of conditions and the following disclaimer.
 *   * Redistributions in binary form must reproduce the above copyright
 *     notice, this list of conditions and the following disclaimer in the
 *     documentation and/or other materials provided with the distribution.
 *   * Neither the name of the organization nor the names of its contributors
 *     may be used to endorse or promote products derived from this software
 *     without specific prior written permission.
 *
 * See <http://www.opensource.org/licenses/bsd-license>
 */

#include "opencv2/contrib/contrib.hpp"
#include "opencv2/core/core.hpp"
#include "opencv2/highgui/highgui.hpp"

#include <iostream>
#include <fstream>
#include <sstream>

using namespace cv;
using namespace std;

static Mat norm_0_255(InputArray _src) {
    Mat src = _src.getMat();
    // Create and return normalized image:
    Mat dst;
    switch(src.channels()) {
    case 1:
        cv::normalize(_src, dst, 0, 255, NORM_MINMAX, CV_8UC1);
        break;
    case 3:
        cv::normalize(_src, dst, 0, 255, NORM_MINMAX, CV_8UC3);
        break;
    default:
        src.copyTo(dst);
        break;
    }
    return dst;
}

static void read_csv(const string& filename, vector<Mat>& images, vector<int>& labels, char separator = ';') {
    std::ifstream file(filename.c_str(), ifstream::in);
    if (!file) {
        string error_message = "No valid input file was given, please check the given filename.";
        CV_Error(CV_StsBadArg, error_message);
    }
    string line, path, classlabel;
    while (getline(file, line)) {
        stringstream liness(line);
        getline(liness, path, separator);
        getline(liness, classlabel);
        if(!path.empty() && !classlabel.empty()) {
            images.push_back(imread(path, 0));
            labels.push_back(atoi(classlabel.c_str()));
        }
    }
}

int main(int argc, const char *argv[]) {
    // Check for valid command line arguments, print usage
    // if no arguments were given.
    if (argc < 2) {
        cout << "usage: " << argv[0] << " <csv.ext> <output_folder> " << endl;
        exit(1);
    }
    string output_folder;
    if (argc == 3) {
        output_folder = string(argv[2]);
    }
    // Get the path to your CSV.
    string fn_csv = string(argv[1]);
    // These vectors hold the images and corresponding labels.
    vector<Mat> images;
    vector<int> labels;
    // Read in the data. This can fail if no valid
    // input filename is given.
    try {
        read_csv(fn_csv, images, labels);
    } catch (cv::Exception& e) {
        cerr << "Error opening file \"" << fn_csv << "\". Reason: " << e.msg << endl;
        // nothing more we can do
        exit(1);
    }
    // Quit if there are not enough images for this demo.
    if(images.size() <= 1) {
        string error_message = "This demo needs at least 2 images to work. Please add more images to your data set!";
        CV_Error(CV_StsError, error_message);
    }
    // Get the height from the first image. We'll need this
    // later in code to reshape the images to their original
    // size:
    int height = images[0].rows;
    // The following lines simply get the last image from
    // your dataset and remove it from the vector. This is
    // done, so that the training data (which we learn the
    // cv::FaceRecognizer on) and the test data we test
    // the model with, do not overlap.
    Mat testSample = images[images.size() - 1];
    int testLabel = labels[labels.size() - 1];
    images.pop_back();
    labels.pop_back();
    // The following lines create an Eigenfaces model for
    // face recognition and train it with the images and
    // labels read from the given CSV file.
    // This here is a full PCA, if you just want to keep
    // 10 principal components (read Eigenfaces), then call
    // the factory method like this:
    //
    //      cv::createEigenFaceRecognizer(10);
    //
    // If you want to create a FaceRecognizer with a
    // confidence threshold (e.g. 123.0), call it with:
    //
    //      cv::createEigenFaceRecognizer(10, 123.0);
    //
    // If you want to use _all_ Eigenfaces and have a threshold,
    // then call the method like this:
    //
    //      cv::createEigenFaceRecognizer(0, 123.0);
    //
    Ptr<FaceRecognizer> model0 = createEigenFaceRecognizer();
    model0->train(images, labels);
    // Save the model to eigenfaces_at.yml:
    model0->save("eigenfaces_at.yml");
    //
    //
    // Now create a new Eigenfaces Recognizer
    //
    Ptr<FaceRecognizer> model1 = createEigenFaceRecognizer();
    model1->load("eigenfaces_at.yml");
    // The following line predicts the label of a given
    // test image:
    int predictedLabel = model1->predict(testSample);
    //
    // To get the confidence of a prediction call the model with:
    //
    //      int predictedLabel = -1;
    //      double confidence = 0.0;
    //      model->predict(testSample, predictedLabel, confidence);
    //
    string result_message = format("Predicted class = %d / Actual class = %d.", predictedLabel, testLabel);
    cout << result_message << endl;
    // Here is how to get the eigenvalues of this Eigenfaces model:
    Mat eigenvalues = model1->getMat("eigenvalues");
    // And we can do the same to display the Eigenvectors (read Eigenfaces):
    Mat W = model1->getMat("eigenvectors");
    // Get the sample mean from the training data:
    Mat mean = model1->getMat("mean");
    // Display or save:
    if(argc == 2) {
        imshow("mean", norm_0_255(mean.reshape(1, images[0].rows)));
    } else {
        imwrite(format("%s/mean.png", output_folder.c_str()), norm_0_255(mean.reshape(1, images[0].rows)));
    }
    // Display or save the Eigenfaces:
    for (int i = 0; i < min(10, W.cols); i++) {
        string msg = format("Eigenvalue #%d = %.5f", i, eigenvalues.at<double>(i));
        cout << msg << endl;
        // get eigenvector #i
        Mat ev = W.col(i).clone();
        // Reshape to original size & normalize to [0...255] for imshow.
        Mat grayscale = norm_0_255(ev.reshape(1, height));
        // Show the image & apply a Jet colormap for better sensing.
        Mat cgrayscale;
        applyColorMap(grayscale, cgrayscale, COLORMAP_JET);
        // Display or save:
        if(argc == 2) {
            imshow(format("eigenface_%d", i), cgrayscale);
        } else {
            imwrite(format("%s/eigenface_%d.png", output_folder.c_str(), i), norm_0_255(cgrayscale));
        }
    }
    // Display or save the image reconstruction at some predefined steps:
    for(int num_components = 10; num_components < 300; num_components += 15) {
        // slice the eigenvectors from the model
        Mat evs = Mat(W, Range::all(), Range(0, num_components));
        Mat projection = subspaceProject(evs, mean, images[0].reshape(1, 1));
        Mat reconstruction = subspaceReconstruct(evs, mean, projection);
        // Normalize the result:
        reconstruction = norm_0_255(reconstruction.reshape(1, images[0].rows));
        // Display or save:
        if(argc == 2) {
            imshow(format("eigenface_reconstruction_%d", num_components), reconstruction);
        } else {
            imwrite(format("%s/eigenface_reconstruction_%d.png", output_folder.c_str(), num_components), reconstruction);
        }
    }
    // Display if we are not writing to an output folder:
    if(argc == 2) {
        waitKey(0);
    }
    return 0;
}
152
modules/contrib/doc/facerec/src/facerec_video.cpp
Normal file
@ -0,0 +1,152 @@
/*
 * Copyright (c) 2011. Philipp Wagner <bytefish[at]gmx[dot]de>.
 * Released to public domain under terms of the BSD Simplified license.
 *
 * Redistribution and use in source and binary forms, with or without
 * modification, are permitted provided that the following conditions are met:
 *   * Redistributions of source code must retain the above copyright
 *     notice, this list of conditions and the following disclaimer.
 *   * Redistributions in binary form must reproduce the above copyright
 *     notice, this list of conditions and the following disclaimer in the
 *     documentation and/or other materials provided with the distribution.
 *   * Neither the name of the organization nor the names of its contributors
 *     may be used to endorse or promote products derived from this software
 *     without specific prior written permission.
 *
 * See <http://www.opensource.org/licenses/bsd-license>
 */

#include "opencv2/core/core.hpp"
#include "opencv2/contrib/contrib.hpp"
#include "opencv2/highgui/highgui.hpp"
#include "opencv2/imgproc/imgproc.hpp"
#include "opencv2/objdetect/objdetect.hpp"

#include <iostream>
#include <fstream>
#include <sstream>

using namespace cv;
using namespace std;

static void read_csv(const string& filename, vector<Mat>& images, vector<int>& labels, char separator = ';') {
    std::ifstream file(filename.c_str(), ifstream::in);
    if (!file) {
        string error_message = "No valid input file was given, please check the given filename.";
        CV_Error(CV_StsBadArg, error_message);
    }
    string line, path, classlabel;
    while (getline(file, line)) {
        stringstream liness(line);
        getline(liness, path, separator);
        getline(liness, classlabel);
        if(!path.empty() && !classlabel.empty()) {
            images.push_back(imread(path, 0));
            labels.push_back(atoi(classlabel.c_str()));
        }
    }
}

int main(int argc, const char *argv[]) {
    // Check for valid command line arguments, print usage
    // if no arguments were given.
    if (argc != 4) {
        cout << "usage: " << argv[0] << " </path/to/haar_cascade> </path/to/csv.ext> <device id>" << endl;
        cout << "\t </path/to/haar_cascade> -- Path to the Haar Cascade for face detection." << endl;
        cout << "\t </path/to/csv.ext> -- Path to the CSV file with the face database." << endl;
        cout << "\t <device id> -- The webcam device id to grab frames from." << endl;
        exit(1);
    }
    // Get the path to the Haar cascade, the path to the CSV and the device id:
    string fn_haar = string(argv[1]);
    string fn_csv = string(argv[2]);
    int deviceId = atoi(argv[3]);
    // These vectors hold the images and corresponding labels:
    vector<Mat> images;
    vector<int> labels;
    // Read in the data (fails if no valid input filename is given, but you'll get an error message):
    try {
        read_csv(fn_csv, images, labels);
    } catch (cv::Exception& e) {
        cerr << "Error opening file \"" << fn_csv << "\". Reason: " << e.msg << endl;
        // nothing more we can do
        exit(1);
    }
    // Get the width and height from the first image. We'll need this
    // later in code to reshape the images to their original
    // size AND we need to reshape incoming faces to this size:
    int im_width = images[0].cols;
    int im_height = images[0].rows;
    // Create a FaceRecognizer and train it on the given images:
    Ptr<FaceRecognizer> model = createFisherFaceRecognizer();
    model->train(images, labels);
    // That's it for learning the Face Recognition model. You now
    // need to create the classifier for the task of Face Detection.
    // We are going to use the haar cascade you have specified in the
    // command line arguments:
    //
    CascadeClassifier haar_cascade;
    haar_cascade.load(fn_haar);
    // Get a handle to the Video device:
    VideoCapture cap(deviceId);
    // Check if we can use this device at all:
    if(!cap.isOpened()) {
        cerr << "Capture Device ID " << deviceId << " cannot be opened." << endl;
        return -1;
    }
    // Holds the current frame from the Video device:
    Mat frame;
    for(;;) {
        cap >> frame;
        // Clone the current frame:
        Mat original = frame.clone();
        // Convert the current frame to grayscale:
        Mat gray;
        cvtColor(original, gray, CV_BGR2GRAY);
        // Find the faces in the frame:
        vector< Rect_<int> > faces;
        haar_cascade.detectMultiScale(gray, faces);
        // At this point you have the position of the faces in
        // faces. Now we'll get the faces, make a prediction and
        // annotate it in the video. Cool or what?
        for(size_t i = 0; i < faces.size(); i++) {
            // Process face by face:
            Rect face_i = faces[i];
            // Crop the face from the image. So simple with OpenCV C++:
            Mat face = gray(face_i);
            // Resizing the face is necessary for Eigenfaces and Fisherfaces. You can easily
            // verify this, by reading through the face recognition tutorial coming with OpenCV.
            // Resizing IS NOT NEEDED for Local Binary Patterns Histograms, so preparing the
            // input data really depends on the algorithm used.
            //
            // I strongly encourage you to play around with the algorithms. See which work best
            // in your scenario, LBPH should always be a contender for robust face recognition.
            //
            // Since I am showing the Fisherfaces algorithm here, I also show how to resize the
            // face you have just found:
            Mat face_resized;
            cv::resize(face, face_resized, Size(im_width, im_height), 1.0, 1.0, INTER_CUBIC);
            // Now perform the prediction, see how easy that is:
            int prediction = model->predict(face_resized);
            // And finally write all we've found out to the original image!
            // First of all draw a green rectangle around the detected face:
            rectangle(original, face_i, CV_RGB(0, 255, 0), 1);
            // Create the text we will annotate the box with:
            string box_text = format("Prediction = %d", prediction);
            // Calculate the position for annotated text (make sure we don't
            // put illegal values in there):
            int pos_x = std::max(face_i.tl().x - 10, 0);
            int pos_y = std::max(face_i.tl().y - 10, 0);
            // And now put it into the image:
            putText(original, box_text, Point(pos_x, pos_y), FONT_HERSHEY_PLAIN, 1.0, CV_RGB(0, 255, 0), 2);
        }
        // Show the result:
        imshow("face_recognizer", original);
        // And grab the keypress (waiting 20 ms):
        char key = (char) waitKey(20);
        // Exit this loop on escape:
        if(key == 27)
            break;
    }
    return 0;
}
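// A possible way to compile and run this demo (a sketch only; the exact
// compiler flags, cascade file and CSV path depend on your installation
// and your data -- the file names below are examples, not shipped files):
//
//   g++ -o facerec_video facerec_video.cpp `pkg-config --cflags --libs opencv`
//   ./facerec_video haarcascade_frontalface_default.xml faces.csv 0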
@ -0,0 +1,233 @@
|
||||
Gender Classification with OpenCV
=================================

.. contents:: Table of Contents
   :depth: 3

Introduction
------------

A lot of people interested in face recognition also want to know how to perform image classification tasks like:

* Gender Classification (Gender Detection)
* Emotion Classification (Emotion Detection)
* Glasses Classification (Glasses Detection)
* ...

This has become very, very easy with the new :ocv:class:`FaceRecognizer` class. In this tutorial I'll show you how to perform gender classification with OpenCV on a set of face images. You'll also learn how to align your images to enhance the recognition results. If you want to do emotion classification instead of gender classification, all you need to do is update your training data and the configuration you pass to the demo.

Prerequisites
--------------

For gender classification of faces, you'll need some images of male and female faces first. I've decided to search for faces of celebrities using `Google Images <http://www.google.com/images>`_ with the faces filter turned on (my god, they have great algorithms at `Google <http://www.google.com>`_!). My database has 8 male and 5 female subjects, each with 10 images. Here are the names, if you don't know who to search for:

* Angelina Jolie
* Arnold Schwarzenegger
* Brad Pitt
* Emma Watson
* George Clooney
* Jennifer Lopez
* Johnny Depp
* Justin Timberlake
* Katy Perry
* Keanu Reeves
* Naomi Watts
* Patrick Stewart
* Tom Cruise

Once you have acquired some images, you'll need to read them in. In the demo I have decided to read the images from a very simple CSV file. Why? Because it's the simplest platform-independent approach I can think of. However, if you know a simpler solution, please ping me about it. Basically all the CSV file needs to contain are lines composed of a ``filename``, followed by a ``;``, followed by the ``label`` (as an *integer number*), making up a line like this:

.. code-block:: none

   /path/to/image.ext;0

Let's dissect the line. ``/path/to/image.ext`` is the path to an image, probably something like this if you are on Windows: ``C:/faces/person0/image0.jpg``. Then there is the separator ``;``, and finally we assign the label ``0`` to the image. Think of the label as the subject (the person, the gender or whatever comes to your mind). In the gender classification scenario, the label is the gender of the person. I'll give the label ``0`` to *male* persons and the label ``1`` to *female* subjects. So my CSV file looks like this:

.. code-block:: none

   /home/philipp/facerec/data/gender/male/keanu_reeves/keanu_reeves_01.jpg;0
   /home/philipp/facerec/data/gender/male/keanu_reeves/keanu_reeves_02.jpg;0
   /home/philipp/facerec/data/gender/male/keanu_reeves/keanu_reeves_03.jpg;0
   ...
   /home/philipp/facerec/data/gender/female/katy_perry/katy_perry_01.jpg;1
   /home/philipp/facerec/data/gender/female/katy_perry/katy_perry_02.jpg;1
   /home/philipp/facerec/data/gender/female/katy_perry/katy_perry_03.jpg;1
   ...
   /home/philipp/facerec/data/gender/male/brad_pitt/brad_pitt_01.jpg;0
   /home/philipp/facerec/data/gender/male/brad_pitt/brad_pitt_02.jpg;0
   /home/philipp/facerec/data/gender/male/brad_pitt/brad_pitt_03.jpg;0
   ...
   /home/philipp/facerec/data/gender/female/emma_watson/emma_watson_08.jpg;1
   /home/philipp/facerec/data/gender/female/emma_watson/emma_watson_02.jpg;1
   /home/philipp/facerec/data/gender/female/emma_watson/emma_watson_03.jpg;1

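Reading such a file back in C++ is only a few lines of parsing. Here is a minimal sketch (the function name ``read_csv`` and the error message are illustrative; the demo ships with an equivalent helper):

.. code-block:: cpp

   #include "opencv2/core/core.hpp"
   #include "opencv2/highgui/highgui.hpp"

   #include <cstdlib>
   #include <fstream>
   #include <sstream>

   using namespace cv;
   using namespace std;

   // Reads a "filename;label" CSV file into images and labels:
   static void read_csv(const string& filename, vector<Mat>& images, vector<int>& labels) {
       ifstream file(filename.c_str(), ifstream::in);
       if (!file)
           CV_Error(CV_StsBadArg, "No valid input file was given.");
       string line, path, classlabel;
       while (getline(file, line)) {
           stringstream liness(line);
           getline(liness, path, ';');
           getline(liness, classlabel);
           if (!path.empty() && !classlabel.empty()) {
               images.push_back(imread(path, CV_LOAD_IMAGE_GRAYSCALE));
               labels.push_back(atoi(classlabel.c_str()));
           }
       }
   }
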
All images for this example were chosen to have a frontal face perspective. They have been cropped, scaled and rotated to be aligned at the eyes, just like this set of George Clooney images:

.. image:: ../img/tutorial/gender_classification/clooney_set.png
   :align: center

You really don't want to create the CSV file by hand. And you really don't want to scale, rotate & translate the images manually. I have prepared two Python scripts for you, ``create_csv.py`` and ``crop_face.py``; you can find them in the ``src`` folder coming with this documentation. You'll see how to use them in the :ref:`appendix`.

Fisherfaces for Gender Classification
--------------------------------------

If you want to decide whether a person is *male* or *female*, you have to learn the discriminative features of both classes. The Eigenfaces method is based on Principal Component Analysis, which is an unsupervised statistical model and not suitable for this task. Please see the Face Recognition tutorial for insights into the algorithms. The Fisherfaces method instead yields a class-specific linear projection, so it is much better suited for the gender classification task. `http://www.bytefish.de/blog/gender_classification <http://www.bytefish.de/blog/gender_classification>`_ shows the recognition rate of the Fisherfaces method for gender classification.

The Fisherfaces method achieves a 98% recognition rate in a subject-independent cross-validation. A subject-independent cross-validation means *images of the person under test are never used for learning the model*. And could you believe it: you can simply use the ``facerec_fisherfaces`` demo that's included in OpenCV.

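At its core, the demo boils down to very few calls. A minimal sketch, assuming ``images`` and ``labels`` were filled as described above and ``testSample`` holds one held-out face image:

.. code-block:: cpp

   // Create a Fisherfaces model and train it on the gender data:
   Ptr<FaceRecognizer> model = createFisherFaceRecognizer();
   model->train(images, labels);
   // Predict the gender of a new sample (0 = male, 1 = female):
   int predicted = model->predict(testSample);
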
Fisherfaces in OpenCV
---------------------

The source code for the demo is available in the ``samples/cpp`` folder of your OpenCV installation. If you have built OpenCV with the samples turned on, chances are good you have the executable already.

.. literalinclude:: ../src/facerec_fisherfaces.cpp
   :language: cpp
   :linenos:

Running the Demo
----------------

If you are on Windows, simply start the demo by running (from the command line):

.. code-block:: none

   facerec_fisherfaces.exe C:/path/to/your/csv.ext

If you are on Linux, simply start the demo by running:

.. code-block:: none

   ./facerec_fisherfaces /path/to/your/csv.ext

If you don't want to display the images but save them instead, pass the desired path to the demo. It works like this on Windows:

.. code-block:: none

   facerec_fisherfaces.exe C:/path/to/your/csv.ext C:/path/to/store/results/at

And on Linux:

.. code-block:: none

   ./facerec_fisherfaces /path/to/your/csv.ext /path/to/store/results/at

Results
-------

If you run the program with your CSV file as a parameter, you'll see the Fisherface that separates between male and female images. I've decided to apply a Jet colormap in this demo, so you can see which features the method identifies:

.. image:: ../img/tutorial/gender_classification/fisherface_0.png

The demo also shows the average face of the male and female training images you have passed:

.. image:: ../img/tutorial/gender_classification/mean.png

Moreover, the demo should yield the prediction for the correct gender:

.. code-block:: none

   Predicted class = 1 / Actual class = 1.

And for advanced users, I have also shown the eigenvalue for the Fisherface:

.. code-block:: none

   Eigenvalue #0 = 152.49493

And the Fisherfaces reconstruction:

.. image:: ../img/tutorial/gender_classification/fisherface_reconstruction_0.png

I hope this gives you an idea of how to approach gender classification and the other image classification tasks.

.. _appendix:

Appendix
--------

Creating the CSV File
+++++++++++++++++++++

You don't really want to create the CSV file by hand. I have prepared a little Python script ``create_csv.py`` for you (you find it at ``/src/create_csv.py``, coming with this tutorial) that automatically creates a CSV file. If you have your images in a hierarchy like this (``/basepath/<subject>/<image.ext>``):

.. code-block:: none

   philipp@mango:~/facerec/data/at$ tree
   .
   |-- s1
   |   |-- 1.pgm
   |   |-- ...
   |   |-- 10.pgm
   |-- s2
   |   |-- 1.pgm
   |   |-- ...
   |   |-- 10.pgm
   ...
   |-- s40
   |   |-- 1.pgm
   |   |-- ...
   |   |-- 10.pgm

Then simply call ``create_csv.py`` with the path to the folder, just like this (you could redirect the output to a file to save it):

.. code-block:: none

   philipp@mango:~/facerec/data$ python create_csv.py
   at/s13/2.pgm;0
   at/s13/7.pgm;0
   at/s13/6.pgm;0
   at/s13/9.pgm;0
   at/s13/5.pgm;0
   at/s13/3.pgm;0
   at/s13/4.pgm;0
   at/s13/10.pgm;0
   at/s13/8.pgm;0
   at/s13/1.pgm;0
   at/s17/2.pgm;1
   at/s17/7.pgm;1
   at/s17/6.pgm;1
   at/s17/9.pgm;1
   at/s17/5.pgm;1
   at/s17/3.pgm;1
   [...]

Here is the script, if you can't find it:

.. literalinclude:: ../src/create_csv.py
   :language: python
   :linenos:

Aligning Face Images
++++++++++++++++++++

An accurate alignment of your image data is especially important in tasks like emotion detection, where you need as much detail as possible. Believe me... you don't want to do this by hand. So I've prepared a tiny Python script for you. The code is really easy to use. To scale, rotate and crop the face image, you just need to call *CropFace(image, eye_left, eye_right, offset_pct, dest_sz)*, where:

* *eye_left* is the position of the left eye
* *eye_right* is the position of the right eye
* *offset_pct* is the percent of the image you want to keep next to the eyes (horizontal, vertical direction)
* *dest_sz* is the size of the output image

If you are using the same *offset_pct* and *dest_sz* for your images, they are all aligned at the eyes.

.. literalinclude:: ../src/crop_face.py
   :language: python
   :linenos:

Imagine we are given `this photo of Arnold Schwarzenegger <http://en.wikipedia.org/wiki/File:Arnold_Schwarzenegger_edit%28ws%29.jpg>`_, which is under a Public Domain license. The (x,y)-position of the eyes is approximately *(252,364)* for the left and *(420,366)* for the right eye. Now you only need to define the horizontal offset, the vertical offset and the size your scaled, rotated & cropped face should have.

Here are some examples:

+---------------------------------+----------------------------------------------------------------------------+
| Configuration                   | Cropped, Scaled, Rotated Face                                              |
+=================================+============================================================================+
| 0.1 (10%), 0.1 (10%), (200,200) | .. image:: ../img/tutorial/gender_classification/arnie_10_10_200_200.jpg  |
+---------------------------------+----------------------------------------------------------------------------+
| 0.2 (20%), 0.2 (20%), (200,200) | .. image:: ../img/tutorial/gender_classification/arnie_20_20_200_200.jpg  |
+---------------------------------+----------------------------------------------------------------------------+
| 0.3 (30%), 0.3 (30%), (200,200) | .. image:: ../img/tutorial/gender_classification/arnie_30_30_200_200.jpg  |
+---------------------------------+----------------------------------------------------------------------------+
| 0.2 (20%), 0.2 (20%), (70,70)   | .. image:: ../img/tutorial/gender_classification/arnie_20_20_70_70.jpg    |
+---------------------------------+----------------------------------------------------------------------------+

modules/contrib/doc/facerec/tutorial/facerec_save_load.rst
@ -0,0 +1,50 @@
Saving and Loading a FaceRecognizer
===================================

Introduction
------------

Saving and loading a :ocv:class:`FaceRecognizer` is very important. Training a FaceRecognizer can be a very time-intensive task, plus it's often impossible to ship the whole face database to the user of your product.

The task of saving and loading a FaceRecognizer is easy: you only have to call :ocv:func:`FaceRecognizer::load` for loading and :ocv:func:`FaceRecognizer::save` for saving a :ocv:class:`FaceRecognizer`.

I'll adapt the Eigenfaces example from the :doc:`/facerec_tutorial`: imagine we want to learn the Eigenfaces of the `AT&T Facedatabase <http://www.cl.cam.ac.uk/research/dtg/attarchive/facedatabase.html>`_, store the model to a YAML file and then load it again.

From the loaded model, we'll get a prediction, show the mean, the eigenfaces and the image reconstruction.

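In code, the round trip is just a pair of calls. A minimal sketch, assuming ``images`` and ``labels`` hold the AT&T data and ``testSample`` is a query image:

.. code-block:: cpp

   // Train a model and serialize its state to a YAML file:
   Ptr<FaceRecognizer> model = createEigenFaceRecognizer();
   model->train(images, labels);
   model->save("eigenfaces_at.yml");

   // Later (or in another process): restore the model and use it.
   Ptr<FaceRecognizer> loaded = createEigenFaceRecognizer();
   loaded->load("eigenfaces_at.yml");
   int predicted = loaded->predict(testSample);
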
Using FaceRecognizer::save and FaceRecognizer::load
-----------------------------------------------------

.. literalinclude:: ../src/facerec_save_load.cpp
   :language: cpp
   :linenos:

Results
-------

``eigenfaces_at.yml`` then contains the model state; we'll simply look at the first 10 lines with ``head eigenfaces_at.yml``:

.. code-block:: none

   philipp@mango:~/github/libfacerec-build$ head eigenfaces_at.yml
   %YAML:1.0
   num_components: 399
   mean: !!opencv-matrix
      rows: 1
      cols: 10304
      dt: d
      data: [ 8.5558897243107765e+01, 8.5511278195488714e+01,
          8.5854636591478695e+01, 8.5796992481203006e+01,
          8.5952380952380949e+01, 8.6162907268170414e+01,
          8.6082706766917283e+01, 8.5776942355889716e+01,

And here is the reconstruction, which is the same as the original:

.. image:: ../img/eigenface_reconstruction_opencv.png
   :align: center

@ -0,0 +1,203 @@
Face Recognition in Videos with OpenCV
=======================================

.. contents:: Table of Contents
   :depth: 3

Introduction
------------

Whenever you hear the term *face recognition*, you instantly think of surveillance in videos. So performing face recognition in videos (e.g. from a webcam) is one of the most requested features I got. I have heard your cries, so here it is. For face detection we'll use the awesome :ocv:class:`CascadeClassifier`, and we'll use :ocv:class:`FaceRecognizer` for face recognition. This example uses the Fisherfaces method for face recognition, because it is robust against large changes in illumination.

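The two building blocks plug together in just a few lines. Here is a minimal sketch of the setup the demo performs (the cascade path, ``images`` and ``labels`` are placeholders you fill in as described below):

.. code-block:: cpp

   // Load a Haar-Cascade for finding faces in the frames:
   CascadeClassifier haar_cascade;
   haar_cascade.load("haarcascade_frontalface_default.xml");

   // Learn a Fisherfaces model from your training images:
   Ptr<FaceRecognizer> model = createFisherFaceRecognizer();
   model->train(images, labels);

   // Grab frames from the first video device:
   VideoCapture cap(0);
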
Here is what the final application looks like. As you can see, I am only writing the id of the recognized person above the detected face (by the way, this id is Arnold Schwarzenegger for my data set):

.. image:: ../img/tutorial/facerec_video/facerec_video.png
   :align: center
   :scale: 70%

This demo is a basis for your research, and it shows you how to implement face recognition in videos. You probably want to extend the application and make it more sophisticated: you could combine the id with the name (see the sketch below), show the confidence of the prediction, recognize the emotion... and so on. But before you send mails asking what this Haar-Cascade thing is or what a CSV is: make sure you have read the entire tutorial. It's all explained in here. If you just want to scroll down to the code, please note:

* The available Haar-Cascades are located in the ``data`` folder of your OpenCV installation! One of the available Haar-Cascades for face detection is for example ``data/haarcascades/haarcascade_frontalface_default.xml``.

I encourage you to experiment with the application. Play around with the available :ocv:class:`FaceRecognizer` implementations, try the available cascades in OpenCV and see if you can improve your results!

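For example, combining the id with a name only touches the annotation step of the demo. A hedged sketch (the names are from my training set, yours will differ; it needs ``#include <map>``):

.. code-block:: cpp

   // Map integer labels to readable names (illustrative values):
   std::map<int, std::string> names;
   names[0] = "Keanu Reeves";
   names[1] = "Katy Perry";
   // ...
   // Then annotate with the name instead of the raw id:
   string box_text = format("Prediction = %s", names[prediction].c_str());
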
Prerequisites
--------------

You want to do face recognition, so you need some face images to learn a :ocv:class:`FaceRecognizer` on. I have decided to reuse the images from the gender classification example: :doc:`../facerec_gender_classification`.

I have the following celebrities in my training data set:

* Angelina Jolie
* Arnold Schwarzenegger
* Brad Pitt
* George Clooney
* Johnny Depp
* Justin Timberlake
* Katy Perry
* Keanu Reeves
* Patrick Stewart
* Tom Cruise

In the demo I have decided to read the images from a very simple CSV file. Why? Because it's the simplest platform-independent approach I can think of. However, if you know a simpler solution, please ping me about it. Basically all the CSV file needs to contain are lines composed of a ``filename``, followed by a ``;``, followed by the ``label`` (as an *integer number*), making up a line like this:

.. code-block:: none

   /path/to/image.ext;0

Let's dissect the line. ``/path/to/image.ext`` is the path to an image, probably something like this if you are on Windows: ``C:/faces/person0/image0.jpg``. Then there is the separator ``;``, and finally we assign the label ``0`` to the image. Think of the label as the subject (the person, the gender or whatever comes to your mind). In the face recognition scenario, the label is the person this image belongs to. In the gender classification scenario, the label is the gender of the person. So my CSV file looks like this:

.. code-block:: none

   /home/philipp/facerec/data/c/keanu_reeves/keanu_reeves_01.jpg;0
   /home/philipp/facerec/data/c/keanu_reeves/keanu_reeves_02.jpg;0
   /home/philipp/facerec/data/c/keanu_reeves/keanu_reeves_03.jpg;0
   ...
   /home/philipp/facerec/data/c/katy_perry/katy_perry_01.jpg;1
   /home/philipp/facerec/data/c/katy_perry/katy_perry_02.jpg;1
   /home/philipp/facerec/data/c/katy_perry/katy_perry_03.jpg;1
   ...
   /home/philipp/facerec/data/c/brad_pitt/brad_pitt_01.jpg;2
   /home/philipp/facerec/data/c/brad_pitt/brad_pitt_02.jpg;2
   /home/philipp/facerec/data/c/brad_pitt/brad_pitt_03.jpg;2
   ...
   /home/philipp/facerec/data/c1/crop_arnold_schwarzenegger/crop_08.jpg;6
   /home/philipp/facerec/data/c1/crop_arnold_schwarzenegger/crop_05.jpg;6
   /home/philipp/facerec/data/c1/crop_arnold_schwarzenegger/crop_02.jpg;6
   /home/philipp/facerec/data/c1/crop_arnold_schwarzenegger/crop_03.jpg;6

All images for this example were chosen to have a frontal face perspective. They have been cropped, scaled and rotated to be aligned at the eyes, just like this set of George Clooney images:

.. image:: ../img/tutorial/gender_classification/clooney_set.png
   :align: center

Face Recognition from Videos
-----------------------------

The source code for the demo is available in the ``samples/cpp`` folder of your OpenCV installation. If you have built OpenCV with the samples turned on, chances are good you have the executable already. This demo uses the :ocv:class:`CascadeClassifier`:

.. literalinclude:: ../src/facerec_video.cpp
   :language: cpp
   :linenos:

Running the Demo
----------------

You'll need:

* The path to a valid Haar-Cascade for detecting a face with a :ocv:class:`CascadeClassifier`.
* The path to a valid CSV file for learning a :ocv:class:`FaceRecognizer`.
* A webcam and its device id (you don't know the device id? Simply start from 0 and count up, or use the probe sketch below).

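If in doubt about the device id, a quick check like the following can help (purely illustrative; device enumeration varies by platform, and it assumes the usual ``cv``/``std`` namespaces and ``<iostream>``):

.. code-block:: cpp

   // Try the first few device ids and report which ones open:
   for (int id = 0; id < 4; id++) {
       VideoCapture cap(id);
       if (cap.isOpened())
           cout << "Device id " << id << " is available." << endl;
   }
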
If you are on Windows, simply start the demo by running (from the command line):

.. code-block:: none

   facerec_video.exe <C:/path/to/your/haar_cascade.xml> <C:/path/to/your/csv.ext> <video device>

If you are on Linux, simply start the demo by running:

.. code-block:: none

   ./facerec_video </path/to/your/haar_cascade.xml> </path/to/your/csv.ext> <video device>

So if the Haar-Cascade is at ``C:/opencv/data/haarcascades/haarcascade_frontalface_default.xml``, the CSV file at ``C:/facerec/data/celebrities.txt`` and I have a webcam with device id ``1``, then I would call the demo with:

.. code-block:: none

   facerec_video.exe C:/opencv/data/haarcascades/haarcascade_frontalface_default.xml C:/facerec/data/celebrities.txt 1

Results
-------

Enjoy!

Appendix
--------

Creating the CSV File
+++++++++++++++++++++

You don't really want to create the CSV file by hand. I have prepared a little Python script ``create_csv.py`` for you (you find it at ``/src/create_csv.py``, coming with this tutorial) that automatically creates a CSV file. If you have your images in a hierarchy like this (``/basepath/<subject>/<image.ext>``):

.. code-block:: none

   philipp@mango:~/facerec/data/at$ tree
   .
   |-- s1
   |   |-- 1.pgm
   |   |-- ...
   |   |-- 10.pgm
   |-- s2
   |   |-- 1.pgm
   |   |-- ...
   |   |-- 10.pgm
   ...
   |-- s40
   |   |-- 1.pgm
   |   |-- ...
   |   |-- 10.pgm

Then simply call ``create_csv.py`` with the path to the folder, just like this (you could redirect the output to a file to save it):

.. code-block:: none

   philipp@mango:~/facerec/data$ python create_csv.py
   at/s13/2.pgm;0
   at/s13/7.pgm;0
   at/s13/6.pgm;0
   at/s13/9.pgm;0
   at/s13/5.pgm;0
   at/s13/3.pgm;0
   at/s13/4.pgm;0
   at/s13/10.pgm;0
   at/s13/8.pgm;0
   at/s13/1.pgm;0
   at/s17/2.pgm;1
   at/s17/7.pgm;1
   at/s17/6.pgm;1
   at/s17/9.pgm;1
   at/s17/5.pgm;1
   at/s17/3.pgm;1
   [...]

Here is the script, if you can't find it:

.. literalinclude:: ../src/create_csv.py
   :language: python
   :linenos:

Aligning Face Images
++++++++++++++++++++

An accurate alignment of your image data is especially important in tasks like emotion detection, where you need as much detail as possible. Believe me... you don't want to do this by hand. So I've prepared a tiny Python script for you. The code is really easy to use. To scale, rotate and crop the face image, you just need to call *CropFace(image, eye_left, eye_right, offset_pct, dest_sz)*, where:

* *eye_left* is the position of the left eye
* *eye_right* is the position of the right eye
* *offset_pct* is the percent of the image you want to keep next to the eyes (horizontal, vertical direction)
* *dest_sz* is the size of the output image

If you are using the same *offset_pct* and *dest_sz* for your images, they are all aligned at the eyes.

.. literalinclude:: ../src/crop_face.py
   :language: python
   :linenos:

Imagine we are given `this photo of Arnold Schwarzenegger <http://en.wikipedia.org/wiki/File:Arnold_Schwarzenegger_edit%28ws%29.jpg>`_, which is under a Public Domain license. The (x,y)-position of the eyes is approximately *(252,364)* for the left and *(420,366)* for the right eye. Now you only need to define the horizontal offset, the vertical offset and the size your scaled, rotated & cropped face should have.

Here are some examples:

+---------------------------------+----------------------------------------------------------------------------+
| Configuration                   | Cropped, Scaled, Rotated Face                                              |
+=================================+============================================================================+
| 0.1 (10%), 0.1 (10%), (200,200) | .. image:: ../img/tutorial/gender_classification/arnie_10_10_200_200.jpg  |
+---------------------------------+----------------------------------------------------------------------------+
| 0.2 (20%), 0.2 (20%), (200,200) | .. image:: ../img/tutorial/gender_classification/arnie_20_20_200_200.jpg  |
+---------------------------------+----------------------------------------------------------------------------+
| 0.3 (30%), 0.3 (30%), (200,200) | .. image:: ../img/tutorial/gender_classification/arnie_30_30_200_200.jpg  |
+---------------------------------+----------------------------------------------------------------------------+
| 0.2 (20%), 0.2 (20%), (70,70)   | .. image:: ../img/tutorial/gender_classification/arnie_20_20_70_70.jpg    |
+---------------------------------+----------------------------------------------------------------------------+