How to Use Background Subtraction Methods {#tutorial_background_subtraction}
=========================================
- Background subtraction (BS) is a common and widely used technique for generating a foreground
mask (namely, a binary image containing the pixels belonging to moving objects in the scene) by
using static cameras.
- As the name suggests, BS calculates the foreground mask performing a subtraction between the
current frame and a background model, containing the static part of the scene or, more in
general, everything that can be considered as background given the characteristics of the
observed scene.
![](images/Background_Subtraction_Tutorial_Scheme.png)
- Background modeling consists of two main steps:
-# Background Initialization;
-# Background Update.
In the first step, an initial model of the background is computed, while in the second step that
model is updated in order to adapt to possible changes in the scene. A minimal sketch illustrating
these two steps is shown after this list.
- In this tutorial we will learn how to perform BS by using OpenCV. As input, we will use data
coming from the publicly available data set [Background Models Challenge
(BMC)](http://bmc.univ-bpclermont.fr/) .
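To make the two modeling steps above concrete, here is a minimal sketch of a hand-rolled background
model based on a running average (this is only an illustration of the idea; the tutorial code below
relies on OpenCV's ready-made background subtractors instead):
@code{.cpp}
#include <opencv2/opencv.hpp>
using namespace cv;

int main()
{
    VideoCapture capture(0);                        //any static-camera video source
    Mat frame, gray, background, diff, fgMask;
    while(capture.read(frame)) {
        cvtColor(frame, gray, COLOR_BGR2GRAY);
        gray.convertTo(gray, CV_32F);
        if(background.empty())
            gray.copyTo(background);                //background initialization
        accumulateWeighted(gray, background, 0.02); //background update (running average)
        absdiff(gray, background, diff);            //difference between current frame and model
        threshold(diff, fgMask, 30, 255, THRESH_BINARY); //foreground mask
        fgMask.convertTo(fgMask, CV_8U);
        imshow("Foreground mask", fgMask);
        if(waitKey(30) == 27) break;                //ESC to quit
    }
    return 0;
}
@endcode
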
Goals
-----
In this tutorial you will learn how to:
-# Read data from videos by using @ref cv::VideoCapture or image sequences by using @ref
cv::imread ;
-# Create and update the background model by using the @ref cv::BackgroundSubtractor class;
-# Get and show the foreground mask by using @ref cv::imshow ;
-# Save the output by using @ref cv::imwrite to quantitatively evaluate the results.
Code
----
In the following you can find the source code. We will let the user choose to process either a video
file or a sequence of images.
Two different methods are used to generate two foreground masks:
-# cv::bgsegm::BackgroundSubtractorMOG
-# @ref cv::BackgroundSubtractorMOG2
The results as well as the input data are shown on the screen.
The source file can be downloaded [here](samples/cpp/tutorial_code/video/bg_sub.cpp).
@includelineno samples/cpp/tutorial_code/video/bg_sub.cpp
Explanation
-----------
We discuss the main parts of the above code:
-# First, three Mat objects are allocated to store the current frame and two foreground masks,
obtained by using two different BS algorithms.
@code{.cpp}
Mat frame; //current frame
Mat fgMaskMOG; //fg mask generated by MOG method
Mat fgMaskMOG2; //fg mask generated by MOG2 method
@endcode
-# Two @ref cv::BackgroundSubtractor objects will be used to generate the foreground masks. In this
example, default parameters are used, but it is also possible to declare specific parameters in
the create function.
@code{.cpp}
Ptr<BackgroundSubtractor> pMOG; //MOG Background subtractor
Ptr<BackgroundSubtractor> pMOG2; //MOG2 Background subtractor
...
//create Background Subtractor objects
pMOG = createBackgroundSubtractorMOG(); //MOG approach
pMOG2 = createBackgroundSubtractorMOG2(); //MOG2 approach
@endcode
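For instance, a sketch of passing explicit parameters to the create functions (the values below are
simply the documented defaults, shown only to illustrate the syntax; depending on how your OpenCV
was built, the MOG factory function may need the cv::bgsegm:: namespace prefix):
@code{.cpp}
//create Background Subtractor objects with explicit parameters
pMOG = createBackgroundSubtractorMOG(200, 5, 0.7, 0);  //history, nmixtures, backgroundRatio, noiseSigma
pMOG2 = createBackgroundSubtractorMOG2(500, 16, true); //history, varThreshold, detectShadows
@endcode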
-# The command line arguments are analysed. The user can choose between two options:
- video files (by choosing the option -vid);
- image sequences (by choosing the option -img).
@code{.cpp}
if(strcmp(argv[1], "-vid") == 0) {
//input data coming from a video
processVideo(argv[2]);
}
else if(strcmp(argv[1], "-img") == 0) {
//input data coming from a sequence of images
processImages(argv[2]);
}
@endcode
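Before these two string comparisons are reached, it is also sensible to verify that both arguments
were supplied; a minimal sketch of such a guard (the executable name in the message is an
assumption; the actual sample performs a similar check with its own wording):
@code{.cpp}
//make sure the option and the path were both supplied
if(argc != 3) {
    cerr << "Usage: ./bg_sub {-vid <video filename> | -img <first image filename>}" << endl;
    return EXIT_FAILURE;
}
@endcode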
-# Suppose you want to process a video file. The video is read until the end is reached or the user
presses the 'q' key or the 'ESC' key.
@code{.cpp}
while( (char)keyboard != 'q' && (char)keyboard != 27 ){
//read the current frame
if(!capture.read(frame)) {
cerr << "Unable to read next frame." << endl;
cerr << "Exiting..." << endl;
exit(EXIT_FAILURE);
}
@endcode
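The keyboard variable driving the loop condition is simply an int refreshed at the end of every
iteration with @ref cv::waitKey, which also gives HighGUI time to redraw the windows; roughly:
@code{.cpp}
//get the input from the keyboard (the 30 ms timeout also lets the windows refresh)
keyboard = waitKey(30);
@endcode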
-# Every frame is used both for calculating the foreground mask and for updating the background. If
you want to change the learning rate used for updating the background model, you can set a
specific rate by passing a third parameter to the 'apply' method.
@code{.cpp}
//update the background model
pMOG->apply(frame, fgMaskMOG);
pMOG2->apply(frame, fgMaskMOG2);
@endcode
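The learning rate lies in [0,1]: 0 means the background model is never updated, 1 means it is
completely reinitialized from the last frame, and a negative value (the default) lets the
algorithm choose the rate automatically. As a sketch, with an explicit slow update (the value
0.005 is an arbitrary example):
@code{.cpp}
//update the background model with an explicit learning rate
double learningRate = 0.005; //small value -> the model adapts slowly
pMOG->apply(frame, fgMaskMOG, learningRate);
pMOG2->apply(frame, fgMaskMOG2, learningRate);
@endcode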
-# The current frame number can be extracted from the @ref cv::VideoCapture object and stamped in
the top-left corner of the current frame. A white rectangle is used to highlight the black-colored
frame number.
@code{.cpp}
//get the frame number and write it on the current frame
stringstream ss;
rectangle(frame, cv::Point(10, 2), cv::Point(100,20),
cv::Scalar(255,255,255), -1);
ss << capture.get(CAP_PROP_POS_FRAMES);
string frameNumberString = ss.str();
putText(frame, frameNumberString.c_str(), cv::Point(15, 15),
FONT_HERSHEY_SIMPLEX, 0.5 , cv::Scalar(0,0,0));
@endcode
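As a side note, on a C++11 compiler the stringstream round-trip can be shortened with
std::to_string (a stylistic variation, not the code of the sample):
@code{.cpp}
//C++11 alternative to the stringstream above
string frameNumberString = to_string((int) capture.get(CAP_PROP_POS_FRAMES));
@endcode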
-# We are ready to show the current input frame and the results.
@code{.cpp}
//show the current frame and the fg masks
imshow("Frame", frame);
imshow("FG Mask MOG", fgMaskMOG);
imshow("FG Mask MOG 2", fgMaskMOG2);
@endcode
-# The same operations listed above can be performed using a sequence of images as input. The
processImages function is called and, instead of using a @ref cv::VideoCapture object, the images
are read by using @ref cv::imread , after determining the correct path of the next frame to
read.
@code{.cpp}
//read the first file of the sequence
frame = imread(firstFrameFilename);
if(!frame.data){
//error in opening the first image
cerr << "Unable to open first image frame: " << fistFrameFilename << endl;
exit(EXIT_FAILURE);
}
...
//search for the next image in the sequence
ostringstream oss;
oss << (frameNumber + 1);
string nextFrameNumberString = oss.str();
string nextFrameFilename = prefix + nextFrameNumberString + suffix;
//read the next frame
frame = imread(nextFrameFilename);
if(!frame.data){
//error in opening the next image in the sequence
cerr << "Unable to open image frame: " << nextFrameFilename << endl;
exit(EXIT_FAILURE);
}
//update the path of the current frame
fn.assign(nextFrameFilename);
@endcode
Note that this example works only on image sequences in which the filename format is \<n\>.png,
where n is the frame number (e.g., 7.png).
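How prefix and suffix are obtained is not shown above; one way to split the path of the first
image, assuming it really follows the \<n\>.png convention (a sketch, not necessarily identical to
the sample code):
@code{.cpp}
//split e.g. "111_png/input/1.png" into prefix "111_png/input/", number "1" and suffix ".png"
string fn(firstFrameFilename);
size_t index = fn.find_last_of("/");
string prefix = fn.substr(0, index + 1);
string suffix = fn.substr(fn.find_last_of("."));
string frameNumberString = fn.substr(index + 1, fn.find_last_of(".") - index - 1);
int frameNumber = atoi(frameNumberString.c_str());
@endcode
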
Results
-------
- Given the following input parameters:
@code{.cpp}
-vid Video_001.avi
@endcode
The output of the program will look like the following:
![](images/Background_Subtraction_Tutorial_Result_1.png)
- The video file Video_001.avi is part of the [Background Models Challenge
(BMC)](http://bmc.univ-bpclermont.fr/) data set and it can be downloaded from the following link
[Video_001](http://bmc.univ-bpclermont.fr/sites/default/files/videos/evaluation/Video_001.zip)
(about 32 MB).
- If you want to process a sequence of images, then the '-img' option has to be chosen:
@code{.cpp}
-img 111_png/input/1.png
@endcode
The output of the program will look like the following:
![](images/Background_Subtraction_Tutorial_Result_2.png)
- The sequence of images used in this example is part of the [Background Models Challenge
(BMC)](http://bmc.univ-bpclermont.fr/) dataset and it can be downloaded from the following link
[sequence 111](http://bmc.univ-bpclermont.fr/sites/default/files/videos/learning/111_png.zip)
(about 708 MB). Please note that this example works only on sequences in which the filename
format is \<n\>.png, where n is the frame number (e.g., 7.png).
Evaluation
----------
To quantitatively evaluate the results obtained, we need to:
- Save the output images;
- Have the ground truth images for the chosen sequence.
In order to save the output images, we can use @ref cv::imwrite . Adding the following code allows
for saving the foreground masks.
@code{.cpp}
string imageToSave = "output_MOG_" + frameNumberString + ".png";
bool saved = imwrite(imageToSave, fgMaskMOG);
if(!saved) {
cerr << "Unable to save " << imageToSave << endl;
}
@endcode
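If you also want the saved masks to sort in frame order on disk, the frame number can be
zero-padded before it is put into the filename (an optional tweak, not part of the original
sample):
@code{.cpp}
//zero-pad the frame number so that e.g. frame 7 is saved as "output_MOG_000007.png"
//(requires <sstream> and <iomanip>)
ostringstream padded;
padded << setfill('0') << setw(6) << frameNumberString;
string imageToSave = "output_MOG_" + padded.str() + ".png";
@endcode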
Once we have collected the result images, we can compare them with the ground truth data. There
exist several publicly available sequences for background subtraction that come with ground truth
data. If you decide to use the [Background Models Challenge (BMC)](http://bmc.univ-bpclermont.fr/),
then the result images can be used as input for the [BMC
Wizard](http://bmc.univ-bpclermont.fr/?q=node/7). The wizard can compute several measures of the
accuracy of the results.
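If you only need a quick sanity check before (or instead of) using the wizard, a result mask can
be compared against the corresponding ground truth image directly in OpenCV. A rough sketch,
assuming both files are single-channel binary masks of the same size (the filenames are
placeholders):
@code{.cpp}
//load the result and the ground truth as grayscale images and binarize them
Mat result = imread("output_MOG_7.png", IMREAD_GRAYSCALE);
Mat truth  = imread("ground_truth_7.png", IMREAD_GRAYSCALE); //placeholder name
threshold(result, result, 127, 255, THRESH_BINARY);
threshold(truth, truth, 127, 255, THRESH_BINARY);

//true positives: foreground pixels detected in both the result and the ground truth
//(in real code, guard against masks with no foreground pixels before dividing)
Mat tp;
bitwise_and(result, truth, tp);
double precision = (double) countNonZero(tp) / countNonZero(result);
double recall    = (double) countNonZero(tp) / countNonZero(truth);
cout << "precision: " << precision << "  recall: " << recall << endl;
@endcode
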
References
----------
- [Background Models Challenge (BMC) website](http://bmc.univ-bpclermont.fr/)
- A Benchmark Dataset for Foreground/Background Extraction @cite vacavant2013benchmark