diff --git a/doc/tutorials/video/background_subtraction/background_subtraction.markdown b/doc/tutorials/video/background_subtraction/background_subtraction.markdown
index f914379f3e..2e480f5585 100644
--- a/doc/tutorials/video/background_subtraction/background_subtraction.markdown
+++ b/doc/tutorials/video/background_subtraction/background_subtraction.markdown
@@ -19,190 +19,149 @@ How to Use Background Subtraction Methods {#tutorial_background_subtraction}
   In the first step, an initial model of the background is computed, while in the second step that
   model is updated in order to adapt to possible changes in the scene.

-- In this tutorial we will learn how to perform BS by using OpenCV. As input, we will use data
-  coming from the publicly available data set [Background Models Challenge
-  (BMC)](http://bmc.univ-bpclermont.fr/) .
+- In this tutorial we will learn how to perform BS by using OpenCV.

 Goals
 -----

 In this tutorial you will learn how to:

--# Read data from videos by using @ref cv::VideoCapture or image sequences by using @ref
-   cv::imread ;
+-# Read data from videos or image sequences by using @ref cv::VideoCapture ;
 -# Create and update the background model by using @ref cv::BackgroundSubtractor class;
 -# Get and show the foreground mask by using @ref cv::imshow ;
--# Save the output by using @ref cv::imwrite to quantitatively evaluate the results.

 Code
 ----

-In the following you can find the source code. We will let the user chose to process either a video
+In the following you can find the source code. We will let the user choose to process either a video
 file or a sequence of images.
 We will use @ref cv::BackgroundSubtractorMOG2 in this sample, to generate the foreground mask.

 The results as well as the input data are shown on the screen.
-The source file can be downloaded [here ](https://github.com/opencv/opencv/tree/3.4/samples/cpp/tutorial_code/video/bg_sub.cpp).
-@include samples/cpp/tutorial_code/video/bg_sub.cpp
+@add_toggle_cpp
+- **Downloadable code**: Click
+  [here](https://github.com/opencv/opencv/tree/3.4/samples/cpp/tutorial_code/video/bg_sub.cpp)
+
+- **Code at a glance:**
+  @include samples/cpp/tutorial_code/video/bg_sub.cpp
+@end_toggle
+
+@add_toggle_java
+- **Downloadable code**: Click
+  [here](https://github.com/opencv/opencv/tree/3.4/samples/java/tutorial_code/video/background_subtraction/BackgroundSubtractionDemo.java)
+
+- **Code at a glance:**
+  @include samples/java/tutorial_code/video/background_subtraction/BackgroundSubtractionDemo.java
+@end_toggle
+
+@add_toggle_python
+- **Downloadable code**: Click
+  [here](https://github.com/opencv/opencv/tree/3.4/samples/python/tutorial_code/video/background_subtraction/bg_sub.py)
+
+- **Code at a glance:**
+  @include samples/python/tutorial_code/video/background_subtraction/bg_sub.py
+@end_toggle

 Explanation
 -----------

-We discuss the main parts of the above code:
+We discuss the main parts of the code above:

--# First, two Mat objects are allocated to store the current frame and two foreground masks,
-   obtained by using two different BS algorithms.
-   @code{.cpp}
-   Mat frame; //current frame
-   Mat fgMaskMOG2; //fg mask fg mask generated by MOG2 method
-   @endcode
--# A @ref cv::BackgroundSubtractor object will be used to generate the foreground mask. In this
+- A @ref cv::BackgroundSubtractor object will be used to generate the foreground mask. In this
   example, default parameters are used, but it is also possible to declare specific parameters in
   the create function.
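For illustration, a minimal C++ sketch of passing explicit parameters to the create functions; the values written out here are the documented defaults, and `pBackSub` is just a placeholder variable name, not taken from this patch:
@code{.cpp}
// MOG2: history length, variance threshold, shadow detection flag
Ptr<BackgroundSubtractor> pBackSub = createBackgroundSubtractorMOG2(500, 16, true);
// KNN alternative: history length, squared distance threshold, shadow detection flag
// Ptr<BackgroundSubtractor> pBackSub = createBackgroundSubtractorKNN(500, 400.0, true);
@endcode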
-   @code{.cpp}
-   Ptr<BackgroundSubtractor> pMOG2; //MOG2 Background subtractor
-   ...
-   //create Background Subtractor object
-   pMOG2 = createBackgroundSubtractorMOG2(); //MOG2 approach
-   @endcode
--# The command line arguments are analysed. The user can chose between two options:
-   - video files (by choosing the option -vid);
-   - image sequences (by choosing the option -img).
-   @code{.cpp}
-   if(strcmp(argv[1], "-vid") == 0) {
-       //input data coming from a video
-       processVideo(argv[2]);
-   }
-   else if(strcmp(argv[1], "-img") == 0) {
-       //input data coming from a sequence of images
-       processImages(argv[2]);
-   }
-   @endcode
--# Suppose you want to process a video file. The video is read until the end is reached or the user
-   presses the button 'q' or the button 'ESC'.
-   @code{.cpp}
-   while( (char)keyboard != 'q' && (char)keyboard != 27 ){
-       //read the current frame
-       if(!capture.read(frame)) {
-           cerr << "Unable to read next frame." << endl;
-           cerr << "Exiting..." << endl;
-           exit(EXIT_FAILURE);
-       }
-   @endcode
--# Every frame is used both for calculating the foreground mask and for updating the background. If
+
+@add_toggle_cpp
+@snippet samples/cpp/tutorial_code/video/bg_sub.cpp create
+@end_toggle
+
+@add_toggle_java
+@snippet samples/java/tutorial_code/video/background_subtraction/BackgroundSubtractionDemo.java create
+@end_toggle
+
+@add_toggle_python
+@snippet samples/python/tutorial_code/video/background_subtraction/bg_sub.py create
+@end_toggle
+
+- A @ref cv::VideoCapture object is used to read the input video or input image sequence.
+
+@add_toggle_cpp
+@snippet samples/cpp/tutorial_code/video/bg_sub.cpp capture
+@end_toggle
+
+@add_toggle_java
+@snippet samples/java/tutorial_code/video/background_subtraction/BackgroundSubtractionDemo.java capture
+@end_toggle
+
+@add_toggle_python
+@snippet samples/python/tutorial_code/video/background_subtraction/bg_sub.py capture
+@end_toggle
+
+- Every frame is used both for calculating the foreground mask and for updating the background. If
   you want to change the learning rate used for updating the background model, it is possible to
-   set a specific learning rate by passing a third parameter to the 'apply' method.
-   @code{.cpp}
-   //update the background model
-   pMOG2->apply(frame, fgMaskMOG2);
-   @endcode
--# The current frame number can be extracted from the @ref cv::VideoCapture object and stamped in
+  set a specific learning rate by passing a parameter to the `apply` method.
+
+@add_toggle_cpp
+@snippet samples/cpp/tutorial_code/video/bg_sub.cpp apply
+@end_toggle
+
+@add_toggle_java
+@snippet samples/java/tutorial_code/video/background_subtraction/BackgroundSubtractionDemo.java apply
+@end_toggle
+
+@add_toggle_python
+@snippet samples/python/tutorial_code/video/background_subtraction/bg_sub.py apply
+@end_toggle
+
+- The current frame number can be extracted from the @ref cv::VideoCapture object and stamped in
   the top left corner of the current frame. A white rectangle is used to highlight the black
   colored frame number.
-   @code{.cpp}
-   //get the frame number and write it on the current frame
-   stringstream ss;
-   rectangle(frame, cv::Point(10, 2), cv::Point(100,20),
-             cv::Scalar(255,255,255), -1);
-   ss << capture.get(CAP_PROP_POS_FRAMES);
-   string frameNumberString = ss.str();
-   putText(frame, frameNumberString.c_str(), cv::Point(15, 15),
-           FONT_HERSHEY_SIMPLEX, 0.5 , cv::Scalar(0,0,0));
-   @endcode
--# We are ready to show the current input frame and the results.
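As a minimal sketch of the capture-and-apply loop described a few lines above (the file name, variable names, and the 0.005 learning rate are illustrative assumptions, not taken from the sample; `pBackSub` is the hypothetical subtractor from the earlier sketch):
@code{.cpp}
VideoCapture capture("vtest.avi");
Mat frame, fgMask;
while (capture.read(frame)) {
    // learning rate in [0,1]: 0 freezes the model, 1 rebuilds it from the last frame;
    // a negative value (the default) lets the algorithm pick the rate automatically
    pBackSub->apply(frame, fgMask, 0.005);
    imshow("FG Mask", fgMask);
    if (waitKey(30) == 27) break; // ESC to quit
}
@endcode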
-   @code{.cpp}
-   //show the current frame and the fg masks
-   imshow("Frame", frame);
-   imshow("FG Mask MOG 2", fgMaskMOG2);
-   @endcode
--# The same operations listed above can be performed using a sequence of images as input. The
-   processImage function is called and, instead of using a @ref cv::VideoCapture object, the images
-   are read by using @ref cv::imread , after individuating the correct path for the next frame to
-   read.
-   @code{.cpp}
-   //read the first file of the sequence
-   frame = imread(fistFrameFilename);
-   if(!frame.data){
-       //error in opening the first image
-       cerr << "Unable to open first image frame: " << fistFrameFilename << endl;
-       exit(EXIT_FAILURE);
-   }
-   ...
-   //search for the next image in the sequence
-   ostringstream oss;
-   oss << (frameNumber + 1);
-   string nextFrameNumberString = oss.str();
-   string nextFrameFilename = prefix + nextFrameNumberString + suffix;
-   //read the next frame
-   frame = imread(nextFrameFilename);
-   if(!frame.data){
-       //error in opening the next image in the sequence
-       cerr << "Unable to open image frame: " << nextFrameFilename << endl;
-       exit(EXIT_FAILURE);
-   }
-   //update the path of the current frame
-   fn.assign(nextFrameFilename);
-   @endcode
-   Note that this example works only on image sequences in which the filename format is \<n\>.png,
-   where n is the frame number (e.g., 7.png).
+
+@add_toggle_cpp
+@snippet samples/cpp/tutorial_code/video/bg_sub.cpp display_frame_number
+@end_toggle
+
+@add_toggle_java
+@snippet samples/java/tutorial_code/video/background_subtraction/BackgroundSubtractionDemo.java display_frame_number
+@end_toggle
+
+@add_toggle_python
+@snippet samples/python/tutorial_code/video/background_subtraction/bg_sub.py display_frame_number
+@end_toggle
+
+- We are ready to show the current input frame and the results.
+
+@add_toggle_cpp
+@snippet samples/cpp/tutorial_code/video/bg_sub.cpp show
+@end_toggle
+
+@add_toggle_java
+@snippet samples/java/tutorial_code/video/background_subtraction/BackgroundSubtractionDemo.java show
+@end_toggle
+
+@add_toggle_python
+@snippet samples/python/tutorial_code/video/background_subtraction/bg_sub.py show
+@end_toggle

 Results
 -------

-- Given the following input parameters:
-   @code{.cpp}
-   -vid Video_001.avi
-   @endcode
-   The output of the program will look as the following:
+- With the `vtest.avi` video, for the following frame:

-    ![](images/Background_Subtraction_Tutorial_Result_1.png)
+    ![](images/Background_Subtraction_Tutorial_frame.jpg)

-- The video file Video_001.avi is part of the [Background Models Challenge
-   (BMC)](http://bmc.univ-bpclermont.fr/) data set and it can be downloaded from the following link
-   [Video_001](http://bmc.univ-bpclermont.fr/sites/default/files/videos/evaluation/Video_001.zip)
-   (about 32 MB).
-- If you want to process a sequence of images, then the '-img' option has to be chosen:
-   @code{.cpp}
-   -img 111_png/input/1.png
-   @endcode
-   The output of the program will look as the following:
+    The output of the program will look like the following for the MOG2 method (gray areas are detected shadows):

-    ![](images/Background_Subtraction_Tutorial_Result_2.png)
+    ![](images/Background_Subtraction_Tutorial_result_MOG2.jpg)

-- The sequence of images used in this example is part of the [Background Models Challenge
-   (BMC)](http://bmc.univ-bpclermont.fr/) dataset and it can be downloaded from the following link
-   [sequence 111](http://bmc.univ-bpclermont.fr/sites/default/files/videos/learning/111_png.zip)
-   (about 708 MB).
-   Please, note that this example works only on sequences in which the filename
-   format is \<n\>.png, where n is the frame number (e.g., 7.png).
+    The output of the program will look like the following for the KNN method (gray areas are detected shadows):

-Evaluation
-----------
-
-To quantitatively evaluate the results obtained, we need to:
-
-- Save the output images;
-- Have the ground truth images for the chosen sequence.
-
-In order to save the output images, we can use @ref cv::imwrite . Adding the following code allows
-for saving the foreground masks.
-@code{.cpp}
-string imageToSave = "output_MOG_" + frameNumberString + ".png";
-bool saved = imwrite(imageToSave, fgMaskMOG);
-if(!saved) {
-    cerr << "Unable to save " << imageToSave << endl;
-}
-@endcode
-Once we have collected the result images, we can compare them with the ground truth data. There
-exist several publicly available sequences for background subtraction that come with ground truth
-data. If you decide to use the [Background Models Challenge (BMC)](http://bmc.univ-bpclermont.fr/),
-then the result images can be used as input for the [BMC
-Wizard](http://bmc.univ-bpclermont.fr/?q=node/7). The wizard can compute different measures about
-the accuracy of the results.
+    ![](images/Background_Subtraction_Tutorial_result_KNN.jpg)

 References
 ----------

-- [Background Models Challenge (BMC) website](http://bmc.univ-bpclermont.fr/)
+- [Background Models Challenge (BMC) website](https://web.archive.org/web/20140418093037/http://bmc.univ-bpclermont.fr/)
 - A Benchmark Dataset for Foreground/Background Extraction @cite vacavant2013benchmark
diff --git a/doc/tutorials/video/background_subtraction/images/Background_Subtraction_Tutorial_Result_1.png b/doc/tutorials/video/background_subtraction/images/Background_Subtraction_Tutorial_Result_1.png
deleted file mode 100644
index ad3a7d780c..0000000000
Binary files a/doc/tutorials/video/background_subtraction/images/Background_Subtraction_Tutorial_Result_1.png and /dev/null differ
diff --git a/doc/tutorials/video/background_subtraction/images/Background_Subtraction_Tutorial_Result_2.png b/doc/tutorials/video/background_subtraction/images/Background_Subtraction_Tutorial_Result_2.png
deleted file mode 100644
index 5618ca3635..0000000000
Binary files a/doc/tutorials/video/background_subtraction/images/Background_Subtraction_Tutorial_Result_2.png and /dev/null differ
diff --git a/doc/tutorials/video/background_subtraction/images/Background_Subtraction_Tutorial_frame.jpg b/doc/tutorials/video/background_subtraction/images/Background_Subtraction_Tutorial_frame.jpg
new file mode 100644
index 0000000000..19eec6d2ce
Binary files /dev/null and b/doc/tutorials/video/background_subtraction/images/Background_Subtraction_Tutorial_frame.jpg differ
diff --git a/doc/tutorials/video/background_subtraction/images/Background_Subtraction_Tutorial_result_KNN.jpg b/doc/tutorials/video/background_subtraction/images/Background_Subtraction_Tutorial_result_KNN.jpg
new file mode 100644
index 0000000000..bc9f8199d6
Binary files /dev/null and b/doc/tutorials/video/background_subtraction/images/Background_Subtraction_Tutorial_result_KNN.jpg differ
diff --git a/doc/tutorials/video/background_subtraction/images/Background_Subtraction_Tutorial_result_MOG2.jpg b/doc/tutorials/video/background_subtraction/images/Background_Subtraction_Tutorial_result_MOG2.jpg
new file mode 100644
index 0000000000..850baa2353
Binary files /dev/null and b/doc/tutorials/video/background_subtraction/images/Background_Subtraction_Tutorial_result_MOG2.jpg differ
diff --git a/doc/tutorials/video/table_of_content_video.markdown b/doc/tutorials/video/table_of_content_video.markdown
index 481476bb62..2e30d2cc8b 100644
--- a/doc/tutorials/video/table_of_content_video.markdown
+++ b/doc/tutorials/video/table_of_content_video.markdown
@@ -6,6 +6,8 @@ tracking and foreground extractions.

 - @subpage tutorial_background_subtraction

+    *Languages:* C++, Java, Python
+
     *Compatibility:* \> OpenCV 2.4.6

     *Author:* Domenico Daniele Bloisi
diff --git a/samples/cpp/tutorial_code/video/bg_sub.cpp b/samples/cpp/tutorial_code/video/bg_sub.cpp
index bd511eb702..3180abbb8a 100644
--- a/samples/cpp/tutorial_code/video/bg_sub.cpp
+++ b/samples/cpp/tutorial_code/video/bg_sub.cpp
@@ -4,180 +4,84 @@
  * @author Domenico D. Bloisi
  */

-//opencv
-#include "opencv2/imgcodecs.hpp"
-#include "opencv2/imgproc.hpp"
-#include "opencv2/videoio.hpp"
-#include <opencv2/highgui.hpp>
-#include <opencv2/video.hpp>
-//C
-#include <stdio.h>
-//C++
 #include <iostream>
 #include <sstream>
+#include <opencv2/imgcodecs.hpp>
+#include <opencv2/imgproc.hpp>
+#include <opencv2/videoio.hpp>
+#include <opencv2/highgui.hpp>
+#include <opencv2/video.hpp>

 using namespace cv;
 using namespace std;

-// Global variables
-Mat frame; //current frame
-Mat fgMaskMOG2; //fg mask fg mask generated by MOG2 method
-Ptr<BackgroundSubtractor> pMOG2; //MOG2 Background subtractor
-char keyboard; //input from keyboard
+const char* params
+    = "{ help h | | Print usage }"
+      "{ input | ../data/vtest.avi | Path to a video or a sequence of image }"
+      "{ algo | MOG2 | Background subtraction method (KNN, MOG2) }";

-/** Function Headers */
-void help();
-void processVideo(char* videoFilename);
-void processImages(char* firstFrameFilename);
-
-void help()
-{
-    cout
-    << "--------------------------------------------------------------------------" << endl
-    << "This program shows how to use background subtraction methods provided by " << endl
-    << " OpenCV. You can process both videos (-vid) and images (-img)." << endl
-    << endl
-    << "Usage:" << endl
-    << "./bg_sub {-vid