I believe you are using the wrong version of open() on line 28, adding deviceID + appId together. It's better to use the new version of .open() that takes two integers as parameters.
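For reference, a minimal sketch of the suggested call, assuming a webcam index of 0 and the V4L2 backend (both illustrative):

```cpp
#include <opencv2/videoio.hpp>

int main()
{
    // Pass the backend as a separate argument instead of encoding it
    // into the device index (the deviceID + appId addition mentioned above).
    cv::VideoCapture cap;
    if (!cap.open(0, cv::CAP_V4L2))   // open(int index, int apiPreference)
        return 1;
    return 0;
}
```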
G-API: Laplacian and bilateralFilter standard kernels
* Added Laplacian kernel and tests
* Added: Laplacian kernel, Bilateral kernel (CPU, GPU); Performance and accuracy tests for these kernels
* Changed tolerance for GPU test
* boner
* Some alignment changes; test parameters are the same as for OCV
* Cut tests
* Compressed tests
* Minor changes (restart buildbot)
* Returned types
- kernel added to a cv::gapi::video namespace
- tests to check the kernel (based on cv::video tests for cv::buildOpticalFlowPyramid())
- tests for a combined G-API-pipeline (buildOpticalFlowPyramid() -> calcOpticalFlowPyrLK())
- tests for internal purposes added
- custom function for comparison in tests implemented
- It is safe to remove the `explicit` keyword from constructors with 1
argument, because it is a C++-specific keyword and does not affect the
generated bindings (see the sketch below).
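A minimal sketch of the point being made (the `Config` class below is purely illustrative, not from the sources): `explicit` only restricts implicit conversions on the C++ side, while generated bindings always call the constructor directly, so dropping it changes nothing for them.

```cpp
#include <string>

struct Config                      // illustrative type, not an OpenCV class
{
    Config(const std::string& path) : path_(path) {}   // no `explicit`
    std::string path_;
};

static void load(const Config&) {}

int main()
{
    load(Config("a.yml"));         // works with or without `explicit`
    load(std::string("b.yml"));    // implicit conversion, only allowed without `explicit`
    return 0;
}
```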
* LineVirtualIterator
Proposal of LineVirtualIterator, an alternative to "LineIterator not attached to any mat".
This is basically the same implementation, replacing the address difference with a single "offset" variable. elemSize becomes irrelevant and is considered to be 1. "step" is thus equal to size.width since no stride is expected. (A usage sketch of the resulting Mat-free iterator is given after this change list.)
* Update drawing.cpp
fixed warning
* improvement of LineVirtualIterator
Instead of being too conservative, the new implementation gets rid of "offset/step" and only keeps a "Point currentPos" up to date.
left_to_right is renamed to forceLeftToRight as suggested (even for the old LineIterator)
assert() replaced by CV_Assert() (even for the old LineIterator)
* fixed implementation
+fixed last commit so that LineVirtualIterator gives at least the same results as LineIterator
+added a new constructor that does not require any Size, so that no clipping is done and iteration occurs from pt1 to pt2. This is done by adding a spatial offset to pt1 and pt2 so that the same implementation is used, the size in that case being the spatial extent between pt1 and pt2
* Update imgproc.hpp
fixed warnings
* Update drawing.cpp
fixed whitespace
* Update drawing.cpp
trailing whitespace
* Update imgproc.hpp
+added a new constructor that takes a Rect rather than a Size. It clips the line pt1->pt2 to that rect.
Yet again, this is still based on the same implementation, thanks to the Size and the currentPosOffset that can artificially consider the origin of the rect to be at (0,0)
* revert changes
revert changes on original LineIterator implementation, that will be superseded by the new LineVirtualIterator anyway
* added test of LineVirtualIterator
* More tests
* refactoring
Use C++11 chained constructors
Improved code style
* improve test
Added offset as random test data.
* fixed order of initialization
* merged LineIterator and VirtualLineIterator
* merged LineIterator & VirtualLineIterator
* merged LineIterator & VirtualLineIterator
* merged LineIterator & VirtualLineIterator
* made LineIterator::operator ++() more efficient
added one perfectly predictable check; in theory, since ptmode is set at the end of the constructor in the header file, the compiler can figure out that it's always true/false and eliminate the check from the inline `LineIterator::operator++()` completely
* optimized Line() function
in the most common case (CV_8UC3) eliminated the check from the loop
Co-authored-by: Vadim Pisarevsky <vadim.pisarevsky@gmail.com>
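A usage sketch of the merged iterator, assuming the Mat-free constructor described above (no clipping, iteration from pt1 to pt2); only positions are queried, since there is no pixel storage behind the iterator:

```cpp
#include <opencv2/imgproc.hpp>
#include <iostream>

int main()
{
    cv::Point pt1(2, 3), pt2(10, 7);
    cv::LineIterator it(pt1, pt2, /*connectivity=*/8, /*leftToRight=*/false);
    for (int i = 0; i < it.count; ++i, ++it)
        std::cout << it.pos().x << " " << it.pos().y << "\n";   // current point on the raster line
    return 0;
}
```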
- opencv_gapi module is linked with opencv_video module (optional dependency)
- kernels added to a new cv::gapi::video namespace and a brand new files created to provide gapi_video environment
- there are 2 different kernels because G-API should provide both a GMat and a GArray<GMat> implementation: cv::calcOpticalFlowPyrLK doesn't build pyramids when a vector<Mat> is given, so a plain GMat -> GArray<GMat> cast wouldn't cover all of the cv:: functionality (the plain pipeline is sketched after this list)
- tests to check both kernels (based on cv::video tests for cv::calcOpticalFlowPyrLK())
- tests for internal purposes added
- vector<T> comparison in tests implemented
- new (and old) common test structures refactored to avoid code duplication
- "modules/gapi/test/common/gapi_video_tests_common.hpp" created to share some code snippets between perf and accuracy tests and avoid code duplication
added estimateTranslation3D to calib3d/ptsetreg
* added estimateTranslation3D; follows API and implementation structure for estimateAffine3D, but only allows for translation
* void variables in null function to suppress compiler warnings
* added test for estimateTranslation3D
* changed to Matx13d datatype for translation vector in ptsetreg and test; used short license in test
* removed iostream include
* calib3d: code cleanup
- cv::gapi::goodFeaturesToTrack() kernel is implemented
- tests (for exact check with cv::goodFeaturesToTrack() and for internal cases) are implemented
- a custom comparison function for vectors and a custom test fixture implemented
- some possible issues, such as wrong/inexact sorting of the two compared vectors, are
not taken into account
- initializations of an input Mat using a picture from opencv_extra implemented (function from gapi_streaming_test used)
G-API: Unification of own:: Scalar with cv:: Scalar
* cvdefs.hpp
* Small changes
* Deowned Scalar. Doesn't work
* Something
* Removed to_ocv for Scalar
* Clear code
* Deleted whitespaces
* Added include <..own/scalar.hpp> in cvdefs.hpp.
* Long string now split in two
* Comment about scalar
* Comment about crutch
* Removed second variable in scalar_wrapper
* Changed wrapper for scalar, alignment
* Alignment
* Whitespaces
* Removed scalar_wrapper
Jpeg2000 OpenJPEG port
* OpenJPEG based JPEG2000 decoder implementation
Currently, the following input color spaces and depth conversions are
supported (a usage sketch follows the list):
- 8 bit -> 8 bit
- 16 bit -> 16 bit (IMREAD_UNCHANGED, IMREAD_ANYDEPTH)
- RGB(a) -> BGR
- RGBA -> BGRA (IMREAD_UNCHANGED)
- Y(a) -> Y(a) (IMREAD_ANYCOLOR, IMREAD_GRAYSCALE, IMREAD_UNCHANGED)
- YCC -> Y (IMREAD_GRAYSCALE)
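A short usage sketch of the conversions listed above ("input.jp2" is a placeholder file name):

```cpp
#include <opencv2/imgcodecs.hpp>

int main()
{
    // Default path: components with precision > 8 are downscaled to CV_8U,
    // RGB(A) input becomes BGR(A).
    cv::Mat bgr = cv::imread("input.jp2", cv::IMREAD_COLOR);

    // Keep the original depth and channel layout (e.g. 16-bit, RGBA -> BGRA).
    cv::Mat raw = cv::imread("input.jp2", cv::IMREAD_UNCHANGED);

    // Grayscale output, also for RGB or YCC inputs.
    cv::Mat gray = cv::imread("input.jp2", cv::IMREAD_GRAYSCALE);

    return bgr.empty() ? 1 : 0;
}
```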
* Check for OpenJPEG availability
This enables OpenJPEG based JPEG2000 imread support by default, which
can be disabled by -DWITH_OPENJPEG=OFF. In case OpenJPEG is enabled
and found, any checks for Jasper are skipped.
* Implement precision downscaling for precision > 8 without IMREAD_UNCHANGED
With IMREAD_UNCHANGED, values are kept from the input image; without it,
components are downscaled to the CV_8U range.
* Enable Jpeg2K tests when OpenJPEG is available
* Add support for some more color conversions
Support IMREAD_GRAYSCALE when the input color space is RGB or unspecified.
Support YUV input color space for BGR output.
* fix: problems with unmanaged memory
* fix: CMake warning - HAVE_OPENJPEG is undefined
Removed trailing whitespaces
* fix: CMake find_package OpenJPEG add minimal version
* Basic JPEG2K encoder
Images with depth CV_8U and CV_16U are supported, with 1 to 4 channels.
* feature: Improved code for OpenJPEG2000 encoder/decoder
- Removed code duplication
- Added error handlers
- Extracted functions
* feature: Update conversion of the OpenJPEG array from/to Mat
* feature: Extend ChannelsIterator to fulfill RandomAccessIterator named requirements
- Removed channels split in copyFromMatImpl. With ChannelsIterator no allocations are performed.
- Split the whole loop into 2 parts in copyToMat, where std::copy and std::transform are called.
* fix: Applied review comments.
- Changed `nullptr` in CV_LOG* functions to `NULL`
- Added `falls through` comment in decoder color space `switch`
- Added warning about unsupported parameters for the encoder
* feature: Added decode from in-memory buffers.
Co-authored-by: Vadim Levin <vadim.levin@xperience.ai>
The float variant was always shadowed by the int version, as Rect2d is
implicitly convertible to Rect.
This swaps the two, which is fine, as the vector of boxes was always
copied and the computation was done in double (see the sketch below).
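A small illustration of the implicit conversion involved (values are illustrative): Rect2d silently converts to Rect, which is why the integer overload could swallow floating-point boxes in the first place.

```cpp
#include <opencv2/core.hpp>
#include <iostream>

int main()
{
    cv::Rect2d rd(10.6, 20.4, 3.7, 4.2);
    cv::Rect   ri = rd;   // implicit Rect2d -> Rect conversion, coordinates are rounded by saturate_cast
    std::cout << ri.x << " " << ri.y << " " << ri.width << " " << ri.height << std::endl;
    return 0;
}
```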
* feature: Add video capture bitrate read-only property for FFMPEG backend
* test: For WIN32, the property should be either the expected value or 0.
Added an `IsOneOf` helper function, enabled only for _WIN32. (A usage sketch of the new property follows.)
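A minimal usage sketch of the new read-only property ("video.mp4" is a placeholder):

```cpp
#include <opencv2/videoio.hpp>
#include <iostream>

int main()
{
    cv::VideoCapture cap("video.mp4", cv::CAP_FFMPEG);
    if (!cap.isOpened())
        return 1;
    // Read-only property; returns 0 when the backend cannot report a bitrate.
    std::cout << "bitrate: " << cap.get(cv::CAP_PROP_BITRATE) << std::endl;
    return 0;
}
```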
dnn(darknet-importer): add grouped convolutions, sigmoid, swish, scale_channels
* update darknet importer to support enetb0-yolo
* remove dropout (pr16438) and fix formatting
* add test for scale_channels
* disable batch testing for scale channels
* do not set LayerParams::name
* merge all activations into setActivation
* Add Tengine support.
* Modify printf to CV_LOG_WARNING
* a few minor fixes in the code
* Renew Tengine version
* Add header file for CV_LOG_WARNING
* Add #ifdef HAVE_TENGINE in tengine_graph_convolution.cpp
* remove trailing whitespace
* Remove trailing whitespace
* Fix a compile problem
* Fix some code style errors
* remove whitespace
* Fix more code style problems
* test
* add iOS limit and fix a build problem
* Modified as alalek suggested
* Add cmake 2.8 support
* fix a CMake 3.5.1 problem
* test and set BUILD_ANDROID_PROJECTS OFF
* fix some compile errors
* remove some extra code in tengine
* close test.
* Test again
* disable android.
* delete NDK version check
* Remove setenv() call and add license information
* Set Tengine default to OFF. Close test.
Co-authored-by: Vadim Pisarevsky <vadim.pisarevsky@gmail.com>
Image sharpness, as well as brightness, is a critical parameter for
accurate camera calibration. To assess these parameters and filter out
problematic calibration images, this method calculates edge profiles by
traveling from black to white chessboard cell centers. Based on these
profiles, the number of pixels required to transition from black to white
is calculated. The width of this transition area is a good indication of
how sharply the chessboard is imaged and should be below ~3.0 pixels.
Based on this, motion blur can also be detected by comparing sharpness in
the vertical and horizontal directions. All unsharp images should be
excluded from calibration as they will corrupt the calibration result. The
same is true for overexposed images due to a non-linear sensor response.
This can be detected by looking at the average cell brightness of the
detected chessboard.
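A rough sketch of the edge-profile idea (not the actual implementation, only the principle): walk the line between a black and a white cell center and count the pixels whose intensity falls inside the transition band.

```cpp
#include <opencv2/imgproc.hpp>
#include <vector>

// Count the pixels on the black->white line whose intensity lies strictly
// between the two plateau levels. The real implementation is more careful
// (sub-pixel sampling, rise-distance thresholds); this only illustrates
// the principle and assumes the line really goes from black to white.
static double transitionWidth(const cv::Mat& gray, cv::Point blackCenter, cv::Point whiteCenter)
{
    CV_Assert(gray.type() == CV_8UC1);
    cv::LineIterator it(gray, blackCenter, whiteCenter, 8);

    std::vector<uchar> profile;
    for (int i = 0; i < it.count; ++i, ++it)
        profile.push_back(**it);                       // intensity along the line

    const uchar lo = profile.front(), hi = profile.back();
    const uchar bandLo = cv::saturate_cast<uchar>(lo + 0.1 * (hi - lo));
    const uchar bandHi = cv::saturate_cast<uchar>(lo + 0.9 * (hi - lo));

    int width = 0;
    for (uchar v : profile)
        if (v > bandLo && v < bandHi)
            ++width;            // pixels inside the 10%..90% transition band
    return (double)width;       // should stay below ~3.0 for a sharp image
}
```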
Lets the user choose the maximum number of iterations the robust
estimator runs for, similarly to findHomography. This can significantly
improve performance (at a computational cost).
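For reference, findHomography() already exposes this knob via its trailing parameters; a sketch (the point sets are illustrative inputs):

```cpp
#include <opencv2/calib3d.hpp>
#include <vector>

// The trailing maxIters/confidence parameters bound the robust (RANSAC) search.
cv::Mat fitHomography(const std::vector<cv::Point2f>& src,
                      const std::vector<cv::Point2f>& dst)
{
    const double ransacThreshold = 3.0;
    const int    maxIters = 500;      // fewer iterations -> faster, less robust
    const double confidence = 0.99;
    return cv::findHomography(src, dst, cv::RANSAC, ransacThreshold,
                              cv::noArray(), maxIters, confidence);
}
```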
The hard-coded string value "Mat" was used in the two format strings for vector_mat and vector_mat_template, preventing UMat arguments to functions that have these types from working correctly, as noted in #12231.
* Vectorize calculating integral for line for single and multiple channels
* Single vector processing for 4-channels - 25-30% faster
* Fixed AVX512 code for 4 channels
* Disable 3 channel 8UC1 to 32S for SSE2 and SSE3 (slower). Use new version of 8UC1 to 64F for AVX512.
fixed the ordering of contour convex hull points
* partially fixed the issue #4539
* fixed warnings and test failures
* fixed integer overflow (issue #14521)
* added comment to force buildbot to re-run
* extended the test for the issue 4539. Check the expected behaviour on the original contour as well
* added comment; fixed typo, renamed another variable for a little better clarity
* added yet another part to the test for issue #4539, where we run convexHull and convexityDefects on the original contour, without any manipulations. The rest of the test stays the same (see the sketch below)
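Roughly the combination exercised by that test (the contour below is illustrative): the hull is requested as indices, and convexityDefects() relies on those indices following the fixed ordering.

```cpp
#include <opencv2/imgproc.hpp>
#include <vector>

int main()
{
    std::vector<cv::Point> contour = {
        {0, 0}, {10, 0}, {10, 10}, {5, 4}, {0, 10}   // a simple concave polygon
    };

    // Hull as indices into the contour (returnPoints = false); the ordering
    // of these indices is what convexityDefects() depends on.
    std::vector<int> hull;
    cv::convexHull(contour, hull, /*clockwise=*/false, /*returnPoints=*/false);

    std::vector<cv::Vec4i> defects;
    cv::convexityDefects(contour, hull, defects);
    return (int)defects.size();
}
```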
* fixed several problems when running tests on Mac:
* OCL_pyrUp
* OCL_flip
* some basic UMat tests
* histogram badarg test (out of range access)
* retained the storepix fix in ocl_flip only for 16U/16S datatype, where the OpenCL compiler on Mac generates incorrect code
* moved deletion of ACCESS_FAST flag to non-SVM branch (where SVM is shared virtual memory (in OpenCL 2.x), not support vector machine)
* force OpenCL to use read/write for GPU<=>CPU memory transfers on machines with discrete video only on Macs. On Windows/Linux the drivers are seemingly smart enough to implement map/unmap properly (and maybe more efficiently than explicit read/write)
Changes:
* UMat for blur + rotate resulting in a speedup of around 2X on an i7
* support for boards larger than specified, allowing the full FOV to be covered
* support for markers moving the origin into the center of the board
* increase detection accuracy
The main change is support for boards that are larger than the FOV of
the camera and have their origin in the board center. This allows
building OEM calibration targets similar to the one from Intel RealSense,
utilizing corner points as close as possible to the image border (see the sketch below).
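A usage sketch, assuming the larger-than-FOV and marker-based-origin options are exposed as CALIB_CB_LARGER and CALIB_CB_MARKER on the sector-based detector (pattern size and flag combination are illustrative):

```cpp
#include <opencv2/calib3d.hpp>
#include <vector>

bool detectBoard(const cv::Mat& gray, std::vector<cv::Point2f>& corners)
{
    const cv::Size patternSize(14, 9);          // illustrative board geometry
    const int flags = cv::CALIB_CB_NORMALIZE_IMAGE | cv::CALIB_CB_EXHAUSTIVE |
                      cv::CALIB_CB_ACCURACY | cv::CALIB_CB_LARGER | cv::CALIB_CB_MARKER;
    cv::Mat meta;                               // optional per-cell info reported by the detector
    return cv::findChessboardCornersSB(gray, patternSize, corners, flags, meta);
}
```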
cuda4dnn(concat): write outputs from previous layers directly into concat's output
* eliminate concat by directly writing to its output buffer
* fix concat fusion not happening sometimes
* use a whitelist instead of a blacklist
* G-API/Samples: Added a simple "privacy masking camera" sample
The main idea is to host this code for an opencv.org blog post only
* G-API/Samples: Modified privacy masking camera code to look better for the post
* G-API/Samples: fix Windows (MSVC) support in Privacy Masking Camera
* G-API/Samples: Addressed the majority of review comments in PMC
* G-API/Samples: Use TickMeter to measure time + more info in cmd options
* G-API/Samples: fix yet another Windows warning in PMC
* G-API/Samples: Fix wording in PMC cmd arg parameters
* Fix wording, again
* G-API/Samples: Fix PMC cmd-line arguments, again
G-API: Using functors as kernel implementation
* Implement ability to create kernel impls from functors
* Clean up
* Replace make_ocv_functor with ocv_kernel
* Clean up
* Replace GCPUFunctor -> GOCVFunctor
* Move GOCVFunctor to cv::gapi::cpu namespace
* Implement override for rvalue and lvalue cases
* Address review comments
* Remove GAPI_EXPORT for template functions
* Fix indentation
fixed cv::moveWindow() on mac
* fixed cv::moveWindow() on mac (issue #16343). Thanks to cwreynolds and saskatchewancatch for the help!
* fixed warnings about _x0 and _y0
* Fix NN resize with dimensions > 4
* add test check for nn resize with channels > 4
* Change types from float to double
* Delete unnecessary test file. Move NN test to test_imgwarp. Add a 5-channel test only.
* improved version of HoughCircles (HOUGH_GRADIENT_ALT method)
* trying to fix build problems on Windows
* fixed typo
* fixed warnings on Windows
* make use of param2: treat it as minCos2 (the minimal value of the squared cosine between the gradient at the edge pixel and the vector connecting it with the circle center). With minCos2=0.85 we can detect some more eyes :) (a usage sketch follows this commit list)
* added description of HOUGH_GRADIENT_ALT
* cleaned up the implementation; added comments, replaced built-in numeric constants with symbolic constants
* rewrote circle_popcount() to use built-in popcount() if possible
* modified some of HoughCircles tests to use method parameter instead of the built-in loop
* fixed warnings on Windows
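A usage sketch of the new method; with HOUGH_GRADIENT_ALT, param2 plays the role of the minCos2 "circle perfectness" measure mentioned above (all values are illustrative starting points):

```cpp
#include <opencv2/imgproc.hpp>
#include <vector>

std::vector<cv::Vec3f> detectCircles(const cv::Mat& gray)   // 8-bit, single channel
{
    std::vector<cv::Vec3f> circles;
    cv::HoughCircles(gray, circles, cv::HOUGH_GRADIENT_ALT,
                     /*dp=*/1.5, /*minDist=*/gray.rows / 16.0,
                     /*param1=*/300, /*param2=*/0.85,        // param2 ~ circle perfectness
                     /*minRadius=*/10, /*maxRadius=*/100);
    return circles;
}
```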
trying to fix handling file storages with extremely long lines
* trying to fix handling of file storages with extremely long lines: https://github.com/opencv/opencv/issues/11061
* fixed erroneous pointer access in the JSON parser.
* it's now crash-test time! Temporarily set the initial parser buffer size to just 40 bytes. Let's run all the tests and check that the buffer is always correctly resized and handled
* fixed pointer use in JSON parser; added the proper test to catch this case
* fixed the test to make it more challenging: generate test JSON with
*
**
***
etc. shape
Fix compilation errors on GLES platforms
* Do not include glx.h when using GLES
GL/glx.h is included on all LINUX platforms, which is wrong
for a number of reasons:
- GL_PERSPECTIVE_CORRECTION_HINT is defined in GL/gl.h, so we
want gl.h, not glx.h; the latter just includes the former
- GL/gl.h is a Desktop GL header, and should not be included
on GLES platforms
- GL/gl.h is already included via QtOpenGL ->
QtGui/qopengl.h on desktop platforms
This fixes a problem when Qt is compiled with GLES, which
is often done on ARM platforms where desktop GL is not or
only poorly supported (e.g. slow due to emulation).
Fixes part of #9171.
* Only set GL_PERSPECTIVE_CORRECTION_HINT when GL version defines it
GL_PERSPECTIVE_CORRECTION_HINT does not exist in GLES 2.0/3.x,
and has been deprecated in OpenGL 3.0 core profiles.
Fixes part of #9171.
This is a correction of the previously misleading documentation and a warning related to a common calibration failure described in issue 15992
* corrected incorrect description of failed calibration state.
see issue 15992
* calib3d: apply suggestions from code review by catree
QR-Code detector : multiple detection
* change in qr-codes detection
* change in qr-codes detection
* change in test
* change in test
* add multiple detection
* multiple detection
* multiple detect
* add parallel implementation
* add functional for performance tests
* change in test
* add perftest
* returned implementation for 1 qr-code, added support for vector<Mat> and vector<vector<Point2f>> in MultipleDetectAndDecode (a usage sketch of the multi-decode API follows this commit list)
* deleted all lambda expressions
* changes in triangle sort
* fixed warnings
* fixed errors
* add java and python tests
* change in java tests
* change in java and python tests
* change in perf test
* change in qrcode.cpp
* add spaces
* change in qrcode.cpp
* change in qrcode.cpp
* change in qrcode.cpp
* change in java tests
* change in java tests
* solved problems
* solved problems
* change in java and python tests
* change in python tests
* change in python tests
* change in python tests
* change in method names
* deleted sample qrcode_multi, change in qrcode.cpp
* change in perf tests
* change in objdetect.hpp
* deleted code duplication in sample qrcode.cpp
* returned spaces
* added spaces
* deleted draw function
* change in qrcode.cpp
* change in qrcode.cpp
* deleted all draw functions
* objdetect(QR): extractVerticalLines
* objdetect(QR): whitespaces
* objdetect(QR): simplify operations, avoid duplicated code
* change in interface, additional checks in java and python tests, added new key in sample for saving original image from camera
* fix warnings and errors in python test
* fix
* write in file with space key
* solved error with empty mat check in python test
* correct path to test image
* deleted spaces
* solved error with the empty mat check in python tests
* added check of empty vector of points
* samples: rework qrcode.cpp
* objdetect(QR): fix API, input parameters must be first
* objdetect(QR): test/fix points layout
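A usage sketch of the resulting multi-QR API (the input image is assumed to be an already loaded cv::Mat):

```cpp
#include <opencv2/objdetect.hpp>
#include <vector>
#include <string>

// One call returns the decoded strings plus 4 corner points per detected code.
int decodeAll(const cv::Mat& image)
{
    cv::QRCodeDetector detector;
    std::vector<std::string> decoded;
    std::vector<cv::Point2f> points;              // 4 points per detected code
    if (!detector.detectAndDecodeMulti(image, decoded, points))
        return 0;                                 // nothing found
    return (int)decoded.size();
}
```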
* Reduce LLC loads, stores and multiplies on MulTransposed - 8% faster on VSX
* Add is_same method so c++11 is not required
* Remove trailing whitespaces.
* Change is_same to DataType depth check
Added type check for solvePnPGeneric | Issue: #16049
* Added type check
* Added checks before type fix
* Tests for 16049
* calib3d: update solvePnP regression check (16049)
- Added `explicit` to `VideoCapture` constructors with 2
arguments, 1 of which has a default value (see the sketch after this list)
- Applied library code style
- Introduced 2 debug macros to improve readability of the code
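A small illustration of what the added `explicit` prevents: an int no longer silently converts into a `VideoCapture`.

```cpp
#include <opencv2/videoio.hpp>

int main()
{
    // cv::VideoCapture cap = 0;           // implicit int -> VideoCapture: now a compile error
    cv::VideoCapture cap(0, cv::CAP_ANY);  // the intent has to be spelled out explicitly
    return cap.isOpened() ? 0 : 1;
}
```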
Vectorize minMaxIdx functions
* Updated documentation and intrinsic tests for v_reduce
* Add other files back in from the forced push
* Prevent a constant overflow with v_reduce for the int8 type
* Another alternative to fix constant overflow warning.
* Fix another compiler warning.
* Update comments and change comparison form to be consistent with other vectorized loops.
* Change return type of v_reduce_min & max for v_uint8 and v_uint16 to be same as lane type.
* Cast v_reduce functions to int to avoid overflow. Reduce number of parameters in MINMAXIDX_REDUCE macro.
* Restore cast type for v_reduce_min & max to LaneType
support eltwise sum with different number of input channels in CUDA backend
* add shortcut primitive
* add offsets in shortcut kernel
* skip tests involving more than two inputs
* remove redundant modulus operation
* support multiple inputs
* remove whole file indentation
* skip acc in0 trunc test if weighted
* use shortcut iff channels are unequal
Enable cuda4dnn on hardware without support for __half
* Enable cuda4dnn on hardware without support for half (ie. compute capability < 5.3)
Update CMakeLists.txt
Lowered minimum CC to 3.0
* UPD: added ifdef on new copy kernel
* added fp16 support detection at runtime
* Clarified #if condition on atomicAdd definition
* More explicit CMake error message
Fix implicit conversion from array to scalar in python bindings
* Fix wrong conversion behavior for primitive types
- Introduce ArgTypeInfo namedtuple instead of plain tuple.
If strict conversion parameter for type is set to true, it is
handled like object argument in PyArg_ParseTupleAndKeywords and
converted to concrete type with the appropriate pyopencv_to function
call.
- Remove deadcode and unused variables.
- Fix implicit conversion from numpy array with 1 element to scalar
- Fix narrowing conversion to size_t type.
- Enable tests with wrong conversion behavior
- Restrict passing None as value
- Restrict bool to integer/floating types conversion
* Add PyIntType support for Python 2
* Remove possible narrowing conversion of size_t
* Bindings conversion update
- Remove unused macro
- Add better conversion for types to numpy types descriptors
- Add argument name to fail messages
- NoneType treated as a valid argument. Better handling will be added
as a standalone patch
* Add descriptor specialization for size_t
* Add check for signed to unsigned integer conversion safety
- If signed integer is positive it can be safely converted
to unsigned
- Add check for plain python 2 objects
- Add check for numpy scalars
- Add simple type_traits implementation for better code style
* Resolve type "overflow" false negative in safe casting check
- Move type_traits to separate header
* Add copyright message to type_traits.hpp
* Limit conversion scope for integral numpy types
- Made canBeSafelyCasted specialized only for size_t, so
type_traits header became unused and was removed.
- Added clarification about descriptor pointer
Add lightweight IE hardware targets checks
nGraph: Concat with paddings
Enable more nGraph tests
Restore FP32->FP16 for GPU plugin of IE
try to fix buildbot
Use lightweight IE targets check only starting from R4
- some of the `icvCvt_BGR*` functions have the R and B channels
swapped, which leads to a wrong conversion
- renames the misleading `rgb` variable to `bgr`
- swaps back the conversion coefficients; `cB` should be the first
Signed-off-by: Janusz Lisiecki <jlisiecki@nvidia.com>
Actually, we can do this in constant time. xofs always
contains equal or increasing offset values. We can instead
find the most extreme value used and never attempt to load it.
Similarly, we can note that for all dx >= 0 and dx < (dwidth - cn),
xofs[dx] + cn < xofs[dwidth - cn] implies dx < (dwidth - cn).
Thus, we can use this to control our loop termination optimally.
This fixes #16137 with little or no performance impact. I have
also added a debug check as a sanity check.
* add cv::compare test when Mat type == CV_16F
* add assertion in cv::compare when src.depth() == CV_16F
* cv::compare assertion minor fix
* core: add more checks
* enable tests for DNN_TARGET_CUDA_FP16
* disable deconvolution tests
* disable shortcut tests
* fix typos and some minor changes
* dnn(test): skip CUDA FP16 test too (run_pool_max)
* Handle det == 0 in findCircle3pts.
Issue 16051 shows a case where findCircle3pts returns NaN for the
center coordinates and radius due to dividing by a determinant of 0. In
this case, the points are collinear, so the longest distance between any
2 points is the diameter of the minimum enclosing circle (a quick check is sketched below).
* imgproc(test): update test checks for minEnclosingCircle()
* imgproc: fix handling of special cases in minEnclosingCircle()
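The degenerate case described above as a quick check (points are illustrative): for collinear input, the expected circle spans the two extreme points.

```cpp
#include <opencv2/imgproc.hpp>
#include <vector>

int main()
{
    // Three collinear points: previously the 3-point circle fit divided by a
    // zero determinant; the expected answer is the circle over the two
    // extreme points (center at (2,2), radius = half the longest distance).
    std::vector<cv::Point2f> pts = { {0.f, 0.f}, {2.f, 2.f}, {4.f, 4.f} };
    cv::Point2f center;
    float radius = 0.f;
    cv::minEnclosingCircle(pts, center, radius);
    return (radius > 0.f) ? 0 : 1;
}
```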
G-API: Tutorial: Face beautification algorithm implementation
* Introduce a tutorial on face beautification algorithm
- small typo issue in render_ocv.cpp
* Addressing comments from rgarnov and smirnov-alexey
* Eltwise::DIV support in Halide backend
* fix typo
* remove div from generated test suite to pass CI, switching to manual test...
* ensure divisor not near to zero
* use randu
* dnn(test): update test data for Eltwise.Accuracy/DIV layer test
Add checks for empty operands in Matrix expressions that don't check properly
* Starting to add checks for empty operands in Matrix expressions that
don't check properly.
* Adding checks and declarations for checker functions
* Fix signatures and add checks for each class of Matrix Expr operation
* Make it catch the right exception
* Don't expose helper functions to public API
* G-API: Added G-API Overview slides & its source code
- Sample code snippets are moved to separate files;
- Introduced a separate benchmark to measure Fluid/OpenCV
performance;
- Added notes on API changes (it is still a 4.0, not a 4.2 talk!)
- Added a "Metropolis" beamer download-n-build script.
* G-API: Addressed review issues on G-API overview slides
* imgproc: Prevent 1B overrun of 8C3 SIMD optimization
The fourth value read via v_load_q is essentially ignored,
but can cause trouble if it happens to cross page boundaries.
The final few iterations may attempt to read the most extreme
elements of S, which will read 1B beyond the array in most
alignment cases. Dynamically compute the stop. This could be
hoisted from the loop, but will require a more extensive change.
Likewise, clean up the iteration increment statements to make
it more obvious they do channel count (3) elements per pass.
This should resolve #16137
* imgproc(resize): extra check
dnn(eltwise): fix handling of different number of channels
* dnn(test): reproducer for Eltwise layer issue from PR16063
* dnn(eltwise): rework support for inputs with different channels
* dnn(eltwise): get rid of finalize(), variableChannels
* dnn(eltwise): update input sorting by number of channels
- do not swap inputs if the number of channels is the same after truncation
* dnn(test): skip "shortcut" with batch size 2 on MYRIAD targets
G-API: Fix various issues for 4.2 release
* G-API: Fix issues reported by Coverity
- Fixed: passing values by value instead of passing by reference
* G-API: Fix redundant std::move()'s in return statements
Fixes #15903
* G-API: Added a smarter handling of Stop messages in the pipeline
- This should fix the "expected 100, got 99 frames" problem
- Fixes#15882
* G-API: Pass enum instead of GKernelPackage in Streaming test parameters
- Likely fixes#15836
* G-API: Address review issues in new bugfix comments
* G-API-NG/Docs: Added a tutorial page on interactive face detection sample
- Introduced a "--ser" option to run the pipeline serially for
benchmarking purposes
- Reorganized sample code to better fit the documentation;
- Fixed a couple of issues (mainly typos) in the public headers
* G-API-NG/Docs: Reflected meta-less compilation in new G-API tutorial
* G-API-NG/Docs: Addressed review comments on Face Analytics Pipeline example