* Allow the number of threads FFmpeg uses to be selected during VideoCapture::open().
Reset the interrupt timer in grab() if
err = avformat_find_stream_info(ic, NULL);
is interrupted but open() still succeeds.
* Correct the returned number of threads and amend test cases.
* Update container test case.
* Revert the changes added to the existing videoio_container test case and add a test combining a thread-count change with raw read to the newly added videoio_read test case.
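A minimal usage sketch of this feature, assuming the thread count is exposed as the cv::CAP_PROP_N_THREADS open parameter (the file name is illustrative):

```cpp
#include <opencv2/videoio.hpp>
#include <vector>

int main()
{
    // Request four FFmpeg decoder threads when opening the file.
    std::vector<int> params = { cv::CAP_PROP_N_THREADS, 4 };
    cv::VideoCapture cap("video.mp4", cv::CAP_FFMPEG, params);
    // The effective thread count can be read back with cap.get(cv::CAP_PROP_N_THREADS).
    return cap.isOpened() ? 0 : 1;
}
```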
This fixes the following error with the MinGW toolchain:
opencv/modules/videoio/src/cap_msmf.cpp:1020: error: 'wstring_convert' is not a member of 'std'
1020 | std::wstring_convert<std::codecvt_utf8_utf16<wchar_t>> conv;
| ^~~~~~~~~~~~~~~
opencv/modules/videoio/src/cap_ffmpeg_hw.hpp:230:26: error: 'wstring_convert' is not a member of 'std'
230 | std::wstring_convert<std::codecvt_utf8_utf16<wchar_t>> conv;
| ^~~~~~~~~~~~~~~
The <locale> header is required by the C++ standard.
See https://en.cppreference.com/w/cpp/locale/wstring_convert
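A minimal sketch of the fix: std::codecvt_utf8_utf16 comes from <codecvt>, but std::wstring_convert itself is declared in <locale>, which this libstdc++ does not pull in transitively (the helper function below is illustrative):

```cpp
#include <locale>   // std::wstring_convert (the missing include)
#include <codecvt>  // std::codecvt_utf8_utf16
#include <string>

// Convert a UTF-16 wide string to UTF-8, as in the declarations quoted above.
std::string toUtf8(const std::wstring& ws)
{
    std::wstring_convert<std::codecvt_utf8_utf16<wchar_t>> conv;
    return conv.to_bytes(ws);
}
```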
This fixes the following error with the MinGW toolchain:
opencv/modules/videoio/src/cap_obsensor/obsensor_stream_channel_msmf.hpp:160:10: error: 'condition_variable' in namespace 'std' does not name a type
160 | std::condition_variable streamStateCv_;
| ^~~~~~~~~~~~~~~~~~
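The corresponding fix, sketched below: std::condition_variable is declared in <condition_variable>, which must be included explicitly here (the class name is illustrative, not the actual OpenCV type):

```cpp
#include <condition_variable>  // std::condition_variable (the missing include)
#include <mutex>

class StreamChannel
{
    std::mutex streamStateMutex_;
    std::condition_variable streamStateCv_;  // the member flagged at obsensor_stream_channel_msmf.hpp:160
};
```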
The libstdc++ that ships with GCC 4.8 doesn't
define `getline(basic_istream<char>&&, std::string&)`,
even though it's part of the C++11 standard.
However, we can still use the lvalue overload:
`getline(basic_istream<char>&, std::string&)`.
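A small sketch of the resulting workaround (the helper function is hypothetical): keep the stream in a named variable so the lvalue-reference overload is selected.

```cpp
#include <fstream>
#include <string>

// Works with GCC 4.8's libstdc++: 'file' is an lvalue, so the
// getline(basic_istream<char>&, std::string&) overload is used.
std::string firstLine(const std::string& path)
{
    std::ifstream file(path);
    std::string line;
    std::getline(file, line);
    // std::getline(std::ifstream(path), line);  // would need the rvalue overload missing there
    return line;
}
```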
* videoio: add support for obsensor (Orbbec RGB-D camera)
* obsensor: fix code formatting issues and optimize some code
* obsensor: fix typo and format issues
* obsensor: fix "crosses initialization" compiler error
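A hedged usage sketch of the new backend, assuming the cv::CAP_OBSENSOR backend id and the CAP_OBSENSOR_DEPTH_MAP / CAP_OBSENSOR_BGR_IMAGE retrieve flags:

```cpp
#include <opencv2/core.hpp>
#include <opencv2/videoio.hpp>

int main()
{
    cv::VideoCapture camera(0, cv::CAP_OBSENSOR);
    if (!camera.isOpened())
        return 1;

    cv::Mat depth, bgr;
    if (camera.grab())
    {
        camera.retrieve(depth, cv::CAP_OBSENSOR_DEPTH_MAP);  // depth map
        camera.retrieve(bgr, cv::CAP_OBSENSOR_BGR_IMAGE);    // color image
    }
    return 0;
}
```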
Replaced sprintf with safer snprintf
* Straightforward replacement of sprintf with safer snprintf
* Trickier replacement of sprintf with safer snprintf
Some functions were changed to take another parameter: the size of the buffer, so that they can pass that size on to snprintf.
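An illustrative sketch of that second pattern (the function name is hypothetical, not taken from the codebase): the buffer size is threaded through as an extra parameter and forwarded to snprintf.

```cpp
#include <cstddef>
#include <cstdio>

// Before: void formatFourcc(char* buf, int fourcc) wrote with sprintf(buf, ...).
// After: the caller also passes the buffer size, which is forwarded to snprintf.
static void formatFourcc(char* buf, size_t bufSize, int fourcc)
{
    snprintf(buf, bufSize, "%c%c%c%c",
             (char)(fourcc & 0xFF), (char)((fourcc >> 8) & 0xFF),
             (char)((fourcc >> 16) & 0xFF), (char)((fourcc >> 24) & 0xFF));
}
```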
Some GStreamer elements may produce buffers with very
non-standard strides and offsets, or even transport each plane
through different, non-contiguous pointers. This non-standard
layout is communicated via GstVideoMeta structures attached
to the buffers. Given this, when a GstVideoMeta is available,
one should parse the layout from it instead of generating
a generic one from the caps.
The GstVideoFrame utility does precisely this: if the buffer
contains a video meta, it uses that to fill the format and
memory layout. If there is no meta available, the layout is
inferred from the caps.
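A hedged sketch of that approach (not the exact OpenCV code): gst_video_frame_map() honours any GstVideoMeta attached to the buffer and otherwise falls back to the layout implied by the caps-derived GstVideoInfo.

```cpp
#include <gst/video/video.h>

// Map a buffer through GstVideoFrame and read per-plane pointers and strides,
// whether they come from a GstVideoMeta or from the caps-derived layout.
static void readPlanes(GstBuffer* buffer, GstCaps* caps)
{
    GstVideoInfo info;
    if (!gst_video_info_from_caps(&info, caps))
        return;

    GstVideoFrame frame;
    if (!gst_video_frame_map(&frame, &info, buffer, GST_MAP_READ))
        return;

    for (guint i = 0; i < GST_VIDEO_FRAME_N_PLANES(&frame); ++i)
    {
        guint8* data   = (guint8*)GST_VIDEO_FRAME_PLANE_DATA(&frame, i);
        gint    stride = GST_VIDEO_FRAME_PLANE_STRIDE(&frame, i);
        (void)data; (void)stride;  // consume the plane here (e.g. wrap it in a cv::Mat)
    }
    gst_video_frame_unmap(&frame);
}
```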
* Added support for 4-byte RGB V4L2 pixel formats
Added support for V4L2_PIX_FMT_XBGR32 and V4L2_PIX_FMT_ABGR32 pixel
formats.
* Added workaround for missing V4L2_PIX_FMT_ABGR32 and V4L2_PIX_FMT_XBGR32
defines
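A sketch of such a workaround, assuming the standard videodev2.h FourCC values 'XR24' and 'AR24' for these formats:

```cpp
#include <linux/videodev2.h>

// Older kernel headers may not provide these formats; define them locally
// from their documented FourCC codes so the capture code still compiles.
#ifndef V4L2_PIX_FMT_XBGR32
#define V4L2_PIX_FMT_XBGR32 v4l2_fourcc('X', 'R', '2', '4')  /* 32-bit BGRX-8-8-8-8 */
#endif
#ifndef V4L2_PIX_FMT_ABGR32
#define V4L2_PIX_FMT_ABGR32 v4l2_fourcc('A', 'R', '2', '4')  /* 32-bit BGRA-8-8-8-8 */
#endif
```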
Default the FFmpeg VideoCapture backend to rtsp_flags=prefer_tcp
* Make the VideoCapture FFmpeg backend's default RTSP connection type prefer_tcp.
* Ensure that the FFmpeg avformat version is checked.
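A hedged sketch of overriding the new default back to UDP for a single process, assuming the OPENCV_FFMPEG_CAPTURE_OPTIONS environment variable with its "key;value" syntax; the option key and URL shown are assumptions, not taken from the change itself.

```cpp
#include <cstdlib>
#include <opencv2/videoio.hpp>

int main()
{
    // Must be set before the FFmpeg backend parses its options for this capture.
    setenv("OPENCV_FFMPEG_CAPTURE_OPTIONS", "rtsp_transport;udp", 1);
    cv::VideoCapture cap("rtsp://example.com/stream", cv::CAP_FFMPEG);
    return cap.isOpened() ? 0 : 1;
}
```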