* Allow the number of threads FFmpeg uses to be selected during VideoCapture::open() (usage sketch below).
Reset the interrupt timer in grab() if
err = avformat_find_stream_info(ic, NULL);
is interrupted but open() is successful.
* Correct the returned number of threads and amend test cases.
* Update container test case.
* Revert changes added to the existing videoio_container test case and include a test combining the thread change and raw read in the newly added videoio_read test case.
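A minimal usage sketch of the thread-count selection described above, assuming the property is exposed as cv::CAP_PROP_N_THREADS (the exact identifier may differ between OpenCV versions; the file name is illustrative):

```cpp
#include <opencv2/videoio.hpp>

int main()
{
    cv::VideoCapture cap;
    // Request 4 decoder threads at open() time (0 would mean "auto").
    cap.open("video.mp4", cv::CAP_FFMPEG, {cv::CAP_PROP_N_THREADS, 4});
    double used = cap.get(cv::CAP_PROP_N_THREADS);  // number of threads actually in use
    return used > 0 ? 0 : 1;
}
```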
This fixes the following error with mingw toolchain:
opencv/modules/videoio/src/cap_msmf.cpp:1020: error: 'wstring_convert' is not a member of 'std'
1020 | std::wstring_convert<std::codecvt_utf8_utf16<wchar_t>> conv;
| ^~~~~~~~~~~~~~~
opencv/modules/videoio/src/cap_ffmpeg_hw.hpp:230:26: error: 'wstring_convert' is not a member of 'std'
230 | std::wstring_convert<std::codecvt_utf8_utf16<wchar_t>> conv;
| ^~~~~~~~~~~~~~~
The <locale> header is required by the C++ standard.
See https://en.cppreference.com/w/cpp/locale/wstring_convert
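A minimal illustration of the fix, assuming the conversion code stays as-is and only the missing include is added:

```cpp
// <codecvt> declares std::codecvt_utf8_utf16, but std::wstring_convert itself
// lives in <locale>, which MinGW's libstdc++ does not pull in transitively here.
#include <locale>   // std::wstring_convert
#include <codecvt>  // std::codecvt_utf8_utf16

std::wstring_convert<std::codecvt_utf8_utf16<wchar_t>> conv;
std::wstring wide = conv.from_bytes("utf-8 input");
```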
This fixes the following error with mingw toolchain:
opencv/modules/videoio/src/cap_obsensor/obsensor_stream_channel_msmf.hpp:160:10: error: 'condition_variable' in namespace 'std' does not name a type
160 | std::condition_variable streamStateCv_;
| ^~~~~~~~~~~~~~~~~~
The libstdc++ that ships with gcc 4.8 doesn't
define `getline(basic_istream<char>&&, std::string&)`
even though it's part of the C++11 standard.
However, we can still use the lvalue overload:
`getline(basic_istream<char>&, std::string&)`.
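A sketch of the resulting pattern, with illustrative names (accel_list and the comma-splitting are assumptions, not the actual OpenCV code):

```cpp
#include <sstream>
#include <string>

void parse(const std::string& accel_list)
{
    std::istringstream ss(accel_list);   // named lvalue, not a temporary
    std::string token;
    // Uses getline(basic_istream<char>&, string&, char), which gcc 4.8 provides.
    while (std::getline(ss, token, ','))
    {
        // handle token
    }
}
```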
* videoio: add support for obsensor (Orbbec RGB-D Camera)
* obsensor: code format issues fixed and some code optimized
* obsensor: fix typo and format issues
* obsensor: fix crosses initialization error
Replaced sprintf with safer snprintf
* Straightforward replacement of sprintf with safer snprintf
* Trickier replacement of sprintf with safer snprintf
Some functions were changed to take another parameter: the size of the buffer, so that they can pass that size on to snprintf.
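An illustrative before/after for the "trickier" cases; the function name and format string are hypothetical, not taken from the OpenCV sources:

```cpp
#include <cstdio>

// Before: void formatName(char* buf, int idx) { sprintf(buf, "frame_%05d.png", idx); }
// After: the caller also passes the destination size so it can be forwarded to snprintf.
void formatName(char* buf, std::size_t bufSize, int idx)
{
    std::snprintf(buf, bufSize, "frame_%05d.png", idx);  // truncates instead of overflowing
}
```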
Some GStreamer elements may produce buffers with very
non-standard strides, offsets and/or even transport each plane
in different, non-contiguous pointers. This non-standard
layout is communicated via GstVideoMeta structures attached
to the buffers. Given this, when a GstVideoMeta is available,
one should parse the layout from it instead of generating
a generic one from the caps.
The GstVideoFrame utility does precisely this: if the buffer
contains a video meta, it uses that to fill the format and
memory layout. If there is no meta available, the layout is
inferred from the caps.
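A sketch of how such a buffer can be consumed through GstVideoFrame, assuming the caps and buffer come from an appsink; names are illustrative:

```cpp
#include <gst/video/video.h>

bool read_frame(GstBuffer* buffer, GstCaps* caps)
{
    GstVideoInfo info;
    if (!gst_video_info_from_caps(&info, caps))
        return false;

    GstVideoFrame frame;
    // Uses the attached GstVideoMeta when present, otherwise the caps-derived info.
    if (!gst_video_frame_map(&frame, &info, buffer, GST_MAP_READ))
        return false;

    for (guint p = 0; p < GST_VIDEO_FRAME_N_PLANES(&frame); ++p)
    {
        guint8* data  = (guint8*)GST_VIDEO_FRAME_PLANE_DATA(&frame, p);
        gint   stride = GST_VIDEO_FRAME_PLANE_STRIDE(&frame, p);
        (void)data; (void)stride;  // copy/wrap plane p here using the real stride
    }

    gst_video_frame_unmap(&frame);
    return true;
}
```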
* Added support for 4B RGB V4L2 pixel formats
Added support for V4L2_PIX_FMT_XBGR32 and V4L2_PIX_FMT_ABGR32 pixel
formats.
* Added workaround for missing V4L2_PIX_FMT_ABGR32 and V4L2_PIX_FMT_XBGR32
defines
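A sketch of the kind of workaround described above, using the fourcc values from linux/videodev2.h:

```cpp
#include <linux/videodev2.h>

// Older kernel headers predate these formats; define them if missing.
#ifndef V4L2_PIX_FMT_ABGR32
#define V4L2_PIX_FMT_ABGR32 v4l2_fourcc('A', 'R', '2', '4')  // 32-bit BGRA 8:8:8:8
#endif
#ifndef V4L2_PIX_FMT_XBGR32
#define V4L2_PIX_FMT_XBGR32 v4l2_fourcc('X', 'R', '2', '4')  // 32-bit BGRX 8:8:8:8
#endif
```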
Default FFMPEG VideoCapture backend to rtsp_flags=prefer_tcp
* Make the VideoCapture FFmpeg backend's default RTSP connection type prefer_tcp.
* Ensure that the FFmpeg version of avformat is checked.
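A hedged sketch of the behavior this introduces, using plain libavformat calls; the URL and the surrounding function are illustrative, not the backend's actual code:

```cpp
extern "C" {
#include <libavformat/avformat.h>
}

int open_stream(const char* url, AVFormatContext** ic)
{
    AVDictionary* opts = nullptr;
    // Request TCP transport for RTSP by default; callers can still override it.
    av_dict_set(&opts, "rtsp_flags", "prefer_tcp", 0);
    int err = avformat_open_input(ic, url, nullptr, &opts);
    av_dict_free(&opts);
    return err;
}
```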
Audio MSMF: added the ability to set samples per second
* Audio MSMF: added the ability to set samples per second (usage sketch after these notes)
* changed the valid sampling rate check
* fixed docs
* add test
* fixed warning
* fixed error
* fixed error
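A hedged usage sketch, assuming the property names cv::CAP_PROP_AUDIO_STREAM and cv::CAP_PROP_AUDIO_SAMPLES_PER_SECOND and an illustrative file name:

```cpp
#include <opencv2/videoio.hpp>

int main()
{
    // Ask the MSMF backend to deliver audio resampled to 44100 samples per second.
    cv::VideoCapture cap("test_audio.wav", cv::CAP_MSMF,
                         {cv::CAP_PROP_AUDIO_STREAM, 0,
                          cv::CAP_PROP_AUDIO_SAMPLES_PER_SECOND, 44100});
    double fs = cap.get(cv::CAP_PROP_AUDIO_SAMPLES_PER_SECOND);  // effective rate
    return fs > 0 ? 0 : 1;
}
```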
Add the capability for VideoCapture to return the extraData from FFmpeg when required
* Update rawMode to append any extra data received during the initial negotiation of an RTSP stream or during the parsing of an MPEG4 file header.
For h264[5] RTSP streams this ensures the parameter sets, if available, are always returned on the first call to grab()/read(), and has two purposes:
1) To ensure the parameter sets are available even if they are not transmitted in band. This is common for Axis IP cameras.
2) To allow callers of VideoCapture::grab()[read()] to split the raw stream over multiple files by appending the parameter sets to the beginning of any new files.
For (1) there is no alternative; for (2), if the parameter sets were provided in band it would be possible to parse the raw bitstream and search for them, but that would be a lot of work when that information is already provided by FFmpeg.
For MPEG4 files this information is only supplied in the header and is required for decoding.
Two properties are also required to enable the raw encoded bitstream to be written to multiple files; these are:
1) an indicator as to whether the last frame was a key frame or not - each new file needs to start at a key frame to avoid storing unusable frame diffs,
2) the length in bytes of the parameter sets contained in the last frame - required to split the parameter sets from the frame without having to parse the stream. Any call to VideoCapture::get(CAP_PROP_LF_PARAM_SET_LEN) returning a number greater than zero indicates the presence of a parameter set at the beginning of the raw bitstream.
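A hedged sketch of splitting a raw stream as described above. CAP_PROP_FORMAT = -1 enables raw mode; the extra-data retrieval uses CAP_PROP_CODEC_EXTRADATA_INDEX as referenced later in this PR (property names were changed during review, so exact identifiers may differ by OpenCV version):

```cpp
#include <opencv2/videoio.hpp>
#include <fstream>
#include <string>

void dumpRaw(const std::string& url, std::ofstream& out)
{
    // CAP_PROP_FORMAT = -1 switches grab()/retrieve() to raw encoded packets.
    cv::VideoCapture cap(url, cv::CAP_FFMPEG, {cv::CAP_PROP_FORMAT, -1});

    // Write the codec extra data (parameter sets) at the start of the file.
    int extraIdx = (int)cap.get(cv::CAP_PROP_CODEC_EXTRADATA_INDEX);
    cv::Mat extra;
    if (extraIdx > 0 && cap.retrieve(extra, extraIdx))
        out.write((const char*)extra.data, (std::streamsize)extra.total());

    // Then append each raw encoded packet.
    cv::Mat pkt;
    while (cap.grab() && cap.retrieve(pkt))
        out.write((const char*)pkt.data, (std::streamsize)pkt.total());
}
```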
* Adjust test data to account for extraData
* Address warning.
* Change added property names and remove parameter set start code check.
* Output extra data on calls to retrieve instead of appending to the first packet.
* Reverted old test case and added new one to evaluate new functionality.
* Add missing definition.
* Remove flag from legacy api.
Add property to determine if returning extra data is supported.
Always allow extra data to be returned on calls to cap.retrieve()
Update test case.
* Update condition which indicates CAP_PROP_CODEC_EXTRADATA_INDEX is not supported in test case.
* Include compatibility for windows dll if not updated.
Enforce existing return status convention.
* Fix return error and missing test constraints.
Fix the following build failure with gcc 4.8:
In file included from /home/buildroot/autobuild/instance-3/output-1/build/opencv4-4.5.4/modules/videoio/src/cap_ffmpeg_impl.hpp:100:0,
from /home/buildroot/autobuild/instance-3/output-1/build/opencv4-4.5.4/modules/videoio/src/cap_ffmpeg.cpp:50:
/home/buildroot/autobuild/instance-3/output-1/build/opencv4-4.5.4/modules/videoio/src/cap_ffmpeg_hw.hpp: In constructor 'HWAccelIterator::HWAccelIterator(cv::VideoAccelerationType, bool, AVDictionary*)':
/home/buildroot/autobuild/instance-3/output-1/build/opencv4-4.5.4/modules/videoio/src/cap_ffmpeg_hw.hpp:939:23: error: use of deleted function 'std::basic_istringstream<char>& std::basic_istringstream<char>::operator=(const std::basic_istringstream<char>&)'
s_stream_ = std::istringstream(accel_list);
^
In file included from /home/buildroot/autobuild/instance-3/output-1/host/opt/ext-toolchain/arm-none-linux-gnueabi/include/c++/4.8.3/complex:45:0,
from /home/buildroot/autobuild/instance-3/output-1/build/opencv4-4.5.4/modules/core/include/opencv2/core/cvstd.inl.hpp:47,
from /home/buildroot/autobuild/instance-3/output-1/build/opencv4-4.5.4/modules/core/include/opencv2/core.hpp:3306,
from /home/buildroot/autobuild/instance-3/output-1/build/opencv4-4.5.4/modules/videoio/include/opencv2/videoio.hpp:46,
from /home/buildroot/autobuild/instance-3/output-1/build/opencv4-4.5.4/modules/videoio/src/precomp.hpp:57,
from /home/buildroot/autobuild/instance-3/output-1/build/opencv4-4.5.4/modules/videoio/src/cap_ffmpeg.cpp:42:
/home/buildroot/autobuild/instance-3/output-1/host/opt/ext-toolchain/arm-none-linux-gnueabi/include/c++/4.8.3/sstream:272:11: note: 'std::basic_istringstream<char>& std::basic_istringstream<char>::operator=(const std::basic_istringstream<char>&)' is implicitly deleted because the default definition would be ill-formed:
class basic_istringstream : public basic_istream<_CharT, _Traits>
^
/home/buildroot/autobuild/instance-3/output-1/host/opt/ext-toolchain/arm-none-linux-gnueabi/include/c++/4.8.3/sstream:272:11: error: use of deleted function 'std::basic_istream<char>& std::basic_istream<char>::operator=(const std::basic_istream<char>&)'
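A sketch of a gcc 4.8-compatible alternative: reuse the stream through str()/clear() instead of move-assigning a temporary (the wrapper struct here is an illustrative stand-in, not the real HWAccelIterator):

```cpp
#include <sstream>
#include <string>

struct HWAccelIteratorSketch
{
    void reset(const std::string& accel_list)
    {
        s_stream_.clear();            // reset error/eof flags
        s_stream_.str(accel_list);    // replace the buffer contents in place
    }
    std::istringstream s_stream_;     // no move assignment needed
};
```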
Fixes:
- http://autobuild.buildroot.org/results/60f8846b435dafda0ced412d59ffe15bdff0810d
Signed-off-by: Fabrice Fontaine <fontaine.fabrice@gmail.com>
Fix: #21021
NDK API AMediaCodec_getOutputBuffer() returns MediaCodecBuffer::data()
which is actually ABuffer::data(). The returned buffer address is already
adjusted by offset.
More info:
ABuffer::base() returns base address without offset
ABuffer::data() returns base + offset
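A hedged sketch of the corrected usage against the NDK API; the timeout value and the surrounding function are illustrative:

```cpp
#include <media/NdkMediaCodec.h>

void drainOutput(AMediaCodec* codec)
{
    AMediaCodecBufferInfo info;
    ssize_t idx = AMediaCodec_dequeueOutputBuffer(codec, &info, /*timeoutUs=*/10000);
    if (idx >= 0)
    {
        size_t bufSize = 0;
        uint8_t* buf = AMediaCodec_getOutputBuffer(codec, (size_t)idx, &bufSize);
        if (buf)
        {
            // Consume info.size bytes starting at buf: the offset is already applied,
            // so reading from buf + info.offset would apply it twice.
        }
        AMediaCodec_releaseOutputBuffer(codec, (size_t)idx, false);
    }
}
```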
Change-Id: I2936339ce4fa9acf657a5a7d92adc1275d7b28a1
* Fix gst error handling
* Use the return value instead of the error, which is not guaranteed to be set (non-NULL) in case of error
* Test err pointer before accessing it
* Remove unreachable code
* videoio(gstreamer): restore check in writer code
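A sketch of the resulting pattern, using gst_parse_launch as an illustrative call site:

```cpp
#include <gst/gst.h>

GstElement* open_pipeline(const char* description)
{
    GError* err = nullptr;
    GstElement* pipeline = gst_parse_launch(description, &err);
    if (!pipeline)                       // decide success/failure on the return value...
    {
        if (err)                         // ...and test err before dereferencing it
            g_printerr("gst_parse_launch failed: %s\n", err->message);
    }
    if (err)                             // err may also be set on success (warning)
        g_error_free(err);
    return pipeline;
}
```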
Co-authored-by: Alexander Alekhin <alexander.a.alekhin@gmail.com>
Add CAP_PROP_STREAM_OPEN_TIME
* Added CAP_PROP_STREAM_OPEN_TIME to videoio module - can be used to query the time at which the stream was opened, in seconds since Jan 1 1970 (midnight, UTC). Useful for RTSP and other live video where absolute timestamps are needed. Only applicable to ffmpeg backends
* use nanoseconds instead of seconds to mark the stream open time, and change the cap prop name to CAP_PROP_STREAM_OPEN_TIME_NSEC
* use microseconds for CAP_PROP_STREAM_OPEN_TIME (nanoseconds roll over too soon, and milliseconds/seconds require a division); see the usage sketch below
* fix whitespace issue
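A hedged usage sketch, assuming the final property name is CAP_PROP_STREAM_OPEN_TIME_USEC after the switch to microseconds; the URL is illustrative:

```cpp
#include <opencv2/videoio.hpp>

cv::VideoCapture cap("rtsp://example/stream", cv::CAP_FFMPEG);
double openTimeUsec = cap.get(cv::CAP_PROP_STREAM_OPEN_TIME_USEC);
double openTimeSec  = openTimeUsec / 1e6;   // seconds since the Unix epoch
```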
* Added exposure and gain props, maximized pixel clk
* removed pixel clock maximization
pixel clock maximization is not suitable for all use cases, so I removed it from the PR.
* videoio/gstreamer: Add support for GRAY16_LE.
* videoio/gstreamer: added BGRA/BGRx support
Co-authored-by: Maksim Shabunin <maksim.shabunin@gmail.com>
* VideoCapture timeout set/get
* Common formatting for enum values
* Fix incorrect enum values in videoio.hpp
* Define timeout enum values in public api and align with master
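A hedged usage sketch of the timeout properties, assuming the names CAP_PROP_OPEN_TIMEOUT_MSEC and CAP_PROP_READ_TIMEOUT_MSEC and the FFmpeg backend; the URL and values are illustrative:

```cpp
#include <opencv2/videoio.hpp>

cv::VideoCapture cap("rtsp://example/stream", cv::CAP_FFMPEG,
                     {cv::CAP_PROP_OPEN_TIMEOUT_MSEC, 5000,    // give open() up to 5 s
                      cv::CAP_PROP_READ_TIMEOUT_MSEC, 1000});  // and each read up to 1 s
double readTimeout = cap.get(cv::CAP_PROP_READ_TIMEOUT_MSEC);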
videoio: HW decode/encode in FFMPEG backend; new properties with support in FFMPEG/GST/MSMF
* HW acceleration in FFMPEG backend
* fixes on Windows, remove D3D9
* HW acceleration in FFMPEG backend
* fixes on Windows, remove D3D9
* improve va test
* Copyright
* check LIBAVUTIL_BUILD >= AV_VERSION_INT(55, 78, 100) // FFMPEG 3.4+
* CAP_MSMF test on .mp4
* .mp4 in test
* improve va test
* Copyright
* check LIBAVUTIL_BUILD >= AV_VERSION_INT(55, 78, 100) // FFMPEG 3.4+
* CAP_MSMF test on .mp4
* .mp4 in test
* .avi for GStreamer test
* revert changes around seek()
* cv_writer_open_with_params
* params.warnUnusedParameters
* VideoCaptureParameters in GStreamer
* open_with_params
* params->getUnused
* Reduce PSNR threshold 33->32 (other tests use 30)
* require FFMPEG 4.0+; PSNR 30 as in other tests
* GStreamer AVI-demux plugin not installed in Ubuntu test environment?
* fix build on very old ffmpeg
* fix build on very old ffmpeg
* fix build issues
* fix build issues (static_cast)
* FFMPEG built on Windows without H264 encoder?
* fix for write_nothing test on VAAPI
* fix warnings
* fix cv_writer_get_prop in plugins
* use avcodec_get_hw_frames_parameters; more robust fallback to SW codecs
* internal function hw_check_device() for device check/logging
* two separate tests for HW read and write
* image size 640x480 in encode test
* WITH_VA=ON (only .h headers used in OpenCV, no linkage dependency)
* exception on VP9 SW encoder?
* rebase master; refine info message
* videoio: fix FFmpeg standalone plugin build
* videoio(ffmpeg): eliminate MSVC build warnings
* address review comments
* videoio(hw): update videocapture_acceleration.read test
- remove parallel decoding by SW code path
- check PSNR against the original generated image
* videoio: minor fixes
* videoio(test): disable unsupported MSMF cases (SW and HW)
* videoio(test): update PSNR thresholds for HW acceleration read
* videoio(test): update debug messages
* "hw_acceleration" whitelisting parameter
* little optimization in test
* D3D11VA supports decoders, doesn't support encoders
* videoio(test): adjust PSNR threshold in write_read_position tests
* videoio(ffmpeg): fix rejecting on acceleration device name mismatch
* videoio(ffmpeg): fix compilation USE_AV_HW_CODECS=0, add more debug logging
* videoio: rework VideoAccelerationType behavior (usage sketch below)
- enum is not a bitset
- default value is backend-specific
- only '_NONE' and '_ANY' may fall back on software processing
- a specific H/W acceleration doesn't fall back on software processing; it fails if there is no support for the specified H/W acceleration
* videoio(test): fix for current FFmpeg wrapper
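A hedged usage sketch of the acceleration properties added here (VIDEO_ACCELERATION_ANY allows software fallback as described above; file names, codec and frame size are illustrative):

```cpp
#include <opencv2/videoio.hpp>

cv::VideoCapture cap("video.mp4", cv::CAP_FFMPEG,
                     {cv::CAP_PROP_HW_ACCELERATION, cv::VIDEO_ACCELERATION_ANY});
double accel = cap.get(cv::CAP_PROP_HW_ACCELERATION);   // acceleration type actually in use

cv::VideoWriter wr("out.mp4", cv::CAP_FFMPEG,
                   cv::VideoWriter::fourcc('H', '2', '6', '4'), 30.0,
                   cv::Size(640, 480),
                   {cv::VIDEOWRITER_PROP_HW_ACCELERATION, cv::VIDEO_ACCELERATION_ANY});
```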
Co-authored-by: Alexander Alekhin <alexander.a.alekhin@gmail.com>
Android NDK camera support
* Add native camera video backend for Android
* In the event of a "No buffer available error" wait for the appropriate callback and retry
* Fix stale context when creating a new AndroidCameraCapture
* Add property handling
VideoCapture/DSHOW : Allow setting CAP_PROP_CONVERT_RGB before FOURCC/FPS/CHANNEL/WIDTH/HEIGHT (usage sketch below).
* 🐛 cap_dshow : Allow setting CAP_PROP_CONVERT_RGB before FOURCC/FPS/CHANNEL
* 🐛 cap_dshow : fix g_VI.setConvertRGB not being called with correct boolean value on first property set.
* ✅ cap_dshow : Test CAP_PROP_CONVERT_RGB persistence
* 🚨 Fix cast from bool to double
* 🚨 Fix trailing whitespace
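A hedged sketch of the ordering this enables with the DSHOW backend; the FOURCC is illustrative:

```cpp
#include <opencv2/videoio.hpp>

cv::VideoCapture openRaw()
{
    cv::VideoCapture cap(0, cv::CAP_DSHOW);
    cap.set(cv::CAP_PROP_CONVERT_RGB, 0);   // set first; now persists across the next calls
    cap.set(cv::CAP_PROP_FOURCC, cv::VideoWriter::fourcc('Y', 'U', 'Y', '2'));
    return cap;                             // get(CAP_PROP_CONVERT_RGB) stays 0
}
```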
add video capture parameters
* add parameters
* videoio: revert unnecessary massive changes
* videoio: support capture parameters in backends API
- add tests
- FFmpeg backend sample code
- StaticBackend API is done
- support through PluginBackend API will be added later
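A hedged usage sketch of the parameters API added here; the device index and property/value pairs are illustrative:

```cpp
#include <opencv2/videoio.hpp>

// Property/value pairs are passed straight to open(); unused ones trigger a warning.
cv::VideoCapture cap(0, cv::CAP_ANY,
                     {cv::CAP_PROP_FRAME_WIDTH, 1280,
                      cv::CAP_PROP_FRAME_HEIGHT, 720});
bool ok = cap.isOpened();
```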
Co-authored-by: Milashchenko <maksim.milashchenko@intel.com>
Co-authored-by: Alexander Alekhin <alexander.a.alekhin@gmail.com>