* Enabled -Wnarrowing warning
* Fixed type narrowing issues
* Cast python constants
* Use long long for python constants
* Use int for python constants with fallback to long
* Update cv2.cpp
* videoio(librealsense): fix pipeline start with config
Fix: apply pipeline settings by passing the config to start().
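A minimal sketch of the fixed start sequence, assuming the librealsense2 C++ API (the stream parameters below are illustrative, not taken from the change):

#include <librealsense2/rs.hpp>

int main()
{
    rs2::config cfg;
    cfg.enable_stream(RS2_STREAM_COLOR, 640, 480, RS2_FORMAT_BGR8, 30);  // illustrative settings
    rs2::pipeline pipe;
    pipe.start(cfg);   // the config only takes effect when passed to start()
    rs2::frameset frames = pipe.wait_for_frames();
    return 0;
}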
* videoio(librealsense): add support get props
Add support for getting some properties.
cap_libv4l depends on an external library (libv4l) yet is still larger
(1966 loc vs 1822 loc).
It was initially introduced by copy-pasting cap_v4l in order to offload
various color conversions to libv4l.
However, nowadays we handle most of the needed color conversions inside
OpenCV. Our own implementation is better tested and (probably) also
better performing, as it can optionally leverage IPP/OpenCL.
Currently cap_v4l is better maintained and the code is generally in
better shape. There is however an API
difference in getting unconverted frames:
* on cap_libv4l one needs to set `CV_CAP_MODE_GRAY=1` or
`CV_CAP_MODE_YUYV=1`
* on cap_v4l one needs to set `CV_CAP_PROP_CONVERT_RGB=0`
The latter is more flexible, as it also allows accessing undecoded
JPEG images (see the sketch below).
Fixes #4563
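A minimal sketch of the cap_v4l approach mentioned above, assuming a V4L2 device at index 0:

#include <opencv2/videoio.hpp>

int main()
{
    cv::VideoCapture cap(0, cv::CAP_V4L2);
    cap.set(cv::CAP_PROP_CONVERT_RGB, 0);  // deliver frames without color conversion
    cv::Mat raw;
    cap >> raw;  // e.g. an undecoded MJPEG buffer or raw YUYV data
    return 0;
}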
* removed C API in the following modules: photo, video, imgcodecs, videoio
* trying to fix various compile errors and warnings on Windows and Linux
* continue to fix compile errors and warnings
* continue to fix compile errors and warnings, as well as test failures
* trying to resolve compile warnings on Android
* Update cap_dc1394_v2.cpp
fix a warning from the new GCC
V4L (V4L2): Refactoring. Added missing camera properties. Fixed getting `INF` for some properties. Single-threaded as always (#12893)
* cap_v4l:
1. Added verbalization of cap_properties.
2. Elementary refactoring of property set/get.
3. Removed converting parameters to/from the [0,1] range.
4. Added all known conversions from V4L2_CID_* to CV_CAP_PROP_*.
* cap_v4l:
1. Removed all querying of parameter ranges.
2. Refactored capture initialization.
3. Added selecting the input channel via CV_CAP_PROP_MODE. With the default value -1 the channel is not changed.
* cap_v4l:
1. Refactored the convert-to-RGB code path.
* cap_v4l:
1. Fixed use of the video buffer index.
2. Removed an extra memcpy when grabbing an image.
3. Removed device closing from autosetup_capture_mode_v4l2.
* cap_v4l:
1. Eliminated the `goto`.
2. Fixed use of the temporary buffer index for V4L2_PIX_FMT_SN9C10X.
3. Fixed use of the bufferIndex.
4. Removed trailing spaces and unused variables.
* cap_v4l:
1. Added an alias for capture->buffers[capture->bufferIndex].
2. Reduced the amount of data passed to memcpy: bytesused instead of length.
3. Refactoring: reduced code duplication, more debug info.
* cap_v4l:
1. Added the ability to call grab and retrieveFrame independently several times.
* cap_v4l:
1. No need to close/reopen the device to apply new capture parameters.
2. Stopped using the device name as a flag that the capture is closed; added a dedicated function instead.
3. Refactoring: added requestBuffers and createBuffers.
* cap_v4l:
1. Added tryIoctl with `select`, as was done in mainloop_v4l2.
2. Fixed buffer requests for the device without closing it.
3. Moved some static functions to CvCaptureCAM_V4L.
4. Removed unused defines.
* cap_v4l:
1. Thread-safe now
* cap_v4l:
1. Fixed thread-safe destructor
2. Fixed FPS setting
* Missing break
* Removed thread-safety
* cap_v4l:
1. Reverted converting parameters to/from [0,1] by default, for backward compatibility.
2. Added a setting to turn off compatibility mode: set CV_CAP_PROP_MODE to 65536.
3. Moved most static functions to CvCaptureCAM_V4L.
4. Refactored icvRetrieveFrameCAM_V4L and the use of the frame_allocated flag.
* cap_v4l:
1. Added conversion to RGB from NV12, NV21
2. Refactoring: removed wrappers for known format conversions.
* Added `CAP_PROP_CHANNEL` to the enum VideoCaptureProperties.
CAP_V4L migrated to use VideoCaptureProperties.
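A short sketch of selecting the input channel through the new property (device index and channel number are illustrative):

#include <opencv2/videoio.hpp>

int main()
{
    cv::VideoCapture cap(0, cv::CAP_V4L2);
    cap.set(cv::CAP_PROP_CHANNEL, 1);  // -1 (default) leaves the input channel unchanged
    return 0;
}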
* 1. Update comments.
2. Environment variable `OPENCV_VIDEOIO_V4L_RANGE_NORMALIZED` for setting the default backward-compatibility mode (see the sketch after this list).
3. Revert getting `CAP_PROP_MODE` as a fourcc code in backward-compatibility mode.
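A hedged sketch of enabling the backward-compatible normalized ranges via the environment variable described above; the variable must be set before the capture is opened, and the property used here is only an example:

#include <cstdlib>
#include <opencv2/videoio.hpp>

int main()
{
    setenv("OPENCV_VIDEOIO_V4L_RANGE_NORMALIZED", "1", 1);  // request [0,1]-normalized property ranges
    cv::VideoCapture cap(0, cv::CAP_V4L2);
    double brightness = cap.get(cv::CAP_PROP_BRIGHTNESS);   // now reported in [0,1]
    (void)brightness;
    return 0;
}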
* videoio: update cap_v4l - compatibilityMode => normalizePropRange
* videoio(test): V4L2 MJPEG test
`v4l2-ctl --list-formats` should list an 'MJPG' entry
* videoio: fix buffer initialization
to avoid "munmap: Invalid argument" messages
* Update videoio.hpp
add a VideoCaptureProperties entry for the crossbar input pin setting
* Update cap_dshow.cpp
Some capture cards, such as the AVerMedia CV710, use SerialDigital as the input pin and therefore could not work.
Added new PhysicalConnectorType enumeration values PhysConn_Video_YRYBY and PhysConn_Video_SerialDigital to support them.
Also added a new property, CAP_CROSSBAR_INPIN_TYPE, to set the crossbar input pin type, which is used in videoInput::start(int deviceID, videoDevice *VD):
    if(VD->useCrossbar)
    {
        DebugPrintOut("SETUP: Checking crossbar\n");
        routeCrossbar(&VD->pCaptureGraph, &VD->pVideoInputFilter, VD->connection, CAPTURE_MODE);
    }
Finally, fixed an issue in setSizeAndSubtype by adding:
    pVih->rcSource.top = pVih->rcSource.left = pVih->rcTarget.top = pVih->rcTarget.left = 0;
    pVih->rcSource.right = pVih->rcTarget.right = attemptWidth;
    pVih->rcSource.bottom = pVih->rcTarget.bottom = attemptHeight;
Without this code, rcSource and rcTarget keep the default resolution, causing hr = VD->streamConf->SetFormat(VD->pAmMediaType) to fail because no suitable MediaType can be found.
Tested with Python 3 and MFC (AVerMedia CV710)
Python 3 code:
import cv2
print("test cv")
cap = cv2.VideoCapture(0)
cap.set(5, 60)      # CAP_PROP_FPS
cap.set(3, 1920)    # CAP_PROP_FRAME_WIDTH
cap.set(4, 1080)    # CAP_PROP_FRAME_HEIGHT
cap.set(31, 6)      # CAP_CROSSBAR_INPIN_TYPE = 6 (PhysConn_Video_SerialDigital)
ret, img = cap.read()
cv2.namedWindow("cap", cv2.WINDOW_NORMAL)
cv2.resizeWindow("cap", 960, 640)
while True:
    ret, img = cap.read()
    if not ret:
        continue
    cv2.imshow("cap", img)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
cap.release()
cv2.destroyAllWindows()
MFC code:
void CcvtestDlg::OnBnClickedButton1()
{
    VideoCapture cap(0);
    cap.set(CAP_PROP_FRAME_WIDTH, 1920);
    cap.set(CAP_PROP_FRAME_HEIGHT, 1080);
    cap.set(CAP_CROSSBAR_INPIN_TYPE, 6);  // 6 == PhysConn_Video_SerialDigital
    Mat img;
    namedWindow("test", WINDOW_NORMAL);
    resizeWindow("test", 960, 640);
    while (1)
    {
        if (cap.read(img))
        {
            imshow("test", img);
            if ('q' == waitKey(1))
                break;
        }
    }
    destroyAllWindows();
    cap.release();
}
* Update cap_dshow.cpp
* Update videoio.hpp
move the enum value CAP_CROSSBAR_INPIN_TYPE to the end of the list
* Update videoio.hpp
* Update cap_dshow.cpp
removed trailing whitespace
* Update test_camera.cpp
Add a test for capture devices that use PhysConn_Video_SerialDigital as the crossbar input pin
* Update test_camera.cpp
Corrected a misunderstanding about how to add a test case.
The `codec_tag` is only available when opening a file from disk. If the `AVStream` is a network stream, the `fourcc` must be obtained from `codec_id` instead. I have tested the following scenarios:
1) Open a `.mp4` file and verify that `codec_tag` is returned (old behavior)
2) Open an `rtsp` stream and verify that `codec_fourcc` is returned (tested with MJPEG, H.264 and H.265 streams)
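A small sketch of checking the reported FOURCC on a network stream (the URL is a placeholder; with this change the value is derived from codec_id rather than codec_tag):

#include <opencv2/videoio.hpp>
#include <cstdio>

int main()
{
    cv::VideoCapture cap("rtsp://example.org/stream", cv::CAP_FFMPEG);  // placeholder URL
    int fcc = static_cast<int>(cap.get(cv::CAP_PROP_FOURCC));
    char code[5] = { char(fcc & 255), char((fcc >> 8) & 255),
                     char((fcc >> 16) & 255), char((fcc >> 24) & 255), 0 };
    std::printf("FOURCC: %s\n", code);  // e.g. "H264", "HEVC" or "MJPG"
    return 0;
}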
* possibly a typo fix
* remove an identical branch, possibly a paste error
* add parentheses around macro parameter
* simplify if condition
* check for malloc failure
* change the condition of branch removed by commit 3041502861
Currently the private control enumeration is stopped only when QUERYCTRL
returns -EINVAL. It is possible, however, that other errors occur.
One particular case is a v4l2 device that doesn't support any controls
and doesn't implement the QUERYCTRL ioctl. In that case the v4l2
framework returns -ENOTTY and the current control enumeration
goes into an endless loop.
To fix this, change the control enumeration stop condition: if any error
occurs, end the enumeration (see the sketch below).
Signed-off-by: Todor Tomov <todor.tomov@linaro.org>
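A rough sketch of the adjusted enumeration loop described above (not the exact OpenCV code; assumes fd is an already opened V4L2 device):

#include <linux/videodev2.h>
#include <sys/ioctl.h>

static void enumeratePrivateControls(int fd)
{
    struct v4l2_queryctrl query = {};
    for (unsigned id = V4L2_CID_PRIVATE_BASE; ; ++id)
    {
        query.id = id;
        if (ioctl(fd, VIDIOC_QUERYCTRL, &query) == -1)
            break;  // stop on any error (EINVAL, ENOTTY, ...), not just EINVAL
        if (query.flags & V4L2_CTRL_FLAG_DISABLED)
            continue;  // skip disabled controls
        // ... record query.minimum / query.maximum for the control ...
    }
}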
* Add HPX backend for the OpenCV parallel implementation
Adds an HPX backend for cv::parallel_for_() calls, respecting the nstripes chunking parameter. C++ code for the backend is added to modules/core/parallel.cpp, along with the necessary changes to the CMake files.
The backend can operate in two variants (selectable by the CMake build option WITH_HPX_STARTSTOP): hpx (runtime always on) and hpx_startstop (start and stop the backend for each cv::parallel_for_() call); a usage sketch follows below.
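For reference, a minimal cv::parallel_for_() call of the kind the backend executes; the nstripes argument only hints how the range is chunked, and the computation here is illustrative:

#include <opencv2/core.hpp>
#include <vector>

int main()
{
    std::vector<float> data(1 << 20, 1.f);
    cv::parallel_for_(cv::Range(0, (int)data.size()),
        [&](const cv::Range& r)
        {
            for (int i = r.start; i < r.end; ++i)
                data[i] *= 2.f;  // per-stripe work
        },
        16.0 /* nstripes */);
    return 0;
}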
* WIP: Conditionally include hpx_main.hpp in the core module tests
The header hpx_main.hpp is included in both core/perf/perf_main.cpp and core/test/test_main.cpp.
The CMake changes for linking the HPX library to the above-mentioned test executables are proposed but still have issues.
* Add conditional inclusion of hpx_main.hpp to C++ CPU modules
* Remove start/stop version of hpx backend
the static variables would cause race conditions in multithreaded scenarios.
Signed-off-by: Teng Yiliang <ylteng@outlook.com>
Signed-off-by: Teng Yiliang <yiliang.teng@weimob.com>
Reading from a camera synchronously while doing heavy frame processing
has bad effects (huge frame latency, processing frames from the past).
In general there is no way to process every frame, so some frames will be dropped.
Allow preventive frame dropping to reduce the lag of processed frames.
This mode applies to cameras only (opened by 'index').
this enables the usage of current sensors, while dropping support for
legacy devices, see:
https://github.com/IntelRealSense/librealsense#overview
Given limited resources, and that the legacy sensors were not that
great, I think we should focus on v2.
Using codec->time_base to specify muxer settings is deprecated.
Resolves an issue with the FPS value for AVI files with FFmpeg 4.0 (see the sketch after the commit list).
Related FFmpeg commits:
- 194be1f43e
- 91736025b2
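A hedged sketch of the direction of the change (assuming st is the output AVStream being configured and fps the requested frame rate; this is not the exact OpenCV patch):

extern "C" {
#include <libavformat/avformat.h>
}

static void configure_stream_timing(AVStream* st, int fps)
{
    // timing is set on the stream itself, not via the deprecated codec->time_base
    st->time_base      = av_make_q(1, fps);
    st->avg_frame_rate = av_make_q(fps, 1);
}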
* Fix CV_Asserts with negation of strings
{!"string"} causes some compilers to throw a warning.
The value of the string is not that important -- it's only for printing
the assertion message.
Replace these calls with:
CV_Error(Error::StsError, "string")
to suppress the warning.
* remove unnecessary 'break' after CV_Error()
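A minimal before/after illustration of the pattern (the message text and helper function are arbitrary):

#include <opencv2/core.hpp>

void check(bool ok)
{
    // before: CV_Assert(!"unsupported mode");  // constant string in the condition triggers a warning
    // after:
    if (!ok)
        CV_Error(cv::Error::StsError, "unsupported mode");
}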
Update for MSMF-based VideoCapture and VideoWriter (#11092)
* MSMF based VideoCapture updated to handle video stream formats different from RGB24
* MSMF based VideoWriter updated to handle video frame top-bottom line ordering regardless of output format
* Fixed race condition in MSMF based VideoCapture
* Refactored MSMF based VideoCapture and VideoWriter
* Disabled frame rate estimation for MP43
* Removed test for unsupported avi container from MSMF VideoWriter tests
* Enabled MSMF-based VideoIO by default
- removed tr1 usage (dropped in C++17)
- moved includes of vector/map/iostream/limits into ts.hpp
- require opencv_test + anonymous namespace (added compile check; see the layout sketch after this list)
- fixed norm() usage (must be cvtest::norm for checks) and other conflicting functions
- added missing license headers
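A minimal sketch of the resulting test layout (the test name and data are placeholders; test_precomp.hpp is the module's usual precompiled test header):

#include "test_precomp.hpp"

namespace opencv_test { namespace {

TEST(Videoio_Example, roundtrip)
{
    cv::Mat a(4, 4, CV_8UC1, cv::Scalar(1)), b = a.clone();
    EXPECT_LE(cvtest::norm(a, b, cv::NORM_INF), 0.0);  // cvtest::norm, not cv::norm
}

}}  // namespace opencv_test::<anonymous>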
v2: fix stray trailing whitespace
v3: only allow up to one property window at a time
Opening multiple windows in the same process will just confuse
the camera filter or outright crash.
Suggested-by: @alalek
Also return whether a dialog was opened at the time.
Fix build with FFmpeg master. Some deprecated APIs have been removed. (#10011)
* Fix build with FFmpeg master.
* ffmpeg: update AVFMT_RAWPICTURE support removal
The entire AssetsLibrary framework has been deprecated since iOS 8.0. The
camera example code can use UIKit to save videos to the camera roll
instead, which avoids linking against PhotoKit and prevents increasing
the iOS deployment target.
* Use an environment variable to store options, parsed by av_dict_parse_string(ENV{OPENCV_FFMPEG_CAPTURE_OPTIONS}, ";", "|"); see the usage sketch after this list
* Add the missing mandatory flags parameter
* Guard against a missing function via the LIBAVUTIL version
* Code review fixes
Reverted a copy/paste mistake
Proper version checking for LIBAVUTIL_BUILD
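A hedged usage sketch of the environment variable: key and value are separated by ';', pairs by '|'; the option names below are ordinary FFmpeg/avformat options chosen only as an example, and the URL is a placeholder:

#include <cstdlib>
#include <opencv2/videoio.hpp>

int main()
{
    // must be set before the FFmpeg-backed capture is opened
    setenv("OPENCV_FFMPEG_CAPTURE_OPTIONS", "rtsp_transport;tcp|max_delay;500000", 1);
    cv::VideoCapture cap("rtsp://example.org/stream", cv::CAP_FFMPEG);
    return 0;
}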
Add gstreamer capture capability for some YUV formats (#8914)
* Add gstreamer capture capability for some YUV formats (gstreamer-1.0 only)
* avoid a cross-initialization error
* add a check for whether the pipeline is manualpipeline, for compatibility.
general:
- all iterative tests have been replaced with parameterized tests
- old-style try..catch tests have been modified to use EXPECT_/ASSERT_ gtest macros
- added temporary files cleanup
- modified MatComparator error message formatting
imgcodecs:
- test_grfmt.cpp split to test_jpg.cpp, test_png.cpp, test_tiff.cpp, etc.
videoio:
- added public HAVE_VIDEO_INPUT, HAVE_VIDEO_OUTPUT definitions to cvconfig.h
- built-in MotionJPEG codec could not be tested on some platforms (read_write test was disabled if ffmpeg is off, encoding/decoding was handled by ffmpeg otherwise).
- image-related tests moved to imgcodecs (Videoio_Image)
- several property get/set tests have been combined into one
- added MotionJPEG test video to opencv_extra
Aravis: Use of std::fabs, added support for 16-bit mono files and exposure compensation (#8711)
* Use of std::fabs, added support for 16-bit mono files
* Correction in priority2 stage & adding exposure compensation
CMake: Building Dynamic Framework on iOS (#8009)
* Updated python script with dynamic parameter
Updated python script to build static library by default
Updated python script to include bitcode flag
Added bitcode flag to c flags
Fixed directories and targets with static
Bitcode parameter fixed
Fixed script for static library
Fixed parameters in build function
Updated cmake common toolchain
Added changes to OpenCV Utils
Updates to cmake
Added cache internal
Updates to common toolchain
Fixed path in framework destination and added UIKit dependency
Dynamic plist for framework
Lib version removed hardcoded value
Removed trailing whitespace in toolchain
* Removed trailing whitespace
* Fixed typo in comment
* Renamed bitcode variable to bitcodedisabled
* Fixed target device family
Add support for image save parameters in cv::VideoWriter.
With this change, the same parameters as for cv::imwrite() can be set via
cv::VideoWriter::set(cv::IMWRITE_*, value).
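A small sketch following the description above; backend support for the property is assumed (e.g. the built-in MotionJPEG writer) and the values are illustrative:

#include <opencv2/videoio.hpp>
#include <opencv2/imgcodecs.hpp>

int main()
{
    cv::VideoWriter writer("out.avi",
                           cv::VideoWriter::fourcc('M', 'J', 'P', 'G'),
                           30.0, cv::Size(640, 480));
    writer.set(cv::IMWRITE_JPEG_QUALITY, 95);  // same id/value pair as for cv::imwrite()
    return 0;
}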
Aravis several updates
* Fix addressing a camera with id=0
* Added Aravis buffer property control & status
* Modified the autoexposure algorithm, read the frame ID from Aravis + new properties
* Change of macro name
* VideoCapture now returns no frame on camera disconnection
* Allow aravis-0.4 usage, proper camera object release.
Aravis SDK: Basic software-based autoexposure control
* Basic software-based autoexposure control
* Aravis autoexposure: skip frames taken while changing the exposure setup