Fixed whitespace warnings in new tutorials
parent 6d282ddf72
commit ecfd056111

@@ -144,4 +144,3 @@ proper macros in their appropriate positions. Rest is done by generator scripts.
may be exceptional cases where generator scripts cannot create the wrappers. Such functions need
to be handled manually. But most of the time, code written according to the OpenCV coding guidelines
will be automatically wrapped by the generator scripts.

@@ -65,4 +65,3 @@ Exercises

-# OpenCV samples contain an example of generating a disparity map and its 3D reconstruction. Check
stereo_match.py in OpenCV-Python samples.

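For context on that exercise: a minimal sketch of computing a disparity map with block matching, assuming a rectified pair saved as left.png/right.png (placeholder filenames). stereo_match.py itself uses StereoSGBM and turns the disparity into 3D points with cv.reprojectImageTo3D and the Q matrix from stereo rectification.

```python
import cv2 as cv
import numpy as np

# Load a rectified stereo pair (placeholder filenames).
imgL = cv.imread('left.png', cv.IMREAD_GRAYSCALE)
imgR = cv.imread('right.png', cv.IMREAD_GRAYSCALE)

# Block-matching stereo; numDisparities must be a multiple of 16.
stereo = cv.StereoBM_create(numDisparities=16, blockSize=15)
disparity = stereo.compute(imgL, imgR)

# Scale the fixed-point disparity into 0..255 just for display.
disp_vis = cv.normalize(disparity, None, 0, 255, cv.NORM_MINMAX).astype(np.uint8)
cv.imshow('disparity', disp_vis)
cv.waitKey(0)
```
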
@@ -172,4 +172,3 @@ Exercises
2. Fundamental Matrix estimation is sensitive to the quality of matches, outliers, etc. It becomes worse
when all the selected matches lie on the same plane. [Check this
discussion](http://answers.opencv.org/question/18125/epilines-not-correct/).

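A hedged sketch of the robust estimation this exercise is about, assuming two overlapping views saved as left.jpg/right.jpg (placeholder filenames); ORB matching stands in for whichever matcher the reader prefers.

```python
import cv2 as cv
import numpy as np

img1 = cv.imread('left.jpg', cv.IMREAD_GRAYSCALE)
img2 = cv.imread('right.jpg', cv.IMREAD_GRAYSCALE)

# ORB keypoints + brute-force Hamming matching.
orb = cv.ORB_create()
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)
matches = cv.BFMatcher(cv.NORM_HAMMING, crossCheck=True).match(des1, des2)

pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

# RANSAC down-weights bad matches; the mask marks the surviving inliers.
F, mask = cv.findFundamentalMat(pts1, pts2, cv.FM_RANSAC, 3.0, 0.99)
pts1_in = pts1[mask.ravel() == 1]
pts2_in = pts2[mask.ravel() == 1]
print('inliers:', len(pts1_in), 'of', len(pts1))
```
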
@@ -80,4 +80,3 @@ Additional Resources
Independent Elementary Features", 11th European Conference on Computer Vision (ECCV), Heraklion,
Crete. LNCS Springer, September 2010.
2. LSH (Locality Sensitive Hashing) on Wikipedia.

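Since the resource list mentions LSH: a sketch of matching binary ORB/BRIEF descriptors through FLANN's LSH index, with placeholder filenames. The table_number/key_size/multi_probe_level values are commonly used starting points, not mandated ones.

```python
import cv2 as cv

img1 = cv.imread('query.jpg', cv.IMREAD_GRAYSCALE)   # placeholder filenames
img2 = cv.imread('train.jpg', cv.IMREAD_GRAYSCALE)

orb = cv.ORB_create()
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# FLANN LSH index works on binary descriptors such as ORB/BRIEF.
index_params = dict(algorithm=6,          # FLANN_INDEX_LSH
                    table_number=6, key_size=12, multi_probe_level=1)
flann = cv.FlannBasedMatcher(index_params, dict(checks=50))
matches = flann.knnMatch(des1, des2, k=2)

# Lowe's ratio test; with LSH a pair may contain fewer than two neighbours.
good = []
for pair in matches:
    if len(pair) == 2 and pair[0].distance < 0.7 * pair[1].distance:
        good.append(pair[0])
```
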
@@ -109,4 +109,3 @@ Exercises

-# In our last example, we drew a filled rectangle. Modify the code to draw an unfilled
rectangle.

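A minimal sketch of that exercise: the same rectangle call as in the drawing tutorial, but with a positive thickness instead of a filled one.

```python
import numpy as np
import cv2 as cv

img = np.zeros((512, 512, 3), np.uint8)

# thickness=-1 (or cv.FILLED) fills the rectangle;
# a positive thickness draws only the border.
cv.rectangle(img, (384, 0), (510, 128), (0, 255, 0), thickness=2)

cv.imshow('rectangle', img)
cv.waitKey(0)
cv.destroyAllWindows()
```
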
@@ -72,4 +72,3 @@ Exercises

-# Create a Paint application with adjustable colors and brush radius using trackbars. For drawing,
refer to the previous tutorial on mouse handling.

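One possible shape of that Paint exercise, sketched with placeholder window and trackbar names: trackbars supply the colour channels and brush radius, and the mouse callback paints while the left button is held.

```python
import numpy as np
import cv2 as cv

drawing = False

def nothing(x):
    pass

def draw(event, x, y, flags, param):
    global drawing
    if event == cv.EVENT_LBUTTONDOWN:
        drawing = True
    elif event == cv.EVENT_LBUTTONUP:
        drawing = False
    if drawing and event == cv.EVENT_MOUSEMOVE:
        # Read colour and radius from the trackbars on every stroke.
        b = cv.getTrackbarPos('B', 'paint')
        g = cv.getTrackbarPos('G', 'paint')
        r = cv.getTrackbarPos('R', 'paint')
        radius = cv.getTrackbarPos('radius', 'paint')
        cv.circle(img, (x, y), max(radius, 1), (b, g, r), -1)

img = np.zeros((512, 512, 3), np.uint8)
cv.namedWindow('paint')
cv.setMouseCallback('paint', draw)
for name in ('B', 'G', 'R'):
    cv.createTrackbar(name, 'paint', 0, 255, nothing)
cv.createTrackbar('radius', 'paint', 5, 50, nothing)

while True:
    cv.imshow('paint', img)
    if cv.waitKey(1) & 0xFF == 27:   # Esc quits
        break
cv.destroyAllWindows()
```
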
@@ -135,4 +135,3 @@ Exercises

-# OpenCV samples contain digits.py, which applies a slight improvement of the above method to get
an improved result. It also contains the reference. Check it and understand it.

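One of the refinements digits.py applies is deskewing each digit cell from its image moments before classification; a sketch of that step, assuming the 20x20 cells used in the tutorial.

```python
import cv2 as cv
import numpy as np

SZ = 20  # digit cell size used in the OpenCV digits sample

def deskew(img):
    """Straighten a digit using its second-order image moments."""
    m = cv.moments(img)
    if abs(m['mu02']) < 1e-2:
        return img.copy()          # already upright enough
    skew = m['mu11'] / m['mu02']
    M = np.float32([[1, skew, -0.5 * SZ * skew], [0, 1, 0]])
    return cv.warpAffine(img, M, (SZ, SZ),
                         flags=cv.WARP_INVERSE_MAP | cv.INTER_LINEAR)
```
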
@@ -87,4 +87,3 @@ Exercises
Adobe Photoshop. On further searching, I found that the same technique is already available in
GIMP under a different name, "Resynthesizer" (you need to install a separate plugin). I am sure you
will enjoy the technique.

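The exercise points at exemplar-based filling in Photoshop/GIMP; for comparison, a sketch of the inpainting OpenCV itself exposes, with placeholder image and mask filenames.

```python
import cv2 as cv

img = cv.imread('damaged.jpg')                      # placeholder filenames
mask = cv.imread('mask.png', cv.IMREAD_GRAYSCALE)   # non-zero where pixels are damaged

# INPAINT_TELEA and INPAINT_NS are the two algorithms behind cv.inpaint;
# 3 is the neighbourhood radius used around each pixel being restored.
restored = cv.inpaint(img, mask, 3, cv.INPAINT_TELEA)
cv.imwrite('restored.jpg', restored)
```
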
@@ -84,4 +84,3 @@ Additional Resources
3. [Numpy Examples List](http://wiki.scipy.org/Numpy_Example_List)
4. [OpenCV Documentation](http://docs.opencv.org/)
5. [OpenCV Forum](http://answers.opencv.org/questions/)

@@ -223,4 +223,3 @@ Exercises

-# Check the code in samples/python2/lk_track.py. Try to understand the code.
2. Check the code in samples/python2/opt_flow.py. Try to understand the code.

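A compressed sketch of what those samples do, assuming a webcam at index 0: lk_track.py tracks sparse Lucas-Kanade features, while opt_flow.py uses the dense cv.calcOpticalFlowFarneback variant instead.

```python
import cv2 as cv

cap = cv.VideoCapture(0)                 # assumption: a webcam is available
ret, prev = cap.read()
prev_gray = cv.cvtColor(prev, cv.COLOR_BGR2GRAY)
p0 = cv.goodFeaturesToTrack(prev_gray, maxCorners=100,
                            qualityLevel=0.3, minDistance=7)

while True:
    ret, frame = cap.read()
    if not ret or p0 is None or len(p0) == 0:
        break
    gray = cv.cvtColor(frame, cv.COLOR_BGR2GRAY)
    # Sparse pyramidal Lucas-Kanade flow, as in lk_track.py.
    p1, st, err = cv.calcOpticalFlowPyrLK(prev_gray, gray, p0, None)
    good = p1[st.flatten() == 1]
    for pt in good:
        x, y = pt.ravel()
        cv.circle(frame, (int(x), int(y)), 3, (0, 255, 0), -1)
    cv.imshow('tracks', frame)
    if cv.waitKey(30) & 0xFF == 27:
        break
    prev_gray, p0 = gray, good.reshape(-1, 1, 2)

cap.release()
cv.destroyAllWindows()
```
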
@@ -183,4 +183,3 @@ Exercises

-# OpenCV comes with a Python sample for an interactive demo of camshift. Use it, hack it, understand
it.

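The core loop of such a camshift demo looks roughly like this; the video filename and the initial tracking window are placeholders (the interactive sample lets you select the window with the mouse).

```python
import cv2 as cv
import numpy as np

cap = cv.VideoCapture('slow.mp4')        # placeholder video file
ret, frame = cap.read()

# Initial window around the object to track (hand-picked placeholder values).
x, y, w, h = 300, 200, 100, 50
track_window = (x, y, w, h)

# Hue histogram of the region of interest, used as the back-projection model.
roi = frame[y:y + h, x:x + w]
hsv_roi = cv.cvtColor(roi, cv.COLOR_BGR2HSV)
roi_hist = cv.calcHist([hsv_roi], [0], None, [180], [0, 180])
cv.normalize(roi_hist, roi_hist, 0, 255, cv.NORM_MINMAX)

term_crit = (cv.TERM_CRITERIA_EPS | cv.TERM_CRITERIA_COUNT, 10, 1)

while True:
    ret, frame = cap.read()
    if not ret:
        break
    hsv = cv.cvtColor(frame, cv.COLOR_BGR2HSV)
    dst = cv.calcBackProject([hsv], [0], roi_hist, [0, 180], 1)
    # CamShift adapts the window size and orientation, unlike plain meanShift.
    rot_box, track_window = cv.CamShift(dst, track_window, term_crit)
    pts = cv.boxPoints(rot_box).astype(np.int32)
    cv.polylines(frame, [pts], True, (0, 255, 0), 2)
    cv.imshow('camshift', frame)
    if cv.waitKey(30) & 0xFF == 27:
        break
cap.release()
cv.destroyAllWindows()
```
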
@@ -52,4 +52,3 @@ image.
opencv/samples/cpp/calibration.cpp, function computeReprojectionErrors).

Question: how to calculate the distance from the camera origin to any of the corners?

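One way to answer that question, assuming objp (board corner coordinates in object units), corners (the detected image points) and mtx/dist (from calibrateCamera) are already available from the calibration step: move a corner's object point into the camera frame and take its Euclidean norm.

```python
import cv2 as cv
import numpy as np

# objp, corners, mtx, dist are assumed to come from the calibration pipeline.
ret, rvec, tvec = cv.solvePnP(objp, corners, mtx, dist)

# Rotate + translate a corner's object point into the camera frame;
# its norm is the distance from the camera origin to that corner.
R, _ = cv.Rodrigues(rvec)
corner_cam = R @ objp[0].reshape(3, 1) + tvec
print('distance to the first corner:', np.linalg.norm(corner_cam))
```
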
@@ -241,4 +241,3 @@ Result
Compiling and running your program should give you a result like this:

![](images/Drawing_1_Tutorial_Result_0.png)

@@ -50,5 +50,3 @@ known planar objects in scenes.
Mat points1Projected; perspectiveTransform(Mat(points1), points1Projected, H);

- Use drawMatches for drawing inliers.

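A Python equivalent of that C++ hint, heavily hedged: img1/img2, kp1/kp2 and matches are assumed to come from an earlier feature-matching step, as in the feature homography tutorial; the RANSAC mask then selects which matches drawMatches draws.

```python
import cv2 as cv
import numpy as np

# kp1, kp2, matches, img1, img2 are assumed to exist from a matching step.
src_pts = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
dst_pts = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

H, mask = cv.findHomography(src_pts, dst_pts, cv.RANSAC, 3.0)

# Project points1 through H, mirroring the C++ perspectiveTransform call above.
projected = cv.perspectiveTransform(src_pts, H)

# Draw only the RANSAC inliers.
draw_params = dict(matchColor=(0, 255, 0), singlePointColor=None,
                   matchesMask=mask.ravel().tolist(), flags=2)
out = cv.drawMatches(img1, kp1, img2, kp2, matches, None, **draw_params)
```
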
@@ -137,5 +137,3 @@ Result
-# And here is the result for the detected object (highlighted in green)

![](images/Feature_Homography_Result.jpg)

@@ -127,4 +127,3 @@ Result
Here is the result:

![](images/Corner_Subpixeles_Result.jpg)

@@ -33,4 +33,3 @@ Result
![](images/My_Harris_corner_detector_Result.jpg)

![](images/My_Shi_Tomasi_corner_detector_Result.jpg)

@@ -112,4 +112,3 @@ Result
------

![](images/Feature_Detection_Result_a.jpg)

@@ -206,4 +206,3 @@ The original image:
The detected corners are surrounded by a small black circle

![](images/Harris_Detector_Result.jpg)

@@ -74,4 +74,3 @@ As always, we would be happy to hear your comments and receive your contribution
- @subpage tutorial_table_of_content_viz

These tutorials show how to use the Viz module effectively.

@@ -1,8 +1,5 @@
HighGUI {#tutorial_ug_highgui}
=======

Using Kinect and other OpenNI compatible depth sensors
------------------------------------------------------
Using Kinect and other OpenNI compatible depth sensors {#tutorial_ug_highgui}
======================================================

Depth sensors compatible with OpenNI (Kinect, XtionPRO, ...) are supported through VideoCapture
class. Depth map, RGB image and some other formats of output can be retrieved by using familiar

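The "familiar" interface that sentence refers to is VideoCapture's grab()/retrieve(); a minimal Python sketch, assuming an OpenNI-compatible device is attached (the user guide itself shows the C++ calls).

```python
import cv2 as cv

# Open the first OpenNI-compatible device (Kinect, XtionPRO, ...).
cap = cv.VideoCapture(cv.CAP_OPENNI)

while cap.grab():
    # retrieve() takes a channel id selecting the output format.
    ok_d, depth = cap.retrieve(flag=cv.CAP_OPENNI_DEPTH_MAP)
    ok_c, bgr = cap.retrieve(flag=cv.CAP_OPENNI_BGR_IMAGE)
    if ok_d:
        cv.imshow('depth', (depth / 16).astype('uint8'))  # crude scaling for display
    if ok_c:
        cv.imshow('rgb', bgr)
    if cv.waitKey(30) & 0xFF == 27:
        break
cap.release()
```
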
@@ -1,8 +1,5 @@
HighGUI {#tutorial_ug_intelperc}
=======

Using Creative Senz3D and other Intel Perceptual Computing SDK compatible depth sensors
---------------------------------------------------------------------------------------
Using Creative Senz3D and other Intel Perceptual Computing SDK compatible depth sensors {#tutorial_ug_intelperc}
=======================================================================================

Depth sensors compatible with Intel Perceptual Computing SDK are supported through VideoCapture
class. Depth map, RGB image and some other formats of output can be retrieved by using familiar

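The same grab()/retrieve() pattern applies here, with the CAP_INTELPERC constants; a short sketch assuming such a sensor is attached.

```python
import cv2 as cv

cap = cv.VideoCapture(cv.CAP_INTELPERC)

while cap.grab():
    ok, depth = cap.retrieve(flag=cv.CAP_INTELPERC_DEPTH_MAP)
    if ok:
        # Scale the 16-bit depth map down for display only.
        cv.imshow('depth', cv.convertScaleAbs(depth, alpha=0.03))
    if cv.waitKey(30) & 0xFF == 27:
        break
cap.release()
```
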