Merge pull request #3207 from jamoque:more-better-fixes

Vadim Pisarevsky 2014-09-14 18:41:05 +00:00
commit 8fe18fefdd
11 changed files with 11 additions and 11 deletions

View File

@@ -48,7 +48,7 @@ Below code snippet shows a simple procedure to create disparity map.
plt.imshow(disparity,'gray')
plt.show()
-Below image contains the original image (left) and its disparity map (right). As you can see, result is contaminated with high degree of noise. By adjusting the values of numDisparities and blockSize, you can get more better result.
+Below image contains the original image (left) and its disparity map (right). As you can see, result is contaminated with high degree of noise. By adjusting the values of numDisparities and blockSize, you can get a better result.
.. image:: images/disparity_map.jpg
:alt: Disparity Map
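
A minimal sketch of that tuning step, assuming OpenCV 3's ``cv2.StereoBM_create`` (older 2.4 builds expose a different constructor); file names and parameter values are illustrative, not tuned for any particular stereo pair::

    import cv2
    from matplotlib import pyplot as plt

    imgL = cv2.imread('tsukuba_l.png', 0)   # left view, grayscale
    imgR = cv2.imread('tsukuba_r.png', 0)   # right view, grayscale

    # numDisparities must be a multiple of 16; blockSize must be odd.
    # Larger values give a smoother but coarser disparity map.
    stereo = cv2.StereoBM_create(numDisparities=32, blockSize=21)
    disparity = stereo.compute(imgL, imgR)

    plt.imshow(disparity, 'gray')
    plt.show()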

View File

@@ -49,7 +49,7 @@ You can modify the pixel values the same way.
.. warning:: Numpy is a optimized library for fast array calculations. So simply accessing each and every pixel values and modifying it will be very slow and it is discouraged.
-.. note:: Above mentioned method is normally used for selecting a region of array, say first 5 rows and last 3 columns like that. For individual pixel access, Numpy array methods, ``array.item()`` and ``array.itemset()`` is considered to be more better. But it always returns a scalar. So if you want to access all B,G,R values, you need to call ``array.item()`` separately for all.
+.. note:: Above mentioned method is normally used for selecting a region of array, say first 5 rows and last 3 columns like that. For individual pixel access, Numpy array methods, ``array.item()`` and ``array.itemset()`` is considered to be better. But it always returns a scalar. So if you want to access all B,G,R values, you need to call ``array.item()`` separately for all.
Better pixel accessing and editing method :
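
A minimal sketch of the ``item``/``itemset`` access pattern (image path hypothetical)::

    import cv2

    img = cv2.imread('messi5.jpg')      # any BGR image

    # item() takes the full (row, col, channel) index and returns a scalar
    print(img.item(10, 10, 2))          # R value of pixel (10,10)

    # itemset() writes a single value in place
    img.itemset((10, 10, 2), 100)
    print(img.item(10, 10, 2))          # now 100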

View File

@@ -75,7 +75,7 @@ Measuring Performance in IPython
Sometimes you may need to compare the performance of two similar operations. IPython gives you a magic command ``%timeit`` to perform this. It runs the code several times to get more accurate results. Once again, they are suitable to measure single line codes.
-For example, do you know which of the following addition operation is more better, ``x = 5; y = x**2``, ``x = 5; y = x*x``, ``x = np.uint8([5]); y = x*x`` or ``y = np.square(x)`` ? We will find it with %timeit in IPython shell.
+For example, do you know which of the following addition operation is better, ``x = 5; y = x**2``, ``x = 5; y = x*x``, ``x = np.uint8([5]); y = x*x`` or ``y = np.square(x)`` ? We will find it with %timeit in IPython shell.
::
In [10]: x = 5
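
The same comparison can be scripted outside IPython with Python's built-in ``timeit`` module; a rough sketch (absolute timings vary by machine, only the relative ordering matters)::

    import timeit

    setup = 'import numpy as np; x = 5; xa = np.uint8([5])'
    for stmt in ('y = x**2', 'y = x*x', 'y = xa*xa', 'y = np.square(xa)'):
        t = timeit.timeit(stmt, setup=setup, number=100000)
        print(stmt, '->', t)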

View File

@@ -29,7 +29,7 @@ Image is very simple. At the top of image, six small image patches are given. Qu
A and B are flat surfaces, and they are spread in a lot of area. It is difficult to find the exact location of these patches.
-C and D are much more simpler. They are edges of the building. You can find an approximate location, but exact location is still difficult. It is because, along the edge, it is same everywhere. Normal to the edge, it is different. So edge is much more better feature compared to flat area, but not good enough (It is good in jigsaw puzzle for comparing continuity of edges).
+C and D are much more simpler. They are edges of the building. You can find an approximate location, but exact location is still difficult. It is because, along the edge, it is same everywhere. Normal to the edge, it is different. So edge is a much better feature compared to flat area, but not good enough (It is good in jigsaw puzzle for comparing continuity of edges).
Finally, E and F are some corners of the building. And they can be easily found out. Because at corners, wherever you move this patch, it will look different. So they can be considered as a good feature. So now we move into more simpler (and widely used image) for better understanding.
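
That intuition is exactly what corner detectors formalize; a minimal sketch using Harris corners (file name and thresholds are illustrative)::

    import cv2
    import numpy as np

    img = cv2.imread('building.jpg')
    gray = np.float32(cv2.cvtColor(img, cv2.COLOR_BGR2GRAY))

    # neighbourhood size 2, Sobel aperture 3, Harris k = 0.04
    dst = cv2.cornerHarris(gray, 2, 3, 0.04)

    # mark strong corner responses in red
    img[dst > 0.01 * dst.max()] = [0, 0, 255]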

View File

@@ -110,7 +110,7 @@ Now I want to apply U-SURF, so that it won't find the orientation.
>>> plt.imshow(img2),plt.show()
-See the results below. All the orientations are shown in same direction. It is more faster than previous. If you are working on cases where orientation is not a problem (like panorama stitching) etc, this is more better.
+See the results below. All the orientations are shown in same direction. It is more faster than previous. If you are working on cases where orientation is not a problem (like panorama stitching) etc, this is better.
.. image:: images/surf_kp2.jpg
:alt: Upright-SURF
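
For context, upright mode is just a flag on the detector; a sketch assuming the OpenCV 2.4-era ``cv2.SURF`` API used in this tutorial (file name hypothetical)::

    import cv2

    img = cv2.imread('fly.png', 0)       # grayscale input

    surf = cv2.SURF(400)                 # Hessian threshold 400
    surf.upright = True                  # U-SURF: skip orientation estimation

    kp, des = surf.detectAndCompute(img, None)
    img2 = cv2.drawKeypoints(img, kp, None, (255, 0, 0), 4)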

View File

@@ -69,7 +69,7 @@ To draw a polygon, first you need coordinates of vertices. Make those points int
.. Note:: If third argument is ``False``, you will get a polylines joining all the points, not a closed shape.
-.. Note:: ``cv2.polylines()`` can be used to draw multiple lines. Just create a list of all the lines you want to draw and pass it to the function. All lines will be drawn individually. It is more better and faster way to draw a group of lines than calling ``cv2.line()`` for each line.
+.. Note:: ``cv2.polylines()`` can be used to draw multiple lines. Just create a list of all the lines you want to draw and pass it to the function. All lines will be drawn individually. It is a much better and faster way to draw a group of lines than calling ``cv2.line()`` for each line.
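
A minimal sketch of the batched call (coordinates arbitrary)::

    import cv2
    import numpy as np

    img = np.zeros((512, 512, 3), np.uint8)

    # each line is its own point array; one call draws them all
    line1 = np.array([[10, 10], [200, 10]], np.int32).reshape((-1, 1, 2))
    line2 = np.array([[10, 50], [200, 50]], np.int32).reshape((-1, 1, 2))
    cv2.polylines(img, [line1, line2], False, (0, 255, 255))
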
Adding Text to Images:
------------------------

View File

@@ -48,7 +48,7 @@ Creating mouse callback function has a specific format which is same everywhere.
More Advanced Demo
===================
-Now we go for much more better application. In this, we draw either rectangles or circles (depending on the mode we select) by dragging the mouse like we do in Paint application. So our mouse callback function has two parts, one to draw rectangle and other to draw the circles. This specific example will be really helpful in creating and understanding some interactive applications like object tracking, image segmentation etc.
+Now we go for a much better application. In this, we draw either rectangles or circles (depending on the mode we select) by dragging the mouse like we do in Paint application. So our mouse callback function has two parts, one to draw rectangle and other to draw the circles. This specific example will be really helpful in creating and understanding some interactive applications like object tracking, image segmentation etc.
::
import cv2
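import numpy as np

# A minimal sketch (names hypothetical) of the two-mode callback described
# above; the full tutorial version also draws while the mouse moves.
drawing = False          # True while the left button is held down
mode = True              # True: rectangles, False: circles
ix, iy = -1, -1

def draw_shape(event, x, y, flags, param):
    global ix, iy, drawing
    if event == cv2.EVENT_LBUTTONDOWN:
        drawing = True
        ix, iy = x, y
    elif event == cv2.EVENT_LBUTTONUP:
        drawing = False
        if mode:
            cv2.rectangle(img, (ix, iy), (x, y), (0, 255, 0), -1)
        else:
            cv2.circle(img, (x, y), 5, (0, 0, 255), -1)

img = np.zeros((512, 512, 3), np.uint8)
cv2.namedWindow('image')
cv2.setMouseCallback('image', draw_shape)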

View File

@@ -64,7 +64,7 @@ The result we get is a two dimensional array of size 180x256. So we can show the
Method - 2 : Using Matplotlib
------------------------------
-We can use **matplotlib.pyplot.imshow()** function to plot 2D histogram with different color maps. It gives us much more better idea about the different pixel density. But this also, doesn't gives us idea what color is there on a first look, unless you know the Hue values of different colors. Still I prefer this method. It is simple and better.
+We can use **matplotlib.pyplot.imshow()** function to plot 2D histogram with different color maps. It gives us a much better idea about the different pixel density. But this also, doesn't gives us idea what color is there on a first look, unless you know the Hue values of different colors. Still I prefer this method. It is simple and better.
.. note:: While using this function, remember, interpolation flag should be ``nearest`` for better results.
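
A minimal sketch of that plot (image path hypothetical)::

    import cv2
    from matplotlib import pyplot as plt

    img = cv2.imread('home.jpg')
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

    # 2D histogram over Hue (180 bins) and Saturation (256 bins)
    hist = cv2.calcHist([hsv], [0, 1], None, [180, 256], [0, 180, 0, 256])

    plt.imshow(hist, interpolation='nearest')
    plt.show()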

View File

@@ -15,7 +15,7 @@ It was proposed by **Michael J. Swain , Dana H. Ballard** in their paper **Index
**What is it actually in simple words?** It is used for image segmentation or finding objects of interest in an image. In simple words, it creates an image of the same size (but single channel) as that of our input image, where each pixel corresponds to the probability of that pixel belonging to our object. In more simpler worlds, the output image will have our object of interest in more white compared to remaining part. Well, that is an intuitive explanation. (I can't make it more simpler). Histogram Backprojection is used with camshift algorithm etc.
-**How do we do it ?** We create a histogram of an image containing our object of interest (in our case, the ground, leaving player and other things). The object should fill the image as far as possible for better results. And a color histogram is preferred over grayscale histogram, because color of the object is more better way to define the object than its grayscale intensity. We then "back-project" this histogram over our test image where we need to find the object, ie in other words, we calculate the probability of every pixel belonging to the ground and show it. The resulting output on proper thresholding gives us the ground alone.
+**How do we do it ?** We create a histogram of an image containing our object of interest (in our case, the ground, leaving player and other things). The object should fill the image as far as possible for better results. And a color histogram is preferred over grayscale histogram, because color of the object is a better way to define the object than its grayscale intensity. We then "back-project" this histogram over our test image where we need to find the object, ie in other words, we calculate the probability of every pixel belonging to the ground and show it. The resulting output on proper thresholding gives us the ground alone.
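
OpenCV wraps this whole procedure in ``cv2.calcBackProject``; a minimal sketch (file names hypothetical)::

    import cv2

    roi = cv2.imread('ground.png')        # sample of the object (the ground)
    target = cv2.imread('field.jpg')      # image in which to find it

    hsv_roi = cv2.cvtColor(roi, cv2.COLOR_BGR2HSV)
    hsv_tgt = cv2.cvtColor(target, cv2.COLOR_BGR2HSV)

    # color (Hue-Saturation) histogram of the object, normalized to [0,255]
    roihist = cv2.calcHist([hsv_roi], [0, 1], None, [180, 256], [0, 180, 0, 256])
    cv2.normalize(roihist, roihist, 0, 255, cv2.NORM_MINMAX)

    # probability map: how likely each target pixel belongs to the object
    dst = cv2.calcBackProject([hsv_tgt], [0, 1], roihist, [0, 180, 0, 256], 1)
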
Algorithm in Numpy
====================

View File

@@ -37,7 +37,7 @@ Now let's see it in OpenCV.
kNN in OpenCV
===============
-We will do a simple example here, with two families (classes), just like above. Then in the next chapter, we will do much more better example.
+We will do a simple example here, with two families (classes), just like above. Then in the next chapter, we will do an even better example.
So here, we label the Red family as **Class-0** (so denoted by 0) and Blue family as **Class-1** (denoted by 1). We create 25 families or 25 training data, and label them either Class-0 or Class-1. We do all these with the help of Random Number Generator in Numpy.
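
A minimal sketch of that setup, assuming the OpenCV 2.4-era ``cv2.KNearest`` API this tutorial targets::

    import cv2
    import numpy as np

    # 25 training points in (x, y), labelled Class-0 (Red) or Class-1 (Blue)
    trainData = np.random.randint(0, 100, (25, 2)).astype(np.float32)
    responses = np.random.randint(0, 2, (25, 1)).astype(np.float32)

    knn = cv2.KNearest()
    knn.train(trainData, responses)

    # classify one newcomer by its 3 nearest neighbours
    newcomer = np.random.randint(0, 100, (1, 2)).astype(np.float32)
    ret, results, neighbours, dist = knn.find_nearest(newcomer, 3)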

View File

@@ -9,7 +9,7 @@ Video Analysis
.. cssclass:: toctableopencv
=========== ======================================================
-|vdo_1| We have already seen an example of color-based tracking. It is simpler. This time, we see much more better algorithms like "Meanshift", and its upgraded version, "Camshift" to find and track them.
+|vdo_1| We have already seen an example of color-based tracking. It is simpler. This time, we see significantly better algorithms like "Meanshift", and its upgraded version, "Camshift" to find and track them.
=========== ======================================================