moved the new docs from docroot to doc

This commit is contained in:
Vadim Pisarevsky 2011-05-09 16:35:11 +00:00
parent 7491596ef8
commit a15fe21ea0
69 changed files with 53241 additions and 0 deletions


@ -0,0 +1,20 @@
project(opencv_refman1)
file(GLOB_RECURSE OPENCV1_FILES_PICT pics/*.png pics/*.jpg)
file(GLOB_RECURSE OPENCV1_FILES_RST *.rst)
add_custom_target(refman1
${SPHINX_BUILD}
-b latex -c ${CMAKE_CURRENT_SOURCE_DIR}
${CMAKE_CURRENT_SOURCE_DIR} .
COMMAND ${CMAKE_COMMAND} -E copy_directory
${CMAKE_CURRENT_SOURCE_DIR}/pics ${CMAKE_CURRENT_BINARY_DIR}/pics
COMMAND ${CMAKE_COMMAND} -E copy
${CMAKE_CURRENT_SOURCE_DIR}/../mymath.sty ${CMAKE_CURRENT_BINARY_DIR}
COMMAND ${PDFLATEX_COMPILER} opencv1x
COMMAND ${PDFLATEX_COMPILER} opencv1x
DEPENDS conf.py ${OPENCV1_FILES_RST} ${OPENCV1_FILES_PICT}
WORKING_DIRECTORY ${CMAKE_CURRENT_BINARY_DIR}
COMMENT "Generating the OpenCV 1.x Reference Manual")
#install(FILES ${CMAKE_CURRENT_BINARY_DIR}/opencv1x.pdf DESTINATION "${OPENCV_DOC_INSTALL_PATH}" COMPONENT main)


@ -0,0 +1,22 @@
############
Bibliography
############
.. [Agrawal08] Agrawal, M. and Konolige, K. and Blas, M.R. "CenSurE: Center Surround Extremas for Realtime Feature Detection and Matching", ECCV08, 2008
.. [BT96] Tomasi, C. and Birchfield, S.T. "Depth Discontinuities by Pixel-to-Pixel Stereo", STAN-CS, 1996
.. [Bay06] Bay, H. and Tuytelaars, T. and Van Gool, L. "SURF: Speeded Up Robust Features", 9th European Conference on Computer Vision, 2006
.. [Borgefors86] Borgefors, Gunilla, "Distance transformations in digital images". Comput. Vision Graph. Image Process. 34 3, pp 344--371 (1986)
.. [Bradski00] Davis, J.W. and Bradski, G.R. "Motion Segmentation and Pose Recognition with Motion History Gradients", WACV00, 2000
.. [Bradski98] Bradski, G.R. "Computer Vision Face Tracking for Use in a Perceptual User Interface", Intel, 1998
.. [Davis97] Davis, J.W. and Bobick, A.F. "The Representation and Recognition of Action Using Temporal Templates", CVPR97, 1997
.. [Felzenszwalb04] Felzenszwalb, Pedro F. and Huttenlocher, Daniel P. "Distance Transforms of Sampled Functions", TR2004-1963 (2004)
.. [Hartley99] Hartley, R.I., "Theory and Practice of Projective Rectification". IJCV 35 2, pp 115-127 (1999)

doc/opencv1/c/c_index.rst

@ -0,0 +1,16 @@
###########
C Reference
###########
.. highlight:: c
.. toctree::
:maxdepth: 2
core
imgproc
features2d
objdetect
video
highgui
calib3d

doc/opencv1/c/calib3d.rst

@ -0,0 +1,10 @@
*******************************************************
calib3d. Camera Calibration, Pose Estimation and Stereo
*******************************************************
.. toctree::
:maxdepth: 2
calib3d_camera_calibration_and_3d_reconstruction

File diff suppressed because it is too large

doc/opencv1/c/conf.py

@ -0,0 +1,206 @@
# -*- coding: utf-8 -*-
#
# opencv documentation build configuration file, created by
# sphinx-quickstart on Thu Jun 4 21:06:43 2009.
#
# This file is execfile()d with the current directory set to its containing dir.
#
# Note that not all possible configuration values are present in this
# autogenerated file.
#
# All configuration values have a default; values that are commented out
# serve to show the default.
import sys, os
# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
#sys.path.append(os.path.abspath('.'))
# -- General configuration -----------------------------------------------------
# Add any Sphinx extension module names here, as strings. They can be extensions
# coming with Sphinx (named 'sphinx.ext.*') or your custom ones.
extensions = ['sphinx.ext.pngmath', 'sphinx.ext.doctest'] # , 'sphinx.ext.intersphinx']
doctest_test_doctest_blocks = 'block'
# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']
# The suffix of source filenames.
source_suffix = '.rst'
# The encoding of source files.
#source_encoding = 'utf-8'
# The master toctree document.
master_doc = 'index'
# General information about the project.
project = u'opencv'
copyright = u'2010, authors'
# The version info for the project you're documenting, acts as replacement for
# |version| and |release|, also used in various other places throughout the
# built documents.
#
# The short X.Y version.
version = '2.2'
# The full version, including alpha/beta/rc tags.
release = '2.2.9'
# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
#language = None
# There are two options for replacing |today|: either, you set today to some
# non-false value, then it is used:
#today = ''
# Else, today_fmt is used as the format for a strftime call.
#today_fmt = '%B %d, %Y'
# List of documents that shouldn't be included in the build.
#unused_docs = []
# List of directories, relative to source directory, that shouldn't be searched
# for source files.
exclude_trees = ['_build']
# The reST default role (used for this markup: `text`) to use for all documents.
#default_role = None
# If true, '()' will be appended to :func: etc. cross-reference text.
#add_function_parentheses = True
# If true, the current module name will be prepended to all description
# unit titles (such as .. function::).
#add_module_names = True
# If true, sectionauthor and moduleauthor directives will be shown in the
# output. They are ignored by default.
#show_authors = False
# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'sphinx'
# A list of ignored prefixes for module index sorting.
#modindex_common_prefix = []
# -- Options for HTML output ---------------------------------------------------
# The theme to use for HTML and HTML Help pages. Major themes that come with
# Sphinx are currently 'default' and 'sphinxdoc'.
html_theme = 'blue'
# Theme options are theme-specific and customize the look and feel of a theme
# further. For a list of options available for each theme, see the
# documentation.
html_theme_options = {
"lang" : "%LANG%" # buildall substitutes this for c, cpp, py
}
# Add any paths that contain custom themes here, relative to this directory.
html_theme_path = ['../_themes']
# The name for this set of Sphinx documents. If None, it defaults to
# "<project> v<release> documentation".
#html_title = None
# A shorter title for the navigation bar. Default is the same as html_title.
#html_short_title = None
# The name of an image file (relative to this directory) to place at the top
# of the sidebar.
html_logo = '../opencv-logo2.png'
# The name of an image file (within the static path) to use as favicon of the
# docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32
# pixels large.
#html_favicon = None
# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
html_static_path = ['../_static']
# If not '', a 'Last updated on:' timestamp is inserted at every page bottom,
# using the given strftime format.
#html_last_updated_fmt = '%b %d, %Y'
# If true, SmartyPants will be used to convert quotes and dashes to
# typographically correct entities.
#html_use_smartypants = True
# Custom sidebar templates, maps document names to template names.
#html_sidebars = {}
# Additional templates that should be rendered to pages, maps page names to
# template names.
#html_additional_pages = {}
# If false, no module index is generated.
#html_use_modindex = True
# If false, no index is generated.
#html_use_index = True
# If true, the index is split into individual pages for each letter.
#html_split_index = False
# If true, links to the reST sources are added to the pages.
#html_show_sourcelink = True
# If true, an OpenSearch description file will be output, and all pages will
# contain a <link> tag referring to it. The value of this option must be the
# base URL from which the finished HTML is served.
#html_use_opensearch = ''
# If nonempty, this is the file name suffix for HTML files (e.g. ".xhtml").
#html_file_suffix = ''
# Output file base name for HTML help builder.
htmlhelp_basename = 'opencvdoc'
# -- Options for LaTeX output --------------------------------------------------
# The paper size ('letter' or 'a4').
#latex_paper_size = 'letter'
# The font size ('10pt', '11pt' or '12pt').
#latex_font_size = '10pt'
# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title, author, documentclass [howto/manual]).
latex_documents = [
('index', 'opencv.tex', u'opencv Documentation',
u'author', 'manual'),
]
# The name of an image file (relative to this directory) to place at the top of
# the title page.
#latex_logo = None
# For "manual" documents, if this is true, then toplevel headings are parts,
# not chapters.
#latex_use_parts = False
# Additional stuff for the LaTeX preamble.
#latex_preamble = ''
# Documents to append as an appendix to all manuals.
#latex_appendices = []
# If false, no module index is generated.
#latex_use_modindex = True
pngmath_latex_preamble = r'\usepackage{mymath}\usepackage{amsmath}\usepackage{bbm}\usepackage[usenames]{color}'
# intersphinx_mapping = {
# 'http://docs.python.org/': None,
# }
intersphinx_mapping = {}
latex_elements = {'preamble': r'\usepackage{mymath}\usepackage{amssymb}\usepackage{amsmath}\usepackage{bbm}'}

doc/opencv1/c/core.rst

@ -0,0 +1,14 @@
****************************
core. The Core Functionality
****************************
.. toctree::
:maxdepth: 2
core_basic_structures
core_operations_on_arrays
core_dynamic_structures
core_drawing_functions
core_xml_yaml_persistence
core_clustering
core_utility_and_system_functions_and_macros

File diff suppressed because it is too large


@ -0,0 +1,312 @@
Clustering
==========
.. highlight:: c
.. index:: KMeans2
.. _KMeans2:
KMeans2
-------
`id=0.323145542573 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/c/core/KMeans2>`__
.. cfunction:: int cvKMeans2(const CvArr* samples, int nclusters, CvArr* labels, CvTermCriteria termcrit, int attempts=1, CvRNG* rng=0, int flags=0, CvArr* centers=0, double* compactness=0)
Splits a set of vectors into a given number of clusters.
:param samples: Floating-point matrix of input samples, one row per sample
:param nclusters: Number of clusters to split the set by
:param labels: Output integer vector storing cluster indices for every sample
:param termcrit: Specifies maximum number of iterations and/or accuracy (distance the centers can move by between subsequent iterations)
:param attempts: How many times the algorithm is executed using different initial labelings. The algorithm returns labels that yield the best compactness (see the last function parameter)
:param rng: Optional external random number generator; can be used to fully control the function behaviour
:param flags: Can be 0 or ``CV_KMEANS_USE_INITIAL_LABELS`` . The latter
value means that during the first (and possibly the only) attempt, the
function uses the user-supplied labels as the initial approximation
instead of generating random labels. For the second and further attempts,
the function will use randomly generated labels in any case
:param centers: The optional output array of the cluster centers
:param compactness: The optional output parameter, which is computed as :math:`\sum_i ||\texttt{samples}_i - \texttt{centers}_{\texttt{labels}_i}||^2`
after every attempt; the best (minimum) value is chosen and the
corresponding labels are returned by the function. Basically, the
user can use only the core of the function, set the number of
attempts to 1, initialize labels each time using a custom algorithm
( ``flags=CV_KMEANS_USE_INITIAL_LABELS`` ) and, based on the output compactness
or any other criteria, choose the best clustering.
The function ``cvKMeans2`` implements a k-means algorithm that finds the
centers of ``nclusters`` clusters and groups the input samples around the
clusters. On output, :math:`\texttt{labels}_i` contains a cluster index for
the sample stored in the i-th row of the ``samples`` matrix.
::
#include "cxcore.h"
#include "highgui.h"
int main( int argc, char** argv )
{
#define MAX_CLUSTERS 5
CvScalar color_tab[MAX_CLUSTERS];
IplImage* img = cvCreateImage( cvSize( 500, 500 ), 8, 3 );
CvRNG rng = cvRNG(0xffffffff);
color_tab[0] = CV_RGB(255,0,0);
color_tab[1] = CV_RGB(0,255,0);
color_tab[2] = CV_RGB(100,100,255);
color_tab[3] = CV_RGB(255,0,255);
color_tab[4] = CV_RGB(255,255,0);
cvNamedWindow( "clusters", 1 );
for(;;)
{
int k, cluster_count = cvRandInt(&rng)%MAX_CLUSTERS + 1;
int i, sample_count = cvRandInt(&rng)%1000 + 1;
CvMat* points = cvCreateMat( sample_count, 1, CV_32FC2 );
CvMat* clusters = cvCreateMat( sample_count, 1, CV_32SC1 );
/* generate random sample from multigaussian distribution */
for( k = 0; k < cluster_count; k++ )
{
CvPoint center;
CvMat point_chunk;
center.x = cvRandInt(&rng)%img->width;
center.y = cvRandInt(&rng)%img->height;
cvGetRows( points,
&point_chunk,
k*sample_count/cluster_count,
(k == (cluster_count - 1)) ?
sample_count :
(k+1)*sample_count/cluster_count );
cvRandArr( &rng, &point_chunk, CV_RAND_NORMAL,
cvScalar(center.x,center.y,0,0),
cvScalar(img->width/6, img->height/6,0,0) );
}
/* shuffle samples */
for( i = 0; i < sample_count/2; i++ )
{
CvPoint2D32f* pt1 =
(CvPoint2D32f*)points->data.fl + cvRandInt(&rng)%sample_count;
CvPoint2D32f* pt2 =
(CvPoint2D32f*)points->data.fl + cvRandInt(&rng)%sample_count;
CvPoint2D32f temp;
CV_SWAP( *pt1, *pt2, temp );
}
cvKMeans2( points, cluster_count, clusters,
cvTermCriteria( CV_TERMCRIT_EPS+CV_TERMCRIT_ITER, 10, 1.0 ),
1, &rng, 0, 0, 0 );
cvZero( img );
for( i = 0; i < sample_count; i++ )
{
CvPoint2D32f pt = ((CvPoint2D32f*)points->data.fl)[i];
int cluster_idx = clusters->data.i[i];
cvCircle( img,
cvPointFrom32f(pt),
2,
color_tab[cluster_idx],
CV_FILLED );
}
cvReleaseMat( &points );
cvReleaseMat( &clusters );
cvShowImage( "clusters", img );
int key = cvWaitKey(0);
if( key == 27 )
break;
}
return 0;
}
..
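The compactness criterion described above is simple enough to state outside of OpenCV. The following plain-C sketch (``compactness_1d`` is a hypothetical helper, not part of the library) computes the same sum of squared sample-to-center distances that ``cvKMeans2`` reports per attempt and minimizes across attempts:

```c
#include <assert.h>
#include <math.h>

/* Compactness: sum over all samples of the squared distance to the
   center assigned by labels. A 1-D sketch of the quantity cvKMeans2
   computes per attempt and keeps the minimum of. */
double compactness_1d( const double* samples, const int* labels,
                       const double* centers, int n )
{
    double sum = 0;
    int i;
    for( i = 0; i < n; i++ )
    {
        double d = samples[i] - centers[labels[i]];
        sum += d*d;
    }
    return sum;
}
```

For example, samples {0, 1, 10, 11} with centers {0.5, 10.5} give a compactness of 1.0 (four deviations of 0.5 each).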
.. index:: SeqPartition
.. _SeqPartition:
SeqPartition
------------
`id=0.684667795556 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/c/core/SeqPartition>`__
.. cfunction:: int cvSeqPartition( const CvSeq* seq, CvMemStorage* storage, CvSeq** labels, CvCmpFunc is_equal, void* userdata )
Splits a sequence into equivalency classes.
:param seq: The sequence to partition
:param storage: The storage block to store the sequence of equivalency classes. If it is NULL, the function uses ``seq->storage`` for output labels
:param labels: Output parameter. Double pointer to the sequence of 0-based labels of input sequence elements
:param is_equal: The relation function that should return non-zero if the two particular sequence elements are from the same class, and zero otherwise. The partitioning algorithm uses the transitive closure of the relation function as an equivalency criterion
:param userdata: Pointer that is transparently passed to the ``is_equal`` function
::
typedef int (CV_CDECL* CvCmpFunc)(const void* a, const void* b, void* userdata);
..
The function ``cvSeqPartition`` implements a quadratic algorithm for
splitting a set into one or more equivalency classes. The function
returns the number of equivalency classes.
::
#include "cxcore.h"
#include "highgui.h"
#include <stdio.h>
CvSeq* point_seq = 0;
IplImage* canvas = 0;
CvScalar* colors = 0;
int pos = 10;
int is_equal( const void* _a, const void* _b, void* userdata )
{
CvPoint a = *(const CvPoint*)_a;
CvPoint b = *(const CvPoint*)_b;
double threshold = *(double*)userdata;
return (double)((a.x - b.x)*(a.x - b.x) + (a.y - b.y)*(a.y - b.y)) <=
threshold;
}
void on_track( int pos )
{
CvSeq* labels = 0;
double threshold = pos*pos;
int i, class_count = cvSeqPartition( point_seq,
0,
&labels,
is_equal,
&threshold );
printf( "%4d classes\n", class_count );
cvZero( canvas );
for( i = 0; i < labels->total; i++ )
{
CvPoint pt = *(CvPoint*)cvGetSeqElem( point_seq, i );
CvScalar color = colors[*(int*)cvGetSeqElem( labels, i )];
cvCircle( canvas, pt, 1, color, -1 );
}
cvShowImage( "points", canvas );
}
int main( int argc, char** argv )
{
CvMemStorage* storage = cvCreateMemStorage(0);
point_seq = cvCreateSeq( CV_32SC2,
sizeof(CvSeq),
sizeof(CvPoint),
storage );
CvRNG rng = cvRNG(0xffffffff);
int width = 500, height = 500;
int i, count = 1000;
canvas = cvCreateImage( cvSize(width,height), 8, 3 );
colors = (CvScalar*)cvAlloc( count*sizeof(colors[0]) );
for( i = 0; i < count; i++ )
{
CvPoint pt;
int icolor;
pt.x = cvRandInt( &rng )%width;
pt.y = cvRandInt( &rng )%height;
cvSeqPush( point_seq, &pt );
icolor = cvRandInt( &rng ) | 0x00404040;
colors[i] = CV_RGB(icolor & 255,
(icolor >> 8)&255,
(icolor >> 16)&255);
}
cvNamedWindow( "points", 1 );
cvCreateTrackbar( "threshold", "points", &pos, 50, on_track );
on_track(pos);
cvWaitKey(0);
return 0;
}
..
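For reference, the partitioning contract can be sketched without ``CvSeq`` at all. The helper below (``partition_array`` and ``near_int`` are hypothetical names, not OpenCV API) labels a plain array under the transitive closure of a user-supplied relation, exactly as the text describes:

```c
#include <assert.h>
#include <stddef.h>

typedef int (*cmp_fn)( const void* a, const void* b, void* userdata );

/* Partition of a plain array into equivalency classes under the
   transitive closure of is_equal -- the same contract as
   cvSeqPartition, but without CvSeq. labels[i] receives the 0-based
   class index; the class count is returned. (A brute-force sketch;
   worst case is cubic, not quadratic.) */
int partition_array( const void* elems, int count, size_t elem_size,
                     int* labels, cmp_fn is_equal, void* userdata )
{
    const char* base = (const char*)elems;
    int i, j, k, classes = 0;
    for( i = 0; i < count; i++ )
        labels[i] = -1;
    for( i = 0; i < count; i++ )
    {
        int changed;
        if( labels[i] != -1 )
            continue;
        labels[i] = classes;
        do  /* flood the new class until no more elements join it */
        {
            changed = 0;
            for( j = 0; j < count; j++ )
            {
                if( labels[j] != -1 )
                    continue;
                for( k = 0; k < count; k++ )
                    if( labels[k] == classes &&
                        is_equal( base + j*elem_size, base + k*elem_size, userdata ))
                    {
                        labels[j] = classes;
                        changed = 1;
                        break;
                    }
            }
        } while( changed );
        classes++;
    }
    return classes;
}

/* example relation: integers within distance 1 are equivalent */
static int near_int( const void* a, const void* b, void* userdata )
{
    int x = *(const int*)a, y = *(const int*)b;
    (void)userdata;
    return (x > y ? x - y : y - x) <= 1;
}
```

With the ``near_int`` relation, {1, 2, 3, 10, 20, 21} splits into three classes: {1, 2, 3}, {10}, {20, 21} — 1 and 3 end up together even though they are not directly related, which is the transitive-closure behavior.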

View File

@ -0,0 +1,898 @@
Drawing Functions
=================
.. highlight:: c
Drawing functions work with matrices/images of arbitrary depth.
The boundaries of the shapes can be rendered with antialiasing
(implemented only for 8-bit images for now). All the functions include
a ``color`` parameter that holds an RGB value (which may be constructed
with the ``CV_RGB`` macro or the :func:`cvScalar` function) for color
images, and a brightness value for grayscale images. For color images
the channel order is normally *Blue, Green, Red*; this is what
:func:`imshow`, :func:`imread` and :func:`imwrite` expect, so if you
form a color using :func:`cvScalar`, it should look like:

.. math::

    \texttt{cvScalar} (blue \_ component, green \_ component, red \_ component[, alpha \_ component])

If you are using your own image rendering and I/O functions, you can use
any channel ordering: the drawing functions process each channel
independently and do not depend on the channel order or even on the
color space used. The whole image can be converted from BGR to RGB or to
a different color space using :func:`cvtColor`.

If a drawn figure is partially or completely outside the image, the
drawing functions clip it. Also, many drawing functions can handle pixel
coordinates specified with sub-pixel accuracy; that is, the coordinates
can be passed as fixed-point numbers encoded as integers. The number of
fractional bits is specified by the ``shift`` parameter and the real
point coordinates are calculated as
:math:`\texttt{Point}(x,y)\rightarrow\texttt{Point2f}(x*2^{-shift},y*2^{-shift})`.
This feature is especially effective when rendering antialiased shapes.
Also, note that the functions do not support alpha-transparency: when
the target image is 4-channel, ``color[3]`` is simply copied to the
repainted pixels. Thus, if you want to paint semi-transparent shapes,
you can paint them in a separate buffer and then blend it with the main
image.
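In OpenCV itself that blending step can be done with ``cvAddWeighted``; as an illustration, the per-channel arithmetic such a blend performs looks like this (plain-C sketch, hypothetical helper name):

```c
#include <assert.h>

/* Fixed-point blend of one channel value:
   dst = alpha*over + (1 - alpha)*base, with alpha scaled to [0, 256].
   Painting shapes into a scratch buffer and blending every channel this
   way emulates the alpha support the drawing functions lack. */
unsigned char blend_px( unsigned char base, unsigned char over, int alpha256 )
{
    return (unsigned char)(( over*alpha256 + base*(256 - alpha256) ) >> 8);
}
```

At ``alpha256 = 0`` the base pixel survives unchanged; at ``alpha256 = 256`` the overlay replaces it; values in between mix the two.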
.. index:: Circle
.. _Circle:
Circle
------
`id=0.533309560434 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/c/core/Circle>`__
.. cfunction:: void cvCircle( CvArr* img, CvPoint center, int radius, CvScalar color, int thickness=1, int lineType=8, int shift=0 )
Draws a circle.
:param img: Image where the circle is drawn
:param center: Center of the circle
:param radius: Radius of the circle
:param color: Circle color
:param thickness: Thickness of the circle outline if positive, otherwise this indicates that a filled circle is to be drawn
:param lineType: Type of the circle boundary, see :ref:`Line` description
:param shift: Number of fractional bits in the center coordinates and radius value
The function draws a simple or filled circle with a
given center and radius.
.. index:: ClipLine
.. _ClipLine:
ClipLine
--------
`id=0.773573058754 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/c/core/ClipLine>`__
.. cfunction:: int cvClipLine( CvSize imgSize, CvPoint* pt1, CvPoint* pt2 )
Clips the line against the image rectangle.
:param imgSize: Size of the image
:param pt1: First ending point of the line segment. It is modified by the function.
:param pt2: Second ending point of the line segment. It is modified by the function.
The function calculates a part of the line segment which is entirely within the image.
It returns 0 if the line segment is completely outside the image and 1 otherwise.
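A routine with the same contract can be sketched with standard Liang-Barsky parametric clipping (``clip_line`` is a hypothetical helper, not the OpenCV implementation, and works in doubles for clarity):

```c
#include <assert.h>

/* Liang-Barsky clip of segment (x1,y1)-(x2,y2) against the rectangle
   [0, w-1] x [0, h-1]. Returns 0 if the segment is completely outside
   and 1 otherwise, updating the endpoints in place -- the same contract
   as cvClipLine. */
int clip_line( int w, int h, double* x1, double* y1, double* x2, double* y2 )
{
    double t0 = 0.0, t1 = 1.0;
    double dx = *x2 - *x1, dy = *y2 - *y1;
    double p[4], q[4];
    double nx1, ny1, nx2, ny2;
    int i;
    p[0] = -dx; q[0] = *x1 - 0;
    p[1] =  dx; q[1] = (w - 1) - *x1;
    p[2] = -dy; q[2] = *y1 - 0;
    p[3] =  dy; q[3] = (h - 1) - *y1;
    for( i = 0; i < 4; i++ )
    {
        if( p[i] == 0.0 )
        {
            if( q[i] < 0 )      /* parallel to this edge and outside */
                return 0;
        }
        else
        {
            double r = q[i] / p[i];
            if( p[i] < 0 ) { if( r > t1 ) return 0; if( r > t0 ) t0 = r; }
            else           { if( r < t0 ) return 0; if( r < t1 ) t1 = r; }
        }
    }
    nx1 = *x1 + t0*dx; ny1 = *y1 + t0*dy;
    nx2 = *x1 + t1*dx; ny2 = *y1 + t1*dy;
    *x1 = nx1; *y1 = ny1; *x2 = nx2; *y2 = ny2;
    return 1;
}
```

For example, clipping the horizontal segment (-10, 5)-(20, 5) against a 10x10 image keeps only the part from x = 0 to x = 9, while a segment lying entirely outside yields 0.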
.. index:: DrawContours
.. _DrawContours:
DrawContours
------------
`id=0.180838715035 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/c/core/DrawContours>`__
.. cfunction:: void cvDrawContours( CvArr *img, CvSeq* contour, CvScalar external_color, CvScalar hole_color, int max_level, int thickness=1, int lineType=8 )
Draws contour outlines or interiors in an image.
:param img: Image where the contours are to be drawn. As with any other drawing function, the contours are clipped with the ROI.
:param contour: Pointer to the first contour
:param external_color: Color of the external contours
:param hole_color: Color of internal contours (holes)
:param max_level: Maximal level for drawn contours. If 0, only ``contour`` is drawn. If 1, the contour and all contours following
it on the same level are drawn. If 2, all contours following and all
contours one level below the contours are drawn, and so forth. If the value
is negative, the function does not draw the contours following after ``contour`` but draws the child contours of ``contour`` up
to the :math:`|\texttt{max\_level}|-1` level.
:param thickness: Thickness of the lines the contours are drawn with.
If it is negative (for example, ``CV_FILLED``), the contour interiors are
drawn.
:param lineType: Type of the contour segments, see :ref:`Line` description
The function draws contour outlines in the image if
:math:`\texttt{thickness} \ge 0` or fills the area bounded by the
contours if :math:`\texttt{thickness}<0`.
Example: Connected component detection via contour functions
::
#include "cv.h"
#include "highgui.h"
int main( int argc, char** argv )
{
IplImage* src;
// the first command line parameter must be file name of binary
// (black-n-white) image
if( argc == 2 && (src=cvLoadImage(argv[1], 0))!= 0)
{
IplImage* dst = cvCreateImage( cvGetSize(src), 8, 3 );
CvMemStorage* storage = cvCreateMemStorage(0);
CvSeq* contour = 0;
cvThreshold( src, src, 1, 255, CV_THRESH_BINARY );
cvNamedWindow( "Source", 1 );
cvShowImage( "Source", src );
cvFindContours( src, storage, &contour, sizeof(CvContour),
CV_RETR_CCOMP, CV_CHAIN_APPROX_SIMPLE );
cvZero( dst );
for( ; contour != 0; contour = contour->h_next )
{
CvScalar color = CV_RGB( rand()&255, rand()&255, rand()&255 );
/* replace CV_FILLED with 1 to see the outlines */
cvDrawContours( dst, contour, color, color, -1, CV_FILLED, 8 );
}
cvNamedWindow( "Components", 1 );
cvShowImage( "Components", dst );
cvWaitKey(0);
}
}
..
.. index:: Ellipse
.. _Ellipse:
Ellipse
-------
`id=0.702580088492 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/c/core/Ellipse>`__
.. cfunction:: void cvEllipse( CvArr* img, CvPoint center, CvSize axes, double angle, double start_angle, double end_angle, CvScalar color, int thickness=1, int lineType=8, int shift=0 )
Draws a simple or thick elliptic arc or fills an ellipse sector.
:param img: The image
:param center: Center of the ellipse
:param axes: Length of the ellipse axes
:param angle: Rotation angle
:param start_angle: Starting angle of the elliptic arc
:param end_angle: Ending angle of the elliptic arc.
:param color: Ellipse color
:param thickness: Thickness of the ellipse arc outline if positive, otherwise this indicates that a filled ellipse sector is to be drawn
:param lineType: Type of the ellipse boundary, see :ref:`Line` description
:param shift: Number of fractional bits in the center coordinates and axes' values
The function draws a simple or thick elliptic
arc or fills an ellipse sector. The arc is clipped by the ROI rectangle.
A piecewise-linear approximation is used for antialiased arcs and
thick arcs. All the angles are given in degrees. The picture below
explains the meaning of the parameters.
Parameters of Elliptic Arc
.. image:: ../pics/ellipse.png
.. index:: EllipseBox
.. _EllipseBox:
EllipseBox
----------
`id=0.594855594674 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/c/core/EllipseBox>`__
.. cfunction:: void cvEllipseBox( CvArr* img, CvBox2D box, CvScalar color, int thickness=1, int lineType=8, int shift=0 )
Draws a simple or thick elliptic arc or fills an ellipse sector.
:param img: Image
:param box: The enclosing box of the ellipse drawn
:param color: Ellipse color
:param thickness: Thickness of the ellipse boundary
:param lineType: Type of the ellipse boundary, see :ref:`Line` description
:param shift: Number of fractional bits in the box vertex coordinates
The function draws a simple or thick ellipse outline, or fills an
ellipse. The function provides a convenient way to draw an ellipse
approximating some shape; that is what :ref:`CamShift` and
:ref:`FitEllipse` do. The ellipse drawn is clipped by the ROI rectangle.
A piecewise-linear approximation is used for antialiased arcs and thick
arcs.
.. index:: FillConvexPoly
.. _FillConvexPoly:
FillConvexPoly
--------------
`id=0.492328679574 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/c/core/FillConvexPoly>`__
.. cfunction:: void cvFillConvexPoly( CvArr* img, CvPoint* pts, int npts, CvScalar color, int lineType=8, int shift=0 )
Fills a convex polygon.
:param img: Image
:param pts: Array of the polygon vertices
:param npts: Polygon vertex counter
:param color: Polygon color
:param lineType: Type of the polygon boundaries, see :ref:`Line` description
:param shift: Number of fractional bits in the vertex coordinates
The function fills a convex polygon's interior. This function is much
faster than ``cvFillPoly`` and can fill not only convex polygons but any
monotonic polygon, i.e., a polygon whose contour intersects every
horizontal line (scan line) at most twice.
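The monotonicity condition above can be checked mechanically: walking around the contour, the sign of the vertex-to-vertex y-delta must change exactly twice (one ascending chain, one descending chain). A plain-C sketch (hypothetical helper, not OpenCV API, limited to 64 vertices):

```c
#include <assert.h>

/* Returns 1 if every horizontal scan line crosses the polygon boundary
   at most twice, i.e. the cyclic sequence of nonzero vertex y-deltas
   changes sign exactly twice. ys holds the vertex y-coordinates in
   contour order. */
int is_y_monotone( const int* ys, int n )
{
    int signs[64];
    int i, m = 0, flips = 0;
    for( i = 0; i < n; i++ )
    {
        int dy = ys[(i + 1) % n] - ys[i];
        if( dy != 0 )
            signs[m++] = dy > 0 ? 1 : -1;
    }
    if( m < 2 )
        return 1;   /* degenerate (flat) polygon */
    for( i = 0; i < m; i++ )
        if( signs[i] != signs[(i + 1) % m] )
            flips++;
    return flips == 2;
}
```

An axis-aligned rectangle passes the check, while a W-shaped outline (y-coordinates 0, 2, 1, 2, 0) fails it: a scan line through the middle of the W crosses the boundary four times.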
.. index:: FillPoly
.. _FillPoly:
FillPoly
--------
`id=0.225907613807 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/c/core/FillPoly>`__
.. cfunction:: void cvFillPoly( CvArr* img, CvPoint** pts, int* npts, int contours, CvScalar color, int lineType=8, int shift=0 )
Fills a polygon's interior.
:param img: Image
:param pts: Array of pointers to polygons
:param npts: Array of polygon vertex counters
:param contours: Number of contours that bind the filled region
:param color: Polygon color
:param lineType: Type of the polygon boundaries, see :ref:`Line` description
:param shift: Number of fractional bits in the vertex coordinates
The function fills an area bounded by several
polygonal contours. The function fills complex areas, for example,
areas with holes, contour self-intersection, and so forth.
.. index:: GetTextSize
.. _GetTextSize:
GetTextSize
-----------
`id=0.524127677241 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/c/core/GetTextSize>`__
.. cfunction:: void cvGetTextSize( const char* textString, const CvFont* font, CvSize* textSize, int* baseline )
Retrieves the width and height of a text string.
:param font: Pointer to the font structure
:param textString: Input string
:param textSize: Resultant size of the text string. Height of the text does not include the height of character parts that are below the baseline.
:param baseline: y-coordinate of the baseline relative to the bottom-most text point
The function calculates the dimensions of a rectangle to enclose a text string when a specified font is used.
.. index:: InitFont
.. _InitFont:
InitFont
--------
`id=0.0379839040886 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/c/core/InitFont>`__
.. cfunction:: void cvInitFont( CvFont* font, int fontFace, double hscale, double vscale, double shear=0, int thickness=1, int lineType=8 )
Initializes font structure.
:param font: Pointer to the font structure initialized by the function
:param fontFace: Font name identifier. Only a subset of Hershey fonts http://sources.isc.org/utils/misc/hershey-font.txt are supported now:
* **CV_FONT_HERSHEY_SIMPLEX** normal size sans-serif font
* **CV_FONT_HERSHEY_PLAIN** small size sans-serif font
* **CV_FONT_HERSHEY_DUPLEX** normal size sans-serif font (more complex than ``CV_FONT_HERSHEY_SIMPLEX`` )
* **CV_FONT_HERSHEY_COMPLEX** normal size serif font
* **CV_FONT_HERSHEY_TRIPLEX** normal size serif font (more complex than ``CV_FONT_HERSHEY_COMPLEX`` )
* **CV_FONT_HERSHEY_COMPLEX_SMALL** smaller version of ``CV_FONT_HERSHEY_COMPLEX``
* **CV_FONT_HERSHEY_SCRIPT_SIMPLEX** hand-writing style font
* **CV_FONT_HERSHEY_SCRIPT_COMPLEX** more complex variant of ``CV_FONT_HERSHEY_SCRIPT_SIMPLEX``
The parameter can be composited from one of the values above and an optional ``CV_FONT_ITALIC`` flag, which indicates italic or oblique font.
:param hscale: Horizontal scale. If equal to ``1.0f`` , the characters have the original width depending on the font type. If equal to ``0.5f`` , the characters are of half the original width.
:param vscale: Vertical scale. If equal to ``1.0f`` , the characters have the original height depending on the font type. If equal to ``0.5f`` , the characters are of half the original height.
:param shear: Approximate tangent of the character slope relative to the vertical line. A zero value means a non-italic font, ``1.0f`` means about a 45 degree slope, etc.
:param thickness: Thickness of the text strokes
:param lineType: Type of the strokes, see :ref:`Line` description
The function initializes the font structure that can be passed to text rendering functions.
.. index:: InitLineIterator
.. _InitLineIterator:
InitLineIterator
----------------
`id=0.82383633716 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/c/core/InitLineIterator>`__
.. cfunction:: int cvInitLineIterator( const CvArr* image, CvPoint pt1, CvPoint pt2, CvLineIterator* line_iterator, int connectivity=8, int left_to_right=0 )
Initializes the line iterator.
:param image: Image to sample the line from
:param pt1: First ending point of the line segment
:param pt2: Second ending point of the line segment
:param line_iterator: Pointer to the line iterator state structure
:param connectivity: The scanned line connectivity, 4 or 8.
:param left_to_right:
If ( :math:`\texttt{left\_to\_right} = 0` ) then the line is scanned in the specified order, from ``pt1`` to ``pt2`` .
If ( :math:`\texttt{left\_to\_right} \ne 0` ) the line is scanned from left-most point to right-most.
The function initializes the line iterator and returns the number of
pixels between the two end points. Both points must be inside the image.
After the iterator has been initialized, all the points on the raster
line that connects the two ending points may be retrieved by successive
calls of the ``CV_NEXT_LINE_POINT`` macro. The points on the line are
calculated one by one using a 4-connected or 8-connected Bresenham
algorithm.
Example: Using the line iterator to calculate the sum of pixel values along a color line.
::
CvScalar sum_line_pixels( IplImage* image, CvPoint pt1, CvPoint pt2 )
{
CvLineIterator iterator;
int blue_sum = 0, green_sum = 0, red_sum = 0;
int count = cvInitLineIterator( image, pt1, pt2, &iterator, 8, 0 );
for( int i = 0; i < count; i++ ){
blue_sum += iterator.ptr[0];
green_sum += iterator.ptr[1];
red_sum += iterator.ptr[2];
CV_NEXT_LINE_POINT(iterator);
/* print the pixel coordinates: demonstrates how to calculate the
coordinates */
{
int offset, x, y;
/* assume that ROI is not set, otherwise need to take it
into account. */
offset = iterator.ptr - (uchar*)(image->imageData);
y = offset/image->widthStep;
x = (offset - y*image->widthStep)/(3*sizeof(uchar)
/* size of pixel */);
printf( "(%d,%d)\n", x, y );
}
}
return cvScalar( blue_sum, green_sum, red_sum );
}
..
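The 8-connected Bresenham walk the iterator performs can be written as a self-contained sketch (``walk_line8`` is a hypothetical helper, not the OpenCV iterator; it visits each pixel via a callback and returns the pixel count, analogous to what ``cvInitLineIterator`` reports for ``connectivity=8``):

```c
#include <assert.h>
#include <stdlib.h>

/* 8-connected Bresenham walk from (x0,y0) to (x1,y1). Calls visit()
   (if non-NULL) for every pixel on the raster line and returns the
   number of pixels, which is max(|dx|,|dy|) + 1 for this connectivity. */
int walk_line8( int x0, int y0, int x1, int y1,
                void (*visit)(int x, int y, void* ctx), void* ctx )
{
    int dx = abs(x1 - x0), sx = x0 < x1 ? 1 : -1;
    int dy = -abs(y1 - y0), sy = y0 < y1 ? 1 : -1;
    int err = dx + dy, e2, count = 0;
    for(;;)
    {
        if( visit )
            visit( x0, y0, ctx );
        count++;
        if( x0 == x1 && y0 == y1 )
            break;
        e2 = 2*err;
        if( e2 >= dy ) { err += dy; x0 += sx; }  /* step in x */
        if( e2 <= dx ) { err += dx; y0 += sy; }  /* step in y */
    }
    return count;
}
```

Because diagonal moves are allowed, the walk from (0,0) to (5,3) visits 6 pixels, and a pure diagonal from (3,3) to (0,0) visits 4; a degenerate one-point line counts a single pixel.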
.. index:: Line
.. _Line:
Line
----
`id=0.447321958155 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/c/core/Line>`__
.. cfunction:: void cvLine( CvArr* img, CvPoint pt1, CvPoint pt2, CvScalar color, int thickness=1, int lineType=8, int shift=0 )
Draws a line segment connecting two points.
:param img: The image
:param pt1: First point of the line segment
:param pt2: Second point of the line segment
:param color: Line color
:param thickness: Line thickness
:param lineType: Type of the line:
* **8** (or omitted) 8-connected line.
* **4** 4-connected line.
* **CV_AA** antialiased line.
:param shift: Number of fractional bits in the point coordinates
The function draws the line segment between
``pt1``
and
``pt2``
points in the image. The line is
clipped by the image or ROI rectangle. For non-antialiased lines
with integer coordinates the 8-connected or 4-connected Bresenham
algorithm is used. Thick lines are drawn with rounding endings.
Antialiased lines are drawn using Gaussian filtering. To specify
the line color, the user may use the macro
``CV_RGB( r, g, b )``
.
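For instance, a minimal sketch that draws one antialiased red line on a blank image (the image size, end points and color here are illustrative, not prescribed by the API):

```c
#include "cxcore.h"

int main( void )
{
    /* create a black 8-bit, 3-channel image */
    IplImage* img = cvCreateImage( cvSize(320, 240), IPL_DEPTH_8U, 3 );
    cvZero( img );

    /* red antialiased line, 2 pixels thick, no fractional bits */
    cvLine( img, cvPoint(10, 10), cvPoint(310, 230),
            CV_RGB(255, 0, 0), 2, CV_AA, 0 );

    cvReleaseImage( &img );
    return 0;
}
```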
.. index:: PolyLine
.. _PolyLine:
PolyLine
--------
`id=0.384796564044 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/c/core/PolyLine>`__
.. cfunction:: void cvPolyLine( CvArr* img, CvPoint** pts, int* npts, int contours, int is_closed, CvScalar color, int thickness=1, int lineType=8, int shift=0 )
Draws simple or thick polygons.
:param pts: Array of pointers to polygons
:param npts: Array of polygon vertex counters
:param contours: Number of contours that bind the filled region
:param img: Image
:param is_closed: Indicates whether the polylines must be drawn
closed. If closed, the function draws the line from the last vertex
of every contour to the first vertex.
:param color: Polyline color
:param thickness: Thickness of the polyline edges
:param lineType: Type of the line segments, see :ref:`Line` description
:param shift: Number of fractional bits in the vertex coordinates
The function draws single or multiple polygonal curves.
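A short sketch passing two closed contours in a single call (the vertex coordinates are illustrative):

```c
#include "cxcore.h"

int main( void )
{
    IplImage* img = cvCreateImage( cvSize(320, 240), IPL_DEPTH_8U, 3 );
    CvPoint  triangle[] = { {60, 200}, {160, 40}, {260, 200} };
    CvPoint  square[]   = { {20, 20}, {80, 20}, {80, 80}, {20, 80} };
    CvPoint* polys[]    = { triangle, square };
    int      npts[]     = { 3, 4 };

    cvZero( img );
    /* two closed green contours, 1 pixel thick, 8-connected */
    cvPolyLine( img, polys, npts, 2, 1, CV_RGB(0, 255, 0), 1, 8, 0 );

    cvReleaseImage( &img );
    return 0;
}
```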
.. index:: PutText
.. _PutText:
PutText
-------
`id=0.662272934911 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/c/core/PutText>`__
.. cfunction:: void cvPutText( CvArr* img, const char* text, CvPoint org, const CvFont* font, CvScalar color )
Draws a text string.
:param img: Input image
:param text: String to print
:param org: Coordinates of the bottom-left corner of the first letter
:param font: Pointer to the font structure
:param color: Text color
The function renders the text in the image with
the specified font and color. The printed text is clipped by the ROI
rectangle. Symbols that do not belong to the specified font are
replaced with the symbol for a rectangle.
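Note that the font structure must be prepared with ``cvInitFont`` before it can be passed to ``cvPutText``; a minimal sketch (the font face, scale and text are illustrative):

```c
#include "cxcore.h"

int main( void )
{
    IplImage* img = cvCreateImage( cvSize(320, 100), IPL_DEPTH_8U, 3 );
    CvFont font;

    cvZero( img );
    /* face, hscale, vscale, shear, thickness, line type */
    cvInitFont( &font, CV_FONT_HERSHEY_SIMPLEX, 1.0, 1.0, 0, 2, CV_AA );
    cvPutText( img, "Hello, OpenCV", cvPoint(10, 60), &font,
               CV_RGB(255, 255, 255) );

    cvReleaseImage( &img );
    return 0;
}
```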
.. index:: Rectangle
.. _Rectangle:
Rectangle
---------
`id=0.025949516421 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/c/core/Rectangle>`__
.. cfunction:: void cvRectangle( CvArr* img, CvPoint pt1, CvPoint pt2, CvScalar color, int thickness=1, int lineType=8, int shift=0 )
Draws a simple, thick, or filled rectangle.
:param img: Image
:param pt1: One of the rectangle's vertices
:param pt2: Opposite rectangle vertex
:param color: Line color (RGB) or brightness (grayscale image)
    :param thickness: Thickness of lines that make up the rectangle. Negative values, e.g., ``CV_FILLED``, cause the function to draw a filled rectangle.
:param lineType: Type of the line, see :ref:`Line` description
:param shift: Number of fractional bits in the point coordinates
The function draws a rectangle with two opposite corners
``pt1``
and
``pt2``
.
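A brief sketch drawing a filled rectangle and then a thin outline on top of it (coordinates and colors are illustrative):

```c
#include "cxcore.h"

int main( void )
{
    IplImage* img = cvCreateImage( cvSize(320, 240), IPL_DEPTH_8U, 3 );

    cvZero( img );
    /* filled blue rectangle: negative thickness (CV_FILLED) */
    cvRectangle( img, cvPoint(40, 40), cvPoint(280, 200),
                 CV_RGB(0, 0, 255), CV_FILLED, 8, 0 );
    /* thin white outline over the same corners */
    cvRectangle( img, cvPoint(40, 40), cvPoint(280, 200),
                 CV_RGB(255, 255, 255), 1, 8, 0 );

    cvReleaseImage( &img );
    return 0;
}
```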
.. index:: CV_RGB
.. _CV_RGB:
CV_RGB
------
`id=0.708413350932 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/c/core/CV_RGB>`__
.. cfunction:: \#define CV_RGB( r, g, b ) cvScalar( (b), (g), (r) )
Constructs a color value.
:param red: Red component
:param grn: Green component
:param blu: Blue component
Utility and System Functions and Macros
=======================================
.. highlight:: c
Error Handling
--------------
Error handling in OpenCV is similar to IPL (Image Processing
Library). In the case of an error, functions do not return the error
code. Instead, they raise an error using
``CV_ERROR``
macro that calls
:ref:`Error`
that, in its turn, sets the error
status with
:ref:`SetErrStatus`
and calls a standard or user-defined
error handler (that can display a message box, write to log, etc., see
:ref:`RedirectError`
). There is a global variable, one per each program
thread, that contains current error status (an integer value). The status
can be retrieved with the
:ref:`GetErrStatus`
function.
There are three modes of error handling (see
:ref:`SetErrMode`
and
:ref:`GetErrMode`
):
*
**Leaf**
. The program is terminated after the error handler is
called. This is the default value. It is useful for debugging, as the
error is signalled immediately after it occurs. However, for production
systems, the other two methods may be preferable as they provide more
control.
*
**Parent**
. The program is not terminated, but the error handler
is called. The stack is unwound (this is done without using the C++ exception
mechanism). The user may check the error code after calling the
``CxCore``
function with
:ref:`GetErrStatus`
and react.
*
**Silent**
. Similar to
``Parent``
mode, but no error handler
is called.
Actually, the semantics of the
``Leaf``
and
``Parent``
modes are implemented by error handlers and the above description is true for them.
:ref:`GuiBoxReport`
behaves slightly differently, and some custom error handlers may implement quite different semantics.
Macros for raising an error, checking for errors, etc.
::
/* special macros for enclosing processing statements within a function and separating
them from prologue (resource initialization) and epilogue (guaranteed resource release) */
#define __BEGIN__ {
#define __END__ goto exit; exit: ; }
/* proceeds to "resource release" stage */
#define EXIT goto exit
/* Declares locally the function name for CV_ERROR() use */
#define CV_FUNCNAME( Name ) \
static char cvFuncName[] = Name
/* Raises an error within the current context */
#define CV_ERROR( Code, Msg ) \
/* Checks status after calling CXCORE function */
#define CV_CHECK() \
/* Provides shorthand for CXCORE function call and CV_CHECK() */
#define CV_CALL( Statement ) \
/* Checks some condition in both debug and release configurations */
#define CV_ASSERT( Condition ) \
/* these macros are similar to their CV_... counterparts, but they
do not need exit label nor cvFuncName to be defined */
#define OPENCV_ERROR(status,func_name,err_msg) ...
#define OPENCV_ERRCHK(func_name,err_msg) ...
#define OPENCV_ASSERT(condition,func_name,err_msg) ...
#define OPENCV_CALL(statement) ...
..
Instead of a discussion, below is a documented example of a typical CXCORE function and an example of the function use.
Example: Use of Error Handling Macros
-------------------------------------
::
#include "cxcore.h"
#include <stdio.h>
void cvResizeDCT( CvMat* input_array, CvMat* output_array )
{
CvMat* temp_array = 0; // declare pointer that should be released anyway.
CV_FUNCNAME( "cvResizeDCT" ); // declare cvFuncName
__BEGIN__; // start processing. There may be some declarations just after
// this macro, but they could not be accessed from the epilogue.
if( !CV_IS_MAT(input_array) || !CV_IS_MAT(output_array) )
// use CV_ERROR() to raise an error
CV_ERROR( CV_StsBadArg,
"input_array or output_array are not valid matrices" );
// some restrictions that are going to be removed later, may be checked
// with CV_ASSERT()
CV_ASSERT( input_array->rows == 1 && output_array->rows == 1 );
// use CV_CALL for safe function call
CV_CALL( temp_array = cvCreateMat( input_array->rows,
MAX(input_array->cols,
output_array->cols),
input_array->type ));
if( output_array->cols > input_array->cols )
CV_CALL( cvZero( temp_array ));
temp_array->cols = input_array->cols;
CV_CALL( cvDCT( input_array, temp_array, CV_DXT_FORWARD ));
temp_array->cols = output_array->cols;
CV_CALL( cvDCT( temp_array, output_array, CV_DXT_INVERSE ));
CV_CALL( cvScale( output_array,
output_array,
1./sqrt((double)input_array->cols*output_array->cols), 0 ));
__END__; // finish processing. Epilogue follows after the macro.
// release temp_array. If temp_array has not been allocated
// before an error occurred, cvReleaseMat
// takes care of it and does nothing in this case.
cvReleaseMat( &temp_array );
}
int main( int argc, char** argv )
{
CvMat* src = cvCreateMat( 1, 512, CV_32F );
#if 1 /* no errors */
CvMat* dst = cvCreateMat( 1, 256, CV_32F );
#else
CvMat* dst = 0; /* test error processing mechanism */
#endif
cvSet( src, cvRealScalar(1.), 0 );
#if 0 /* change 0 to 1 to suppress error handler invocation */
cvSetErrMode( CV_ErrModeSilent );
#endif
cvResizeDCT( src, dst ); // if some error occurs, the message
// box will popup, or a message will be
// written to log, or some user-defined
// processing will be done
if( cvGetErrStatus() < 0 )
printf("Some error occurred" );
else
printf("Everything is OK" );
return 0;
}
..
.. index:: GetErrStatus
.. _GetErrStatus:
GetErrStatus
------------
`id=0.158872599983 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/c/core/GetErrStatus>`__
.. cfunction:: int cvGetErrStatus( void )
Returns the current error status.
The function returns the current error status -
the value set with the last
:ref:`SetErrStatus`
call. Note that in
``Leaf``
mode, the program terminates immediately after an
error occurs, so to always gain control after the function call,
one should call
:ref:`SetErrMode`
and set the
``Parent``
or
``Silent``
error mode.
.. index:: SetErrStatus
.. _SetErrStatus:
SetErrStatus
------------
`id=0.548990286602 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/c/core/SetErrStatus>`__
.. cfunction:: void cvSetErrStatus( int status )
Sets the error status.
:param status: The error status
The function sets the error status to the specified value. Mostly, the function is used to reset the error status (set to it
``CV_StsOk``
) to recover after an error. In other cases it is more natural to call
:ref:`Error`
or
``CV_ERROR``
.
.. index:: GetErrMode
.. _GetErrMode:
GetErrMode
----------
`id=0.395450807117 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/c/core/GetErrMode>`__
.. cfunction:: int cvGetErrMode(void)
Returns the current error mode.
The function returns the current error mode - the value set with the last
:ref:`SetErrMode`
call.
.. index:: SetErrMode
.. _SetErrMode:
SetErrMode
----------
`id=0.837950474175 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/c/core/SetErrMode>`__
.. cfunction:: int cvSetErrMode( int mode )

    Sets the error mode.

::

    #define CV_ErrModeLeaf     0
    #define CV_ErrModeParent   1
    #define CV_ErrModeSilent   2

..
:param mode: The error mode
The function sets the specified error mode. For descriptions of different error modes, see the beginning of the error section.
.. index:: Error
.. _Error:
Error
-----
`id=0.755789688999 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/c/core/Error>`__
.. cfunction:: int cvError( int status, const char* func_name, const char* err_msg, const char* filename, int line )
Raises an error.
:param status: The error status
:param func_name: Name of the function where the error occurred
:param err_msg: Additional information/diagnostics about the error
:param filename: Name of the file where the error occurred
:param line: Line number where the error occurred
The function sets the error status to the specified value (via
:ref:`SetErrStatus`
) and, if the error mode is not
``Silent``
, calls the error handler.
.. index:: ErrorStr
.. _ErrorStr:
ErrorStr
--------
`id=0.116403749541 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/c/core/ErrorStr>`__
.. cfunction:: const char* cvErrorStr( int status )
Returns textual description of an error status code.
:param status: The error status
The function returns the textual description for
the specified error status code. In the case of unknown status, the function
returns a NULL pointer.
.. index:: RedirectError
.. _RedirectError:
RedirectError
-------------
`id=0.0620147644903 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/c/core/RedirectError>`__
.. cfunction:: CvErrorCallback cvRedirectError( CvErrorCallback error_handler, void* userdata=NULL, void** prevUserdata=NULL )
Sets a new error handler.
:param error_handler: The new error handler
:param userdata: Arbitrary pointer that is transparently passed to the error handler
:param prevUserdata: Pointer to the previously assigned user data pointer
::
typedef int (CV_CDECL *CvErrorCallback)( int status, const char* func_name,
const char* err_msg, const char* file_name, int line );
..
The function sets a new error handler that
can be one of the standard handlers or a custom handler
that has a specific interface. The handler takes the same parameters
as the
:ref:`Error`
function. If the handler returns a non-zero value, the
program is terminated; otherwise, it continues. The error handler may
check the current error mode with
:ref:`GetErrMode`
to make a decision.
.. index:: cvNulDevReport cvStdErrReport cvGuiBoxReport
.. _cvNulDevReport cvStdErrReport cvGuiBoxReport:
cvNulDevReport cvStdErrReport cvGuiBoxReport
--------------------------------------------
`id=0.940927070556 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/c/core/cvNulDevReport%20cvStdErrReport%20cvGuiBoxReport>`__
.. cfunction:: int cvNulDevReport( int status, const char* func_name, const char* err_msg, const char* file_name, int line, void* userdata )
.. cfunction:: int cvStdErrReport( int status, const char* func_name, const char* err_msg, const char* file_name, int line, void* userdata )
.. cfunction:: int cvGuiBoxReport( int status, const char* func_name, const char* err_msg, const char* file_name, int line, void* userdata )
Provide standard error handling.
:param status: The error status
:param func_name: Name of the function where the error occurred
:param err_msg: Additional information/diagnostics about the error
:param file_name: Name of the file where the error occurred
:param line: Line number where the error occurred
:param userdata: Pointer to the user data. Ignored by the standard handlers
The functions
``cvNulDevReport``
,
``cvStdErrReport``
,
and
``cvGuiBoxReport``
provide standard error
handling.
``cvGuiBoxReport``
is the default error
handler on Win32 systems,
``cvStdErrReport``
is the default on other
systems.
``cvGuiBoxReport``
pops up a message box with the error
description and suggests a few options. Below is an example message box
that may be received with the sample code above, if one introduces an
error as described in the sample.
**Error Message Box**
.. image:: ../pics/errmsg.png
If the error handler is set to
``cvStdErrReport``
, the above message will be printed to standard error output and the program will be terminated or continued, depending on the current error mode.
**Error Message printed to Standard Error Output (in ``Leaf`` mode)**
::
OpenCV ERROR: Bad argument (input_array or output_array are not valid matrices)
in function cvResizeDCT, D:\User\VP\Projects\avl_proba\a.cpp(75)
Terminating the application...
..
.. index:: Alloc
.. _Alloc:
Alloc
-----
`id=0.593055881775 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/c/core/Alloc>`__
.. cfunction:: void* cvAlloc( size_t size )
Allocates a memory buffer.
:param size: Buffer size in bytes
The function allocates
``size``
bytes and returns
a pointer to the allocated buffer. In the case of an error the function reports an
error and returns a NULL pointer. By default,
``cvAlloc``
calls
``icvAlloc``
which
itself calls
``malloc``
. However it is possible to assign user-defined memory
allocation/deallocation functions using the
:ref:`SetMemoryManager`
function.
.. index:: Free
.. _Free:
Free
----
`id=0.667310584005 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/c/core/Free>`__
.. cfunction:: void cvFree( void** ptr )
Deallocates a memory buffer.
:param ptr: Double pointer to released buffer
The function deallocates a memory buffer allocated by
:ref:`Alloc`
. It clears the pointer to buffer upon exit, which is why
the double pointer is used. If the
``*buffer``
is already NULL, the function
does nothing.
.. index:: GetTickCount
.. _GetTickCount:
GetTickCount
------------
`id=0.0577183375288 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/c/core/GetTickCount>`__
.. cfunction:: int64 cvGetTickCount( void )
Returns the number of ticks.
The function returns the number of ticks starting from some platform-dependent event (the number of CPU ticks from startup, the number of milliseconds since 1970, etc.). The function is useful for accurate measurement of function/user-code execution time. To convert the number of ticks to time units, use
:ref:`GetTickFrequency`
.
.. index:: GetTickFrequency
.. _GetTickFrequency:
GetTickFrequency
----------------
`id=0.796183003536 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/c/core/GetTickFrequency>`__
.. cfunction:: double cvGetTickFrequency( void )
Returns the number of ticks per microsecond.
The function returns the number of ticks per microsecond. Thus, the quotient of
:ref:`GetTickCount`
and
:ref:`GetTickFrequency`
will give the number of microseconds starting from the platform-dependent event.
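For example, timing a single OpenCV call (the image size and the timed operation are arbitrary choices for illustration):

```c
#include "cxcore.h"
#include <stdio.h>

int main( void )
{
    IplImage* img = cvCreateImage( cvSize(1024, 1024), IPL_DEPTH_8U, 1 );
    int64 t0, t1;

    t0 = cvGetTickCount();
    cvSet( img, cvScalarAll(127), 0 );      /* the code being timed */
    t1 = cvGetTickCount();

    /* ticks divided by ticks-per-microsecond gives microseconds */
    printf( "cvSet took %.1f microseconds\n",
            (double)(t1 - t0) / cvGetTickFrequency() );

    cvReleaseImage( &img );
    return 0;
}
```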
.. index:: RegisterModule
.. _RegisterModule:
RegisterModule
--------------
`id=0.265903415766 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/c/core/RegisterModule>`__
.. cfunction:: int cvRegisterModule( const CvModuleInfo* moduleInfo )

    Registers another module.

::

    typedef struct CvPluginFuncInfo
    {
        void** func_addr;
        void* default_func_addr;
        const char* func_names;
        int search_modules;
        int loaded_from;
    }
    CvPluginFuncInfo;

    typedef struct CvModuleInfo
    {
        struct CvModuleInfo* next;
        const char* name;
        const char* version;
        CvPluginFuncInfo* func_tab;
    }
    CvModuleInfo;

..
:param moduleInfo: Information about the module
The function adds a module to the list of
registered modules. After the module is registered, information about
it can be retrieved using the
:ref:`GetModuleInfo`
function. Also, the
registered module makes full use of optimized plugins (IPP, MKL, ...),
supported by CXCORE. CXCORE itself, CV (computer vision), CVAUX (auxiliary
computer vision), and HIGHGUI (visualization and image/video acquisition) are
examples of modules. Registration is usually done when the shared library
is loaded. See
``cxcore/src/cxswitcher.cpp``
and
``cv/src/cvswitcher.cpp``
for details about how registration is done
and look at
``cxcore/src/cxswitcher.cpp``
,
``cxcore/src/_cxipp.h``
on how IPP and MKL are connected to the modules.
.. index:: GetModuleInfo
.. _GetModuleInfo:
GetModuleInfo
-------------
`id=0.510096912729 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/c/core/GetModuleInfo>`__
.. cfunction:: void cvGetModuleInfo( const char* moduleName, const char** version, const char** loadedAddonPlugins)
Retrieves information about registered module(s) and plugins.
:param moduleName: Name of the module of interest, or NULL, which means all the modules
:param version: The output parameter. Information about the module(s), including version
:param loadedAddonPlugins: The list of names and versions of the optimized plugins that CXCORE was able to find and load
The function returns information about one or
all of the registered modules. The returned information is stored inside
the libraries, so the user should not deallocate or modify the returned
text strings.
.. index:: UseOptimized
.. _UseOptimized:
UseOptimized
------------
`id=0.657951043449 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/c/core/UseOptimized>`__
.. cfunction:: int cvUseOptimized( int onoff )
Switches between optimized/non-optimized modes.
:param onoff: Use optimized ( :math:`\ne 0` ) or not ( :math:`=0` )
The function switches between two modes: one where
only pure C implementations from cxcore, OpenCV, etc. are used, and
one where IPP and MKL functions are used if available. When
``cvUseOptimized(0)``
is called, all the optimized libraries are
unloaded. The function may be useful for debugging, IPP and MKL upgrading on
the fly, online speed comparisons, etc. It returns the number of optimized
functions loaded. Note that by default, the optimized plugins are loaded,
so it is not necessary to call
``cvUseOptimized(1)``
in the beginning of
the program (actually, it will only increase the startup time).
.. index:: SetMemoryManager
.. _SetMemoryManager:
SetMemoryManager
----------------
`id=0.591055548987 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/c/core/SetMemoryManager>`__
.. cfunction:: void cvSetMemoryManager( CvAllocFunc allocFunc=NULL, CvFreeFunc freeFunc=NULL, void* userdata=NULL )

    Accesses custom/default memory managing functions.

::

    typedef void* (CV_CDECL *CvAllocFunc)(size_t size, void* userdata);
    typedef int (CV_CDECL *CvFreeFunc)(void* pptr, void* userdata);

..
:param allocFunc: Allocation function; the interface is similar to ``malloc`` , except that ``userdata`` may be used to determine the context
:param freeFunc: Deallocation function; the interface is similar to ``free``
:param userdata: User data that is transparently passed to the custom functions
The function sets user-defined memory
management functions (substitutes for
``malloc``
and
``free``
) that will be called
by
``cvAlloc, cvFree``
and higher-level functions (e.g.,
``cvCreateImage``
). Note
that the function should be called when there is no data allocated using
``cvAlloc``
. Also, to avoid infinite recursive calls, it is not
allowed to call
``cvAlloc``
and
:ref:`Free`
from the custom
allocation/deallocation functions.
If the ``allocFunc`` and ``freeFunc`` pointers are ``NULL``, the default memory managing functions are restored.
.. index:: SetIPLAllocators
.. _SetIPLAllocators:
SetIPLAllocators
----------------
`id=0.433242475449 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/c/core/SetIPLAllocators>`__
.. cfunction:: void cvSetIPLAllocators( Cv_iplCreateImageHeader create_header, Cv_iplAllocateImageData allocate_data, Cv_iplDeallocate deallocate, Cv_iplCreateROI create_roi, Cv_iplCloneImage clone_image )

    Switches to IPL functions for image allocation/deallocation.

::

    typedef IplImage* (CV_STDCALL* Cv_iplCreateImageHeader)
                (int,int,int,char*,char*,int,int,int,int,int,
                 IplROI*,IplImage*,void*,IplTileInfo*);
    typedef void (CV_STDCALL* Cv_iplAllocateImageData)(IplImage*,int,int);
    typedef void (CV_STDCALL* Cv_iplDeallocate)(IplImage*,int);
    typedef IplROI* (CV_STDCALL* Cv_iplCreateROI)(int,int,int,int,int);
    typedef IplImage* (CV_STDCALL* Cv_iplCloneImage)(const IplImage*);

    #define CV_TURN_ON_IPL_COMPATIBILITY() cvSetIPLAllocators( iplCreateImageHeader, iplAllocateImage, iplDeallocate, iplCreateROI, iplCloneImage )

..
:param create_header: Pointer to iplCreateImageHeader
:param allocate_data: Pointer to iplAllocateImage
:param deallocate: Pointer to iplDeallocate
:param create_roi: Pointer to iplCreateROI
:param clone_image: Pointer to iplCloneImage
The function causes CXCORE to use IPL functions
for image allocation/deallocation operations. For convenience, there
is the wrapping macro
``CV_TURN_ON_IPL_COMPATIBILITY``
. The
function is useful for applications where IPL and CXCORE/OpenCV are used
together and still there are calls to
``iplCreateImageHeader``
,
etc. The function is not necessary if IPL is called only for data
processing and all the allocation/deallocation is done by CXCORE, or
if all the allocation/deallocation is done by IPL and some of OpenCV
functions are used to process the data.
*******************************************************
features2d. Feature Detection and Descriptor Extraction
*******************************************************
.. toctree::
:maxdepth: 2
features2d_feature_detection_and_description
Feature detection and description
=================================
.. highlight:: c
* **image** The image. Keypoints (corners) will be detected on this.
* **keypoints** Keypoints detected on the image.
* **threshold** Threshold on difference between intensity of center pixel and
pixels on circle around this pixel. See description of the algorithm.
* **nonmaxSupression** If it is true then non-maximum suppression will be applied to detected corners (keypoints).
.. index:: ExtractSURF
.. _ExtractSURF:
ExtractSURF
-----------
`id=0.726137466362 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/c/features2d/ExtractSURF>`__
.. cfunction:: void cvExtractSURF( const CvArr* image, const CvArr* mask, CvSeq** keypoints, CvSeq** descriptors, CvMemStorage* storage, CvSURFParams params )
Extracts Speeded Up Robust Features from an image.
:param image: The input 8-bit grayscale image
:param mask: The optional input 8-bit mask. The features are only found in the areas that contain more than 50 % of non-zero mask pixels
:param keypoints: The output parameter; double pointer to the sequence of keypoints. The sequence of CvSURFPoint structures is as follows:
::
typedef struct CvSURFPoint
{
CvPoint2D32f pt; // position of the feature within the image
int laplacian; // -1, 0 or +1. sign of the laplacian at the point.
// can be used to speedup feature comparison
// (normally features with laplacians of different
// signs can not match)
int size; // size of the feature
float dir; // orientation of the feature: 0..360 degrees
float hessian; // value of the hessian (can be used to
// approximately estimate the feature strengths;
// see also params.hessianThreshold)
}
CvSURFPoint;
..
:param descriptors: The optional output parameter; double pointer to the sequence of descriptors. Depending on the params.extended value, each element of the sequence will be either a 64-element or a 128-element floating-point ( ``CV_32F`` ) vector. If the parameter is NULL, the descriptors are not computed
:param storage: Memory storage where keypoints and descriptors will be stored
:param params: Various algorithm parameters put to the structure CvSURFParams:
::
typedef struct CvSURFParams
{
int extended; // 0 means basic descriptors (64 elements each),
// 1 means extended descriptors (128 elements each)
double hessianThreshold; // only features with keypoint.hessian
// larger than that are extracted.
// good default value is ~300-500 (can depend on the
// average local contrast and sharpness of the image).
// user can further filter out some features based on
// their hessian values and other characteristics.
int nOctaves; // the number of octaves to be used for extraction.
// With each next octave the feature size is doubled
// (3 by default)
int nOctaveLayers; // The number of layers within each octave
// (4 by default)
}
CvSURFParams;
CvSURFParams cvSURFParams(double hessianThreshold, int extended=0);
// returns default parameters
..
The function cvExtractSURF finds robust features in the image, as
described in [Bay06]_. For each feature it returns its location, size,
orientation and optionally the descriptor, basic or extended. The function
can be used for object tracking and localization, image stitching etc.
See the
``find_obj.cpp``
demo in OpenCV samples directory.
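A condensed sketch of the idea behind that demo (the input file name and the Hessian threshold of 500 are illustrative):

```c
#include "cv.h"
#include "highgui.h"
#include <stdio.h>

int main( int argc, char** argv )
{
    /* load the image as 8-bit grayscale */
    IplImage* img = cvLoadImage( argc > 1 ? argv[1] : "lena.jpg", 0 );
    CvMemStorage* storage = cvCreateMemStorage(0);
    CvSeq *keypoints = 0, *descriptors = 0;

    if( !img )
        return -1;

    /* basic (64-element) descriptors, hessianThreshold = 500 */
    cvExtractSURF( img, NULL, &keypoints, &descriptors, storage,
                   cvSURFParams(500, 0) );
    printf( "found %d SURF keypoints\n", keypoints->total );

    cvReleaseMemStorage( &storage );
    cvReleaseImage( &img );
    return 0;
}
```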
.. index:: GetStarKeypoints
.. _GetStarKeypoints:
GetStarKeypoints
----------------
`id=0.460873667573 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/c/features2d/GetStarKeypoints>`__
.. cfunction:: CvSeq* cvGetStarKeypoints( const CvArr* image, CvMemStorage* storage, CvStarDetectorParams params=cvStarDetectorParams() )
Retrieves keypoints using the StarDetector algorithm.
:param image: The input 8-bit grayscale image
:param storage: Memory storage where the keypoints will be stored
:param params: Various algorithm parameters given to the structure CvStarDetectorParams:
::
typedef struct CvStarDetectorParams
{
int maxSize; // maximal size of the features detected. The following
// values of the parameter are supported:
// 4, 6, 8, 11, 12, 16, 22, 23, 32, 45, 46, 64, 90, 128
int responseThreshold; // threshold for the approximated laplacian,
// used to eliminate weak features
int lineThresholdProjected; // another threshold for laplacian to
// eliminate edges
int lineThresholdBinarized; // another threshold for the feature
// scale to eliminate edges
int suppressNonmaxSize; // linear size of a pixel neighborhood
// for non-maxima suppression
}
CvStarDetectorParams;
..
The function GetStarKeypoints extracts keypoints that are local
scale-space extremas. The scale-space is constructed by computing
approximate values of laplacians with different sigma's at each
pixel. Instead of using pyramids, a popular approach to save computing
time, all of the laplacians are computed at each pixel of the original
high-resolution image. But each approximate laplacian value is computed
in O(1) time regardless of the sigma, thanks to the use of integral
images. The algorithm is based on the paper [Agrawal08]_, but instead
of a square, hexagon or octagon it uses an 8-end star shape, hence the name,
consisting of overlapping upright and tilted squares.
Each computed feature is represented by the following structure:
::
typedef struct CvStarKeypoint
{
CvPoint pt; // coordinates of the feature
int size; // feature size, see CvStarDetectorParams::maxSize
float response; // the approximated laplacian value at that point.
}
CvStarKeypoint;
inline CvStarKeypoint cvStarKeypoint(CvPoint pt, int size, float response);
..
Below is the small usage sample:
::
#include "cv.h"
#include "highgui.h"
int main(int argc, char** argv)
{
const char* filename = argc > 1 ? argv[1] : "lena.jpg";
IplImage* img = cvLoadImage( filename, 0 ), *cimg;
CvMemStorage* storage = cvCreateMemStorage(0);
CvSeq* keypoints = 0;
int i;
if( !img )
return 0;
cvNamedWindow( "image", 1 );
cvShowImage( "image", img );
cvNamedWindow( "features", 1 );
cimg = cvCreateImage( cvGetSize(img), 8, 3 );
cvCvtColor( img, cimg, CV_GRAY2BGR );
keypoints = cvGetStarKeypoints( img, storage, cvStarDetectorParams(45) );
for( i = 0; i < (keypoints ? keypoints->total : 0); i++ )
{
CvStarKeypoint kpt = *(CvStarKeypoint*)cvGetSeqElem(keypoints, i);
int r = kpt.size/2;
cvCircle( cimg, kpt.pt, r, CV_RGB(0,255,0));
cvLine( cimg, cvPoint(kpt.pt.x + r, kpt.pt.y + r),
cvPoint(kpt.pt.x - r, kpt.pt.y - r), CV_RGB(0,255,0));
cvLine( cimg, cvPoint(kpt.pt.x - r, kpt.pt.y + r),
cvPoint(kpt.pt.x + r, kpt.pt.y - r), CV_RGB(0,255,0));
}
cvShowImage( "features", cimg );
cvWaitKey();
}
..
*************************************
highgui. High-level GUI and Media I/O
*************************************
While OpenCV was designed for use in full-scale
applications and can be used within functionally rich UI frameworks (such as Qt, WinForms or Cocoa) or without any UI at all, sometimes there is a need to try some functionality quickly and visualize the results. This is what the HighGUI module has been designed for.
It provides an easy interface to:
*
create and manipulate windows that can display images and "remember" their content (no need to handle repaint events from OS)
*
add trackbars to the windows, handle simple mouse events as well as keyboard commands
*
read and write images to/from disk or memory.
*
read video from camera or file and write video to a file.
.. toctree::
:maxdepth: 2
highgui_user_interface
highgui_reading_and_writing_images_and_video
highgui_qt_new_functions
Qt new functions
================
.. highlight:: c
.. image:: ../pics/qtgui.png
This figure explains the new functionalities implemented with Qt GUI. As we can see, the new GUI provides a statusbar, a toolbar, and a control panel. The control panel can have trackbars and buttonbars attached to it.
* To attach a trackbar, the *window_name* parameter must be NULL.

* To attach a buttonbar, a button must be created. If the last bar attached to the control panel is a buttonbar, the new button is added to the right of the last button. If the last bar attached to the control panel is a trackbar, or the control panel is empty, a new buttonbar is created and the new button is attached to it.
The following code is an example used to generate the figure.
::
int main(int argc, char *argv[])
{
int value = 50;
int value2 = 0;
cvNamedWindow("main1",CV_WINDOW_NORMAL);
cvNamedWindow("main2",CV_WINDOW_AUTOSIZE | CV_GUI_NORMAL);
cvCreateTrackbar( "track1", "main1", &value, 255, NULL);//OK tested
char* nameb1 = "button1";
char* nameb2 = "button2";
cvCreateButton(nameb1,callbackButton,nameb1,CV_CHECKBOX,1);
cvCreateButton(nameb2,callbackButton,nameb2,CV_CHECKBOX,0);
cvCreateTrackbar( "track2", NULL, &value2, 255, NULL);
cvCreateButton("button5",callbackButton1,NULL,CV_RADIOBOX,0);
cvCreateButton("button6",callbackButton2,NULL,CV_RADIOBOX,1);
cvSetMouseCallback( "main2",on_mouse,NULL );
IplImage* img1 = cvLoadImage("files/flower.jpg");
IplImage* img2 = cvCreateImage(cvGetSize(img1),8,3);
CvCapture* video = cvCaptureFromFile("files/hockey.avi");
IplImage* img3 = cvCreateImage(cvGetSize(cvQueryFrame(video)),8,3);
while(cvWaitKey(33) != 27)
{
cvAddS(img1,cvScalarAll(value),img2);
cvAddS(cvQueryFrame(video),cvScalarAll(value2),img3);
cvShowImage("main1",img2);
cvShowImage("main2",img3);
}
cvDestroyAllWindows();
cvReleaseImage(&img1);
cvReleaseImage(&img2);
cvReleaseImage(&img3);
cvReleaseCapture(&video);
return 0;
}
..
.. index:: SetWindowProperty
.. _SetWindowProperty:
SetWindowProperty
-----------------
`id=0.0287199623208 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/c/highgui/SetWindowProperty>`__
.. cfunction:: void cvSetWindowProperty(const char* name, int prop_id, double prop_value)
Change the parameters of the window dynamically.
:param name: Name of the window.
:param prop_id: Window's property to edit. The operation flags:
* **CV_WND_PROP_FULLSCREEN** Change if the window is fullscreen ( ``CV_WINDOW_NORMAL`` or ``CV_WINDOW_FULLSCREEN`` ).
* **CV_WND_PROP_AUTOSIZE** Change if the user can resize the window ( ``CV_WINDOW_NORMAL`` or ``CV_WINDOW_AUTOSIZE`` ).
* **CV_WND_PROP_ASPECTRATIO** Change if the image's aspect ratio is preserved ( ``CV_WINDOW_FREERATIO`` or ``CV_WINDOW_KEEPRATIO`` ).
:param prop_value: New value of the Window's property. The operation flags:
* **CV_WINDOW_NORMAL** Change the window to normal size, or make the window resizable by the user.
* **CV_WINDOW_AUTOSIZE** The user cannot resize the window; the size is constrained by the image displayed.
* **CV_WINDOW_FULLSCREEN** Change the window to fullscreen.
* **CV_WINDOW_FREERATIO** The image expands as much as it can (no ratio constraint).
* **CV_WINDOW_KEEPRATIO** The image's aspect ratio is respected.

The function ``cvSetWindowProperty`` changes the window's properties dynamically.
.. index:: GetWindowProperty
.. _GetWindowProperty:
GetWindowProperty
-----------------
`id=0.951341223423 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/c/highgui/GetWindowProperty>`__
.. cfunction:: double cvGetWindowProperty(const char* name, int prop_id)
Get the parameters of the window.
:param name: Name of the window.
:param prop_id: Window's property to retrieve. The operation flags:

* **CV_WND_PROP_FULLSCREEN** Whether the window is fullscreen ( ``CV_WINDOW_NORMAL`` or ``CV_WINDOW_FULLSCREEN`` ).
* **CV_WND_PROP_AUTOSIZE** Whether the user can resize the window ( ``CV_WINDOW_NORMAL`` or ``CV_WINDOW_AUTOSIZE`` ).
* **CV_WND_PROP_ASPECTRATIO** Whether the image's aspect ratio is preserved ( ``CV_WINDOW_FREERATIO`` or ``CV_WINDOW_KEEPRATIO`` ).
See
:ref:`SetWindowProperty`
to know the meaning of the returned values.
The function ``cvGetWindowProperty`` returns the window's property value.
.. index:: FontQt
.. _FontQt:
FontQt
------
`id=0.31590502208 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/c/highgui/FontQt>`__
.. cfunction:: CvFont cvFontQt(const char* nameFont, int pointSize = -1, CvScalar color = cvScalarAll(0), int weight = CV_FONT_NORMAL, int style = CV_STYLE_NORMAL, int spacing = 0)

Creates the font to be used to draw text on an image (with :ref:`addText` ).
:param nameFont: Name of the font. The name should match the name of a system font (such as ``Times``). If the font is not found, a default one will be used.
:param pointSize: Size of the font. If not specified, zero or negative, the point size of the font is set to a system-dependent default value. Generally, this is 12 points.
:param color: Color of the font in BGRA -- A = 255 is fully transparent. Use the macro ``CV_RGB`` for simplicity.
:param weight: The operation flags:
* **CV_FONT_LIGHT** Weight of 25
* **CV_FONT_NORMAL** Weight of 50
* **CV_FONT_DEMIBOLD** Weight of 63
* **CV_FONT_BOLD** Weight of 75
* **CV_FONT_BLACK** Weight of 87
You can also specify a positive integer for more control.
:param style: The operation flags:
* **CV_STYLE_NORMAL** Font is normal
* **CV_STYLE_ITALIC** Font is in italic
* **CV_STYLE_OBLIQUE** Font is oblique
:param spacing: Spacing between characters. Can be negative or positive
The function
``cvFontQt``
creates a CvFont object to be used with
:ref:`addText`
. This CvFont is not compatible with cvPutText.
A basic usage of this function is:
::
CvFont font = cvFontQt("Times");
cvAddText( img1, "Hello World !", cvPoint(50,50), font);
..
.. index:: AddText
.. _AddText:
AddText
-------
`id=0.363444830722 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/c/highgui/AddText>`__
.. cfunction:: void cvAddText(const CvArr* img, const char* text, CvPoint location, CvFont *font)
Draws text on the image using the specified Qt font.
:param img: Image where the text should be drawn
:param text: Text to write on the image
:param location: Point(x,y) where the text should start on the image
:param font: Font to use to draw the text
The function
``cvAddText``
draws
*text*
on the image
*img*
using a specific font
*font*
(see example
:ref:`FontQt`
)
.. index:: DisplayOverlay
.. _DisplayOverlay:
DisplayOverlay
--------------
`id=0.523794338823 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/c/highgui/DisplayOverlay>`__
.. cfunction:: void cvDisplayOverlay(const char* name, const char* text, int delay)
Display text on the window's image as an overlay for delay milliseconds. This does not modify the image's data. The text is displayed on top of the image.
:param name: Name of the window
:param text: Overlay text to write on the window's image
:param delay: Delay to display the overlay text. If this function is called before the previous overlay text times out, the timer is restarted and the text is updated. If this value is zero, the text never disappears.
The function ``cvDisplayOverlay`` displays useful information/tips on top of the window for *delay* milliseconds.
.. index:: DisplayStatusBar
.. _DisplayStatusBar:
DisplayStatusBar
----------------
`id=0.240145617982 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/c/highgui/DisplayStatusBar>`__
.. cfunction:: void cvDisplayStatusBar(const char* name, const char* text, int delayms)
Displays text on the window's statusbar for *delayms* milliseconds.
:param name: Name of the window
:param text: Text to write on the window's statusbar
:param delayms: Delay to display the text. If this function is called before the previous text times out, the timer is restarted and the text is updated. If this value is zero, the text never disappears.
The function ``cvDisplayStatusBar`` displays useful information/tips on the window's statusbar for *delayms* milliseconds (the window must be created with the ``CV_GUI_EXPANDED`` flag).
.. index:: CreateOpenGLCallback
.. _CreateOpenGLCallback:
CreateOpenGLCallback
--------------------
`id=0.0904185033479 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/c/highgui/CreateOpenGLCallback>`__
.. cfunction:: void cvCreateOpenGLCallback( const char* window_name, CvOpenGLCallback callbackOpenGL, void* userdata CV_DEFAULT(NULL), double angle CV_DEFAULT(-1), double zmin CV_DEFAULT(-1), double zmax CV_DEFAULT(-1) )

Creates a callback function called to draw OpenGL on top of the image displayed in *window_name*.
:param window_name: Name of the window
:param callbackOpenGL:
Pointer to the function to be called every frame.
This function should be prototyped as ``void Foo(void*);`` .
:param userdata: pointer passed to the callback function. *(Optional)*
:param angle: Specifies the field of view angle, in degrees, in the y direction. *(Optional - Default 45 degrees)*
:param zmin: Specifies the distance from the viewer to the near clipping plane (always positive). *(Optional - Default 0.01)*
:param zmax: Specifies the distance from the viewer to the far clipping plane (always positive). *(Optional - Default 1000)*
The function
``cvCreateOpenGLCallback``
can be used to draw 3D data on the window. An example of callback could be:
::
void on_opengl(void* param)
{
//draw scene here
glLoadIdentity();
glTranslated(0.0, 0.0, -1.0);
glRotatef( 55, 1, 0, 0 );
glRotatef( 45, 0, 1, 0 );
glRotatef( 0, 0, 0, 1 );
static const int coords[6][4][3] = {
{ { +1, -1, -1 }, { -1, -1, -1 }, { -1, +1, -1 }, { +1, +1, -1 } },
{ { +1, +1, -1 }, { -1, +1, -1 }, { -1, +1, +1 }, { +1, +1, +1 } },
{ { +1, -1, +1 }, { +1, -1, -1 }, { +1, +1, -1 }, { +1, +1, +1 } },
{ { -1, -1, -1 }, { -1, -1, +1 }, { -1, +1, +1 }, { -1, +1, -1 } },
{ { +1, -1, +1 }, { -1, -1, +1 }, { -1, -1, -1 }, { +1, -1, -1 } },
{ { -1, -1, +1 }, { +1, -1, +1 }, { +1, +1, +1 }, { -1, +1, +1 } }
};
for (int i = 0; i < 6; ++i) {
glColor3ub( i*20, 100+i*10, i*42 );
glBegin(GL_QUADS);
for (int j = 0; j < 4; ++j) {
glVertex3d(0.2 * coords[i][j][0], 0.2 * coords[i][j][1], 0.2 * coords[i][j][2]);
}
glEnd();
}
}
..
::
CV_EXTERN_C_FUNCPTR( void (*CvOpenGLCallback)(void* userdata) );
..
.. index:: SaveWindowParameters
.. _SaveWindowParameters:
SaveWindowParameters
--------------------
`id=0.0271612689206 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/c/highgui/SaveWindowParameters>`__
.. cfunction:: void cvSaveWindowParameters(const char* name)
Saves the parameters of the window *name*.
:param name: Name of the window
The function ``cvSaveWindowParameters`` saves the size, location, flags, trackbar values, zoom and panning location of the window *name*.
.. index:: LoadWindowParameters
.. _LoadWindowParameters:
LoadWindowParameters
--------------------
`id=0.700334072235 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/c/highgui/LoadWindowParameters>`__
.. cfunction:: void cvLoadWindowParameters(const char* name)
Loads the parameters of the window *name*.
:param name: Name of the window
The function ``cvLoadWindowParameters`` loads the size, location, flags, trackbar values, zoom and panning location of the window *name*.
.. index:: CreateButton
.. _CreateButton:
CreateButton
------------
`id=0.718841096532 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/c/highgui/CreateButton>`__
.. cfunction:: int cvCreateButton( const char* button_name CV_DEFAULT(NULL), CvButtonCallback on_change CV_DEFAULT(NULL), void* userdata CV_DEFAULT(NULL), int button_type CV_DEFAULT(CV_PUSH_BUTTON), int initial_button_state CV_DEFAULT(0) )

Creates a button and attaches it to the control panel.
:param button_name: Name of the button *(if NULL, the name will be "button <number of button>")*
:param on_change:
Pointer to the function to be called every time the button changed its state.
This function should be prototyped as ``void Foo(int state, void*);`` . *state* is the current state of the button. It can be -1 for a push button, or 0 or 1 for a check/radio box button.
:param userdata: pointer passed to the callback function. *(Optional)*
:param button_type: The kind of button to create. *(Optional -- a push button by default.)*

* **CV_PUSH_BUTTON** The button will be a push button.
* **CV_CHECKBOX** The button will be a checkbox button.
* **CV_RADIOBOX** The button will be a radiobox button. Radioboxes on the same buttonbar (same line) are exclusive; only one can be selected at a time.

:param initial_button_state: Default state of the button. Used for checkbox and radiobox buttons; its value can be 0 or 1. *(Optional)*
The function ``cvCreateButton`` attaches a button to the control panel. Each button is added to a buttonbar to the right of the last button.
A new buttonbar is created if nothing was attached to the control panel before, or if the last element attached to the control panel was a trackbar.
Here are various examples of ``cvCreateButton`` function calls:
::
cvCreateButton(NULL,callbackButton);//create a push button "button 0", that will call callbackButton.
cvCreateButton("button2",callbackButton,NULL,CV_CHECKBOX,0);
cvCreateButton("button3",callbackButton,&value);
cvCreateButton("button5",callbackButton1,NULL,CV_RADIOBOX);
cvCreateButton("button6",callbackButton2,NULL,CV_PUSH_BUTTON,1);
..
::
CV_EXTERN_C_FUNCPTR( void (*CvButtonCallback)(int state, void* userdata) );
..

Reading and Writing Images and Video
====================================
.. highlight:: c
.. index:: LoadImage
.. _LoadImage:
LoadImage
---------
`id=0.469255746245 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/c/highgui/LoadImage>`__
.. cfunction:: IplImage* cvLoadImage( const char* filename, int iscolor=CV_LOAD_IMAGE_COLOR )
Loads an image from a file as an IplImage.
:param filename: Name of file to be loaded.
:param iscolor: Specific color type of the loaded image:
* **CV_LOAD_IMAGE_COLOR** the loaded image is forced to be a 3-channel color image
* **CV_LOAD_IMAGE_GRAYSCALE** the loaded image is forced to be grayscale
* **CV_LOAD_IMAGE_UNCHANGED** the loaded image will be loaded as is.
The function
``cvLoadImage``
loads an image from the specified file and returns the pointer to the loaded image. Currently the following file formats are supported:
*
Windows bitmaps - BMP, DIB
*
JPEG files - JPEG, JPG, JPE
*
Portable Network Graphics - PNG
*
Portable image format - PBM, PGM, PPM
*
Sun rasters - SR, RAS
*
TIFF files - TIFF, TIF
Note that in the current implementation the alpha channel, if any, is stripped from the output image; e.g. a 4-channel RGBA image will be loaded as RGB.
.. index:: LoadImageM
.. _LoadImageM:
LoadImageM
----------
`id=0.563485365507 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/c/highgui/LoadImageM>`__
.. cfunction:: CvMat* cvLoadImageM( const char* filename, int iscolor=CV_LOAD_IMAGE_COLOR )
Loads an image from a file as a CvMat.
:param filename: Name of file to be loaded.
:param iscolor: Specific color type of the loaded image:
* **CV_LOAD_IMAGE_COLOR** the loaded image is forced to be a 3-channel color image
* **CV_LOAD_IMAGE_GRAYSCALE** the loaded image is forced to be grayscale
* **CV_LOAD_IMAGE_UNCHANGED** the loaded image will be loaded as is.
The function
``cvLoadImageM``
loads an image from the specified file and returns the pointer to the loaded image.
Currently the following file formats are supported:
*
Windows bitmaps - BMP, DIB
*
JPEG files - JPEG, JPG, JPE
*
Portable Network Graphics - PNG
*
Portable image format - PBM, PGM, PPM
*
Sun rasters - SR, RAS
*
TIFF files - TIFF, TIF
Note that in the current implementation the alpha channel, if any, is stripped from the output image; e.g. a 4-channel RGBA image will be loaded as RGB.
.. index:: SaveImage
.. _SaveImage:
SaveImage
---------
`id=0.495970549198 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/c/highgui/SaveImage>`__
.. cfunction:: int cvSaveImage( const char* filename, const CvArr* image )
Saves an image to a specified file.
:param filename: Name of the file.
:param image: Image to be saved.
The function
``cvSaveImage``
saves the image to the specified file. The image format is chosen based on the
``filename``
extension, see
:ref:`LoadImage`
. Only 8-bit single-channel or 3-channel (with 'BGR' channel order) images can be saved using this function. If the format, depth or channel order is different, use
``cvCvtScale``
and
``cvCvtColor``
to convert it before saving, or use universal
``cvSave``
to save the image to XML or YAML format.
.. index:: CvCapture
.. _CvCapture:
CvCapture
---------
`id=0.279260095238 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/c/highgui/CvCapture>`__
.. ctype:: CvCapture
Video capturing structure.
.. cfunction:: typedef struct CvCapture CvCapture
The structure
``CvCapture``
does not have a public interface and is used only as a parameter for video capturing functions.
.. index:: CaptureFromCAM
.. _CaptureFromCAM:
CaptureFromCAM
--------------
`id=0.051648241367 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/c/highgui/CaptureFromCAM>`__
.. cfunction:: CvCapture* cvCaptureFromCAM( int index )
Initializes capturing a video from a camera.
:param index: Index of the camera to be used. If there is only one camera or it does not matter what camera is used, -1 may be passed.
The function
``cvCaptureFromCAM``
allocates and initializes the CvCapture structure for reading a video stream from the camera. Currently two camera interfaces can be used on Windows: Video for Windows (VFW) and Matrox Imaging Library (MIL); and two on Linux: V4L and FireWire (IEEE1394).
To release the structure, use
:ref:`ReleaseCapture`
.
.. index:: CaptureFromFile
.. _CaptureFromFile:
CaptureFromFile
---------------
`id=0.832457799312 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/c/highgui/CaptureFromFile>`__
.. cfunction:: CvCapture* cvCaptureFromFile( const char* filename )
Initializes capturing a video from a file.
:param filename: Name of the video file.
The function
``cvCaptureFromFile``
allocates and initializes the CvCapture structure for reading the video stream from the specified file. Which codecs and file formats are supported depends on the back end library. On Windows HighGui uses Video for Windows (VfW), on Linux ffmpeg is used and on Mac OS X the back end is QuickTime. See VideoCodecs for some discussion on what to expect and how to prepare your video files.
After the allocated structure is not used any more it should be released by the
:ref:`ReleaseCapture`
function.
.. index:: GetCaptureProperty
.. _GetCaptureProperty:
GetCaptureProperty
------------------
`id=0.315272026867 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/c/highgui/GetCaptureProperty>`__
.. cfunction:: double cvGetCaptureProperty( CvCapture* capture, int property_id )
Gets video capturing properties.
:param capture: video capturing structure.
:param property_id: Property identifier. Can be one of the following:
* **CV_CAP_PROP_POS_MSEC** Film current position in milliseconds or video capture timestamp
* **CV_CAP_PROP_POS_FRAMES** 0-based index of the frame to be decoded/captured next
* **CV_CAP_PROP_POS_AVI_RATIO** Relative position of the video file (0 - start of the film, 1 - end of the film)
* **CV_CAP_PROP_FRAME_WIDTH** Width of the frames in the video stream
* **CV_CAP_PROP_FRAME_HEIGHT** Height of the frames in the video stream
* **CV_CAP_PROP_FPS** Frame rate
* **CV_CAP_PROP_FOURCC** 4-character code of codec
* **CV_CAP_PROP_FRAME_COUNT** Number of frames in the video file
* **CV_CAP_PROP_FORMAT** The format of the Mat objects returned by retrieve()
* **CV_CAP_PROP_MODE** A backend-specific value indicating the current capture mode
* **CV_CAP_PROP_BRIGHTNESS** Brightness of the image (only for cameras)
* **CV_CAP_PROP_CONTRAST** Contrast of the image (only for cameras)
* **CV_CAP_PROP_SATURATION** Saturation of the image (only for cameras)
* **CV_CAP_PROP_HUE** Hue of the image (only for cameras)
* **CV_CAP_PROP_GAIN** Gain of the image (only for cameras)
* **CV_CAP_PROP_EXPOSURE** Exposure (only for cameras)
* **CV_CAP_PROP_CONVERT_RGB** Boolean flags indicating whether images should be converted to RGB
* **CV_CAP_PROP_WHITE_BALANCE** Currently unsupported
* **CV_CAP_PROP_RECTIFICATION** TOWRITE (note: only supported by DC1394 v 2.x backend currently)
The function
``cvGetCaptureProperty``
retrieves the specified property of the camera or video file.
.. index:: GrabFrame
.. _GrabFrame:
GrabFrame
---------
`id=0.423832304356 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/c/highgui/GrabFrame>`__
.. cfunction:: int cvGrabFrame( CvCapture* capture )
Grabs the frame from a camera or file.
:param capture: video capturing structure.
The function
``cvGrabFrame``
grabs the frame from a camera or file. The grabbed frame is stored internally. The purpose of this function is to grab the frame
*quickly*
so that synchronization can occur if it has to read from several cameras simultaneously. The grabbed frames are not exposed because they may be stored in a compressed format (as defined by the camera/driver). To retrieve the grabbed frame,
:ref:`RetrieveFrame`
should be used.
.. index:: QueryFrame
.. _QueryFrame:
QueryFrame
----------
`id=0.155007724473 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/c/highgui/QueryFrame>`__
.. cfunction:: IplImage* cvQueryFrame( CvCapture* capture )
Grabs and returns a frame from a camera or file.
:param capture: video capturing structure.
The function
``cvQueryFrame``
grabs a frame from a camera or video file, decompresses it and returns it. This function is just a combination of
:ref:`GrabFrame`
and
:ref:`RetrieveFrame`
, but in one call. The returned image should not be released or modified by the user. In the event of an error, the return value may be NULL.
.. index:: ReleaseCapture
.. _ReleaseCapture:
ReleaseCapture
--------------
`id=0.412581622343 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/c/highgui/ReleaseCapture>`__
.. cfunction:: void cvReleaseCapture( CvCapture** capture )
Releases the CvCapture structure.
:param capture: Pointer to the video capturing structure.
The function
``cvReleaseCapture``
releases the CvCapture structure allocated by
:ref:`CaptureFromFile`
or
:ref:`CaptureFromCAM`
.
.. index:: RetrieveFrame
.. _RetrieveFrame:
RetrieveFrame
-------------
`id=0.780832955331 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/c/highgui/RetrieveFrame>`__
.. cfunction:: IplImage* cvRetrieveFrame( CvCapture* capture )
Gets the image grabbed with cvGrabFrame.
:param capture: video capturing structure.
The function
``cvRetrieveFrame``
returns the pointer to the image grabbed with the
:ref:`GrabFrame`
function. The returned image should not be released or modified by the user. In the event of an error, the return value may be NULL.
.. index:: SetCaptureProperty
.. _SetCaptureProperty:
SetCaptureProperty
------------------
`id=0.0459451505183 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/c/highgui/SetCaptureProperty>`__
.. cfunction:: int cvSetCaptureProperty( CvCapture* capture, int property_id, double value )
Sets video capturing properties.
:param capture: video capturing structure.
:param property_id: property identifier. Can be one of the following:
* **CV_CAP_PROP_POS_MSEC** Film current position in milliseconds or video capture timestamp
* **CV_CAP_PROP_POS_FRAMES** 0-based index of the frame to be decoded/captured next
* **CV_CAP_PROP_POS_AVI_RATIO** Relative position of the video file (0 - start of the film, 1 - end of the film)
* **CV_CAP_PROP_FRAME_WIDTH** Width of the frames in the video stream
* **CV_CAP_PROP_FRAME_HEIGHT** Height of the frames in the video stream
* **CV_CAP_PROP_FPS** Frame rate
* **CV_CAP_PROP_FOURCC** 4-character code of codec
* **CV_CAP_PROP_FRAME_COUNT** Number of frames in the video file
* **CV_CAP_PROP_FORMAT** The format of the Mat objects returned by retrieve()
* **CV_CAP_PROP_MODE** A backend-specific value indicating the current capture mode
* **CV_CAP_PROP_BRIGHTNESS** Brightness of the image (only for cameras)
* **CV_CAP_PROP_CONTRAST** Contrast of the image (only for cameras)
* **CV_CAP_PROP_SATURATION** Saturation of the image (only for cameras)
* **CV_CAP_PROP_HUE** Hue of the image (only for cameras)
* **CV_CAP_PROP_GAIN** Gain of the image (only for cameras)
* **CV_CAP_PROP_EXPOSURE** Exposure (only for cameras)
* **CV_CAP_PROP_CONVERT_RGB** Boolean flags indicating whether images should be converted to RGB
* **CV_CAP_PROP_WHITE_BALANCE** Currently unsupported
* **CV_CAP_PROP_RECTIFICATION** TOWRITE (note: only supported by DC1394 v 2.x backend currently)
:param value: value of the property.
The function
``cvSetCaptureProperty``
sets the specified property of video capturing. Currently the function supports only video files:
``CV_CAP_PROP_POS_MSEC, CV_CAP_PROP_POS_FRAMES, CV_CAP_PROP_POS_AVI_RATIO``
.
NB: This function currently does nothing when using the latest CVS download on Linux with FFMPEG (the function contents are hidden if 0 is used and returned).
.. index:: CreateVideoWriter
.. _CreateVideoWriter:
CreateVideoWriter
-----------------
`id=0.960560559623 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/c/highgui/CreateVideoWriter>`__
.. ctype:: CvVideoWriter

.. cfunction:: CvVideoWriter* cvCreateVideoWriter( const char* filename, int fourcc, double fps, CvSize frame_size, int is_color=1 )
Creates the video file writer.
:param filename: Name of the output video file.
:param fourcc: 4-character code of the codec used to compress the frames. For example, ``CV_FOURCC('P','I','M','1')`` is an MPEG-1 codec, ``CV_FOURCC('M','J','P','G')`` is a motion-jpeg codec, etc.
Under Win32 it is possible to pass -1 in order to choose compression method and additional compression parameters from dialog. Under Win32 if 0 is passed while using an avi filename it will create a video writer that creates an uncompressed avi file.
:param fps: Framerate of the created video stream.
:param frame_size: Size of the video frames.
:param is_color: If it is not zero, the encoder will expect and encode color frames, otherwise it will work with grayscale frames (the flag is currently supported on Windows only).
The function
``cvCreateVideoWriter``
creates the video writer structure.
Which codecs and file formats are supported depends on the back end library. On Windows HighGui uses Video for Windows (VfW), on Linux ffmpeg is used and on Mac OS X the back end is QuickTime. See VideoCodecs for some discussion on what to expect.
.. index:: ReleaseVideoWriter
.. _ReleaseVideoWriter:
ReleaseVideoWriter
------------------
`id=0.271528060303 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/c/highgui/ReleaseVideoWriter>`__
.. cfunction:: void cvReleaseVideoWriter( CvVideoWriter** writer )
Releases the AVI writer.
:param writer: Pointer to the video file writer structure.
The function
``cvReleaseVideoWriter``
finishes writing to the video file and releases the structure.
.. index:: WriteFrame
.. _WriteFrame:
WriteFrame
----------
`id=0.0551918795805 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/c/highgui/WriteFrame>`__
.. cfunction:: int cvWriteFrame( CvVideoWriter* writer, const IplImage* image )
Writes a frame to a video file.
:param writer: Video writer structure
:param image: The written frame
The function
``cvWriteFrame``
writes/appends one frame to a video file.

User Interface
==============
.. highlight:: c
.. index:: ConvertImage
.. _ConvertImage:
ConvertImage
------------
`id=0.770096998899 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/c/highgui/ConvertImage>`__
.. cfunction:: void cvConvertImage( const CvArr* src, CvArr* dst, int flags=0 )
Converts one image to another with an optional vertical flip.
:param src: Source image.
:param dst: Destination image. Must be single-channel or 3-channel 8-bit image.
:param flags: The operation flags:
* **CV_CVTIMG_FLIP** Flips the image vertically
* **CV_CVTIMG_SWAP_RB** Swaps the red and blue channels. In OpenCV color images have ``BGR`` channel order, however on some systems the order needs to be reversed before displaying the image ( :ref:`ShowImage` does this automatically).
The function
``cvConvertImage``
converts one image to another and flips the result vertically if desired. The function is used by
:ref:`ShowImage`
.
.. index:: CreateTrackbar
.. _CreateTrackbar:
CreateTrackbar
--------------
`id=0.331453824667 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/c/highgui/CreateTrackbar>`__
.. cfunction:: int cvCreateTrackbar( const char* trackbarName, const char* windowName, int* value, int count, CvTrackbarCallback onChange )
Creates a trackbar and attaches it to the specified window
:param trackbarName: Name of the created trackbar.
:param windowName: Name of the window which will be used as a parent for created trackbar.
:param value: Pointer to an integer variable, whose value will reflect the position of the slider. Upon creation, the slider position is defined by this variable.
:param count: Maximal position of the slider. Minimal position is always 0.
:param onChange:
Pointer to the function to be called every time the slider changes position.
This function should be prototyped as ``void Foo(int);`` Can be NULL if callback is not required.
The function
``cvCreateTrackbar``
creates a trackbar (a.k.a. slider or range control) with the specified name and range, assigns a variable to be synchronized with the trackbar position and specifies a callback function to be called on trackbar position change. The created trackbar is displayed on top of the given window.
\
\
**[Qt Backend Only]**
qt-specific details:
* **windowName** Name of the window which will be used as a parent for created trackbar. Can be NULL if the trackbar should be attached to the control panel.
The created trackbar is displayed at the bottom of the given window if
*windowName*
is correctly provided, or displayed on the control panel if
*windowName*
is NULL.
By clicking on the label of each trackbar, it is possible to edit the trackbar's value manually for a more accurate control of it.
::
CV_EXTERN_C_FUNCPTR( void (*CvTrackbarCallback)(int pos) );
..
.. index:: DestroyAllWindows
.. _DestroyAllWindows:
DestroyAllWindows
-----------------
`id=0.237220691544 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/c/highgui/DestroyAllWindows>`__
.. cfunction:: void cvDestroyAllWindows(void)
Destroys all of the HighGUI windows.
The function
``cvDestroyAllWindows``
destroys all of the opened HighGUI windows.
.. index:: DestroyWindow
.. _DestroyWindow:
DestroyWindow
-------------
`id=0.224383930532 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/c/highgui/DestroyWindow>`__
.. cfunction:: void cvDestroyWindow( const char* name )
Destroys a window.
:param name: Name of the window to be destroyed.
The function
``cvDestroyWindow``
destroys the window with the given name.
.. index:: GetTrackbarPos
.. _GetTrackbarPos:
GetTrackbarPos
--------------
`id=0.99562223249 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/c/highgui/GetTrackbarPos>`__
.. cfunction:: int cvGetTrackbarPos( const char* trackbarName, const char* windowName )
Returns the trackbar position.
:param trackbarName: Name of the trackbar.
:param windowName: Name of the window which is the parent of the trackbar.
The function
``cvGetTrackbarPos``
returns the current position of the specified trackbar.
\
\
**[Qt Backend Only]**
qt-specific details:
* **windowName** Name of the window which is the parent of the trackbar. Can be NULL if the trackbar is attached to the control panel.
.. index:: GetWindowHandle
.. _GetWindowHandle:
GetWindowHandle
---------------
`id=0.506913773725 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/c/highgui/GetWindowHandle>`__
.. cfunction:: void* cvGetWindowHandle( const char* name )
Gets the window's handle by its name.
:param name: Name of the window.
The function
``cvGetWindowHandle``
returns the native window handle (HWND in case of Win32 and GtkWidget in case of GTK+).
\
\
**[Qt Backend Only]**
qt-specific details:
The function
``cvGetWindowHandle``
returns the native window handle inheriting from the Qt class QWidget.
.. index:: GetWindowName
.. _GetWindowName:
GetWindowName
-------------
`id=0.793825437585 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/c/highgui/GetWindowName>`__
.. cfunction:: const char* cvGetWindowName( void* windowHandle )
Gets the window's name by its handle.
:param windowHandle: Handle of the window.
The function
``cvGetWindowName``
returns the name of the window given its native handle (HWND in case of Win32 and GtkWidget in case of GTK+).
\
\
**[Qt Backend Only]**
qt-specific details:
The function
``cvGetWindowName``
returns the name of the window given its native handle (QWidget).
.. index:: InitSystem
.. _InitSystem:
InitSystem
----------
`id=0.090225420475 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/c/highgui/InitSystem>`__
.. cfunction:: int cvInitSystem( int argc, char** argv )
Initializes HighGUI.
:param argc: Number of command line arguments
:param argv: Array of command line arguments
The function
``cvInitSystem``
initializes HighGUI. If it wasn't
called explicitly by the user before the first window was created, it is
called implicitly then with
``argc=0``
,
``argv=NULL``
. Under
Win32 there is no need to call it explicitly. Under X Window the arguments
may be used to customize the look of HighGUI windows and controls.
\
\
**[Qt Backend Only]**
qt-specific details:
The function
``cvInitSystem``
is automatically called at the first ``cvNamedWindow`` call.
.. index:: MoveWindow
.. _MoveWindow:
MoveWindow
----------
`id=0.601355766212 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/c/highgui/MoveWindow>`__
.. cfunction:: void cvMoveWindow( const char* name, int x, int y )
Sets the position of the window.
:param name: Name of the window to be moved.
:param x: New x coordinate of the top-left corner
:param y: New y coordinate of the top-left corner
The function
``cvMoveWindow``
changes the position of the window.
.. index:: NamedWindow
.. _NamedWindow:
NamedWindow
-----------
`id=0.661605671694 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/c/highgui/NamedWindow>`__
.. cfunction:: int cvNamedWindow( const char* name, int flags )
Creates a window.
:param name: Name of the window in the window caption that may be used as a window identifier.
:param flags: Flags of the window. Currently the only supported flag is ``CV_WINDOW_AUTOSIZE`` . If this is set, window size is automatically adjusted to fit the displayed image (see :ref:`ShowImage` ), and the user can not change the window size manually.
The function
``cvNamedWindow``
creates a window which can be used as a placeholder for images and trackbars. Created windows are referred to by their names.
If a window with the same name already exists, the function does nothing.
\
\
**[Qt Backend Only]**
qt-specific details:
* **flags** Flags of the window. Currently the supported flags are:
* **CV_WINDOW_NORMAL or CV_WINDOW_AUTOSIZE:** ``CV_WINDOW_NORMAL`` lets the user resize the window, whereas ``CV_WINDOW_AUTOSIZE`` automatically adjusts the window size to fit the displayed image (see :ref:`ShowImage` ), and the user cannot change the window size manually.
* **CV_WINDOW_FREERATIO or CV_WINDOW_KEEPRATIO:** ``CV_WINDOW_FREERATIO`` resizes the image without preserving its aspect ratio, whereas ``CV_WINDOW_KEEPRATIO`` keeps the image's aspect ratio.
* **CV_GUI_NORMAL or CV_GUI_EXPANDED:** ``CV_GUI_NORMAL`` is the old way to draw the window, without statusbar and toolbar, whereas ``CV_GUI_EXPANDED`` is the new enhanced GUI.
This parameter is optional. The default flags set for a new window are ``CV_WINDOW_AUTOSIZE`` , ``CV_WINDOW_KEEPRATIO`` , and ``CV_GUI_EXPANDED`` .
However, if you want to modify the flags, you can combine them using the OR operator, i.e.:
::
cvNamedWindow( "myWindow", CV_WINDOW_NORMAL | CV_GUI_NORMAL );
..
.. index:: ResizeWindow
.. _ResizeWindow:
ResizeWindow
------------
`id=0.689489754706 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/c/highgui/ResizeWindow>`__
.. cfunction:: void cvResizeWindow( const char* name, int width, int height )
Sets the window size.
:param name: Name of the window to be resized.
:param width: New width
:param height: New height
The function
``cvResizeWindow``
changes the size of the window.
.. index:: SetMouseCallback
.. _SetMouseCallback:
SetMouseCallback
----------------
`id=0.619148465549 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/c/highgui/SetMouseCallback>`__
.. cfunction:: void cvSetMouseCallback( const char* windowName, CvMouseCallback onMouse, void* param=NULL )
Assigns callback for mouse events.
:param windowName: Name of the window.
:param onMouse: Pointer to the function to be called every time a mouse event occurs in the specified window. This function should be prototyped as ``void Foo(int event, int x, int y, int flags, void* param);``
where ``event`` is one of ``CV_EVENT_*`` , ``x`` and ``y`` are the coordinates of the mouse pointer in image coordinates (not window coordinates), ``flags`` is a combination of ``CV_EVENT_FLAG_*`` , and ``param`` is a user-defined parameter passed to the ``cvSetMouseCallback`` function call.
:param param: User-defined parameter to be passed to the callback function.
The function
``cvSetMouseCallback``
sets the callback function for mouse events occurring within the specified window.
The
``event``
parameter is one of:
* **CV_EVENT_MOUSEMOVE** Mouse movement
* **CV_EVENT_LBUTTONDOWN** Left button down
* **CV_EVENT_RBUTTONDOWN** Right button down
* **CV_EVENT_MBUTTONDOWN** Middle button down
* **CV_EVENT_LBUTTONUP** Left button up
* **CV_EVENT_RBUTTONUP** Right button up
* **CV_EVENT_MBUTTONUP** Middle button up
* **CV_EVENT_LBUTTONDBLCLK** Left button double click
* **CV_EVENT_RBUTTONDBLCLK** Right button double click
* **CV_EVENT_MBUTTONDBLCLK** Middle button double click
The
``flags``
parameter is a combination of :
* **CV_EVENT_FLAG_LBUTTON** Left button pressed
* **CV_EVENT_FLAG_RBUTTON** Right button pressed
* **CV_EVENT_FLAG_MBUTTON** Middle button pressed
* **CV_EVENT_FLAG_CTRLKEY** Control key pressed
* **CV_EVENT_FLAG_SHIFTKEY** Shift key pressed
* **CV_EVENT_FLAG_ALTKEY** Alt key pressed
.. index:: SetTrackbarPos
.. _SetTrackbarPos:
SetTrackbarPos
--------------
`id=0.998171131545 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/c/highgui/SetTrackbarPos>`__
.. cfunction:: void cvSetTrackbarPos( const char* trackbarName, const char* windowName, int pos )
Sets the trackbar position.
:param trackbarName: Name of the trackbar.
:param windowName: Name of the window which is the parent of trackbar.
:param pos: New position.
The function
``cvSetTrackbarPos``
sets the position of the specified trackbar.
\
\
**[Qt Backend Only]**
qt-specific details:
* **windowName** Name of the window which is the parent of trackbar. Can be NULL if the trackbar is attached to the control panel.
.. index:: ShowImage
.. _ShowImage:
ShowImage
---------
`id=0.466244635488 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/c/highgui/ShowImage>`__
.. cfunction:: void cvShowImage( const char* name, const CvArr* image )
Displays the image in the specified window
:param name: Name of the window.
:param image: Image to be shown.
The function
``cvShowImage``
displays the image in the specified window. If the window was created with the
``CV_WINDOW_AUTOSIZE``
flag then the image is shown with its original size, otherwise the image is scaled to fit in the window. The function may scale the image, depending on its depth:
*
If the image is 8-bit unsigned, it is displayed as is.
*
If the image is 16-bit unsigned or 32-bit integer, the pixels are divided by 256. That is, the value range [0,255*256] is mapped to [0,255].
*
If the image is 32-bit floating-point, the pixel values are multiplied by 255. That is, the value range [0,1] is mapped to [0,255].
.. index:: WaitKey
.. _WaitKey:
WaitKey
-------
`id=0.996058007458 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/c/highgui/WaitKey>`__
.. cfunction:: int cvWaitKey( int delay=0 )
Waits for a pressed key.
:param delay: Delay in milliseconds.
The function
``cvWaitKey``
waits for key event infinitely (
:math:`\texttt{delay} <= 0`
) or for
``delay``
milliseconds. Returns the code of the pressed key or -1 if no key was pressed before the specified time had elapsed.
**Note:**
This function is the only method in HighGUI that can fetch and handle events, so it needs to be called periodically for normal event processing, unless HighGUI is used within some environment that takes care of event processing.
\
\
**[Qt Backend Only]**
qt-specific details:
With the current Qt implementation, this is the only way to process events such as repainting the windows, and so on.

doc/opencv1/c/imgproc.rst
*************************
imgproc. Image Processing
*************************
.. toctree::
:maxdepth: 2
imgproc_histograms
imgproc_image_filtering
imgproc_geometric_image_transformations
imgproc_miscellaneous_image_transformations
imgproc_structural_analysis_and_shape_descriptors
imgproc_planar_subdivisions
imgproc_motion_analysis_and_object_tracking
imgproc_feature_detection
imgproc_object_detection

Feature Detection
=================
.. highlight:: c
.. index:: Canny
.. _Canny:
Canny
-----
`id=0.99528666363 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/c/imgproc/Canny>`__
.. cfunction:: void cvCanny( const CvArr* image, CvArr* edges, double threshold1, double threshold2, int aperture_size=3 )
Implements the Canny algorithm for edge detection.
:param image: Single-channel input image
:param edges: Single-channel image to store the edges found by the function
:param threshold1: The first threshold
:param threshold2: The second threshold
:param aperture_size: Aperture parameter for the Sobel operator (see :ref:`Sobel` )
The function finds the edges on the input image
``image``
and marks them in the output image
``edges``
using the Canny algorithm. The smallest value between
``threshold1``
and
``threshold2``
is used for edge linking, the largest value is used to find the initial segments of strong edges.
.. index:: CornerEigenValsAndVecs
.. _CornerEigenValsAndVecs:
CornerEigenValsAndVecs
----------------------
`id=0.880597486716 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/c/imgproc/CornerEigenValsAndVecs>`__
.. cfunction:: void cvCornerEigenValsAndVecs( const CvArr* image, CvArr* eigenvv, int blockSize, int aperture_size=3 )
Calculates eigenvalues and eigenvectors of image blocks for corner detection.
:param image: Input image
:param eigenvv: Image to store the results. It must be 6 times wider than the input image
:param blockSize: Neighborhood size (see discussion)
:param aperture_size: Aperture parameter for the Sobel operator (see :ref:`Sobel` )
For every pixel, the function
``cvCornerEigenValsAndVecs``
considers a
:math:`\texttt{blockSize} \times \texttt{blockSize}`
neighborhood S(p). It calculates the covariance matrix of derivatives over the neighborhood as:
.. math::
M =  \begin{bmatrix} \sum _{S(p)}(dI/dx)^2 &  \sum _{S(p)}(dI/dx  \cdot dI/dy) \\ \sum _{S(p)}(dI/dx  \cdot dI/dy) &  \sum _{S(p)}(dI/dy)^2 \end{bmatrix}
After that it finds eigenvectors and eigenvalues of the matrix and stores them into destination image in form
:math:`(\lambda_1, \lambda_2, x_1, y_1, x_2, y_2)`
where
* :math:`\lambda_1, \lambda_2`
are the eigenvalues of
:math:`M`
; not sorted
* :math:`x_1, y_1`
are the eigenvectors corresponding to
:math:`\lambda_1`
* :math:`x_2, y_2`
are the eigenvectors corresponding to
:math:`\lambda_2`
.. index:: CornerHarris
.. _CornerHarris:
CornerHarris
------------
`id=0.765194293954 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/c/imgproc/CornerHarris>`__
.. cfunction:: void cvCornerHarris( const CvArr* image, CvArr* harris_dst, int blockSize, int aperture_size=3, double k=0.04 )
Harris edge detector.
:param image: Input image
:param harris_dst: Image to store the Harris detector responses. Should have the same size as ``image``
:param blockSize: Neighborhood size (see the discussion of :ref:`CornerEigenValsAndVecs` )
:param aperture_size: Aperture parameter for the Sobel operator (see :ref:`Sobel` ).
:param k: Harris detector free parameter. See the formula below
The function runs the Harris edge detector on the image. Similarly to
:ref:`CornerMinEigenVal`
and
:ref:`CornerEigenValsAndVecs`
, for each pixel it calculates a
:math:`2\times2`
gradient covariance matrix
:math:`M`
over a
:math:`\texttt{blockSize} \times \texttt{blockSize}`
neighborhood. Then, it stores
.. math::
det(M) - k \, trace(M)^2
to the destination image. Corners in the image can be found as the local maxima of the destination image.
.. index:: CornerMinEigenVal
.. _CornerMinEigenVal:
CornerMinEigenVal
-----------------
`id=0.956867089452 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/c/imgproc/CornerMinEigenVal>`__
.. cfunction:: void cvCornerMinEigenVal( const CvArr* image, CvArr* eigenval, int blockSize, int aperture_size=3 )
Calculates the minimal eigenvalue of gradient matrices for corner detection.
:param image: Input image
:param eigenval: Image to store the minimal eigenvalues. Should have the same size as ``image``
:param blockSize: Neighborhood size (see the discussion of :ref:`CornerEigenValsAndVecs` )
:param aperture_size: Aperture parameter for the Sobel operator (see :ref:`Sobel` ).
The function is similar to
:ref:`CornerEigenValsAndVecs`
but it calculates and stores only the minimal eigenvalue of the derivative covariance matrix for every pixel, i.e.
:math:`min(\lambda_1, \lambda_2)`
in terms of the previous function.
.. index:: FindCornerSubPix
.. _FindCornerSubPix:
FindCornerSubPix
----------------
`id=0.941466183497 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/c/imgproc/FindCornerSubPix>`__
.. cfunction:: void cvFindCornerSubPix( const CvArr* image, CvPoint2D32f* corners, int count, CvSize win, CvSize zero_zone, CvTermCriteria criteria )
Refines the corner locations.
:param image: Input image
:param corners: Initial coordinates of the input corners; refined coordinates on output
:param count: Number of corners
:param win: Half of the side length of the search window. For example, if ``win`` =(5,5), then a :math:`5*2+1 \times 5*2+1 = 11 \times 11` search window would be used
:param zero_zone: Half of the size of the dead region in the middle of the search zone over which the summation in the formula below is not done. It is used sometimes to avoid possible singularities of the autocorrelation matrix. The value of (-1,-1) indicates that there is no such size
:param criteria: Criteria for termination of the iterative process of corner refinement. That is, the process of corner position refinement stops either after a certain number of iterations or when a required accuracy is achieved. The ``criteria`` may specify either of or both the maximum number of iteration and the required accuracy
The function iterates to find the sub-pixel accurate location of corners, or radial saddle points, as shown in the picture below.
.. image:: ../pics/cornersubpix.png
Sub-pixel accurate corner locator is based on the observation that every vector from the center
:math:`q`
to a point
:math:`p`
located within a neighborhood of
:math:`q`
is orthogonal to the image gradient at
:math:`p`
subject to image and measurement noise. Consider the expression:
.. math::
\epsilon _i = {DI_{p_i}}^T \cdot (q - p_i)
where
:math:`{DI_{p_i}}`
is the image gradient at the one of the points
:math:`p_i`
in a neighborhood of
:math:`q`
. The value of
:math:`q`
is to be found such that
:math:`\epsilon_i`
is minimized. A system of equations may be set up with
:math:`\epsilon_i`
set to zero:
.. math::
\sum _i(DI_{p_i} \cdot {DI_{p_i}}^T) q = \sum _i(DI_{p_i} \cdot {DI_{p_i}}^T \cdot p_i)
where the gradients are summed within a neighborhood ("search window") of
:math:`q`
. Calling the first gradient term
:math:`G`
and the second gradient term
:math:`b`
gives:
.. math::
q = G^{-1} \cdot b
The algorithm sets the center of the neighborhood window at this new center
:math:`q`
and then iterates until the center stays within a set threshold.
.. index:: GoodFeaturesToTrack
.. _GoodFeaturesToTrack:
GoodFeaturesToTrack
-------------------
`id=0.0876392134647 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/c/imgproc/GoodFeaturesToTrack>`__
.. cfunction:: void cvGoodFeaturesToTrack( const CvArr* image, CvArr* eigImage, CvArr* tempImage, CvPoint2D32f* corners, int* cornerCount, double qualityLevel, double minDistance, const CvArr* mask=NULL, int blockSize=3, int useHarris=0, double k=0.04 )
Determines strong corners on an image.
:param image: The source 8-bit or floating-point 32-bit, single-channel image
:param eigImage: Temporary floating-point 32-bit image, the same size as ``image``
:param tempImage: Another temporary image, the same size and format as ``eigImage``
:param corners: Output parameter; detected corners
:param cornerCount: Output parameter; number of detected corners
:param qualityLevel: Multiplier for the max/min eigenvalue; specifies the minimal accepted quality of image corners
:param minDistance: Limit, specifying the minimum possible distance between the returned corners; Euclidean distance is used
:param mask: Region of interest. The function selects points either in the specified region or in the whole image if the mask is NULL
:param blockSize: Size of the averaging block, passed to the underlying :ref:`CornerMinEigenVal` or :ref:`CornerHarris` used by the function
:param useHarris: If nonzero, Harris operator ( :ref:`CornerHarris` ) is used instead of default :ref:`CornerMinEigenVal`
:param k: Free parameter of Harris detector; used only if ( :math:`\texttt{useHarris} != 0` )
The function finds the corners with big eigenvalues in the image. The function first calculates the minimal
eigenvalue for every source image pixel using the
:ref:`CornerMinEigenVal`
function and stores them in
``eigImage``
. Then it performs
non-maxima suppression (only the local maxima in
:math:`3\times 3`
neighborhood
are retained). The next step rejects the corners with the minimal
eigenvalue less than
:math:`\texttt{qualityLevel} \cdot max(\texttt{eigImage}(x,y))`
.
Finally, the function ensures that the distance between any two corners is not smaller than
``minDistance``
. The weaker corners (with a smaller min eigenvalue) that are too close to the stronger corners are rejected.
Note that if the function is called with different values ``A`` and ``B`` of the parameter ``qualityLevel``, and ``A`` > ``B``, the array of corners returned with ``qualityLevel=A`` will be a prefix of the corner array returned with ``qualityLevel=B``.
.. index:: HoughLines2
.. _HoughLines2:
HoughLines2
-----------
`id=0.689753287363 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/c/imgproc/HoughLines2>`__
.. cfunction:: CvSeq* cvHoughLines2( CvArr* image, void* storage, int method, double rho, double theta, int threshold, double param1=0, double param2=0 )
Finds lines in a binary image using a Hough transform.
:param image: The 8-bit, single-channel, binary source image. In the case of a probabilistic method, the image is modified by the function
:param storage: The storage for the lines that are detected. It can
be a memory storage (in this case a sequence of lines is created in
the storage and returned by the function) or single row/single column
matrix (CvMat*) of a particular type (see below) to which the lines'
parameters are written. The matrix header is modified by the function
so its ``cols`` or ``rows`` will contain the number of lines
detected. If ``storage`` is a matrix and the actual number
of lines exceeds the matrix size, the maximum possible number of lines
is returned (in the case of standard hough transform the lines are sorted
by the accumulator value)
:param method: The Hough transform variant, one of the following:
* **CV_HOUGH_STANDARD** classical or standard Hough transform. Every line is represented by two floating-point numbers :math:`(\rho, \theta)` , where :math:`\rho` is a distance between (0,0) point and the line, and :math:`\theta` is the angle between x-axis and the normal to the line. Thus, the matrix must be (the created sequence will be) of ``CV_32FC2`` type
* **CV_HOUGH_PROBABILISTIC** probabilistic Hough transform (more efficient in case the picture contains a few long linear segments). It returns line segments rather than whole lines. Each segment is represented by its starting and ending points, and the matrix must be (the created sequence will be) of ``CV_32SC4`` type
* **CV_HOUGH_MULTI_SCALE** multi-scale variant of the classical Hough transform. The lines are encoded the same way as ``CV_HOUGH_STANDARD``
:param rho: Distance resolution in pixel-related units
:param theta: Angle resolution measured in radians
:param threshold: Threshold parameter. A line is returned by the function if the corresponding accumulator value is greater than ``threshold``
:param param1: The first method-dependent parameter:
* For the classical Hough transform it is not used (0).
* For the probabilistic Hough transform it is the minimum line length.
* For the multi-scale Hough transform it is the divisor for the distance resolution :math:`\rho` . (The coarse distance resolution will be :math:`\rho` and the accurate resolution will be :math:`(\rho / \texttt{param1})` ).
:param param2: The second method-dependent parameter:
* For the classical Hough transform it is not used (0).
* For the probabilistic Hough transform it is the maximum gap between line segments lying on the same line to treat them as a single line segment (i.e. to join them).
* For the multi-scale Hough transform it is the divisor for the angle resolution :math:`\theta` . (The coarse angle resolution will be :math:`\theta` and the accurate resolution will be :math:`(\theta / \texttt{param2})` ).
The function implements a few variants of the Hough transform for line detection.
**Example. Detecting lines with Hough transform.**
::
/* This is a standalone program. Pass an image name as a first parameter
of the program. Switch between standard and probabilistic Hough transform
by changing "#if 1" to "#if 0" and back */
#include <cv.h>
#include <highgui.h>
#include <math.h>
int main(int argc, char** argv)
{
IplImage* src;
if( argc == 2 && (src=cvLoadImage(argv[1], 0))!= 0)
{
IplImage* dst = cvCreateImage( cvGetSize(src), 8, 1 );
IplImage* color_dst = cvCreateImage( cvGetSize(src), 8, 3 );
CvMemStorage* storage = cvCreateMemStorage(0);
CvSeq* lines = 0;
int i;
cvCanny( src, dst, 50, 200, 3 );
cvCvtColor( dst, color_dst, CV_GRAY2BGR );
#if 1
lines = cvHoughLines2( dst,
storage,
CV_HOUGH_STANDARD,
1,
CV_PI/180,
100,
0,
0 );
for( i = 0; i < MIN(lines->total,100); i++ )
{
float* line = (float*)cvGetSeqElem(lines,i);
float rho = line[0];
float theta = line[1];
CvPoint pt1, pt2;
double a = cos(theta), b = sin(theta);
double x0 = a*rho, y0 = b*rho;
pt1.x = cvRound(x0 + 1000*(-b));
pt1.y = cvRound(y0 + 1000*(a));
pt2.x = cvRound(x0 - 1000*(-b));
pt2.y = cvRound(y0 - 1000*(a));
cvLine( color_dst, pt1, pt2, CV_RGB(255,0,0), 3, 8 );
}
#else
lines = cvHoughLines2( dst,
storage,
CV_HOUGH_PROBABILISTIC,
1,
CV_PI/180,
80,
30,
10 );
for( i = 0; i < lines->total; i++ )
{
CvPoint* line = (CvPoint*)cvGetSeqElem(lines,i);
cvLine( color_dst, line[0], line[1], CV_RGB(255,0,0), 3, 8 );
}
#endif
cvNamedWindow( "Source", 1 );
cvShowImage( "Source", src );
cvNamedWindow( "Hough", 1 );
cvShowImage( "Hough", color_dst );
cvWaitKey(0);
}
}
..
This is the sample picture the function parameters have been tuned for:
.. image:: ../pics/building.jpg
And this is the output of the above program in the case of probabilistic Hough transform (
``#if 0``
case):
.. image:: ../pics/houghp.png
.. index:: PreCornerDetect
.. _PreCornerDetect:
PreCornerDetect
---------------
`id=0.671562199289 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/c/imgproc/PreCornerDetect>`__
.. cfunction:: void cvPreCornerDetect( const CvArr* image, CvArr* corners, int apertureSize=3 )
Calculates the feature map for corner detection.
:param image: Input image
:param corners: Image to store the corner candidates
:param apertureSize: Aperture parameter for the Sobel operator (see :ref:`Sobel` )
The function calculates the function
.. math::
D_x^2 D_{yy} + D_y^2 D_{xx} - 2 D_x D_y D_{xy}
where
:math:`D_?`
denotes one of the first image derivatives and
:math:`D_{??}`
denotes a second image derivative.
The corners can be found as local maximums of the function below:
::
// assume that the image is floating-point
IplImage* corners = cvCloneImage(image);
IplImage* dilated_corners = cvCloneImage(image);
IplImage* corner_mask = cvCreateImage( cvGetSize(image), 8, 1 );
cvPreCornerDetect( image, corners, 3 );
cvDilate( corners, dilated_corners, 0, 1 );
cvSubS( corners, dilated_corners, corners );
cvCmpS( corners, 0, corner_mask, CV_CMP_GE );
cvReleaseImage( &corners );
cvReleaseImage( &dilated_corners );
..
.. index:: SampleLine
.. _SampleLine:
SampleLine
----------
`id=0.852353847021 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/c/imgproc/SampleLine>`__
.. cfunction:: int cvSampleLine( const CvArr* image, CvPoint pt1, CvPoint pt2, void* buffer, int connectivity=8 )
Reads the raster line to the buffer.
:param image: Image to sample the line from
:param pt1: Starting line point
:param pt2: Ending line point
:param buffer: Buffer to store the line points; must have enough size to store :math:`max( |\texttt{pt2.x} - \texttt{pt1.x}|+1, |\texttt{pt2.y} - \texttt{pt1.y}|+1 )`
points in the case of an 8-connected line and :math:`(|\texttt{pt2.x}-\texttt{pt1.x}|+|\texttt{pt2.y}-\texttt{pt1.y}|+1)`
in the case of a 4-connected line
:param connectivity: The line connectivity, 4 or 8
The function implements a particular application of line iterators. The function reads all of the image points lying on the line between
``pt1``
and
``pt2``
, including the end points, and stores them into the buffer.

Geometric Image Transformations
===============================
.. highlight:: c
The functions in this section perform various geometrical transformations of 2D images. That is, they do not change the image content, but deform the pixel grid, and map this deformed grid to the destination image. In fact, to avoid sampling artifacts, the mapping is done in the reverse order, from destination to the source. That is, for each pixel
:math:`(x, y)`
of the destination image, the functions compute coordinates of the corresponding "donor" pixel in the source image and copy the pixel value, that is:
.. math::
\texttt{dst} (x,y)= \texttt{src} (f_x(x,y), f_y(x,y))
In the case when the user specifies the forward mapping:
:math:`\left<g_x, g_y\right>: \texttt{src} \rightarrow \texttt{dst}`
, the OpenCV functions first compute the corresponding inverse mapping:
:math:`\left<f_x, f_y\right>: \texttt{dst} \rightarrow \texttt{src}`
and then use the above formula.
The actual implementations of the geometrical transformations, from the most generic
:ref:`Remap`
and to the simplest and the fastest
:ref:`Resize`
, need to solve the 2 main problems with the above formula:
#.
extrapolation of non-existing pixels. Similarly to the filtering functions, described in the previous section, for some
:math:`(x,y)`
one of
:math:`f_x(x,y)`
or
:math:`f_y(x,y)`
, or they both, may fall outside of the image, in which case some extrapolation method needs to be used. OpenCV provides the same selection of the extrapolation methods as in the filtering functions, but also an additional method
``BORDER_TRANSPARENT``
, which means that the corresponding pixels in the destination image will not be modified at all.
#.
interpolation of pixel values. Usually
:math:`f_x(x,y)`
and
:math:`f_y(x,y)`
are floating-point numbers (i.e.
:math:`\left<f_x, f_y\right>`
can be an affine or perspective transformation, or radial lens distortion correction, etc.), so pixel values at fractional coordinates need to be retrieved. In the simplest case the coordinates can simply be rounded to the nearest integer coordinates and the corresponding pixel used, which is called nearest-neighbor interpolation. However, a better result can be achieved by using more sophisticated
`interpolation methods <http://en.wikipedia.org/wiki/Multivariate_interpolation>`_
, where a polynomial function is fit into some neighborhood of the computed pixel
:math:`(f_x(x,y), f_y(x,y))`
and then the value of the polynomial at
:math:`(f_x(x,y), f_y(x,y))`
is taken as the interpolated pixel value. In OpenCV you can choose between several interpolation methods, see
:ref:`Resize`
.
.. index:: GetRotationMatrix2D
.. _GetRotationMatrix2D:
GetRotationMatrix2D
-------------------
`id=0.623450579574 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/c/imgproc/GetRotationMatrix2D>`__
.. cfunction:: CvMat* cv2DRotationMatrix( CvPoint2D32f center, double angle, double scale, CvMat* mapMatrix )
Calculates the affine matrix of 2D rotation.
:param center: Center of the rotation in the source image
:param angle: The rotation angle in degrees. Positive values mean counter-clockwise rotation (the coordinate origin is assumed to be the top-left corner)
:param scale: Isotropic scale factor
:param mapMatrix: Pointer to the destination :math:`2\times 3` matrix
The function
``cv2DRotationMatrix``
calculates the following matrix:
.. math::
\begin{bmatrix} \alpha &  \beta & (1- \alpha ) \cdot \texttt{center.x} -  \beta \cdot \texttt{center.y} \\ - \beta &  \alpha &  \beta \cdot \texttt{center.x} + (1- \alpha ) \cdot \texttt{center.y} \end{bmatrix}
where
.. math::
\alpha = \texttt{scale} \cdot cos( \texttt{angle} ), \beta = \texttt{scale} \cdot sin( \texttt{angle} )
The transformation maps the rotation center to itself. If this is not the purpose, the shift should be adjusted.
.. index:: GetAffineTransform
.. _GetAffineTransform:
GetAffineTransform
------------------
`id=0.933805421933 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/c/imgproc/GetAffineTransform>`__
.. cfunction:: CvMat* cvGetAffineTransform( const CvPoint2D32f* src, const CvPoint2D32f* dst, CvMat* mapMatrix )
Calculates the affine transform from 3 corresponding points.
:param src: Coordinates of 3 triangle vertices in the source image
:param dst: Coordinates of the 3 corresponding triangle vertices in the destination image
:param mapMatrix: Pointer to the destination :math:`2 \times 3` matrix
The function cvGetAffineTransform calculates the matrix of an affine transform such that:
.. math::
\begin{bmatrix} x'_i \\ y'_i \end{bmatrix} = \texttt{mapMatrix} \cdot \begin{bmatrix} x_i \\ y_i \\ 1 \end{bmatrix}
where
.. math::
dst(i)=(x'_i,y'_i),
src(i)=(x_i, y_i),
i=0,1,2
.. index:: GetPerspectiveTransform
.. _GetPerspectiveTransform:
GetPerspectiveTransform
-----------------------
`id=0.709057737517 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/c/imgproc/GetPerspectiveTransform>`__
.. cfunction:: CvMat* cvGetPerspectiveTransform( const CvPoint2D32f* src, const CvPoint2D32f* dst, CvMat* mapMatrix )
Calculates the perspective transform from 4 corresponding points.
:param src: Coordinates of 4 quadrangle vertices in the source image
:param dst: Coordinates of the 4 corresponding quadrangle vertices in the destination image
:param mapMatrix: Pointer to the destination :math:`3\times 3` matrix
The function ``cvGetPerspectiveTransform`` calculates a matrix of perspective transforms such that:
.. math::
\begin{bmatrix} t_i x'_i \\ t_i y'_i \\ t_i \end{bmatrix} = \texttt{mapMatrix} \cdot \begin{bmatrix} x_i \\ y_i \\ 1 \end{bmatrix}
where
.. math::
dst(i)=(x'_i,y'_i),
src(i)=(x_i, y_i),
i=0,1,2,3
.. index:: GetQuadrangleSubPix
.. _GetQuadrangleSubPix:
GetQuadrangleSubPix
-------------------
`id=0.480550634961 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/c/imgproc/GetQuadrangleSubPix>`__
.. cfunction:: void cvGetQuadrangleSubPix( const CvArr* src, CvArr* dst, const CvMat* mapMatrix )
Retrieves the pixel quadrangle from an image with sub-pixel accuracy.
:param src: Source image
:param dst: Extracted quadrangle
:param mapMatrix: The transformation :math:`2 \times 3` matrix :math:`[A|b]` (see the discussion)
The function ``cvGetQuadrangleSubPix`` extracts pixels from ``src`` at sub-pixel accuracy and stores them to ``dst`` as follows:
.. math::
dst(x, y)= src( A_{11} x' + A_{12} y' + b_1, A_{21} x' + A_{22} y' + b_2)
where
.. math::
x'=x- \frac{(width(dst)-1)}{2} ,
y'=y- \frac{(height(dst)-1)}{2}
and
.. math::
\texttt{mapMatrix} = \begin{bmatrix} A_{11} & A_{12} & b_1 \\ A_{21} & A_{22} & b_2 \end{bmatrix}
The values of pixels at non-integer coordinates are retrieved using bilinear interpolation. When the function needs pixels outside of the image, it uses replication border mode to reconstruct the values. Every channel of multiple-channel images is processed independently.
.. index:: GetRectSubPix
.. _GetRectSubPix:
GetRectSubPix
-------------
`id=0.37305758361 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/c/imgproc/GetRectSubPix>`__
.. cfunction:: void cvGetRectSubPix( const CvArr* src, CvArr* dst, CvPoint2D32f center )
Retrieves the pixel rectangle from an image with sub-pixel accuracy.
:param src: Source image
:param dst: Extracted rectangle
:param center: Floating point coordinates of the extracted rectangle center within the source image. The center must be inside the image
The function ``cvGetRectSubPix`` extracts pixels from ``src``:
.. math::
dst(x, y) = src(x + \texttt{center.x} - (width( \texttt{dst} )-1)*0.5, y + \texttt{center.y} - (height( \texttt{dst} )-1)*0.5)
where the values of the pixels at non-integer coordinates are retrieved
using bilinear interpolation. Every channel of multiple-channel
images is processed independently. While the rectangle center
must be inside the image, parts of the rectangle may be
outside. In this case, the replication border mode is used to get
pixel values beyond the image boundaries.
.. index:: LogPolar
.. _LogPolar:
LogPolar
--------
`id=0.0887380164552 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/c/imgproc/LogPolar>`__
.. cfunction:: void cvLogPolar( const CvArr* src, CvArr* dst, CvPoint2D32f center, double M, int flags=CV_INTER_LINEAR+CV_WARP_FILL_OUTLIERS )
Remaps an image to log-polar space.
:param src: Source image
:param dst: Destination image
:param center: The transformation center; where the output precision is maximal
:param M: Magnitude scale parameter. See below
:param flags: A combination of interpolation methods and the following optional flags:
* **CV_WARP_FILL_OUTLIERS** fills all of the destination image pixels. If some of them correspond to outliers in the source image, they are set to zero
* **CV_WARP_INVERSE_MAP** See below
The function ``cvLogPolar`` transforms the source image using the following transformation:
Forward transformation (``CV_WARP_INVERSE_MAP`` is not set):
.. math::
dst( \phi , \rho ) = src(x,y)
Inverse transformation (``CV_WARP_INVERSE_MAP`` is set):
.. math::
dst(x,y) = src( \phi , \rho )
where
.. math::
\rho = M \cdot \log{\sqrt{x^2 + y^2}} , \phi =atan(y/x)
The function emulates the human "foveal" vision and can be used for fast scale and rotation-invariant template matching, for object tracking and so forth.
The function can not operate in-place.
::
#include <cv.h>
#include <highgui.h>
int main(int argc, char** argv)
{
IplImage* src;
if( argc == 2 && (src=cvLoadImage(argv[1],1)) != 0 )
{
IplImage* dst = cvCreateImage( cvSize(256,256), 8, 3 );
IplImage* src2 = cvCreateImage( cvGetSize(src), 8, 3 );
cvLogPolar( src, dst, cvPoint2D32f(src->width/2,src->height/2), 40,
CV_INTER_LINEAR+CV_WARP_FILL_OUTLIERS );
cvLogPolar( dst, src2, cvPoint2D32f(src->width/2,src->height/2), 40,
CV_INTER_LINEAR+CV_WARP_FILL_OUTLIERS+CV_WARP_INVERSE_MAP );
cvNamedWindow( "log-polar", 1 );
cvShowImage( "log-polar", dst );
cvNamedWindow( "inverse log-polar", 1 );
cvShowImage( "inverse log-polar", src2 );
cvWaitKey();
}
return 0;
}
..
And this is what the program displays when ``opencv/samples/c/fruits.jpg`` is passed to it:
.. image:: ../pics/logpolar.jpg
.. image:: ../pics/inv_logpolar.jpg
.. index:: Remap
.. _Remap:
Remap
-----
`id=0.485916549227 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/c/imgproc/Remap>`__
.. cfunction:: void cvRemap( const CvArr* src, CvArr* dst, const CvArr* mapx, const CvArr* mapy, int flags=CV_INTER_LINEAR+CV_WARP_FILL_OUTLIERS, CvScalar fillval=cvScalarAll(0) )
Applies a generic geometrical transformation to the image.
:param src: Source image
:param dst: Destination image
:param mapx: The map of x-coordinates ( ``CV_32FC1`` image)
:param mapy: The map of y-coordinates ( ``CV_32FC1`` image)
:param flags: A combination of interpolation method and the following optional flag(s):
* **CV_WARP_FILL_OUTLIERS** fills all of the destination image pixels. If some of them correspond to outliers in the source image, they are set to ``fillval``
:param fillval: A value used to fill outliers
The function ``cvRemap`` transforms the source image using the specified map:
.. math::
\texttt{dst} (x,y) = \texttt{src} ( \texttt{mapx} (x,y), \texttt{mapy} (x,y))
Similar to other geometric transformations, the user-specified interpolation method is used to extract pixel values at non-integer coordinates.
Note that the function can not operate in-place.
.. index:: Resize
.. _Resize:
Resize
------
`id=0.249690626324 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/c/imgproc/Resize>`__
.. cfunction:: void cvResize( const CvArr* src, CvArr* dst, int interpolation=CV_INTER_LINEAR )
Resizes an image.
:param src: Source image
:param dst: Destination image
:param interpolation: Interpolation method:
* **CV_INTER_NN** nearest-neighbor interpolation
* **CV_INTER_LINEAR** bilinear interpolation (used by default)
* **CV_INTER_AREA** resampling using pixel area relation. It is the preferred method for image decimation that gives moire-free results. In terms of zooming it is similar to the ``CV_INTER_NN`` method
* **CV_INTER_CUBIC** bicubic interpolation
The function ``cvResize`` resizes the image ``src`` so that it fits exactly into ``dst``. If ROI is set, the function resizes the source ROI so that it fits exactly into the destination ROI.
.. index:: WarpAffine
.. _WarpAffine:
WarpAffine
----------
`id=0.0915967317176 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/c/imgproc/WarpAffine>`__
.. cfunction:: void cvWarpAffine( const CvArr* src, CvArr* dst, const CvMat* mapMatrix, int flags=CV_INTER_LINEAR+CV_WARP_FILL_OUTLIERS, CvScalar fillval=cvScalarAll(0) )
Applies an affine transformation to an image.
:param src: Source image
:param dst: Destination image
:param mapMatrix: :math:`2\times 3` transformation matrix
:param flags: A combination of interpolation methods and the following optional flags:
* **CV_WARP_FILL_OUTLIERS** fills all of the destination image pixels; if some of them correspond to outliers in the source image, they are set to ``fillval``
* **CV_WARP_INVERSE_MAP** indicates that ``matrix`` is inversely
transformed from the destination image to the source and, thus, can be used
directly for pixel interpolation. Otherwise, the function finds
the inverse transform from ``mapMatrix``
:param fillval: A value used to fill outliers
The function ``cvWarpAffine`` transforms the source image using the specified matrix:
.. math::
dst(x',y') = src(x,y)
where
.. math::
\begin{matrix} \begin{bmatrix} x' \\ y' \end{bmatrix} = \texttt{mapMatrix} \cdot \begin{bmatrix} x \\ y \\ 1 \end{bmatrix} & \mbox{if CV\_WARP\_INVERSE\_MAP is not set} \\ \begin{bmatrix} x \\ y \end{bmatrix} = \texttt{mapMatrix} \cdot \begin{bmatrix} x' \\ y' \\ 1 \end{bmatrix} & \mbox{otherwise} \end{matrix}
The function is similar to :ref:`GetQuadrangleSubPix`, but they are not exactly the same: :ref:`WarpAffine` requires the input and output images to have the same data type, has larger overhead (so it is not quite suitable for small images) and can leave part of the destination image unchanged, while :ref:`GetQuadrangleSubPix` can extract quadrangles from 8-bit images into a floating-point buffer, has smaller overhead and always changes the whole content of the destination image.
Note that the function can not operate in-place.
To transform a sparse set of points, use the :ref:`Transform` function from cxcore.
.. index:: WarpPerspective
.. _WarpPerspective:
WarpPerspective
---------------
`id=0.647385091755 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/c/imgproc/WarpPerspective>`__
.. cfunction:: void cvWarpPerspective( const CvArr* src, CvArr* dst, const CvMat* mapMatrix, int flags=CV_INTER_LINEAR+CV_WARP_FILL_OUTLIERS, CvScalar fillval=cvScalarAll(0) )
Applies a perspective transformation to an image.
:param src: Source image
:param dst: Destination image
:param mapMatrix: :math:`3\times 3` transformation matrix
:param flags: A combination of interpolation methods and the following optional flags:
* **CV_WARP_FILL_OUTLIERS** fills all of the destination image pixels; if some of them correspond to outliers in the source image, they are set to ``fillval``
* **CV_WARP_INVERSE_MAP** indicates that ``matrix`` is inversely transformed from the destination image to the source and, thus, can be used directly for pixel interpolation. Otherwise, the function finds the inverse transform from ``mapMatrix``
:param fillval: A value used to fill outliers
The function ``cvWarpPerspective`` transforms the source image using the specified matrix:
.. math::
\begin{matrix} \begin{bmatrix} x' \\ y' \end{bmatrix} = \texttt{mapMatrix} \cdot \begin{bmatrix} x \\ y \\ 1 \end{bmatrix} & \mbox{if CV\_WARP\_INVERSE\_MAP is not set} \\ \begin{bmatrix} x \\ y \end{bmatrix} = \texttt{mapMatrix} \cdot \begin{bmatrix} x' \\ y' \\ 1 \end{bmatrix} & \mbox{otherwise} \end{matrix}
Note that the function can not operate in-place.
For a sparse set of points use the
:ref:`PerspectiveTransform`
function from CxCore.
Histograms
==========
.. highlight:: c
.. index:: CvHistogram
.. _CvHistogram:
CvHistogram
-----------
`id=0.29416496784 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/c/imgproc/CvHistogram>`__
.. ctype:: CvHistogram
Multi-dimensional histogram.
::
typedef struct CvHistogram
{
int type;
CvArr* bins;
float thresh[CV_MAX_DIM][2]; /* for uniform histograms */
float** thresh2; /* for non-uniform histograms */
CvMatND mat; /* embedded matrix header for array histograms */
}
CvHistogram;
..
.. index:: CalcBackProject
.. _CalcBackProject:
CalcBackProject
---------------
`id=0.262445080297 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/c/imgproc/CalcBackProject>`__
.. cfunction:: void cvCalcBackProject( IplImage** image, CvArr* back_project, const CvHistogram* hist )
Calculates the back projection.
:param image: Source images (though you may pass CvMat** as well)
:param back_project: Destination back projection image of the same type as the source images
:param hist: Histogram
The function calculates the back project of the histogram. For each
tuple of pixels at the same position of all input single-channel images
the function puts the value of the histogram bin, corresponding to the
tuple in the destination image. In terms of statistics, the value of
each output image pixel is the probability of the observed tuple given
the distribution (histogram). For example, to find a red object in the
picture, one may do the following:
#.
Calculate a hue histogram for the red object assuming the image contains only this object. The histogram is likely to have a strong maximum, corresponding to red color.
#.
Calculate back projection of a hue plane of input image where the object is searched, using the histogram. Threshold the image.
#.
Find connected components in the resulting picture and choose the right component using some additional criteria, for example, the largest connected component.
That is approximately the algorithm of the CamShift color object tracker, except that at the third step the CAMSHIFT algorithm is used instead to locate the object on the back projection, given the previous object position.
.. index:: CalcBackProjectPatch
.. _CalcBackProjectPatch:
CalcBackProjectPatch
--------------------
`id=0.510320009557 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/c/imgproc/CalcBackProjectPatch>`__
.. cfunction:: void cvCalcBackProjectPatch( IplImage** images, CvArr* dst, CvSize patch_size, CvHistogram* hist, int method, double factor )
Locates a template within an image by using a histogram comparison.
:param images: Source images (though, you may pass CvMat** as well)
:param dst: Destination image
:param patch_size: Size of the patch slid through the source image
:param hist: Histogram
:param method: Comparison method, passed to :ref:`CompareHist` (see description of that function)
:param factor: Normalization factor for histograms, will affect the normalization scale of the destination image, pass 1 if unsure
The function calculates the back projection by comparing histograms of the source image patches with the given histogram. Taking measurement results from some image at each location over the ROI creates an array ``image``. These results might be one or more of hue, ``x`` derivative, ``y`` derivative, Laplacian filter, oriented Gabor filter, etc. Each measurement output is collected into its own separate image. The ``image`` array is a collection of these measurement images. A multi-dimensional histogram ``hist`` is constructed by sampling from the ``image`` array. The final histogram is normalized. The ``hist`` histogram has as many dimensions as the number of elements in the ``image`` array.
Each new image is measured and then converted into an ``image`` array over a chosen ROI. Histograms are taken from this ``image`` array in an area covered by a "patch" with an anchor at its center, as shown in the picture below. The histogram is normalized using the parameter ``factor`` so that it may be compared with ``hist``. The calculated histogram is compared to the model histogram ``hist`` using the function ``cvCompareHist`` with the comparison method ``method``. The resulting output is placed at the location corresponding to the patch anchor in the probability image ``dst``. This process is repeated as the patch is slid over the ROI. Iterative histogram update by subtracting trailing pixels covered by the patch and adding newly covered pixels to the histogram can save a lot of operations, though it is not implemented yet.
Back Project Calculation by Patches
.. image:: ../pics/backprojectpatch.png
.. index:: CalcHist
.. _CalcHist:
CalcHist
--------
`id=0.247250829359 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/c/imgproc/CalcHist>`__
.. cfunction:: void cvCalcHist( IplImage** image, CvHistogram* hist, int accumulate=0, const CvArr* mask=NULL )
Calculates the histogram of image(s).
:param image: Source images (though you may pass CvMat** as well)
:param hist: Pointer to the histogram
:param accumulate: Accumulation flag. If it is set, the histogram is not cleared in the beginning. This feature allows user to compute a single histogram from several images, or to update the histogram online
:param mask: The operation mask, determines what pixels of the source images are counted
The function calculates the histogram of one or more
single-channel images. The elements of a tuple that is used to increment
a histogram bin are taken at the same location from the corresponding
input images.
::
#include <cv.h>
#include <highgui.h>
int main( int argc, char** argv )
{
IplImage* src;
if( argc == 2 && (src=cvLoadImage(argv[1], 1))!= 0)
{
IplImage* h_plane = cvCreateImage( cvGetSize(src), 8, 1 );
IplImage* s_plane = cvCreateImage( cvGetSize(src), 8, 1 );
IplImage* v_plane = cvCreateImage( cvGetSize(src), 8, 1 );
IplImage* planes[] = { h_plane, s_plane };
IplImage* hsv = cvCreateImage( cvGetSize(src), 8, 3 );
int h_bins = 30, s_bins = 32;
int hist_size[] = {h_bins, s_bins};
/* hue varies from 0 (~0 deg red) to 180 (~360 deg red again) */
float h_ranges[] = { 0, 180 };
/* saturation varies from 0 (black-gray-white) to
255 (pure spectrum color) */
float s_ranges[] = { 0, 255 };
float* ranges[] = { h_ranges, s_ranges };
int scale = 10;
IplImage* hist_img =
cvCreateImage( cvSize(h_bins*scale,s_bins*scale), 8, 3 );
CvHistogram* hist;
float max_value = 0;
int h, s;
cvCvtColor( src, hsv, CV_BGR2HSV );
cvCvtPixToPlane( hsv, h_plane, s_plane, v_plane, 0 );
hist = cvCreateHist( 2, hist_size, CV_HIST_ARRAY, ranges, 1 );
cvCalcHist( planes, hist, 0, 0 );
cvGetMinMaxHistValue( hist, 0, &max_value, 0, 0 );
cvZero( hist_img );
for( h = 0; h < h_bins; h++ )
{
for( s = 0; s < s_bins; s++ )
{
float bin_val = cvQueryHistValue_2D( hist, h, s );
int intensity = cvRound(bin_val*255/max_value);
cvRectangle( hist_img, cvPoint( h*scale, s*scale ),
cvPoint( (h+1)*scale - 1, (s+1)*scale - 1),
CV_RGB(intensity,intensity,intensity),
CV_FILLED );
}
}
cvNamedWindow( "Source", 1 );
cvShowImage( "Source", src );
cvNamedWindow( "H-S Histogram", 1 );
cvShowImage( "H-S Histogram", hist_img );
cvWaitKey(0);
}
}
..
.. index:: CalcProbDensity
.. _CalcProbDensity:
CalcProbDensity
---------------
`id=0.806356307482 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/c/imgproc/CalcProbDensity>`__
.. cfunction:: void cvCalcProbDensity( const CvHistogram* hist1, const CvHistogram* hist2, CvHistogram* dst_hist, double scale=255 )
Divides one histogram by another.
:param hist1: First histogram (the divisor)
:param hist2: Second histogram
:param dst_hist: Destination histogram
:param scale: Scale factor for the destination histogram
The function calculates the object probability density from the two histograms as:
.. math::
\texttt{dist\_hist} (I)= \forkthree{0}{if $\texttt{hist1}(I)=0$}{\texttt{scale}}{if $\texttt{hist1}(I) \ne 0$ and $\texttt{hist2}(I) > \texttt{hist1}(I)$}{\frac{\texttt{hist2}(I) \cdot \texttt{scale}}{\texttt{hist1}(I)}}{if $\texttt{hist1}(I) \ne 0$ and $\texttt{hist2}(I) \le \texttt{hist1}(I)$}
So the destination histogram bin values do not exceed ``scale``.
.. index:: ClearHist
.. _ClearHist:
ClearHist
---------
`id=0.835401602212 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/c/imgproc/ClearHist>`__
.. cfunction:: void cvClearHist( CvHistogram* hist )
Clears the histogram.
:param hist: Histogram
The function sets all of the histogram bins to 0 in the case of a dense histogram and removes all histogram bins in the case of a sparse array.
.. index:: CompareHist
.. _CompareHist:
CompareHist
-----------
`id=0.50848857362 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/c/imgproc/CompareHist>`__
.. cfunction:: double cvCompareHist( const CvHistogram* hist1, const CvHistogram* hist2, int method )
Compares two dense histograms.
:param hist1: The first dense histogram
:param hist2: The second dense histogram
:param method: Comparison method, one of the following:
* **CV_COMP_CORREL** Correlation
* **CV_COMP_CHISQR** Chi-Square
* **CV_COMP_INTERSECT** Intersection
* **CV_COMP_BHATTACHARYYA** Bhattacharyya distance
The function compares two dense histograms using the specified method (
:math:`H_1`
denotes the first histogram,
:math:`H_2`
the second):
* Correlation (method=CV\_COMP\_CORREL)
.. math::
d(H_1,H_2) = \frac{\sum_I (H'_1(I) \cdot H'_2(I))}{\sqrt{\sum_I(H'_1(I)^2) \cdot \sum_I(H'_2(I)^2)}}
where
.. math::
H'_k(I) = H_k(I) - \frac{1}{N} \cdot \sum_J H_k(J)
where N is the number of histogram bins.
* Chi-Square (method=CV\_COMP\_CHISQR)
.. math::
d(H_1,H_2) = \sum _I \frac{(H_1(I)-H_2(I))^2}{H_1(I)+H_2(I)}
* Intersection (method=CV\_COMP\_INTERSECT)
.. math::
d(H_1,H_2) = \sum _I \min (H_1(I), H_2(I))
* Bhattacharyya distance (method=CV\_COMP\_BHATTACHARYYA)
.. math::
d(H_1,H_2) = \sqrt{1 - \sum_I \frac{\sqrt{H_1(I) \cdot H_2(I)}}{ \sqrt{ \sum_I H_1(I) \cdot \sum_I H_2(I) }}}
The function returns :math:`d(H_1, H_2)`.
Note: the method ``CV_COMP_BHATTACHARYYA`` only works with normalized histograms.
To compare a sparse histogram or more general sparse configurations of weighted points, consider using the :ref:`CalcEMD2` function.
.. index:: CopyHist
.. _CopyHist:
CopyHist
--------
`id=0.454990024463 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/c/imgproc/CopyHist>`__
.. cfunction:: void cvCopyHist( const CvHistogram* src, CvHistogram** dst )
Copies a histogram.
:param src: Source histogram
:param dst: Pointer to destination histogram
The function makes a copy of the histogram. If the second histogram pointer ``*dst`` is NULL, a new histogram of the same size as ``src`` is created. Otherwise, both histograms must have equal types and sizes. Then the function copies the bin values of the source histogram to the destination histogram and sets the same bin value ranges as in ``src``.
.. index:: CreateHist
.. _CreateHist:
CreateHist
----------
`id=0.761254826094 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/c/imgproc/CreateHist>`__
.. cfunction:: CvHistogram* cvCreateHist( int dims, int* sizes, int type, float** ranges=NULL, int uniform=1 )
Creates a histogram.
:param dims: Number of histogram dimensions
:param sizes: Array of the histogram dimension sizes
:param type: Histogram representation format: ``CV_HIST_ARRAY`` means that the histogram data is represented as a multi-dimensional dense array CvMatND; ``CV_HIST_SPARSE`` means that histogram data is represented as a multi-dimensional sparse array CvSparseMat
:param ranges: Array of ranges for the histogram bins. Its meaning depends on the ``uniform`` parameter value. The ranges are used for when the histogram is calculated or backprojected to determine which histogram bin corresponds to which value/tuple of values from the input image(s)
:param uniform: Uniformity flag; if not 0, the histogram has evenly
spaced bins and for every :math:`0<=i<cDims` ``ranges[i]``
is an array of two numbers: lower and upper boundaries for the i-th
histogram dimension.
The whole range [lower,upper] is then split
into ``dims[i]`` equal parts to determine the ``i-th`` input
tuple value ranges for every histogram bin. And if ``uniform=0`` ,
then ``i-th`` element of ``ranges`` array contains ``dims[i]+1`` elements: :math:`\texttt{lower}_0, \texttt{upper}_0,
\texttt{lower}_1, \texttt{upper}_1 = \texttt{lower}_2,
...
\texttt{upper}_{dims[i]-1}`
where :math:`\texttt{lower}_j` and :math:`\texttt{upper}_j`
are lower and upper
boundaries of ``i-th`` input tuple value for ``j-th``
bin, respectively. In either case, the input values that are beyond
the specified range for a histogram bin are not counted by :ref:`CalcHist` and filled with 0 by :ref:`CalcBackProject`
The function creates a histogram of the specified size and returns a pointer to the created histogram. If the array ``ranges`` is 0, the histogram bin ranges must be specified later via the function :ref:`SetHistBinRanges`. Though :ref:`CalcHist` and :ref:`CalcBackProject` may process 8-bit images without setting bin ranges, they assume the bins are equally spaced over the range 0 to 255.
.. index:: GetHistValue*D
.. _GetHistValue*D:
GetHistValue*D
--------------
`id=0.909653638644 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/c/imgproc/GetHistValue%2AD>`__
.. cfunction:: float cvGetHistValue_1D(hist, idx0)
.. cfunction:: float cvGetHistValue_2D(hist, idx0, idx1)
.. cfunction:: float cvGetHistValue_3D(hist, idx0, idx1, idx2)
.. cfunction:: float cvGetHistValue_nD(hist, idx)
Returns a pointer to the histogram bin.
:param hist: Histogram
:param idx0, idx1, idx2, idx3: Indices of the bin
:param idx: Array of indices
::
#define cvGetHistValue_1D( hist, idx0 )
((float*)(cvPtr1D( (hist)->bins, (idx0), 0 ))
#define cvGetHistValue_2D( hist, idx0, idx1 )
((float*)(cvPtr2D( (hist)->bins, (idx0), (idx1), 0 )))
#define cvGetHistValue_3D( hist, idx0, idx1, idx2 )
((float*)(cvPtr3D( (hist)->bins, (idx0), (idx1), (idx2), 0 )))
#define cvGetHistValue_nD( hist, idx )
((float*)(cvPtrND( (hist)->bins, (idx), 0 )))
..
The macros ``GetHistValue`` return a pointer to the specified bin of the 1D, 2D, 3D or N-D histogram. In the case of a sparse histogram the macro creates a new bin and sets it to 0, unless it exists already.
.. index:: GetMinMaxHistValue
.. _GetMinMaxHistValue:
GetMinMaxHistValue
------------------
`id=0.649289865958 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/c/imgproc/GetMinMaxHistValue>`__
.. cfunction:: void cvGetMinMaxHistValue( const CvHistogram* hist, float* min_value, float* max_value, int* min_idx=NULL, int* max_idx=NULL )
Finds the minimum and maximum histogram bins.
:param hist: Histogram
:param min_value: Pointer to the minimum value of the histogram
:param max_value: Pointer to the maximum value of the histogram
:param min_idx: Pointer to the array of coordinates for the minimum
:param max_idx: Pointer to the array of coordinates for the maximum
The function finds the minimum and maximum histogram bins and their positions. All of the output arguments are optional. Among several extrema with the same value, the one whose bin coordinates are the earliest in lexicographical order is returned.
.. index:: MakeHistHeaderForArray
.. _MakeHistHeaderForArray:
MakeHistHeaderForArray
----------------------
`id=0.153593673347 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/c/imgproc/MakeHistHeaderForArray>`__
.. cfunction:: CvHistogram* cvMakeHistHeaderForArray( int dims, int* sizes, CvHistogram* hist, float* data, float** ranges=NULL, int uniform=1 )
Makes a histogram out of an array.
:param dims: Number of histogram dimensions
:param sizes: Array of the histogram dimension sizes
:param hist: The histogram header initialized by the function
:param data: Array that will be used to store histogram bins
:param ranges: Histogram bin ranges, see :ref:`CreateHist`
:param uniform: Uniformity flag, see :ref:`CreateHist`
The function initializes the histogram, whose header and bins are allocated by the user. :ref:`ReleaseHist` does not need to be called afterwards. Only dense histograms can be initialized this way. The function returns ``hist``.
.. index:: NormalizeHist
.. _NormalizeHist:
NormalizeHist
-------------
`id=0.494984568711 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/c/imgproc/NormalizeHist>`__
.. cfunction:: void cvNormalizeHist( CvHistogram* hist, double factor )
Normalizes the histogram.
:param hist: Pointer to the histogram
:param factor: Normalization factor
The function normalizes the histogram bins by scaling them, such that the sum of the bins becomes equal to
``factor``
.
.. index:: QueryHistValue*D
.. _QueryHistValue*D:
QueryHistValue*D
----------------
`id=0.0495732815752 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/c/imgproc/QueryHistValue%2AD>`__
.. cfunction:: float cvQueryHistValue_1D(CvHistogram* hist, int idx0)
Queries the value of the histogram bin.
:param hist: Histogram
:param idx0, idx1, idx2, idx3: Indices of the bin
:param idx: Array of indices
::
#define cvQueryHistValue_1D( hist, idx0 ) \
cvGetReal1D( (hist)->bins, (idx0) )
#define cvQueryHistValue_2D( hist, idx0, idx1 ) \
cvGetReal2D( (hist)->bins, (idx0), (idx1) )
#define cvQueryHistValue_3D( hist, idx0, idx1, idx2 ) \
cvGetReal3D( (hist)->bins, (idx0), (idx1), (idx2) )
#define cvQueryHistValue_nD( hist, idx ) \
cvGetRealND( (hist)->bins, (idx) )
..
The macros return the value of the specified bin of the 1D, 2D, 3D or N-D histogram. In the case of a sparse histogram, the macros return 0 if the bin is not present in the histogram, and no new bin is created.
.. index:: ReleaseHist
.. _ReleaseHist:
ReleaseHist
-----------
`id=0.635490375005 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/c/imgproc/ReleaseHist>`__
.. cfunction:: void cvReleaseHist( CvHistogram** hist )
Releases the histogram.
:param hist: Double pointer to the released histogram
The function releases the histogram (the header and the data). The pointer to the histogram is cleared by the function. If the ``*hist`` pointer is already ``NULL``, the function does nothing.
.. index:: SetHistBinRanges
.. _SetHistBinRanges:
SetHistBinRanges
----------------
`id=0.097775620677 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/c/imgproc/SetHistBinRanges>`__
.. cfunction:: void cvSetHistBinRanges( CvHistogram* hist, float** ranges, int uniform=1 )
Sets the bounds of the histogram bins.
:param hist: Histogram
:param ranges: Array of bin ranges arrays, see :ref:`CreateHist`
:param uniform: Uniformity flag, see :ref:`CreateHist`
The function is a stand-alone function for setting bin ranges in the histogram. For a more detailed description of the parameters ``ranges`` and ``uniform`` see the :ref:`CalcHist` function, which can initialize the ranges as well. Ranges for the histogram bins must be set before the histogram is calculated or the back projection of the histogram is calculated.
.. index:: ThreshHist
.. _ThreshHist:
ThreshHist
----------
`id=0.2471087134 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/c/imgproc/ThreshHist>`__
.. cfunction:: void cvThreshHist( CvHistogram* hist, double threshold )
Thresholds the histogram.
:param hist: Pointer to the histogram
:param threshold: Threshold level
The function clears histogram bins that are below the specified threshold.
Image Filtering
===============
.. highlight:: c
Functions and classes described in this section are used to perform various linear or non-linear filtering operations on 2D images (represented as :func:`Mat` 's): for each pixel location :math:`(x,y)` in the source image, some (normally rectangular) neighborhood of the pixel is considered and used to compute the response. In the case of a linear filter the response is a weighted sum of pixel values; in the case of morphological operations it is the minimum or maximum, etc. The computed response is stored in the destination image at the same location :math:`(x,y)`. This means that the output image is of the same size as the input image. Normally, the functions support multi-channel arrays, in which case every channel is processed independently; therefore the output image also has the same number of channels as the input one.
Another common feature of the functions and classes described in this section is that, unlike simple arithmetic functions, they need to extrapolate values of some non-existing pixels. For example, if we want to smooth an image using a Gaussian :math:`3 \times 3` filter, then during the processing of the left-most pixels in each row we need pixels to the left of them, that is, outside of the image. We can let those pixels be the same as the left-most image pixels (the "replicated border" extrapolation method), or assume that all the non-existing pixels are zeros (the "constant border" extrapolation method), etc.
.. index:: IplConvKernel
.. _IplConvKernel:
IplConvKernel
-------------
`id=0.193062601082 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/c/imgproc/IplConvKernel>`__
.. ctype:: IplConvKernel
An IplConvKernel is a rectangular convolution kernel, created by the function
:ref:`CreateStructuringElementEx`
.
.. index:: CopyMakeBorder
.. _CopyMakeBorder:
CopyMakeBorder
--------------
`id=0.294015080522 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/c/imgproc/CopyMakeBorder>`__
.. cfunction:: void cvCopyMakeBorder( const CvArr* src, CvArr* dst, CvPoint offset, int bordertype, CvScalar value=cvScalarAll(0) )
Copies an image and makes a border around it.
:param src: The source image
:param dst: The destination image
:param offset: Coordinates of the top-left corner (or bottom-left in the case of images with bottom-left origin) of the destination image rectangle where the source image (or its ROI) is copied. The size of the rectangle matches the source image size/ROI size
:param bordertype: Type of the border to create around the copied source image rectangle; types include:
* **IPL_BORDER_CONSTANT** border is filled with the fixed value, passed as the last parameter of the function.
* **IPL_BORDER_REPLICATE** the pixels from the top and bottom rows and the left-most and right-most columns are replicated to fill the border.
(The other two border types from IPL, ``IPL_BORDER_REFLECT`` and ``IPL_BORDER_WRAP`` , are currently unsupported)
:param value: Value of the border pixels if ``bordertype`` is ``IPL_BORDER_CONSTANT``
The function copies the source 2D array into the interior of the destination array and makes a border of the specified type around the copied area. The function is useful when one needs to emulate a border type different from the one embedded into a specific algorithm implementation. For example, morphological functions, as well as most other filtering functions in OpenCV, internally use the replication border type, while the user may need a zero border or a border filled with 1's or 255's.
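The constant-border case can be sketched in plain C; the helper below is illustrative only (its name and the raw-buffer layout are assumptions, not OpenCV API):

```c
#include <assert.h>

/* Hypothetical helper, not part of OpenCV: copies a w x h 8-bit image
   into the interior of a larger dw x dh buffer at offset (ox, oy) and
   fills the rest with a constant value, mimicking IPL_BORDER_CONSTANT. */
static void make_const_border(const unsigned char *src, int w, int h,
                              unsigned char *dst, int dw, int dh,
                              int ox, int oy, unsigned char value)
{
    for (int y = 0; y < dh; y++)
        for (int x = 0; x < dw; x++) {
            int sx = x - ox, sy = y - oy;
            if (sx >= 0 && sx < w && sy >= 0 && sy < h)
                dst[y * dw + x] = src[sy * w + sx]; /* interior: copy */
            else
                dst[y * dw + x] = value;            /* border: fill */
        }
}
```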
.. index:: CreateStructuringElementEx
.. _CreateStructuringElementEx:
CreateStructuringElementEx
--------------------------
`id=0.198112593438 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/c/imgproc/CreateStructuringElementEx>`__
.. cfunction:: IplConvKernel* cvCreateStructuringElementEx( int cols, int rows, int anchorX, int anchorY, int shape, int* values=NULL )
Creates a structuring element.
:param cols: Number of columns in the structuring element
:param rows: Number of rows in the structuring element
:param anchorX: Relative horizontal offset of the anchor point
:param anchorY: Relative vertical offset of the anchor point
:param shape: Shape of the structuring element; may have the following values:
* **CV_SHAPE_RECT** a rectangular element
* **CV_SHAPE_CROSS** a cross-shaped element
* **CV_SHAPE_ELLIPSE** an elliptic element
* **CV_SHAPE_CUSTOM** a user-defined element. In this case the parameter ``values`` specifies the mask, that is, which neighbors of the pixel must be considered
:param values: Pointer to the structuring element data, a plain array representing a row-by-row scan of the element matrix. Non-zero values indicate points that belong to the element. If the pointer is ``NULL`` , then all values are considered non-zero, that is, the element is of a rectangular shape. This parameter is considered only if the shape is ``CV_SHAPE_CUSTOM``
The function CreateStructuringElementEx allocates and fills the structure
``IplConvKernel``
, which can be used as a structuring element in the morphological operations.
.. index:: Dilate
.. _Dilate:
Dilate
------
`id=0.862952069683 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/c/imgproc/Dilate>`__
.. cfunction:: void cvDilate( const CvArr* src, CvArr* dst, IplConvKernel* element=NULL, int iterations=1 )
Dilates an image by using a specific structuring element.
:param src: Source image
:param dst: Destination image
:param element: Structuring element used for dilation. If it is ``NULL`` ,
a :math:`3\times 3` rectangular structuring element is used
:param iterations: Number of times dilation is applied
The function dilates the source image using the specified structuring element that determines the shape of a pixel neighborhood over which the maximum is taken:
.. math::
\max _{(x',y') \, \in \, \texttt{element} }src(x+x',y+y')
The function supports the in-place mode. Dilation can be applied several (
``iterations``
) times. For color images, each channel is processed independently.
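For the default 3x3 rectangular element the operation reduces to a moving maximum. A minimal plain-C sketch (hypothetical helper, with replicated borders, not OpenCV's implementation) could look like:

```c
#include <assert.h>

/* Illustrative sketch of what cvDilate computes with the default 3x3
   rectangular element: each output pixel is the maximum over the
   corresponding 3x3 neighborhood (border pixels replicated). */
static void dilate3x3(const unsigned char *src, unsigned char *dst,
                      int w, int h)
{
    for (int y = 0; y < h; y++)
        for (int x = 0; x < w; x++) {
            unsigned char m = 0;
            for (int dy = -1; dy <= 1; dy++)
                for (int dx = -1; dx <= 1; dx++) {
                    int sx = x + dx, sy = y + dy;
                    if (sx < 0) sx = 0;
                    if (sx >= w) sx = w - 1;
                    if (sy < 0) sy = 0;
                    if (sy >= h) sy = h - 1;
                    if (src[sy * w + sx] > m)
                        m = src[sy * w + sx];
                }
            dst[y * w + x] = m;
        }
}
```

Erosion is the same loop with a minimum instead of a maximum.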
.. index:: Erode
.. _Erode:
Erode
-----
`id=0.789537037619 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/c/imgproc/Erode>`__
.. cfunction:: void cvErode( const CvArr* src, CvArr* dst, IplConvKernel* element=NULL, int iterations=1)
Erodes an image by using a specific structuring element.
:param src: Source image
:param dst: Destination image
:param element: Structuring element used for erosion. If it is ``NULL`` ,
a :math:`3\times 3` rectangular structuring element is used
:param iterations: Number of times erosion is applied
The function erodes the source image using the specified structuring element that determines the shape of a pixel neighborhood over which the minimum is taken:
.. math::
\min _{(x',y') \, \in \, \texttt{element} }src(x+x',y+y')
The function supports the in-place mode. Erosion can be applied several (
``iterations``
) times. For color images, each channel is processed independently.
.. index:: Filter2D
.. _Filter2D:
Filter2D
--------
`id=0.417959887843 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/c/imgproc/Filter2D>`__
.. cfunction:: void cvFilter2D( const CvArr* src, CvArr* dst, const CvMat* kernel, CvPoint anchor=cvPoint(-1,-1))
Convolves an image with the kernel.
:param src: The source image
:param dst: The destination image
:param kernel: Convolution kernel, a single-channel floating point matrix. If you want to apply different kernels to different channels, split the image into separate color planes using :ref:`Split` and process them individually
:param anchor: The anchor of the kernel that indicates the relative position of a filtered point within the kernel. The anchor should lie within the kernel. The special default value (-1,-1) means that it is at the kernel center
The function applies an arbitrary linear filter to the image. In-place operation is supported. When the aperture is partially outside the image, the function interpolates outlier pixel values from the nearest pixels that are inside the image.
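The per-pixel correlation can be sketched in plain C for a single-channel image (illustrative helper under assumed names; OpenCV's actual implementation is more elaborate):

```c
#include <assert.h>

/* Hypothetical sketch of the correlation cvFilter2D performs: a float
   kernel slid over a single-channel image, with out-of-image samples
   taken from the nearest border pixel. (ax, ay) is the anchor. */
static void filter2d(const float *src, float *dst, int w, int h,
                     const float *kernel, int kw, int kh,
                     int ax, int ay)
{
    for (int y = 0; y < h; y++)
        for (int x = 0; x < w; x++) {
            float s = 0.f;
            for (int ky = 0; ky < kh; ky++)
                for (int kx = 0; kx < kw; kx++) {
                    int sx = x + kx - ax, sy = y + ky - ay;
                    if (sx < 0) sx = 0;
                    if (sx >= w) sx = w - 1;
                    if (sy < 0) sy = 0;
                    if (sy >= h) sy = h - 1;
                    s += kernel[ky * kw + kx] * src[sy * w + sx];
                }
            dst[y * w + x] = s;
        }
}
```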
.. index:: Laplace
.. _Laplace:
Laplace
-------
`id=0.525523278714 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/c/imgproc/Laplace>`__
.. cfunction:: void cvLaplace( const CvArr* src, CvArr* dst, int apertureSize=3)
Calculates the Laplacian of an image.
:param src: Source image
:param dst: Destination image
:param apertureSize: Aperture size (it has the same meaning as :ref:`Sobel` )
The function calculates the Laplacian of the source image by adding up the second x and y derivatives calculated using the Sobel operator:
.. math::
\texttt{dst} (x,y) = \frac{d^2 \texttt{src}}{dx^2} + \frac{d^2 \texttt{src}}{dy^2}
Setting
``apertureSize``
= 1 gives the fastest variant that is equal to convolving the image with the following kernel:
.. math::
\vecthreethree {0}{1}{0}{1}{-4}{1}{0}{1}{0}
Similar to the
:ref:`Sobel`
function, no scaling is done and the same combinations of input and output formats are supported.
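For ``apertureSize`` = 1 the response at one pixel follows directly from that kernel. A hypothetical plain-C helper (border replication assumed, not OpenCV API):

```c
#include <assert.h>

/* Illustrative sketch of the apertureSize=1 Laplacian at pixel (x, y):
   convolution with the kernel {0,1,0; 1,-4,1; 0,1,0}, border pixels
   replicated, no scaling. */
static int laplace3(const unsigned char *src, int w, int h, int x, int y)
{
    int xm = x > 0 ? x - 1 : 0, xp = x < w - 1 ? x + 1 : x;
    int ym = y > 0 ? y - 1 : 0, yp = y < h - 1 ? y + 1 : y;
    return src[ym * w + x] + src[yp * w + x]
         + src[y * w + xm] + src[y * w + xp]
         - 4 * src[y * w + x];
}
```

On a constant region the response is zero; an isolated bright pixel produces a strong negative response.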
.. index:: MorphologyEx
.. _MorphologyEx:
MorphologyEx
------------
`id=0.564904115593 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/c/imgproc/MorphologyEx>`__
.. cfunction:: void cvMorphologyEx( const CvArr* src, CvArr* dst, CvArr* temp, IplConvKernel* element, int operation, int iterations=1 )
Performs advanced morphological transformations.
:param src: Source image
:param dst: Destination image
:param temp: Temporary image, required in some cases
:param element: Structuring element
:param operation: Type of morphological operation, one of the following:
* **CV_MOP_OPEN** opening
* **CV_MOP_CLOSE** closing
* **CV_MOP_GRADIENT** morphological gradient
* **CV_MOP_TOPHAT** "top hat"
* **CV_MOP_BLACKHAT** "black hat"
:param iterations: Number of times erosion and dilation are applied
The function can perform advanced morphological transformations using erosion and dilation as basic operations.
Opening:
.. math::
dst=open(src,element)=dilate(erode(src,element),element)
Closing:
.. math::
dst=close(src,element)=erode(dilate(src,element),element)
Morphological gradient:
.. math::
dst=morph \_ grad(src,element)=dilate(src,element)-erode(src,element)
"Top hat":
.. math::
dst=tophat(src,element)=src-open(src,element)
"Black hat":
.. math::
dst=blackhat(src,element)=close(src,element)-src
The temporary image
``temp``
is required for a morphological gradient and, in the case of in-place operation, for "top hat" and "black hat".
.. index:: PyrDown
.. _PyrDown:
PyrDown
-------
`id=0.202607003604 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/c/imgproc/PyrDown>`__
.. cfunction:: void cvPyrDown( const CvArr* src, CvArr* dst, int filter=CV_GAUSSIAN_5x5 )
Downsamples an image.
:param src: The source image
:param dst: The destination image; its width and height should be half those of the source
:param filter: Type of the filter used for convolution; only ``CV_GAUSSIAN_5x5`` is currently supported
The function performs the downsampling step of the Gaussian pyramid decomposition. First it convolves the source image with the specified filter and then downsamples the image by rejecting even rows and columns.
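Only the decimation step can be sketched compactly; the helper below is hypothetical and omits the 5x5 Gaussian convolution that cvPyrDown applies first:

```c
#include <assert.h>

/* Illustrative sketch of the decimation step of cvPyrDown: keep every
   second row and column of a (pre-smoothed) w x h 8-bit image, writing
   a (w/2) x (h/2) result. The Gaussian convolution is NOT included. */
static void decimate2(const unsigned char *src, int w, int h,
                      unsigned char *dst)
{
    for (int y = 0; y < h / 2; y++)
        for (int x = 0; x < w / 2; x++)
            dst[y * (w / 2) + x] = src[(2 * y) * w + (2 * x)];
}
```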
.. index:: ReleaseStructuringElement
.. _ReleaseStructuringElement:
ReleaseStructuringElement
-------------------------
`id=0.80859820706 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/c/imgproc/ReleaseStructuringElement>`__
.. cfunction:: void cvReleaseStructuringElement( IplConvKernel** element )
Deletes a structuring element.
:param element: Double pointer to the structuring element to be deleted
The function releases the structure
``IplConvKernel``
that is no longer needed. If
``*element``
is
``NULL``
, the function has no effect.
.. index:: Smooth
.. _Smooth:
Smooth
------
`id=0.653842638158 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/c/imgproc/Smooth>`__
.. cfunction:: void cvSmooth( const CvArr* src, CvArr* dst, int smoothtype=CV_GAUSSIAN, int param1=3, int param2=0, double param3=0, double param4=0)
Smooths the image in one of several ways.
:param src: The source image
:param dst: The destination image
:param smoothtype: Type of the smoothing:
* **CV_BLUR_NO_SCALE** linear convolution with :math:`\texttt{param1}\times\texttt{param2}` box kernel (all 1's). If you want to smooth different pixels with different-size box kernels, you can use the integral image that is computed using :ref:`Integral`
* **CV_BLUR** linear convolution with :math:`\texttt{param1}\times\texttt{param2}` box kernel (all 1's) with subsequent scaling by :math:`1/(\texttt{param1}\cdot\texttt{param2})`
* **CV_GAUSSIAN** linear convolution with a :math:`\texttt{param1}\times\texttt{param2}` Gaussian kernel
* **CV_MEDIAN** median filter with a :math:`\texttt{param1}\times\texttt{param1}` square aperture
* **CV_BILATERAL** bilateral filter with a :math:`\texttt{param1}\times\texttt{param1}` square aperture, color sigma= ``param3`` and spatial sigma= ``param4`` . If ``param1=0`` , the aperture square side is set to ``cvRound(param4*1.5)*2+1`` . Information about bilateral filtering can be found at http://www.dai.ed.ac.uk/CVonline/LOCAL\_COPIES/MANDUCHI1/Bilateral\_Filtering.html
:param param1: The first parameter of the smoothing operation, the aperture width. Must be a positive odd number (1, 3, 5, ...)
:param param2: The second parameter of the smoothing operation, the aperture height. Ignored by ``CV_MEDIAN`` and ``CV_BILATERAL`` methods. In the case of simple scaled/non-scaled and Gaussian blur if ``param2`` is zero, it is set to ``param1`` . Otherwise it must be a positive odd number.
:param param3: In the case of Gaussian smoothing, this parameter may specify the Gaussian :math:`\sigma` (standard deviation). If it is zero, it is calculated from the kernel size:
.. math::
\sigma = 0.3 (n/2 - 1) + 0.8 \quad \text{where} \quad n= \begin{array}{l l} \mbox{\texttt{param1} for horizontal kernel} \\ \mbox{\texttt{param2} for vertical kernel} \end{array}
Using standard sigma for small kernels ( :math:`3\times 3` to :math:`7\times 7` ) gives better speed. If ``param3`` is not zero, while ``param1`` and ``param2`` are zeros, the kernel size is calculated from the sigma (to provide accurate enough operation).
The function smooths an image using one of several methods. Each of the methods has features and restrictions, listed below.
Blur with no scaling works with single-channel images only and supports accumulation of 8-bit to 16-bit format (similar to
:ref:`Sobel`
and
:ref:`Laplace`
) and 32-bit floating point to 32-bit floating-point format.
Simple blur and Gaussian blur support 1- or 3-channel, 8-bit and 32-bit floating point images. These two methods can process images in-place.
Median and bilateral filters work with 1- or 3-channel 8-bit images and cannot process images in-place.
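The sigma-from-kernel-size rule quoted above can be written as a one-line helper (hypothetical name, reading :math:`n/2` as exact division; cvSmooth applies an equivalent rule internally when ``param3`` is zero):

```c
#include <assert.h>
#include <math.h>

/* Illustrative helper computing the default Gaussian sigma from an
   odd kernel size n, per the formula sigma = 0.3 (n/2 - 1) + 0.8. */
static double gaussian_sigma_from_size(int n)
{
    return 0.3 * (n / 2.0 - 1.0) + 0.8;
}
```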
.. index:: Sobel
.. _Sobel:
Sobel
-----
`id=0.415353284486 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/c/imgproc/Sobel>`__
.. cfunction:: void cvSobel( const CvArr* src, CvArr* dst, int xorder, int yorder, int apertureSize=3 )
Calculates the first, second, third or mixed image derivatives using an extended Sobel operator.
:param src: Source image of type CvArr*
:param dst: Destination image
:param xorder: Order of the derivative x
:param yorder: Order of the derivative y
:param apertureSize: Size of the extended Sobel kernel, must be 1, 3, 5 or 7
In all cases except 1, an
:math:`\texttt{apertureSize} \times
\texttt{apertureSize}`
separable kernel will be used to calculate the
derivative. For
:math:`\texttt{apertureSize} = 1`
a
:math:`3 \times 1`
or
:math:`1 \times 3`
kernel is used (Gaussian smoothing is not done). There is also the special
value
``CV_SCHARR``
(-1) that corresponds to a
:math:`3\times3`
Scharr
filter that may give more accurate results than a
:math:`3\times3`
Sobel. Scharr
aperture is
.. math::
\vecthreethree{-3}{0}{3}{-10}{0}{10}{-3}{0}{3}
for the x-derivative or transposed for the y-derivative.
The function calculates the image derivative by convolving the image with the appropriate kernel:
.. math::
\texttt{dst} (x,y) = \frac{d^{xorder+yorder} \texttt{src}}{dx^{xorder} \cdot dy^{yorder}}
The Sobel operators combine Gaussian smoothing and differentiation
so the result is more or less resistant to noise. Most often,
the function is called with (
``xorder``
= 1,
``yorder``
= 0,
``apertureSize``
= 3) or (
``xorder``
= 0,
``yorder``
= 1,
``apertureSize``
= 3) to calculate the first x- or y- image
derivative. The first case corresponds to a kernel of:
.. math::
\vecthreethree{-1}{0}{1}{-2}{0}{2}{-1}{0}{1}
and the second one corresponds to a kernel of:
.. math::
\vecthreethree{-1}{-2}{-1}{0}{0}{0}{1}{2}{1}
or a kernel of:
.. math::
\vecthreethree{1}{2}{1}{0}{0}{0}{-1}{-2}{-1}
depending on the image origin (
``origin``
field of
``IplImage``
structure). No scaling is done, so the destination image
usually has larger numbers (in absolute values) than the source image does. To
avoid overflow, the function requires a 16-bit destination image if the
source image is 8-bit. The result can be converted back to 8-bit using the
:ref:`ConvertScale`
or the
:ref:`ConvertScaleAbs`
function. Besides 8-bit images
the function can process 32-bit floating-point images. Both the source and the
destination must be single-channel images of equal size or equal ROI size.
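The first-derivative case ( ``xorder`` = 1, ``yorder`` = 0, ``apertureSize`` = 3) at an interior pixel can be sketched in plain C (illustrative helper, no border handling, accumulation into ``int`` standing in for the wider destination type):

```c
#include <assert.h>

/* Illustrative sketch of the 3x3 Sobel x-derivative at interior pixel
   (x, y): correlation with {-1,0,1; -2,0,2; -1,0,1}, no scaling. */
static int sobel_x3(const unsigned char *src, int w, int x, int y)
{
    const unsigned char *r0 = src + (y - 1) * w;
    const unsigned char *r1 = src + y * w;
    const unsigned char *r2 = src + (y + 1) * w;
    return -r0[x - 1] + r0[x + 1]
         - 2 * r1[x - 1] + 2 * r1[x + 1]
         - r2[x - 1] + r2[x + 1];
}
```

A vertical step edge gives a large positive response, which is why the 16-bit (or floating-point) destination is required for 8-bit input.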

Motion Analysis and Object Tracking
===================================
.. highlight:: c
.. index:: Acc
.. _Acc:
Acc
---
`id=0.999960514281 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/c/imgproc/Acc>`__
.. cfunction:: void cvAcc( const CvArr* image, CvArr* sum, const CvArr* mask=NULL )
Adds a frame to an accumulator.
:param image: Input image, 1- or 3-channel, 8-bit or 32-bit floating point (each channel of a multi-channel image is processed independently)
:param sum: Accumulator with the same number of channels as input image, 32-bit or 64-bit floating-point
:param mask: Optional operation mask
The function adds the whole image
``image``
or its selected region to the accumulator
``sum``
:
.. math::
\texttt{sum} (x,y) \leftarrow \texttt{sum} (x,y) + \texttt{image} (x,y) \quad \text{if} \quad \texttt{mask} (x,y) \ne 0
.. index:: MultiplyAcc
.. _MultiplyAcc:
MultiplyAcc
-----------
`id=0.550586168837 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/c/imgproc/MultiplyAcc>`__
.. cfunction:: void cvMultiplyAcc( const CvArr* image1, const CvArr* image2, CvArr* acc, const CvArr* mask=NULL )
Adds the product of two input images to the accumulator.
:param image1: First input image, 1- or 3-channel, 8-bit or 32-bit floating point (each channel of a multi-channel image is processed independently)
:param image2: Second input image, the same format as the first one
:param acc: Accumulator with the same number of channels as input images, 32-bit or 64-bit floating-point
:param mask: Optional operation mask
The function adds the product of 2 images or their selected regions to the accumulator
``acc``
:
.. math::
\texttt{acc} (x,y) \leftarrow \texttt{acc} (x,y) + \texttt{image1} (x,y) \cdot \texttt{image2} (x,y) \quad \text{if} \quad \texttt{mask} (x,y) \ne 0
.. index:: RunningAvg
.. _RunningAvg:
RunningAvg
----------
`id=0.0736920452652 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/c/imgproc/RunningAvg>`__
.. cfunction:: void cvRunningAvg( const CvArr* image, CvArr* acc, double alpha, const CvArr* mask=NULL )
Updates the running average.
:param image: Input image, 1- or 3-channel, 8-bit or 32-bit floating point (each channel of a multi-channel image is processed independently)
:param acc: Accumulator with the same number of channels as input image, 32-bit or 64-bit floating-point
:param alpha: Weight of input image
:param mask: Optional operation mask
The function calculates the weighted sum of the input image
``image``
and the accumulator
``acc``
so that
``acc``
becomes a running average of frame sequence:
.. math::
\texttt{acc} (x,y) \leftarrow (1- \alpha ) \cdot \texttt{acc} (x,y) + \alpha \cdot \texttt{image} (x,y) \quad \text{if} \quad \texttt{mask} (x,y) \ne 0
where
:math:`\alpha`
regulates the update speed (how fast the accumulator forgets about previous frames).
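The per-pixel update is a simple exponential moving average; a minimal sketch for one pixel (hypothetical helper name; cvRunningAvg applies this wherever the mask is non-zero):

```c
#include <assert.h>
#include <math.h>

/* Illustrative sketch of the cvRunningAvg update for a single pixel:
   acc <- (1 - alpha) * acc + alpha * image. */
static float running_avg(float acc, float image, double alpha)
{
    return (float)((1.0 - alpha) * acc + alpha * image);
}
```

Larger ``alpha`` makes the accumulator forget older frames faster.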
.. index:: SquareAcc
.. _SquareAcc:
SquareAcc
---------
`id=0.22065009551 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/c/imgproc/SquareAcc>`__
.. cfunction:: void cvSquareAcc( const CvArr* image, CvArr* sqsum, const CvArr* mask=NULL )
Adds the square of the source image to the accumulator.
:param image: Input image, 1- or 3-channel, 8-bit or 32-bit floating point (each channel of a multi-channel image is processed independently)
:param sqsum: Accumulator with the same number of channels as input image, 32-bit or 64-bit floating-point
:param mask: Optional operation mask
The function adds the input image
``image``
or its selected region, raised to power 2, to the accumulator
``sqsum``
:
.. math::
\texttt{sqsum} (x,y) \leftarrow \texttt{sqsum} (x,y) + \texttt{image} (x,y)^2 \quad \text{if} \quad \texttt{mask} (x,y) \ne 0

Object Detection
================
.. highlight:: c
.. index:: MatchTemplate
.. _MatchTemplate:
MatchTemplate
-------------
`id=0.133207508798 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/c/imgproc/MatchTemplate>`__
.. cfunction:: void cvMatchTemplate( const CvArr* image, const CvArr* templ, CvArr* result, int method )
Compares a template against overlapped image regions.
:param image: Image where the search is running; should be 8-bit or 32-bit floating-point
:param templ: Searched template; must be no larger than the source image and must have the same data type as the image
:param result: A map of comparison results; single-channel 32-bit floating-point.
If ``image`` is :math:`W \times H` and ``templ`` is :math:`w \times h` then ``result`` must be :math:`(W-w+1) \times (H-h+1)`
:param method: Specifies the way the template must be compared with the image regions (see below)
The function is similar to
:ref:`CalcBackProjectPatch`
. It slides through
``image``
, compares the
overlapped patches of size
:math:`w \times h`
against
``templ``
using the specified method and stores the comparison results to
``result``
. Here are the formulas for the different comparison
methods one may use (
:math:`I`
denotes
``image``
,
:math:`T`
``templ``
,
:math:`R`
``result``
). The summation is done over template and/or the
image patch:
:math:`x' = 0...w-1, y' = 0...h-1`
* method=CV\_TM\_SQDIFF
.. math::
R(x,y)= \sum _{x',y'} (T(x',y')-I(x+x',y+y'))^2
* method=CV\_TM\_SQDIFF\_NORMED
.. math::
R(x,y)= \frac{\sum_{x',y'} (T(x',y')-I(x+x',y+y'))^2}{\sqrt{\sum_{x',y'}T(x',y')^2 \cdot \sum_{x',y'} I(x+x',y+y')^2}}
* method=CV\_TM\_CCORR
.. math::
R(x,y)= \sum _{x',y'} (T(x',y') \cdot I(x+x',y+y'))
* method=CV\_TM\_CCORR\_NORMED
.. math::
R(x,y)= \frac{\sum_{x',y'} (T(x',y') \cdot I'(x+x',y+y'))}{\sqrt{\sum_{x',y'}T(x',y')^2 \cdot \sum_{x',y'} I(x+x',y+y')^2}}
* method=CV\_TM\_CCOEFF
.. math::
R(x,y)= \sum _{x',y'} (T'(x',y') \cdot I(x+x',y+y'))
where
.. math::
\begin{array}{l} T'(x',y')=T(x',y') - 1/(w \cdot h) \cdot \sum _{x'',y''} T(x'',y'') \\ I'(x+x',y+y')=I(x+x',y+y') - 1/(w \cdot h) \cdot \sum _{x'',y''} I(x+x'',y+y'') \end{array}
* method=CV\_TM\_CCOEFF\_NORMED
.. math::
R(x,y)= \frac{ \sum_{x',y'} (T'(x',y') \cdot I'(x+x',y+y')) }{ \sqrt{\sum_{x',y'}T'(x',y')^2 \cdot \sum_{x',y'} I'(x+x',y+y')^2} }
After the function finishes the comparison, the best matches can be found as global minimums (
``CV_TM_SQDIFF``
) or maximums (
``CV_TM_CCORR``
and
``CV_TM_CCOEFF``
) using the
:ref:`MinMaxLoc`
function. In the case of a color image, template summation in the numerator and each sum in the denominator are done over all of the channels (and separate mean values are used for each channel).
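The ``CV_TM_SQDIFF`` score for a single template placement can be sketched in plain C (illustrative helper, single-channel 8-bit data assumed; the real function evaluates this for every valid (x, y)):

```c
#include <assert.h>

/* Illustrative sketch of the CV_TM_SQDIFF score for one placement
   (x, y) of a w x h template over an image with row stride iw:
   sum of squared differences over the overlapped patch. */
static long sqdiff_score(const unsigned char *img, int iw,
                         const unsigned char *templ, int w, int h,
                         int x, int y)
{
    long s = 0;
    for (int ty = 0; ty < h; ty++)
        for (int tx = 0; tx < w; tx++) {
            long d = (long)templ[ty * w + tx]
                   - (long)img[(y + ty) * iw + (x + tx)];
            s += d * d;
        }
    return s;
}
```

A perfect match scores 0, which is why the best ``CV_TM_SQDIFF`` match is a global minimum.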

Planar Subdivisions
===================
.. highlight:: c
.. index:: CvSubdiv2D
.. _CvSubdiv2D:
CvSubdiv2D
----------
`id=0.0330142359402 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/c/imgproc/CvSubdiv2D>`__
.. ctype:: CvSubdiv2D
Planar subdivision.
::
#define CV_SUBDIV2D_FIELDS() \
CV_GRAPH_FIELDS() \
int quad_edges; \
int is_geometry_valid; \
CvSubdiv2DEdge recent_edge; \
CvPoint2D32f topleft; \
CvPoint2D32f bottomright;
typedef struct CvSubdiv2D
{
CV_SUBDIV2D_FIELDS()
}
CvSubdiv2D;
..
Planar subdivision is the subdivision of a plane into a set of
non-overlapping regions (facets) that cover the whole plane. The above
structure describes a subdivision built on a 2d point set, where the points
are linked together and form a planar graph, which, together with a few
edges connecting the exterior subdivision points (namely, convex hull points)
with infinity, subdivides a plane into facets by its edges.
For every subdivision there exists a dual subdivision in which facets and
points (subdivision vertices) swap their roles, that is, a facet is
treated as a vertex (called a virtual point below) of the dual subdivision and
the original subdivision vertices become facets. In the picture below
the original subdivision is marked with solid lines and the dual
subdivision with dotted lines.
.. image:: ../pics/subdiv.png
OpenCV subdivides a plane into triangles using Delaunay's
algorithm. The subdivision is built iteratively, starting from a dummy
triangle that is guaranteed to include all the subdivision points. In this
case the dual subdivision is a Voronoi diagram of the input 2d point set. The
subdivisions can be used for the 3d piece-wise transformation of a plane,
morphing, fast location of points on the plane, building special graphs
(such as NNG, RNG) and so forth.
.. index:: CvQuadEdge2D
.. _CvQuadEdge2D:
CvQuadEdge2D
------------
`id=0.774421357321 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/c/imgproc/CvQuadEdge2D>`__
.. ctype:: CvQuadEdge2D
Quad-edge of planar subdivision.
::
/* one of edges within quad-edge, lower 2 bits is index (0..3)
and upper bits are quad-edge pointer */
typedef long CvSubdiv2DEdge;
/* quad-edge structure fields */
#define CV_QUADEDGE2D_FIELDS() \
int flags; \
struct CvSubdiv2DPoint* pt[4]; \
CvSubdiv2DEdge next[4];
typedef struct CvQuadEdge2D
{
CV_QUADEDGE2D_FIELDS()
}
CvQuadEdge2D;
..
A quad-edge is the basic element of the subdivision, containing four edges (e, eRot, reversed e, and reversed eRot):
.. image:: ../pics/quadedge.png
.. index:: CvSubdiv2DPoint
.. _CvSubdiv2DPoint:
CvSubdiv2DPoint
---------------
`id=0.348865048627 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/c/imgproc/CvSubdiv2DPoint>`__
.. ctype:: CvSubdiv2DPoint
Point of original or dual subdivision.
::
#define CV_SUBDIV2D_POINT_FIELDS()\
int flags; \
CvSubdiv2DEdge first; \
CvPoint2D32f pt; \
int id;
#define CV_SUBDIV2D_VIRTUAL_POINT_FLAG (1 << 30)
typedef struct CvSubdiv2DPoint
{
CV_SUBDIV2D_POINT_FIELDS()
}
CvSubdiv2DPoint;
..
* id
This integer can be used to index auxiliary data associated with each vertex of the planar subdivision
.. index:: CalcSubdivVoronoi2D
.. _CalcSubdivVoronoi2D:
CalcSubdivVoronoi2D
-------------------
`id=0.780234504298 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/c/imgproc/CalcSubdivVoronoi2D>`__
.. cfunction:: void cvCalcSubdivVoronoi2D( CvSubdiv2D* subdiv )
Calculates the coordinates of Voronoi diagram cells.
:param subdiv: Delaunay subdivision, in which all the points are already added
The function calculates the coordinates
of virtual points. All virtual points corresponding to some vertex of the
original subdivision form (when connected together) a boundary of the Voronoi
cell at that point.
.. index:: ClearSubdivVoronoi2D
.. _ClearSubdivVoronoi2D:
ClearSubdivVoronoi2D
--------------------
`id=0.598833189257 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/c/imgproc/ClearSubdivVoronoi2D>`__
.. cfunction:: void cvClearSubdivVoronoi2D( CvSubdiv2D* subdiv )
Removes all virtual points.
:param subdiv: Delaunay subdivision
The function removes all of the virtual points. It
is called internally in
:ref:`CalcSubdivVoronoi2D`
if the subdivision
was modified after a previous call to the function.
.. index:: CreateSubdivDelaunay2D
.. _CreateSubdivDelaunay2D:
CreateSubdivDelaunay2D
----------------------
`id=0.740903386025 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/c/imgproc/CreateSubdivDelaunay2D>`__
.. cfunction:: CvSubdiv2D* cvCreateSubdivDelaunay2D( CvRect rect, CvMemStorage* storage )
Creates an empty Delaunay triangulation.
:param rect: Rectangle that includes all of the 2d points that are to be added to the subdivision
:param storage: Container for subdivision
The function creates an empty Delaunay
subdivision, where 2d points can be added using the function
:ref:`SubdivDelaunay2DInsert`
. All of the points to be added must be within
the specified rectangle, otherwise a runtime error will be raised.
Note that the initial triangulation is a single large triangle that covers the given rectangle. Hence the three vertices of this triangle are outside the rectangle
``rect``
.
.. index:: FindNearestPoint2D
.. _FindNearestPoint2D:
FindNearestPoint2D
------------------
`id=0.89077983265 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/c/imgproc/FindNearestPoint2D>`__
.. cfunction:: CvSubdiv2DPoint* cvFindNearestPoint2D( CvSubdiv2D* subdiv, CvPoint2D32f pt )
Finds the closest subdivision vertex to the given point.
:param subdiv: Delaunay or another subdivision
:param pt: Input point
The function locates the input point within the subdivision and finds
the subdivision vertex closest to it. The found vertex is not necessarily one of the vertices
of the facet containing the input point, though the facet (located using
:ref:`Subdiv2DLocate`
) is used as a starting
point. The function returns a pointer to the found subdivision vertex.
.. index:: Subdiv2DEdgeDst
.. _Subdiv2DEdgeDst:
Subdiv2DEdgeDst
---------------
`id=0.475748447952 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/c/imgproc/Subdiv2DEdgeDst>`__
.. cfunction:: CvSubdiv2DPoint* cvSubdiv2DEdgeDst( CvSubdiv2DEdge edge )
Returns the edge destination.
:param edge: Subdivision edge (not a quad-edge)
The function returns the edge destination. The
returned pointer may be NULL if the edge is from dual subdivision and
the virtual point coordinates are not calculated yet. The virtual points
can be calculated using the function
:ref:`CalcSubdivVoronoi2D`
.
.. index:: Subdiv2DGetEdge
.. _Subdiv2DGetEdge:
Subdiv2DGetEdge
---------------
`id=0.128594743275 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/c/imgproc/Subdiv2DGetEdge>`__
.. cfunction:: CvSubdiv2DEdge cvSubdiv2DGetEdge( CvSubdiv2DEdge edge, CvNextEdgeType type )
Returns one of the edges related to the given edge.
:param edge: Subdivision edge (not a quad-edge)
:param type: Specifies which of the related edges to return, one of the following:
* **CV_NEXT_AROUND_ORG** next around the edge origin ( ``eOnext`` on the picture below if ``e`` is the input edge)
* **CV_NEXT_AROUND_DST** next around the edge vertex ( ``eDnext`` )
* **CV_PREV_AROUND_ORG** previous around the edge origin (reversed ``eRnext`` )
* **CV_PREV_AROUND_DST** previous around the edge destination (reversed ``eLnext`` )
* **CV_NEXT_AROUND_LEFT** next around the left facet ( ``eLnext`` )
* **CV_NEXT_AROUND_RIGHT** next around the right facet ( ``eRnext`` )
* **CV_PREV_AROUND_LEFT** previous around the left facet (reversed ``eOnext`` )
* **CV_PREV_AROUND_RIGHT** previous around the right facet (reversed ``eDnext`` )
.. image:: ../pics/quadedge.png
The function returns one of the edges related to the input edge.
.. index:: Subdiv2DNextEdge
.. _Subdiv2DNextEdge:
Subdiv2DNextEdge
----------------
`id=0.250529497726 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/c/imgproc/Subdiv2DNextEdge>`__
.. cfunction:: CvSubdiv2DEdge cvSubdiv2DNextEdge( CvSubdiv2DEdge edge )
Returns the next edge around the edge origin.
:param edge: Subdivision edge (not a quad-edge)
.. image:: ../pics/quadedge.png
The function returns the next edge around the edge origin ( ``eOnext`` on the picture above, if ``e`` is the input edge).
.. index:: Subdiv2DLocate
.. _Subdiv2DLocate:
Subdiv2DLocate
--------------
`id=0.195353110226 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/c/imgproc/Subdiv2DLocate>`__
.. cfunction:: CvSubdiv2DPointLocation cvSubdiv2DLocate( CvSubdiv2D* subdiv, CvPoint2D32f pt, CvSubdiv2DEdge* edge, CvSubdiv2DPoint** vertex=NULL )
Returns the location of a point within a Delaunay triangulation.
:param subdiv: Delaunay or another subdivision
:param pt: The point to locate
:param edge: The output edge the point falls onto or right to
:param vertex: Optional output: a double pointer to the subdivision vertex the input point coincides with
The function locates the input point within the subdivision. There are 5 cases:
*
The point falls into some facet. The function returns
``CV_PTLOC_INSIDE``
and
``*edge``
will contain one of the edges of the facet.
*
The point falls onto the edge. The function returns
``CV_PTLOC_ON_EDGE``
and
``*edge``
will contain this edge.
*
The point coincides with one of the subdivision vertices. The function returns
``CV_PTLOC_VERTEX``
and
``*vertex``
will contain a pointer to the vertex.
*
The point is outside the subdivision reference rectangle. The function returns
``CV_PTLOC_OUTSIDE_RECT``
and no pointers are filled.
*
One of the input arguments is invalid. A runtime error is raised or, if the silent or "parent" error processing mode is selected,
``CV_PTLOC_ERROR``
is returned.
.. index:: Subdiv2DRotateEdge
.. _Subdiv2DRotateEdge:
Subdiv2DRotateEdge
------------------
`id=0.808074440668 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/c/imgproc/Subdiv2DRotateEdge>`__
.. cfunction:: CvSubdiv2DEdge cvSubdiv2DRotateEdge( CvSubdiv2DEdge edge, int rotate )
Returns another edge of the same quad-edge.
:param edge: Subdivision edge (not a quad-edge)
:param rotate: Specifies which of the edges of the same quad-edge as the input one to return, one of the following:
* **0** the input edge ( ``e`` on the picture below if ``e`` is the input edge)
* **1** the rotated edge ( ``eRot`` )
* **2** the reversed edge (reversed ``e`` (in green))
* **3** the reversed rotated edge (reversed ``eRot`` (in green))
.. image:: ../pics/quadedge.png
The function returns one of the edges of the same quad-edge as the input edge.
.. index:: SubdivDelaunay2DInsert
.. _SubdivDelaunay2DInsert:
SubdivDelaunay2DInsert
----------------------
`id=0.318236209384 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/c/imgproc/SubdivDelaunay2DInsert>`__
.. cfunction:: CvSubdiv2DPoint* cvSubdivDelaunay2DInsert( CvSubdiv2D* subdiv, CvPoint2D32f pt)
Inserts a single point into a Delaunay triangulation.
:param subdiv: Delaunay subdivision created by the function :ref:`CreateSubdivDelaunay2D`
:param pt: Inserted point
The function inserts a single point into a subdivision and modifies the subdivision topology appropriately. If a point with the same coordinates exists already, no new point is added. The function returns a pointer to the allocated point. No virtual point coordinates are calculated at this stage.
***************************
objdetect. Object Detection
***************************
.. toctree::
:maxdepth: 2
objdetect_cascade_classification
Cascade Classification
======================
.. highlight:: c
Haar Feature-based Cascade Classifier for Object Detection
----------------------------------------------------------
The object detector described below has been initially proposed by Paul Viola
:ref:`Viola01`
and improved by Rainer Lienhart
:ref:`Lienhart02`
. First, a classifier (namely a
*cascade of boosted classifiers working with haar-like features*
) is trained with a few hundred sample views of a particular object (e.g., a face or a car), called positive examples, that are scaled to the same size (say, 20x20), and negative examples - arbitrary images of the same size.
After a classifier is trained, it can be applied to a region of interest
(of the same size as used during the training) in an input image. The
classifier outputs a "1" if the region is likely to show the object
(i.e., face/car), and "0" otherwise. To search for the object in the
whole image one can move the search window across the image and check
every location using the classifier. The classifier is designed so that
it can be easily "resized" in order to be able to find the objects of
interest at different sizes, which is more efficient than resizing the
image itself. So, to find an object of an unknown size in the image the
scan procedure should be done several times at different scales.
The word "cascade" in the classifier name means that the resultant
classifier consists of several simpler classifiers (
*stages*
) that
are applied subsequently to a region of interest until at some stage the
candidate is rejected or all the stages are passed. The word "boosted"
means that the classifiers at every stage of the cascade are complex
themselves and they are built out of basic classifiers using one of four
different
``boosting``
techniques (weighted voting). Currently
Discrete Adaboost, Real Adaboost, Gentle Adaboost and Logitboost are
supported. The basic classifiers are decision-tree classifiers with at
least 2 leaves. Haar-like features are the input to the basic classifiers,
and are calculated as described below. The current algorithm uses the
following Haar-like features:
.. image:: ../pics/haarfeatures.png
The feature used in a particular classifier is specified by its shape (1a, 2b etc.), position within the region of interest and the scale (this scale is not the same as the scale used at the detection stage, though these two scales are multiplied). For example, in the case of the third line feature (2c) the response is calculated as the difference between the sum of image pixels under the rectangle covering the whole feature (including the two white stripes and the black stripe in the middle) and the sum of the image pixels under the black stripe multiplied by 3 in order to compensate for the differences in the size of areas. The sums of pixel values over rectangular regions are calculated rapidly using integral images (see below and the
:ref:`Integral`
description).
To see the object detector at work, have a look at the HaarFaceDetect demo.
The following reference is for the detection part only. There
is a separate application called
``haartraining``
that can
train a cascade of boosted classifiers from a set of samples. See
``opencv/apps/haartraining``
for details.
.. index:: CvHaarFeature, CvHaarClassifier, CvHaarStageClassifier, CvHaarClassifierCascade
.. _CvHaarFeature, CvHaarClassifier, CvHaarStageClassifier, CvHaarClassifierCascade:
CvHaarFeature, CvHaarClassifier, CvHaarStageClassifier, CvHaarClassifierCascade
-------------------------------------------------------------------------------
`id=0.970306065104 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/c/objdetect/CvHaarFeature%2C%20CvHaarClassifier%2C%20CvHaarStageClassifier%2C%20CvHaarClassifierCascade>`__
.. ctype:: CvHaarFeature, CvHaarClassifier, CvHaarStageClassifier, CvHaarClassifierCascade
Boosted Haar classifier structures.
::
#define CV_HAAR_FEATURE_MAX 3
/* a haar feature consists of 2-3 rectangles with appropriate weights */
typedef struct CvHaarFeature
{
int tilted; /* 0 means up-right feature, 1 means 45-degree rotated feature */
/* 2-3 rectangles with weights of opposite signs and
with absolute values inversely proportional to the areas of the
rectangles. If rect[2].weight !=0, then
the feature consists of 3 rectangles, otherwise it consists of 2 */
struct
{
CvRect r;
float weight;
} rect[CV_HAAR_FEATURE_MAX];
}
CvHaarFeature;
/* a single tree classifier (stump in the simplest case) that returns the
response for the feature at the particular image location (i.e. pixel
sum over subrectangles of the window) and gives out a value depending
on the response */
typedef struct CvHaarClassifier
{
int count; /* number of nodes in the decision tree */
/* these are "parallel" arrays. Every index ``i``
corresponds to a node of the decision tree (root has 0-th index).
left[i] - index of the left child (or negated index if the
left child is a leaf)
right[i] - index of the right child (or negated index if the
right child is a leaf)
threshold[i] - branch threshold. if feature response is <= threshold,
left branch is chosen, otherwise right branch is chosen.
alpha[i] - output value corresponding to the leaf. */
CvHaarFeature* haar_feature;
float* threshold;
int* left;
int* right;
float* alpha;
}
CvHaarClassifier;
/* a boosted battery of classifiers(=stage classifier):
the stage classifier returns 1
if the sum of the classifiers responses
is greater than ``threshold`` and 0 otherwise */
typedef struct CvHaarStageClassifier
{
int count; /* number of classifiers in the battery */
float threshold; /* threshold for the boosted classifier */
CvHaarClassifier* classifier; /* array of classifiers */
/* these fields are used for organizing trees of stage classifiers,
rather than just straight cascades */
int next;
int child;
int parent;
}
CvHaarStageClassifier;
typedef struct CvHidHaarClassifierCascade CvHidHaarClassifierCascade;
/* cascade or tree of stage classifiers */
typedef struct CvHaarClassifierCascade
{
int flags; /* signature */
int count; /* number of stages */
CvSize orig_window_size; /* original object size (the cascade is
trained for) */
/* these two parameters are set by cvSetImagesForHaarClassifierCascade */
CvSize real_window_size; /* current object size */
double scale; /* current scale */
CvHaarStageClassifier* stage_classifier; /* array of stage classifiers */
CvHidHaarClassifierCascade* hid_cascade; /* hidden optimized
representation of the
cascade, created by
cvSetImagesForHaarClassifierCascade */
}
CvHaarClassifierCascade;
..
All the structures are used for representing a cascade of boosted Haar classifiers. The cascade has the following hierarchical structure:
::

    Cascade:
        Stage1:
            Classifier11:
                Feature11
            Classifier12:
                Feature12
            ...
        Stage2:
            Classifier21:
                Feature21
            ...
        ...
The whole hierarchy can be constructed manually or loaded from a file or an embedded base using the function
:ref:`LoadHaarClassifierCascade`
.
.. index:: LoadHaarClassifierCascade
.. _LoadHaarClassifierCascade:
LoadHaarClassifierCascade
-------------------------
`id=0.804773488212 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/c/objdetect/LoadHaarClassifierCascade>`__
.. cfunction:: CvHaarClassifierCascade* cvLoadHaarClassifierCascade( const char* directory, CvSize orig_window_size )
Loads a trained cascade classifier from a file or the classifier database embedded in OpenCV.
:param directory: Name of the directory containing the description of a trained cascade classifier
:param orig_window_size: Original size of the objects the cascade has been trained on. Note that it is not stored in the cascade and therefore must be specified separately
The function loads a trained cascade
of haar classifiers from a file or the classifier database embedded in
OpenCV. The base can be trained using the
``haartraining``
application
(see opencv/apps/haartraining for details).
**The function is obsolete**
. Nowadays object detection classifiers are stored in XML or YAML files, rather than in directories. To load a cascade from a file, use the
:ref:`Load`
function.
.. index:: HaarDetectObjects
.. _HaarDetectObjects:
HaarDetectObjects
-----------------
`id=0.264108155188 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/c/objdetect/HaarDetectObjects>`__
.. cfunction:: CvSeq* cvHaarDetectObjects( const CvArr* image, CvHaarClassifierCascade* cascade, CvMemStorage* storage, double scaleFactor=1.1, int minNeighbors=3, int flags=0, CvSize minSize=cvSize(0, 0), CvSize maxSize=cvSize(0,0) )
Detects objects in the image.
::

    typedef struct CvAvgComp
    {
        CvRect rect; /* bounding rectangle for the object (average rectangle of a group) */
        int neighbors; /* number of neighbor rectangles in the group */
    }
    CvAvgComp;
..
:param image: Image to detect objects in
:param cascade: Haar classifier cascade in internal representation
:param storage: Memory storage to store the resultant sequence of the object candidate rectangles
:param scaleFactor: The factor by which the search window is scaled between the subsequent scans, 1.1 means increasing window by 10 %
:param minNeighbors: Minimum number (minus 1) of neighbor rectangles that make up an object. All groups with fewer than ``minNeighbors`` - 1 rectangles are rejected. If ``minNeighbors`` is 0, the function does no grouping at all and returns all the detected candidate rectangles, which may be useful if the user wants to apply a customized grouping procedure
:param flags: Mode of operation. Currently the only flag that may be specified is ``CV_HAAR_DO_CANNY_PRUNING`` . If it is set, the function uses a Canny edge detector to reject some image regions that contain too few or too many edges and thus cannot contain the searched object. The particular threshold values are tuned for face detection and in this case the pruning speeds up the processing
:param minSize: Minimum window size. By default, it is set to the size of samples the classifier has been trained on ( :math:`\sim 20\times 20` for face detection)
:param maxSize: Maximum window size to use. By default, it is set to the size of the image.
The function finds rectangular regions in the given image that are likely to contain objects the cascade has been trained for and returns those regions as a sequence of rectangles. The function scans the image several times at different scales (see
:ref:`SetImagesForHaarClassifierCascade`
). Each time it considers overlapping regions in the image and applies the classifiers to the regions using
:ref:`RunHaarClassifierCascade`
. It may also apply some heuristics to reduce the number of analyzed regions, such as Canny pruning. After it has collected the candidate rectangles (regions that passed the classifier cascade), it groups them and returns a sequence of average rectangles for each large enough group. The default parameters (
``scale_factor``
=1.1,
``min_neighbors``
=3,
``flags``
=0) are tuned for accurate yet slow object detection. For a faster operation on real video images the settings are:
``scale_factor``
=1.2,
``min_neighbors``
=2,
``flags``
=
``CV_HAAR_DO_CANNY_PRUNING``
,
``min_size``
=
*minimum possible face size*
(for example,
:math:`\sim`
1/4 to 1/16 of the image area in the case of video conferencing).
::
#include "cv.h"
#include "highgui.h"
CvHaarClassifierCascade* load_object_detector( const char* cascade_path )
{
return (CvHaarClassifierCascade*)cvLoad( cascade_path );
}
void detect_and_draw_objects( IplImage* image,
CvHaarClassifierCascade* cascade,
int do_pyramids )
{
IplImage* small_image = image;
CvMemStorage* storage = cvCreateMemStorage(0);
CvSeq* faces;
int i, scale = 1;
/* if the flag is specified, down-scale the input image to get a
performance boost without losing quality (perhaps) */
if( do_pyramids )
{
small_image = cvCreateImage( cvSize(image->width/2,image->height/2), IPL_DEPTH_8U, 3 );
cvPyrDown( image, small_image, CV_GAUSSIAN_5x5 );
scale = 2;
}
/* use the fastest variant */
faces = cvHaarDetectObjects( small_image, cascade, storage, 1.2, 2, CV_HAAR_DO_CANNY_PRUNING );
/* draw all the rectangles */
for( i = 0; i < faces->total; i++ )
{
/* extract the rectangles only */
CvRect face_rect = *(CvRect*)cvGetSeqElem( faces, i );
cvRectangle( image, cvPoint(face_rect.x*scale,face_rect.y*scale),
cvPoint((face_rect.x+face_rect.width)*scale,
(face_rect.y+face_rect.height)*scale),
CV_RGB(255,0,0), 3 );
}
if( small_image != image )
cvReleaseImage( &small_image );
cvReleaseMemStorage( &storage );
}
/* takes image filename and cascade path from the command line */
int main( int argc, char** argv )
{
IplImage* image;
if( argc==3 && (image = cvLoadImage( argv[1], 1 )) != 0 )
{
CvHaarClassifierCascade* cascade = load_object_detector(argv[2]);
detect_and_draw_objects( image, cascade, 1 );
cvNamedWindow( "test", 0 );
cvShowImage( "test", image );
cvWaitKey(0);
cvReleaseHaarClassifierCascade( &cascade );
cvReleaseImage( &image );
}
return 0;
}
..
.. index:: SetImagesForHaarClassifierCascade
.. _SetImagesForHaarClassifierCascade:
SetImagesForHaarClassifierCascade
---------------------------------
`id=0.160913357144 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/c/objdetect/SetImagesForHaarClassifierCascade>`__
.. cfunction:: void cvSetImagesForHaarClassifierCascade( CvHaarClassifierCascade* cascade, const CvArr* sum, const CvArr* sqsum, const CvArr* tilted_sum, double scale )
Assigns images to the hidden cascade.
:param cascade: Hidden Haar classifier cascade, created by :ref:`CreateHidHaarClassifierCascade`
:param sum: Integral (sum) single-channel image of 32-bit integer format. This image as well as the two subsequent images are used for fast feature evaluation and brightness/contrast normalization. They all can be retrieved from input 8-bit or floating point single-channel image using the function :ref:`Integral`
:param sqsum: Square sum single-channel image of 64-bit floating-point format
:param tilted_sum: Tilted sum single-channel image of 32-bit integer format
:param scale: Window scale for the cascade. If ``scale`` =1, the original window size is used (objects of that size are searched) - the same size as specified in :ref:`LoadHaarClassifierCascade` (24x24 in the case of ``default_face_cascade`` ), if ``scale`` =2, a two times larger window is used (48x48 in the case of default face cascade). While this will speed-up search about four times, faces smaller than 48x48 cannot be detected
The function assigns images and/or window scale to the hidden classifier cascade. If the image pointers are NULL, the previously set images are used (that is, NULL means "do not change images"). The scale parameter has no such "protection" value, but the previous value can be retrieved by the
:ref:`GetHaarClassifierCascadeScale`
function and reused again. The function is used to prepare cascade for detecting object of the particular size in the particular image. The function is called internally by
:ref:`HaarDetectObjects`
, but it can be called by the user if they are using the lower-level function
:ref:`RunHaarClassifierCascade`
.
.. index:: ReleaseHaarClassifierCascade
.. _ReleaseHaarClassifierCascade:
ReleaseHaarClassifierCascade
----------------------------
`id=0.359777913959 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/c/objdetect/ReleaseHaarClassifierCascade>`__
.. cfunction:: void cvReleaseHaarClassifierCascade( CvHaarClassifierCascade** cascade )
Releases the Haar classifier cascade.
:param cascade: Double pointer to the released cascade. The pointer is cleared by the function
The function deallocates the cascade that has been created manually or loaded using
:ref:`LoadHaarClassifierCascade`
or
:ref:`Load`
.
.. index:: RunHaarClassifierCascade
.. _RunHaarClassifierCascade:
RunHaarClassifierCascade
------------------------
`id=0.100465569078 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/c/objdetect/RunHaarClassifierCascade>`__
.. cfunction:: int cvRunHaarClassifierCascade( CvHaarClassifierCascade* cascade, CvPoint pt, int start_stage=0 )
Runs a cascade of boosted classifiers at the given image location.
:param cascade: Haar classifier cascade
:param pt: Top-left corner of the analyzed region. The size of the region is the original window size scaled by the currently set scale. The current window size may be retrieved using the :ref:`GetHaarClassifierCascadeWindowSize` function
:param start_stage: Initial zero-based index of the cascade stage to start from. The function assumes that all the previous stages are passed. This feature is used internally by :ref:`HaarDetectObjects` for better processor cache utilization
The function runs the Haar classifier
cascade at a single image location. Before using this function the
integral images and the appropriate scale (window size) should be set
using
:ref:`SetImagesForHaarClassifierCascade`
. The function returns
a positive value if the analyzed rectangle passed all the classifier stages
(it is a candidate) and a zero or negative value otherwise.
doc/opencv1/c/video.rst
*********************
video. Video Analysis
*********************
.. toctree::
:maxdepth: 2
video_motion_analysis_and_object_tracking
doc/opencv1/conf.py
# -*- coding: utf-8 -*-
#
# opencvstd documentation build configuration file, created by
# sphinx-quickstart on Mon Feb 14 00:30:43 2011.
#
# This file is execfile()d with the current directory set to its containing dir.
#
# Note that not all possible configuration values are present in this
# autogenerated file.
#
# All configuration values have a default; values that are commented out
# serve to show the default.
import sys, os
# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
#sys.path.insert(0, os.path.abspath('.'))
# -- General configuration -----------------------------------------------------
# If your documentation needs a minimal Sphinx version, state it here.
#needs_sphinx = '1.0'
# Add any Sphinx extension module names here, as strings. They can be extensions
# coming with Sphinx (named 'sphinx.ext.*') or your custom ones.
extensions = ['sphinx.ext.pngmath', 'sphinx.ext.ifconfig', 'sphinx.ext.todo']
doctest_test_doctest_blocks = 'block'
# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']
# The suffix of source filenames.
source_suffix = '.rst'
# The encoding of source files.
#source_encoding = 'utf-8-sig'
# The master toctree document.
master_doc = 'index'
# General information about the project.
project = u'opencvrefman1x'
copyright = u'2011, opencv dev team'
# The version info for the project you're documenting, acts as replacement for
# |version| and |release|, also used in various other places throughout the
# built documents.
#
# The short X.Y version.
version = '2.3'
# The full version, including alpha/beta/rc tags.
release = '2.3'
# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
#language = None
# There are two options for replacing |today|: either, you set today to some
# non-false value, then it is used:
#today = ''
# Else, today_fmt is used as the format for a strftime call.
#today_fmt = '%B %d, %Y'
# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
exclude_patterns = []
# The reST default role (used for this markup: `text`) to use for all documents.
#default_role = None
# If true, '()' will be appended to :func: etc. cross-reference text.
#add_function_parentheses = True
# If true, the current module name will be prepended to all description
# unit titles (such as .. function::).
#add_module_names = True
# If true, sectionauthor and moduleauthor directives will be shown in the
# output. They are ignored by default.
#show_authors = False
# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'sphinx'
# A list of ignored prefixes for module index sorting.
#modindex_common_prefix = []
todo_include_todos=True
# -- Options for HTML output ---------------------------------------------------
# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
html_theme = 'blue'
# Theme options are theme-specific and customize the look and feel of a theme
# further. For a list of options available for each theme, see the
# documentation.
#html_theme_options = {}
# Add any paths that contain custom themes here, relative to this directory.
html_theme_path = ['../_themes']
# The name for this set of Sphinx documents. If None, it defaults to
# "<project> v<release> documentation".
#html_title = None
# A shorter title for the navigation bar. Default is the same as html_title.
#html_short_title = None
# The name of an image file (relative to this directory) to place at the top
# of the sidebar.
html_logo = '../opencv-logo2.png'
# The name of an image file (within the static path) to use as favicon of the
# docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32
# pixels large.
#html_favicon = None
# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
html_static_path = ['../_static']
# If not '', a 'Last updated on:' timestamp is inserted at every page bottom,
# using the given strftime format.
#html_last_updated_fmt = '%b %d, %Y'
# If true, SmartyPants will be used to convert quotes and dashes to
# typographically correct entities.
#html_use_smartypants = True
# Custom sidebar templates, maps document names to template names.
#html_sidebars = {}
# Additional templates that should be rendered to pages, maps page names to
# template names.
#html_additional_pages = {}
# If false, no module index is generated.
#html_domain_indices = True
# If false, no index is generated.
#html_use_index = True
# If true, the index is split into individual pages for each letter.
#html_split_index = False
# If true, links to the reST sources are added to the pages.
#html_show_sourcelink = True
# If true, "Created using Sphinx" is shown in the HTML footer. Default is True.
#html_show_sphinx = True
# If true, "(C) Copyright ..." is shown in the HTML footer. Default is True.
#html_show_copyright = True
# If true, an OpenSearch description file will be output, and all pages will
# contain a <link> tag referring to it. The value of this option must be the
# base URL from which the finished HTML is served.
#html_use_opensearch = ''
# This is the file name suffix for HTML files (e.g. ".xhtml").
#html_file_suffix = None
# Output file base name for HTML help builder.
htmlhelp_basename = 'opencv1x'
# -- Options for LaTeX output --------------------------------------------------
# The paper size ('letter' or 'a4').
#latex_paper_size = 'letter'
# The font size ('10pt', '11pt' or '12pt').
#latex_font_size = '10pt'
# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title, author, documentclass [howto/manual]).
latex_documents = [
('index', 'opencv1x.tex', u'The OpenCV 1.x API Reference Manual',
u'', 'manual'),
]
latex_elements = {'preamble': r'\usepackage{mymath}\usepackage{amssymb}\usepackage{amsmath}\usepackage{bbm}'}
# The name of an image file (relative to this directory) to place at the top of
# the title page.
#latex_logo = None
# For "manual" documents, if this is true, then toplevel headings are parts,
# not chapters.
latex_use_parts = True
# If true, show page references after internal links.
#latex_show_pagerefs = False
# If true, show URL addresses after external links.
#latex_show_urls = False
# Additional stuff for the LaTeX preamble.
#latex_preamble = ''
# Documents to append as an appendix to all manuals.
#latex_appendices = []
# If false, no module index is generated.
#latex_domain_indices = True
# -- Options for manual page output --------------------------------------------
# One entry per manual page. List of tuples
# (source start file, name, description, authors, manual section).
man_pages = [
('index', 'opencv1x', u'The OpenCV 1.x API Reference Manual',
[u'opencv-dev@itseez.com'], 1)
]
doc/opencv1/index.rst
Welcome to opencv 1.x reference manual
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Contents:
.. toctree::
:maxdepth: 2
c/c_index
py/py_index
bibliography
Indices and tables
~~~~~~~~~~~~~~~~~~
* :ref:`genindex`
* :ref:`search`
*******************************************************
calib3d. Camera Calibration, Pose Estimation and Stereo
*******************************************************
.. toctree::
:maxdepth: 2
calib3d_camera_calibration_and_3d_reconstruction
doc/opencv1/py/conf.py
# -*- coding: utf-8 -*-
#
# opencv documentation build configuration file, created by
# sphinx-quickstart on Thu Jun 4 21:06:43 2009.
#
# This file is execfile()d with the current directory set to its containing dir.
#
# Note that not all possible configuration values are present in this
# autogenerated file.
#
# All configuration values have a default; values that are commented out
# serve to show the default.
import sys, os
# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
#sys.path.append(os.path.abspath('.'))
# -- General configuration -----------------------------------------------------
# Add any Sphinx extension module names here, as strings. They can be extensions
# coming with Sphinx (named 'sphinx.ext.*') or your custom ones.
extensions = ['sphinx.ext.pngmath', 'sphinx.ext.doctest'] # , 'sphinx.ext.intersphinx']
doctest_test_doctest_blocks = 'block'
# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']
# The suffix of source filenames.
source_suffix = '.rst'
# The encoding of source files.
#source_encoding = 'utf-8'
# The master toctree document.
master_doc = 'index'
# General information about the project.
project = u'opencv'
copyright = u'2010, authors'
# The version info for the project you're documenting, acts as replacement for
# |version| and |release|, also used in various other places throughout the
# built documents.
#
# The short X.Y version.
version = '2.2'
# The full version, including alpha/beta/rc tags.
release = '2.2.9'
# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
#language = None
# There are two options for replacing |today|: either, you set today to some
# non-false value, then it is used:
#today = ''
# Else, today_fmt is used as the format for a strftime call.
#today_fmt = '%B %d, %Y'
# List of documents that shouldn't be included in the build.
#unused_docs = []
# List of directories, relative to source directory, that shouldn't be searched
# for source files.
exclude_trees = ['_build']
# The reST default role (used for this markup: `text`) to use for all documents.
#default_role = None
# If true, '()' will be appended to :func: etc. cross-reference text.
#add_function_parentheses = True
# If true, the current module name will be prepended to all description
# unit titles (such as .. function::).
#add_module_names = True
# If true, sectionauthor and moduleauthor directives will be shown in the
# output. They are ignored by default.
#show_authors = False
# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'sphinx'
# A list of ignored prefixes for module index sorting.
#modindex_common_prefix = []
# -- Options for HTML output ---------------------------------------------------
# The theme to use for HTML and HTML Help pages. Major themes that come with
# Sphinx are currently 'default' and 'sphinxdoc'.
html_theme = 'blue'
# Theme options are theme-specific and customize the look and feel of a theme
# further. For a list of options available for each theme, see the
# documentation.
html_theme_options = {
"lang" : "%LANG%" # buildall substitutes this for c, cpp, py
}
# Add any paths that contain custom themes here, relative to this directory.
html_theme_path = ['../_themes']
# The name for this set of Sphinx documents. If None, it defaults to
# "<project> v<release> documentation".
#html_title = None
# A shorter title for the navigation bar. Default is the same as html_title.
#html_short_title = None
# The name of an image file (relative to this directory) to place at the top
# of the sidebar.
html_logo = '../opencv-logo2.png'
# The name of an image file (within the static path) to use as favicon of the
# docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32
# pixels large.
#html_favicon = None
# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
html_static_path = ['../_static']
# If not '', a 'Last updated on:' timestamp is inserted at every page bottom,
# using the given strftime format.
#html_last_updated_fmt = '%b %d, %Y'
# If true, SmartyPants will be used to convert quotes and dashes to
# typographically correct entities.
#html_use_smartypants = True
# Custom sidebar templates, maps document names to template names.
#html_sidebars = {}
# Additional templates that should be rendered to pages, maps page names to
# template names.
#html_additional_pages = {}
# If false, no module index is generated.
#html_use_modindex = True
# If false, no index is generated.
#html_use_index = True
# If true, the index is split into individual pages for each letter.
#html_split_index = False
# If true, links to the reST sources are added to the pages.
#html_show_sourcelink = True
# If true, an OpenSearch description file will be output, and all pages will
# contain a <link> tag referring to it. The value of this option must be the
# base URL from which the finished HTML is served.
#html_use_opensearch = ''
# If nonempty, this is the file name suffix for HTML files (e.g. ".xhtml").
#html_file_suffix = ''
# Output file base name for HTML help builder.
htmlhelp_basename = 'opencvdoc'
# -- Options for LaTeX output --------------------------------------------------
# The paper size ('letter' or 'a4').
#latex_paper_size = 'letter'
# The font size ('10pt', '11pt' or '12pt').
#latex_font_size = '10pt'
# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title, author, documentclass [howto/manual]).
latex_documents = [
('index', 'opencv.tex', u'opencv Documentation',
u'author', 'manual'),
]
# The name of an image file (relative to this directory) to place at the top of
# the title page.
#latex_logo = None
# For "manual" documents, if this is true, then toplevel headings are parts,
# not chapters.
#latex_use_parts = False
# Additional stuff for the LaTeX preamble.
#latex_preamble = ''
# Documents to append as an appendix to all manuals.
#latex_appendices = []
# If false, no module index is generated.
#latex_use_modindex = True
pngmath_latex_preamble = r'\usepackage{mymath}\usepackage{amsmath}\usepackage{bbm}\usepackage[usenames]{color}'
# intersphinx_mapping = {
# 'http://docs.python.org/': None,
# }
intersphinx_mapping = {}
latex_elements = {'preamble': r'\usepackage{mymath}\usepackage{amssymb}\usepackage{amsmath}\usepackage{bbm}'}

371
doc/opencv1/py/cookbook.rst Normal file
View File

@ -0,0 +1,371 @@
Cookbook
========
.. highlight:: python
Here is a collection of code fragments demonstrating some features
of the OpenCV Python bindings.
Convert an image
----------------
.. doctest::
>>> import cv
>>> im = cv.LoadImageM("building.jpg")
>>> print type(im)
<type 'cv.cvmat'>
>>> cv.SaveImage("foo.png", im)
..
Resize an image
---------------
To resize an image in OpenCV, create a destination image of the appropriate size, then call
:ref:`Resize`
.
.. doctest::
>>> import cv
>>> original = cv.LoadImageM("building.jpg")
>>> thumbnail = cv.CreateMat(original.rows / 10, original.cols / 10, cv.CV_8UC3)
>>> cv.Resize(original, thumbnail)
..
Compute the Laplacian
---------------------
.. doctest::
>>> import cv
>>> im = cv.LoadImageM("building.jpg", 1)
>>> dst = cv.CreateImage(cv.GetSize(im), cv.IPL_DEPTH_16S, 3)
>>> laplace = cv.Laplace(im, dst)
>>> cv.SaveImage("foo-laplace.png", dst)
..
Using GoodFeaturesToTrack
-------------------------
To find the 10 strongest corner features in an image, use
:ref:`GoodFeaturesToTrack`
like this:
.. doctest::
>>> import cv
>>> img = cv.LoadImageM("building.jpg", cv.CV_LOAD_IMAGE_GRAYSCALE)
>>> eig_image = cv.CreateMat(img.rows, img.cols, cv.CV_32FC1)
>>> temp_image = cv.CreateMat(img.rows, img.cols, cv.CV_32FC1)
>>> for (x,y) in cv.GoodFeaturesToTrack(img, eig_image, temp_image, 10, 0.04, 1.0, useHarris = True):
... print "good feature at", x,y
good feature at 198.0 514.0
good feature at 791.0 260.0
good feature at 370.0 467.0
good feature at 374.0 469.0
good feature at 490.0 520.0
good feature at 262.0 278.0
good feature at 781.0 134.0
good feature at 3.0 247.0
good feature at 667.0 321.0
good feature at 764.0 304.0
..
Using GetSubRect
----------------
GetSubRect returns a rectangular part of another image. It does this without copying any data.
.. doctest::
>>> import cv
>>> img = cv.LoadImageM("building.jpg")
>>> sub = cv.GetSubRect(img, (60, 70, 32, 32)) # sub is 32x32 patch within img
>>> cv.SetZero(sub) # clear sub to zero, which also clears 32x32 pixels in img
..
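The same aliasing behaviour can be demonstrated without OpenCV at all, using Python's standard ``memoryview`` (a plain-Python sketch of the no-copy semantics, not OpenCV code):

```python
# A 1x10 "row" of 8-bit pixels, all set to 255.
row = bytearray(b"\xff" * 10)

# Like GetSubRect, a memoryview slice shares the underlying buffer
# instead of copying it.
patch = memoryview(row)[2:6]
patch[:] = b"\x00" * 4  # clearing the view also clears bytes 2..5 of row

print(row[0], row[2], row[5], row[6])  # 255 0 0 255
```

Because ``sub`` in the example above is a view, writes through it (such as ``cv.SetZero``) are visible in the parent image, and no extra memory is allocated.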
Using CreateMat, and accessing an element
-----------------------------------------
.. doctest::
>>> import cv
>>> mat = cv.CreateMat(5, 5, cv.CV_32FC1)
>>> cv.Set(mat, 1.0)
>>> mat[3,1] += 0.375
>>> print mat[3,1]
1.375
>>> print [mat[3,i] for i in range(5)]
[1.0, 1.375, 1.0, 1.0, 1.0]
..
ROS image message to OpenCV
---------------------------
See this tutorial:
`Using CvBridge to convert between ROS images And OpenCV images <http://www.ros.org/wiki/cv_bridge/Tutorials/UsingCvBridgeToConvertBetweenROSImagesAndOpenCVImages>`_
.
PIL Image to OpenCV
-------------------
(For details on PIL see the
`PIL handbook <http://www.pythonware.com/library/pil/handbook/image.htm>`_
.)
.. doctest::
>>> import Image, cv
>>> pi = Image.open('building.jpg') # PIL image
>>> cv_im = cv.CreateImageHeader(pi.size, cv.IPL_DEPTH_8U, 3)
>>> cv.SetData(cv_im, pi.tostring())
>>> print pi.size, cv.GetSize(cv_im)
(868, 600) (868, 600)
>>> print pi.tostring() == cv_im.tostring()
True
..
OpenCV to PIL Image
-------------------
.. doctest::
>>> import Image, cv
>>> cv_im = cv.CreateImage((320,200), cv.IPL_DEPTH_8U, 1)
>>> pi = Image.fromstring("L", cv.GetSize(cv_im), cv_im.tostring())
>>> print pi.size
(320, 200)
..
NumPy and OpenCV
----------------
Using the
`array interface <http://docs.scipy.org/doc/numpy/reference/arrays.interface.html>`_
, to use an OpenCV CvMat in NumPy:
.. doctest::
>>> import cv, numpy
>>> mat = cv.CreateMat(3, 5, cv.CV_32FC1)
>>> cv.Set(mat, 7)
>>> a = numpy.asarray(mat)
>>> print a
[[ 7. 7. 7. 7. 7.]
[ 7. 7. 7. 7. 7.]
[ 7. 7. 7. 7. 7.]]
..
and to use a NumPy array in OpenCV:
.. doctest::
>>> import cv, numpy
>>> a = numpy.ones((480, 640))
>>> mat = cv.fromarray(a)
>>> print mat.rows
480
>>> print mat.cols
640
..
also, most OpenCV functions can work on NumPy arrays directly, for example:
.. doctest::
>>> picture = numpy.ones((640, 480))
>>> cv.Smooth(picture, picture, cv.CV_GAUSSIAN, 15, 15)
..
Given a 2D array,
the
:ref:`fromarray`
function (or the implicit version shown above)
returns a single-channel
:ref:`CvMat`
of the same size.
For a 3D array of size
:math:`j \times k \times l`
, it returns a
:ref:`CvMat`
sized
:math:`j \times k`
with
:math:`l`
channels.
Alternatively, use
:ref:`fromarray`
with the
``allowND``
option to always return a
:ref:`cvMatND`
.
OpenCV to pygame
----------------
To convert an OpenCV image to a
`pygame <http://www.pygame.org/>`_
surface:
.. doctest::
>>> import pygame.image, cv
>>> src = cv.LoadImage("lena.jpg")
>>> src_rgb = cv.CreateMat(src.height, src.width, cv.CV_8UC3)
>>> cv.CvtColor(src, src_rgb, cv.CV_BGR2RGB)
>>> pg_img = pygame.image.frombuffer(src_rgb.tostring(), cv.GetSize(src_rgb), "RGB")
>>> print pg_img
<Surface(512x512x24 SW)>
..
OpenCV and OpenEXR
------------------
Using
`OpenEXR's Python bindings <http://www.excamera.com/sphinx/articles-openexr.html>`_
you can make a simple
image viewer:
::
import OpenEXR, Imath, cv
filename = "GoldenGate.exr"
exrimage = OpenEXR.InputFile(filename)
dw = exrimage.header()['dataWindow']
(width, height) = (dw.max.x - dw.min.x + 1, dw.max.y - dw.min.y + 1)
def fromstr(s):
mat = cv.CreateMat(height, width, cv.CV_32FC1)
cv.SetData(mat, s)
return mat
pt = Imath.PixelType(Imath.PixelType.FLOAT)
(r, g, b) = [fromstr(s) for s in exrimage.channels("RGB", pt)]
bgr = cv.CreateMat(height, width, cv.CV_32FC3)
cv.Merge(b, g, r, None, bgr)
cv.ShowImage(filename, bgr)
cv.WaitKey()
..

16
doc/opencv1/py/core.rst Normal file
View File

@ -0,0 +1,16 @@
****************************
core. The Core Functionality
****************************
.. toctree::
:maxdepth: 2
core_basic_structures
core_operations_on_arrays
core_dynamic_structures
core_drawing_functions
core_xml_yaml_persistence
core_clustering
core_utility_and_system_functions_and_macros

View File

@ -0,0 +1,520 @@
Basic Structures
================
.. highlight:: python
.. index:: CvPoint
.. _CvPoint:
CvPoint
-------
`id=0.407060643954 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/py/core/CvPoint>`__
.. class:: CvPoint
2D point with integer coordinates (usually zero-based).
2D point, represented as a tuple
``(x, y)``
, where x and y are integers.
.. index:: CvPoint2D32f
.. _CvPoint2D32f:
CvPoint2D32f
------------
`id=0.638091190655 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/py/core/CvPoint2D32f>`__
.. class:: CvPoint2D32f
2D point with floating-point coordinates
2D point, represented as a tuple
``(x, y)``
, where x and y are floats.
.. index:: CvPoint3D32f
.. _CvPoint3D32f:
CvPoint3D32f
------------
`id=0.334583364495 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/py/core/CvPoint3D32f>`__
.. class:: CvPoint3D32f
3D point with floating-point coordinates
3D point, represented as a tuple
``(x, y, z)``
, where x, y and z are floats.
.. index:: CvPoint2D64f
.. _CvPoint2D64f:
CvPoint2D64f
------------
`id=0.352962148614 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/py/core/CvPoint2D64f>`__
.. class:: CvPoint2D64f
2D point with double precision floating-point coordinates
2D point, represented as a tuple
``(x, y)``
, where x and y are floats.
.. index:: CvPoint3D64f
.. _CvPoint3D64f:
CvPoint3D64f
------------
`id=0.00812295344272 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/py/core/CvPoint3D64f>`__
.. class:: CvPoint3D64f
3D point with double precision floating-point coordinates
3D point, represented as a tuple
``(x, y, z)``
, where x, y and z are floats.
.. index:: CvSize
.. _CvSize:
CvSize
------
`id=0.980418044509 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/py/core/CvSize>`__
.. class:: CvSize
Pixel-accurate size of a rectangle.
Size of a rectangle, represented as a tuple
``(width, height)``
, where width and height are integers.
.. index:: CvSize2D32f
.. _CvSize2D32f:
CvSize2D32f
-----------
`id=0.623013904609 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/py/core/CvSize2D32f>`__
.. class:: CvSize2D32f
Sub-pixel accurate size of a rectangle.
Size of a rectangle, represented as a tuple
``(width, height)``
, where width and height are floats.
.. index:: CvRect
.. _CvRect:
CvRect
------
`id=0.706717090055 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/py/core/CvRect>`__
.. class:: CvRect
Offset (usually the top-left corner) and size of a rectangle.
Rectangle, represented as a tuple
``(x, y, width, height)``
, where all are integers.
.. index:: CvScalar
.. _CvScalar:
CvScalar
--------
`id=0.733448405451 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/py/core/CvScalar>`__
.. class:: CvScalar
A container for 1-,2-,3- or 4-tuples of doubles.
CvScalar is always represented as a 4-tuple.
.. doctest::
>>> import cv
>>> cv.Scalar(1, 2, 3, 4)
(1.0, 2.0, 3.0, 4.0)
>>> cv.ScalarAll(7)
(7.0, 7.0, 7.0, 7.0)
>>> cv.RealScalar(7)
(7.0, 0.0, 0.0, 0.0)
>>> cv.RGB(17, 110, 255)
(255.0, 110.0, 17.0, 0.0)
..
.. index:: CvTermCriteria
.. _CvTermCriteria:
CvTermCriteria
--------------
`id=0.996691519996 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/py/core/CvTermCriteria>`__
.. class:: CvTermCriteria
Termination criteria for iterative algorithms.
Represented by a tuple
``(type, max_iter, epsilon)``
.
.. attribute:: type
``CV_TERMCRIT_ITER`` , ``CV_TERMCRIT_EPS`` or ``CV_TERMCRIT_ITER | CV_TERMCRIT_EPS``
.. attribute:: max_iter
Maximum number of iterations
.. attribute:: epsilon
Required accuracy
::
(cv.CV_TERMCRIT_ITER, 10, 0) # terminate after 10 iterations
(cv.CV_TERMCRIT_EPS, 0, 0.01) # terminate when epsilon reaches 0.01
(cv.CV_TERMCRIT_ITER | cv.CV_TERMCRIT_EPS, 10, 0.01) # terminate as soon as either condition is met
..
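How an iterative algorithm consults such a tuple can be sketched in plain Python (an illustration of the semantics only; ``should_stop`` is a hypothetical helper, not part of the OpenCV API):

```python
CV_TERMCRIT_ITER = 1  # the values used by OpenCV 1.x
CV_TERMCRIT_EPS = 2

def should_stop(termcrit, iteration, change):
    """Check a (type, max_iter, epsilon) tuple against the current state."""
    crit_type, max_iter, epsilon = termcrit
    if crit_type & CV_TERMCRIT_ITER and iteration >= max_iter:
        return True
    if crit_type & CV_TERMCRIT_EPS and change < epsilon:
        return True
    return False

# Iteration limit only: stops at iteration 10 regardless of accuracy.
print(should_stop((CV_TERMCRIT_ITER, 10, 0), 10, 0.5))  # True
# Combined criteria: stops as soon as either condition is met.
print(should_stop((CV_TERMCRIT_ITER | CV_TERMCRIT_EPS, 10, 0.01), 3, 0.005))  # True
```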
.. index:: CvMat
.. _CvMat:
CvMat
-----
`id=0.619633266675 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/py/core/CvMat>`__
.. class:: CvMat
A multi-channel 2D matrix. Created by
:ref:`CreateMat`
,
:ref:`LoadImageM`
,
:ref:`CreateMatHeader`
,
:ref:`fromarray`
.
.. attribute:: type
A CvMat signature containing the type of elements and flags, int
.. attribute:: step
Full row length in bytes, int
.. attribute:: rows
Number of rows, int
.. attribute:: cols
Number of columns, int
.. method:: tostring() -> str
Returns the contents of the CvMat as a single string.
.. index:: CvMatND
.. _CvMatND:
CvMatND
-------
`id=0.493284398358 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/py/core/CvMatND>`__
.. class:: CvMatND
Multi-dimensional dense multi-channel array.
.. attribute:: type
A CvMatND signature combining the type of elements and flags, int
.. method:: tostring() -> str
Returns the contents of the CvMatND as a single string.
.. index:: IplImage
.. _IplImage:
IplImage
--------
`id=0.479556472461 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/py/core/IplImage>`__
.. class:: IplImage
The
:ref:`IplImage`
object was inherited from the Intel Image Processing
Library, in which the format is native. OpenCV only supports a subset
of possible
:ref:`IplImage`
formats.
.. attribute:: nChannels
Number of channels, int.
.. attribute:: width
Image width in pixels
.. attribute:: height
Image height in pixels
.. attribute:: depth
Pixel depth in bits. The supported depths are:
.. attribute:: IPL_DEPTH_8U
Unsigned 8-bit integer
.. attribute:: IPL_DEPTH_8S
Signed 8-bit integer
.. attribute:: IPL_DEPTH_16U
Unsigned 16-bit integer
.. attribute:: IPL_DEPTH_16S
Signed 16-bit integer
.. attribute:: IPL_DEPTH_32S
Signed 32-bit integer
.. attribute:: IPL_DEPTH_32F
Single-precision floating point
.. attribute:: IPL_DEPTH_64F
Double-precision floating point
.. attribute:: origin
0 - top-left origin, 1 - bottom-left origin (Windows bitmap style)
.. method:: tostring() -> str
Returns the contents of the IplImage as a single string.
.. index:: CvArr
.. _CvArr:
CvArr
-----
`id=0.249942454209 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/py/core/CvArr>`__
.. class:: CvArr
Arbitrary array
``CvArr``
is used
*only*
as a function parameter to specify that the parameter can be:
* an :ref:`IplImage`
* a :ref:`CvMat`
* any other type that exports the `array interface <http://docs.scipy.org/doc/numpy/reference/arrays.interface.html>`_

View File

@ -0,0 +1,60 @@
Clustering
==========
.. highlight:: python
.. index:: KMeans2
.. _KMeans2:
KMeans2
-------
`id=0.682106387651 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/py/core/KMeans2>`__
.. function:: KMeans2(samples,nclusters,labels,termcrit)-> None
Splits a set of vectors into a given number of clusters.
:param samples: Floating-point matrix of input samples, one row per sample
:type samples: :class:`CvArr`
:param nclusters: Number of clusters to split the set by
:type nclusters: int
:param labels: Output integer vector storing cluster indices for every sample
:type labels: :class:`CvArr`
:param termcrit: Specifies maximum number of iterations and/or accuracy (distance the centers can move by between subsequent iterations)
:type termcrit: :class:`CvTermCriteria`
The function
``cvKMeans2``
implements a k-means algorithm that finds the
centers of
``nclusters``
clusters and groups the input samples
around the clusters. On output,
:math:`\texttt{labels}_i`
contains a cluster index for
samples stored in the i-th row of the
``samples``
matrix.
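The assignment/update loop the function performs can be sketched in pure Python on 1-D samples (a toy illustration of k-means, not the OpenCV implementation, which works on multi-dimensional rows and uses the termination criteria):

```python
def kmeans_1d(samples, nclusters, max_iter=10):
    """Toy 1-D k-means: returns (centers, labels), one label per sample."""
    centers = list(samples[:nclusters])  # naive initialization
    labels = [0] * len(samples)
    for _ in range(max_iter):
        # Assignment step: attach each sample to the nearest center.
        labels = [min(range(nclusters), key=lambda c: abs(s - centers[c]))
                  for s in samples]
        # Update step: move each center to the mean of its members.
        for c in range(nclusters):
            members = [s for s, l in zip(samples, labels) if l == c]
            if members:
                centers[c] = sum(members) / len(members)
    return centers, labels

centers, labels = kmeans_1d([1.0, 1.2, 0.8, 10.0, 10.5, 9.5], 2)
print(sorted(round(c, 2) for c in centers))  # [1.0, 10.0]
```

As with ``KMeans2``, on output each entry of ``labels`` holds the cluster index of the corresponding sample.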

View File

@ -0,0 +1,967 @@
Drawing Functions
=================
.. highlight:: python
Drawing functions work with matrices/images of arbitrary depth.
The boundaries of the shapes can be rendered with antialiasing (implemented only for 8-bit images for now).
All the functions include a ``color`` parameter that takes an RGB value (which may be constructed with ``CV_RGB``) for color images, or a brightness value for grayscale images. For color images the channel order is normally *Blue, Green, Red*; this is what :func:`imshow`, :func:`imread` and :func:`imwrite` expect.
If you are using your own image rendering and I/O functions, you can use any channel ordering; the drawing functions process each channel independently and do not depend on the channel order or even on the color space used. The whole image can be converted from BGR to RGB or to a different color space using :func:`cvtColor`.
If a drawn figure is partially or completely outside the image, the drawing functions clip it. Also, many drawing functions can handle pixel coordinates specified with sub-pixel accuracy: the coordinates can be passed as fixed-point numbers encoded as integers. The number of fractional bits is specified by the ``shift`` parameter, and the real point coordinates are calculated as :math:`\texttt{Point}(x,y)\rightarrow\texttt{Point2f}(x*2^{-shift},y*2^{-shift})`. This feature is especially effective when rendering antialiased shapes.

Also, note that the functions do not support alpha-transparency: when the target image is 4-channel, the ``color[3]`` value is simply copied to the repainted pixels. Thus, if you want to paint semi-transparent shapes, you can paint them in a separate buffer and then blend it with the main image.
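The fixed-point encoding controlled by ``shift`` amounts to simple arithmetic, shown here in plain Python (``to_fixed`` and ``from_fixed`` are hypothetical helper names for illustration; in practice you pass the encoded integers and ``shift`` to the drawing functions):

```python
def to_fixed(x, shift):
    """Encode a real coordinate as an integer with `shift` fractional bits."""
    return int(round(x * (1 << shift)))

def from_fixed(n, shift):
    """Decode back: n -> n * 2**-shift."""
    return n * 2.0 ** -shift

# With shift=4 there are 16 sub-pixel positions per pixel.
enc = to_fixed(12.25, 4)
print(enc)                 # 196
print(from_fixed(enc, 4))  # 12.25
```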
.. index:: Circle
.. _Circle:
Circle
------
`id=0.300689351141 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/py/core/Circle>`__
.. function:: Circle(img,center,radius,color,thickness=1,lineType=8,shift=0)-> None
Draws a circle.
:param img: Image where the circle is drawn
:type img: :class:`CvArr`
:param center: Center of the circle
:type center: :class:`CvPoint`
:param radius: Radius of the circle
:type radius: int
:param color: Circle color
:type color: :class:`CvScalar`
:param thickness: Thickness of the circle outline if positive, otherwise this indicates that a filled circle is to be drawn
:type thickness: int
:param lineType: Type of the circle boundary, see :ref:`Line` description
:type lineType: int
:param shift: Number of fractional bits in the center coordinates and radius value
:type shift: int
The function draws a simple or filled circle with a
given center and radius.
.. index:: ClipLine
.. _ClipLine:
ClipLine
--------
`id=0.251101842576 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/py/core/ClipLine>`__
.. function:: ClipLine(imgSize, pt1, pt2) -> (clipped_pt1, clipped_pt2)
Clips the line against the image rectangle.
:param imgSize: Size of the image
:type imgSize: :class:`CvSize`
:param pt1: First ending point of the line segment.
:type pt1: :class:`CvPoint`
:param pt2: Second ending point of the line segment.
:type pt2: :class:`CvPoint`
The function calculates the part of the line segment which lies entirely within the image.
If the segment is completely outside the image, the function returns None; otherwise it returns a new pair of clipped endpoints.
.. index:: DrawContours
.. _DrawContours:
DrawContours
------------
`id=0.919530584794 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/py/core/DrawContours>`__
.. function:: DrawContours(img,contour,external_color,hole_color,max_level,thickness=1,lineType=8,offset=(0,0))-> None
Draws contour outlines or interiors in an image.
:param img: Image where the contours are to be drawn. As with any other drawing function, the contours are clipped with the ROI.
:type img: :class:`CvArr`
:param contour: Pointer to the first contour
:type contour: :class:`CvSeq`
:param external_color: Color of the external contours
:type external_color: :class:`CvScalar`
:param hole_color: Color of internal contours (holes)
:type hole_color: :class:`CvScalar`
:param max_level: Maximal level for drawn contours. If 0, only ``contour`` is drawn. If 1, the contour and all contours following
it on the same level are drawn. If 2, all contours following and all
contours one level below the contours are drawn, and so forth. If the value
is negative, the function does not draw the contours following after ``contour`` but draws the child contours of ``contour`` up
to the :math:`|\texttt{max\_level}|-1` level.
:type max_level: int
:param thickness: Thickness of the lines the contours are drawn with.
If it is negative (for example, ``thickness=CV_FILLED``), the contour interiors are
drawn.
:type thickness: int
:param lineType: Type of the contour segments, see :ref:`Line` description
:type lineType: int
The function draws contour outlines in the image if
:math:`\texttt{thickness} \ge 0`
or fills the area bounded by the contours if
:math:`\texttt{thickness}<0`
.
.. index:: Ellipse
.. _Ellipse:
Ellipse
-------
`id=0.149495013833 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/py/core/Ellipse>`__
.. function:: Ellipse(img,center,axes,angle,start_angle,end_angle,color,thickness=1,lineType=8,shift=0)-> None
Draws a simple or thick elliptic arc, or fills an ellipse sector.
:param img: The image
:type img: :class:`CvArr`
:param center: Center of the ellipse
:type center: :class:`CvPoint`
:param axes: Length of the ellipse axes
:type axes: :class:`CvSize`
:param angle: Rotation angle
:type angle: float
:param start_angle: Starting angle of the elliptic arc
:type start_angle: float
:param end_angle: Ending angle of the elliptic arc.
:type end_angle: float
:param color: Ellipse color
:type color: :class:`CvScalar`
:param thickness: Thickness of the ellipse arc outline if positive, otherwise this indicates that a filled ellipse sector is to be drawn
:type thickness: int
:param lineType: Type of the ellipse boundary, see :ref:`Line` description
:type lineType: int
:param shift: Number of fractional bits in the center coordinates and axes' values
:type shift: int
The function draws a simple or thick elliptic
arc or fills an ellipse sector. The arc is clipped by the ROI rectangle.
A piecewise-linear approximation is used for antialiased arcs and
thick arcs. All the angles are given in degrees. The picture below
explains the meaning of the parameters.
Parameters of Elliptic Arc
.. image:: ../pics/ellipse.png
.. index:: EllipseBox
.. _EllipseBox:
EllipseBox
----------
`id=0.217567751917 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/py/core/EllipseBox>`__
.. function:: EllipseBox(img,box,color,thickness=1,lineType=8,shift=0)-> None
Draws a simple or thick elliptic arc or fills an ellipse sector.
:param img: Image
:type img: :class:`CvArr`
:param box: The enclosing box of the ellipse drawn
:type box: :class:`CvBox2D`
:param thickness: Thickness of the ellipse boundary
:type thickness: int
:param lineType: Type of the ellipse boundary, see :ref:`Line` description
:type lineType: int
:param shift: Number of fractional bits in the box vertex coordinates
:type shift: int
The function draws a simple or thick ellipse outline, or fills an ellipse. It provides a convenient way to draw an ellipse approximating some shape; that is what
:ref:`CamShift`
and
:ref:`FitEllipse`
do. The ellipse drawn is clipped by ROI rectangle. A piecewise-linear approximation is used for antialiased arcs and thick arcs.
.. index:: FillConvexPoly
.. _FillConvexPoly:
FillConvexPoly
--------------
`id=0.27807950676 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/py/core/FillConvexPoly>`__
.. function:: FillConvexPoly(img,pn,color,lineType=8,shift=0)-> None
Fills a convex polygon.
:param img: Image
:type img: :class:`CvArr`
:param pn: List of coordinate pairs
:type pn: :class:`CvPoints`
:param color: Polygon color
:type color: :class:`CvScalar`
:param lineType: Type of the polygon boundaries, see :ref:`Line` description
:type lineType: int
:param shift: Number of fractional bits in the vertex coordinates
:type shift: int
The function fills a convex polygon's interior.
This function is much faster than the function ``cvFillPoly`` and can fill not only convex polygons but any monotonic polygon, i.e., a polygon whose contour intersects every horizontal line (scan line) at most twice.
.. index:: FillPoly
.. _FillPoly:
FillPoly
--------
`id=0.470054743188 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/py/core/FillPoly>`__
.. function:: FillPoly(img,polys,color,lineType=8,shift=0)-> None
Fills a polygon's interior.
:param img: Image
:type img: :class:`CvArr`
:param polys: List of lists of (x,y) pairs. Each list of points is a polygon.
:type polys: list of lists of (x,y) pairs
:param color: Polygon color
:type color: :class:`CvScalar`
:param lineType: Type of the polygon boundaries, see :ref:`Line` description
:type lineType: int
:param shift: Number of fractional bits in the vertex coordinates
:type shift: int
The function fills an area bounded by several
polygonal contours. The function fills complex areas, for example,
areas with holes, contour self-intersection, and so forth.
.. index:: GetTextSize
.. _GetTextSize:
GetTextSize
-----------
`id=0.723985190989 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/py/core/GetTextSize>`__
.. function:: GetTextSize(textString,font)-> (textSize,baseline)
Retrieves the width and height of a text string.
:param font: Pointer to the font structure
:type font: :class:`CvFont`
:param textString: Input string
:type textString: str
:param textSize: Resultant size of the text string. Height of the text does not include the height of character parts that are below the baseline.
:type textSize: :class:`CvSize`
:param baseline: y-coordinate of the baseline relative to the bottom-most text point
:type baseline: int
The function calculates the dimensions of a rectangle to enclose a text string when a specified font is used.
.. index:: InitFont
.. _InitFont:
InitFont
--------
`id=0.526488936836 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/py/core/InitFont>`__
.. function:: InitFont(fontFace,hscale,vscale,shear=0,thickness=1,lineType=8)-> font
Initializes font structure.
:param font: Pointer to the font structure initialized by the function
:type font: :class:`CvFont`
:param fontFace: Font name identifier. Only a subset of Hershey fonts http://sources.isc.org/utils/misc/hershey-font.txt are supported now:
* **CV_FONT_HERSHEY_SIMPLEX** normal size sans-serif font
* **CV_FONT_HERSHEY_PLAIN** small size sans-serif font
* **CV_FONT_HERSHEY_DUPLEX** normal size sans-serif font (more complex than ``CV_FONT_HERSHEY_SIMPLEX`` )
* **CV_FONT_HERSHEY_COMPLEX** normal size serif font
* **CV_FONT_HERSHEY_TRIPLEX** normal size serif font (more complex than ``CV_FONT_HERSHEY_COMPLEX`` )
* **CV_FONT_HERSHEY_COMPLEX_SMALL** smaller version of ``CV_FONT_HERSHEY_COMPLEX``
* **CV_FONT_HERSHEY_SCRIPT_SIMPLEX** hand-writing style font
* **CV_FONT_HERSHEY_SCRIPT_COMPLEX** more complex variant of ``CV_FONT_HERSHEY_SCRIPT_SIMPLEX``
The parameter can be composited from one of the values above and an optional ``CV_FONT_ITALIC`` flag, which indicates italic or oblique font.
:type fontFace: int
:param hscale: Horizontal scale. If equal to ``1.0f`` , the characters have the original width depending on the font type. If equal to ``0.5f`` , the characters are of half the original width.
:type hscale: float
:param vscale: Vertical scale. If equal to ``1.0f`` , the characters have the original height depending on the font type. If equal to ``0.5f`` , the characters are of half the original height.
:type vscale: float
:param shear: Approximate tangent of the character slope relative to the vertical line. A zero value means a non-italic font, ``1.0f`` means about a 45 degree slope, etc.
:type shear: float
:param thickness: Thickness of the text strokes
:type thickness: int
:param lineType: Type of the strokes, see :ref:`Line` description
:type lineType: int
The function initializes the font structure that can be passed to text rendering functions.
.. index:: InitLineIterator
.. _InitLineIterator:
InitLineIterator
----------------
`id=0.352578115956 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/py/core/InitLineIterator>`__
.. function:: InitLineIterator(image, pt1, pt2, connectivity=8, left_to_right=0) -> line_iterator
Initializes the line iterator.
:param image: Image to sample the line from
:type image: :class:`CvArr`
:param pt1: First ending point of the line segment
:type pt1: :class:`CvPoint`
:param pt2: Second ending point of the line segment
:type pt2: :class:`CvPoint`
:param connectivity: The scanned line connectivity, 4 or 8.
:type connectivity: int
:param left_to_right:
If ( :math:`\texttt{left\_to\_right} = 0` ) then the line is scanned in the specified order, from ``pt1`` to ``pt2`` .
If ( :math:`\texttt{left\_to\_right} \ne 0` ) the line is scanned from left-most point to right-most.
:type left_to_right: int
:param line_iterator: Iterator over the pixels of the line
:type line_iterator: :class:`iter`
The function returns an iterator over the pixels connecting the two points.
The points on the line are
calculated one by one using a 4-connected or 8-connected Bresenham
algorithm.
Example: Using line iterator to calculate the sum of pixel values along a color line
.. doctest::
>>> import cv
>>> img = cv.LoadImageM("building.jpg", cv.CV_LOAD_IMAGE_COLOR)
>>> li = cv.InitLineIterator(img, (100, 100), (125, 150))
>>> red_sum = 0
>>> green_sum = 0
>>> blue_sum = 0
>>> for (r, g, b) in li:
... red_sum += r
... green_sum += g
... blue_sum += b
>>> print red_sum, green_sum, blue_sum
10935.0 9496.0 7946.0
..
or more concisely using
`zip <http://docs.python.org/library/functions.html#zip>`_
:
.. doctest::
>>> import cv
>>> img = cv.LoadImageM("building.jpg", cv.CV_LOAD_IMAGE_COLOR)
>>> li = cv.InitLineIterator(img, (100, 100), (125, 150))
>>> print [sum(c) for c in zip(*li)]
[10935.0, 9496.0, 7946.0]
..
.. index:: Line
.. _Line:
Line
----
`id=0.32347581651 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/py/core/Line>`__
.. function:: Line(img,pt1,pt2,color,thickness=1,lineType=8,shift=0)-> None
Draws a line segment connecting two points.
:param img: The image
:type img: :class:`CvArr`
:param pt1: First point of the line segment
:type pt1: :class:`CvPoint`
:param pt2: Second point of the line segment
:type pt2: :class:`CvPoint`
:param color: Line color
:type color: :class:`CvScalar`
:param thickness: Line thickness
:type thickness: int
:param lineType: Type of the line:
* **8** (or omitted) 8-connected line.
* **4** 4-connected line.
* **CV_AA** antialiased line.
:type lineType: int
:param shift: Number of fractional bits in the point coordinates
:type shift: int
The function draws the line segment between
``pt1``
and
``pt2``
points in the image. The line is
clipped by the image or ROI rectangle. For non-antialiased lines
with integer coordinates the 8-connected or 4-connected Bresenham
algorithm is used. Thick lines are drawn with rounded endings.
Antialiased lines are drawn using Gaussian filtering. To specify
the line color, the user may use the macro
``CV_RGB( r, g, b )``
.
.. index:: PolyLine
.. _PolyLine:
PolyLine
--------
`id=0.899614274707 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/py/core/PolyLine>`__
.. function:: PolyLine(img,polys,is_closed,color,thickness=1,lineType=8,shift=0)-> None
Draws simple or thick polygons.
:param polys: List of lists of (x,y) pairs. Each list of points is a polygon.
:type polys: list of lists of (x,y) pairs
:param img: Image
:type img: :class:`CvArr`
:param is_closed: Indicates whether the polylines must be drawn
closed. If closed, the function draws the line from the last vertex
of every contour to the first vertex.
:type is_closed: int
:param color: Polyline color
:type color: :class:`CvScalar`
:param thickness: Thickness of the polyline edges
:type thickness: int
:param lineType: Type of the line segments, see :ref:`Line` description
:type lineType: int
:param shift: Number of fractional bits in the vertex coordinates
:type shift: int
The function draws single or multiple polygonal curves.
.. index:: PutText
.. _PutText:
PutText
-------
`id=0.414755160642 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/py/core/PutText>`__
.. function:: PutText(img,text,org,font,color)-> None
Draws a text string.
:param img: Input image
:type img: :class:`CvArr`
:param text: String to print
:type text: str
:param org: Coordinates of the bottom-left corner of the first letter
:type org: :class:`CvPoint`
:param font: Pointer to the font structure
:type font: :class:`CvFont`
:param color: Text color
:type color: :class:`CvScalar`
The function renders the text in the image with
the specified font and color. The printed text is clipped by the ROI
rectangle. Symbols that do not belong to the specified font are
replaced with the symbol for a rectangle.
.. index:: Rectangle
.. _Rectangle:
Rectangle
---------
`id=0.243634323886 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/py/core/Rectangle>`__
.. function:: Rectangle(img,pt1,pt2,color,thickness=1,lineType=8,shift=0)-> None
Draws a simple, thick, or filled rectangle.
:param img: Image
:type img: :class:`CvArr`
:param pt1: One of the rectangle's vertices
:type pt1: :class:`CvPoint`
:param pt2: Opposite rectangle vertex
:type pt2: :class:`CvPoint`
:param color: Line color (RGB) or brightness (grayscale image)
:type color: :class:`CvScalar`
:param thickness: Thickness of the lines that make up the rectangle. Negative values, e.g., ``CV_FILLED``, cause the function to draw a filled rectangle.
:type thickness: int
:param lineType: Type of the line, see :ref:`Line` description
:type lineType: int
:param shift: Number of fractional bits in the point coordinates
:type shift: int
The function draws a rectangle with two opposite corners
``pt1``
and
``pt2``
.
.. index:: CV_RGB
.. _CV_RGB:
CV_RGB
------
`id=0.224041402111 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/py/core/CV_RGB>`__
.. function:: CV_RGB(red,grn,blu)->CvScalar
Constructs a color value.
:param red: Red component
:type red: float
:param grn: Green component
:type grn: float
:param blu: Blue component
:type blu: float
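Internally the macro just packs the three components into a 4-element ``CvScalar`` in BGR channel order; a pure-Python sketch of the equivalent computation (the function name here is illustrative):

```python
def cv_rgb(red, grn, blu):
    # CvScalar values are stored as (blue, green, red, 0) -- OpenCV's
    # drawing functions interpret color channels in BGR order.
    return (blu, grn, red, 0)

print(cv_rgb(255, 128, 0))  # -> (0, 128, 255, 0)
```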

Dynamic Structures
==================
.. highlight:: python
.. index:: CvMemStorage
.. _CvMemStorage:
CvMemStorage
------------
`id=0.11586833925 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/py/core/CvMemStorage>`__
.. class:: CvMemStorage
Growing memory storage.
Many OpenCV functions use a given storage area for their results
and working storage. These storage areas can be created using
:ref:`CreateMemStorage`. OpenCV Python tracks the objects occupying a
CvMemStorage, and automatically releases the CvMemStorage when there are
no objects referring to it. For this reason, there is no explicit function
to release a CvMemStorage.
.. doctest::
>>> import cv
>>> image = cv.LoadImageM("building.jpg", cv.CV_LOAD_IMAGE_GRAYSCALE)
>>> seq = cv.FindContours(image, cv.CreateMemStorage(), cv.CV_RETR_TREE, cv.CV_CHAIN_APPROX_SIMPLE)
>>> del seq # associated storage is also released
..
.. index:: CvSeq
.. _CvSeq:
CvSeq
-----
`id=0.0938210237552 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/py/core/CvSeq>`__
.. class:: CvSeq
Growable sequence of elements.
Many OpenCV functions return a CvSeq object. The CvSeq object is a sequence, so these are all legal:
::
seq = cv.FindContours(scribble, storage, cv.CV_RETR_CCOMP, cv.CV_CHAIN_APPROX_SIMPLE)
# seq is a sequence of point pairs
print len(seq)
# FindContours returns a sequence of (x,y) points, so to print them out:
for (x,y) in seq:
print (x,y)
print seq[10] # entry at index 10 in the sequence
print seq[::-1] # reversed sequence
print sorted(list(seq)) # sorted sequence
..
Also, a CvSeq object has methods
``h_next()``
,
``h_prev()``
,
``v_next()``
and
``v_prev()``
.
Some OpenCV functions (for example
:ref:`FindContours`
) can return multiple CvSeq objects, connected by these relations.
In this case the methods return the other sequences. If no relation between sequences exists, then the methods return
``None``
.
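For example, :ref:`FindContours` with ``CV_RETR_TREE`` links sequences on the same level through ``h_next()`` and children (such as holes inside a contour) through ``v_next()``. A toy sketch of walking such a tree, with a hypothetical ``Node`` class standing in for CvSeq:

```python
class Node:
    # Minimal stand-in for a CvSeq with h_next()/v_next() relations.
    def __init__(self, name, h=None, v=None):
        self.name, self._h, self._v = name, h, v
    def h_next(self): return self._h   # next sequence on the same level
    def v_next(self): return self._v   # first child sequence

def walk(seq, depth=0):
    # Depth-first traversal, in the same spirit as walking FindContours output.
    out = []
    while seq is not None:
        out.append("  " * depth + seq.name)
        out.extend(walk(seq.v_next(), depth + 1))
        seq = seq.h_next()
    return out

inner = Node("hole")
outer1 = Node("outer1", h=Node("outer2"), v=inner)
print("\n".join(walk(outer1)))
```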
.. index:: CvSet
.. _CvSet:
CvSet
-----
`id=0.165386903844 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/py/core/CvSet>`__
.. class:: CvSet
Collection of nodes.
Some OpenCV functions return a CvSet object. The CvSet object is iterable, for example:
::
for i in s:
print i
print set(s)
print list(s)
..
.. index:: CloneSeq
.. _CloneSeq:
CloneSeq
--------
`id=0.893022984961 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/py/core/CloneSeq>`__
.. function:: CloneSeq(seq,storage)-> CvSeq
Creates a copy of a sequence.
:param seq: Sequence
:type seq: :class:`CvSeq`
:param storage: The destination storage block to hold the new sequence header and the copied data, if any. If it is NULL, the function uses the storage block containing the input sequence.
:type storage: :class:`CvMemStorage`
The function makes a complete copy of the input sequence and returns it.
.. index:: CreateMemStorage
.. _CreateMemStorage:
CreateMemStorage
----------------
`id=0.141261875659 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/py/core/CreateMemStorage>`__
.. function:: CreateMemStorage(blockSize = 0) -> memstorage
Creates memory storage.
:param blockSize: Size of the storage blocks in bytes. If it is 0, the block size is set to a default value - currently it is about 64K.
:type blockSize: int
The function creates an empty memory storage. See
:ref:`CvMemStorage`
description.
.. index:: SeqInvert
.. _SeqInvert:
SeqInvert
---------
`id=0.420185773758 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/py/core/SeqInvert>`__
.. function:: SeqInvert(seq)-> None
Reverses the order of sequence elements.
:param seq: Sequence
:type seq: :class:`CvSeq`
The function reverses the sequence in-place - makes the first element go last, the last element go first and so forth.
.. index:: SeqRemove
.. _SeqRemove:
SeqRemove
---------
`id=0.405976799419 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/py/core/SeqRemove>`__
.. function:: SeqRemove(seq,index)-> None
Removes an element from the middle of a sequence.
:param seq: Sequence
:type seq: :class:`CvSeq`
:param index: Index of removed element
:type index: int
The function removes the element at the given
index. If the index is out of range the function reports an error. An
attempt to remove an element from an empty sequence is a special
case of this situation. The function removes an element by shifting
the sequence elements between the nearest end of the sequence and the
``index``
-th position, not counting the latter.
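The "shift from the nearest end" behaviour described above can be sketched on a plain Python list (illustrative only; the real ``SeqRemove`` operates on the sequence's internal storage blocks):

```python
def seq_remove(elems, index):
    # Remove elems[index] by shifting the elements between the nearest
    # end of the list and the index, mimicking cv.SeqRemove's strategy.
    if not 0 <= index < len(elems):
        raise IndexError("index is out of range")
    if index < len(elems) - index - 1:
        # closer to the front: shift the prefix right by one, drop the head
        elems[1:index + 1] = elems[0:index]
        del elems[0]
    else:
        # closer to the back: shift the suffix left by one, drop the tail
        elems[index:-1] = elems[index + 1:]
        del elems[-1]
    return elems

print(seq_remove([10, 20, 30, 40], 1))  # -> [10, 30, 40]
```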
.. index:: SeqRemoveSlice
.. _SeqRemoveSlice:
SeqRemoveSlice
--------------
`id=0.589674828285 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/py/core/SeqRemoveSlice>`__
.. function:: SeqRemoveSlice(seq,slice)-> None
Removes a sequence slice.
:param seq: Sequence
:type seq: :class:`CvSeq`
:param slice: The part of the sequence to remove
:type slice: :class:`CvSlice`
The function removes a slice from the sequence.

Utility and System Functions and Macros
=======================================
.. highlight:: python
Error Handling
--------------
Errors in argument type cause a ``TypeError`` exception.
OpenCV errors raise a ``cv.error`` exception.
For example, a function argument of the wrong type produces a ``TypeError``:
.. doctest::
>>> import cv
>>> cv.LoadImage(4)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: argument 1 must be string, not int
..
A function given invalid argument values raises a ``cv.error``:
.. doctest::
>>> cv.CreateMat(-1, -1, cv.CV_8UC1)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
error: Non-positive width or height
..
.. index:: GetTickCount
.. _GetTickCount:
GetTickCount
------------
`id=0.81706194546 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/py/core/GetTickCount>`__
.. function:: GetTickCount() -> long
Returns the number of ticks.
The function returns the number of ticks starting from some platform-dependent event (number of CPU ticks since startup, number of milliseconds since 1970, etc.). The function is useful for accurate measurement of function/user-code execution time. To convert the number of ticks to time units, use
:ref:`GetTickFrequency`
.
.. index:: GetTickFrequency
.. _GetTickFrequency:
GetTickFrequency
----------------
`id=0.967317163838 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/py/core/GetTickFrequency>`__
.. function:: GetTickFrequency() -> long
Returns the number of ticks per microsecond.
The function returns the number of ticks per microsecond. Thus, the quotient of
:ref:`GetTickCount`
and
:ref:`GetTickFrequency`
will give the number of microseconds starting from the platform-dependent event.
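The usual pattern is to divide a tick-count difference by the tick frequency. The same idea, sketched with the standard ``time`` module standing in for the tick counter (this is an analogy, not the cv API):

```python
import time

def elapsed_us(fn, *args):
    # Measure fn's run time in microseconds, mirroring the
    # GetTickCount()/GetTickFrequency() pattern from the cv API.
    t0 = time.perf_counter()   # stand-in for cv.GetTickCount()
    fn(*args)
    t1 = time.perf_counter()
    # perf_counter reports seconds, so scale instead of dividing
    # by a ticks-per-microsecond frequency.
    return (t1 - t0) * 1e6

print("%.1f us" % elapsed_us(sum, range(100000)))
```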

XML/YAML Persistence
====================
.. highlight:: python
.. index:: Load
.. _Load:
Load
----
`id=0.778290631419 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/py/core/Load>`__
.. function:: Load(filename,storage=NULL,name=NULL)-> generic
Loads an object from a file.
:param filename: File name
:type filename: str
:param storage: Memory storage for dynamic structures, such as :ref:`CvSeq` or :ref:`CvGraph` . It is not used for matrices or images.
:type storage: :class:`CvMemStorage`
:param name: Optional object name. If it is NULL, the first top-level object in the storage will be loaded.
:type name: str
The function loads an object from a file. It provides a
simple interface to
:ref:`Read`
. After the object is loaded, the file
storage is closed and all the temporary buffers are deleted. Thus,
to load a dynamic structure, such as a sequence, contour, or graph, one
should pass a valid memory storage destination to the function.
.. index:: Save
.. _Save:
Save
----
`id=0.295047298682 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/py/core/Save>`__
.. function:: Save(filename,structPtr,name=NULL,comment=NULL)-> None
Saves an object to a file.
:param filename: File name
:type filename: str
:param structPtr: Object to save
:type structPtr: :class:`generic`
:param name: Optional object name. If it is NULL, the name will be formed from ``filename`` .
:type name: str
:param comment: Optional comment to put in the beginning of the file
:type comment: str
The function saves an object to a file. It provides a simple interface to
:ref:`Write`
.

*******************************************************
features2d. Feature Detection and Descriptor Extraction
*******************************************************
.. toctree::
:maxdepth: 2
features2d_feature_detection_and_description

Feature detection and description
=================================
.. highlight:: python
* **image** The image. Keypoints (corners) will be detected on this.
* **keypoints** Keypoints detected on the image.
* **threshold** Threshold on difference between intensity of center pixel and
pixels on circle around this pixel. See description of the algorithm.
* **nonmaxSupression** If it is true then non-maximum suppression will be applied to the detected corners (keypoints).
.. index:: CvSURFPoint
.. _CvSURFPoint:
CvSURFPoint
-----------
`id=0.785092904945 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/py/features2d/CvSURFPoint>`__
.. class:: CvSURFPoint
A SURF keypoint, represented as a tuple
``((x, y), laplacian, size, dir, hessian)``
.
.. attribute:: x
x-coordinate of the feature within the image
.. attribute:: y
y-coordinate of the feature within the image
.. attribute:: laplacian
-1, 0 or +1. Sign of the Laplacian at the point. It can be used to speed up feature comparison, since features with Laplacians of different signs cannot match
.. attribute:: size
size of the feature
.. attribute:: dir
orientation of the feature: 0..360 degrees
.. attribute:: hessian
value of the hessian (can be used to approximately estimate the feature strengths; see also params.hessianThreshold)
.. index:: ExtractSURF
.. _ExtractSURF:
ExtractSURF
-----------
`id=0.999928834286 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/py/features2d/ExtractSURF>`__
.. function:: ExtractSURF(image,mask,storage,params)-> (keypoints,descriptors)
Extracts Speeded Up Robust Features from an image.
:param image: The input 8-bit grayscale image
:type image: :class:`CvArr`
:param mask: The optional input 8-bit mask. The features are only found in the areas that contain more than 50 % of non-zero mask pixels
:type mask: :class:`CvArr`
:param keypoints: sequence of keypoints.
:type keypoints: :class:`CvSeq` of :class:`CvSURFPoint`
:param descriptors: sequence of descriptors. Each SURF descriptor is a list of floats, of length 64 or 128.
:type descriptors: :class:`CvSeq` of list of float
:param storage: Memory storage where keypoints and descriptors will be stored
:type storage: :class:`CvMemStorage`
:param params: Various algorithm parameters in a tuple ``(extended, hessianThreshold, nOctaves, nOctaveLayers)`` :
* **extended** 0 means basic descriptors (64 elements each), 1 means extended descriptors (128 elements each)
* **hessianThreshold** only features with a hessian larger than this are extracted. A good default value is ~300-500 (it can depend on the average local contrast and sharpness of the image). The user can further filter out some features based on their hessian values and other characteristics.
* **nOctaves** the number of octaves to be used for extraction. With each next octave the feature size is doubled (3 by default)
* **nOctaveLayers** The number of layers within each octave (4 by default)
:type params: :class:`CvSURFParams`
The function ``ExtractSURF`` finds robust features in the image, as
described in [Bay06]_. For each feature it returns its location, size,
orientation and, optionally, the descriptor, basic or extended. The function
can be used for object tracking and localization, image stitching, etc.
To extract strong SURF features from an image:
.. doctest::
>>> import cv
>>> im = cv.LoadImageM("building.jpg", cv.CV_LOAD_IMAGE_GRAYSCALE)
>>> (keypoints, descriptors) = cv.ExtractSURF(im, None, cv.CreateMemStorage(), (0, 30000, 3, 1))
>>> print len(keypoints), len(descriptors)
6 6
>>> for ((x, y), laplacian, size, dir, hessian) in keypoints:
... print "x=%d y=%d laplacian=%d size=%d dir=%f hessian=%f" % (x, y, laplacian, size, dir, hessian)
x=30 y=27 laplacian=-1 size=31 dir=69.778503 hessian=36979.789062
x=296 y=197 laplacian=1 size=33 dir=111.081039 hessian=31514.349609
x=296 y=266 laplacian=1 size=32 dir=107.092300 hessian=31477.908203
x=254 y=284 laplacian=1 size=31 dir=279.137360 hessian=34169.800781
x=498 y=525 laplacian=-1 size=33 dir=278.006592 hessian=31002.759766
x=777 y=281 laplacian=1 size=70 dir=167.940964 hessian=35538.363281
..
.. index:: GetStarKeypoints
.. _GetStarKeypoints:
GetStarKeypoints
----------------
`id=0.373658080009 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/py/features2d/GetStarKeypoints>`__
.. function:: GetStarKeypoints(image,storage,params)-> keypoints
Retrieves keypoints using the StarDetector algorithm.
:param image: The input 8-bit grayscale image
:type image: :class:`CvArr`
:param storage: Memory storage where the keypoints will be stored
:type storage: :class:`CvMemStorage`
:param params: Various algorithm parameters in a tuple ``(maxSize, responseThreshold, lineThresholdProjected, lineThresholdBinarized, suppressNonmaxSize)`` :
* **maxSize** maximal size of the features detected. The following values of the parameter are supported: 4, 6, 8, 11, 12, 16, 22, 23, 32, 45, 46, 64, 90, 128
* **responseThreshold** threshold for the approximated laplacian, used to eliminate weak features
* **lineThresholdProjected** another threshold for laplacian to eliminate edges
* **lineThresholdBinarized** another threshold for the feature scale to eliminate edges
* **suppressNonmaxSize** linear size of a pixel neighborhood for non-maxima suppression
:type params: :class:`CvStarDetectorParams`
The function GetStarKeypoints extracts keypoints that are local
scale-space extrema. The scale-space is constructed by computing
approximate values of Laplacians with different sigmas at each
pixel. Instead of using pyramids, a popular approach to save computing
time, all of the Laplacians are computed at each pixel of the original
high-resolution image. But each approximate Laplacian value is computed
in O(1) time regardless of the sigma, thanks to the use of integral
images. The algorithm is based on the paper [Agrawal08]_, but instead
of a square, hexagon or octagon it uses an 8-end star shape, hence the name,
consisting of overlapping upright and tilted squares.
Each keypoint is represented by a tuple
``((x, y), size, response)``
:
* **x, y** Screen coordinates of the keypoint
* **size** feature size, up to ``maxSize``
* **response** approximated laplacian value for the keypoint

*************************************
highgui. High-level GUI and Media I/O
*************************************
While OpenCV was designed for use in full-scale
applications and can be used within functionally rich UI frameworks (such as Qt, WinForms or Cocoa) or without any UI at all, sometimes there is a need to try some functionality quickly and visualize the results. This is what the HighGUI module has been designed for.
It provides an easy interface to:

* create and manipulate windows that can display images and "remember" their content (no need to handle repaint events from the OS)
* add trackbars to the windows, handle simple mouse events as well as keyboard commands
* read and write images to/from disk or memory
* read video from a camera or file and write video to a file
.. toctree::
:maxdepth: 2
highgui_user_interface
highgui_reading_and_writing_images_and_video

Reading and Writing Images and Video
====================================
.. highlight:: python
.. index:: LoadImage
.. _LoadImage:
LoadImage
---------
`id=0.709113485048 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/py/highgui/LoadImage>`__
.. function:: LoadImage(filename, iscolor=CV_LOAD_IMAGE_COLOR) -> iplimage
Loads an image from a file as an IplImage.
:param filename: Name of file to be loaded.
:type filename: str
:param iscolor: Specific color type of the loaded image:
* **CV_LOAD_IMAGE_COLOR** the loaded image is forced to be a 3-channel color image
* **CV_LOAD_IMAGE_GRAYSCALE** the loaded image is forced to be grayscale
* **CV_LOAD_IMAGE_UNCHANGED** the loaded image will be loaded as is.
:type iscolor: int
The function
``cvLoadImage``
loads an image from the specified file and returns the pointer to the loaded image. Currently the following file formats are supported:
* Windows bitmaps - BMP, DIB
* JPEG files - JPEG, JPG, JPE
* Portable Network Graphics - PNG
* Portable image format - PBM, PGM, PPM
* Sun rasters - SR, RAS
* TIFF files - TIFF, TIF
Note that in the current implementation the alpha channel, if any, is stripped from the output image, e.g. 4-channel RGBA image will be loaded as RGB.
.. index:: LoadImageM
.. _LoadImageM:
LoadImageM
----------
`id=0.915605899901 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/py/highgui/LoadImageM>`__
.. function:: LoadImageM(filename, iscolor=CV_LOAD_IMAGE_COLOR) -> cvmat
Loads an image from a file as a CvMat.
:param filename: Name of file to be loaded.
:type filename: str
:param iscolor: Specific color type of the loaded image:
* **CV_LOAD_IMAGE_COLOR** the loaded image is forced to be a 3-channel color image
* **CV_LOAD_IMAGE_GRAYSCALE** the loaded image is forced to be grayscale
* **CV_LOAD_IMAGE_UNCHANGED** the loaded image will be loaded as is.
:type iscolor: int
The function
``cvLoadImageM``
loads an image from the specified file and returns the pointer to the loaded image.
Currently the following file formats are supported:
* Windows bitmaps - BMP, DIB
* JPEG files - JPEG, JPG, JPE
* Portable Network Graphics - PNG
* Portable image format - PBM, PGM, PPM
* Sun rasters - SR, RAS
* TIFF files - TIFF, TIF
Note that in the current implementation the alpha channel, if any, is stripped from the output image, e.g. 4-channel RGBA image will be loaded as RGB.
.. index:: SaveImage
.. _SaveImage:
SaveImage
---------
`id=0.496487139898 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/py/highgui/SaveImage>`__
.. function:: SaveImage(filename,image)-> None
Saves an image to a specified file.
:param filename: Name of the file.
:type filename: str
:param image: Image to be saved.
:type image: :class:`CvArr`
The function
``cvSaveImage``
saves the image to the specified file. The image format is chosen based on the
``filename``
extension, see
:ref:`LoadImage`
. Only 8-bit single-channel or 3-channel (with 'BGR' channel order) images can be saved using this function. If the format, depth or channel order is different, use
``cvCvtScale``
and
``cvCvtColor``
to convert it before saving, or use universal
``cvSave``
to save the image to XML or YAML format.
.. index:: CvCapture
.. _CvCapture:
CvCapture
---------
`id=0.364337205432 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/py/highgui/CvCapture>`__
.. class:: CvCapture
Video capturing structure.
The structure
``CvCapture``
does not have a public interface and is used only as a parameter for video capturing functions.
.. index:: CaptureFromCAM
.. _CaptureFromCAM:
CaptureFromCAM
--------------
`id=0.68934258142 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/py/highgui/CaptureFromCAM>`__
.. function:: CaptureFromCAM(index) -> CvCapture
Initializes capturing a video from a camera.
:param index: Index of the camera to be used. If there is only one camera, or it does not matter which camera is used, -1 may be passed.
:type index: int
The function
``cvCaptureFromCAM``
allocates and initializes the CvCapture structure for reading a video stream from the camera. Currently two camera interfaces can be used on Windows: Video for Windows (VFW) and Matrox Imaging Library (MIL); and two on Linux: V4L and FireWire (IEEE1394).
To release the structure, use
:ref:`ReleaseCapture`
.
.. index:: CaptureFromFile
.. _CaptureFromFile:
CaptureFromFile
---------------
`id=0.627099214181 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/py/highgui/CaptureFromFile>`__
.. function:: CaptureFromFile(filename) -> CvCapture
Initializes capturing a video from a file.
:param filename: Name of the video file.
:type filename: str
The function
``cvCaptureFromFile``
allocates and initializes the CvCapture structure for reading the video stream from the specified file. Which codecs and file formats are supported depends on the back end library. On Windows HighGui uses Video for Windows (VfW), on Linux ffmpeg is used and on Mac OS X the back end is QuickTime. See VideoCodecs for some discussion on what to expect and how to prepare your video files.
When the allocated structure is no longer needed, it should be released by the
:ref:`ReleaseCapture`
function.
.. index:: GetCaptureProperty
.. _GetCaptureProperty:
GetCaptureProperty
------------------
`id=0.295657731336 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/py/highgui/GetCaptureProperty>`__
.. function:: GetCaptureProperty(capture, property_id)->double
Gets video capturing properties.
:param capture: video capturing structure.
:type capture: :class:`CvCapture`
:param property_id: Property identifier. Can be one of the following:
:type property_id: int
* **CV_CAP_PROP_POS_MSEC** Film current position in milliseconds or video capture timestamp
* **CV_CAP_PROP_POS_FRAMES** 0-based index of the frame to be decoded/captured next
* **CV_CAP_PROP_POS_AVI_RATIO** Relative position of the video file (0 - start of the film, 1 - end of the film)
* **CV_CAP_PROP_FRAME_WIDTH** Width of the frames in the video stream
* **CV_CAP_PROP_FRAME_HEIGHT** Height of the frames in the video stream
* **CV_CAP_PROP_FPS** Frame rate
* **CV_CAP_PROP_FOURCC** 4-character code of codec
* **CV_CAP_PROP_FRAME_COUNT** Number of frames in the video file
* **CV_CAP_PROP_FORMAT** The format of the Mat objects returned by retrieve()
* **CV_CAP_PROP_MODE** A backend-specific value indicating the current capture mode
* **CV_CAP_PROP_BRIGHTNESS** Brightness of the image (only for cameras)
* **CV_CAP_PROP_CONTRAST** Contrast of the image (only for cameras)
* **CV_CAP_PROP_SATURATION** Saturation of the image (only for cameras)
* **CV_CAP_PROP_HUE** Hue of the image (only for cameras)
* **CV_CAP_PROP_GAIN** Gain of the image (only for cameras)
* **CV_CAP_PROP_EXPOSURE** Exposure (only for cameras)
* **CV_CAP_PROP_CONVERT_RGB** Boolean flag indicating whether images should be converted to RGB
* **CV_CAP_PROP_WHITE_BALANCE** Currently unsupported
* **CV_CAP_PROP_RECTIFICATION** TOWRITE (note: only supported by DC1394 v 2.x backend currently)
The function
``cvGetCaptureProperty``
retrieves the specified property of the camera or video file.
.. index:: GrabFrame
.. _GrabFrame:
GrabFrame
---------
`id=0.664037861142 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/py/highgui/GrabFrame>`__
.. function:: GrabFrame(capture) -> int
Grabs the frame from a camera or file.
:param capture: video capturing structure.
:type capture: :class:`CvCapture`
The function
``cvGrabFrame``
grabs the frame from a camera or file. The grabbed frame is stored internally. The purpose of this function is to grab the frame
*quickly*
so that synchronization can occur if it has to read from several cameras simultaneously. The grabbed frames are not exposed because they may be stored in a compressed format (as defined by the camera/driver). To retrieve the grabbed frame,
:ref:`RetrieveFrame`
should be used.
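The grab-then-retrieve split matters when several cameras must be sampled at (nearly) the same instant: grab all frames first, decode afterwards. A mock sketch of that ordering (the ``FakeCapture`` class is hypothetical; with real captures you would call ``cv.GrabFrame`` and ``cv.RetrieveFrame``):

```python
class FakeCapture:
    # Hypothetical stand-in for CvCapture: grab() latches a raw frame,
    # retrieve() performs the (slow) decode of whatever was grabbed.
    def __init__(self, name):
        self.name, self.raw = name, None
    def grab(self):
        self.raw = "raw-%s" % self.name      # fast: just store the frame
    def retrieve(self):
        return "decoded-%s" % self.raw       # slow: decompress later

cams = [FakeCapture("cam0"), FakeCapture("cam1")]
for c in cams:                          # phase 1: grab from every camera quickly
    c.grab()
frames = [c.retrieve() for c in cams]   # phase 2: decode at leisure
print(frames)
```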
.. index:: QueryFrame
.. _QueryFrame:
QueryFrame
----------
`id=0.15232451714 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/py/highgui/QueryFrame>`__
.. function:: QueryFrame(capture) -> iplimage
Grabs and returns a frame from a camera or file.
:param capture: video capturing structure.
:type capture: :class:`CvCapture`
The function
``cvQueryFrame``
grabs a frame from a camera or video file, decompresses it and returns it. This function is just a combination of
:ref:`GrabFrame`
and
:ref:`RetrieveFrame`
, but in one call. The returned image should not be released or modified by the user. In the event of an error, the return value may be NULL.
.. index:: RetrieveFrame
.. _RetrieveFrame:
RetrieveFrame
-------------
`id=0.978271497895 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/py/highgui/RetrieveFrame>`__
.. function:: RetrieveFrame(capture) -> iplimage
Gets the image grabbed with cvGrabFrame.
:param capture: video capturing structure.
:type capture: :class:`CvCapture`
The function
``cvRetrieveFrame``
returns the pointer to the image grabbed with the
:ref:`GrabFrame`
function. The returned image should not be released or modified by the user. In the event of an error, the return value may be NULL.
.. index:: SetCaptureProperty
.. _SetCaptureProperty:
SetCaptureProperty
------------------
`id=0.42439239326 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/py/highgui/SetCaptureProperty>`__
.. function:: SetCaptureProperty(capture, property_id,value)->None
Sets video capturing properties.
:param capture: video capturing structure.
:type capture: :class:`CvCapture`
:param property_id: property identifier. Can be one of the following:
:type property_id: int
* **CV_CAP_PROP_POS_MSEC** Film current position in milliseconds or video capture timestamp
* **CV_CAP_PROP_POS_FRAMES** 0-based index of the frame to be decoded/captured next
* **CV_CAP_PROP_POS_AVI_RATIO** Relative position of the video file (0 - start of the film, 1 - end of the film)
* **CV_CAP_PROP_FRAME_WIDTH** Width of the frames in the video stream
* **CV_CAP_PROP_FRAME_HEIGHT** Height of the frames in the video stream
* **CV_CAP_PROP_FPS** Frame rate
* **CV_CAP_PROP_FOURCC** 4-character code of codec
* **CV_CAP_PROP_FRAME_COUNT** Number of frames in the video file
* **CV_CAP_PROP_FORMAT** The format of the Mat objects returned by retrieve()
* **CV_CAP_PROP_MODE** A backend-specific value indicating the current capture mode
* **CV_CAP_PROP_BRIGHTNESS** Brightness of the image (only for cameras)
* **CV_CAP_PROP_CONTRAST** Contrast of the image (only for cameras)
* **CV_CAP_PROP_SATURATION** Saturation of the image (only for cameras)
* **CV_CAP_PROP_HUE** Hue of the image (only for cameras)
* **CV_CAP_PROP_GAIN** Gain of the image (only for cameras)
* **CV_CAP_PROP_EXPOSURE** Exposure (only for cameras)
* **CV_CAP_PROP_CONVERT_RGB** Boolean flag indicating whether images should be converted to RGB
* **CV_CAP_PROP_WHITE_BALANCE** Currently unsupported
* **CV_CAP_PROP_RECTIFICATION** TOWRITE (note: only supported by DC1394 v 2.x backend currently)
:param value: value of the property.
:type value: float
The function
``cvSetCaptureProperty``
sets the specified property of video capturing. Currently the function supports only video files:
``CV_CAP_PROP_POS_MSEC, CV_CAP_PROP_POS_FRAMES, CV_CAP_PROP_POS_AVI_RATIO``
.
NB: this function currently does nothing when using the latest CVS download on Linux with FFMPEG (the function body is disabled and 0 is returned).
.. index:: CreateVideoWriter
.. _CreateVideoWriter:
CreateVideoWriter
-----------------
`id=0.778639527068 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/py/highgui/CreateVideoWriter>`__
.. function:: CreateVideoWriter(filename, fourcc, fps, frame_size, is_color) -> CvVideoWriter
Creates the video file writer.
:param filename: Name of the output video file.
:type filename: str
:param fourcc: 4-character code of the codec used to compress the frames. For example, ``CV_FOURCC('P','I','M','1')`` is an MPEG-1 codec, ``CV_FOURCC('M','J','P','G')`` is a Motion-JPEG codec, etc.
Under Win32 it is possible to pass -1 in order to choose compression method and additional compression parameters from dialog. Under Win32 if 0 is passed while using an avi filename it will create a video writer that creates an uncompressed avi file.
:type fourcc: int
:param fps: Framerate of the created video stream.
:type fps: float
:param frame_size: Size of the video frames.
:type frame_size: :class:`CvSize`
:param is_color: If it is not zero, the encoder will expect and encode color frames, otherwise it will work with grayscale frames (the flag is currently supported on Windows only).
:type is_color: int
The function
``cvCreateVideoWriter``
creates the video writer structure.
Which codecs and file formats are supported depends on the back end library. On Windows HighGui uses Video for Windows (VfW), on Linux ffmpeg is used and on Mac OS X the back end is QuickTime. See VideoCodecs for some discussion on what to expect.
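The ``CV_FOURCC`` macro packs four characters into a 32-bit integer, least significant byte first; a pure-Python sketch of the equivalent computation:

```python
def cv_fourcc(c1, c2, c3, c4):
    # Pack four characters into one int, least significant byte first,
    # matching the CV_FOURCC macro from the C API.
    return (ord(c1) & 255) | ((ord(c2) & 255) << 8) \
         | ((ord(c3) & 255) << 16) | ((ord(c4) & 255) << 24)

print(cv_fourcc('M', 'J', 'P', 'G'))  # -> 1196444237
```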
.. index:: WriteFrame
.. _WriteFrame:
WriteFrame
----------
`id=0.0385991600269 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/py/highgui/WriteFrame>`__
.. function:: WriteFrame(writer, image)->int
Writes a frame to a video file.
:param writer: Video writer structure
:type writer: :class:`CvVideoWriter`
:param image: The written frame
:type image: :class:`IplImage`
The function
``cvWriteFrame``
writes/appends one frame to a video file.

User Interface
==============
.. highlight:: python
.. index:: CreateTrackbar
.. _CreateTrackbar:
CreateTrackbar
--------------
`id=0.859200002353 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/py/highgui/CreateTrackbar>`__
.. function:: CreateTrackbar(trackbarName, windowName, value, count, onChange) -> None
Creates a trackbar and attaches it to the specified window.
:param trackbarName: Name of the created trackbar.
:type trackbarName: str
:param windowName: Name of the window which will be used as a parent for created trackbar.
:type windowName: str
:param value: Initial value for the slider position, between 0 and ``count`` .
:type value: int
:param count: Maximal position of the slider. Minimal position is always 0.
:type count: int
:param onChange:
OpenCV calls ``onChange`` every time the slider changes position.
OpenCV will call it as ``func(x)`` where ``x`` is the new position of the slider.
:type onChange: :class:`PyCallableObject`
The function ``cvCreateTrackbar`` creates a trackbar (a.k.a. slider or range control) with the specified name and range, assigns a variable to be synchronized with the trackbar position and specifies a callback function to be called on trackbar position change. The created trackbar is displayed at the top of the given window.
**[Qt Backend Only]**
qt-specific details:
* **windowName** Name of the window which will be used as a parent for created trackbar. Can be NULL if the trackbar should be attached to the control panel.
The created trackbar is displayed at the bottom of the given window if
*windowName*
is correctly provided, or displayed on the control panel if
*windowName*
is NULL.
By clicking on the label of each trackbar, it is possible to edit the trackbar's value manually for a more accurate control of it.
.. index:: DestroyAllWindows
.. _DestroyAllWindows:
DestroyAllWindows
-----------------
`id=0.386578572057 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/py/highgui/DestroyAllWindows>`__
.. function:: DestroyAllWindows()-> None
Destroys all of the HighGUI windows.
The function
``cvDestroyAllWindows``
destroys all of the opened HighGUI windows.
.. index:: DestroyWindow
.. _DestroyWindow:
DestroyWindow
-------------
`id=0.0256606142145 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/py/highgui/DestroyWindow>`__
.. function:: DestroyWindow(name)-> None
Destroys a window.
:param name: Name of the window to be destroyed.
:type name: str
The function
``cvDestroyWindow``
destroys the window with the given name.
.. index:: GetTrackbarPos
.. _GetTrackbarPos:
GetTrackbarPos
--------------
`id=0.0119794922165 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/py/highgui/GetTrackbarPos>`__
.. function:: GetTrackbarPos(trackbarName,windowName)-> int
Returns the trackbar position.
:param trackbarName: Name of the trackbar.
:type trackbarName: str
:param windowName: Name of the window which is the parent of the trackbar.
:type windowName: str
The function
``cvGetTrackbarPos``
returns the current position of the specified trackbar.
**[Qt Backend Only]**
qt-specific details:
* **windowName** Name of the window which is the parent of the trackbar. Can be NULL if the trackbar is attached to the control panel.
.. index:: MoveWindow
.. _MoveWindow:
MoveWindow
----------
`id=0.0432662100889 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/py/highgui/MoveWindow>`__
.. function:: MoveWindow(name,x,y)-> None
Sets the position of the window.
:param name: Name of the window to be moved.
:type name: str
:param x: New x coordinate of the top-left corner
:type x: int
:param y: New y coordinate of the top-left corner
:type y: int
The function
``cvMoveWindow``
changes the position of the window.
.. index:: NamedWindow
.. _NamedWindow:
NamedWindow
-----------
`id=0.155885062255 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/py/highgui/NamedWindow>`__
.. function:: NamedWindow(name,flags=CV_WINDOW_AUTOSIZE)-> None
Creates a window.
:param name: Name of the window in the window caption that may be used as a window identifier.
:type name: str
:param flags: Flags of the window. Currently the only supported flag is ``CV_WINDOW_AUTOSIZE`` . If this is set, window size is automatically adjusted to fit the displayed image (see :ref:`ShowImage` ), and the user can not change the window size manually.
:type flags: int
The function
``cvNamedWindow``
creates a window which can be used as a placeholder for images and trackbars. Created windows are referred to by their names.
If a window with the same name already exists, the function does nothing.
**[Qt Backend Only]**
qt-specific details:
* **flags** Flags of the window. Currently the supported flags are:
* **CV_WINDOW_NORMAL or CV_WINDOW_AUTOSIZE:** ``CV_WINDOW_NORMAL`` lets the user resize the window, whereas ``CV_WINDOW_AUTOSIZE`` automatically adjusts the window's size to fit the displayed image (see :ref:`ShowImage` ), and the user cannot change the window size manually.
* **CV_WINDOW_FREERATIO or CV_WINDOW_KEEPRATIO:** ``CV_WINDOW_FREERATIO`` adjusts the image with no respect to its ratio, whereas ``CV_WINDOW_KEEPRATIO`` keeps the image's ratio.
* **CV_GUI_NORMAL or CV_GUI_EXPANDED:** ``CV_GUI_NORMAL`` is the old way to draw the window without statusbar and toolbar, whereas ``CV_GUI_EXPANDED`` is the new enhanced GUI.
This parameter is optional. The default flags set for a new window are ``CV_WINDOW_AUTOSIZE`` , ``CV_WINDOW_KEEPRATIO`` , and ``CV_GUI_EXPANDED`` .
However, if you want to modify the flags, you can combine them using the OR operator, i.e.:
::
cvNamedWindow("myWindow", CV_WINDOW_NORMAL | CV_GUI_NORMAL);
..
.. index:: ResizeWindow
.. _ResizeWindow:
ResizeWindow
------------
`id=0.266699312987 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/py/highgui/ResizeWindow>`__
.. function:: ResizeWindow(name,width,height)-> None
Sets the window size.
:param name: Name of the window to be resized.
:type name: str
:param width: New width
:type width: int
:param height: New height
:type height: int
The function
``cvResizeWindow``
changes the size of the window.
.. index:: SetMouseCallback
.. _SetMouseCallback:
SetMouseCallback
----------------
`id=0.299310906828 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/py/highgui/SetMouseCallback>`__
.. function:: SetMouseCallback(windowName, onMouse, param) -> None
Assigns callback for mouse events.
:param windowName: Name of the window.
:type windowName: str
:param onMouse: Callable to be called every time a mouse event occurs in the specified window. This callable should have signature `` Foo(event, x, y, flags, param)-> None ``
where ``event`` is one of ``CV_EVENT_*`` , ``x`` and ``y`` are the coordinates of the mouse pointer in image coordinates (not window coordinates), ``flags`` is a combination of ``CV_EVENT_FLAG_*`` , and ``param`` is a user-defined parameter passed to the ``cvSetMouseCallback`` function call.
:type onMouse: :class:`PyCallableObject`
:param param: User-defined parameter to be passed to the callback function.
:type param: object
The function ``cvSetMouseCallback`` sets the callback function for mouse events occurring within the specified window.
The
``event``
parameter is one of:
* **CV_EVENT_MOUSEMOVE** Mouse movement
* **CV_EVENT_LBUTTONDOWN** Left button down
* **CV_EVENT_RBUTTONDOWN** Right button down
* **CV_EVENT_MBUTTONDOWN** Middle button down
* **CV_EVENT_LBUTTONUP** Left button up
* **CV_EVENT_RBUTTONUP** Right button up
* **CV_EVENT_MBUTTONUP** Middle button up
* **CV_EVENT_LBUTTONDBLCLK** Left button double click
* **CV_EVENT_RBUTTONDBLCLK** Right button double click
* **CV_EVENT_MBUTTONDBLCLK** Middle button double click
The
``flags``
parameter is a combination of :
* **CV_EVENT_FLAG_LBUTTON** Left button pressed
* **CV_EVENT_FLAG_RBUTTON** Right button pressed
* **CV_EVENT_FLAG_MBUTTON** Middle button pressed
* **CV_EVENT_FLAG_CTRLKEY** Control key pressed
* **CV_EVENT_FLAG_SHIFTKEY** Shift key pressed
* **CV_EVENT_FLAG_ALTKEY** Alt key pressed
.. index:: SetTrackbarPos
.. _SetTrackbarPos:
SetTrackbarPos
--------------
`id=0.722744232916 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/py/highgui/SetTrackbarPos>`__
.. function:: SetTrackbarPos(trackbarName,windowName,pos)-> None
Sets the trackbar position.
:param trackbarName: Name of the trackbar.
:type trackbarName: str
:param windowName: Name of the window which is the parent of trackbar.
:type windowName: str
:param pos: New position.
:type pos: int
The function
``cvSetTrackbarPos``
sets the position of the specified trackbar.
**[Qt Backend Only]**
qt-specific details:
* **windowName** Name of the window which is the parent of trackbar. Can be NULL if the trackbar is attached to the control panel.
.. index:: ShowImage
.. _ShowImage:
ShowImage
---------
`id=0.260802502296 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/py/highgui/ShowImage>`__
.. function:: ShowImage(name,image)-> None
Displays the image in the specified window
:param name: Name of the window.
:type name: str
:param image: Image to be shown.
:type image: :class:`CvArr`
The function
``cvShowImage``
displays the image in the specified window. If the window was created with the
``CV_WINDOW_AUTOSIZE``
flag then the image is shown with its original size, otherwise the image is scaled to fit in the window. The function may scale the image, depending on its depth:
*
If the image is 8-bit unsigned, it is displayed as is.
*
If the image is 16-bit unsigned or 32-bit integer, the pixels are divided by 256. That is, the value range [0,255*256] is mapped to [0,255].
*
If the image is 32-bit floating-point, the pixel values are multiplied by 255. That is, the value range [0,1] is mapped to [0,255].
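The depth-dependent scaling above can be sketched in plain Python (the string depth tags are only for illustration, not part of the API):

```python
def display_value(pixel, depth):
    # Map a pixel value to the 8-bit display range, following the
    # depth-dependent rules listed above.
    if depth == '8u':            # 8-bit unsigned: shown as is
        return pixel
    if depth in ('16u', '32s'):  # [0, 255*256] -> [0, 255]
        return pixel // 256
    if depth == '32f':           # [0, 1] -> [0, 255]
        return pixel * 255
    raise ValueError('unsupported depth')
```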
.. index:: WaitKey
.. _WaitKey:
WaitKey
-------
`id=0.742095797983 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/py/highgui/WaitKey>`__
.. function:: WaitKey(delay=0)-> int
Waits for a pressed key.
:param delay: Delay in milliseconds.
:type delay: int
The function ``cvWaitKey`` waits for a key event indefinitely (:math:`\texttt{delay} \leq 0`) or for ``delay`` milliseconds. It returns the code of the pressed key or -1 if no key was pressed before the specified time had elapsed.
**Note:**
This function is the only method in HighGUI that can fetch and handle events, so it needs to be called periodically for normal event processing, unless HighGUI is used within some environment that takes care of event processing.
**[Qt Backend Only]**
qt-specific details:
With the current Qt implementation, this is the only way to process events such as window repaints, and so on.
*************************
imgproc. Image Processing
*************************
.. toctree::
:maxdepth: 2
imgproc_histograms
imgproc_image_filtering
imgproc_geometric_image_transformations
imgproc_miscellaneous_image_transformations
imgproc_structural_analysis_and_shape_descriptors
imgproc_planar_subdivisions
imgproc_motion_analysis_and_object_tracking
imgproc_feature_detection
imgproc_object_detection
Feature Detection
=================
.. highlight:: python
.. index:: Canny
.. _Canny:
Canny
-----
`id=0.573160740956 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/py/imgproc/Canny>`__
.. function:: Canny(image,edges,threshold1,threshold2,aperture_size=3)-> None
Implements the Canny algorithm for edge detection.
:param image: Single-channel input image
:type image: :class:`CvArr`
:param edges: Single-channel image to store the edges found by the function
:type edges: :class:`CvArr`
:param threshold1: The first threshold
:type threshold1: float
:param threshold2: The second threshold
:type threshold2: float
:param aperture_size: Aperture parameter for the Sobel operator (see :ref:`Sobel` )
:type aperture_size: int
The function finds the edges on the input image
``image``
and marks them in the output image
``edges``
using the Canny algorithm. The smallest value between
``threshold1``
and
``threshold2``
is used for edge linking, the largest value is used to find the initial segments of strong edges.
.. index:: CornerEigenValsAndVecs
.. _CornerEigenValsAndVecs:
CornerEigenValsAndVecs
----------------------
`id=0.769586068428 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/py/imgproc/CornerEigenValsAndVecs>`__
.. function:: CornerEigenValsAndVecs(image,eigenvv,blockSize,aperture_size=3)-> None
Calculates eigenvalues and eigenvectors of image blocks for corner detection.
:param image: Input image
:type image: :class:`CvArr`
:param eigenvv: Image to store the results. It must be 6 times wider than the input image
:type eigenvv: :class:`CvArr`
:param blockSize: Neighborhood size (see discussion)
:type blockSize: int
:param aperture_size: Aperture parameter for the Sobel operator (see :ref:`Sobel` )
:type aperture_size: int
For every pixel, the function ``cvCornerEigenValsAndVecs`` considers a :math:`\texttt{blockSize} \times \texttt{blockSize}` neighborhood S(p). It calculates the covariance matrix of derivatives over the neighborhood as:
.. math::
M = \begin{bmatrix} \sum _{S(p)}(dI/dx)^2 & \sum _{S(p)}(dI/dx \cdot dI/dy) \\ \sum _{S(p)}(dI/dx \cdot dI/dy) & \sum _{S(p)}(dI/dy)^2 \end{bmatrix}
After that it finds the eigenvectors and eigenvalues of the matrix and stores them into the destination image in the form
:math:`(\lambda_1, \lambda_2, x_1, y_1, x_2, y_2)`
where
* :math:`\lambda_1, \lambda_2`
are the eigenvalues of
:math:`M`
; not sorted
* :math:`x_1, y_1`
are the eigenvectors corresponding to
:math:`\lambda_1`
* :math:`x_2, y_2`
are the eigenvectors corresponding to
:math:`\lambda_2`
.. index:: CornerHarris
.. _CornerHarris:
CornerHarris
------------
`id=0.619256620171 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/py/imgproc/CornerHarris>`__
.. function:: CornerHarris(image,harris_dst,blockSize,aperture_size=3,k=0.04)-> None
Harris corner detector.
:param image: Input image
:type image: :class:`CvArr`
:param harris_dst: Image to store the Harris detector responses. Should have the same size as ``image``
:type harris_dst: :class:`CvArr`
:param blockSize: Neighborhood size (see the discussion of :ref:`CornerEigenValsAndVecs` )
:type blockSize: int
:param aperture_size: Aperture parameter for the Sobel operator (see :ref:`Sobel` ).
:type aperture_size: int
:param k: Harris detector free parameter. See the formula below
:type k: float
The function runs the Harris corner detector on the image. Similarly to
:ref:`CornerMinEigenVal`
and
:ref:`CornerEigenValsAndVecs`
, for each pixel it calculates a
:math:`2\times2`
gradient covariance matrix
:math:`M`
over a
:math:`\texttt{blockSize} \times \texttt{blockSize}`
neighborhood. Then, it stores
.. math::
det(M) - k \, trace(M)^2
to the destination image. Corners in the image can be found as the local maxima of the destination image.
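For a single pixel, the response written to ``harris_dst`` can be computed from the three distinct entries of the symmetric :math:`2\times2` matrix :math:`M`. A minimal sketch (the helper is illustrative, not an OpenCV function):

```python
def harris_response(a, b, c, k=0.04):
    # M = [[a, b], [b, c]]; the Harris measure is det(M) - k * trace(M)^2.
    det = a * c - b * b
    trace = a + c
    return det - k * trace ** 2
```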
.. index:: CornerMinEigenVal
.. _CornerMinEigenVal:
CornerMinEigenVal
-----------------
`id=0.523904183834 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/py/imgproc/CornerMinEigenVal>`__
.. function:: CornerMinEigenVal(image,eigenval,blockSize,aperture_size=3)-> None
Calculates the minimal eigenvalue of gradient matrices for corner detection.
:param image: Input image
:type image: :class:`CvArr`
:param eigenval: Image to store the minimal eigenvalues. Should have the same size as ``image``
:type eigenval: :class:`CvArr`
:param blockSize: Neighborhood size (see the discussion of :ref:`CornerEigenValsAndVecs` )
:type blockSize: int
:param aperture_size: Aperture parameter for the Sobel operator (see :ref:`Sobel` ).
:type aperture_size: int
The function is similar to :ref:`CornerEigenValsAndVecs` but it calculates and stores only the minimal eigenvalue of the derivative covariance matrix for every pixel, i.e. :math:`min(\lambda_1, \lambda_2)` in terms of the previous function.
.. index:: FindCornerSubPix
.. _FindCornerSubPix:
FindCornerSubPix
----------------
`id=0.448453276565 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/py/imgproc/FindCornerSubPix>`__
.. function:: FindCornerSubPix(image,corners,win,zero_zone,criteria)-> corners
Refines the corner locations.
:param image: Input image
:type image: :class:`CvArr`
:param corners: Initial coordinates of the input corners as a list of (x, y) pairs
:type corners: sequence of (float, float)
:param win: Half of the side length of the search window. For example, if ``win`` =(5,5), then a :math:`5*2+1 \times 5*2+1 = 11 \times 11` search window would be used
:type win: :class:`CvSize`
:param zero_zone: Half of the size of the dead region in the middle of the search zone over which the summation in the formula below is not done. It is used sometimes to avoid possible singularities of the autocorrelation matrix. The value of (-1,-1) indicates that there is no such size
:type zero_zone: :class:`CvSize`
:param criteria: Criteria for termination of the iterative process of corner refinement. That is, the process of corner position refinement stops either after a certain number of iterations or when a required accuracy is achieved. The ``criteria`` may specify either of or both the maximum number of iteration and the required accuracy
:type criteria: :class:`CvTermCriteria`
The function iterates to find the sub-pixel accurate location of corners, or radial saddle points, as shown in the picture below.
It returns the refined coordinates as a list of (x, y) pairs.
.. image:: ../pics/cornersubpix.png
Sub-pixel accurate corner locator is based on the observation that every vector from the center
:math:`q`
to a point
:math:`p`
located within a neighborhood of
:math:`q`
is orthogonal to the image gradient at
:math:`p`
subject to image and measurement noise. Consider the expression:
.. math::
\epsilon _i = {DI_{p_i}}^T \cdot (q - p_i)
where
:math:`{DI_{p_i}}`
is the image gradient at the one of the points
:math:`p_i`
in a neighborhood of
:math:`q`
. The value of
:math:`q`
is to be found such that
:math:`\epsilon_i`
is minimized. A system of equations may be set up with
:math:`\epsilon_i`
set to zero:
.. math::
\sum _i(DI_{p_i} \cdot {DI_{p_i}}^T) q = \sum _i(DI_{p_i} \cdot {DI_{p_i}}^T \cdot p_i)
where the gradients are summed within a neighborhood ("search window") of
:math:`q`
. Calling the first gradient term
:math:`G`
and the second gradient term
:math:`b`
gives:
.. math::
q = G^{-1} \cdot b
The algorithm sets the center of the neighborhood window at this new center
:math:`q`
and then iterates until the center keeps within a set threshold.
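The system above is only :math:`2\times2`, so one refinement step can be solved directly. A sketch in plain Python, given gradient/position pairs sampled in the search window (the helper name and input format are illustrative):

```python
def refine_corner(gradients_and_points):
    # gradients_and_points: list of ((gx, gy), (px, py)) pairs from the
    # search window; solves sum(D D^T) q = sum(D D^T p) for q.
    G = [[0.0, 0.0], [0.0, 0.0]]
    b = [0.0, 0.0]
    for (gx, gy), (px, py) in gradients_and_points:
        G[0][0] += gx * gx; G[0][1] += gx * gy
        G[1][0] += gx * gy; G[1][1] += gy * gy
        b[0] += gx * gx * px + gx * gy * py
        b[1] += gx * gy * px + gy * gy * py
    det = G[0][0] * G[1][1] - G[0][1] * G[1][0]
    qx = (G[1][1] * b[0] - G[0][1] * b[1]) / det
    qy = (G[0][0] * b[1] - G[1][0] * b[0]) / det
    return qx, qy
```

In the full algorithm this solve is repeated with the window recentered at each new ``q`` until the center stabilizes.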
.. index:: GoodFeaturesToTrack
.. _GoodFeaturesToTrack:
GoodFeaturesToTrack
-------------------
`id=0.0875265840344 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/py/imgproc/GoodFeaturesToTrack>`__
.. function:: GoodFeaturesToTrack(image,eigImage,tempImage,cornerCount,qualityLevel,minDistance,mask=NULL,blockSize=3,useHarris=0,k=0.04)-> corners
Determines strong corners on an image.
:param image: The source 8-bit or floating-point 32-bit, single-channel image
:type image: :class:`CvArr`
:param eigImage: Temporary floating-point 32-bit image, the same size as ``image``
:type eigImage: :class:`CvArr`
:param tempImage: Another temporary image, the same size and format as ``eigImage``
:type tempImage: :class:`CvArr`
:param cornerCount: number of corners to detect
:type cornerCount: int
:param qualityLevel: Multiplier for the max/min eigenvalue; specifies the minimal accepted quality of image corners
:type qualityLevel: float
:param minDistance: Limit specifying the minimum possible distance between the returned corners; Euclidean distance is used
:type minDistance: float
:param mask: Region of interest. The function selects points either in the specified region or in the whole image if the mask is NULL
:type mask: :class:`CvArr`
:param blockSize: Size of the averaging block, passed to the underlying :ref:`CornerMinEigenVal` or :ref:`CornerHarris` used by the function
:type blockSize: int
:param useHarris: If nonzero, Harris operator ( :ref:`CornerHarris` ) is used instead of default :ref:`CornerMinEigenVal`
:type useHarris: int
:param k: Free parameter of Harris detector; used only if ( :math:`\texttt{useHarris} != 0` )
:type k: float
The function finds corners with large eigenvalues in the image. The function first calculates the minimal
eigenvalue for every source image pixel using the
:ref:`CornerMinEigenVal`
function and stores them in
``eigImage``
. Then it performs
non-maxima suppression (only the local maxima in
:math:`3\times 3`
neighborhood
are retained). The next step rejects the corners with the minimal
eigenvalue less than
:math:`\texttt{qualityLevel} \cdot max(\texttt{eigImage}(x,y))`
.
Finally, the function ensures that the distance between any two corners is not smaller than
``minDistance``
. The weaker corners (with a smaller min eigenvalue) that are too close to the stronger corners are rejected.
Note that if the function is called with different values ``A`` and ``B`` of the parameter ``qualityLevel``, and ``A`` > ``B``, the array of returned corners with ``qualityLevel=A`` will be a prefix of the output corners array with ``qualityLevel=B``.
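The rejection steps described above (quality threshold, then minimum pairwise distance, strongest first) can be sketched in plain Python; ``candidates`` holds ``(response, x, y)`` triples and the helper name is illustrative:

```python
def select_corners(candidates, quality_level, min_distance):
    # Reject corners below quality_level * max response, then keep
    # the strongest corners while enforcing a minimum distance.
    if not candidates:
        return []
    threshold = quality_level * max(r for r, _, _ in candidates)
    strong = sorted((c for c in candidates if c[0] >= threshold), reverse=True)
    kept = []
    for r, x, y in strong:
        if all((x - kx) ** 2 + (y - ky) ** 2 >= min_distance ** 2
               for _, kx, ky in kept):
            kept.append((r, x, y))
    return [(x, y) for _, x, y in kept]
```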
.. index:: HoughLines2
.. _HoughLines2:
HoughLines2
-----------
`id=0.925466467327 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/py/imgproc/HoughLines2>`__
.. function:: HoughLines2(image,storage,method,rho,theta,threshold,param1=0,param2=0)-> lines
Finds lines in a binary image using a Hough transform.
:param image: The 8-bit, single-channel, binary source image. In the case of a probabilistic method, the image is modified by the function
:type image: :class:`CvArr`
:param storage: The storage for the lines that are detected. It can
be a memory storage (in this case a sequence of lines is created in
the storage and returned by the function) or single row/single column
matrix (CvMat*) of a particular type (see below) to which the lines'
parameters are written. The matrix header is modified by the function
so its ``cols`` or ``rows`` will contain the number of lines
detected. If ``storage`` is a matrix and the actual number
of lines exceeds the matrix size, the maximum possible number of lines
is returned (in the case of the standard Hough transform the lines are sorted
by the accumulator value)
:type storage: :class:`CvMemStorage`
:param method: The Hough transform variant, one of the following:
* **CV_HOUGH_STANDARD** classical or standard Hough transform. Every line is represented by two floating-point numbers :math:`(\rho, \theta)` , where :math:`\rho` is a distance between (0,0) point and the line, and :math:`\theta` is the angle between x-axis and the normal to the line. Thus, the matrix must be (the created sequence will be) of ``CV_32FC2`` type
* **CV_HOUGH_PROBABILISTIC** probabilistic Hough transform (more efficient if the picture contains a few long linear segments). It returns line segments rather than the whole line. Each segment is represented by starting and ending points, and the matrix must be (the created sequence will be) of ``CV_32SC4`` type
* **CV_HOUGH_MULTI_SCALE** multi-scale variant of the classical Hough transform. The lines are encoded the same way as ``CV_HOUGH_STANDARD``
:type method: int
:param rho: Distance resolution in pixel-related units
:type rho: float
:param theta: Angle resolution measured in radians
:type theta: float
:param threshold: Threshold parameter. A line is returned by the function if the corresponding accumulator value is greater than ``threshold``
:type threshold: int
:param param1: The first method-dependent parameter:
* For the classical Hough transform it is not used (0).
* For the probabilistic Hough transform it is the minimum line length.
* For the multi-scale Hough transform it is the divisor for the distance resolution :math:`\rho` . (The coarse distance resolution will be :math:`\rho` and the accurate resolution will be :math:`(\rho / \texttt{param1})` ).
:type param1: float
:param param2: The second method-dependent parameter:
* For the classical Hough transform it is not used (0).
* For the probabilistic Hough transform it is the maximum gap between line segments lying on the same line to treat them as a single line segment (i.e. to join them).
* For the multi-scale Hough transform it is the divisor for the angle resolution :math:`\theta` . (The coarse angle resolution will be :math:`\theta` and the accurate resolution will be :math:`(\theta / \texttt{param2})` ).
:type param2: float
The function implements a few variants of the Hough transform for line detection.
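In the ``CV_HOUGH_STANDARD`` representation a point :math:`(x, y)` lies on the line :math:`(\rho, \theta)` exactly when :math:`x \cos\theta + y \sin\theta = \rho`. A small sketch of that membership test (the helper is illustrative):

```python
import math

def point_on_line(rho, theta, x, y, tol=1e-6):
    # (rho, theta) normal form: rho is the distance from the origin to
    # the line, theta the angle of the normal, as in CV_HOUGH_STANDARD.
    return abs(x * math.cos(theta) + y * math.sin(theta) - rho) < tol
```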
.. index:: PreCornerDetect
.. _PreCornerDetect:
PreCornerDetect
---------------
`id=0.420590326716 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/py/imgproc/PreCornerDetect>`__
.. function:: PreCornerDetect(image,corners,apertureSize=3)-> None
Calculates the feature map for corner detection.
:param image: Input image
:type image: :class:`CvArr`
:param corners: Image to store the corner candidates
:type corners: :class:`CvArr`
:param apertureSize: Aperture parameter for the Sobel operator (see :ref:`Sobel` )
:type apertureSize: int
The function calculates the function
.. math::
D_x^2 D_{yy} + D_y^2 D_{xx} - 2 D_x D_y D_{xy}
where
:math:`D_?`
denotes one of the first image derivatives and
:math:`D_{??}`
denotes a second image derivative.
The corners can be found as local maxima of the function below:
.. include:: /Users/vp/Projects/ocv/opencv/doc/python_fragments/precornerdetect.py
:literal:
Geometric Image Transformations
===============================
.. highlight:: python
The functions in this section perform various geometrical transformations of 2D images. That is, they do not change the image content, but deform the pixel grid, and map this deformed grid to the destination image. In fact, to avoid sampling artifacts, the mapping is done in the reverse order, from destination to the source. That is, for each pixel
:math:`(x, y)`
of the destination image, the functions compute coordinates of the corresponding "donor" pixel in the source image and copy the pixel value, that is:
.. math::
\texttt{dst} (x,y)= \texttt{src} (f_x(x,y), f_y(x,y))
In the case when the user specifies the forward mapping:
:math:`\left<g_x, g_y\right>: \texttt{src} \rightarrow \texttt{dst}`
, the OpenCV functions first compute the corresponding inverse mapping:
:math:`\left<f_x, f_y\right>: \texttt{dst} \rightarrow \texttt{src}`
and then use the above formula.
The actual implementations of the geometrical transformations, from the most generic
:ref:`Remap`
and to the simplest and the fastest
:ref:`Resize`
, need to solve the 2 main problems with the above formula:
#.
extrapolation of non-existing pixels. Similarly to the filtering functions, described in the previous section, for some
:math:`(x,y)`
one of
:math:`f_x(x,y)`
or
:math:`f_y(x,y)`
, or they both, may fall outside of the image, in which case some extrapolation method needs to be used. OpenCV provides the same selection of the extrapolation methods as in the filtering functions, but also an additional method
``BORDER_TRANSPARENT``
, which means that the corresponding pixels in the destination image will not be modified at all.
#.
interpolation of pixel values. Usually
:math:`f_x(x,y)`
and
:math:`f_y(x,y)`
are floating-point numbers (i.e.
:math:`\left<f_x, f_y\right>`
can be an affine or perspective transformation, or radial lens distortion correction etc.), so pixel values at fractional coordinates need to be retrieved. In the simplest case the coordinates can be just rounded to the nearest integer coordinates and the corresponding pixel used, which is called nearest-neighbor interpolation. However, a better result can be achieved by using more sophisticated
`interpolation methods <http://en.wikipedia.org/wiki/Multivariate_interpolation>`_
, where a polynomial function is fit into some neighborhood of the computed pixel
:math:`(f_x(x,y), f_y(x,y))`
and then the value of the polynomial at
:math:`(f_x(x,y), f_y(x,y))`
is taken as the interpolated pixel value. In OpenCV you can choose between several interpolation methods, see
:ref:`Resize`
.
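The reverse-order mapping with nearest-neighbor interpolation can be sketched on a list-of-lists image; here pixels whose source coordinates fall outside the image are left at a fill value, loosely approximating a constant border (all names are illustrative):

```python
def remap_nearest(src, f_x, f_y, fill=0):
    # For each destination pixel (x, y), fetch src at the inverse-mapped
    # coordinates (f_x(x, y), f_y(x, y)), rounded to the nearest pixel.
    h, w = len(src), len(src[0])
    dst = [[fill] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            sx, sy = int(round(f_x(x, y))), int(round(f_y(x, y)))
            if 0 <= sx < w and 0 <= sy < h:
                dst[y][x] = src[sy][sx]
    return dst
```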
.. index:: GetRotationMatrix2D
.. _GetRotationMatrix2D:
GetRotationMatrix2D
-------------------
`id=0.155746043393 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/py/imgproc/GetRotationMatrix2D>`__
.. function:: GetRotationMatrix2D(center,angle,scale,mapMatrix)-> None
Calculates the affine matrix of 2D rotation.
:param center: Center of the rotation in the source image
:type center: :class:`CvPoint2D32f`
:param angle: The rotation angle in degrees. Positive values mean counter-clockwise rotation (the coordinate origin is assumed to be the top-left corner)
:type angle: float
:param scale: Isotropic scale factor
:type scale: float
:param mapMatrix: Pointer to the destination :math:`2\times 3` matrix
:type mapMatrix: :class:`CvMat`
The function
``cv2DRotationMatrix``
calculates the following matrix:
.. math::
\begin{bmatrix} \alpha & \beta & (1- \alpha ) \cdot \texttt{center.x} - \beta \cdot \texttt{center.y} \\ - \beta & \alpha & \beta \cdot \texttt{center.x} + (1- \alpha ) \cdot \texttt{center.y} \end{bmatrix}
where
.. math::
\alpha = \texttt{scale} \cdot cos( \texttt{angle} ), \beta = \texttt{scale} \cdot sin( \texttt{angle} )
The transformation maps the rotation center to itself. If this is not the purpose, the shift should be adjusted.
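A plain-Python sketch of the matrix ``cv2DRotationMatrix`` produces, with the translation chosen so that the rotation center maps to itself (the helper name is illustrative):

```python
import math

def rotation_matrix_2d(center, angle_degrees, scale):
    # alpha = scale*cos(angle), beta = scale*sin(angle); the third
    # column shifts the result so that `center` is a fixed point.
    a = scale * math.cos(math.radians(angle_degrees))
    b = scale * math.sin(math.radians(angle_degrees))
    cx, cy = center
    return [[a, b, (1 - a) * cx - b * cy],
            [-b, a, b * cx + (1 - a) * cy]]
```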
.. index:: GetAffineTransform
.. _GetAffineTransform:
GetAffineTransform
------------------
`id=0.131853152013 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/py/imgproc/GetAffineTransform>`__
.. function:: GetAffineTransform(src,dst,mapMatrix)-> None
Calculates the affine transform from 3 corresponding points.
:param src: Coordinates of 3 triangle vertices in the source image
:type src: :class:`CvPoint2D32f`
:param dst: Coordinates of the 3 corresponding triangle vertices in the destination image
:type dst: :class:`CvPoint2D32f`
:param mapMatrix: Pointer to the destination :math:`2 \times 3` matrix
:type mapMatrix: :class:`CvMat`
The function ``cvGetAffineTransform`` calculates the matrix of an affine transform such that:
.. math::
\begin{bmatrix} x'_i \\ y'_i \end{bmatrix} = \texttt{mapMatrix} \cdot \begin{bmatrix} x_i \\ y_i \\ 1 \end{bmatrix}
where
.. math::
dst(i)=(x'_i,y'_i),
src(i)=(x_i, y_i),
i=0,1,2
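Each row of ``mapMatrix`` is the solution of a :math:`3\times3` linear system built from the three point pairs. A pure-Python sketch using Cramer's rule (the helper is illustrative):

```python
def affine_from_3_points(src, dst):
    # Solve a*x_i + b*y_i + c = d_i for each output coordinate k,
    # where d_i = dst[i][k], via Cramer's rule on [[x, y, 1], ...].
    (x0, y0), (x1, y1), (x2, y2) = src
    det = x0 * (y1 - y2) - y0 * (x1 - x2) + (x1 * y2 - x2 * y1)
    rows = []
    for k in (0, 1):
        d0, d1, d2 = dst[0][k], dst[1][k], dst[2][k]
        a = (d0 * (y1 - y2) - y0 * (d1 - d2) + (d1 * y2 - d2 * y1)) / det
        b = (x0 * (d1 - d2) - d0 * (x1 - x2) + (x1 * d2 - x2 * d1)) / det
        c = (x0 * (y1 * d2 - y2 * d1) - y0 * (x1 * d2 - x2 * d1)
             + d0 * (x1 * y2 - x2 * y1)) / det
        rows.append([a, b, c])
    return rows
```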
.. index:: GetPerspectiveTransform
.. _GetPerspectiveTransform:
GetPerspectiveTransform
-----------------------
`id=0.411609579387 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/py/imgproc/GetPerspectiveTransform>`__
.. function:: GetPerspectiveTransform(src,dst,mapMatrix)-> None
Calculates the perspective transform from 4 corresponding points.
:param src: Coordinates of 4 quadrangle vertices in the source image
:type src: :class:`CvPoint2D32f`
:param dst: Coordinates of the 4 corresponding quadrangle vertices in the destination image
:type dst: :class:`CvPoint2D32f`
:param mapMatrix: Pointer to the destination :math:`3\times 3` matrix
:type mapMatrix: :class:`CvMat`
The function ``cvGetPerspectiveTransform`` calculates the matrix of a perspective transform such that:
.. math::
\begin{bmatrix} x'_i \\ y'_i \end{bmatrix} = \texttt{mapMatrix} \cdot \begin{bmatrix} x_i \\ y_i \\ 1 \end{bmatrix}
where
.. math::
dst(i)=(x'_i,y'_i),
src(i)=(x_i, y_i),
i=0,1,2,3
.. index:: GetQuadrangleSubPix
.. _GetQuadrangleSubPix:
GetQuadrangleSubPix
-------------------
`id=0.522776327076 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/py/imgproc/GetQuadrangleSubPix>`__
.. function:: GetQuadrangleSubPix(src,dst,mapMatrix)-> None
Retrieves the pixel quadrangle from an image with sub-pixel accuracy.
:param src: Source image
:type src: :class:`CvArr`
:param dst: Extracted quadrangle
:type dst: :class:`CvArr`
:param mapMatrix: The transformation :math:`2 \times 3` matrix :math:`[A|b]` (see the discussion)
:type mapMatrix: :class:`CvMat`
The function
``cvGetQuadrangleSubPix``
extracts pixels from
``src``
at sub-pixel accuracy and stores them to
``dst``
as follows:
.. math::
dst(x, y)= src( A_{11} x' + A_{12} y' + b_1, A_{21} x' + A_{22} y' + b_2)
where
.. math::
x'=x- \frac{(width(dst)-1)}{2} ,
y'=y- \frac{(height(dst)-1)}{2}
and
.. math::
\texttt{mapMatrix} = \begin{bmatrix} A_{11} & A_{12} & b_1 \\ A_{21} & A_{22} & b_2 \end{bmatrix}
The values of pixels at non-integer coordinates are retrieved using bilinear interpolation. When the function needs pixels outside of the image, it uses replication border mode to reconstruct the values. Every channel of multiple-channel images is processed independently.
.. index:: GetRectSubPix
.. _GetRectSubPix:
GetRectSubPix
-------------
`id=0.947278759693 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/py/imgproc/GetRectSubPix>`__
.. function:: GetRectSubPix(src,dst,center)-> None
Retrieves the pixel rectangle from an image with sub-pixel accuracy.
:param src: Source image
:type src: :class:`CvArr`
:param dst: Extracted rectangle
:type dst: :class:`CvArr`
:param center: Floating point coordinates of the extracted rectangle center within the source image. The center must be inside the image
:type center: :class:`CvPoint2D32f`
The function
``cvGetRectSubPix``
extracts pixels from
``src``
:
.. math::
dst(x, y) = src(x + \texttt{center.x} - (width( \texttt{dst} )-1)*0.5, y + \texttt{center.y} - (height( \texttt{dst} )-1)*0.5)
where the values of the pixels at non-integer coordinates are retrieved
using bilinear interpolation. Every channel of multiple-channel
images is processed independently. While the rectangle center
must be inside the image, parts of the rectangle may be
outside. In this case, the replication border mode is used to get
pixel values beyond the image boundaries.
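The extraction formula above can be sketched in pure Python (a simplified, hypothetical re-implementation for single-channel images given as lists of lists; the real function operates on ``CvArr`` data):

```python
import math

def get_rect_subpix(src, w, h, cx, cy):
    """Sketch of cvGetRectSubPix: extract a w x h patch centered at the
    floating-point location (cx, cy), using bilinear interpolation and
    replicated borders."""
    H, W = len(src), len(src[0])
    def px(x, y):                          # replication border mode
        return src[min(max(y, 0), H - 1)][min(max(x, 0), W - 1)]
    def bilinear(x, y):
        x0, y0 = int(math.floor(x)), int(math.floor(y))
        fx, fy = x - x0, y - y0
        return ((1-fx)*(1-fy)*px(x0, y0)   + fx*(1-fy)*px(x0+1, y0) +
                (1-fx)*fy    *px(x0, y0+1) + fx*fy    *px(x0+1, y0+1))
    return [[bilinear(x + cx - (w-1)*0.5, y + cy - (h-1)*0.5)
             for x in range(w)] for y in range(h)]

img = [[0, 10], [20, 30]]
print(get_rect_subpix(img, 1, 1, 0.5, 0.5))  # -> [[15.0]]
```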
.. index:: LogPolar
.. _LogPolar:
LogPolar
--------
`id=0.991629705277 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/py/imgproc/LogPolar>`__
.. function:: LogPolar(src,dst,center,M,flags=CV_INTER_LINEAR+CV_WARP_FILL_OUTLIERS)-> None
Remaps an image to log-polar space.
:param src: Source image
:type src: :class:`CvArr`
:param dst: Destination image
:type dst: :class:`CvArr`
:param center: The transformation center; where the output precision is maximal
:type center: :class:`CvPoint2D32f`
:param M: Magnitude scale parameter. See below
:type M: float
:param flags: A combination of interpolation methods and the following optional flags:
* **CV_WARP_FILL_OUTLIERS** fills all of the destination image pixels. If some of them correspond to outliers in the source image, they are set to zero
* **CV_WARP_INVERSE_MAP** See below
:type flags: int
The function
``cvLogPolar``
transforms the source image using the following transformation:
Forward transformation (
``CV_WARP_INVERSE_MAP``
is not set):
.. math::
dst( \phi , \rho ) = src(x,y)
Inverse transformation (
``CV_WARP_INVERSE_MAP``
is set):
.. math::
dst(x,y) = src( \phi , \rho )
where
.. math::
\rho = M \cdot \log{\sqrt{x^2 + y^2}}, \quad \phi = \operatorname{atan}(y/x)
The function emulates the human "foveal" vision and can be used for fast scale and rotation-invariant template matching, for object tracking and so forth.
The function can not operate in-place.
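A minimal sketch of the forward mapping for a single pixel (pure Python, using the same ``M`` magnitude scale as above):

```python
import math

def log_polar_coords(x, y, cx, cy, M):
    """Forward log-polar mapping of one pixel (x, y) around the center
    (cx, cy): rho = M*log(sqrt(dx^2+dy^2)), phi = atan2(dy, dx)."""
    dx, dy = x - cx, y - cy
    rho = M * math.log(math.hypot(dx, dy))
    phi = math.atan2(dy, dx)
    return rho, phi

# A point at distance e from the center, on the positive x axis
rho, phi = log_polar_coords(10 + math.e, 10, 10, 10, M=1.0)
print(round(rho, 6), phi)  # -> 1.0 0.0
```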
.. index:: Remap
.. _Remap:
Remap
-----
`id=0.129979785029 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/py/imgproc/Remap>`__
.. function:: Remap(src,dst,mapx,mapy,flags=CV_INTER_LINEAR+CV_WARP_FILL_OUTLIERS,fillval=(0,0,0,0))-> None
Applies a generic geometrical transformation to the image.
:param src: Source image
:type src: :class:`CvArr`
:param dst: Destination image
:type dst: :class:`CvArr`
:param mapx: The map of x-coordinates (``CV_32FC1`` image)
:type mapx: :class:`CvArr`
:param mapy: The map of y-coordinates (``CV_32FC1`` image)
:type mapy: :class:`CvArr`
:param flags: A combination of interpolation method and the following optional flag(s):
* **CV_WARP_FILL_OUTLIERS** fills all of the destination image pixels. If some of them correspond to outliers in the source image, they are set to ``fillval``
:type flags: int
:param fillval: A value used to fill outliers
:type fillval: :class:`CvScalar`
The function
``cvRemap``
transforms the source image using the specified map:
.. math::
\texttt{dst} (x,y) = \texttt{src} ( \texttt{mapx} (x,y), \texttt{mapy} (x,y))
Similar to other geometrical transformations, some interpolation method (specified by user) is used to extract pixels with non-integer coordinates.
Note that the function can not operate in-place.
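The per-pixel lookup can be sketched as follows (pure Python, nearest-neighbor sampling only; the real function supports the interpolation methods listed above):

```python
def remap(src, mapx, mapy, fillval=0):
    """Sketch of cvRemap with nearest-neighbor lookup:
    dst[y][x] = src[mapy[y][x]][mapx[y][x]], or fillval for outliers."""
    H, W = len(src), len(src[0])
    dst = []
    for y in range(len(mapx)):
        row = []
        for x in range(len(mapx[0])):
            sx, sy = int(round(mapx[y][x])), int(round(mapy[y][x]))
            # outliers (coordinates outside src) get fillval
            row.append(src[sy][sx] if 0 <= sx < W and 0 <= sy < H else fillval)
        dst.append(row)
    return dst

src  = [[1, 2], [3, 4]]
mapx = [[1, 0], [1, 0]]          # horizontal flip
mapy = [[0, 0], [1, 1]]
print(remap(src, mapx, mapy))    # -> [[2, 1], [4, 3]]
```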
.. index:: Resize
.. _Resize:
Resize
------
`id=0.923811087592 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/py/imgproc/Resize>`__
.. function:: Resize(src,dst,interpolation=CV_INTER_LINEAR)-> None
Resizes an image.
:param src: Source image
:type src: :class:`CvArr`
:param dst: Destination image
:type dst: :class:`CvArr`
:param interpolation: Interpolation method:
* **CV_INTER_NN** nearest-neighbor interpolation
* **CV_INTER_LINEAR** bilinear interpolation (used by default)
* **CV_INTER_AREA** resampling using pixel area relation. It is the preferred method for image decimation that gives moire-free results. In terms of zooming it is similar to the ``CV_INTER_NN`` method
* **CV_INTER_CUBIC** bicubic interpolation
:type interpolation: int
The function ``cvResize`` resizes the image ``src`` so that it fits exactly into ``dst``. If ROI is set, the function supports the ROI as usual.
.. index:: WarpAffine
.. _WarpAffine:
WarpAffine
----------
`id=0.314654704506 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/py/imgproc/WarpAffine>`__
.. function:: WarpAffine(src,dst,mapMatrix,flags=CV_INTER_LINEAR+CV_WARP_FILL_OUTLIERS,fillval=(0,0,0,0))-> None
Applies an affine transformation to an image.
:param src: Source image
:type src: :class:`CvArr`
:param dst: Destination image
:type dst: :class:`CvArr`
:param mapMatrix: :math:`2\times 3` transformation matrix
:type mapMatrix: :class:`CvMat`
:param flags: A combination of interpolation methods and the following optional flags:
* **CV_WARP_FILL_OUTLIERS** fills all of the destination image pixels; if some of them correspond to outliers in the source image, they are set to ``fillval``
* **CV_WARP_INVERSE_MAP** indicates that ``matrix`` is inversely
transformed from the destination image to the source and, thus, can be used
directly for pixel interpolation. Otherwise, the function finds
the inverse transform from ``mapMatrix``
:type flags: int
:param fillval: A value used to fill outliers
:type fillval: :class:`CvScalar`
The function
``cvWarpAffine``
transforms the source image using the specified matrix:
.. math::
dst(x',y') = src(x,y)
where
.. math::
\begin{matrix} \begin{bmatrix} x' \\ y' \end{bmatrix} = \texttt{mapMatrix} \cdot \begin{bmatrix} x \\ y \\ 1 \end{bmatrix} & \mbox{if CV\_WARP\_INVERSE\_MAP is not set} \\ \begin{bmatrix} x \\ y \end{bmatrix} = \texttt{mapMatrix} \cdot \begin{bmatrix} x' \\ y' \\ 1 \end{bmatrix} & \mbox{otherwise} \end{matrix}
The function is similar to
:ref:`GetQuadrangleSubPix`
, but they are not exactly the same.
:ref:`WarpAffine`
requires the input and output images to have the same data type, has larger overhead (so it is not quite suitable for small images), and can leave part of the destination image unchanged. In contrast,
:ref:`GetQuadrangleSubPix`
can extract quadrangles from 8-bit images into a floating-point buffer, has smaller overhead, and always changes the whole destination image content.
Note that the function can not operate in-place.
To transform a sparse set of points, use the
:ref:`Transform`
function from cxcore.
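For sparse points the forward mapping reduces to a matrix-vector product; a pure-Python sketch (``cvTransform`` performs the equivalent operation on ``CvMat`` data):

```python
def transform_points(M, pts):
    """Apply a 2x3 affine matrix M to a list of (x, y) points:
    [x', y'] = M . [x, y, 1]."""
    return [(M[0][0]*x + M[0][1]*y + M[0][2],
             M[1][0]*x + M[1][1]*y + M[1][2]) for (x, y) in pts]

# 90-degree rotation about the origin plus a (5, 0) shift
M = [[0, -1, 5],
     [1,  0, 0]]
print(transform_points(M, [(1, 0), (0, 1)]))  # -> [(5, 1), (4, 0)]
```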
.. index:: WarpPerspective
.. _WarpPerspective:
WarpPerspective
---------------
`id=0.554206520217 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/py/imgproc/WarpPerspective>`__
.. function:: WarpPerspective(src,dst,mapMatrix,flags=CV_INTER_LINEAR+CV_WARP_FILL_OUTLIERS,fillval=(0,0,0,0))-> None
Applies a perspective transformation to an image.
:param src: Source image
:type src: :class:`CvArr`
:param dst: Destination image
:type dst: :class:`CvArr`
:param mapMatrix: :math:`3\times 3` transformation matrix
:type mapMatrix: :class:`CvMat`
:param flags: A combination of interpolation methods and the following optional flags:
* **CV_WARP_FILL_OUTLIERS** fills all of the destination image pixels; if some of them correspond to outliers in the source image, they are set to ``fillval``
* **CV_WARP_INVERSE_MAP** indicates that ``matrix`` is inversely transformed from the destination image to the source and, thus, can be used directly for pixel interpolation. Otherwise, the function finds the inverse transform from ``mapMatrix``
:type flags: int
:param fillval: A value used to fill outliers
:type fillval: :class:`CvScalar`
The function
``cvWarpPerspective``
transforms the source image using the specified matrix:
.. math::
\begin{matrix} \begin{bmatrix} x' \\ y' \end{bmatrix} = \texttt{mapMatrix} \cdot \begin{bmatrix} x \\ y \\ 1 \end{bmatrix} & \mbox{if CV\_WARP\_INVERSE\_MAP is not set} \\ \begin{bmatrix} x \\ y \end{bmatrix} = \texttt{mapMatrix} \cdot \begin{bmatrix} x' \\ y' \\ 1 \end{bmatrix} & \mbox{otherwise} \end{matrix}
Note that the function can not operate in-place.
For a sparse set of points use the
:ref:`PerspectiveTransform`
function from CxCore.

Histograms
==========
.. highlight:: python
.. index:: CvHistogram
.. _CvHistogram:
CvHistogram
-----------
`id=0.182438452658 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/py/imgproc/CvHistogram>`__
.. class:: CvHistogram
Multi-dimensional histogram.
A CvHistogram is a multi-dimensional histogram, created by function
:ref:`CreateHist`
. It has an attribute ``bins``, a :ref:`CvMatND` containing the histogram counts.
.. index:: CalcBackProject
.. _CalcBackProject:
CalcBackProject
---------------
`id=0.913786988566 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/py/imgproc/CalcBackProject>`__
.. function:: CalcBackProject(image,back_project,hist)-> None
Calculates the back projection.
:param image: Source images (though you may pass CvMat** as well)
:type image: sequence of :class:`IplImage`
:param back_project: Destination back projection image of the same type as the source images
:type back_project: :class:`CvArr`
:param hist: Histogram
:type hist: :class:`CvHistogram`
The function calculates the back project of the histogram. For each
tuple of pixels at the same position of all input single-channel images
the function puts the value of the histogram bin, corresponding to the
tuple in the destination image. In terms of statistics, the value of
each output image pixel is the probability of the observed tuple given
the distribution (histogram). For example, to find a red object in the
picture, one may do the following:
#.
Calculate a hue histogram for the red object assuming the image contains only this object. The histogram is likely to have a strong maximum, corresponding to red color.
#.
Calculate back projection of a hue plane of input image where the object is searched, using the histogram. Threshold the image.
#.
Find connected components in the resulting picture and choose the right component using some additional criteria, for example, the largest connected component.
That is the approximate algorithm of the CamShift color object tracker, except that in the 3rd step the CAMSHIFT algorithm is used instead, to locate the object on the back projection given the previous object position.
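Steps 1-2 amount to a per-pixel histogram lookup; a pure-Python sketch for a single hue plane and a 1D histogram (the bin width and values here are hypothetical):

```python
def calc_back_project(hue_plane, hist, bin_width):
    """Sketch of a 1D hue back projection: each output pixel gets the
    histogram count of the bin its hue value falls into."""
    return [[hist[h // bin_width] for h in row] for row in hue_plane]

hist = [0, 9, 1]                         # counts for hue bins [0,60), [60,120), [120,180)
hue  = [[10, 70], [130, 65]]
print(calc_back_project(hue, hist, 60))  # -> [[0, 9], [1, 9]]
```

Thresholding the resulting map then keeps only the locations whose hue was frequent in the model histogram.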
.. index:: CalcBackProjectPatch
.. _CalcBackProjectPatch:
CalcBackProjectPatch
--------------------
`id=0.817068389137 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/py/imgproc/CalcBackProjectPatch>`__
.. function:: CalcBackProjectPatch(images,dst,patch_size,hist,method,factor)-> None
Locates a template within an image by using a histogram comparison.
:param images: Source images (though, you may pass CvMat** as well)
:type images: sequence of :class:`IplImage`
:param dst: Destination image
:type dst: :class:`CvArr`
:param patch_size: Size of the patch slid through the source image
:type patch_size: :class:`CvSize`
:param hist: Histogram
:type hist: :class:`CvHistogram`
:param method: Comparison method, passed to :ref:`CompareHist` (see description of that function)
:type method: int
:param factor: Normalization factor for histograms, will affect the normalization scale of the destination image, pass 1 if unsure
:type factor: float
The function calculates the back projection by comparing histograms of the source image patches with the given histogram. Taking measurement results from some image at each location over ROI creates an array
``image``
. These results might be one or more of hue,
``x``
derivative,
``y``
derivative, Laplacian filter, oriented Gabor filter, etc. Each measurement output is collected into its own separate image. The
``image``
image array is a collection of these measurement images. A multi-dimensional histogram
``hist``
is constructed by sampling from the
``image``
image array. The final histogram is normalized. The
``hist``
histogram has as many dimensions as the number of elements in
``image``
array.
Each new image is measured and then converted into an
``image``
image array over a chosen ROI. Histograms are taken from this
``image``
image in an area covered by a "patch" with an anchor at center as shown in the picture below. The histogram is normalized using the parameter
``norm_factor``
so that it may be compared with
``hist``
. The calculated histogram is compared to the model histogram ``hist`` using the function ``cvCompareHist`` with the comparison method ``method``. The resulting output is placed at the location corresponding to the patch anchor in the probability image ``dst``. This process is repeated as the patch is slid over the ROI. Iterative histogram update, by subtracting the trailing pixels left behind by the patch and adding the newly covered pixels to the histogram, could save a lot of operations, though it is not implemented yet.
Back Project Calculation by Patches
.. image:: ../pics/backprojectpatch.png
.. index:: CalcHist
.. _CalcHist:
CalcHist
--------
`id=0.937336139373 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/py/imgproc/CalcHist>`__
.. function:: CalcHist(image,hist,accumulate=0,mask=NULL)-> None
Calculates the histogram of image(s).
:param image: Source images (though you may pass CvMat** as well)
:type image: sequence of :class:`IplImage`
:param hist: Pointer to the histogram
:type hist: :class:`CvHistogram`
:param accumulate: Accumulation flag. If it is set, the histogram is not cleared in the beginning. This feature allows user to compute a single histogram from several images, or to update the histogram online
:type accumulate: int
:param mask: The operation mask, determines what pixels of the source images are counted
:type mask: :class:`CvArr`
The function calculates the histogram of one or more
single-channel images. The elements of a tuple that is used to increment
a histogram bin are taken at the same location from the corresponding
input images.
.. include:: /Users/vp/Projects/ocv/opencv/doc/python_fragments/calchist.py
:literal:
.. index:: CalcProbDensity
.. _CalcProbDensity:
CalcProbDensity
---------------
`id=0.656540136559 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/py/imgproc/CalcProbDensity>`__
.. function:: CalcProbDensity(hist1,hist2,dst_hist,scale=255)-> None
Divides one histogram by another.
:param hist1: first histogram (the divisor)
:type hist1: :class:`CvHistogram`
:param hist2: second histogram
:type hist2: :class:`CvHistogram`
:param dst_hist: destination histogram
:type dst_hist: :class:`CvHistogram`
:param scale: scale factor for the destination histogram
:type scale: float
The function calculates the object probability density from the two histograms as:
.. math::
\texttt{dist\_hist} (I)= \forkthree{0}{if $\texttt{hist1}(I)=0$}{\texttt{scale}}{if $\texttt{hist1}(I) \ne 0$ and $\texttt{hist2}(I) > \texttt{hist1}(I)$}{\frac{\texttt{hist2}(I) \cdot \texttt{scale}}{\texttt{hist1}(I)}}{if $\texttt{hist1}(I) \ne 0$ and $\texttt{hist2}(I) \le \texttt{hist1}(I)$}
So the destination histogram bin values never exceed ``scale``.
.. index:: ClearHist
.. _ClearHist:
ClearHist
---------
`id=0.0803392460869 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/py/imgproc/ClearHist>`__
.. function:: ClearHist(hist)-> None
Clears the histogram.
:param hist: Histogram
:type hist: :class:`CvHistogram`
The function sets all of the histogram bins to 0 in the case of a dense histogram and removes all histogram bins in the case of a sparse array.
.. index:: CompareHist
.. _CompareHist:
CompareHist
-----------
`id=0.913670879358 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/py/imgproc/CompareHist>`__
.. function:: CompareHist(hist1,hist2,method)->float
Compares two dense histograms.
:param hist1: The first dense histogram
:type hist1: :class:`CvHistogram`
:param hist2: The second dense histogram
:type hist2: :class:`CvHistogram`
:param method: Comparison method, one of the following:
* **CV_COMP_CORREL** Correlation
* **CV_COMP_CHISQR** Chi-Square
* **CV_COMP_INTERSECT** Intersection
* **CV_COMP_BHATTACHARYYA** Bhattacharyya distance
:type method: int
The function compares two dense histograms using the specified method (
:math:`H_1`
denotes the first histogram,
:math:`H_2`
the second):
* Correlation (method=CV\_COMP\_CORREL)
.. math::
d(H_1,H_2) = \frac{\sum_I (H'_1(I) \cdot H'_2(I))}{\sqrt{\sum_I(H'_1(I)^2) \cdot \sum_I(H'_2(I)^2)}}
where
.. math::
H'_k(I) = H_k(I) - \frac{1}{N} \cdot \sum_J H_k(J)
where N is the number of histogram bins.
* Chi-Square (method=CV\_COMP\_CHISQR)
.. math::
d(H_1,H_2) = \sum _I \frac{(H_1(I)-H_2(I))^2}{H_1(I)+H_2(I)}
* Intersection (method=CV\_COMP\_INTERSECT)
.. math::
d(H_1,H_2) = \sum _I \min (H_1(I), H_2(I))
* Bhattacharyya distance (method=CV\_COMP\_BHATTACHARYYA)
.. math::
d(H_1,H_2) = \sqrt{1 - \sum_I \frac{\sqrt{H_1(I) \cdot H_2(I)}}{ \sqrt{ \sum_I H_1(I) \cdot \sum_I H_2(I) }}}
The function returns
:math:`d(H_1, H_2)`
.
Note: the method
``CV_COMP_BHATTACHARYYA``
only works with normalized histograms.
To compare a sparse histogram or more general sparse configurations of weighted points, consider using the
:ref:`CalcEMD2`
function.
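The four metrics can be sketched in pure Python for flat (1D) histograms (a simplified illustration, not the OpenCV implementation):

```python
import math

def compare_hist(h1, h2, method):
    """Sketch of the four cvCompareHist metrics on flat histograms
    given as lists of floats."""
    if method == "correl":
        n = len(h1)
        m1, m2 = sum(h1) / n, sum(h2) / n
        num = sum((a - m1) * (b - m2) for a, b in zip(h1, h2))
        den = math.sqrt(sum((a - m1)**2 for a in h1) *
                        sum((b - m2)**2 for b in h2))
        return num / den
    if method == "chisqr":
        return sum((a - b)**2 / (a + b) for a, b in zip(h1, h2) if a + b)
    if method == "intersect":
        return sum(min(a, b) for a, b in zip(h1, h2))
    if method == "bhattacharyya":          # assumes normalized histograms
        s = sum(math.sqrt(a * b) for a, b in zip(h1, h2))
        t = 1 - s / math.sqrt(sum(h1) * sum(h2))
        return math.sqrt(max(0.0, t))      # clamp tiny negative round-off

h = [0.25, 0.5, 0.25]
print(compare_hist(h, h, "intersect"))       # -> 1.0
print(compare_hist(h, h, "bhattacharyya"))   # -> 0.0
```

Note how identical histograms give the best possible score under each metric: maximal correlation and intersection, zero chi-square and Bhattacharyya distance.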
.. index:: CreateHist
.. _CreateHist:
CreateHist
----------
`id=0.716036535258 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/py/imgproc/CreateHist>`__
.. function:: CreateHist(dims, type, ranges, uniform = 1) -> hist
Creates a histogram.
:param dims: for an N-dimensional histogram, list of length N giving the size of each dimension
:type dims: sequence of int
:param type: Histogram representation format: ``CV_HIST_ARRAY`` means that the histogram data is represented as a multi-dimensional dense array CvMatND; ``CV_HIST_SPARSE`` means that histogram data is represented as a multi-dimensional sparse array CvSparseMat
:type type: int
:param ranges: Array of ranges for the histogram bins. Its meaning depends on the ``uniform`` parameter value. The ranges are used for when the histogram is calculated or backprojected to determine which histogram bin corresponds to which value/tuple of values from the input image(s)
:type ranges: list of tuples of ints
:param uniform: Uniformity flag; if not 0, the histogram has evenly
spaced bins and for every :math:`0<=i<cDims` ``ranges[i]``
is an array of two numbers: lower and upper boundaries for the i-th
histogram dimension.
The whole range [lower,upper] is then split
into ``dims[i]`` equal parts to determine the ``i-th`` input
tuple value ranges for every histogram bin. And if ``uniform=0`` ,
then ``i-th`` element of ``ranges`` array contains ``dims[i]+1`` elements: :math:`\texttt{lower}_0, \texttt{upper}_0,
\texttt{lower}_1, \texttt{upper}_1 = \texttt{lower}_2,
...
\texttt{upper}_{dims[i]-1}`
where :math:`\texttt{lower}_j` and :math:`\texttt{upper}_j`
are lower and upper
boundaries of ``i-th`` input tuple value for ``j-th``
bin, respectively. In either case, the input values that are beyond
the specified range for a histogram bin are not counted by :ref:`CalcHist` and filled with 0 by :ref:`CalcBackProject`
:type uniform: int
The function creates a histogram of the specified
size and returns a pointer to the created histogram. If the array
``ranges``
is 0, the histogram bin ranges must be specified later
via the function
:ref:`SetHistBinRanges`
. Though
:ref:`CalcHist`
and
:ref:`CalcBackProject`
may process 8-bit images without setting bin ranges; in that case they assume the bins are equally spaced over the 0 to 255 range.
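For a uniform dimension, the bin that an input value falls into can be sketched as follows (pure Python, hypothetical helper):

```python
def uniform_bin_index(value, lower, upper, nbins):
    """For a uniform histogram dimension with range [lower, upper)
    split into nbins equal parts, return the bin an input value falls
    into, or None if the value is outside the range (such values are
    not counted by CalcHist)."""
    if not (lower <= value < upper):
        return None
    return int((value - lower) * nbins / (upper - lower))

# Hue range [0, 180) split into 6 bins of width 30
print(uniform_bin_index(45, 0, 180, 6))    # -> 1
print(uniform_bin_index(200, 0, 180, 6))   # -> None
```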
.. index:: GetMinMaxHistValue
.. _GetMinMaxHistValue:
GetMinMaxHistValue
------------------
`id=0.698817534292 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/py/imgproc/GetMinMaxHistValue>`__
.. function:: GetMinMaxHistValue(hist)-> (min_value,max_value,min_idx,max_idx)
Finds the minimum and maximum histogram bins.
:param hist: Histogram
:type hist: :class:`CvHistogram`
:param min_value: Minimum value of the histogram
:type min_value: :class:`CvScalar`
:param max_value: Maximum value of the histogram
:type max_value: :class:`CvScalar`
:param min_idx: Coordinates of the minimum
:type min_idx: sequence of int
:param max_idx: Coordinates of the maximum
:type max_idx: sequence of int
The function finds the minimum and
maximum histogram bins and their positions. All output arguments are
optional. Among several extrema with the same value, the one whose
location is earliest in lexicographical order is returned.
.. index:: NormalizeHist
.. _NormalizeHist:
NormalizeHist
-------------
`id=0.905166705956 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/py/imgproc/NormalizeHist>`__
.. function:: NormalizeHist(hist,factor)-> None
Normalizes the histogram.
:param hist: Pointer to the histogram
:type hist: :class:`CvHistogram`
:param factor: Normalization factor
:type factor: float
The function normalizes the histogram bins by scaling them, such that the sum of the bins becomes equal to
``factor``
.
.. index:: QueryHistValue_1D
.. _QueryHistValue_1D:
QueryHistValue_1D
-----------------
`id=0.26842391983 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/py/imgproc/QueryHistValue_1D>`__
.. function:: QueryHistValue_1D(hist, idx0) -> float
Returns the value from a 1D histogram bin.
:param hist: Histogram
:type hist: :class:`CvHistogram`
:param idx0: bin index 0
:type idx0: int
.. index:: QueryHistValue_2D
.. _QueryHistValue_2D:
QueryHistValue_2D
-----------------
`id=0.149356032534 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/py/imgproc/QueryHistValue_2D>`__
.. function:: QueryHistValue_2D(hist, idx0, idx1) -> float
Returns the value from a 2D histogram bin.
:param hist: Histogram
:type hist: :class:`CvHistogram`
:param idx0: bin index 0
:type idx0: int
:param idx1: bin index 1
:type idx1: int
.. index:: QueryHistValue_3D
.. _QueryHistValue_3D:
QueryHistValue_3D
-----------------
`id=0.846880584809 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/py/imgproc/QueryHistValue_3D>`__
.. function:: QueryHistValue_3D(hist, idx0, idx1, idx2) -> float
Returns the value from a 3D histogram bin.
:param hist: Histogram
:type hist: :class:`CvHistogram`
:param idx0: bin index 0
:type idx0: int
:param idx1: bin index 1
:type idx1: int
:param idx2: bin index 2
:type idx2: int
.. index:: QueryHistValue_nD
.. _QueryHistValue_nD:
QueryHistValue_nD
-----------------
`id=0.36909443826 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/py/imgproc/QueryHistValue_nD>`__
.. function:: QueryHistValue_nD(hist, idx) -> float
Returns the value from an N-dimensional histogram bin.
:param hist: Histogram
:type hist: :class:`CvHistogram`
:param idx: list of indices, of same length as the dimension of the histogram's bin.
:type idx: sequence of int
.. index:: ThreshHist
.. _ThreshHist:
ThreshHist
----------
`id=0.255496509485 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/py/imgproc/ThreshHist>`__
.. function:: ThreshHist(hist,threshold)-> None
Thresholds the histogram.
:param hist: Pointer to the histogram
:type hist: :class:`CvHistogram`
:param threshold: Threshold level
:type threshold: float
The function clears histogram bins that are below the specified threshold.

Image Filtering
===============
.. highlight:: python
Functions and classes described in this section are used to perform various linear or non-linear filtering operations on 2D images (represented as :func:`Mat` objects): for each pixel location :math:`(x,y)` in the source image, some of its (normally rectangular) neighborhood is considered and used to compute the response. In the case of a linear filter the response is a weighted sum of pixel values; in the case of morphological operations it is the minimum or maximum, etc. The computed response is stored in the destination image at the same location :math:`(x,y)`, so the output image is of the same size as the input image. Normally the functions support multi-channel arrays, in which case every channel is processed independently, and the output image also has the same number of channels as the input one.
Another common feature of the functions and classes described in this section is that, unlike simple arithmetic functions, they need to extrapolate the values of some non-existing pixels. For example, if we want to smooth an image using a Gaussian
:math:`3 \times 3`
filter, then during the processing of the left-most pixels in each row we need pixels to the left of them, i.e. outside of the image. We can let those pixels be the same as the left-most image pixels (the "replicated border" extrapolation method), or assume that all the non-existing pixels are zeros (the "constant border" extrapolation method), etc.
.. index:: IplConvKernel
.. _IplConvKernel:
IplConvKernel
-------------
`id=0.589941281227 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/py/imgproc/IplConvKernel>`__
.. class:: IplConvKernel
An IplConvKernel is a rectangular convolution kernel, created by function
:ref:`CreateStructuringElementEx`
.
.. index:: CopyMakeBorder
.. _CopyMakeBorder:
CopyMakeBorder
--------------
`id=0.392095677822 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/py/imgproc/CopyMakeBorder>`__
.. function:: CopyMakeBorder(src,dst,offset,bordertype,value=(0,0,0,0))-> None
Copies an image and makes a border around it.
:param src: The source image
:type src: :class:`CvArr`
:param dst: The destination image
:type dst: :class:`CvArr`
:param offset: Coordinates of the top-left corner (or bottom-left in the case of images with bottom-left origin) of the destination image rectangle where the source image (or its ROI) is copied. Size of the rectangle matches the source image size/ROI size
:type offset: :class:`CvPoint`
:param bordertype: Type of the border to create around the copied source image rectangle; types include:
* **IPL_BORDER_CONSTANT** border is filled with the fixed value, passed as last parameter of the function.
* **IPL_BORDER_REPLICATE** the pixels from the top and bottom rows, the left-most and right-most columns are replicated to fill the border.
(The other two border types from IPL, ``IPL_BORDER_REFLECT`` and ``IPL_BORDER_WRAP`` , are currently unsupported)
:type bordertype: int
:param value: Value of the border pixels if ``bordertype`` is ``IPL_BORDER_CONSTANT``
:type value: :class:`CvScalar`
The function copies the source 2D array into the interior of the destination array and makes a border of the specified type around the copied area. The function is useful when one needs to emulate border type that is different from the one embedded into a specific algorithm implementation. For example, morphological functions, as well as most of other filtering functions in OpenCV, internally use replication border type, while the user may need a zero border or a border, filled with 1's or 255's.
.. index:: CreateStructuringElementEx
.. _CreateStructuringElementEx:
CreateStructuringElementEx
--------------------------
`id=0.317060827729 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/py/imgproc/CreateStructuringElementEx>`__
.. function:: CreateStructuringElementEx(cols,rows,anchorX,anchorY,shape,values=None)-> kernel
Creates a structuring element.
:param cols: Number of columns in the structuring element
:type cols: int
:param rows: Number of rows in the structuring element
:type rows: int
:param anchorX: Relative horizontal offset of the anchor point
:type anchorX: int
:param anchorY: Relative vertical offset of the anchor point
:type anchorY: int
:param shape: Shape of the structuring element; may have the following values:
* **CV_SHAPE_RECT** a rectangular element
* **CV_SHAPE_CROSS** a cross-shaped element
* **CV_SHAPE_ELLIPSE** an elliptic element
* **CV_SHAPE_CUSTOM** a user-defined element. In this case the parameter ``values`` specifies the mask, that is, which neighbors of the pixel must be considered
:type shape: int
:param values: Pointer to the structuring element data, a plain array representing a row-by-row scan of the element matrix. Non-zero values indicate points that belong to the element. If the pointer is ``NULL`` , then all values are considered non-zero, that is, the element is of a rectangular shape. This parameter is considered only if the shape is ``CV_SHAPE_CUSTOM``
:type values: sequence of int
The function CreateStructuringElementEx allocates and fills the structure
``IplConvKernel``
, which can be used as a structuring element in the morphological operations.
.. index:: Dilate
.. _Dilate:
Dilate
------
`id=0.716788417488 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/py/imgproc/Dilate>`__
.. function:: Dilate(src,dst,element=None,iterations=1)-> None
Dilates an image by using a specific structuring element.
:param src: Source image
:type src: :class:`CvArr`
:param dst: Destination image
:type dst: :class:`CvArr`
:param element: Structuring element used for dilation. If it is ``None`` ,
a :math:`3\times 3` rectangular structuring element is used
:type element: :class:`IplConvKernel`
:param iterations: Number of times dilation is applied
:type iterations: int
The function dilates the source image using the specified structuring element that determines the shape of a pixel neighborhood over which the maximum is taken:
.. math::
\max _{(x',y') \in \texttt{element}} src(x+x',y+y')
The function supports the in-place mode. Dilation can be applied several (
``iterations``
) times. For color images, each channel is processed independently.
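One dilation pass with the default 3x3 rectangular element can be sketched in pure Python (replicated borders, single-channel image given as a list of lists):

```python
def dilate3x3(src):
    """Sketch of one dilation pass with a 3x3 rectangular element:
    each output pixel is the maximum over its 3x3 neighborhood,
    with borders replicated."""
    H, W = len(src), len(src[0])
    def px(y, x):
        return src[min(max(y, 0), H - 1)][min(max(x, 0), W - 1)]
    return [[max(px(y + dy, x + dx) for dy in (-1, 0, 1) for dx in (-1, 0, 1))
             for x in range(W)] for y in range(H)]

img = [[0, 0, 0],
       [0, 9, 0],
       [0, 0, 0]]
print(dilate3x3(img))  # -> [[9, 9, 9], [9, 9, 9], [9, 9, 9]]
```

Erosion is the dual operation: replace ``max`` with ``min`` and bright regions shrink instead of grow.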
.. index:: Erode
.. _Erode:
Erode
-----
`id=0.842620131268 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/py/imgproc/Erode>`__
.. function:: Erode(src,dst,element=None,iterations=1)-> None
Erodes an image by using a specific structuring element.
:param src: Source image
:type src: :class:`CvArr`
:param dst: Destination image
:type dst: :class:`CvArr`
:param element: Structuring element used for erosion. If it is ``None`` ,
a :math:`3\times 3` rectangular structuring element is used
:type element: :class:`IplConvKernel`
:param iterations: Number of times erosion is applied
:type iterations: int
The function erodes the source image using the specified structuring element that determines the shape of a pixel neighborhood over which the minimum is taken:
.. math::
\min _{(x',y') \in \texttt{element}} src(x+x',y+y')
The function supports the in-place mode. Erosion can be applied several (
``iterations``
) times. For color images, each channel is processed independently.
.. index:: Filter2D
.. _Filter2D:
Filter2D
--------
`id=0.460981812748 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/py/imgproc/Filter2D>`__
.. function:: Filter2D(src,dst,kernel,anchor=(-1,-1))-> None
Convolves an image with the kernel.
:param src: The source image
:type src: :class:`CvArr`
:param dst: The destination image
:type dst: :class:`CvArr`
:param kernel: Convolution kernel, a single-channel floating point matrix. If you want to apply different kernels to different channels, split the image into separate color planes using :ref:`Split` and process them individually
:type kernel: :class:`CvMat`
:param anchor: The anchor of the kernel that indicates the relative position of a filtered point within the kernel. The anchor should lie within the kernel. The special default value (-1,-1) means that it is at the kernel center
:type anchor: :class:`CvPoint`
The function applies an arbitrary linear filter to the image. In-place operation is supported. When the aperture is partially outside the image, the function interpolates outlier pixel values from the nearest pixels that are inside the image.
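As a sketch of what the filtering computes (not the OpenCV implementation itself), here is a plain-Python correlation with the anchor at the kernel center and outlier pixels replicated from the nearest in-image pixels:

```python
def filter2d(img, kernel):
    """Correlate img with kernel, anchor at the kernel center, borders replicated."""
    h, w = len(img), len(img[0])
    kh, kw = len(kernel), len(kernel[0])
    ay, ax = kh // 2, kw // 2          # anchor at the kernel center
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            s = 0.0
            for ky in range(kh):
                for kx in range(kw):
                    yy = min(max(y + ky - ay, 0), h - 1)  # interpolate outliers
                    xx = min(max(x + kx - ax, 0), w - 1)  # from nearest pixels
                    s += kernel[ky][kx] * img[yy][xx]
            out[y][x] = s
    return out

identity = [[0, 0, 0], [0, 1, 0], [0, 0, 0]]  # passes the image through unchanged
img = [[1, 2], [3, 4]]
```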
.. index:: Laplace
.. _Laplace:
Laplace
-------
`id=0.292603296168 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/py/imgproc/Laplace>`__
.. function:: Laplace(src,dst,apertureSize=3)-> None
Calculates the Laplacian of an image.
:param src: Source image
:type src: :class:`CvArr`
:param dst: Destination image
:type dst: :class:`CvArr`
:param apertureSize: Aperture size (it has the same meaning as :ref:`Sobel` )
:type apertureSize: int
The function calculates the Laplacian of the source image by adding up the second x and y derivatives calculated using the Sobel operator:
.. math::
\texttt{dst} (x,y) = \frac{d^2 \texttt{src}}{dx^2} + \frac{d^2 \texttt{src}}{dy^2}
Setting ``apertureSize`` = 1 gives the fastest variant, which is equal to convolving the image with the following kernel:
.. math::
\vecthreethree {0}{1}{0}{1}{-4}{1}{0}{1}{0}
As with the :ref:`Sobel` function, no scaling is done and the same combinations of input and output formats are supported.
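The ``apertureSize`` = 1 kernel above can be checked on a quadratic ramp, whose discrete Laplacian is constant; a plain-Python sketch (not the OpenCV implementation):

```python
# The apertureSize = 1 Laplacian kernel from the formula above.
LAPLACE = [[0,  1, 0],
           [1, -4, 1],
           [0,  1, 0]]

def laplace_at(img, x, y):
    """Apply the 3x3 Laplacian kernel at an interior pixel (x, y)."""
    return sum(LAPLACE[dy + 1][dx + 1] * img[y + dy][x + dx]
               for dy in (-1, 0, 1) for dx in (-1, 0, 1))

# For f(x, y) = x*x the second x-derivative is 2 and the y-derivative is 0,
# so the response at every interior pixel is 2.
img = [[x * x for x in range(5)] for _ in range(5)]
```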
.. index:: MorphologyEx
.. _MorphologyEx:
MorphologyEx
------------
`id=0.989292823459 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/py/imgproc/MorphologyEx>`__
.. function:: MorphologyEx(src,dst,temp,element,operation,iterations=1)-> None
Performs advanced morphological transformations.
:param src: Source image
:type src: :class:`CvArr`
:param dst: Destination image
:type dst: :class:`CvArr`
:param temp: Temporary image, required in some cases
:type temp: :class:`CvArr`
:param element: Structuring element
:type element: :class:`IplConvKernel`
:param operation: Type of morphological operation, one of the following:
* **CV_MOP_OPEN** opening
* **CV_MOP_CLOSE** closing
* **CV_MOP_GRADIENT** morphological gradient
* **CV_MOP_TOPHAT** "top hat"
* **CV_MOP_BLACKHAT** "black hat"
:type operation: int
:param iterations: Number of times erosion and dilation are applied
:type iterations: int
The function can perform advanced morphological transformations using erosion and dilation as basic operations.
Opening:
.. math::
dst=open(src,element)=dilate(erode(src,element),element)
Closing:
.. math::
dst=close(src,element)=erode(dilate(src,element),element)
Morphological gradient:
.. math::
dst=morph \_ grad(src,element)=dilate(src,element)-erode(src,element)
"Top hat":
.. math::
dst=tophat(src,element)=src-open(src,element)
"Black hat":
.. math::
dst=blackhat(src,element)=close(src,element)-src
The temporary image ``temp`` is required for the morphological gradient and, in the case of in-place operation, for "top hat" and "black hat".
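The compositions above can be sketched in one dimension in plain Python (an illustration, not the OpenCV implementation), using min/max over a 3-sample window with replicated borders:

```python
def erode1d(s):
    """Min over a 3-sample window, borders replicated."""
    n = len(s)
    return [min(s[max(i - 1, 0)], s[i], s[min(i + 1, n - 1)]) for i in range(n)]

def dilate1d(s):
    """Max over a 3-sample window, borders replicated."""
    n = len(s)
    return [max(s[max(i - 1, 0)], s[i], s[min(i + 1, n - 1)]) for i in range(n)]

def opening(s):   # dilate(erode(s)): removes thin bright spikes
    return dilate1d(erode1d(s))

def closing(s):   # erode(dilate(s)): fills thin dark gaps
    return erode1d(dilate1d(s))

def gradient(s):  # dilate(s) - erode(s): highlights transitions
    return [d - e for d, e in zip(dilate1d(s), erode1d(s))]
```

For example, ``opening([0, 0, 5, 0, 0])`` flattens the isolated spike, while ``closing([5, 5, 0, 5, 5])`` fills the single-sample gap.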
.. index:: PyrDown
.. _PyrDown:
PyrDown
-------
`id=0.761058003811 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/py/imgproc/PyrDown>`__
.. function:: PyrDown(src,dst,filter=CV_GAUSSIAN_5X5)-> None
Downsamples an image.
:param src: The source image
:type src: :class:`CvArr`
:param dst: The destination image, whose width and height should be half those of the source
:type dst: :class:`CvArr`
:param filter: Type of the filter used for convolution; only ``CV_GAUSSIAN_5x5`` is currently supported
:type filter: int
The function performs the downsampling step of the Gaussian pyramid decomposition. First it convolves the source image with the specified filter and then downsamples the image by rejecting even rows and columns.
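The two steps (Gaussian convolution, then dropping every other row and column) can be sketched in one dimension in plain Python; the 5-tap binomial kernel below is assumed for illustration, not taken from the OpenCV source:

```python
GAUSS5 = [1, 4, 6, 4, 1]  # 5-tap binomial kernel; weights sum to 16

def pyr_down_1d(s):
    """Blur with the 5-tap kernel (borders replicated), then keep every other sample."""
    n = len(s)
    blurred = [sum(GAUSS5[k] * s[min(max(i + k - 2, 0), n - 1)]
                   for k in range(5)) / 16.0
               for i in range(n)]
    return blurred[::2]   # drop alternate samples -> half-length output
```

A constant signal passes through unchanged, at half the length: ``pyr_down_1d([10.0] * 8)`` gives four samples of 10.0.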
.. index:: Smooth
.. _Smooth:
Smooth
------
`id=0.981627398232 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/py/imgproc/Smooth>`__
.. function:: Smooth(src,dst,smoothtype=CV_GAUSSIAN,param1=3,param2=0,param3=0,param4=0)-> None
Smooths the image in one of several ways.
:param src: The source image
:type src: :class:`CvArr`
:param dst: The destination image
:type dst: :class:`CvArr`
:param smoothtype: Type of the smoothing:
* **CV_BLUR_NO_SCALE** linear convolution with :math:`\texttt{param1}\times\texttt{param2}` box kernel (all 1's). If you want to smooth different pixels with different-size box kernels, you can use the integral image that is computed using :ref:`Integral`
* **CV_BLUR** linear convolution with :math:`\texttt{param1}\times\texttt{param2}` box kernel (all 1's) with subsequent scaling by :math:`1/(\texttt{param1}\cdot\texttt{param2})`
* **CV_GAUSSIAN** linear convolution with a :math:`\texttt{param1}\times\texttt{param2}` Gaussian kernel
* **CV_MEDIAN** median filter with a :math:`\texttt{param1}\times\texttt{param1}` square aperture
* **CV_BILATERAL** bilateral filter with a :math:`\texttt{param1}\times\texttt{param1}` square aperture, color sigma= ``param3`` and spatial sigma= ``param4`` . If ``param1=0`` , the aperture square side is set to ``cvRound(param4*1.5)*2+1`` . Information about bilateral filtering can be found at http://www.dai.ed.ac.uk/CVonline/LOCAL\_COPIES/MANDUCHI1/Bilateral\_Filtering.html
:type smoothtype: int
:param param1: The first parameter of the smoothing operation, the aperture width. Must be a positive odd number (1, 3, 5, ...)
:type param1: int
:param param2: The second parameter of the smoothing operation, the aperture height. Ignored by ``CV_MEDIAN`` and ``CV_BILATERAL`` methods. In the case of simple scaled/non-scaled and Gaussian blur if ``param2`` is zero, it is set to ``param1`` . Otherwise it must be a positive odd number.
:type param2: int
:param param3: In the case of a Gaussian kernel, this parameter may specify the Gaussian :math:`\sigma` (standard deviation). If it is zero, it is calculated from the kernel size:
.. math::
\sigma = 0.3 (n/2 - 1) + 0.8 \quad \text{where} \quad n= \begin{array}{l l} \mbox{\texttt{param1} for horizontal kernel} \\ \mbox{\texttt{param2} for vertical kernel} \end{array}
Using standard sigma for small kernels ( :math:`3\times 3` to :math:`7\times 7` ) gives better speed. If ``param3`` is not zero, while ``param1`` and ``param2`` are zeros, the kernel size is calculated from the sigma (to provide accurate enough operation).
:type param3: float
The function smooths an image using one of several methods. Each method has its own features and restrictions, listed below.
Blur with no scaling works with single-channel images only and supports accumulation of 8-bit to 16-bit format (similar to :ref:`Sobel` and :ref:`Laplace`) and of 32-bit floating point to 32-bit floating-point format.
Simple blur and Gaussian blur support 1- or 3-channel, 8-bit and 32-bit floating point images. These two methods can process images in-place.
Median and bilateral filters work with 1- or 3-channel 8-bit images and cannot process images in-place.
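The default-sigma formula above can be sketched directly in Python. The inverse function below is an illustration of how a kernel size could be recovered from a given sigma; it is an assumption for this sketch, not the exact rounding OpenCV performs internally:

```python
def default_sigma(n):
    """Default Gaussian sigma for an n-tap kernel, per the formula above."""
    return 0.3 * (n / 2.0 - 1) + 0.8

def kernel_size_from_sigma(sigma):
    """Invert the formula to get an odd kernel size from sigma
    (illustrative assumption; OpenCV's internal rounding may differ)."""
    n = 2.0 * ((sigma - 0.8) / 0.3 + 1)
    k = int(round(n))
    return k if k % 2 == 1 else k + 1
```

For example, ``default_sigma(3)`` is 0.95 and ``default_sigma(7)`` is 1.55, which is why small kernels with the standard sigma are fast: the kernel never grows beyond the requested aperture.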
.. index:: Sobel
.. _Sobel:
Sobel
-----
`id=0.141242620837 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/py/imgproc/Sobel>`__
.. function:: Sobel(src,dst,xorder,yorder,apertureSize = 3)-> None
Calculates the first, second, third or mixed image derivatives using an extended Sobel operator.
:param src: Source image
:type src: :class:`CvArr`
:param dst: Destination image
:type dst: :class:`CvArr`
:param xorder: Order of the derivative x
:type xorder: int
:param yorder: Order of the derivative y
:type yorder: int
:param apertureSize: Size of the extended Sobel kernel, must be 1, 3, 5 or 7
:type apertureSize: int
In all cases except 1, an :math:`\texttt{apertureSize} \times \texttt{apertureSize}` separable kernel will be used to calculate the derivative. For :math:`\texttt{apertureSize} = 1` a :math:`3 \times 1` or :math:`1 \times 3` kernel is used (Gaussian smoothing is not done). There is also the special value ``CV_SCHARR`` (-1) that corresponds to a :math:`3\times3` Scharr filter that may give more accurate results than a :math:`3\times3` Sobel. The Scharr aperture is
.. math::
\vecthreethree{-3}{0}{3}{-10}{0}{10}{-3}{0}{3}
for the x-derivative, or transposed for the y-derivative.
The function calculates the image derivative by convolving the image with the appropriate kernel:
.. math::
\texttt{dst} (x,y) = \frac{d^{xorder+yorder} \texttt{src}}{dx^{xorder} \cdot dy^{yorder}}
The Sobel operators combine Gaussian smoothing and differentiation, so the result is more or less resistant to noise. Most often, the function is called with (``xorder`` = 1, ``yorder`` = 0, ``apertureSize`` = 3) or (``xorder`` = 0, ``yorder`` = 1, ``apertureSize`` = 3) to calculate the first x- or y- image derivative. The first case corresponds to a kernel of:
.. math::
\vecthreethree{-1}{0}{1}{-2}{0}{2}{-1}{0}{1}
and the second one corresponds to a kernel of:
.. math::
\vecthreethree{-1}{-2}{-1}{0}{0}{0}{1}{2}{1}
or a kernel of:
.. math::
\vecthreethree{1}{2}{1}{0}{0}{0}{-1}{-2}{-1}
depending on the image origin (the ``origin`` field of the ``IplImage`` structure). No scaling is done, so the destination image usually has larger numbers (in absolute value) than the source image does. To avoid overflow, the function requires a 16-bit destination image if the source image is 8-bit. The result can be converted back to 8-bit using the :ref:`ConvertScale` or :ref:`ConvertScaleAbs` function. Besides 8-bit images, the function can process 32-bit floating-point images. Both the source and the destination must be single-channel images of equal size or equal ROI size.
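The default 3x3 x-derivative kernel above can be checked on a linear ramp in plain Python (a sketch, not the OpenCV implementation); since no scaling is done, a ramp with slope 1 gives a constant response of 8, not 1:

```python
SOBEL_X = [[-1, 0, 1],
           [-2, 0, 2],
           [-1, 0, 1]]

def sobel_x_at(img, x, y):
    """Apply the 3x3 Sobel x-kernel at an interior pixel (x, y)."""
    return sum(SOBEL_X[dy + 1][dx + 1] * img[y + dy][x + dx]
               for dy in (-1, 0, 1) for dx in (-1, 0, 1))

# For the ramp f(x, y) = x the true derivative is 1; the unscaled Sobel
# response is 8, because the kernel weights sum to 4 on each side.
ramp = [[x for x in range(5)] for _ in range(5)]
```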
Motion Analysis and Object Tracking
===================================
.. highlight:: python
.. index:: Acc
.. _Acc:
Acc
---
`id=0.629029815041 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/py/imgproc/Acc>`__
.. function:: Acc(image,sum,mask=NULL)-> None
Adds a frame to an accumulator.
:param image: Input image, 1- or 3-channel, 8-bit or 32-bit floating point (each channel of a multi-channel image is processed independently)
:type image: :class:`CvArr`
:param sum: Accumulator with the same number of channels as input image, 32-bit or 64-bit floating-point
:type sum: :class:`CvArr`
:param mask: Optional operation mask
:type mask: :class:`CvArr`
The function adds the whole image ``image`` or its selected region to the accumulator ``sum``:
.. math::
\texttt{sum} (x,y) \leftarrow \texttt{sum} (x,y) + \texttt{image} (x,y) \quad \text{if} \quad \texttt{mask} (x,y) \ne 0
.. index:: MultiplyAcc
.. _MultiplyAcc:
MultiplyAcc
-----------
`id=0.767428702085 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/py/imgproc/MultiplyAcc>`__
.. function:: MultiplyAcc(image1,image2,acc,mask=NULL)-> None
Adds the product of two input images to the accumulator.
:param image1: First input image, 1- or 3-channel, 8-bit or 32-bit floating point (each channel of multi-channel image is processed independently)
:type image1: :class:`CvArr`
:param image2: Second input image, the same format as the first one
:type image2: :class:`CvArr`
:param acc: Accumulator with the same number of channels as input images, 32-bit or 64-bit floating-point
:type acc: :class:`CvArr`
:param mask: Optional operation mask
:type mask: :class:`CvArr`
The function adds the product of two images or their selected regions to the accumulator ``acc``:
.. math::
\texttt{acc} (x,y) \leftarrow \texttt{acc} (x,y) + \texttt{image1} (x,y) \cdot \texttt{image2} (x,y) \quad \text{if} \quad \texttt{mask} (x,y) \ne 0
.. index:: RunningAvg
.. _RunningAvg:
RunningAvg
----------
`id=0.136357383909 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/py/imgproc/RunningAvg>`__
.. function:: RunningAvg(image,acc,alpha,mask=NULL)-> None
Updates the running average.
:param image: Input image, 1- or 3-channel, 8-bit or 32-bit floating point (each channel of multi-channel image is processed independently)
:type image: :class:`CvArr`
:param acc: Accumulator with the same number of channels as input image, 32-bit or 64-bit floating-point
:type acc: :class:`CvArr`
:param alpha: Weight of input image
:type alpha: float
:param mask: Optional operation mask
:type mask: :class:`CvArr`
The function calculates the weighted sum of the input image ``image`` and the accumulator ``acc`` so that ``acc`` becomes a running average of the frame sequence:
.. math::
\texttt{acc} (x,y) \leftarrow (1- \alpha ) \cdot \texttt{acc} (x,y) + \alpha \cdot \texttt{image} (x,y) \quad \text{if} \quad \texttt{mask} (x,y) \ne 0
where :math:`\alpha` regulates the update speed (how fast the accumulator forgets about previous frames).
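The update rule can be sketched in plain Python (an illustration, not the OpenCV implementation). Fed a constant frame, the accumulator converges to the frame values at a rate set by :math:`\alpha`:

```python
def running_avg(acc, image, alpha):
    """acc <- (1 - alpha) * acc + alpha * image, elementwise."""
    return [(1.0 - alpha) * a + alpha * i for a, i in zip(acc, image)]

acc = [0.0, 0.0]
frame = [10.0, 20.0]
for _ in range(100):
    acc = running_avg(acc, frame, 0.5)
# acc is now very close to the frame values [10.0, 20.0]
```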
.. index:: SquareAcc
.. _SquareAcc:
SquareAcc
---------
`id=0.606012635939 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/py/imgproc/SquareAcc>`__
.. function:: SquareAcc(image,sqsum,mask=NULL)-> None
Adds the square of the source image to the accumulator.
:param image: Input image, 1- or 3-channel, 8-bit or 32-bit floating point (each channel of multi-channel image is processed independently)
:type image: :class:`CvArr`
:param sqsum: Accumulator with the same number of channels as input image, 32-bit or 64-bit floating-point
:type sqsum: :class:`CvArr`
:param mask: Optional operation mask
:type mask: :class:`CvArr`
The function adds the input image ``image`` or its selected region, raised to the power 2, to the accumulator ``sqsum``:
.. math::
\texttt{sqsum} (x,y) \leftarrow \texttt{sqsum} (x,y) + \texttt{image} (x,y)^2 \quad \text{if} \quad \texttt{mask} (x,y) \ne 0
Object Detection
================
.. highlight:: python
.. index:: MatchTemplate
.. _MatchTemplate:
MatchTemplate
-------------
`id=0.180820664163 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/py/imgproc/MatchTemplate>`__
.. function:: MatchTemplate(image,templ,result,method)-> None
Compares a template against overlapped image regions.
:param image: Image where the search is running; should be 8-bit or 32-bit floating-point
:type image: :class:`CvArr`
:param templ: Searched template; must be no larger than the source image and of the same data type as the image
:type templ: :class:`CvArr`
:param result: A map of comparison results; single-channel 32-bit floating-point.
If ``image`` is :math:`W \times H` and ``templ`` is :math:`w \times h` then ``result`` must be :math:`(W-w+1) \times (H-h+1)`
:type result: :class:`CvArr`
:param method: Specifies the way the template must be compared with the image regions (see below)
:type method: int
The function is similar to :ref:`CalcBackProjectPatch`. It slides through ``image``, compares the overlapped patches of size :math:`w \times h` against ``templ`` using the specified method, and stores the comparison results in ``result``. Here are the formulas for the different comparison methods one may use (:math:`I` denotes ``image``, :math:`T` ``templ``, and :math:`R` ``result``). The summation is done over the template and/or the image patch: :math:`x' = 0...w-1, y' = 0...h-1`
* method=CV\_TM\_SQDIFF
.. math::
R(x,y)= \sum _{x',y'} (T(x',y')-I(x+x',y+y'))^2
* method=CV\_TM\_SQDIFF\_NORMED
.. math::
R(x,y)= \frac{\sum_{x',y'} (T(x',y')-I(x+x',y+y'))^2}{\sqrt{\sum_{x',y'}T(x',y')^2 \cdot \sum_{x',y'} I(x+x',y+y')^2}}
* method=CV\_TM\_CCORR
.. math::
R(x,y)= \sum _{x',y'} (T(x',y') \cdot I(x+x',y+y'))
* method=CV\_TM\_CCORR\_NORMED
.. math::
R(x,y)= \frac{\sum_{x',y'} (T(x',y') \cdot I(x+x',y+y'))}{\sqrt{\sum_{x',y'}T(x',y')^2 \cdot \sum_{x',y'} I(x+x',y+y')^2}}
* method=CV\_TM\_CCOEFF
.. math::
R(x,y)= \sum _{x',y'} (T'(x',y') \cdot I(x+x',y+y'))
where
.. math::
\begin{array}{l} T'(x',y')=T(x',y') - 1/(w \cdot h) \cdot \sum _{x'',y''} T(x'',y'') \\ I'(x+x',y+y')=I(x+x',y+y') - 1/(w \cdot h) \cdot \sum _{x'',y''} I(x+x'',y+y'') \end{array}
* method=CV\_TM\_CCOEFF\_NORMED
.. math::
R(x,y)= \frac{ \sum_{x',y'} (T'(x',y') \cdot I'(x+x',y+y')) }{ \sqrt{\sum_{x',y'}T'(x',y')^2 \cdot \sum_{x',y'} I'(x+x',y+y')^2} }
After the function finishes the comparison, the best matches can be found as global minimums (``CV_TM_SQDIFF``) or maximums (``CV_TM_CCORR`` and ``CV_TM_CCOEFF``) using the :ref:`MinMaxLoc` function. In the case of a color image, the template summation in the numerator and each sum in the denominator are done over all of the channels (and separate mean values are used for each channel).
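The ``CV_TM_SQDIFF`` formula can be sketched in plain Python for a single-channel image (an illustration, not the OpenCV implementation); note the result size :math:`(W-w+1) \times (H-h+1)` matches the description above, and the best match is the global minimum:

```python
def match_template_sqdiff(image, templ):
    """R(x, y) = sum over (x', y') of (T(x', y') - I(x + x', y + y'))^2."""
    H, W = len(image), len(image[0])
    h, w = len(templ), len(templ[0])
    return [[sum((templ[ty][tx] - image[y + ty][x + tx]) ** 2
                 for ty in range(h) for tx in range(w))
             for x in range(W - w + 1)]
            for y in range(H - h + 1)]

image = [[0, 0, 0, 0],
         [0, 1, 2, 0],
         [0, 3, 4, 0],
         [0, 0, 0, 0]]
templ = [[1, 2],
         [3, 4]]
result = match_template_sqdiff(image, templ)
# result is 3x3; its global minimum is 0, at the exact match (x, y) = (1, 1)
```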
Planar Subdivisions
===================
.. highlight:: python
.. index:: CvSubdiv2D
.. _CvSubdiv2D:
CvSubdiv2D
----------
`id=0.403332162742 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/py/imgproc/CvSubdiv2D>`__
.. class:: CvSubdiv2D
Planar subdivision.
.. attribute:: edges
A :ref:`CvSet` of :ref:`CvSubdiv2DEdge`
Planar subdivision is the subdivision of a plane into a set of
non-overlapped regions (facets) that cover the whole plane. The above
structure describes a subdivision built on a 2d point set, where the points
are linked together and form a planar graph, which, together with a few
edges connecting the exterior subdivision points (namely, convex hull points)
with infinity, subdivides a plane into facets by its edges.
For every subdivision there exists a dual subdivision in which facets and
points (subdivision vertices) swap their roles, that is, a facet is
treated as a vertex (called a virtual point below) of the dual subdivision and
the original subdivision vertices become facets. In the picture below, the original subdivision is marked with solid lines and the dual subdivision with dotted lines.
.. image:: ../pics/subdiv.png
OpenCV subdivides a plane into triangles using Delaunay's
algorithm. The subdivision is built iteratively, starting from a dummy
triangle that is guaranteed to include all the subdivision points. In this
case the dual subdivision is a Voronoi diagram of the input 2d point set. The
subdivisions can be used for the 3d piece-wise transformation of a plane,
morphing, fast location of points on the plane, building special graphs
(such as NNG,RNG) and so forth.
.. index:: CvSubdiv2DPoint
.. _CvSubdiv2DPoint:
CvSubdiv2DPoint
---------------
`id=0.753986010152 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/py/imgproc/CvSubdiv2DPoint>`__
.. class:: CvSubdiv2DPoint
Point of original or dual subdivision.
.. attribute:: first
A connected :ref:`CvSubdiv2DEdge`
.. attribute:: pt
Position, as a :ref:`CvPoint2D32f`
.. index:: CalcSubdivVoronoi2D
.. _CalcSubdivVoronoi2D:
CalcSubdivVoronoi2D
-------------------
`id=0.119097157929 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/py/imgproc/CalcSubdivVoronoi2D>`__
.. function:: CalcSubdivVoronoi2D(subdiv)-> None
Calculates the coordinates of Voronoi diagram cells.
:param subdiv: Delaunay subdivision, in which all the points are already added
:type subdiv: :class:`CvSubdiv2D`
The function calculates the coordinates
of virtual points. All virtual points corresponding to some vertex of the
original subdivision form (when connected together) a boundary of the Voronoi
cell at that point.
.. index:: ClearSubdivVoronoi2D
.. _ClearSubdivVoronoi2D:
ClearSubdivVoronoi2D
--------------------
`id=0.158437620754 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/py/imgproc/ClearSubdivVoronoi2D>`__
.. function:: ClearSubdivVoronoi2D(subdiv)-> None
Removes all virtual points.
:param subdiv: Delaunay subdivision
:type subdiv: :class:`CvSubdiv2D`
The function removes all of the virtual points. It is called internally in :ref:`CalcSubdivVoronoi2D` if the subdivision was modified after the previous call to the function.
.. index:: CreateSubdivDelaunay2D
.. _CreateSubdivDelaunay2D:
CreateSubdivDelaunay2D
----------------------
`id=0.918020754539 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/py/imgproc/CreateSubdivDelaunay2D>`__
.. function:: CreateSubdivDelaunay2D(rect,storage)-> delaunay_triangulation
Creates an empty Delaunay triangulation.
:param rect: Rectangle that includes all of the 2d points that are to be added to the subdivision
:type rect: :class:`CvRect`
:param storage: Container for subdivision
:type storage: :class:`CvMemStorage`
The function creates an empty Delaunay subdivision, to which 2d points can be added using the function :ref:`SubdivDelaunay2DInsert`. All of the points to be added must be within the specified rectangle, otherwise a runtime error is raised. Note that initially the triangulation consists of a single large triangle that covers the given rectangle, so the three vertices of this triangle lie outside the rectangle ``rect``.
.. index:: FindNearestPoint2D
.. _FindNearestPoint2D:
FindNearestPoint2D
------------------
`id=0.679601866055 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/py/imgproc/FindNearestPoint2D>`__
.. function:: FindNearestPoint2D(subdiv,pt)-> point
Finds the closest subdivision vertex to the given point.
:param subdiv: Delaunay or another subdivision
:type subdiv: :class:`CvSubdiv2D`
:param pt: Input point
:type pt: :class:`CvPoint2D32f`
The function locates the input point within the subdivision and finds the subdivision vertex closest to it. The found vertex is not necessarily one of the vertices of the facet containing the input point, though that facet (located using :ref:`Subdiv2DLocate`) is used as a starting point. The function returns a pointer to the found subdivision vertex.
.. index:: Subdiv2DEdgeDst
.. _Subdiv2DEdgeDst:
Subdiv2DEdgeDst
---------------
`id=0.723258652692 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/py/imgproc/Subdiv2DEdgeDst>`__
.. function:: Subdiv2DEdgeDst(edge)-> point
Returns the edge destination.
:param edge: Subdivision edge (not a quad-edge)
:type edge: :class:`CvSubdiv2DEdge`
The function returns the edge destination. The returned pointer may be NULL if the edge is from a dual subdivision and the virtual point coordinates are not calculated yet. The virtual points can be calculated using the function :ref:`CalcSubdivVoronoi2D`.
.. index:: Subdiv2DGetEdge
.. _Subdiv2DGetEdge:
Subdiv2DGetEdge
---------------
`id=0.506587189348 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/py/imgproc/Subdiv2DGetEdge>`__
.. function:: Subdiv2DGetEdge(edge,type)-> CvSubdiv2DEdge
Returns one of the edges related to the given edge.
:param edge: Subdivision edge (not a quad-edge)
:type edge: :class:`CvSubdiv2DEdge`
:param type: Specifies which of the related edges to return, one of the following:
:type type: :class:`CvNextEdgeType`
* **CV_NEXT_AROUND_ORG** next around the edge origin ( ``eOnext`` on the picture below if ``e`` is the input edge)
* **CV_NEXT_AROUND_DST** next around the edge vertex ( ``eDnext`` )
* **CV_PREV_AROUND_ORG** previous around the edge origin (reversed ``eRnext`` )
* **CV_PREV_AROUND_DST** previous around the edge destination (reversed ``eLnext`` )
* **CV_NEXT_AROUND_LEFT** next around the left facet ( ``eLnext`` )
* **CV_NEXT_AROUND_RIGHT** next around the right facet ( ``eRnext`` )
* **CV_PREV_AROUND_LEFT** previous around the left facet (reversed ``eOnext`` )
* **CV_PREV_AROUND_RIGHT** previous around the right facet (reversed ``eDnext`` )
.. image:: ../pics/quadedge.png
The function returns one of the edges related to the input edge.
.. index:: Subdiv2DNextEdge
.. _Subdiv2DNextEdge:
Subdiv2DNextEdge
----------------
`id=0.406592929731 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/py/imgproc/Subdiv2DNextEdge>`__
.. function:: Subdiv2DNextEdge(edge)-> CvSubdiv2DEdge
Returns the next edge around the edge origin.
:param edge: Subdivision edge (not a quad-edge)
:type edge: :class:`CvSubdiv2DEdge`
.. image:: ../pics/quadedge.png
The function returns the next edge around the edge origin (``eOnext`` in the picture above, if ``e`` is the input edge).
.. index:: Subdiv2DLocate
.. _Subdiv2DLocate:
Subdiv2DLocate
--------------
`id=0.614412184993 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/py/imgproc/Subdiv2DLocate>`__
.. function:: Subdiv2DLocate(subdiv, pt) -> (loc, where)
Returns the location of a point within a Delaunay triangulation.
:param subdiv: Delaunay or another subdivision
:type subdiv: :class:`CvSubdiv2D`
:param pt: The point to locate
:type pt: :class:`CvPoint2D32f`
:param loc: The location of the point within the triangulation
:type loc: int
:param where: The edge or vertex. See below.
:type where: :class:`CvSubdiv2DEdge`, :class:`CvSubdiv2DPoint`
The function locates the input point within the subdivision. There are 5 cases:
* The point falls into some facet: ``loc`` is ``CV_PTLOC_INSIDE`` and ``where`` is one of the edges of the facet.
* The point falls onto the edge: ``loc`` is ``CV_PTLOC_ON_EDGE`` and ``where`` is the edge.
* The point coincides with one of the subdivision vertices: ``loc`` is ``CV_PTLOC_VERTEX`` and ``where`` is the vertex.
* The point is outside the subdivision reference rectangle: ``loc`` is ``CV_PTLOC_OUTSIDE_RECT`` and ``where`` is None.
* One of the input arguments is invalid: the function raises an exception.
.. index:: Subdiv2DRotateEdge
.. _Subdiv2DRotateEdge:
Subdiv2DRotateEdge
------------------
`id=0.775095566923 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/py/imgproc/Subdiv2DRotateEdge>`__
.. function:: Subdiv2DRotateEdge(edge,rotate)-> CvSubdiv2DEdge
Returns another edge of the same quad-edge.
:param edge: Subdivision edge (not a quad-edge)
:type edge: :class:`CvSubdiv2DEdge`
:param rotate: Specifies which of the edges of the same quad-edge as the input one to return, one of the following:
* **0** the input edge ( ``e`` on the picture below if ``e`` is the input edge)
* **1** the rotated edge ( ``eRot`` )
* **2** the reversed edge (reversed ``e`` (in green))
* **3** the reversed rotated edge (reversed ``eRot`` (in green))
:type rotate: int
.. image:: ../pics/quadedge.png
The function returns one of the edges of the same quad-edge as the input edge.
.. index:: SubdivDelaunay2DInsert
.. _SubdivDelaunay2DInsert:
SubdivDelaunay2DInsert
----------------------
`id=0.291010420302 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/py/imgproc/SubdivDelaunay2DInsert>`__
.. function:: SubdivDelaunay2DInsert(subdiv,pt)-> point
Inserts a single point into a Delaunay triangulation.
:param subdiv: Delaunay subdivision created by the function :ref:`CreateSubdivDelaunay2D`
:type subdiv: :class:`CvSubdiv2D`
:param pt: Inserted point
:type pt: :class:`CvPoint2D32f`
The function inserts a single point into a subdivision and modifies the subdivision topology appropriately. If a point with the same coordinates exists already, no new point is added. The function returns a pointer to the allocated point. No virtual point coordinates are calculated at this stage.
************
Introduction
************
Starting with release 2.0, OpenCV has a new Python interface. This replaces the previous `SWIG-based Python interface <http://opencv.willowgarage.com/wiki/SwigPythonInterface>`_.
Some highlights of the new bindings:
* single import of all of OpenCV using ``import cv``
* OpenCV functions no longer have the "cv" prefix
* simple types like CvRect and CvScalar use Python tuples
* sharing of Image storage, so image transport between OpenCV and other systems (e.g. numpy and ROS) is very efficient
* complete documentation for the Python functions
This cookbook section contains a few illustrative examples of OpenCV Python code.
.. toctree::
:maxdepth: 2
cookbook
***************************
objdetect. Object Detection
***************************
.. toctree::
:maxdepth: 2
objdetect_cascade_classification
Cascade Classification
======================
.. highlight:: python
Haar Feature-based Cascade Classifier for Object Detection
----------------------------------------------------------
The object detector described below was initially proposed by Paul Viola :ref:`Viola01` and improved by Rainer Lienhart :ref:`Lienhart02`. First, a classifier (namely a *cascade of boosted classifiers working with haar-like features*) is trained with a few hundred sample views of a particular object (e.g., a face or a car), called positive examples, that are scaled to the same size (say, 20x20), and negative examples - arbitrary images of the same size.
After a classifier is trained, it can be applied to a region of interest
(of the same size as used during the training) in an input image. The
classifier outputs a "1" if the region is likely to show the object
(i.e., face/car), and "0" otherwise. To search for the object in the
whole image one can move the search window across the image and check
every location using the classifier. The classifier is designed so that
it can be easily "resized" in order to be able to find the objects of
interest at different sizes, which is more efficient than resizing the
image itself. So, to find an object of an unknown size in the image the
scan procedure should be done several times at different scales.
The word "cascade" in the classifier name means that the resultant classifier consists of several simpler classifiers (*stages*) that are applied subsequently to a region of interest until at some stage the candidate is rejected or all the stages are passed. The word "boosted" means that the classifiers at every stage of the cascade are complex themselves, being built out of basic classifiers using one of four different ``boosting`` techniques (weighted voting). Currently Discrete Adaboost, Real Adaboost, Gentle Adaboost and Logitboost are supported. The basic classifiers are decision-tree classifiers with at least 2 leaves. Haar-like features are the input to the basic classifiers, and are calculated as described below. The current algorithm uses the following Haar-like features:
.. image:: ../pics/haarfeatures.png
The feature used in a particular classifier is specified by its shape (1a, 2b etc.), position within the region of interest and the scale (this scale is not the same as the scale used at the detection stage, though these two scales are multiplied). For example, in the case of the third line feature (2c) the response is calculated as the difference between the sum of image pixels under the rectangle covering the whole feature (including the two white stripes and the black stripe in the middle) and the sum of the image pixels under the black stripe multiplied by 3 in order to compensate for the differences in the size of areas. The sums of pixel values over rectangular regions are calculated rapidly using integral images (see below and the :ref:`Integral` description).
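As a sketch of why integral images make these rectangle sums cheap (plain Python for illustration, not the OpenCV :ref:`Integral` implementation), any rectangular sum reduces to four table lookups once the integral image is built:

```python
def integral_image(img):
    """ii[y][x] = sum of img over the rectangle [0, y) x [0, x)."""
    h, w = len(img), len(img[0])
    ii = [[0] * (w + 1) for _ in range(h + 1)]
    for y in range(h):
        row = 0
        for x in range(w):
            row += img[y][x]
            ii[y + 1][x + 1] = ii[y][x + 1] + row
    return ii

def rect_sum(ii, x, y, w, h):
    """Sum over the w x h rectangle with top-left corner (x, y): four lookups."""
    return ii[y + h][x + w] - ii[y][x + w] - ii[y + h][x] + ii[y][x]

img = [[1, 2], [3, 4]]
ii = integral_image(img)
```

A Haar feature response is then just a signed combination of a few such ``rect_sum`` calls, independent of the rectangle size.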
A simple demonstration of face detection, which draws a rectangle around each detected face:
::
hc = cv.Load("haarcascade_frontalface_default.xml")
img = cv.LoadImage("faces.jpg", 0)
faces = cv.HaarDetectObjects(img, hc, cv.CreateMemStorage())
for (x,y,w,h),n in faces:
cv.Rectangle(img, (x,y), (x+w,y+h), 255)
cv.SaveImage("faces_detected.jpg", img)
..
.. index:: HaarDetectObjects
.. _HaarDetectObjects:
HaarDetectObjects
-----------------
`id=0.467753723618 Comments from the Wiki <http://opencv.willowgarage.com/wiki/documentation/py/objdetect/HaarDetectObjects>`__
.. function:: HaarDetectObjects(image,cascade,storage,scaleFactor=1.1,minNeighbors=3,flags=0,minSize=(0,0))-> detected_objects
Detects objects in the image.
:param image: Image to detect objects in
:type image: :class:`CvArr`
:param cascade: Haar classifier cascade in internal representation
:type cascade: :class:`CvHaarClassifierCascade`
:param storage: Memory storage to store the resultant sequence of the object candidate rectangles
:type storage: :class:`CvMemStorage`
:param scaleFactor: The factor by which the search window is scaled between subsequent scans; 1.1 means increasing the window by 10%
:param minNeighbors: Minimum number (minus 1) of neighbor rectangles that make up an object. All groups containing fewer than ``minNeighbors`` - 1 rectangles are rejected. If ``minNeighbors`` is 0, the function does not perform any grouping at all and returns all the detected candidate rectangles, which may be useful if the user wants to apply a customized grouping procedure
:param flags: Mode of operation. Currently the only flag that may be specified is ``CV_HAAR_DO_CANNY_PRUNING`` . If it is set, the function uses the Canny edge detector to reject some image regions that contain too few or too many edges and thus cannot contain the searched object. The particular threshold values are tuned for face detection, and in this case the pruning speeds up the processing
:type flags: int
:param minSize: Minimum window size. By default, it is set to the size of samples the classifier has been trained on ( :math:`\sim 20\times 20` for face detection)
:param maxSize: Maximum window size to use. By default, it is set to the size of the image.
The function finds rectangular regions in the given image that are likely to contain objects the cascade has been trained for, and returns those regions as a sequence of rectangles. The function scans the image several times at different scales (see
:ref:`SetImagesForHaarClassifierCascade` ). Each time it considers overlapping regions in the image and applies the classifiers to the regions using
:ref:`RunHaarClassifierCascade` . It may also apply some heuristics to reduce the number of analyzed regions, such as Canny pruning. After it has collected the candidate rectangles (regions that passed the classifier cascade), it groups them and returns a sequence of average rectangles for each large enough group. The default parameters ( ``scale_factor`` =1.1, ``min_neighbors`` =3, ``flags`` =0) are tuned for accurate yet slow object detection. For faster operation on real video images the recommended settings are ``scale_factor`` =1.2, ``min_neighbors`` =2, ``flags`` = ``CV_HAAR_DO_CANNY_PRUNING`` , and ``min_size`` set to the *minimum possible face size* (for example, :math:`\sim` 1/4 to 1/16 of the image area in the case of video conferencing).
The function returns a list of tuples ``(rect, neighbors)`` , where ``rect`` is a
:ref:`CvRect` specifying the object's extents and ``neighbors`` is the number of neighbors.
.. doctest::
>>> import cv
>>> image = cv.LoadImageM("lena.jpg", cv.CV_LOAD_IMAGE_GRAYSCALE)
>>> cascade = cv.Load("../../data/haarcascades/haarcascade_frontalface_alt.xml")
>>> print cv.HaarDetectObjects(image, cascade, cv.CreateMemStorage(0), 1.2, 2, 0, (20, 20))
[((217, 203, 169, 169), 24)]
..
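The effect of ``minNeighbors`` can be illustrated with a simplified pure-Python sketch of the grouping step: raw per-scale detections of a real object pile up at nearly the same place, while spurious hits tend to be isolated. The names ``similar`` and ``group_rectangles`` and the exact similarity test are hypothetical; the real implementation uses different internal thresholds.

```python
def similar(r1, r2, eps=0.2):
    """Two rectangles count as 'neighbors' if their corners differ by less
    than eps times their average size (a simplified similarity test)."""
    x1, y1, w1, h1 = r1
    x2, y2, w2, h2 = r2
    delta = eps * (min(w1, w2) + min(h1, h2)) * 0.5
    return (abs(x1 - x2) <= delta and abs(y1 - y2) <= delta and
            abs(x1 + w1 - x2 - w2) <= delta and
            abs(y1 + h1 - y2 - h2) <= delta)

def group_rectangles(rects, min_neighbors, eps=0.2):
    """Greedy clustering of raw detections; a simplified stand-in for the
    grouping performed inside HaarDetectObjects."""
    clusters = []
    for r in rects:
        for c in clusters:
            if similar(r, c[0], eps):
                c.append(r)
                break
        else:
            clusters.append([r])
    result = []
    for c in clusters:
        n = len(c)
        if n >= min_neighbors:  # small clusters are rejected as noise
            avg = tuple(sum(v) // n for v in zip(*c))
            result.append((avg, n))
    return result
```

With ``min_neighbors=2`` , three overlapping hits around one face survive as a single averaged rectangle, while a lone spurious hit elsewhere is discarded.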
@ -0,0 +1,17 @@
################
Python Reference
################
.. highlight:: python
.. toctree::
:maxdepth: 2
introduction
core
imgproc
features2d
objdetect
video
highgui
calib3d
doc/opencv1/py/video.rst Normal file
@ -0,0 +1,10 @@
*********************
video. Video Analysis
*********************
.. toctree::
:maxdepth: 2
video_motion_analysis_and_object_tracking
File diff suppressed because it is too large