Merge pull request #17325 from hunter-college-ossd-spr-2020:nav-links

This commit is contained in:
Alexander Alekhin 2020-05-20 18:14:27 +00:00
commit cb82388a84
67 changed files with 207 additions and 0 deletions

View File

@ -1,6 +1,10 @@
Camera calibration With OpenCV {#tutorial_camera_calibration}
==============================
@prev_tutorial{tutorial_camera_calibration_square_chess}
@next_tutorial{tutorial_real_time_pose}
Cameras have been around for a long, long time. However, with the introduction of cheap *pinhole*
cameras in the late 20th century, they became a common occurrence in our everyday life.
Unfortunately, this cheapness comes at a price: significant distortion. Luckily, these are

View File

@ -1,6 +1,9 @@
Create calibration pattern {#tutorial_camera_calibration_pattern}
=========================================
@next_tutorial{tutorial_camera_calibration_square_chess}
The goal of this tutorial is to learn how to create a calibration pattern.
You can find a chessboard pattern in https://github.com/opencv/opencv/blob/3.4/doc/pattern.png

View File

@ -1,6 +1,10 @@
Camera calibration with square chessboard {#tutorial_camera_calibration_square_chess}
=========================================
@prev_tutorial{tutorial_camera_calibration_pattern}
@next_tutorial{tutorial_camera_calibration}
The goal of this tutorial is to learn how to calibrate a camera given a set of chessboard images.
*Test data*: use images in your data/chess folder.
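For orientation, a minimal C++ sketch of detecting chessboard corners in one image follows; the 9x6 inner-corner board size and the image path are assumptions, not values taken from this changeset:

```cpp
#include <opencv2/calib3d.hpp>
#include <opencv2/imgcodecs.hpp>
#include <opencv2/imgproc.hpp>
#include <vector>

int main()
{
    // Hypothetical test image; in the tutorial the images come from your data/chess folder.
    cv::Mat gray = cv::imread("data/chess/left01.jpg", cv::IMREAD_GRAYSCALE);
    if (gray.empty())
        return 1;

    cv::Size boardSize(9, 6);  // inner corners per chessboard row and column (assumed)
    std::vector<cv::Point2f> corners;
    bool found = cv::findChessboardCorners(gray, boardSize, corners);
    if (found)
    {
        // Refine the detected corners to sub-pixel accuracy before calibration.
        cv::cornerSubPix(gray, corners, cv::Size(11, 11), cv::Size(-1, -1),
                         cv::TermCriteria(cv::TermCriteria::EPS + cv::TermCriteria::COUNT, 30, 0.01));
    }
    return found ? 0 : 1;
}
```

The refined corners from every view, paired with the corresponding 3D board coordinates, are what @ref cv::calibrateCamera consumes.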

View File

@ -1,6 +1,9 @@
Interactive camera calibration application {#tutorial_interactive_calibration}
==============================
@prev_tutorial{tutorial_real_time_pose}
According to the classical calibration technique, the user must first collect all the data and then run the @ref cv::calibrateCamera function
to obtain the camera parameters. If the average re-projection error is huge or the estimated parameters seem to be wrong, the process of
selecting or collecting data and running @ref cv::calibrateCamera is repeated.
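As a rough sketch of that classical batch workflow (the helper name and the empty initial matrices are assumptions, not code from the tutorial):

```cpp
#include <opencv2/calib3d.hpp>
#include <vector>

// objectPoints: known 3D board corners per view; imagePoints: detected 2D corners per view.
// Hypothetical helper, not code from the tutorial.
double calibrateFromViews(const std::vector<std::vector<cv::Point3f> >& objectPoints,
                          const std::vector<std::vector<cv::Point2f> >& imagePoints,
                          cv::Size imageSize)
{
    cv::Mat cameraMatrix, distCoeffs;
    std::vector<cv::Mat> rvecs, tvecs;
    // Returns the RMS re-projection error; if it is large, collect better data and run again.
    return cv::calibrateCamera(objectPoints, imagePoints, imageSize,
                               cameraMatrix, distCoeffs, rvecs, tvecs);
}
```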

View File

@ -1,6 +1,10 @@
Real Time pose estimation of a textured object {#tutorial_real_time_pose}
==============================================
@prev_tutorial{tutorial_camera_calibration}
@next_tutorial{tutorial_interactive_calibration}
Nowadays, augmented reality is one of the top research topics in the computer vision and robotics fields.
The most elemental problem in augmented reality is the estimation of the camera pose with respect to an
object: in the case of computer vision, to do some 3D rendering later, or, in the case of robotics

View File

@ -1,5 +1,8 @@
# How to run deep networks on Android device {#tutorial_dnn_android}
@prev_tutorial{tutorial_dnn_halide_scheduling}
@next_tutorial{tutorial_dnn_yolo}
## Introduction
In this tutorial you'll learn how to run deep learning networks on an Android device
using the OpenCV deep learning module.

View File

@ -1,5 +1,7 @@
# Custom deep learning layers support {#tutorial_dnn_custom_layers}
@prev_tutorial{tutorial_dnn_javascript}
## Introduction
Deep learning is a fast-growing area. New approaches to building neural networks
usually introduce new types of layers. They could be modifications of existing

View File

@ -1,6 +1,8 @@
Load Caffe framework models {#tutorial_dnn_googlenet}
===========================
@next_tutorial{tutorial_dnn_halide}
Introduction
------------

View File

@ -1,5 +1,8 @@
# How to enable Halide backend for improved efficiency {#tutorial_dnn_halide}
@prev_tutorial{tutorial_dnn_googlenet}
@next_tutorial{tutorial_dnn_halide_scheduling}
## Introduction
This tutorial describes how to run your models in the OpenCV deep learning module
using the Halide language backend. Halide is an open-source project that lets us

View File

@ -1,5 +1,8 @@
# How to schedule your network for Halide backend {#tutorial_dnn_halide_scheduling}
@prev_tutorial{tutorial_dnn_halide}
@next_tutorial{tutorial_dnn_android}
## Introduction
Halide code is the same for every device we use. But to achieve satisfactory
efficiency we should schedule computations properly. In this tutorial we describe

View File

@ -1,5 +1,8 @@
# How to run deep networks in browser {#tutorial_dnn_javascript}
@prev_tutorial{tutorial_dnn_yolo}
@next_tutorial{tutorial_dnn_custom_layers}
## Introduction
This tutorial will show us how to run deep learning models using OpenCV.js right
in a browser. The tutorial refers to a sample of face detection and face recognition

View File

@ -1,6 +1,9 @@
YOLO DNNs {#tutorial_dnn_yolo}
===============================
@prev_tutorial{tutorial_dnn_android}
@next_tutorial{tutorial_dnn_javascript}
Introduction
------------

View File

@ -1,6 +1,9 @@
AKAZE local features matching {#tutorial_akaze_matching}
=============================
@prev_tutorial{tutorial_detection_of_planar_objects}
@next_tutorial{tutorial_akaze_tracking}
Introduction
------------

View File

@ -1,6 +1,9 @@
AKAZE and ORB planar tracking {#tutorial_akaze_tracking}
=============================
@prev_tutorial{tutorial_akaze_matching}
@next_tutorial{tutorial_homography}
Introduction
------------

View File

@ -1,6 +1,10 @@
Detection of planar objects {#tutorial_detection_of_planar_objects}
===========================
@prev_tutorial{tutorial_feature_homography}
@next_tutorial{tutorial_akaze_matching}
The goal of this tutorial is to learn how to use the *features2d* and *calib3d* modules for detecting
known planar objects in scenes.
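As an illustration of the idea, a hedged C++ sketch combining ORB features with a RANSAC homography is shown below; the detector choice and the helper name are assumptions rather than the tutorial's own sample code:

```cpp
#include <opencv2/calib3d.hpp>
#include <opencv2/features2d.hpp>
#include <vector>

// Estimate a homography that maps the planar object image into the scene image.
// Hypothetical helper; the tutorial's sample may use different detectors and matchers.
cv::Mat findObjectHomography(const cv::Mat& objectImg, const cv::Mat& sceneImg)
{
    cv::Ptr<cv::ORB> orb = cv::ORB::create();
    std::vector<cv::KeyPoint> kp1, kp2;
    cv::Mat desc1, desc2;
    orb->detectAndCompute(objectImg, cv::noArray(), kp1, desc1);
    orb->detectAndCompute(sceneImg, cv::noArray(), kp2, desc2);

    cv::BFMatcher matcher(cv::NORM_HAMMING);
    std::vector<cv::DMatch> matches;
    matcher.match(desc1, desc2, matches);

    std::vector<cv::Point2f> objPts, scenePts;
    for (const cv::DMatch& m : matches)
    {
        objPts.push_back(kp1[m.queryIdx].pt);
        scenePts.push_back(kp2[m.trainIdx].pt);
    }
    if (objPts.size() < 4)
        return cv::Mat();  // a homography needs at least four correspondences

    // RANSAC rejects mismatched pairs while fitting the planar homography.
    return cv::findHomography(objPts, scenePts, cv::RANSAC);
}
```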

View File

@ -1,6 +1,9 @@
Feature Description {#tutorial_feature_description}
===================
@prev_tutorial{tutorial_feature_detection}
@next_tutorial{tutorial_feature_flann_matcher}
Goal
----

View File

@ -1,6 +1,9 @@
Feature Detection {#tutorial_feature_detection}
=================
@prev_tutorial{tutorial_corner_subpixels}
@next_tutorial{tutorial_feature_description}
Goal
----

View File

@ -1,6 +1,9 @@
Feature Matching with FLANN {#tutorial_feature_flann_matcher}
===========================
@prev_tutorial{tutorial_feature_description}
@next_tutorial{tutorial_feature_homography}
Goal
----

View File

@ -1,6 +1,9 @@
Features2D + Homography to find a known object {#tutorial_feature_homography}
==============================================
@prev_tutorial{tutorial_feature_flann_matcher}
@next_tutorial{tutorial_detection_of_planar_objects}
Goal
----

View File

@ -1,6 +1,8 @@
Basic concepts of the homography explained with code {#tutorial_homography}
====================================================
@prev_tutorial{tutorial_akaze_tracking}
@tableofcontents
Introduction {#tutorial_homography_Introduction}

View File

@ -1,6 +1,9 @@
Detecting corners location in subpixels {#tutorial_corner_subpixels}
=======================================
@prev_tutorial{tutorial_generic_corner_detector}
@next_tutorial{tutorial_feature_detection}
Goal
----

View File

@ -1,6 +1,10 @@
Creating your own corner detector {#tutorial_generic_corner_detector}
=================================
@prev_tutorial{tutorial_good_features_to_track}
@next_tutorial{tutorial_corner_subpixels}
Goal
----

View File

@ -1,6 +1,9 @@
Shi-Tomasi corner detector {#tutorial_good_features_to_track}
==========================
@prev_tutorial{tutorial_harris_detector}
@next_tutorial{tutorial_generic_corner_detector}
Goal
----

View File

@ -1,6 +1,8 @@
Harris corner detector {#tutorial_harris_detector}
======================
@next_tutorial{tutorial_good_features_to_track}
Goal
----

View File

@ -2,6 +2,8 @@ Similarity check (PSNR and SSIM) on the GPU {#tutorial_gpu_basics_similarity}
===========================================
@todo update this tutorial
@next_tutorial{tutorial_gpu_thrust_interop}
Goal
----

View File

@ -1,6 +1,8 @@
Using a cv::cuda::GpuMat with thrust {#tutorial_gpu_thrust_interop}
===========================================
@prev_tutorial{tutorial_gpu_basics_similarity}
Goal
----

View File

@ -1,6 +1,10 @@
OpenCV4Android SDK {#tutorial_O4A_SDK}
==================
@prev_tutorial{tutorial_android_dev_intro}
@next_tutorial{tutorial_dev_with_OCV_on_Android}
This tutorial was designed to help you with the installation and configuration of the OpenCV4Android SDK.
This guide was written with MS Windows 7 in mind, though it should work with GNU Linux and Apple Mac

View File

@ -1,6 +1,10 @@
Introduction into Android Development {#tutorial_android_dev_intro}
=====================================
@prev_tutorial{tutorial_clojure_dev_intro}
@next_tutorial{tutorial_O4A_SDK}
This guide was designed to help you learn Android development basics and set up your
working environment quickly. It was written with Windows 7 in mind, though it should also work with Linux
(Ubuntu), Mac OS X and any other OS supported by the Android SDK.

View File

@ -1,6 +1,10 @@
Use OpenCL in Android camera preview based CV application {#tutorial_android_ocl_intro}
=====================================
@prev_tutorial{tutorial_dev_with_OCV_on_Android}
@next_tutorial{tutorial_macos_install}
This guide was designed to help you use [OpenCL™](https://www.khronos.org/opencl/) in an Android camera-preview-based CV application.
It was written for the [Eclipse-based ADT tools](http://developer.android.com/tools/help/adt.html)
(now deprecated by Google), but it can easily be reproduced with [Android Studio](http://developer.android.com/tools/studio/index.html).

View File

@ -1,6 +1,10 @@
Android Development with OpenCV {#tutorial_dev_with_OCV_on_Android}
===============================
@prev_tutorial{tutorial_O4A_SDK}
@next_tutorial{tutorial_android_ocl_intro}
This tutorial has been created to help you use the OpenCV library within your Android project.
This guide was written with Windows 7 in mind, though it should work with any other OS supported by

View File

@ -1,6 +1,10 @@
Building OpenCV for Tegra with CUDA {#tutorial_building_tegra_cuda}
===================================
@prev_tutorial{tutorial_arm_crosscompile_with_cmake}
@next_tutorial{tutorial_display_image}
@tableofcontents
OpenCV with CUDA for Tegra

View File

@ -1,6 +1,10 @@
Introduction to OpenCV Development with Clojure {#tutorial_clojure_dev_intro}
===============================================
@prev_tutorial{tutorial_java_eclipse}
@next_tutorial{tutorial_android_dev_intro}
As of OpenCV 2.4.4, OpenCV supports desktop Java development using nearly the same interface as for
Android development.

View File

@ -1,6 +1,9 @@
Cross referencing OpenCV from other Doxygen projects {#tutorial_cross_referencing}
====================================================
@prev_tutorial{tutorial_transition_guide}
Cross referencing OpenCV
------------------------

View File

@ -1,6 +1,10 @@
Cross compilation for ARM based Linux systems {#tutorial_arm_crosscompile_with_cmake}
=============================================
@prev_tutorial{tutorial_ios_install}
@next_tutorial{tutorial_building_tegra_cuda}
These steps are tested on Ubuntu Linux 12.04, but should work for other Linux distributions. In the case
of other distributions, package names and names of cross-compilation tools may differ. There are
several popular EABI versions used on the ARM platform. This tutorial is written for *gnueabi*

View File

@ -1,6 +1,10 @@
Introduction to Java Development {#tutorial_java_dev_intro}
================================
@prev_tutorial{tutorial_windows_visual_studio_image_watch}
@next_tutorial{tutorial_java_eclipse}
As of OpenCV 2.4.4, OpenCV supports desktop Java development using nearly the same interface as for
Android development. This guide will help you to create your first Java (or Scala) application using
OpenCV. We will use either [Apache Ant](http://ant.apache.org/) or [Simple Build Tool

View File

@ -1,6 +1,9 @@
Getting Started with Images {#tutorial_display_image}
===========================
@prev_tutorial{tutorial_building_tegra_cuda}
@next_tutorial{tutorial_documentation}
Goal
----

View File

@ -1,6 +1,10 @@
Writing documentation for OpenCV {#tutorial_documentation}
================================
@prev_tutorial{tutorial_display_image}
@next_tutorial{tutorial_transition_guide}
@tableofcontents
Doxygen overview {#tutorial_documentation_overview}

View File

@ -1,6 +1,9 @@
Installation in iOS {#tutorial_ios_install}
===================
@prev_tutorial{tutorial_macos_install}
@next_tutorial{tutorial_arm_crosscompile_with_cmake}
Required Packages
-----------------

View File

@ -1,6 +1,10 @@
Using OpenCV Java with Eclipse {#tutorial_java_eclipse}
==============================
@prev_tutorial{tutorial_java_dev_intro}
@next_tutorial{tutorial_clojure_dev_intro}
Since version 2.4.4 [OpenCV supports Java](http://opencv.org/opencv-java-api.html). In this tutorial
I will explain how to set up a development environment for using OpenCV Java with Eclipse on
**Windows**, so you can enjoy the benefits of a garbage-collected, very refactorable (rename variable,

View File

@ -1,6 +1,9 @@
Using OpenCV with Eclipse (plugin CDT) {#tutorial_linux_eclipse}
======================================
@prev_tutorial{tutorial_linux_gcc_cmake}
@next_tutorial{tutorial_windows_install}
Prerequisites
-------------
There are two ways: one by forming a project directly, and another by using CMake.

View File

@ -1,6 +1,10 @@
Using OpenCV with gcc and CMake {#tutorial_linux_gcc_cmake}
===============================
@prev_tutorial{tutorial_linux_install}
@next_tutorial{tutorial_linux_eclipse}
@note We assume that you have successfully installed OpenCV on your workstation.
- The easiest way of using OpenCV in your code is to use [CMake](http://www.cmake.org/). A few

View File

@ -1,6 +1,9 @@
Installation in Linux {#tutorial_linux_install}
=====================
@next_tutorial{tutorial_linux_gcc_cmake}
The following steps have been tested for Ubuntu 10.04 but should work with other distros as well.
Required Packages

View File

@ -1,6 +1,10 @@
Installation in MacOS {#tutorial_macos_install}
=====================
@prev_tutorial{tutorial_android_ocl_intro}
@next_tutorial{tutorial_ios_install}
The following steps have been tested for MacOSX (Mavericks) but should work with other versions as well.
Required Packages

View File

@ -1,6 +1,10 @@
Transition guide {#tutorial_transition_guide}
================
@prev_tutorial{tutorial_documentation}
@next_tutorial{tutorial_cross_referencing}
@tableofcontents
Changes overview {#tutorial_transition_overview}

View File

@ -1,6 +1,10 @@
Installation in Windows {#tutorial_windows_install}
=======================
@prev_tutorial{tutorial_linux_eclipse}
@next_tutorial{tutorial_windows_visual_studio_opencv}
The description here was tested on Windows 7 SP1. Nevertheless, it should also work on any other
relatively modern version of Windows OS. If you encounter errors after following the steps described
below, feel free to contact us via our [OpenCV Q&A forum](http://answers.opencv.org). We'll do our

View File

@ -1,6 +1,10 @@
Image Watch: viewing in-memory images in the Visual Studio debugger {#tutorial_windows_visual_studio_image_watch}
===================================================================
@prev_tutorial{tutorial_windows_visual_studio_opencv}
@next_tutorial{tutorial_java_dev_intro}
Image Watch is a plug-in for Microsoft Visual Studio that lets you visualize in-memory images
(*cv::Mat* or *IplImage_* objects, for example) while debugging an application. This can be helpful
for tracking down bugs, or for simply understanding what a given piece of code is doing.

View File

@ -1,6 +1,10 @@
How to build applications with OpenCV inside the "Microsoft Visual Studio" {#tutorial_windows_visual_studio_opencv}
==========================================================================
@prev_tutorial{tutorial_windows_install}
@next_tutorial{tutorial_windows_visual_studio_image_watch}
Everything I describe here will apply to the `C/C++` interface of OpenCV. I start out from the
assumption that you have read and successfully completed the @ref tutorial_windows_install tutorial.
Therefore, before you go any further make sure you have an OpenCV directory that contains the OpenCV

View File

@ -1,6 +1,8 @@
OpenCV iOS Hello {#tutorial_hello}
================
@next_tutorial{tutorial_image_manipulation}
Goal
----

View File

@ -1,6 +1,9 @@
OpenCV iOS - Image Processing {#tutorial_image_manipulation}
=============================
@prev_tutorial{tutorial_hello}
@next_tutorial{tutorial_video_processing}
Goal
----

View File

@ -1,6 +1,9 @@
OpenCV iOS - Video Processing {#tutorial_video_processing}
=============================
@prev_tutorial{tutorial_image_manipulation}
This tutorial explains how to process video frames using the iPhone's camera and OpenCV.
Prerequisites:

View File

@ -1,6 +1,8 @@
Introduction to Principal Component Analysis (PCA) {#tutorial_introduction_to_pca}
=======================================
@prev_tutorial{tutorial_non_linear_svms}
Goal
----

View File

@ -1,6 +1,8 @@
Introduction to Support Vector Machines {#tutorial_introduction_to_svm}
=======================================
@next_tutorial{tutorial_non_linear_svms}
Goal
----

View File

@ -1,6 +1,9 @@
Support Vector Machines for Non-Linearly Separable Data {#tutorial_non_linear_svms}
=======================================================
@prev_tutorial{tutorial_introduction_to_svm}
@next_tutorial{tutorial_introduction_to_pca}
Goal
----

View File

@ -1,6 +1,8 @@
Cascade Classifier {#tutorial_cascade_classifier}
==================
@next_tutorial{tutorial_traincascade}
Goal
----

View File

@ -1,6 +1,8 @@
Cascade Classifier Training {#tutorial_traincascade}
===========================
@prev_tutorial{tutorial_cascade_classifier}
Introduction
------------

View File

@ -1,6 +1,8 @@
How to Use Background Subtraction Methods {#tutorial_background_subtraction}
=========================================
@next_tutorial{tutorial_meanshift}
- Background subtraction (BS) is a common and widely used technique for generating a foreground
mask (namely, a binary image containing the pixels belonging to moving objects in the scene) by
using static cameras.
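A minimal C++ sketch of that idea, assuming a MOG2 subtractor and a hypothetical input file, could look like this:

```cpp
#include <opencv2/video.hpp>
#include <opencv2/videoio.hpp>

int main()
{
    cv::VideoCapture cap("vtest.avi");  // hypothetical static-camera video
    cv::Ptr<cv::BackgroundSubtractor> subtractor = cv::createBackgroundSubtractorMOG2();
    cv::Mat frame, fgMask;
    while (cap.read(frame))
    {
        // fgMask is the binary foreground mask: non-zero pixels belong to moving objects.
        subtractor->apply(frame, fgMask);
    }
    return 0;
}
```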

View File

@ -1,6 +1,9 @@
Meanshift and Camshift {#tutorial_meanshift}
======================
@prev_tutorial{tutorial_background_subtraction}
@next_tutorial{tutorial_optical_flow}
Goal
----

View File

@ -1,6 +1,8 @@
Optical Flow {#tutorial_optical_flow}
============
@prev_tutorial{tutorial_meanshift}
Goal
----

View File

@ -1,6 +1,9 @@
Using Creative Senz3D and other Intel Perceptual Computing SDK compatible depth sensors {#tutorial_intelperc}
=======================================================================================
@prev_tutorial{tutorial_kinect_openni}
Depth sensors compatible with the Intel Perceptual Computing SDK are supported through the VideoCapture
class. The depth map, RGB image and some other output formats can be retrieved using the familiar
interface of VideoCapture.
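For example, a minimal sketch (assuming OpenCV was built with Intel Perceptual Computing SDK support) might grab frames and retrieve the depth and RGB streams like this:

```cpp
#include <opencv2/videoio.hpp>

int main()
{
    cv::VideoCapture capture(cv::CAP_INTELPERC);  // open the Intel Perceptual Computing sensor
    cv::Mat depthMap, rgbImage;
    while (capture.grab())
    {
        capture.retrieve(depthMap, cv::CAP_INTELPERC_DEPTH_MAP);
        capture.retrieve(rgbImage, cv::CAP_INTELPERC_IMAGE);
    }
    return 0;
}
```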

View File

@ -1,6 +1,10 @@
Using Kinect and other OpenNI compatible depth sensors {#tutorial_kinect_openni}
======================================================
@prev_tutorial{tutorial_video_write}
@next_tutorial{tutorial_intelperc}
Depth sensors compatible with OpenNI (Kinect, XtionPRO, ...) are supported through the VideoCapture
class. The depth map, BGR image and some other output formats can be retrieved using the familiar
interface of VideoCapture.
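The OpenNI case follows the same grab/retrieve pattern, here sketched with the OpenNI2 backend constants (again assuming OpenCV was built with OpenNI support):

```cpp
#include <opencv2/videoio.hpp>

int main()
{
    cv::VideoCapture capture(cv::CAP_OPENNI2);    // open a Kinect/XtionPRO-class sensor
    cv::Mat depthMap, bgrImage;
    while (capture.grab())
    {
        capture.retrieve(depthMap, cv::CAP_OPENNI_DEPTH_MAP);
        capture.retrieve(bgrImage, cv::CAP_OPENNI_BGR_IMAGE);
    }
    return 0;
}
```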

View File

@ -1,6 +1,8 @@
Video Input with OpenCV and similarity measurement {#tutorial_video_input_psnr_ssim}
==================================================
@next_tutorial{tutorial_video_write}
Goal
----

View File

@ -1,6 +1,9 @@
Creating a video with OpenCV {#tutorial_video_write}
============================
@prev_tutorial{tutorial_video_input_psnr_ssim}
@next_tutorial{tutorial_kinect_openni}
Goal
----

View File

@ -1,6 +1,9 @@
Creating Widgets {#tutorial_creating_widgets}
================
@prev_tutorial{tutorial_transformations}
@next_tutorial{tutorial_histo3D}
Goal
----

View File

@ -1,6 +1,8 @@
Creating a 3D histogram {#tutorial_histo3D}
================
@prev_tutorial{tutorial_creating_widgets}
Goal
----

View File

@ -1,6 +1,8 @@
Launching Viz {#tutorial_launching_viz}
=============
@next_tutorial{tutorial_widget_pose}
Goal
----

View File

@ -1,6 +1,9 @@
Transformations {#tutorial_transformations}
===============
@prev_tutorial{tutorial_widget_pose}
@next_tutorial{tutorial_creating_widgets}
Goal
----

View File

@ -1,6 +1,9 @@
Pose of a widget {#tutorial_widget_pose}
================
@prev_tutorial{tutorial_launching_viz}
@next_tutorial{tutorial_transformations}
Goal
----