Merge pull request #21743 from luzpaz:typos
commit 90671233c6

@@ -1,7 +1,7 @@
 {
   "name": "image_classification",
   "version": "0.0.1",
-  "description": "An Electon.js example of image_classification using webnn-native",
+  "description": "An Electron.js example of image_classification using webnn-native",
   "main": "main.js",
   "author": "WebNN-native Authors",
   "license": "Apache-2.0",

@@ -97,10 +97,10 @@ Building OpenCV.js from Source
 @endcode

 @note
-The loader is implemented as a js file in the path `<opencv_js_dir>/bin/loader.js`. The loader utilizes the [WebAssembly Feature Detection](https://github.com/GoogleChromeLabs/wasm-feature-detect) to detect the features of the broswer and load corresponding OpenCV.js automatically. To use it, you need to use the UMD version of [WebAssembly Feature Detection](https://github.com/GoogleChromeLabs/wasm-feature-detect) and introduce the `loader.js` in your Web application.
+The loader is implemented as a js file in the path `<opencv_js_dir>/bin/loader.js`. The loader utilizes the [WebAssembly Feature Detection](https://github.com/GoogleChromeLabs/wasm-feature-detect) to detect the features of the browser and load corresponding OpenCV.js automatically. To use it, you need to use the UMD version of [WebAssembly Feature Detection](https://github.com/GoogleChromeLabs/wasm-feature-detect) and introduce the `loader.js` in your Web application.

 Example Code:
-@code{.javascipt}
+@code{.javascript}
 // Set paths configuration
 let pathsConfig = {
     wasm: "../../build_wasm/opencv.js",

@@ -173,7 +173,7 @@ This snippet and the following require [Node.js](https://nodejs.org) to be insta

 ### Headless with Puppeteer

-Alternatively tests can run with [GoogleChrome/puppeteer](https://github.com/GoogleChrome/puppeteer#readme) which is a version of Google Chrome that runs in the terminal (useful for Continuos integration like travis CI, etc)
+Alternatively tests can run with [GoogleChrome/puppeteer](https://github.com/GoogleChrome/puppeteer#readme) which is a version of Google Chrome that runs in the terminal (useful for Continuous integration like travis CI, etc)

 @code{.sh}
 cd build_js/bin

@@ -229,7 +229,7 @@ node tests.js
 The simd optimization is experimental as wasm simd is still in development.

 @note
-Now only emscripten LLVM upstream backend supports wasm simd, refering to https://emscripten.org/docs/porting/simd.html. So you need to setup upstream backend environment with the following command first:
+Now only emscripten LLVM upstream backend supports wasm simd, referring to https://emscripten.org/docs/porting/simd.html. So you need to setup upstream backend environment with the following command first:
 @code{.bash}
 ./emsdk update
 ./emsdk install latest-upstream

@@ -244,9 +244,9 @@ Samples:
 There are three new sample files in opencv/samples directory.

 1. `epipolar_lines.cpp` – input arguments of `main` function are two
-   pathes to images. Then correspondences are found using
+   paths to images. Then correspondences are found using
    SIFT detector. Fundamental matrix is found using RANSAC from
-   tentaive correspondences and epipolar lines are plot.
+   tentative correspondences and epipolar lines are plot.

 2. `essential_mat_reconstr.cpp` – input arguments are path to data file
    containing image names and single intrinsic matrix and directory

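Note: the sentence fixed in this hunk describes a standard two-view pipeline (SIFT matches, then a RANSAC fundamental matrix, then epipolar lines). A minimal sketch of that pipeline, not the sample's actual code; image paths and variable names here are placeholders:

@code{.cpp}
#include <opencv2/core.hpp>
#include <opencv2/imgcodecs.hpp>
#include <opencv2/features2d.hpp>
#include <opencv2/calib3d.hpp>
#include <vector>

int main(int argc, char** argv)
{
    // Two image paths, as the sample's `main` expects.
    cv::Mat img1 = cv::imread(argv[1], cv::IMREAD_GRAYSCALE);
    cv::Mat img2 = cv::imread(argv[2], cv::IMREAD_GRAYSCALE);

    // Tentative correspondences from SIFT + cross-checked brute-force matching.
    cv::Ptr<cv::SIFT> sift = cv::SIFT::create();
    std::vector<cv::KeyPoint> kp1, kp2;
    cv::Mat desc1, desc2;
    sift->detectAndCompute(img1, cv::noArray(), kp1, desc1);
    sift->detectAndCompute(img2, cv::noArray(), kp2, desc2);

    cv::BFMatcher matcher(cv::NORM_L2, /*crossCheck=*/true);
    std::vector<cv::DMatch> matches;
    matcher.match(desc1, desc2, matches);

    std::vector<cv::Point2f> pts1, pts2;
    for (const cv::DMatch& m : matches) {
        pts1.push_back(kp1[m.queryIdx].pt);
        pts2.push_back(kp2[m.trainIdx].pt);
    }

    // Fundamental matrix from the tentative correspondences via RANSAC;
    // epipolar lines can then be drawn with cv::computeCorrespondEpilines().
    cv::Mat F = cv::findFundamentalMat(pts1, pts2, cv::FM_RANSAC);
    return 0;
}
@endcode
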
@@ -92,7 +92,7 @@ We then fill value to the corresponding pixel in the dst image.

 ### Parallel implementation

-When looking at the sequential implementation, we can notice that each pixel depends on multiple neighbouring pixels but only one pixel is edited at a time. Thus, to optimize the computation, we can split the image into stripes and parallely perform convolution on each, by exploiting the multi-core architecture of modern processor. The OpenCV @ref cv::parallel_for_ framework automatically decides how to split the computation efficiently and does most of the work for us.
+When looking at the sequential implementation, we can notice that each pixel depends on multiple neighbouring pixels but only one pixel is edited at a time. Thus, to optimize the computation, we can split the image into stripes and parallelly perform convolution on each, by exploiting the multi-core architecture of modern processor. The OpenCV @ref cv::parallel_for_ framework automatically decides how to split the computation efficiently and does most of the work for us.

 @note Although values of a pixel in a particular stripe may depend on pixel values outside the stripe, these are only read only operations and hence will not cause undefined behaviour.

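Note: the paragraph fixed above is about cv::parallel_for_'s stripe-based decomposition. A minimal sketch of the calling pattern, using a toy per-row operation rather than the tutorial's convolution:

@code{.cpp}
#include <opencv2/core.hpp>

// cv::parallel_for_ splits the row range into stripes and runs the lambda
// on each stripe, potentially from several threads at once.
void invertParallel(cv::Mat& img)
{
    CV_Assert(img.type() == CV_8UC1);
    cv::parallel_for_(cv::Range(0, img.rows), [&](const cv::Range& stripe) {
        for (int r = stripe.start; r < stripe.end; ++r)
        {
            uchar* row = img.ptr<uchar>(r);
            for (int c = 0; c < img.cols; ++c)
                row[c] = 255 - row[c];  // writes stay inside this stripe
        }
    });
}
@endcode
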
@@ -70,7 +70,7 @@ Sometimes networks built using blocked structure that means some layer are
 identical or quite similar. If you want to apply the same scheduling for
 different layers accurate to tiling or vectorization factors, define scheduling
 patterns in section `patterns` at the beginning of scheduling file.
-Also, your patters may use some parametric variables.
+Also, your patterns may use some parametric variables.
 @code
 # At the beginning of the file
 patterns:

@@ -29,8 +29,8 @@ Before recognition, you should `setVocabulary` and `setDecodeType`.
 - "CTC-prefix-beam-search", the output of the text recognition model should be a probability matrix same with "CTC-greedy".
     - The algorithm is proposed at Hannun's [paper](https://arxiv.org/abs/1408.2873).
     - `setDecodeOptsCTCPrefixBeamSearch` could be used to control the beam size in search step.
-    - To futher optimize for big vocabulary, a new option `vocPruneSize` is introduced to avoid iterate the whole vocbulary
-      but only the number of `vocPruneSize` tokens with top probabilty.
+    - To further optimize for big vocabulary, a new option `vocPruneSize` is introduced to avoid iterate the whole vocbulary
+      but only the number of `vocPruneSize` tokens with top probability.

 @ref cv::dnn::TextRecognitionModel::recognize() is the main function for text recognition.
 - The input image should be a cropped text image or an image with `roiRects`

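Note: the bullets fixed here document cv::dnn::TextRecognitionModel's CTC prefix beam search options. A hedged usage sketch; the model and vocabulary file names are placeholders, and the beam/prune sizes are arbitrary:

@code{.cpp}
#include <opencv2/dnn.hpp>
#include <opencv2/imgcodecs.hpp>
#include <fstream>
#include <string>
#include <vector>

int main()
{
    cv::dnn::TextRecognitionModel model("crnn.onnx");   // placeholder model file

    // Vocabulary: one token per line, as in the OpenCV text recognition samples.
    std::vector<std::string> vocabulary;
    std::ifstream vocFile("alphabet.txt");               // placeholder path
    for (std::string token; std::getline(vocFile, token); )
        vocabulary.push_back(token);
    model.setVocabulary(vocabulary);

    model.setDecodeType("CTC-prefix-beam-search");
    // Beam size 10; consider only the 5 most probable tokens per step
    // (`vocPruneSize`) instead of iterating the whole vocabulary.
    model.setDecodeOptsCTCPrefixBeamSearch(10, 5);

    cv::Mat cropped = cv::imread("word.png");            // cropped text image
    std::string text = model.recognize(cropped);
    return 0;
}
@endcode
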
@@ -142,7 +142,7 @@ being a Graph API, doesn't force its users to do that.
 However, a graph is still built implicitly when a cv::GComputation
 object is defined. It may be useful to inspect how the resulting graph
 looks like to check if it is generated correctly and if it really
-represents our alrogithm. It is also useful to learn the structure of
+represents our algorithm. It is also useful to learn the structure of
 the graph to see if it has any redundancies.

 G-API allows to dump generated graphs to `.dot` files which then

@@ -241,7 +241,7 @@ pipeline is compiled for streaming:
 cv::GComputation::compileStreaming() triggers a special video-oriented
 form of graph compilation where G-API is trying to optimize
 throughput. Result of this compilation is an object of special type
-cv::GStreamingCompiled -- in constract to a traditional callable
+cv::GStreamingCompiled -- in contrast to a traditional callable
 cv::GCompiled, these objects are closer to media players in their
 semantics.

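Note: the media-player semantics mentioned in the fixed line look roughly like this in code. A sketch only; the trivial blur graph and the video path are stand-ins:

@code{.cpp}
#include <opencv2/gapi.hpp>
#include <opencv2/gapi/imgproc.hpp>
#include <opencv2/gapi/streaming/cap.hpp>

int main()
{
    // A trivial one-operation graph, just to have something to compile.
    cv::GMat in;
    cv::GMat out = cv::gapi::blur(in, cv::Size(3, 3));
    cv::GComputation graph(cv::GIn(in), cv::GOut(out));

    // Streaming compilation: the result is driven like a media player.
    cv::GStreamingCompiled sc = graph.compileStreaming();
    sc.setSource(cv::gapi::wip::make_src<cv::gapi::wip::GCaptureSource>("video.avi"));
    sc.start();                          // "play"
    cv::Mat frame;
    while (sc.pull(cv::gout(frame)))     // blocks until the next frame or end-of-stream
    {
        // ... consume frame ...
    }
    sc.stop();                           // "stop"
    return 0;
}
@endcode
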
@@ -79,7 +79,7 @@ The main function is rather simple, as follows from the comments we do the follo
 In general callback functions are used to react to some kind of signal, in our
 case it's trackbar's state change.
 Explicit one-time call of `thresh_callback` is necessary to display
-the "Contours" window simultaniously with the "Source" window.
+the "Contours" window simultaneously with the "Source" window.

 @add_toggle_cpp
 @snippet samples/cpp/tutorial_code/ShapeDescriptors/generalContours_demo1.cpp trackbar

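Note: the pattern the corrected sentence refers to, in miniature. The window and trackbar names and the empty callback body are illustrative only:

@code{.cpp}
#include <opencv2/highgui.hpp>

int thresh = 100;

static void thresh_callback(int, void*)
{
    // ... recompute contours and imshow("Contours", ...) here ...
}

int main()
{
    cv::namedWindow("Source");
    cv::createTrackbar("Thresh:", "Source", &thresh, 255, thresh_callback);
    thresh_callback(0, nullptr);  // explicit one-time call so "Contours" shows up immediately
    cv::waitKey();
    return 0;
}
@endcode
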
@@ -240,7 +240,7 @@ taken:
 Hello OpenCV Sample
 -------------------

-Here are basic steps to guide you trough the process of creating a simple OpenCV-centric
+Here are basic steps to guide you through the process of creating a simple OpenCV-centric
 application. It will be capable of accessing camera output, processing it and displaying the result.

 -# Open Eclipse IDE, create a new clean workspace, create a new Android project

@@ -20,7 +20,7 @@ This pretty-printer can show element type, `is_continuous`, `is_submatrix` flags

 # Installation {#tutorial_linux_gdb_pretty_printer_installation}

-Move into `opencv/samples/gdb/`. Place `mat_pretty_printer.py` in a convinient place, rename `gdbinit` to `.gdbinit` and move it into your home folder. Change 'source' line of `.gdbinit` to point to your `mat_pretty_printer.py` path.
+Move into `opencv/samples/gdb/`. Place `mat_pretty_printer.py` in a convenient place, rename `gdbinit` to `.gdbinit` and move it into your home folder. Change 'source' line of `.gdbinit` to point to your `mat_pretty_printer.py` path.

 In order to check version of python bundled with your gdb, use the following commands from the gdb shell:

@@ -34,5 +34,5 @@ If the version of python 3 installed in your system doesn't match the version in

 # Usage {#tutorial_linux_gdb_pretty_printer_usage}

-The fields in a debugger prefixed with `view_` are pseudo-fields added for convinience, the rest are left as is.
-If you feel that the number of elements in truncated view is too low, you can edit `mat_pretty_printer.py` - `np.set_printoptions` controlls everything matrix display-related.
+The fields in a debugger prefixed with `view_` are pseudo-fields added for convenience, the rest are left as is.
+If you feel that the number of elements in truncated view is too low, you can edit `mat_pretty_printer.py` - `np.set_printoptions` controls everything matrix display-related.

@@ -22,7 +22,7 @@ Introduction
 In *OpenCV* all the image processing operations are usually carried out on the *Mat* structure. In
 iOS however, to render an image on screen it have to be an instance of the *UIImage* class. To
 convert an *OpenCV Mat* to an *UIImage* we use the *Core Graphics* framework available in iOS. Below
-is the code needed to covert back and forth between Mat's and UIImage's.
+is the code needed to convert back and forth between Mat's and UIImage's.
 @code{.m}
 - (cv::Mat)cvMatFromUIImage:(UIImage *)image
 {