Merge pull request #26469 from vpisarev:move_gapi_to_contrib_part1

Removed g-api from the main repo #26469

Following #25000.
CI patch: https://github.com/opencv/ci-gha-workflow/pull/196

This is the migration of G-API from opencv to opencv_contrib, part 1.
Here we simply remove G-API from the main repo; the next patch will bring G-API to opencv_contrib.

- [x] I agree to contribute to the project under Apache 2 License.
- [x] To the best of my knowledge, the proposed patch is not based on a code under GPL or another license that is incompatible with OpenCV
- [x] The PR is proposed to the proper branch
- [x] There is a reference to the original bug report and related work
- [ ] There is accuracy test, performance test and test data in opencv_extra repository, if applicable
      Patch to opencv_extra has the same branch name.
- [ ] The feature is well documented and sample code can be built with the project CMake
Vadim Pisarevsky 2024-11-19 10:29:24 +03:00 committed by GitHub
parent ea0f9336e2
commit 2ed6d0f590
No known key found for this signature in database
GPG Key ID: B5690EEEBB952194
594 changed files with 1 additions and 153147 deletions

View File

@@ -123,7 +123,7 @@ endif()
set(STD_OPENCV_LIBS opencv-data)
set(STD_OPENCV_DEV libopencv-dev)
foreach(module 3d calib core dnn features flann gapi highgui
foreach(module 3d calib core dnn features flann highgui
imgcodecs imgproc ml objdetect
photo stereo stitching ts video videoio)
if(HAVE_opencv_${module})

View File

@@ -21,7 +21,6 @@ ocv_update(BUILD_opencv_java OFF)
# <[thread|mutex|condition_variable|future]>` and linkage into
# `libpthread` to work.
ocv_update(BUILD_opencv_objdetect OFF)
ocv_update(BUILD_opencv_gapi OFF)
ocv_update(BUILD_opencv_dnn OFF)
set(OPJ_USE_THREAD "OFF" CACHE INTERNAL "")


View File

@@ -1,404 +0,0 @@
# Porting anisotropic image segmentation on G-API {#tutorial_gapi_anisotropic_segmentation}
@prev_tutorial{tutorial_gapi_interactive_face_detection}
@next_tutorial{tutorial_gapi_face_beautification}
[TOC]
# Introduction {#gapi_anisotropic_intro}
In this tutorial you will learn:
* How an existing algorithm can be transformed into a G-API
computation (graph);
* How to inspect and profile G-API graphs;
* How to customize graph execution without changing its code.
This tutorial is based on @ref
tutorial_anisotropic_image_segmentation_by_a_gst.
# Quick start: using OpenCV backend {#gapi_anisotropic_start}
Before we start, let's review the original algorithm implementation:
@include cpp/tutorial_code/ImgProc/anisotropic_image_segmentation/anisotropic_image_segmentation.cpp
## Examining calcGST() {#gapi_anisotropic_calcgst}
The function calcGST() is clearly an image processing pipeline:
* It is just a sequence of operations over a number of cv::Mat;
* There are no conditionals or loops in the code;
* All functions operate on 2D images (like cv::Sobel, cv::multiply,
cv::boxFilter, cv::sqrt, etc).
Considering the above, calcGST() is a great candidate to start
with. In the original code, its prototype is defined like this:
@snippet cpp/tutorial_code/ImgProc/anisotropic_image_segmentation/anisotropic_image_segmentation.cpp calcGST_proto
With G-API, we can define it as follows:
@snippet cpp/tutorial_code/gapi/porting_anisotropic_image_segmentation/porting_anisotropic_image_segmentation_gapi.cpp calcGST_proto
It is important to understand that the new G-API-based version of
calcGST() will only produce a compute graph, in contrast to its
original version, which actually calculates the values. This is a
fundamental difference -- G-API-based functions like this are used to
construct graphs, not to process the actual data.
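To illustrate the difference in the interface alone, the two prototypes may be compared side by side (a sketch; parameter names are assumptions):

```cpp
// Original OpenCV version: processes real data, returns results via output cv::Mat references.
void calcGST(const cv::Mat& inputImg, cv::Mat& imgCoherencyOut, cv::Mat& imgOrientationOut, int w);

// G-API version: same shape, but operates on cv::GMat placeholders and only records
// operations into a graph -- no pixels are touched when this function is called.
void calcGST(const cv::GMat& inputImg, cv::GMat& imgCoherencyOut, cv::GMat& imgOrientationOut, int w);
```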
Let's start implementing calcGST() with the calculation of the \f$J\f$
matrix. This is how the original code looks:
@snippet cpp/tutorial_code/ImgProc/anisotropic_image_segmentation/anisotropic_image_segmentation.cpp calcJ_header
Here we need to declare output objects for every new operation (see
img as the result of cv::Mat::convertTo, imgDiffX and others as the
results of cv::Sobel and cv::multiply).
The G-API analogue is listed below:
@snippet cpp/tutorial_code/gapi/porting_anisotropic_image_segmentation/porting_anisotropic_image_segmentation_gapi.cpp calcGST_header
This snippet demonstrates the following syntactic differences between
G-API and traditional OpenCV:
* All standard G-API functions are by default placed in the "cv::gapi"
namespace;
* G-API operations _return_ their results -- there's no need to pass
extra "output" parameters to the functions.
Note -- this code is also using `auto` -- types of intermediate objects
like `img`, `imgDiffX`, and so on are inferred automatically by the
C++ compiler. In this example, the types are determined by G-API
operation return values which all are cv::GMat.
G-API standard kernels try to follow OpenCV API conventions
whenever possible -- cv::gapi::sobel takes the same arguments as
cv::Sobel, cv::gapi::mul follows cv::multiply, and so on (except that
they return their results instead of writing to output parameters).
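For illustration, the J-matrix part might look roughly like this in G-API terms (a sketch; exact function names and casing may differ between G-API versions):

```cpp
// Sketch: every call returns a new cv::GMat node instead of writing into an output buffer.
cv::GMat img       = cv::gapi::convertTo(inputImg, CV_32F);
cv::GMat imgDiffX  = cv::gapi::Sobel(img, CV_32F, 1, 0, 3);
cv::GMat imgDiffY  = cv::gapi::Sobel(img, CV_32F, 0, 1, 3);
cv::GMat imgDiffXY = cv::gapi::mul(imgDiffX, imgDiffY);
```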
The rest of the calcGST() function can be implemented the same
way. Below is its full source code:
@snippet cpp/tutorial_code/gapi/porting_anisotropic_image_segmentation/porting_anisotropic_image_segmentation_gapi.cpp calcGST
## Running G-API graph {#gapi_anisotropic_running}
After calcGST() is defined in G-API terms, we can construct a graph
based on it and finally run it -- pass an input image and obtain the
result. Before we do that, let's look at how the original code works:
@snippet cpp/tutorial_code/ImgProc/anisotropic_image_segmentation/anisotropic_image_segmentation.cpp main_extra
G-API-based functions like calcGST() can't be applied to input data
directly, since they are _construction_ code, not _processing_ code.
In order to _run_ computations, a special object of class
cv::GComputation needs to be created. This object wraps our G-API code
(which is a composition of G-API data and operations) into a callable
object, similar to C++11
[std::function<>](https://en.cppreference.com/w/cpp/utility/functional/function).
The cv::GComputation class has a number of constructors which can be used
to define a graph. Generally, the user needs to pass the graph boundaries
-- the _input_ and _output_ objects on which a GComputation is
defined. G-API then analyzes the call flow from _outputs_ to _inputs_
and reconstructs the graph with the operations in between the specified
boundaries. This may sound complex, but in fact the code looks
like this:
@snippet cpp/tutorial_code/gapi/porting_anisotropic_image_segmentation/porting_anisotropic_image_segmentation_gapi.cpp main
Note that this code differs slightly from the original one: forming
the resulting image is also a part of the pipeline (done with
cv::gapi::addWeighted).
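A minimal sketch of this step, assuming the G-API calcGST() defined above (W is the window-size parameter from the original sample; thresholding of the coherency/orientation maps and type conversions are omitted for brevity):

```cpp
cv::GMat in;
cv::GMat imgCoherency, imgOrientation;
calcGST(in, imgCoherency, imgOrientation, W);                          // only builds the graph
cv::GMat out = cv::gapi::addWeighted(in, 0.5, imgCoherency, 0.5, 0.0);

// Graph boundaries: one input, three outputs.
cv::GComputation segm(cv::GIn(in), cv::GOut(out, imgCoherency, imgOrientation));

cv::Mat imgIn = cv::imread("input.jpg", cv::IMREAD_GRAYSCALE);
cv::Mat imgOut, coherency, orientation;
segm.apply(cv::gin(imgIn), cv::gout(imgOut, coherency, orientation));  // compiled & executed here
```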
The result of this G-API pipeline bit-exactly matches the original one
(given the same input image):
![Segmentation result with G-API](pics/result.jpg)
## G-API initial version: full listing {#gapi_anisotropic_ocv}
Below is the full listing of the initial anisotropic image
segmentation port on G-API:
@snippet cpp/tutorial_code/gapi/porting_anisotropic_image_segmentation/porting_anisotropic_image_segmentation_gapi.cpp full_sample
# Inspecting the initial version {#gapi_anisotropic_inspect}
Once we have the initial version of our algorithm working
with G-API, we can use it to inspect and learn how G-API works. This
chapter covers two aspects: understanding the graph structure and
memory profiling.
## Understanding the graph structure {#gapi_anisotropic_inspect_graph}
G-API stands for "Graph API", but did you notice any graphs in the
above example? It was one of the initial design goals -- G-API was
designed with expressions in mind to make the adoption and porting process
more straightforward. People _usually_ don't think in terms of
_Nodes_ and _Edges_ when writing ordinary code, and so G-API, while
being a Graph API, doesn't force its users to do that.
However, a graph is still built implicitly when a cv::GComputation
object is defined. It may be useful to inspect how the resulting graph
looks to check whether it was generated correctly and whether it really
represents our algorithm. It is also useful to learn the structure of
the graph to see if it has any redundancies.
G-API allows dumping generated graphs to `.dot` files, which can then
be visualized with [Graphviz](https://www.graphviz.org/), a
popular open-source graph visualization tool.
<!-- TODO THIS VARIABLE NEEDS TO BE FIXED TO DUMP DIR ASAP! -->
In order to dump our graph to a `.dot` file, set `GRAPH_DUMP_PATH` to a
file name before running the application, e.g.:
```
$ GRAPH_DUMP_PATH=segm.dot ./bin/example_tutorial_porting_anisotropic_image_segmentation_gapi
```
Now this file can be visualized with a `dot` command like this:
```
$ dot segm.dot -Tpng -o segm.png
```
or viewed interactively with `xdot` (please refer to your
distribution/operating system documentation on how to install these
packages).
![Anisotropic image segmentation graph](pics/segm.gif)
The above diagram demonstrates a number of interesting aspects of
G-API's internal algorithm representation:
1. The underlying G-API graph is bipartite: it consists of
_Operation_ and _Data_ nodes such that a _Data_ node can only be
connected to an _Operation_ node, an _Operation_ node can only be
connected to a _Data_ node, and nodes of a single kind are never
connected directly.
2. The graph is directed -- every edge in the graph has a direction.
3. The graph "begins" and "ends" with _Data_ nodes.
4. A _Data_ node can have only a single writer and multiple readers.
5. An _Operation_ node may have multiple inputs, though every input
must have a unique _port number_ (among inputs).
6. An _Operation_ node may have multiple outputs, and every output
must have a unique _port number_ (among outputs).
## Measuring memory footprint {#gapi_anisotropic_memory_ocv}
Let's measure and compare the memory footprint of the algorithm in its two
versions: G-API-based and OpenCV-based. At the moment, the G-API version
is also OpenCV-based, since it falls back to OpenCV functions internally.
On GNU/Linux, application memory footprint can be profiled with
[Valgrind](http://valgrind.org/). On Debian/Ubuntu systems it can be
installed like this (assuming you have administrator privileges):
```
$ sudo apt-get install valgrind massif-visualizer
```
Once installed, we can collect memory profiles easily for our two
algorithm versions:
```
$ valgrind --tool=massif --massif-out-file=ocv.out ./bin/example_tutorial_anisotropic_image_segmentation
==6101== Massif, a heap profiler
==6101== Copyright (C) 2003-2015, and GNU GPL'd, by Nicholas Nethercote
==6101== Using Valgrind-3.11.0 and LibVEX; rerun with -h for copyright info
==6101== Command: ./bin/example_tutorial_anisotropic_image_segmentation
==6101==
==6101==
$ valgrind --tool=massif --massif-out-file=gapi.out ./bin/example_tutorial_porting_anisotropic_image_segmentation_gapi
==6117== Massif, a heap profiler
==6117== Copyright (C) 2003-2015, and GNU GPL'd, by Nicholas Nethercote
==6117== Using Valgrind-3.11.0 and LibVEX; rerun with -h for copyright info
==6117== Command: ./bin/example_tutorial_porting_anisotropic_image_segmentation_gapi
==6117==
==6117==
```
Once done, we can inspect the collected profiles with
[Massif Visualizer](https://github.com/KDE/massif-visualizer)
(installed in the above step).
Below is the visualized memory profile of the original OpenCV version
of the algorithm:
![Memory profile: original Anisotropic Image Segmentation sample](pics/massif_export_ocv.png)
We see that memory is allocated as the application
executes, reaching its peak in the calcGST() function; then the
footprint drops as calcGST() completes its execution and all temporary
buffers are freed. Massif reports a peak memory consumption of 7.6 MiB.
Now let's look at the profile of the G-API version:
![Memory profile: G-API port of Anisotropic Image Segmentation sample](pics/massif_export_gapi.png)
Once the G-API computation is created and its execution starts, G-API
allocates all required memory at once, and the memory profile then
remains flat until the program terminates. Massif reports a
peak memory consumption of 11.4 MiB.
A reader may ask a fair question at this point -- is G-API that bad?
Why use it at all?
Fortunately, it is not. The increased memory consumption we see here
occurs because the default, naive OpenCV-based backend is used to
execute this graph. This backend serves mostly for quick prototyping
and debugging of algorithms before offload or further optimization.
It doesn't employ any sophisticated memory management strategies yet,
since that is not its purpose at the moment. In the following chapter,
we'll learn about the Fluid backend and see how the same G-API code can
run in a completely different model (and the footprint shrink to a
number of kilobytes).
# Backends and kernels {#gapi_anisotropic_backends}
This chapter covers how a G-API computation can be executed in a
special way -- e.g. offloaded to another device, or scheduled with
special intelligence. G-API is designed to make its graphs portable --
meaning that once a graph is defined in G-API terms, no changes
should be required to run it on CPU, on GPU, or on
both devices at once. [G-API High-level overview](@ref gapi_hld) and
[G-API Kernel API](@ref gapi_kernel_api) shed more light on the technical
details which make this possible. In this chapter, we will utilize the G-API
Fluid backend to make our graph cache-efficient on CPU.
G-API defines a _backend_ as the lower-level entity which knows how to
run kernels. Backends may have (and, in fact, do have) different
_Kernel APIs_ which are used to program and integrate kernels for
those backends. In this context, a _kernel_ is an implementation of an
_operation_, which is defined at the top API level (see the
G_TYPED_KERNEL() macro).
A backend is aware of device and platform specifics, and executes its
kernels with those specifics in mind. For
example, there may be a [Halide](http://halide-lang.org/) backend which
allows implementing G-API operations in the Halide language and
then generates functional Halide code for the portions of a G-API graph which
map well there.
## Running a graph with a Fluid backend {#gapi_anisotropic_fluid}
OpenCV 4.0 is bundled with two G-API backends -- the default "OpenCV"
which we just used, and a special "Fluid" backend.
Fluid backend reorganizes the execution to save memory and to achieve
near-perfect cache locality, implementing so-called "streaming" model
of execution.
In order to start using Fluid kernels, we first need to include the
appropriate header files (which are not included by default):
@snippet cpp/tutorial_code/gapi/porting_anisotropic_image_segmentation/porting_anisotropic_image_segmentation_gapi_fluid.cpp fluid_includes
Once these headers are included, we can form a new _kernel package_
and pass it to G-API:
@snippet cpp/tutorial_code/gapi/porting_anisotropic_image_segmentation/porting_anisotropic_image_segmentation_gapi_fluid.cpp kernel_pkg
In G-API, kernels (or operation implementations) are objects. Kernels are
organized into collections, or _kernel packages_, represented by the class
cv::GKernelPackage. The main purpose of a kernel package is to
capture which kernels we would like to use in our graph; it is then passed
as a _graph compilation option_:
@snippet cpp/tutorial_code/gapi/porting_anisotropic_image_segmentation/porting_anisotropic_image_segmentation_gapi_fluid.cpp kernel_pkg_use
Traditional OpenCV is logically divided into modules, with every
module providing a set of functions. In G-API, there are also
"modules" which are represented as kernel packages provided by a
particular backend. In this example, we pass Fluid kernel packages to
G-API to utilize appropriate Fluid functions in our graph.
Kernel packages are combinable -- in the above example, we take the "Core"
and "ImgProc" Fluid kernel packages and combine them into a single
one. See the documentation reference on cv::gapi::combine.
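A rough sketch of this step, reusing the names from the earlier sketch (header paths as in recent G-API versions; note that the combine() signature changed slightly across OpenCV releases):

```cpp
#include <opencv2/gapi/fluid/core.hpp>
#include <opencv2/gapi/fluid/imgproc.hpp>

// Combine the Fluid "Core" and "ImgProc" packages and pass the result as a compile option.
cv::GKernelPackage fluid_kernels = cv::gapi::combine(cv::gapi::core::fluid::kernels(),
                                                     cv::gapi::imgproc::fluid::kernels());
segm.apply(cv::gin(imgIn), cv::gout(imgOut, coherency, orientation),
           cv::compile_args(fluid_kernels));
```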
If no kernel packages are specified in the options, G-API uses the
_default_ package, which consists of default OpenCV implementations; thus,
G-API graphs are executed via OpenCV functions by default. The OpenCV
backend provides broader functional coverage than any other
backend. If a kernel package is specified, as in this example, it
is combined with the _default_ one,
meaning that user-specified implementations will replace the default
implementations in case of a conflict.
<!-- FIXME Document this process better as a part of regular -->
<!-- documentation, not a tutorial kind of thing -->
## Troubleshooting and customization {#gapi_anisotropic_trouble}
After the above modifications, the app should crash (in OpenCV 4.0)
with a message like this:
```
$ ./bin/example_tutorial_porting_anisotropic_image_segmentation_gapi_fluid
terminate called after throwing an instance of 'std::logic_error'
what(): .../modules/gapi/src/backends/fluid/gfluidimgproc.cpp:436: Assertion kernelSize.width == 3 && kernelSize.height == 3 in function run failed
Aborted (core dumped)
```
The Fluid backend has a number of limitations in OpenCV 4.0 (see this
[wiki page](https://github.com/opencv/opencv/wiki/Graph-API) for a
more up-to-date status). In particular, the Box filter used in this
sample supports only a static 3x3 kernel size.
We can easily work around this problem by keeping G-API from using the Fluid
version of the Box filter kernel in this sample. It can be done by
removing that kernel from the kernel package we've just
created:
@snippet cpp/tutorial_code/gapi/porting_anisotropic_image_segmentation/porting_anisotropic_image_segmentation_gapi_fluid.cpp kernel_hotfix
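For illustration, such a hotfix might look roughly like this (a sketch; the exact Fluid box-filter interface name is an assumption):

```cpp
// Drop the Box filter kernel from the Fluid package so that G-API falls back
// to the default (OpenCV) implementation for this single operation.
fluid_kernels.remove<cv::gapi::imgproc::GBoxFilter>();   // interface name is an assumption
```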
Now this kernel package doesn't contain _any_ implementation of the Box
filter kernel interface (specified as a template parameter). As
described above, G-API will fall back to OpenCV to run this kernel.
With this change, the resulting code now looks like:
@snippet cpp/tutorial_code/gapi/porting_anisotropic_image_segmentation/porting_anisotropic_image_segmentation_gapi_fluid.cpp kernel_pkg_proper
Let's examine the memory profile for this sample after switching to the
Fluid backend. Now it looks like this:
![Memory profile: G-API/Fluid port of Anisotropic Image Segmentation sample](pics/massif_export_gapi_fluid.png)
Now the tool reports 4.7 MiB -- and we just changed a few lines in our
code, without modifying the graph itself! This is a ~2.4x improvement over
the previous G-API result and a ~1.6x improvement over the original OpenCV
version.
Let's also examine how the internal representation of the graph looks
now. Dumping the graph to `.dot` results in a
visualization like this:
![Anisotropic image segmentation graph with OpenCV & Fluid kernels](pics/segm_fluid.gif)
This graph doesn't differ structurally from its previous version (in
terms of operations and data objects), though a changed layout (on the
left side of the dump) is easily noticeable.
The visualization reflects how G-API deals with mixed graphs, also
called _heterogeneous_ graphs. The majority of operations in this
graph are implemented with Fluid backend, but Box filters are executed
by the OpenCV backend. One can easily see that the graph is partitioned
(with rectangles). G-API groups connected operations based on their
affinity, forming _subgraphs_ (or _islands_ in G-API terminology), and
our top-level graph becomes a composition of multiple smaller
subgraphs. Every backend determines how its subgraph (island) is
executed, so the Fluid backend optimizes out memory where possible, while the
six intermediate buffers accessed by the OpenCV Box filters are allocated
in full and can't be optimized out.
<!-- TODO: add a chapter on custom kernels -->
<!-- TODO: make a full-fluid pipeline -->
<!-- TODO: talk about parallelism when it is available -->
# Conclusion {#gapi_tutor_conclusion}
This tutorial demonstrates what G-API is and what its key design
concepts are, how an algorithm can be ported to G-API, and
how to take advantage of the graph model after that.
In OpenCV 4.0, G-API is still in its inception stage -- it is more of a
foundation for all future work, though it is ready for use even now.
Further, this tutorial will be extended with new chapters on custom
kernel programming, parallelism, and more.

View File

@@ -1,442 +0,0 @@
# Implementing a face beautification algorithm with G-API {#tutorial_gapi_face_beautification}
@prev_tutorial{tutorial_gapi_anisotropic_segmentation}
[TOC]
# Introduction {#gapi_fb_intro}
In this tutorial you will learn:
* Basics of a sample face beautification algorithm;
* How to infer different networks inside a pipeline with G-API;
* How to run a G-API pipeline on a video stream.
## Prerequisites {#gapi_fb_prerec}
This sample requires:
- PC with GNU/Linux or Microsoft Windows (Apple macOS is supported but
was not tested);
- OpenCV 4.2 or later built with Intel® Distribution of [OpenVINO™
Toolkit](https://docs.openvinotoolkit.org/) (building with [Intel®
TBB](https://www.threadingbuildingblocks.org/intel-tbb-tutorial) is
a plus);
- The following topologies from OpenVINO™ Toolkit [Open Model
Zoo](https://github.com/opencv/open_model_zoo):
- `face-detection-adas-0001`;
- `facial-landmarks-35-adas-0002`.
## Face beautification algorithm {#gapi_fb_algorithm}
We will implement a simple face beautification algorithm using a
combination of modern Deep Learning techniques and traditional
Computer Vision. The general idea behind the algorithm is to make
face skin smoother while preserving face features like the eyes and the
mouth contrast. The algorithm identifies parts of the face using DNN
inference, applies different filters to the parts found, and then
combines them into the final result using basic image arithmetic:
\dot
strict digraph Pipeline {
node [shape=record fontname=Helvetica fontsize=10 style=filled color="#4c7aa4" fillcolor="#5b9bd5" fontcolor="white"];
edge [color="#62a8e7"];
ordering="out";
splines=ortho;
rankdir=LR;
input [label="Input"];
fd [label="Face\ndetector"];
bgMask [label="Generate\nBG mask"];
unshMask [label="Unsharp\nmask"];
bilFil [label="Bilateral\nfilter"];
shMask [label="Generate\nsharp mask"];
blMask [label="Generate\nblur mask"];
mul_1 [label="*" fontsize=24 shape=circle labelloc=b];
mul_2 [label="*" fontsize=24 shape=circle labelloc=b];
mul_3 [label="*" fontsize=24 shape=circle labelloc=b];
subgraph cluster_0 {
style=dashed
fontsize=10
ld [label="Landmarks\ndetector"];
label="for each face"
}
sum_1 [label="+" fontsize=24 shape=circle];
out [label="Output"];
temp_1 [style=invis shape=point width=0];
temp_2 [style=invis shape=point width=0];
temp_3 [style=invis shape=point width=0];
temp_4 [style=invis shape=point width=0];
temp_5 [style=invis shape=point width=0];
temp_6 [style=invis shape=point width=0];
temp_7 [style=invis shape=point width=0];
temp_8 [style=invis shape=point width=0];
temp_9 [style=invis shape=point width=0];
input -> temp_1 [arrowhead=none]
temp_1 -> fd -> ld
ld -> temp_4 [arrowhead=none]
temp_4 -> bgMask
bgMask -> mul_1 -> sum_1 -> out
temp_4 -> temp_5 -> temp_6 [arrowhead=none constraint=none]
ld -> temp_2 -> temp_3 [style=invis constraint=none]
temp_1 -> {unshMask, bilFil}
fd -> unshMask [style=invis constraint=none]
unshMask -> bilFil [style=invis constraint=none]
bgMask -> shMask [style=invis constraint=none]
shMask -> blMask [style=invis constraint=none]
mul_1 -> mul_2 [style=invis constraint=none]
temp_5 -> shMask -> mul_2
temp_6 -> blMask -> mul_3
unshMask -> temp_2 -> temp_5 [style=invis]
bilFil -> temp_3 -> temp_6 [style=invis]
mul_2 -> temp_7 [arrowhead=none]
mul_3 -> temp_8 [arrowhead=none]
temp_8 -> temp_7 [arrowhead=none constraint=none]
temp_7 -> sum_1 [constraint=none]
unshMask -> mul_2 [constraint=none]
bilFil -> mul_3 [constraint=none]
temp_1 -> mul_1 [constraint=none]
}
\enddot
Briefly, the algorithm is described as follows:
- Input image \f$I\f$ is passed to unsharp mask and bilateral filters
(\f$U\f$ and \f$L\f$ respectively);
- Input image \f$I\f$ is passed to an SSD-based face detector;
- SSD result (a \f$[1 \times 1 \times 200 \times 7]\f$ blob) is parsed
and converted to an array of faces;
- Every face is passed to a landmarks detector;
- Based on landmarks found for every face, three image masks are
generated:
- A background mask \f$b\f$ -- indicating which areas from the
original image to keep as-is;
- A face part mask \f$p\f$ -- identifying regions to preserve
(sharpen).
- A face skin mask \f$s\f$ -- identifying regions to blur;
- The final result \f$O\f$ is a composition of features above
calculated as \f$O = b*I + p*U + s*L\f$.
Generating face element masks based on a limited set of features (just
35 per face, including all its parts) is not trivial and is
described in the sections below.
# Constructing a G-API pipeline {#gapi_fb_pipeline}
## Declaring Deep Learning topologies {#gapi_fb_decl_nets}
This sample uses two DNN detectors. Each network takes one input
and produces one output. In G-API, networks are defined with the
G_API_NET() macro:
@snippet cpp/tutorial_code/gapi/face_beautification/face_beautification.cpp net_decl
To get more information, see
[Declaring Deep Learning topologies](@ref gapi_ifd_declaring_nets)
described in the "Face Analytics pipeline" tutorial.
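For reference, declaring the two networks of this sample might look roughly like this (a sketch; the type names and string tags are assumptions):

```cpp
#include <opencv2/gapi/infer.hpp>   // G_API_NET

G_API_NET(FaceDetector,      <cv::GMat(cv::GMat)>, "face_detector");
G_API_NET(LandmarksDetector, <cv::GMat(cv::GMat)>, "facial_landmarks_detector");
```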
## Describing the processing graph {#gapi_fb_ppline}
The code below generates a graph for the algorithm above:
@snippet cpp/tutorial_code/gapi/face_beautification/face_beautification.cpp ppl
The resulting graph is a mixture of G-API's standard operations,
user-defined operations (namespace `custom::`), and DNN inference.
The generic function `cv::gapi::infer<>()` allows triggering inference
within the pipeline; the networks to infer are specified as template
parameters. The sample code uses two versions of `cv::gapi::infer<>()`:
- A frame-oriented one, used to detect faces on the input frame.
- An ROI-list-oriented one, used to run landmarks inference on a
list of faces -- this version produces an array of landmarks for
every face.
More on this in "Face Analytics pipeline"
([Building a GComputation](@ref gapi_ifd_gcomputation) section).
## Unsharp mask in G-API {#gapi_fb_unsh}
The unsharp mask \f$U\f$ for image \f$I\f$ is defined as:
\f[U = I - s * L(M(I)),\f]
where \f$M()\f$ is a median filter, \f$L()\f$ is the Laplace operator,
and \f$s\f$ is a strength coefficient. While G-API doesn't provide
this function out-of-the-box, it is expressed naturally with the
existing G-API operations:
@snippet cpp/tutorial_code/gapi/face_beautification/face_beautification.cpp unsh
Note that the code snippet above is a regular C++ function defined
with G-API types. Users can write functions like this to simplify
graph construction; when called, such a function just adds the relevant
nodes to the pipeline it is used in.
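A minimal sketch of such a helper, following the formula above (the choice of operations and constants are assumptions; cv::gapi::Laplacian may be unavailable in older G-API versions, where a custom kernel can be used instead):

```cpp
// U = I - s * L(M(I)), written as a regular C++ function over G-API types.
inline cv::GMat unsharpMask(const cv::GMat &src, int sigma, double strength)
{
    cv::GMat blurred   = cv::gapi::medianBlur(src, sigma);
    cv::GMat laplacian = cv::gapi::Laplacian(blurred, CV_8U);
    return cv::gapi::sub(src, cv::gapi::mulC(laplacian, strength));
}
```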
# Custom operations {#gapi_fb_proc}
The face beautification graph uses custom operations
extensively. This chapter focuses on the most interesting kernels;
refer to [G-API Kernel API](@ref gapi_kernel_api) for general
information on defining operations and implementing kernels in G-API.
## Face detector post-processing {#gapi_fb_face_detect}
A face detector output is converted to an array of faces with the
following kernel:
@snippet cpp/tutorial_code/gapi/face_beautification/face_beautification.cpp vec_ROI
@snippet cpp/tutorial_code/gapi/face_beautification/face_beautification.cpp fd_pp
## Facial landmarks post-processing {#gapi_fb_landm_detect}
The algorithm infers locations of face elements (like the eyes, the mouth
and the head contour itself) using a generic facial landmarks detector
(<a href="https://github.com/opencv/open_model_zoo/blob/master/models/intel/facial-landmarks-35-adas-0002/description/facial-landmarks-35-adas-0002.md">details</a>)
from OpenVINO™ Open Model Zoo. However, the detected landmarks as-is are not
enough to generate masks --- this operation requires regions of interest on
the face represented by closed contours, so some interpolation is applied to
get them. This landmarks
processing and interpolation is performed by the following kernel:
@snippet cpp/tutorial_code/gapi/face_beautification/face_beautification.cpp ld_pp_cnts
The kernel takes two arrays of denormalized landmark coordinates and
returns an array of closed contours of face elements and an array of
closed contours of faces; in other words, the first output is an array of
contours of image areas to be sharpened and the second is an array of
contours of areas to be smoothed.
Here and below `Contour` is a vector of points.
### Getting an eye contour {#gapi_fb_ld_eye}
Eye contours are estimated with the following function:
@snippet cpp/tutorial_code/gapi/face_beautification/face_beautification.cpp ld_pp_incl
@snippet cpp/tutorial_code/gapi/face_beautification/face_beautification.cpp ld_pp_eye
Briefly, this function restores the bottom side of an eye by a
half-ellipse based on two points in the left and right eye
corners. In fact, `cv::ellipse2Poly()` is used to approximate the eye region, and
the function only defines the ellipse parameters based on just two points:
- The ellipse center and the \f$X\f$ half-axis calculated from the two eye points;
- The \f$Y\f$ half-axis calculated according to the assumption that an average
eye width is \f$1/3\f$ of its length;
- The start and the end angles which are 0 and 180 (refer to the
`cv::ellipse()` documentation);
- The angle delta: how many points to produce in the contour;
- The inclination angle of the axes.
The use of `atan2()` instead of just `atan()` in the function
`custom::getLineInclinationAngleDegrees()` is essential, as it returns a
value whose sign depends on the signs of `x` and `y`, so we
can get the right angle even if the face is upside-down
(provided the points are put in the right order, of course).
### Getting a forehead contour {#gapi_fb_ld_fhd}
The function approximates the forehead contour:
@snippet cpp/tutorial_code/gapi/face_beautification/face_beautification.cpp ld_pp_fhd
As we have only jaw points in our detected landmarks, we have to get a
half-ellipse based on three points of a jaw: the leftmost, the
rightmost and the lowest one. The jaw width is assumed to be equal to the
forehead width and the latter is calculated using the left and the
right points. Speaking of the \f$Y\f$ axis, we have no points to get
it directly, and instead assume that the forehead height is about \f$2/3\f$
of the jaw height, which can be figured out from the face center (the
middle between the left and right points) and the lowest jaw point.
## Drawing masks {#gapi_fb_masks_drw}
When we have all the contours needed, we are able to draw masks:
@snippet cpp/tutorial_code/gapi/face_beautification/face_beautification.cpp msk_ppline
The steps to get the masks are:
* the "sharp" mask calculation:
    * fill the contours that should be sharpened;
    * blur that to get the "sharp" mask (`mskSharpG`);
* the "bilateral" mask calculation:
    * fill all the face contours fully;
    * blur that;
    * subtract the areas which intersect with the "sharp" mask to get the
"bilateral" mask (`mskBlurFinal`);
* the background mask calculation (see the sketch below):
    * add the two previous masks;
    * set all non-zero pixels of the result to 255 (with `cv::gapi::threshold()`);
    * invert the output (with `cv::gapi::bitwise_not`) to get the background
mask (`mskNoFaces`).
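A rough sketch of the background-mask step (variable names follow the text above; the threshold arguments are assumptions):

```cpp
// Everything not covered by the "sharp" and "blur" masks becomes background.
cv::GMat mskFaces   = cv::gapi::add(mskSharpG, mskBlurFinal);
cv::GMat mskFacesB  = cv::gapi::threshold(mskFaces, cv::GScalar(0), cv::GScalar(255),
                                          cv::THRESH_BINARY);
cv::GMat mskNoFaces = cv::gapi::bitwise_not(mskFacesB);
```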
# Configuring and running the pipeline {#gapi_fb_comp_args}
Once the graph is fully expressed, we can finally compile it and run
it on real data. G-API graph compilation is the stage where the G-API
framework actually understands which kernels and networks to use. This
configuration happens via G-API compilation arguments.
## DNN parameters {#gapi_fb_comp_args_net}
This sample uses the OpenVINO™ Toolkit Inference Engine backend for DL
inference, which is configured as follows:
@snippet cpp/tutorial_code/gapi/face_beautification/face_beautification.cpp net_param
Every `cv::gapi::ie::Params<>` object is associated with the network
specified in its template argument. We should pass there the network
type we defined with `G_API_NET()` at the beginning of the
tutorial.
Network parameters are then wrapped in `cv::gapi::NetworkPackage`:
@snippet cpp/tutorial_code/gapi/face_beautification/face_beautification.cpp netw
More details in "Face Analytics Pipeline"
([Configuring the pipeline](@ref gapi_ifd_configuration) section).
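A rough sketch, reusing the hypothetical network types declared earlier (the model paths and device name are placeholders):

```cpp
#include <opencv2/gapi/infer/ie.hpp>   // cv::gapi::ie::Params

auto det_net = cv::gapi::ie::Params<FaceDetector> {
    "face-detection-adas-0001.xml",        // topology IR
    "face-detection-adas-0001.bin",        // weights
    "CPU"                                  // device
};
auto lm_net = cv::gapi::ie::Params<LandmarksDetector> {
    "facial-landmarks-35-adas-0002.xml",
    "facial-landmarks-35-adas-0002.bin",
    "CPU"
};
auto networks = cv::gapi::networks(det_net, lm_net);
```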
## Kernel packages {#gapi_fb_comp_args_kernels}
In this example we use a lot of custom kernels; in addition, we
use the Fluid backend to optimize out memory for G-API's standard kernels
where applicable. The resulting kernel package is formed like this:
@snippet cpp/tutorial_code/gapi/face_beautification/face_beautification.cpp kern_pass_1
## Compiling the streaming pipeline {#gapi_fb_compiling}
G-API optimizes execution for video streams when compiled in the
"Streaming" mode.
@snippet cpp/tutorial_code/gapi/face_beautification/face_beautification.cpp str_comp
More on this in "Face Analytics Pipeline"
([Configuring the pipeline](@ref gapi_ifd_configuration) section).
## Running the streaming pipeline {#gapi_fb_running}
In order to run the G-API streaming pipeline, all we need to do is
specify the input video source, call
`cv::GStreamingCompiled::start()`, and then fetch the pipeline
processing results:
@snippet cpp/tutorial_code/gapi/face_beautification/face_beautification.cpp str_src
@snippet cpp/tutorial_code/gapi/face_beautification/face_beautification.cpp str_loop
Once the results are ready and can be pulled from the pipeline, we display
them on the screen and handle GUI events.
See [Running the pipeline](@ref gapi_ifd_running) section
in the "Face Analytics Pipeline" tutorial for more details.
# Conclusion {#gapi_fb_cncl}
The tutorial has two goals: to show the use of brand new features of
G-API introduced in OpenCV 4.2, and to give a basic understanding of a
sample face beautification algorithm.
The result of the algorithm application:
![Face Beautification example](pics/example.jpg)
On the test machine (Intel® Core™ i7-8700) the G-API-optimized video
pipeline outperforms its serial (non-pipelined) version by a factor of
**2.7** -- meaning that for such a non-trivial graph, proper
pipelining can bring an almost 3x increase in performance.
<!---
The idea in general is to implement a real-time video stream processing that
detects faces and applies some filters to make them look beautiful (more or
less). The pipeline is the following:
Two topologies from OMZ have been used in this sample: the
<a href="https://github.com/opencv/open_model_zoo/tree/master/models/intel
/face-detection-adas-0001">face-detection-adas-0001</a>
and the
<a href="https://github.com/opencv/open_model_zoo/blob/master/models/intel
/facial-landmarks-35-adas-0002/description/facial-landmarks-35-adas-0002.md">
facial-landmarks-35-adas-0002</a>.
The face detector takes the input image and returns a blob with the shape
[1,1,200,7] after the inference (200 is the maximum number of
faces which can be detected).
In order to process every face individually, we need to convert this output to a
list of regions on the image.
The masks for different filters are built based on facial landmarks, which are
inferred for every face. The result of the inference
is a blob with 35 landmarks: the first 18 of them are facial elements
(eyes, eyebrows, a nose, a mouth) and the last 17 --- a jaw contour. Landmarks
are floating point values of coordinates normalized relatively to an input ROI
(not the original frame). In addition, for the further goals we need contours of
eyes, mouths, faces, etc., not the landmarks. So, post-processing of the Mat is
also required here. The process is split into two parts --- landmarks'
coordinates denormalization to the real pixel coordinates of the source frame
and getting necessary closed contours based on these coordinates.
The last step of processing the inference data is drawing masks using the
calculated contours. In this demo the contours don't need to be pixel accurate,
since masks are blurred with Gaussian filter anyway. Another point that should
be mentioned here is getting
three masks (for areas to be smoothed, for ones to be sharpened and for the
background) which have no intersections with each other; this approach allows to
apply the calculated masks to the corresponding images prepared beforehand and
then just to summarize them to get the output image without any other actions.
As we can see, this algorithm is appropriate to illustrate G-API usage
convenience and efficiency in the context of solving a real CV/DL problem.
(On detector post-proc)
Some points to be mentioned about this kernel implementation:
- It takes a `cv::Mat` from the detector and a `cv::Mat` from the input; it
returns an array of ROI's where faces have been detected.
- `cv::Mat` data parsing by the pointer on a float is used here.
- By far the most important thing here is solving an issue that sometimes
detector returns coordinates located outside of the image; if we pass such an
ROI to be processed, errors in the landmarks detection will occur. The frame box
`borders` is created and then intersected with the face rectangle
(by `operator&()`) to handle such cases and save the ROI which is for sure
inside the frame.
Data parsing after the facial landmarks detector happens according to the same
scheme with inconsiderable adjustments.
## Possible further improvements
There are some points in the algorithm to be improved.
### Correct ROI reshaping for meeting conditions required by the facial landmarks detector
The input of the facial landmarks detector is a square ROI, but the face
detector gives non-square rectangles in general. If we let the backend within
Inference-API compress the rectangle to a square by itself, the lack of
inference accuracy can be noticed in some cases.
There is a solution: we can give a describing square ROI instead of the
rectangular one to the landmarks detector, so there will be no need to compress
the ROI, which will lead to accuracy improvement.
Unfortunately, another problem occurs if we do that:
if the rectangular ROI is near the border, a describing square will probably go
out of the frame --- that leads to errors of the landmarks detector.
To avoid such a mistake, we have to implement an algorithm that, firstly,
describes every rectangle by a square, then counts the farthest coordinates
turned up to be outside of the frame and, finally, pads the source image by
borders (e.g. single-colored) with the size counted. It will be safe to take
square ROIs for the facial landmarks detector after that frame adjustment.
### Research for the best parameters (used in GaussianBlur() or unsharpMask(), etc.)
### Parameters autoscaling
-->


View File

@@ -1,355 +0,0 @@
# Face analytics pipeline with G-API {#tutorial_gapi_interactive_face_detection}
@next_tutorial{tutorial_gapi_anisotropic_segmentation}
[TOC]
# Overview {#gapi_ifd_intro}
In this tutorial you will learn:
* How to integrate Deep Learning inference in a G-API graph;
* How to run a G-API graph on a video stream and obtain data from it.
# Prerequisites {#gapi_ifd_prereq}
This sample requires:
- PC with GNU/Linux or Microsoft Windows (Apple macOS is supported but
was not tested);
- OpenCV 4.2 or later built with Intel® Distribution of [OpenVINO™
Toolkit](https://docs.openvinotoolkit.org/) (building with [Intel®
TBB](https://www.threadingbuildingblocks.org/intel-tbb-tutorial) is
a plus);
- The following topologies from OpenVINO™ Toolkit [Open Model
Zoo](https://github.com/opencv/open_model_zoo):
- `face-detection-adas-0001`;
- `age-gender-recognition-retail-0013`;
- `emotions-recognition-retail-0003`.
# Introduction: why G-API {#gapi_ifd_why}
Many computer vision algorithms run on a video stream rather than on
individual images. Stream processing usually consists of multiple
steps -- like decode, preprocessing, detection, tracking,
classification (on detected objects), and visualization -- forming a
*video processing pipeline*. Moreover, many steps of such a
pipeline can run in parallel -- modern platforms have different
hardware blocks on the same chip, like decoders and GPUs, and extra
accelerators can be plugged in as extensions, like the Intel® Movidius™
Neural Compute Stick for deep learning offload.
Given this manifold of options and the variety of video analytics
algorithms, managing such pipelines effectively quickly becomes a
problem. It can certainly be done manually, but this approach doesn't
scale: if a change is required in the algorithm (e.g. a new pipeline
step is added), or if it is ported to a new platform with different
capabilities, the whole pipeline needs to be re-optimized.
Starting with version 4.2, OpenCV offers a solution to this
problem. OpenCV G-API can now manage Deep Learning inference (a
cornerstone of any modern analytics pipeline) together with traditional
Computer Vision as well as video capturing/decoding, all in a single
pipeline. G-API takes care of pipelining itself -- so if the algorithm
or platform changes, the execution model adapts to it automatically.
# Pipeline overview {#gapi_ifd_overview}
Our sample application is based on the ["Interactive Face Detection"] demo
from OpenVINO™ Toolkit Open Model Zoo. A simplified pipeline consists
of the following steps:
1. Image acquisition and decode;
2. Detection with preprocessing;
3. Classification of every detected object with two networks
(with preprocessing);
4. Visualization.
\dot
digraph pipeline {
node [shape=record fontname=Helvetica fontsize=10 style=filled color="#4c7aa4" fillcolor="#5b9bd5" fontcolor="white"];
edge [color="#62a8e7"];
splines=ortho;
rankdir = LR;
subgraph cluster_0 {
color=invis;
capture [label="Capture\nDecode"];
resize [label="Resize\nConvert"];
detect [label="Detect faces"];
capture -> resize -> detect
}
subgraph cluster_1 {
graph[style=dashed];
subgraph cluster_2 {
color=invis;
temp_4 [style=invis shape=point width=0];
postproc_1 [label="Crop\nResize\nConvert"];
age_gender [label="Classify\nAge/gender"];
postproc_1 -> age_gender [constraint=true]
temp_4 -> postproc_1 [constraint=none]
}
subgraph cluster_3 {
color=invis;
postproc_2 [label="Crop\nResize\nConvert"];
emo [label="Classify\nEmotions"];
postproc_2 -> emo [constraint=true]
}
label="(for each face)";
}
temp_1 [style=invis shape=point width=0];
temp_2 [style=invis shape=point width=0];
detect -> temp_1 [arrowhead=none]
temp_1 -> postproc_1
capture -> {temp_4, temp_2} [arrowhead=none constraint=false]
temp_2 -> postproc_2
temp_1 -> temp_2 [arrowhead=none constraint=false]
temp_3 [style=invis shape=point width=0];
show [label="Visualize\nDisplay"];
{age_gender, emo} -> temp_3 [arrowhead=none]
temp_3 -> show
}
\enddot
# Constructing a pipeline {#gapi_ifd_constructing}
Constructing a G-API graph for a video streaming case does not differ
much from the [regular usage](@ref gapi_example) of G-API -- it is still
about defining graph *data* (with cv::GMat, cv::GScalar, and
cv::GArray) and *operations* over it. Inference also becomes an
operation in the graph, but it is defined in a slightly different way.
## Declaring Deep Learning topologies {#gapi_ifd_declaring_nets}
In contrast with traditional CV functions (see [core] and [imgproc]),
where G-API declares a distinct operation for every function, inference
in G-API is a single generic operation, cv::gapi::infer<>. As usual, it
is just an interface which can be implemented in a number of ways under
the hood. In OpenCV 4.2, only the OpenVINO™ Inference Engine-based backend
is available; OpenCV's own DNN module-based backend is yet to come.
cv::gapi::infer<> is _parametrized_ by the details of the topology we are
going to execute. Like operations, topologies in G-API are strongly
typed and are defined with the special macro G_API_NET():
@snippet cpp/tutorial_code/gapi/age_gender_emotion_recognition/age_gender_emotion_recognition.cpp G_API_NET
Similar to how operations are defined with G_API_OP(), a network
description requires three parameters:
1. A type name. Every defined topology is declared as a distinct C++
type which is used further in the program -- see below;
2. A `std::function<>`-like API signature. G-API treats networks as
regular "functions" which take and return data. Here network
`Faces` (a detector) takes a cv::GMat and returns a cv::GMat, while
network `AgeGender` is known to provide two outputs (age and gender
blobs, respectively) -- so it has a `std::tuple<>` as its return
type.
3. A topology name -- can be any non-empty string; G-API uses
these names to distinguish networks internally. Names should be unique
in the scope of a single graph.
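For illustration, these declarations might look roughly like this (a sketch; the string tags are assumptions):

```cpp
G_API_NET(Faces, <cv::GMat(cv::GMat)>, "face-detector");

using AGInfo = std::tuple<cv::GMat, cv::GMat>;          // age + gender blobs
G_API_NET(AgeGender, <AGInfo(cv::GMat)>, "age-gender-recognition");

G_API_NET(Emotions, <cv::GMat(cv::GMat)>, "emotions-recognition");
```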
## Building a GComputation {#gapi_ifd_gcomputation}
Now the above pipeline is expressed in G-API like this:
@snippet cpp/tutorial_code/gapi/age_gender_emotion_recognition/age_gender_emotion_recognition.cpp GComputation
Every pipeline starts with declaring empty data objects -- which act
as inputs to the pipeline. Then we call the generic cv::gapi::infer<>
specialized to the `Faces` detection network. cv::gapi::infer<> inherits its
signature from its template parameter -- and in this case it expects
one input cv::GMat and produces one output cv::GMat.
In this sample we use a pre-trained SSD-based network and its output
needs to be parsed to an array of detections (object regions of
interest, ROIs). It is done by a custom operation `custom::PostProc`,
which returns an array of rectangles (of type `cv::GArray<cv::Rect>`)
back to the pipeline. This operation also filters out results by a
confidence threshold -- and these details are hidden in the kernel
itself. Still, at the moment of graph construction we operate with
interfaces only and don't need actual kernels to express the pipeline
-- so the implementation of this post-processing will be listed later.
After the detection output is parsed into an array of objects, we can run
classification on any of them. G-API doesn't support syntax for
in-graph loops like `for_each()` yet, but instead cv::gapi::infer<>
comes with a special list-oriented overload.
The user can call cv::gapi::infer<> with a cv::GArray as the first
argument; G-API then assumes it needs to run the associated network
on every rectangle from the given list over the given frame (the second
argument). The result of such an operation is also a list -- a cv::GArray of
cv::GMat.
Since the `AgeGender` network itself produces two outputs, its output
type for the list-based version of cv::gapi::infer is a tuple of
arrays. We use `std::tie()` to decompose this output into two distinct
objects.
The `Emotions` network produces a single output, so its list-based
inference return type is `cv::GArray<cv::GMat>`.
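Putting it together, the construction code might look roughly like this (a sketch; `custom::PostProc` is the user-defined operation mentioned above, and its exact signature is an assumption):

```cpp
cv::GMat in;
cv::GMat detections        = cv::gapi::infer<Faces>(in);             // frame-oriented inference
cv::GArray<cv::Rect> faces = custom::PostProc::on(detections, in);   // SSD output -> list of ROIs

cv::GArray<cv::GMat> ages, genders;
std::tie(ages, genders)       = cv::gapi::infer<AgeGender>(faces, in);  // ROI-list inference
cv::GArray<cv::GMat> emotions = cv::gapi::infer<Emotions>(faces, in);

cv::GComputation graph(cv::GIn(in), cv::GOut(faces, ages, genders, emotions));
```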
# Configuring the pipeline {#gapi_ifd_configuration}
G-API strictly separates construction from configuration -- with the
idea of keeping the algorithm code itself platform-neutral. In the above
listings we only declared our operations and expressed the overall
data flow, but didn't even mention that we use OpenVINO™. We only
described *what* we do, but not *how* we do it. Keeping these two
aspects clearly separated is a design goal of G-API.
Platform-specific details arise when the pipeline is *compiled* --
i.e. turned from a declarative into an executable form. The way *how*
to run things is specified via compilation arguments, and the new
inference/streaming features are no exception to this rule.
G-API is built on backends which implement interfaces (see
[Architecture] and [Kernels] for details) -- thus cv::gapi::infer<> is
a function which can be implemented by different backends. In OpenCV
4.2, only the OpenVINO™ Inference Engine backend is
available for inference. Every inference backend in G-API has to provide a special
parameterizable structure to express *backend-specific* neural network
parameters -- and in this case, it is cv::gapi::ie::Params:
@snippet cpp/tutorial_code/gapi/age_gender_emotion_recognition/age_gender_emotion_recognition.cpp Param_Cfg
Here we define three parameter objects: `det_net`, `age_net`, and
`emo_net`. Each object is a cv::gapi::ie::Params structure
parametrization for a particular network we use. At the compilation
stage, G-API automatically matches network parameters with their
cv::gapi::infer<> calls in the graph using this information.
Regardless of the topology, every parameter structure is constructed
with three string arguments -- specific to the OpenVINO™ Inference
Engine:
1. Path to the topology's intermediate representation (.xml file);
2. Path to the topology's model weights (.bin file);
3. Device where to run -- "CPU", "GPU", and others -- based on your
OpenVINO™ Toolkit installation.
These arguments are taken from the command-line parser.
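A rough sketch of these definitions (model file names are taken from the prerequisites above; the devices are placeholders, while the real sample reads all three strings from the command line):

```cpp
auto det_net = cv::gapi::ie::Params<Faces> {
    "face-detection-adas-0001.xml",             // path to the topology IR
    "face-detection-adas-0001.bin",             // path to the weights
    "CPU"                                       // device to run on
};
auto age_net = cv::gapi::ie::Params<AgeGender> {
    "age-gender-recognition-retail-0013.xml",
    "age-gender-recognition-retail-0013.bin",
    "CPU"
};
auto emo_net = cv::gapi::ie::Params<Emotions> {
    "emotions-recognition-retail-0003.xml",
    "emotions-recognition-retail-0003.bin",
    "CPU"
};
```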
Once networks are defined and custom kernels are implemented, the
pipeline is compiled for streaming:
@snippet cpp/tutorial_code/gapi/age_gender_emotion_recognition/age_gender_emotion_recognition.cpp Compile
cv::GComputation::compileStreaming() triggers a special video-oriented
form of graph compilation where G-API tries to optimize
throughput. The result of this compilation is an object of the special type
cv::GStreamingCompiled -- in contrast to a traditional callable
cv::GCompiled, these objects are closer to media players in their
semantics.
@note There is no need to pass metadata arguments describing the
format of the input video stream to
cv::GComputation::compileStreaming() -- G-API automatically figures out
the formats of the input vector and adjusts the pipeline to
these formats on the fly. The user can still pass metadata there, as with
the regular cv::GComputation::compile(), in order to fix the pipeline to
a specific input format.
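Concretely, this compilation step might look roughly like this (a sketch assuming the parameter objects above; `custom::OCVPostProc` is a hypothetical kernel implementation name):

```cpp
cv::GStreamingCompiled pipeline = graph.compileStreaming(
    cv::compile_args(cv::gapi::networks(det_net, age_net, emo_net),
                     cv::gapi::kernels<custom::OCVPostProc>()));
```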
# Running the pipeline {#gapi_ifd_running}
Pipelining optimization is based on processing multiple input video
frames simultaneously, running different steps of the pipeline in
parallel. This is why it works best when the framework takes full
control over the video stream.
The idea behind the streaming API is that the user specifies an *input source*
for the pipeline, and G-API then manages its execution automatically
until the source ends or the user interrupts the execution. G-API pulls
new image data from the source and passes it to the pipeline for
processing.
Streaming sources are represented by the interface
cv::gapi::wip::IStreamSource. Objects implementing this interface may
be passed to `GStreamingCompiled` as regular inputs via the `cv::gin()`
helper function. In OpenCV 4.2, only one streaming source is allowed
per pipeline -- this restriction will be relaxed in the future.
OpenCV comes with the well-known class cv::VideoCapture, and by default G-API
ships with a stream source class based on it --
cv::gapi::wip::GCaptureSource. Users can implement their own
streaming sources, e.g. using [VAAPI] or other media or networking
APIs.
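A sketch of constructing such a source and attaching it to the compiled pipeline (the `input_path` variable is a placeholder):

```cpp
#include <opencv2/gapi/streaming/cap.hpp>   // cv::gapi::wip::GCaptureSource

auto src = cv::gapi::wip::make_src<cv::gapi::wip::GCaptureSource>(input_path);
pipeline.setSource(cv::gin(src));
```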
The sample application specifies its input source as follows:
@snippet cpp/tutorial_code/gapi/age_gender_emotion_recognition/age_gender_emotion_recognition.cpp Source
Please note that a GComputation may still have multiple inputs like
cv::GMat, cv::GScalar, or cv::GArray objects. User can pass their
respective host-side types (cv::Mat, cv::Scalar, std::vector<>) in the
input vector as well, but in Streaming mode these objects will create
"endless" constant streams. Mixing a real video source stream and a
const data stream is allowed.
Running a pipeline is easy -- just call
cv::GStreamingCompiled::start() and fetch your data with blocking
cv::GStreamingCompiled::pull() or non-blocking
cv::GStreamingCompiled::try_pull(); repeat until the stream ends:
@snippet cpp/tutorial_code/gapi/age_gender_emotion_recognition/age_gender_emotion_recognition.cpp Run
The above code may look complex but in fact it handles two modes --
with and without graphical user interface (GUI):
- When a sample is running in a "headless" mode (`--pure` option is
set), this code simply pulls data from the pipeline with the
blocking `pull()` until it ends. This is the most performant mode of
execution.
- When results are also displayed on the screen, the Window System
needs to take some time to refresh the window contents and handle
GUI events. In this case, the demo pulls data with a non-blocking
`try_pull()` until there is no more data available (this does not
mark the end of the stream -- it just means new data is not ready yet), and
only then displays the latest obtained result and refreshes the
screen. Reducing the time spent in the GUI with this trick increases the
overall performance a little bit.
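In its simplest, headless form the loop might look roughly like this (a sketch; the output list must match the graph's GOut() -- here the one sketched earlier):

```cpp
pipeline.start();

std::vector<cv::Rect> faces;
std::vector<cv::Mat>  ages, genders, emotions;
while (pipeline.pull(cv::gout(faces, ages, genders, emotions))) {
    // ... consume the results of the current frame ...
}
```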
# Comparison with serial mode {#gapi_ifd_comparison}
The sample can also run in a serial mode for reference and
benchmarking purposes. In this case, the regular
cv::GComputation::compile() is used and a regular single-frame
cv::GCompiled object is produced; the pipelining optimization is not
applied within G-API; it is the user's responsibility to acquire image
frames from a cv::VideoCapture object and pass them to G-API.
@snippet cpp/tutorial_code/gapi/age_gender_emotion_recognition/age_gender_emotion_recognition.cpp Run_Serial
On a test machine (Intel® Core™ i5-6600), with OpenCV built with
[Intel® TBB]
support, the detector network assigned to the CPU, and the classifiers to the iGPU,
the pipelined sample outperforms the serial one by a factor of
1.36x (thus adding +36% in overall throughput).
# Conclusion {#gapi_ifd_conclusion}
G-API introduces a technological way to build and optimize hybrid
pipelines. Switching to a new execution model does not require changes
in the algorithm code expressed with G-API -- only the way the graph
is triggered differs.
# Listing: post-processing kernel {#gapi_ifd_pp}
G-API gives an easy way to plug custom code into the pipeline even if
it is running in streaming mode and processing tensor
data. Inference results are represented by multi-dimensional cv::Mat
objects, so accessing them is as easy as with the regular DNN module.
The OpenCV-based SSD post-processing kernel is defined and implemented in this
sample as follows:
@snippet cpp/tutorial_code/gapi/age_gender_emotion_recognition/age_gender_emotion_recognition.cpp Postproc
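In spirit, such a pair of an operation and its OpenCV-backed kernel might look roughly like this (a sketch; the operation id, threshold, and blob-parsing details are assumptions rather than the sample's actual code):

```cpp
#include <opencv2/gapi/gkernel.hpp>        // G_API_OP
#include <opencv2/gapi/cpu/gcpukernel.hpp> // GAPI_OCV_KERNEL

// Sketch: an SSD post-processing operation and its OpenCV (CPU) implementation.
G_API_OP(PostProc, <cv::GArray<cv::Rect>(cv::GMat, cv::GMat)>, "sample.custom.fd_postproc") {
    static cv::GArrayDesc outMeta(const cv::GMatDesc &, const cv::GMatDesc &) {
        return cv::empty_array_desc();
    }
};

GAPI_OCV_KERNEL(OCVPostProc, PostProc) {
    static void run(const cv::Mat &in_ssd_result, const cv::Mat &in_frame,
                    std::vector<cv::Rect> &out_faces) {
        out_faces.clear();
        // SSD output is a [1 x 1 x N x 7] blob: {image_id, label, conf, x_min, y_min, x_max, y_max}
        const float   *data = in_ssd_result.ptr<float>();
        const int      num  = in_ssd_result.size[2];
        const cv::Rect surface(0, 0, in_frame.cols, in_frame.rows);
        for (int i = 0; i < num; i++) {
            const float conf = data[i*7 + 2];
            if (conf < 0.5f) continue;                    // threshold value is an assumption
            cv::Rect rc;
            rc.x      = static_cast<int>(data[i*7 + 3] * in_frame.cols);
            rc.y      = static_cast<int>(data[i*7 + 4] * in_frame.rows);
            rc.width  = static_cast<int>(data[i*7 + 5] * in_frame.cols) - rc.x;
            rc.height = static_cast<int>(data[i*7 + 6] * in_frame.rows) - rc.y;
            out_faces.push_back(rc & surface);            // clip the box to the frame
        }
    }
};
```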
["Interactive Face Detection"]: https://github.com/opencv/open_model_zoo/tree/master/demos/interactive_face_detection_demo
[core]: @ref gapi_core
[imgproc]: @ref gapi_imgproc
[Architecture]: @ref gapi_hld
[Kernels]: @ref gapi_kernel_api
[VAAPI]: https://01.org/vaapi

View File

@@ -1,26 +0,0 @@
Using DepthAI Hardware / OAK depth sensors {#tutorial_gapi_oak_devices}
=======================================================================
@tableofcontents
@prev_tutorial{tutorial_gapi_face_beautification}
![Oak-D and Oak-D-Light cameras](pics/oak.jpg)
Depth sensors compatible with the Luxonis DepthAI library are supported through the OpenCV Graph API (G-API) module. RGB images and some other output formats can be retrieved using the familiar G-API interface.
In order to use a DepthAI sensor with OpenCV, you should do the following preliminary steps:
-# Install the Luxonis DepthAI library [depthai-core](https://github.com/luxonis/depthai-core).
-# Configure OpenCV with DepthAI library support by setting the `WITH_OAK` flag in CMake. If the DepthAI library is found in the install folders, OpenCV will be built with depthai-core (see the `WITH_OAK` status in the CMake log).
-# Build OpenCV.
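For example, the configuration and build steps might look like this (paths are placeholders):
```
$ cmake -DWITH_OAK=ON <path-to-opencv-source>
$ cmake --build .
```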
Source code
-----------
You can find the source code showing how to process heterogeneous graphs in `modules/gapi/samples/oak_basic_infer.cpp` of the OpenCV source tree.
@add_toggle_cpp
@include modules/gapi/samples/oak_basic_infer.cpp
@end_toggle


View File

@@ -1,53 +0,0 @@
# Graph API (gapi module) {#tutorial_table_of_content_gapi}
In this section you will learn about graph-based image processing and
how the G-API module can be used for that.
- @subpage tutorial_gapi_interactive_face_detection
*Languages:* C++
*Compatibility:* \> OpenCV 4.2
*Author:* Dmitry Matveev
This tutorial illustrates how to build a hybrid video processing
pipeline with G-API where Deep Learning and image processing are
combined effectively to maximize the overall throughput. This
sample requires Intel® distribution of OpenVINO™ Toolkit version
2019R2 or later.
- @subpage tutorial_gapi_anisotropic_segmentation
*Languages:* C++
*Compatibility:* \> OpenCV 4.0
*Author:* Dmitry Matveev
This is an end-to-end tutorial where an existing sample algorithm
is ported to G-API, covering the basic intuition behind this
transition process, and examining the benefits which a graph model
brings there.
- @subpage tutorial_gapi_face_beautification
*Languages:* C++
*Compatibility:* \> OpenCV 4.2
*Author:* Orest Chura
In this tutorial we build a complex hybrid Computer Vision/Deep
Learning video processing pipeline with G-API.
- @subpage tutorial_gapi_oak_devices
*Languages:* C++
*Compatibility:* \> OpenCV 4.6
*Author:* Alessandro de Oliveira Faria (A.K.A. CABELO)
In this tutorial we show how to use the Luxonis DepthAI library with G-API.

View File

@ -301,26 +301,6 @@ Some external dependencies can be detached into a dynamic library, which will be
| OPENCV_TEST_CAMERA_%d_FPS | num | | fps to set for N-th camera (0-based index) (waitAny_V4L test) |
## gapi
| name | type | default | description |
|------|------|---------|-------------|
| ⭐ GRAPH_DUMP_PATH | file path | | dump graph (dot format) |
| PIPELINE_MODELS_PATH | dir path | | pipeline_modeling_tool sample application uses this var |
| OPENCV_GAPI_INFERENCE_ENGINE_CORE_LIFETIME_WORKAROUND | bool | true (Windows, Apple), false (others) | similar to OPENCV_DNN_INFERENCE_ENGINE_CORE_LIFETIME_WORKAROUND |
### gapi tests/samples
| name | type | default | description |
|------|------|---------|-------------|
| PLAIDML_DEVICE | string | | specific to PlaidML backend test |
| PLAIDML_TARGET | string | | specific to PlaidML backend test |
| OPENCV_GAPI_ONNX_MODEL_PATH | dir path | | search location for ONNX models test |
| OPENCV_TEST_FREETYPE_FONT_PATH | file path | | location of TrueType font for one of tests |
### Links:
* https://github.com/opencv/opencv/wiki/Using-G-API-with-OpenVINO-Toolkit
* https://github.com/opencv/opencv/wiki/Using-G-API-with-MS-ONNX-Runtime
## highgui
| name | type | default | description |

View File

@ -9,7 +9,6 @@ OpenCV Tutorials {#tutorial_root}
- @subpage tutorial_table_of_content_objdetect - INSERT OBJDETECT MODULE INFO
- @subpage tutorial_table_of_content_features - feature detectors, descriptors and matching framework
- @subpage tutorial_table_of_content_dnn - infer neural networks using built-in _dnn_ module
- @subpage tutorial_table_of_content_gapi - graph-based approach to computer vision algorithms building
- @subpage tutorial_table_of_content_other - other modules (stitching, video, photo)
- @subpage tutorial_table_of_content_ios - running OpenCV on an iDevice
- @subpage tutorial_table_of_content_3d - 3d objects processing and visualisation

View File

@ -1,440 +0,0 @@
# FIXME: Rework standalone build in a more generic manner
# (Restructure directories, add common pass, etc)
if(NOT DEFINED OPENCV_INITIAL_PASS)
cmake_minimum_required(VERSION 3.13)
project(gapi_standalone)
include("cmake/standalone.cmake")
return()
endif()
if(NOT TARGET ade)
# can't build G-API because of the above reasons
ocv_module_disable(gapi)
return()
endif()
if(TARGET ocv.3rdparty.openvino)
# TODO: remove OPENCV_GAPI_INF_ENGINE option
set(initial_value ON)
if(DEFINED OPENCV_GAPI_INF_ENGINE)
set(initial_value ${OPENCV_GAPI_INF_ENGINE})
message(WARNING "OPENCV_GAPI_INF_ENGINE option is deprecated. Use OPENCV_GAPI_WITH_OPENVINO option instead.")
endif()
ocv_option(OPENCV_GAPI_WITH_OPENVINO "G-API: Enable OpenVINO Toolkit support" ${initial_value})
endif()
set(the_description "OpenCV G-API Core Module")
ocv_add_module(gapi
REQUIRED
opencv_imgproc
OPTIONAL
opencv_video opencv_stereo
WRAP
python
)
if(MSVC)
if(MSVC_VERSION LESS 1910)
# Disable obsolete warning C4503 popping up on MSVC older than VS 15 2017
# https://docs.microsoft.com/en-us/cpp/error-messages/compiler-warnings/compiler-warning-level-1-c4503?view=vs-2019
# and IE deprecated code warning C4996
ocv_warnings_disable(CMAKE_CXX_FLAGS /wd4503 /wd4996)
endif()
if((MSVC_VERSION LESS 1920) OR ARM OR AARCH64) # MSVS 2015/2017 on x86 and ARM
ocv_warnings_disable(CMAKE_CXX_FLAGS /wd4702) # 'unreachable code'
endif()
endif()
file(GLOB gapi_ext_hdrs
"${CMAKE_CURRENT_LIST_DIR}/include/opencv2/*.hpp"
"${CMAKE_CURRENT_LIST_DIR}/include/opencv2/${name}/*.hpp"
"${CMAKE_CURRENT_LIST_DIR}/include/opencv2/${name}/*.h"
"${CMAKE_CURRENT_LIST_DIR}/include/opencv2/${name}/cpu/*.hpp"
"${CMAKE_CURRENT_LIST_DIR}/include/opencv2/${name}/fluid/*.hpp"
"${CMAKE_CURRENT_LIST_DIR}/include/opencv2/${name}/gpu/*.hpp"
"${CMAKE_CURRENT_LIST_DIR}/include/opencv2/${name}/infer/*.hpp"
"${CMAKE_CURRENT_LIST_DIR}/include/opencv2/${name}/oak/*.hpp"
"${CMAKE_CURRENT_LIST_DIR}/include/opencv2/${name}/ocl/*.hpp"
"${CMAKE_CURRENT_LIST_DIR}/include/opencv2/${name}/own/*.hpp"
"${CMAKE_CURRENT_LIST_DIR}/include/opencv2/${name}/plaidml/*.hpp"
"${CMAKE_CURRENT_LIST_DIR}/include/opencv2/${name}/python/*.hpp"
"${CMAKE_CURRENT_LIST_DIR}/include/opencv2/${name}/render/*.hpp"
"${CMAKE_CURRENT_LIST_DIR}/include/opencv2/${name}/s11n/*.hpp"
"${CMAKE_CURRENT_LIST_DIR}/include/opencv2/${name}/streaming/*.hpp"
"${CMAKE_CURRENT_LIST_DIR}/include/opencv2/${name}/streaming/gstreamer/*.hpp"
"${CMAKE_CURRENT_LIST_DIR}/include/opencv2/${name}/streaming/onevpl/*.hpp"
"${CMAKE_CURRENT_LIST_DIR}/include/opencv2/${name}/plaidml/*.hpp"
"${CMAKE_CURRENT_LIST_DIR}/include/opencv2/${name}/util/*.hpp"
)
set(gapi_srcs
# Front-end part
src/api/grunarg.cpp
src/api/gorigin.cpp
src/api/gmat.cpp
src/api/garray.cpp
src/api/gopaque.cpp
src/api/gscalar.cpp
src/api/gframe.cpp
src/api/gkernel.cpp
src/api/gbackend.cpp
src/api/gcommon.cpp
src/api/gproto.cpp
src/api/gnode.cpp
src/api/gcall.cpp
src/api/gcomputation.cpp
src/api/operators.cpp
src/api/kernels_core.cpp
src/api/kernels_imgproc.cpp
src/api/kernels_video.cpp
src/api/kernels_nnparsers.cpp
src/api/kernels_ot.cpp
src/api/kernels_streaming.cpp
src/api/kernels_stereo.cpp
src/api/render.cpp
src/api/render_ocv.cpp
src/api/ginfer.cpp
src/api/media.cpp
src/api/rmat.cpp
# Compiler part
src/compiler/gmodel.cpp
src/compiler/gmodelbuilder.cpp
src/compiler/gislandmodel.cpp
src/compiler/gcompiler.cpp
src/compiler/gcompiled.cpp
src/compiler/gstreaming.cpp
src/compiler/passes/helpers.cpp
src/compiler/passes/dump_dot.cpp
src/compiler/passes/islands.cpp
src/compiler/passes/meta.cpp
src/compiler/passes/kernels.cpp
src/compiler/passes/exec.cpp
src/compiler/passes/transformations.cpp
src/compiler/passes/pattern_matching.cpp
src/compiler/passes/perform_substitution.cpp
src/compiler/passes/streaming.cpp
src/compiler/passes/intrin.cpp
# Executor
src/executor/gabstractexecutor.cpp
src/executor/gabstractstreamingexecutor.cpp
src/executor/gexecutor.cpp
src/executor/gtbbexecutor.cpp
src/executor/gthreadedexecutor.cpp
src/executor/gstreamingexecutor.cpp
src/executor/gasync.cpp
src/executor/thread_pool.cpp
# CPU Backend (currently built-in)
src/backends/cpu/gcpubackend.cpp
src/backends/cpu/gcpukernel.cpp
src/backends/cpu/gcpuimgproc.cpp
src/backends/cpu/gcpustereo.cpp
src/backends/cpu/gcpuvideo.cpp
src/backends/cpu/gcpucore.cpp
src/backends/cpu/gcpuot.cpp
src/backends/cpu/gnnparsers.cpp
# Fluid Backend (also built-in, FIXME:move away)
src/backends/fluid/gfluidbuffer.cpp
src/backends/fluid/gfluidbackend.cpp
src/backends/fluid/gfluidimgproc.cpp
src/backends/fluid/gfluidimgproc_func.dispatch.cpp
src/backends/fluid/gfluidcore.cpp
src/backends/fluid/gfluidcore_func.dispatch.cpp
# OAK Backend (optional)
src/backends/oak/goak.cpp
src/backends/oak/goakbackend.cpp
src/backends/oak/goak_memory_adapters.cpp
# OCL Backend (currently built-in)
src/backends/ocl/goclbackend.cpp
src/backends/ocl/goclkernel.cpp
src/backends/ocl/goclimgproc.cpp
src/backends/ocl/goclcore.cpp
# IE Backend. FIXME: should be included by CMake
# if and only if IE support is enabled
src/backends/ie/giebackend.cpp
src/backends/ie/giebackend/giewrapper.cpp
# OV Backend. FIXME: should be included by CMake
# if and only if OV support is enabled
src/backends/ov/govbackend.cpp
# ONNX backend
src/backends/onnx/gonnxbackend.cpp
src/backends/onnx/dml_ep.cpp
src/backends/onnx/coreml_ep.cpp
# Render backend
src/backends/render/grenderocv.cpp
src/backends/render/ft_render.cpp
# PlaidML Backend
src/backends/plaidml/gplaidmlcore.cpp
src/backends/plaidml/gplaidmlbackend.cpp
# Common backend code
src/backends/common/gmetabackend.cpp
src/backends/common/gcompoundbackend.cpp
src/backends/common/gcompoundkernel.cpp
# Serialization API and routines
src/api/s11n.cpp
src/backends/common/serialization.cpp
# Streaming backend
src/backends/streaming/gstreamingbackend.cpp
# Python bridge
src/backends/ie/bindings_ie.cpp
src/backends/onnx/bindings_onnx.cpp
src/backends/ov/bindings_ov.cpp
src/backends/python/gpythonbackend.cpp
# Queue Streaming source
src/streaming/queue_source.cpp
# OpenVPL Streaming source
src/streaming/onevpl/source.cpp
src/streaming/onevpl/source_priv.cpp
src/streaming/onevpl/file_data_provider.cpp
src/streaming/onevpl/cfg_params.cpp
src/streaming/onevpl/cfg_params_parser.cpp
src/streaming/onevpl/utils.cpp
src/streaming/onevpl/default.cpp
src/streaming/onevpl/data_provider_interface_exception.cpp
src/streaming/onevpl/accelerators/surface/base_frame_adapter.cpp
src/streaming/onevpl/accelerators/surface/cpu_frame_adapter.cpp
src/streaming/onevpl/accelerators/surface/dx11_frame_adapter.cpp
src/streaming/onevpl/accelerators/surface/surface.cpp
src/streaming/onevpl/accelerators/surface/surface_pool.cpp
src/streaming/onevpl/accelerators/utils/shared_lock.cpp
src/streaming/onevpl/accelerators/accel_policy_cpu.cpp
src/streaming/onevpl/accelerators/accel_policy_dx11.cpp
src/streaming/onevpl/accelerators/accel_policy_va_api.cpp
src/streaming/onevpl/accelerators/dx11_alloc_resource.cpp
src/streaming/onevpl/engine/engine_session.cpp
src/streaming/onevpl/engine/processing_engine_base.cpp
src/streaming/onevpl/engine/decode/decode_engine_legacy.cpp
src/streaming/onevpl/engine/decode/decode_session.cpp
src/streaming/onevpl/engine/transcode/transcode_engine_legacy.cpp
src/streaming/onevpl/engine/transcode/transcode_session.cpp
src/streaming/onevpl/engine/preproc/preproc_engine.cpp
src/streaming/onevpl/engine/preproc/preproc_session.cpp
src/streaming/onevpl/engine/preproc/preproc_dispatcher.cpp
src/streaming/onevpl/engine/preproc_engine_interface.cpp
src/streaming/onevpl/demux/async_mfp_demux_data_provider.cpp
src/streaming/onevpl/data_provider_dispatcher.cpp
src/streaming/onevpl/cfg_param_device_selector.cpp
src/streaming/onevpl/device_selector_interface.cpp
# GStreamer Streaming source
src/streaming/gstreamer/gstreamer_pipeline_facade.cpp
src/streaming/gstreamer/gstreamerpipeline.cpp
src/streaming/gstreamer/gstreamersource.cpp
src/streaming/gstreamer/gstreamer_buffer_utils.cpp
src/streaming/gstreamer/gstreamer_media_adapter.cpp
src/streaming/gstreamer/gstreamerenv.cpp
# Utils (ITT tracing)
src/utils/itt.cpp
)
file(GLOB_RECURSE gapi_3rdparty_srcs
"${CMAKE_CURRENT_LIST_DIR}/src/3rdparty/vasot/src/*.cpp"
)
ocv_add_dispatched_file(backends/fluid/gfluidimgproc_func SSE4_1 AVX2)
ocv_add_dispatched_file(backends/fluid/gfluidcore_func SSE4_1 AVX2)
ocv_list_add_prefix(gapi_srcs "${CMAKE_CURRENT_LIST_DIR}/")
# For IDE users
ocv_source_group("Src" FILES ${gapi_srcs} ${gapi_3rdparty_srcs})
ocv_source_group("Include" FILES ${gapi_ext_hdrs})
ocv_set_module_sources(HEADERS ${gapi_ext_hdrs} SOURCES ${gapi_srcs} ${gapi_3rdparty_srcs})
ocv_module_include_directories("${CMAKE_CURRENT_LIST_DIR}/src")
# VAS Object Tracking includes
ocv_module_include_directories(${CMAKE_CURRENT_LIST_DIR}/src/3rdparty/vasot/include)
ocv_create_module()
ocv_target_link_libraries(${the_module} PRIVATE ade)
if(TARGET ocv.3rdparty.openvino AND OPENCV_GAPI_WITH_OPENVINO)
ocv_target_link_libraries(${the_module} PRIVATE ocv.3rdparty.openvino)
ocv_install_used_external_targets(ocv.3rdparty.openvino)
endif()
if(HAVE_TBB)
ocv_target_link_libraries(${the_module} PRIVATE tbb)
endif()
# TODO: Consider support of ITT in G-API standalone mode.
if(CV_TRACE AND HAVE_ITT)
ocv_target_compile_definitions(${the_module} PRIVATE -DOPENCV_WITH_ITT=1)
ocv_module_include_directories(${ITT_INCLUDE_DIRS})
ocv_target_link_libraries(${the_module} PRIVATE ${ITT_LIBRARIES})
endif()
set(__test_extra_deps "")
if(TARGET ocv.3rdparty.openvino AND OPENCV_GAPI_WITH_OPENVINO)
list(APPEND __test_extra_deps ocv.3rdparty.openvino)
endif()
ocv_add_accuracy_tests(${__test_extra_deps})
# FIXME: test binary is linked with ADE directly since ADE symbols
# are not exported from libopencv_gapi.so in any form - thus
# there're two copies of ADE code in memory when tests run (!)
# src/ is specified to include dirs for INTERNAL tests only.
if(TARGET opencv_test_gapi)
target_include_directories(opencv_test_gapi PRIVATE "${CMAKE_CURRENT_LIST_DIR}/src")
target_link_libraries(opencv_test_gapi PRIVATE ade)
endif()
if(HAVE_TBB AND TARGET opencv_test_gapi)
ocv_target_link_libraries(opencv_test_gapi PRIVATE tbb)
endif()
if(HAVE_FREETYPE)
ocv_target_compile_definitions(${the_module} PRIVATE -DHAVE_FREETYPE)
if(TARGET opencv_test_gapi)
ocv_target_compile_definitions(opencv_test_gapi PRIVATE -DHAVE_FREETYPE)
endif()
ocv_target_link_libraries(${the_module} PRIVATE ${FREETYPE_LIBRARIES})
ocv_target_include_directories(${the_module} PRIVATE ${FREETYPE_INCLUDE_DIRS})
endif()
if(HAVE_OAK)
ocv_target_compile_definitions(${the_module} PRIVATE -DHAVE_OAK)
if(TARGET opencv_test_gapi)
ocv_target_compile_definitions(opencv_test_gapi PRIVATE -DHAVE_OAK)
endif()
ocv_target_link_libraries(${the_module} PRIVATE depthai::core)
endif()
if(HAVE_PLAIDML)
ocv_target_compile_definitions(${the_module} PRIVATE -DHAVE_PLAIDML)
if(TARGET opencv_test_gapi)
ocv_target_compile_definitions(opencv_test_gapi PRIVATE -DHAVE_PLAIDML)
endif()
ocv_target_link_libraries(${the_module} PRIVATE ${PLAIDML_LIBRARIES})
ocv_target_include_directories(${the_module} SYSTEM PRIVATE ${PLAIDML_INCLUDE_DIRS})
endif()
if(HAVE_GAPI_ONEVPL)
if(TARGET opencv_test_gapi)
ocv_target_compile_definitions(opencv_test_gapi PRIVATE -DHAVE_ONEVPL)
ocv_target_link_libraries(opencv_test_gapi PRIVATE ${VPL_IMPORTED_TARGETS})
if(MSVC)
target_compile_options(opencv_test_gapi PUBLIC "/wd4201")
endif()
if(HAVE_D3D11 AND HAVE_OPENCL)
ocv_target_include_directories(opencv_test_gapi SYSTEM PRIVATE ${OPENCL_INCLUDE_DIRS})
endif()
endif()
ocv_target_compile_definitions(${the_module} PRIVATE -DHAVE_ONEVPL)
ocv_target_link_libraries(${the_module} PRIVATE ${VPL_IMPORTED_TARGETS})
if(HAVE_DIRECTX AND HAVE_D3D11)
ocv_target_link_libraries(${the_module} PRIVATE d3d11 dxgi)
endif()
if(WIN32)
ocv_target_link_libraries(${the_module} PRIVATE mf mfuuid mfplat shlwapi mfreadwrite)
endif()
if(HAVE_D3D11 AND HAVE_OPENCL)
ocv_target_include_directories(${the_module} SYSTEM PRIVATE ${OPENCL_INCLUDE_DIRS})
endif()
if(UNIX AND HAVE_VA)
ocv_target_include_directories(${the_module} SYSTEM PRIVATE ${VA_INCLUDE_DIR})
ocv_target_link_libraries(${the_module} PRIVATE ${VA_LIBRARIES})
if(TARGET opencv_test_gapi)
ocv_target_include_directories(opencv_test_gapi SYSTEM PRIVATE ${VA_INCLUDE_DIR})
ocv_target_link_libraries(opencv_test_gapi PRIVATE ${VA_LIBRARIES})
endif()
endif()
endif()
ocv_option(OPENCV_GAPI_GSTREAMER "Build G-API with GStreamer support" HAVE_GSTREAMER)
if(HAVE_GSTREAMER AND OPENCV_GAPI_GSTREAMER)
if(TARGET opencv_test_gapi)
ocv_target_compile_definitions(opencv_test_gapi PRIVATE -DHAVE_GSTREAMER)
ocv_target_link_libraries(opencv_test_gapi PRIVATE ocv.3rdparty.gstreamer)
endif()
ocv_target_compile_definitions(${the_module} PRIVATE -DHAVE_GSTREAMER)
ocv_target_link_libraries(${the_module} PRIVATE ocv.3rdparty.gstreamer)
endif()
if(WIN32)
# Required for htonl/ntohl on Windows
ocv_target_link_libraries(${the_module} PRIVATE wsock32 ws2_32)
endif()
if(HAVE_DIRECTML)
ocv_target_compile_definitions(${the_module} PRIVATE HAVE_DIRECTML=1)
endif()
if(HAVE_ONNX)
ocv_target_link_libraries(${the_module} PRIVATE ${ONNX_LIBRARY})
ocv_target_compile_definitions(${the_module} PRIVATE HAVE_ONNX=1)
if(HAVE_ONNX_DML)
ocv_target_compile_definitions(${the_module} PRIVATE HAVE_ONNX_DML=1)
endif()
if(TARGET opencv_test_gapi)
ocv_target_compile_definitions(opencv_test_gapi PRIVATE HAVE_ONNX=1)
ocv_target_link_libraries(opencv_test_gapi PRIVATE ${ONNX_LIBRARY})
endif()
endif()
ocv_install_3rdparty_licenses(vasot "${CMAKE_CURRENT_SOURCE_DIR}/src/3rdparty/vasot/LICENSE.txt")
ocv_add_perf_tests()
ocv_add_samples()
# Required for sample with inference on host
if(TARGET example_gapi_onevpl_infer_with_advanced_device_selection)
if(TARGET ocv.3rdparty.openvino AND OPENCV_GAPI_WITH_OPENVINO)
ocv_target_link_libraries(example_gapi_onevpl_infer_with_advanced_device_selection PRIVATE ocv.3rdparty.openvino)
endif()
if(HAVE_DIRECTX AND HAVE_D3D11)
ocv_target_link_libraries(example_gapi_onevpl_infer_with_advanced_device_selection PRIVATE d3d11 dxgi)
endif()
if(HAVE_D3D11 AND HAVE_OPENCL)
ocv_target_include_directories(example_gapi_onevpl_infer_with_advanced_device_selection SYSTEM PRIVATE ${OPENCL_INCLUDE_DIRS})
endif()
if(UNIX AND HAVE_VA)
message(STATUS "GAPI VPL samples with VAAPI")
ocv_target_include_directories(example_gapi_onevpl_infer_with_advanced_device_selection SYSTEM PRIVATE ${VA_INCLUDE_DIR})
ocv_target_link_libraries(example_gapi_onevpl_infer_with_advanced_device_selection PRIVATE ${VA_LIBRARIES})
endif()
endif()
if(TARGET example_gapi_pipeline_modeling_tool)
if(WIN32)
ocv_target_link_libraries(example_gapi_pipeline_modeling_tool winmm.lib)
endif()
endif()
# perf test dependencies postprocessing
if(HAVE_GAPI_ONEVPL)
# NB: TARGET opencv_perf_gapi doesn't exist before `ocv_add_perf_tests`
# src/ is specified to include dirs for INTERNAL tests only.
if(TARGET opencv_perf_gapi)
target_include_directories(opencv_perf_gapi PRIVATE "${CMAKE_CURRENT_LIST_DIR}/src")
ocv_target_compile_definitions(opencv_perf_gapi PRIVATE -DHAVE_ONEVPL)
ocv_target_link_libraries(opencv_perf_gapi PRIVATE ${VPL_IMPORTED_TARGETS})
if(HAVE_D3D11 AND HAVE_OPENCL)
ocv_target_include_directories(opencv_perf_gapi SYSTEM PRIVATE ${OPENCL_INCLUDE_DIRS})
endif()
endif()
endif()

View File

@ -1,51 +0,0 @@
set(ade_src_dir "${OpenCV_BINARY_DIR}/3rdparty/ade")
set(ade_filename "v0.1.2e.zip")
set(ade_subdir "ade-0.1.2e")
set(ade_md5 "962ce79e0b95591f226431f7b5f152cd")
ocv_download(FILENAME ${ade_filename}
HASH ${ade_md5}
URL
"${OPENCV_ADE_URL}"
"$ENV{OPENCV_ADE_URL}"
"https://github.com/opencv/ade/archive/"
DESTINATION_DIR ${ade_src_dir}
ID ADE
STATUS res
UNPACK RELATIVE_URL)
if (NOT res)
return()
endif()
set(ADE_root "${ade_src_dir}/${ade_subdir}/sources/ade")
file(GLOB_RECURSE ADE_sources "${ADE_root}/source/*.cpp")
file(GLOB_RECURSE ADE_include "${ADE_root}/include/ade/*.hpp")
add_library(ade STATIC ${OPENCV_3RDPARTY_EXCLUDE_FROM_ALL}
${ADE_include}
${ADE_sources}
)
# https://github.com/opencv/ade/issues/32
if(CV_CLANG AND CMAKE_CXX_COMPILER_ID STREQUAL "AppleClang" AND NOT CMAKE_CXX_COMPILER_VERSION VERSION_LESS 13.1)
ocv_warnings_disable(CMAKE_CXX_FLAGS -Wdeprecated-copy)
endif()
target_include_directories(ade PUBLIC $<BUILD_INTERFACE:${ADE_root}/include>)
set_target_properties(ade PROPERTIES
POSITION_INDEPENDENT_CODE True
OUTPUT_NAME ade
DEBUG_POSTFIX "${OPENCV_DEBUG_POSTFIX}"
COMPILE_PDB_NAME ade
COMPILE_PDB_NAME_DEBUG "ade${OPENCV_DEBUG_POSTFIX}"
ARCHIVE_OUTPUT_DIRECTORY ${3P_LIBRARY_OUTPUT_PATH}
)
if(ENABLE_SOLUTION_FOLDERS)
set_target_properties(ade PROPERTIES FOLDER "3rdparty")
endif()
if(NOT BUILD_SHARED_LIBS)
ocv_install_target(ade EXPORT OpenCVModules ARCHIVE DESTINATION ${OPENCV_3P_LIB_INSTALL_PATH} COMPONENT dev OPTIONAL)
endif()
ocv_install_3rdparty_licenses(ade "${ade_src_dir}/${ade_subdir}/LICENSE")

View File

@ -1,49 +0,0 @@
OCV_OPTION(WITH_ADE "Enable ADE framework (required for Graph API module)" ON)
OCV_OPTION(WITH_FREETYPE "Enable FreeType framework" OFF)
OCV_OPTION(WITH_PLAIDML "Include PlaidML2 support" OFF)
OCV_OPTION(WITH_OAK "Include OpenCV AI Kit support" OFF)
if(NOT WITH_ADE)
return()
endif()
if(ade_DIR)
# if ade_DIR is set, use ADE-supplied CMake script
# to set up variables to the prebuilt ADE
find_package(ade 0.1.0)
endif()
if(NOT TARGET ade)
# if ade_DIR is not set, try to use automatically
# downloaded one (if there any)
include("${CMAKE_CURRENT_LIST_DIR}/DownloadADE.cmake")
endif()
if(WITH_FREETYPE)
ocv_check_modules(FREETYPE freetype2)
if (FREETYPE_FOUND)
set(HAVE_FREETYPE TRUE)
endif()
endif()
if(WITH_PLAIDML)
find_package(PlaidML2 CONFIG QUIET)
if (PLAIDML_FOUND)
set(HAVE_PLAIDML TRUE)
endif()
endif()
if(WITH_GAPI_ONEVPL)
find_package(VPL)
if(VPL_FOUND)
set(HAVE_GAPI_ONEVPL TRUE)
endif()
endif()
if(WITH_OAK)
find_package(depthai QUIET)
if(depthai_FOUND)
set(HAVE_OAK TRUE)
endif()
endif()

View File

@ -1,62 +0,0 @@
if("${CMAKE_BUILD_TYPE}" STREQUAL "")
set(CMAKE_BUILD_TYPE "Release")
endif()
if (NOT TARGET ade )
find_package(ade 0.1.0 REQUIRED)
endif()
if (WITH_GAPI_ONEVPL)
find_package(VPL)
if(VPL_FOUND)
set(HAVE_GAPI_ONEVPL TRUE)
endif()
endif()
set(FLUID_TARGET fluid)
set(FLUID_ROOT "${CMAKE_CURRENT_LIST_DIR}/../")
file(GLOB FLUID_includes "${FLUID_ROOT}/include/opencv2/*.hpp"
"${FLUID_ROOT}/include/opencv2/gapi/g*.hpp"
"${FLUID_ROOT}/include/opencv2/gapi/util/*.hpp"
"${FLUID_ROOT}/include/opencv2/gapi/own/*.hpp"
"${FLUID_ROOT}/include/opencv2/gapi/fluid/*.hpp")
file(GLOB FLUID_sources "${FLUID_ROOT}/src/api/g*.cpp"
"${FLUID_ROOT}/src/api/rmat.cpp"
"${FLUID_ROOT}/src/api/media.cpp"
"${FLUID_ROOT}/src/compiler/*.cpp"
"${FLUID_ROOT}/src/compiler/passes/*.cpp"
"${FLUID_ROOT}/src/executor/*.cpp"
"${FLUID_ROOT}/src/backends/fluid/*.cpp"
"${FLUID_ROOT}/src/backends/streaming/*.cpp"
"${FLUID_ROOT}/src/backends/common/*.cpp")
add_library(${FLUID_TARGET} STATIC ${FLUID_includes} ${FLUID_sources})
target_include_directories(${FLUID_TARGET}
PUBLIC $<BUILD_INTERFACE:${FLUID_ROOT}/include>
PRIVATE ${FLUID_ROOT}/src)
target_compile_definitions(${FLUID_TARGET} PUBLIC GAPI_STANDALONE
# This preprocessor definition resolves symbol clash when
# standalone fluid meets gapi ocv module in one application
PUBLIC cv=fluidcv)
set_target_properties(${FLUID_TARGET} PROPERTIES POSITION_INDEPENDENT_CODE True)
set_property(TARGET ${FLUID_TARGET} PROPERTY CXX_STANDARD 11)
if(MSVC)
target_compile_options(${FLUID_TARGET} PUBLIC "/wd4251")
target_compile_options(${FLUID_TARGET} PUBLIC "/wd4275")
target_compile_definitions(${FLUID_TARGET} PRIVATE _CRT_SECURE_NO_DEPRECATE)
# Disable obsolete warning C4503 popping up on MSVC older than 2017
# https://docs.microsoft.com/en-us/cpp/error-messages/compiler-warnings/compiler-warning-level-1-c4503?view=vs-2019
set_target_properties(${FLUID_TARGET} PROPERTIES COMPILE_FLAGS "/wd4503")
endif()
target_link_libraries(${FLUID_TARGET} PRIVATE ade)
if(WIN32)
# Required for htonl/ntohl on Windows
target_link_libraries(${FLUID_TARGET} PRIVATE wsock32 ws2_32)
endif()

View File

@ -1,125 +0,0 @@
# Graph API {#gapi}
# Introduction {#gapi_root_intro}
OpenCV Graph API (or G-API) is a new OpenCV module targeted to make
regular image processing fast and portable. These two goals are
achieved by introducing a new graph-based model of execution.
G-API is a special module in OpenCV -- in contrast with the majority
of other main modules, this one acts as a framework rather than some
specific CV algorithm. G-API provides means to define CV operations,
construct graphs (in form of expressions) using it, and finally
implement and run the operations for a particular backend.
@note G-API is a new module and is now in active development. Its API
is volatile at the moment and there may be minor but
compatibility-breaking changes in the future.
# Contents
G-API documentation is organized into the following chapters:
- @subpage gapi_purposes
The motivation behind G-API and its goals.
- @subpage gapi_hld
General overview of G-API architecture and its major internal
components.
- @subpage gapi_kernel_api
Learn how to introduce new operations in G-API and implement it for
various backends.
- @subpage gapi_impl
Low-level implementation details of G-API, for those who want to
contribute.
- API Reference: functions and classes
- @subpage gapi_ref
Core G-API classes, data types, backends, etc.
- @subpage gapi_core
Core G-API operations - arithmetic, boolean, and other matrix
operations;
- @subpage gapi_imgproc
Image processing functions: color space conversions, various
filters, etc.
- @subpage gapi_video
Video processing functionality.
- @subpage gapi_draw
Drawing and composition functionality
# API Example {#gapi_example}
A very basic example of G-API pipeline is shown below:
@include modules/gapi/samples/api_example.cpp
<!-- TODO align this code with text using marks and itemized list -->
G-API is a separate OpenCV module so its header files have to be
included explicitly. The first four lines of `main()` create and
initialize OpenCV's standard video capture object, which fetches
video frames from either an attached camera or a specified file.
The G-API pipeline is constructed next. In fact, it is a series of G-API
operation calls on cv::GMat data. The important aspect of G-API is
that this code block is just a declaration of actions, but not the
actions themselves. No processing happens at this point; G-API only
tracks which operations form the pipeline and how they are connected. G-API
_Data objects_ (here, cv::GMat) are used to connect operations
to each other. `in` is an _empty_ cv::GMat signalling that it is the
beginning of the computation.
After the G-API code is written, it is captured into a call graph by
instantiating a cv::GComputation object. This object takes
input/output data references (in this example, the `in` and `out`
cv::GMat objects, respectively) as parameters and reconstructs the
call graph based on all the data flow between `in` and `out`.
cv::GComputation is a thin object in the sense that it just captures which
operations form a computation. However, it can be used to execute
computations -- in the following processing loop, every captured frame (a
cv::Mat `input_frame`) is passed to cv::GComputation::apply().
![Example pipeline running on sample video 'vtest.avi'](pics/demo.jpg)
cv::GComputation::apply() is a polymorphic method which accepts a
variadic number of arguments. Since this computation is defined on one
input and one output, a special overload of cv::GComputation::apply() is
used to pass input data and get output data.
Internally, cv::GComputation::apply() compiles the captured graph for
the given input parameters and executes the compiled graph on data
immediately.
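For reference, a condensed sketch of such a pipeline is shown below (it assumes a simple resize/convert/blur chain and is not the exact contents of api_example.cpp):
@code{.cpp}
#include <opencv2/gapi.hpp>
#include <opencv2/gapi/core.hpp>
#include <opencv2/gapi/imgproc.hpp>
#include <opencv2/highgui.hpp>
#include <opencv2/videoio.hpp>

int main() {
    cv::VideoCapture cap(0);                            // standard OpenCV video source

    cv::GMat in;                                        // empty GMat: the graph's input
    cv::GMat vga  = cv::gapi::resize(in, cv::Size(), 0.5, 0.5);
    cv::GMat gray = cv::gapi::BGR2Gray(vga);
    cv::GMat out  = cv::gapi::blur(gray, cv::Size(5, 5));
    cv::GComputation ac(in, out);                       // capture the expression into a graph

    cv::Mat input_frame, output_frame;
    while (cap.read(input_frame)) {
        ac.apply(input_frame, output_frame);            // compile for this input format and run
        cv::imshow("output", output_frame);
        if (cv::waitKey(30) >= 0) break;
    }
    return 0;
}
@endcode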
There are a number of important concepts that can be outlined with this example:
* Graph declaration and graph execution are distinct steps;
* A graph is built implicitly from a sequence of G-API expressions;
* G-API supports function-like calls -- e.g. cv::gapi::resize(), and
operators, e.g. operator|() which is used to compute bitwise OR;
* G-API syntax aims to look pure: every operation call within a graph
yields a new result, thus forming a directed acyclic graph (DAG);
* Graph declaration is not bound to any data -- real data objects
(cv::Mat) come into the picture after the graph is already declared.
<!-- FIXME: The above operator|() link links to MatExpr not GAPI -->
See [tutorials and porting examples](@ref tutorial_table_of_content_gapi)
to learn more on various G-API features and concepts.
<!-- TODO Add chapter on declaration, compilation, execution -->

View File

@ -1,76 +0,0 @@
# Why Graph API? {#gapi_purposes}
# Motivation behind G-API {#gapi_intro_why}
G-API module brings graph-based model of execution to OpenCV. This
chapter briefly describes how this new model can help software
developers in two aspects: optimizing and porting image processing
algorithms.
## Optimizing with Graph API {#gapi_intro_opt}
Traditionally OpenCV provided a lot of stand-alone image processing
functions (see modules `core` and `imgproc`). Many of these functions
are well-optimized (e.g. vectorized for specific CPUs, parallel, etc.),
but the out-of-the-box optimization scope has still been limited to a
single function only -- optimizing the whole algorithm built atop those
functions was the programmer's responsibility.
OpenCV 3.0 introduced _Transparent API_ (or _T-API_) which allowed
offloading OpenCV function calls transparently to OpenCL devices and saving
on Host/Device data transfers with cv::UMat -- and it was a great step
forward. However, T-API is a dynamic API -- user code still remains
unconstrained and OpenCL kernels are enqueued in arbitrary order, thus
eliminating further pipeline-level optimization potential.
G-API brings an implicit graph model to OpenCV 4.0. The graph model captures
all operations and their data dependencies in a pipeline and so provides
the G-API framework with extra information to do pipeline-level
optimizations.
The cornerstone of graph-based optimizations is _Tiling_. Tiling
allows breaking the processing into smaller parts and reorganizing
operations to enable data parallelism, improve data locality, and reduce
the memory footprint. Data locality is an especially important aspect of
software optimization due to the different costs of memory access on modern
computer architectures -- the more data is reused in the first-level
cache, the more efficient the pipeline is.
Certainly, the aforementioned techniques can be applied manually --
but this requires extra skills and knowledge of the target platform, and
the algorithm implementation changes irrevocably -- becoming more
specific, less flexible, and harder to extend and maintain.
G-API takes this responsibility and complexity from the user and does the
majority of the work by itself, keeping the algorithm code clean from
device or optimization details. This approach has its own limitations,
though, as the graph model is a _constrained_ model and not every
algorithm can be represented as a graph, so the G-API scope is limited
only to regular image processing -- various filters, arithmetic,
binary operations, and well-defined geometrical transformations.
## Porting with Graph API {#gapi_intro_port}
The essence of G-API is declaring a sequence of operations to run, and
then executing that sequence. G-API is a constrained API, so it puts a
number of limitations on which operations can form a pipeline and
which data these operations may exchange with each other.
This formalization in fact helps to make an algorithm portable. G-API
clearly separates operation _interfaces_ from their _implementations_.
One operation (_kernel_) may have multiple implementations even for a
single device (e.g., an OpenCV-based "reference" implementation and a
tiled optimized implementation, both running on CPU). Graphs (or
_Computations_ in G-API terms) are built only using operation
interfaces, not implementations -- thus the same graph can be executed
on different devices (and, of course, using different optimization
techniques) with little-to-no changes in the graph itself.
G-API supports plugins (_Backends_) which aggregate logic and
intelligence on what is the best way to execute on a particular
platform. Once a pipeline is built with G-API, it can be parametrized
to use any of the backends (or a combination of them) and so a graph
can be ported easily to a new platform.
@sa @ref gapi_hld

View File

@ -1,160 +0,0 @@
# High-level design overview {#gapi_hld}
[TOC]
# G-API High-level design overview
G-API is a heterogeneous framework and provides a unified API to
program image processing pipelines with a number of supported
backends.
The key design idea is to keep pipeline code itself platform-neutral
while specifying which kernels to use and which devices to utilize
using extra parameters at graph compile (configuration) time. This
requirement has led to the following architecture:
<!-- FIXME: Render from dot directly -->
![G-API framework architecture](pics/gapi_scheme.png)
There are three layers in this architecture:
* **API Layer** -- this is the top layer, which implements the G-API
public interface, its building blocks and semantics.
When a user constructs a pipeline with G-API, they interact with this
layer directly, and the entities the user operates on (like cv::GMat
or cv::GComputation) are provided by this layer.
* **Graph Compiler Layer** -- this is the intermediate layer which
unrolls the user computation into a graph and then applies a number of
transformations to it (e.g. optimizations). This layer is built atop
the [ADE Framework](@ref gapi_detail_ade).
* **Backends Layer** -- this is the lowest-level layer, which lists a
number of _Backends_. In contrast with the above two layers,
backends are highly coupled with low-level platform details, with
each backend standing for a particular platform. A backend operates on a
processed graph (coming from the graph compiler) and executes this
graph optimally for a specific platform or device.
# API layer {#gapi_api_layer}
The API layer is what the user interacts with when defining and using a
pipeline (a Computation in G-API terms). The API layer defines a set of
G-API _dynamic_ objects which can be used as inputs, outputs, and
intermediate data objects within a graph:
* cv::GMat
* cv::GScalar
* cv::GArray (template class)
The API layer specifies a list of Operations which are defined on these
data objects -- so-called kernels. See the G-API [core](@ref gapi_core)
and [imgproc](@ref gapi_imgproc) namespaces for details on which
operations G-API provides by default.
G-API is not limited to these operations only -- users can define
their own kernels easily using a special macro G_TYPED_KERNEL().
The API layer is also responsible for marshalling and storing operation
parameters on pipeline creation. In addition to the aforementioned
G-API dynamic objects, operations may also accept arbitrary
parameters (more on this [here](@ref gapi_detail_params)), so the API
layer captures their values and stores them internally until the moment of
execution.
Finally, cv::GComputation and cv::GCompiled are the remaining
important components of the API layer. The former wraps a series of G-API
expressions into an object (graph), and the latter is a product of
graph _compilation_ (see [this chapter](@ref gapi_detail_compiler) for
details).
# Graph compiler layer {#gapi_compiler}
Every G-API computation is compiled before it executes. The compilation
process is triggered in two ways:
* _implicitly_, when cv::GComputation::apply() is used. In this case,
graph compilation is then immediately followed by execution.
* _explicitly_, when cv::GComputation::compile() is used. In this case,
a cv::GCompiled object is returned which then can be invoked as a
C++ functor.
The first way is recommended for cases when input data format is not
known in advance -- e.g. when it comes from an arbitrary input file.
The second way is recommended for deployment (production) scenarios
where input data characteristics are usually predefined.
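A minimal sketch contrasting the two ways is shown below (it assumes `sobel` is a cv::GComputation defined on one cv::GMat input and one cv::GMat output):
@code{.cpp}
cv::Mat in_mat = cv::imread("input.png"), out_mat;

// 1. Implicit: compile and execute in a single call
sobel.apply(in_mat, out_mat);

// 2. Explicit: compile once for a known input format, then reuse the result
cv::GCompiled cc = sobel.compile(cv::descr_of(in_mat));
cc(in_mat, out_mat);          // a cv::GCompiled is invoked as a C++ functor
@endcode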
The graph compilation process is built atop the ADE Framework. Initially, a
bipartite graph is generated from the expressions captured by the API layer.
This graph contains nodes of two types: _Data_ and _Operations_. The graph
always starts and ends with Data node(s), with Operation nodes
in-between. Every Operation node has inputs and outputs, both of which are Data
nodes.
After the initial graph is generated, it is actually processed by a
number of graph transformations, called _passes_. ADE Framework acts
as a compiler pass management engine, and passes are written
specifically for G-API.
There are different passes which check graph validity, refine details
on operations and data, organize nodes into clusters ("Islands") based
on affinity or user-specified regioning[TBD], and more. Backends also
are able to inject backend-specific passes into the compilation
process, see more on this in the [dedicated chapter](@ref gapi_detail_meta).
The result of graph compilation is a compiled object, represented by the class
cv::GCompiled. A new cv::GCompiled object is always created regardless
of whether there was an explicit or implicit compilation request (see
above). Actual graph execution happens within cv::GCompiled and is
determined by the backends which participated in the graph compilation.
@sa cv::GComputation::apply(), cv::GComputation::compile(), cv::GCompiled
# Backends layer {#gapi_backends}
The above diagram lists two backends, _OpenCV_ and _Fluid_. _OpenCV_
is the so-called "reference" backend, which implements G-API operations
using plain old OpenCV functions. This backend is useful for
prototyping on a familiar development system. _Fluid_ is a plugin for
cache-efficient execution on CPU -- it implements a different
execution policy and operates with its own, special kernels. The Fluid
backend allows achieving a smaller memory footprint and better memory
locality when running on CPU.
There may be more backends available, e.g. Halide, OpenCL, etc. --
G-API provides a uniform internal API to develop backends, so any
enthusiast or company is free to scale G-API to a new platform or
accelerator. In terms of OpenCV infrastructure, every new backend is a
new distinct OpenCV module, which extends G-API when built as a part
of OpenCV.
# Graph execution {#gapi_compiled}
The way a graph is executed is defined by the backends selected for
compilation. In fact, every backend builds its own execution script as
the final stage of the graph compilation process, when an executable
(compiled) object is being generated. For example, in the OpenCV backend,
this script is just a topologically sorted sequence of OpenCV
functions to call; for the Fluid backend, it is a similar thing -- a
topologically sorted list of _Agents_ processing lines of input on
every iteration.
Graph execution is triggered in two ways:
* via cv::GComputation::apply(), with graph compiled in-place exactly
for the given input data;
* via cv::GCompiled::operator()(), when the graph has been precompiled.
Both methods are polymorphic and take a variadic number of arguments,
with validity checks performed at runtime. If the number, shapes, or
formats of the passed data objects differ from what is expected, a runtime
exception is thrown. G-API also provides _typed_ wrappers to move
these checks to compile time -- see `cv::GComputationT<>`.
G-API graph execution is declared stateless -- meaning that a
compiled functor (cv::GCompiled) acts like a pure C++ function and
provides the same result for the same set of input arguments.
Both execution methods take \f$N+M\f$ parameters, where \f$N\f$ is a
number of inputs, and \f$M\f$ is a number of outputs on which a
cv::GComputation is defined. Note that while G-API types (cv::GMat,
etc) are used in definition, the execution methods accept OpenCV's
traditional data types (like cv::Mat) which hold actual data -- see
table in [parameter marshalling](@ref gapi_detail_params).
@sa @ref gapi_impl, @ref gapi_kernel_api

View File

@ -1,188 +0,0 @@
# Kernel API {#gapi_kernel_api}
[TOC]
# G-API Kernel API
The core idea behind G-API is portability -- a pipeline built with
G-API must be portable (or at least able to be portable). It means
that either it works out of the box when compiled for a new platform,
_or_ G-API provides the necessary tools to make it run there, with
little-to-no changes in the algorithm itself.
This idea can be achieved by separating the kernel interface from its
implementation. Once a pipeline is built using kernel interfaces, it
becomes implementation-neutral -- the implementation details
(i.e. which kernels to use) are passed at a separate stage (graph
compilation).
Kernel-implementation hierarchy may look like:
@dot Kernel API/implementation hierarchy example
digraph {
rankdir=BT;
node [shape=record];
ki_a [label="{<f0> interface\nA}"];
ki_b [label="{<f0> interface\nB}"];
{rank=same; ki_a ki_b};
"CPU::A" -> ki_a [dir="forward"];
"OpenCL::A" -> ki_a [dir="forward"];
"Halide::A" -> ki_a [dir="forward"];
"CPU::B" -> ki_b [dir="forward"];
"OpenCL::B" -> ki_b [dir="forward"];
"Halide::B" -> ki_b [dir="forward"];
}
@enddot
A pipeline itself then can be expressed only in terms of `A`, `B`, and
so on, and choosing which implementation to use in execution becomes
an external parameter.
# Defining a kernel {#gapi_defining_kernel}
G-API provides a macro to define a new kernel interface --
G_TYPED_KERNEL():
@snippet samples/cpp/tutorial_code/gapi/doc_snippets/kernel_api_snippets.cpp filter2d_api
This macro is a shortcut to a new type definition. It takes three
arguments to register a new type, and requires the type body to be present
(see [below](@ref gapi_kernel_supp_info)). The macro arguments are:
1. Kernel interface name -- also serves as the name of the new type defined
with this macro;
2. Kernel signature -- an `std::function<>`-like signature which defines
the API of the kernel;
3. Kernel's unique name -- used to identify the kernel when its type
information is stripped within the system.
A kernel declaration may be seen as a function declaration -- in both cases
a new entity must then be used according to the way it was defined.
The kernel signature defines the kernel's usage syntax -- which parameters
it takes during graph construction. Implementations can also use this
signature to derive it into backend-specific callback signatures (see
the next chapter).
A kernel may accept values of any type, and G-API _dynamic_ types are
handled in a special way. All other types are opaque to G-API and
passed to the kernel in `outMeta()` or in execution callbacks as-is.
The kernel's return value can _only_ be of a G-API dynamic type -- cv::GMat,
cv::GScalar, or `cv::GArray<T>`. If an operation has more than one
output, it should be wrapped into an `std::tuple<>` (which can contain
only the mentioned G-API types). Arbitrary-output-number operations are
not supported.
Once a kernel is defined, it can be used in pipelines with the special,
G-API-supplied method `::on()`. This method has the same signature as
defined in the kernel, so this code:
@snippet samples/cpp/tutorial_code/gapi/doc_snippets/kernel_api_snippets.cpp filter2d_on
is a perfectly legal construction. This example has some verbosity,
though, so usually a kernel declaration comes with a C++ function
wrapper ("factory method") which enables optional parameters, more
compact syntax, Doxygen comments, etc.:
@snippet samples/cpp/tutorial_code/gapi/doc_snippets/kernel_api_snippets.cpp filter2d_wrap
so now it can be used like:
@snippet samples/cpp/tutorial_code/gapi/doc_snippets/kernel_api_snippets.cpp filter2d_wrap_call
# Extra information {#gapi_kernel_supp_info}
In the current version, the kernel declaration body (everything within the
curly braces) must contain a static function `outMeta()`. This function
establishes a functional dependency between the operation's input and
output metadata.
_Metadata_ is information about the data a kernel operates on. Since
non-G-API types are opaque to G-API, G-API cares only about `G*` data
descriptors (i.e. dimensions and format of cv::GMat, etc).
`outMeta()` is also an example of how a kernel's signature can be
transformed into a derived callback -- note that in this example,
the `outMeta()` signature exactly follows the kernel signature (defined
within the macro) but is different -- where the kernel expects cv::GMat,
`outMeta()` takes and returns cv::GMatDesc (a G-API metadata structure
for cv::GMat).
The point of `outMeta()` is to propagate metadata information within
the computation from inputs to outputs and to infer metadata of internal
(intermediate, temporary) data objects. This information is required
for further pipeline optimizations, memory allocation, and other
operations done by the G-API framework during graph compilation.
<!-- TODO add examples -->
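As an illustrative sketch (assuming a custom operation, not one of the snippets above), an `outMeta()` which fixes the output depth while keeping the input size could look like this:
@code{.cpp}
G_TYPED_KERNEL(GMagnitudeF32, <cv::GMat(cv::GMat, cv::GMat)>, "sample.custom.magnitude.f32") {
    static cv::GMatDesc outMeta(cv::GMatDesc in_x, cv::GMatDesc /*in_y*/) {
        // Output keeps the input size but is always a single-channel CV_32F image
        return in_x.withType(CV_32F, 1);
    }
};
@endcode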
# Implementing a kernel {#gapi_kernel_implementing}
Once a kernel is declared, its interface can be used to implement
versions of this kernel in different backends. This concept is
naturally projected from the object-oriented programming
"Interface/Implementation" idiom: an interface can be implemented
multiple times, and different implementations of a kernel should be
substitutable with each other without breaking the algorithm
(pipeline) logic (Liskov Substitution Principle).
Every backend defines its own way to implement a kernel interface.
This way is regular, though -- whatever the plugin is, its kernel
implementation must be "derived" from a kernel interface type.
Kernel implementations are then organized into _kernel
packages_. Kernel packages are passed to cv::GComputation::compile()
as compile arguments, with some hints to G-API on how to select proper
kernels (see more on this in "Heterogeneity"[TBD]).
For example, the aforementioned `Filter2D` is implemented in the
"reference" CPU (OpenCV) plugin this way (*NOTE* -- this is a
simplified form with improper border handling):
@snippet samples/cpp/tutorial_code/gapi/doc_snippets/kernel_api_snippets.cpp filter2d_ocv
Note how the CPU (OpenCV) plugin has transformed the original kernel
signature:
- The input cv::GMat has been substituted with cv::Mat, holding the actual input
data for the underlying OpenCV function call;
- The output cv::GMat has been transformed into an extra output parameter, thus
`GCPUFilter2D::run()` takes one more argument than the original
kernel signature.
The basic intuition for the kernel developer here is _not to care_ where
those cv::Mat objects come from instead of the original cv::GMat -- and
just follow the signature conventions defined by the plugin. G-API
will call this method during execution and supply all the necessary
information (and forward the original opaque data as-is).
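Continuing the illustrative example from above (the names `GMagnitudeF32`/`GCPUMagnitudeF32` are assumptions, not part of the original snippets), an OpenCV-backend implementation and its packaging could look like:
@code{.cpp}
GAPI_OCV_KERNEL(GCPUMagnitudeF32, GMagnitudeF32) {
    static void run(const cv::Mat &x, const cv::Mat &y, cv::Mat &out) {
        cv::magnitude(x, y, out);   // the plain OpenCV function does the actual work
    }
};

// Kernel implementations are grouped into packages and passed at compile time:
auto pkg = cv::gapi::kernels<GCPUMagnitudeF32>();
// graph.apply(ins, outs, cv::compile_args(pkg));
@endcode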
# Compound kernels {#gapi_kernel_compound}
Sometimes a kernel is a single entity only at the API level. This is convenient
for users, but on a particular implementation side it may be better to
have multiple kernels (a subgraph) doing the job instead. An example
is goodFeaturesToTrack() -- while in the OpenCV backend it may remain a
single kernel, with Fluid it becomes compound -- Fluid can handle the Harris
response calculation but can't do sparse non-maximum suppression and
point extraction to an STL vector:
<!-- PIC -->
A compound kernel _implementation_ can be defined using a generic
macro GAPI_COMPOUND_KERNEL():
@snippet samples/cpp/tutorial_code/gapi/doc_snippets/kernel_api_snippets.cpp compound
<!-- TODO: ADD on how Compound kernels may simplify dispatching -->
<!-- TODO: Add details on when expand() is called! -->
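For illustration, a hedged sketch of a compound implementation is shown below (the operation `GGradientMag` and the class names are assumptions, and this is not the goodFeaturesToTrack() snippet referenced above). Instead of a `run()` callback, a compound kernel provides an `expand()` method which builds a subgraph of existing operations:
@code{.cpp}
G_TYPED_KERNEL(GGradientMag, <cv::GMat(cv::GMat)>, "sample.custom.gradient-mag") {
    static cv::GMatDesc outMeta(cv::GMatDesc in) { return in.withDepth(CV_32F); }
};

GAPI_COMPOUND_KERNEL(GCompoundGradientMag, GGradientMag) {
    static cv::GMat expand(cv::GMat in) {
        // Express the operation as a small subgraph of standard G-API calls
        cv::GMat gx = cv::gapi::Sobel(in, CV_32F, 1, 0);
        cv::GMat gy = cv::gapi::Sobel(in, CV_32F, 0, 1);
        return cv::gapi::sqrt(cv::gapi::mul(gx, gx) + cv::gapi::mul(gy, gy));
    }
};
@endcode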
It is important to distinguish a compound kernel from a G-API higher-order
function, i.e. a C++ function which looks like a kernel but in fact
generates a subgraph. The core difference is that a compound kernel is
an _implementation detail_ and a kernel implementation may be either
compound or not (depending on backend capabilities), while a
higher-order function is a "macro" in terms of G-API and so cannot act as
an interface which then needs to be implemented by a backend.

View File

@ -1,29 +0,0 @@
# Implementation details {#gapi_impl}
[TOC]
# G-API Implementation details
@note this section is still in progress.
# API layer {#gapi_detail_api}
## Expression unrolling {#gapi_detail_expr}
## Parameter marshalling {#gapi_detail_params}
## Operations representation {#gapi_detail_operations}
# Graph compiler {#gapi_detail_compiler}
## ADE basics {#gapi_detail_ade}
## Graph model representation {#gapi_detail_gmodel}
## G-API metadata and passes {#gapi_detail_meta}
# Backends {#gapi_detail_backends}
## Backend scope of work {#gapi_backend_scope}
## Graph transformation {#gapi_backend_pass}

Binary file not shown.

Before

Width:  |  Height:  |  Size: 64 KiB

Binary file not shown.

Before

Width:  |  Height:  |  Size: 75 KiB

Binary file not shown.

Before

Width:  |  Height:  |  Size: 4.3 KiB

View File

@ -1,6 +0,0 @@
*.bbl
*.blg
*.sty
*.tex
*-converted-to.pdf
mtheme.sty/

View File

@ -1,27 +0,0 @@
# G-API Overview
This is the latest overview slide deck on G-API.
## Prerequisites
- [Emacs] v24 or higher;
- [Org]-mode 8.2.10;
- `pdflatex`;
- `texlive-latex-recommended` ([Beamer] package);
- `texlive-font-utils` (`epstopdf`);
- `wget` (for `get_sty.sh`).
## Building
1. Download and build the [Metropolis] theme with the script:
```
$ ./get_sty.sh
```
2. Now open `gapi_overview.org` with Emacs and press `C-c C-e l P`.
[Emacs]: https://www.gnu.org/software/emacs/
[Org]: https://orgmode.org/
[Beamer]: https://ctan.org/pkg/beamer
[Metropolis]: https://github.com/matze/mtheme

View File

@ -1,961 +0,0 @@
#+TITLE: OpenCV 4.4 Graph API
#+AUTHOR: Dmitry Matveev\newline Intel Corporation
#+OPTIONS: H:2 toc:t num:t
#+LATEX_CLASS: beamer
#+LATEX_CLASS_OPTIONS: [presentation]
#+LATEX_HEADER: \usepackage{transparent} \usepackage{listings} \usepackage{pgfplots} \usepackage{mtheme.sty/beamerthememetropolis}
#+LATEX_HEADER: \setbeamertemplate{frame footer}{OpenCV 4.4 G-API: Overview and programming by example}
#+BEAMER_HEADER: \subtitle{Overview and programming by example}
#+BEAMER_HEADER: \titlegraphic{ \vspace*{3cm}\hspace*{5cm} {\transparent{0.2}\includegraphics[height=\textheight]{ocv_logo.eps}}}
#+COLUMNS: %45ITEM %10BEAMER_ENV(Env) %10BEAMER_ACT(Act) %4BEAMER_COL(Col) %8BEAMER_OPT(Opt)
* G-API: What is, why, what's for?
** OpenCV evolution in one slide
*** Version 1.x -- Library inception
- Just a set of CV functions + helpers around (visualization, IO);
*** Version 2.x -- Library rewrite
- OpenCV meets C++, ~cv::Mat~ replaces ~IplImage*~;
*** Version 3.0 -- Welcome Transparent API (T-API)
- ~cv::UMat~ is introduced as a /transparent/ addition to
~cv::Mat~;
- With ~cv::UMat~, an OpenCL kernel can be enqueued instead of
immediately running C code;
- ~cv::UMat~ data is kept on a /device/ until explicitly queried.
** OpenCV evolution in one slide (cont'd)
# FIXME: Learn proper page-breaking!
*** Version 4.0 -- Welcome Graph API (G-API)
- A new separate module (not a full library rewrite);
- A framework (or even a /meta/-framework);
- Usage model:
- /Express/ an image/vision processing graph and then /execute/ it;
- Fine-tune execution without changes in the graph;
- Similar to Halide -- separates logic from
platform details.
- More than Halide:
- Kernels can be written in unconstrained platform-native code;
- Halide can serve as a backend (one of many).
** OpenCV evolution in one slide (cont'd)
# FIXME: Learn proper page-breaking!
*** Version 4.2 -- New horizons
- Introduced in-graph inference via OpenVINO™ Toolkit;
- Introduced video-oriented Streaming execution mode;
- Extended focus from individual image processing to the full
application pipeline optimization.
*** Version 4.4 -- More on video
- Introduced a notion of stateful kernels;
- The road to object tracking, background subtraction, etc. in the
graph;
- Added more video-oriented operations (feature detection, Optical
flow).
** Why G-API?
*** Why introduce a new execution model?
- Ultimately it is all about optimizations;
- or at least about a /possibility/ to optimize;
- A CV algorithm is usually not a single function call, but a
composition of functions;
- Different models operate at different levels of knowledge on the
algorithm (problem) we run.
** Why G-API? (cont'd)
# FIXME: Learn proper page-breaking!
*** Why introduce a new execution model?
- *Traditional* -- every function can be optimized (e.g. vectorized)
and parallelized; the rest is up to the programmer to care about.
- *Queue-based* -- kernels are enqueued dynamically with no guarantee
where the end is or what is called next;
- *Graph-based* -- nearly all information is there, some compiler
magic can be done!
** What is G-API for?
*** Bring the value of graph model with OpenCV where it makes sense:
- *Memory consumption* can be reduced dramatically;
- *Memory access* can be optimized to maximize cache reuse;
- *Parallelism* can be applied automatically where it is hard to do
it manually;
- It also becomes more efficient when working with graphs;
- *Heterogeneity* gets extra benefits like:
- Avoiding unnecessary data transfers;
- Shadowing transfer costs with parallel host co-execution;
- Improving system throughput with frame-level pipelining.
* Programming with G-API
** G-API Basics
*** G-API Concepts
- *Graphs* are built by applying /operations/ to /data objects/;
- API itself has no "graphs", it is expression-based instead;
- *Data objects* do not hold actual data, only capture /dependencies/;
- *Operations* consume and produce data objects.
- A graph is defined by specifying its /boundaries/ with data objects:
- What data objects are /inputs/ to the graph?
- What are its /outputs/?
** The code is worth a thousand words
:PROPERTIES:
:BEAMER_opt: shrink=42
:END:
#+BEGIN_SRC C++
#include <opencv2/gapi.hpp> // G-API framework header
#include <opencv2/gapi/imgproc.hpp> // cv::gapi::blur()
#include <opencv2/highgui.hpp> // cv::imread/imwrite
int main(int argc, char *argv[]) {
if (argc < 3) return 1;
cv::GMat in; // Express the graph:
cv::GMat out = cv::gapi::blur(in, cv::Size(3,3)); // `out` is a result of `blur` of `in`
cv::Mat in_mat = cv::imread(argv[1]); // Get the real data
cv::Mat out_mat; // Output buffer (may be empty)
cv::GComputation(cv::GIn(in), cv::GOut(out)) // Declare a graph from `in` to `out`
.apply(cv::gin(in_mat), cv::gout(out_mat)); // ...and run it immediately
cv::imwrite(argv[2], out_mat); // Save the result
return 0;
}
#+END_SRC
** The code is worth a thousand words
:PROPERTIES:
:BEAMER_opt: shrink=42
:END:
*** Traditional OpenCV :B_block:BMCOL:
:PROPERTIES:
:BEAMER_env: block
:BEAMER_col: 0.45
:END:
#+BEGIN_SRC C++
#include <opencv2/core.hpp>
#include <opencv2/imgproc.hpp>
#include <opencv2/highgui.hpp>
int main(int argc, char *argv[]) {
using namespace cv;
if (argc != 3) return 1;
Mat in_mat = imread(argv[1]);
Mat gx, gy;
Sobel(in_mat, gx, CV_32F, 1, 0);
Sobel(in_mat, gy, CV_32F, 0, 1);
Mat mag, out_mat;
sqrt(gx.mul(gx) + gy.mul(gy), mag);
mag.convertTo(out_mat, CV_8U);
imwrite(argv[2], out_mat);
return 0;
}
#+END_SRC
*** OpenCV G-API :B_block:BMCOL:
:PROPERTIES:
:BEAMER_env: block
:BEAMER_col: 0.5
:END:
#+BEGIN_SRC C++
#include <opencv2/gapi.hpp>
#include <opencv2/gapi/core.hpp>
#include <opencv2/gapi/imgproc.hpp>
#include <opencv2/highgui.hpp>
int main(int argc, char *argv[]) {
using namespace cv;
if (argc != 3) return 1;
GMat in;
GMat gx = gapi::Sobel(in, CV_32F, 1, 0);
GMat gy = gapi::Sobel(in, CV_32F, 0, 1);
GMat mag = gapi::sqrt( gapi::mul(gx, gx)
+ gapi::mul(gy, gy));
GMat out = gapi::convertTo(mag, CV_8U);
GComputation sobel(GIn(in), GOut(out));
Mat in_mat = imread(argv[1]), out_mat;
sobel.apply(in_mat, out_mat);
imwrite(argv[2], out_mat);
return 0;
}
#+END_SRC
** The code is worth a thousand words (cont'd)
# FIXME: sections!!!
*** What have we just learned?
- G-API functions mimic their traditional OpenCV ancestors;
- No real data is required to construct a graph;
- Graph construction and graph execution are separate steps.
*** What else?
- Graph is first /expressed/ and then /captured/ in an object;
- Graph constructor defines /protocol/; user can pass vectors of
inputs/outputs like
#+BEGIN_SRC C++
cv::GComputation(cv::GIn(...), cv::GOut(...))
#+END_SRC
- Calls to ~.apply()~ must conform to graph's protocol
** On data objects
A graph's *protocol* defines what arguments a computation was defined on
(both inputs and outputs), and what the *shapes* (or types) of
those arguments are:
| *Shape* | *Argument* | Size |
|--------------+------------------+-----------------------------|
| ~GMat~ | ~Mat~ | Static; defined during |
| | | graph compilation |
|--------------+------------------+-----------------------------|
| ~GScalar~ | ~Scalar~ | 4 x ~double~ |
|--------------+------------------+-----------------------------|
| ~GArray<T>~ | ~std::vector<T>~ | Dynamic; defined in runtime |
|--------------+------------------+-----------------------------|
| ~GOpaque<T>~ | ~T~ | Static, ~sizeof(T)~ |
~GScalar~ may be value-initialized at construction time to allow
expressions like ~GMat a = 2*(b + 1)~.
** On operations and kernels
:PROPERTIES:
:BEAMER_opt: shrink=22
:END:
*** :B_block:BMCOL:
:PROPERTIES:
:BEAMER_env: block
:BEAMER_col: 0.45
:END:
- Graphs are built with *Operations* over virtual *Data*;
- *Operations* define interfaces (literally);
- *Kernels* are implementations to *Operations* (like in OOP);
- An *Operation* is platform-agnostic, a *kernel* is not;
- *Kernels* are implemented for *Backends*, the latter provide
APIs to write kernels;
- Users can /add/ their *own* operations and kernels,
and also /redefine/ "standard" kernels their *own* way.
*** :B_block:BMCOL:
:PROPERTIES:
:BEAMER_env: block
:BEAMER_col: 0.45
:END:
#+BEGIN_SRC dot :file "000-ops-kernels.eps" :cmdline "-Kdot -Teps"
digraph G {
node [shape=box];
rankdir=BT;
Gr [label="Graph"];
Op [label="Operation\nA"];
{rank=same
Impl1 [label="Kernel\nA:2"];
Impl2 [label="Kernel\nA:1"];
}
Op -> Gr [dir=back, label="'consists of'"];
Impl1 -> Op [];
Impl2 -> Op [label="'is implemented by'"];
node [shape=note,style=dashed];
{rank=same
Op;
CommentOp [label="Abstract:\ndeclared via\nG_API_OP()"];
}
{rank=same
Comment1 [label="Platform:\ndefined with\nOpenCL backend"];
Comment2 [label="Platform:\ndefined with\nOpenCV backend"];
}
CommentOp -> Op [constraint=false, style=dashed, arrowhead=none];
Comment1 -> Impl1 [style=dashed, arrowhead=none];
Comment2 -> Impl2 [style=dashed, arrowhead=none];
}
#+END_SRC
** On operations and kernels (cont'd)
*** Defining an operation
- A type name (every operation is a C++ type);
- Operation signature (similar to ~std::function<>~);
- Operation identifier (a string);
- Metadata callback -- describes what the output value format(s) is,
given the inputs and arguments.
- Use ~OpType::on(...)~ to call the new kernel ~OpType~ when constructing graphs.
#+LaTeX: {\footnotesize
#+BEGIN_SRC C++
G_API_OP(GSqrt,<GMat(GMat)>,"org.opencv.core.math.sqrt") {
static GMatDesc outMeta(GMatDesc in) { return in; }
};
#+END_SRC
#+LaTeX: }
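- A quick usage sketch (the graph name is made up; not from the original deck):
#+LaTeX: {\footnotesize
#+BEGIN_SRC C++
cv::GMat in;
cv::GMat out = GSqrt::on(in);          // call the operation declared above
cv::GComputation sqrt_graph(in, out);  // 1-input/1-output shortcut ctor
#+END_SRC
#+LaTeX: }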
** On operations and kernels (cont'd)
*** ~GSqrt~ vs. ~cv::gapi::sqrt()~
- How does a *type* relate to the *functions* from the example?
- These functions are just wrappers over ~::on~:
#+LaTeX: {\scriptsize
#+BEGIN_SRC C++
G_API_OP(GSqrt,<GMat(GMat)>,"org.opencv.core.math.sqrt") {
static GMatDesc outMeta(GMatDesc in) { return in; }
};
GMat gapi::sqrt(const GMat& src) { return GSqrt::on(src); }
#+END_SRC
#+LaTeX: }
- Why -- Doxygen, default parameters, 1:n mapping:
#+LaTeX: {\scriptsize
#+BEGIN_SRC C++
cv::GMat custom::unsharpMask(const cv::GMat &src,
const int sigma,
const float strength) {
cv::GMat blurred = cv::gapi::medianBlur(src, sigma);
cv::GMat laplacian = cv::gapi::Laplacian(blurred, CV_8U);
return (src - (laplacian * strength));
}
#+END_SRC
#+LaTeX: }
** On operations and kernels (cont'd)
*** Implementing an operation
- Depends on the backend and its API;
- Common part for all backends: refer to operation being implemented
using its /type/.
*** OpenCV backend
- OpenCV backend is the default one: OpenCV kernel is a wrapped OpenCV
function:
#+LaTeX: {\footnotesize
#+BEGIN_SRC C++
GAPI_OCV_KERNEL(GCPUSqrt, cv::gapi::core::GSqrt) {
static void run(const cv::Mat& in, cv::Mat &out) {
cv::sqrt(in, out);
}
};
#+END_SRC
#+LaTeX: }
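- A sketch of handing such a kernel to a graph: bundle it into a package via
~cv::gapi::kernels<>()~ and pass the package through compile arguments
(the graph and data names below are made up):
#+LaTeX: {\footnotesize
#+BEGIN_SRC C++
cv::GMat in;
cv::GComputation sq(in, cv::gapi::sqrt(in));
auto pkg = cv::gapi::kernels<GCPUSqrt>();          // package with our kernel
cv::Mat in_mat = cv::Mat::ones(4, 4, CV_32F), out_mat;
sq.apply(in_mat, out_mat, cv::compile_args(pkg));  // GCPUSqrt handles sqrt here
#+END_SRC
#+LaTeX: }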
** Operations and Kernels (cont'd)
# FIXME!!!
*** Fluid backend
- Fluid backend operates with row-by-row kernels and schedules its
execution to optimize data locality:
#+LaTeX: {\footnotesize
#+BEGIN_SRC C++
GAPI_FLUID_KERNEL(GFluidSqrt, cv::gapi::core::GSqrt, false) {
static const int Window = 1;
static void run(const View &in, Buffer &out) {
hal::sqrt32f(in .InLine <float>(0),
out.OutLine<float>(0),
out.length());
}
};
#+END_SRC
#+LaTeX: }
- Note ~run~ changes signature but still is derived from the operation
signature.
** Operations and Kernels (cont'd)
*** Specifying which kernels to use
- Graph execution model is defined by kernels which are available/used;
- Kernels can be specified via the graph compilation arguments:
#+LaTeX: {\footnotesize
#+BEGIN_SRC C++
#include <opencv2/gapi/fluid/core.hpp>
#include <opencv2/gapi/fluid/imgproc.hpp>
...
auto pkg = cv::gapi::combine(cv::gapi::core::fluid::kernels(),
cv::gapi::imgproc::fluid::kernels());
sobel.apply(in_mat, out_mat, cv::compile_args(pkg));
#+END_SRC
#+LaTeX: }
- Users can combine kernels of different backends and G-API will partition
the execution among those automatically.
** Heterogeneity in G-API
:PROPERTIES:
:BEAMER_opt: shrink=35
:END:
*** Automatic subgraph partitioning in G-API
*** :B_block:BMCOL:
:PROPERTIES:
:BEAMER_env: block
:BEAMER_col: 0.18
:END:
#+BEGIN_SRC dot :file "010-hetero-init.eps" :cmdline "-Kdot -Teps"
digraph G {
rankdir=TB;
ranksep=0.3;
node [shape=box margin=0 height=0.25];
A; B; C;
node [shape=ellipse];
GMat0;
GMat1;
GMat2;
GMat3;
GMat0 -> A -> GMat1 -> B -> GMat2;
GMat2 -> C;
GMat0 -> C -> GMat3
subgraph cluster {style=invis; A; GMat1; B; GMat2; C};
}
#+END_SRC
The initial graph: operations are not resolved yet.
*** :B_block:BMCOL:
:PROPERTIES:
:BEAMER_env: block
:BEAMER_col: 0.18
:END:
#+BEGIN_SRC dot :file "011-hetero-homo.eps" :cmdline "-Kdot -Teps"
digraph G {
rankdir=TB;
ranksep=0.3;
node [shape=box margin=0 height=0.25];
A; B; C;
node [shape=ellipse];
GMat0;
GMat1;
GMat2;
GMat3;
GMat0 -> A -> GMat1 -> B -> GMat2;
GMat2 -> C;
GMat0 -> C -> GMat3
subgraph cluster {style=filled;color=azure2; A; GMat1; B; GMat2; C};
}
#+END_SRC
All operations are handled by the same backend.
*** :B_block:BMCOL:
:PROPERTIES:
:BEAMER_env: block
:BEAMER_col: 0.18
:END:
#+BEGIN_SRC dot :file "012-hetero-a.eps" :cmdline "-Kdot -Teps"
digraph G {
rankdir=TB;
ranksep=0.3;
node [shape=box margin=0 height=0.25];
A; B; C;
node [shape=ellipse];
GMat0;
GMat1;
GMat2;
GMat3;
GMat0 -> A -> GMat1 -> B -> GMat2;
GMat2 -> C;
GMat0 -> C -> GMat3
subgraph cluster_1 {style=filled;color=azure2; A; GMat1; B; }
subgraph cluster_2 {style=filled;color=ivory2; C};
}
#+END_SRC
~A~ & ~B~ are of backend ~1~, ~C~ is of backend ~2~.
*** :B_block:BMCOL:
:PROPERTIES:
:BEAMER_env: block
:BEAMER_col: 0.18
:END:
#+BEGIN_SRC dot :file "013-hetero-b.eps" :cmdline "-Kdot -Teps"
digraph G {
rankdir=TB;
ranksep=0.3;
node [shape=box margin=0 height=0.25];
A; B; C;
node [shape=ellipse];
GMat0;
GMat1;
GMat2;
GMat3;
GMat0 -> A -> GMat1 -> B -> GMat2;
GMat2 -> C;
GMat0 -> C -> GMat3
subgraph cluster_1 {style=filled;color=azure2; A};
subgraph cluster_2 {style=filled;color=ivory2; B};
subgraph cluster_3 {style=filled;color=azure2; C};
}
#+END_SRC
~A~ & ~C~ are of backend ~1~, ~B~ is of backend ~2~.
** Heterogeneity in G-API
*** Heterogeneity summary
- G-API automatically partitions its graph into subgraphs (called "islands")
based on the available kernels;
- Adjacent kernels taken from the same backend are "fused" into the same
"island";
- G-API implements a two-level execution model:
- Islands are executed at the top level by a G-API's *Executor*;
- Island internals are run at the bottom level by its *Backend*;
- G-API fully delegates the low-level execution and memory management to backends.
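- For illustration, a sketch of mixing packages from two backends (reusing the
~sobel~ graph and data names from the earlier example); the islands are then
formed automatically:
#+LaTeX: {\footnotesize
#+BEGIN_SRC C++
// Fluid for the imgproc kernels, the OpenCV (CPU) backend for the core ones
auto pkg = cv::gapi::combine(cv::gapi::imgproc::fluid::kernels(),
                             cv::gapi::core::cpu::kernels());
sobel.apply(in_mat, out_mat, cv::compile_args(pkg));
#+END_SRC
#+LaTeX: }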
* Inference and Streaming
** Inference with G-API
*** In-graph inference example
- Starting with OpenCV 4.2 (2019), G-API allows integrating ~infer~
operations into the graph:
#+LaTeX: {\scriptsize
#+BEGIN_SRC C++
G_API_NET(ObjDetect, <cv::GMat(cv::GMat)>, "pdf.example.od");
cv::GMat bgr;
cv::GMat blob = cv::gapi::infer<ObjDetect>(bgr);
cv::GOpaque<cv::Size> size = cv::gapi::streaming::size(bgr);
cv::GArray<cv::Rect> objs = cv::gapi::streaming::parseSSD(blob, size);
cv::GComputation pipeline(cv::GIn(bgr), cv::GOut(objs));
#+END_SRC
#+LaTeX: }
- Starting with OpenCV 4.5 (2020), G-API will provide more streaming-
and NN-oriented operations out of the box.
** Inference with G-API
*** What is the difference?
- ~ObjDetect~ is not an operation, ~cv::gapi::infer<T>~ is;
- ~cv::gapi::infer<T>~ is a *generic* operation, where ~T=ObjDetect~ describes
the calling convention:
- How many inputs the network consumes,
- How many outputs the network produces.
- Inference data types are ~GMat~ only:
- Representing an image, then preprocessed automatically;
- Representing a blob (n-dimensional ~Mat~), then passed as-is.
- Inference *backends* only need to implement a single generic operation ~infer~.
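- A sketch of the calling convention for a two-output network (the ~AgeGender~
case discussed on a later slide; the tag string here is made up):
#+LaTeX: {\scriptsize
#+BEGIN_SRC C++
// Two outputs are expressed as a tuple in the network signature
G_API_NET(AgeGender, <std::tuple<cv::GMat, cv::GMat>(cv::GMat)>,
          "example.age-gender");
cv::GMat in;
cv::GMat age, gender;
std::tie(age, gender) = cv::gapi::infer<AgeGender>(in);
cv::GComputation pipeline(cv::GIn(in), cv::GOut(age, gender));
#+END_SRC
#+LaTeX: }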
** Inference with G-API
*** But how does it run?
- Since ~infer~ is an *Operation*, backends may provide *Kernels* implementing it;
- The only publicly available inference backend now is *OpenVINO™*:
- Brings its ~infer~ kernel atop the Inference Engine;
- NN model data is passed through G-API compile arguments (like kernels);
- Every NN backend provides its own structure to configure the network (like
a kernel API).
** Inference with G-API
*** Passing OpenVINO™ parameters to G-API
- ~ObjDetect~ example:
#+LaTeX: {\footnotesize
#+BEGIN_SRC C++
auto face_net = cv::gapi::ie::Params<ObjDetect> {
face_xml_path, // path to the topology IR
face_bin_path, // path to the topology weights
face_device_string, // OpenVINO plugin (device) string
};
auto networks = cv::gapi::networks(face_net);
pipeline.compile(.., cv::compile_args(..., networks));
#+END_SRC
#+LaTeX: }
- ~AgeGender~ requires binding Op's outputs to NN layers:
#+LaTeX: {\footnotesize
#+BEGIN_SRC C++
auto age_net = cv::gapi::ie::Params<AgeGender> {
...
}.cfgOutputLayers({"age_conv3", "prob"}); // array<string,2> !
#+END_SRC
#+LaTeX: }
** Streaming with G-API
#+BEGIN_SRC dot :file 020-fd-demo.eps :cmdline "-Kdot -Teps"
digraph {
rankdir=LR;
node [shape=box];
cap [label=Capture];
dec [label=Decode];
res [label=Resize];
cnn [label=Infer];
vis [label=Visualize];
cap -> dec;
dec -> res;
res -> cnn;
cnn -> vis;
}
#+END_SRC
Anatomy of a regular video analytics application
** Streaming with G-API
#+BEGIN_SRC dot :file 021-fd-serial.eps :cmdline "-Kdot -Teps"
digraph {
node [shape=box margin=0 width=0.3 height=0.4]
nodesep=0.2;
rankdir=LR;
subgraph cluster0 {
colorscheme=blues9
pp [label="..." shape=plaintext];
v0 [label=V];
label="Frame N-1";
color=7;
}
subgraph cluster1 {
colorscheme=blues9
c1 [label=C];
d1 [label=D];
r1 [label=R];
i1 [label=I];
v1 [label=V];
label="Frame N";
color=6;
}
subgraph cluster2 {
colorscheme=blues9
c2 [label=C];
nn [label="..." shape=plaintext];
label="Frame N+1";
color=5;
}
c1 -> d1 -> r1 -> i1 -> v1;
pp-> v0;
v0 -> c1 [style=invis];
v1 -> c2 [style=invis];
c2 -> nn;
}
#+END_SRC
Serial execution of the sample video analytics application
** Streaming with G-API
:PROPERTIES:
:BEAMER_opt: shrink
:END:
#+BEGIN_SRC dot :file 022-fd-pipelined.eps :cmdline "-Kdot -Teps"
digraph {
nodesep=0.2;
ranksep=0.2;
node [margin=0 width=0.4 height=0.2];
node [shape=plaintext]
Camera [label="Camera:"];
GPU [label="GPU:"];
FPGA [label="FPGA:"];
CPU [label="CPU:"];
Time [label="Time:"];
t6 [label="T6"];
t7 [label="T7"];
t8 [label="T8"];
t9 [label="T9"];
t10 [label="T10"];
tnn [label="..."];
node [shape=box margin=0 width=0.4 height=0.4 colorscheme=blues9]
node [color=9] V3;
node [color=8] F4; V4;
node [color=7] DR5; F5; V5;
node [color=6] C6; DR6; F6; V6;
node [color=5] C7; DR7; F7; V7;
node [color=4] C8; DR8; F8;
node [color=3] C9; DR9;
node [color=2] C10;
{rank=same; rankdir=LR; Camera C6 C7 C8 C9 C10}
Camera -> C6 -> C7 -> C8 -> C9 -> C10 [style=invis];
{rank=same; rankdir=LR; GPU DR5 DR6 DR7 DR8 DR9}
GPU -> DR5 -> DR6 -> DR7 -> DR8 -> DR9 [style=invis];
C6 -> DR5 [style=invis];
C6 -> DR6 [constraint=false];
C7 -> DR7 [constraint=false];
C8 -> DR8 [constraint=false];
C9 -> DR9 [constraint=false];
{rank=same; rankdir=LR; FPGA F4 F5 F6 F7 F8}
FPGA -> F4 -> F5 -> F6 -> F7 -> F8 [style=invis];
DR5 -> F4 [style=invis];
DR5 -> F5 [constraint=false];
DR6 -> F6 [constraint=false];
DR7 -> F7 [constraint=false];
DR8 -> F8 [constraint=false];
{rank=same; rankdir=LR; CPU V3 V4 V5 V6 V7}
CPU -> V3 -> V4 -> V5 -> V6 -> V7 [style=invis];
F4 -> V3 [style=invis];
F4 -> V4 [constraint=false];
F5 -> V5 [constraint=false];
F6 -> V6 [constraint=false];
F7 -> V7 [constraint=false];
{rank=same; rankdir=LR; Time t6 t7 t8 t9 t10 tnn}
Time -> t6 -> t7 -> t8 -> t9 -> t10 -> tnn [style=invis];
CPU -> Time [style=invis];
V3 -> t6 [style=invis];
V4 -> t7 [style=invis];
V5 -> t8 [style=invis];
V6 -> t9 [style=invis];
V7 -> t10 [style=invis];
}
#+END_SRC
Pipelined execution for the video analytics application
** Streaming with G-API: Example
**** Serial mode (4.0) :B_block:BMCOL:
:PROPERTIES:
:BEAMER_env: block
:BEAMER_col: 0.45
:END:
#+LaTeX: {\tiny
#+BEGIN_SRC C++
pipeline = cv::GComputation(...);
cv::VideoCapture cap(input);
cv::Mat in_frame;
std::vector<cv::Rect> out_faces;
while (cap.read(in_frame)) {
pipeline.apply(cv::gin(in_frame),
cv::gout(out_faces),
cv::compile_args(kernels,
networks));
// Process results
...
}
#+END_SRC
#+LaTeX: }
**** Streaming mode (since 4.2) :B_block:BMCOL:
:PROPERTIES:
:BEAMER_env: block
:BEAMER_col: 0.45
:END:
#+LaTeX: {\tiny
#+BEGIN_SRC C++
pipeline = cv::GComputation(...);
auto in_src = cv::gapi::wip::make_src
<cv::gapi::wip::GCaptureSource>(input);
auto cc = pipeline.compileStreaming
(cv::compile_args(kernels, networks));
cc.setSource(cv::gin(in_src));
cc.start();
std::vector<cv::Rect> out_faces;
while (cc.pull(cv::gout(out_faces))) {
// Process results
...
}
#+END_SRC
#+LaTeX: }
**** More information
#+LaTeX: {\footnotesize
https://opencv.org/hybrid-cv-dl-pipelines-with-opencv-4-4-g-api/
#+LaTeX: }
* Latest features
** Latest features
*** Python API
- Initial Python3 binding is available now in ~master~ (future 4.5);
- Only basic CV functionality is supported (~core~ & ~imgproc~ namespaces,
selecting backends);
- Adding more programmability, inference, and streaming is next.
** Latest features
*** Python API
#+LaTeX: {\footnotesize
#+BEGIN_SRC Python
import numpy as np
import cv2 as cv
sz = (1280, 720)
in1 = np.random.randint(0, 100, sz).astype(np.uint8)
in2 = np.random.randint(0, 100, sz).astype(np.uint8)
g_in1 = cv.GMat()
g_in2 = cv.GMat()
g_out = cv.gapi.add(g_in1, g_in2)
gr = cv.GComputation(g_in1, g_in2, g_out)
pkg = cv.gapi.core.fluid.kernels()
out = gr.apply(in1, in2, args=cv.compile_args(pkg))
#+END_SRC
#+LaTeX: }
* Understanding the "G-Effect"
** Understanding the "G-Effect"
*** What is "G-Effect"?
- G-API is not only an API, but also an /implementation/;
- i.e. it does some work already!
- We call "G-Effect" any measurable improvement which G-API demonstrates
against traditional methods;
- So far the list is:
- Memory consumption;
- Performance;
- Programmer efforts.
Note: in the following slides, all measurements are taken on
Intel\textregistered{} Core\texttrademark-i5 6600 CPU.
** Understanding the "G-Effect"
# FIXME
*** Memory consumption: Sobel Edge Detector
- G-API/Fluid backend is designed to minimize footprint:
#+LaTeX: {\footnotesize
| Input | OpenCV | G-API/Fluid | Factor |
| | MiB | MiB | Times |
|-------------+--------+-------------+--------|
| 512 x 512 | 17.33 | 0.59 | 28.9x |
| 640 x 480 | 20.29 | 0.62 | 32.8x |
| 1280 x 720 | 60.73 | 0.72 | 83.9x |
| 1920 x 1080 | 136.53 | 0.83 | 164.7x |
| 3840 x 2160 | 545.88 | 1.22 | 447.4x |
#+LaTeX: }
- The detector itself can be written manually in two ~for~
loops, but G-API covers cases more complex than that;
- OpenCV code requires changes to shrink footprint.
** Understanding the "G-Effect"
*** Performance: Sobel Edge Detector
- G-API/Fluid backend also optimizes cache reuse:
#+LaTeX: {\footnotesize
| Input | OpenCV | G-API/Fluid | Factor |
| | ms | ms | Times |
|-------------+--------+-------------+--------|
| 320 x 240 | 1.16 | 0.53 | 2.17x |
| 640 x 480 | 5.66 | 1.89 | 2.99x |
| 1280 x 720 | 17.24 | 5.26 | 3.28x |
| 1920 x 1080 | 39.04 | 12.29 | 3.18x |
| 3840 x 2160 | 219.57 | 51.22 | 4.29x |
#+LaTeX: }
- The more data is processed, the bigger "G-Effect" is.
** Understanding the "G-Effect"
*** Relative speed-up based on cache efficiency
#+BEGIN_LATEX
\begin{figure}
\begin{tikzpicture}
\begin{axis}[
xlabel={Image size},
ylabel={Relative speed-up},
nodes near coords,
width=0.8\textwidth,
xtick=data,
xticklabels={QVGA, VGA, HD, FHD, UHD},
height=4.5cm,
]
\addplot plot coordinates {(1, 1.0) (2, 1.38) (3, 1.51) (4, 1.46) (5, 1.97)};
\end{axis}
\end{tikzpicture}
\end{figure}
#+END_LATEX
The higher the resolution, the higher the relative speed-up (with the
speed-up on QVGA taken as 1.0).
* Resources on G-API
** Resources on G-API
:PROPERTIES:
:BEAMER_opt: shrink
:END:
*** Repository
- https://github.com/opencv/opencv (see ~modules/gapi~)
*** Article
- https://opencv.org/hybrid-cv-dl-pipelines-with-opencv-4-4-g-api/
*** Documentation
- https://docs.opencv.org/4.4.0/d0/d1e/gapi.html
*** Tutorials
- https://docs.opencv.org/4.4.0/df/d7e/tutorial_table_of_content_gapi.html
* Thank you!

View File

@ -1,25 +0,0 @@
#!/usr/bin/env bash
set -e
MTHEME_VER=2fa6084b9d34fec9d2d5470eb9a17d0bf712b6c8
MTHEME_DIR=mtheme.sty
function make_sty {
if [ -d "$MTHEME_DIR" ]; then rm -rf "$MTHEME_DIR"; fi
mkdir "$MTHEME_DIR"
# Download template from Github
tmp_dir=$(mktemp -d)
wget -P "$tmp_dir" -c https://github.com/matze/mtheme/archive/${MTHEME_VER}.tar.gz
pushd "$tmp_dir"
tar -xzvf "$MTHEME_VER.tar.gz"
popd
make -C "$tmp_dir"/mtheme-"$MTHEME_VER"
cp -v "$tmp_dir"/mtheme-"$MTHEME_VER"/*.sty "$MTHEME_DIR"
rm -r "$tmp_dir"
# Put our own .gitignore to ignore this directory completely
echo "*" > "$MTHEME_DIR/.gitignore"
}
make_sty

View File

@ -1,181 +0,0 @@
%!PS-Adobe-3.0 EPSF-3.0
%%Creator: cairo 1.14.6 (http://cairographics.org)
%%CreationDate: Wed Dec 12 17:03:17 2018
%%Pages: 1
%%DocumentData: Clean7Bit
%%LanguageLevel: 2
%%BoundingBox: 0 -1 598 739
%%EndComments
%%BeginProlog
save
50 dict begin
/q { gsave } bind def
/Q { grestore } bind def
/cm { 6 array astore concat } bind def
/w { setlinewidth } bind def
/J { setlinecap } bind def
/j { setlinejoin } bind def
/M { setmiterlimit } bind def
/d { setdash } bind def
/m { moveto } bind def
/l { lineto } bind def
/c { curveto } bind def
/h { closepath } bind def
/re { exch dup neg 3 1 roll 5 3 roll moveto 0 rlineto
0 exch rlineto 0 rlineto closepath } bind def
/S { stroke } bind def
/f { fill } bind def
/f* { eofill } bind def
/n { newpath } bind def
/W { clip } bind def
/W* { eoclip } bind def
/BT { } bind def
/ET { } bind def
/pdfmark where { pop globaldict /?pdfmark /exec load put }
{ globaldict begin /?pdfmark /pop load def /pdfmark
/cleartomark load def end } ifelse
/BDC { mark 3 1 roll /BDC pdfmark } bind def
/EMC { mark /EMC pdfmark } bind def
/cairo_store_point { /cairo_point_y exch def /cairo_point_x exch def } def
/Tj { show currentpoint cairo_store_point } bind def
/TJ {
{
dup
type /stringtype eq
{ show } { -0.001 mul 0 cairo_font_matrix dtransform rmoveto } ifelse
} forall
currentpoint cairo_store_point
} bind def
/cairo_selectfont { cairo_font_matrix aload pop pop pop 0 0 6 array astore
cairo_font exch selectfont cairo_point_x cairo_point_y moveto } bind def
/Tf { pop /cairo_font exch def /cairo_font_matrix where
{ pop cairo_selectfont } if } bind def
/Td { matrix translate cairo_font_matrix matrix concatmatrix dup
/cairo_font_matrix exch def dup 4 get exch 5 get cairo_store_point
/cairo_font where { pop cairo_selectfont } if } bind def
/Tm { 2 copy 8 2 roll 6 array astore /cairo_font_matrix exch def
cairo_store_point /cairo_font where { pop cairo_selectfont } if } bind def
/g { setgray } bind def
/rg { setrgbcolor } bind def
/d1 { setcachedevice } bind def
%%EndProlog
%%BeginSetup
%%EndSetup
%%Page: 1 1
%%BeginPageSetup
%%PageBoundingBox: 0 -1 598 739
%%EndPageSetup
q 0 -1 598 740 rectclip q
1 0.00392157 0.00392157 rg
225.648 478.363 m 171.051 509.887 144.43 574.156 160.746 635.051 c 177.066
695.945 232.254 738.277 295.301 738.277 c 358.348 738.277 413.535 695.945
429.855 635.051 c 446.172 574.156 419.551 509.887 364.949 478.363 c 323.008
551.008 l 344.73 563.547 355.324 589.117 348.832 613.34 c 342.34 637.566
320.383 654.41 295.301 654.41 c 270.219 654.41 248.262 637.566 241.77 613.34
c 235.277 589.117 245.871 563.547 267.59 551.008 c h
225.648 478.363 m f
0.00392157 0.00392157 1 rg
523.949 444.637 m 578.551 413.113 605.172 348.844 588.855 287.949 c 572.535
227.055 517.348 184.723 454.301 184.723 c 391.254 184.723 336.066 227.055
319.746 287.949 c 303.43 348.844 330.051 413.113 384.648 444.637 c 426.59
371.992 l 404.871 359.453 394.277 333.883 400.77 309.66 c 407.262 285.434
429.219 268.59 454.301 268.59 c 479.383 268.59 501.34 285.434 507.832 309.66
c 514.324 333.883 503.73 359.453 482.008 371.992 c h
523.949 444.637 m f
0.00392157 1 0.00392157 rg
278.602 324 m 278.602 260.953 236.254 205.762 175.359 189.449 c 114.461
173.133 50.207 199.762 18.684 254.363 c -12.84 308.961 -3.773 377.922 40.805
422.504 c 85.383 467.082 154.352 476.164 208.949 444.637 c 167.008 371.992
l 145.289 384.535 117.852 380.922 100.117 363.188 c 82.383 345.453 78.773
318.016 91.316 296.297 c 103.855 274.574 129.418 263.98 153.645 270.473
c 177.871 276.961 194.719 298.918 194.719 324 c h
278.602 324 m f
0.0196078 g
39.781 151.301 m 51.57 152.359 63.492 152.352 75.223 150.672 c 82.449 149.391
90.121 147.52 95.551 142.25 c 101.242 135.898 102.641 127.078 103.891 118.949
c 105.941 102.078 105.699 84.969 103.891 68.09 c 102.68 59.852 101.492
50.949 96.09 44.25 c 90.199 38.27 81.5 36.57 73.52 35.309 c 61.742 33.84
49.789 33.5 37.961 34.68 c 29.949 35.5 21.59 36.91 14.77 41.48 c 10.359
44.281 7.992 49.219 6.379 54.012 c 3.152 63.988 2.742 74.59 2.301 84.988
c 2.25 98.73 2.512 112.609 5.191 126.129 c 6.641 132.441 8.402 139.379
13.73 143.59 c 21.242 149.039 30.789 150.359 39.781 151.301 c h
41.73 132.469 m 51.723 133.27 61.922 133.512 71.801 131.57 c 75.629 130.801
80.152 128.941 80.871 124.578 c 83.871 112.309 83.172 99.531 83.289 86.988
c 82.922 78.07 83.129 68.852 80.141 60.309 c 77.531 54.699 70.422 54.238
65.062 53.422 c 54.312 52.809 43.152 52.27 32.723 55.461 c 27.91 56.73
26.391 61.891 25.652 66.219 c 23.652 79.051 24.301 92.102 24.551 105.031
c 25.082 112.281 24.992 119.801 27.602 126.691 c 30.59 131.309 36.77 131.719
41.73 132.469 c h
41.73 132.469 m f*
147.07 112.219 m 154.23 116.77 163.121 117.512 171.379 116.762 c 179.09
116.102 187.652 113.48 191.781 106.379 c 196.711 97.469 196.992 86.941
197.332 77 c 197.109 66.781 196.922 56.109 192.699 46.609 c 190.289 40.84
184.75 37.059 178.82 35.57 c 169.742 33.34 159.762 33.102 151.012 36.719
c 146.281 38.57 143.012 42.59 140.301 46.711 c 140.301 0 l 120.301 0 l
120.312 38.66 120.281 77.328 120.312 115.988 c 126.781 116.02 133.25 116.02
139.711 115.988 c 139.492 112.012 139.27 108.039 139.16 104.051 c 141.562
106.98 143.789 110.199 147.07 112.219 c h
153.582 101.781 m 159.18 102.211 165.102 102.328 170.34 100.02 c 173.66
98.59 175.41 95.078 176 91.68 c 177.742 82.91 177.52 73.852 176.902 64.969
c 176.281 59.609 175.422 52.672 169.52 50.59 c 162.699 48.359 154.922 48.219
148.18 50.828 c 141.91 53.469 141.18 61.059 140.562 66.949 c 140.191 75.988
139.742 85.289 142.289 94.07 c 143.641 99.051 148.82 101.41 153.582 101.781
c h
153.582 101.781 m f*
221.262 112.07 m 231.09 117.121 242.602 117.301 253.391 116.789 c 262.371
116.039 273.27 114.539 278.223 105.949 c 283.801 95.578 282.891 83.379
283.672 72 c 228.961 72 l 229.602 66.129 228.84 59.801 231.801 54.422 c
234.332 50.172 239.699 49.301 244.242 49.051 c 249.852 49.012 255.891 48.551
261.062 51.16 c 264.02 53.48 264.039 57.602 264.422 61 c 270.82 61.012
277.223 61.012 283.621 61 c 283.379 54.32 282.52 46.84 277.16 42.141 c 269.109
34.922 257.59 34.172 247.289 33.969 c 238.199 34.238 228.602 34.699 220.461
39.18 c 213.871 43.07 211.77 51.059 210.609 58.102 c 209.141 68.559 208.77
79.219 210.02 89.719 c 211.039 98.012 213.27 107.762 221.262 112.07 c h
232.949 99.34 m 238.41 102.66 245.172 101.988 251.301 101.898 c 255.102
101.488 259.73 101.27 262.199 97.91 c 264.723 93.762 264.27 88.68 264.289
84.02 c 252.52 84 240.762 83.969 229 84.031 c 229.18 89.211 228.77 95.531
232.949 99.34 c h
232.949 99.34 m f*
326.262 112.121 m 333.18 116.922 342.121 117.59 350.262 116.648 c 357.191
115.922 364.531 113.281 368.621 107.301 c 372.25 102.34 373.262 96.02 373.312
90.012 c 373.281 71.672 373.32 53.34 373.301 35 c 366.961 34.988 360.629
34.988 354.312 35 c 354.281 52.352 354.332 69.691 354.281 87.031 c 354.09
90.82 354.242 95.199 351.391 98.121 c 348.352 101.41 343.582 102.051 339.332
102.02 c 334.191 102.051 328.629 101.172 324.672 97.621 c 320.801 94.32
319.332 89 319.312 84.078 c 319.281 67.719 319.32 51.359 319.289 35.012
c 312.961 34.988 306.629 34.988 300.312 35 c 300.301 62 300.301 89 300.312
116 c 306.531 116.02 312.762 116.012 318.98 116 c 318.949 111.262 318.48
106.551 318.34 101.809 c 320.379 105.641 322.52 109.68 326.262 112.121
c h
326.262 112.121 m f*
407.691 147.602 m 418.172 151.121 429.34 151.621 440.301 152.012 c 450.922
151.961 462.02 151.859 471.941 147.578 c 476.98 145.48 480.473 140.879
482.172 135.801 c 484.941 128.211 485.02 119.988 485.082 112 c 477.77 112
470.461 111.98 463.16 112.012 c 463.039 117.629 463.473 123.93 459.992
128.711 c 456.473 132.309 450.973 132.301 446.301 132.852 c 436.801 133.031
426.91 133.641 417.812 130.359 c 414.531 129.32 412.832 126.039 412.172
122.879 c 410.301 114.398 410.289 105.648 410.301 97 c 410.41 85.441 410.23
73.711 412.699 62.34 c 413.352 58.18 417.18 55.621 421.02 54.699 c 429.902
52.488 439.172 52.809 448.242 53.352 c 452.973 53.969 458.73 54.281 461.699
58.621 c 464.871 63.801 464.34 70.172 464.172 75.988 c 471.551 76.02 478.922
76.012 486.301 75.988 c 486.211 66.801 486.051 57.309 482.711 48.609 c
480.992 44.059 477.441 40.199 472.84 38.461 c 463.812 34.84 453.91 34.609
444.332 34.031 c 433.223 33.84 421.973 34.109 411.109 36.699 c 404.742
38.359 397.781 41.281 394.832 47.609 c 391.062 55.98 390.371 65.289 389.402
74.301 c 388.59 86.199 388.07 98.121 388.359 110.039 c 388.93 119.691 389.812
129.859 395.02 138.27 c 397.789 142.949 402.652 145.879 407.691 147.602
c h
407.691 147.602 m f*
489.902 150.969 m 497.52 150.961 505.141 151.18 512.75 150.859 c 520.16
127.352 528.301 104.078 535.781 80.602 c 538.691 71.578 540.75 62.301 543.762
53.309 c 547.129 63.012 549.289 73.09 552.59 82.809 c 559.902 105.52 567.41
128.16 574.711 150.871 c 582.23 151.191 589.77 150.91 597.301 151.012 c
597.301 148.52 l 584.922 110.789 572.832 72.961 560.699 35.141 c 549.379
34.91 538.039 34.879 526.723 35.16 c 514.66 73.828 502.02 112.32 489.902
150.969 c h
489.902 150.969 m f*
Q Q
showpage
%%Trailer
end restore
%%EOF

View File

@ -1,42 +0,0 @@
// This file is part of OpenCV project.
// It is subject to the license terms in the LICENSE file found in the top-level directory
// of this distribution and at http://opencv.org/license.html.
//
// Copyright (C) 2018-2021 Intel Corporation
#ifndef OPENCV_GAPI_HPP
#define OPENCV_GAPI_HPP
#include <memory>
/** \defgroup gapi_ref G-API framework
@{
@defgroup gapi_main_classes G-API Main Classes
@defgroup gapi_data_objects G-API Data Types
@{
@defgroup gapi_meta_args G-API Metadata Descriptors
@}
@defgroup gapi_std_backends G-API Standard Backends
@defgroup gapi_compile_args G-API Graph Compilation Arguments
@defgroup gapi_serialization G-API Serialization functionality
@}
*/
#include <opencv2/gapi/gmat.hpp>
#include <opencv2/gapi/garray.hpp>
#include <opencv2/gapi/gscalar.hpp>
#include <opencv2/gapi/gopaque.hpp>
#include <opencv2/gapi/gframe.hpp>
#include <opencv2/gapi/gcomputation.hpp>
#include <opencv2/gapi/gcompiled.hpp>
#include <opencv2/gapi/gtyped.hpp>
#include <opencv2/gapi/gkernel.hpp>
#include <opencv2/gapi/operators.hpp>
// Include these files here to avoid cyclic dependency between
// Desync & GKernel & GComputation & GStreamingCompiled.
#include <opencv2/gapi/streaming/desync.hpp>
#include <opencv2/gapi/streaming/format.hpp>
#endif // OPENCV_GAPI_HPP

File diff suppressed because it is too large Load Diff

View File

@ -1,27 +0,0 @@
// This file is part of OpenCV project.
// It is subject to the license terms in the LICENSE file found in the top-level directory
// of this distribution and at http://opencv.org/license.html.
//
// Copyright (C) 2018 Intel Corporation
#ifndef OPENCV_GAPI_CPU_CORE_API_HPP
#define OPENCV_GAPI_CPU_CORE_API_HPP
#include <opencv2/gapi/gkernel.hpp> // GKernelPackage
#include <opencv2/gapi/own/exports.hpp> // GAPI_EXPORTS
namespace cv {
namespace gapi {
namespace core {
namespace cpu {
GAPI_EXPORTS_W cv::GKernelPackage kernels();
} // namespace cpu
} // namespace core
} // namespace gapi
} // namespace cv
#endif // OPENCV_GAPI_CPU_CORE_API_HPP

View File

@ -1,542 +0,0 @@
// This file is part of OpenCV project.
// It is subject to the license terms in the LICENSE file found in the top-level directory
// of this distribution and at http://opencv.org/license.html.
//
// Copyright (C) 2018-2022 Intel Corporation
#ifndef OPENCV_GAPI_GCPUKERNEL_HPP
#define OPENCV_GAPI_GCPUKERNEL_HPP
#if defined _MSC_VER
#pragma warning(push)
#pragma warning(disable: 4702) // "Unreachable code" on postprocess(...) call inside OCVCallHelper
#endif
#include <functional>
#include <unordered_map>
#include <utility>
#include <vector>
#include <opencv2/core/mat.hpp>
#include <opencv2/gapi/gcommon.hpp>
#include <opencv2/gapi/gkernel.hpp>
#include <opencv2/gapi/garg.hpp>
#include <opencv2/gapi/gmetaarg.hpp>
#include <opencv2/gapi/util/compiler_hints.hpp> //suppress_unused_warning
#include <opencv2/gapi/util/util.hpp>
// FIXME: namespace scheme for backends?
namespace cv {
namespace gimpl
{
// Forward-declare an internal class
class GCPUExecutable;
} // namespace gimpl
namespace gapi
{
/**
* @brief This namespace contains G-API CPU backend functions,
* structures, and symbols.
*/
namespace cpu
{
/**
* \addtogroup gapi_std_backends
* @{
*
* @brief G-API backends available in this OpenCV version
*
* G-API backends play a cornerstone role in the G-API execution
* stack. Every backend is hardware-oriented and thus can run its
* kernels efficiently on the target platform.
*
* Backends are usually "black boxes" for G-API users -- on the API
* side, all backends are represented as different objects of the
* same class cv::gapi::GBackend.
* Users can work with backends by specifying which kernels to use.
*
* @sa @ref gapi_hld
*/
/**
* @brief Get a reference to CPU (OpenCV) backend.
*
* This is the default backend in G-API at the moment, providing
* broader functional coverage but losing some graph model
* advantages. Provided mostly for reference and prototyping
* purposes.
*
* @sa gapi_std_backends
*/
GAPI_EXPORTS cv::gapi::GBackend backend();
/** @} */
class GOCVFunctor;
//! @cond IGNORED
template<typename K, typename Callable>
GOCVFunctor ocv_kernel(const Callable& c);
template<typename K, typename Callable>
GOCVFunctor ocv_kernel(Callable& c);
//! @endcond
} // namespace cpu
} // namespace gapi
// Represents arguments which are passed to a wrapped CPU function
// FIXME: put into detail?
class GAPI_EXPORTS GCPUContext
{
public:
// Generic accessor API
template<typename T>
const T& inArg(int input) { return m_args.at(input).get<T>(); }
// Syntax sugar
const cv::Mat& inMat(int input);
cv::Mat& outMatR(int output); // FIXME: Avoid cv::Mat m = ctx.outMatR()
const cv::Scalar& inVal(int input);
cv::Scalar& outValR(int output); // FIXME: Avoid cv::Scalar s = ctx.outValR()
cv::MediaFrame& outFrame(int output);
template<typename T> std::vector<T>& outVecR(int output) // FIXME: the same issue
{
return outVecRef(output).wref<T>();
}
template<typename T> T& outOpaqueR(int output) // FIXME: the same issue
{
return outOpaqueRef(output).wref<T>();
}
GArg state()
{
return m_state;
}
protected:
detail::VectorRef& outVecRef(int output);
detail::OpaqueRef& outOpaqueRef(int output);
std::vector<GArg> m_args;
GArg m_state;
//FIXME: avoid conversion of arguments from internal representation to OpenCV one on each call
//to OCV kernel. (This can be achieved by two one-time conversions in GCPUExecutable::run:
//once on entry for input and output arguments, and once before return for output arguments only)
std::unordered_map<std::size_t, GRunArgP> m_results;
friend class gimpl::GCPUExecutable;
};
class GAPI_EXPORTS GCPUKernel
{
public:
// This function is a kernel's execution entry point (does the processing work)
using RunF = std::function<void(GCPUContext &)>;
// This function is a stateful kernel's setup routine (configures state)
using SetupF = std::function<void(const GMetaArgs &, const GArgs &,
GArg &, const GCompileArgs &)>;
GCPUKernel();
GCPUKernel(const RunF& runF, const SetupF& setupF = nullptr);
RunF m_runF = nullptr;
SetupF m_setupF = nullptr;
bool m_isStateful = false;
};
// FIXME: This is an ugly ad-hoc implementation. TODO: refactor
namespace detail
{
template<class T> struct get_in;
template<> struct get_in<cv::GMat>
{
static cv::Mat get(GCPUContext &ctx, int idx) { return ctx.inMat(idx); }
};
template<> struct get_in<cv::GMatP>
{
static cv::Mat get(GCPUContext &ctx, int idx) { return get_in<cv::GMat>::get(ctx, idx); }
};
template<> struct get_in<cv::GFrame>
{
static cv::MediaFrame get(GCPUContext &ctx, int idx) { return ctx.inArg<cv::MediaFrame>(idx); }
};
template<> struct get_in<cv::GScalar>
{
static cv::Scalar get(GCPUContext &ctx, int idx) { return ctx.inVal(idx); }
};
template<typename U> struct get_in<cv::GArray<U> >
{
static const std::vector<U>& get(GCPUContext &ctx, int idx) { return ctx.inArg<VectorRef>(idx).rref<U>(); }
};
template<typename U> struct get_in<cv::GOpaque<U> >
{
static const U& get(GCPUContext &ctx, int idx) { return ctx.inArg<OpaqueRef>(idx).rref<U>(); }
};
//FIXME(dm): GArray<Mat>/GArray<GMat> conversion should be done more gracefully in the system
template<> struct get_in<cv::GArray<cv::GMat> >: public get_in<cv::GArray<cv::Mat> >
{
};
//FIXME(dm): GArray<Scalar>/GArray<GScalar> conversion should be done more gracefully in the system
template<> struct get_in<cv::GArray<cv::GScalar> >: public get_in<cv::GArray<cv::Scalar> >
{
};
// FIXME(dm): GArray<vector<U>>/GArray<GArray<U>> conversion should be done more gracefully in the system
template<typename U> struct get_in<cv::GArray<cv::GArray<U>> >: public get_in<cv::GArray<std::vector<U>> >
{
};
//FIXME(dm): GOpaque<Mat>/GOpaque<GMat> conversion should be done more gracefully in the system
template<> struct get_in<cv::GOpaque<cv::GMat> >: public get_in<cv::GOpaque<cv::Mat> >
{
};
//FIXME(dm): GOpaque<Scalar>/GOpaque<GScalar> conversion should be done more gracefully in the system
template<> struct get_in<cv::GOpaque<cv::GScalar> >: public get_in<cv::GOpaque<cv::Mat> >
{
};
template<class T> struct get_in
{
static T get(GCPUContext &ctx, int idx) { return ctx.inArg<T>(idx); }
};
struct tracked_cv_mat{
tracked_cv_mat(cv::Mat& m) : r{m}, original_data{m.data} {}
cv::Mat r;
uchar* original_data;
operator cv::Mat& (){ return r;}
void validate() const{
if (r.data != original_data)
{
util::throw_error
(std::logic_error
("OpenCV kernel output parameter was reallocated. \n"
"Incorrect meta data was provided ?"));
}
}
};
template<typename... Outputs>
void postprocess(Outputs&... outs)
{
struct
{
void operator()(tracked_cv_mat* bm) { bm->validate(); }
void operator()(...) { }
} validate;
//dummy array to unfold parameter pack
int dummy[] = { 0, (validate(&outs), 0)... };
cv::util::suppress_unused_warning(dummy);
}
template<class T> struct get_out;
template<> struct get_out<cv::GMat>
{
static tracked_cv_mat get(GCPUContext &ctx, int idx)
{
auto& r = ctx.outMatR(idx);
return {r};
}
};
template<> struct get_out<cv::GMatP>
{
static tracked_cv_mat get(GCPUContext &ctx, int idx)
{
return get_out<cv::GMat>::get(ctx, idx);
}
};
template<> struct get_out<cv::GScalar>
{
static cv::Scalar& get(GCPUContext &ctx, int idx)
{
return ctx.outValR(idx);
}
};
template<> struct get_out<cv::GFrame>
{
static cv::MediaFrame& get(GCPUContext &ctx, int idx)
{
return ctx.outFrame(idx);
}
};
template<typename U> struct get_out<cv::GArray<U>>
{
static std::vector<U>& get(GCPUContext &ctx, int idx)
{
return ctx.outVecR<U>(idx);
}
};
//FIXME(dm): GArray<Mat>/GArray<GMat> conversion should be done more gracefully in the system
template<> struct get_out<cv::GArray<cv::GMat> >: public get_out<cv::GArray<cv::Mat> >
{
};
// FIXME(dm): GArray<vector<U>>/GArray<GArray<U>> conversion should be done more gracefully in the system
template<typename U> struct get_out<cv::GArray<cv::GArray<U>> >: public get_out<cv::GArray<std::vector<U>> >
{
};
template<typename U> struct get_out<cv::GOpaque<U>>
{
static U& get(GCPUContext &ctx, int idx)
{
return ctx.outOpaqueR<U>(idx);
}
};
template<typename, typename>
struct OCVSetupHelper;
template<typename Impl, typename... Ins>
struct OCVSetupHelper<Impl, std::tuple<Ins...>>
{
// Using 'auto' return type and 'decltype' specifier in both 'setup_impl' versions
// to check existence of required 'Impl::setup' functions.
// Since the 'decltype' specifier accepts an expression, we pass an expression with the
// comma operator, where the first operand is an attempted call to the desired 'Impl::setup'
// and the second operand is a 'void()' expression.
//
// SFINAE for 'Impl::setup' which accepts compile arguments.
template<int... IIs>
static auto setup_impl(const GMetaArgs &metaArgs, const GArgs &args,
GArg &state, const GCompileArgs &compileArgs,
detail::Seq<IIs...>) ->
decltype(Impl::setup(detail::get_in_meta<Ins>(metaArgs, args, IIs)...,
std::declval<typename std::add_lvalue_reference<
std::shared_ptr<typename Impl::State>
>::type
>(),
compileArgs)
, void())
{
// TODO: unique_ptr <-> shared_ptr conversion ?
// To check: Conversion is possible only if the state which should be passed to
// 'setup' user callback isn't required to have previous value
std::shared_ptr<typename Impl::State> stPtr;
Impl::setup(detail::get_in_meta<Ins>(metaArgs, args, IIs)..., stPtr, compileArgs);
state = GArg(stPtr);
}
// SFINAE for 'Impl::setup' which doesn't accept compile arguments.
template<int... IIs>
static auto setup_impl(const GMetaArgs &metaArgs, const GArgs &args,
GArg &state, const GCompileArgs &/* compileArgs */,
detail::Seq<IIs...>) ->
decltype(Impl::setup(detail::get_in_meta<Ins>(metaArgs, args, IIs)...,
std::declval<typename std::add_lvalue_reference<
std::shared_ptr<typename Impl::State>
>::type
>()
)
, void())
{
// The same comment as in 'setup' above.
std::shared_ptr<typename Impl::State> stPtr;
Impl::setup(detail::get_in_meta<Ins>(metaArgs, args, IIs)..., stPtr);
state = GArg(stPtr);
}
static void setup(const GMetaArgs &metaArgs, const GArgs &args,
GArg& state, const GCompileArgs &compileArgs)
{
setup_impl(metaArgs, args, state, compileArgs,
typename detail::MkSeq<sizeof...(Ins)>::type());
}
};
// OCVCallHelper is a helper class to call stateless OCV kernels and OCV kernel functors.
template<typename, typename, typename>
struct OCVCallHelper;
// FIXME: probably can be simplified with std::apply or analogue.
template<typename Impl, typename... Ins, typename... Outs>
struct OCVCallHelper<Impl, std::tuple<Ins...>, std::tuple<Outs...>>
{
template<typename... Inputs>
struct call_and_postprocess
{
template<typename... Outputs>
static void call(Inputs&&... ins, Outputs&&... outs)
{
//not using std::forward on outs is deliberate in order to
//cause a compilation error when trying to bind rvalue references to lvalue references
Impl::run(std::forward<Inputs>(ins)..., outs...);
postprocess(outs...);
}
template<typename... Outputs>
static void call(Impl& impl, Inputs&&... ins, Outputs&&... outs)
{
impl(std::forward<Inputs>(ins)..., outs...);
}
};
template<int... IIs, int... OIs>
static void call_impl(GCPUContext &ctx, detail::Seq<IIs...>, detail::Seq<OIs...>)
{
//Make sure that OpenCV kernels do not reallocate memory for output parameters
//by comparing their state (data ptr) before and after the call.
//This is done by converting each output Mat into a tracked_cv_mat object, and binding
//them to the parameters of an ad-hoc function
call_and_postprocess<decltype(get_in<Ins>::get(ctx, IIs))...>
::call(get_in<Ins>::get(ctx, IIs)..., get_out<Outs>::get(ctx, OIs)...);
}
template<int... IIs, int... OIs>
static void call_impl(cv::GCPUContext &ctx, Impl& impl,
detail::Seq<IIs...>, detail::Seq<OIs...>)
{
call_and_postprocess<decltype(get_in<Ins>::get(ctx, IIs))...>
::call(impl, get_in<Ins>::get(ctx, IIs)..., get_out<Outs>::get(ctx, OIs)...);
}
static void call(GCPUContext &ctx)
{
call_impl(ctx,
typename detail::MkSeq<sizeof...(Ins)>::type(),
typename detail::MkSeq<sizeof...(Outs)>::type());
}
// NB: Same as call but calling the object
// This is necessary for kernel implementations that have a state
// and are represented as an object
static void callFunctor(cv::GCPUContext &ctx, Impl& impl)
{
call_impl(ctx, impl,
typename detail::MkSeq<sizeof...(Ins)>::type(),
typename detail::MkSeq<sizeof...(Outs)>::type());
}
};
// OCVStCallHelper is a helper class to call stateful OCV kernels.
template<typename, typename, typename>
struct OCVStCallHelper;
template<typename Impl, typename... Ins, typename... Outs>
struct OCVStCallHelper<Impl, std::tuple<Ins...>, std::tuple<Outs...>> :
OCVCallHelper<Impl, std::tuple<Ins...>, std::tuple<Outs...>>
{
template<typename... Inputs>
struct call_and_postprocess
{
template<typename... Outputs>
static void call(typename Impl::State& st, Inputs&&... ins, Outputs&&... outs)
{
Impl::run(std::forward<Inputs>(ins)..., outs..., st);
postprocess(outs...);
}
};
template<int... IIs, int... OIs>
static void call_impl(GCPUContext &ctx, detail::Seq<IIs...>, detail::Seq<OIs...>)
{
auto& st = *ctx.state().get<std::shared_ptr<typename Impl::State>>();
call_and_postprocess<decltype(get_in<Ins>::get(ctx, IIs))...>
::call(st, get_in<Ins>::get(ctx, IIs)..., get_out<Outs>::get(ctx, OIs)...);
}
static void call(GCPUContext &ctx)
{
call_impl(ctx,
typename detail::MkSeq<sizeof...(Ins)>::type(),
typename detail::MkSeq<sizeof...(Outs)>::type());
}
};
} // namespace detail
template<class Impl, class K>
class GCPUKernelImpl: public cv::detail::KernelTag
{
using CallHelper = cv::detail::OCVCallHelper<Impl, typename K::InArgs, typename K::OutArgs>;
public:
using API = K;
static cv::gapi::GBackend backend() { return cv::gapi::cpu::backend(); }
static cv::GCPUKernel kernel() { return GCPUKernel(&CallHelper::call); }
};
template<class Impl, class K, class S>
class GCPUStKernelImpl: public cv::detail::KernelTag
{
using StSetupHelper = detail::OCVSetupHelper<Impl, typename K::InArgs>;
using StCallHelper = detail::OCVStCallHelper<Impl, typename K::InArgs, typename K::OutArgs>;
public:
using API = K;
using State = S;
static cv::gapi::GBackend backend() { return cv::gapi::cpu::backend(); }
static cv::GCPUKernel kernel() { return GCPUKernel(&StCallHelper::call,
&StSetupHelper::setup); }
};
#define GAPI_OCV_KERNEL(Name, API) struct Name: public cv::GCPUKernelImpl<Name, API>
// TODO: Reuse Anatoliy's logic for support of types with commas in macro.
// Retrieve the common part from Anatoliy's logic to the separate place.
#define GAPI_OCV_KERNEL_ST(Name, API, State) \
struct Name: public cv::GCPUStKernelImpl<Name, API, State> \
/// @private
class gapi::cpu::GOCVFunctor : public gapi::GFunctor
{
public:
using Impl = std::function<void(GCPUContext &)>;
using Meta = cv::GKernel::M;
GOCVFunctor(const char* id, const Meta &meta, const Impl& impl)
: gapi::GFunctor(id), impl_{GCPUKernel(impl), meta}
{
}
GKernelImpl impl() const override { return impl_; }
gapi::GBackend backend() const override { return gapi::cpu::backend(); }
private:
GKernelImpl impl_;
};
//! @cond IGNORED
template<typename K, typename Callable>
gapi::cpu::GOCVFunctor gapi::cpu::ocv_kernel(Callable& c)
{
using P = cv::detail::OCVCallHelper<Callable, typename K::InArgs, typename K::OutArgs>;
return GOCVFunctor{ K::id()
, &K::getOutMeta
, std::bind(&P::callFunctor, std::placeholders::_1, std::ref(c))
};
}
template<typename K, typename Callable>
gapi::cpu::GOCVFunctor gapi::cpu::ocv_kernel(const Callable& c)
{
using P = cv::detail::OCVCallHelper<Callable, typename K::InArgs, typename K::OutArgs>;
return GOCVFunctor{ K::id()
, &K::getOutMeta
, std::bind(&P::callFunctor, std::placeholders::_1, c)
};
}
//! @endcond
} // namespace cv
#if defined _MSC_VER
#pragma warning(pop)
#endif
#endif // OPENCV_GAPI_GCPUKERNEL_HPP

View File

@ -1,27 +0,0 @@
// This file is part of OpenCV project.
// It is subject to the license terms in the LICENSE file found in the top-level directory
// of this distribution and at http://opencv.org/license.html.
//
// Copyright (C) 2018 Intel Corporation
#ifndef OPENCV_GAPI_CPU_IMGPROC_API_HPP
#define OPENCV_GAPI_CPU_IMGPROC_API_HPP
#include <opencv2/core/cvdef.h> // GAPI_EXPORTS
#include <opencv2/gapi/gkernel.hpp> // GKernelPackage
namespace cv {
namespace gapi {
namespace imgproc {
namespace cpu {
GAPI_EXPORTS GKernelPackage kernels();
} // namespace cpu
} // namespace imgproc
} // namespace gapi
} // namespace cv
#endif // OPENCV_GAPI_CPU_IMGPROC_API_HPP

View File

@ -1,29 +0,0 @@
// This file is part of OpenCV project.
// It is subject to the license terms in the LICENSE file found in the top-level directory
// of this distribution and at http://opencv.org/license.html.
//
// Copyright (C) 2018 Intel Corporation
#ifndef OPENCV_GAPI_CPU_OT_API_HPP
#define OPENCV_GAPI_CPU_OT_API_HPP
#include <opencv2/core/cvdef.h> // GAPI_EXPORTS
#include <opencv2/gapi/gkernel.hpp> // GKernelPackage
namespace cv {
namespace gapi {
/**
* @brief This namespace contains G-API Operation Types for
* VAS Object Tracking module functionality.
*/
namespace ot {
namespace cpu {
GAPI_EXPORTS_W GKernelPackage kernels();
} // namespace cpu
} // namespace ot
} // namespace gapi
} // namespace cv
#endif // OPENCV_GAPI_CPU_OT_API_HPP

View File

@ -1,48 +0,0 @@
// This file is part of OpenCV project.
// It is subject to the license terms in the LICENSE file found in the top-level directory
// of this distribution and at http://opencv.org/license.html.
//
// Copyright (C) 2021 Intel Corporation
#ifndef OPENCV_GAPI_CPU_STEREO_API_HPP
#define OPENCV_GAPI_CPU_STEREO_API_HPP
#include <opencv2/gapi/gkernel.hpp> // GKernelPackage
namespace cv {
namespace gapi {
namespace calib3d {
namespace cpu {
GAPI_EXPORTS GKernelPackage kernels();
/** @brief Structure for the Stereo operation initialization parameters.*/
struct GAPI_EXPORTS StereoInitParam {
StereoInitParam(int nD, int bS, double bL, double f):
numDisparities(nD), blockSize(bS), baseline(bL), focus(f) {}
StereoInitParam() = default;
int numDisparities = 0;
int blockSize = 21;
double baseline = 63.5;
double focus = 3.6;
};
} // namespace cpu
} // namespace calib3d
} // namespace gapi
namespace detail {
template<> struct CompileArgTag<cv::gapi::calib3d::cpu::StereoInitParam> {
static const char* tag() {
return "org.opencv.stereoInit";
}
};
} // namespace detail
} // namespace cv
#endif // OPENCV_GAPI_CPU_STEREO_API_HPP

View File

@ -1,25 +0,0 @@
// This file is part of OpenCV project.
// It is subject to the license terms in the LICENSE file found in the top-level directory
// of this distribution and at http://opencv.org/license.html.
//
// Copyright (C) 2020 Intel Corporation
#ifndef OPENCV_GAPI_CPU_VIDEO_API_HPP
#define OPENCV_GAPI_CPU_VIDEO_API_HPP
#include <opencv2/gapi/gkernel.hpp> // GKernelPackage
namespace cv {
namespace gapi {
namespace video {
namespace cpu {
GAPI_EXPORTS GKernelPackage kernels();
} // namespace cpu
} // namespace video
} // namespace gapi
} // namespace cv
#endif // OPENCV_GAPI_CPU_VIDEO_API_HPP

View File

@ -1,20 +0,0 @@
// This file is part of OpenCV project.
// It is subject to the license terms in the LICENSE file found in the top-level directory
// of this distribution and at http://opencv.org/license.html.
//
// Copyright (C) 2018 Intel Corporation
#ifndef OPENCV_GAPI_FLUID_CORE_HPP
#define OPENCV_GAPI_FLUID_CORE_HPP
#include <opencv2/gapi/gkernel.hpp> // GKernelPackage
#include <opencv2/gapi/own/exports.hpp> // GAPI_EXPORTS
namespace cv { namespace gapi { namespace core { namespace fluid {
GAPI_EXPORTS_W cv::GKernelPackage kernels();
}}}}
#endif // OPENCV_GAPI_FLUID_CORE_HPP

View File

@ -1,154 +0,0 @@
// This file is part of OpenCV project.
// It is subject to the license terms in the LICENSE file found in the top-level directory
// of this distribution and at http://opencv.org/license.html.
//
// Copyright (C) 2018 Intel Corporation
#ifndef OPENCV_GAPI_FLUID_BUFFER_HPP
#define OPENCV_GAPI_FLUID_BUFFER_HPP
#include <list>
#include <numeric> // accumulate
#include <ostream> // ostream
#include <cstdint> // uint8_t
#include <opencv2/gapi/opencv_includes.hpp>
#include <opencv2/gapi/gmat.hpp>
#include <opencv2/gapi/util/optional.hpp>
namespace cv {
namespace gapi {
namespace fluid {
struct Border
{
// This constructor is required to support existing kernels which are part of G-API
Border(int _type, cv::Scalar _val) : type(_type), value(_val) {}
int type;
cv::Scalar value;
};
using BorderOpt = util::optional<Border>;
bool operator == (const Border& b1, const Border& b2);
class GAPI_EXPORTS Buffer;
class GAPI_EXPORTS View
{
public:
struct Cache
{
std::vector<const uint8_t*> m_linePtrs;
GMatDesc m_desc;
int m_border_size = 0;
inline const uint8_t* linePtr(int index) const
{
// "out_of_window" check:
// user must not request the lines which are outside of specified kernel window
GAPI_DbgAssert(index >= -m_border_size
&& index < -m_border_size + static_cast<int>(m_linePtrs.size()));
return m_linePtrs[index + m_border_size];
}
};
const inline uint8_t* InLineB(int index) const // -(w-1)/2...0...+(w-1)/2 for Filters
{
return m_cache->linePtr(index);
}
template<typename T> const inline T* InLine(int i) const
{
const uint8_t* ptr = this->InLineB(i);
return reinterpret_cast<const T*>(ptr);
}
inline operator bool() const { return m_priv != nullptr; }
bool ready() const;
inline int length() const { return m_cache->m_desc.size.width; }
int y() const;
inline const GMatDesc& meta() const { return m_cache->m_desc; }
class GAPI_EXPORTS Priv; // internal use only
Priv& priv(); // internal use only
const Priv& priv() const; // internal use only
View();
View(std::unique_ptr<Priv>&& p);
View(View&& v);
View& operator=(View&& v);
~View();
private:
std::unique_ptr<Priv> m_priv;
const Cache* m_cache = nullptr;
};
class GAPI_EXPORTS Buffer
{
public:
struct Cache
{
std::vector<uint8_t*> m_linePtrs;
GMatDesc m_desc;
};
// Default constructor (executable creation stage,
// all following initialization performed in Priv::init())
Buffer();
// Scratch constructor (user kernels)
Buffer(const cv::GMatDesc &desc);
// Constructor for intermediate buffers (for tests)
Buffer(const cv::GMatDesc &desc,
int max_line_consumption, int border_size,
int skew,
int wlpi,
BorderOpt border);
// Constructor for in/out buffers (for tests)
Buffer(const cv::Mat &data, bool is_input);
~Buffer();
Buffer& operator=(Buffer&&);
inline uint8_t* OutLineB(int index = 0)
{
return m_cache->m_linePtrs[index];
}
template<typename T> inline T* OutLine(int index = 0)
{
uint8_t* ptr = this->OutLineB(index);
return reinterpret_cast<T*>(ptr);
}
int y() const;
int linesReady() const;
void debug(std::ostream &os) const;
inline int length() const { return m_cache->m_desc.size.width; }
int lpi() const; // LPI for WRITER
inline const GMatDesc& meta() const { return m_cache->m_desc; }
View mkView(int borderSize, bool ownStorage);
void addView(const View* v);
class GAPI_EXPORTS Priv; // internal use only
Priv& priv(); // internal use only
const Priv& priv() const; // internal use only
private:
std::unique_ptr<Priv> m_priv;
const Cache* m_cache;
};
} // namespace cv::gapi::fluid
} // namespace cv::gapi
} // namespace cv
#endif // OPENCV_GAPI_FLUID_BUFFER_HPP

View File

@ -1,442 +0,0 @@
// This file is part of OpenCV project.
// It is subject to the license terms in the LICENSE file found in the top-level directory
// of this distribution and at http://opencv.org/license.html.
//
// Copyright (C) 2018-2019 Intel Corporation
#ifndef OPENCV_GAPI_FLUID_KERNEL_HPP
#define OPENCV_GAPI_FLUID_KERNEL_HPP
#include <vector>
#include <functional>
#include <map>
#include <unordered_map>
#include <opencv2/gapi/opencv_includes.hpp>
#include <opencv2/gapi/gcommon.hpp>
#include <opencv2/gapi/gkernel.hpp>
#include <opencv2/gapi/garg.hpp>
#include <opencv2/gapi/fluid/gfluidbuffer.hpp>
// FIXME: namespace scheme for backends?
namespace cv {
namespace gapi
{
/**
* @brief This namespace contains G-API Fluid backend functions, structures, and symbols.
*/
namespace fluid
{
/**
* \addtogroup gapi_std_backends G-API Standard Backends
* @{
*/
/**
* @brief Get a reference to Fluid backend.
*
* @sa gapi_std_backends
*/
GAPI_EXPORTS cv::gapi::GBackend backend();
/** @} */
} // namespace fluid
} // namespace gapi
class GAPI_EXPORTS GFluidKernel
{
public:
enum class Kind
{
Filter,
Resize,
YUV420toRGB //Color conversion of 4:2:0 chroma sub-sampling formats (NV12, I420, etc.) to RGB
};
// This function is a generic "doWork" callback
using F = std::function<void(const cv::GArgs&, const std::vector<gapi::fluid::Buffer*> &)>;
// This function is a generic "initScratch" callback
using IS = std::function<void(const cv::GMetaArgs &, const cv::GArgs&, gapi::fluid::Buffer &)>;
// This function is a generic "resetScratch" callback
using RS = std::function<void(gapi::fluid::Buffer &)>;
// This function describes kernel metadata inference rule.
using M = std::function<GMetaArgs(const GMetaArgs &, const GArgs &)>;
// This function is a generic "getBorder" callback (extracts border-related data from kernel's input parameters)
using B = std::function<gapi::fluid::BorderOpt(const GMetaArgs&, const GArgs&)>;
// This function is a generic "getWindow" callback (extracts window-related data from kernel's input parameters)
using GW = std::function<int(const GMetaArgs&, const GArgs&)>;
// FIXME: move implementations out of header file
GFluidKernel() {}
GFluidKernel(Kind k, int l, bool scratch, const F& f, const IS &is, const RS &rs, const B& b, const GW& win)
: m_kind(k)
, m_lpi(l)
, m_scratch(scratch)
, m_f(f)
, m_is(is)
, m_rs(rs)
, m_b(b)
, m_gw(win) {}
Kind m_kind;
const int m_lpi = -1;
const bool m_scratch = false;
const F m_f;
const IS m_is;
const RS m_rs;
const B m_b;
const GW m_gw;
};
// FIXME!!!
// This is the temporary and experimental API
// which should be replaced by runtime roi-based scheduling
/** \addtogroup gapi_compile_args
* @{
*/
/**
* @brief This structure allows controlling the output image region
* which Fluid backend will produce in the graph.
*
* This feature is useful for external tiling and parallelism, but
* will be deprecated in the future releases.
*/
struct GFluidOutputRois
{
std::vector<cv::Rect> rois;
};
/**
* @brief This structure forces Fluid backend to generate multiple
* parallel output regions in the graph. These regions execute in parallel.
*
* This feature may be deprecated in the future releases.
*/
struct GFluidParallelOutputRois
{
std::vector<GFluidOutputRois> parallel_rois;
};
/**
* @brief This structure allows customizing the way Fluid executes
* parallel regions.
*
* For example, the user can utilize their own threading runtime via this parameter.
* The `parallel_for` member functor is called by the Fluid runtime with the
* following arguments:
*
* @param size Size of the parallel range to process
* @param f A function which should be called for every integer index
* in this range by the specified parallel_for implementation.
*
* This feature may be deprecated in the future releases.
*/
struct GFluidParallelFor
{
//this function accepts:
// - size of the "parallel" range as the first argument
// - and a function to be called on the range items, designated by item index
std::function<void(std::size_t size, std::function<void(std::size_t index)>)> parallel_for;
};
/** @} gapi_compile_args */
namespace detail
{
template<> struct CompileArgTag<GFluidOutputRois>
{
static const char* tag() { return "gapi.fluid.outputRois"; }
};
template<> struct CompileArgTag<GFluidParallelFor>
{
static const char* tag() { return "gapi.fluid.parallelFor"; }
};
template<> struct CompileArgTag<GFluidParallelOutputRois>
{
static const char* tag() { return "gapi.fluid.parallelOutputRois"; }
};
} // namespace detail
namespace detail
{
template<class T> struct fluid_get_in;
template<> struct fluid_get_in<cv::GMat>
{
static const cv::gapi::fluid::View& get(const cv::GArgs &in_args, int idx)
{
return *in_args[idx].unsafe_get<cv::gapi::fluid::View*>();
}
};
template<> struct fluid_get_in<cv::GScalar>
{
// FIXME: change to return by reference when moved to own::Scalar
static cv::Scalar get(const cv::GArgs &in_args, int idx)
{
return in_args[idx].unsafe_get<cv::Scalar>();
}
};
template<typename U> struct fluid_get_in<cv::GArray<U>>
{
static const std::vector<U>& get(const cv::GArgs &in_args, int idx)
{
return in_args.at(idx).unsafe_get<cv::detail::VectorRef>().rref<U>();
}
};
template<typename U> struct fluid_get_in<cv::GOpaque<U>>
{
static const U& get(const cv::GArgs &in_args, int idx)
{
return in_args.at(idx).unsafe_get<cv::detail::OpaqueRef>().rref<U>();
}
};
template<class T> struct fluid_get_in
{
static const T& get(const cv::GArgs &in_args, int idx)
{
return in_args[idx].unsafe_get<T>();
}
};
template<bool, typename Impl, typename... Ins>
struct scratch_helper;
template<typename Impl, typename... Ins>
struct scratch_helper<true, Impl, Ins...>
{
// Init
template<int... IIs>
static void help_init_impl(const cv::GMetaArgs &metas,
const cv::GArgs &in_args,
gapi::fluid::Buffer &scratch_buf,
detail::Seq<IIs...>)
{
Impl::initScratch(get_in_meta<Ins>(metas, in_args, IIs)..., scratch_buf);
}
static void help_init(const cv::GMetaArgs &metas,
const cv::GArgs &in_args,
gapi::fluid::Buffer &b)
{
help_init_impl(metas, in_args, b, typename detail::MkSeq<sizeof...(Ins)>::type());
}
// Reset
static void help_reset(gapi::fluid::Buffer &b)
{
Impl::resetScratch(b);
}
};
template<typename Impl, typename... Ins>
struct scratch_helper<false, Impl, Ins...>
{
static void help_init(const cv::GMetaArgs &,
const cv::GArgs &,
gapi::fluid::Buffer &)
{
GAPI_Error("InternalError");
}
static void help_reset(gapi::fluid::Buffer &)
{
GAPI_Error("InternalError");
}
};
template<typename T> struct is_gmat_type
{
static const constexpr bool value = std::is_same<cv::GMat, T>::value;
};
template<bool CallCustomGetBorder, typename Impl, typename... Ins>
struct get_border_helper;
template<typename Impl, typename... Ins>
struct get_border_helper<true, Impl, Ins...>
{
template<int... IIs>
static gapi::fluid::BorderOpt get_border_impl(const GMetaArgs &metas,
const cv::GArgs &in_args,
cv::detail::Seq<IIs...>)
{
return util::make_optional(Impl::getBorder(cv::detail::get_in_meta<Ins>(metas, in_args, IIs)...));
}
static gapi::fluid::BorderOpt help(const GMetaArgs &metas,
const cv::GArgs &in_args)
{
return get_border_impl(metas, in_args, typename detail::MkSeq<sizeof...(Ins)>::type());
}
};
template<typename Impl, typename... Ins>
struct get_border_helper<false, Impl, Ins...>
{
static gapi::fluid::BorderOpt help(const cv::GMetaArgs &,
const cv::GArgs &)
{
return {};
}
};
template<bool CallCustomGetWindow, typename, typename... Ins>
struct get_window_helper;
template<typename Impl, typename... Ins>
struct get_window_helper<true, Impl, Ins...>
{
template<int... IIs>
static int get_window_impl(const GMetaArgs &metas,
const cv::GArgs &in_args,
cv::detail::Seq<IIs...>)
{
return Impl::getWindow(cv::detail::get_in_meta<Ins>(metas, in_args, IIs)...);
}
static int help(const GMetaArgs &metas, const cv::GArgs &in_args)
{
return get_window_impl(metas, in_args, typename detail::MkSeq<sizeof...(Ins)>::type());
}
};
template<typename Impl, typename... Ins>
struct get_window_helper<false, Impl, Ins...>
{
static int help(const cv::GMetaArgs &,
const cv::GArgs &)
{
return Impl::Window;
}
};
template<typename C, typename T>
struct has_Window
{
private:
template<class U>
static constexpr auto Check(U*) -> typename std::is_same<decltype(U::Window), T>::type;
template<typename>
static constexpr std::false_type Check(...);
typedef decltype(Check<C>(0)) Result;
public:
static constexpr bool value = Result::value;
};
template<bool hasWindow, typename Impl>
struct callCustomGetBorder;
template<typename Impl>
struct callCustomGetBorder<true, Impl>
{
static constexpr bool value = (Impl::Window != 1);
};
template<typename Impl>
struct callCustomGetBorder<false, Impl>
{
static constexpr bool value = true;
};
template<typename, typename, typename, bool UseScratch>
struct FluidCallHelper;
template<typename Impl, typename... Ins, typename... Outs, bool UseScratch>
struct FluidCallHelper<Impl, std::tuple<Ins...>, std::tuple<Outs...>, UseScratch>
{
static_assert(all_satisfy<is_gmat_type, Outs...>::value, "return type must be GMat");
static_assert(contains<GMat, Ins...>::value, "input must contain at least one GMat");
// Execution dispatcher ////////////////////////////////////////////////////
template<int... IIs, int... OIs>
static void call_impl(const cv::GArgs &in_args,
const std::vector<gapi::fluid::Buffer*> &out_bufs,
detail::Seq<IIs...>,
detail::Seq<OIs...>)
{
Impl::run(fluid_get_in<Ins>::get(in_args, IIs)..., *out_bufs[OIs]...);
}
static void call(const cv::GArgs &in_args,
const std::vector<gapi::fluid::Buffer*> &out_bufs)
{
constexpr int numOuts = (sizeof...(Outs)) + (UseScratch ? 1 : 0);
call_impl(in_args, out_bufs,
typename detail::MkSeq<sizeof...(Ins)>::type(),
typename detail::MkSeq<numOuts>::type());
}
// Scratch buffer initialization dispatcher ////////////////////////////////
static void init_scratch(const GMetaArgs &metas,
const cv::GArgs &in_args,
gapi::fluid::Buffer &b)
{
scratch_helper<UseScratch, Impl, Ins...>::help_init(metas, in_args, b);
}
// Scratch buffer reset dispatcher /////////////////////////////////////////
static void reset_scratch(gapi::fluid::Buffer &scratch_buf)
{
scratch_helper<UseScratch, Impl, Ins...>::help_reset(scratch_buf);
}
static gapi::fluid::BorderOpt getBorder(const GMetaArgs &metas, const cv::GArgs &in_args)
{
constexpr bool hasWindow = has_Window<Impl, const int>::value;
// User must provide "init" callback if Window != 1
// TODO: move to constexpr if when we enable C++17
return get_border_helper<callCustomGetBorder<hasWindow, Impl>::value, Impl, Ins...>::help(metas, in_args);
}
static int getWindow(const GMetaArgs &metas, const cv::GArgs &in_args)
{
constexpr bool callCustomGetWindow = !(has_Window<Impl, const int>::value);
return get_window_helper<callCustomGetWindow, Impl, Ins...>::help(metas, in_args);
}
};
} // namespace detail
template<class Impl, class K, bool UseScratch>
class GFluidKernelImpl : public cv::detail::KernelTag
{
static const int LPI = 1;
static const auto Kind = GFluidKernel::Kind::Filter;
using P = detail::FluidCallHelper<Impl, typename K::InArgs, typename K::OutArgs, UseScratch>;
public:
using API = K;
static GFluidKernel kernel()
{
// FIXME: call() and getOutMeta() needs to be renamed so it is clear these
// functions are internal wrappers, not user API
return GFluidKernel(Impl::Kind, Impl::LPI,
UseScratch,
&P::call, &P::init_scratch, &P::reset_scratch, &P::getBorder, &P::getWindow);
}
static cv::gapi::GBackend backend() { return cv::gapi::fluid::backend(); }
};
#define GAPI_FLUID_KERNEL(Name, API, Scratch) struct Name: public cv::GFluidKernelImpl<Name, API, Scratch>
} // namespace cv
#endif // OPENCV_GAPI_GCPUKERNEL_HPP
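A minimal sketch (not part of the original header) of how a user-side Fluid kernel could be built on the wrapper above. The operation GInvert8U, its string id and the pixel logic are illustrative only, and the View/Buffer row accessors are assumed to follow the usual Fluid backend conventions (InLine/OutLine/length).

#include <opencv2/gapi/gkernel.hpp>               // G_TYPED_KERNEL
#include <opencv2/gapi/fluid/gfluidkernel.hpp>    // GAPI_FLUID_KERNEL

// Hypothetical operation: per-pixel inversion of an 8-bit image.
G_TYPED_KERNEL(GInvert8U, <cv::GMat(cv::GMat)>, "sample.custom.invert8u") {
    static cv::GMatDesc outMeta(cv::GMatDesc in) { return in; }     // output matches input
};

// Fluid implementation: no scratch buffer, 1-line window (point-wise filter).
GAPI_FLUID_KERNEL(GFluidInvert8U, GInvert8U, false) {
    static const int Window = 1;
    static void run(const cv::gapi::fluid::View &in, cv::gapi::fluid::Buffer &out) {
        const uint8_t *in_row  = in.InLine<uint8_t>(0);    // current input row
        uint8_t       *out_row = out.OutLine<uint8_t>();   // current output row
        for (int x = 0; x < out.length(); x++)
            out_row[x] = static_cast<uint8_t>(255 - in_row[x]);
    }
};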

View File

@ -1,20 +0,0 @@
// This file is part of OpenCV project.
// It is subject to the license terms in the LICENSE file found in the top-level directory
// of this distribution and at http://opencv.org/license.html.
//
// Copyright (C) 2018 Intel Corporation
#ifndef OPENCV_GAPI_FLUID_IMGPROC_HPP
#define OPENCV_GAPI_FLUID_IMGPROC_HPP
#include <opencv2/gapi/gkernel.hpp> // GKernelPackage
#include <opencv2/gapi/own/exports.hpp> // GAPI_EXPORTS
namespace cv { namespace gapi { namespace imgproc { namespace fluid {
GAPI_EXPORTS_W GKernelPackage kernels();
}}}}
#endif // OPENCV_GAPI_FLUID_IMGPROC_HPP
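As a usage sketch (illustrative, not from the header), the package returned by cv::gapi::imgproc::fluid::kernels() is typically passed through cv::compile_args() so the Fluid implementations are picked for the imgproc operations of a graph.

#include <opencv2/gapi.hpp>
#include <opencv2/gapi/fluid/imgproc.hpp>

void run_with_fluid_imgproc(cv::GComputation &graph, const cv::Mat &in, cv::Mat &out) {
    auto pkg = cv::gapi::imgproc::fluid::kernels();    // Fluid versions of imgproc kernels
    graph.apply(in, out, cv::compile_args(pkg));       // prefer them for this execution
}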

View File

@ -1,311 +0,0 @@
// This file is part of OpenCV project.
// It is subject to the license terms in the LICENSE file found in the top-level directory
// of this distribution and at http://opencv.org/license.html.
//
// Copyright (C) 2018-2021 Intel Corporation
#ifndef OPENCV_GAPI_GARG_HPP
#define OPENCV_GAPI_GARG_HPP
#include <vector>
#include <unordered_map>
#include <type_traits>
#include <opencv2/gapi/opencv_includes.hpp>
#include <opencv2/gapi/own/mat.hpp>
#include <opencv2/gapi/media.hpp>
#include <opencv2/gapi/util/util.hpp>
#include <opencv2/gapi/util/any.hpp>
#include <opencv2/gapi/util/variant.hpp>
#include <opencv2/gapi/gmat.hpp>
#include <opencv2/gapi/gscalar.hpp>
#include <opencv2/gapi/garray.hpp>
#include <opencv2/gapi/gopaque.hpp>
#include <opencv2/gapi/gframe.hpp>
#include <opencv2/gapi/gtype_traits.hpp>
#include <opencv2/gapi/gmetaarg.hpp>
#include <opencv2/gapi/streaming/source.hpp>
#include <opencv2/gapi/rmat.hpp>
namespace cv {
class GArg;
namespace detail {
template<typename T>
using is_garg = std::is_same<GArg, typename std::decay<T>::type>;
}
// Parameter holder class for a node
// Depending on platform capabilities, can either support arbitrary types
// (as `boost::any`) or a limited number of types (as `boost::variant`).
// FIXME: put into "details" as a user shouldn't use it in his code
class GAPI_EXPORTS GArg
{
public:
GArg() {}
template<typename T, typename std::enable_if<!detail::is_garg<T>::value, int>::type = 0>
explicit GArg(const T &t)
: kind(detail::GTypeTraits<T>::kind)
, opaque_kind(detail::GOpaqueTraits<T>::kind)
, value(detail::wrap_gapi_helper<T>::wrap(t))
{
}
template<typename T, typename std::enable_if<!detail::is_garg<T>::value, int>::type = 0>
explicit GArg(T &&t)
: kind(detail::GTypeTraits<typename std::decay<T>::type>::kind)
, opaque_kind(detail::GOpaqueTraits<typename std::decay<T>::type>::kind)
, value(detail::wrap_gapi_helper<T>::wrap(t))
{
}
template<typename T> inline T& get()
{
return util::any_cast<typename std::remove_reference<T>::type>(value);
}
template<typename T> inline const T& get() const
{
return util::any_cast<typename std::remove_reference<T>::type>(value);
}
template<typename T> inline T& unsafe_get()
{
return util::unsafe_any_cast<typename std::remove_reference<T>::type>(value);
}
template<typename T> inline const T& unsafe_get() const
{
return util::unsafe_any_cast<typename std::remove_reference<T>::type>(value);
}
detail::ArgKind kind = detail::ArgKind::OPAQUE_VAL;
detail::OpaqueKind opaque_kind = detail::OpaqueKind::CV_UNKNOWN;
protected:
util::any value;
};
using GArgs = std::vector<GArg>;
// FIXME: Express as M<GProtoArg...>::type
// FIXME: Move to a separate file!
using GRunArgBase = util::variant<
#if !defined(GAPI_STANDALONE)
cv::UMat,
#endif // !defined(GAPI_STANDALONE)
cv::RMat,
cv::gapi::wip::IStreamSource::Ptr,
cv::Mat,
cv::Scalar,
cv::detail::VectorRef,
cv::detail::OpaqueRef,
cv::MediaFrame
>;
namespace detail {
template<typename,typename>
struct in_variant;
template<typename T, typename... Types>
struct in_variant<T, util::variant<Types...> >
: std::integral_constant<bool, cv::detail::contains<T, Types...>::value > {
};
} // namespace detail
struct GAPI_EXPORTS GRunArg: public GRunArgBase
{
// Metadata information here
using Meta = std::unordered_map<std::string, util::any>;
Meta meta;
// Mimic the old GRunArg semantics here, from the times when
// GRunArg was an alias to variant<>
GRunArg();
GRunArg(const cv::GRunArg &arg);
GRunArg(cv::GRunArg &&arg);
GRunArg& operator= (const GRunArg &arg);
GRunArg& operator= (GRunArg &&arg);
template <typename T>
GRunArg(const T &t,
const Meta &m = Meta{},
typename std::enable_if< detail::in_variant<T, GRunArgBase>::value, int>::type = 0)
: GRunArgBase(t)
, meta(m)
{
}
template <typename T>
GRunArg(T &&t,
const Meta &m = Meta{},
typename std::enable_if< detail::in_variant<T, GRunArgBase>::value, int>::type = 0)
: GRunArgBase(std::move(t))
, meta(m)
{
}
template <typename T> auto operator= (const T &t)
-> typename std::enable_if< detail::in_variant<T, GRunArgBase>::value, cv::GRunArg>::type&
{
GRunArgBase::operator=(t);
return *this;
}
template <typename T> auto operator= (T&& t)
-> typename std::enable_if< detail::in_variant<T, GRunArgBase>::value, cv::GRunArg>::type&
{
GRunArgBase::operator=(std::move(t));
return *this;
}
};
using GRunArgs = std::vector<GRunArg>;
// TODO: Think about the addition operator
/**
* @brief This operator allows complementing the input vector at run time.
*
* It is an ordinary overload of the addition assignment operator.
*
* Example of usage:
* @snippet samples/cpp/tutorial_code/gapi/doc_snippets/dynamic_graph_snippets.cpp GRunArgs usage
*
*/
inline GRunArgs& operator += (GRunArgs &lhs, const GRunArgs &rhs)
{
lhs.reserve(lhs.size() + rhs.size());
lhs.insert(lhs.end(), rhs.begin(), rhs.end());
return lhs;
}
namespace gapi
{
namespace wip
{
/**
* @brief This aggregate type represents all types which G-API can
* handle (via variant).
*
* It only exists to overcome C++ language limitations (where a
* `using`-defined class can't be forward-declared).
*/
struct GAPI_EXPORTS Data: public GRunArg
{
using GRunArg::GRunArg;
template <typename T>
Data& operator= (const T& t) { GRunArg::operator=(t); return *this; }
template <typename T>
Data& operator= (T&& t) { GRunArg::operator=(std::move(t)); return *this; }
};
} // namespace wip
} // namespace gapi
using GRunArgP = util::variant<
#if !defined(GAPI_STANDALONE)
cv::UMat*,
#endif // !defined(GAPI_STANDALONE)
cv::Mat*,
cv::RMat*,
cv::Scalar*,
cv::MediaFrame*,
cv::detail::VectorRef,
cv::detail::OpaqueRef
>;
using GRunArgsP = std::vector<GRunArgP>;
// TODO: Think about the addition operator
/**
* @brief This operator allows complementing the output vector at run time.
*
* It is an ordinary overload of the addition assignment operator.
*
* Example of usage:
* @snippet samples/cpp/tutorial_code/gapi/doc_snippets/dynamic_graph_snippets.cpp GRunArgsP usage
*
*/
inline GRunArgsP& operator += (GRunArgsP &lhs, const GRunArgsP &rhs)
{
lhs.reserve(lhs.size() + rhs.size());
lhs.insert(lhs.end(), rhs.begin(), rhs.end());
return lhs;
}
namespace gapi
{
/**
* \addtogroup gapi_serialization
* @{
*
* @brief G-API functions and classes for serialization and deserialization.
*/
/** @brief Wraps deserialized output GRunArgs to GRunArgsP which can be used by GCompiled.
*
* Since it is impossible to get modifiable output arguments from deserialization,
* they need to be wrapped by this function.
*
* Example of usage:
* @snippet samples/cpp/tutorial_code/gapi/doc_snippets/api_ref_snippets.cpp bind after deserialization
*
* @param out_args deserialized GRunArgs.
* @return the same GRunArgs wrapped in GRunArgsP.
* @see deserialize
*/
GAPI_EXPORTS cv::GRunArgsP bind(cv::GRunArgs &out_args);
/** @brief Wraps output GRunArgsP available during graph execution to GRunArgs which can be serialized.
*
* GRunArgsP is pointer-to-value, so to be serialized they need to be bound to real values
* which this function does.
*
* Example of usage:
* @snippet samples/cpp/tutorial_code/gapi/doc_snippets/api_ref_snippets.cpp bind before serialization
*
* @param out output GRunArgsP available during graph execution.
* @return the same GRunArgsP wrapped in serializable GRunArgs.
* @see serialize
*/
GAPI_EXPORTS cv::GRunArg bind(cv::GRunArgP &out); // FIXME: think more about it
/** @} */
}
template<typename... Ts> inline GRunArgs gin(const Ts&... args)
{
return GRunArgs{ GRunArg(detail::wrap_host_helper<Ts>::wrap_in(args))... };
}
template<typename... Ts> inline GRunArgsP gout(Ts&... args)
{
return GRunArgsP{ GRunArgP(detail::wrap_host_helper<Ts>::wrap_out(args))... };
}
struct GTypeInfo;
using GTypesInfo = std::vector<GTypeInfo>;
// FIXME: Needed for python bridge, must be moved to more appropriate header
namespace detail {
struct ExtractArgsCallback
{
cv::GRunArgs operator()(const cv::GTypesInfo& info) const { return c(info); }
using CallBackT = std::function<cv::GRunArgs(const cv::GTypesInfo& info)>;
CallBackT c;
};
struct ExtractMetaCallback
{
cv::GMetaArgs operator()(const cv::GTypesInfo& info) const { return c(info); }
using CallBackT = std::function<cv::GMetaArgs(const cv::GTypesInfo& info)>;
CallBackT c;
};
void constructGraphOutputs(const cv::GTypesInfo &out_info,
cv::GRunArgs &args,
cv::GRunArgsP &outs);
} // namespace detail
} // namespace cv
#endif // OPENCV_GAPI_GARG_HPP
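A minimal sketch (illustrative only) of the gin()/gout() helpers declared above: host objects are wrapped into GRunArgs/GRunArgsP and passed to a compiled graph, while the += overloads let the vectors be complemented at run time.

#include <utility>
#include <opencv2/gapi.hpp>

void run_once(cv::GCompiled &cc, const cv::Mat &in1, const cv::Mat &in2, cv::Mat &out) {
    cv::GRunArgs ins = cv::gin(in1);
    ins += cv::gin(in2);                    // complement the input vector at run time
    cc(std::move(ins), cv::gout(out));      // outputs are captured by reference
}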

View File

@ -1,440 +0,0 @@
// This file is part of OpenCV project.
// It is subject to the license terms in the LICENSE file found in the top-level directory
// of this distribution and at http://opencv.org/license.html.
//
// Copyright (C) 2018-2020 Intel Corporation
#ifndef OPENCV_GAPI_GARRAY_HPP
#define OPENCV_GAPI_GARRAY_HPP
#include <functional>
#include <ostream>
#include <vector>
#include <memory>
#include <opencv2/gapi/own/exports.hpp>
#include <opencv2/gapi/opencv_includes.hpp>
#include <opencv2/gapi/util/variant.hpp>
#include <opencv2/gapi/util/throw.hpp>
#include <opencv2/gapi/own/assert.hpp>
#include <opencv2/gapi/gmat.hpp> // flatten_g only!
#include <opencv2/gapi/gscalar.hpp> // flatten_g only!
namespace cv
{
// Forward declaration; GNode and GOrigin are an internal
// (user-inaccessible) classes.
class GNode;
struct GOrigin;
template<typename T> class GArray;
/**
* \addtogroup gapi_meta_args
* @{
*/
struct GAPI_EXPORTS_W_SIMPLE GArrayDesc
{
// FIXME: Body
// FIXME: Also implement proper operator== then
bool operator== (const GArrayDesc&) const { return true; }
};
template<typename U> GArrayDesc descr_of(const std::vector<U> &) { return {};}
GAPI_EXPORTS_W inline GArrayDesc empty_array_desc() {return {}; }
/** @} */
std::ostream& operator<<(std::ostream& os, const cv::GArrayDesc &desc);
namespace detail
{
// ConstructVec is a callback which stores information about T and is used by
// G-API runtime to construct arrays in host memory (T remains opaque for G-API).
// ConstructVec is carried into G-API internals by GArrayU.
// Currently it is suitable for Host (CPU) plugins only, real offload may require
// more information for manual memory allocation on-device.
class VectorRef;
using ConstructVec = std::function<void(VectorRef&)>;
// This is the base struct for GArrayU type holder
struct TypeHintBase{virtual ~TypeHintBase() = default;};
// This class holds type of initial GArray to be checked from GArrayU
template <typename T>
struct TypeHint final : public TypeHintBase{};
// This class strips type information from GArray<T> and makes it usable
// in the G-API graph compiler (expression unrolling, graph generation, etc).
// Part of GProtoArg.
class GAPI_EXPORTS GArrayU
{
public:
GArrayU(const GNode &n, std::size_t out); // Operation result constructor
template <typename T>
bool holds() const; // Check if was created from GArray<T>
GOrigin& priv(); // Internal use only
const GOrigin& priv() const; // Internal use only
protected:
GArrayU(); // Default constructor
GArrayU(const detail::VectorRef& vref); // Constant value constructor
template<class> friend class cv::GArray; // (available to GArray<T> only)
void setConstructFcn(ConstructVec &&cv); // Store T-aware constructor
template <typename T>
void specifyType(); // Store type of initial GArray<T>
template <typename T>
void storeKind();
void setKind(cv::detail::OpaqueKind);
std::shared_ptr<GOrigin> m_priv;
std::shared_ptr<TypeHintBase> m_hint;
};
template <typename T>
bool GArrayU::holds() const{
GAPI_Assert(m_hint != nullptr);
using U = typename std::decay<T>::type;
return dynamic_cast<TypeHint<U>*>(m_hint.get()) != nullptr;
}
template <typename T>
void GArrayU::specifyType(){
m_hint.reset(new TypeHint<typename std::decay<T>::type>);
}
template <typename T>
void GArrayU::storeKind(){
setKind(cv::detail::GOpaqueTraits<T>::kind);
}
// This class represents a typed STL vector reference.
// Depending on origins, this reference may be either "just a" reference to
// an object created externally, OR actually own the underlying object
// (be value holder).
class BasicVectorRef
{
public:
// These fields are set by the derived class(es)
std::size_t m_elemSize = 0ul;
cv::GArrayDesc m_desc;
virtual ~BasicVectorRef() {}
virtual void mov(BasicVectorRef &ref) = 0;
virtual const void* ptr() const = 0;
virtual std::size_t size() const = 0;
};
template<typename T> class VectorRefT final: public BasicVectorRef
{
using empty_t = util::monostate;
using ro_ext_t = const std::vector<T> *;
using rw_ext_t = std::vector<T> *;
using rw_own_t = std::vector<T> ;
util::variant<empty_t, ro_ext_t, rw_ext_t, rw_own_t> m_ref;
inline bool isEmpty() const { return util::holds_alternative<empty_t>(m_ref); }
inline bool isROExt() const { return util::holds_alternative<ro_ext_t>(m_ref); }
inline bool isRWExt() const { return util::holds_alternative<rw_ext_t>(m_ref); }
inline bool isRWOwn() const { return util::holds_alternative<rw_own_t>(m_ref); }
void init(const std::vector<T>* vec = nullptr)
{
m_elemSize = sizeof(T);
if (vec) m_desc = cv::descr_of(*vec);
}
public:
VectorRefT() { init(); }
virtual ~VectorRefT() {}
explicit VectorRefT(const std::vector<T>& vec) : m_ref(&vec) { init(&vec); }
explicit VectorRefT(std::vector<T>& vec) : m_ref(&vec) { init(&vec); }
explicit VectorRefT(std::vector<T>&& vec) : m_ref(std::move(vec)) { init(&vec); }
// Reset a VectorRefT. Called only for objects instantiated
// internally in G-API (e.g. temporary GArray<T>'s within a
// computation). Reset here means both initialization
// (creating an object) and reset (discarding its existing
// content before the next execution). Must never be called
// for external VectorRefTs.
void reset()
{
if (isEmpty())
{
std::vector<T> empty_vector;
m_desc = cv::descr_of(empty_vector);
m_ref = std::move(empty_vector);
GAPI_Assert(isRWOwn());
}
else if (isRWOwn())
{
util::get<rw_own_t>(m_ref).clear();
}
else GAPI_Error("InternalError"); // shouldn't be called in *EXT modes
}
// Obtain a WRITE reference to underlying object
// Used by CPU kernel API wrappers when a kernel execution frame
// is created
std::vector<T>& wref()
{
GAPI_Assert(isRWExt() || isRWOwn());
if (isRWExt()) return *util::get<rw_ext_t>(m_ref);
if (isRWOwn()) return util::get<rw_own_t>(m_ref);
util::throw_error(std::logic_error("Impossible happened"));
}
// Obtain a READ reference to underlying object
// Used by CPU kernel API wrappers when a kernel execution frame
// is created
const std::vector<T>& rref() const
{
// ANY vector can be accessed for reading, even if it is declared for
// output. Example -- a GComputation from [in] to [out1,out2]
// where [out2] is a result of operation applied to [out1]:
//
// GComputation boundary
// . . . . . . .
// . .
// [in] ----> foo() ----> [out1]
// . . :
// . . . .:. . .
// . V .
// . bar() ---> [out2]
// . . . . . . . . . . . .
//
if (isROExt()) return *util::get<ro_ext_t>(m_ref);
if (isRWExt()) return *util::get<rw_ext_t>(m_ref);
if (isRWOwn()) return util::get<rw_own_t>(m_ref);
util::throw_error(std::logic_error("Impossible happened"));
}
virtual void mov(BasicVectorRef &v) override {
VectorRefT<T> *tv = dynamic_cast<VectorRefT<T>*>(&v);
GAPI_Assert(tv != nullptr);
wref() = std::move(tv->wref());
}
virtual const void* ptr() const override { return &rref(); }
virtual std::size_t size() const override { return rref().size(); }
};
// This class strips type information from VectorRefT<> and makes it usable
// in the G-API executables (carrying run-time data/information to kernels).
// Part of GRunArg.
// Its methods are typed proxies to VectorRefT<T>.
// VectorRef maintains "reference" semantics so two copies of VectorRef refer
// to the same underlying object.
// FIXME: Put a good explanation on why cv::OutputArray doesn't fit this role
class VectorRef
{
std::shared_ptr<BasicVectorRef> m_ref;
cv::detail::OpaqueKind m_kind = cv::detail::OpaqueKind::CV_UNKNOWN;
template<typename T> inline void check() const
{
GAPI_DbgAssert(dynamic_cast<VectorRefT<T>*>(m_ref.get()) != nullptr);
GAPI_Assert(sizeof(T) == m_ref->m_elemSize);
}
public:
VectorRef() = default;
template<typename T> explicit VectorRef(const std::vector<T>& vec)
: m_ref(new VectorRefT<T>(vec))
, m_kind(GOpaqueTraits<T>::kind)
{}
template<typename T> explicit VectorRef(std::vector<T>& vec)
: m_ref(new VectorRefT<T>(vec))
, m_kind(GOpaqueTraits<T>::kind)
{}
template<typename T> explicit VectorRef(std::vector<T>&& vec)
: m_ref(new VectorRefT<T>(std::move(vec)))
, m_kind(GOpaqueTraits<T>::kind)
{}
cv::detail::OpaqueKind getKind() const
{
return m_kind;
}
template<typename T> void reset()
{
if (!m_ref) m_ref.reset(new VectorRefT<T>());
check<T>();
storeKind<T>();
static_cast<VectorRefT<T>&>(*m_ref).reset();
}
template <typename T>
void storeKind()
{
m_kind = cv::detail::GOpaqueTraits<T>::kind;
}
template<typename T> std::vector<T>& wref()
{
check<T>();
return static_cast<VectorRefT<T>&>(*m_ref).wref();
}
template<typename T> const std::vector<T>& rref() const
{
check<T>();
return static_cast<VectorRefT<T>&>(*m_ref).rref();
}
// Check if was created for/from std::vector<T>
template <typename T> bool holds() const
{
if (!m_ref) return false;
using U = typename std::decay<T>::type;
return dynamic_cast<VectorRefT<U>*>(m_ref.get()) != nullptr;
}
void mov(VectorRef &v)
{
m_ref->mov(*v.m_ref);
}
cv::GArrayDesc descr_of() const
{
return m_ref->m_desc;
}
std::size_t size() const
{
return m_ref->size();
}
// May be used to uniquely identify this object internally
const void *ptr() const { return m_ref->ptr(); }
};
// Helper (FIXME: work-around?)
// stripping G types to their host types
// like cv::GArray<GMat> would still map to std::vector<cv::Mat>
// but not to std::vector<cv::GMat>
#if defined(GAPI_STANDALONE)
# define FLATTEN_NS cv::gapi::own
#else
# define FLATTEN_NS cv
#endif
template<class T> struct flatten_g;
template<> struct flatten_g<cv::GMat> { using type = FLATTEN_NS::Mat; };
template<> struct flatten_g<cv::GScalar> { using type = FLATTEN_NS::Scalar; };
template<class T> struct flatten_g<GArray<T>> { using type = std::vector<T>; };
template<class T> struct flatten_g { using type = T; };
#undef FLATTEN_NS
// FIXME: the above mainly duplicates "ProtoToParam" thing from gtyped.hpp
// but I decided not to include gtyped here - probably worth moving that stuff
// to some common place? (DM)
} // namespace detail
/** \addtogroup gapi_data_objects
* @{
*/
/**
* @brief `cv::GArray<T>` template class represents a list of objects
* of class `T` in the graph.
*
* `cv::GArray<T>` describes a functional relationship between
* operations consuming and producing arrays of objects of class
* `T`. The primary purpose of `cv::GArray<T>` is to represent a
* dynamic list of objects -- where the size of the list is not known
* at the graph construction or compile time. Examples include: corner
* and feature detectors (`cv::GArray<cv::Point>`), object detection
* and tracking results (`cv::GArray<cv::Rect>`). Programmers can use
* their own types with `cv::GArray<T>` in the custom operations.
*
* Similar to `cv::GScalar`, `cv::GArray<T>` may be value-initialized
* -- in this case a graph-constant value is associated with the object.
*
* `GArray<T>` is a virtual counterpart of `std::vector<T>`, which is
* usually used to represent the `GArray<T>` data in G-API during the
* execution.
*
* @sa `cv::GOpaque<T>`
*/
template<typename T> class GArray
{
public:
// Host type (or Flat type) - the type this GArray is actually
// specified to.
/// @private
using HT = typename detail::flatten_g<typename std::decay<T>::type>::type;
/**
* @brief Constructs a value-initialized `cv::GArray<T>`
*
* `cv::GArray<T>` objects may have their values
* associated at graph construction time. It is useful when
* some operation has a `cv::GArray<T>` input which doesn't change during
* the program execution, and is set only once. In this case,
* there is no need to declare such `cv::GArray<T>` as a graph input.
*
* @note The value of `cv::GArray<T>` may be overwritten by assigning some
* other `cv::GArray<T>` to the object using `operator=` -- on the
* assignment, the old association or value is discarded.
*
* @param v a std::vector<T> to associate with this
* `cv::GArray<T>` object. Vector data is copied into the
* `cv::GArray<T>` (no reference to the passed data is held).
*/
explicit GArray(const std::vector<HT>& v) // Constant value constructor
: m_ref(detail::GArrayU(detail::VectorRef(v))) { putDetails(); }
/**
* @overload
* @brief Constructs a value-initialized `cv::GArray<T>`
*
* @param v a std::vector<T> to associate with this
* `cv::GArray<T>` object. Vector data is moved into the `cv::GArray<T>`.
*/
explicit GArray(std::vector<HT>&& v) // Move-constructor
: m_ref(detail::GArrayU(detail::VectorRef(std::move(v)))) { putDetails(); }
/**
* @brief Constructs an empty `cv::GArray<T>`
*
* Normally, empty G-API data objects denote a starting point of
* the graph. When an empty `cv::GArray<T>` is assigned to a result
* of some operation, it obtains a functional link to this
* operation (and is not empty anymore).
*/
GArray() { putDetails(); } // Empty constructor
/// @private
explicit GArray(detail::GArrayU &&ref) // GArrayU-based constructor
: m_ref(ref) { putDetails(); } // (used by GCall, not for users)
/// @private
detail::GArrayU strip() const {
return m_ref;
}
/// @private
static void VCtor(detail::VectorRef& vref) {
vref.reset<HT>();
}
private:
void putDetails() {
m_ref.setConstructFcn(&VCtor);
m_ref.specifyType<HT>(); // FIXME: to unify those 2 to avoid excessive dynamic_cast
m_ref.storeKind<HT>(); //
}
detail::GArrayU m_ref;
};
/** @} */
} // namespace cv
#endif // OPENCV_GAPI_GARRAY_HPP
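Below is an illustrative sketch (all identifiers are hypothetical) of how cv::GArray<T> usually appears in practice: as the output type of a custom operation producing a dynamically-sized list, and as a value-initialized graph constant.

#include <vector>
#include <opencv2/gapi.hpp>
#include <opencv2/gapi/gkernel.hpp>

// Hypothetical operation returning a dynamic list of rectangles.
G_TYPED_KERNEL(GFindBoxes, <cv::GArray<cv::Rect>(cv::GMat)>, "sample.custom.find_boxes") {
    static cv::GArrayDesc outMeta(const cv::GMatDesc &) { return cv::empty_array_desc(); }
};

// A value-initialized GArray: the vector is copied and associated with the object.
cv::GArray<int> make_constant_list() {
    return cv::GArray<int>(std::vector<int>{1, 2, 3});
}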

View File

@ -1,63 +0,0 @@
// This file is part of OpenCV project.
// It is subject to the license terms in the LICENSE file found in the top-level directory
// of this distribution and at http://opencv.org/license.html.
//
// Copyright (C) 2019 Intel Corporation
#ifndef OPENCV_GAPI_GASYNC_CONTEXT_HPP
#define OPENCV_GAPI_GASYNC_CONTEXT_HPP
#if !defined(GAPI_STANDALONE)
# include <opencv2/core/cvdef.h>
#else // Without OpenCV
# include <opencv2/gapi/own/cvdefs.hpp>
#endif // !defined(GAPI_STANDALONE)
#include <opencv2/gapi/own/exports.hpp>
namespace cv {
namespace gapi{
/**
* @brief This namespace contains experimental G-API functionality,
* functions or structures in this namespace are subjects to change or
* removal in the future releases. This namespace also contains
* functions which API is not stabilized yet.
*/
namespace wip {
/**
* @brief A class to group async requests to cancel them in a single shot.
*
* GAsyncContext is passed as an argument to async() and async_apply() functions
*/
class GAPI_EXPORTS GAsyncContext{
std::atomic<bool> cancelation_requested = {false};
public:
/**
* @brief Start cancellation process for an associated request.
*
* User still has to wait for each individual request (either via the callback or the corresponding std::future object) to make sure it was actually canceled.
*
* @return true if it was a first request to cancel the context
*/
bool cancel();
/**
* @brief Returns true if cancellation was requested for this context.
*
* @return true if cancellation was requested for this context
*/
bool isCanceled() const;
};
class GAPI_EXPORTS GAsyncCanceled : public std::exception {
public:
virtual const char* what() const noexcept CV_OVERRIDE;
};
} // namespace wip
} // namespace gapi
} // namespace cv
#endif //OPENCV_GAPI_GASYNC_CONTEXT_HPP
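A short sketch (illustrative only) of how a GAsyncContext is typically used: one context is shared by several asynchronous requests so they can all be canceled in a single shot (the async() overloads taking a context are declared in gcompiled_async.hpp below).

#include <opencv2/gapi/gasync_context.hpp>

void cancel_pending(cv::gapi::wip::GAsyncContext &ctx) {
    if (!ctx.isCanceled())
        ctx.cancel();   // request cancellation for every async call bound to this context
}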

View File

@ -1,78 +0,0 @@
// This file is part of OpenCV project.
// It is subject to the license terms in the LICENSE file found in the top-level directory
// of this distribution and at http://opencv.org/license.html.
//
// Copyright (C) 2018 Intel Corporation
#ifndef OPENCV_GAPI_GCALL_HPP
#define OPENCV_GAPI_GCALL_HPP
#include <opencv2/gapi/garg.hpp> // GArg
#include <opencv2/gapi/gmat.hpp> // GMat
#include <opencv2/gapi/gscalar.hpp> // GScalar
#include <opencv2/gapi/gframe.hpp> // GFrame
#include <opencv2/gapi/garray.hpp> // GArray<T>
#include <opencv2/gapi/gopaque.hpp> // GOpaque<T>
namespace cv {
struct GKernel;
// The whole idea of this class is to represent an operation
// which is applied to arguments. This is part of public API,
// since it is what users should use to define kernel interfaces.
class GAPI_EXPORTS GCall final
{
public:
class Priv;
explicit GCall(const GKernel &k);
~GCall();
template<typename... Ts>
GCall& pass(Ts&&... args)
{
setArgs({cv::GArg(std::move(args))...});
return *this;
}
// A generic yield method - obtain a link to operator's particular GMat output
GMat yield (int output = 0);
GMatP yieldP (int output = 0);
GScalar yieldScalar(int output = 0);
GFrame yieldFrame (int output = 0);
template<class T> GArray<T> yieldArray(int output = 0)
{
return GArray<T>(yieldArray(output));
}
template<class T> GOpaque<T> yieldOpaque(int output = 0)
{
return GOpaque<T>(yieldOpaque(output));
}
// Internal use only
Priv& priv();
const Priv& priv() const;
// GKernel and params can be modified, it's needed for infer<Generic>,
// because information about output shapes doesn't exist in compile time
GKernel& kernel();
cv::util::any& params();
void setArgs(std::vector<GArg> &&args);
protected:
std::shared_ptr<Priv> m_priv;
// Public versions return a typed array or opaque, those are implementation details
detail::GArrayU yieldArray(int output = 0);
detail::GOpaqueU yieldOpaque(int output = 0);
};
} // namespace cv
#endif // OPENCV_GAPI_GCALL_HPP
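The sketch below (assuming a ready-made cv::GKernel descriptor k, e.g. produced by the G_TYPED_KERNEL machinery; the wrapper name is hypothetical) shows the pattern a kernel interface typically follows internally: GCall records the arguments and yields typed graph outputs.

#include <opencv2/gapi/gcall.hpp>
#include <opencv2/gapi/gkernel.hpp>

cv::GMat wrap_unary_op(const cv::GKernel &k, cv::GMat in) {
    cv::GCall call(k);
    call.pass(in);          // bind the operation's input(s)
    return call.yield(0);   // link to the operation's first GMat output
}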

View File

@ -1,309 +0,0 @@
// This file is part of OpenCV project.
// It is subject to the license terms in the LICENSE file found in the top-level directory
// of this distribution and at http://opencv.org/license.html.
//
// Copyright (C) 2018-2020 Intel Corporation
#ifndef OPENCV_GAPI_GCOMMON_HPP
#define OPENCV_GAPI_GCOMMON_HPP
#include <functional> // std::hash
#include <vector> // std::vector
#include <type_traits> // decay
#include <opencv2/gapi/opencv_includes.hpp>
#include <opencv2/gapi/util/any.hpp>
#include <opencv2/gapi/util/optional.hpp>
#include <opencv2/gapi/own/exports.hpp>
#include <opencv2/gapi/own/assert.hpp>
#include <opencv2/gapi/render/render_types.hpp>
#include <opencv2/gapi/s11n/base.hpp>
namespace cv {
class GMat; // FIXME: forward declaration for GOpaqueTraits
namespace detail
{
// This is a trait-like structure to mark backend-specific compile arguments
// with tags
template<typename T> struct CompileArgTag;
// These structures are tags which separate kernels and transformations
struct KernelTag
{};
struct TransformTag
{};
// This enum is utilized mostly by GArray and GOpaque to store and recognize their internal data
// types (aka Host type). Also it is widely used during serialization routine.
enum class OpaqueKind: int
{
CV_UNKNOWN, // Unknown, generic, opaque-to-GAPI data type unsupported in graph serialization
CV_BOOL, // bool user G-API data
CV_INT, // int user G-API data
CV_INT64, // int64_t user G-API data
CV_DOUBLE, // double user G-API data
CV_FLOAT, // float user G-API data
CV_UINT64, // uint64_t user G-API data
CV_STRING, // std::string user G-API data
CV_POINT, // cv::Point user G-API data
CV_POINT2F, // cv::Point2f user G-API data
CV_POINT3F, // cv::Point3f user G-API data
CV_SIZE, // cv::Size user G-API data
CV_RECT, // cv::Rect user G-API data
CV_SCALAR, // cv::Scalar user G-API data
CV_MAT, // cv::Mat user G-API data
CV_DRAW_PRIM, // cv::gapi::wip::draw::Prim user G-API data
};
// Type traits helper which simplifies the extraction of kind from type
template<typename T> struct GOpaqueTraits;
template<typename T> struct GOpaqueTraits { static constexpr const OpaqueKind kind = OpaqueKind::CV_UNKNOWN; };
template<> struct GOpaqueTraits<int> { static constexpr const OpaqueKind kind = OpaqueKind::CV_INT; };
template<> struct GOpaqueTraits<int64_t> { static constexpr const OpaqueKind kind = OpaqueKind::CV_INT64; };
template<> struct GOpaqueTraits<double> { static constexpr const OpaqueKind kind = OpaqueKind::CV_DOUBLE; };
template<> struct GOpaqueTraits<float> { static constexpr const OpaqueKind kind = OpaqueKind::CV_FLOAT; };
template<> struct GOpaqueTraits<uint64_t> { static constexpr const OpaqueKind kind = OpaqueKind::CV_UINT64; };
template<> struct GOpaqueTraits<bool> { static constexpr const OpaqueKind kind = OpaqueKind::CV_BOOL; };
template<> struct GOpaqueTraits<std::string> { static constexpr const OpaqueKind kind = OpaqueKind::CV_STRING; };
template<> struct GOpaqueTraits<cv::Size> { static constexpr const OpaqueKind kind = OpaqueKind::CV_SIZE; };
template<> struct GOpaqueTraits<cv::Scalar> { static constexpr const OpaqueKind kind = OpaqueKind::CV_SCALAR; };
template<> struct GOpaqueTraits<cv::Point> { static constexpr const OpaqueKind kind = OpaqueKind::CV_POINT; };
template<> struct GOpaqueTraits<cv::Point2f> { static constexpr const OpaqueKind kind = OpaqueKind::CV_POINT2F; };
template<> struct GOpaqueTraits<cv::Point3f> { static constexpr const OpaqueKind kind = OpaqueKind::CV_POINT3F; };
template<> struct GOpaqueTraits<cv::Mat> { static constexpr const OpaqueKind kind = OpaqueKind::CV_MAT; };
template<> struct GOpaqueTraits<cv::Rect> { static constexpr const OpaqueKind kind = OpaqueKind::CV_RECT; };
template<> struct GOpaqueTraits<cv::GMat> { static constexpr const OpaqueKind kind = OpaqueKind::CV_MAT; };
template<> struct GOpaqueTraits<cv::gapi::wip::draw::Prim>
{ static constexpr const OpaqueKind kind = OpaqueKind::CV_DRAW_PRIM; };
using GOpaqueTraitsArrayTypes = std::tuple<int, double, float, uint64_t, bool, std::string, cv::Size, cv::Scalar, cv::Point, cv::Point2f,
cv::Point3f, cv::Mat, cv::Rect, cv::gapi::wip::draw::Prim>;
// GOpaque is not supporting cv::Mat and cv::Scalar since there are GScalar and GMat types
using GOpaqueTraitsOpaqueTypes = std::tuple<int, double, float, uint64_t, bool, std::string, cv::Size, cv::Point, cv::Point2f, cv::Point3f,
cv::Rect, cv::gapi::wip::draw::Prim>;
} // namespace detail
// This definition is here because it is reused by both public(?) and internal
// modules. Keeping it here wouldn't expose public details (e.g., API-level)
// to components which are internal and operate on a lower-level entities
// (e.g., compiler, backends).
// FIXME: merge with ArgKind?
// FIXME: replace with variant[format desc]?
enum class GShape: int
{
GMAT,
GSCALAR,
GARRAY,
GOPAQUE,
GFRAME,
};
namespace gapi {
namespace s11n {
namespace detail {
template<typename T> struct wrap_serialize;
} // namespace detail
} // namespace s11n
} // namespace gapi
struct GCompileArg;
namespace detail {
template<typename T>
using is_compile_arg = std::is_same<GCompileArg, typename std::decay<T>::type>;
} // namespace detail
// CompileArg is an unified interface over backend-specific compilation
// information
// FIXME: Move to a separate file?
/** \addtogroup gapi_compile_args
* @{
*
* @brief Compilation arguments: data structures controlling the
* compilation process
*
* G-API comes with a number of graph compilation options which can be
* passed to cv::GComputation::apply() or
* cv::GComputation::compile(). Known compilation options are listed
* in this page, while extra backends may introduce their own
* compilation options (G-API transparently accepts _everything_ which
* can be passed to cv::compile_args(), it depends on underlying
* backends if an option would be interpreted or not).
*
* For example, if an example computation is executed like this:
*
* @snippet samples/cpp/tutorial_code/gapi/doc_snippets/api_ref_snippets.cpp graph_decl_apply
*
* Extra parameter specifying which kernels to compile with can be
* passed like this:
*
* @snippet samples/cpp/tutorial_code/gapi/doc_snippets/api_ref_snippets.cpp apply_with_param
*/
/**
* @brief Represents an arbitrary compilation argument.
*
* Any value can be wrapped into cv::GCompileArg, but only known ones
* (to G-API or its backends) can be interpreted correctly.
*
* Normally objects of this class shouldn't be created manually, use
* cv::compile_args() function which automatically wraps everything
* passed in (a variadic template parameter pack) into a vector of
* cv::GCompileArg objects.
*/
struct GCompileArg
{
public:
// NB: Required for python bindings
GCompileArg() = default;
std::string tag;
// FIXME: use decay in GArg/other trait-based wrapper before leg is shot!
template<typename T, typename std::enable_if<!detail::is_compile_arg<T>::value, int>::type = 0>
explicit GCompileArg(T &&t)
: tag(detail::CompileArgTag<typename std::decay<T>::type>::tag())
, serializeF(cv::gapi::s11n::detail::has_S11N_spec<T>::value ?
&cv::gapi::s11n::detail::wrap_serialize<T>::serialize :
nullptr)
, arg(t)
{
}
template<typename T> T& get()
{
return util::any_cast<T>(arg);
}
template<typename T> const T& get() const
{
return util::any_cast<T>(arg);
}
void serialize(cv::gapi::s11n::IOStream& os) const
{
if (serializeF)
{
serializeF(os, *this);
}
}
private:
std::function<void(cv::gapi::s11n::IOStream&, const GCompileArg&)> serializeF;
util::any arg;
};
using GCompileArgs = std::vector<GCompileArg>;
inline cv::GCompileArgs& operator += ( cv::GCompileArgs &lhs,
const cv::GCompileArgs &rhs)
{
lhs.reserve(lhs.size() + rhs.size());
lhs.insert(lhs.end(), rhs.begin(), rhs.end());
return lhs;
}
/**
* @brief Wraps a list of arguments (a parameter pack) into a vector of
* compilation arguments (cv::GCompileArg).
*/
template<typename... Ts> GCompileArgs compile_args(Ts&&... args)
{
return GCompileArgs{ GCompileArg(args)... };
}
namespace gapi
{
/**
* @brief Retrieves particular compilation argument by its type from
* cv::GCompileArgs
*/
template<typename T>
inline cv::util::optional<T> getCompileArg(const cv::GCompileArgs &args)
{
for (auto &compile_arg : args)
{
if (compile_arg.tag == cv::detail::CompileArgTag<T>::tag())
{
return cv::util::optional<T>(compile_arg.get<T>());
}
}
return cv::util::optional<T>();
}
namespace s11n {
namespace detail {
template<typename T> struct wrap_serialize
{
static void serialize(IOStream& os, const GCompileArg& arg)
{
using DT = typename std::decay<T>::type;
S11N<DT>::serialize(os, arg.get<DT>());
}
};
} // namespace detail
} // namespace s11n
} // namespace gapi
/** @} gapi_compile_args */
/**
* @brief Ask G-API to dump compiled graph in Graphviz format under
* the given file name.
*
* Specifies a graph dump path (path to .dot file to be generated).
* G-API will dump a .dot file under specified path during a
* compilation process if this flag is passed.
*/
struct graph_dump_path
{
std::string m_dump_path;
};
/**
* @brief Ask G-API to use the threaded executor when cv::GComputation
* is compiled via the cv::GComputation::compile method.
*
* Specifies the number of threads that should be used by the executor.
*/
struct GAPI_EXPORTS use_threaded_executor
{
use_threaded_executor();
explicit use_threaded_executor(const uint32_t nthreads);
uint32_t num_threads;
};
namespace detail
{
template<> struct CompileArgTag<cv::graph_dump_path>
{
static const char* tag() { return "gapi.graph_dump_path"; }
};
template<> struct CompileArgTag<cv::use_threaded_executor>
{
static const char* tag() { return "gapi.threaded_executor"; }
};
}
} // namespace cv
// std::hash overload for GShape
namespace std
{
template<> struct hash<cv::GShape>
{
size_t operator() (cv::GShape sh) const
{
return std::hash<int>()(static_cast<int>(sh));
}
};
} // namespace std
#endif // OPENCV_GAPI_GCOMMON_HPP
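A small usage sketch (illustrative only, function names are made up) of the compilation-argument helpers declared above: options are wrapped with cv::compile_args() and can be queried back with cv::gapi::getCompileArg().

#include <opencv2/gapi/gcommon.hpp>

cv::GCompileArgs make_args() {
    return cv::compile_args(cv::graph_dump_path{"graph.dot"},   // dump the compiled graph
                            cv::use_threaded_executor(4));      // use 4 executor threads
}

void inspect(const cv::GCompileArgs &args) {
    auto dump = cv::gapi::getCompileArg<cv::graph_dump_path>(args);
    if (dump.has_value()) { /* dump.value().m_dump_path == "graph.dot" */ }
}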

View File

@ -1,232 +0,0 @@
// This file is part of OpenCV project.
// It is subject to the license terms in the LICENSE file found in the top-level directory
// of this distribution and at http://opencv.org/license.html.
//
// Copyright (C) 2018-2020 Intel Corporation
#ifndef OPENCV_GAPI_GCOMPILED_HPP
#define OPENCV_GAPI_GCOMPILED_HPP
#include <vector>
#include <opencv2/gapi/opencv_includes.hpp>
#include <opencv2/gapi/own/assert.hpp>
#include <opencv2/gapi/garg.hpp>
namespace cv {
// This class represents a compiled computation.
// In theory (and ideally), it can be used w/o the rest of APIs.
// In theory (and ideally), it can be serialized/deserialized.
// It can enable scenarios like deployment to an autonomous device, FuSa, etc.
//
// Currently GCompiled assumes all GMats you used to pass data to G-API
// are valid and not destroyed while you use a GCompiled object.
//
// FIXME: In future, there should be a way to name I/O objects and specify it
// to GCompiled externally (for example, when it is loaded on the target system).
/**
* \addtogroup gapi_main_classes
* @{
*/
/**
* @brief Represents a compiled computation (graph). Can only be used
* with image / data formats & resolutions it was compiled for, with
* some exceptions.
*
* This class represents a product of graph compilation (calling
* cv::GComputation::compile()). Objects of this class actually do
* data processing, and graph execution is encapsulated into objects
* of this class. The execution model itself depends on kernels and
* backends which were used during the compilation, see @ref
* gapi_compile_args for details.
*
* In a general case, GCompiled objects can be applied to data only in
* that formats/resolutions they were compiled for (see @ref
* gapi_meta_args). However, if the underlying backends allow, a
* compiled object can be _reshaped_ to handle data (images) of
* different resolution, though formats and types must remain the same.
*
* GCompiled is very similar to `std::function<>` in its semantics --
* running it looks like a function call in the user code.
*
* At the moment, GCompiled objects are not reentrant -- generally,
* the objects are stateful since graph execution itself is a stateful
* process and this state is now maintained in GCompiled's own memory
* (not on the process stack).
*
* At the same time, two different GCompiled objects produced from the
* single cv::GComputation are completely independent and can be used
* concurrently.
*
* @sa GStreamingCompiled
*/
class GAPI_EXPORTS GCompiled
{
public:
/// @private
class GAPI_EXPORTS Priv;
/**
* @brief Constructs an empty object
*/
GCompiled();
/**
* @brief Run the compiled computation, a generic version.
*
* @param ins vector of inputs to process.
* @param outs vector of outputs to produce.
*
* Input/output vectors must have the same number of elements as
* defined in the cv::GComputation protocol (at the moment of its
* construction). Shapes of elements also must conform to protocol
* (e.g. cv::Mat needs to be passed where cv::GMat has been
* declared as input, and so on). Run-time exception is generated
* otherwise.
*
* Objects in output vector may remain empty (like cv::Mat) --
* G-API will automatically initialize output objects to proper formats.
*
* @note Don't construct GRunArgs/GRunArgsP objects manually, use
* cv::gin()/cv::gout() wrappers instead.
*/
void operator() (GRunArgs &&ins, GRunArgsP &&outs); // Generic arg-to-arg
#if !defined(GAPI_STANDALONE)
/**
* @brief Execute an unary computation
*
* @overload
* @param in input cv::Mat for unary computation
* @param out output cv::Mat for unary computation
* process.
*/
void operator() (cv::Mat in, cv::Mat &out); // Unary overload
/**
* @brief Execute an unary computation
*
* @overload
* @param in input cv::Mat for unary computation
* @param out output cv::Scalar for unary computation
* process.
*/
void operator() (cv::Mat in, cv::Scalar &out); // Unary overload (scalar)
/**
* @brief Execute a binary computation
*
* @overload
* @param in1 first input cv::Mat for binary computation
* @param in2 second input cv::Mat for binary computation
* @param out output cv::Mat for binary computation
* process.
*/
void operator() (cv::Mat in1, cv::Mat in2, cv::Mat &out); // Binary overload
/**
* @brief Execute a binary computation
*
* @overload
* @param in1 first input cv::Mat for binary computation
* @param in2 second input cv::Mat for binary computation
* @param out output cv::Scalar for binary computation
* process.
*/
void operator() (cv::Mat in1, cv::Mat in2, cv::Scalar &out); // Binary overload (scalar)
/**
* @brief Execute a computation with arbitrary number of
* inputs/outputs.
*
* @overload
* @param ins vector of input cv::Mat objects to process by the
* computation.
* @param outs vector of output cv::Mat objects to produce by the
* computation.
*
* Numbers of elements in ins/outs vectors must match numbers of
* inputs/outputs which were used to define the source GComputation.
*/
void operator() (const std::vector<cv::Mat> &ins, // Compatibility overload
const std::vector<cv::Mat> &outs);
#endif // !defined(GAPI_STANDALONE)
/// @private
Priv& priv();
/**
* @brief Check if compiled object is valid (non-empty)
*
* @return true if the object is runnable (valid), false otherwise
*/
explicit operator bool () const;
/**
* @brief Vector of metadata this graph was compiled for.
*
* @return the metadata vector which was passed to
* cv::GComputation::compile() to produce this compiled object or,
* if the object has been successfully reshaped, the latest
* metadata vector passed to reshape().
*/
const GMetaArgs& metas() const; // Meta passed to compile()
/**
* @brief Vector of metadata descriptions of graph outputs
*
* @return vector with formats/resolutions of graph's output
* objects, auto-inferred from input metadata vector by
* operations which form this computation.
*
* @note GCompiled objects produced from the same
* cv::GComputation graph with different input metas may return
* different values in this vector.
*/
const GMetaArgs& outMetas() const;
/**
* @brief Check if the underlying backends support reshape or not.
*
* @return true if supported, false otherwise.
*/
bool canReshape() const;
/**
* @brief Reshape a compiled graph to support new image
* resolutions.
*
* Throws an exception if an error occurs.
*
* @param inMetas new metadata to reshape on. Vector size and
* metadata shapes must match the computation's protocol.
* @param args compilation arguments to use.
*/
// FIXME: Why it requires compile args?
void reshape(const GMetaArgs& inMetas, const GCompileArgs& args);
/**
* @brief Prepare inner kernels states for a new video-stream.
*
* GCompiled objects may be used to process video streams frame by frame.
* In this case, a GCompiled is called on every image frame individually.
* Starting OpenCV 4.4, some kernels in the graph may have their internal
* states (see GAPI_OCV_KERNEL_ST for the OpenCV backend).
* In this case, if user starts processing another video stream with
* this GCompiled, this method needs to be called to let kernels re-initialize
* their internal states to a new video stream.
*/
void prepareForNewStream();
protected:
/// @private
std::shared_ptr<Priv> m_priv;
};
/** @} */
}
#endif // OPENCV_GAPI_GCOMPILED_HPP
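A minimal usage sketch (illustrative only): a GCompiled object is produced once for a fixed input format and then called as a functor on every frame, avoiding recompilation.

#include <vector>
#include <opencv2/gapi.hpp>

void process_frames(cv::GComputation &graph, const std::vector<cv::Mat> &frames) {
    if (frames.empty()) return;
    cv::GCompiled cc = graph.compile(cv::descr_of(frames.front()));  // fix format/resolution
    cv::Mat out;
    for (const auto &frame : frames)
        cc(cv::gin(frame), cv::gout(out));                           // no recompilation here
}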

View File

@ -1,73 +0,0 @@
// This file is part of OpenCV project.
// It is subject to the license terms in the LICENSE file found in the top-level directory
// of this distribution and at http://opencv.org/license.html.
//
// Copyright (C) 2019 Intel Corporation
#ifndef OPENCV_GAPI_GCOMPILED_ASYNC_HPP
#define OPENCV_GAPI_GCOMPILED_ASYNC_HPP
#include <future> //for std::future
#include <exception> //for std::exception_ptr
#include <functional> //for std::function
#include <opencv2/gapi/garg.hpp>
#include <opencv2/gapi/own/exports.hpp>
namespace cv {
//fwd declaration
class GCompiled;
namespace gapi{
namespace wip {
class GAsyncContext;
/**
These functions asynchronously (i.e. probably on a separate thread of execution) call the GCompiled::operator() member function of their first argument with copies of the rest of the arguments (except the callback) passed in.
The difference between the functions is the way the completion notification is obtained: via a callback or by waiting on a std::future object.
If an exception occurs during execution, it is transferred to the callback (via a function parameter) or passed to the future (and will be thrown on a call to std::future::get).
N.B.:
Input arguments are copied on the call to the async function (actually on the call to cv::gin) and thus do not have to outlive the completion of the asynchronous activity.
Output arguments, in contrast, are "captured" by reference (pointer) and therefore _must_ outlive the asynchronous activity
(i.e. live at least until the callback is called or the future is unblocked).
@param gcmpld Compiled computation (graph) to start asynchronously
@param callback Callback to be called when execution of gcmpld is done
@param ins Input parameters for gcmpld
@param outs Output parameters for gcmpld
*/
GAPI_EXPORTS void async(GCompiled& gcmpld, std::function<void(std::exception_ptr)>&& callback, GRunArgs &&ins, GRunArgsP &&outs);
/** @overload
@param gcmpld Compiled computation (graph) to run asynchronously
@param callback Callback to be called when execution of gcmpld is done
@param ins Input parameters for gcmpld
@param outs Output parameters for gcmpld
@param ctx Context this request belongs to
@see async GAsyncContext
*/
GAPI_EXPORTS void async(GCompiled& gcmpld, std::function<void(std::exception_ptr)>&& callback, GRunArgs &&ins, GRunArgsP &&outs, GAsyncContext& ctx);
/** @overload
@param gcmpld Compiled computation (graph) to run asynchronously
@param ins Input parameters for gcmpld
@param outs Output parameters for gcmpld
@return std::future<void> object to wait for completion of async operation
@see async
*/
GAPI_EXPORTS std::future<void> async(GCompiled& gcmpld, GRunArgs &&ins, GRunArgsP &&outs);
/**
@param gcmpld Compiled computation (graph) to run asynchronously
@param ins Input parameters for gcmpld
@param outs Output parameters for gcmpld
@param ctx Context this request belongs to
@return std::future<void> object to wait for completion of async operation
@see async GAsyncContext
*/
GAPI_EXPORTS std::future<void> async(GCompiled& gcmpld, GRunArgs &&ins, GRunArgsP &&outs, GAsyncContext& ctx);
} // namespace wip
} // namespace gapi
} // namespace cv
#endif // OPENCV_GAPI_GCOMPILED_ASYNC_HPP
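An illustrative sketch of the future-based async() overload declared above, combined with a GAsyncContext for cancellation; note that the output must stay alive until the returned future is ready.

#include <opencv2/gapi/gcompiled_async.hpp>
#include <opencv2/gapi/gasync_context.hpp>

void run_async(cv::GCompiled &cc, const cv::Mat &in, cv::Mat &out,
               cv::gapi::wip::GAsyncContext &ctx) {
    auto done = cv::gapi::wip::async(cc, cv::gin(in), cv::gout(out), ctx);
    // ... do other work here ...
    done.get();   // rethrows if the asynchronous execution failed or was canceled
}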

View File

@ -1,139 +0,0 @@
// This file is part of OpenCV project.
// It is subject to the license terms in the LICENSE file found in the top-level directory
// of this distribution and at http://opencv.org/license.html.
//
// Copyright (C) 2018-2019 Intel Corporation
#ifndef OPENCV_GAPI_GCOMPOUNDKERNEL_HPP
#define OPENCV_GAPI_GCOMPOUNDKERNEL_HPP
#include <opencv2/gapi/opencv_includes.hpp>
#include <opencv2/gapi/gcommon.hpp>
#include <opencv2/gapi/gkernel.hpp>
#include <opencv2/gapi/garg.hpp>
namespace cv {
namespace gapi
{
namespace compound
{
// FIXME: The user does not need to know about this function.
// It is needed so that the user may define compound kernels (like CPU kernels).
GAPI_EXPORTS cv::gapi::GBackend backend();
} // namespace compound
} // namespace gapi
namespace detail
{
struct GCompoundContext
{
explicit GCompoundContext(const GArgs& in_args);
template<typename T>
const T& inArg(int input) { return m_args.at(input).get<T>(); }
GArgs m_args;
GArgs m_results;
};
class GAPI_EXPORTS GCompoundKernel
{
// A compound kernel must use all of its inputs
public:
using F = std::function<void(GCompoundContext& ctx)>;
explicit GCompoundKernel(const F& f);
void apply(GCompoundContext& ctx);
protected:
F m_f;
};
template<typename T> struct get_compound_in
{
static T get(GCompoundContext &ctx, int idx) { return ctx.inArg<T>(idx); }
};
template<typename U> struct get_compound_in<cv::GArray<U>>
{
static cv::GArray<U> get(GCompoundContext &ctx, int idx)
{
auto array = cv::GArray<U>();
ctx.m_args[idx] = GArg(array);
return array;
}
};
template<typename U> struct get_compound_in<cv::GOpaque<U>>
{
static cv::GOpaque<U> get(GCompoundContext &ctx, int idx)
{
auto opaq = cv::GOpaque<U>();
ctx.m_args[idx] = GArg(opaq);
return opaq;
}
};
template<> struct get_compound_in<cv::GMatP>
{
static cv::GMatP get(GCompoundContext &ctx, int idx)
{
auto mat = cv::GMatP();
ctx.m_args[idx] = GArg(mat);
return mat;
}
};
template<typename, typename, typename>
struct GCompoundCallHelper;
template<typename Impl, typename... Ins, typename... Outs>
struct GCompoundCallHelper<Impl, std::tuple<Ins...>, std::tuple<Outs...> >
{
template<int... IIs, int... OIs>
static void expand_impl(GCompoundContext &ctx, detail::Seq<IIs...>, detail::Seq<OIs...>)
{
auto result = Impl::expand(get_compound_in<Ins>::get(ctx, IIs)...);
auto tuple_return = tuple_wrap_helper<decltype(result)>::get(std::move(result));
ctx.m_results = { cv::GArg(std::get<OIs>(tuple_return))... };
}
static void expand(GCompoundContext &ctx)
{
expand_impl(ctx,
typename detail::MkSeq<sizeof...(Ins)>::type(),
typename detail::MkSeq<sizeof...(Outs)>::type());
}
};
template<class Impl, class K>
class GCompoundKernelImpl: public cv::detail::GCompoundCallHelper<Impl, typename K::InArgs, typename K::OutArgs>,
public cv::detail::KernelTag
{
using P = cv::detail::GCompoundCallHelper<Impl, typename K::InArgs, typename K::OutArgs>;
public:
using API = K;
static cv::gapi::GBackend backend() { return cv::gapi::compound::backend(); }
static GCompoundKernel kernel() { return GCompoundKernel(&P::expand); }
};
} // namespace detail
/**
* Declares a new compound kernel. See this
* [documentation chapter](@ref gapi_kernel_compound)
* on compound kernels for more details.
*
* @param Name type name for new kernel
* @param API the interface this kernel implements
*/
#define GAPI_COMPOUND_KERNEL(Name, API) \
struct Name: public cv::detail::GCompoundKernelImpl<Name, API>
} // namespace cv
#endif // OPENCV_GAPI_GCOMPOUNDKERNEL_HPP
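An illustrative sketch of GAPI_COMPOUND_KERNEL (the operation name, id and formula are hypothetical): the implementation is expressed purely in terms of other G-API operations, so no backend-specific code is required.

#include <opencv2/gapi.hpp>
#include <opencv2/gapi/core.hpp>
#include <opencv2/gapi/imgproc.hpp>
#include <opencv2/gapi/gcompoundkernel.hpp>

// Hypothetical operation: gradient magnitude of a single-channel image.
G_TYPED_KERNEL(GSobelMagnitude, <cv::GMat(cv::GMat)>, "sample.custom.sobel_magnitude") {
    static cv::GMatDesc outMeta(cv::GMatDesc in) { return in.withDepth(CV_32F); }
};

// Compound implementation: just a sub-graph of existing operations.
GAPI_COMPOUND_KERNEL(GSobelMagnitudeImpl, GSobelMagnitude) {
    static cv::GMat expand(cv::GMat in) {
        cv::GMat gx = cv::gapi::Sobel(in, CV_32F, 1, 0);
        cv::GMat gy = cv::gapi::Sobel(in, CV_32F, 0, 1);
        return cv::gapi::sqrt(cv::gapi::add(cv::gapi::mul(gx, gx),
                                            cv::gapi::mul(gy, gy)));
    }
};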

View File

@ -1,581 +0,0 @@
// This file is part of OpenCV project.
// It is subject to the license terms in the LICENSE file found in the top-level directory
// of this distribution and at http://opencv.org/license.html.
//
// Copyright (C) 2018 Intel Corporation
#ifndef OPENCV_GAPI_GCOMPUTATION_HPP
#define OPENCV_GAPI_GCOMPUTATION_HPP
#include <functional>
#include <opencv2/gapi/util/util.hpp>
#include <opencv2/gapi/gcommon.hpp>
#include <opencv2/gapi/gproto.hpp>
#include <opencv2/gapi/garg.hpp>
#include <opencv2/gapi/gcompiled.hpp>
#include <opencv2/gapi/gstreaming.hpp>
namespace cv {
namespace detail
{
// FIXME: move to algorithm, cover with separate tests
// FIXME: replace with O(1) version (both memory and compilation time)
template<typename...>
struct last_type;
template<typename T>
struct last_type<T> { using type = T;};
template<typename T, typename... Ts>
struct last_type<T, Ts...> { using type = typename last_type<Ts...>::type; };
template<typename... Ts>
using last_type_t = typename last_type<Ts...>::type;
}
// Forward-declare the serialization objects
namespace gapi {
namespace s11n {
struct IIStream;
struct IOStream;
} // namespace s11n
} // namespace gapi
/**
* \addtogroup gapi_main_classes
* @{
*
* @brief G-API classes for constructed and compiled graphs.
*/
/**
* @brief GComputation class represents a captured computation
* graph. GComputation objects form boundaries for expression code
* user writes with G-API, allowing to compile and execute it.
*
* G-API computations are defined with input/output data
* objects. G-API will track automatically which operations connect
* specified outputs to the inputs, forming up a call graph to be
* executed. The below example expresses calculation of Sobel operator
* for edge detection (\f$G = \sqrt{G_x^2 + G_y^2}\f$):
*
* @snippet samples/cpp/tutorial_code/gapi/doc_snippets/api_ref_snippets.cpp graph_def
*
* Full pipeline can be now captured with this object declaration:
*
* @snippet samples/cpp/tutorial_code/gapi/doc_snippets/api_ref_snippets.cpp graph_cap_full
*
* Input/output data objects on which a call graph should be
* reconstructed are passed using special wrappers cv::GIn and
* cv::GOut. G-API will track automatically which operations form a
* path from inputs to outputs and build the execution graph appropriately.
*
* Note that cv::GComputation doesn't take ownership of the data objects
* it is defined on. Moreover, multiple GComputation objects may be
* defined on the same expressions, e.g. a smaller pipeline which
* expects that image gradients are already pre-calculated may be
* defined like this:
*
* @snippet samples/cpp/tutorial_code/gapi/doc_snippets/api_ref_snippets.cpp graph_cap_sub
*
* The resulting graph would expect two inputs and produce one
* output. In this case, it doesn't matter if gx/gy data objects are
* results of cv::gapi::Sobel operators -- G-API will stop unrolling
* expressions and building the underlying graph once it reaches these
* data objects.
*
* The way GComputation is defined is important as its definition
* specifies the graph _protocol_ -- the way the graph should be
* used. The protocol is defined by the number of inputs, the number
* of outputs, and the shapes of those inputs and outputs.
*
* In the above example, sobelEdge expects one Mat on input and
* produces one Mat, while sobelEdgeSub expects two Mats on input and
* produces one Mat. GComputation's protocol defines how other
* computation methods should be used -- cv::GComputation::compile() and
* cv::GComputation::apply(). For example, if a graph is defined on
* two GMat inputs, two cv::Mat objects have to be passed to apply()
* for execution. GComputation checks protocol correctness at run-time,
* so passing a different number of objects to apply() or passing
* cv::Scalar instead of cv::Mat there would compile well as C++
* source but raise an exception at run-time. G-API also comes with a
* typed wrapper cv::GComputationT<> which introduces this type-checking in
* compile-time.
*
* cv::GComputation itself is a thin object which just captures what
* the graph is. The compiled graph (which actually processes data) is
* represented by class GCompiled. Use the compile() method to generate a
* compiled graph with the given compile options. cv::GComputation can
* also be used to process data with implicit graph compilation
* on-the-fly, see apply() for details.
*
* GComputation is a reference-counted object -- once defined, all its
* copies will refer to the same instance.
*
* @sa GCompiled
*/
class GAPI_EXPORTS_W GComputation
{
public:
class Priv;
typedef std::function<GComputation()> Generator;
// Various constructors enable different ways to define a computation: /////
// 1. Generic constructors
/**
* @brief Define a computation using a generator function.
*
* A graph can be defined in place directly at the moment of its
* construction with a lambda:
*
* @snippet samples/cpp/tutorial_code/gapi/doc_snippets/api_ref_snippets.cpp graph_gen
*
* This may be useful since all temporary objects (cv::GMats) and
* namespaces can be localized to the scope of the lambda, without
* contaminating the parent scope with otherwise unnecessary objects
* and information.
*
* @param gen generator function which returns a cv::GComputation,
* see Generator.
*/
GComputation(const Generator& gen); // Generator
// overload
/**
* @brief Generic GComputation constructor.
*
* Constructs a new graph with a given protocol, specified as a
* flow of operations connecting input/output objects. Throws if
* the passed boundaries are invalid, e.g. if there's no
* functional dependency (path) between the given outputs and inputs.
*
* @param ins Input data vector.
* @param outs Output data vector.
*
* @note Don't construct GProtoInputArgs/GProtoOutputArgs objects
* directly, use cv::GIn()/cv::GOut() wrapper functions instead.
*
* @sa @ref gapi_data_objects
*/
GAPI_WRAP GComputation(GProtoInputArgs &&ins,
GProtoOutputArgs &&outs); // Arg-to-arg overload
// 2. Syntax sugar and compatibility overloads
/**
* @brief Defines a unary (one input -- one output) computation
*
* @overload
* @param in input GMat of the defined unary computation
* @param out output GMat of the defined unary computation
*/
GAPI_WRAP GComputation(GMat in, GMat out); // Unary overload
/**
* @brief Defines a unary (one input -- one output) computation
*
* @overload
* @param in input GMat of the defined unary computation
* @param out output GScalar of the defined unary computation
*/
GAPI_WRAP GComputation(GMat in, GScalar out); // Unary overload (scalar)
/**
* @brief Defines a binary (two inputs -- one output) computation
*
* @overload
* @param in1 first input GMat of the defined binary computation
* @param in2 second input GMat of the defined binary computation
* @param out output GMat of the defined binary computation
*/
GAPI_WRAP GComputation(GMat in1, GMat in2, GMat out); // Binary overload
/**
* @brief Defines a binary (two inputs -- one output) computation
*
* @overload
* @param in1 first input GMat of the defined binary computation
* @param in2 second input GMat of the defined binary computation
* @param out output GScalar of the defined binary computation
*/
GComputation(GMat in1, GMat in2, GScalar out); // Binary
// overload
// (scalar)
/**
* @brief Defines a computation with arbitrary input/output number.
*
* @overload
* @param ins vector of inputs GMats for this computation
* @param outs vector of outputs GMats for this computation
*
* Use this overload for cases when the number of computation
* inputs/outputs is not known at compile time -- e.g. when the graph
* is generated programmatically to build an image pyramid with
* a given number of levels, etc.
*/
GComputation(const std::vector<GMat> &ins, // Compatibility overload
const std::vector<GMat> &outs);
// Various versions of apply(): ////////////////////////////////////////////
// 1. Generic apply()
/**
* @brief Compile graph on-the-fly and immediately execute it on
* the inputs data vectors.
*
* The number of input/output data objects must match GComputation's
* protocol; also, the types of host data objects (cv::Mat, cv::Scalar)
* must match the shapes of data objects from the protocol (cv::GMat,
* cv::GScalar). If there's a mismatch, a run-time exception will
* be generated.
*
* Internally, a cv::GCompiled object is created for the given
* input format configuration, which then is executed on the input
* data immediately. cv::GComputation caches compiled objects
* produced within apply() -- if this method is called next
* time with the same input parameters (image formats, image
* resolution, etc), the underlying compiled graph will be reused
* without recompilation. If new metadata doesn't match the cached
* one, the underlying compiled graph is regenerated.
*
* @note compile() always triggers a compilation process and
* produces a new GCompiled object regardless of whether a similar one has
* been cached via apply() or not.
*
* @param ins vector of input data to process. Don't create
* GRunArgs object manually, use cv::gin() wrapper instead.
* @param outs vector of output data to fill results in. cv::Mat
* objects may be empty in this vector; G-API will automatically
* initialize them with the required format & dimensions. Don't
* create GRunArgsP object manually, use cv::gout() wrapper instead.
* @param args a list of compilation arguments to pass to the
* underlying compilation process. Don't create GCompileArgs
* object manually, use cv::compile_args() wrapper instead.
*
* @sa @ref gapi_data_objects, @ref gapi_compile_args
*/
void apply(GRunArgs &&ins, GRunArgsP &&outs, GCompileArgs &&args = {}); // Arg-to-arg overload
/// @private -- Exclude this function from OpenCV documentation
GAPI_WRAP GRunArgs apply(const cv::detail::ExtractArgsCallback &callback,
GCompileArgs &&args = {});
/// @private -- Exclude this function from OpenCV documentation
void apply(const std::vector<cv::Mat>& ins, // Compatibility overload
const std::vector<cv::Mat>& outs,
GCompileArgs &&args = {});
// 2. Syntax sugar and compatibility overloads
#if !defined(GAPI_STANDALONE)
/**
* @brief Execute a unary computation (with compilation on the fly)
*
* @overload
* @param in input cv::Mat for unary computation
* @param out output cv::Mat for unary computation
* @param args compilation arguments for underlying compilation
* process.
*/
void apply(cv::Mat in, cv::Mat &out, GCompileArgs &&args = {}); // Unary overload
/**
* @brief Execute a unary computation (with compilation on the fly)
*
* @overload
* @param in input cv::Mat for unary computation
* @param out output cv::Scalar for unary computation
* @param args compilation arguments for underlying compilation
* process.
*/
void apply(cv::Mat in, cv::Scalar &out, GCompileArgs &&args = {}); // Unary overload (scalar)
/**
* @brief Execute a binary computation (with compilation on the fly)
*
* @overload
* @param in1 first input cv::Mat for binary computation
* @param in2 second input cv::Mat for binary computation
* @param out output cv::Mat for binary computation
* @param args compilation arguments for underlying compilation
* process.
*/
void apply(cv::Mat in1, cv::Mat in2, cv::Mat &out, GCompileArgs &&args = {}); // Binary overload
/**
* @brief Execute a binary computation (with compilation on the fly)
*
* @overload
* @param in1 first input cv::Mat for binary computation
* @param in2 second input cv::Mat for binary computation
* @param out output cv::Scalar for binary computation
* @param args compilation arguments for underlying compilation
* process.
*/
void apply(cv::Mat in1, cv::Mat in2, cv::Scalar &out, GCompileArgs &&args = {}); // Binary overload (scalar)
/**
* @brief Execute a computation with arbitrary number of
* inputs/outputs (with compilation on-the-fly).
*
* @overload
* @param ins vector of input cv::Mat objects to process by the
* computation.
* @param outs vector of output cv::Mat objects to produce by the
* computation.
* @param args compilation arguments for underlying compilation
* process.
*
* Numbers of elements in ins/outs vectors must match numbers of
* inputs/outputs which were used to define this GComputation.
*/
void apply(const std::vector<cv::Mat>& ins, // Compatibility overload
std::vector<cv::Mat>& outs,
GCompileArgs &&args = {});
#endif // !defined(GAPI_STANDALONE)
// Various versions of compile(): //////////////////////////////////////////
// 1. Generic compile() - requires metas to be passed as vector
/**
* @brief Compile the computation for specific input format(s).
*
* This method triggers compilation process and produces a new
* GCompiled object which then can process data of the given
* format. Passing data with different format to the compiled
* computation will generate a run-time exception.
*
* @param in_metas vector of input metadata configuration. Grab
* metadata from real data objects (like cv::Mat or cv::Scalar)
* using cv::descr_of(), or create it on your own.
* @param args compilation arguments for this compilation
* process. Compilation arguments directly affect what kind of
* executable object would be produced, e.g. which kernels (and
* thus, devices) would be used to execute computation.
*
* @return GCompiled, an executable computation compiled
* specifically for the given input parameters.
*
* @sa @ref gapi_compile_args
*/
GCompiled compile(GMetaArgs &&in_metas, GCompileArgs &&args = {});
// 2. Syntax sugar - variadic list of metas, no extra compile args
// FIXME: SFINAE looks ugly in the generated documentation
/**
* @overload
*
* Takes a variadic parameter pack with metadata
* descriptors for which a compiled object needs to be produced.
*
* @return GCompiled, an executable computation compiled
* specifically for the given input parameters.
*/
template<typename... Ts>
auto compile(const Ts&... metas) ->
typename std::enable_if<detail::are_meta_descrs<Ts...>::value, GCompiled>::type
{
return compile(GMetaArgs{GMetaArg(metas)...}, GCompileArgs());
}
// 3. Syntax sugar - variadic list of metas, extra compile args
// (it seems optional parameters don't work well when a variadic template
// comes first)
//
// Ideally it should look like:
//
// template<typename... Ts>
// GCompiled compile(const Ts&... metas, GCompileArgs &&args)
//
// But not all compilers can handle this (and seems they shouldn't be able to).
// FIXME: SFINAE looks ugly in the generated documentation
/**
* @overload
*
* Takes a variadic parameter pack with metadata
* descriptors for which a compiled object needs to be produced,
* followed by GCompileArgs object representing compilation
* arguments for this process.
*
* @return GCompiled, an executable computation compiled
* specifically for the given input parameters.
*/
template<typename... Ts>
auto compile(const Ts&... meta_and_compile_args) ->
typename std::enable_if<detail::are_meta_descrs_but_last<Ts...>::value
&& std::is_same<GCompileArgs, detail::last_type_t<Ts...> >::value,
GCompiled>::type
{
//FIXME: wrapping meta_and_compile_args into a tuple to unwrap them inside a helper function is overkill
return compile(std::make_tuple(meta_and_compile_args...),
typename detail::MkSeq<sizeof...(Ts)-1>::type());
}
// FIXME: Document properly in the Doxygen format
// Video-oriented pipeline compilation:
// 1. A generic version
/**
* @brief Compile the computation for streaming mode.
*
* This method triggers compilation process and produces a new
* GStreamingCompiled object which then can process video stream
* data of the given format. Passing a stream in a different
* format to the compiled computation will generate a run-time
* exception.
*
* @param in_metas vector of input metadata configuration. Grab
* metadata from real data objects (like cv::Mat or cv::Scalar)
* using cv::descr_of(), or create it on your own.
*
* @param args compilation arguments for this compilation
* process. Compilation arguments directly affect what kind of
* executable object would be produced, e.g. which kernels (and
* thus, devices) would be used to execute computation.
*
* @return GStreamingCompiled, a streaming-oriented executable
* computation compiled specifically for the given input
* parameters.
*
* @sa @ref gapi_compile_args
*/
GAPI_WRAP GStreamingCompiled compileStreaming(GMetaArgs &&in_metas, GCompileArgs &&args = {});
/**
* @brief Compile the computation for streaming mode.
*
* This method triggers compilation process and produces a new
* GStreamingCompiled object which then can process video stream
* data in any format. Underlying mechanisms will be adjusted to
* every new input video stream automatically, but please note that
* _not all_ existing backends support this (see reshape()).
*
* @param args compilation arguments for this compilation
* process. Compilation arguments directly affect what kind of
* executable object would be produced, e.g. which kernels (and
* thus, devices) would be used to execute computation.
*
* @return GStreamingCompiled, a streaming-oriented executable
* computation compiled for any input image format.
*
* @sa @ref gapi_compile_args
*/
GAPI_WRAP GStreamingCompiled compileStreaming(GCompileArgs &&args = {});
/// @private -- Exclude this function from OpenCV documentation
GAPI_WRAP GStreamingCompiled compileStreaming(const cv::detail::ExtractMetaCallback &callback,
GCompileArgs &&args = {});
// 2. Direct metadata version
/**
* @overload
*
* Takes a variadic parameter pack with metadata
* descriptors for which a compiled object needs to be produced.
*
* @return GStreamingCompiled, a streaming-oriented executable
* computation compiled specifically for the given input
* parameters.
*/
template<typename... Ts>
auto compileStreaming(const Ts&... metas) ->
typename std::enable_if<detail::are_meta_descrs<Ts...>::value, GStreamingCompiled>::type
{
return compileStreaming(GMetaArgs{GMetaArg(metas)...}, GCompileArgs());
}
// 2. Direct metadata + compile arguments version
/**
* @overload
*
* Takes a variadic parameter pack with metadata
* descriptors for which a compiled object needs to be produced,
* followed by GCompileArgs object representing compilation
* arguments for this process.
*
* @return GStreamingCompiled, a streaming-oriented executable
* computation compiled specifically for the given input
* parameters.
*/
template<typename... Ts>
auto compileStreaming(const Ts&... meta_and_compile_args) ->
typename std::enable_if<detail::are_meta_descrs_but_last<Ts...>::value
&& std::is_same<GCompileArgs, detail::last_type_t<Ts...> >::value,
GStreamingCompiled>::type
{
//FIXME: wrapping meta_and_compile_args into a tuple to unwrap them inside a helper function is overkill
return compileStreaming(std::make_tuple(meta_and_compile_args...),
typename detail::MkSeq<sizeof...(Ts)-1>::type());
}
// Internal use only
/// @private
Priv& priv();
/// @private
const Priv& priv() const;
/// @private
explicit GComputation(cv::gapi::s11n::IIStream &);
/// @private
void serialize(cv::gapi::s11n::IOStream &) const;
protected:
// 4. Helper methods for (3)
/// @private
template<typename... Ts, int... IIs>
GCompiled compile(const std::tuple<Ts...> &meta_and_compile_args, detail::Seq<IIs...>)
{
GMetaArgs meta_args = {GMetaArg(std::get<IIs>(meta_and_compile_args))...};
GCompileArgs comp_args = std::get<sizeof...(Ts)-1>(meta_and_compile_args);
return compile(std::move(meta_args), std::move(comp_args));
}
template<typename... Ts, int... IIs>
GStreamingCompiled compileStreaming(const std::tuple<Ts...> &meta_and_compile_args, detail::Seq<IIs...>)
{
GMetaArgs meta_args = {GMetaArg(std::get<IIs>(meta_and_compile_args))...};
GCompileArgs comp_args = std::get<sizeof...(Ts)-1>(meta_and_compile_args);
return compileStreaming(std::move(meta_args), std::move(comp_args));
}
void recompile(GMetaArgs&& in_metas, GCompileArgs &&args);
/// @private
std::shared_ptr<Priv> m_priv;
};
/** @} */
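// Usage sketch (illustrative): a minimal end-to-end example of defining, running and
// compiling a GComputation. It assumes the standard cv::gapi::resize and cv::gapi::blur
// operations from <opencv2/gapi/imgproc.hpp>; such code normally lives in user sources.
inline void gcomputation_usage_sketch()
{
    cv::GMat in;
    cv::GMat out = cv::gapi::blur(cv::gapi::resize(in, cv::Size(320, 240)), cv::Size(3, 3));
    cv::GComputation comp(cv::GIn(in), cv::GOut(out));    // protocol: one GMat in, one GMat out

    cv::Mat input = cv::Mat::zeros(480, 640, CV_8UC3), output;
    comp.apply(cv::gin(input), cv::gout(output));          // compile on-the-fly and run

    cv::GCompiled cc = comp.compile(cv::descr_of(input));  // explicit compilation for this format
    cc(cv::gin(input), cv::gout(output));                  // run the compiled object directly

    // For video pipelines, compile for streaming mode instead; the resulting
    // GStreamingCompiled is then driven with setSource()/start()/pull() (see gstreaming.hpp).
    cv::GStreamingCompiled sc = comp.compileStreaming(cv::descr_of(input));
    static_cast<void>(sc);
}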
namespace gapi
{
// FIXME: all these standalone functions need to be added to some
// common documentation section
/**
* @brief Define a tagged island (subgraph) within a computation.
*
* Declare an Island tagged with `name` and defined from `ins` to `outs`
* (exclusively, as ins/outs are data objects, and regioning is done on
* operations level).
* Throws if any operation between `ins` and `outs` is already assigned
* to another island.
*
* Islands allow partitioning a graph into subgraphs, fine-tuning
* the way it is scheduled by the underlying executor.
*
* @param name name of the Island to create
* @param ins vector of input data objects where the subgraph
* begins
* @param outs vector of output data objects where the subgraph
* ends.
*
* The way an island is defined is similar to how
* cv::GComputation is defined on input/output data objects.
* The same rules apply here as well -- if there is no functional
* dependency between inputs and outputs, or not enough
* input data objects were specified to properly calculate all
* outputs, an exception is thrown.
*
* Use cv::GIn() / cv::GOut() to specify input/output vectors.
*/
void GAPI_EXPORTS island(const std::string &name,
GProtoInputArgs &&ins,
GProtoOutputArgs &&outs);
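// Usage sketch (illustrative): tags the pre-processing part of a pipeline as an island
// named "preprocess". Assumes cv::gapi::blur and cv::gapi::bitwise_not from the standard
// operation headers (<opencv2/gapi/imgproc.hpp>, <opencv2/gapi/core.hpp>).
inline cv::GComputation island_usage_sketch()
{
    cv::GMat in;
    cv::GMat tmp = cv::gapi::blur(in, cv::Size(3, 3));
    cv::GMat out = cv::gapi::bitwise_not(tmp);
    cv::gapi::island("preprocess", cv::GIn(in), cv::GOut(tmp)); // covers the blur operation only
    return cv::GComputation(cv::GIn(in), cv::GOut(out));
}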
} // namespace gapi
} // namespace cv
#endif // OPENCV_GAPI_GCOMPUTATION_HPP

View File

@ -1,69 +0,0 @@
// This file is part of OpenCV project.
// It is subject to the license terms in the LICENSE file found in the top-level directory
// of this distribution and at http://opencv.org/license.html.
//
// Copyright (C) 2019 Intel Corporation
#ifndef OPENCV_GAPI_GCOMPUTATION_ASYNC_HPP
#define OPENCV_GAPI_GCOMPUTATION_ASYNC_HPP
#include <future> //for std::future
#include <exception> //for std::exception_ptr
#include <functional> //for std::function
#include <opencv2/gapi/garg.hpp> //for GRunArgs, GRunArgsP
#include <opencv2/gapi/gcommon.hpp> //for GCompileArgs
#include <opencv2/gapi/own/exports.hpp>
namespace cv {
//fwd declaration
class GComputation;
namespace gapi {
namespace wip {
class GAsyncContext;
/** In contrast to the async() functions, these do call the GComputation::apply() member function of the GComputation passed in.
@param gcomp Computation (graph) to run asynchronously
@param callback Callback to be called when execution of gcomp is done
@param ins Input parameters for gcomp
@param outs Output parameters for gcomp
@param args Compile arguments to pass to GComputation::apply()
@see async
*/
GAPI_EXPORTS void async_apply(GComputation& gcomp, std::function<void(std::exception_ptr)>&& callback, GRunArgs &&ins, GRunArgsP &&outs, GCompileArgs &&args = {});
/** @overload
@param gcomp Computation (graph) to run asynchronously
@param callback Callback to be called when execution of gcomp is done
@param ins Input parameters for gcomp
@param outs Output parameters for gcomp
@param args Compile arguments to pass to GComputation::apply()
@param ctx Context this request belongs to
@see async_apply async GAsyncContext
*/
GAPI_EXPORTS void async_apply(GComputation& gcomp, std::function<void(std::exception_ptr)>&& callback, GRunArgs &&ins, GRunArgsP &&outs, GCompileArgs &&args, GAsyncContext& ctx);
/** @overload
@param gcomp Computation (graph) to run asynchronously
@param ins Input parameters for gcomp
@param outs Output parameters for gcomp
@param args Compile arguments to pass to GComputation::apply()
@return std::future<void> object to wait for completion of async operation
@see async_apply async
*/
GAPI_EXPORTS std::future<void> async_apply(GComputation& gcomp, GRunArgs &&ins, GRunArgsP &&outs, GCompileArgs &&args = {});
/** @overload
@param gcomp Computation (graph) to run asynchronously
@param ins Input parameters for gcomp
@param outs Output parameters for gcomp
@param args Compile arguments to pass to GComputation::apply()
@param ctx Context this request belongs to
@return std::future<void> object to wait for completion of async operation
@see async_apply async GAsyncContext
*/
GAPI_EXPORTS std::future<void> async_apply(GComputation& gcomp, GRunArgs &&ins, GRunArgsP &&outs, GCompileArgs &&args, GAsyncContext& ctx);
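// Usage sketch (illustrative): runs a computation asynchronously and waits on the
// returned future; get() rethrows any exception raised during the asynchronous run.
// comp/in/out are assumed to be prepared by the caller.
inline void async_apply_usage_sketch(GComputation &comp, cv::Mat &in, cv::Mat &out)
{
    std::future<void> f = async_apply(comp, cv::gin(in), cv::gout(out));
    f.get();
}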
} // namespace wip
} // namespace gapi
} // namespace cv
#endif //OPENCV_GAPI_GCOMPUTATION_ASYNC_HPP

View File

@ -1,113 +0,0 @@
// This file is part of OpenCV project.
// It is subject to the license terms in the LICENSE file found in the top-level directory
// of this distribution and at http://opencv.org/license.html.
//
// Copyright (C) 2020 Intel Corporation
#ifndef OPENCV_GAPI_GFRAME_HPP
#define OPENCV_GAPI_GFRAME_HPP
#include <ostream>
#include <memory> // std::shared_ptr
#include <opencv2/gapi/opencv_includes.hpp>
#include <opencv2/gapi/gcommon.hpp> // GShape
#include <opencv2/gapi/gmat.hpp>
#include <opencv2/gapi/own/assert.hpp>
// TODO GAPI_EXPORTS or so
namespace cv
{
// Forward declaration; GNode and GOrigin are an internal
// (user-inaccessible) classes.
class GNode;
struct GOrigin;
/** \addtogroup gapi_data_objects
* @{
*/
/**
* @brief GFrame class represents an image or media frame in the graph.
*
* GFrame doesn't store any data itself; instead, it describes a
* functional relationship between operations consuming and producing
* GFrame objects.
*
* GFrame is introduced to handle various media formats (e.g., NV12 or
* I420) under the same type. Various image formats may differ in the
* number of planes (e.g. two for NV12, three for I420) and the pixel
* layout inside. The GFrame type allows handling these media formats in
* the graph uniformly -- the graph structure will not change if the
* media format changes, e.g. a different camera or decoder is used
* with the same graph. G-API provides a number of operations which
* operate directly on GFrame, like `infer<>()` or
* renderFrame(); these operations are expected to handle different
* media formats inside. There are also a number of accessor
* operations like BGR(), Y(), UV() -- these operations provide
* access to frame's data in the familiar cv::GMat form, which can be
* used with the majority of the existing G-API operations. These
* accessor functions may perform color space conversion on the fly if
* the image format of the GFrame they are applied to differs from the
* operation's semantics (e.g. the BGR() accessor is called on an NV12
* image frame).
*
* GFrame is a virtual counterpart of cv::MediaFrame.
*
* @sa cv::MediaFrame, cv::GFrameDesc, BGR(), Y(), UV(), infer<>().
*/
class GAPI_EXPORTS_W_SIMPLE GFrame
{
public:
/**
* @brief Constructs an empty GFrame
*
* Normally, empty G-API data objects denote a starting point of
* the graph. When an empty GFrame is assigned to a result of some
* operation, it obtains a functional link to this operation (and
* is not empty anymore).
*/
GAPI_WRAP GFrame(); // Empty constructor
/// @private
GFrame(const GNode &n, std::size_t out); // Operation result constructor
/// @private
GOrigin& priv(); // Internal use only
/// @private
const GOrigin& priv() const; // Internal use only
private:
std::shared_ptr<GOrigin> m_priv;
};
/** @} */
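// Usage sketch (illustrative): a format-agnostic graph defined on GFrame. The accessor
// cv::gapi::streaming::BGR() is assumed to be provided by <opencv2/gapi/streaming/format.hpp>,
// cv::gapi::blur by <opencv2/gapi/imgproc.hpp> and cv::GComputation by <opencv2/gapi/gcomputation.hpp>.
inline cv::GComputation gframe_usage_sketch()
{
    cv::GFrame in;                                 // media frame input (BGR, NV12, ...)
    cv::GMat   bgr = cv::gapi::streaming::BGR(in); // converted on the fly if needed
    cv::GMat   out = cv::gapi::blur(bgr, cv::Size(3, 3));
    return cv::GComputation(cv::GIn(in), cv::GOut(out));
}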
enum class MediaFormat: int
{
BGR = 0,
NV12,
GRAY,
};
/**
* \addtogroup gapi_meta_args
* @{
*/
struct GAPI_EXPORTS GFrameDesc
{
MediaFormat fmt;
cv::Size size;
bool operator== (const GFrameDesc &) const;
};
static inline GFrameDesc empty_gframe_desc() { return GFrameDesc{}; }
/** @} */
class MediaFrame;
GAPI_EXPORTS GFrameDesc descr_of(const MediaFrame &frame);
GAPI_EXPORTS std::ostream& operator<<(std::ostream& os, const cv::GFrameDesc &desc);
} // namespace cv
#endif // OPENCV_GAPI_GFRAME_HPP

View File

@ -1,757 +0,0 @@
// This file is part of OpenCV project.
// It is subject to the license terms in the LICENSE file found in the top-level directory
// of this distribution and at http://opencv.org/license.html.
//
// Copyright (C) 2018-2021 Intel Corporation
#ifndef OPENCV_GAPI_GKERNEL_HPP
#define OPENCV_GAPI_GKERNEL_HPP
#include <functional>
#include <iostream>
#include <string> // string
#include <type_traits> // false_type, true_type
#include <unordered_map> // map (for GKernelPackage)
#include <utility> // tuple
#include <opencv2/gapi/gcommon.hpp> // CompileArgTag
#include <opencv2/gapi/util/util.hpp> // Seq
#include <opencv2/gapi/gcall.hpp>
#include <opencv2/gapi/garg.hpp> // GArg
#include <opencv2/gapi/gmetaarg.hpp> // GMetaArg
#include <opencv2/gapi/gtype_traits.hpp> // GTypeTraits
#include <opencv2/gapi/util/compiler_hints.hpp> //suppress_unused_warning
#include <opencv2/gapi/gtransform.hpp>
namespace cv {
struct GTypeInfo
{
GShape shape;
cv::detail::OpaqueKind kind;
detail::HostCtor ctor;
};
using GShapes = std::vector<GShape>;
using GKinds = std::vector<cv::detail::OpaqueKind>;
using GCtors = std::vector<detail::HostCtor>;
using GTypesInfo = std::vector<GTypeInfo>;
// GKernel describes kernel API to the system
// FIXME: add attributes of a kernel, (e.g. number and types
// of inputs, etc)
struct GAPI_EXPORTS GKernel
{
using M = std::function<GMetaArgs(const GMetaArgs &, const GArgs &)>;
std::string name; // kernel ID, defined by its API (signature)
std::string tag; // some (implementation-specific) tag
M outMeta; // generic adaptor to API::outMeta(...)
GShapes outShapes; // types (shapes) of the kernel's outputs
GKinds inKinds; // kinds of kernel's inputs (fixme: below)
GCtors outCtors; // captured constructors for template output types
GKinds outKinds; // kinds of kernel's outputs (fixme: below)
};
// TODO: It's questionable if inKinds should really be here. Instead,
// this information could come from meta.
// GKernelImpl describes particular kernel implementation to the system
struct GAPI_EXPORTS GKernelImpl
{
util::any opaque; // backend-specific opaque info
GKernel::M outMeta; // for deserialized graphs, the outMeta is taken here
};
template<typename, typename> class GKernelTypeM;
namespace detail
{
////////////////////////////////////////////////////////////////////////////
// yield() is used at graph construction time as a generic method to obtain
// lazy "return value" of G-API operations
//
template<typename T> struct Yield;
template<> struct Yield<cv::GMat>
{
static inline cv::GMat yield(cv::GCall &call, int i) { return call.yield(i); }
};
template<> struct Yield<cv::GMatP>
{
static inline cv::GMatP yield(cv::GCall &call, int i) { return call.yieldP(i); }
};
template<> struct Yield<cv::GScalar>
{
static inline cv::GScalar yield(cv::GCall &call, int i) { return call.yieldScalar(i); }
};
template<typename U> struct Yield<cv::GArray<U> >
{
static inline cv::GArray<U> yield(cv::GCall &call, int i) { return call.yieldArray<U>(i); }
};
template<typename U> struct Yield<cv::GOpaque<U> >
{
static inline cv::GOpaque<U> yield(cv::GCall &call, int i) { return call.yieldOpaque<U>(i); }
};
template<> struct Yield<GFrame>
{
static inline cv::GFrame yield(cv::GCall &call, int i) { return call.yieldFrame(i); }
};
////////////////////////////////////////////////////////////////////////////
// Helper classes which bring outputMeta() marshalling to kernel
// implementations
//
// 1. MetaType establishes G#Type -> G#Meta mapping between G-API dynamic
// types and its metadata descriptor types.
// This mapping is used to transform types to call outMeta() callback.
template<typename T> struct MetaType;
template<> struct MetaType<cv::GMat> { using type = GMatDesc; };
template<> struct MetaType<cv::GMatP> { using type = GMatDesc; };
template<> struct MetaType<cv::GFrame> { using type = GFrameDesc; };
template<> struct MetaType<cv::GScalar> { using type = GScalarDesc; };
template<typename U> struct MetaType<cv::GArray<U> > { using type = GArrayDesc; };
template<typename U> struct MetaType<cv::GOpaque<U> > { using type = GOpaqueDesc; };
template<typename T> struct MetaType { using type = T; }; // opaque args passed as-is
// FIXME: Move it to type traits?
// 2. Hacky test based on MetaType to check if we operate on G-* type or not
template<typename T> using is_nongapi_type = std::is_same<T, typename MetaType<T>::type>;
// 3. Two ways to transform input arguments to its meta - for G-* and non-G* types:
template<typename T>
typename std::enable_if<!is_nongapi_type<T>::value, typename MetaType<T>::type>
::type get_in_meta(const GMetaArgs &in_meta, const GArgs &, int idx)
{
return util::get<typename MetaType<T>::type>(in_meta.at(idx));
}
template<typename T>
typename std::enable_if<is_nongapi_type<T>::value, T>
::type get_in_meta(const GMetaArgs &, const GArgs &in_args, int idx)
{
return in_args.at(idx).template get<T>();
}
// 4. The MetaHelper itself: an entity which generates outMeta() call
// based on kernel signature, with arguments properly substituted.
// 4.1 - case for multiple return values
// FIXME: probably can be simplified with std::apply or analogue.
template<typename, typename, typename>
struct MetaHelper;
template<typename K, typename... Ins, typename... Outs>
struct MetaHelper<K, std::tuple<Ins...>, std::tuple<Outs...> >
{
template<int... IIs, int... OIs>
static GMetaArgs getOutMeta_impl(const GMetaArgs &in_meta,
const GArgs &in_args,
detail::Seq<IIs...>,
detail::Seq<OIs...>)
{
// FIXME: decay?
using R = std::tuple<typename MetaType<Outs>::type...>;
const R r = K::outMeta( get_in_meta<Ins>(in_meta, in_args, IIs)... );
return GMetaArgs{ GMetaArg(std::get<OIs>(r))... };
}
// FIXME: help users identify how outMeta must look like (via default impl w/static_assert?)
static GMetaArgs getOutMeta(const GMetaArgs &in_meta,
const GArgs &in_args)
{
return getOutMeta_impl(in_meta,
in_args,
typename detail::MkSeq<sizeof...(Ins)>::type(),
typename detail::MkSeq<sizeof...(Outs)>::type());
}
};
// 4.1 - case for a single return value
// FIXME: How to avoid duplication here?
template<typename K, typename... Ins, typename Out>
struct MetaHelper<K, std::tuple<Ins...>, Out >
{
template<int... IIs>
static GMetaArgs getOutMeta_impl(const GMetaArgs &in_meta,
const GArgs &in_args,
detail::Seq<IIs...>)
{
// FIXME: decay?
using R = typename MetaType<Out>::type;
const R r = K::outMeta( get_in_meta<Ins>(in_meta, in_args, IIs)... );
return GMetaArgs{ GMetaArg(r) };
}
// FIXME: help users identify how outMeta must look like (via default impl w/static_assert?)
static GMetaArgs getOutMeta(const GMetaArgs &in_meta,
const GArgs &in_args)
{
return getOutMeta_impl(in_meta,
in_args,
typename detail::MkSeq<sizeof...(Ins)>::type());
}
};
////////////////////////////////////////////////////////////////////////////
// Helper class to introduce tags to calls. By default there's no tag
struct NoTag {
static constexpr const char *tag() { return ""; }
};
} // namespace detail
// GKernelType and GKernelTypeM are base classes which implement typed ::on()
// method based on kernel signature. GKernelTypeM stands for multiple-return-value kernels
//
// G_TYPED_KERNEL and G_TYPED_KERNEL_M macros inherit user classes from GKernelType and
// GKernelTypeM respectively.
template<typename K, typename... R, typename... Args>
class GKernelTypeM<K, std::function<std::tuple<R...>(Args...)> >
: public detail::MetaHelper<K, std::tuple<Args...>, std::tuple<R...>>
, public detail::NoTag
{
template<int... IIs>
static std::tuple<R...> yield(cv::GCall &call, detail::Seq<IIs...>)
{
return std::make_tuple(detail::Yield<R>::yield(call, IIs)...);
}
public:
using InArgs = std::tuple<Args...>;
using OutArgs = std::tuple<R...>;
// TODO: Args&&... here?
static std::tuple<R...> on(Args... args)
{
cv::GCall call(GKernel{ K::id()
, K::tag()
, &K::getOutMeta
, {detail::GTypeTraits<R>::shape...}
, {detail::GTypeTraits<Args>::op_kind...}
, {detail::GObtainCtor<R>::get()...}
, {detail::GTypeTraits<R>::op_kind...}});
call.pass(args...); // TODO: std::forward() here?
return yield(call, typename detail::MkSeq<sizeof...(R)>::type());
}
};
template<typename, typename> class GKernelType;
template<typename K, typename R, typename... Args>
class GKernelType<K, std::function<R(Args...)> >
: public detail::MetaHelper<K, std::tuple<Args...>, R>
, public detail::NoTag
{
public:
using InArgs = std::tuple<Args...>;
using OutArgs = std::tuple<R>;
static R on(Args... args)
{
cv::GCall call(GKernel{ K::id()
, K::tag()
, &K::getOutMeta
, {detail::GTypeTraits<R>::shape}
, {detail::GTypeTraits<Args>::op_kind...}
, {detail::GObtainCtor<R>::get()}
, {detail::GTypeTraits<R>::op_kind}});
call.pass(args...);
return detail::Yield<R>::yield(call, 0);
}
};
namespace detail {
// This tiny class eliminates the semantic difference between
// GKernelType and GKernelTypeM.
template<typename, typename> class KernelTypeMedium;
template<typename K, typename... R, typename... Args>
class KernelTypeMedium<K, std::function<std::tuple<R...>(Args...)>> :
public cv::GKernelTypeM<K, std::function<std::tuple<R...>(Args...)>> {};
template<typename K, typename R, typename... Args>
class KernelTypeMedium<K, std::function<R(Args...)>> :
public cv::GKernelType<K, std::function<R(Args...)>> {};
} // namespace detail
} // namespace cv
// FIXME: I don't know a better way so far. Feel free to suggest one
// The problem is that every typed kernel should have ::id() but body
// of the class is defined by user (with outMeta, other stuff)
//! @cond IGNORED
#define G_ID_HELPER_CLASS(Class) Class##IdHelper
#define G_ID_HELPER_BODY(Class, Id) \
struct G_ID_HELPER_CLASS(Class) \
{ \
static constexpr const char * id() {return Id;} \
}; \
//! @endcond
#define GET_G_TYPED_KERNEL(_1, _2, _3, _4, _5, _6, _7, _8, _9, _10, _11, NAME, ...) NAME
#define COMBINE_SIGNATURE(...) __VA_ARGS__
// Ensure correct __VA_ARGS__ expansion on Windows
#define __WRAP_VAARGS(x) x
/**
* Helper for G_TYPED_KERNEL declares a new G-API Operation. See [Kernel API](@ref gapi_kernel_api)
* for more details.
*
* @param Class type name for this operation.
* @param API an `std::function<>`-like signature for the operation;
* return type is a single value or a tuple of multiple values.
* @param Id string identifier for the operation. Must be unique.
*/
#define G_TYPED_KERNEL_HELPER(Class, API, Id) \
G_ID_HELPER_BODY(Class, Id) \
struct Class final: public cv::detail::KernelTypeMedium<Class, std::function API >, \
public G_ID_HELPER_CLASS(Class)
// {body} is to be defined by user
#define G_TYPED_KERNEL_HELPER_2(Class, _1, _2, Id) \
G_TYPED_KERNEL_HELPER(Class, COMBINE_SIGNATURE(_1, _2), Id)
#define G_TYPED_KERNEL_HELPER_3(Class, _1, _2, _3, Id) \
G_TYPED_KERNEL_HELPER(Class, COMBINE_SIGNATURE(_1, _2, _3), Id)
#define G_TYPED_KERNEL_HELPER_4(Class, _1, _2, _3, _4, Id) \
G_TYPED_KERNEL_HELPER(Class, COMBINE_SIGNATURE(_1, _2, _3, _4), Id)
#define G_TYPED_KERNEL_HELPER_5(Class, _1, _2, _3, _4, _5, Id) \
G_TYPED_KERNEL_HELPER(Class, COMBINE_SIGNATURE(_1, _2, _3, _4, _5), Id)
#define G_TYPED_KERNEL_HELPER_6(Class, _1, _2, _3, _4, _5, _6, Id) \
G_TYPED_KERNEL_HELPER(Class, COMBINE_SIGNATURE(_1, _2, _3, _4, _5, _6), Id)
#define G_TYPED_KERNEL_HELPER_7(Class, _1, _2, _3, _4, _5, _6, _7, Id) \
G_TYPED_KERNEL_HELPER(Class, COMBINE_SIGNATURE(_1, _2, _3, _4, _5, _6, _7), Id)
#define G_TYPED_KERNEL_HELPER_8(Class, _1, _2, _3, _4, _5, _6, _7, _8, Id) \
G_TYPED_KERNEL_HELPER(Class, COMBINE_SIGNATURE(_1, _2, _3, _4, _5, _6, _7, _8), Id)
#define G_TYPED_KERNEL_HELPER_9(Class, _1, _2, _3, _4, _5, _6, _7, _8, _9, Id) \
G_TYPED_KERNEL_HELPER(Class, COMBINE_SIGNATURE(_1, _2, _3, _4, _5, _6, _7, _8, _9), Id)
#define G_TYPED_KERNEL_HELPER_10(Class, _1, _2, _3, _4, _5, _6, _7, _8, _9, _10, Id) \
G_TYPED_KERNEL_HELPER(Class, COMBINE_SIGNATURE(_1, _2, _3, _4, _5, _6, _7, _8, _9, _10), Id)
/**
* Declares a new G-API Operation. See [Kernel API](@ref gapi_kernel_api)
* for more details.
*
* @param Class type name for this operation.
*/
#define G_TYPED_KERNEL(Class, ...) __WRAP_VAARGS(GET_G_TYPED_KERNEL(__VA_ARGS__, \
G_TYPED_KERNEL_HELPER_10, \
G_TYPED_KERNEL_HELPER_9, \
G_TYPED_KERNEL_HELPER_8, \
G_TYPED_KERNEL_HELPER_7, \
G_TYPED_KERNEL_HELPER_6, \
G_TYPED_KERNEL_HELPER_5, \
G_TYPED_KERNEL_HELPER_4, \
G_TYPED_KERNEL_HELPER_3, \
G_TYPED_KERNEL_HELPER_2, \
G_TYPED_KERNEL_HELPER)(Class, __VA_ARGS__)) \
/**
* Declares a new G-API Operation. See [Kernel API](@ref gapi_kernel_api) for more details.
*
* @deprecated This macro is deprecated in favor of `G_TYPED_KERNEL` that is used for declaring any
* G-API Operation.
*
* @param Class type name for this operation.
*/
#define G_TYPED_KERNEL_M G_TYPED_KERNEL
#define G_API_OP G_TYPED_KERNEL
#define G_API_OP_M G_API_OP
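// Usage sketch (illustrative): declaring a new operation with G_TYPED_KERNEL. The
// operation name and id below are made up for demonstration; outMeta() tells the
// framework how output metadata is derived from input metadata (here the output has
// the same description as the input).
G_TYPED_KERNEL(GMyThreshold, <cv::GMat(cv::GMat, double)>, "sample.custom.my_threshold")
{
    static cv::GMatDesc outMeta(cv::GMatDesc in, double /*thr*/) { return in; }
};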
namespace cv
{
namespace gapi
{
// Prework: model "Device" API before it gets to G-API headers.
// FIXME: Don't mix with internal Backends class!
/// @private
class GAPI_EXPORTS GBackend
{
public:
class Priv;
// TODO: make it template (call `new` within??)
GBackend();
explicit GBackend(std::shared_ptr<Priv> &&p);
Priv& priv();
const Priv& priv() const;
std::size_t hash() const;
bool operator== (const GBackend &rhs) const;
private:
std::shared_ptr<Priv> m_priv;
};
inline bool operator != (const GBackend &lhs, const GBackend &rhs)
{
return !(lhs == rhs);
}
} // namespace gapi
} // namespace cv
namespace std
{
template<> struct hash<cv::gapi::GBackend>
{
std::size_t operator() (const cv::gapi::GBackend &b) const
{
return b.hash();
}
};
} // namespace std
namespace cv {
class GAPI_EXPORTS_W_SIMPLE GKernelPackage;
namespace gapi {
GAPI_EXPORTS_W cv::GKernelPackage combine(const cv::GKernelPackage &lhs,
const cv::GKernelPackage &rhs);
/// @private
class GFunctor
{
public:
virtual cv::GKernelImpl impl() const = 0;
virtual cv::gapi::GBackend backend() const = 0;
const char* id() const { return m_id; }
virtual ~GFunctor() = default;
protected:
GFunctor(const char* id) : m_id(id) { }
private:
const char* m_id;
};
} // namespace gapi
/** \addtogroup gapi_compile_args
* @{
*/
// FIXME: Hide implementation
/**
* @brief A container class for heterogeneous kernel
* implementation collections and graph transformations.
*
* GKernelPackage is a special container class which stores kernel
* _implementations_ and graph _transformations_. Objects of this class
* are created and passed to cv::GComputation::compile() to specify
* which kernels to use and which transformations to apply in the
* compiled graph. GKernelPackage may contain kernels of
* different backends, i.e. be heterogeneous.
*
* The easiest way to create a kernel package is to use the function
* cv::gapi::kernels(). This template function takes kernel
* implementations in the form of a type list (variadic template) and
* generates a kernel package atop of them.
*
* Kernel packages can also be generated programmatically, starting
* with an empty package (created with the default constructor)
* and then populating it with kernels via calls to
* GKernelPackage::include(). Note this method is also a template
* one since G-API kernel and transformation implementations are _types_,
* not objects.
*
* Finally, two kernel packages can be combined into a new one
* with function cv::gapi::combine().
*/
class GAPI_EXPORTS_W_SIMPLE GKernelPackage
{
/// @private
using M = std::unordered_map<std::string, std::pair<cv::gapi::GBackend, cv::GKernelImpl>>;
/// @private
M m_id_kernels;
/// @private
std::vector<GTransform> m_transformations;
protected:
/// @private
// Remove ALL implementations of the given API (identified by ID)
void removeAPI(const std::string &id);
/// @private
// Partial include() specialization for kernels
template <typename KImpl>
typename std::enable_if<(std::is_base_of<cv::detail::KernelTag, KImpl>::value), void>::type
includeHelper()
{
auto backend = KImpl::backend();
auto kernel_id = KImpl::API::id();
auto kernel_impl = GKernelImpl{KImpl::kernel(), &KImpl::API::getOutMeta};
removeAPI(kernel_id);
m_id_kernels[kernel_id] = std::make_pair(backend, kernel_impl);
}
/// @private
// Partial include() specialization for transformations
template <typename TImpl>
typename std::enable_if<(std::is_base_of<cv::detail::TransformTag, TImpl>::value), void>::type
includeHelper()
{
m_transformations.emplace_back(TImpl::transformation());
}
public:
void include(const cv::gapi::GFunctor& functor);
/**
* @brief Returns total number of kernels
* in the package (across all backends included)
*
* @return the number of kernels in the package
*/
GAPI_WRAP std::size_t size() const;
/**
* @brief Returns vector of transformations included in the package
*
* @return vector of transformations included in the package
*/
const std::vector<GTransform>& get_transformations() const;
/**
* @brief Returns vector of kernel ids included in the package
*
* @return vector of kernel ids included in the package
*/
std::vector<std::string> get_kernel_ids() const;
/**
* @brief Test if a particular kernel _implementation_ KImpl is
* included in this kernel package.
*
* @sa includesAPI()
*
* @note cannot be applied to transformations
*
* @return true if there is such kernel, false otherwise.
*/
template<typename KImpl>
bool includes() const
{
static_assert(std::is_base_of<cv::detail::KernelTag, KImpl>::value,
"includes() can be applied to kernels only");
auto kernel_it = m_id_kernels.find(KImpl::API::id());
return kernel_it != m_id_kernels.end() &&
kernel_it->second.first == KImpl::backend();
}
/**
* @brief Remove all kernels associated with the given backend
* from the package.
*
* Does nothing if there's no kernels of this backend in the package.
*
* @param backend backend whose kernels to remove
*/
void remove(const cv::gapi::GBackend& backend);
/**
* @brief Remove all kernels implementing the given API from
* the package.
*
* Does nothing if there's no kernels implementing the given interface.
*/
template<typename KAPI>
void remove()
{
removeAPI(KAPI::id());
}
// FIXME: Rename to includes() and distinguish API/impl case by
// statically?
/**
* Check if package contains ANY implementation of a kernel API
* by API type.
*/
template<typename KAPI>
bool includesAPI() const
{
return includesAPI(KAPI::id());
}
/// @private
bool includesAPI(const std::string &id) const;
// FIXME: The below comment is wrong, and who needs this function?
/**
* @brief Find a kernel (by its API)
*
* Returns the backend which hosts the kernel implementation registered for the given API.
* Throws if nothing found.
*
* @return Backend which hosts matching kernel implementation.
*
*/
template<typename KAPI>
cv::gapi::GBackend lookup() const
{
return lookup(KAPI::id()).first;
}
/// @private
std::pair<cv::gapi::GBackend, cv::GKernelImpl>
lookup(const std::string &id) const;
// FIXME: No overwrites allowed?
/**
* @brief Put a new kernel implementation or a new transformation
* KImpl into the package.
*/
template<typename KImpl>
void include()
{
includeHelper<KImpl>();
}
/**
* @brief Adds a new kernel based on its backend and id into the kernel package
*
* @param backend backend associated with the kernel
* @param kernel_id a name/id of the kernel
*/
void include(const cv::gapi::GBackend& backend, const std::string& kernel_id);
/**
* @brief Lists all backends which are included into package
*
* @return vector of backends
*/
std::vector<cv::gapi::GBackend> backends() const;
// TODO: Doxygen bug -- it wants me to place this comment
// here, not below.
/**
* @brief Create a new package based on `lhs` and `rhs`.
*
* @param lhs "Left-hand-side" package in the process
* @param rhs "Right-hand-side" package in the process
* @return a new kernel package.
*/
friend GAPI_EXPORTS GKernelPackage cv::gapi::combine(const GKernelPackage &lhs,
const GKernelPackage &rhs);
};
/** @} */
namespace gapi {
using GKernelPackage = cv::GKernelPackage; // Keep backward compatibility
/** \addtogroup gapi_compile_args
* @{
*/
/**
* @brief Create a kernel package object containing kernels
* and transformations specified in variadic template argument.
*
* In G-API, kernel implementations and transformations are _types_.
* Every backend has its own kernel API (like GAPI_OCV_KERNEL() and
* GAPI_FLUID_KERNEL()) but all of those APIs define a new type for
* each kernel implementation.
*
* Use this function to pass kernel implementations (defined in
* either way) and transformations to the system. Example:
*
* @snippet samples/cpp/tutorial_code/gapi/doc_snippets/api_ref_snippets.cpp kernels_snippet
*
* Note that kernels() itself is a function returning an object, not
* a type, so having `()` at the end is important -- it must be a
* function call.
*/
template<typename... KK> GKernelPackage kernels()
{
// FIXME: currently there is no check that transformations' signatures are unique
// and won't be any intersection in graph compilation stage
static_assert(cv::detail::all_unique<typename KK::API...>::value, "Kernels API must be unique");
GKernelPackage pkg;
// For those who wonder - below is a trick to call a number of
// methods based on parameter pack (zeroes just help hiding these
// calls into a sequence which helps to expand this parameter pack).
// Just note that `f(),a` always equals to `a` (with f() called!)
// and parentheses are used to hide function call in the expanded sequence.
// Leading 0 helps to handle case when KK is an empty list (kernels<>()).
int unused[] = { 0, (pkg.include<KK>(), 0)... };
cv::util::suppress_unused_warning(unused);
return pkg;
}
template<typename... FF>
GKernelPackage kernels(FF&... functors)
{
GKernelPackage pkg;
int unused[] = { 0, (pkg.include(functors), 0)... };
cv::util::suppress_unused_warning(unused);
return pkg;
}
/** @} */
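// Usage sketch (illustrative): building a package from a CPU (OCV) kernel implementation.
// GMyThreshold is the hypothetical operation from the sketch earlier in this header;
// GAPI_OCV_KERNEL and cv::threshold are assumed to come from
// <opencv2/gapi/cpu/gcpukernel.hpp> and <opencv2/imgproc.hpp> respectively.
GAPI_OCV_KERNEL(GOCVMyThreshold, GMyThreshold)
{
    static void run(const cv::Mat &in, double thr, cv::Mat &out)
    {
        cv::threshold(in, out, thr, 255., cv::THRESH_BINARY);
    }
};
inline cv::GKernelPackage my_threshold_pkg_sketch()
{
    return cv::gapi::kernels<GOCVMyThreshold>();   // implementations are passed as types
}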
/**
* @brief Combines multiple G-API kernel packages into one
*
* @overload
*
* This function successively combines the passed kernel packages using a right fold.
* Calling `combine(a, b, c)` is equal to `combine(a, combine(b, c))`.
*
* @return The resulting kernel package
*/
template<typename... Ps>
cv::GKernelPackage combine(const cv::GKernelPackage &a, const cv::GKernelPackage &b, Ps&&... rest)
{
return combine(a, combine(b, rest...));
}
// NB(DM): Variadic-arg version in Python may require the same
// approach as used in GComputation::compile/apply.
/** \addtogroup gapi_compile_args
* @{
*/
/**
* @brief cv::gapi::use_only() is a special combinator which hints G-API to use only
* kernels specified in cv::GComputation::compile() (and not to extend kernels available by
* default with that package).
*/
struct GAPI_EXPORTS use_only
{
GKernelPackage pkg;
};
/** @} */
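// Usage sketch (illustrative): combining two packages and asking G-API to use only the
// resulting kernels when compiling a graph. The compile arguments are wrapped with
// cv::compile_args() from gcommon.hpp and passed to compile()/apply().
inline cv::GCompileArgs use_only_these(const cv::GKernelPackage &a, const cv::GKernelPackage &b)
{
    cv::GKernelPackage pkg = cv::gapi::combine(a, b);
    return cv::compile_args(cv::gapi::use_only{pkg});
}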
} // namespace gapi
namespace detail
{
template<> struct CompileArgTag<cv::GKernelPackage>
{
static const char* tag() { return "gapi.kernel_package"; }
};
template<> struct CompileArgTag<cv::gapi::use_only>
{
static const char* tag() { return "gapi.use_only"; }
};
} // namespace detail
} // namespace cv
#endif // OPENCV_GAPI_GKERNEL_HPP

View File

@ -1,292 +0,0 @@
// This file is part of OpenCV project.
// It is subject to the license terms in the LICENSE file found in the top-level directory
// of this distribution and at http://opencv.org/license.html.
//
// Copyright (C) 2018-2020 Intel Corporation
#ifndef OPENCV_GAPI_GMAT_HPP
#define OPENCV_GAPI_GMAT_HPP
#include <ostream>
#include <memory> // std::shared_ptr
#include <opencv2/gapi/opencv_includes.hpp>
#include <opencv2/gapi/gcommon.hpp> // GShape
#include <opencv2/gapi/own/assert.hpp>
// TODO GAPI_EXPORTS or so
namespace cv
{
// Forward declaration; GNode and GOrigin are an internal
// (user-inaccessible) classes.
class GNode;
struct GOrigin;
/** \addtogroup gapi_data_objects
* @{
*
* @brief G-API data objects used to build G-API expressions.
*
* These objects do not own any particular data (except compile-time
* associated values like with cv::GScalar or `cv::GArray<T>`) and are
* used only to construct graphs.
*
* Every graph in G-API starts and ends with data objects.
*
* Once constructed and compiled, G-API operates with regular host-side
* data instead. Refer to the below table to find the mapping between
* G-API and regular data types when passing input and output data
* structures to G-API:
*
* G-API data type | I/O data type
* ------------------ | -------------
* cv::GMat | cv::Mat, cv::UMat, cv::RMat
* cv::GScalar | cv::Scalar
* `cv::GArray<T>` | std::vector<T>
* `cv::GOpaque<T>` | T
* cv::GFrame | cv::MediaFrame
*/
/**
* @brief GMat class represents image or tensor data in the
* graph.
*
* GMat doesn't store any data itself; instead, it describes a
* functional relationship between operations consuming and producing
* GMat objects.
*
* GMat is a virtual counterpart of Mat and UMat, but this
* doesn't mean G-API uses Mat or UMat objects internally to represent
* GMat objects -- the internal data representation may be
* backend-specific or optimized out altogether.
*
* @sa Mat, GMatDesc
*/
class GAPI_EXPORTS_W_SIMPLE GMat
{
public:
/**
* @brief Constructs an empty GMat
*
* Normally, empty G-API data objects denote a starting point of
* the graph. When an empty GMat is assigned to a result of some
* operation, it obtains a functional link to this operation (and
* is not empty anymore).
*/
GAPI_WRAP GMat(); // Empty constructor
/**
* @brief Constructs a value-initialized GMat
*
* GMat may be associated with a buffer at graph construction time.
* It is useful when some operation has a Mat input which doesn't
* change during the program execution, and is set only once.
* In this case, there's no need to declare such a GMat as a graph input.
*
* @param m a cv::Mat buffer to associate with this GMat object.
*/
GAPI_WRAP explicit GMat(cv::Mat m); // Value-initialization constructor
/// @private
GMat(const GNode &n, std::size_t out); // Operation result constructor
/// @private
GOrigin& priv(); // Internal use only
/// @private
const GOrigin& priv() const; // Internal use only
private:
std::shared_ptr<GOrigin> m_priv;
};
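// Usage sketch (illustrative): a value-initialized GMat acts as a constant operand bound
// at graph construction time, so it does not need to be listed among the graph inputs.
// Assumes cv::gapi::add from <opencv2/gapi/core.hpp>.
inline cv::GMat add_constant_sketch(cv::GMat in, const cv::Mat &constant)
{
    cv::GMat fixed(constant);        // bound once, here
    return cv::gapi::add(in, fixed); // regular graph input + constant operand
}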
class GAPI_EXPORTS GMatP : public GMat
{
public:
using GMat::GMat;
};
class RMat;
/** @} */
/**
* \addtogroup gapi_meta_args
* @{
*/
struct GAPI_EXPORTS_W_SIMPLE GMatDesc
{
// FIXME: Default initializers in C++14
GAPI_PROP int depth;
GAPI_PROP int chan;
GAPI_PROP cv::Size size; // NB.: no multi-dimensional cases covered yet
GAPI_PROP bool planar;
GAPI_PROP std::vector<int> dims; // FIXME: Maybe it's real questionable to have it here
GAPI_WRAP GMatDesc(int d, int c, cv::Size s, bool p = false)
: depth(d), chan(c), size(s), planar(p) {}
GAPI_WRAP GMatDesc(int d, const std::vector<int> &dd)
: depth(d), chan(-1), size{-1,-1}, planar(false), dims(dd) {}
GAPI_WRAP GMatDesc(int d, std::vector<int> &&dd)
: depth(d), chan(-1), size{-1,-1}, planar(false), dims(std::move(dd)) {}
GAPI_WRAP GMatDesc() : GMatDesc(-1, -1, {-1,-1}) {}
inline bool operator== (const GMatDesc &rhs) const
{
return depth == rhs.depth
&& chan == rhs.chan
&& size == rhs.size
&& planar == rhs.planar
&& dims == rhs.dims;
}
inline bool operator!= (const GMatDesc &rhs) const
{
return !(*this == rhs);
}
bool isND() const { return !dims.empty(); }
// Checks if the passed mat can be described by this descriptor
// (it handles the case when
// 1-channel mat can be reinterpreted as is (1-channel mat)
// and as a 3-channel planar mat with height divided by 3)
bool canDescribe(const cv::Mat& mat) const;
bool canDescribe(const cv::RMat& mat) const;
// Meta combinator: return a new GMatDesc which differs in size by delta
// (all other fields are taken unchanged from this GMatDesc)
// FIXME: a better name?
GAPI_WRAP GMatDesc withSizeDelta(cv::Size delta) const
{
GMatDesc desc(*this);
desc.size += delta;
return desc;
}
// Meta combinator: return a new GMatDesc which differs in size by delta
// (all other fields are taken unchanged from this GMatDesc)
//
// This is an overload.
GAPI_WRAP GMatDesc withSizeDelta(int dx, int dy) const
{
return withSizeDelta(cv::Size{dx,dy});
}
GAPI_WRAP GMatDesc withSize(cv::Size sz) const
{
GMatDesc desc(*this);
desc.size = sz;
return desc;
}
// Meta combinator: return a new GMatDesc with specified data depth.
// (all other fields are taken unchanged from this GMatDesc)
GAPI_WRAP GMatDesc withDepth(int ddepth) const
{
GAPI_Assert(CV_MAT_CN(ddepth) == 1 || ddepth == -1);
GMatDesc desc(*this);
if (ddepth != -1) desc.depth = ddepth;
return desc;
}
// Meta combinator: return a new GMatDesc with specified data depth
// and number of channels.
// (all other fields are taken unchanged from this GMatDesc)
GAPI_WRAP GMatDesc withType(int ddepth, int dchan) const
{
GAPI_Assert(CV_MAT_CN(ddepth) == 1 || ddepth == -1);
GMatDesc desc = withDepth(ddepth);
desc.chan = dchan;
return desc;
}
// Meta combinator: return a new GMatDesc with planar flag set
// (no size changes are performed, only channel interpretation is changed
// (interleaved -> planar)
GAPI_WRAP GMatDesc asPlanar() const
{
GAPI_Assert(planar == false);
GMatDesc desc(*this);
desc.planar = true;
return desc;
}
// Meta combinator: return a new GMatDesc
// reinterpreting 1-channel input as planar image
// (size height is divided by plane number)
GAPI_WRAP GMatDesc asPlanar(int planes) const
{
GAPI_Assert(planar == false);
GAPI_Assert(chan == 1);
GAPI_Assert(planes > 1);
GAPI_Assert(size.height % planes == 0);
GMatDesc desc(*this);
desc.size.height /= planes;
desc.chan = planes;
return desc.asPlanar();
}
// Meta combinator: return a new GMatDesc with planar flag set to false
// (no size changes are performed, only channel interpretation is changed
// (planar -> interleaved)
GAPI_WRAP GMatDesc asInterleaved() const
{
GAPI_Assert(planar == true);
GMatDesc desc(*this);
desc.planar = false;
return desc;
}
};
static inline GMatDesc empty_gmat_desc() { return GMatDesc{-1,-1,{-1,-1}}; }
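// Usage sketch (illustrative): deriving output metadata from an input descriptor with
// the combinators above, e.g. inside a custom operation's outMeta() callback.
inline GMatDesc matdesc_combinators_sketch(const GMatDesc &in)
{
    return in.withSizeDelta(-2, -2)  // e.g. a 3x3 filter computed without border
             .withDepth(CV_32F);     // producing a floating-point result
}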
namespace gapi { namespace detail {
/** Checks the GMatDesc fields to see if the passed matrix is a set of n-dimensional points.
@param in GMatDesc to check.
@param n expected dimensionality.
@return the number of points. In case the input matrix can't be described as a vector of points
of the expected dimensionality, returns -1.
*/
int checkVector(const GMatDesc& in, const size_t n);
/** @overload
Checks the GMatDesc fields to see if the passed matrix can be described as a set of points of any
dimensionality.
@return an array of two elements in the form of std::vector<int>: the number of points
and their calculated dimensionality. In case the input matrix can't be described as a vector of points,
returns {-1, -1}.
*/
std::vector<int> checkVector(const GMatDesc& in);
}} // namespace gapi::detail
#if !defined(GAPI_STANDALONE)
GAPI_EXPORTS GMatDesc descr_of(const cv::UMat &mat);
#endif // !defined(GAPI_STANDALONE)
//Fwd declarations
namespace gapi { namespace own {
class Mat;
GAPI_EXPORTS GMatDesc descr_of(const Mat &mat);
}}//gapi::own
GAPI_EXPORTS GMatDesc descr_of(const RMat &mat);
#if !defined(GAPI_STANDALONE)
GAPI_EXPORTS GMatDesc descr_of(const cv::Mat &mat);
#else
using gapi::own::descr_of;
#endif
/** @} */
GAPI_EXPORTS std::ostream& operator<<(std::ostream& os, const cv::GMatDesc &desc);
} // namespace cv
#endif // OPENCV_GAPI_GMAT_HPP

View File

@ -1,80 +0,0 @@
// This file is part of OpenCV project.
// It is subject to the license terms in the LICENSE file found in the top-level directory
// of this distribution and at http://opencv.org/license.html.
//
// Copyright (C) 2018 Intel Corporation
#ifndef OPENCV_GAPI_GMETAARG_HPP
#define OPENCV_GAPI_GMETAARG_HPP
#include <vector>
#include <type_traits>
#include <opencv2/gapi/util/util.hpp>
#include <opencv2/gapi/util/variant.hpp>
#include <opencv2/gapi/gmat.hpp>
#include <opencv2/gapi/gscalar.hpp>
#include <opencv2/gapi/garray.hpp>
#include <opencv2/gapi/gopaque.hpp>
#include <opencv2/gapi/gframe.hpp>
namespace cv
{
// FIXME: Rename to GMeta?
// FIXME: user shouldn't deal with it - put to detail?
// GMetaArg is a union type over descriptions of G-types which can serve as
// GComputation's in/output slots.
//
// GMetaArg objects are passed as arguments to GComputation::compile()
// to specify which data a compiled computation should be specialized on.
// For manual compile(), the user must supply this metadata; in case of apply(),
// it is taken from the arguments the computation should operate on.
//
// The first type (monostate) is equal to "uninitialized"/"unresolved" meta.
using GMetaArg = util::variant
< util::monostate
, GMatDesc
, GScalarDesc
, GArrayDesc
, GOpaqueDesc
, GFrameDesc
>;
GAPI_EXPORTS std::ostream& operator<<(std::ostream& os, const GMetaArg &);
using GMetaArgs = std::vector<GMetaArg>;
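// Usage sketch (illustrative): building metadata for a manual GComputation::compile()
// call from real host objects. Assumes a non-standalone build where cv::descr_of(cv::Mat)
// and cv::descr_of(cv::Scalar) are available from gmat.hpp/gscalar.hpp.
inline GMetaArgs metas_sketch(const cv::Mat &m, const cv::Scalar &s)
{
    return GMetaArgs{ GMetaArg(descr_of(m)), GMetaArg(descr_of(s)) };
}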
namespace detail
{
// These traits are used by GComputation::compile()
// FIXME: is_constructible<T> doesn't work as variant doesn't do any SFINAE
// in its current template constructor
template<typename T> struct is_meta_descr : std::false_type {};
template<> struct is_meta_descr<GMatDesc> : std::true_type {};
template<> struct is_meta_descr<GScalarDesc> : std::true_type {};
template<> struct is_meta_descr<GArrayDesc> : std::true_type {};
template<> struct is_meta_descr<GOpaqueDesc> : std::true_type {};
template<typename... Ts>
using are_meta_descrs = all_satisfy<is_meta_descr, Ts...>;
template<typename... Ts>
using are_meta_descrs_but_last = all_satisfy<is_meta_descr, typename all_but_last<Ts...>::type>;
} // namespace detail
// Note: descr_of(std::vector<..>) returns a GArrayDesc, while
// descrs_of(std::vector<..>) returns an array of Meta args!
class UMat;
GAPI_EXPORTS cv::GMetaArgs descrs_of(const std::vector<cv::Mat> &vec);
GAPI_EXPORTS cv::GMetaArgs descrs_of(const std::vector<cv::UMat> &vec);
namespace gapi { namespace own {
GAPI_EXPORTS cv::GMetaArgs descrs_of(const std::vector<Mat> &vec);
}} // namespace gapi::own
} // namespace cv
#endif // OPENCV_GAPI_GMETAARG_HPP
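
A small sketch of the removed GMetaArg machinery (same pre-removal build assumption); such a metadata vector would normally be passed to cv::GComputation::compile():

    #include <iostream>
    #include <vector>
    #include <opencv2/core.hpp>
    #include <opencv2/gapi/gmetaarg.hpp>

    int main() {
        std::vector<cv::Mat> inputs = { cv::Mat(240, 320, CV_8UC1),
                                        cv::Mat(480, 640, CV_8UC3) };
        // One GMetaArg per input; here every slot holds a GMatDesc.
        cv::GMetaArgs metas = cv::descrs_of(inputs);
        for (const auto &m : metas)
            std::cout << m << std::endl;   // operator<<(ostream&, const GMetaArg&)
        // A manually compiled graph is specialized on exactly this metadata,
        // e.g. comp.compile(std::move(metas), cv::compile_args(...)).
        return 0;
    }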

View File

@ -1,369 +0,0 @@
// This file is part of OpenCV project.
// It is subject to the license terms in the LICENSE file found in the top-level directory
// of this distribution and at http://opencv.org/license.html.
//
// Copyright (C) 2019-2020 Intel Corporation
#ifndef OPENCV_GAPI_GOPAQUE_HPP
#define OPENCV_GAPI_GOPAQUE_HPP
#include <functional>
#include <ostream>
#include <memory>
#include <opencv2/gapi/own/exports.hpp>
#include <opencv2/gapi/opencv_includes.hpp>
#include <opencv2/gapi/util/any.hpp>
#include <opencv2/gapi/util/variant.hpp>
#include <opencv2/gapi/util/throw.hpp>
#include <opencv2/gapi/util/type_traits.hpp>
#include <opencv2/gapi/own/assert.hpp>
#include <opencv2/gapi/gcommon.hpp> // OpaqueKind
#include <opencv2/gapi/garray.hpp> // TypeHintBase
namespace cv
{
// Forward declaration; GNode and GOrigin are an internal
// (user-inaccessible) classes.
class GNode;
struct GOrigin;
template<typename T> class GOpaque;
/**
* \addtogroup gapi_meta_args
* @{
*/
struct GAPI_EXPORTS_W_SIMPLE GOpaqueDesc
{
// FIXME: Body
// FIXME: Also implement proper operator== then
bool operator== (const GOpaqueDesc&) const { return true; }
};
template<typename U> GOpaqueDesc descr_of(const U &) { return {};}
GAPI_EXPORTS_W inline GOpaqueDesc empty_gopaque_desc() {return {}; }
/** @} */
std::ostream& operator<<(std::ostream& os, const cv::GOpaqueDesc &desc);
namespace detail
{
// ConstructOpaque is a callback which stores information about T and is used by
// G-API runtime to construct an object in host memory (T remains opaque for G-API).
// ConstructOpaque is carried into G-API internals by GOpaqueU.
// Currently it is suitable for Host (CPU) plugins only; real offload may require
// more information for manual memory allocation on-device.
class OpaqueRef;
using ConstructOpaque = std::function<void(OpaqueRef&)>;
// FIXME: garray.hpp already contains hint classes (for actual T type verification),
// need to think where it can be moved (currently opaque uses it from garray)
// This class strips type information from GOpaque<T> and makes it usable
// in the G-API graph compiler (expression unrolling, graph generation, etc).
// Part of GProtoArg.
class GAPI_EXPORTS GOpaqueU
{
public:
GOpaqueU(const GNode &n, std::size_t out); // Operation result constructor
template <typename T>
bool holds() const; // Check if was created from GOpaque<T>
GOrigin& priv(); // Internal use only
const GOrigin& priv() const; // Internal use only
protected:
GOpaqueU(); // Default constructor
template<class> friend class cv::GOpaque; // (available for GOpaque<T> only)
void setConstructFcn(ConstructOpaque &&cv); // Store T-aware constructor
template <typename T>
void specifyType(); // Store type of initial GOpaque<T>
template <typename T>
void storeKind();
void setKind(cv::detail::OpaqueKind);
std::shared_ptr<GOrigin> m_priv;
std::shared_ptr<TypeHintBase> m_hint;
};
template <typename T>
bool GOpaqueU::holds() const{
GAPI_Assert(m_hint != nullptr);
using U = util::decay_t<T>;
return dynamic_cast<TypeHint<U>*>(m_hint.get()) != nullptr;
}
template <typename T>
void GOpaqueU::specifyType(){
m_hint.reset(new TypeHint<util::decay_t<T>>);
}
template <typename T>
void GOpaqueU::storeKind(){
// FIXME: Add assert here on cv::Mat and cv::Scalar?
setKind(cv::detail::GOpaqueTraits<T>::kind);
}
// This class represents a typed object reference.
// Depending on origins, this reference may be either "just a" reference to
// an object created externally, OR actually own the underlying object
// (be value holder).
class BasicOpaqueRef
{
public:
cv::GOpaqueDesc m_desc;
virtual ~BasicOpaqueRef() {}
virtual void mov(BasicOpaqueRef &ref) = 0;
virtual const void* ptr() const = 0;
virtual void set(const cv::util::any &a) = 0;
};
template<typename T> class OpaqueRefT final: public BasicOpaqueRef
{
using empty_t = util::monostate;
using ro_ext_t = const T *;
using rw_ext_t = T *;
using rw_own_t = T ;
util::variant<empty_t, ro_ext_t, rw_ext_t, rw_own_t> m_ref;
inline bool isEmpty() const { return util::holds_alternative<empty_t>(m_ref); }
inline bool isROExt() const { return util::holds_alternative<ro_ext_t>(m_ref); }
inline bool isRWExt() const { return util::holds_alternative<rw_ext_t>(m_ref); }
inline bool isRWOwn() const { return util::holds_alternative<rw_own_t>(m_ref); }
void init(const T* obj = nullptr)
{
if (obj) m_desc = cv::descr_of(*obj);
}
public:
OpaqueRefT() { init(); }
virtual ~OpaqueRefT() {}
explicit OpaqueRefT(const T& obj) : m_ref(&obj) { init(&obj); }
explicit OpaqueRefT( T& obj) : m_ref(&obj) { init(&obj); }
explicit OpaqueRefT( T&& obj) : m_ref(std::move(obj)) { init(&obj); }
// Reset a OpaqueRefT. Called only for objects instantiated
// internally in G-API (e.g. temporary GOpaque<T>'s within a
// computation). Reset here means both initialization
// (creating an object) and reset (discarding its existing
// content before the next execution). Must never be called
// for external OpaqueRefTs.
void reset()
{
if (isEmpty())
{
T empty_obj{};
m_desc = cv::descr_of(empty_obj);
m_ref = std::move(empty_obj);
GAPI_Assert(isRWOwn());
}
else if (isRWOwn())
{
util::get<rw_own_t>(m_ref) = {};
}
else GAPI_Error("InternalError"); // shouldn't be called in *EXT modes
}
// Obtain a WRITE reference to underlying object
// Used by CPU kernel API wrappers when a kernel execution frame
// is created
T& wref()
{
GAPI_Assert(isRWExt() || isRWOwn());
if (isRWExt()) return *util::get<rw_ext_t>(m_ref);
if (isRWOwn()) return util::get<rw_own_t>(m_ref);
util::throw_error(std::logic_error("Impossible happened"));
}
// Obtain a READ reference to underlying object
// Used by CPU kernel API wrappers when a kernel execution frame
// is created
const T& rref() const
{
// ANY object can be accessed for reading, even if it is declared for
// output. Example -- a GComputation from [in] to [out1,out2]
// where [out2] is a result of operation applied to [out1]:
//
// GComputation boundary
// . . . . . . .
// . .
// [in] ----> foo() ----> [out1]
// . . :
// . . . .:. . .
// . V .
// . bar() ---> [out2]
// . . . . . . . . . . . .
//
if (isROExt()) return *util::get<ro_ext_t>(m_ref);
if (isRWExt()) return *util::get<rw_ext_t>(m_ref);
if (isRWOwn()) return util::get<rw_own_t>(m_ref);
util::throw_error(std::logic_error("Impossible happened"));
}
virtual void mov(BasicOpaqueRef &v) override {
OpaqueRefT<T> *tv = dynamic_cast<OpaqueRefT<T>*>(&v);
GAPI_Assert(tv != nullptr);
wref() = std::move(tv->wref());
}
virtual const void* ptr() const override { return &rref(); }
virtual void set(const cv::util::any &a) override {
wref() = util::any_cast<T>(a);
}
};
// This class strips type information from OpaqueRefT<> and makes it usable
// in the G-API executables (carrying run-time data/information to kernels).
// Part of GRunArg.
// Its methods are typed proxies to OpaqueRefT<T>.
// OpaqueRef maintains "reference" semantics so two copies of OpaqueRef refer
// to the same underlying object.
class OpaqueRef
{
std::shared_ptr<BasicOpaqueRef> m_ref;
cv::detail::OpaqueKind m_kind = cv::detail::OpaqueKind::CV_UNKNOWN;
template<typename T> inline void check() const
{
GAPI_DbgAssert(dynamic_cast<OpaqueRefT<T>*>(m_ref.get()) != nullptr);
}
public:
OpaqueRef() = default;
template<
typename T,
typename = util::are_different_t<OpaqueRef, T>
>
// FIXME: probably won't work with const object
explicit OpaqueRef(T&& obj) :
m_ref(new OpaqueRefT<util::decay_t<T>>(std::forward<T>(obj))),
m_kind(GOpaqueTraits<util::decay_t<T>>::kind) {}
cv::detail::OpaqueKind getKind() const
{
return m_kind;
}
template<typename T> void reset()
{
if (!m_ref) m_ref.reset(new OpaqueRefT<T>());
check<T>();
storeKind<T>();
static_cast<OpaqueRefT<T>&>(*m_ref).reset();
}
template <typename T>
void storeKind()
{
m_kind = cv::detail::GOpaqueTraits<T>::kind;
}
template<typename T> T& wref()
{
check<T>();
return static_cast<OpaqueRefT<T>&>(*m_ref).wref();
}
template<typename T> const T& rref() const
{
check<T>();
return static_cast<OpaqueRefT<T>&>(*m_ref).rref();
}
void mov(OpaqueRef &v)
{
m_ref->mov(*v.m_ref);
}
cv::GOpaqueDesc descr_of() const
{
return m_ref->m_desc;
}
// May be used to uniquely identify this object internally
const void *ptr() const { return m_ref->ptr(); }
// Introduced for in-graph meta handling
OpaqueRef& operator= (const cv::util::any &a)
{
m_ref->set(a);
return *this;
}
};
} // namespace detail
/** \addtogroup gapi_data_objects
* @{
*/
/**
* @brief `cv::GOpaque<T>` template class represents an object of
* class `T` in the graph.
*
* `cv::GOpaque<T>` describes a functional relationship between operations
* consuming and producing objects of class `T`. `cv::GOpaque<T>` is
* designed to extend G-API with user-defined data types, which are
* often required with user-defined operations. G-API can't apply any
* optimizations to user-defined types since these types are opaque to
* the framework. However, there are a number of G-API operations
* declared with `cv::GOpaque<T>` as a return type,
* e.g. cv::gapi::streaming::timestamp() or cv::gapi::streaming::size().
*
* @sa `cv::GArray<T>`
*/
template<typename T> class GOpaque
{
public:
// Host type (or Flat type) - the type this GOpaque is actually
// specified to.
/// @private
using HT = typename detail::flatten_g<util::decay_t<T>>::type;
/**
* @brief Constructs an empty `cv::GOpaque<T>`
*
* Normally, empty G-API data objects denote a starting point of
* the graph. When an empty `cv::GOpaque<T>` is assigned to a result
* of some operation, it obtains a functional link to this
* operation (and is not empty anymore).
*/
GOpaque() { putDetails(); } // Empty constructor
/// @private
explicit GOpaque(detail::GOpaqueU &&ref) // GOpaqueU-based constructor
: m_ref(ref) { putDetails(); } // (used by GCall, not for users)
/// @private
detail::GOpaqueU strip() const {
return m_ref;
}
/// @private
static void Ctor(detail::OpaqueRef& ref) {
ref.reset<HT>();
}
private:
void putDetails() {
m_ref.setConstructFcn(&Ctor);
m_ref.specifyType<HT>();
m_ref.storeKind<HT>();
}
detail::GOpaqueU m_ref;
};
/** @} */
} // namespace cv
#endif // OPENCV_GAPI_GOPAQUE_HPP
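
A minimal sketch of extending G-API with a GOpaque-returning operation, assuming a pre-removal build with the CPU ("OCV") backend; the operation, its id string "sample.custom.count_nonzero" and the kernel names are hypothetical:

    #include <opencv2/core.hpp>
    #include <opencv2/gapi.hpp>
    #include <opencv2/gapi/cpu/gcpukernel.hpp>

    // Hypothetical operation: count non-zero pixels of a single-channel image
    // and return the count as a graph-level object of type int.
    G_TYPED_KERNEL(GCountNonZero, <cv::GOpaque<int>(cv::GMat)>, "sample.custom.count_nonzero") {
        static cv::GOpaqueDesc outMeta(cv::GMatDesc) { return cv::empty_gopaque_desc(); }
    };

    // Its CPU (OCV backend) implementation: the GOpaque<int> output arrives as int&.
    GAPI_OCV_KERNEL(OCVCountNonZero, GCountNonZero) {
        static void run(const cv::Mat &in, int &out) { out = cv::countNonZero(in); }
    };

    int main() {
        cv::GMat in;
        cv::GOpaque<int> out = GCountNonZero::on(in);
        cv::GComputation comp(cv::GIn(in), cv::GOut(out));

        cv::Mat img = cv::Mat::eye(8, 8, CV_8UC1);
        int result = 0;
        comp.apply(cv::gin(img), cv::gout(result),
                   cv::compile_args(cv::gapi::kernels<OCVCountNonZero>()));
        // result is expected to be 8 (the diagonal of an 8x8 identity matrix)
        return 0;
    }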

View File

@ -1,159 +0,0 @@
// This file is part of OpenCV project.
// It is subject to the license terms in the LICENSE file found in the top-level directory
// of this distribution and at http://opencv.org/license.html.
//
// Copyright (C) 2018 Intel Corporation
#ifndef OPENCV_GAPI_GPROTO_HPP
#define OPENCV_GAPI_GPROTO_HPP
#include <type_traits>
#include <vector>
#include <ostream>
#include <opencv2/gapi/util/variant.hpp>
#include <opencv2/gapi/gmat.hpp>
#include <opencv2/gapi/gscalar.hpp>
#include <opencv2/gapi/garray.hpp>
#include <opencv2/gapi/gopaque.hpp>
#include <opencv2/gapi/garg.hpp>
#include <opencv2/gapi/gmetaarg.hpp>
namespace cv {
// FIXME: user shouldn't deal with it - put to detail?
// GProtoArg is an union type over G-types which can serve as
// GComputation's in/output slots. In other words, GProtoArg
// wraps any type which can serve as G-API exchange type.
//
// In Runtime, GProtoArgs are substituted with appropriate GRunArgs.
//
// GProtoArg objects are constructed in-place when user describes
// (captures) computations, user doesn't interact with these types
// directly.
using GProtoArg = util::variant
< GMat
, GMatP
, GFrame
, GScalar
, detail::GArrayU // instead of GArray<T>
, detail::GOpaqueU // instead of GOpaque<T>
>;
using GProtoArgs = std::vector<GProtoArg>;
namespace detail
{
template<typename... Ts> inline GProtoArgs packArgs(Ts... args)
{
return GProtoArgs{ GProtoArg(wrap_gapi_helper<Ts>::wrap(args))... };
}
}
template<class Tag>
struct GIOProtoArgs
{
public:
// NB: Used by python wrapper
GIOProtoArgs() = default;
explicit GIOProtoArgs(const GProtoArgs& args) : m_args(args) {}
explicit GIOProtoArgs(GProtoArgs &&args) : m_args(std::move(args)) {}
GProtoArgs m_args;
// TODO: Think about the addition operator
/**
* @brief This operator allows to complement the proto vectors at runtime.
*
* It's an ordinary overload of addition assignment operator.
*
* Example of usage:
* @snippet samples/cpp/tutorial_code/gapi/doc_snippets/dynamic_graph_snippets.cpp GIOProtoArgs usage
*
*/
template<typename Tg>
friend GIOProtoArgs<Tg>& operator += (GIOProtoArgs<Tg> &lhs, const GIOProtoArgs<Tg> &rhs);
};
template<typename Tg>
cv::GIOProtoArgs<Tg>& operator += (cv::GIOProtoArgs<Tg> &lhs, const cv::GIOProtoArgs<Tg> &rhs)
{
lhs.m_args.reserve(lhs.m_args.size() + rhs.m_args.size());
lhs.m_args.insert(lhs.m_args.end(), rhs.m_args.begin(), rhs.m_args.end());
return lhs;
}
struct In_Tag{};
struct Out_Tag{};
using GProtoInputArgs = GIOProtoArgs<In_Tag>;
using GProtoOutputArgs = GIOProtoArgs<Out_Tag>;
// Perfect forwarding
template<typename... Ts> inline GProtoInputArgs GIn(Ts&&... ts)
{
return GProtoInputArgs(detail::packArgs(std::forward<Ts>(ts)...));
}
template<typename... Ts> inline GProtoOutputArgs GOut(Ts&&... ts)
{
return GProtoOutputArgs(detail::packArgs(std::forward<Ts>(ts)...));
}
namespace detail
{
// Extract elements form tuple
// FIXME: Someday utilize a generic tuple_to_vec<> routine
template<typename... Ts, int... Indexes>
static GProtoOutputArgs getGOut_impl(const std::tuple<Ts...>& ts, detail::Seq<Indexes...>)
{
return GProtoOutputArgs{ detail::packArgs(std::get<Indexes>(ts)...)};
}
}
template<typename... Ts> inline GProtoOutputArgs GOut(const std::tuple<Ts...>& ts)
{
// TODO: think of std::forward(ts)
return detail::getGOut_impl(ts, typename detail::MkSeq<sizeof...(Ts)>::type());
}
// Takes rvalue as input arg
template<typename... Ts> inline GProtoOutputArgs GOut(std::tuple<Ts...>&& ts)
{
// TODO: think of std::forward(ts)
return detail::getGOut_impl(ts, typename detail::MkSeq<sizeof...(Ts)>::type());
}
// Extract run-time arguments from node origin
// Can be used to extract constant values associated with G-objects
// (like GScalar) at graph construction time
GRunArg value_of(const GOrigin &origin);
// Transform run-time computation arguments into a collection of metadata
// extracted from that arguments
GMetaArg GAPI_EXPORTS descr_of(const GRunArg &arg );
GMetaArgs GAPI_EXPORTS descr_of(const GRunArgs &args);
// Transform run-time operation result argument into metadata extracted from that argument
// Used to compare the metadata, which generated at compile time with the metadata result operation in run time
GMetaArg GAPI_EXPORTS descr_of(const GRunArgP& argp);
// Checks if run-time computation argument can be described by metadata
bool GAPI_EXPORTS can_describe(const GMetaArg& meta, const GRunArg& arg);
bool GAPI_EXPORTS can_describe(const GMetaArgs& metas, const GRunArgs& args);
// Checks if run-time computation result argument can be described by metadata.
// Used to check if the metadata generated at compile time
// coincides with output arguments passed to computation in cpu and ocl backends
bool GAPI_EXPORTS can_describe(const GMetaArg& meta, const GRunArgP& argp);
// Validates input arguments
void GAPI_EXPORTS validate_input_arg(const GRunArg& arg);
void GAPI_EXPORTS validate_input_args(const GRunArgs& args);
} // namespace cv
#endif // OPENCV_GAPI_GPROTO_HPP
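
A sketch of how GIn()/GOut() and the operator+= overload above can assemble the output protocol at run time (pre-removal build assumed; cv::gapi::blur and cv::gapi::BGR2Gray are taken from the G-API imgproc set):

    #include <opencv2/core.hpp>
    #include <opencv2/gapi.hpp>
    #include <opencv2/gapi/imgproc.hpp>

    int main(int argc, char**) {
        const bool also_want_gray = (argc > 1);   // decided at run time

        cv::GMat in;
        cv::GMat blurred = cv::gapi::blur(in, cv::Size(3, 3));
        cv::GProtoOutputArgs outs = cv::GOut(blurred);

        cv::GMat gray;
        if (also_want_gray) {
            gray  = cv::gapi::BGR2Gray(blurred);
            outs += cv::GOut(gray);               // operator+= declared above
        }
        cv::GComputation comp(cv::GIn(in), std::move(outs));

        cv::Mat input = cv::Mat::zeros(480, 640, CV_8UC3);
        cv::Mat out_blur, out_gray;
        if (also_want_gray) comp.apply(cv::gin(input), cv::gout(out_blur, out_gray));
        else                comp.apply(cv::gin(input), cv::gout(out_blur));
        return 0;
    }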

View File

@ -1,27 +0,0 @@
// This file is part of OpenCV project.
// It is subject to the license terms in the LICENSE file found in the top-level directory
// of this distribution and at http://opencv.org/license.html.
//
// Copyright (C) 2018 Intel Corporation
#ifndef OPENCV_GAPI_GPU_CORE_API_HPP
#define OPENCV_GAPI_GPU_CORE_API_HPP
/** @file
* @deprecated Use <opencv2/gapi/ocl/core.hpp> instead.
*/
#include <opencv2/gapi/ocl/core.hpp>
namespace cv {
namespace gapi {
namespace core {
namespace gpu {
using namespace ocl;
} // namespace gpu
} // namespace core
} // namespace gapi
} // namespace cv
#endif // OPENCV_GAPI_GPU_CORE_API_HPP

View File

@ -1,18 +0,0 @@
// This file is part of OpenCV project.
// It is subject to the license terms in the LICENSE file found in the top-level directory
// of this distribution and at http://opencv.org/license.html.
//
// Copyright (C) 2018 Intel Corporation
#ifndef OPENCV_GAPI_GGPUKERNEL_HPP
#define OPENCV_GAPI_GGPUKERNEL_HPP
/** @file
* @deprecated Use <opencv2/gapi/ocl/goclkernel.hpp> instead.
*/
#include <opencv2/gapi/ocl/goclkernel.hpp>
#define GAPI_GPU_KERNEL GAPI_OCL_KERNEL
#endif // OPENCV_GAPI_GGPUKERNEL_HPP

View File

@ -1,28 +0,0 @@
// This file is part of OpenCV project.
// It is subject to the license terms in the LICENSE file found in the top-level directory
// of this distribution and at http://opencv.org/license.html.
//
// Copyright (C) 2018 Intel Corporation
#ifndef OPENCV_GAPI_GPU_IMGPROC_API_HPP
#define OPENCV_GAPI_GPU_IMGPROC_API_HPP
/** @file
* @deprecated Use <opencv2/gapi/ocl/imgproc.hpp> instead.
*/
#include <opencv2/gapi/ocl/imgproc.hpp>
namespace cv {
namespace gapi {
namespace imgproc {
namespace gpu {
using namespace ocl;
} // namespace gpu
} // namespace imgproc
} // namespace gapi
} // namespace cv
#endif // OPENCV_GAPI_GPU_IMGPROC_API_HPP

View File

@ -1,140 +0,0 @@
// This file is part of OpenCV project.
// It is subject to the license terms in the LICENSE file found in the top-level directory
// of this distribution and at http://opencv.org/license.html.
//
// Copyright (C) 2018 Intel Corporation
#ifndef OPENCV_GAPI_GSCALAR_HPP
#define OPENCV_GAPI_GSCALAR_HPP
#include <ostream>
#include <opencv2/gapi/opencv_includes.hpp>
#include <opencv2/gapi/gcommon.hpp> // GShape
#include <opencv2/gapi/util/optional.hpp>
namespace cv
{
// Forward declaration; GNode and GOrigin are an internal
// (user-inaccessible) classes.
class GNode;
struct GOrigin;
/** \addtogroup gapi_data_objects
* @{
*/
/**
* @brief GScalar class represents cv::Scalar data in the graph.
*
* GScalar may be associated with a cv::Scalar value, which becomes
* its constant value bound in graph compile time. cv::GScalar describes a
* functional relationship between operations consuming and producing
* GScalar objects.
*
* GScalar is a virtual counterpart of cv::Scalar, which is usually used
* to represent the GScalar data in G-API during the execution.
*
* @sa Scalar
*/
class GAPI_EXPORTS_W_SIMPLE GScalar
{
public:
/**
* @brief Constructs an empty GScalar
*
* Normally, empty G-API data objects denote a starting point of
* the graph. When an empty GScalar is assigned to a result of some
* operation, it obtains a functional link to this operation (and
* is not empty anymore).
*/
GAPI_WRAP GScalar();
/**
* @brief Constructs a value-initialized GScalar
*
* GScalars may have their values associated at graph
* construction time. It is useful when some operation has a
* GScalar input which doesn't change during the program
* execution, and is set only once. In this case, there is no need
* to declare such a GScalar as a graph input.
*
* @note The value of GScalar may be overwritten by assigning some
* other GScalar to the object using `operator=` -- on the
* assignment, the old GScalar value is discarded.
*
* @param s a cv::Scalar value to associate with this GScalar object.
*/
GAPI_WRAP
explicit GScalar(const cv::Scalar& s);
/**
* @overload
* @brief Constructs a value-initialized GScalar
*
* @param s a cv::Scalar value to associate with this GScalar object.
*/
explicit GScalar(cv::Scalar&& s); // Constant value move-constructor from cv::Scalar
/**
* @overload
* @brief Constructs a value-initialized GScalar
*
* @param v0 A `double` value to associate with this GScalar. Note
* that only the first component of a four-component cv::Scalar is
* set to this value, while the others remain zero.
*
* This constructor overload is not marked `explicit` and can be
* used in G-API expression code like this:
*
* @snippet samples/cpp/tutorial_code/gapi/doc_snippets/api_ref_snippets.cpp gscalar_implicit
*
* Here operator+(GMat,GScalar) is used to wrap cv::gapi::addC()
* and a value-initialized GScalar is created on the fly.
*
* @overload
*/
GScalar(double v0); // Constant value constructor from double
/// @private
GScalar(const GNode &n, std::size_t out); // Operation result constructor
/// @private
GOrigin& priv(); // Internal use only
/// @private
const GOrigin& priv() const; // Internal use only
private:
std::shared_ptr<GOrigin> m_priv;
};
/** @} */
/**
* \addtogroup gapi_meta_args
* @{
*/
struct GAPI_EXPORTS_W_SIMPLE GScalarDesc
{
// NB.: right now it is empty
inline bool operator== (const GScalarDesc &) const
{
return true; // NB: implement this method if GScalar meta appears
}
inline bool operator!= (const GScalarDesc &rhs) const
{
return !(*this == rhs);
}
};
GAPI_EXPORTS_W inline GScalarDesc empty_scalar_desc() { return GScalarDesc(); }
GAPI_EXPORTS GScalarDesc descr_of(const cv::Scalar &scalar);
std::ostream& operator<<(std::ostream& os, const cv::GScalarDesc &desc);
} // namespace cv
#endif // OPENCV_GAPI_GSCALAR_HPP
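
A minimal sketch of the implicit double-to-GScalar conversion described above; as the comment in the header says, operator+(GMat, GScalar) wraps cv::gapi::addC(), and the operator itself is assumed to come from opencv2/gapi/operators.hpp in a pre-removal build:

    #include <opencv2/core.hpp>
    #include <opencv2/gapi.hpp>
    #include <opencv2/gapi/core.hpp>
    #include <opencv2/gapi/operators.hpp>

    int main() {
        cv::GMat in;
        cv::GMat out = in + 1.0;          // 1.0 becomes a value-initialized GScalar
        cv::GComputation comp(in, out);

        cv::Mat input = cv::Mat::zeros(4, 4, CV_8UC1), result;
        comp.apply(input, result);        // expected: a 4x4 matrix of ones
        return 0;
    }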

View File

@ -1,430 +0,0 @@
// This file is part of OpenCV project.
// It is subject to the license terms in the LICENSE file found in the top-level directory
// of this distribution and at http://opencv.org/license.html.
//
// Copyright (C) 2018-2021 Intel Corporation
#ifndef OPENCV_GAPI_GSTREAMING_COMPILED_HPP
#define OPENCV_GAPI_GSTREAMING_COMPILED_HPP
#include <memory>
#include <vector>
#include <opencv2/gapi/opencv_includes.hpp>
#include <opencv2/gapi/own/assert.hpp>
#include <opencv2/gapi/util/optional.hpp>
#include <opencv2/gapi/garg.hpp>
#include <opencv2/gapi/streaming/source.hpp>
namespace cv {
template<class T> using optional = cv::util::optional<T>;
namespace detail {
template<typename T> struct wref_spec {
using type = T;
};
template<typename T> struct wref_spec<std::vector<T> > {
using type = T;
};
template<typename RefHolder>
struct OptRef {
struct OptHolder {
virtual void mov(RefHolder &h) = 0;
virtual void reset() = 0;
virtual ~OptHolder() = default;
using Ptr = std::shared_ptr<OptHolder>;
};
template<class T> struct Holder final: OptHolder {
std::reference_wrapper<cv::optional<T> > m_opt_ref;
explicit Holder(cv::optional<T>& opt) : m_opt_ref(std::ref(opt)) {
}
virtual void mov(RefHolder &h) override {
using U = typename wref_spec<T>::type;
m_opt_ref.get() = cv::util::make_optional(std::move(h.template wref<U>()));
}
virtual void reset() override {
m_opt_ref.get().reset();
}
};
template<class T>
explicit OptRef(cv::optional<T>& t) : m_opt{new Holder<T>(t)} {}
void mov(RefHolder &h) { m_opt->mov(h); }
void reset() { m_opt->reset();}
private:
typename OptHolder::Ptr m_opt;
};
using OptionalVectorRef = OptRef<cv::detail::VectorRef>;
using OptionalOpaqueRef = OptRef<cv::detail::OpaqueRef>;
} // namespace detail
// TODO: Keep it in sync with GRunArgP (derive the type automatically?)
using GOptRunArgP = util::variant<
optional<cv::Mat>*,
optional<cv::RMat>*,
optional<cv::MediaFrame>*,
optional<cv::Scalar>*,
cv::detail::OptionalVectorRef,
cv::detail::OptionalOpaqueRef
>;
using GOptRunArgsP = std::vector<GOptRunArgP>;
using GOptRunArg = util::variant<
optional<cv::Mat>,
optional<cv::RMat>,
optional<cv::MediaFrame>,
optional<cv::Scalar>,
optional<cv::detail::VectorRef>,
optional<cv::detail::OpaqueRef>
>;
using GOptRunArgs = std::vector<GOptRunArg>;
namespace detail {
template<typename T> inline GOptRunArgP wrap_opt_arg(optional<T>& arg) {
// By default, T goes to an OpaqueRef. All other types are specialized
return GOptRunArgP{OptionalOpaqueRef(arg)};
}
template<typename T> inline GOptRunArgP wrap_opt_arg(optional<std::vector<T> >& arg) {
return GOptRunArgP{OptionalVectorRef(arg)};
}
template<> inline GOptRunArgP wrap_opt_arg(optional<cv::Mat> &m) {
return GOptRunArgP{&m};
}
template<> inline GOptRunArgP wrap_opt_arg(optional<cv::RMat> &m) {
return GOptRunArgP{&m};
}
template<> inline GOptRunArgP wrap_opt_arg(optional<cv::MediaFrame> &f) {
return GOptRunArgP{&f};
}
template<> inline GOptRunArgP wrap_opt_arg(optional<cv::Scalar> &s) {
return GOptRunArgP{&s};
}
} // namespace detail
// Now cv::gout() may produce an empty vector (see "dynamic graphs"), so
// there may be a conflict between these two. State here that Opt version
// _must_ have at least one input for this overload
template<typename T, typename... Ts>
inline GOptRunArgsP gout(optional<T>&arg, optional<Ts>&... args)
{
return GOptRunArgsP{ detail::wrap_opt_arg(arg), detail::wrap_opt_arg(args)... };
}
/**
* \addtogroup gapi_main_classes
* @{
*/
/**
* @brief Represents a computation (graph) compiled for streaming.
*
* This class represents a product of graph compilation (calling
* cv::GComputation::compileStreaming()). Objects of this class
* actually do stream processing, and the whole pipeline execution
* complexity is encapsulated into objects of this class. The execution
* model has two levels: at the very top, the execution of a
* heterogeneous graph is aggressively pipelined; at the very bottom
* the execution of every internal block is determined by its
* associated backend. Backends are selected based on kernel packages
* passed via compilation arguments ( see @ref gapi_compile_args,
* GNetworkPackage, GKernelPackage for details).
*
* GStreamingCompiled objects have "player" semantics -- there are
* methods like start() and stop(). GStreamingCompiled has full
* control over a video stream and so is stateful. You need to specify the
* input stream data using setSource() and then call start() to
* actually start processing. After that, use pull() or try_pull() to
* obtain next processed data frame from the graph in a blocking or
* non-blocking way, respectively.
*
* Currently a single GStreamingCompiled can process only one video
* stream at a time. Produce multiple GStreamingCompiled objects to run the
* same graph on multiple video streams.
*
* @sa GCompiled
*/
class GAPI_EXPORTS_W_SIMPLE GStreamingCompiled
{
public:
class GAPI_EXPORTS Priv;
GAPI_WRAP GStreamingCompiled();
// FIXME: More overloads?
/**
* @brief Specify the input data to GStreamingCompiled for
* processing, a generic version.
*
* Use gin() to create an input parameter vector.
*
* Input vectors must have the same number of elements as defined
* in the cv::GComputation protocol (at the moment of its
* construction). Shapes of elements also must conform to protocol
* (e.g. cv::Mat needs to be passed where cv::GMat has been
* declared as input, and so on). Run-time exception is generated
* on type mismatch.
*
* In contrast with regular GCompiled, user can also pass an
* object of type GVideoCapture for a GMat parameter of the parent
* GComputation. The compiled pipeline will start fetching data
* from that GVideoCapture and feeding it into the
* pipeline. Pipeline stops when a GVideoCapture marks end of the
* stream (or when stop() is called).
*
* Passing a regular Mat for a GMat parameter makes it an "infinite"
* source -- the pipeline may run forever, fed with this Mat, until
* stopped explicitly.
*
* Currently only a single GVideoCapture is supported as input. If
* the parent GComputation is declared with multiple input GMat's,
* one of those can be specified as GVideoCapture but all others
* must be regular Mat objects.
*
* Throws if pipeline is already running. Use stop() and then
* setSource() to run the graph on a new video stream.
*
* @note This method is not thread-safe (with respect to the user
* side) at the moment. Protect the access if
* start()/stop()/setSource() may be called on the same object in
* multiple threads in your application.
*
* @param ins vector of inputs to process.
* @sa gin
*/
void setSource(GRunArgs &&ins);
/// @private -- Exclude this function from OpenCV documentation
GAPI_WRAP void setSource(const cv::detail::ExtractArgsCallback& callback);
/**
* @brief Specify an input video stream for a single-input
* computation pipeline.
*
* Throws if pipeline is already running. Use stop() and then
* setSource() to run the graph on a new video stream.
*
* @overload
* @param s a shared pointer to IStreamSource representing the
* input video stream.
*/
void setSource(const gapi::wip::IStreamSource::Ptr& s);
/**
* @brief Constructs and specifies an input video stream for a
* single-input computation pipeline with the given parameters.
*
* Throws if pipeline is already running. Use stop() and then
* setSource() to run the graph on a new video stream.
*
* @overload
* @param args arguments used to construct and initialize a stream
* source.
*/
template<typename T, typename... Args>
void setSource(Args&&... args) {
setSource(cv::gapi::wip::make_src<T>(std::forward<Args>(args)...));
}
/**
* @brief Start the pipeline execution.
*
* Use pull()/try_pull() to obtain data. Throws an exception if
* a video source was not specified.
*
* setSource() must be called first, even if the pipeline has been
* working already and then stopped (explicitly via stop() or due
* stream completion)
*
* @note This method is not thread-safe (with respect to the user
* side) at the moment. Protect the access if
* start()/stop()/setSource() may be called on the same object in
* multiple threads in your application.
*/
GAPI_WRAP void start();
/**
* @brief Get the next processed frame from the pipeline.
*
* Use gout() to create an output parameter vector.
*
* Output vectors must have the same number of elements as defined
* in the cv::GComputation protocol (at the moment of its
* construction). Shapes of elements also must conform to protocol
* (e.g. cv::Mat needs to be passed where cv::GMat has been
* declared as output, and so on). Run-time exception is generated
* on type mismatch.
*
* This method writes new data into objects passed via output
* vector. If there is no data ready yet, this method blocks. Use
* try_pull() if you need a non-blocking version.
*
* @param outs vector of output parameters to obtain.
* @return true if next result has been obtained,
* false marks end of the stream.
*/
bool pull(cv::GRunArgsP &&outs);
// NB: Used from python
/// @private -- Exclude this function from OpenCV documentation
GAPI_WRAP std::tuple<bool, cv::util::variant<cv::GRunArgs, cv::GOptRunArgs>> pull();
/**
* @brief Get some next available data from the pipeline.
*
* This method takes a vector of cv::optional objects. An object is
* assigned some value if this value is available (ready) at
* the time of the call, and is reset to empty() if it is
* not.
*
* This is a blocking method which guarantees that some data has
* been written to the output vector on return.
*
* Using this method only makes sense if the graph has
* desynchronized parts (see cv::gapi::desync). If there is no
* desynchronized parts in the graph, the behavior of this
* method is identical to the regular pull() (all data objects are
* produced synchronously in the output vector).
*
* Use gout() to create an output parameter vector.
*
* Output vectors must have the same number of elements as defined
* in the cv::GComputation protocol (at the moment of its
* construction). Shapes of elements also must conform to protocol
* (e.g. cv::optional<cv::Mat> needs to be passed where cv::GMat
* has been declared as output, and so on). Run-time exception is
* generated on type mismatch.
*
* This method writes new data into objects passed via output
* vector. If there is no data ready yet, this method blocks. Use
* try_pull() if you need a non-blocking version.
*
* @param outs vector of output parameters to obtain.
* @return true if next result has been obtained,
* false marks end of the stream.
*
* @sa cv::gapi::desync
*/
bool pull(cv::GOptRunArgsP &&outs);
/**
* @brief Try to get the next processed frame from the pipeline.
*
* Use gout() to create an output parameter vector.
*
* This method writes new data into objects passed via output
* vector. If there is no data ready yet, the output vector
* remains unchanged and false is returned.
*
* @return true if data has been obtained, and false if it was
* not. Note: false here doesn't mark the end of the stream.
*/
bool try_pull(cv::GRunArgsP &&outs);
/**
* @brief Stop (abort) processing the pipeline.
*
* Note - it is not a pause but a complete stop. Calling start()
* will cause G-API to start processing the stream from the very beginning.
*
* Throws if the pipeline is not running.
*/
GAPI_WRAP void stop();
/**
* @brief Test if the pipeline is running.
*
* @note This method is not thread-safe (with respect to the user
* side) at the moment. Protect the access if
* start()/stop()/setSource() may be called on the same object in
* multiple threads in your application.
*
* @return true if the current stream is not over yet.
*/
GAPI_WRAP bool running() const;
/// @private
Priv& priv();
/**
* @brief Check if compiled object is valid (non-empty)
*
* @return true if the object is runnable (valid), false otherwise
*/
explicit operator bool () const;
/**
* @brief Vector of metadata this graph was compiled for.
*
* @return By default, the return value is the same vector which was
* passed to cv::GComputation::compile() to produce this compiled
* object. If reshape() has been called successfully since then,
* it is the latest metadata vector passed to reshape().
*/
const GMetaArgs& metas() const; // Meta passed to compile()
/**
* @brief Vector of metadata descriptions of graph outputs
*
* @return vector with formats/resolutions of graph's output
* objects, auto-inferred from input metadata vector by
* operations which form this computation.
*
* @note GCompiled objects produced from the same
* cv::GComputation graph with different input metas may return
* different values in this vector.
*/
const GMetaArgs& outMetas() const;
protected:
/// @private
std::shared_ptr<Priv> m_priv;
};
namespace gapi {
/**
* @brief This namespace contains G-API functions, structures, and
* symbols related to the Streaming execution mode.
*
* Some of the operations defined in this namespace (e.g. size(),
* BGR(), etc.) can be used in the traditional execution mode too.
*/
namespace streaming {
/**
* @brief Specify queue capacity for streaming execution.
*
* In the streaming mode the pipeline steps are connected with queues
* and this compile argument controls every queue's size.
*/
struct GAPI_EXPORTS_W_SIMPLE queue_capacity
{
GAPI_WRAP
explicit queue_capacity(size_t cap = 1) : capacity(cap) { }
GAPI_PROP_RW
size_t capacity;
};
} // namespace streaming
} // namespace gapi
namespace detail
{
template<> struct CompileArgTag<cv::gapi::streaming::queue_capacity>
{
static const char* tag() { return "gapi.queue_capacity"; }
};
}
/** @} gapi_main_classes */
}
#endif // OPENCV_GAPI_GSTREAMING_COMPILED_HPP
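
For reference, a minimal sketch of the "player" workflow described above, assuming a pre-removal build; "video.avi" is a placeholder path, and cv::gapi::wip::GCaptureSource (a cv::VideoCapture wrapper from opencv2/gapi/streaming/cap.hpp) serves as the stream source, while BGR2Gray and resize come from the G-API imgproc/core operation sets:

    #include <opencv2/core.hpp>
    #include <opencv2/highgui.hpp>
    #include <opencv2/gapi.hpp>
    #include <opencv2/gapi/core.hpp>
    #include <opencv2/gapi/imgproc.hpp>
    #include <opencv2/gapi/streaming/cap.hpp>

    int main() {
        cv::GMat in;
        cv::GMat out = cv::gapi::resize(cv::gapi::BGR2Gray(in), cv::Size(640, 360));
        cv::GStreamingCompiled pipeline = cv::GComputation(in, out).compileStreaming();

        pipeline.setSource<cv::gapi::wip::GCaptureSource>("video.avi");
        pipeline.start();

        cv::Mat frame;
        while (pipeline.pull(cv::gout(frame))) {      // blocks; false marks end of stream
            cv::imshow("out", frame);
            if (cv::waitKey(1) == 27) { pipeline.stop(); break; }
        }
        return 0;
    }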

View File

@ -1,103 +0,0 @@
// This file is part of OpenCV project.
// It is subject to the license terms in the LICENSE file found in the top-level directory
// of this distribution and at http://opencv.org/license.html.
//
// Copyright (C) 2019 Intel Corporation
#ifndef OPENCV_GAPI_GTRANSFORM_HPP
#define OPENCV_GAPI_GTRANSFORM_HPP
#include <functional>
#include <type_traits>
#include <utility>
#include <opencv2/gapi/gcommon.hpp>
#include <opencv2/gapi/util/util.hpp>
#include <opencv2/gapi/garg.hpp>
#include <opencv2/gapi/gtype_traits.hpp>
#include <opencv2/gapi/util/compiler_hints.hpp>
#include <opencv2/gapi/gcomputation.hpp>
namespace cv
{
struct GAPI_EXPORTS GTransform
{
// FIXME: consider another simplified
// class instead of GComputation
using F = std::function<GComputation()>;
std::string description;
F pattern;
F substitute;
GTransform(const std::string& d, const F &p, const F &s) : description(d), pattern(p), substitute(s) {}
};
namespace detail
{
template <typename, typename, typename>
struct TransHelper;
template <typename K, typename... Ins, typename Out>
struct TransHelper<K, std::tuple<Ins...>, Out>
{
template <typename Callable, int... IIs, int... OIs>
static GComputation invoke(Callable f, Seq<IIs...>, Seq<OIs...>)
{
const std::tuple<Ins...> ins;
const auto r = tuple_wrap_helper<Out>::get(f(std::get<IIs>(ins)...));
return GComputation(cv::GIn(std::get<IIs>(ins)...),
cv::GOut(std::get<OIs>(r)...));
}
static GComputation get_pattern()
{
return invoke(K::pattern, typename MkSeq<sizeof...(Ins)>::type(),
typename MkSeq<std::tuple_size<typename tuple_wrap_helper<Out>::type>::value>::type());
}
static GComputation get_substitute()
{
return invoke(K::substitute, typename MkSeq<sizeof...(Ins)>::type(),
typename MkSeq<std::tuple_size<typename tuple_wrap_helper<Out>::type>::value>::type());
}
};
} // namespace detail
template <typename, typename>
class GTransformImpl;
template <typename K, typename R, typename... Args>
class GTransformImpl<K, std::function<R(Args...)>> : public cv::detail::TransHelper<K, std::tuple<Args...>, R>,
public cv::detail::TransformTag
{
public:
// FIXME: currently there is no check that transformations' signatures are unique
// and won't be any intersection in graph compilation stage
using API = K;
static GTransform transformation()
{
return GTransform(K::descr(), &K::get_pattern, &K::get_substitute);
}
};
} // namespace cv
#define G_DESCR_HELPER_CLASS(Class) Class##DescrHelper
#define G_DESCR_HELPER_BODY(Class, Descr) \
namespace detail \
{ \
struct G_DESCR_HELPER_CLASS(Class) \
{ \
static constexpr const char *descr() { return Descr; } \
}; \
}
#define GAPI_TRANSFORM(Class, API, Descr) \
G_DESCR_HELPER_BODY(Class, Descr) \
struct Class final : public cv::GTransformImpl<Class, std::function API>, \
public detail::G_DESCR_HELPER_CLASS(Class)
#endif // OPENCV_GAPI_GTRANSFORM_HPP
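
A sketch of a substitution rule written with the GAPI_TRANSFORM macro above (pre-removal build assumed). The rule is purely illustrative: it swaps a color conversion with a blur, two linear operations that commute up to rounding. In upstream G-API such a rule would then be handed to the compiler inside a kernel package, e.g. cv::gapi::kernels<GraySwap>() passed via cv::compile_args().

    #include <opencv2/gapi.hpp>
    #include <opencv2/gapi/gtransform.hpp>
    #include <opencv2/gapi/imgproc.hpp>

    GAPI_TRANSFORM(GraySwap, <cv::GMat(cv::GMat)>, "swap BGR2Gray and 3x3 blur")
    {
        // What to look for in the graph:
        static cv::GMat pattern(const cv::GMat &in)
        {
            return cv::gapi::blur(cv::gapi::BGR2Gray(in), cv::Size(3, 3));
        }
        // What to put in its place:
        static cv::GMat substitute(const cv::GMat &in)
        {
            return cv::gapi::BGR2Gray(cv::gapi::blur(in, cv::Size(3, 3)));
        }
    };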

View File

@ -1,242 +0,0 @@
// This file is part of OpenCV project.
// It is subject to the license terms in the LICENSE file found in the top-level directory
// of this distribution and at http://opencv.org/license.html.
//
// Copyright (C) 2018-2020 Intel Corporation
#ifndef OPENCV_GAPI_GTYPE_TRAITS_HPP
#define OPENCV_GAPI_GTYPE_TRAITS_HPP
#include <vector>
#include <type_traits>
#include <opencv2/gapi/gmat.hpp>
#include <opencv2/gapi/gscalar.hpp>
#include <opencv2/gapi/garray.hpp>
#include <opencv2/gapi/gopaque.hpp>
#include <opencv2/gapi/gframe.hpp>
#include <opencv2/gapi/streaming/source.hpp>
#include <opencv2/gapi/media.hpp>
#include <opencv2/gapi/gcommon.hpp>
#include <opencv2/gapi/util/util.hpp>
#include <opencv2/gapi/own/convert.hpp>
namespace cv
{
namespace detail
{
template<typename, typename = void>
struct contains_shape_field : std::false_type {};
template<typename TaggedTypeCandidate>
struct contains_shape_field<TaggedTypeCandidate,
void_t<decltype(TaggedTypeCandidate::shape)>> :
std::is_same<typename std::decay<decltype(TaggedTypeCandidate::shape)>::type, GShape>
{};
template<typename Type>
struct has_gshape : contains_shape_field<Type> {};
// FIXME: These traits and enum and possible numerous switch(kind)
// block may be replaced with a special Handler<T> object or with
// a double dispatch
enum class ArgKind: int
{
OPAQUE_VAL, // Unknown, generic, opaque-to-GAPI data type - STATIC
// Note: OPAQUE is sometimes defined in Win sys headers
#if !defined(OPAQUE) && !defined(CV_DOXYGEN)
OPAQUE = OPAQUE_VAL, // deprecated value used for compatibility, use OPAQUE_VAL instead
#endif
GOBJREF, // <internal> reference to object
GMAT, // a cv::GMat
GMATP, // a cv::GMatP
GFRAME, // a cv::GFrame
GSCALAR, // a cv::GScalar
GARRAY, // a cv::GArrayU (note - exactly GArrayU, not GArray<T>!)
GOPAQUE, // a cv::GOpaqueU (note - exactly GOpaqueU, not GOpaque<T>!)
};
// Describe G-API types (G-types) with traits. Mostly used by
// cv::GArg to store meta information about types passed into
// operation arguments. Please note that cv::GComputation is
// defined on GProtoArgs, not GArgs!
template<typename T> struct GTypeTraits;
template<typename T> struct GTypeTraits
{
static constexpr const ArgKind kind = ArgKind::OPAQUE_VAL;
static constexpr const OpaqueKind op_kind = OpaqueKind::CV_UNKNOWN;
};
template<> struct GTypeTraits<cv::GMat>
{
static constexpr const ArgKind kind = ArgKind::GMAT;
static constexpr const GShape shape = GShape::GMAT;
static constexpr const OpaqueKind op_kind = OpaqueKind::CV_UNKNOWN;
};
template<> struct GTypeTraits<cv::GMatP>
{
static constexpr const ArgKind kind = ArgKind::GMATP;
static constexpr const GShape shape = GShape::GMAT;
static constexpr const OpaqueKind op_kind = OpaqueKind::CV_UNKNOWN;
};
template<> struct GTypeTraits<cv::GFrame>
{
static constexpr const ArgKind kind = ArgKind::GFRAME;
static constexpr const GShape shape = GShape::GFRAME;
static constexpr const OpaqueKind op_kind = OpaqueKind::CV_UNKNOWN;
};
template<> struct GTypeTraits<cv::GScalar>
{
static constexpr const ArgKind kind = ArgKind::GSCALAR;
static constexpr const GShape shape = GShape::GSCALAR;
static constexpr const OpaqueKind op_kind = OpaqueKind::CV_UNKNOWN;
};
template<class T> struct GTypeTraits<cv::GArray<T> >
{
static constexpr const ArgKind kind = ArgKind::GARRAY;
static constexpr const GShape shape = GShape::GARRAY;
static constexpr const OpaqueKind op_kind = GOpaqueTraits<T>::kind;
using host_type = std::vector<T>;
using strip_type = cv::detail::VectorRef;
static cv::detail::GArrayU wrap_value(const cv::GArray<T> &t) { return t.strip();}
static cv::detail::VectorRef wrap_in (const std::vector<T> &t) { return detail::VectorRef(t); }
static cv::detail::VectorRef wrap_out ( std::vector<T> &t) { return detail::VectorRef(t); }
};
template<class T> struct GTypeTraits<cv::GOpaque<T> >
{
static constexpr const ArgKind kind = ArgKind::GOPAQUE;
static constexpr const GShape shape = GShape::GOPAQUE;
static constexpr const OpaqueKind op_kind = GOpaqueTraits<T>::kind;
using host_type = T;
using strip_type = cv::detail::OpaqueRef;
static cv::detail::GOpaqueU wrap_value(const cv::GOpaque<T> &t) { return t.strip();}
static cv::detail::OpaqueRef wrap_in (const T &t) { return detail::OpaqueRef(t); }
static cv::detail::OpaqueRef wrap_out ( T &t) { return detail::OpaqueRef(t); }
};
// Tests if Trait for type T requires extra marshalling ("custom wrap") or not.
// If Traits<T> has wrap_value() defined, it does.
template<class T> struct has_custom_wrap
{
template<class,class> class check;
template<typename C> static std::true_type test(check<C, decltype(&GTypeTraits<C>::wrap_value)> *);
template<typename C> static std::false_type test(...);
using type = decltype(test<T>(nullptr));
static const constexpr bool value = std::is_same<std::true_type, decltype(test<T>(nullptr))>::value;
};
// Resolve a Host type back to its associated G-Type.
// FIXME: Probably it can be avoided
// FIXME: GMatP is not present here.
// (Actually these traits are used only to check
// if associated G-type has custom wrap functions
// and GMat behavior is correct for GMatP)
template<typename T> struct GTypeOf;
#if !defined(GAPI_STANDALONE)
template<> struct GTypeOf<cv::UMat> { using type = cv::GMat; };
#endif // !defined(GAPI_STANDALONE)
template<> struct GTypeOf<cv::Mat> { using type = cv::GMat; };
template<> struct GTypeOf<cv::RMat> { using type = cv::GMat; };
template<> struct GTypeOf<cv::Scalar> { using type = cv::GScalar; };
template<typename U> struct GTypeOf<std::vector<U> > { using type = cv::GArray<U>; };
template<typename U> struct GTypeOf { using type = cv::GOpaque<U>;};
template<> struct GTypeOf<cv::MediaFrame> { using type = cv::GFrame; };
// FIXME: This is not quite correct since IStreamSource may
// produce not only Mat but also MediaFrame, Scalar and vector
// data. TODO: Extend the type dispatching on these types too.
template<> struct GTypeOf<cv::gapi::wip::IStreamSource::Ptr> { using type = cv::GMat;};
template<class T> using g_type_of_t = typename GTypeOf<T>::type;
// Marshalling helper for G-types and its Host types. Helps G-API
// to store G types in internal generic containers for further
// processing. Implements the following callbacks:
//
// * wrap() - converts user-facing G-type into an internal one
// for internal storage.
// Used when G-API operation is instantiated (G<Kernel>::on(),
// etc) during expressing a pipeline. Mostly returns input
// value "as is" except the case when G-type is a template. For
// template G-classes, calls custom wrap() from Traits.
// The value returned by wrap() is then wrapped into GArg() and
// stored in G-API metadata.
//
// Example:
// - cv::GMat arguments are passed as-is.
// - integers, pointers, STL containers, user types are passed as-is.
// - cv::GArray<T> is converted to cv::GArrayU.
//
// * wrap_in() / wrap_out() - convert Host type associated with
// G-type to internal representation type.
//
// - For "simple" (non-template) G-types, returns value as-is.
// Example: cv::GMat has host type cv::Mat, when user passes a
// cv::Mat, system stores it internally as cv::Mat.
//
// - For "complex" (template) G-types, utilizes custom
// wrap_in()/wrap_out() as described in Traits.
// Example: cv::GArray<T> has host type std::vector<T>, when
// user passes a std::vector<T>, system stores it
// internally as VectorRef (with <T> stripped away).
template<typename T, class Custom = void> struct WrapValue
{
static auto wrap(const T& t) ->
typename std::remove_reference<T>::type
{
return static_cast<typename std::remove_reference<T>::type>(t);
}
template<typename U> static U wrap_in (const U &u) { return u; }
template<typename U> static U* wrap_out(U &u) { return &u; }
};
template<typename T> struct WrapValue<T, typename std::enable_if<has_custom_wrap<T>::value>::type>
{
static auto wrap(const T& t) -> decltype(GTypeTraits<T>::wrap_value(t))
{
return GTypeTraits<T>::wrap_value(t);
}
template<typename U> static auto wrap_in (const U &u) -> typename GTypeTraits<T>::strip_type
{
static_assert(!(cv::detail::has_gshape<GTypeTraits<U>>::value
|| cv::detail::contains<typename std::decay<U>::type, GAPI_OWN_TYPES_LIST>::value),
"gin/gout must not be used with G* classes or cv::gapi::own::*");
return GTypeTraits<T>::wrap_in(u);
}
template<typename U> static auto wrap_out(U &u) -> typename GTypeTraits<T>::strip_type
{
static_assert(!(cv::detail::has_gshape<GTypeTraits<U>>::value
|| cv::detail::contains<typename std::decay<U>::type, GAPI_OWN_TYPES_LIST>::value),
"gin/gout must not be used with G* classes or cv::gapi::own::*");
return GTypeTraits<T>::wrap_out(u);
}
};
template<typename T> using wrap_gapi_helper = WrapValue<typename std::decay<T>::type>;
template<typename T> using wrap_host_helper = WrapValue<typename std::decay<g_type_of_t<T> >::type>;
// Union type for various user-defined type constructors (GArray<T>,
// GOpaque<T>, etc)
//
// TODO: Replace construct-only API with a more generic one (probably
// with bits of introspection)
//
// Not required for non-user-defined types (GMat, GScalar, etc)
using HostCtor = util::variant
< util::monostate
, detail::ConstructVec
, detail::ConstructOpaque
>;
template<typename T> struct GObtainCtor {
static HostCtor get() { return HostCtor{}; }
};
template<typename T> struct GObtainCtor<GArray<T> > {
static HostCtor get() { return HostCtor{ConstructVec{&GArray<T>::VCtor}}; }
};
template<typename T> struct GObtainCtor<GOpaque<T> > {
static HostCtor get() { return HostCtor{ConstructOpaque{&GOpaque<T>::Ctor}}; }
};
} // namespace detail
} // namespace cv
#endif // OPENCV_GAPI_GTYPE_TRAITS_HPP
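
These traits are resolved entirely at compile time, so a sketch that would have built against the pre-removal headers is just a handful of static_asserts:

    #include <type_traits>
    #include <opencv2/core.hpp>
    #include <opencv2/gapi.hpp>
    #include <opencv2/gapi/gtype_traits.hpp>

    // GArray<T> is stripped to the type-erased VectorRef inside the framework,
    // and its graph-construction-time kind is GARRAY:
    static_assert(std::is_same<cv::detail::GTypeTraits<cv::GArray<int>>::strip_type,
                               cv::detail::VectorRef>::value, "carried as VectorRef");
    static_assert(cv::detail::GTypeTraits<cv::GArray<int>>::kind
                      == cv::detail::ArgKind::GARRAY, "kind is GARRAY");

    // A host cv::Mat maps back to cv::GMat in the graph protocol:
    static_assert(std::is_same<cv::detail::g_type_of_t<cv::Mat>, cv::GMat>::value,
                  "cv::Mat's G-type is cv::GMat");

    int main() { return 0; }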

View File

@ -1,246 +0,0 @@
// This file is part of OpenCV project.
// It is subject to the license terms in the LICENSE file found in the top-level directory
// of this distribution and at http://opencv.org/license.html.
//
// Copyright (C) 2018 Intel Corporation
#ifndef OPENCV_GAPI_GTYPED_HPP
#define OPENCV_GAPI_GTYPED_HPP
#if !defined(GAPI_STANDALONE)
#include <vector>
#include <opencv2/gapi/gcomputation.hpp>
#include <opencv2/gapi/gcompiled.hpp>
#include <opencv2/gapi/gproto.hpp>
#include <opencv2/gapi/gcommon.hpp>
namespace cv {
namespace detail
{
// FIXME: How to prevent coolhackers from extending it by their own types?
// FIXME: ...Should we care?
template<typename T> struct ProtoToParam;
template<> struct ProtoToParam<cv::GMat> { using type = cv::Mat; };
template<> struct ProtoToParam<cv::GScalar> { using type = cv::Scalar; };
template<typename U> struct ProtoToParam<cv::GArray<U> > { using type = std::vector<U>; };
template<> struct ProtoToParam<cv::GArray<cv::GMat>> { using type = std::vector<cv::Mat>; };
template<typename U> struct ProtoToParam<cv::GOpaque<U> > { using type = U; };
template<typename T> using ProtoToParamT = typename ProtoToParam<T>::type;
template<typename T> struct ProtoToMeta;
template<> struct ProtoToMeta<cv::GMat> { using type = cv::GMatDesc; };
template<> struct ProtoToMeta<cv::GScalar> { using type = cv::GScalarDesc; };
template<typename U> struct ProtoToMeta<cv::GArray<U> > { using type = cv::GArrayDesc; };
template<typename U> struct ProtoToMeta<cv::GOpaque<U> > { using type = cv::GOpaqueDesc; };
template<typename T> using ProtoToMetaT = typename ProtoToMeta<T>::type;
//workaround for MSVC 19.0 bug
template <typename T>
auto make_default()->decltype(T{}) {return {};}
} // detail
/**
* @brief This class is a typed wrapper over a regular GComputation.
*
* `std::function<>`-like template parameter specifies the graph
* signature, so the object's constructor, methods like
* `apply()`, and the derived `GCompiledT::operator()` also become
* typed.
*
* There is no need to use cv::gin() or cv::gout() modifiers with
* objects of this class. Instead, all input arguments are followed
* by all output arguments in the order from the template argument
* signature.
*
* Refer to the following example. Regular (untyped) code is written this way:
*
* @snippet samples/cpp/tutorial_code/gapi/doc_snippets/api_ref_snippets.cpp Untyped_Example
*
* Here:
*
* - cv::GComputation object is created with a lambda constructor
* where it is defined as a two-input, one-output graph.
*
* - Its method `apply()` in fact takes an arbitrary number of arguments
* (as vectors), so the user can pass a wrong number of inputs/outputs
* here. The C++ compiler wouldn't notice that since the cv::GComputation
* API is polymorphic, and only a run-time error will be generated.
*
* Now the same code written with typed API:
*
* @snippet samples/cpp/tutorial_code/gapi/doc_snippets/api_ref_snippets.cpp Typed_Example
*
* The key difference is:
*
* - Now the constructor lambda *must take* parameters and *must
* return* values as defined in the `GComputationT<>` signature.
* - Its method `apply()` does not require any extra specifiers to
* separate input arguments from the output ones
* - A `GCompiledT` (compilation product) takes input/output
* arguments with no extra specifiers as well.
*/
template<typename> class GComputationT;
// Single return value implementation
template<typename R, typename... Args> class GComputationT<R(Args...)>
{
public:
typedef std::function<R(Args...)> Gen;
class GCompiledT
{
private:
friend class GComputationT<R(Args...)>;
cv::GCompiled m_comp;
explicit GCompiledT(const cv::GCompiled &comp) : m_comp(comp) {}
public:
GCompiledT() {}
void operator()(detail::ProtoToParamT<Args>... inArgs,
detail::ProtoToParamT<R> &outArg)
{
m_comp(cv::gin(inArgs...), cv::gout(outArg));
}
explicit operator bool() const
{
return static_cast<bool>(m_comp);
}
};
private:
typedef std::pair<R, GProtoInputArgs > Captured;
Captured capture(const Gen& g, Args... args)
{
return Captured(g(args...), cv::GIn(args...));
}
Captured m_capture;
cv::GComputation m_comp;
public:
GComputationT(const Gen &generator)
: m_capture(capture(generator, detail::make_default<Args>()...))
, m_comp(cv::GProtoInputArgs(std::move(m_capture.second)),
cv::GOut(m_capture.first))
{
}
void apply(detail::ProtoToParamT<Args>... inArgs,
detail::ProtoToParamT<R> &outArg,
GCompileArgs &&args)
{
m_comp.apply(cv::gin(inArgs...), cv::gout(outArg), std::move(args));
}
void apply(detail::ProtoToParamT<Args>... inArgs,
detail::ProtoToParamT<R> &outArg)
{
apply(inArgs..., outArg, GCompileArgs());
}
GCompiledT compile(detail::ProtoToMetaT<Args>... inDescs)
{
GMetaArgs inMetas = { GMetaArg(inDescs)... };
return GCompiledT(m_comp.compile(std::move(inMetas), GCompileArgs()));
}
GCompiledT compile(detail::ProtoToMetaT<Args>... inDescs, GCompileArgs &&args)
{
GMetaArgs inMetas = { GMetaArg(inDescs)... };
return GCompiledT(m_comp.compile(std::move(inMetas), std::move(args)));
}
};
// Multiple (fixed) return value implementation. FIXME: How to avoid copy-paste?
template<typename... R, typename... Args> class GComputationT<std::tuple<R...>(Args...)>
{
public:
typedef std::function<std::tuple<R...>(Args...)> Gen;
class GCompiledT
{
private:
friend class GComputationT<std::tuple<R...>(Args...)>;
cv::GCompiled m_comp;
explicit GCompiledT(const cv::GCompiled &comp) : m_comp(comp) {}
public:
GCompiledT() {}
void operator()(detail::ProtoToParamT<Args>... inArgs,
detail::ProtoToParamT<R>&... outArgs)
{
m_comp(cv::gin(inArgs...), cv::gout(outArgs...));
}
explicit operator bool() const
{
return static_cast<bool>(m_comp);
}
};
private:
typedef std::pair<GProtoArgs, GProtoArgs> Captured;
template<int... IIs>
Captured capture(GProtoArgs &&args, const std::tuple<R...> &rr, detail::Seq<IIs...>)
{
return Captured(cv::GOut(std::get<IIs>(rr)...).m_args, args);
}
Captured capture(const Gen& g, Args... args)
{
return capture(cv::GIn(args...).m_args, g(args...), typename detail::MkSeq<sizeof...(R)>::type());
}
Captured m_capture;
cv::GComputation m_comp;
public:
GComputationT(const Gen &generator)
: m_capture(capture(generator, detail::make_default<Args>()...))
, m_comp(cv::GProtoInputArgs(std::move(m_capture.second)),
cv::GProtoOutputArgs(std::move(m_capture.first)))
{
}
void apply(detail::ProtoToParamT<Args>... inArgs,
detail::ProtoToParamT<R>&... outArgs,
GCompileArgs &&args)
{
m_comp.apply(cv::gin(inArgs...), cv::gout(outArgs...), std::move(args));
}
void apply(detail::ProtoToParamT<Args>... inArgs,
detail::ProtoToParamT<R>&... outArgs)
{
apply(inArgs..., outArgs..., GCompileArgs());
}
GCompiledT compile(detail::ProtoToMetaT<Args>... inDescs)
{
GMetaArgs inMetas = { GMetaArg(inDescs)... };
return GCompiledT(m_comp.compile(std::move(inMetas), GCompileArgs()));
}
GCompiledT compile(detail::ProtoToMetaT<Args>... inDescs, GCompileArgs &&args)
{
GMetaArgs inMetas = { GMetaArg(inDescs)... };
return GCompiledT(m_comp.compile(std::move(inMetas), std::move(args)));
}
};
} // namespace cv
#endif // !defined(GAPI_STANDALONE)
#endif // OPENCV_GAPI_GTYPED_HPP
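
A sketch of the typed wrapper in action, mirroring the description above (pre-removal build assumed; cv::gapi::add comes from the G-API core set):

    #include <opencv2/core.hpp>
    #include <opencv2/gapi.hpp>
    #include <opencv2/gapi/core.hpp>
    #include <opencv2/gapi/gtyped.hpp>

    int main() {
        // The constructor lambda must take and return G-types exactly
        // as stated in the signature.
        cv::GComputationT<cv::GMat(cv::GMat, cv::GMat)> add(
            [](cv::GMat a, cv::GMat b) { return cv::gapi::add(a, b); });

        cv::Mat a = cv::Mat::ones(4, 4, CV_8UC1);
        cv::Mat b = cv::Mat::ones(4, 4, CV_8UC1);
        cv::Mat sum;
        // No cv::gin()/cv::gout(): inputs first, then the output.
        add.apply(a, b, sum);                       // expected: a 4x4 matrix of 2s

        // Typed compilation takes metadata in the same order:
        auto cc = add.compile(cv::descr_of(a), cv::descr_of(b));
        cc(a, b, sum);
        return 0;
    }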

File diff suppressed because it is too large

View File

@ -1,717 +0,0 @@
// This file is part of OpenCV project.
// It is subject to the license terms in the LICENSE file found in the top-level directory
// of this distribution and at http://opencv.org/license.html.
//
// Copyright (C) 2019-2021 Intel Corporation
#ifndef OPENCV_GAPI_INFER_HPP
#define OPENCV_GAPI_INFER_HPP
// FIXME: Inference API is currently only available in full mode
#if !defined(GAPI_STANDALONE)
#include <functional>
#include <string> // string
#include <utility> // tuple
#include <type_traits> // is_same, false_type
#include <opencv2/gapi/util/util.hpp> // all_satisfy
#include <opencv2/gapi/util/any.hpp> // any<>
#include <opencv2/gapi/gkernel.hpp> // GKernelType[M], GBackend
#include <opencv2/gapi/garg.hpp> // GArg
#include <opencv2/gapi/gcommon.hpp> // CompileArgTag
#include <opencv2/gapi/gmetaarg.hpp> // GMetaArg
namespace cv {
template<typename, typename> class GNetworkType;
namespace detail {
// Infer ///////////////////////////////////////////////////////////////////////
template<typename T>
struct accepted_infer_types {
static constexpr const auto value =
std::is_same<typename std::decay<T>::type, cv::GMat>::value
|| std::is_same<typename std::decay<T>::type, cv::GFrame>::value;
};
template<typename... Ts>
using valid_infer_types = all_satisfy<accepted_infer_types, Ts...>;
// Infer2 //////////////////////////////////////////////////////////////////////
template<typename, typename>
struct valid_infer2_types;
// Terminal case 1 (50/50 success)
template<typename T>
struct valid_infer2_types< std::tuple<cv::GMat>, std::tuple<T> > {
// By default, Nets are limited to GMat argument types only
// for infer2, every GMat argument may translate to either
// GArray<GMat> or GArray<Rect>. GArray<> part is stripped
// already at this point.
static constexpr const auto value =
std::is_same<typename std::decay<T>::type, cv::GMat>::value
|| std::is_same<typename std::decay<T>::type, cv::Rect>::value;
};
// Terminal case 2 (100% failure)
template<typename... Ts>
struct valid_infer2_types< std::tuple<>, std::tuple<Ts...> >
: public std::false_type {
};
// Terminal case 3 (100% failure)
template<typename... Ns>
struct valid_infer2_types< std::tuple<Ns...>, std::tuple<> >
: public std::false_type {
};
// Recursion -- generic
template<typename... Ns, typename T, typename...Ts>
struct valid_infer2_types< std::tuple<cv::GMat,Ns...>, std::tuple<T,Ts...> > {
static constexpr const auto value =
valid_infer2_types< std::tuple<cv::GMat>, std::tuple<T> >::value
&& valid_infer2_types< std::tuple<Ns...>, std::tuple<Ts...> >::value;
};
// Struct stores network input/output names.
// Used by infer<Generic>
struct InOutInfo
{
std::vector<std::string> in_names;
std::vector<std::string> out_names;
};
template <typename OutT>
class GInferOutputsTyped
{
public:
GInferOutputsTyped() = default;
GInferOutputsTyped(std::shared_ptr<cv::GCall> call)
: m_priv(std::make_shared<Priv>(std::move(call)))
{
}
OutT at(const std::string& name)
{
auto it = m_priv->blobs.find(name);
if (it == m_priv->blobs.end()) {
// FIXME: Avoid modifying GKernel
auto shape = cv::detail::GTypeTraits<OutT>::shape;
auto kind = cv::detail::GTypeTraits<OutT>::op_kind;
m_priv->call->kernel().outShapes.push_back(shape);
m_priv->call->kernel().outCtors.emplace_back(cv::detail::GObtainCtor<OutT>::get());
m_priv->call->kernel().outKinds.emplace_back(kind);
auto out_idx = static_cast<int>(m_priv->blobs.size());
it = m_priv->blobs.emplace(name,
cv::detail::Yield<OutT>::yield(*(m_priv->call), out_idx)).first;
m_priv->info->out_names.push_back(name);
}
return it->second;
}
private:
struct Priv
{
Priv(std::shared_ptr<cv::GCall> c)
: call(std::move(c)), info(cv::util::any_cast<InOutInfo>(&call->params()))
{
}
std::shared_ptr<cv::GCall> call;
InOutInfo* info = nullptr;
std::unordered_map<std::string, OutT> blobs;
};
std::shared_ptr<Priv> m_priv;
};
template <typename... Ts>
class GInferInputsTyped
{
public:
GInferInputsTyped()
: m_priv(std::make_shared<Priv>())
{
}
template <typename U>
GInferInputsTyped<Ts...>& setInput(const std::string& name, U in)
{
m_priv->blobs.emplace(std::piecewise_construct,
std::forward_as_tuple(name),
std::forward_as_tuple(in));
return *this;
}
using StorageT = cv::util::variant<Ts...>;
StorageT& operator[](const std::string& name) {
return m_priv->blobs[name];
}
using Map = std::unordered_map<std::string, StorageT>;
const Map& getBlobs() const {
return m_priv->blobs;
}
private:
struct Priv
{
std::unordered_map<std::string, StorageT> blobs;
};
std::shared_ptr<Priv> m_priv;
};
template<typename InferT>
std::shared_ptr<cv::GCall> makeCall(const std::string &tag,
std::vector<cv::GArg> &&args,
std::vector<std::string> &&names,
cv::GKinds &&kinds) {
auto call = std::make_shared<cv::GCall>(GKernel{
InferT::id(),
tag,
InferT::getOutMeta,
{}, // outShape will be filled later
std::move(kinds),
{}, // outCtors will be filled later
{}, // outKinds will be filled later
});
call->setArgs(std::move(args));
call->params() = cv::detail::InOutInfo{std::move(names), {}};
return call;
}
} // namespace detail
// TODO: maybe tuple_wrap_helper from util.hpp may help with this.
// Multiple-return-value network definition (specialized base class)
template<typename K, typename... R, typename... Args>
class GNetworkType<K, std::function<std::tuple<R...>(Args...)> >
{
public:
using InArgs = std::tuple<Args...>;
using OutArgs = std::tuple<R...>;
using Result = OutArgs;
using API = std::function<Result(Args...)>;
using ResultL = std::tuple< cv::GArray<R>... >;
};
// Single-return-value network definition (specialized base class)
template<typename K, typename R, typename... Args>
class GNetworkType<K, std::function<R(Args...)> >
{
public:
using InArgs = std::tuple<Args...>;
using OutArgs = std::tuple<R>;
using Result = R;
using API = std::function<R(Args...)>;
using ResultL = cv::GArray<R>;
};
// InferAPI: Accepts either GMat or GFrame for every individual network input
template<class Net, class... Ts>
struct InferAPI {
using type = typename std::enable_if
< detail::valid_infer_types<Ts...>::value
&& std::tuple_size<typename Net::InArgs>::value == sizeof...(Ts)
, std::function<typename Net::Result(Ts...)>
>::type;
};
// InferAPIRoi: Accepts a rectangle and either GMat or GFrame
template<class Net, class T>
struct InferAPIRoi {
using type = typename std::enable_if
< detail::valid_infer_types<T>::value
&& std::tuple_size<typename Net::InArgs>::value == 1u
, std::function<typename Net::Result(cv::GOpaque<cv::Rect>, T)>
>::type;
};
// InferAPIList: Accepts a list of rectangles and list of GMat/GFrames;
// crops every input.
template<class Net, class... Ts>
struct InferAPIList {
using type = typename std::enable_if
< detail::valid_infer_types<Ts...>::value
&& std::tuple_size<typename Net::InArgs>::value == sizeof...(Ts)
, std::function<typename Net::ResultL(cv::GArray<cv::Rect>, Ts...)>
>::type;
};
// APIList2 is also template to allow different calling options
// (GArray<cv::Rect> vs GArray<cv::GMat> per input)
template<class Net, typename T, class... Ts>
struct InferAPIList2 {
using type = typename std::enable_if
< detail::valid_infer_types<T>::value &&
cv::detail::valid_infer2_types< typename Net::InArgs
, std::tuple<Ts...> >::value,
std::function<typename Net::ResultL(T, cv::GArray<Ts>...)>
>::type;
};
// Base "Infer" kernel. Note - for whatever network, kernel ID
// is always the same. Different inference calls are distinguished by
// network _tag_ (an extra field in GCall)
//
// getOutMeta is a stub callback collected by G-API kernel subsystem
// automatically. This is a rare case when this callback is defined by
// a particular backend, not by a network itself.
struct GInferBase {
static constexpr const char * id() {
return "org.opencv.dnn.infer"; // Universal stub
}
static GMetaArgs getOutMeta(const GMetaArgs &, const GArgs &) {
return GMetaArgs{}; // One more universal stub
}
};
// Base "InferROI" kernel.
// All notes from "Infer" kernel apply here as well.
struct GInferROIBase {
static constexpr const char * id() {
return "org.opencv.dnn.infer-roi"; // Universal stub
}
static GMetaArgs getOutMeta(const GMetaArgs &, const GArgs &) {
return GMetaArgs{}; // One more universal stub
}
};
// Base "Infer list" kernel.
// All notes from "Infer" kernel apply here as well.
struct GInferListBase {
static constexpr const char * id() {
return "org.opencv.dnn.infer-roi-list-1"; // Universal stub
}
static GMetaArgs getOutMeta(const GMetaArgs &, const GArgs &) {
return GMetaArgs{}; // One more universal stub
}
};
// Base "Infer list 2" kernel.
// All notes from "Infer" kernel apply here as well.
struct GInferList2Base {
static constexpr const char * id() {
return "org.opencv.dnn.infer-roi-list-2"; // Universal stub
}
static GMetaArgs getOutMeta(const GMetaArgs &, const GArgs &) {
return GMetaArgs{}; // One more universal stub
}
};
// A generic inference kernel. API (::on()) is fully defined by the Net
// template parameter.
// Acts as a regular kernel in graph (via KernelTypeMedium).
template<typename Net, typename... Args>
struct GInfer final
: public GInferBase
, public detail::KernelTypeMedium< GInfer<Net, Args...>
, typename InferAPI<Net, Args...>::type > {
using GInferBase::getOutMeta; // FIXME: name lookup conflict workaround?
static constexpr const char* tag() { return Net::tag(); }
};
// A specific roi-inference kernel. API (::on()) is fixed here and
// verified against Net.
template<typename Net, typename T>
struct GInferROI final
: public GInferROIBase
, public detail::KernelTypeMedium< GInferROI<Net, T>
, typename InferAPIRoi<Net, T>::type > {
using GInferROIBase::getOutMeta; // FIXME: name lookup conflict workaround?
static constexpr const char* tag() { return Net::tag(); }
};
// A generic roi-list inference kernel. API (::on()) is derived from
// the Net template parameter (see more in infer<> overload).
template<typename Net, typename... Args>
struct GInferList final
: public GInferListBase
, public detail::KernelTypeMedium< GInferList<Net, Args...>
, typename InferAPIList<Net, Args...>::type > {
using GInferListBase::getOutMeta; // FIXME: name lookup conflict workaround?
static constexpr const char* tag() { return Net::tag(); }
};
// An even more generic roi-list inference kernel. API (::on()) is
// derived from the Net template parameter (see more in infer<>
// overload).
// Takes an extra variadic template list to reflect how this network
// was called (with Rects or GMats as array parameters)
template<typename Net, typename T, typename... Args>
struct GInferList2 final
: public GInferList2Base
, public detail::KernelTypeMedium< GInferList2<Net, T, Args...>
, typename InferAPIList2<Net, T, Args...>::type > {
using GInferList2Base::getOutMeta; // FIXME: name lookup conflict workaround?
static constexpr const char* tag() { return Net::tag(); }
};
/**
* @brief G-API object used to collect network inputs
*/
using GInferInputs = cv::detail::GInferInputsTyped<cv::GMat, cv::GFrame>;
/**
* @brief G-API object used to collect the list of network inputs
*/
using GInferListInputs = cv::detail::GInferInputsTyped<cv::GArray<cv::GMat>, cv::GArray<cv::Rect>>;
/**
* @brief G-API object used to collect network outputs
*/
using GInferOutputs = cv::detail::GInferOutputsTyped<cv::GMat>;
/**
* @brief G-API object used to collect the list of network outputs
*/
using GInferListOutputs = cv::detail::GInferOutputsTyped<cv::GArray<cv::GMat>>;
namespace detail {
void inline unpackBlobs(const cv::GInferInputs::Map& blobs,
std::vector<cv::GArg>& args,
std::vector<std::string>& names,
cv::GKinds& kinds)
{
for (auto&& p : blobs) {
names.emplace_back(p.first);
switch (p.second.index()) {
case cv::GInferInputs::StorageT::index_of<cv::GMat>():
args.emplace_back(cv::util::get<cv::GMat>(p.second));
kinds.emplace_back(cv::detail::OpaqueKind::CV_MAT);
break;
case cv::GInferInputs::StorageT::index_of<cv::GFrame>():
args.emplace_back(cv::util::get<cv::GFrame>(p.second));
kinds.emplace_back(cv::detail::OpaqueKind::CV_UNKNOWN);
break;
default:
GAPI_Error("InternalError");
}
}
}
template <typename InferType>
struct InferROITraits;
template <>
struct InferROITraits<GInferROIBase>
{
using outType = cv::GInferOutputs;
using inType = cv::GOpaque<cv::Rect>;
};
template <>
struct InferROITraits<GInferListBase>
{
using outType = cv::GInferListOutputs;
using inType = cv::GArray<cv::Rect>;
};
template<typename InferType>
typename InferROITraits<InferType>::outType
inferGenericROI(const std::string& tag,
const typename InferROITraits<InferType>::inType& in,
const cv::GInferInputs& inputs)
{
std::vector<cv::GArg> args;
std::vector<std::string> names;
cv::GKinds kinds;
args.emplace_back(in);
kinds.emplace_back(cv::detail::OpaqueKind::CV_RECT);
unpackBlobs(inputs.getBlobs(), args, names, kinds);
auto call = cv::detail::makeCall<InferType>(tag,
std::move(args),
std::move(names),
std::move(kinds));
return {std::move(call)};
}
} // namespace detail
} // namespace cv
// FIXME: Probably the <API> signature makes a function/tuple/function round-trip
#define G_API_NET(Class, API, Tag) \
struct Class final: public cv::GNetworkType<Class, std::function API> { \
static constexpr const char * tag() { return Tag; } \
}
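As a hedged illustration of the macro above, a single-output and a two-output topology could be declared as follows; the type names and tag strings are placeholders, not real models:

#include <opencv2/gapi/infer.hpp>

// One cv::GMat in, one cv::GMat out; the tag is an arbitrary unique string.
G_API_NET(FaceDetector, <cv::GMat(cv::GMat)>, "sample.custom.face-detector");

// A multi-output network is declared with a std::tuple of outputs.
G_API_NET(AgeGender, <std::tuple<cv::GMat, cv::GMat>(cv::GMat)>,
          "sample.custom.age-gender");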
namespace cv {
namespace gapi {
/** @brief Calculates response for the specified network (template
* parameter) for the specified region in the source image.
* Currently expects a single-input network only.
*
 * @tparam Net A network type defined with the G_API_NET() macro.
* @param in input image where to take ROI from.
* @param roi an object describing the region of interest
* in the source image. May be calculated in the same graph dynamically.
* @return an object of return type as defined in G_API_NET().
* If a network has multiple return values (defined with a tuple), a tuple of
* objects of appropriate type is returned.
* @sa G_API_NET()
*/
template<typename Net, typename T>
typename Net::Result infer(cv::GOpaque<cv::Rect> roi, T in) {
return GInferROI<Net, T>::on(roi, in);
}
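A sketch of how this overload is typically wired into a graph, reusing the hypothetical FaceDetector network from the G_API_NET sketch above; here the region of interest enters the graph as a plain input, though it may equally be produced by another operation:

void build_roi_graph()
{
    cv::GMat in;
    cv::GOpaque<cv::Rect> roi;                       // region of interest, a graph input here
    cv::GMat out = cv::gapi::infer<FaceDetector>(roi, in);
    cv::GComputation graph(cv::GIn(in, roi), cv::GOut(out));
}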
/** @brief Calculates responses for the specified network (template
* parameter) for every region in the source image.
*
 * @tparam Net A network type defined with the G_API_NET() macro.
* @param roi a list of rectangles describing regions of interest
* in the source image. Usually an output of object detector or tracker.
* @param args network's input parameters as specified in G_API_NET() macro.
* NOTE: verified to work reliably with 1-input topologies only.
* @return a list of objects of return type as defined in G_API_NET().
* If a network has multiple return values (defined with a tuple), a tuple of
* GArray<> objects is returned with the appropriate types inside.
* @sa G_API_NET()
*/
template<typename Net, typename... Args>
typename Net::ResultL infer(cv::GArray<cv::Rect> roi, Args&&... args) {
return GInferList<Net, Args...>::on(roi, std::forward<Args>(args)...);
}
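A sketch of list inference with the hypothetical AgeGender network declared earlier: every rectangle (e.g. coming from a detector's post-processing) is cropped from the frame and run through the network, producing one GArray element per region:

void build_list_graph()
{
    cv::GMat in;
    cv::GArray<cv::Rect> faces;                      // e.g. parsed from a detector's output
    cv::GArray<cv::GMat> ages, genders;
    std::tie(ages, genders) = cv::gapi::infer<AgeGender>(faces, in);
    cv::GComputation graph(cv::GIn(in, faces), cv::GOut(ages, genders));
}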
/** @brief Calculates responses for the specified network (template
* parameter) for every region in the source image, extended version.
*
 * @tparam Net A network type defined with the G_API_NET() macro.
* @param image A source image containing regions of interest
* @param args GArray<> objects of cv::Rect or cv::GMat, one per every
* network input:
* - If a cv::GArray<cv::Rect> is passed, the appropriate
* regions are taken from `image` and preprocessed to this particular
* network input;
 * - If a cv::GArray<cv::GMat> is passed, the underlying data is treated
 * as a tensor (no automatic preprocessing happens).
* @return a list of objects of return type as defined in G_API_NET().
* If a network has multiple return values (defined with a tuple), a tuple of
* GArray<> objects is returned with the appropriate types inside.
* @sa G_API_NET()
*/
template<typename Net, typename T, typename... Args>
typename Net::ResultL infer2(T image, cv::GArray<Args>... args) {
// FIXME: Declared as "2" because in the current form it steals
// overloads from the regular infer
return GInferList2<Net, T, Args...>::on(image, args...);
}
/**
* @brief Calculates response for the specified network (template
* parameter) given the input data.
*
 * @tparam Net A network type defined with the G_API_NET() macro.
* @param args network's input parameters as specified in G_API_NET() macro.
* @return an object of return type as defined in G_API_NET().
* If a network has multiple return values (defined with a tuple), a tuple of
* objects of appropriate type is returned.
* @sa G_API_NET()
*/
template<typename Net, typename... Args>
typename Net::Result infer(Args&&... args) {
return GInfer<Net, Args...>::on(std::forward<Args>(args)...);
}
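The most basic form, sketched with the hypothetical FaceDetector running on the whole input frame:

void build_whole_frame_graph()
{
    cv::GMat in;
    cv::GMat detections = cv::gapi::infer<FaceDetector>(in);  // full-frame inference
    cv::GComputation graph(cv::GIn(in), cv::GOut(detections));
}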
/**
* @brief Generic network type: input and output layers are configured dynamically at runtime
*
* Unlike the network types defined with G_API_NET macro, this one
* doesn't fix number of network inputs and outputs at the compilation stage
* thus providing user with an opportunity to program them in runtime.
*/
struct Generic { };
/**
* @brief Calculates response for generic network
*
* @param tag a network tag
 * @param inputs network's inputs
* @return a GInferOutputs
*/
template<typename T = Generic> cv::GInferOutputs
infer(const std::string& tag, const cv::GInferInputs& inputs)
{
std::vector<cv::GArg> args;
std::vector<std::string> names;
cv::GKinds kinds;
cv::detail::unpackBlobs(inputs.getBlobs(), args, names, kinds);
auto call = cv::detail::makeCall<GInferBase>(tag,
std::move(args),
std::move(names),
std::move(kinds));
return cv::GInferOutputs{std::move(call)};
}
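A sketch of this generic (tag-based) flavor; the tag and layer names ("my-net-tag", "data", "prob") are placeholders and are resolved by the backend parameters (e.g. a Params<cv::gapi::Generic> object) supplied at compile time:

void build_generic_graph()
{
    cv::GMat in;
    cv::GInferInputs inputs;
    inputs["data"] = in;                             // input layer name bound at run time

    cv::GInferOutputs outputs = cv::gapi::infer<cv::gapi::Generic>("my-net-tag", inputs);
    cv::GMat prob = outputs.at("prob");              // output layer name bound at run time

    cv::GComputation graph(cv::GIn(in), cv::GOut(prob));
}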
/** @brief Calculates response for the generic network
* for the specified region in the source image.
* Currently expects a single-input network only.
*
* @param tag a network tag
 * @param roi an object describing the region of interest
 * in the source image. May be calculated in the same graph dynamically.
 * @param inputs network's inputs
* @return a cv::GInferOutputs
*/
template<typename T = Generic> cv::GInferOutputs
infer(const std::string& tag, const cv::GOpaque<cv::Rect>& roi, const cv::GInferInputs& inputs)
{
return cv::detail::inferGenericROI<GInferROIBase>(tag, roi, inputs);
}
/** @brief Calculates responses for the specified network
* for every region in the source image.
*
* @param tag a network tag
* @param rois a list of rectangles describing regions of interest
* in the source image. Usually an output of object detector or tracker.
 * @param inputs network's inputs
* @return a cv::GInferListOutputs
*/
template<typename T = Generic> cv::GInferListOutputs
infer(const std::string& tag, const cv::GArray<cv::Rect>& rois, const cv::GInferInputs& inputs)
{
return cv::detail::inferGenericROI<GInferListBase>(tag, rois, inputs);
}
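The same generic approach over a list of regions, sketched below with placeholder names; every element of probs corresponds to one input rectangle:

void build_generic_list_graph()
{
    cv::GMat in;
    cv::GArray<cv::Rect> rois;                       // produced elsewhere in the graph
    cv::GInferInputs inputs;
    inputs["data"] = in;

    cv::GInferListOutputs outs = cv::gapi::infer<cv::gapi::Generic>("my-net-tag", rois, inputs);
    cv::GArray<cv::GMat> probs = outs.at("prob");

    cv::GComputation graph(cv::GIn(in, rois), cv::GOut(probs));
}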
/** @brief Calculates responses for the specified network
* for every region in the source image, extended version.
*
* @param tag a network tag
* @param in a source image containing regions of interest.
 * @param inputs network's inputs
* @return a cv::GInferListOutputs
*/
template<typename T = Generic, typename Input>
typename std::enable_if<cv::detail::accepted_infer_types<Input>::value, cv::GInferListOutputs>::type
infer2(const std::string& tag,
const Input& in,
const cv::GInferListInputs& inputs)
{
std::vector<cv::GArg> args;
std::vector<std::string> names;
cv::GKinds kinds;
args.emplace_back(in);
auto k = cv::detail::GOpaqueTraits<Input>::kind;
kinds.emplace_back(k);
for (auto&& p : inputs.getBlobs()) {
names.emplace_back(p.first);
switch (p.second.index()) {
case cv::GInferListInputs::StorageT::index_of<cv::GArray<cv::GMat>>():
args.emplace_back(cv::util::get<cv::GArray<cv::GMat>>(p.second));
kinds.emplace_back(cv::detail::OpaqueKind::CV_MAT);
break;
case cv::GInferListInputs::StorageT::index_of<cv::GArray<cv::Rect>>():
args.emplace_back(cv::util::get<cv::GArray<cv::Rect>>(p.second));
kinds.emplace_back(cv::detail::OpaqueKind::CV_RECT);
break;
default:
GAPI_Error("InternalError");
}
}
auto call = cv::detail::makeCall<GInferList2Base>(tag,
std::move(args),
std::move(names),
std::move(kinds));
return cv::GInferListOutputs{std::move(call)};
}
} // namespace gapi
} // namespace cv
#endif // GAPI_STANDALONE
namespace cv {
namespace gapi {
// Note: the below code _is_ part of STANDALONE build,
// just to make our compiler code compileable.
// A type-erased form of network parameters.
// Similar to how a type-erased GKernel is represented and used.
/// @private
struct GAPI_EXPORTS_W_SIMPLE GNetParam {
std::string tag; // FIXME: const?
GBackend backend; // Specifies the execution model
util::any params; // Backend-interpreted parameter structure
};
/** \addtogroup gapi_compile_args
* @{
*/
/**
* @brief A container class for network configurations. Similar to
* GKernelPackage. Use cv::gapi::networks() to construct this object.
*
* @sa cv::gapi::networks
*/
struct GAPI_EXPORTS_W_SIMPLE GNetPackage {
GAPI_WRAP GNetPackage() = default;
GAPI_WRAP explicit GNetPackage(std::vector<GNetParam> nets);
explicit GNetPackage(std::initializer_list<GNetParam> ii);
std::vector<GBackend> backends() const;
std::vector<GNetParam> networks;
};
/** @} gapi_compile_args */
} // namespace gapi
namespace detail {
template<typename T>
gapi::GNetParam strip(T&& t) {
return gapi::GNetParam { t.tag()
, t.backend()
, t.params()
};
}
template<> struct CompileArgTag<cv::gapi::GNetPackage> {
static const char* tag() { return "gapi.net_package"; }
};
} // namespace cv::detail
namespace gapi {
template<typename... Args>
cv::gapi::GNetPackage networks(Args&&... args) {
return cv::gapi::GNetPackage({ cv::detail::strip(args)... });
}
inline cv::gapi::GNetPackage& operator += ( cv::gapi::GNetPackage& lhs,
const cv::gapi::GNetPackage& rhs) {
lhs.networks.reserve(lhs.networks.size() + rhs.networks.size());
lhs.networks.insert(lhs.networks.end(), rhs.networks.begin(), rhs.networks.end());
return lhs;
}
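A sketch of how a GNetPackage typically reaches the graph: backend-specific parameters (here the OpenVINO/IE Params described further below; the paths, device string and FaceDetector type are placeholders) are bundled with cv::gapi::networks() and passed via cv::compile_args():

void run_with_networks(cv::GComputation& graph, const cv::Mat& frame, cv::Mat& result)
{
    cv::gapi::ie::Params<FaceDetector> face_params("face-detector.xml",   // topology IR
                                                   "face-detector.bin",   // weights
                                                   "CPU");                // target device
    auto nets = cv::gapi::networks(face_params);

    graph.apply(cv::gin(frame), cv::gout(result), cv::compile_args(nets));
}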
} // namespace gapi
} // namespace cv
#endif // OPENCV_GAPI_INFER_HPP

View File

@ -1,70 +0,0 @@
// This file is part of OpenCV project.
// It is subject to the license terms in the LICENSE file found in the top-level directory
// of this distribution and at http://opencv.org/license.html.
//
// Copyright (C) 2020 Intel Corporation
#ifndef OPENCV_GAPI_INFER_BINDINGS_IE_HPP
#define OPENCV_GAPI_INFER_BINDINGS_IE_HPP
#include <opencv2/gapi/util/any.hpp>
#include "opencv2/gapi/own/exports.hpp" // GAPI_EXPORTS
#include <opencv2/gapi/gkernel.hpp> // GKernelPackage
#include <opencv2/gapi/infer/ie.hpp> // Params
#include <string>
namespace cv {
namespace gapi {
namespace ie {
// NB: Used by python wrapper
// This class can be marked as SIMPLE, because it's implemented as pimpl
class GAPI_EXPORTS_W_SIMPLE PyParams {
public:
GAPI_WRAP
PyParams() = default;
GAPI_WRAP
PyParams(const std::string &tag,
const std::string &model,
const std::string &weights,
const std::string &device);
GAPI_WRAP
PyParams(const std::string &tag,
const std::string &model,
const std::string &device);
GAPI_WRAP
PyParams& constInput(const std::string &layer_name,
const cv::Mat &data,
TraitAs hint = TraitAs::TENSOR);
GAPI_WRAP
PyParams& cfgNumRequests(size_t nireq);
GAPI_WRAP
PyParams& cfgBatchSize(const size_t size);
GBackend backend() const;
std::string tag() const;
cv::util::any params() const;
private:
std::shared_ptr<Params<cv::gapi::Generic>> m_priv;
};
GAPI_EXPORTS_W PyParams params(const std::string &tag,
const std::string &model,
const std::string &weights,
const std::string &device);
GAPI_EXPORTS_W PyParams params(const std::string &tag,
const std::string &model,
const std::string &device);
} // namespace ie
} // namespace gapi
} // namespace cv
#endif // OPENCV_GAPI_INFER_BINDINGS_IE_HPP

View File

@ -1,74 +0,0 @@
// This file is part of OpenCV project.
// It is subject to the license terms in the LICENSE file found in the top-level
// directory of this distribution and at http://opencv.org/license.html.
#ifndef OPENCV_GAPI_INFER_BINDINGS_ONNX_HPP
#define OPENCV_GAPI_INFER_BINDINGS_ONNX_HPP
#include <opencv2/gapi/gkernel.hpp> // GKernelPackage
#include <opencv2/gapi/infer/onnx.hpp> // Params
#include "opencv2/gapi/own/exports.hpp" // GAPI_EXPORTS
#include <opencv2/gapi/util/any.hpp>
#include <string>
namespace cv {
namespace gapi {
namespace onnx {
// NB: Used by python wrapper
// This class can be marked as SIMPLE, because it's implemented as pimpl
class GAPI_EXPORTS_W_SIMPLE PyParams {
public:
GAPI_WRAP
PyParams() = default;
GAPI_WRAP
PyParams(const std::string& tag, const std::string& model_path);
GAPI_WRAP
PyParams& cfgMeanStd(const std::string &layer_name,
const cv::Scalar &m,
const cv::Scalar &s);
GAPI_WRAP
PyParams& cfgNormalize(const std::string &layer_name, bool flag);
GAPI_WRAP
PyParams& cfgAddExecutionProvider(ep::OpenVINO ep);
GAPI_WRAP
PyParams& cfgAddExecutionProvider(ep::DirectML ep);
GAPI_WRAP
PyParams& cfgAddExecutionProvider(ep::CoreML ep);
GAPI_WRAP
PyParams& cfgAddExecutionProvider(ep::CUDA ep);
GAPI_WRAP
PyParams& cfgAddExecutionProvider(ep::TensorRT ep);
GAPI_WRAP
PyParams& cfgDisableMemPattern();
GAPI_WRAP
PyParams& cfgSessionOptions(const std::map<std::string, std::string>& options);
GAPI_WRAP
PyParams& cfgOptLevel(const int opt_level);
GBackend backend() const;
std::string tag() const;
cv::util::any params() const;
private:
std::shared_ptr<Params<cv::gapi::Generic>> m_priv;
};
GAPI_EXPORTS_W PyParams params(const std::string& tag, const std::string& model_path);
} // namespace onnx
} // namespace gapi
} // namespace cv
#endif // OPENCV_GAPI_INFER_BINDINGS_ONNX_HPP

View File

@ -1,128 +0,0 @@
// This file is part of OpenCV project.
// It is subject to the license terms in the LICENSE file found in the top-level directory
// of this distribution and at http://opencv.org/license.html.
//
// Copyright (C) 2023 Intel Corporation
#ifndef OPENCV_GAPI_INFER_BINDINGS_OV_HPP
#define OPENCV_GAPI_INFER_BINDINGS_OV_HPP
#include <opencv2/gapi/util/any.hpp>
#include "opencv2/gapi/own/exports.hpp" // GAPI_EXPORTS
#include <opencv2/gapi/gkernel.hpp> // GKernelPackage
#include <opencv2/gapi/infer/ov.hpp> // Params
#include <string>
namespace cv {
namespace gapi {
namespace ov {
// NB: Used by python wrapper
// This class can be marked as SIMPLE, because it's implemented as pimpl
class GAPI_EXPORTS_W_SIMPLE PyParams {
public:
GAPI_WRAP
PyParams() = default;
GAPI_WRAP
PyParams(const std::string &tag,
const std::string &model_path,
const std::string &bin_path,
const std::string &device);
GAPI_WRAP
PyParams(const std::string &tag,
const std::string &blob_path,
const std::string &device);
GAPI_WRAP
PyParams& cfgPluginConfig(
const std::map<std::string, std::string> &config);
GAPI_WRAP
PyParams& cfgInputTensorLayout(std::string tensor_layout);
GAPI_WRAP
PyParams& cfgInputTensorLayout(
std::map<std::string, std::string> layout_map);
GAPI_WRAP
PyParams& cfgInputModelLayout(std::string tensor_layout);
GAPI_WRAP
PyParams& cfgInputModelLayout(
std::map<std::string, std::string> layout_map);
GAPI_WRAP
PyParams& cfgOutputTensorLayout(std::string tensor_layout);
GAPI_WRAP
PyParams& cfgOutputTensorLayout(
std::map<std::string, std::string> layout_map);
GAPI_WRAP
PyParams& cfgOutputModelLayout(std::string tensor_layout);
GAPI_WRAP
PyParams& cfgOutputModelLayout(
std::map<std::string, std::string> layout_map);
GAPI_WRAP
PyParams& cfgOutputTensorPrecision(int precision);
GAPI_WRAP
PyParams& cfgOutputTensorPrecision(
std::map<std::string, int> precision_map);
GAPI_WRAP
PyParams& cfgReshape(std::vector<size_t> new_shape);
GAPI_WRAP
PyParams& cfgReshape(
std::map<std::string, std::vector<size_t>> new_shape_map);
GAPI_WRAP
PyParams& cfgNumRequests(const size_t nireq);
GAPI_WRAP
PyParams& cfgMean(std::vector<float> mean_values);
GAPI_WRAP
PyParams& cfgMean(
std::map<std::string, std::vector<float>> mean_map);
GAPI_WRAP
PyParams& cfgScale(std::vector<float> scale_values);
GAPI_WRAP
PyParams& cfgScale(
std::map<std::string, std::vector<float>> scale_map);
GAPI_WRAP
PyParams& cfgResize(int interpolation);
GAPI_WRAP
PyParams& cfgResize(std::map<std::string, int> interpolation);
GBackend backend() const;
std::string tag() const;
cv::util::any params() const;
private:
std::shared_ptr<Params<cv::gapi::Generic>> m_priv;
};
GAPI_EXPORTS_W PyParams params(const std::string &tag,
const std::string &model_path,
const std::string &weights,
const std::string &device);
GAPI_EXPORTS_W PyParams params(const std::string &tag,
const std::string &bin_path,
const std::string &device);
} // namespace ov
} // namespace gapi
} // namespace cv
#endif // OPENCV_GAPI_INFER_BINDINGS_OV_HPP

View File

@ -1,711 +0,0 @@
// This file is part of OpenCV project.
// It is subject to the license terms in the LICENSE file found in the top-level directory
// of this distribution and at http://opencv.org/license.html.
//
// Copyright (C) 2019-2023 Intel Corporation
#ifndef OPENCV_GAPI_INFER_IE_HPP
#define OPENCV_GAPI_INFER_IE_HPP
#include <unordered_map>
#include <unordered_set>
#include <string>
#include <array>
#include <tuple> // tuple, tuple_size
#include <map>
#include <opencv2/gapi/opencv_includes.hpp>
#include <opencv2/gapi/util/any.hpp>
#include <opencv2/core/cvdef.h> // GAPI_EXPORTS
#include <opencv2/gapi/gkernel.hpp> // GKernelPackage
#include <opencv2/gapi/infer.hpp> // Generic
#include <opencv2/gapi/streaming/onevpl/accel_types.hpp> // Preproc Dev & Ctx
namespace cv {
namespace gapi {
// FIXME: introduce a new sub-namespace for NN?
/**
* @brief This namespace contains G-API OpenVINO backend functions,
* structures, and symbols.
*/
namespace ie {
GAPI_EXPORTS cv::gapi::GBackend backend();
/**
 * Specifies how G-API and IE should treat input data
*
* In OpenCV, the same cv::Mat is used to represent both
* image and tensor data. Sometimes those are hardly distinguishable,
* so this extra parameter is used to give G-API a hint.
*
* This hint controls how G-API reinterprets the data when converting
* it to IE Blob format (and which layout/etc is assigned to this data).
*/
enum class TraitAs: int
{
TENSOR, //!< G-API treats an associated cv::Mat as a raw tensor and passes dimensions as-is
IMAGE //!< G-API treats an associated cv::Mat as an image so it creates an "image" blob (NCHW/NHWC, etc)
};
using IEConfig = std::map<std::string, std::string>;
enum InferMode {Sync, Async};
namespace detail {
template <typename T>
using AttrMap = std::map<std::string, T>;
// NB: This type is used to hold in/out layers
// attributes such as precision, layout, shape etc.
//
// User can provide attributes either:
// 1. cv::util::monostate - No value specified explicitly.
// 2. Attr - value specified explicitly that should be broadcasted to all layers.
// 3. AttrMap[str->T] - map specifies value for particular layer.
template <typename Attr>
using LayerVariantAttr = cv::util::variant< cv::util::monostate
, AttrMap<Attr>
, Attr>;
struct ParamDesc {
std::string model_path;
std::string weights_path;
std::string device_id;
std::vector<std::string> input_names;
std::vector<std::string> output_names;
using ConstInput = std::pair<cv::Mat, TraitAs>;
std::unordered_map<std::string, ConstInput> const_inputs;
std::size_t num_in;
std::size_t num_out;
enum class Kind {Load, Import};
Kind kind;
bool is_generic;
IEConfig config;
std::map<std::string, std::vector<std::size_t>> reshape_table;
std::unordered_set<std::string> layer_names_to_reshape;
// NB: Number of asynchronous infer requests
size_t nireq;
// NB: An optional config to setup RemoteContext for IE
cv::util::any context_config;
// NB: batch_size can't default to 1, because some models
// have a 2D (Layout::NC) input, and if its first dimension is not equal to 1,
// net.setBatchSize(1) would overwrite it.
cv::optional<size_t> batch_size;
cv::optional<cv::gapi::wip::onevpl::Device> vpl_preproc_device;
cv::optional<cv::gapi::wip::onevpl::Context> vpl_preproc_ctx;
InferMode mode;
using PrecisionT = int;
using PrecisionMapT = std::unordered_map<std::string, PrecisionT>;
// NB: This parameter can contain:
// 1. cv::util::monostate - Don't specify precision, but use default from IR/Blob.
// 2. PrecisionT (CV_8U, CV_32F, ...) - Specifies precision for all output layers.
// 3. PrecisionMapT ({{"layer0", CV_32F}, {"layer1", CV_16F}} - Specifies precision for certain output layer.
// cv::util::monostate is default value that means precision wasn't specified.
using PrecisionVariantT = cv::util::variant<cv::util::monostate,
PrecisionT,
PrecisionMapT>;
PrecisionVariantT output_precision;
LayerVariantAttr<std::string> input_layout;
LayerVariantAttr<std::string> output_layout;
LayerVariantAttr<int> interpolation;
};
} // namespace detail
// FIXME: this is probably a shared (reusable) thing
template<typename Net>
struct PortCfg {
using In = std::array
< std::string
, std::tuple_size<typename Net::InArgs>::value >;
using Out = std::array
< std::string
, std::tuple_size<typename Net::OutArgs>::value >;
};
/**
* @brief This structure provides functions
* that fill inference parameters for "OpenVINO Toolkit" model.
*/
template<typename Net> class Params {
public:
/** @brief Class constructor.
Constructs Params based on model information and specifies default values for other
inference description parameters. Model is loaded and compiled using "OpenVINO Toolkit".
@param model Path to topology IR (.xml file).
@param weights Path to weights (.bin file).
@param device target device to use.
*/
Params(const std::string &model,
const std::string &weights,
const std::string &device)
: desc{ model, weights, device, {}, {}, {}
, std::tuple_size<typename Net::InArgs>::value // num_in
, std::tuple_size<typename Net::OutArgs>::value // num_out
, detail::ParamDesc::Kind::Load
, false
, {}
, {}
, {}
, 1u
, {}
, {}
, {}
, {}
, InferMode::Async
, {}
, {}
, {}
, {} } {
}
/** @overload
Use this constructor to work with a pre-compiled network.
The model is imported from a pre-compiled blob.
@param model Path to model.
@param device target device to use.
*/
Params(const std::string &model,
const std::string &device)
: desc{ model, {}, device, {}, {}, {}
, std::tuple_size<typename Net::InArgs>::value // num_in
, std::tuple_size<typename Net::OutArgs>::value // num_out
, detail::ParamDesc::Kind::Import
, false
, {}
, {}
, {}
, 1u
, {}
, {}
, {}
, {}
, InferMode::Async
, {}
, {}
, {}
, {} } {
}
/** @brief Specifies sequence of network input layers names for inference.
The function is used to associate cv::gapi::infer<> inputs with the model inputs.
Number of names has to match the number of network inputs as defined in G_API_NET().
In case a network has only single input layer, there is no need to specify name manually.
@param layer_names std::array<std::string, N> where N is the number of inputs
as defined in the @ref G_API_NET. Contains names of input layers.
@return reference to this parameter structure.
*/
Params<Net>& cfgInputLayers(const typename PortCfg<Net>::In &layer_names) {
desc.input_names.clear();
desc.input_names.reserve(layer_names.size());
std::copy(layer_names.begin(), layer_names.end(),
std::back_inserter(desc.input_names));
return *this;
}
/** @brief Specifies sequence of network output layers names for inference.
The function is used to associate cv::gapi::infer<> outputs with the model outputs.
Number of names has to match the number of network outputs as defined in G_API_NET().
In case a network has only single output layer, there is no need to specify name manually.
@param layer_names std::array<std::string, N> where N is the number of outputs
as defined in the @ref G_API_NET. Contains names of output layers.
@return reference to this parameter structure.
*/
Params<Net>& cfgOutputLayers(const typename PortCfg<Net>::Out &layer_names) {
desc.output_names.clear();
desc.output_names.reserve(layer_names.size());
std::copy(layer_names.begin(), layer_names.end(),
std::back_inserter(desc.output_names));
return *this;
}
/** @brief Specifies a constant input.
The function is used to set a constant input. This input has to be
a preprocessed tensor if its type is TENSOR. You need to provide the name of the
network layer which will receive the provided data.
@param layer_name Name of network layer.
@param data cv::Mat that contains data which will be associated with network layer.
@param hint Input type @sa cv::gapi::ie::TraitAs.
@return reference to this parameter structure.
*/
Params<Net>& constInput(const std::string &layer_name,
const cv::Mat &data,
TraitAs hint = TraitAs::TENSOR) {
desc.const_inputs[layer_name] = {data, hint};
return *this;
}
/** @brief Specifies OpenVINO plugin configuration.
The function is used to set configuration for OpenVINO plugin. Some parameters
can be different for each plugin. Please follow https://docs.openvinotoolkit.org/latest/index.html
to check information about specific plugin.
@param cfg Map of pairs: (config parameter name, config parameter value).
@return reference to this parameter structure.
*/
Params& pluginConfig(const IEConfig& cfg) {
desc.config = cfg;
return *this;
}
/** @overload
Function with a rvalue parameter.
@param cfg rvalue map of pairs: (config parameter name, config parameter value).
@return reference to this parameter structure.
*/
Params& pluginConfig(IEConfig&& cfg) {
desc.config = std::move(cfg);
return *this;
}
/** @brief Specifies configuration for RemoteContext in InferenceEngine.
When RemoteContext is configured the backend imports the networks using the context.
It also expects cv::MediaFrames to be actually remote, to operate with blobs via the context.
@param ctx_cfg cv::util::any value which holds InferenceEngine::ParamMap.
@return reference to this parameter structure.
*/
Params& cfgContextParams(const cv::util::any& ctx_cfg) {
desc.context_config = ctx_cfg;
return *this;
}
/** @overload
Function with an rvalue parameter.
@param ctx_cfg cv::util::any value which holds InferenceEngine::ParamMap.
@return reference to this parameter structure.
*/
Params& cfgContextParams(cv::util::any&& ctx_cfg) {
desc.context_config = std::move(ctx_cfg);
return *this;
}
/** @brief Specifies number of asynchronous inference requests.
@param nireq Number of inference asynchronous requests.
@return reference to this parameter structure.
*/
Params& cfgNumRequests(size_t nireq) {
GAPI_Assert(nireq > 0 && "Number of infer requests must be greater than zero!");
desc.nireq = nireq;
return *this;
}
/** @brief Specifies new input shapes for the network inputs.
The function is used to specify new input shapes for the network inputs.
Follow https://docs.openvinotoolkit.org/latest/classInferenceEngine_1_1networkNetwork.html
for additional information.
@param reshape_table Map of pairs: name of corresponding data and its dimension.
@return reference to this parameter structure.
*/
Params<Net>& cfgInputReshape(const std::map<std::string, std::vector<std::size_t>>& reshape_table) {
desc.reshape_table = reshape_table;
return *this;
}
/** @overload */
Params<Net>& cfgInputReshape(std::map<std::string, std::vector<std::size_t>>&& reshape_table) {
desc.reshape_table = std::move(reshape_table);
return *this;
}
/** @overload
@param layer_name Name of layer.
@param layer_dims New dimensions for this layer.
@return reference to this parameter structure.
*/
Params<Net>& cfgInputReshape(const std::string& layer_name, const std::vector<size_t>& layer_dims) {
desc.reshape_table.emplace(layer_name, layer_dims);
return *this;
}
/** @overload */
Params<Net>& cfgInputReshape(std::string&& layer_name, std::vector<size_t>&& layer_dims) {
desc.reshape_table.emplace(layer_name, layer_dims);
return *this;
}
/** @overload
@param layer_names set of names of network layers that will be used for network reshape.
@return reference to this parameter structure.
*/
Params<Net>& cfgInputReshape(const std::unordered_set<std::string>& layer_names) {
desc.layer_names_to_reshape = layer_names;
return *this;
}
/** @overload
@param layer_names rvalue set of names of the layers that will be reshaped automatically
to fit the input image size.
@return reference to this parameter structure.
*/
Params<Net>& cfgInputReshape(std::unordered_set<std::string>&& layer_names) {
desc.layer_names_to_reshape = std::move(layer_names);
return *this;
}
/** @brief Specifies the inference batch size.
The function is used to specify inference batch size.
Follow https://docs.openvinotoolkit.org/latest/classInferenceEngine_1_1CNNNetwork.html#a8e9d19270a48aab50cb5b1c43eecb8e9 for additional information
@param size batch size which will be used.
@return reference to this parameter structure.
*/
Params<Net>& cfgBatchSize(const size_t size) {
desc.batch_size = cv::util::make_optional(size);
return *this;
}
Params<Net>& cfgPreprocessingParams(const cv::gapi::wip::onevpl::Device &device,
const cv::gapi::wip::onevpl::Context &ctx) {
desc.vpl_preproc_device = cv::util::make_optional(device);
desc.vpl_preproc_ctx = cv::util::make_optional(ctx);
return *this;
}
/** @brief Specifies which api will be used to run inference.
The function is used to specify mode for OpenVINO inference.
OpenVINO has two options to run inference:
1. Asynchronous (using StartAsync: https://docs.openvino.ai/latest/classInferenceEngine_1_1InferRequest.html#doxid-class-inference-engine-1-1-infer-request-1a405293e8423d82a5b45f642a3bef0d24)
2. Synchronous (using Infer: https://docs.openvino.ai/latest/classInferenceEngine_1_1InferRequest.html#doxid-class-inference-engine-1-1-infer-request-1a3391ce30894abde730523e9ca9371ce8)
By default asynchronous mode is used.
@param mode Inference mode which will be used.
@return reference to this parameter structure.
*/
Params<Net>& cfgInferMode(InferMode mode) {
desc.mode = mode;
return *this;
}
/** @brief Specifies the output precision for model.
The function is used to set an output precision for model.
@param precision Precision in OpenCV format (CV_8U, CV_32F, ...)
will be applied to all output layers.
@return reference to this parameter structure.
*/
Params<Net>& cfgOutputPrecision(detail::ParamDesc::PrecisionT precision) {
desc.output_precision = precision;
return *this;
}
/** @overload
@param precision_map Map of pairs: name of corresponding output layer
and its precision in OpenCV format (CV_8U, CV_32F, ...)
@return reference to this parameter structure.
*/
Params<Net>&
cfgOutputPrecision(detail::ParamDesc::PrecisionMapT precision_map) {
desc.output_precision = precision_map;
return *this;
}
/** @brief Specifies the input layout for model.
The function is used to set an input layout for model.
@param layout Layout in string representation ("NCHW", "NHWC", etc)
will be applied to all input layers.
@return reference to this parameter structure.
*/
Params<Net>& cfgInputLayout(std::string layout) {
desc.input_layout = std::move(layout);
return *this;
}
/** @overload
@param layout_map Map of pairs: name of corresponding input layer
and its layout in string representation ("NCHW", "NHWC", etc)
@return reference to this parameter structure.
*/
Params<Net>&
cfgInputLayout(detail::AttrMap<std::string> layout_map) {
desc.input_layout = std::move(layout_map);
return *this;
}
/** @brief Specifies the output layout for model.
The function is used to set an output layout for model.
@param layout Layout in string representation ("NCHW", "NHWC", etc)
will be applied to all output layers.
@return reference to this parameter structure.
*/
Params<Net>& cfgOutputLayout(std::string layout) {
desc.output_layout = std::move(layout);
return *this;
}
/** @overload
@param layout_map Map of pairs: name of corresponding output layer
and its layout in string representation ("NCHW", "NHWC", etc)
@return reference to this parameter structure.
*/
Params<Net>&
cfgOutputLayout(detail::AttrMap<std::string> layout_map) {
desc.output_layout = std::move(layout_map);
return *this;
}
/** @brief Specifies resize interpolation algorithm.
*
The function is used to configure resize preprocessing for input layer.
@param interpolation Resize interpolation algorithm.
Supported algorithms: #INTER_LINEAR, #INTER_AREA.
@return reference to this parameter structure.
*/
Params<Net>& cfgResize(int interpolation) {
desc.interpolation = interpolation;
return *this;
}
/** @overload
@param interpolation Map of pairs: name of corresponding input layer
and its resize algorithm.
@return reference to this parameter structure.
*/
Params<Net>& cfgResize(detail::AttrMap<int> interpolation) {
desc.interpolation = std::move(interpolation);
return *this;
}
// BEGIN(G-API's network parametrization API)
GBackend backend() const { return cv::gapi::ie::backend(); }
std::string tag() const { return Net::tag(); }
cv::util::any params() const { return { desc }; }
// END(G-API's network parametrization API)
protected:
detail::ParamDesc desc;
};
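A hedged configuration sketch for the typed Params above, using the hypothetical AgeGender network from the earlier G_API_NET example; the layer names, file paths and device string are placeholders:

void make_ie_params()
{
    auto params = cv::gapi::ie::Params<AgeGender>("age-gender.xml",   // topology IR
                                                  "age-gender.bin",   // weights
                                                  "CPU")              // target device
        .cfgOutputLayers({"age_conv3", "prob"})   // bind the two G_API_NET outputs to layers
        .cfgNumRequests(4)                        // four asynchronous infer requests
        .cfgInputLayout("NCHW");

    auto nets = cv::gapi::networks(params);       // then passed via cv::compile_args(nets)
    (void)nets;
}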
/*
* @brief This structure provides functions for generic network type that
* fill inference parameters.
* @see struct Generic
*/
template<>
class Params<cv::gapi::Generic> {
public:
/** @brief Class constructor.
Constructs Params based on model information and sets default values for other
inference description parameters. Model is loaded and compiled using OpenVINO Toolkit.
@param tag string tag of the network for which these parameters are intended.
@param model path to topology IR (.xml file).
@param weights path to weights (.bin file).
@param device target device to use.
*/
Params(const std::string &tag,
const std::string &model,
const std::string &weights,
const std::string &device)
: desc{ model, weights, device, {}, {}, {}, 0u, 0u,
detail::ParamDesc::Kind::Load, true, {}, {}, {}, 1u,
{}, {}, {}, {}, InferMode::Async, {}, {}, {}, {} },
m_tag(tag) {
}
/** @overload
This constructor is for pre-compiled networks. The model is imported from a pre-compiled
blob.
@param tag string tag of the network for which these parameters are intended.
@param model path to model.
@param device target device to use.
*/
Params(const std::string &tag,
const std::string &model,
const std::string &device)
: desc{ model, {}, device, {}, {}, {}, 0u, 0u,
detail::ParamDesc::Kind::Import, true, {}, {}, {}, 1u,
{}, {}, {}, {}, InferMode::Async, {}, {}, {}, {} },
m_tag(tag) {
}
/** @see ie::Params::pluginConfig. */
Params& pluginConfig(const IEConfig& cfg) {
desc.config = cfg;
return *this;
}
/** @overload */
Params& pluginConfig(IEConfig&& cfg) {
desc.config = std::move(cfg);
return *this;
}
/** @see ie::Params::constInput. */
Params& constInput(const std::string &layer_name,
const cv::Mat &data,
TraitAs hint = TraitAs::TENSOR) {
desc.const_inputs[layer_name] = {data, hint};
return *this;
}
/** @see ie::Params::cfgNumRequests. */
Params& cfgNumRequests(size_t nireq) {
GAPI_Assert(nireq > 0 && "Number of infer requests must be greater than zero!");
desc.nireq = nireq;
return *this;
}
/** @see ie::Params::cfgInputReshape */
Params& cfgInputReshape(const std::map<std::string, std::vector<std::size_t>>&reshape_table) {
desc.reshape_table = reshape_table;
return *this;
}
/** @overload */
Params& cfgInputReshape(std::map<std::string, std::vector<std::size_t>> && reshape_table) {
desc.reshape_table = std::move(reshape_table);
return *this;
}
/** @overload */
Params& cfgInputReshape(std::string && layer_name, std::vector<size_t> && layer_dims) {
desc.reshape_table.emplace(layer_name, layer_dims);
return *this;
}
/** @overload */
Params& cfgInputReshape(const std::string & layer_name, const std::vector<size_t>&layer_dims) {
desc.reshape_table.emplace(layer_name, layer_dims);
return *this;
}
/** @overload */
Params& cfgInputReshape(std::unordered_set<std::string> && layer_names) {
desc.layer_names_to_reshape = std::move(layer_names);
return *this;
}
/** @overload */
Params& cfgInputReshape(const std::unordered_set<std::string>&layer_names) {
desc.layer_names_to_reshape = layer_names;
return *this;
}
/** @see ie::Params::cfgBatchSize */
Params& cfgBatchSize(const size_t size) {
desc.batch_size = cv::util::make_optional(size);
return *this;
}
/** @see ie::Params::cfgInferMode */
Params& cfgInferMode(InferMode mode) {
desc.mode = mode;
return *this;
}
/** @see ie::Params::cfgOutputPrecision */
Params& cfgOutputPrecision(detail::ParamDesc::PrecisionT precision) {
desc.output_precision = precision;
return *this;
}
/** @overload */
Params&
cfgOutputPrecision(detail::ParamDesc::PrecisionMapT precision_map) {
desc.output_precision = precision_map;
return *this;
}
/** @see ie::Params::cfgInputLayout */
Params& cfgInputLayout(std::string layout) {
desc.input_layout = std::move(layout);
return *this;
}
/** @overload */
Params&
cfgInputLayout(detail::AttrMap<std::string> layout_map) {
desc.input_layout = std::move(layout_map);
return *this;
}
/** @see ie::Params::cfgOutputLayout */
Params& cfgOutputLayout(std::string layout) {
desc.output_layout = std::move(layout);
return *this;
}
/** @overload */
Params&
cfgOutputLayout(detail::AttrMap<std::string> layout_map) {
desc.output_layout = std::move(layout_map);
return *this;
}
/** @see ie::Params::cfgResize */
Params& cfgResize(int interpolation) {
desc.interpolation = interpolation;
return *this;
}
/** @overload */
Params& cfgResize(detail::AttrMap<int> interpolation) {
desc.interpolation = std::move(interpolation);
return *this;
}
// BEGIN(G-API's network parametrization API)
GBackend backend() const { return cv::gapi::ie::backend(); }
std::string tag() const { return m_tag; }
cv::util::any params() const { return { desc }; }
// END(G-API's network parametrization API)
protected:
detail::ParamDesc desc;
std::string m_tag;
};
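The generic specialization above pairs with cv::gapi::infer<Generic>; a sketch follows, where the tag must match the one used in the graph ("my-net-tag" in the earlier generic-infer sketch) and the model paths are placeholders:

void make_generic_ie_params()
{
    cv::gapi::ie::Params<cv::gapi::Generic> params("my-net-tag",
                                                   "model.xml", "model.bin", "CPU");
    auto nets = cv::gapi::networks(params.cfgNumRequests(2)
                                         .cfgInferMode(cv::gapi::ie::InferMode::Sync));
    (void)nets;
}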
} // namespace ie
} // namespace gapi
} // namespace cv
#endif // OPENCV_GAPI_INFER_IE_HPP

View File

@ -1,759 +0,0 @@
// This file is part of OpenCV project.
// It is subject to the license terms in the LICENSE file found in the top-level directory
// of this distribution and at http://opencv.org/license.html.
//
// Copyright (C) 2020-2021 Intel Corporation
#ifndef OPENCV_GAPI_INFER_ONNX_HPP
#define OPENCV_GAPI_INFER_ONNX_HPP
#include <unordered_map>
#include <string>
#include <array>
#include <tuple> // tuple, tuple_size
#include <map>
#include <opencv2/gapi/opencv_includes.hpp>
#include <opencv2/gapi/util/any.hpp>
#include <opencv2/gapi/util/optional.hpp>
#include <opencv2/core/cvdef.h> // GAPI_EXPORTS
#include <opencv2/gapi/gkernel.hpp> // GKernelPackage
#include <opencv2/gapi/infer.hpp> // Generic
namespace cv {
namespace gapi {
/**
* @brief This namespace contains G-API ONNX Runtime backend functions, structures, and symbols.
*/
namespace onnx {
/**
* @brief This namespace contains Execution Providers structures for G-API ONNX Runtime backend.
*/
namespace ep {
/**
* @brief This structure provides functions
* that fill inference options for ONNX CoreML Execution Provider.
* Please follow https://onnxruntime.ai/docs/execution-providers/CoreML-ExecutionProvider.html#coreml-execution-provider
*/
struct GAPI_EXPORTS_W_SIMPLE CoreML {
/** @brief Class constructor.
Constructs CoreML parameters.
*/
GAPI_WRAP
CoreML() = default;
/** @brief Limit CoreML Execution Provider to run on CPU only.
This function is used to limit CoreML to run on CPU only.
Please follow: https://onnxruntime.ai/docs/execution-providers/CoreML-ExecutionProvider.html#coreml_flag_use_cpu_only
@return reference to this parameter structure.
*/
GAPI_WRAP
CoreML& cfgUseCPUOnly() {
use_cpu_only = true;
return *this;
}
/** @brief Enable CoreML EP to run on a subgraph in the body of a control flow ONNX operator (i.e. a Loop, Scan or If operator).
This function is used to enable CoreML EP to run on
a subgraph of a control flow of ONNX operation.
Please follow: https://onnxruntime.ai/docs/execution-providers/CoreML-ExecutionProvider.html#coreml_flag_enable_on_subgraph
@return reference to this parameter structure.
*/
GAPI_WRAP
CoreML& cfgEnableOnSubgraph() {
enable_on_subgraph = true;
return *this;
}
/** @brief Enable CoreML EP to run only on Apple Neural Engine.
This function is used to enable CoreML EP to run only on Apple Neural Engine.
Please follow: https://onnxruntime.ai/docs/execution-providers/CoreML-ExecutionProvider.html#coreml_flag_only_enable_device_with_ane
@return reference to this parameter structure.
*/
GAPI_WRAP
CoreML& cfgEnableOnlyNeuralEngine() {
enable_only_ane = true;
return *this;
}
bool use_cpu_only = false;
bool enable_on_subgraph = false;
bool enable_only_ane = false;
};
/**
* @brief This structure provides functions
* that fill inference options for CUDA Execution Provider.
* Please follow https://onnxruntime.ai/docs/execution-providers/CUDA-ExecutionProvider.html#cuda-execution-provider
*/
struct GAPI_EXPORTS_W_SIMPLE CUDA {
// NB: Used from python.
/// @private -- Exclude this constructor from OpenCV documentation
GAPI_WRAP
CUDA() = default;
/** @brief Class constructor.
Constructs CUDA parameters based on device type information.
@param dev_id Target device id to use.
*/
GAPI_WRAP
explicit CUDA(const int dev_id)
: device_id(dev_id) {
}
int device_id;
};
/**
* @brief This structure provides functions
* that fill inference options for TensorRT Execution Provider.
* Please follow https://onnxruntime.ai/docs/execution-providers/TensorRT-ExecutionProvider.html#tensorrt-execution-provider
*/
struct GAPI_EXPORTS_W_SIMPLE TensorRT {
// NB: Used from python.
/// @private -- Exclude this constructor from OpenCV documentation
GAPI_WRAP
TensorRT() = default;
/** @brief Class constructor.
Constructs TensorRT parameters based on device type information.
@param dev_id Target device id to use.
*/
GAPI_WRAP
explicit TensorRT(const int dev_id)
: device_id(dev_id) {
}
int device_id;
};
/**
* @brief This structure provides functions
* that fill inference options for ONNX OpenVINO Execution Provider.
* Please follow https://onnxruntime.ai/docs/execution-providers/OpenVINO-ExecutionProvider.html#summary-of-options
*/
struct GAPI_EXPORTS_W_SIMPLE OpenVINO {
// NB: Used from python.
/// @private -- Exclude this constructor from OpenCV documentation
GAPI_WRAP
OpenVINO() = default;
/** @brief Class constructor.
Constructs OpenVINO parameters based on device type information.
@param dev_type Target device type to use. ("CPU", "GPU", "GPU.0" etc)
*/
GAPI_WRAP
explicit OpenVINO(const std::string &dev_type)
: device_type(dev_type) {
}
/** @brief Class constructor.
Constructs OpenVINO parameters based on map of options passed.
* @param params A map of parameter names and their corresponding string values.
*/
GAPI_WRAP
explicit OpenVINO(const std::map<std::string, std::string>& params)
: params_map(params) {
}
/** @brief Specifies OpenVINO Execution Provider cache dir.
This function is used to explicitly specify the path to save and load
the blobs enabling model caching feature.
@param dir Path to the directory that will be used as a cache.
@return reference to this parameter structure.
*/
GAPI_WRAP
OpenVINO& cfgCacheDir(const std::string &dir) {
if (!params_map.empty()) {
cv::util::throw_error(std::logic_error("ep::OpenVINO cannot be changed if"
"created from the parameters map."));
}
cache_dir = dir;
return *this;
}
/** @brief Specifies OpenVINO Execution Provider number of threads.
This function is used to override the accelerator default value
of number of threads with this value at runtime.
@param nthreads Number of threads.
@return reference to this parameter structure.
*/
GAPI_WRAP
OpenVINO& cfgNumThreads(size_t nthreads) {
if (!params_map.empty()) {
cv::util::throw_error(std::logic_error("ep::OpenVINO cannot be changed if"
"created from the parameters map."));
}
num_of_threads = nthreads;
return *this;
}
/** @brief Enables OpenVINO Execution Provider OpenCL throttling.
This function is used to enable OpenCL queue throttling for GPU devices
(reduces CPU utilization when using GPU).
@return reference to this parameter structure.
*/
GAPI_WRAP
OpenVINO& cfgEnableOpenCLThrottling() {
if (!params_map.empty()) {
cv::util::throw_error(std::logic_error("ep::OpenVINO cannot be changed if"
"created from the parameters map."));
}
enable_opencl_throttling = true;
return *this;
}
/** @brief Enables OpenVINO Execution Provider dynamic shapes.
This function is used to enable work with dynamically shaped models,
whose shapes are set at run time based on the shape of the infer
input image/data (on CPU).
@return reference to this parameter structure.
*/
GAPI_WRAP
OpenVINO& cfgEnableDynamicShapes() {
if (!params_map.empty()) {
cv::util::throw_error(std::logic_error("ep::OpenVINO cannot be changed if"
"created from the parameters map."));
}
enable_dynamic_shapes = true;
return *this;
}
std::string device_type;
std::string cache_dir;
size_t num_of_threads = 0;
bool enable_opencl_throttling = false;
bool enable_dynamic_shapes = false;
std::map<std::string, std::string> params_map;
};
/**
* @brief This structure provides functions
* that fill inference options for ONNX DirectML Execution Provider.
* Please follow https://onnxruntime.ai/docs/execution-providers/DirectML-ExecutionProvider.html#directml-execution-provider
*/
class GAPI_EXPORTS_W_SIMPLE DirectML {
public:
// NB: Used from python.
/// @private -- Exclude this constructor from OpenCV documentation
GAPI_WRAP
DirectML() = default;
/** @brief Class constructor.
Constructs DirectML parameters based on device id.
@param device_id Target device id to use. ("0", "1", etc)
*/
GAPI_WRAP
explicit DirectML(const int device_id) : ddesc(device_id) { };
/** @brief Class constructor.
Constructs DirectML parameters based on adapter name.
@param adapter_name Target adapter_name to use.
*/
GAPI_WRAP
explicit DirectML(const std::string &adapter_name) : ddesc(adapter_name) { };
using DeviceDesc = cv::util::variant<int, std::string>;
DeviceDesc ddesc;
};
using EP = cv::util::variant< cv::util::monostate
, OpenVINO
, DirectML
, CoreML
, CUDA
, TensorRT>;
} // namespace ep
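A sketch of composing an execution provider: ep::OpenVINO is configured first and then attached to the network parameters. Attaching assumes the typed cv::gapi::onnx::Params exposes a cfgAddExecutionProvider setter mirroring the PyParams bindings shown earlier; the device, cache directory and model path are placeholders.

void make_onnx_params_with_ep()
{
    auto ov_ep = cv::gapi::onnx::ep::OpenVINO("GPU")
                     .cfgCacheDir("/tmp/ov_cache")
                     .cfgNumThreads(4);

    // cfgAddExecutionProvider is assumed to mirror the PyParams method above.
    auto params = cv::gapi::onnx::Params<FaceDetector>("face-detector.onnx")
                      .cfgAddExecutionProvider(std::move(ov_ep));
    (void)params;
}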
GAPI_EXPORTS cv::gapi::GBackend backend();
enum class TraitAs: int {
TENSOR, //!< G-API treats an associated cv::Mat as a raw tensor
// and passes dimensions as-is
IMAGE //!< G-API treats an associated cv::Mat as an image so
// it creates an "image" blob (NCHW/NHWC, etc)
};
using PostProc = std::function<void(const std::unordered_map<std::string, cv::Mat> &,
std::unordered_map<std::string, cv::Mat> &)>;
namespace detail {
/**
* @brief This structure contains description of inference parameters
* which is specific to ONNX models.
*/
struct ParamDesc {
std::string model_path; //!< Path to model.
// NB: num_* may differ from the topology's real input/output port numbers
// (e.g. topology's partial execution)
std::size_t num_in; //!< How many inputs are defined in the operation
std::size_t num_out; //!< How many outputs are defined in the operation
// NB: Here order follows the `Net` API
std::vector<std::string> input_names; //!< Names of input network layers.
std::vector<std::string> output_names; //!< Names of output network layers.
using ConstInput = std::pair<cv::Mat, TraitAs>;
std::unordered_map<std::string, ConstInput> const_inputs; //!< Map with pair of name of network layer and ConstInput which will be associated with this.
std::vector<cv::Scalar> mean; //!< Mean values for preprocessing.
std::vector<cv::Scalar> stdev; //!< Standard deviation values for preprocessing.
std::vector<cv::GMatDesc> out_metas; //!< Out meta information about your output (type, dimension).
PostProc custom_post_proc; //!< Post processing function.
std::vector<bool> normalize; //!< Vector of bool values that enable or disable normalization of the input data.
std::vector<std::string> names_to_remap; //!< Names of output layers that will be processed in PostProc function.
bool is_generic;
// TODO: Needs to modify the rest of ParamDesc accordingly to support
// both generic and non-generic options without duplication
// (as it was done for the OV IE backend)
// These values are pushed into the respective vector<> fields above
// when the generic infer parameters are unpacked (see GONNXBackendImpl::unpackKernel)
std::unordered_map<std::string, std::pair<cv::Scalar, cv::Scalar> > generic_mstd;
std::unordered_map<std::string, bool> generic_norm;
std::map<std::string, std::string> session_options;
std::vector<cv::gapi::onnx::ep::EP> execution_providers;
bool disable_mem_pattern;
cv::util::optional<int> opt_level;
};
} // namespace detail
template<typename Net>
struct PortCfg {
using In = std::array
< std::string
, std::tuple_size<typename Net::InArgs>::value >;
using Out = std::array
< std::string
, std::tuple_size<typename Net::OutArgs>::value >;
using NormCoefs = std::array
< cv::Scalar
, std::tuple_size<typename Net::InArgs>::value >;
using Normalize = std::array
< bool
, std::tuple_size<typename Net::InArgs>::value >;
};
/**
* Contains the description of inference parameters and a kit of functions that
* fill these parameters.
*/
template<typename Net> class Params {
public:
/** @brief Class constructor.
Constructs Params based on model information and sets default values for other
inference description parameters.
@param model Path to model (.onnx file).
*/
Params(const std::string &model) {
desc.model_path = model;
desc.num_in = std::tuple_size<typename Net::InArgs>::value;
desc.num_out = std::tuple_size<typename Net::OutArgs>::value;
desc.is_generic = false;
desc.disable_mem_pattern = false;
}
/** @brief Specifies sequence of network input layers names for inference.
The function is used to associate data of graph inputs with input layers of
the network topology. The number of names has to match the number of network inputs.
If a network has only one input layer, there is no need to call this function:
the layer is associated with the input automatically, although calling it
explicitly is still allowed.
@param layer_names std::array<std::string, N> where N is the number of inputs
as defined in the @ref G_API_NET. Contains names of input layers.
@return the reference on modified object.
*/
Params<Net>& cfgInputLayers(const typename PortCfg<Net>::In &layer_names) {
desc.input_names.assign(layer_names.begin(), layer_names.end());
return *this;
}
/** @brief Specifies sequence of output layers names for inference.
The function is used to associate data of graph outputs with output layers of
the network topology. If a network has only one output layer, there is no need to
call this function: the layer is associated with the output automatically, although
calling it explicitly is still allowed. The number of names has to match the number
of network outputs; alternatively, you can define your own outputs, but in that case
you additionally have to use the @ref cfgPostProc function.
@param layer_names std::array<std::string, N> where N is the number of outputs
as defined in the @ref G_API_NET. Contains names of output layers.
@return the reference on modified object.
*/
Params<Net>& cfgOutputLayers(const typename PortCfg<Net>::Out &layer_names) {
desc.output_names.assign(layer_names.begin(), layer_names.end());
return *this;
}
/** @brief Sets a constant input.
The function is used to set a constant input. This input has to be
a prepared tensor, since preprocessing is disabled for this case. You should
provide the name of the network layer which will receive the data.
@param layer_name Name of network layer.
@param data cv::Mat that contains data which will be associated with network layer.
@param hint Type of input (TENSOR).
@return the reference on modified object.
*/
Params<Net>& constInput(const std::string &layer_name,
const cv::Mat &data,
TraitAs hint = TraitAs::TENSOR) {
desc.const_inputs[layer_name] = {data, hint};
return *this;
}
/** @brief Specifies mean value and standard deviation for preprocessing.
The function is used to set mean values and standard deviations for the preprocessing
of input data.
@param m std::array<cv::Scalar, N> where N is the number of inputs
as defined in the @ref G_API_NET. Contains mean values.
@param s std::array<cv::Scalar, N> where N is the number of inputs
as defined in the @ref G_API_NET. Contains standard deviation values.
@return the reference on modified object.
*/
Params<Net>& cfgMeanStd(const typename PortCfg<Net>::NormCoefs &m,
const typename PortCfg<Net>::NormCoefs &s) {
desc.mean.assign(m.begin(), m.end());
desc.stdev.assign(s.begin(), s.end());
return *this;
}
/** @brief Configures graph output and provides the post processing function from user.
The function is used when you work with networks with dynamic outputs.
Since the dimensions of the inference result can't be known in advance, you need
to provide them for the construction of the graph output; these dimensions can
differ from the dimensions of the actual inference result. Therefore you have to
provide a @ref PostProc function that takes information from the inference result
and fills the output constructed from the dimensions given in out_metas.
@param out_metas Out meta information about your output (type, dimension).
@param remap_function Post-processing function, which has two parameters. The first is the ONNX
result, the second is the graph output. Both parameters are std::map objects that contain pairs of
a layer's name and a cv::Mat.
@return the reference on modified object.
*/
Params<Net>& cfgPostProc(const std::vector<cv::GMatDesc> &out_metas,
const PostProc &remap_function) {
desc.out_metas = out_metas;
desc.custom_post_proc = remap_function;
return *this;
}
/** @overload
Overload with rvalue parameters.
@param out_metas rvalue out meta information about your output (type, dimension).
@param remap_function rvalue post-processing function, which has two parameters. The first is the ONNX
result, the second is the graph output. Both parameters are std::map objects that contain pairs of
a layer's name and a cv::Mat.
@return the reference on modified object.
*/
Params<Net>& cfgPostProc(std::vector<cv::GMatDesc> &&out_metas,
PostProc &&remap_function) {
desc.out_metas = std::move(out_metas);
desc.custom_post_proc = std::move(remap_function);
return *this;
}
/** @overload
This overload has the additional parameter names_to_remap. It provides
information about the output layers which will be used for inference and in the
post-processing function.
@param out_metas Out meta information.
@param remap_function Post-processing function.
@param names_to_remap Names of output layers. The network's inference will
be done on these layers, and the inference result will be processed in the
post-processing function using these names.
@return the reference on modified object.
*/
Params<Net>& cfgPostProc(const std::vector<cv::GMatDesc> &out_metas,
const PostProc &remap_function,
const std::vector<std::string> &names_to_remap) {
desc.out_metas = out_metas;
desc.custom_post_proc = remap_function;
desc.names_to_remap = names_to_remap;
return *this;
}
/** @overload
Overload with rvalue parameters and the additional parameter names_to_remap.
@param out_metas rvalue out meta information.
@param remap_function rvalue post-processing function.
@param names_to_remap rvalue names of output layers. The network's inference will
be done on these layers, and the inference result will be processed in the
post-processing function using these names.
@return the reference on modified object.
*/
Params<Net>& cfgPostProc(std::vector<cv::GMatDesc> &&out_metas,
PostProc &&remap_function,
std::vector<std::string> &&names_to_remap) {
desc.out_metas = std::move(out_metas);
desc.custom_post_proc = std::move(remap_function);
desc.names_to_remap = std::move(names_to_remap);
return *this;
}
/** @brief Specifies normalization parameters for preprocessing.
The function is used to enable or disable normalization of input data during preprocessing.
@param normalizations std::array<bool, N> where N is the number of inputs
as defined in the @ref G_API_NET. Contains bool values that enable or disable
normalization of input data.
@return the reference on modified object.
*/
Params<Net>& cfgNormalize(const typename PortCfg<Net>::Normalize &normalizations) {
desc.normalize.assign(normalizations.begin(), normalizations.end());
return *this;
}
/** @brief Adds execution provider for runtime.
The function is used to add ONNX Runtime OpenVINO Execution Provider options.
@param ep OpenVINO Execution Provider options.
@see cv::gapi::onnx::ep::OpenVINO.
@return the reference on modified object.
*/
Params<Net>& cfgAddExecutionProvider(ep::OpenVINO&& ep) {
desc.execution_providers.emplace_back(std::move(ep));
return *this;
}
/** @brief Adds execution provider for runtime.
The function is used to add ONNX Runtime DirectML Execution Provider options.
@param ep DirectML Execution Provider options.
@see cv::gapi::onnx::ep::DirectML.
@return the reference on modified object.
*/
Params<Net>& cfgAddExecutionProvider(ep::DirectML&& ep) {
desc.execution_providers.emplace_back(std::move(ep));
return *this;
}
/** @brief Adds execution provider for runtime.
The function is used to add ONNX Runtime CoreML Execution Provider options.
@param ep CoreML Execution Provider options.
@see cv::gapi::onnx::ep::CoreML.
@return the reference on modified object.
*/
Params<Net>& cfgAddExecutionProvider(ep::CoreML&& ep) {
desc.execution_providers.emplace_back(std::move(ep));
return *this;
}
/** @brief Adds execution provider for runtime.
The function is used to add ONNX Runtime CUDA Execution Provider options.
@param ep CUDA Execution Provider options.
@see cv::gapi::onnx::ep::CUDA.
@return the reference on modified object.
*/
Params<Net>& cfgAddExecutionProvider(ep::CUDA&& ep) {
desc.execution_providers.emplace_back(std::move(ep));
return *this;
}
/** @brief Adds execution provider for runtime.
The function is used to add ONNX Runtime TensorRT Execution Provider options.
@param ep TensorRT Execution Provider options.
@see cv::gapi::onnx::ep::TensorRT.
@return the reference on modified object.
*/
Params<Net>& cfgAddExecutionProvider(ep::TensorRT&& ep) {
desc.execution_providers.emplace_back(std::move(ep));
return *this;
}
/** @brief Disables the memory pattern optimization.
@return the reference on modified object.
*/
Params<Net>& cfgDisableMemPattern() {
desc.disable_mem_pattern = true;
return *this;
}
/** @brief Configures session options for ONNX Runtime.
This function is used to set various session options for the ONNX Runtime
session by accepting a map of key-value pairs.
@param options A map of session options to be applied to the ONNX Runtime session.
@return the reference on modified object.
*/
Params<Net>& cfgSessionOptions(const std::map<std::string, std::string>& options) {
desc.session_options.insert(options.begin(), options.end());
return *this;
}
/** @brief Configures optimization level for ONNX Runtime.
@param opt_level [optimization level]: Valid values are 0 (disable), 1 (basic), 2 (extended), 99 (all).
Please see onnxruntime_c_api.h (enum GraphOptimizationLevel) for the full list of all optimization levels.
@return the reference on modified object.
*/
Params<Net>& cfgOptLevel(const int opt_level) {
desc.opt_level = cv::util::make_optional(opt_level);
return *this;
}
// BEGIN(G-API's network parametrization API)
GBackend backend() const { return cv::gapi::onnx::backend(); }
std::string tag() const { return Net::tag(); }
cv::util::any params() const { return { desc }; }
// END(G-API's network parametrization API)
protected:
detail::ParamDesc desc;
};
/*
* @brief This structure provides functions for the generic network type that
* fill inference parameters.
* @see struct Generic
*/
template<>
class Params<cv::gapi::Generic> {
public:
/** @brief Class constructor.
Constructs Params based on input information and sets default values for other
inference description parameters.
@param tag string tag of the network for which these parameters are intended.
@param model_path path to model file (.onnx file).
*/
Params(const std::string& tag, const std::string& model_path)
: desc{ model_path, 0u, 0u, {}, {}, {}, {}, {}, {}, {}, {}, {}, true, {}, {}, {}, {}, false, {} }, m_tag(tag) {}
/** @see onnx::Params::cfgMeanStd. */
void cfgMeanStdDev(const std::string &layer,
const cv::Scalar &m,
const cv::Scalar &s) {
desc.generic_mstd[layer] = std::make_pair(m, s);
}
/** @see onnx::Params::cfgNormalize. */
void cfgNormalize(const std::string &layer, bool flag) {
desc.generic_norm[layer] = flag;
}
/** @see onnx::Params::cfgAddExecutionProvider. */
void cfgAddExecutionProvider(ep::OpenVINO&& ep) {
desc.execution_providers.emplace_back(std::move(ep));
}
/** @see onnx::Params::cfgAddExecutionProvider. */
void cfgAddExecutionProvider(ep::DirectML&& ep) {
desc.execution_providers.emplace_back(std::move(ep));
}
/** @see onnx::Params::cfgAddExecutionProvider. */
void cfgAddExecutionProvider(ep::CoreML&& ep) {
desc.execution_providers.emplace_back(std::move(ep));
}
/** @see onnx::Params::cfgAddExecutionProvider. */
void cfgAddExecutionProvider(ep::CUDA&& ep) {
desc.execution_providers.emplace_back(std::move(ep));
}
/** @see onnx::Params::cfgAddExecutionProvider. */
void cfgAddExecutionProvider(ep::TensorRT&& ep) {
desc.execution_providers.emplace_back(std::move(ep));
}
/** @see onnx::Params::cfgDisableMemPattern. */
void cfgDisableMemPattern() {
desc.disable_mem_pattern = true;
}
/** @see onnx::Params::cfgSessionOptions. */
void cfgSessionOptions(const std::map<std::string, std::string>& options) {
desc.session_options.insert(options.begin(), options.end());
}
/** @see onnx::Params::cfgOptLevel. */
void cfgOptLevel(const int opt_level) {
desc.opt_level = cv::util::make_optional(opt_level);
}
// BEGIN(G-API's network parametrization API)
GBackend backend() const { return cv::gapi::onnx::backend(); }
std::string tag() const { return m_tag; }
cv::util::any params() const { return { desc }; }
// END(G-API's network parametrization API)
protected:
detail::ParamDesc desc;
std::string m_tag;
};
} // namespace onnx
} // namespace gapi
} // namespace cv
#endif // OPENCV_GAPI_INFER_HPP
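
For reference, a minimal usage sketch of the ONNX backend parameters declared above. The network type MyNet, the model path, and the preprocessing constants are illustrative assumptions; a single-input/single-output network is assumed, and the resulting object is passed to the graph via cv::gapi::networks() from opencv2/gapi/infer.hpp.

G_API_NET(MyNet, <cv::GMat(cv::GMat)>, "sample.my-net");    // hypothetical network tag

cv::gapi::onnx::Params<MyNet> onnx_params{"model.onnx"};    // illustrative model path
onnx_params
    .cfgMeanStd({{ cv::Scalar(0.485, 0.456, 0.406) }},      // per-input mean
                {{ cv::Scalar(0.229, 0.224, 0.225) }})      // per-input standard deviation
    .cfgNormalize({{ true }})                               // normalize the input data
    .cfgAddExecutionProvider(cv::gapi::onnx::ep::DirectML(0))  // run on DirectML adapter #0
    .cfgOptLevel(2);                                        // extended graph optimizations

auto nets = cv::gapi::networks(onnx_params);                // pass via cv::compile_args(nets)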

View File

@ -1,709 +0,0 @@
// This file is part of OpenCV project.
// It is subject to the license terms in the LICENSE file found in the top-level directory
// of this distribution and at http://opencv.org/license.html.
//
// Copyright (C) 2023 Intel Corporation
#ifndef OPENCV_GAPI_INFER_OV_HPP
#define OPENCV_GAPI_INFER_OV_HPP
#include <string>
#include <opencv2/gapi/util/any.hpp>
#include <opencv2/gapi/own/exports.hpp> // GAPI_EXPORTS
#include <opencv2/gapi/gkernel.hpp> // GKernelType[M], GBackend
#include <opencv2/gapi/infer.hpp> // Generic
#include <map>
namespace cv {
namespace gapi {
/**
* @brief This namespace contains G-API OpenVINO 2.0 backend functions,
* structures, and symbols.
*/
namespace ov {
GAPI_EXPORTS cv::gapi::GBackend backend();
namespace detail {
template <typename T>
using AttrMap = std::map<std::string, T>;
// NB: This type is supposed to be used to hold in/out layers
// attributes such as precision, layout, shape etc.
//
// User can provide attributes either:
// 1. cv::util::monostate - No value specified explicitly.
// 2. Attr - value specified explicitly that should be broadcasted to all layers.
// 3. AttrMap[str->T] - map specifies value for particular layer.
template <typename Attr>
using LayerVariantAttr = cv::util::variant< cv::util::monostate
, AttrMap<Attr>
, Attr>;
struct ParamDesc {
struct Model {
Model(const std::string &model_path_,
const std::string &bin_path_)
: model_path(model_path_), bin_path(bin_path_) {
}
std::string model_path;
std::string bin_path;
LayerVariantAttr<std::string> input_tensor_layout;
LayerVariantAttr<std::string> input_model_layout;
LayerVariantAttr<std::string> output_tensor_layout;
LayerVariantAttr<std::string> output_model_layout;
LayerVariantAttr<int> output_tensor_precision;
LayerVariantAttr<std::vector<size_t>> new_shapes;
LayerVariantAttr<std::vector<float>> mean_values;
LayerVariantAttr<std::vector<float>> scale_values;
LayerVariantAttr<int> interpolation;
};
struct CompiledModel {
std::string blob_path;
};
using Kind = cv::util::variant<Model, CompiledModel>;
ParamDesc(Kind &&kind_,
const std::string &device_,
const bool is_generic_,
const size_t num_in_,
const size_t num_out_)
: kind(std::move(kind_)), device(device_),
is_generic(is_generic_),
num_in(num_in_), num_out(num_out_) {
}
Kind kind;
std::string device;
bool is_generic;
std::size_t num_in;
std::size_t num_out;
std::vector<std::string> input_names;
std::vector<std::string> output_names;
using PluginConfigT = std::map<std::string, std::string>;
PluginConfigT config;
size_t nireq = 1;
};
// NB: Just helper to avoid code duplication.
static detail::ParamDesc::Model&
getModelToSetAttrOrThrow(detail::ParamDesc::Kind &kind,
const std::string &attr_name) {
if (cv::util::holds_alternative<detail::ParamDesc::CompiledModel>(kind)) {
cv::util::throw_error(
std::logic_error("Specifying " + attr_name + " isn't"
" possible for compiled model."));
}
GAPI_Assert(cv::util::holds_alternative<detail::ParamDesc::Model>(kind));
return cv::util::get<detail::ParamDesc::Model>(kind);
}
} // namespace detail
/**
* @brief This structure provides functions
* that fill inference parameters for "OpenVINO Toolkit" model.
*/
template<typename Net> struct Params {
public:
/** @brief Class constructor.
Constructs Params based on model information and specifies default values for other
inference description parameters. Model is loaded and compiled using "OpenVINO Toolkit".
@param model_path Path to a model.
@param bin_path Path to a data file.
For IR format (*.bin):
If path is empty, will try to read a bin file with the same name as xml.
If the bin file with the same name is not found, will load IR without weights.
For PDPD (*.pdmodel) and ONNX (*.onnx) formats bin_path isn't used.
@param device target device to use.
*/
Params(const std::string &model_path,
const std::string &bin_path,
const std::string &device)
: m_desc( detail::ParamDesc::Kind{detail::ParamDesc::Model{model_path, bin_path}}
, device
, false /* is generic */
, std::tuple_size<typename Net::InArgs>::value
, std::tuple_size<typename Net::OutArgs>::value) {
}
/** @overload
Use this constructor to work with pre-compiled network.
Model is imported from a pre-compiled blob.
@param blob_path path to the compiled model (*.blob).
@param device target device to use.
*/
Params(const std::string &blob_path,
const std::string &device)
: m_desc( detail::ParamDesc::Kind{detail::ParamDesc::CompiledModel{blob_path}}
, device
, false /* is generic */
, std::tuple_size<typename Net::InArgs>::value
, std::tuple_size<typename Net::OutArgs>::value) {
}
/** @brief Specifies sequence of network input layers names for inference.
The function is used to associate cv::gapi::infer<> inputs with the model inputs.
Number of names has to match the number of network inputs as defined in G_API_NET().
In case a network has only a single input layer, there is no need to specify the name manually.
@param layer_names std::array<std::string, N> where N is the number of inputs
as defined in the @ref G_API_NET. Contains names of input layers.
@return reference to this parameter structure.
*/
Params<Net>& cfgInputLayers(const std::vector<std::string> &layer_names) {
m_desc.input_names = layer_names;
return *this;
}
/** @brief Specifies sequence of network output layers names for inference.
The function is used to associate cv::gapi::infer<> outputs with the model outputs.
Number of names has to match the number of network outputs as defined in G_API_NET().
In case a network has only a single output layer, there is no need to specify the name manually.
@param layer_names std::array<std::string, N> where N is the number of outputs
as defined in the @ref G_API_NET. Contains names of output layers.
@return reference to this parameter structure.
*/
Params<Net>& cfgOutputLayers(const std::vector<std::string> &layer_names) {
m_desc.output_names = layer_names;
return *this;
}
/** @brief Specifies OpenVINO plugin configuration.
The function is used to set configuration for OpenVINO plugin. Some parameters
can be different for each plugin. Please follow https://docs.openvinotoolkit.org/latest/index.html
to check information about specific plugin.
@param config Map of pairs: (config parameter name, config parameter value).
@return reference to this parameter structure.
*/
Params<Net>& cfgPluginConfig(const detail::ParamDesc::PluginConfigT &config) {
m_desc.config = config;
return *this;
}
/** @brief Specifies tensor layout for an input layer.
The function is used to set tensor layout for an input layer.
@param layout Tensor layout ("NCHW", "NWHC", etc)
will be applied to all input layers.
@return reference to this parameter structure.
*/
Params<Net>& cfgInputTensorLayout(std::string layout) {
detail::getModelToSetAttrOrThrow(m_desc.kind, "input tensor layout")
.input_tensor_layout = std::move(layout);
return *this;
}
/** @overload
@param layout_map Map of pairs: name of corresponding input layer
and its tensor layout represented in std::string ("NCHW", "NHWC", etc)
@return reference to this parameter structure.
*/
Params<Net>&
cfgInputTensorLayout(detail::AttrMap<std::string> layout_map) {
detail::getModelToSetAttrOrThrow(m_desc.kind, "input tensor layout")
.input_tensor_layout = std::move(layout_map);
return *this;
}
/** @brief Specifies model layout for an input layer.
The function is used to set model layout for an input layer.
@param layout Model layout ("NCHW", "NHWC", etc)
will be applied to all input layers.
@return reference to this parameter structure.
*/
Params<Net>& cfgInputModelLayout(std::string layout) {
detail::getModelToSetAttrOrThrow(m_desc.kind, "input model layout")
.input_model_layout = std::move(layout);
return *this;
}
/** @overload
@param layout_map Map of pairs: name of corresponding input layer
and its model layout ("NCHW", "NHWC", etc)
@return reference to this parameter structure.
*/
Params<Net>&
cfgInputModelLayout(detail::AttrMap<std::string> layout_map) {
detail::getModelToSetAttrOrThrow(m_desc.kind, "input model layout")
.input_model_layout = std::move(layout_map);
return *this;
}
/** @brief Specifies tensor layout for an output layer.
The function is used to set tensor layout for an output layer.
@param layout Tensor layout ("NCHW", "NWHC", etc)
will be applied to all output layers.
@return reference to this parameter structure.
*/
Params<Net>& cfgOutputTensorLayout(std::string layout) {
detail::getModelToSetAttrOrThrow(m_desc.kind, "output tensor layout")
.output_tensor_layout = std::move(layout);
return *this;
}
/** @overload
@param layout_map Map of pairs: name of corresponding output layer
and its tensor layout represented in std::string ("NCHW", "NHWC", etc)
@return reference to this parameter structure.
*/
Params<Net>&
cfgOutputTensorLayout(detail::AttrMap<std::string> layout_map) {
detail::getModelToSetAttrOrThrow(m_desc.kind, "output tensor layout")
.output_tensor_layout = std::move(layout_map);
return *this;
}
/** @brief Specifies model layout for an output layer.
The function is used to set model layout for an output layer.
@param layout Model layout ("NCHW", "NHWC", etc)
will be applied to all output layers.
@return reference to this parameter structure.
*/
Params<Net>& cfgOutputModelLayout(std::string layout) {
detail::getModelToSetAttrOrThrow(m_desc.kind, "output model layout")
.output_model_layout = std::move(layout);
return *this;
}
/** @overload
@param layout_map Map of pairs: name of corresponding output layer
and its model layout ("NCHW", "NHWC", etc)
@return reference to this parameter structure.
*/
Params<Net>&
cfgOutputModelLayout(detail::AttrMap<std::string> layout_map) {
detail::getModelToSetAttrOrThrow(m_desc.kind, "output model layout")
.output_model_layout = std::move(layout_map);
return *this;
}
/** @brief Specifies tensor precision for an output layer.
The function is used to set tensor precision for an output layer.
@param precision Precision in OpenCV format (CV_8U, CV_32F, ...)
will be applied to all output layers.
@return reference to this parameter structure.
*/
Params<Net>& cfgOutputTensorPrecision(int precision) {
detail::getModelToSetAttrOrThrow(m_desc.kind, "output tensor precision")
.output_tensor_precision = precision;
return *this;
}
/** @overload
@param precision_map Map of pairs: name of corresponding output layer
and its precision in OpenCV format (CV_8U, CV_32F, ...)
@return reference to this parameter structure.
*/
Params<Net>&
cfgOutputTensorPrecision(detail::AttrMap<int> precision_map) {
detail::getModelToSetAttrOrThrow(m_desc.kind, "output tensor precision")
.output_tensor_precision = std::move(precision_map);
return *this;
}
/** @brief Specifies the new shape for input layers.
The function is used to set new shape for input layers.
@param new_shape New shape will be applied to all input layers.
@return reference to this parameter structure.
*/
Params<Net>&
cfgReshape(std::vector<size_t> new_shape) {
detail::getModelToSetAttrOrThrow(m_desc.kind, "reshape")
.new_shapes = std::move(new_shape);
return *this;
}
/** @overload
@param new_shape_map Map of pairs: name of corresponding input layer
and its new shape.
@return reference to this parameter structure.
*/
Params<Net>&
cfgReshape(detail::AttrMap<std::vector<size_t>> new_shape_map) {
detail::getModelToSetAttrOrThrow(m_desc.kind, "reshape")
.new_shapes = std::move(new_shape_map);
return *this;
}
/** @brief Specifies number of asynchronous inference requests.
@param nireq Number of inference asynchronous requests.
@return reference to this parameter structure.
*/
Params<Net>& cfgNumRequests(const size_t nireq) {
if (nireq == 0) {
cv::util::throw_error(
std::logic_error("Number of inference requests"
" must be greater than zero."));
}
m_desc.nireq = nireq;
return *this;
}
/** @brief Specifies mean values for preprocessing.
*
The function is used to set mean values for input layer preprocessing.
@param mean_values Float vector containing mean values
@return reference to this parameter structure.
*/
Params<Net>& cfgMean(std::vector<float> mean_values) {
detail::getModelToSetAttrOrThrow(m_desc.kind, "mean values")
.mean_values = std::move(mean_values);
return *this;
}
/** @overload
@param mean_map Map of pairs: name of corresponding input layer
and its mean values.
@return reference to this parameter structure.
*/
Params<Net>& cfgMean(detail::AttrMap<std::vector<float>> mean_map) {
detail::getModelToSetAttrOrThrow(m_desc.kind, "mean values")
.mean_values = std::move(mean_map);
return *this;
}
/** @brief Specifies scale values for preprocessing.
*
The function is used to set scale values for input layer preprocessing.
@param scale_values Float vector containing scale values
@return reference to this parameter structure.
*/
Params<Net>& cfgScale(std::vector<float> scale_values) {
detail::getModelToSetAttrOrThrow(m_desc.kind, "scale values")
.scale_values = std::move(scale_values);
return *this;
}
/** @overload
@param scale_map Map of pairs: name of corresponding input layer
and its scale values.
@return reference to this parameter structure.
*/
Params<Net>& cfgScale(detail::AttrMap<std::vector<float>> scale_map) {
detail::getModelToSetAttrOrThrow(m_desc.kind, "scale values")
.scale_values = std::move(scale_map);
return *this;
}
/** @brief Specifies resize interpolation algorithm.
*
The function is used to configure resize preprocessing for input layer.
@param interpolation Resize interpolation algorithm.
Supported algorithms: #INTER_NEAREST, #INTER_LINEAR, #INTER_CUBIC.
@return reference to this parameter structure.
*/
Params<Net>& cfgResize(int interpolation) {
detail::getModelToSetAttrOrThrow(m_desc.kind, "resize preprocessing")
.interpolation = std::move(interpolation);
return *this;
}
/** @overload
@param interpolation Map of pairs: name of corresponding input layer
and its resize algorithm.
@return reference to this parameter structure.
*/
Params<Net>& cfgResize(detail::AttrMap<int> interpolation) {
detail::getModelToSetAttrOrThrow(m_desc.kind, "resize preprocessing")
.interpolation = std::move(interpolation);
return *this;
}
// BEGIN(G-API's network parametrization API)
GBackend backend() const { return cv::gapi::ov::backend(); }
std::string tag() const { return Net::tag(); }
cv::util::any params() const { return { m_desc }; }
// END(G-API's network parametrization API)
protected:
detail::ParamDesc m_desc;
};
/*
* @brief This structure provides functions for the generic network type that
* fill inference parameters.
* @see struct Generic
*/
template<>
class Params<cv::gapi::Generic> {
public:
/** @brief Class constructor.
Constructs Params based on model information and specifies default values for other
inference description parameters. Model is loaded and compiled using "OpenVINO Toolkit".
@param tag string tag of the network for which these parameters are intended.
@param model_path Path to a model.
@param bin_path Path to a data file.
For IR format (*.bin):
If path is empty, will try to read a bin file with the same name as xml.
If the bin file with the same name is not found, will load IR without weights.
For PDPD (*.pdmodel) and ONNX (*.onnx) formats bin_path isn't used.
@param device target device to use.
*/
Params(const std::string &tag,
const std::string &model_path,
const std::string &bin_path,
const std::string &device)
: m_tag(tag),
m_desc( detail::ParamDesc::Kind{detail::ParamDesc::Model{model_path, bin_path}}
, device
, true /* is generic */
, 0u
, 0u) {
}
/** @overload
Use this constructor for pre-compiled networks. The model is imported from a pre-compiled
blob.
@param tag string tag of the network for which these parameters are intended.
@param blob_path path to the compiled model (*.blob).
@param device target device to use.
*/
Params(const std::string &tag,
const std::string &blob_path,
const std::string &device)
: m_tag(tag),
m_desc( detail::ParamDesc::Kind{detail::ParamDesc::CompiledModel{blob_path}}
, device
, true /* is generic */
, 0u
, 0u) {
}
/** @see ov::Params::cfgPluginConfig. */
Params& cfgPluginConfig(const detail::ParamDesc::PluginConfigT &config) {
m_desc.config = config;
return *this;
}
/** @see ov::Params::cfgInputTensorLayout. */
Params& cfgInputTensorLayout(std::string layout) {
detail::getModelToSetAttrOrThrow(m_desc.kind, "input tensor layout")
.input_tensor_layout = std::move(layout);
return *this;
}
/** @overload */
Params&
cfgInputTensorLayout(detail::AttrMap<std::string> layout_map) {
detail::getModelToSetAttrOrThrow(m_desc.kind, "input tensor layout")
.input_tensor_layout = std::move(layout_map);
return *this;
}
/** @see ov::Params::cfgInputModelLayout. */
Params& cfgInputModelLayout(std::string layout) {
detail::getModelToSetAttrOrThrow(m_desc.kind, "input model layout")
.input_model_layout = std::move(layout);
return *this;
}
/** @overload */
Params&
cfgInputModelLayout(detail::AttrMap<std::string> layout_map) {
detail::getModelToSetAttrOrThrow(m_desc.kind, "input model layout")
.input_model_layout = std::move(layout_map);
return *this;
}
/** @see ov::Params::cfgOutputTensorLayout. */
Params& cfgOutputTensorLayout(std::string layout) {
detail::getModelToSetAttrOrThrow(m_desc.kind, "output tensor layout")
.output_tensor_layout = std::move(layout);
return *this;
}
/** @overload */
Params&
cfgOutputTensorLayout(detail::AttrMap<std::string> layout_map) {
detail::getModelToSetAttrOrThrow(m_desc.kind, "output tensor layout")
.output_tensor_layout = std::move(layout_map);
return *this;
}
/** @see ov::Params::cfgOutputModelLayout. */
Params& cfgOutputModelLayout(std::string layout) {
detail::getModelToSetAttrOrThrow(m_desc.kind, "output model layout")
.output_model_layout = std::move(layout);
return *this;
}
/** @overload */
Params&
cfgOutputModelLayout(detail::AttrMap<std::string> layout_map) {
detail::getModelToSetAttrOrThrow(m_desc.kind, "output model layout")
.output_model_layout = std::move(layout_map);
return *this;
}
/** @see ov::Params::cfgOutputTensorPrecision. */
Params& cfgOutputTensorPrecision(int precision) {
detail::getModelToSetAttrOrThrow(m_desc.kind, "output tensor precision")
.output_tensor_precision = precision;
return *this;
}
/** @overload */
Params&
cfgOutputTensorPrecision(detail::AttrMap<int> precision_map) {
detail::getModelToSetAttrOrThrow(m_desc.kind, "output tensor precision")
.output_tensor_precision = std::move(precision_map);
return *this;
}
/** @see ov::Params::cfgReshape. */
Params& cfgReshape(std::vector<size_t> new_shape) {
detail::getModelToSetAttrOrThrow(m_desc.kind, "reshape")
.new_shapes = std::move(new_shape);
return *this;
}
/** @overload */
Params&
cfgReshape(detail::AttrMap<std::vector<size_t>> new_shape_map) {
detail::getModelToSetAttrOrThrow(m_desc.kind, "reshape")
.new_shapes = std::move(new_shape_map);
return *this;
}
/** @see ov::Params::cfgNumRequests. */
Params& cfgNumRequests(const size_t nireq) {
if (nireq == 0) {
cv::util::throw_error(
std::logic_error("Number of inference requests"
" must be greater than zero."));
}
m_desc.nireq = nireq;
return *this;
}
/** @see ov::Params::cfgMean. */
Params& cfgMean(std::vector<float> mean_values) {
detail::getModelToSetAttrOrThrow(m_desc.kind, "mean values")
.mean_values = std::move(mean_values);
return *this;
}
/** @overload */
Params& cfgMean(detail::AttrMap<std::vector<float>> mean_map) {
detail::getModelToSetAttrOrThrow(m_desc.kind, "mean values")
.mean_values = std::move(mean_map);
return *this;
}
/** @see ov::Params::cfgScale. */
Params& cfgScale(std::vector<float> scale_values) {
detail::getModelToSetAttrOrThrow(m_desc.kind, "scale values")
.scale_values = std::move(scale_values);
return *this;
}
/** @overload */
Params& cfgScale(detail::AttrMap<std::vector<float>> scale_map) {
detail::getModelToSetAttrOrThrow(m_desc.kind, "scale values")
.scale_values = std::move(scale_map);
return *this;
}
/** @see ov::Params::cfgResize. */
Params& cfgResize(int interpolation) {
detail::getModelToSetAttrOrThrow(m_desc.kind, "resize preprocessing")
.interpolation = std::move(interpolation);
return *this;
}
/** @overload */
Params& cfgResize(detail::AttrMap<int> interpolation) {
detail::getModelToSetAttrOrThrow(m_desc.kind, "resize preprocessing")
.interpolation = std::move(interpolation);
return *this;
}
// BEGIN(G-API's network parametrization API)
GBackend backend() const { return cv::gapi::ov::backend(); }
std::string tag() const { return m_tag; }
cv::util::any params() const { return { m_desc }; }
// END(G-API's network parametrization API)
protected:
std::string m_tag;
detail::ParamDesc m_desc;
};
} // namespace ov
namespace wip { namespace ov {
/**
* @brief Ask G-API OpenVINO backend to run only inference of model provided.
*
* G-API OpenVINO backend will perform only the inference of the model provided
* without populating input data or copying back output data.
* This mode is used to evaluate the pure inference performance of the model without
* taking into account the i/o data transfer.
*/
struct benchmark_mode { };
} // namespace ov
} // namespace wip
} // namespace gapi
namespace detail
{
template<> struct CompileArgTag<cv::gapi::wip::ov::benchmark_mode>
{
static const char* tag() { return "gapi.wip.ov.benchmark_mode"; }
};
}
} // namespace cv
#endif // OPENCV_GAPI_INFER_OV_HPP
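
As a usage sketch of the generic OpenVINO parameters declared above (the tag, file paths, layout, and preprocessing constants are illustrative assumptions):

cv::gapi::ov::Params<cv::gapi::Generic> ov_params{
    "sample-net",     // tag, matches the one used with cv::gapi::infer<cv::gapi::Generic>
    "model.xml",      // illustrative IR paths
    "model.bin",
    "CPU"
};
ov_params.cfgInputModelLayout("NCHW")
         .cfgMean({ 123.675f, 116.28f, 103.53f })
         .cfgScale({ 58.395f, 57.12f, 57.375f })
         .cfgNumRequests(4);

auto nets = cv::gapi::networks(ov_params);   // then pass via cv::compile_args(nets)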

View File

@ -1,138 +0,0 @@
// This file is part of OpenCV project.
// It is subject to the license terms in the LICENSE file found in the top-level directory
// of this distribution and at http://opencv.org/license.html.
//
// Copyright (C) 2020 Intel Corporation
#ifndef OPENCV_GAPI_PARSERS_HPP
#define OPENCV_GAPI_PARSERS_HPP
#include <utility> // std::tuple
#include <opencv2/gapi/gmat.hpp>
#include <opencv2/gapi/gkernel.hpp>
namespace cv { namespace gapi {
namespace nn {
namespace parsers {
using GRects = GArray<Rect>;
using GDetections = std::tuple<GArray<Rect>, GArray<int>>;
G_TYPED_KERNEL(GParseSSDBL, <GDetections(GMat, GOpaque<Size>, float, int)>,
"org.opencv.nn.parsers.parseSSD_BL") {
static std::tuple<GArrayDesc,GArrayDesc> outMeta(const GMatDesc&, const GOpaqueDesc&, float, int) {
return std::make_tuple(empty_array_desc(), empty_array_desc());
}
};
G_TYPED_KERNEL(GParseSSD, <GRects(GMat, GOpaque<Size>, float, bool, bool)>,
"org.opencv.nn.parsers.parseSSD") {
static GArrayDesc outMeta(const GMatDesc&, const GOpaqueDesc&, float, bool, bool) {
return empty_array_desc();
}
};
G_TYPED_KERNEL(GParseYolo, <GDetections(GMat, GOpaque<Size>, float, float, std::vector<float>)>,
"org.opencv.nn.parsers.parseYolo") {
static std::tuple<GArrayDesc, GArrayDesc> outMeta(const GMatDesc&, const GOpaqueDesc&,
float, float, const std::vector<float>&) {
return std::make_tuple(empty_array_desc(), empty_array_desc());
}
static const std::vector<float>& defaultAnchors() {
static std::vector<float> anchors {
0.57273f, 0.677385f, 1.87446f, 2.06253f, 3.33843f, 5.47434f, 7.88282f, 3.52778f, 9.77052f, 9.16828f
};
return anchors;
}
};
} // namespace parsers
} // namespace nn
/** @brief Parses output of SSD network.
Extracts detection information (box, confidence, label) from SSD output and
filters it by given confidence and label.
@note Function textual ID is "org.opencv.nn.parsers.parseSSD_BL"
@param in Input CV_32F tensor with {1,1,N,7} dimensions.
@param inSz Size to project detected boxes to (size of the input image).
@param confidenceThreshold If confidence of the
detection is smaller than confidence threshold, detection is rejected.
@param filterLabel If provided (!= -1), only detections with
the given label will get to the output.
@return a tuple with a vector of detected boxes and a vector of appropriate labels.
*/
GAPI_EXPORTS_W std::tuple<GArray<Rect>, GArray<int>> parseSSD(const GMat& in,
const GOpaque<Size>& inSz,
const float confidenceThreshold = 0.5f,
const int filterLabel = -1);
/** @brief Parses output of SSD network.
Extracts detection information (box, confidence) from SSD output and
filters it by given confidence and by going out of bounds.
@note Function textual ID is "org.opencv.nn.parsers.parseSSD"
@param in Input CV_32F tensor with {1,1,N,7} dimensions.
@param inSz Size to project detected boxes to (size of the input image).
@param confidenceThreshold If confidence of the
detection is smaller than confidence threshold, detection is rejected.
@param alignmentToSquare If set to true, bounding boxes are extended to squares.
The center of the rectangle remains unchanged; the side of the square is
the larger side of the rectangle.
@param filterOutOfBounds If set to true, out-of-frame boxes are filtered out.
@return a vector of detected bounding boxes.
*/
GAPI_EXPORTS_W GArray<Rect> parseSSD(const GMat& in,
const GOpaque<Size>& inSz,
const float confidenceThreshold,
const bool alignmentToSquare,
const bool filterOutOfBounds);
/** @brief Parses output of Yolo network.
Extracts detection information (box, confidence, label) from Yolo output,
filters it by given confidence and performs non-maximum suppression for overlapping boxes.
@note Function textual ID is "org.opencv.nn.parsers.parseYolo"
@param in Input CV_32F tensor with {1,13,13,N} dimensions, N should satisfy:
\f[\texttt{N} = (\texttt{num_classes} + \texttt{5}) * \texttt{5},\f]
where num_classes - a number of classes Yolo network was trained with.
@param inSz Size to project detected boxes to (size of the input image).
@param confidenceThreshold If confidence of the
detection is smaller than confidence threshold, detection is rejected.
@param nmsThreshold Non-maximum suppression threshold which controls minimum
relative box intersection area required for rejecting the box with a smaller confidence.
If 1.f, nms is not performed and no boxes are rejected.
@param anchors Anchors Yolo network was trained with.
@note The default anchor values are specified for YOLO v2 Tiny as described in Intel Open Model Zoo
<a href="https://github.com/openvinotoolkit/open_model_zoo/blob/master/models/public/yolo-v2-tiny-tf/yolo-v2-tiny-tf.md">documentation</a>.
@return a tuple with a vector of detected boxes and a vector of appropriate labels.
*/
GAPI_EXPORTS_W std::tuple<GArray<Rect>, GArray<int>> parseYolo(const GMat& in,
const GOpaque<Size>& inSz,
const float confidenceThreshold = 0.5f,
const float nmsThreshold = 0.5f,
const std::vector<float>& anchors
= nn::parsers::GParseYolo::defaultAnchors());
} // namespace gapi
} // namespace cv
// Reimport parseSSD & parseYolo under their initial namespace
namespace cv {
namespace gapi {
namespace streaming {
using cv::gapi::parseSSD;
using cv::gapi::parseYolo;
} // namespace streaming
} // namespace gapi
} // namespace cv
#endif // OPENCV_GAPI_PARSERS_HPP
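
A usage sketch of the SSD parser above; the detector network type and the use of the cv::gapi::streaming::size() helper to obtain the frame size are assumptions for illustration:

G_API_NET(SSDNet, <cv::GMat(cv::GMat)>, "sample.ssd-detector");   // hypothetical detector

cv::GMat in;
cv::GMat blob = cv::gapi::infer<SSDNet>(in);
cv::GOpaque<cv::Size> sz = cv::gapi::streaming::size(in);         // frame size for box projection

cv::GArray<cv::Rect> boxes;
cv::GArray<int>      labels;
std::tie(boxes, labels) = cv::gapi::parseSSD(blob, sz, 0.6f /*confidence*/, -1 /*any label*/);

cv::GComputation graph(cv::GIn(in), cv::GOut(boxes, labels));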

View File

@ -1,258 +0,0 @@
// This file is part of OpenCV project.
// It is subject to the license terms in the LICENSE file found in the top-level directory
// of this distribution and at http://opencv.org/license.html.
//
// Copyright (C) 2020 Intel Corporation
#ifndef OPENCV_GAPI_MEDIA_HPP
#define OPENCV_GAPI_MEDIA_HPP
#include <memory> // unique_ptr<>, shared_ptr<>
#include <array> // array<>
#include <functional> // function<>
#include <utility> // forward<>()
#include <opencv2/gapi/gframe.hpp>
#include <opencv2/gapi/util/any.hpp>
// Forward declaration
namespace cv {
namespace gapi {
namespace s11n {
struct IOStream;
struct IIStream;
} // namespace s11n
} // namespace gapi
} // namespace cv
namespace cv {
/** \addtogroup gapi_data_structures
* @{
*
* @brief Extra G-API data structures used to pass input/output data
* to the graph for processing.
*/
/**
* @brief cv::MediaFrame class represents an image/media frame
* obtained from an external source.
*
* cv::MediaFrame represents image data as specified in
* cv::MediaFormat. cv::MediaFrame is designed to be a thin wrapper over some
* external memory of buffer; the class itself provides an uniform
* interface over such types of memory. cv::MediaFrame wraps data from
* a camera driver or from a media codec and provides an abstraction
* layer over this memory to G-API. MediaFrame defines a compact interface
* to access and manage the underlying data; the implementation is
* fully defined by the associated Adapter (which is usually
* user-defined).
*
* @sa cv::RMat
*/
class GAPI_EXPORTS MediaFrame {
public:
/// This enum defines different types of cv::MediaFrame provided
/// access to the underlying data. Note that different flags can't
/// be combined in this version.
enum class Access {
R, ///< Access data for reading
W, ///< Access data for writing
};
class IAdapter;
class View;
using AdapterPtr = std::unique_ptr<IAdapter>;
/**
* @brief Constructs an empty MediaFrame
*
* The constructed object has no data associated with it.
*/
MediaFrame();
/**
* @brief Constructs a MediaFrame with the given
* Adapter. MediaFrame takes ownership over the passed adapter.
*
* @param p a unique pointer to an instance of an IAdapter-derived class.
*/
explicit MediaFrame(AdapterPtr &&p);
/**
* @overload
* @brief Constructs a MediaFrame with the given parameters for
* the Adapter. The adapter of type `T` is constructed on the fly.
*
* @param args list of arguments to construct an adapter of type
* `T`.
*/
template<class T, class... Args> static cv::MediaFrame Create(Args&&... args);
/**
* @brief Obtain access to the underlying data with the given
* mode.
*
* Depending on the associated Adapter and the data wrapped, this
* method may be cheap (e.g., the underlying memory is local) or
* costly (if the underlying memory is external or device
* memory).
*
* @param mode an access mode flag
* @return a MediaFrame::View object. The views should be handled
* carefully, refer to the MediaFrame::View documentation for details.
*/
View access(Access mode) const;
/**
* @brief Returns a media frame descriptor -- the information
* about the media format, dimensions, etc.
* @return a cv::GFrameDesc
*/
cv::GFrameDesc desc() const;
// FIXME: design a better solution
// Should be used only if the actual adapter provides implementation
/// @private -- exclude from the OpenCV documentation for now.
cv::util::any blobParams() const;
/**
* @brief Casts and returns the associated MediaFrame adapter to
* the particular adapter type `T`, returns nullptr if the type is
* different.
*
* This method may be useful if the adapter type is known by the
* caller, and some lower level access to the memory is required.
* Depending on the memory type, it may be more efficient than
* access().
*
* @return a pointer to the adapter object, nullptr if the adapter
* type is different.
*/
template<typename T> T* get() const {
static_assert(std::is_base_of<IAdapter, T>::value,
"T is not derived from cv::MediaFrame::IAdapter!");
auto* adapter = getAdapter();
GAPI_Assert(adapter != nullptr);
return dynamic_cast<T*>(adapter);
}
/**
* @brief Serialize MediaFrame's data to a byte array.
*
* @note The actual logic is implemented by frame's adapter class.
* Does nothing by default.
*
* @param os Bytestream to store serialized MediaFrame data in.
*/
void serialize(cv::gapi::s11n::IOStream& os) const;
private:
struct Priv;
std::shared_ptr<Priv> m;
IAdapter* getAdapter() const;
};
template<class T, class... Args>
inline cv::MediaFrame cv::MediaFrame::Create(Args&&... args) {
std::unique_ptr<T> ptr(new T(std::forward<Args>(args)...));
return cv::MediaFrame(std::move(ptr));
}
/**
* @brief Provides access to the MediaFrame's underlying data.
*
* This object contains the necessary information to access the pixel
* data of the associated MediaFrame: arrays of pointers and strides
* (distance between every plane row, in bytes) for every image
* plane, as defined in cv::MediaFormat.
* There may be up to four image planes in MediaFrame.
*
* Depending on the MediaFrame::Access flag passed in
* MediaFrame::access(), a MediaFrame::View may be read- or
* write-only.
*
* Depending on the MediaFrame::IAdapter implementation associated
* with the parent MediaFrame, writing to memory with
* MediaFrame::Access::R flag may have no effect or lead to
* undefined behavior. Same applies to reading the memory with
* MediaFrame::Access::W flag -- again, depending on the IAdapter
* implementation, the host-side buffer the view provides access to
* may have no current data stored in it (so in-place editing of the
* buffer contents may not be possible).
*
* MediaFrame::View objects must be handled carefully, as an external
* resource associated with MediaFrame may be locked for the time the
* MediaFrame::View object exists. Obtaining MediaFrame::View should
* be seen as "map" and destroying it as "unmap" in the "map/unmap"
* idiom (applicable to OpenCL, device memory, remote
* memory).
*
* When a MediaFrame buffer is accessed for writing, and the memory
* under MediaFrame::View::Ptrs is altered, the data synchronization
* of a host-side and device/remote buffer is not guaranteed until the
* MediaFrame::View is destroyed. In other words, the real data on the
* device or in a remote target may be updated at the MediaFrame::View
* destruction only -- but it depends on the associated
* MediaFrame::IAdapter implementation.
*/
class GAPI_EXPORTS MediaFrame::View final {
public:
static constexpr const size_t MAX_PLANES = 4;
using Ptrs = std::array<void*, MAX_PLANES>;
using Strides = std::array<std::size_t, MAX_PLANES>; // in bytes
using Callback = std::function<void()>;
/// @private
View(Ptrs&& ptrs, Strides&& strs, Callback &&cb = [](){});
/// @private
View(const View&) = delete;
/// @private
View(View&&) = default;
/// @private
View& operator = (const View&) = delete;
~View();
Ptrs ptr; ///< Array of image plane pointers
Strides stride; ///< Array of image plane strides, in bytes.
private:
Callback m_cb;
};
/**
* @brief An interface class for MediaFrame data adapters.
*
* Implement this interface to wrap media data in the MediaFrame. It
* makes sense to implement this class if there is a custom
* cv::gapi::wip::IStreamSource defined -- in this case, a stream
* source can produce MediaFrame objects with this adapter and the
* media data may be passed to graph without any copy. For example, a
* GStreamer-based stream source can implement an adapter over
* `GstBuffer` and G-API will transparently use it in the graph.
*/
class GAPI_EXPORTS MediaFrame::IAdapter {
public:
virtual ~IAdapter() = 0;
virtual cv::GFrameDesc meta() const = 0;
virtual MediaFrame::View access(MediaFrame::Access) = 0;
// FIXME: design a better solution
// The default implementation does nothing
virtual cv::util::any blobParams() const;
virtual void serialize(cv::gapi::s11n::IOStream&) {
GAPI_Error("Generic serialize method of MediaFrame::IAdapter does nothing by default. "
"Please, implement it in derived class to properly serialize the object.");
}
virtual void deserialize(cv::gapi::s11n::IIStream&) {
GAPI_Error("Generic deserialize method of MediaFrame::IAdapter does nothing by default. "
"Please, implement it in derived class to properly deserialize the object.");
}
};
/** @} */
} //namespace cv
#endif // OPENCV_GAPI_MEDIA_HPP
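
A minimal adapter sketch for the interface above: it wraps an owned BGR cv::Mat in a cv::MediaFrame. The class name and the use of cv::imread are illustrative assumptions.

class MatAdapter final : public cv::MediaFrame::IAdapter {
    cv::Mat m_mat;   // the adapter owns the BGR frame it wraps
public:
    explicit MatAdapter(cv::Mat m) : m_mat(std::move(m)) {}
    cv::GFrameDesc meta() const override {
        return cv::GFrameDesc{cv::MediaFormat::BGR, m_mat.size()};
    }
    cv::MediaFrame::View access(cv::MediaFrame::Access) override {
        // Single interleaved BGR plane; the remaining plane slots stay empty.
        cv::MediaFrame::View::Ptrs    p = { m_mat.data, nullptr, nullptr, nullptr };
        cv::MediaFrame::View::Strides s = { static_cast<std::size_t>(m_mat.step), 0u, 0u, 0u };
        return cv::MediaFrame::View(std::move(p), std::move(s));
    }
};

cv::MediaFrame frame = cv::MediaFrame::Create<MatAdapter>(cv::imread("input.png"));
cv::MediaFrame::View view = frame.access(cv::MediaFrame::Access::R);
// view.ptr[0] and view.stride[0] now describe the BGR plane of the wrapped cv::Mat.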

View File

@ -1,66 +0,0 @@
// This file is part of OpenCV project.
// It is subject to the license terms in the LICENSE file found in the top-level directory
// of this distribution and at http://opencv.org/license.html.
//
// Copyright (C) 2022 Intel Corporation
#ifndef OPENCV_GAPI_OAK_INFER_HPP
#define OPENCV_GAPI_OAK_INFER_HPP
#include <unordered_map>
#include <string>
#include <array>
#include <tuple>
#include <opencv2/gapi/opencv_includes.hpp>
#include <opencv2/gapi/util/any.hpp>
#include <opencv2/core/cvdef.h> // GAPI_EXPORTS
#include <opencv2/gapi/gkernel.hpp> // GKernelPackage
namespace cv {
namespace gapi {
namespace oak {
namespace detail {
/**
* @brief This structure contains description of inference parameters
* which is specific to OAK models.
*/
struct ParamDesc {
std::string blob_file;
};
} // namespace detail
/**
* Contains the description of inference parameters and a kit of functions that
* fill these parameters.
*/
template<typename Net> class Params {
public:
/** @brief Class constructor.
Constructs Params based on model information and sets default values for other
inference description parameters.
@param model Path to model (.blob file)
*/
explicit Params(const std::string &model) {
desc.blob_file = model;
};
// BEGIN(G-API's network parametrization API)
GBackend backend() const { return cv::gapi::oak::backend(); }
std::string tag() const { return Net::tag(); }
cv::util::any params() const { return { desc }; }
// END(G-API's network parametrization API)
protected:
detail::ParamDesc desc;
};
} // namespace oak
} // namespace gapi
} // namespace cv
#endif // OPENCV_GAPI_OAK_INFER_HPP
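
An illustrative configuration of the OAK inference parameters above (the network type and blob path are assumptions):

G_API_NET(OakDetector, <cv::GMat(cv::GFrame)>, "sample.oak.detector");   // hypothetical

cv::gapi::oak::Params<OakDetector> oak_params("model.blob");   // illustrative blob path
auto nets = cv::gapi::networks(oak_params);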

View File

@ -1,158 +0,0 @@
// This file is part of OpenCV project.
// It is subject to the license terms in the LICENSE file found in the top-level directory
// of this distribution and at http://opencv.org/license.html.
//
// Copyright (C) 2021 Intel Corporation
#ifndef OPENCV_GAPI_OAK_HPP
#define OPENCV_GAPI_OAK_HPP
#include <opencv2/gapi/garg.hpp> // IStreamSource
#include <opencv2/gapi/gkernel.hpp> // GKernelPackage
#include <opencv2/gapi/gstreaming.hpp> // GOptRunArgsP
namespace cv {
namespace gapi {
namespace oak {
// FIXME: copypasted from dai library
struct EncoderConfig {
/**
* Rate control mode specifies if constant or variable bitrate should be used (H264 / H265)
*/
enum class RateControlMode: int { CBR, VBR };
/**
* Encoding profile, H264, H265 or MJPEG
*/
enum class Profile: int { H264_BASELINE, H264_HIGH, H264_MAIN, H265_MAIN, MJPEG };
/**
* Specifies preferred bitrate (kb) of compressed output bitstream
*/
std::int32_t bitrate = 8000;
/**
* Every x number of frames a keyframe will be inserted
*/
std::int32_t keyframeFrequency = 30;
/**
* Specifies maximum bitrate (kb) of compressed output bitstream
*/
std::int32_t maxBitrate = 8000;
/**
* Specifies number of B frames to be inserted
*/
std::int32_t numBFrames = 0;
/**
* This option specifies how many frames are available in this node's pool (can help if
* the receiver node is slow at consuming frames)
*/
std::uint32_t numFramesPool = 4;
/**
* Encoding profile, H264, H265 or MJPEG
*/
Profile profile = Profile::H265_MAIN;
/**
* Value between 0-100% (approximates quality)
*/
std::int32_t quality = 80;
/**
* Lossless mode ([M]JPEG only)
*/
bool lossless = false;
/**
* Rate control mode specifies if constant or variable bitrate should be used (H264 / H265)
*/
RateControlMode rateCtrlMode = RateControlMode::CBR;
/**
* Input and compressed output frame width
*/
std::int32_t width = 1920;
/**
* Input and compressed output frame height
*/
std::int32_t height = 1080;
/**
* Frame rate
*/
float frameRate = 30.0f;
};
G_API_OP(GEncFrame, <GArray<uint8_t>(GFrame, EncoderConfig)>, "org.opencv.oak.enc_frame") {
static GArrayDesc outMeta(const GFrameDesc&, const EncoderConfig&) {
return cv::empty_array_desc();
}
};
G_API_OP(GSobelXY, <GFrame(GFrame, const cv::Mat&, const cv::Mat&)>, "org.opencv.oak.sobelxy") {
static GFrameDesc outMeta(const GFrameDesc& in, const cv::Mat&, const cv::Mat&) {
return in;
}
};
G_API_OP(GCopy, <GFrame(GFrame)>, "org.opencv.oak.copy") {
static GFrameDesc outMeta(const GFrameDesc& in) {
return in;
}
};
// FIXME: add documentation on operations below
GAPI_EXPORTS GArray<uint8_t> encode(const GFrame& in, const EncoderConfig&);
GAPI_EXPORTS GFrame sobelXY(const GFrame& in,
const cv::Mat& hk,
const cv::Mat& vk);
GAPI_EXPORTS GFrame copy(const GFrame& in);
// OAK backend & kernels ////////////////////////////////////////////////////////
GAPI_EXPORTS cv::gapi::GBackend backend();
GAPI_EXPORTS cv::gapi::GKernelPackage kernels();
// Camera object ///////////////////////////////////////////////////////////////
struct GAPI_EXPORTS ColorCameraParams {
/**
* Format of the frame one gets from the camera
*/
bool interleaved = false;
// FIXME: extend
enum class BoardSocket: int { RGB, BGR };
BoardSocket board_socket = BoardSocket::RGB;
// FIXME: extend
enum class Resolution: int { THE_1080_P };
Resolution resolution = Resolution::THE_1080_P;
};
class GAPI_EXPORTS ColorCamera: public cv::gapi::wip::IStreamSource {
cv::MediaFrame m_dummy;
ColorCameraParams m_params;
virtual bool pull(cv::gapi::wip::Data &data) override;
virtual GMetaArg descr_of() const override;
public:
ColorCamera();
explicit ColorCamera(const ColorCameraParams& params);
};
} // namespace oak
} // namespace gapi
namespace detail {
template<> struct CompileArgTag<gapi::oak::ColorCameraParams> {
static const char* tag() { return "gapi.oak.colorCameraParams"; }
};
template<> struct CompileArgTag<gapi::oak::EncoderConfig> {
static const char* tag() { return "gapi.oak.encoderConfig"; }
};
} // namespace detail
} // namespace cv
#endif // OPENCV_GAPI_OAK_HPP
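
A streaming sketch built from the camera source and encoder declared above. The H.264 settings are illustrative, and cv::gapi::wip::make_src is assumed from the G-API streaming source API:

cv::gapi::oak::EncoderConfig cfg;
cfg.profile = cv::gapi::oak::EncoderConfig::Profile::H264_MAIN;
cfg.bitrate = 4000;   // kb

cv::GFrame in;
cv::GArray<uint8_t> encoded = cv::gapi::oak::encode(in, cfg);

auto pipeline = cv::GComputation(cv::GIn(in), cv::GOut(encoded))
    .compileStreaming(cv::compile_args(cv::gapi::oak::ColorCameraParams{},
                                       cv::gapi::oak::kernels()));
pipeline.setSource(cv::gapi::wip::make_src<cv::gapi::oak::ColorCamera>());
pipeline.start();
// Encoded chunks are then pulled from the pipeline in a loop.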

View File

@ -1,27 +0,0 @@
// This file is part of OpenCV project.
// It is subject to the license terms in the LICENSE file found in the top-level directory
// of this distribution and at http://opencv.org/license.html.
//
// Copyright (C) 2018 Intel Corporation
#ifndef OPENCV_GAPI_OCL_CORE_API_HPP
#define OPENCV_GAPI_OCL_CORE_API_HPP
#include <opencv2/core/cvdef.h> // GAPI_EXPORTS
#include <opencv2/gapi/gkernel.hpp> // GKernelPackage
namespace cv {
namespace gapi {
namespace core {
namespace ocl {
GAPI_EXPORTS_W cv::GKernelPackage kernels();
} // namespace ocl
} // namespace core
} // namespace gapi
} // namespace cv
#endif // OPENCV_GAPI_OCL_CORE_API_HPP
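
A sketch of how the OpenCL kernel package above is plugged into a graph; the graph and data names are illustrative:

cv::GMat a, b;
cv::GMat sum = cv::gapi::add(a, b);
cv::GComputation graph(cv::GIn(a, b), cv::GOut(sum));

cv::Mat m1 = cv::Mat::eye(64, 64, CV_8UC1);
cv::Mat m2 = cv::Mat::ones(64, 64, CV_8UC1);
cv::Mat out;
graph.apply(cv::gin(m1, m2), cv::gout(out),
            cv::compile_args(cv::gapi::core::ocl::kernels()));   // run core ops via T-API (OpenCL)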

View File

@ -1,260 +0,0 @@
// This file is part of OpenCV project.
// It is subject to the license terms in the LICENSE file found in the top-level directory
// of this distribution and at http://opencv.org/license.html.
//
// Copyright (C) 2018-2020 Intel Corporation
#ifndef OPENCV_GAPI_GOCLKERNEL_HPP
#define OPENCV_GAPI_GOCLKERNEL_HPP
#include <vector>
#include <functional>
#include <map>
#include <unordered_map>
#include <opencv2/core/mat.hpp>
#include <opencv2/gapi/gcommon.hpp>
#include <opencv2/gapi/gkernel.hpp>
#include <opencv2/gapi/garg.hpp>
// FIXME: namespace scheme for backends?
namespace cv {
namespace gimpl
{
// Forward-declare an internal class
class GOCLExecutable;
} // namespace gimpl
namespace gapi
{
/**
* @brief This namespace contains G-API OpenCL backend functions, structures, and symbols.
*/
namespace ocl
{
/**
* \addtogroup gapi_std_backends G-API Standard Backends
* @{
*/
/**
* @brief Get a reference to OCL backend.
*
* At the moment, the OCL backend is built atop of OpenCV
* "Transparent API" (T-API), see cv::UMat for details.
*
* @sa gapi_std_backends
*/
GAPI_EXPORTS cv::gapi::GBackend backend();
/** @} */
} // namespace ocl
} // namespace gapi
// Represents arguments which are passed to a wrapped OCL function
// FIXME: put into detail?
class GAPI_EXPORTS GOCLContext
{
public:
// Generic accessor API
template<typename T>
const T& inArg(int input) { return m_args.at(input).get<T>(); }
// Syntax sugar
const cv::UMat& inMat(int input);
cv::UMat& outMatR(int output); // FIXME: Avoid cv::Mat m = ctx.outMatR()
const cv::Scalar& inVal(int input);
cv::Scalar& outValR(int output); // FIXME: Avoid cv::Scalar s = ctx.outValR()
template<typename T> std::vector<T>& outVecR(int output) // FIXME: the same issue
{
return outVecRef(output).wref<T>();
}
template<typename T> T& outOpaqueR(int output) // FIXME: the same issue
{
return outOpaqueRef(output).wref<T>();
}
protected:
detail::VectorRef& outVecRef(int output);
detail::OpaqueRef& outOpaqueRef(int output);
std::vector<GArg> m_args;
std::unordered_map<std::size_t, GRunArgP> m_results;
friend class gimpl::GOCLExecutable;
};
class GAPI_EXPORTS GOCLKernel
{
public:
// This function is kernel's execution entry point (does the processing work)
using F = std::function<void(GOCLContext &)>;
GOCLKernel();
explicit GOCLKernel(const F& f);
void apply(GOCLContext &ctx);
protected:
F m_f;
};
// FIXME: This is an ugly ad-hoc implementation. TODO: refactor
namespace detail
{
template<class T> struct ocl_get_in;
template<> struct ocl_get_in<cv::GMat>
{
static cv::UMat get(GOCLContext &ctx, int idx) { return ctx.inMat(idx); }
};
template<> struct ocl_get_in<cv::GScalar>
{
static cv::Scalar get(GOCLContext &ctx, int idx) { return ctx.inVal(idx); }
};
template<typename U> struct ocl_get_in<cv::GArray<U> >
{
static const std::vector<U>& get(GOCLContext &ctx, int idx) { return ctx.inArg<VectorRef>(idx).rref<U>(); }
};
template<> struct ocl_get_in<cv::GFrame>
{
static cv::MediaFrame get(GOCLContext &ctx, int idx) { return ctx.inArg<cv::MediaFrame>(idx); }
};
template<typename U> struct ocl_get_in<cv::GOpaque<U> >
{
static const U& get(GOCLContext &ctx, int idx) { return ctx.inArg<OpaqueRef>(idx).rref<U>(); }
};
template<class T> struct ocl_get_in
{
static T get(GOCLContext &ctx, int idx) { return ctx.inArg<T>(idx); }
};
struct tracked_cv_umat{
//TODO Think if T-API could reallocate UMat to a proper size - how do we handle this?
//tracked_cv_umat(cv::UMat& m) : r{(m)}, original_data{m.getMat(ACCESS_RW).data} {}
tracked_cv_umat(cv::UMat& m) : r(m), original_data{ nullptr } {}
cv::UMat &r; // FIXME: It was a value (not a reference) before.
// Actually OCL backend should allocate its internal data!
uchar* original_data;
operator cv::UMat& (){ return r;}
void validate() const{
//if (r.getMat(ACCESS_RW).data != original_data)
//{
// util::throw_error
// (std::logic_error
// ("OpenCV kernel output parameter was reallocated. \n"
// "Incorrect meta data was provided ?"));
//}
}
};
#ifdef _MSC_VER
#pragma warning(push)
#pragma warning(disable: 4702) // unreachable code
#endif
template<typename... Outputs>
void postprocess_ocl(Outputs&... outs)
{
struct
{
void operator()(tracked_cv_umat* bm) { bm->validate(); }
void operator()(...) { }
} validate;
//dummy array to unfold parameter pack
int dummy[] = { 0, (validate(&outs), 0)... };
cv::util::suppress_unused_warning(dummy);
}
#ifdef _MSC_VER
#pragma warning(pop)
#endif
template<class T> struct ocl_get_out;
template<> struct ocl_get_out<cv::GMat>
{
static tracked_cv_umat get(GOCLContext &ctx, int idx)
{
auto& r = ctx.outMatR(idx);
return{ r };
}
};
template<> struct ocl_get_out<cv::GScalar>
{
static cv::Scalar& get(GOCLContext &ctx, int idx)
{
return ctx.outValR(idx);
}
};
template<typename U> struct ocl_get_out<cv::GArray<U> >
{
static std::vector<U>& get(GOCLContext &ctx, int idx) { return ctx.outVecR<U>(idx); }
};
template<typename U> struct ocl_get_out<cv::GOpaque<U> >
{
static U& get(GOCLContext &ctx, int idx) { return ctx.outOpaqueR<U>(idx); }
};
template<typename, typename, typename>
struct OCLCallHelper;
// FIXME: probably can be simplified with std::apply or analogue.
template<typename Impl, typename... Ins, typename... Outs>
struct OCLCallHelper<Impl, std::tuple<Ins...>, std::tuple<Outs...> >
{
template<typename... Inputs>
struct call_and_postprocess
{
template<typename... Outputs>
static void call(Inputs&&... ins, Outputs&&... outs)
{
//not using a std::forward on outs is deliberate in order to
//cause compilation error, by trying to bind rvalue references to lvalue references
Impl::run(std::forward<Inputs>(ins)..., outs...);
postprocess_ocl(outs...);
}
};
template<int... IIs, int... OIs>
static void call_impl(GOCLContext &ctx, detail::Seq<IIs...>, detail::Seq<OIs...>)
{
//TODO: Make sure that OpenCV kernels do not reallocate memory for output parameters
//by comparing its state (data ptr) before and after the call.
//Convert own::Scalar to cv::Scalar before calling the kernel, run the kernel,
//then convert cv::Scalar back to own::Scalar and write back the results.
call_and_postprocess<decltype(ocl_get_in<Ins>::get(ctx, IIs))...>::call(ocl_get_in<Ins>::get(ctx, IIs)..., ocl_get_out<Outs>::get(ctx, OIs)...);
}
static void call(GOCLContext &ctx)
{
call_impl(ctx,
typename detail::MkSeq<sizeof...(Ins)>::type(),
typename detail::MkSeq<sizeof...(Outs)>::type());
}
};
} // namespace detail
template<class Impl, class K>
class GOCLKernelImpl: public cv::detail::OCLCallHelper<Impl, typename K::InArgs, typename K::OutArgs>,
public cv::detail::KernelTag
{
using P = detail::OCLCallHelper<Impl, typename K::InArgs, typename K::OutArgs>;
public:
using API = K;
static cv::gapi::GBackend backend() { return cv::gapi::ocl::backend(); }
static cv::GOCLKernel kernel() { return GOCLKernel(&P::call); }
};
#define GAPI_OCL_KERNEL(Name, API) struct Name: public cv::GOCLKernelImpl<Name, API>
} // namespace cv
#endif // OPENCV_GAPI_GOCLKERNEL_HPP
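
A minimal sketch of defining a custom kernel for this backend with GAPI_OCL_KERNEL;
GMyThreshold is a hypothetical operation introduced only for the example, and the run()
body is plain T-API code on cv::UMat:

    #include <opencv2/gapi.hpp>
    #include <opencv2/gapi/ocl/goclkernel.hpp>
    #include <opencv2/imgproc.hpp>

    G_TYPED_KERNEL(GMyThreshold, <cv::GMat(cv::GMat)>, "sample.custom.my_threshold") {
        static cv::GMatDesc outMeta(cv::GMatDesc in) { return in; }
    };

    GAPI_OCL_KERNEL(GOCLMyThreshold, GMyThreshold) {
        static void run(const cv::UMat &in, cv::UMat &out) {
            cv::threshold(in, out, 127., 255., cv::THRESH_BINARY);   // executes via T-API
        }
    };

    // Injected into a graph at compile time:
    //   auto pkg = cv::gapi::kernels<GOCLMyThreshold>();
    //   auto cc  = graph.compile(cv::descr_of(input), cv::compile_args(pkg));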


@ -1,27 +0,0 @@
// This file is part of OpenCV project.
// It is subject to the license terms in the LICENSE file found in the top-level directory
// of this distribution and at http://opencv.org/license.html.
//
// Copyright (C) 2018 Intel Corporation
#ifndef OPENCV_GAPI_OCL_IMGPROC_API_HPP
#define OPENCV_GAPI_OCL_IMGPROC_API_HPP
#include <opencv2/core/cvdef.h> // GAPI_EXPORTS
#include <opencv2/gapi/gkernel.hpp> // GKernelPackage
namespace cv {
namespace gapi {
namespace imgproc {
namespace ocl {
GAPI_EXPORTS GKernelPackage kernels();
} // namespace ocl
} // namespace imgproc
} // namespace gapi
} // namespace cv
#endif // OPENCV_GAPI_OCL_IMGPROC_API_HPP
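
A short sketch of how the OCL core and imgproc kernel packages declared above are typically
selected, assuming an existing single-input GComputation `graph` and a cv::Mat `input`:

    auto ocl_pkg = cv::gapi::combine(cv::gapi::core::ocl::kernels(),
                                     cv::gapi::imgproc::ocl::kernels());
    auto cc = graph.compile(cv::descr_of(input), cv::compile_args(ocl_pkg));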


@ -1,42 +0,0 @@
// This file is part of OpenCV project.
// It is subject to the license terms in the LICENSE file found in the top-level directory
// of this distribution and at http://opencv.org/license.html.
//
// Copyright (C) 2018 Intel Corporation
#ifndef OPENCV_GAPI_OPENCV_INCLUDES_HPP
#define OPENCV_GAPI_OPENCV_INCLUDES_HPP
#if !defined(GAPI_STANDALONE)
# include <opencv2/core/mat.hpp>
# include <opencv2/core/cvdef.h>
# include <opencv2/core/types.hpp>
# include <opencv2/core/base.hpp>
#define GAPI_OWN_TYPES_LIST cv::gapi::own::Rect, \
cv::gapi::own::Size, \
cv::gapi::own::Point, \
cv::gapi::own::Point2f, \
cv::gapi::own::Scalar, \
cv::gapi::own::Mat
#else // Without OpenCV
# include <opencv2/gapi/own/cvdefs.hpp>
# include <opencv2/gapi/own/types.hpp> // cv::gapi::own::Rect/Size/Point
# include <opencv2/gapi/own/scalar.hpp> // cv::gapi::own::Scalar
# include <opencv2/gapi/own/mat.hpp>
// replacement of cv's structures:
namespace cv {
using Rect = gapi::own::Rect;
using Size = gapi::own::Size;
using Point = gapi::own::Point;
using Point2f = gapi::own::Point2f;
using Point3f = gapi::own::Point3f;
using Scalar = gapi::own::Scalar;
using Mat = gapi::own::Mat;
} // namespace cv
#define GAPI_OWN_TYPES_LIST cv::gapi::own::VoidType
#endif // !defined(GAPI_STANDALONE)
#endif // OPENCV_GAPI_OPENCV_INCLUDES_HPP


@ -1,70 +0,0 @@
// This file is part of OpenCV project.
// It is subject to the license terms in the LICENSE file found in the top-level directory
// of this distribution and at http://opencv.org/license.html.
//
// Copyright (C) 2018 Intel Corporation
#ifndef OPENCV_GAPI_OPERATORS_HPP
#define OPENCV_GAPI_OPERATORS_HPP
#include <opencv2/gapi/gmat.hpp>
#include <opencv2/gapi/gscalar.hpp>
namespace cv
{
GAPI_EXPORTS cv::GMat operator+(const cv::GMat& lhs, const cv::GMat& rhs);
GAPI_EXPORTS cv::GMat operator+(const cv::GMat& lhs, const cv::GScalar& rhs);
GAPI_EXPORTS cv::GMat operator+(const cv::GScalar& lhs, const cv::GMat& rhs);
GAPI_EXPORTS cv::GMat operator-(const cv::GMat& lhs, const cv::GMat& rhs);
GAPI_EXPORTS cv::GMat operator-(const cv::GMat& lhs, const cv::GScalar& rhs);
GAPI_EXPORTS cv::GMat operator-(const cv::GScalar& lhs, const cv::GMat& rhs);
GAPI_EXPORTS cv::GMat operator*(const cv::GMat& lhs, float rhs);
GAPI_EXPORTS cv::GMat operator*(float lhs, const cv::GMat& rhs);
GAPI_EXPORTS cv::GMat operator*(const cv::GMat& lhs, const cv::GScalar& rhs);
GAPI_EXPORTS cv::GMat operator*(const cv::GScalar& lhs, const cv::GMat& rhs);
GAPI_EXPORTS cv::GMat operator/(const cv::GMat& lhs, const cv::GScalar& rhs);
GAPI_EXPORTS cv::GMat operator/(const cv::GScalar& lhs, const cv::GMat& rhs);
GAPI_EXPORTS cv::GMat operator/(const cv::GMat& lhs, const cv::GMat& rhs);
GAPI_EXPORTS cv::GMat operator&(const cv::GMat& lhs, const cv::GMat& rhs);
GAPI_EXPORTS cv::GMat operator|(const cv::GMat& lhs, const cv::GMat& rhs);
GAPI_EXPORTS cv::GMat operator^(const cv::GMat& lhs, const cv::GMat& rhs);
GAPI_EXPORTS cv::GMat operator~(const cv::GMat& lhs);
GAPI_EXPORTS cv::GMat operator&(const cv::GScalar& lhs, const cv::GMat& rhs);
GAPI_EXPORTS cv::GMat operator|(const cv::GScalar& lhs, const cv::GMat& rhs);
GAPI_EXPORTS cv::GMat operator^(const cv::GScalar& lhs, const cv::GMat& rhs);
GAPI_EXPORTS cv::GMat operator&(const cv::GMat& lhs, const cv::GScalar& rhs);
GAPI_EXPORTS cv::GMat operator|(const cv::GMat& lhs, const cv::GScalar& rhs);
GAPI_EXPORTS cv::GMat operator^(const cv::GMat& lhs, const cv::GScalar& rhs);
GAPI_EXPORTS cv::GMat operator>(const cv::GMat& lhs, const cv::GMat& rhs);
GAPI_EXPORTS cv::GMat operator>=(const cv::GMat& lhs, const cv::GMat& rhs);
GAPI_EXPORTS cv::GMat operator<(const cv::GMat& lhs, const cv::GMat& rhs);
GAPI_EXPORTS cv::GMat operator<=(const cv::GMat& lhs, const cv::GMat& rhs);
GAPI_EXPORTS cv::GMat operator==(const cv::GMat& lhs, const cv::GMat& rhs);
GAPI_EXPORTS cv::GMat operator!=(const cv::GMat& lhs, const cv::GMat& rhs);
GAPI_EXPORTS cv::GMat operator>(const cv::GMat& lhs, const cv::GScalar& rhs);
GAPI_EXPORTS cv::GMat operator>=(const cv::GMat& lhs, const cv::GScalar& rhs);
GAPI_EXPORTS cv::GMat operator<(const cv::GMat& lhs, const cv::GScalar& rhs);
GAPI_EXPORTS cv::GMat operator<=(const cv::GMat& lhs, const cv::GScalar& rhs);
GAPI_EXPORTS cv::GMat operator==(const cv::GMat& lhs, const cv::GScalar& rhs);
GAPI_EXPORTS cv::GMat operator!=(const cv::GMat& lhs, const cv::GScalar& rhs);
GAPI_EXPORTS cv::GMat operator>(const cv::GScalar& lhs, const cv::GMat& rhs);
GAPI_EXPORTS cv::GMat operator>=(const cv::GScalar& lhs, const cv::GMat& rhs);
GAPI_EXPORTS cv::GMat operator<(const cv::GScalar& lhs, const cv::GMat& rhs);
GAPI_EXPORTS cv::GMat operator<=(const cv::GScalar& lhs, const cv::GMat& rhs);
GAPI_EXPORTS cv::GMat operator==(const cv::GScalar& lhs, const cv::GMat& rhs);
GAPI_EXPORTS cv::GMat operator!=(const cv::GScalar& lhs, const cv::GMat& rhs);
} // cv
#endif // OPENCV_GAPI_OPERATORS_HPP
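
A minimal sketch of what these overloads enable: writing graph expressions arithmetically
on cv::GMat placeholders (nothing is computed until the graph runs):

    cv::GMat a, b;
    cv::GMat sum  = a + b;                                  // element-wise add
    cv::GMat mask = (a > b) | (b > cv::GScalar(128));       // comparisons produce CV_8U masks
    cv::GComputation cc(cv::GIn(a, b), cv::GOut(sum, mask));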


@ -1,194 +0,0 @@
// This file is part of OpenCV project.
// It is subject to the license terms in the LICENSE file found in the top-level directory
// of this distribution and at http://opencv.org/license.html.
//
// Copyright (C) 2023 Intel Corporation
#ifndef OPENCV_GAPI_OT_HPP
#define OPENCV_GAPI_OT_HPP
#include <opencv2/gapi.hpp>
#include <opencv2/gapi/s11n.hpp>
#include <opencv2/gapi/gkernel.hpp>
namespace cv {
namespace gapi {
/**
* @brief This namespace contains G-API Operation Types for
* VAS Object Tracking module functionality.
*/
namespace ot {
/**
* @enum TrackingStatus
*
* Tracking status twin for vas::ot::TrackingStatus
*/
enum TrackingStatus
{
NEW = 0, /**< The object is newly added. */
TRACKED, /**< The object is being tracked. */
LOST /**< The object is lost now. It can be tracked again
by specifying the detected object manually. */
};
struct GAPI_EXPORTS_W_SIMPLE ObjectTrackerParams
{
/**
* Maximum number of trackable objects in a frame.
* Valid range: 1 <= max_num_objects, or -1 for no limit on the maximum number of objects
* on x86. KMB/TBH platforms are limited to 1024 objects at most.
* The default value is -1, which means no limit on x86; on KMB/TBH, -1 means 200.
*/
GAPI_PROP_RW int32_t max_num_objects = -1;
/**
* Input color format. Supports 0(BGR), 1(NV12), 2(BGRX) and 4(I420)
*/
GAPI_PROP_RW int32_t input_image_format = 0;
/**
* Specifies whether the tracker uses the detection class for keeping the id of an object.
* If true, a new detection is associated with a previously tracked object only when the two
* have the same class, so the class id of an object stays fixed across video frames.
* If false, a new detection can be associated with a tracked object of a different class;
* in this case, the class id of an object may change across video frames depending on the tracker input.
* It is recommended to turn this option off when the detector is likely to confuse the class of an object,
* for example bicycle vs. motorbike. Turning this option off increases
* tracking reliability, as the tracker then ignores the detector's class label.
* @n
* Default value is true.
*/
GAPI_PROP_RW bool tracking_per_class = true;
bool operator==(const ObjectTrackerParams& other) const
{
return max_num_objects == other.max_num_objects
&& input_image_format == other.input_image_format
&& tracking_per_class == other.tracking_per_class;
}
};
using GTrackedInfo = std::tuple<cv::GArray<cv::Rect>, cv::GArray<int32_t>, cv::GArray<uint64_t>, cv::GArray<int>>;
G_API_OP(GTrackFromMat, <GTrackedInfo(cv::GMat, cv::GArray<cv::Rect>, cv::GArray<int32_t>, float)>, "com.intel.track_from_mat")
{
static std::tuple<cv::GArrayDesc, cv::GArrayDesc,
cv::GArrayDesc, cv::GArrayDesc> outMeta(cv::GMatDesc, cv::GArrayDesc, cv::GArrayDesc, float)
{
return std::make_tuple(cv::empty_array_desc(), cv::empty_array_desc(),
cv::empty_array_desc(), cv::empty_array_desc());
}
};
G_API_OP(GTrackFromFrame, <GTrackedInfo(cv::GFrame, cv::GArray<cv::Rect>, cv::GArray<int32_t>, float)>, "com.intel.track_from_frame")
{
static std::tuple<cv::GArrayDesc, cv::GArrayDesc,
cv::GArrayDesc, cv::GArrayDesc> outMeta(cv::GFrameDesc, cv::GArrayDesc, cv::GArrayDesc, float)
{
return std::make_tuple(cv::empty_array_desc(), cv::empty_array_desc(),
cv::empty_array_desc(), cv::empty_array_desc());
}
};
/**
* @brief Tracks objects with video frames.
* If a detected object overlaps enough with one of the tracked objects, the tracked object's
* information is updated with the input detected object.
* On the other hand, if a detected object overlaps with none of the tracked objects,
* the detected object is newly added and ObjectTracker starts to track it.
* In the zero-term tracking type, ObjectTracker clears the tracked objects when an empty
* list of detected objects is passed in.
*
* @param mat Input frame.
* @param detected_rects Detected objects rectangles in the input frame.
* @param detected_class_labels Detected objects class labels in the input frame.
* @param delta Delta time between two consecutive tracking calls, in seconds.
* The valid range is [0.005 ~ 0.5].
* @return Tracking results of target objects.
* cv::GArray<cv::Rect> Array of rectangles for tracked objects.
* cv::GArray<int32_t> Array of detected objects labels.
* cv::GArray<uint64_t> Array of tracking IDs for objects.
* Numbering sequence starts from 1.
* The value 0 means the tracking ID of this object has
* not been assigned.
* cv::GArray<int> Array of tracking statuses for objects.
*/
GAPI_EXPORTS_W std::tuple<cv::GArray<cv::Rect>,
cv::GArray<int>,
cv::GArray<uint64_t>,
cv::GArray<int>>
track(const cv::GMat& mat,
const cv::GArray<cv::Rect>& detected_rects,
const cv::GArray<int>& detected_class_labels,
float delta);
/**
@overload
* @brief Tracks objects with video frames. Overload of track(...) for frame as GFrame.
*
* @param frame Input frame.
* @param detected_rects Detected objects rectangles in the input frame.
* @param detected_class_labels Detected objects class labels in the input frame.
* @param delta Delta time between two consecutive tracking calls, in seconds.
* The valid range is [0.005 ~ 0.5].
* @return Tracking results of target objects.
* cv::GArray<cv::Rect> Array of rectangles for tracked objects.
* cv::GArray<int32_t> Array of detected objects labels.
* cv::GArray<uint64_t> Array of tracking IDs for objects.
* Numbering sequence starts from 1.
* The value 0 means the tracking ID of this object has
* not been assigned.
* cv::GArray<int> Array of tracking statuses for objects.
*/
GAPI_EXPORTS_W std::tuple<cv::GArray<cv::Rect>,
cv::GArray<int>,
cv::GArray<uint64_t>,
cv::GArray<int>>
track(const cv::GFrame& frame,
const cv::GArray<cv::Rect>& detected_rects,
const cv::GArray<int>& detected_class_labels,
float delta);
} // namespace ot
} // namespace gapi
} // namespace cv
// FIXME: move to a separate file?
namespace cv
{
namespace detail
{
template<> struct CompileArgTag<cv::gapi::ot::ObjectTrackerParams>
{
static const char* tag()
{
return "cv.gapi.ot.object_tracker_params";
}
};
} // namespace detail
namespace gapi
{
namespace s11n
{
namespace detail
{
template<> struct S11N<cv::gapi::ot::ObjectTrackerParams> {
static void serialize(IOStream &os, const cv::gapi::ot::ObjectTrackerParams &p) {
os << p.max_num_objects << p.input_image_format << p.tracking_per_class;
}
static cv::gapi::ot::ObjectTrackerParams deserialize(IIStream &is) {
cv::gapi::ot::ObjectTrackerParams p;
is >> p.max_num_objects >> p.input_image_format >> p.tracking_per_class;
return p;
}
};
} // namespace detail
} // namespace s11n
} // namespace gapi
} // namespace cv
#endif // OPENCV_GAPI_OT_HPP
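
A minimal sketch of wiring track() into a graph; the detections would normally come from an
upstream detector, and the 0.033f delta assumes roughly 30 FPS input:

    cv::GMat frame;
    cv::GArray<cv::Rect> det_rects;
    cv::GArray<int>      det_labels;

    cv::GArray<cv::Rect> trk_rects;
    cv::GArray<int>      trk_labels;
    cv::GArray<uint64_t> trk_ids;
    cv::GArray<int>      trk_statuses;
    std::tie(trk_rects, trk_labels, trk_ids, trk_statuses) =
        cv::gapi::ot::track(frame, det_rects, det_labels, 0.033f);

    cv::GComputation graph(cv::GIn(frame, det_rects, det_labels),
                           cv::GOut(trk_rects, trk_labels, trk_ids, trk_statuses));
    // ObjectTrackerParams travels as a compile argument, e.g.:
    //   graph.compile(..., cv::compile_args(cv::gapi::ot::ObjectTrackerParams{}));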


@ -1,60 +0,0 @@
// This file is part of OpenCV project.
// It is subject to the license terms in the LICENSE file found in the top-level directory
// of this distribution and at http://opencv.org/license.html.
//
// Copyright (C) 2018-2020 Intel Corporation
#ifndef OPENCV_GAPI_OWN_ASSERT_HPP
#define OPENCV_GAPI_OWN_ASSERT_HPP
#include <opencv2/gapi/util/compiler_hints.hpp>
#define GAPI_DbgAssertNoOp(expr) { \
constexpr bool _assert_tmp = false && (expr); \
cv::util::suppress_unused_warning(_assert_tmp); \
}
#if !defined(GAPI_STANDALONE)
#include <opencv2/core/base.hpp>
#define GAPI_Assert CV_Assert
#if defined _DEBUG || defined CV_STATIC_ANALYSIS
# define GAPI_DbgAssert CV_DbgAssert
#else
# define GAPI_DbgAssert(expr) GAPI_DbgAssertNoOp(expr)
#endif
#define GAPI_Error(msg) CV_Error(cv::Error::StsError, msg)
#else
#include <stdexcept>
#include <sstream>
#include <opencv2/gapi/util/throw.hpp>
namespace detail
{
[[noreturn]] inline void assert_abort(const char* str, int line, const char* file, const char* func)
{
std::stringstream ss;
ss << file << ":" << line << ": Assertion " << str << " in function " << func << " failed\n";
cv::util::throw_error(std::logic_error(ss.str()));
}
}
#define GAPI_Assert(expr) \
{ if (!(expr)) ::detail::assert_abort(#expr, __LINE__, __FILE__, __func__); }
#ifdef NDEBUG
# define GAPI_DbgAssert(expr) GAPI_DbgAssertNoOp(expr)
#else
# define GAPI_DbgAssert(expr) GAPI_Assert(expr)
#endif
#define GAPI_Error(msg) { \
::detail::assert_abort(msg, __LINE__, __FILE__, __func__); \
}
#endif // GAPI_STANDALONE
#endif // OPENCV_GAPI_OWN_ASSERT_HPP
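
In a regular OpenCV build GAPI_Assert maps to CV_Assert and is always active, while
GAPI_DbgAssert compiles to a no-op outside debug builds; for example (with hypothetical
`roi`, `idx` and `len` variables):

    GAPI_Assert(roi.width > 0 && roi.height > 0);   // always checked
    GAPI_DbgAssert(idx < len);                      // checked in debug builds only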


@ -1,55 +0,0 @@
// This file is part of OpenCV project.
// It is subject to the license terms in the LICENSE file found in the top-level directory
// of this distribution and at http://opencv.org/license.html.
//
// Copyright (C) 2018 Intel Corporation
#ifndef OPENCV_GAPI_OWN_CONVERT_HPP
#define OPENCV_GAPI_OWN_CONVERT_HPP
#if !defined(GAPI_STANDALONE)
#include <opencv2/gapi/opencv_includes.hpp>
#include <opencv2/gapi/own/mat.hpp>
namespace cv
{
template<typename T>
std::vector<T> to_own(const cv::MatSize &sz) {
std::vector<T> result(sz.dims());
for (int i = 0; i < sz.dims(); i++) {
// Note: cv::MatSize is not iterable
result[i] = static_cast<T>(sz[i]);
}
return result;
}
cv::gapi::own::Mat to_own(Mat&&) = delete;
inline cv::gapi::own::Mat to_own(Mat const& m) {
return (m.dims <= 2)
? cv::gapi::own::Mat{m.rows, m.cols, m.type(), m.data, m.step}
: cv::gapi::own::Mat{to_own<int>(m.size), m.type(), m.data};
}
namespace gapi
{
namespace own
{
inline cv::Mat to_ocv(Mat const& m) {
return m.dims.empty()
? cv::Mat{m.rows, m.cols, m.type(), m.data, m.step}
: cv::Mat{m.dims, m.type(), m.data};
}
cv::Mat to_ocv(Mat&&) = delete;
} // namespace own
} // namespace gapi
} // namespace cv
#endif // !defined(GAPI_STANDALONE)
#endif // OPENCV_GAPI_OWN_CONVERT_HPP
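
Both conversions above only reinterpret the existing buffer (no pixel data is copied), and
the deleted rvalue overloads forbid wrapping temporaries; a minimal sketch:

    cv::Mat m = cv::Mat::zeros(4, 4, CV_8UC1);
    cv::gapi::own::Mat om   = cv::to_own(m);             // own::Mat view over m's data
    cv::Mat            back = cv::gapi::own::to_ocv(om); // cv::Mat view over the same data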


@ -1,166 +0,0 @@
// This file is part of OpenCV project.
// It is subject to the license terms in the LICENSE file found in the top-level directory
// of this distribution and at http://opencv.org/license.html.
//
// Copyright (C) 2018 Intel Corporation
#ifndef OPENCV_GAPI_CV_DEFS_HPP
#define OPENCV_GAPI_CV_DEFS_HPP
#if defined(GAPI_STANDALONE)
// Simulate OpenCV definitions taken from various
// OpenCV interface headers if G-API is built in a
// standalone mode.
// interface.h:
typedef unsigned char uchar;
typedef char schar;
typedef unsigned short ushort;
#define CV_USRTYPE1 (void)"CV_USRTYPE1 support has been dropped in OpenCV 4.0"
#define CV_CN_MAX 512
#define CV_CN_SHIFT 3
#define CV_DEPTH_MAX (1 << CV_CN_SHIFT)
#define CV_8U 0
#define CV_8S 1
#define CV_16U 2
#define CV_16S 3
#define CV_32S 4
#define CV_32F 5
#define CV_64F 6
#define CV_16F 7
#define CV_MAT_DEPTH_MASK (CV_DEPTH_MAX - 1)
#define CV_MAT_DEPTH(flags) ((flags) & CV_MAT_DEPTH_MASK)
#define CV_MAKETYPE(depth,cn) (CV_MAT_DEPTH(depth) + (((cn)-1) << CV_CN_SHIFT))
#define CV_MAKE_TYPE CV_MAKETYPE
#define CV_8UC1 CV_MAKETYPE(CV_8U,1)
#define CV_8UC2 CV_MAKETYPE(CV_8U,2)
#define CV_8UC3 CV_MAKETYPE(CV_8U,3)
#define CV_8UC4 CV_MAKETYPE(CV_8U,4)
#define CV_8UC(n) CV_MAKETYPE(CV_8U,(n))
#define CV_8SC1 CV_MAKETYPE(CV_8S,1)
#define CV_8SC2 CV_MAKETYPE(CV_8S,2)
#define CV_8SC3 CV_MAKETYPE(CV_8S,3)
#define CV_8SC4 CV_MAKETYPE(CV_8S,4)
#define CV_8SC(n) CV_MAKETYPE(CV_8S,(n))
#define CV_16UC1 CV_MAKETYPE(CV_16U,1)
#define CV_16UC2 CV_MAKETYPE(CV_16U,2)
#define CV_16UC3 CV_MAKETYPE(CV_16U,3)
#define CV_16UC4 CV_MAKETYPE(CV_16U,4)
#define CV_16UC(n) CV_MAKETYPE(CV_16U,(n))
#define CV_16SC1 CV_MAKETYPE(CV_16S,1)
#define CV_16SC2 CV_MAKETYPE(CV_16S,2)
#define CV_16SC3 CV_MAKETYPE(CV_16S,3)
#define CV_16SC4 CV_MAKETYPE(CV_16S,4)
#define CV_16SC(n) CV_MAKETYPE(CV_16S,(n))
#define CV_32SC1 CV_MAKETYPE(CV_32S,1)
#define CV_32SC2 CV_MAKETYPE(CV_32S,2)
#define CV_32SC3 CV_MAKETYPE(CV_32S,3)
#define CV_32SC4 CV_MAKETYPE(CV_32S,4)
#define CV_32SC(n) CV_MAKETYPE(CV_32S,(n))
#define CV_16FC1 CV_MAKETYPE(CV_16F,1)
#define CV_16FC2 CV_MAKETYPE(CV_16F,2)
#define CV_16FC3 CV_MAKETYPE(CV_16F,3)
#define CV_16FC4 CV_MAKETYPE(CV_16F,4)
#define CV_16FC(n) CV_MAKETYPE(CV_16F,(n))
#define CV_32FC1 CV_MAKETYPE(CV_32F,1)
#define CV_32FC2 CV_MAKETYPE(CV_32F,2)
#define CV_32FC3 CV_MAKETYPE(CV_32F,3)
#define CV_32FC4 CV_MAKETYPE(CV_32F,4)
#define CV_32FC(n) CV_MAKETYPE(CV_32F,(n))
#define CV_64FC1 CV_MAKETYPE(CV_64F,1)
#define CV_64FC2 CV_MAKETYPE(CV_64F,2)
#define CV_64FC3 CV_MAKETYPE(CV_64F,3)
#define CV_64FC4 CV_MAKETYPE(CV_64F,4)
#define CV_64FC(n) CV_MAKETYPE(CV_64F,(n))
// cvdef.h:
#ifndef CV_ALWAYS_INLINE
# if defined(__GNUC__) && (__GNUC__ > 3 || (__GNUC__ == 3 && __GNUC_MINOR__ >= 1))
# define CV_ALWAYS_INLINE inline __attribute__((always_inline))
# elif defined(_MSC_VER)
# define CV_ALWAYS_INLINE __forceinline
# else
# define CV_ALWAYS_INLINE inline
# endif
#endif
#define CV_MAT_CN_MASK ((CV_CN_MAX - 1) << CV_CN_SHIFT)
#define CV_MAT_CN(flags) ((((flags) & CV_MAT_CN_MASK) >> CV_CN_SHIFT) + 1)
#define CV_MAT_TYPE_MASK (CV_DEPTH_MAX*CV_CN_MAX - 1)
#define CV_MAT_TYPE(flags) ((flags) & CV_MAT_TYPE_MASK)
#define CV_MAT_CONT_FLAG_SHIFT 14
#define CV_MAT_CONT_FLAG (1 << CV_MAT_CONT_FLAG_SHIFT)
#define CV_IS_MAT_CONT(flags) ((flags) & CV_MAT_CONT_FLAG)
#define CV_IS_CONT_MAT CV_IS_MAT_CONT
#define CV_SUBMAT_FLAG_SHIFT 15
#define CV_SUBMAT_FLAG (1 << CV_SUBMAT_FLAG_SHIFT)
#define CV_IS_SUBMAT(flags) ((flags) & CV_MAT_SUBMAT_FLAG)
/** Size of each channel item,
    0x8442211 = 1000 0100 0100 0010 0010 0001 0001 ~ array of sizeof(arr_type_elem) */
#define CV_ELEM_SIZE1(type) \
((((sizeof(size_t)<<28)|0x8442211) >> CV_MAT_DEPTH(type)*4) & 15)
#define CV_MAT_TYPE(flags) ((flags) & CV_MAT_TYPE_MASK)
/** 0x3a50 = 11 10 10 01 01 00 00 ~ array of log2(sizeof(arr_type_elem)) */
#define CV_ELEM_SIZE(type) \
(CV_MAT_CN(type) << ((((sizeof(size_t)/4+1)*16384|0x3a50) >> CV_MAT_DEPTH(type)*2) & 3))
#ifndef CV_OVERRIDE
# define CV_OVERRIDE override
#endif
// base.h:
namespace cv
{
enum BorderTypes {
BORDER_CONSTANT = 0, //!< `iiiiii|abcdefgh|iiiiiii` with some specified `i`
BORDER_REPLICATE = 1, //!< `aaaaaa|abcdefgh|hhhhhhh`
BORDER_REFLECT = 2, //!< `fedcba|abcdefgh|hgfedcb`
BORDER_WRAP = 3, //!< `cdefgh|abcdefgh|abcdefg`
BORDER_REFLECT_101 = 4, //!< `gfedcb|abcdefgh|gfedcba`
BORDER_TRANSPARENT = 5, //!< `uvwxyz|abcdefgh|ijklmno`
BORDER_REFLECT101 = BORDER_REFLECT_101, //!< same as BORDER_REFLECT_101
BORDER_DEFAULT = BORDER_REFLECT_101, //!< same as BORDER_REFLECT_101
BORDER_ISOLATED = 16 //!< do not look outside of ROI
};
// imgproc.hpp:
enum InterpolationFlags{
INTER_NEAREST = 0,
INTER_LINEAR = 1,
INTER_CUBIC = 2,
INTER_AREA = 3,
INTER_LANCZOS4 = 4,
INTER_LINEAR_EXACT = 5,
INTER_MAX = 7,
};
} // namespace cv
static inline int cvFloor( double value )
{
int i = (int)value;
return i - (i > value);
}
#endif // defined(GAPI_STANDALONE)
#endif // OPENCV_GAPI_CV_DEFS_HPP
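
The packing scheme mirrors OpenCV's cvdef.h: the depth sits in the low CV_CN_SHIFT bits and
(channels - 1) above it, so for example:

    static_assert(CV_8UC3 == CV_MAKETYPE(CV_8U, 3), "depth 0 + ((3-1) << 3) == 16");
    static_assert(CV_MAT_DEPTH(CV_32FC2) == CV_32F, "depth is recovered from the low bits");
    static_assert(CV_MAT_CN(CV_32FC2)    == 2,      "channel count from the high bits");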


@ -1,42 +0,0 @@
// This file is part of OpenCV project.
// It is subject to the license terms in the LICENSE file found in the top-level directory
// of this distribution and at http://opencv.org/license.html.
//
// Copyright (C) 2018 Intel Corporation
#ifndef OPENCV_GAPI_OWN_TYPES_HPP
#define OPENCV_GAPI_OWN_TYPES_HPP
# if defined(__OPENCV_BUILD)
# include <opencv2/core/base.hpp>
# define GAPI_EXPORTS CV_EXPORTS
/* special informative macros for wrapper generators */
# define GAPI_PROP CV_PROP
# define GAPI_PROP_RW CV_PROP_RW
# define GAPI_WRAP CV_WRAP
# define GAPI_EXPORTS_W_SIMPLE CV_EXPORTS_W_SIMPLE
# define GAPI_EXPORTS_W CV_EXPORTS_W
# else
# define GAPI_PROP
# define GAPI_PROP_RW
# define GAPI_WRAP
# define GAPI_EXPORTS
# define GAPI_EXPORTS_W_SIMPLE
# define GAPI_EXPORTS_W
#if 0 // Note: the following version currently is not needed for non-OpenCV build
# if defined _WIN32
# define GAPI_EXPORTS __declspec(dllexport)
# elif defined __GNUC__ && __GNUC__ >= 4
# define GAPI_EXPORTS __attribute__ ((visibility ("default")))
# endif
# ifndef GAPI_EXPORTS
# define GAPI_EXPORTS
# endif
#endif
# endif
#endif // OPENCV_GAPI_OWN_TYPES_HPP


@ -1,354 +0,0 @@
// This file is part of OpenCV project.
// It is subject to the license terms in the LICENSE file found in the top-level directory
// of this distribution and at http://opencv.org/license.html.
//
// Copyright (C) 2018 Intel Corporation
#ifndef OPENCV_GAPI_OWN_MAT_HPP
#define OPENCV_GAPI_OWN_MAT_HPP
#include <opencv2/gapi/opencv_includes.hpp>
#include <opencv2/gapi/own/types.hpp>
#include <opencv2/gapi/own/scalar.hpp>
#include <opencv2/gapi/own/saturate.hpp>
#include <opencv2/gapi/own/assert.hpp>
#include <memory> //std::shared_ptr
#include <cstring> //std::memcpy
#include <numeric> //std::accumulate
#include <vector>
#include <opencv2/gapi/util/throw.hpp>
namespace cv { namespace gapi { namespace own {
namespace detail {
template <typename T, unsigned char channels>
void assign_row(void* ptr, int cols, Scalar const& s)
{
auto p = static_cast<T*>(ptr);
for (int c = 0; c < cols; c++)
{
for (int ch = 0; ch < channels; ch++)
{
p[c * channels + ch] = saturate<T>(s[ch], roundd);
}
}
}
inline size_t default_step(int type, int cols)
{
return CV_ELEM_SIZE(type) * cols;
}
//Matrix header, i.e. fields that are unique to each Mat object.
//A dedicated class is needed to implement custom behavior on move (erasing the state of the moved-from object)
struct MatHeader{
enum { AUTO_STEP = 0};
enum { TYPE_MASK = 0x00000FFF };
MatHeader() = default;
MatHeader(int _rows, int _cols, int type, void* _data, size_t _step)
: flags((type & TYPE_MASK)), rows(_rows), cols(_cols), data((uchar*)_data), step(_step == AUTO_STEP ? detail::default_step(type, _cols) : _step)
{}
MatHeader(const std::vector<int> &_dims, int type, void* _data)
: flags((type & TYPE_MASK)), data((uchar*)_data), step(0), dims(_dims)
{}
MatHeader(const MatHeader& ) = default;
MatHeader(MatHeader&& src) : MatHeader(src) // reuse copy constructor here
{
MatHeader empty; //give it a name to call copy(not move) assignment below
src = empty;
}
MatHeader& operator=(const MatHeader& ) = default;
MatHeader& operator=(MatHeader&& src)
{
*this = src; //calling a copy assignment here, not move one
MatHeader empty; //give it a name to call copy(not move) assignment below
src = empty;
return *this;
}
/*! includes several bit-fields:
- depth
- number of channels
*/
int flags = 0;
//! the number of rows and columns or (-1, -1) when the matrix has more than 2 dimensions
int rows = 0, cols = 0;
//! pointer to the data
uchar* data = nullptr;
size_t step = 0;
//! dimensions (ND-case)
std::vector<int> dims;
};
} // namespace detail
//concise version of cv::Mat suitable for GAPI needs (used when no dependence on OpenCV is required)
class Mat : public detail::MatHeader{
public:
Mat() = default;
/** @overload
@param _rows Number of rows in a 2D array.
@param _cols Number of columns in a 2D array.
@param _type Array type. Use CV_8UC1, ..., CV_64FC4 to create 1-4 channel matrices, or
CV_8UC(n), ..., CV_64FC(n) to create multi-channel (up to CV_CN_MAX channels) matrices.
@param _data Pointer to the user data. Matrix constructors that take data and step parameters do not
allocate matrix data. Instead, they just initialize the matrix header that points to the specified
data, which means that no data is copied. This operation is very efficient and can be used to
process external data using OpenCV functions. The external data is not automatically deallocated, so
you should take care of it.
@param _step Number of bytes each matrix row occupies. The value should include the padding bytes at
the end of each row, if any. If the parameter is missing (set to AUTO_STEP ), no padding is assumed
and the actual step is calculated as cols*elemSize(). See Mat::elemSize.
*/
Mat(int _rows, int _cols, int _type, void* _data, size_t _step = AUTO_STEP)
: MatHeader (_rows, _cols, _type, _data, _step)
{}
Mat(const std::vector<int> &_dims, int _type, void* _data)
: MatHeader (_dims, _type, _data)
{}
Mat(std::vector<int> &&_dims, int _type, void* _data)
: MatHeader (std::move(_dims), _type, _data)
{}
Mat(Mat const& src, const Rect& roi )
: Mat(src)
{
rows = roi.height;
cols = roi.width;
data = ptr(roi.y, roi.x);
}
Mat(Mat const& ) = default;
Mat(Mat&& ) = default;
Mat& operator=(Mat const& ) = default;
Mat& operator=(Mat&& ) = default;
/** @brief Sets all or some of the array elements to the specified value.
@param s Assigned scalar converted to the actual array type.
*/
Mat& operator = (const Scalar& s)
{
constexpr unsigned max_channels = 4; //Scalar can't fit more than 4
using func_p_t = void (*)(void*, int, Scalar const&);
using detail::assign_row;
#define TABLE_ENTRY(type) {assign_row<type, 1>, assign_row<type, 2>, assign_row<type, 3>, assign_row<type, 4>}
static constexpr func_p_t func_tbl[][max_channels] = {
TABLE_ENTRY(uchar),
TABLE_ENTRY(schar),
TABLE_ENTRY(ushort),
TABLE_ENTRY(short),
TABLE_ENTRY(int),
TABLE_ENTRY(float),
TABLE_ENTRY(double)
};
#undef TABLE_ENTRY
static_assert(CV_8U == 0 && CV_8S == 1 && CV_16U == 2 && CV_16S == 3
&& CV_32S == 4 && CV_32F == 5 && CV_64F == 6,
"OCV type ids used as indexes to array, thus exact numbers are important!"
);
const auto depth = static_cast<unsigned int>(this->depth());
GAPI_Assert(depth < sizeof(func_tbl)/sizeof(func_tbl[0]));
if (dims.empty())
{
const auto channels = static_cast<unsigned int>(this->channels());
GAPI_Assert(channels <= max_channels);
auto* f = func_tbl[depth][channels - 1];
for (int r = 0; r < rows; ++r)
{
(*f)(static_cast<void *>(ptr(r)), cols, s );
}
}
else
{
auto* f = func_tbl[depth][0];
// FIXME: better to refactor assign_row to use std::size_t by default
(*f)(static_cast<void *>(data), static_cast<int>(total()), s);
}
return *this;
}
/** @brief Returns the matrix element size in bytes.
The method returns the matrix element size in bytes. For example, if the matrix type is CV_16SC3 ,
the method returns 3\*sizeof(short) or 6.
*/
size_t elemSize() const
{
return CV_ELEM_SIZE(type());
}
/** @brief Returns the type of a matrix element.
The method returns a matrix element type. This is an identifier compatible with the CvMat type
system, like CV_16SC3 or 16-bit signed 3-channel array, and so on.
*/
int type() const {return CV_MAT_TYPE(flags);}
/** @brief Returns the depth of a matrix element.
The method returns the identifier of the matrix element depth (the type of each individual channel).
For example, for a 16-bit signed element array, the method returns CV_16S . A complete list of
matrix types contains the following values:
- CV_8U - 8-bit unsigned integers ( 0..255 )
- CV_8S - 8-bit signed integers ( -128..127 )
- CV_16U - 16-bit unsigned integers ( 0..65535 )
- CV_16S - 16-bit signed integers ( -32768..32767 )
- CV_32S - 32-bit signed integers ( -2147483648..2147483647 )
- CV_32F - 32-bit floating-point numbers ( -FLT_MAX..FLT_MAX, INF, NAN )
- CV_64F - 64-bit floating-point numbers ( -DBL_MAX..DBL_MAX, INF, NAN )
*/
int depth() const {return CV_MAT_DEPTH(flags);}
/** @brief Returns the number of matrix channels.
The method returns the number of matrix channels.
If matrix is N-dimensional, -1 is returned.
*/
int channels() const {return dims.empty() ? CV_MAT_CN(flags) : -1;}
/**
@param _rows New number of rows.
@param _cols New number of columns.
@param _type New matrix type.
*/
void create(int _rows, int _cols, int _type)
{
create(Size{_cols, _rows}, _type);
}
/** @overload
@param _size Alternative new matrix size specification: Size(cols, rows)
@param _type New matrix type.
*/
void create(Size _size, int _type)
{
GAPI_Assert(_size.height >= 0 && _size.width >= 0);
if (_size != Size{cols, rows} )
{
Mat tmp{_size.height, _size.width, _type, nullptr};
tmp.memory.reset(new uchar[ tmp.step * tmp.rows], [](uchar * p){delete[] p;});
tmp.data = tmp.memory.get();
*this = std::move(tmp);
}
}
void create(const std::vector<int> &_dims, int _type)
{
// FIXME: make a proper reallocation-on-demands
// WARNING: no tensor views, so no strides
Mat tmp{_dims, _type, nullptr};
// FIXME: this accumulate duplicates a lot
const auto sz = std::accumulate(_dims.begin(), _dims.end(), 1, std::multiplies<int>());
tmp.memory.reset(new uchar[CV_ELEM_SIZE(_type)*sz], [](uchar * p){delete[] p;});
tmp.data = tmp.memory.get();
*this = std::move(tmp);
}
/** @brief Creates a full copy of the matrix and the underlying data.
The method creates a full copy of the matrix. The original step[] is not taken into account.
So, the copy has a continuous buffer occupying total() * elemSize() bytes.
*/
Mat clone() const
{
Mat m;
copyTo(m);
return m;
}
/** @brief Copies the matrix to another one.
The method copies the matrix data to another matrix. Before copying the data, the method invokes :
@code
m.create(this->size(), this->type());
@endcode
so that the destination matrix is reallocated if needed. While m.copyTo(m); works flawlessly, the
function does not handle the case of a partial overlap between the source and the destination
matrices.
*/
void copyTo(Mat& dst) const
{
if (dims.empty())
{
dst.create(rows, cols, type());
for (int r = 0; r < rows; ++r)
{
std::copy_n(ptr(r), detail::default_step(type(),cols), dst.ptr(r));
}
}
else
{
dst.create(dims, depth());
std::copy_n(data, total()*elemSize(), dst.data);
}
}
/** @brief Returns true if the array has no elements.
The method returns true if Mat::total() is 0 or if Mat::data is NULL. Because of pop_back() and
resize() methods `M.total() == 0` does not imply that `M.data == NULL`.
*/
bool empty() const
{
return data == 0 || total() == 0;
}
/** @brief Returns the total number of array elements.
The method returns the number of array elements (a number of pixels if the array represents an
image).
*/
size_t total() const
{
return dims.empty()
? (static_cast<std::size_t>(rows) * cols)
: std::accumulate(dims.begin(), dims.end(), static_cast<std::size_t>(1), std::multiplies<size_t>());
}
/** @overload
@param roi Extracted submatrix specified as a rectangle.
*/
Mat operator()( const Rect& roi ) const
{
return Mat{*this, roi};
}
/** @brief Returns a pointer to the specified matrix row.
The methods return `uchar*` or typed pointer to the specified matrix row. See the sample in
Mat::isContinuous to know how to use these methods.
@param row Index along the dimension 0
@param col Index along the dimension 1
*/
uchar* ptr(int row, int col = 0)
{
return const_cast<uchar*>(const_cast<const Mat*>(this)->ptr(row,col));
}
/** @overload */
const uchar* ptr(int row, int col = 0) const
{
return data + step * row + CV_ELEM_SIZE(type()) * col;
}
private:
//actual memory allocated for storage, or nullptr if the object is a non-owning view over external memory
std::shared_ptr<uchar> memory;
};
} //namespace own
} //namespace gapi
} //namespace cv
#endif /* OPENCV_GAPI_OWN_MAT_HPP */
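
A minimal sketch of basic cv::gapi::own::Mat usage: allocation, scalar fill, a non-owning
ROI view and a deep copy:

    cv::gapi::own::Mat m;
    m.create(cv::gapi::own::Size{4, 3}, CV_8UC3);                  // 3 rows x 4 cols, 3 channels
    m = cv::gapi::own::Scalar{255, 0, 0};                          // fill every pixel
    cv::gapi::own::Mat roi  = m(cv::gapi::own::Rect{1, 1, 2, 2});  // view sharing m's memory
    cv::gapi::own::Mat deep = roi.clone();                         // contiguous deep copy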


@ -1,83 +0,0 @@
// This file is part of OpenCV project.
// It is subject to the license terms in the LICENSE file found in the top-level directory
// of this distribution and at http://opencv.org/license.html.
//
// Copyright (C) 2018 Intel Corporation
#ifndef OPENCV_GAPI_OWN_SATURATE_HPP
#define OPENCV_GAPI_OWN_SATURATE_HPP
#include <math.h>
#include <limits>
#include <opencv2/gapi/own/assert.hpp>
#include <opencv2/gapi/util/type_traits.hpp>
namespace cv { namespace gapi { namespace own {
//-----------------------------
//
// Numeric cast with saturation
//
//-----------------------------
template<typename DST, typename SRC,
typename = cv::util::enable_if_t<!std::is_same<DST, SRC>::value &&
std::is_integral<DST>::value &&
std::is_integral<SRC>::value> >
static CV_ALWAYS_INLINE DST saturate(SRC x)
{
if (sizeof(DST) > sizeof(SRC))
return static_cast<DST>(x);
// compiler must recognize this saturation,
// so compile saturate<s16>(a + b) with adds
// instruction (e.g.: _mm_adds_epi16 if x86)
return x < std::numeric_limits<DST>::min()?
std::numeric_limits<DST>::min():
x > std::numeric_limits<DST>::max()?
std::numeric_limits<DST>::max():
static_cast<DST>(x);
}
template<typename T>
static CV_ALWAYS_INLINE T saturate(T x)
{
return x;
}
template<typename DST, typename SRC, typename R,
cv::util::enable_if_t<std::is_floating_point<DST>::value, bool> = true >
static CV_ALWAYS_INLINE DST saturate(SRC x, R)
{
return static_cast<DST>(x);
}
template<typename DST, typename SRC, typename R,
cv::util::enable_if_t<std::is_integral<DST>::value &&
std::is_integral<SRC>::value , bool> = true >
static CV_ALWAYS_INLINE DST saturate(SRC x, R)
{
return saturate<DST>(x);
}
// Note, that OpenCV rounds differently:
// - like std::round() for add, subtract
// - like std::rint() for multiply, divide
template<typename DST, typename SRC, typename R,
cv::util::enable_if_t<std::is_integral<DST>::value &&
std::is_floating_point<SRC>::value, bool> = true >
static CV_ALWAYS_INLINE DST saturate(SRC x, R round)
{
int ix = static_cast<int>(round(x));
return saturate<DST>(ix);
}
// explicit suffix 'd' for double type
inline double ceild(double x) { return ceil(x); }
inline double floord(double x) { return floor(x); }
inline double roundd(double x) { return round(x); }
inline double rintd(double x) { return rint(x); }
} //namespace own
} //namespace gapi
} //namespace cv
#endif /* OPENCV_GAPI_OWN_SATURATE_HPP */
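
The two-argument overloads take the rounding policy explicitly, which is how own::Mat's
Scalar assignment uses them; for example:

    short a = cv::gapi::own::saturate<short>(100000);                      // clamps to 32767
    uchar b = cv::gapi::own::saturate<uchar>(-3.7, cv::gapi::own::roundd); // rounds to -4, clamps to 0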


@ -1,47 +0,0 @@
// This file is part of OpenCV project.
// It is subject to the license terms in the LICENSE file found in the top-level directory
// of this distribution and at http://opencv.org/license.html.
//
// Copyright (C) 2018 Intel Corporation
#ifndef OPENCV_GAPI_GAPI_OWN_SCALAR_HPP
#define OPENCV_GAPI_GAPI_OWN_SCALAR_HPP
#include <opencv2/gapi/own/exports.hpp>
namespace cv
{
namespace gapi
{
namespace own
{
class GAPI_EXPORTS Scalar
{
public:
Scalar() = default;
explicit Scalar(double v0) { val[0] = v0; }
Scalar(double v0, double v1, double v2 = 0, double v3 = 0)
: val{v0, v1, v2, v3}
{
}
const double& operator[](int i) const { return val[i]; }
double& operator[](int i) { return val[i]; }
static Scalar all(double v0) { return Scalar(v0, v0, v0, v0); }
double val[4] = {0};
};
inline bool operator==(const Scalar& lhs, const Scalar& rhs)
{
return std::equal(std::begin(lhs.val), std::end(lhs.val), std::begin(rhs.val));
}
} // namespace own
} // namespace gapi
} // namespace cv
#endif // OPENCV_GAPI_GAPI_OWN_SCALAR_HPP


@ -1,162 +0,0 @@
// This file is part of OpenCV project.
// It is subject to the license terms in the LICENSE file found in the top-level directory
// of this distribution and at http://opencv.org/license.html.
//
// Copyright (C) 2018 Intel Corporation
#ifndef OPENCV_GAPI_TYPES_HPP
#define OPENCV_GAPI_TYPES_HPP
#include <algorithm> // std::max, std::min
#include <ostream>
namespace cv
{
namespace gapi
{
/**
* @brief This namespace contains G-API own data structures used in
* its standalone mode build.
*/
namespace own
{
class Point
{
public:
Point() = default;
Point(int _x, int _y) : x(_x), y(_y) {}
int x = 0;
int y = 0;
};
class Point2f
{
public:
Point2f() = default;
Point2f(float _x, float _y) : x(_x), y(_y) {}
float x = 0.f;
float y = 0.f;
};
class Point3f
{
public:
Point3f() = default;
Point3f(float _x, float _y, float _z) : x(_x), y(_y), z(_z) {}
float x = 0.f;
float y = 0.f;
float z = 0.f;
};
class Rect
{
public:
Rect() = default;
Rect(int _x, int _y, int _width, int _height) : x(_x), y(_y), width(_width), height(_height) {}
#if !defined(GAPI_STANDALONE)
Rect(const cv::Rect& other) : x(other.x), y(other.y), width(other.width), height(other.height) {}
inline Rect& operator=(const cv::Rect& other)
{
x = other.x;
y = other.y;
width = other.width;
height = other.height;
return *this;
}
#endif // !defined(GAPI_STANDALONE)
int x = 0; //!< x coordinate of the top-left corner
int y = 0; //!< y coordinate of the top-left corner
int width = 0; //!< width of the rectangle
int height = 0; //!< height of the rectangle
};
inline bool operator==(const Rect& lhs, const Rect& rhs)
{
return lhs.x == rhs.x && lhs.y == rhs.y && lhs.width == rhs.width && lhs.height == rhs.height;
}
inline bool operator!=(const Rect& lhs, const Rect& rhs)
{
return !(lhs == rhs);
}
inline Rect& operator&=(Rect& lhs, const Rect& rhs)
{
int x1 = std::max(lhs.x, rhs.x);
int y1 = std::max(lhs.y, rhs.y);
lhs.width = std::min(lhs.x + lhs.width, rhs.x + rhs.width) - x1;
lhs.height = std::min(lhs.y + lhs.height, rhs.y + rhs.height) - y1;
lhs.x = x1;
lhs.y = y1;
if( lhs.width <= 0 || lhs.height <= 0 )
lhs = Rect();
return lhs;
}
inline Rect operator&(const Rect& lhs, const Rect& rhs)
{
Rect result = lhs;
return result &= rhs;
}
inline std::ostream& operator<<(std::ostream& o, const Rect& rect)
{
return o << "[" << rect.width << " x " << rect.height << " from (" << rect.x << ", " << rect.y << ")]";
}
class Size
{
public:
Size() = default;
Size(int _width, int _height) : width(_width), height(_height) {}
#if !defined(GAPI_STANDALONE)
Size(const cv::Size& other) : width(other.width), height(other.height) {}
inline Size& operator=(const cv::Size& rhs)
{
width = rhs.width;
height = rhs.height;
return *this;
}
#endif // !defined(GAPI_STANDALONE)
int width = 0;
int height = 0;
};
inline Size& operator+=(Size& lhs, const Size& rhs)
{
lhs.width += rhs.width;
lhs.height += rhs.height;
return lhs;
}
inline bool operator==(const Size& lhs, const Size& rhs)
{
return lhs.width == rhs.width && lhs.height == rhs.height;
}
inline bool operator!=(const Size& lhs, const Size& rhs)
{
return !(lhs == rhs);
}
inline std::ostream& operator<<(std::ostream& o, const Size& s)
{
o << "[" << s.width << " x " << s.height << "]";
return o;
}
struct VoidType {};
} // namespace own
} // namespace gapi
} // namespace cv
#endif // OPENCV_GAPI_TYPES_HPP
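
The Rect operators follow cv::Rect semantics; for example, intersection:

    cv::gapi::own::Rect a{0, 0, 4, 4};
    cv::gapi::own::Rect b{2, 2, 4, 4};
    cv::gapi::own::Rect c = a & b;   // {2, 2, 2, 2}; an empty Rect{} if they do not overlap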


@ -1,20 +0,0 @@
// This file is part of OpenCV project.
// It is subject to the license terms in the LICENSE file found in the top-level directory
// of this distribution and at http://opencv.org/license.html.
//
// Copyright (C) 2019 Intel Corporation
#ifndef OPENCV_GAPI_PLAIDML_CORE_HPP
#define OPENCV_GAPI_PLAIDML_CORE_HPP
#include <opencv2/gapi/gkernel.hpp> // GKernelPackage
#include <opencv2/gapi/own/exports.hpp> // GAPI_EXPORTS
namespace cv { namespace gapi { namespace core { namespace plaidml {
GAPI_EXPORTS cv::GKernelPackage kernels();
}}}}
#endif // OPENCV_GAPI_PLAIDML_CORE_HPP


@ -1,140 +0,0 @@
// This file is part of OpenCV project.
// It is subject to the license terms in the LICENSE file found in the top-level directory
// of this distribution and at http://opencv.org/license.html.
//
// Copyright (C) 2019 Intel Corporation
//
#ifndef OPENCV_GAPI_GPLAIDMLKERNEL_HPP
#define OPENCV_GAPI_GPLAIDMLKERNEL_HPP
#include <opencv2/gapi/gkernel.hpp>
#include <opencv2/gapi/garg.hpp>
namespace plaidml
{
namespace edsl
{
class Tensor;
} // namespace edsl
} // namespace plaidml
namespace cv
{
namespace gapi
{
namespace plaidml
{
GAPI_EXPORTS cv::gapi::GBackend backend();
} // namespace plaidml
} // namespace gapi
struct GPlaidMLContext
{
// Generic accessor API
template<typename T>
const T& inArg(int input) { return m_args.at(input).get<T>(); }
// Syntax sugar
const plaidml::edsl::Tensor& inTensor(int input)
{
return inArg<plaidml::edsl::Tensor>(input);
}
plaidml::edsl::Tensor& outTensor(int output)
{
return *(m_results.at(output).get<plaidml::edsl::Tensor*>());
}
std::vector<GArg> m_args;
std::unordered_map<std::size_t, GArg> m_results;
};
class GAPI_EXPORTS GPlaidMLKernel
{
public:
using F = std::function<void(GPlaidMLContext &)>;
GPlaidMLKernel() = default;
explicit GPlaidMLKernel(const F& f) : m_f(f) {}
void apply(GPlaidMLContext &ctx) const
{
GAPI_Assert(m_f);
m_f(ctx);
}
protected:
F m_f;
};
namespace detail
{
template<class T> struct plaidml_get_in;
template<> struct plaidml_get_in<cv::GMat>
{
static const plaidml::edsl::Tensor& get(GPlaidMLContext& ctx, int idx)
{
return ctx.inTensor(idx);
}
};
template<class T> struct plaidml_get_in
{
static T get(GPlaidMLContext &ctx, int idx) { return ctx.inArg<T>(idx); }
};
template<class T> struct plaidml_get_out;
template<> struct plaidml_get_out<cv::GMat>
{
static plaidml::edsl::Tensor& get(GPlaidMLContext& ctx, int idx)
{
return ctx.outTensor(idx);
}
};
template<typename, typename, typename>
struct PlaidMLCallHelper;
template<typename Impl, typename... Ins, typename... Outs>
struct PlaidMLCallHelper<Impl, std::tuple<Ins...>, std::tuple<Outs...> >
{
template<int... IIs, int... OIs>
static void call_impl(GPlaidMLContext &ctx, detail::Seq<IIs...>, detail::Seq<OIs...>)
{
Impl::run(plaidml_get_in<Ins>::get(ctx, IIs)..., plaidml_get_out<Outs>::get(ctx, OIs)...);
}
static void call(GPlaidMLContext& ctx)
{
call_impl(ctx,
typename detail::MkSeq<sizeof...(Ins)>::type(),
typename detail::MkSeq<sizeof...(Outs)>::type());
}
};
} // namespace detail
template<class Impl, class K>
class GPlaidMLKernelImpl: public cv::detail::PlaidMLCallHelper<Impl, typename K::InArgs, typename K::OutArgs>,
public cv::detail::KernelTag
{
using P = detail::PlaidMLCallHelper<Impl, typename K::InArgs, typename K::OutArgs>;
public:
using API = K;
static cv::gapi::GBackend backend() { return cv::gapi::plaidml::backend(); }
static cv::GPlaidMLKernel kernel() { return GPlaidMLKernel(&P::call); }
};
#define GAPI_PLAIDML_KERNEL(Name, API) struct Name: public cv::GPlaidMLKernelImpl<Name, API>
} // namespace cv
#endif // OPENCV_GAPI_GPLAIDMLKERNEL_HPP
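
Structurally a PlaidML kernel mirrors the OCL one above; a rough sketch with a hypothetical
operation GMyAdd, assuming PlaidML's EDSL operator overloads for element-wise arithmetic:

    G_TYPED_KERNEL(GMyAdd, <cv::GMat(cv::GMat, cv::GMat)>, "sample.custom.my_add") {
        static cv::GMatDesc outMeta(cv::GMatDesc a, cv::GMatDesc) { return a; }
    };

    GAPI_PLAIDML_KERNEL(GPlaidMLMyAdd, GMyAdd) {
        static void run(const plaidml::edsl::Tensor &a,
                        const plaidml::edsl::Tensor &b,
                        plaidml::edsl::Tensor &out) {
            out = a + b;   // assumed EDSL element-wise addition
        }
    };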


@ -1,53 +0,0 @@
// This file is part of OpenCV project.
// It is subject to the license terms in the LICENSE file found in the top-level directory
// of this distribution and at http://opencv.org/license.html.
//
// Copyright (C) 2019 Intel Corporation
#ifndef OPENCV_GAPI_PLAIDML_PLAIDML_HPP
#define OPENCV_GAPI_PLAIDML_PLAIDML_HPP
#include <string>
#include <opencv2/gapi/gcommon.hpp> // CompileArgTag
namespace cv
{
namespace gapi
{
/**
* @brief This namespace contains G-API PlaidML backend functions,
* structures, and symbols.
*/
namespace plaidml
{
/** \addtogroup gapi_compile_args
* @{
*/
/**
* @brief This structure represents the basic parameters for the experimental
* PlaidML backend.
*/
struct config
{
std::string dev_id; //!< Device ID. Refer to PlaidML documentation for details.
std::string trg_id; //!< Target ID. Refer to PlaidML documentation for details.
};
/** @} gapi_compile_args */
} // namespace plaidml
} // namespace gapi
namespace detail
{
template<> struct CompileArgTag<cv::gapi::plaidml::config>
{
static const char* tag() { return "gapi.plaidml.config"; }
};
} // namespace detail
} // namespace cv
#endif // OPENCV_GAPI_PLAIDML_PLAIDML_HPP
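
The config struct travels as a compile argument next to the PlaidML kernel package; the
device/target id strings below are placeholders that depend on the local PlaidML setup,
and `graph`/`input` are assumed to exist:

    cv::gapi::plaidml::config cfg{"llvm_cpu.0", "llvm_cpu"};   // placeholder ids
    auto cc = graph.compile(cv::descr_of(input),
                            cv::compile_args(cfg, cv::gapi::core::plaidml::kernels()));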


@ -1,71 +0,0 @@
// This file is part of OpenCV project.
// It is subject to the license terms in the LICENSE file found in the top-level directory
// of this distribution and at http://opencv.org/license.html.
//
// Copyright (C) 2021 Intel Corporation
#ifndef OPENCV_GAPI_PYTHON_API_HPP
#define OPENCV_GAPI_PYTHON_API_HPP
#include <opencv2/gapi/gkernel.hpp> // GKernelPackage
#include <opencv2/gapi/own/exports.hpp> // GAPI_EXPORTS
namespace cv {
namespace gapi {
/**
* @brief This namespace contains G-API Python backend functions,
* structures, and symbols.
*
* This functionality is required to enable G-API custom operations
* and kernels when using G-API from Python, no need to use it in the
* C++ form.
*/
namespace python {
GAPI_EXPORTS cv::gapi::GBackend backend();
struct GPythonContext
{
const cv::GArgs &ins;
const cv::GMetaArgs &in_metas;
const cv::GTypesInfo &out_info;
cv::optional<cv::GArg> m_state;
};
using Impl = std::function<cv::GRunArgs(const GPythonContext&)>;
using Setup = std::function<cv::GArg(const GMetaArgs&, const GArgs&)>;
class GAPI_EXPORTS GPythonKernel
{
public:
GPythonKernel() = default;
GPythonKernel(Impl run, Setup setup);
Impl run;
Setup setup = nullptr;
bool is_stateful = false;
};
class GAPI_EXPORTS GPythonFunctor : public cv::gapi::GFunctor
{
public:
using Meta = cv::GKernel::M;
GPythonFunctor(const char* id, const Meta& meta, const Impl& impl,
const Setup& setup = nullptr);
GKernelImpl impl() const override;
gapi::GBackend backend() const override;
private:
GKernelImpl impl_;
};
} // namespace python
} // namespace gapi
} // namespace cv
#endif // OPENCV_GAPI_PYTHON_API_HPP

Some files were not shown because too many files have changed in this diff.