Commit Graph

64 Commits

Author SHA1 Message Date
Yuantao Feng
fa5ed62a66
Merge pull request #24694 from fengyuentau:matmul_refactor
dnn: refactor ONNX MatMul with fastGemm #24694

Done:
- [x] add backends
    - [x] CUDA
    - [x] OpenVINO
    - [x] CANN
    - [x] OpenCL
    - [x] Vulkan
- [x] add perf tests
- [x] const B case

### Benchmark

Tests were run on an Apple M1. All timings are in milliseconds (ms); a shape sketch of these configurations follows the table.

| Configuration | MatMul (Prepacked) | MatMul | InnerProduct |
| - | - | - | - |
| A=[12, 197, 197], B=[12, 197, 64], trans_a=0, trans_b=0 | **0.39** | 0.41 | 1.33 |
| A=[12, 197, 64], B=[12, 64, 197], trans_a=0, trans_b=0  | **0.42** | 0.42 | 1.17 |
| A=[12, 50, 64], B=[12, 64, 50], trans_a=0, trans_b=0    | **0.13** | 0.15 | 0.33 |
| A=[12, 50, 50], B=[12, 50, 64], trans_a=0, trans_b=0    | **0.11** | 0.13 | 0.22 |
| A=[16, 197, 197], B=[16, 197, 64], trans_a=0, trans_b=0 | **0.46** | 0.54 | 1.46 |
| A=[16, 197, 64], B=[16, 64, 197], trans_a=0, trans_b=0  | **0.46** | 0.95 | 1.74 |
| A=[16, 50, 64], B=[16, 64, 50], trans_a=0, trans_b=0    | **0.18** | 0.32 | 0.43 |
| A=[16, 50, 50], B=[16, 50, 64], trans_a=0, trans_b=0    | **0.15** | 0.25 | 0.25 |
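
For context, each configuration above is a batched matrix multiplication with ViT-attention-style shapes. A minimal numpy sketch of the first row (illustration only; the PR's fastGemm path is C++):

```python
import numpy as np

# First table row: A=[12, 197, 197], B=[12, 197, 64], no transposition.
# Batched MatMul multiplies the trailing 2-D matrices per batch index.
A = np.random.rand(12, 197, 197).astype(np.float32)
B = np.random.rand(12, 197, 64).astype(np.float32)

Y = A @ B                      # shape (12, 197, 64)
assert Y.shape == (12, 197, 64)
```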

### Pull Request Readiness Checklist

See details at https://github.com/opencv/opencv/wiki/How_to_contribute#making-a-good-pull-request

- [x] I agree to contribute to the project under Apache 2 License.
- [x] To the best of my knowledge, the proposed patch is not based on code under GPL or another license that is incompatible with OpenCV
- [x] The PR is proposed to the proper branch
- [x] There is a reference to the original bug report and related work
- [x] There are accuracy tests, performance tests and test data in the opencv_extra repository, if applicable
      The patch to opencv_extra has the same branch name.
- [x] The feature is well documented and sample code can be built with the project CMake
2023-12-19 19:36:41 +03:00
Yuantao Feng
ee0822dc4d
Merge pull request #24378 from fengyuentau:instance_norm
dnn onnx: add instance norm layer #24378

Resolves https://github.com/opencv/opencv/issues/24377
Relates https://github.com/opencv/opencv/pull/24092#discussion_r1349841644

| Input shape | multi-thread | single-thread |
| - | - | - |
| x: [2, 64, 180, 240] | 3.95 ms | 11.12 ms |

Todo:

- [x] speed up by multi-threading
- [x] add perf
- [x] add backend: OpenVINO
- [x] add backend: CUDA
- [x] add backend: OpenCL (no fp16)
- [ ] add backend: CANN (will be done via https://github.com/opencv/opencv/pull/24462)
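
For reference, instance normalization normalizes each (sample, channel) plane over its spatial dimensions and then applies a per-channel scale and bias. A minimal numpy sketch of the semantics (eps value and shapes chosen for illustration):

```python
import numpy as np

def instance_norm(x, scale, bias, eps=1e-5):
    """x: (N, C, H, W); scale, bias: (C,). Normalize over H and W per sample and channel."""
    mean = x.mean(axis=(2, 3), keepdims=True)
    var = x.var(axis=(2, 3), keepdims=True)
    x_hat = (x - mean) / np.sqrt(var + eps)
    return scale.reshape(1, -1, 1, 1) * x_hat + bias.reshape(1, -1, 1, 1)

x = np.random.rand(2, 64, 180, 240).astype(np.float32)   # shape from the perf table above
y = instance_norm(x, np.ones(64, np.float32), np.zeros(64, np.float32))
```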


### Pull Request Readiness Checklist

See details at https://github.com/opencv/opencv/wiki/How_to_contribute#making-a-good-pull-request

- [x] I agree to contribute to the project under Apache 2 License.
- [x] To the best of my knowledge, the proposed patch is not based on code under GPL or another license that is incompatible with OpenCV
- [x] The PR is proposed to the proper branch
- [x] There is a reference to the original bug report and related work
- [x] There are accuracy tests, performance tests and test data in the opencv_extra repository, if applicable
      The patch to opencv_extra has the same branch name.
- [x] The feature is well documented and sample code can be built with the project CMake

```
force_builders=Linux OpenCL,Win64 OpenCL,Custom
buildworker:Custom=linux-4
build_image:Custom=ubuntu:18.04
modules_filter:Custom=none
disable_ipp:Custom=ON
```
2023-11-07 12:59:10 +03:00
Aser Atawya
240b245105
Merge pull request #24092 from Aser-Abdelfatah:GSoC_Support_GatherElements_ONNX
GSoC Add ONNX Support for GatherElements #24092

Merge with: https://github.com/opencv/opencv_extra/pull/1082
Adds support for the ONNX operator GatherElements ([operator docs](https://github.com/onnx/onnx/blob/main/docs/Operators.md#GatherElements)).
Added tests to opencv_extra at pull request https://github.com/opencv/opencv_extra/pull/1082
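
For reference, GatherElements picks one element of `data` per element of `indices` along a given axis, which matches numpy's `take_along_axis`. A minimal sketch:

```python
import numpy as np

data = np.array([[1, 2],
                 [3, 4]])
indices = np.array([[0, 0],
                    [1, 0]])

# ONNX GatherElements with axis=1: out[i][j] = data[i][indices[i][j]]
out = np.take_along_axis(data, indices, axis=1)
# out == [[1, 1],
#         [4, 3]]
```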

### Pull Request Readiness Checklist

See details at https://github.com/opencv/opencv/wiki/How_to_contribute#making-a-good-pull-request

- [x] I agree to contribute to the project under Apache 2 License.
- [x] To the best of my knowledge, the proposed patch is not based on code under GPL or another license that is incompatible with OpenCV
- [x] The PR is proposed to the proper branch
- [x] There is a reference to the original bug report and related work
- [x] There are accuracy tests, performance tests and test data in the opencv_extra repository, if applicable
      The patch to opencv_extra has the same branch name.
- [x] The feature is well documented and sample code can be built with the project CMake
2023-10-18 10:41:47 +03:00
Yuantao Feng
bb171a0c05
dnn: expand refactor with cv::broadcast for onnx models (#24295)
* add expand impl with cv::broadcast

* remove expandMid

* deduce shape from -1

* add constant folding

* handle input constant; handle input constant 1d

* add expand conformance tests; add checks to disallow shape of neg values; add early copy for unchanged total elements

* fix ExpandSubgraph

* dummy commit to trigger build

* dummy commit to trigger build 1

* remove conformance from test names
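
For reference, ONNX Expand broadcasts the input against a target shape with numpy-style rules (bidirectionally, so the output shape is the broadcast of the two shapes). A minimal numpy sketch of the common unidirectional case (the actual implementation uses cv::broadcast in C++):

```python
import numpy as np

x = np.arange(3).reshape(3, 1)          # shape (3, 1)

# Expand to (2, 3, 4): size-1 dimensions are repeated and a new leading
# dimension is prepended, following numpy broadcasting rules.
y = np.broadcast_to(x, (2, 3, 4))
assert y.shape == (2, 3, 4)
```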
2023-09-27 09:28:52 +03:00
Abduragim Shtanchaev
865e7cacca
Merge pull request #24037 from Abdurrahheem:ash/dev_einsum
Add Support for Einsum Layer #24037

### This PR adds support for the [Einsum Layer](https://pytorch.org/docs/stable/generated/torch.einsum.html) (in progress).

This PR is currently intended for review only, not for merging. Test cases are located in PR [#1090](https://github.com/opencv/opencv_extra/pull/1090) in opencv_extra. A brief numpy illustration of the einsum notation used below appears after the task lists.

**DONE**: 
 - [x] 2-5D GEMM support added
 - [x] Matrix transpose support added
 - [x] Reduction-type computation, e.g. 'ij->j'
 - [x] 2nd shape computation - during forward 

**Next PRs**:
- [ ] Broadcasting reduction "...ii ->...i"
- [ ] Add lazy shape deduction. "...ij, ...jk->...ik"
- [ ] Add implicit output computation support. "bij,bjk ->" (output subscripts should be "bik")
- [ ] Add support for CUDA backend 
- [ ] Optimize BatchWiseMultiply

**Later in 5.x version (requires support for 1D matrices)**: 
- [ ] Add 1D vector multiplication support 
- [ ] Inner product "i, i" (problems with 1D shapes)
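
A minimal numpy illustration of the two einsum cases already supported above, 'ij->j' (reduction) and 'bij,bjk->bik' (batched matrix multiplication):

```python
import numpy as np

A = np.random.rand(4, 5)
col_sum = np.einsum('ij->j', A)          # 'ij->j': sum over i, shape (5,)

X = np.random.rand(2, 3, 4)
Y = np.random.rand(2, 4, 5)
Z = np.einsum('bij,bjk->bik', X, Y)      # batched matmul, shape (2, 3, 5)
assert np.allclose(Z, X @ Y)
```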

### Pull Request Readiness Checklist

See details at https://github.com/opencv/opencv/wiki/How_to_contribute#making-a-good-pull-request

- [x] I agree to contribute to the project under Apache 2 License.
- [x] To the best of my knowledge, the proposed patch is not based on code under GPL or another license that is incompatible with OpenCV
- [x] The PR is proposed to the proper branch
- [ ] There is a reference to the original bug report and related work
- [x] There are accuracy tests, performance tests and test data in the opencv_extra repository, if applicable
      The patch to opencv_extra has the same branch name.
- [x] The feature is well documented and sample code can be built with the project CMake
2023-09-22 11:25:02 +03:00
Yuantao Feng
8a96e34e33
dnn: add gemm_layer in place of fully_connected_layer for onnx models (#23897)
* first commit

* turned C from input to constant; force C constant in impl; better handling 0d/1d cases

* integrate with gemm from ficus nn

* fix const inputs

* adjust threshold for int8 tryQuantize

* adjust threshold for int8 quantized 2

* support batched gemm and matmul; tune threshold for rcnn_ilsvrc13; update googlenet

* add gemm perf against innerproduct

* add perf tests for innerproduct with bias

* fix perf

* add memset

* renamings for next step

* add dedicated perf gemm

* add innerproduct in perf_gemm

* remove gemm and innerproduct perf tests from perf_layer

* add perf cases for vit sizes; prepack constants

* remove batched gemm; fix wrong trans; optimize KC

* remove prepacking for const A; several fixes for const B prepacking

* add todos and gemm expression

* add optimized branch for avx/avx2

* trigger build

* update macros and signature

* update signature

* fix macro

* fix bugs for neon aarch64 & x64

* add backends: cuda, cann, inf_ngraph and vkcom

* fix cuda backend

* test commit for cuda

* test cuda backend

* remove debug message from cuda backend

* use cpu dispatcher

* fix neon macro undef in dispatcher

* fix dispatcher

* fix inner kernel for neon aarch64

* fix compiling issue on armv7; try fixing accuracy issue on other platforms

* broadcast C with beta multiplied; improve func namings

* fix bug for avx and avx2

* put all platform-specific kernels in dispatcher

* fix typos

* attempt to fix compile issues on x64

* run old gemm when neon, avx, avx2 are all not available; add kernel for armv7 neon

* fix typo

* quick fix: add macros for pack4

* quick fix: use vmlaq_f32 for armv7

* quick fix for missing macro of fast gemm pack f32 4

* disable conformance tests when optimized branches are not supported

* disable perf tests when optimized branches are not supported

* decouple cv_try_neon and cv_neon_aarch64

* drop googlenet_2023; add fastGemmBatched

* fix step in fastGemmBatched

* cpu: fix initialization of b; gpu: support batch

* quick followup fix for cuda

* add default kernels

* quick followup fix to avoid macro redef

* optimized kernels for lasx

* resolve mis-alignment; remove comments

* tune performance for x64 platform

* tune performance for neon aarch64

* tune for armv7

* comment time consuming tests

* quick follow-up fix
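
For reference, the gemm expression implemented here is Y = alpha * op(A) * op(B) + beta * C, with op() an optional transpose and C broadcast against the product. A minimal numpy sketch of the semantics (shapes chosen for illustration):

```python
import numpy as np

def gemm(A, B, C=None, alpha=1.0, beta=1.0, trans_a=False, trans_b=False):
    # Y = alpha * op(A) @ op(B) + beta * C, with C broadcast to the output shape
    A = A.T if trans_a else A
    B = B.T if trans_b else B
    Y = alpha * (A @ B)
    if C is not None:
        Y = Y + beta * C        # numpy broadcasting covers 0-D/1-D/2-D C
    return Y

Y = gemm(np.random.rand(197, 768), np.random.rand(768, 768), C=np.random.rand(768))
assert Y.shape == (197, 768)
```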
2023-09-20 00:53:34 +03:00
Yuantao Feng
eefee8574a
dnn: refactor reduce (#23613)
* initial impl

* remove reduce int8; fix reduce importer

* fix bugs and add log sum exp

* remove unnecessary header and fix indentation
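
The log-sum-exp reduction added above is normally computed in the numerically stable form max(x) + log(sum(exp(x - max(x)))). A minimal numpy sketch:

```python
import numpy as np

def reduce_log_sum_exp(x, axis, keepdims=False):
    # Subtract the max before exponentiating to avoid overflow.
    m = np.max(x, axis=axis, keepdims=True)
    out = m + np.log(np.sum(np.exp(x - m), axis=axis, keepdims=True))
    return out if keepdims else np.squeeze(out, axis=axis)

x = np.array([[1000.0, 1000.0], [0.0, 0.0]])
print(reduce_log_sum_exp(x, axis=1))   # [1000.693..., 0.693...] -- no overflow
```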
2023-05-17 10:03:45 +03:00
Dmitry Kurtaev
a8d3d1f6f9
Merge pull request #23604 from dkurt:dnn_no_protobuf
Build DNN without Protobuf

The DNN module can be built without Protobuf for Darknet, TFLite, OpenVINO, and Torch (not PyTorch) models.

```
cmake \
    -DCMAKE_BUILD_TYPE=Release \
    -DBUILD_LIST=dnn \
    -DWITH_PROTOBUF=OFF \
    -DWITH_OPENCL=OFF

7.1M    lib/libopencv_dnn.so.4.7.0
```


```
cmake \
    -DCMAKE_BUILD_TYPE=Release \
    -DBUILD_LIST=dnn \
    -DWITH_OPENCL=OFF

3.9M    lib/libopencv_dnn.so.4.7.0
```

### Pull Request Readiness Checklist

See details at https://github.com/opencv/opencv/wiki/How_to_contribute#making-a-good-pull-request

- [x] I agree to contribute to the project under Apache 2 License.
- [x] To the best of my knowledge, the proposed patch is not based on code under GPL or another license that is incompatible with OpenCV
- [x] The PR is proposed to the proper branch
- [x] There is a reference to the original bug report and related work
- [x] There are accuracy tests, performance tests and test data in the opencv_extra repository, if applicable
      The patch to opencv_extra has the same branch name.
- [x] The feature is well documented and sample code can be built with the project CMake
2023-05-15 12:23:18 +03:00
Yuantao Feng
c2b7c1f13b
Merge pull request #23219 from fengyuentau:add_gelu
Add GELU layer for vision transformers

* add gelu and gelu approximation

* drop setKernelParams
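
For reference, GELU(x) = x * Phi(x) with Phi the standard normal CDF, and the approximation is the common tanh form. A minimal numpy sketch of both (tolerance chosen loosely for illustration):

```python
import numpy as np
from math import erf, sqrt, pi

_erf = np.vectorize(erf)

def gelu(x):
    # Exact form: 0.5 * x * (1 + erf(x / sqrt(2)))
    return 0.5 * x * (1.0 + _erf(x / sqrt(2.0)))

def gelu_approx(x):
    # tanh approximation widely used in transformer implementations
    return 0.5 * x * (1.0 + np.tanh(sqrt(2.0 / pi) * (x + 0.044715 * x ** 3)))

x = np.linspace(-3.0, 3.0, 7)
assert np.allclose(gelu(x), gelu_approx(x), atol=1e-2)
```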
2023-02-10 18:03:29 +00:00
Yuantao Feng
4d918ba40b
Merge pull request #23047 from fengyuentau:layer_norm
dnn: add layer normalization for vision transformers

* add layer norm onnx parser, impl and tests

* add onnx graph simplifier for layer norm expanded

* handle the case when constants are of type Initializer

* add test case for layer norm expanded with initializers

* use CV_Assert & CV_CheckType in place of CV_Assert_N; use forward_fallback for OCL_FP16

* use const ref / ref in parameters of invoker::run; extract inner const if from nested loop; use size_t in place of ull

* template hasBias

* remove trailing whitespace

* use pointer parameter with null check; move normSize division & mean_square division outside of loop; use std::max to ensure positive value before std::sqrt

* refactor implementation, optimize parallel_for

* disable layer norm expanded

* remove the removal of layer norm optional outputs
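
For reference, layer normalization normalizes each sample over its trailing axes and then applies an elementwise scale and optional bias. A minimal numpy sketch of the semantics (axis and eps values chosen for illustration):

```python
import numpy as np

def layer_norm(x, gamma, beta=None, axis=-1, eps=1e-5):
    # Normalize over the axes from `axis` to the last one, then scale/shift.
    axes = tuple(range(axis % x.ndim, x.ndim))
    mean = x.mean(axis=axes, keepdims=True)
    var = x.var(axis=axes, keepdims=True)
    y = (x - mean) / np.sqrt(var + eps) * gamma
    return y if beta is None else y + beta

x = np.random.rand(2, 197, 768).astype(np.float32)   # ViT-style token embeddings
y = layer_norm(x, gamma=np.ones(768, np.float32), beta=np.zeros(768, np.float32))
```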
2023-01-27 16:35:59 +03:00
fengyuentau
441624a5fb tile impl 2022-11-29 11:15:38 +08:00
Alexander Smorkalov
ec7fc5adca
Merge pull request #22529 from fengyuentau:scatter_scatternd
DNN: supports Scatter and ScatterND from ONNX
2022-10-17 14:57:46 +03:00
fengyuentau
d24d8f2abe implementation of scatter and scatternd with conformance tests enabled 2022-10-17 11:30:32 +08:00
Alexander Alekhin
347246901e Merge pull request #21745 from alalek:dnn_plugin_openvino 2022-10-08 22:32:25 +00:00
Alexander Alekhin
43b2bb2c25 dnn: plugin support for OpenVINO 2022-10-07 16:57:31 +00:00
Egor Smirnov
65f71ce2eb add Gather implementation 2022-09-19 15:06:44 +03:00
rogday
ed69bcae2d
Merge pull request #21865 from rogday:nary_eltwise_layers
Reimplementation of Element-wise layers with broadcasting support

* init

* semi-working initial version

* add small_vector

* wip

* remove smallvec

* add nary function

* replace auto with Mat in lambda expr used in transform

* uncomment asserts

* autobuffer shape_buf & step_buf

* fix a missing bracket

* fixed a missing addLayer in parseElementWise

* solve one-dimensional broadcast

* remove pre_broadcast_transform for the case of two constants; fix missing constBlobsExtraInfo when addConstant is called

* one autobuffer for step & shape

* temporary fix for the missing original dimension information

* fix parseUnsqueeze when it gets a 1d tensor constant

* support sum/mean/min/max with only one input

* reuse old code to handle cases of two non-constant inputs

* add condition to handle div & mul of two non-constant inputs

* use || instead of or

* remove trailing spaces

* enlarge buf in binary_forward to contain other buffer

* use autobuffer in nary_forward

* generate data randomly and add more cases for perf

* add op and, or & xor

* update perf_dnn

* remove some comments

* remove legacy; add two ONNX conformance tests in filter

* move from cpu_denylist to all_denylist

* adjust parsing for inputs>=2

Co-authored-by: fengyuentau <yuantao.feng@opencv.org.cn>
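
For reference, the broadcasting implemented here follows numpy-style rules: shapes are aligned from the trailing dimension, and a size-1 or missing dimension is stretched to match. A minimal numpy sketch:

```python
import numpy as np

a = np.random.rand(2, 3, 1)
b = np.random.rand(1, 4)        # missing leading dimensions count as size 1

c = a + b                       # elementwise add with broadcasting
assert c.shape == (2, 3, 4)

# n-ary case: the output shape is the broadcast of all input shapes
d = np.maximum.reduce([np.broadcast_to(t, (2, 3, 4)) for t in (a, b, c)])
```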
2022-07-19 06:14:05 +03:00
zihaomu
e36948cfbc add ONNX OP sign, shrink and reciprocal 2022-04-07 15:32:12 +08:00
Zihao Mu
b6b5c27cec Support for some reduce layers for onnx 2022-03-18 10:19:13 +08:00
Smirnov Egor
71a22e45b0 add celu, hardsigmoid, selu, thresholdedrelu layers 2021-12-18 03:19:54 +03:00
Smirnov Egor
1bd382c1d0 Add acos, acosh, asin, asinh, atan, atanh, cos, cosh, erf, hardswish, sin, sinh, softplus, softsign, tan layers 2021-12-17 18:19:40 +03:00
Smirnov Egor
e608adea60 add ArgMax and ArgMin layers 2021-12-06 20:49:54 +03:00
Smirnov Egor
1feb3838b5 add Ceil, Floor, Log, Round, Sqrt, Not, Equal, Less, Greater 2021-10-15 16:02:46 +03:00
Jebastin Nadar
cce78cc5e2
Merge pull request #20535 from SamFC10:onnx-q
dnn : int8 quantized layers support in onnx importer

* added quantized layers support in onnx importer

* added more cases in eltwise node, some more checks

* added tests for quantized nodes

* relax thresholds for failed tests, address review comments

* refactoring based on review comments

* added support for previously unsupported cases and a pre-quantized resnet50 test

* relax thresholds due to int8 resize layer
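
For reference, 8-bit affine quantization maps q = clip(round(x / scale) + zero_point, -128, 127) and dequantizes as x ~ scale * (q - zero_point). A minimal numpy sketch (the scale choice below is just one simple option):

```python
import numpy as np

def quantize_int8(x, scale, zero_point=0):
    q = np.round(x / scale) + zero_point
    return np.clip(q, -128, 127).astype(np.int8)

def dequantize_int8(q, scale, zero_point=0):
    return scale * (q.astype(np.float32) - zero_point)

x = np.random.randn(4).astype(np.float32)
scale = np.abs(x).max() / 127.0            # symmetric per-tensor scale
x_hat = dequantize_int8(quantize_int8(x, scale), scale)
```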
2021-10-04 18:07:38 +00:00
SamFC10
fa90e14b06 int8 layers and 8-bit quantization support 2021-08-19 09:56:47 +05:30
thezane
210bfaf8d6
Merge pull request #20483 from thezane:support-cumsum-layer-for-onnx
* Support cumsum layer for onnx

* Add unit tests

* Address review comments
2021-08-17 20:09:25 +03:00
Julia Bareeva
e1cafa3834
Merge pull request #20442 from JulieBar:gru_layer
* Add initialization and inference for GRU layer

* fix issues found on review
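
For reference, one GRU step computes an update gate, a reset gate and a candidate state. A minimal numpy sketch of the standard formulation (biases omitted; the ONNX GRU operator's weight layout and gate ordering differ, this only illustrates the math):

```python
import numpy as np

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

def gru_step(x, h, Wz, Uz, Wr, Ur, Wh, Uh):
    z = sigmoid(x @ Wz + h @ Uz)                # update gate
    r = sigmoid(x @ Wr + h @ Ur)                # reset gate
    h_tilde = np.tanh(x @ Wh + (r * h) @ Uh)    # candidate state
    return (1.0 - z) * h + z * h_tilde          # new hidden state

in_dim, hid = 8, 16
x, h = np.random.rand(1, in_dim), np.zeros((1, hid))
W = lambda rows, cols: np.random.rand(rows, cols) * 0.1
h = gru_step(x, h, W(in_dim, hid), W(hid, hid),
             W(in_dim, hid), W(hid, hid), W(in_dim, hid), W(hid, hid))
```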
2021-08-07 10:07:37 +03:00
SamFC10
6111935835 Added exp layer 2021-02-20 22:16:00 +05:30
Alexander Alekhin
21e28adb87 Merge remote-tracking branch 'upstream/3.4' into merge-3.4 2020-05-22 19:50:14 +00:00
Liubov Batanina
d991c22090
Merge pull request #16575 from l-bat:flownet2
Support FlowNet2 model

* Support DataAugmentation layer

* Fix warnings

* Fix comments

* Support Correlation layer

* TEST

* Support Correlation layer

* Supported Accum and FlowWarp layers

* Supported ChannelNorm layer

* Supported Resample with inputs.size() > 1

* Fixed comments

* Refactoring

* Added tests

* Add resample test

* Added asserts in resize layer

* Updated DataAugmentation layer

* Update convolution layer

* Refactoring

* Fix data augmentation layer

* Fix caffe importer

* Fix resize

* Switch to Mat ptr

* Remove useless resize type

* Used ResizeLayer in Accum

* Split ChannelNormLayer

* Delete duplicate assert

* Add sample

* Fix sample

* Added colormap
2020-05-19 12:29:50 +00:00
Alexander Alekhin
b8579f12be Merge remote-tracking branch 'upstream/3.4' into merge-3.4 2020-04-08 10:19:09 +00:00
Dmitry Kurtaev
8574a757f9 Case-sensitive dnn layer types 2020-04-04 15:03:56 +03:00
Manjunath Bhat
78c5e41c23 Merge pull request #15808 from thebhatman:Mish_swish
* Added Swish and Mish activations

* Fixed whitespace errors

* Kernel implementation done

* Added function for launching kernel

* Changed type of 1.0

* Attempt to add test for Swish and Mish

* Resolving type mismatch for log

* exp from device

* Use log1pexp instead of adding 1

* Added openCL kernels
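
For reference, Swish(x) = x * sigmoid(x) and Mish(x) = x * tanh(softplus(x)); the log1pexp mentioned above is the numerically stable softplus log(1 + exp(x)). A minimal numpy sketch:

```python
import numpy as np

def softplus(x):
    # log1pexp: log(1 + exp(x)), computed stably via logaddexp(0, x)
    return np.logaddexp(0.0, x)

def swish(x):
    return x / (1.0 + np.exp(-x))      # x * sigmoid(x)

def mish(x):
    return x * np.tanh(softplus(x))

x = np.linspace(-5.0, 5.0, 11)
print(swish(x), mish(x))
```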
2019-12-02 00:06:17 +03:00
thebhatman
8a18d132fc Port Swish and Mish layers 2019-12-01 11:55:39 +03:00
Alexander Alekhin
e82e672a93 Merge remote-tracking branch 'upstream/3.4' into merge-3.4 2018-12-06 07:06:58 +00:00
Dmitry Kurtaev
c9e0c77d73 Concat layer from TensorFlow with constant inputs 2018-12-04 19:41:40 +03:00
Alexander Alekhin
dca657a2fd Merge remote-tracking branch 'upstream/3.4' into merge-3.4 2018-09-10 00:10:21 +03:00
Hamdi Sahloul
a39e0daacf Utilize CV_UNUSED macro 2018-09-07 20:33:52 +09:00
Alexander Alekhin
f10fd64630 dnn: update "guard" inline namespace
- differ from 3.4 branch
2018-09-03 20:46:57 +00:00
Dmitry Kurtaev
e8e9d1d021 Implement Interp layer using Resize layer 2018-06-22 19:26:47 +03:00
Dmitry Kurtaev
4626246087 Add ShuffleChannel layer 2018-06-21 19:10:42 +03:00
David
7175f257b5 Added ResizeBilinear op for tf (#11050)
* Added ResizeBilinear op for tf

Combined ResizeNearestNeighbor and ResizeBilinear layers into Resize (with an interpolation param).

Minor changes to tf_importer and resize layer to save some code lines

Minor changes in init.cpp

Minor changes in tf_importer.cpp

* Replaced implementation of a custom ResizeBilinear layer to all layers

* Use Mat::ptr. Replace interpolation flags
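
For reference, folding the two layers into one Resize means the interpolation mode is selected by a parameter, much like the interpolation flag of cv2.resize. A minimal sketch (illustration only, not the layer's API):

```python
import cv2
import numpy as np

img = np.random.randint(0, 256, (48, 64, 3), np.uint8)

# One resize entry point; nearest vs. bilinear chosen by a parameter
up_nearest  = cv2.resize(img, (128, 96), interpolation=cv2.INTER_NEAREST)
up_bilinear = cv2.resize(img, (128, 96), interpolation=cv2.INTER_LINEAR)
```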
2018-06-07 16:29:04 +03:00
Dmitry Kurtaev
bf87a43185 Faster-RCNN object detection models from TensorFlow 2018-05-30 17:12:36 +03:00
Dmitry Kurtaev
0ed2cbc931 R-FCN models support 2017-12-20 10:43:22 +03:00
Dmitry Kurtaev
08112f3821 Faster-RCNN models support 2017-12-15 12:16:21 +03:00
Dmitry Kurtaev
17dcf0e82d ROIPooling layer 2017-12-07 19:04:38 +03:00
Dmitry Kurtaev
99ed085752 Update PriorBox layer 2017-11-27 16:47:20 +03:00
Dmitry Kurtaev
905a9dada2 Removed LPNormalize layer. 2017-10-10 20:38:55 +03:00
Vadim Pisarevsky
b7ff9ddcdd Merge pull request #9705 from AlexeyAB:dnn_darknet_yolo_v2 2017-10-10 12:02:03 +00:00
AlexeyAB
ecc34dc521 Added DNN Darknet Yolo v2 for object detection 2017-10-09 21:08:44 +03:00