Add Support for Einsum Layer #24037
### This PR adds support for the [Einsum layer](https://pytorch.org/docs/stable/generated/torch.einsum.html) (work in progress).
This PR is currently for review only and is not yet meant to be merged. Test cases are located in opencv_extra PR [#1090](https://github.com/opencv/opencv_extra/pull/1090). A short einsum notation reference follows the task lists below.
**DONE**:
- [x] 2-5D GEMM support added
- [x] Matrix transpose support added
- [x] Reduction-type computation, e.g. 'ij->j'
- [x] Second shape computation during the forward pass
**Next PRs**:
- [ ] Broadcasting reduction "...ii ->...i"
- [ ] Add lazy shape deduction. "...ij, ...jk->...ik"
- [ ] Add implicit output computation support. "bij,bjk ->" (output subscripts should be "bik")
- [ ] Add support for CUDA backend
- [ ] Optimize BatchWiseMultiply
**Later in 5.x version (requires support for 1D matrices)**:
- [ ] Add 1D vector multiplication support
- [ ] Inner product "i, i" (problems with 1D shapes)
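For readers less familiar with the subscript strings above: the layer follows the same notation as `numpy.einsum` / `torch.einsum`. A minimal NumPy illustration of the cases listed (a notation reference only, not the OpenCV implementation):

```python
import numpy as np

A = np.random.rand(2, 3)
B = np.random.rand(3, 4)

# 'ij->j': reduce over the first axis (column sums)
col_sum = np.einsum('ij->j', A)          # shape (3,)
assert np.allclose(col_sum, A.sum(axis=0))

# 'ij,jk->ik': plain matrix multiplication (the 2D GEMM case)
C = np.einsum('ij,jk->ik', A, B)         # shape (2, 4)
assert np.allclose(C, A @ B)

# 'bij,bjk->bik': batched matrix multiplication with explicit output subscripts
Ab, Bb = np.random.rand(5, 2, 3), np.random.rand(5, 3, 4)
Cb = np.einsum('bij,bjk->bik', Ab, Bb)   # shape (5, 2, 4)
assert np.allclose(Cb, Ab @ Bb)
```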
### Pull Request Readiness Checklist
See details at https://github.com/opencv/opencv/wiki/How_to_contribute#making-a-good-pull-request
- [x] I agree to contribute to the project under Apache 2 License.
- [x] To the best of my knowledge, the proposed patch is not based on a code under GPL or another license that is incompatible with OpenCV
- [x] The PR is proposed to the proper branch
- [ ] There is a reference to the original bug report and related work
- [x] There is accuracy test, performance test and test data in opencv_extra repository, if applicable
Patch to opencv_extra has the same branch name.
- [x] The feature is well documented and sample code can be built with the project CMake
* attempt to add 0d/1d mat support to OpenCV
* revised the patch; now 1D mat is treated as 1xN 2D mat rather than Nx1.
* a step towards 'green' tests
* another little step towards 'green' tests
* calib test failures seem to be fixed now
* more fixes _core & _dnn
* another step towards green ci; even 0D mat's (a.k.a. scalars) are now partly supported!
* fixed strange bug in aruco/charuco detector, not sure why it did not work
* also fixed a few remaining failures (hopefully) in dnn & core
* disabled failing GAPI tests - too complex to dig into this compiler pipeline
* hopefully fixed java tests
* trying to fix some more tests
* quick followup fix
* continue to fix test failures and warnings
* quick followup fix
* trying to fix some more tests
* partly fixed support for 0D/scalar UMat's
* use updated parseReduce() from upstream
* trying to fix the remaining test failures
* fixed [ch]aruco tests in Python
* still trying to fix tests
* revert "fix" in dnn's CUDA tensor
* trying to fix dnn+CUDA test failures
* fixed 1D umat creation
* hopefully fixed remaining cuda test failures
* removed trailing whitespace
* first commit
* turned C from input to constant; force C constant in impl; better handling 0d/1d cases
* integrate with gemm from ficus nn
* fix const inputs
* adjust threshold for int8 tryQuantize
* adjust threshold for int8 quantized 2
* support batched gemm and matmul; tune threshold for rcnn_ilsvrc13; update googlenet
* add gemm perf against innerproduct
* add perf tests for innerproduct with bias
* fix perf
* add memset
* renamings for next step
* add dedicated perf gemm
* add innerproduct in perf_gemm
* remove gemm and innerproduct perf tests from perf_layer
* add perf cases for vit sizes; prepack constants
* remove batched gemm; fix wrong trans; optimize KC
* remove prepacking for const A; several fixes for const B prepacking
* add todos and gemm expression
* add optimized branch for avx/avx2
* trigger build
* update macros and signature
* update signature
* fix macro
* fix bugs for neon aarch64 & x64
* add backends: cuda, cann, inf_ngraph and vkcom
* fix cuda backend
* test commit for cuda
* test cuda backend
* remove debug message from cuda backend
* use cpu dispatcher
* fix neon macro undef in dispatcher
* fix dispatcher
* fix inner kernel for neon aarch64
* fix compiling issue on armv7; try fixing accuracy issue on other platforms
* broadcast C with beta multiplied; improve func namings
* fix bug for avx and avx2
* put all platform-specific kernels in dispatcher
* fix typos
* attempt to fix compile issues on x64
* run old gemm when neon, avx, avx2 are all not available; add kernel for armv7 neon
* fix typo
* quick fix: add macros for pack4
* quick fix: use vmlaq_f32 for armv7
* quick fix for missing macro of fast gemm pack f32 4
* disable conformance tests when optimized branches are not supported
* disable perf tests when optimized branches are not supported
* decouple cv_try_neon and cv_neon_aarch64
* drop googlenet_2023; add fastGemmBatched
* fix step in fastGemmBatched
* cpu: fix initialization of b; gpu: support batch
* quick followup fix for cuda
* add default kernels
* quick followup fix to avoid macro redef
* optimized kernels for LASX
* resolve mis-alignment; remove comments
* tune performance for x64 platform
* tune performance for neon aarch64
* tune for armv7
* comment time consuming tests
* quick follow-up fix
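For reference, the operation optimized in the commits above is the ONNX-style Gemm, `Y = alpha * A' * B' + beta * C`, with optional transposes and a broadcast bias `C` ("broadcast C with beta multiplied"). A NumPy sketch of the reference semantics, not of the SIMD kernels themselves:

```python
import numpy as np

def gemm(A, B, C=None, alpha=1.0, beta=1.0, transA=False, transB=False):
    """ONNX-style Gemm: Y = alpha * A' @ B' + beta * C, with C broadcast to the output shape."""
    Ap = A.T if transA else A
    Bp = B.T if transB else B
    Y = alpha * (Ap @ Bp)
    if C is not None:
        Y = Y + beta * C          # C may be a vector and is broadcast over the rows
    return Y

A = np.random.rand(4, 3)
B = np.random.rand(4, 5)
bias = np.random.rand(5)
Y = gemm(A, B, bias, transA=True)     # shape (3, 5)
assert np.allclose(Y, A.T @ B + bias)
```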
Build Java without ANT #23724
### Pull Request Readiness Checklist
Enables a path for building the Java bindings without ANT
* Able to build OpenCV JAR and Docs without ANT
```
-- Java:
-- ant: NO
-- JNI: /usr/lib/jvm/default-java/include /usr/lib/jvm/default-java/include/linux /usr/lib/jvm/default-java/include
-- Java wrappers: YES
-- Java tests: NO
```
* Possible to build OpenCV JAR without ANT but tests still require ANT
**Merge with**: https://github.com/opencv/opencv_contrib/pull/3502
Notes:
- Use `OPENCV_JAVA_IGNORE_ANT=1` to force the ANT-free "Java" flow for building the Java bindings (see the example configure command below)
- Java tests still require Apache ANT
- JAR doesn't include `.java` source code files.
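A sketch of a configure command exercising the ANT-free flow; passing `OPENCV_JAVA_IGNORE_ANT=1` as a CMake cache variable (rather than an environment variable) is an assumption here, while `BUILD_opencv_java` is the standard option for the Java bindings:

```
cmake -DBUILD_opencv_java=ON -DOPENCV_JAVA_IGNORE_ANT=1 ../opencv
```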
See details at https://github.com/opencv/opencv/wiki/How_to_contribute#making-a-good-pull-request
- [x] I agree to contribute to the project under Apache 2 License.
- [x] To the best of my knowledge, the proposed patch is not based on a code under GPL or another license that is incompatible with OpenCV
- [x] The PR is proposed to the proper branch
- [ ] There is a reference to the original bug report and related work
- [x] There is accuracy test, performance test and test data in opencv_extra repository, if applicable
Patch to opencv_extra has the same branch name.
- [x] The feature is well documented and sample code can be built with the project CMake
DNN: Add New API blobFromImageParam #22750
The purpose of this PR:
1. Add a new API `blobFromImageParam` that extends the `blobFromImage` API. It supports different data layouts (NCHW or NHWC) and letter-box preprocessing (see the layout sketch below the list).
2. ~~`blobFromImage` can output `CV_16F`~~
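To illustrate what the two layouts and letter-box preprocessing refer to, here is a minimal sketch using the existing `blobFromImage` and NumPy; it does not use the proposed `blobFromImageParam` API itself:

```python
import cv2
import numpy as np

img = cv2.imread("input.jpg")              # placeholder image, H x W x 3, BGR

# The existing API always produces an NCHW blob: (N, C, H, W)
blob_nchw = cv2.dnn.blobFromImage(img, scalefactor=1.0 / 255, size=(224, 224), swapRB=True)
print(blob_nchw.shape)                     # (1, 3, 224, 224)

# NHWC is a transpose of the same data: (N, H, W, C)
blob_nhwc = blob_nchw.transpose(0, 2, 3, 1)
print(blob_nhwc.shape)                     # (1, 224, 224, 3)

# Letter-box: scale while keeping the aspect ratio, then pad to the target size
target_w, target_h = 224, 224
scale = min(target_w / img.shape[1], target_h / img.shape[0])
new_w, new_h = int(img.shape[1] * scale), int(img.shape[0] * scale)
canvas = np.zeros((target_h, target_w, 3), dtype=np.uint8)
canvas[:new_h, :new_w] = cv2.resize(img, (new_w, new_h))
```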
### Pull Request Readiness Checklist
See details at https://github.com/opencv/opencv/wiki/How_to_contribute#making-a-good-pull-request
- [x] I agree to contribute to the project under Apache 2 License.
- [x] To the best of my knowledge, the proposed patch is not based on a code under GPL or another license that is incompatible with OpenCV
- [x] The PR is proposed to the proper branch
- [ ] There is a reference to the original bug report and related work
- [ ] There is accuracy test, performance test and test data in opencv_extra repository, if applicable
Patch to opencv_extra has the same branch name.
- [ ] The feature is well documented and sample code can be built with the project CMake
dnn: Support more operators in CANN backend #23401
This PR adds support for the following layers:
- [x] Sub
- [x] PRelu
- [x] DeConv
- [x] Also warn users when the backend is switched back to the default because some layers are not supported (see the backend-selection sketch below the list).
- [ ] [Dropped] LSTM: some hacks (adding layers) were introduced, which make it even harder to build the graph for the CANN backend.
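For context, a minimal sketch of selecting the CANN backend from Python, assuming an OpenCV build compiled with CANN support and a version that exposes these enums:

```python
import cv2

net = cv2.dnn.readNetFromONNX("model.onnx")          # placeholder model path
net.setPreferableBackend(cv2.dnn.DNN_BACKEND_CANN)   # request the CANN backend
net.setPreferableTarget(cv2.dnn.DNN_TARGET_NPU)      # run on the Ascend NPU

# With this PR, layers that CANN cannot handle trigger a warning and the net
# falls back to the default backend instead of failing.
blob = cv2.dnn.blobFromImage(cv2.imread("input.jpg"))
net.setInput(blob)
out = net.forward()
```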
### Pull Request Readiness Checklist
See details at https://github.com/opencv/opencv/wiki/How_to_contribute#making-a-good-pull-request
- [x] I agree to contribute to the project under Apache 2 License.
- [x] To the best of my knowledge, the proposed patch is not based on a code under GPL or another license that is incompatible with OpenCV
- [x] The PR is proposed to the proper branch
- [x] There is a reference to the original bug report and related work
- [x] There is accuracy test, performance test and test data in opencv_extra repository, if applicable
Patch to opencv_extra has the same branch name.
- [x] The feature is well documented and sample code can be built with the project CMake
Related issue: https://github.com/opencv/opencv_zoo/issues/136
Features added:
- Support operators with multiple output: ONNX Split.
- Support Slice without steps.
Bugs fixed:
- Wrong settings in ClipByValue (Relu6).
- Wrong calculation of pads in the convolution layer (it is wrong in general, but for now it is fixed only for CANN).
### Pull Request Readiness Checklist
See details at https://github.com/opencv/opencv/wiki/How_to_contribute#making-a-good-pull-request
- [x] I agree to contribute to the project under Apache 2 License.
- [x] To the best of my knowledge, the proposed patch is not based on a code under GPL or another license that is incompatible with OpenCV
- [x] The PR is proposed to the proper branch
- [x] There is a reference to the original bug report and related work
- [x] There is accuracy test, performance test and test data in opencv_extra repository, if applicable
Patch to opencv_extra has the same branch name.
- [x] The feature is well documented and sample code can be built with the project CMake
dnn: add layer normalization for vision transformers
* add layer norm onnx parser, impl and tests
* add onnx graph simplifier for layer norm expanded
* handle the case when constants are of type Initializer
* add test case for layer norm expanded with initializers
* use CV_Assert & CV_CheckType in place of CV_Assert_N; use forward_fallback for OCL_FP16
* use const ref / ref in parameters of invoker::run; extract inner const if from nested loop; use size_t in place of ull
* template hasBias
* remove trailing whitespace
* use pointer parameter with null check; move normSize division & mean_square division outside of loop; use std::max to ensure positive value before std::sqrt
* refactor implementation, optimize parallel_for
* disable layer norm expanded
* remove the removal of layer norm optional outputs
* cann backend impl v1
* cann backend impl v2: use opencv parsers to build models for cann
* adjust fc according to the new transA and transB
* put cann net in cann backend node and reuse forwardLayer
* use fork() to create a child process and compile cann model
* remove legacy code
* remove debug code
* fall back to the CPU backend if there is a layer not supported by the CANN backend
* fix netInput forward
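Returning to the layer normalization added above: the computation follows the standard transformer formula, normalizing over the last axis with a learned scale and an optional bias (cf. the `hasBias` template in the commits). A minimal NumPy sketch of the computation, not of the optimized implementation:

```python
import numpy as np

def layer_norm(x, gamma, beta, eps=1e-5):
    """Standard layer normalization over the last axis, as used in transformers."""
    mean = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)
    return (x - mean) / np.sqrt(var + eps) * gamma + beta

x = np.random.rand(2, 197, 768).astype(np.float32)    # e.g. a ViT token sequence
gamma = np.ones(768, dtype=np.float32)                 # scale
beta = np.zeros(768, dtype=np.float32)                 # optional bias
y = layer_norm(x, gamma, beta)
assert np.allclose(y.mean(axis=-1), 0, atol=1e-4)      # normalized per token
```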
DNN: let Quant and Dequant of ONNX_importer support the Constant input.
* let Quant and Dequant support the Constant input.
* fix negative value of axis.
Reimplementation of Element-wise layers with broadcasting support
* init
* semi-working initial version
* add small_vector
* wip
* remove smallvec
* add nary function
* replace auto with Mat in lambda expr used in transform
* uncomment asserts
* autobuffer shape_buf & step_buf
* fix a missing bracket
* fixed a missing addLayer in parseElementWise
* solve one-dimensional broadcast
* remove pre_broadcast_transform for the case of two constants; fix missing constBlobsExtraInfo when addConstant is called
* one autobuffer for step & shape
* temporary fix for the missing original dimension information
* fix parseUnsqueeze when it gets a 1d tensor constant
* support sum/mean/min/max with only one input
* reuse old code to handle cases of two non-constant inputs
* add condition to handle div & mul of two non-constant inputs
* use || instead of or
* remove trailing spaces
* enlarge buf in binary_forward to contain other buffer
* use autobuffer in nary_forward
* generate data randomly and add more cases for perf
* add op and, or & xor
* update perf_dnn
* remove some comments
* remove legacy; add two ONNX conformance tests in filter
* move from cpu_denylist to all_denylist
* adjust parsing for inputs>=2
Co-authored-by: fengyuentau <yuantao.feng@opencv.org.cn>
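For reference, the broadcasting implemented by the reworked element-wise (nary) layers follows the usual NumPy/ONNX multidirectional rules; a small NumPy sketch of the rules themselves, not of the layer code:

```python
import numpy as np

a = np.random.rand(1, 3, 4, 5)   # NCHW-style tensor
b = np.random.rand(3, 1, 5)      # lower-rank operand

# Shapes are right-aligned; size-1 axes and missing leading axes are stretched:
#   (1, 3, 4, 5)
#      (3, 1, 5)
#   -> (1, 3, 4, 5)
c = a + b
print(c.shape)                   # (1, 3, 4, 5)

# A 1-D constant broadcasts along the last axis, the
# "one-dimensional broadcast" case mentioned in the commits above.
bias = np.random.rand(5)
d = a + bias
print(d.shape)                   # (1, 3, 4, 5)
```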
Add per_tensor_quantize to int8 quantize
* add per_tensor_quantize to dnn int8 module.
* change the API flag from perTensor to perChannel, and recognize the quantization type in the ONNX importer
* change the default to hpp
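A hedged usage sketch of the new flag from Python; it assumes `Net.quantize` is exposed to the bindings with the `perChannel` argument described above (the parameter name and its Python exposure are assumptions here):

```python
import cv2

net = cv2.dnn.readNetFromONNX("model.onnx")   # placeholder model path
calib = [cv2.dnn.blobFromImage(cv2.imread(f)) for f in ["img0.jpg", "img1.jpg"]]

# perChannel=True  -> per-channel (per-axis) quantization scales (default)
# perChannel=False -> a single scale/zero-point per tensor (the new per-tensor mode)
qnet = net.quantize(calib, cv2.CV_32F, cv2.CV_8S, perChannel=False)
```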
* add apply softmax option to ClassificationModel
* remove default arguments of ClassificationModel::setSoftMax()
* fix build for python
* fix docs warning for setSoftMax()
* add impl for ClassificationModel()
* fix failed build for docs by trailing whitespace
* move to implement classify() to ClassificationModel_Impl
* move to implement softmax() to ClassificationModel_Impl
* remove softmax from public method in ClassificationModel
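For context, the new option applies a softmax to the raw classification scores so that `classify()` can return probabilities instead of logits. A minimal NumPy sketch of the transform itself, not of the `ClassificationModel` API:

```python
import numpy as np

def softmax(logits):
    """Numerically stable softmax: turns raw scores into probabilities."""
    e = np.exp(logits - logits.max())
    return e / e.sum()

logits = np.array([2.0, 0.5, -1.0, 3.2])
probs = softmax(logits)
print(probs, probs.sum())        # probabilities summing to 1.0
```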
[GSoC] OpenCV.js: Accelerate OpenCV.js DNN via WebNN
* Add WebNN backend for OpenCV DNN Module
Update dnn.cpp
Update dnn.cpp
Update dnn.cpp
Update dnn.cpp
Add WebNN header files into OpenCV 3rd-party files
Create webnn.hpp
update cmake
Complete README and add OpenCVDetectWebNN.cmake file
add webnn.cpp
Modify webnn.cpp
Can successfully compile the code for creating an MLContext
Update webnn.cpp
Update README.md
Update README.md
Update README.md
Update README.md
Update cmake files and update README.md
Update OpenCVDetectWebNN.cmake and README.md
Update OpenCVDetectWebNN.cmake
Fix OpenCVDetectWebNN.cmake and update README.md
Add source webnn_cpp.cpp and library libwebnn_proc.so
Update dnn.cpp
Update dnn.cpp
Update dnn.cpp
Update dnn.cpp
update dnn.cpp
update op_webnn
update op_webnn
Update op_webnn.hpp
update op_webnn.cpp & hpp
Update op_webnn.hpp
Update op_webnn
update the skeleton
Update op_webnn.cpp
Update op_webnn
Update op_webnn.cpp
Update op_webnn.cpp
Update op_webnn.hpp
update op_webnn
update op_webnn
Solved the problems of released variables.
Fixed the bugs in op_webnn.cpp
Implement op_webnn
Implement Relu by WebNN API
Update dnn.cpp for better test
Update elementwise_layers.cpp
Implement ReLU6
Update elementwise_layers.cpp
Implement SoftMax using WebNN API
Implement Reshape by WebNN API
Implement PermuteLayer by WebNN API
Implement PoolingLayer using WebNN API
Update pooling_layer.cpp
Update pooling_layer.cpp
Update pooling_layer.cpp
Update pooling_layer.cpp
Update pooling_layer.cpp
Update pooling_layer.cpp
Implement poolingLayer by WebNN API and add more detailed logs
Update dnn.cpp
Update dnn.cpp
Remove redundant code and add more logs for poolingLayer
Add more logs in the pooling layer implementation
Fix the indent issue and resolve the compiling issue
Fix the build problems
Fix the build issue
Fix the build issue
Update dnn.cpp
Update dnn.cpp
* Fix the build issue
* Implement BatchNorm Layer by WebNN API
* Update convolution_layer.cpp
This is a temporary file for Conv2d layer implementation
* Integrate some general functions into op_webnn.cpp&hpp
* Update const_layer.cpp
* Update convolution_layer.cpp
Still have some bugs that should be fixed.
* Update conv2d layer and fc layer
still have some problems to be fixed.
* update constLayer, conv layer, fc layer
There are still some bugs to be fixed.
* Fix the build issue
* Update concat_layer.cpp
Still have some bugs to be fixed.
* Update conv2d layer, fully connected layer and const layer
* Update convolution_layer.cpp
* Add OpenCV.js DNN module WebNN Backend (both using webnn-polyfill and electron)
* Delete bib19450.aux
* Update dnn.cpp
* Fix Error in dnn.cpp
* Resolve duplication in conditions in convolution_layer.cpp
* Fixed the issues in the comments
* Fix building issue
* Update tutorial
* Fixed comments
* Address the comments
* Update CMakeLists.txt
* Offer more accurate perf test on native
* Add better perf tests for both native and web
* Modify perf tests for better results
* Use a more recent version of Electron
* Support latest WebNN Clamp op
* Add definition of HAVE_WEBNN macro
* Support group convolution
* Implement Scale_layer using WebNN
* Add Softmax option for native classification example
* Fix comments
* Fix comments
dnn : int8 quantized layers support in onnx importer
* added quantized layers support in onnx importer
* added more cases in eltwise node, some more checks
* added tests for quantized nodes
* relax thresholds for failed tests, address review comments
* refactoring based on review comments
* added support for unsupported cases and pre-quantized resnet50 test
* relax thresholds due to int8 resize layer
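For reference, the quantized layers map float values to int8 with the standard affine scheme used by ONNX `QuantizeLinear` / `DequantizeLinear`; a small NumPy sketch of that mapping:

```python
import numpy as np

def quantize_int8(x, scale, zero_point):
    """Affine int8 quantization: round to the grid, shift by zero_point, saturate."""
    return np.clip(np.round(x / scale) + zero_point, -128, 127).astype(np.int8)

def dequantize_int8(q, scale, zero_point):
    """Inverse mapping back to float32."""
    return (q.astype(np.float32) - zero_point) * scale

x = np.random.randn(4).astype(np.float32)
scale, zero_point = 0.05, 0
q = quantize_int8(x, scale, zero_point)
x_hat = dequantize_int8(q, scale, zero_point)
print(np.abs(x - x_hat).max())   # typically bounded by ~scale / 2 unless clipped
```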
ONNX diagnostic tool
* Final
* Add forgotten Normalize layer to the set of supported types
* ONNX diagnostic tool corrections
* Fixed CI test warnings
* Added minor code corrections
Co-authored-by: Sergey Slashchinin <sergei.slashchinin@xperience.ai>