Commit Graph

53 Commits

Author SHA1 Message Date
Dmitry Kurtaev
8ad5eb521a
Merge pull request #24120 from dkurt:actualize_dnn_links
OCL_FP16 MatMul with large batch

* Workaround FP16 MatMul with large batch

* Fix OCL reinitialization

* Higher thresholds for INT8 quantization

* Try fix gemm_buffer_NT for half (columns)

* Fix GEMM by rows

* Add batch dimension to InnerProduct layer test

* Fix Test_ONNX_conformance.Layer_Test/test_basic_conv_with_padding

* Batch 16

* Replace all vload4

* Version suffix for MobileNetSSD_deploy Caffe model
2023-08-16 15:46:11 +03:00
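A minimal sketch (not part of this PR) of the code path the workaround above targets: forcing the built-in OpenCL FP16 target and pushing a batched input through a network. The model file name and input shape are placeholders.

```cpp
#include <vector>
#include <opencv2/dnn.hpp>

int main()
{
    // Placeholder ONNX model containing a MatMul/InnerProduct layer.
    cv::dnn::Net net = cv::dnn::readNetFromONNX("model.onnx");
    net.setPreferableBackend(cv::dnn::DNN_BACKEND_OPENCV);
    net.setPreferableTarget(cv::dnn::DNN_TARGET_OPENCL_FP16);

    // Batch of 16 rows, matching the batch dimension added to the InnerProduct test.
    cv::Mat input(std::vector<int>{16, 256}, CV_32F, cv::Scalar(1.0f));
    net.setInput(input);
    cv::Mat out = net.forward();
    return 0;
}
```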
Maksim Shabunin
d35fbe6bfc dnn: updated YOLOv4-tiny model and tests 2022-12-22 15:49:21 +03:00
Zihao Mu
0a650b573b
Merge pull request #22840 from zihaomu:optimze_conv_memory_usage
DNN: reduce the memory used in convolution layer

* reduce the memory used in Winograd and disable the test when memory usage is larger than 2 GB.

* remove VERY_LOG tag
2022-12-08 12:57:13 +00:00
Alexander Alekhin
624d532000 Merge remote-tracking branch 'upstream/3.4' into merge-3.4 2020-12-17 21:05:34 +00:00
Alexander Alekhin
28aab134db dnn(test): update tests for OpenVINO 2021.2 2020-12-17 07:53:35 +00:00
Omar Alzaibaq
a316b11aaa
Merge pull request #18220 from Omar-AE:hddl-supported
* added HDDL VPU support

* changed to return True in one line if any device is connected

* dnn: use releaseHDDLPlugin()

* dnn(hddl): fix conditions
2020-11-17 19:47:24 +00:00
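For reference, routing an already-loaded network to an HDDL device uses the target added in this PR; a minimal sketch, assuming the current OpenCV enum names:

```cpp
#include <opencv2/dnn.hpp>

// Run inference through Intel's Inference Engine on an HDDL VPU.
void useHDDL(cv::dnn::Net& net)
{
    net.setPreferableBackend(cv::dnn::DNN_BACKEND_INFERENCE_ENGINE);
    net.setPreferableTarget(cv::dnn::DNN_TARGET_HDDL);
}
```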
Alexander Alekhin
6da05f7086 dnn(test): update tests for OpenVINO 2021.1 2020-10-08 10:22:31 +00:00
Alexander Alekhin
1c371d07b5 dnn(test): adjust tests for OpenVINO 2020.4 2020-07-15 23:47:40 +00:00
Alexander Alekhin
99c4b76a6d dnn(test): add YOLOv4-tiny tests 2020-07-06 21:36:19 +00:00
Alexander Alekhin
e58e545584 Merge pull request #17392 from alalek:dnn_test_yolov4 2020-05-28 22:52:21 +00:00
Dmitry Kurtaev
d9bada9867 dnn: EfficientDet 2020-05-28 17:23:42 +03:00
Alexander Alekhin
6b89154afd dnn(test): add YOLOv4 tests 2020-05-28 13:27:40 +00:00
Dmitry Kurtaev
d8e10f3a8d Enable MaxPooling with indices in Inference Engine 2019-12-04 19:14:55 +03:00
Lubov Batanina
7523c777c5 Merge pull request #15537 from l-bat:ngraph
* Support nGraph

* Fix resize
2019-12-02 16:16:06 +03:00
Dmitry Kurtaev
6193e403e7 Enable some tests for 2019R2 2019-08-07 09:07:53 +03:00
Dmitry Kurtaev
a0c3bb70a9 Modify SSD from TensorFlow graph generation script to enable MyriadX 2019-07-26 13:57:08 +03:00
Alexander Alekhin
416c693b3f dnn(test): OpenVINO 2019R2 2019-07-25 19:01:16 +03:00
Alexander Alekhin
13a782c039 test: fix usage of findDataFile()
misused 'optional' mode
2019-06-20 18:20:14 +03:00
Dmitry Kurtaev
9c0af1f675 Enable more deconvolution layer configurations with IE backend 2019-06-03 08:15:52 +03:00
Dmitry Kurtaev
44d21e5a79 Enable Slice layer on Inference Engine backend 2019-05-27 16:28:01 +03:00
Alexander Alekhin
cafa010389 dnn(test): skip tests 2019-04-03 17:49:05 +03:00
Lubov Batanina
7d3d6bc4e2 Merge pull request #13932 from l-bat:MyriadX_master_dldt
* Fix precision in tests for MyriadX

* Fix ONNX tests

* Add output range in ONNX tests

* Skip tests on Myriad OpenVINO 2018R5

* Add detect MyriadX

* Add detect MyriadX on OpenVINO R5

* Skip tests on Myriad next version of OpenVINO

* dnn(ie): VPU type from environment variable

* dnn(test): validate VPU type

* dnn(test): update DLIE test skip conditions
2019-03-29 16:42:58 +03:00
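A sketch of how a test helper might pick up the VPU type from the environment, as the last bullets describe; the variable name OPENCV_DNN_IE_VPU_TYPE is an assumption here, not taken from the commit itself.

```cpp
#include <cstdlib>
#include <string>

// Returns the VPU type requested via the environment,
// or an empty string when the tests should detect it themselves.
static std::string requestedVPUType()
{
    const char* v = std::getenv("OPENCV_DNN_IE_VPU_TYPE");  // assumed variable name
    return v ? std::string(v) : std::string();
}
```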
Dmitry Kurtaev
ed710eaa1c Make Inference Engine R3 as a minimal supported version 2019-02-21 09:32:26 +03:00
Liubov Batanina
183c0fcab1 Changed condition for resize and lrn layers 2019-02-14 13:11:14 +03:00
Dmitry Kurtaev
f0ddf302b2 Move Inference Engine to new API 2019-01-17 14:28:48 +03:00
Maksim Shabunin
fe459c82e5 Merge pull request #13332 from mshabunin:dnn-backends
DNN backends registry (#13332)

* Added dnn backends registry

* dnn: process DLIE/FPGA target
2018-12-05 18:11:45 +03:00
Dmitry Kurtaev
0d117312c9 DNN_TARGET_FPGA using Intel's Inference Engine 2018-11-19 11:41:43 +03:00
Alexander Alekhin
96c71dd3d2 dnn: reduce set of ignored warnings 2018-11-15 13:15:59 +03:00
Alexander Alekhin
c557193b8c dnn(test): use dnnBackendsAndTargets() param generator 2018-08-31 15:11:58 +03:00
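In the same spirit as the dnnBackendsAndTargets() generator mentioned above, a self-contained sketch of a GoogleTest suite parameterized over (backend, target) pairs; the pairs listed are illustrative, not the generator's real output.

```cpp
#include <tuple>
#include <opencv2/dnn.hpp>
#include <gtest/gtest.h>

typedef std::tuple<cv::dnn::Backend, cv::dnn::Target> BackendTarget;

class LayerForward : public ::testing::TestWithParam<BackendTarget> {};

TEST_P(LayerForward, Smoke)
{
    cv::dnn::Backend backend = std::get<0>(GetParam());
    cv::dnn::Target  target  = std::get<1>(GetParam());
    (void)backend; (void)target;
    // A real test builds a small net here and calls
    // net.setPreferableBackend(backend) / net.setPreferableTarget(target).
}

INSTANTIATE_TEST_CASE_P(DNN, LayerForward, ::testing::Values(
    BackendTarget(cv::dnn::DNN_BACKEND_OPENCV, cv::dnn::DNN_TARGET_CPU),
    BackendTarget(cv::dnn::DNN_BACKEND_OPENCV, cv::dnn::DNN_TARGET_OPENCL)));
```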
Dmitry Kurtaev
8e034053af Faster-RCNN from TensorFlow on CPU with Intel's Inference Engine backend 2018-08-01 11:29:58 +03:00
Dmitry Kurtaev
2c291bc2fb Enable FastNeuralStyle and OpenFace networks with IE backend 2018-06-09 15:57:12 +03:00
Dmitry Kurtaev
40765c5f8d Enable SSD models from TensorFlow with OpenCL plugin of Intel's Inference Engine 2018-06-08 16:55:21 +03:00
David
7175f257b5 Added ResizeBilinear op for tf (#11050)
* Added ResizeBilinear op for tf

Combined ResizeNearestNeighbor and ResizeBilinear layers into Resize (with an interpolation param).

Minor changes to tf_importer and resize layer to save some code lines

Minor changes in init.cpp

Minor changes in tf_importer.cpp

* Replaced the custom ResizeBilinear layer implementation with the shared Resize layer

* Use Mat::ptr. Replace interpolation flags
2018-06-07 16:29:04 +03:00
Dmitry Kurtaev
f3a6ae5f00 Wrap Inference Engine init to try-catch 2018-06-07 12:55:52 +03:00
Vadim Pisarevsky
3cbd2e2764 Merge pull request #11650 from dkurt:dnn_default_backend 2018-06-06 09:30:39 +00:00
Alexander Alekhin
6816495bee dnn(test): reuse test/test_common.hpp, eliminate dead code warning 2018-06-05 12:52:53 +03:00
Dmitry Kurtaev
b781ac7346 Make Intel's Inference Engine backend the default if no preferable backend is specified. 2018-06-04 18:31:46 +03:00
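With Inference Engine picked automatically when available, an application that needs a specific backend still pins it explicitly; a minimal sketch:

```cpp
#include <opencv2/dnn.hpp>

// Overrides the automatically chosen backend with the built-in one.
void forceBuiltinBackend(cv::dnn::Net& net)
{
    net.setPreferableBackend(cv::dnn::DNN_BACKEND_OPENCV);
    net.setPreferableTarget(cv::dnn::DNN_TARGET_CPU);
}
```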
Dmitry Kurtaev
f96f934426 Update Intel's Inference Engine deep learning backend (#11587)
* Update Intel's Inference Engine deep learning backend

* Remove cpu_extension dependency

* Update Darknet accuracy tests
2018-05-31 14:05:21 +03:00
Li Peng
1b517a45ae add fp16 accuracy and perf test
Signed-off-by: Li Peng <peng.li@intel.com>
2018-05-16 22:45:07 +08:00
Dmitry Kurtaev
bd77d100e1 Enable some tests for clDNN plugin from Intel's Inference Engine 2018-04-20 10:47:46 +03:00
Dmitry Kurtaev
97fec07d96 Support YOLOv3 model from Darknet 2018-04-16 18:44:12 +03:00
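Loading a YOLOv3 model as enabled by this commit; the file names are placeholders for the usual Darknet config/weights pair.

```cpp
#include <opencv2/dnn.hpp>

cv::dnn::Net loadYOLOv3()
{
    return cv::dnn::readNetFromDarknet("yolov3.cfg", "yolov3.weights");
}
```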
Dmitry Kurtaev
709cf5d038 OpenCL GPU target for Inference Engine deep learning backend
Enable FP16 GPU target for DL Inference Engine backend.
2018-04-09 17:21:35 +03:00
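Selecting the GPU targets this commit enables for the Inference Engine backend; a short sketch using the current enum names:

```cpp
#include <opencv2/dnn.hpp>

// Run an Inference Engine network on the GPU in half precision.
void useIEGpuFP16(cv::dnn::Net& net)
{
    net.setPreferableBackend(cv::dnn::DNN_BACKEND_INFERENCE_ENGINE);
    net.setPreferableTarget(cv::dnn::DNN_TARGET_OPENCL_FP16);
}
```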
Dmitry Kurtaev
7972f47ed4 Load networks from the intermediate representation of Intel's Deep Learning Deployment Toolkit. 2018-03-26 07:24:21 +03:00
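Loading such an IR pair directly; the file names are placeholders:

```cpp
#include <opencv2/dnn.hpp>

// Reads a network exported by the Model Optimizer (IR .xml/.bin pair).
cv::dnn::Net loadFromIR()
{
    return cv::dnn::readNetFromModelOptimizer("model.xml", "model.bin");
}
```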
Dmitry Kurtaev
7fe97376c2 MobileNet-SSD from TensorFlow 1.3 and Inception-V2-SSD using Inference Engine backend 2018-02-09 13:45:45 +03:00
Dmitry Kurtaev
ed94136548 OpenCV face detection network using Inference Engine backend 2018-02-06 17:53:24 +03:00
Alexander Alekhin
2a1f46c42d Merge pull request #9770 from alalek:refactor_test_files 2018-02-06 09:33:58 +00:00
Dmitry Kurtaev
10e1de74d2 Intel Inference Engine deep learning backend (#10608)
* Intel Inference Engine deep learning backend.

* OpenFace network using Inference Engine backend
2018-02-06 11:57:35 +03:00
Alexander Alekhin
4a297a2443 ts: refactor OpenCV tests
- removed tr1 usage (dropped in C++17)
- moved includes of vector/map/iostream/limits into ts.hpp
- require opencv_test + anonymous namespace (added compile check)
- fixed norm() usage (must be from cvtest::norm for checks) and other conflict functions
- added missing license headers
2018-02-03 19:39:47 +00:00
Dmitry Kurtaev
6aabd6cc7a Remove cv::dnn::Importer 2017-12-18 18:08:28 +03:00
Dmitry Kurtaev
ef0650179b Fix conv/deconv/fc layers FLOPS computation 2017-12-07 11:42:04 +03:00