Lubov Batanina
60a841c797
Merge pull request #14255 from l-bat:networks_visualization
* Add networks visualization
* Disable CXX11
* Fixed multi-input support
* Added output shapes
* Added color for DLIE/CPU
* Fixed graph colors
2019-04-12 19:31:07 +03:00
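For orientation, a hedged sketch of using the visualization added here (Net::dump() and Net::dumpToFile() emit a DOT description of the graph, colored per backend/target; the Caffe model files below are placeholders, and dump() is called after setInput()):

```cpp
#include <opencv2/dnn.hpp>
#include <iostream>

int main()
{
    // Placeholder model files; any importer works the same way.
    cv::dnn::Net net = cv::dnn::readNetFromCaffe("bvlc_googlenet.prototxt",
                                                 "bvlc_googlenet.caffemodel");
    cv::Mat img(224, 224, CV_8UC3, cv::Scalar::all(0));            // dummy frame
    net.setInput(cv::dnn::blobFromImage(img, 1.0, cv::Size(224, 224)));

    std::cout << net.dump() << std::endl;   // DOT text describing the network
    net.dumpToFile("net.dot");              // same text written to a file for graphviz
    return 0;
}
```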
Dmitry Kurtaev
a2bbfa1db5
Enable some tests for Inference Engine 2019R1
2019-04-12 15:21:42 +03:00
Alexander Alekhin
c300070b8a
Merge pull request #14241 from alalek:openvino_2019R1
2019-04-03 19:26:46 +00:00
103yiran
4bb6edf176
Merge pull request #14117 from 103yiran:103yiran-patch-dnn
* Postpone variable definitions
* dnn: reduce scope of 'Mat image' variable
2019-04-03 22:13:11 +03:00
Alexander Alekhin
8483801eab
dnn: use OpenVINO 2019R1 defines
2019-04-03 15:39:47 +03:00
Lubov Batanina
7d3d6bc4e2
Merge pull request #13932 from l-bat:MyriadX_master_dldt
* Fix precision in tests for MyriadX
* Fix ONNX tests
* Add output range in ONNX tests
* Skip tests on Myriad OpenVINO 2018R5
* Add detect MyriadX
* Add detect MyriadX on OpenVINO R5
* Skip tests on Myriad next version of OpenVINO
* dnn(ie): VPU type from environment variable
* dnn(test): validate VPU type
* dnn(test): update DLIE test skip conditions
2019-03-29 16:42:58 +03:00
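For context, a hedged sketch of pointing a network at a Myriad/MyriadX VPU through the Inference Engine backend, which is the configuration these test skips are about (IR file names are placeholders):

```cpp
#include <opencv2/dnn.hpp>

int main()
{
    // Placeholder OpenVINO IR files produced by the Model Optimizer.
    cv::dnn::Net net = cv::dnn::readNetFromModelOptimizer("model.xml", "model.bin");
    net.setPreferableBackend(cv::dnn::DNN_BACKEND_INFERENCE_ENGINE);
    net.setPreferableTarget(cv::dnn::DNN_TARGET_MYRIAD);  // VPU target (FP16 models)
    // setInput()/forward() as usual; the tests above skip cases where the
    // detected VPU generation (Myriad 2 vs. MyriadX) cannot run the network.
    return 0;
}
```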
Dmitry Kurtaev
ca5976e3d4
Fix IE backend considering future changes.
2019-02-18 19:26:04 +03:00
Dmitry Kurtaev
0711dab09d
Fix Intel's Inference Engine backend for future IE changes. Second try.
2019-02-11 19:47:57 +03:00
Alexander Alekhin
3585522b24
Merge pull request #13692 from dkurt:dnn_do_not_crash_myriad_in_tests
2019-01-28 18:34:20 +00:00
Dmitry Kurtaev
3c3c5ef2b6
Fix a dnn bug with retrieving all the output blobs
2019-01-28 18:48:56 +03:00
Dmitry Kurtaev
ff775b2e54
Remove ASSERT_ANY_THROW checks for Myriad plugin and FP32 networks
2019-01-25 20:09:54 +03:00
Alexander Nesterov
97c3bcb1b7
Added fix for other size
2019-01-24 12:51:16 -01:00
Dmitry Kurtaev
f0ddf302b2
Move Inference Engine to new API
2019-01-17 14:28:48 +03:00
Alexander Alekhin
9ff1c39daa
dnn: fixup available backends/targets
2018-12-05 19:19:17 +03:00
Maksim Shabunin
fe459c82e5
Merge pull request #13332 from mshabunin:dnn-backends
DNN backends registry (#13332)
* Added dnn backends registry
* dnn: process DLIE/FPGA target
2018-12-05 18:11:45 +03:00
Dmitry Kurtaev
0d117312c9
DNN_TARGET_FPGA using Intel's Inference Engine
2018-11-19 11:41:43 +03:00
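A short, hedged sketch of selecting the new FPGA target; as with the other Inference Engine targets it is a per-network preference (IR file names are placeholders):

```cpp
#include <opencv2/dnn.hpp>

int main()
{
    // Placeholder OpenVINO IR files.
    cv::dnn::Net net = cv::dnn::readNetFromModelOptimizer("model.xml", "model.bin");
    net.setPreferableBackend(cv::dnn::DNN_BACKEND_INFERENCE_ENGINE);
    net.setPreferableTarget(cv::dnn::DNN_TARGET_FPGA);   // target introduced by this change
    return 0;
}
```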
Alexander Alekhin
f2bec05e6d
Merge pull request #12913 from dkurt:dnn_fix_ie_hyperparams
2018-11-16 18:36:12 +00:00
Dmitry Kurtaev
b5c54e447c
Extra hyperparameters for Intel's Inference Engine layers
2018-11-15 20:06:37 +03:00
Alexander Alekhin
96c71dd3d2
dnn: reduce set of ignored warnings
2018-11-15 13:15:59 +03:00
Dmitry Kurtaev
80265a0815
Fix a bug with OpenVINO backend
2018-11-14 13:42:06 +03:00
Alexander Alekhin
801c943009
fix coverity reports
2018-11-11 13:51:47 +00:00
Alexander Alekhin
0c261acf3a
Merge pull request #13065 from dkurt:dnn_update_tf_faster_rcnn
2018-11-08 16:31:39 +00:00
Dmitry Kurtaev
dc9e6d3af8
Update a script to generate text graphs for Faster-RCNN networks from TensorFlow
2018-11-07 18:33:01 +03:00
catree
eebf0dd7c9
Fix integer overflow when accumulating timing values.
2018-11-07 13:04:48 +01:00
Dmitry Kurtaev
dc3406eed9
Fix Pooling and Convolution layers from Intel's Inference Engine
2018-10-15 16:40:28 +03:00
Alexander Alekhin
26ba4f3c1d
Merge pull request #12754 from alalek:dnn_ocl4dnn_async_expressions
2018-10-08 15:22:24 +00:00
Alexander Alekhin
634dd656d5
dnn: don't use Mat expressions with async UMat functions
2018-10-05 17:09:50 +03:00
Alexander Alekhin
9d02d42afe
dnn(ocl4dnn): don't use getUMat()
especially in CPU-only processing
2018-10-05 15:24:51 +03:00
Dmitry Kurtaev
24ab751547
Merge pull request #12565 from dkurt:dnn_non_intel_gpu
* Remove isIntel check from deep learning layers
* Remove fp16->fp32 fallbacks where it's not necessary
* Fix Kernel::run to prevent localsize > globalsize
2018-09-26 16:27:00 +03:00
Dmitry Kurtaev
f8398d80bc
add Net::getUnconnectedOutLayersNames method
2018-09-25 18:10:45 +03:00
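A minimal sketch of the new method in use: it returns the names of layers with unconnected outputs, i.e. the real network outputs, which is convenient for multi-output detectors (model files below are placeholders):

```cpp
#include <opencv2/dnn.hpp>
#include <vector>

int main()
{
    // Placeholder Darknet model files.
    cv::dnn::Net net = cv::dnn::readNetFromDarknet("yolov3.cfg", "yolov3.weights");

    cv::Mat img(416, 416, CV_8UC3, cv::Scalar::all(0));           // dummy frame
    net.setInput(cv::dnn::blobFromImage(img, 1 / 255.0, cv::Size(416, 416),
                                        cv::Scalar(), /*swapRB=*/true));

    std::vector<cv::String> outNames = net.getUnconnectedOutLayersNames();
    std::vector<cv::Mat> outs;
    net.forward(outs, outNames);   // one Mat per output layer
    return 0;
}
```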
Lubov Batanina
0c8590027f
Merge pull request #12071 from l-bat/l-bat:onnx_parser
* Add Squeezenet support in ONNX
* Add AlexNet support in ONNX
* Add Googlenet support in ONNX
* Add CaffeNet and RCNN support in ONNX
* Add VGG16 and VGG16 with batch normalization support in ONNX
* Add RCNN, ZFNet, ResNet18v1 and ResNet50v1 support in ONNX
* Add ResNet101_DUC_HDC
* Add Tiny Yolov2
* Add CNN_MNIST, MobileNetv2 and LResNet100 support in ONNX
* Add ONNX models for emotion recognition
* Add DenseNet121 support in ONNX
* Add Inception v1 support in ONNX
* Refactoring
* Fix tests
* Fix tests
* Skip unstable test
* Modify Reshape operation
2018-09-10 21:07:51 +03:00
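A hedged usage sketch of the ONNX importer added here (the model file is a placeholder for any of the networks listed above):

```cpp
#include <opencv2/dnn.hpp>

int main()
{
    cv::dnn::Net net = cv::dnn::readNetFromONNX("resnet18v1.onnx");   // placeholder file

    cv::Mat img(224, 224, CV_8UC3, cv::Scalar::all(0));               // dummy image
    net.setInput(cv::dnn::blobFromImage(img, 1.0, cv::Size(224, 224)));
    cv::Mat prob = net.forward();                                     // class scores
    return 0;
}
```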
Hamdi Sahloul
a39e0daacf
Utilize CV_UNUSED macro
2018-09-07 20:33:52 +09:00
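For reference, CV_UNUSED marks a deliberately unused variable so warning-clean builds stay quiet; a tiny sketch with a hypothetical helper:

```cpp
#include <opencv2/core.hpp>

// Hypothetical helper: 'flags' only matters in some build configurations,
// so CV_UNUSED silences -Wunused-parameter without deleting the name.
int processFrame(const cv::Mat& frame, int flags)
{
    CV_UNUSED(flags);
    return frame.empty() ? 0 : frame.rows * frame.cols;
}

int main()
{
    cv::Mat m(2, 2, CV_8UC1, cv::Scalar::all(1));
    return processFrame(m, 0) == 4 ? 0 : 1;
}
```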
Dmitry Kurtaev
d486204a0d
Merge pull request #12264 from dkurt:dnn_remove_forward_method
* Remove a forward method in dnn::Layer
* Add a test
* Fix tests
* Mark multiple dnn::Layer::finalize methods as deprecated
* Replace back dnn's inputBlobs to vector of pointers
* Remove Layer::forward_fallback from CV_OCL_RUN scopes
2018-09-06 13:26:47 +03:00
Dmitry Kurtaev
27a6be8763
Fix #12407
2018-09-04 17:48:52 +03:00
Dmitry Kurtaev
50bceea038
Include preprocessing nodes to object detection TensorFlow networks (#12211)
* Include preprocessing nodes to object detection TensorFlow networks
* Enable more fusion
* faster_rcnn_resnet50_coco_2018_01_28 test
2018-08-31 15:41:56 +03:00
Dmitry Kurtaev
3e027df583
Enable more deep learning tests using Intel's Inference Engine backend
2018-08-27 18:37:35 +03:00
Alexander Alekhin
096366738b
dnn(build): fix CV_Assert() usage
2018-08-22 16:04:40 +03:00
Alexander Alekhin
d2e08a524e
core: repair CV_Assert() messages
Multi-argument CV_Assert() is accessible via CV_Assert_N() (with malformed messages).
2018-08-15 17:43:10 +03:00
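A small sketch of the two assertion forms after this repair (the blob check below is hypothetical):

```cpp
#include <opencv2/core.hpp>

// Hypothetical check as it might appear in a layer implementation.
void checkBlob(const cv::Mat& blob, int expectedChannels)
{
    // Single condition: the failure message reproduces the expression.
    CV_Assert(!blob.empty());

    // Several conditions at once go through CV_Assert_N
    // (per the note above, its messages are less precise).
    CV_Assert_N(blob.dims == 4,
                blob.size[1] == expectedChannels,
                blob.type() == CV_32F);
}

int main()
{
    int sz[] = {1, 3, 224, 224};
    cv::Mat blob(4, sz, CV_32F, cv::Scalar::all(0));
    checkBlob(blob, 3);
    return 0;
}
```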
Vadim Pisarevsky
70b893333d
Merge pull request #12130 from dkurt:dnn_ie_mvn
2018-08-06 14:37:46 +00:00
Dmitry Kurtaev
be08730cd6
MVN layer using Intel's Inference Engine backend
2018-08-02 17:49:03 +03:00
Dmitry Kurtaev
8e034053af
Faster-RCNN from TensorFlow on CPU with Intel's Inference Engine backend
2018-08-01 11:29:58 +03:00
Alexander Alekhin
9137e2d635
Merge pull request #12060 from alalek:dnn_debug_layers
2018-07-26 15:14:32 +00:00
Dmitry Kurtaev
faa6c4e1e1
Faster-RCNN and RFCN models on CPU using Intel's Inference Engine backend.
Enable Torch layers tests with Intel's Inference Engine backend.
2018-07-25 19:04:55 +03:00
Alexander Alekhin
45b5b3c13a
dnn: check layer output for NaN/Inf
2018-07-25 16:25:18 +03:00
Dmitry Kurtaev
070393dfda
uint8 inputs for deep learning networks
2018-07-19 14:37:33 +03:00
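A hedged sketch of what 8-bit inputs enable, assuming the ddepth argument of blobFromImage and the setInput overload with scalefactor/mean from around this change (model files are placeholders):

```cpp
#include <opencv2/dnn.hpp>

int main()
{
    // Placeholder Caffe model files.
    cv::dnn::Net net = cv::dnn::readNetFromCaffe("deploy.prototxt", "model.caffemodel");

    cv::Mat img(224, 224, CV_8UC3, cv::Scalar::all(0));   // dummy frame
    // Keep the blob in CV_8U; no float conversion on the host side.
    cv::Mat blob = cv::dnn::blobFromImage(img, 1.0, cv::Size(224, 224), cv::Scalar(),
                                          /*swapRB=*/false, /*crop=*/false, /*ddepth=*/CV_8U);
    // Scaling and mean subtraction are applied inside the network instead.
    net.setInput(blob, "", /*scalefactor=*/1.0, /*mean=*/cv::Scalar(104, 117, 123));
    cv::Mat out = net.forward();
    return 0;
}
```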
Alexander Alekhin
6c4f618db5
Merge pull request #11104 from asciian:reading_from_stream
2018-07-17 16:24:06 +00:00
Li Peng
f0cadaa6e3
enable concat layer fuse for OCL target
Signed-off-by: Li Peng <peng.li@intel.com>
2018-07-17 12:46:16 +08:00
Dmitry Kurtaev
8b5f061dae
Replace std::vector<char> with std::vector<uchar> for Java bindings of dnn importers
2018-07-11 18:58:56 +03:00
Dmitry Kurtaev
d57e5406f0
Add readNet* functions which parse models from byte arrays
2018-07-10 11:12:01 +03:00
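A hedged sketch of the in-memory variants, useful when a model arrives over the network or from an encrypted store (file paths and the readFile helper are placeholders):

```cpp
#include <opencv2/dnn.hpp>
#include <fstream>
#include <iterator>
#include <string>
#include <vector>

// Hypothetical helper: load a whole file into a byte buffer.
static std::vector<uchar> readFile(const std::string& path)
{
    std::ifstream f(path, std::ios::binary);
    return std::vector<uchar>(std::istreambuf_iterator<char>(f),
                              std::istreambuf_iterator<char>());
}

int main()
{
    std::vector<uchar> proto   = readFile("deploy.prototxt");    // placeholder paths
    std::vector<uchar> weights = readFile("model.caffemodel");

    // Parse the model directly from memory rather than from files on disk.
    cv::dnn::Net net = cv::dnn::readNetFromCaffe(proto, weights);
    // Generic variant: cv::dnn::readNet("caffe", weights, proto);
    return 0;
}
```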
Dmitry Kurtaev
362d4f5395
Replace convertFp16 from dnn::Net::setInput()
2018-07-09 14:35:54 +03:00