Alexander Alekhin
9ff1c39daa
dnn: fixup available backends/targets
2018-12-05 19:19:17 +03:00
Maksim Shabunin
fe459c82e5
Merge pull request #13332 from mshabunin:dnn-backends
...
DNN backends registry (#13332)
* Added dnn backends registry
* dnn: process DLIE/FPGA target
2018-12-05 18:11:45 +03:00
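The registry introduced in #13332 is what cv::dnn::getAvailableBackends() and getAvailableTargets() query at runtime. A minimal sketch of enumerating the combinations a local build supports, assuming those two functions:

    // Sketch: list backend/target pairs registered in this build
    // (assumes cv::dnn::getAvailableBackends() / getAvailableTargets() from #13332).
    #include <opencv2/dnn.hpp>
    #include <iostream>

    int main()
    {
        std::vector<std::pair<cv::dnn::Backend, cv::dnn::Target> > pairs =
            cv::dnn::getAvailableBackends();
        for (size_t i = 0; i < pairs.size(); ++i)
            std::cout << "backend=" << pairs[i].first
                      << " target=" << pairs[i].second << std::endl;

        // Targets registered for the Inference Engine backend only.
        std::vector<cv::dnn::Target> ieTargets =
            cv::dnn::getAvailableTargets(cv::dnn::DNN_BACKEND_INFERENCE_ENGINE);
        std::cout << "IE targets: " << ieTargets.size() << std::endl;
        return 0;
    }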
Dmitry Kurtaev
0d117312c9
DNN_TARGET_FPGA using Intel's Inference Engine
2018-11-19 11:41:43 +03:00
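DNN_TARGET_FPGA is selected through the usual preferable backend/target pair; a hedged sketch (model file names are placeholders, and an Inference Engine build with FPGA support is assumed):

    // Sketch: request the FPGA target via the Inference Engine backend.
    // Paths are placeholders; requires an IE/OpenVINO build with FPGA support.
    #include <opencv2/dnn.hpp>

    cv::dnn::Net loadOnFpga()
    {
        cv::dnn::Net net = cv::dnn::readNet("model.xml", "model.bin");
        net.setPreferableBackend(cv::dnn::DNN_BACKEND_INFERENCE_ENGINE);
        net.setPreferableTarget(cv::dnn::DNN_TARGET_FPGA);
        return net;
    }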
Alexander Alekhin
f2bec05e6d
Merge pull request #12913 from dkurt:dnn_fix_ie_hyperparams
2018-11-16 18:36:12 +00:00
Dmitry Kurtaev
b5c54e447c
Extra hyperparameters for Intel's Inference Engine layers
2018-11-15 20:06:37 +03:00
Alexander Alekhin
96c71dd3d2
dnn: reduce set of ignored warnings
2018-11-15 13:15:59 +03:00
Dmitry Kurtaev
80265a0815
Fix a bug with OpenVINO backend
2018-11-14 13:42:06 +03:00
Alexander Alekhin
801c943009
fix coverity reports
2018-11-11 13:51:47 +00:00
Alexander Alekhin
0c261acf3a
Merge pull request #13065 from dkurt:dnn_update_tf_faster_rcnn
2018-11-08 16:31:39 +00:00
Dmitry Kurtaev
dc9e6d3af8
Update a script to generate text graphs for Faster-RCNN networks from TensorFlow
2018-11-07 18:33:01 +03:00
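The script (samples/dnn/tf_text_graph_faster_rcnn.py) emits a .pbtxt text graph that is loaded alongside the frozen .pb. A hedged sketch of the C++ loading side (file names are placeholders):

    // Sketch: load a TensorFlow Faster-RCNN model together with the text graph
    // produced by tf_text_graph_faster_rcnn.py (file names are placeholders).
    #include <opencv2/dnn.hpp>

    cv::dnn::Net loadFasterRcnn()
    {
        return cv::dnn::readNetFromTensorflow("frozen_inference_graph.pb",
                                              "faster_rcnn.pbtxt");
    }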
catree
eebf0dd7c9
Fix integer overflow when accumulating timing values.
2018-11-07 13:04:48 +01:00
Dmitry Kurtaev
dc3406eed9
Fix Pooling and Convolution layers from Intel's Inference Engine
2018-10-15 16:40:28 +03:00
Alexander Alekhin
26ba4f3c1d
Merge pull request #12754 from alalek:dnn_ocl4dnn_async_expressions
2018-10-08 15:22:24 +00:00
Alexander Alekhin
634dd656d5
dnn: don't use Mat expressions with async UMat functions
2018-10-05 17:09:50 +03:00
Alexander Alekhin
9d02d42afe
dnn(ocl4dnn): don't use getUMat()
...
especially in CPU-only processing
2018-10-05 15:24:51 +03:00
Dmitry Kurtaev
24ab751547
Merge pull request #12565 from dkurt:dnn_non_intel_gpu
...
* Remove isIntel check from deep learning layers
* Remove fp16->fp32 fallbacks where it's not necessary
* Fix Kernel::run to prevent localsize > globalsize
2018-09-26 16:27:00 +03:00
Dmitry Kurtaev
f8398d80bc
add Net::getUnconnectedOutLayersNames method
2018-09-25 18:10:45 +03:00
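getUnconnectedOutLayersNames() returns the names of the network's output layers, which pairs naturally with the multi-output forward() overload; a short sketch:

    // Sketch: forward through every unconnected (output) layer by name.
    #include <opencv2/dnn.hpp>
    #include <vector>

    std::vector<cv::Mat> runAllOutputs(cv::dnn::Net& net, const cv::Mat& blob)
    {
        net.setInput(blob);
        std::vector<cv::String> outNames = net.getUnconnectedOutLayersNames();
        std::vector<cv::Mat> outs;
        net.forward(outs, outNames);
        return outs;
    }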
Lubov Batanina
0c8590027f
Merge pull request #12071 from l-bat:onnx_parser
...
* Add Squeezenet support in ONNX
* Add AlexNet support in ONNX
* Add Googlenet support in ONNX
* Add CaffeNet and RCNN support in ONNX
* Add VGG16 and VGG16 with batch normalization support in ONNX
* Add RCNN, ZFNet, ResNet18v1 and ResNet50v1 support in ONNX
* Add ResNet101_DUC_HDC
* Add Tiny Yolov2
* Add CNN_MNIST, MobileNetv2 and LResNet100 support in ONNX
* Add ONNX models for emotion recognition
* Add DenseNet121 support in ONNX
* Add Inception v1 support in ONNX
* Refactoring
* Fix tests
* Fix tests
* Skip unstable test
* Modify Reshape operation
2018-09-10 21:07:51 +03:00
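The ONNX importer is exposed through readNetFromONNX(); a hedged classification sketch (model file, input size and scaling are illustrative placeholders for whichever of the listed models is used):

    // Sketch: run an ONNX classification model such as SqueezeNet.
    // File name, input size and preprocessing are illustrative placeholders.
    #include <opencv2/dnn.hpp>

    cv::Mat classifyOnnx(const cv::Mat& image)
    {
        cv::dnn::Net net = cv::dnn::readNetFromONNX("squeezenet.onnx");
        cv::Mat blob = cv::dnn::blobFromImage(image, 1.0 / 255, cv::Size(224, 224),
                                              cv::Scalar(), true /*swapRB*/, false /*crop*/);
        net.setInput(blob);
        return net.forward();  // 1xN row of class scores
    }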
Hamdi Sahloul
a39e0daacf
Utilize CV_UNUSED macro
2018-09-07 20:33:52 +09:00
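CV_UNUSED marks an argument as intentionally unused, which keeps -Wunused-parameter quiet when a code path is compiled out; a tiny sketch (the function itself is illustrative):

    // Sketch: silence an unused-parameter warning without dropping the argument.
    #include <opencv2/core.hpp>

    void process(int id, void* reserved)
    {
        CV_UNUSED(reserved);  // no-op "use" of the variable
        CV_Assert(id >= 0);
    }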
Dmitry Kurtaev
d486204a0d
Merge pull request #12264 from dkurt:dnn_remove_forward_method
...
* Remove a forward method in dnn::Layer
* Add a test
* Fix tests
* Mark multiple dnn::Layer::finalize methods as deprecated
* Replace back dnn's inputBlobs to vector of pointers
* Remove Layer::forward_fallback from CV_OCL_RUN scopes
2018-09-06 13:26:47 +03:00
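After #12264, custom layers override the InputArrayOfArrays/OutputArrayOfArrays form of forward(); a hedged skeleton of such a layer (the class name and pass-through behaviour are illustrative only):

    // Sketch: minimal custom layer using the forward() overload kept by #12264.
    // The layer only copies its first input to its first output.
    #include <opencv2/dnn.hpp>
    #include <vector>

    class CopyLayer CV_FINAL : public cv::dnn::Layer
    {
    public:
        CopyLayer(const cv::dnn::LayerParams& params) : Layer(params) {}

        static cv::Ptr<cv::dnn::Layer> create(cv::dnn::LayerParams& params)
        {
            return cv::Ptr<cv::dnn::Layer>(new CopyLayer(params));
        }

        void forward(cv::InputArrayOfArrays inputs_arr, cv::OutputArrayOfArrays outputs_arr,
                     cv::OutputArrayOfArrays internals_arr) CV_OVERRIDE
        {
            CV_UNUSED(internals_arr);
            std::vector<cv::Mat> inputs, outputs;
            inputs_arr.getMatVector(inputs);
            outputs_arr.getMatVector(outputs);
            inputs[0].copyTo(outputs[0]);
        }
    };

    // Registration under an illustrative type name would use
    // CV_DNN_REGISTER_LAYER_CLASS(Copy, CopyLayer) from opencv2/dnn/layer.details.hpp.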
Dmitry Kurtaev
27a6be8763
Fix #12407
2018-09-04 17:48:52 +03:00
Dmitry Kurtaev
50bceea038
Include preprocessing nodes into object detection TensorFlow networks (#12211)
...
* Include preprocessing nodes into object detection TensorFlow networks
* Enable more fusion
* faster_rcnn_resnet50_coco_2018_01_28 test
2018-08-31 15:41:56 +03:00
Dmitry Kurtaev
3e027df583
Enable more deep learning tests using Intel's Inference Engine backend
2018-08-27 18:37:35 +03:00
Alexander Alekhin
096366738b
dnn(build): fix CV_Assert() usage
2018-08-22 16:04:40 +03:00
Alexander Alekhin
d2e08a524e
core: repair CV_Assert() messages
...
Multi-argument CV_Assert() is accessible via CV_Assert_N() (with malformed messages).
2018-08-15 17:43:10 +03:00
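In practice that means one condition per CV_Assert() call, with CV_Assert_N() as the explicit multi-argument form; a short sketch:

    // Sketch: CV_Assert() takes a single condition; CV_Assert_N() chains several.
    #include <opencv2/core.hpp>

    void checkBlob(const cv::Mat& blob)
    {
        CV_Assert(!blob.empty());
        CV_Assert_N(blob.dims == 4, blob.type() == CV_32F);
    }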
Vadim Pisarevsky
70b893333d
Merge pull request #12130 from dkurt:dnn_ie_mvn
2018-08-06 14:37:46 +00:00
Dmitry Kurtaev
be08730cd6
MVN layer using Intel's Inference Engine backend
2018-08-02 17:49:03 +03:00
Dmitry Kurtaev
8e034053af
Faster-RCNN from TensorFlow on CPU with Intel's Inference Engine backend
2018-08-01 11:29:58 +03:00
Alexander Alekhin
9137e2d635
Merge pull request #12060 from alalek:dnn_debug_layers
2018-07-26 15:14:32 +00:00
Dmitry Kurtaev
faa6c4e1e1
Faster-RCNN and RFCN models on CPU using Intel's Inference Engine backend.
...
Enable Torch layers tests with Intel's Inference Engine backend.
2018-07-25 19:04:55 +03:00
Alexander Alekhin
45b5b3c13a
dnn: check layer output for NaN/Inf
2018-07-25 16:25:18 +03:00
Dmitry Kurtaev
070393dfda
uint8 inputs for deep learning networks
2018-07-19 14:37:33 +03:00
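uint8 input support lets blobFromImage keep CV_8U data, with scaling and mean subtraction deferred to setInput(); a hedged sketch (size, scale and mean are placeholders, not any specific model's values):

    // Sketch: build an 8-bit blob and let setInput() apply scale/mean on the fly.
    #include <opencv2/dnn.hpp>

    void feedUint8(cv::dnn::Net& net, const cv::Mat& frame)
    {
        cv::Mat blob = cv::dnn::blobFromImage(frame, 1.0, cv::Size(300, 300),
                                              cv::Scalar(), true /*swapRB*/, false /*crop*/,
                                              CV_8U /*ddepth*/);
        net.setInput(blob, "", 1.0 / 127.5, cv::Scalar(127.5, 127.5, 127.5));
    }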
Alexander Alekhin
6c4f618db5
Merge pull request #11104 from asciian:reading_from_stream
2018-07-17 16:24:06 +00:00
Li Peng
f0cadaa6e3
enable concat layer fuse for OCL target
...
Signed-off-by: Li Peng <peng.li@intel.com>
2018-07-17 12:46:16 +08:00
Dmitry Kurtaev
8b5f061dae
Replace std::vector<char> with std::vector<uchar> for Java bindings of dnn importers
2018-07-11 18:58:56 +03:00
Dmitry Kurtaev
d57e5406f0
Add readNet* functions which parse models from byte arrays
2018-07-10 11:12:01 +03:00
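The buffer-based readNet* overloads parse models already held in memory (downloaded, embedded, read from a stream) rather than on disk; a hedged sketch:

    // Sketch: parse a Caffe model from in-memory buffers; how the buffers are
    // filled (download, embedded resource, stream) is up to the caller.
    #include <opencv2/dnn.hpp>
    #include <vector>

    cv::dnn::Net loadFromBuffers(const std::vector<uchar>& prototxt,
                                 const std::vector<uchar>& caffemodel)
    {
        return cv::dnn::readNetFromCaffe(prototxt, caffemodel);
    }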
Dmitry Kurtaev
362d4f5395
Remove convertFp16 from dnn::Net::setInput()
2018-07-09 14:35:54 +03:00
Vadim Pisarevsky
523b6f32ba
Merge pull request #11867 from dkurt:dnn_ie_layers
2018-07-06 13:13:20 +00:00
Dmitry Kurtaev
019c2f2115
Enable more deep learning tests
2018-07-05 14:23:15 +03:00
Dmitry Kurtaev
f25a01bb5a
Disable fusion to output layers
2018-07-04 15:53:47 +03:00
Alexander Alekhin
f40231af5d
Merge pull request #11851 from pengli:3.4
2018-06-29 15:01:20 +00:00
Li Peng
145eae321e
pooling ocl kernel optimization
...
Set the global size to the real output size; also optimize max pooling index computation where necessary.
Signed-off-by: Li Peng <peng.li@intel.com>
2018-06-29 15:22:49 +08:00
Dmitry Kurtaev
346871e27f
Set output layers names and types for models in DLDT's intermediate representation
2018-06-28 10:21:45 +03:00
Dmitry Kurtaev
40b85c1cd9
Remove undocumented feature to retrieve layers' outputs by indices
2018-06-20 14:44:21 +03:00
Alexander Alekhin
5fd7cfbcad
dnn: add runtime parameter OPENCV_DNN_BACKEND_DEFAULT
...
to control DNN_BACKEND_DEFAULT enumeration value behavior
2018-06-13 19:00:04 +03:00
Dmitry Kurtaev
f3a6ae5f00
Wrap Inference Engine init to try-catch
2018-06-07 12:55:52 +03:00
Vadim Pisarevsky
3cbd2e2764
Merge pull request #11650 from dkurt:dnn_default_backend
2018-06-06 09:30:39 +00:00
Dmitry Kurtaev
b781ac7346
Make Intel's Inference Engine backend the default if no preferable backend is specified.
2018-06-04 18:31:46 +03:00
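With DNN_BACKEND_DEFAULT resolving to the Inference Engine backend in builds that include it, code that wants the plain OpenCV implementation must now request it explicitly; a short sketch:

    // Sketch: opt out of the Inference Engine default and force the built-in
    // OpenCV backend on CPU.
    #include <opencv2/dnn.hpp>

    void forceOpenCVBackend(cv::dnn::Net& net)
    {
        net.setPreferableBackend(cv::dnn::DNN_BACKEND_OPENCV);
        net.setPreferableTarget(cv::dnn::DNN_TARGET_CPU);
    }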
Kuang Fangjun
9ae28415ec
fix doc.
2018-06-03 17:44:24 +08:00
Dmitry Kurtaev
32bab45f81
Fix Inference Engine graphs with fused output layers
2018-05-31 16:21:08 +03:00