Alexander Alekhin
ca4fd1e427
Merge pull request #13884 from dkurt:dnn_drop_ie_r1_r2
2019-02-22 11:21:43 +00:00
Dmitry Kurtaev
20400aa9f7
Import Upsample and Unsqueeze from ONNX
2019-02-21 20:17:28 +03:00
Dmitry Kurtaev
4cbd09c41c
Add extra limitations for LRN from Inference Engine backend
2019-02-21 14:20:24 +03:00
Alexander Alekhin
0e70363f4a
Merge pull request #13882 from dkurt:fix_13479
2019-02-21 09:38:26 +00:00
Dmitry Kurtaev
ed710eaa1c
Make Inference Engine R3 the minimal supported version
2019-02-21 09:32:26 +03:00
Dmitry Kurtaev
bfd663c281
Add a test for grouped deconvolution from ONNX
2019-02-21 08:54:35 +03:00
Ayush Pandey
5c7fe0fe05
Fix Issue #13479
2019-02-21 08:36:16 +03:00
Dmitry Kurtaev
715f881dda
Change the default confidence threshold for dnn detections from -FLT_MAX to 0
2019-02-20 13:09:09 +03:00
Alexander Alekhin
8bde6aea4b
Merge remote-tracking branch 'upstream/3.4' into merge-3.4
2019-02-19 19:49:13 +00:00
Alexander Alekhin
8cedc052ca
Merge pull request #13841 from dkurt:dnn_ie_future_3
2019-02-19 14:19:36 +00:00
Dmitry Kurtaev
ca5976e3d4
Fix IE backend to account for future changes.
2019-02-18 19:26:04 +03:00
Alexander Alekhin
9d3d5e9d65
Merge pull request #13774 from l-bat:fix-IE-tests
2019-02-15 13:54:24 +00:00
Liubov Batanina
183c0fcab1
Changed condition for resize and lrn layers
2019-02-14 13:11:14 +03:00
Alexander Alekhin
dfef04b325
Merge remote-tracking branch 'upstream/3.4' into merge-3.4
2019-02-12 17:54:40 +03:00
Alexander Alekhin
9e7014b59f
Merge pull request #13799 from dkurt:dnn_ie_future_2
2019-02-12 14:07:42 +00:00
Dmitry Kurtaev
1606137df2
Read raw float data from Caffe models
2019-02-11 20:08:17 +03:00
Dmitry Kurtaev
0711dab09d
Fix Intel's Inference Engine backend for future releases. Second try.
2019-02-11 19:47:57 +03:00
klemens
5d9c6723ee
spelling fixes
...
backport 997b7b18af
2019-02-11 15:35:10 +03:00
Liubov Batanina
6b4becfd03
Enabled tests on IE backend
2019-02-11 12:39:28 +03:00
klemens
997b7b18af
spelling fixes
2019-02-09 22:29:54 +01:00
Alexander Alekhin
f414c16c13
Merge remote-tracking branch 'upstream/3.4' into merge-3.4
2019-02-08 17:18:56 +00:00
Alexander Nesterov
9cbdb48d6d
Fix change step
2019-02-07 11:14:20 -01:00
Liubov Batanina
b068d26fad
Using IE backend for normalize layer tests
2019-02-07 11:52:27 +03:00
Alexander Alekhin
f67b197d49
Merge pull request #13738 from dkurt:dnn_ie_lock_shared_plugins
2019-02-06 12:09:58 +00:00
Dmitry Kurtaev
bc4e471847
Add a mutex for shared Inference Engine plugins
2019-02-05 19:26:58 +03:00
Alexander Alekhin
fcec053d59
Merge remote-tracking branch 'upstream/3.4' into merge-3.4
2019-02-05 19:12:41 +03:00
Alexander Alekhin
eab6744ac7
dnn(ocl): use compile-time LOCAL_SIZE parameter
...
instead of get_local_size(0) and dynamic local memory allocation
2019-02-05 15:51:16 +03:00
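The commit above replaces get_local_size(0) queries and dynamically allocated local memory with a LOCAL_SIZE value baked in at kernel build time. A minimal sketch of that idea using the plain OpenCL C API rather than OpenCV's actual ocl4dnn kernels; the reduce_sum kernel and all names here are illustrative only:

```cpp
#include <CL/cl.h>
#include <cstdio>

// Build a kernel with the work-group size baked in as a compile-time define,
// so the kernel can declare a fixed-size __local buffer instead of relying on
// get_local_size(0) and dynamic local memory allocation.
// Assumes localSize is a power of two; the enqueue side must use the same
// value as the local work size.
cl_program buildReduceProgram(cl_context ctx, cl_device_id dev, int localSize)
{
    const char* src =
        "__kernel void reduce_sum(__global const float* in, __global float* out) {\n"
        "    __local float scratch[LOCAL_SIZE];      /* sized at compile time */\n"
        "    const int lid = get_local_id(0);\n"
        "    scratch[lid] = in[get_global_id(0)];\n"
        "    barrier(CLK_LOCAL_MEM_FENCE);\n"
        "    for (int s = LOCAL_SIZE / 2; s > 0; s >>= 1) {\n"
        "        if (lid < s) scratch[lid] += scratch[lid + s];\n"
        "        barrier(CLK_LOCAL_MEM_FENCE);\n"
        "    }\n"
        "    if (lid == 0) out[get_group_id(0)] = scratch[0];\n"
        "}\n";

    cl_int err = CL_SUCCESS;
    cl_program prog = clCreateProgramWithSource(ctx, 1, &src, NULL, &err);

    char options[64];
    snprintf(options, sizeof(options), "-D LOCAL_SIZE=%d", localSize);
    clBuildProgram(prog, 1, &dev, options, NULL, NULL);
    return prog;
}
```

With LOCAL_SIZE known at compile time, both the loop bound and the __local array size are constants, which is what avoiding get_local_size(0) buys.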
Alexander Alekhin
665408e57f
Merge remote-tracking branch 'upstream/3.4' into merge-3.4
2019-02-01 13:17:32 +03:00
Alexander Alekhin
a42bbc9722
Merge pull request #13736 from dkurt:dnn_ie_future
2019-02-01 10:01:39 +00:00
Dmitry Kurtaev
c918ac298c
Fix IE tests
2019-01-31 14:14:38 +03:00
Dmitry Kurtaev
ac262f5b5d
Clone convolution layer weights only for fusion
2019-01-29 14:29:47 +03:00
Alexander Alekhin
3585522b24
Merge pull request #13692 from dkurt:dnn_do_not_crash_myriad_in_tests
2019-01-28 18:34:20 +00:00
Dmitry Kurtaev
3c3c5ef2b6
Fix a dnn bug with retrieving all the output blobs
2019-01-28 18:48:56 +03:00
Dmitry Kurtaev
ff775b2e54
Remove ASSERT_ANY_THROW checks for Myriad plugin and FP32 networks
2019-01-25 20:09:54 +03:00
Alexander Nesterov
97c3bcb1b7
Added fix for other size
2019-01-24 12:51:16 -01:00
Alexander Alekhin
631b246881
Merge remote-tracking branch 'upstream/3.4' into merge-3.4
2019-01-22 18:00:34 +00:00
Alexander Alekhin
400fd5c3ec
Merge pull request #13539 from dkurt:ie_graph_api
2019-01-18 16:00:18 +00:00
Lee Jaehwan
3721c8bb06
Merge pull request #13586 from eightco:Core_bugfix3
...
* Add Operator override for multi-channel Mat with literal constant.
* simple test
* Operator overloading channel constraint for primitive types
* fix some test for #13586
2019-01-17 17:23:09 +03:00
Dmitry Kurtaev
f0ddf302b2
Move Inference Engine to new API
2019-01-17 14:28:48 +03:00
Raphael Graf
82c77fa244
dnn: remove malloc.h include
2019-01-10 13:07:36 +01:00
Alexander Alekhin
7e2ebecd52
Merge remote-tracking branch 'upstream/3.4' into merge-3.4
2019-01-10 12:29:41 +03:00
Dmitry Kurtaev
d0504c95f4
Add a text message for Convolution layer's input channels check
2019-01-09 13:10:19 +03:00
WuZhiwen
3d44e9ad92
Merge pull request #13520 from wzw-intel:hang
...
* dnn/Vulkan: fix GPU hang for heavy convolution tasks
The Intel i915 driver will declare a GPU hang if the compute shader
takes too long to complete. See
https://bugs.freedesktop.org/show_bug.cgi?id=108947 for details.
The idea in this commit is to divide the heavy task into several light
ones and run the compute shader multiple times, so that each run is
short enough.
TODO: Add more efficient compute shader
Signed-off-by: Wu Zhiwen <zhiwen.wu@intel.com>
* dnn/Vulkan: add a more efficient conv shader
2018-12-27 15:06:44 +03:00
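The merge above splits one heavy compute dispatch into several light ones so that no single submission runs long enough for the i915 driver to declare a hang. A hedged sketch of that scheme; dispatchChunk() and waitChunk() are hypothetical stand-ins for the real vkCmdDispatch / vkQueueSubmit / vkWaitForFences sequence:

```cpp
#include <algorithm>

// Hypothetical helpers standing in for the real Vulkan record/submit/wait path.
void dispatchChunk(int firstGroupZ, int numGroupsZ);
void waitChunk();

// Split a heavy convolution into several small dispatches so that each
// submission finishes well before the driver's hang-detection timeout.
void runConvolutionInChunks(int totalGroupsZ, int maxGroupsPerSubmit)
{
    for (int z0 = 0; z0 < totalGroupsZ; z0 += maxGroupsPerSubmit)
    {
        const int groups = std::min(maxGroupsPerSubmit, totalGroupsZ - z0);
        dispatchChunk(z0, groups);  // record and submit one short-running dispatch
        waitChunk();                // fence wait; keeps every GPU run short
    }
}
```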
Alexander Alekhin
6142b21dd5
Merge pull request #13518 from wzw-intel:test_vulkan
2018-12-24 16:49:45 +00:00
Wu Zhiwen
dae03273cd
dnn: fix missing vkcom/vulkan backend/target combination in dnn tests
2018-12-24 11:00:45 +08:00
Wu Zhiwen
be6a837e15
dnn: add Vulkan device check for BackendRegistry
2018-12-24 10:41:58 +08:00
Alexander Alekhin
1dee705074
Merge branch '3.4' into merge-3.4
2018-12-22 05:40:15 +00:00
Alexander Alekhin
14633bc857
Merge pull request #13497 from dkurt:dnn_torch_bn_train
2018-12-21 14:29:10 +00:00
Dmitry Kurtaev
840c892abd
Batch normalization in training phase from Torch
2018-12-21 14:36:55 +03:00
Dmitry Kurtaev
59ce1d80a5
Fix dnn tests for Inference Engine R5
2018-12-21 12:33:30 +03:00
Dmitry Kurtaev
257f60582a
Add serialize method for IE net wrapper
...
backport 4ba4901ca9
2018-12-21 05:52:27 +00:00
Alexander Alekhin
bbdc987fc6
dnn: add OpenVINO 2018R5 defines
...
https://software.intel.com/en-us/openvino-toolkit
2018-12-21 05:52:27 +00:00
Alexander Alekhin
0c16d8f6c3
Merge remote-tracking branch 'upstream/3.4' into merge-3.4
2018-12-13 15:12:26 +03:00
Alexander Alekhin
a9771078df
Merge pull request #13427 from dkurt:dnn_onnx_dynamic_reshape
2018-12-13 11:15:51 +00:00
Dmitry Kurtaev
e71758cfdf
Operate with shapes in ONNX models
2018-12-12 18:34:22 +03:00
Alexander Alekhin
d8583b2c7a
dnn: fix vulkan backend builds with Clang
2018-12-12 15:25:39 +03:00
Dmitry Kurtaev
53f6198f27
Minor fixes in IE backend tests
2018-12-10 20:08:13 +03:00
Alexander Alekhin
ea64e860de
Merge remote-tracking branch 'upstream/3.4' into merge-3.4
2018-12-09 13:21:58 +00:00
Dmitry Kurtaev
8422dda2c7
Element-wise subtraction from TensorFlow
2018-12-07 13:38:05 +03:00
Alexander Alekhin
cab15f6c5e
Merge pull request #13377 from dkurt:hotfix_dnn_ie_master
2018-12-06 15:11:09 +00:00
Alexander Alekhin
492a072ea8
Merge pull request #13376 from dkurt:hotfix_dnn_ie
2018-12-06 15:09:51 +00:00
Alexander Alekhin
e82e672a93
Merge remote-tracking branch 'upstream/3.4' into merge-3.4
2018-12-06 07:06:58 +00:00
Dmitry Kurtaev
93971a53d9
Exclude Input layer from list of outputs for IE networks
2018-12-06 09:12:05 +03:00
Dmitry Kurtaev
3868cb44f1
Exclude Input layer from list of outputs for IE networks
2018-12-06 09:08:50 +03:00
Alexander Alekhin
6fbf6f8bea
Merge pull request #13359 from dkurt:dnn_keras_pad_concat
2018-12-05 19:48:58 +00:00
Alexander Alekhin
9ff1c39daa
dnn: fixup available backends/targets
2018-12-05 19:19:17 +03:00
Maksim Shabunin
fe459c82e5
Merge pull request #13332 from mshabunin:dnn-backends
...
DNN backends registry (#13332)
* Added dnn backends registry
* dnn: process DLIE/FPGA target
2018-12-05 18:11:45 +03:00
Dmitry Kurtaev
c9e0c77d73
Concat layer from TensorFlow with constant inputs
2018-12-04 19:41:40 +03:00
Dmitry Kurtaev
4ba4901ca9
Add serialize method for IE net wrapper
2018-11-27 12:02:00 +03:00
Alexander Alekhin
8f4e5c2fb8
Merge remote-tracking branch 'upstream/3.4' into merge-3.4
2018-11-26 15:37:45 +03:00
Dmitry Kurtaev
84ce2cc211
Enable some dnn tests according to the new Intel Inference Engine release (R4)
2018-11-26 13:02:24 +03:00
Wu Zhiwen
4e65283081
dnn/Vulkan: make thread safe
...
Use a global dedicated mutex to make sure initialization happens once and
to protect the command buffer pool and queue.
Signed-off-by: Wu Zhiwen <zhiwen.wu@intel.com>
2018-11-26 14:08:37 +08:00
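The commit body above describes a single global mutex guarding both one-time initialization and the shared command buffer pool and queue. A minimal sketch of that pattern; the names are illustrative, not the actual vkcom symbols:

```cpp
#include <mutex>

static std::mutex kVkGlobalMutex;   // one dedicated lock for the shared Vulkan state
static bool vkInitialized = false;

void ensureVulkanInitialized()
{
    std::lock_guard<std::mutex> lock(kVkGlobalMutex);
    if (!vkInitialized)
    {
        // create instance, device, command buffer pool and queue exactly once
        vkInitialized = true;
    }
}

void submitWork()
{
    std::lock_guard<std::mutex> lock(kVkGlobalMutex);
    // record into the shared command pool and submit to the shared queue
    // under the same lock, so concurrent callers cannot interleave
}
```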
Dmitry Kurtaev
2f6f52d644
Fix ONNX's emotion_ferplus model.
...
Reduce input size for OpenPose tests
2018-11-23 19:00:17 +03:00
WuZhiwen
02cc1cd6e6
Merge pull request #13244 from wzw-intel:init_vulkan
...
* dnn/Vulkan: don't init the Vulkan runtime if using another backend/target
There is no need to explicitly call an init API; the Vulkan environment
is initialized automatically the first time a VkCom object is used.
Signed-off-by: Wu Zhiwen <zhiwen.wu@intel.com>
* dnn/Vulkan: suppress compiler warning for "-Wsign-promo"
Signed-off-by: Wu Zhiwen <zhiwen.wu@intel.com>
2018-11-22 19:46:30 +03:00
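The first bullet above makes Vulkan initialization lazy: nothing Vulkan-related happens unless a VkCom object is actually used, so other backends and targets never touch the runtime. A sketch of that idea with an illustrative class name, not the actual vkcom type:

```cpp
// Illustrative lazy-initialization pattern: the Vulkan runtime is brought up
// only when the first object of this kind is constructed.
class VkComObjectSketch
{
public:
    VkComObjectSketch()
    {
        // function-local static: runs initVulkanRuntime() exactly once,
        // and only if this constructor is ever reached
        static const bool ok = initVulkanRuntime();
        (void)ok;
    }

private:
    static bool initVulkanRuntime();  // hypothetical: load the loader, create instance/device
};
```

If the network runs on another backend or target, no such object is ever constructed, so the Vulkan loader is never loaded.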
Alexander Alekhin
7fa7fa0226
Merge remote-tracking branch 'upstream/3.4' into merge-3.4
2018-11-21 08:33:39 +00:00
Alexander Alekhin
eaf39f6b6b
Merge pull request #13213 from alalek:fix_format
2018-11-20 14:53:20 +00:00
Alexander Alekhin
d7272f76fb
dnn: fix format
2018-11-19 19:33:56 +00:00
Dmitry Kurtaev
0d117312c9
DNN_TARGET_FPGA using Intel's Inference Engine
2018-11-19 11:41:43 +03:00
Alexander Alekhin
22dbcf98c5
Merge remote-tracking branch 'upstream/3.4' into merge-3.4
2018-11-17 14:17:35 +00:00
Alexander Alekhin
dd3398416b
experimental version++
2018-11-17 10:22:17 +00:00
Christopher Gundler
b58a8729c2
Merge pull request #13131 from Christopher22:add_transposedConv_onnx
...
* Add support for ConvTranspose when parsing ONNX.
* Add test for Deconvolution
2018-11-16 22:50:40 +03:00
Alexander Alekhin
f2bec05e6d
Merge pull request #12913 from dkurt:dnn_fix_ie_hyperparams
2018-11-16 18:36:12 +00:00
Dmitry Kurtaev
b5c54e447c
Extra hyperparameters for Intel's Inference Engine layers
2018-11-15 20:06:37 +03:00
Dmitry Kurtaev
ef5d921eac
Fix Vulkan's max pooling in case of no output indices
2018-11-15 14:10:54 +03:00
Alexander Alekhin
96c71dd3d2
dnn: reduce set of ignored warnings
2018-11-15 13:15:59 +03:00
Alexander Alekhin
8409aa9eba
Merge remote-tracking branch 'upstream/3.4' into merge-3.4
2018-11-14 19:41:09 +00:00
catree
10b482ff1e
Fix code and missing intrin header. Remove useless header.
2018-11-14 19:00:59 +01:00
Dmitry Kurtaev
80265a0815
Fix a bug with OpenVINO backend
2018-11-14 13:42:06 +03:00
Alexander Alekhin
f5b212a9d4
Merge remote-tracking branch 'upstream/3.4' into merge-3.4
2018-11-12 17:58:45 +03:00
Alexander Alekhin
801c943009
fix coverity reports
2018-11-11 13:51:47 +00:00
Alexander Alekhin
1913482cf5
Merge remote-tracking branch 'upstream/3.4' into merge-3.4
2018-11-10 20:50:26 +00:00
Alexander Alekhin
0c261acf3a
Merge pull request #13065 from dkurt:dnn_update_tf_faster_rcnn
2018-11-08 16:31:39 +00:00
Alexander Alekhin
997ad12730
Merge pull request #12985 from wzw-intel:vkcom_refine
2018-11-08 10:26:57 +00:00
Dmitry Kurtaev
dc9e6d3af8
Update a script to generate text graphs for Faster-RCNN networks from TensorFlow
2018-11-07 18:33:01 +03:00
catree
eebf0dd7c9
Fix integer overflow when accumulating timing values.
2018-11-07 13:04:48 +01:00
Wu Zhiwen
33c9d57c6f
dnn/Vulkan: skip heavy convolution task
...
This is a workaround for a GPU hang on heavy convolution workloads (> 10 GFLOPs),
e.g. ResNet101_DUC_HDC.
For such long-running tasks, vkWaitForFences() returns without error but the next call to
vkQueueSubmit() returns -4, i.e. "VK_ERROR_DEVICE_LOST", and the driver reports a GPU hang.
More investigation is needed into the root cause of the hang, and the convolution shader
needs to be optimized to reduce processing time.
2018-11-07 16:38:36 +08:00
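The commit above skips the Vulkan path for convolutions heavier than roughly 10 GFLOPs. A sketch of that kind of guard; the cost formula and threshold here are illustrative, not the exact check in the backend:

```cpp
#include <cstdint>

// Estimate convolution cost as 2 * MACs and refuse the Vulkan path when the
// workload exceeds roughly 10 GFLOPs, so another backend handles the heavy layer.
bool isTooHeavyForVulkan(int64_t outH, int64_t outW, int64_t outC,
                         int64_t inC, int64_t kH, int64_t kW)
{
    const int64_t flops = 2 * outH * outW * outC * inC * kH * kW;
    return flops > 10LL * 1000 * 1000 * 1000;   // ~10 GFLOPs threshold
}
```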
Alexander Alekhin
687fa6a8ca
Merge remote-tracking branch 'upstream/3.4' into merge-3.4
2018-11-02 05:33:35 +00:00
Dmitry Kurtaev
a6f9170f10
Add ONNX's padding import
2018-10-31 18:24:05 +03:00
Wu Zhiwen
34e9d1eb3c
dnn/Vulkan: support log softmax
...
Signed-off-by: Wu Zhiwen <zhiwen.wu@intel.com>
2018-10-31 09:47:38 +08:00
Wu Zhiwen
3914c17b0d
dnn/Vulkan: Refine error handle mechanism
...
Fall back to the OPENCV backend and CPU target if an exception is caught from
the vkcom backend.
Signed-off-by: Wu Zhiwen <zhiwen.wu@intel.com>
2018-10-31 09:47:33 +08:00
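The commit body above describes falling back to the OPENCV backend and CPU target when the vkcom backend throws. A sketch of the same behaviour expressed with the public dnn API rather than the internal error path:

```cpp
#include <opencv2/dnn.hpp>

// Try inference on the Vulkan target; on any cv::Exception, retry on the
// default OpenCV backend with the CPU target.
cv::Mat forwardWithFallback(cv::dnn::Net& net, const cv::Mat& blob)
{
    try
    {
        net.setPreferableBackend(cv::dnn::DNN_BACKEND_VKCOM);
        net.setPreferableTarget(cv::dnn::DNN_TARGET_VULKAN);
        net.setInput(blob);
        return net.forward();
    }
    catch (const cv::Exception&)
    {
        net.setPreferableBackend(cv::dnn::DNN_BACKEND_OPENCV);
        net.setPreferableTarget(cv::dnn::DNN_TARGET_CPU);
        net.setInput(blob);
        return net.forward();
    }
}
```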