Commit Graph

1096 Commits

Author SHA1 Message Date
Yashas Samaga B L
d85e67d3ec Merge pull request #16063 from YashasSamaga:cuda4dnn-shortcut-unequal
support eltwise sum with different numbers of input channels in the CUDA backend

* add shortcut primitive

* add offsets in shortcut kernel

* skip tests involving more than two inputs

* remove redundant modulus operation

* support multiple inputs

* remove whole file indentation

* skip acc in0 trunc test if weighted

* use shortcut iff channels are unequal
2020-01-16 21:54:00 +03:00
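
A minimal CPU sketch (assuming NCHW layout and equal spatial size) of the shortcut-style eltwise sum described in the entry above: the two inputs are summed over the channels they share, and the extra channels of the wider tensor pass through unchanged. Names are illustrative, not cuda4dnn API.

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Adds `from` into `to` over the channels both tensors have; the remaining
// channels of the wider tensor are left untouched (passed through).
void shortcutAdd(std::vector<float>& to, std::size_t toChannels,
                 const std::vector<float>& from, std::size_t fromChannels,
                 std::size_t height, std::size_t width)
{
    const std::size_t plane = height * width;                       // elements per channel
    const std::size_t shared = std::min(toChannels, fromChannels);  // overlapping channels
    for (std::size_t c = 0; c < shared; ++c)
        for (std::size_t i = 0; i < plane; ++i)
            to[c * plane + i] += from[c * plane + i];
}
```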
Julien
ced3df73da Fix: rsqrt(float) was improperly put in the ifdef for half 2020-01-16 09:21:50 +01:00
Julien
4e2ef8c8f5 Merge pull request #16218 from JulienMaille:cuda-dnn-for-older-gpus
Enable cuda4dnn on hardware without support for __half

* Enable cuda4dnn on hardware without support for half (i.e. compute capability < 5.3)

Update CMakeLists.txt

Lowered minimum CC to 3.0

* UPD: added ifdef on new copy kernel

* added fp16 support detection at runtime

* Clarified #if condition on atomicAdd definition

* More explicit CMake error message
2020-01-15 18:28:37 +03:00
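
The runtime fp16 detection mentioned above reduces to a compute-capability check, since native __half arithmetic needs CC 5.3 or newer. A minimal sketch against the CUDA runtime API; the helper name is illustrative.

```cpp
#include <cuda_runtime.h>

// Returns true when the device can run native fp16 (__half) kernels,
// i.e. compute capability >= 5.3; false also covers a failed query.
bool deviceSupportsFP16(int device)
{
    cudaDeviceProp prop{};
    if (cudaGetDeviceProperties(&prop, device) != cudaSuccess)
        return false;
    return prop.major > 5 || (prop.major == 5 && prop.minor >= 3);
}
```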
Alexander Alekhin
4cb9faf6c9 Merge remote-tracking branch 'upstream/3.4' into merge-3.4 2020-01-14 17:04:22 +03:00
Alexander Alekhin
a67228cd73 Merge pull request #16291 from dkurt:dnn_onnx_graph_simplifier 2020-01-14 12:45:59 +00:00
Liubov Batanina
be86338a79 Enable acrossSpatial normalizeL2 on Myriad 2020-01-14 12:51:19 +03:00
Dmitry Kurtaev
c1c84d2fd1 ONNX graphs simplifier 2020-01-14 12:45:49 +03:00
Alexander Alekhin
3f27f8cf41 Merge pull request #16232 from dkurt:dnn_ie_ngraph_fix_myriad_tests 2020-01-13 16:59:45 +00:00
Dmitry Kurtaev
8f1e36f7c1 Disable some tests for Myriad target of nGraph
Add lightweight IE hardware targets checks

nGraph: Concat with paddings

Enable more nGraph tests

Restore FP32->FP16 for GPU plugin of IE

try to fix buildbot

Use lightweight IE targets check only starting from R4
2020-01-13 15:35:47 +03:00
Alexander Alekhin
fb61f88b9c Merge remote-tracking branch 'upstream/3.4' into merge-3.4 2020-01-12 09:35:39 +00:00
Alexander Alekhin
1f2b2c5242 Merge pull request #16230 from YashasSamaga:cuda4dnn-fp-conversion 2020-01-05 11:59:33 +00:00
Alexander Alekhin
1996ae4a42 Merge remote-tracking branch 'upstream/3.4' into merge-3.4 2019-12-31 10:11:39 +00:00
Dmitry Kurtaev
f954f0830c Sort text TensorFlow graphs 2019-12-31 11:43:32 +03:00
YashasSamaga
48eecafc89 simplify code to help MSVC 19.10 and lower 2019-12-30 23:02:17 +05:30
Dmitry Kurtaev
76cfa65d55 AddV2 from TensorFlow 2019-12-30 20:06:58 +03:00
YashasSamaga
01f97f150c perform fp conversions on GPU 2019-12-30 00:05:39 +05:30
YashasSamaga
17a35587e1 use optimized cuDNN path for conv + bias + relu 2019-12-29 13:08:38 +05:30
Alexander Alekhin
9ec3d76b21 Merge pull request #16241 from bwignall:typo 2019-12-27 16:18:57 +00:00
Brian Wignall
f9c514b391 Fix spelling typos
backport commit 659ffaddb4
2019-12-27 12:46:53 +00:00
Brian Wignall
659ffaddb4 Fix spelling typos 2019-12-26 06:45:03 -05:00
YashasSamaga
16bc505d26 improve reduction logic and add fast transpose kernel 2019-12-24 00:23:45 +05:30
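
For context on the "fast transpose kernel": the usual approach is tiling, where the GPU stages tiles through shared memory so reads and writes stay coalesced. The cache-blocked CPU sketch below illustrates the same idea; the names and tile size are illustrative.

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Transposes a rows x cols row-major matrix tile by tile so that both the
// source reads and destination writes stay within small, cache-friendly blocks.
void transposeBlocked(const std::vector<float>& src, std::vector<float>& dst,
                      std::size_t rows, std::size_t cols, std::size_t tile = 32)
{
    dst.resize(rows * cols);
    for (std::size_t r0 = 0; r0 < rows; r0 += tile)
        for (std::size_t c0 = 0; c0 < cols; c0 += tile)
            for (std::size_t r = r0; r < std::min(r0 + tile, rows); ++r)
                for (std::size_t c = c0; c < std::min(c0 + tile, cols); ++c)
                    dst[c * rows + r] = src[r * cols + c];
}
```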
Yashas Samaga B L
1fac1421e5 Merge pull request #16010 from YashasSamaga:cuda4dnn-fp16-tests
* enable tests for DNN_TARGET_CUDA_FP16

* disable deconvolution tests

* disable shortcut tests

* fix typos and some minor changes

* dnn(test): skip CUDA FP16 test too (run_pool_max)
2019-12-20 16:36:32 +03:00
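
The target these tests exercise is selected through the public OpenCV DNN API; a short usage sketch with a placeholder model file:

```cpp
#include <opencv2/dnn.hpp>

// Loads a network and routes inference through the CUDA backend in fp16.
cv::dnn::Net loadCudaFp16Net()
{
    cv::dnn::Net net = cv::dnn::readNet("model.onnx");  // placeholder model file
    net.setPreferableBackend(cv::dnn::DNN_BACKEND_CUDA);
    net.setPreferableTarget(cv::dnn::DNN_TARGET_CUDA_FP16);
    return net;
}
```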
Alexander Alekhin
97b6068c46 dnn(test): don't require downloaded data 2019-12-19 19:31:59 +00:00
Alexander Alekhin
4c86fc13cb Merge remote-tracking branch 'upstream/3.4' into merge-3.4 2019-12-19 15:09:05 +03:00
Alexander Alekhin
4342657762 Merge pull request #16034 from Quantizs:irLoadFromBuffer 2019-12-19 10:00:07 +00:00
Alexander Alekhin
b8e0898c7c Merge pull request #16082 from YashasSamaga:cuda4dnn-roi-pooling 2019-12-18 14:41:58 +00:00
antalzsiroscandid
aa80f754f4 dnn: reading IR models from buffer 2019-12-18 15:31:08 +01:00
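
A hedged usage sketch for in-memory IR loading, assuming the buffer overload of readNetFromModelOptimizer added by this change takes the .xml contents first and the .bin contents second; the file-reading helper is illustrative.

```cpp
#include <opencv2/dnn.hpp>
#include <fstream>
#include <iterator>
#include <string>
#include <vector>

// Reads a whole file into a byte buffer (stand-in for data already in memory).
static std::vector<uchar> readFileBytes(const std::string& path)
{
    std::ifstream f(path, std::ios::binary);
    return std::vector<uchar>(std::istreambuf_iterator<char>(f), {});
}

cv::dnn::Net loadIRFromMemory(const std::string& xmlPath, const std::string& binPath)
{
    std::vector<uchar> topology = readFileBytes(xmlPath);  // IR topology (.xml)
    std::vector<uchar> weights  = readFileBytes(binPath);  // IR weights (.bin)
    return cv::dnn::readNetFromModelOptimizer(topology, weights);
}
```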
Alexander Alekhin
61969dc158 Merge pull request #16171 from YashasSamaga:cuda4dnn-tensor-cores 2019-12-17 18:58:12 +00:00
Alexander Alekhin
2c0d9fa81f dnn(test): fix Test_Model.Keypoints* tests 2019-12-16 18:07:23 +03:00
YashasSamaga
cf93df41fc enable tensor cores for fp16 convolutions 2019-12-16 15:38:12 +05:30
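
With cuDNN, enabling tensor cores amounts to opting a convolution descriptor into tensor-op math, which only pays off with fp16 data. A minimal sketch; creating and configuring the rest of the convolution is omitted.

```cpp
#include <cudnn.h>

// Lets cuDNN select tensor-core kernels for this convolution when the
// data type (e.g. fp16) permits it.
cudnnStatus_t enableTensorCores(cudnnConvolutionDescriptor_t convDesc)
{
    return cudnnSetConvolutionMathType(convDesc, CUDNN_TENSOR_OP_MATH);
}
```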
Alexander Alekhin
ba7b0f4c54 Merge remote-tracking branch 'upstream/3.4' into merge-3.4 2019-12-15 11:23:46 +00:00
Yashas Samaga B L
17c485eb03 Merge pull request #16092 from YashasSamaga:cuda4dnn-conv-act-fuse
cuda4dnn: fuse activations with convolutions

* fuse ReLU, ReLU6, TanH, Sigmoid with conv

* fix OpenCL errors

* improve ReLU, add power, swish and mish

* fix missing fusion entries

* fix handling of unsetAttached

* remove whole file indentation

* optimize power = 1.0, use IDENTITY instead of NONE

* handle edge case: change backend and then clear
2019-12-14 22:26:58 +03:00
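
What the fusion buys, sketched on the CPU with ReLU as the example activation: bias and activation are applied while each convolution result is still at hand, instead of re-reading the whole output blob in separate passes. Names are illustrative, not cuda4dnn API.

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Applies per-channel bias and ReLU in a single pass over the convolution output.
void addBiasAndReLUFused(std::vector<float>& convOutput,
                         const std::vector<float>& bias,
                         std::size_t channels, std::size_t planeSize)
{
    for (std::size_t c = 0; c < channels; ++c)
        for (std::size_t i = 0; i < planeSize; ++i)
        {
            const float v = convOutput[c * planeSize + i] + bias[c];
            convOutput[c * planeSize + i] = std::max(v, 0.0f);  // fused ReLU
        }
}
```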
Xuanda Yang
3d60a9b96c Merge pull request #16156 from TH3CHARLie:3.4
* Eltwise::DIV support in Halide backend

* fix typo

* remove div from generated test suite to pass CI, switching to manual test...

* ensure divisor not near to zero

* use randu

* dnn(test): update test data for Eltwise.Accuracy/DIV layer test
2019-12-13 18:29:39 +03:00
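
One way to realize "ensure divisor not near to zero" with randu, as the notes above describe: draw the divisor blob from a range bounded away from zero so the element-wise division stays numerically stable in the test. The shape argument is illustrative.

```cpp
#include <opencv2/core.hpp>
#include <vector>

// Builds a random divisor blob whose values lie in [1, 2), safely away from zero.
cv::Mat makeSafeDivisor(const std::vector<int>& shape)
{
    cv::Mat divisor(shape, CV_32F);
    cv::randu(divisor, cv::Scalar(1.0), cv::Scalar(2.0));
    return divisor;
}
```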
Diego
5b0b59ecfb Merge pull request #15189 from dvd42:keypoints_module
Keypoints module
2019-12-13 18:00:06 +03:00
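
A hedged usage sketch for the high-level keypoints API this PR introduces (cv::dnn::KeypointsModel); the model path, input size, scale, and confidence threshold are placeholders for a real pose-estimation network.

```cpp
#include <opencv2/dnn.hpp>
#include <string>
#include <vector>

// Runs a keypoints network on one frame and returns the detected 2D points.
std::vector<cv::Point2f> detectKeypoints(const std::string& modelPath, const cv::Mat& frame)
{
    cv::dnn::KeypointsModel model(modelPath);
    model.setInputSize(cv::Size(368, 368));  // placeholder network input size
    model.setInputScale(1.0 / 255);          // placeholder preprocessing scale
    return model.estimate(frame, 0.3f);      // placeholder confidence threshold
}
```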
Alexander Alekhin
92b9888837 Merge remote-tracking branch 'upstream/3.4' into merge-3.4 2019-12-12 13:02:19 +03:00
Alexander Alekhin
5ee7abbe3c Merge pull request #16088 from alalek:dnn_eltwise_layer_different_src_channels
dnn(eltwise): fix handling of different number of channels

* dnn(test): reproducer for Eltwise layer issue from PR16063

* dnn(eltwise): rework support for inputs with different channels

* dnn(eltwise): get rid of finalize(), variableChannels

* dnn(eltwise): update input sorting by number of channels

- do not swap inputs if the number of channels is the same after truncation

* dnn(test): skip "shortcut" with batch size 2 on MYRIAD targets
2019-12-11 20:16:58 +03:00
Alexander Alekhin
2a11103a73 Merge pull request #16098 from alalek:dnn_clarify_error_getMemoryShapes 2019-12-11 14:02:15 +00:00
Alexander Alekhin
939099b9ce Merge pull request #16107 from dkurt:dnn_ie_ngraph_v1_conv 2019-12-10 12:10:50 +00:00
Alexander Alekhin
2a19db0f0a Merge pull request #16106 from dkurt:dnn_ie_ngraph_weights_fusion 2019-12-10 12:08:04 +00:00
Dmitry Kurtaev
fe77223dee Modify nGraph's ConvolutionBackpropData and GroupConvolution 2019-12-10 14:14:00 +03:00
Yashas Samaga B L
3fddd3bf93 Merge pull request #16069 from YashasSamaga:cuda4dnn-crop_and_resize
add CropAndResize layer for CUDA backend

* add CropAndResize layer

* process multiple channels per iteration
2019-12-09 22:26:58 +03:00
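
At the heart of a CropAndResize layer is bilinear sampling at fractional source coordinates; the CUDA kernel additionally handles several channels per iteration, as the second bullet notes. A single-channel CPU sketch with illustrative names, assuming (y, x) lies inside the plane:

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Samples a single-channel row-major plane at fractional coordinates (y, x)
// using bilinear interpolation of the four surrounding pixels.
float sampleBilinear(const std::vector<float>& plane, std::size_t height,
                     std::size_t width, float y, float x)
{
    const std::size_t y0 = static_cast<std::size_t>(y);
    const std::size_t x0 = static_cast<std::size_t>(x);
    const std::size_t y1 = std::min(y0 + 1, height - 1);
    const std::size_t x1 = std::min(x0 + 1, width - 1);
    const float dy = y - y0, dx = x - x0;
    const float top    = plane[y0 * width + x0] * (1 - dx) + plane[y0 * width + x1] * dx;
    const float bottom = plane[y1 * width + x0] * (1 - dx) + plane[y1 * width + x1] * dx;
    return top * (1 - dy) + bottom * dy;
}
```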
Alexander Alekhin
45f6931352 Merge pull request #16089 from dkurt:dnn_ie_fix_fpga 2019-12-09 19:26:00 +00:00
Dmitry Kurtaev
c2ca3ee2fa Fix weights fusion for Convolution and Deconvolution layers in nGraph 2019-12-09 19:06:47 +03:00
Alexander Alekhin
b505cf84de Merge pull request #16096 from YashasSamaga:cuda4dnn-region-optimize 2019-12-09 14:34:48 +00:00
Yashas Samaga B L
476a02739e Merge pull request #16097 from YashasSamaga:cuda4dnn-optimize-resize-bilinear
cuda4dnn(resize): process multiple channels each iteration

* resize bilinear: process multiple chans. per iter.

* remove unused headers

* correct dispatch logic

* resize_nn: process multiple chans. per iter.
2019-12-09 17:31:27 +03:00
Dmitry Kurtaev
883c4c60c3 Remove Dummy layer 2019-12-09 12:49:47 +03:00
Alexander Alekhin
b1b505f783 dnn: clarify error message from getMemoryShapes() 2019-12-08 22:17:24 +00:00
Yashas
dd3f517fe9 optimize region kernels 2019-12-08 21:03:30 +05:30
Alexander Alekhin
202ba124a5 Merge pull request #16087 from YashasSamaga:cuda4dnn-eltwise-div 2019-12-06 18:33:55 +00:00
Lubov Batanina
629d47fcd8 Merge pull request #15988 from l-bat:custom_layer
Test creating a custom layer in Python

* check is contiguous

* Add custom layer test

* Fix test

* Remove assert

* Move assert to pyopencv dnn

* remove assert

* Add unregister

* Fix python2

* proto to bytearray

* Fix data type
2019-12-06 21:29:57 +03:00
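
The test above drives custom-layer registration from Python (cv.dnn_registerLayer / cv.dnn_unregisterLayer); the C++ sketch below mirrors that register/unregister flow with a hypothetical pass-through layer. The type name MyIdentity and the class itself are illustrative only.

```cpp
#include <opencv2/dnn.hpp>
#include <opencv2/dnn/layer.details.hpp>
#include <vector>

// Hypothetical pass-through layer: output shapes mirror the inputs, data copied as-is.
class MyIdentityLayer : public cv::dnn::Layer
{
public:
    MyIdentityLayer(const cv::dnn::LayerParams& params) : Layer(params) {}

    static cv::Ptr<cv::dnn::Layer> create(cv::dnn::LayerParams& params)
    {
        return cv::Ptr<cv::dnn::Layer>(new MyIdentityLayer(params));
    }

    bool getMemoryShapes(const std::vector<std::vector<int> >& inputs, int,
                         std::vector<std::vector<int> >& outputs,
                         std::vector<std::vector<int> >&) const CV_OVERRIDE
    {
        outputs = inputs;  // output blobs take the input shapes
        return false;
    }

    void forward(cv::InputArrayOfArrays inputs_arr, cv::OutputArrayOfArrays outputs_arr,
                 cv::OutputArrayOfArrays) CV_OVERRIDE
    {
        std::vector<cv::Mat> inputs, outputs;
        inputs_arr.getMatVector(inputs);
        outputs_arr.getMatVector(outputs);
        inputs[0].copyTo(outputs[0]);  // pass the first blob through unchanged
    }
};

// Register/unregister pair, analogous to the Python test's flow.
void registerMyIdentity()   { CV_DNN_REGISTER_LAYER_CLASS(MyIdentity, MyIdentityLayer); }
void unregisterMyIdentity() { cv::dnn::LayerFactory::unregisterLayer("MyIdentity"); }
```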