Commit Graph

541 Commits

Yashas Samaga B L
1fac1421e5 Merge pull request #16010 from YashasSamaga:cuda4dnn-fp16-tests
* enable tests for DNN_TARGET_CUDA_FP16

* disable deconvolution tests

* disable shortcut tests

* fix typos and make some minor changes

* dnn(test): skip CUDA FP16 test too (run_pool_max)
2019-12-20 16:36:32 +03:00
Alexander Alekhin
97b6068c46 dnn(test): don't require downloaded data 2019-12-19 19:31:59 +00:00
Alexander Alekhin
4c86fc13cb Merge remote-tracking branch 'upstream/3.4' into merge-3.4 2019-12-19 15:09:05 +03:00
Alexander Alekhin
4342657762 Merge pull request #16034 from Quantizs:irLoadFromBuffer 2019-12-19 10:00:07 +00:00
antalzsiroscandid
aa80f754f4 dnn: reading IR models from buffer 2019-12-18 15:31:08 +01:00
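This change adds buffer-based loading of OpenVINO IR models: an overload of cv::dnn::readNetFromModelOptimizer takes the .xml topology and .bin weights as in-memory byte vectors instead of file paths. A minimal sketch, assuming an OpenCV build with the Inference Engine backend (file names are placeholders):

    #include <fstream>
    #include <iterator>
    #include <string>
    #include <vector>
    #include <opencv2/dnn.hpp>

    // Helper for the sketch: read a whole file into a byte buffer.
    static std::vector<uchar> readFile(const std::string& path)
    {
        std::ifstream f(path, std::ios::binary);
        return std::vector<uchar>(std::istreambuf_iterator<char>(f),
                                  std::istreambuf_iterator<char>());
    }

    int main()
    {
        std::vector<uchar> xml = readFile("model.xml");
        std::vector<uchar> bin = readFile("model.bin");

        // Build the network from buffers rather than from files on disk.
        cv::dnn::Net net = cv::dnn::readNetFromModelOptimizer(xml, bin);
        return 0;
    }
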
Alexander Alekhin
2c0d9fa81f dnn(test): fix Test_Model.Keypoints* tests 2019-12-16 18:07:23 +03:00
Alexander Alekhin
ba7b0f4c54 Merge remote-tracking branch 'upstream/3.4' into merge-3.4 2019-12-15 11:23:46 +00:00
Xuanda Yang
3d60a9b96c Merge pull request #16156 from TH3CHARLie:3.4
* Eltwise::DIV support in Halide backend

* fix typo

* remove div from generated test suite to pass CI, switching to manual test...

* ensure the divisor is not near zero

* use randu

* dnn(test): update test data for Eltwise.Accuracy/DIV layer test
2019-12-13 18:29:39 +03:00
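The "divisor not near zero" step matters because elementwise division amplifies error as the denominator approaches zero, so cross-backend accuracy comparisons would exceed their tolerances. A sketch of the idea using randu (a hypothetical helper, not the actual test code):

    #include <vector>
    #include <opencv2/core.hpp>

    // Random divisor blob bounded away from zero, so Eltwise::DIV outputs
    // stay comparable across backends within test tolerances.
    static cv::Mat makeSafeDivisor(const std::vector<int>& shape)
    {
        cv::Mat m(shape, CV_32F);
        cv::randu(m, cv::Scalar(0.5), cv::Scalar(1.5)); // values in [0.5, 1.5)
        return m;
    }
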
Diego
5b0b59ecfb Merge pull request #15189 from dvd42:keypoints_module
Keypoints module
2019-12-13 18:00:06 +03:00
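KeypointsModel joins the high-level cv::dnn::Model API introduced for single-call inference. A usage sketch, with hypothetical model and image files:

    #include <vector>
    #include <opencv2/dnn.hpp>
    #include <opencv2/imgcodecs.hpp>

    int main()
    {
        // Load a keypoint-regression network (file names are placeholders).
        cv::dnn::KeypointsModel model("pose_model.onnx");
        model.setInputSize(cv::Size(256, 256));
        model.setInputScale(1.0 / 255);

        cv::Mat frame = cv::imread("person.jpg");
        // One 2D point per keypoint; `thresh` drops low-confidence points.
        std::vector<cv::Point2f> points = model.estimate(frame, 0.5f);
        return 0;
    }
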
Alexander Alekhin
92b9888837 Merge remote-tracking branch 'upstream/3.4' into merge-3.4 2019-12-12 13:02:19 +03:00
Alexander Alekhin
5ee7abbe3c
Merge pull request #16088 from alalek:dnn_eltwise_layer_different_src_channels
dnn(eltwise): fix handling of different number of channels

* dnn(test): reproducer for Eltwise layer issue from PR16063

* dnn(eltwise): rework support for inputs with different channels

* dnn(eltwise): get rid of finalize(), variableChannels

* dnn(eltwise): update input sorting by number of channels

- do not swap inputs if the number of channels is the same after truncation

* dnn(test): skip "shortcut" with batch size 2 on MYRIAD targets
2019-12-11 20:16:58 +03:00
Alexander Alekhin
202ba124a5 Merge pull request #16087 from YashasSamaga:cuda4dnn-eltwise-div 2019-12-06 18:33:55 +00:00
YashasSamaga
a91eca6ec2 add DIV support to EltwiseOp 2019-12-06 21:28:36 +05:30
Alexander Alekhin
8108fb0575 Merge remote-tracking branch 'upstream/3.4' into merge-3.4 2019-12-05 18:27:45 +03:00
Alexander Alekhin
986c5084a4 Merge pull request #16036 from dkurt:dnn_backport_15203 2019-12-05 09:56:00 +00:00
Dmitry Kurtaev
d8e10f3a8d Enable MaxPooling with indices in Inference Engine 2019-12-04 19:14:55 +03:00
Alexander Alekhin
1633c29f29 Merge pull request #16037 from alalek:dnn_test_fix_skip_vulkan 2019-12-04 15:49:47 +00:00
Alexander Alekhin
0dc0bc80ae dnn(test): fix Vulkan skip test tag 2019-12-02 18:22:10 +03:00
Dmitry Kurtaev
ca1ba7a53d Backport for dnn input shape estimation 2019-12-02 16:28:59 +03:00
Alexander Alekhin
4b0132ed7a Merge remote-tracking branch 'upstream/3.4' into merge-3.4 2019-12-02 16:26:52 +03:00
Lubov Batanina
7523c777c5 Merge pull request #15537 from l-bat:ngraph
* Support nGraph

* Fix resize
2019-12-02 16:16:06 +03:00
Manjunath Bhat
78c5e41c23 Merge pull request #15808 from thebhatman:Mish_swish
* Added Swish and Mish activations

* Fixed whitespace errors

* Kernel implementation done

* Added function for launching kernel

* Changed type of 1.0

* Attempt to add test for Swish and Mish

* Resolving type mismatch for log

* exp from device

* Use log1pexp instead of adding 1

* Added openCL kernels
2019-12-02 00:06:17 +03:00
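For reference, the two activations compute swish(x) = x·sigmoid(x) and mish(x) = x·tanh(softplus(x)), and the log1pexp mentioned above evaluates softplus(x) = log(1 + exp(x)) without overflow for large x. A scalar sketch of the math (the actual kernels are vectorized CPU/OpenCL/CUDA code; the cutoff constants here are illustrative):

    #include <cmath>

    // softplus(x) = log(1 + e^x), computed stably:
    //   large x:  log(1 + e^x) ~= x      (e^x alone would overflow)
    //   small x:  log(1 + e^x) ~= e^x
    static float log1pexp(float x)
    {
        if (x > 20.0f)  return x;
        if (x < -20.0f) return std::exp(x);
        return std::log1p(std::exp(x));
    }

    static float swish(float x) { return x / (1.0f + std::exp(-x)); }
    static float mish(float x)  { return x * std::tanh(log1pexp(x)); }
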
thebhatman
8a18d132fc Port Swish and Mish layers 2019-12-01 11:55:39 +03:00
Alexander Alekhin
b6a58818bb Merge remote-tracking branch 'upstream/3.4' into merge-3.4 2019-11-11 20:25:42 +00:00
Lubov Batanina
cfc781949d Merge pull request #15811 from l-bat:eltwise_div
Supported ONNX Squeeze, ReduceL2 and Eltwise::DIV

* Support eltwise div

* Fix test

* OpenCL support added

* refactoring

* fix code style

* Only squeeze with axes supported
2019-11-09 14:11:09 +03:00
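These importer additions surface through the usual ONNX entry point: once Squeeze (with explicit axes), ReduceL2, and elementwise Div are mapped to dnn layers, models containing them load directly. A minimal sketch with a hypothetical model file:

    #include <vector>
    #include <opencv2/dnn.hpp>

    int main()
    {
        cv::dnn::Net net = cv::dnn::readNetFromONNX("model_with_div.onnx");
        cv::Mat blob(std::vector<int>{1, 3, 224, 224}, CV_32F, cv::Scalar(1));
        net.setInput(blob);
        cv::Mat out = net.forward();
        return 0;
    }
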
Dimitri Gerin
7c4158d8c2 Fix dnn::getLayerInputs 2019-11-07 08:13:33 +03:00
Alexander Alekhin
055ffc0425 Merge remote-tracking branch 'upstream/3.4' into merge-3.4 2019-10-24 18:21:19 +00:00
Alexander Alekhin
d65fead337 Merge pull request #15752 from dkurt:fix_15750 2019-10-24 07:06:32 +00:00
Dmitry Kurtaev
dfe0368835 Fix custom IE layers in case of no MKLDNN plugin 2019-10-21 19:09:44 +03:00
Yashas Samaga B L
613c12e590 Merge pull request #14827 from YashasSamaga:cuda4dnn-csl-low
CUDA backend for the DNN module

* stub cuda4dnn design

* minor fixes for tests and doxygen

* add csl public api directory to module headers

* add low-level CSL components

* add high-level CSL components

* integrate csl::Tensor into backbone code

* switch to CPU iff unsupported; otherwise, fail on error

* add fully connected layer

* add softmax layer

* add activation layers

* support arbitrary-rank TensorDescriptor

* pass input wrappers to `initCUDA()`

* add 1d/2d/3d-convolution

* add pooling layer

* reorganize and refactor code

* fixes for gcc, clang and doxygen; remove cxx14/17 code

* add blank_layer

* add LRN layer

* add rounding modes for pooling layer

* split tensor.hpp into tensor.hpp and tensor_ops.hpp

* add concat layer

* add scale layer

* add batch normalization layer

* split math.cu into activations.cu and math.hpp

* add eltwise layer

* add flatten layer

* add tensor transform api

* add asymmetric padding support for convolution layer

* add reshape layer

* fix rebase issues

* add permute layer

* add padding support for concat layer

* refactor and reorganize code

* add normalize layer

* optimize bias addition in scale layer

* add prior box layer

* fix and optimize normalize layer

* add asymmetric padding support for pooling layer

* add event API

* improve pooling performance for some padding scenarios

* avoid over-allocation of compute resources to kernels

* improve prior box performance

* enable layer fusion

* add const layer

* add resize layer

* add slice layer

* add padding layer

* add deconvolution layer

* fix channelwise ReLU initialization

* add vector traits

* add vectorized versions of relu, clipped_relu, power

* add vectorized concat kernels

* improve concat_with_offsets performance

* vectorize scale and bias kernels

* add support for multi-billion element tensors

* vectorize prior box kernels

* fix address alignment check

* improve bias addition performance of conv/deconv/fc layers

* restructure code for supporting multiple targets

* add DNN_TARGET_CUDA_FP64

* add DNN_TARGET_CUDA_FP16

* improve vectorization

* add region layer

* improve tensor API, add dynamic ranks

1. use ManagedPtr instead of a Tensor in backend wrapper
2. add new methods to tensor classes
  - size_range: computes the combined size for a given axis range
  - tensor span/view can be constructed from a raw pointer and shape
3. the tensor classes can change their rank at runtime (previously rank was fixed at compile-time)
4. remove device code from tensor classes (as they are unused)
5. enforce strict conditions on tensor class APIs to improve debugging ability

* fix parametric relu activation

* add squeeze/unsqueeze tensor API

* add reorg layer

* optimize permute and enable 2d permute

* enable 1d and 2d slice

* add split layer

* add shuffle channel layer

* allow tensors of different ranks in reshape primitive

* patch SliceOp to allow Crop Layer

* allow extra shape inputs in reshape layer

* use `std::move_backward` instead of `std::move` for insert in resizable_static_array

* improve workspace management

* add spatial LRN

* add nms (cpu) to region layer

* add max pooling with argmax (and a fix to limits.hpp)

* add max unpooling layer

* rename DNN_TARGET_CUDA_FP32 to DNN_TARGET_CUDA

* update supportBackend to be more rigorous

* remove stray include that was breaking the non-CUDA build

* include op_cuda.hpp outside the #if condition

* refactoring, fixes and many optimizations

* drop DNN_TARGET_CUDA_FP64

* fix gcc errors

* increase max. tensor rank limit to six

* add Interp layer

* drop custom layers; use BackendNode

* vectorize activation kernels

* fixes for gcc

* remove wrong assertion

* fix broken assertion in unpooling primitive

* fix build errors in non-CUDA build

* completely remove workspace from public API

* fix permute layer

* enable accuracy and perf. tests for DNN_TARGET_CUDA

* add asynchronous forward

* vectorize eltwise ops

* vectorize fill kernel

* fixes for gcc

* remove CSL headers from public API

* remove csl header source group from cmake

* update min. cudnn version in cmake

* add numerically stable FP32 log1pexp

* refactor code

* add FP16 specialization to cudnn based tensor addition

* vectorize scale1 and bias1 + minor refactoring

* fix doxygen build

* fix invalid alignment assertion

* clear backend wrappers before allocateLayers

* ignore memory lock failures

* do not allocate internal blobs

* integrate NVTX

* add numerically stable half precision log1pexp

* fix indentation, follow coding style, improve docs

* remove accidental modification of IE code

* Revert "add asynchronous forward"

This reverts commit 1154b9da9da07e9b52f8a81bdcea48cf31c56f70.

* [cmake] throw error for unsupported CC versions

* fix rebase issues

* add more docs, refactor code, fix bugs

* minor refactoring and fixes

* resolve warnings/errors from clang

* remove haveCUDA() checks from supportBackend()

* remove NVTX integration

* changes based on review comments

* avoid exception when no CUDA device is present

* add color code for CUDA in Net::dump
2019-10-21 14:28:00 +03:00
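From the user's side, the backend added by this PR is selected through the usual Net preferences: DNN_BACKEND_CUDA combined with DNN_TARGET_CUDA (FP32) or DNN_TARGET_CUDA_FP16 (see the rename/drop commits above). A sketch, assuming a CUDA-enabled OpenCV build and a placeholder model:

    #include <vector>
    #include <opencv2/dnn.hpp>

    int main()
    {
        cv::dnn::Net net = cv::dnn::readNet("model.onnx"); // placeholder
        net.setPreferableBackend(cv::dnn::DNN_BACKEND_CUDA);
        net.setPreferableTarget(cv::dnn::DNN_TARGET_CUDA);
        // Half precision instead, where the GPU supports it:
        // net.setPreferableTarget(cv::dnn::DNN_TARGET_CUDA_FP16);

        cv::Mat blob(std::vector<int>{1, 3, 224, 224}, CV_32F, cv::Scalar(0));
        net.setInput(blob);
        cv::Mat out = net.forward();
        return 0;
    }
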
Dmitry Kurtaev
af61a15839 Fix Darknet eltwise 2019-10-19 12:54:15 +03:00
Dmitry Kurtaev
adbd613660 Enable Eltwise layer with different numbers of inputs channels 2019-10-18 18:51:52 +03:00
Alexander Alekhin
626bfbf309 Merge remote-tracking branch 'upstream/3.4' into merge-3.4 2019-10-05 15:45:31 +00:00
Alexander Alekhin
feff8bf972 Merge pull request #15626 from alalek:dnn_openvino_2019r3 2019-10-04 19:45:37 +00:00
Alexander Alekhin
c13a5ce229 Merge pull request #15622 from dkurt:enet_ie_cpu 2019-10-04 16:31:05 +00:00
Dmitry Kurtaev
e35fd463e7 Enable ENet with Inference Engine backend on CPU 2019-10-04 18:10:11 +03:00
Alexander Alekhin
fd11e3a81d dnn: update IE tests 2019-10-04 17:49:10 +03:00
Alexander Alekhin
f0058bbed3 dnn(test): fix optional test data 2019-10-04 17:26:35 +03:00
Alexander Alekhin
3fb6617d62 Merge remote-tracking branch 'upstream/3.4' into merge-3.4 2019-10-02 17:49:19 +03:00
Alexander Alekhin
c06115cb3f Merge pull request #15619 from alalek:dnn_eltwise_sum_ie_ocl 2019-10-01 18:02:45 +00:00
Alexander Alekhin
8814645c8d dnn(test): skip IE/OCL test for "sum" 2019-10-01 18:29:47 +03:00
Alexander Alekhin
440a937d24 dnn: increase async test timeout 2019-10-01 13:31:57 +03:00
Dmitry Kurtaev
fba9fdfd27 Fix autodetection of input size for dnn networks 2019-09-30 16:05:15 +03:00
Alexander Alekhin
e2a5a6a05c Merge remote-tracking branch 'upstream/3.4' into merge-3.4 2019-09-25 18:32:44 +00:00
Alexander Alekhin
7ce9428e96 Merge pull request #15580 from smbz:dnn-lstm-reverse 2019-09-25 15:54:06 +00:00
Andrew Ryrie
b88435fdc2 dnn: Allow LSTM layer to operate in reverse direction
This is useful for bidirectional LSTMs.
2019-09-25 14:12:43 +01:00
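A bidirectional LSTM is then two LSTM layers over the same input, one running forward and one reversed, with their outputs concatenated. A sketch of toggling the mode, assuming this change exposes it as a `reverse` entry in LayerParams (the parameter name is an assumption):

    #include <opencv2/dnn.hpp>
    #include <opencv2/dnn/all_layers.hpp>

    int main()
    {
        cv::dnn::LayerParams lp;
        lp.type = "LSTM";
        lp.name = "lstm_backward";
        lp.set("reverse", true); // assumed flag: iterate timesteps last-to-first

        // Weights (Wh, Wx, bias) would be attached via lp.blobs before use.
        cv::Ptr<cv::dnn::LSTMLayer> lstm = cv::dnn::LSTMLayer::create(lp);
        return 0;
    }
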
Alexander Alekhin
ee23a7575d Merge pull request #15515 from dkurt:dnn_detection_model_fix 2019-09-18 12:19:14 +00:00
Alexander Alekhin
b4c5b50a3e Merge remote-tracking branch 'upstream/3.4' into merge-3.4 2019-09-13 17:15:45 +00:00
Dmitry Kurtaev
3ddb005480 Fix DetectionModel in case of out of [0, 1] range detection prediction 2019-09-13 12:58:56 +03:00
Dmitry Kurtaev
0428f60d66 Fix OpenVINO 2019R1 compilation 2019-09-13 09:22:34 +03:00