Commit Graph

24 Commits

Author SHA1 Message Date
Julien Maille
a696348ec5 FIX: disable dnn cuda input_shortcut on _half for CC<5.3 2020-01-17 14:21:25 +01:00
Yashas Samaga B L
d85e67d3ec Merge pull request #16063 from YashasSamaga:cuda4dnn-shortcut-unequal
support eltwise sum with differing numbers of input channels in the CUDA backend

* add shortcut primitive

* add offsets in shortcut kernel

* skip tests involving more than two inputs

* remove redundant modulus operation

* support multiple inputs

* remove whole file indentation

* skip acc in0 trunc test if weighted

* use shortcut iff channels are unequal
2020-01-16 21:54:00 +03:00
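
The shortcut primitive above sums two tensors whose channel counts differ by adding only over the overlapping channels. A minimal CUDA sketch of the idea (illustrative names and layout, not the actual cuda4dnn kernel):

    // Eltwise sum in NCHW layout where `from` has fewer channels than
    // `output`; only the first `from_ch` channels of each sample are summed.
    __global__ void shortcut_sum(float* output, const float* input,
                                 const float* from,
                                 int batch, int out_ch, int from_ch, int hw)
    {
        const int total = batch * out_ch * hw;
        for (int i = blockIdx.x * blockDim.x + threadIdx.x;
             i < total; i += gridDim.x * blockDim.x)
        {
            const int c = (i / hw) % out_ch;   // channel of this element
            const int n = i / (hw * out_ch);   // sample index
            float sum = input[i];
            if (c < from_ch)                   // overlap region only
                sum += from[(n * from_ch + c) * hw + i % hw];
            output[i] = sum;
        }
    }
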
Julien
ced3df73da Fix: rsqrt(float) was improperly placed inside the ifdef for half 2020-01-16 09:21:50 +01:00
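
The bug fixed above: the float overload had been placed inside the guard that exists only for half-precision intrinsics, so it vanished on older GPUs. The corrected placement, schematically (a hedged reconstruction, not the exact diff):

    #include <cuda_fp16.h>

    namespace device {
        // valid on every compute capability
        __device__ float rsqrt(float x) { return rsqrtf(x); }

    #if !defined(__CUDA_ARCH__) || (__CUDA_ARCH__ >= 530)
        // half-precision intrinsics require compute capability >= 5.3
        __device__ __half rsqrt(__half x) { return hrsqrt(x); }
    #endif
    }
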
Julien
4e2ef8c8f5 Merge pull request #16218 from JulienMaille:cuda-dnn-for-older-gpus
Enable cuda4dnn on hardware without support for __half

* Enable cuda4dnn on hardware without support for half (i.e. compute capability < 5.3)

Update CMakeLists.txt

Lowered minimum CC to 3.0

* UPD: added ifdef on new copy kernel

* added fp16 support detection at runtime

* Clarified #if condition on atomicAdd definition

* More explicit CMake error message
2020-01-15 18:28:37 +03:00
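
Compile-time guards alone are not enough, since a single binary may target several architectures, hence the runtime fp16 detection added above. A hedged sketch using the CUDA runtime API (the function name is illustrative):

    #include <cuda_runtime.h>

    // Half-precision arithmetic needs compute capability >= 5.3, which is
    // only known once the active device is queried at runtime.
    bool deviceSupportsFP16(int device_id)
    {
        cudaDeviceProp prop;
        if (cudaGetDeviceProperties(&prop, device_id) != cudaSuccess)
            return false;                           // no usable device
        return prop.major * 10 + prop.minor >= 53;
    }
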
Alexander Alekhin
1f2b2c5242 Merge pull request #16230 from YashasSamaga:cuda4dnn-fp-conversion 2020-01-05 11:59:33 +00:00
YashasSamaga
48eecafc89 simplify code to help MSVC 19.10 and lower 2019-12-30 23:02:17 +05:30
YashasSamaga
01f97f150c perform fp conversions on GPU 2019-12-30 00:05:39 +05:30
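
Moving the fp32 <-> fp16 conversions onto the GPU keeps this embarrassingly parallel step off the CPU. The idea as a minimal grid-stride kernel (a sketch, not the cuda4dnn implementation):

    #include <cuda_fp16.h>

    // Each thread converts a strided subset of elements, so any launch
    // configuration covers any n.
    __global__ void fp32_to_fp16(__half* dst, const float* src, int n)
    {
        for (int i = blockIdx.x * blockDim.x + threadIdx.x;
             i < n; i += gridDim.x * blockDim.x)
            dst[i] = __float2half(src[i]);
    }
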
Alexander Alekhin
9ec3d76b21 Merge pull request #16241 from bwignall:typo 2019-12-27 16:18:57 +00:00
Brian Wignall
659ffaddb4 Fix spelling typos 2019-12-26 06:45:03 -05:00
YashasSamaga
16bc505d26 improve reduction logic and add fast transpose kernel 2019-12-24 00:23:45 +05:30
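
A fast transpose kernel typically stages a tile in shared memory so that both the global load and the global store are coalesced. A generic sketch of that pattern (not the cuda4dnn kernel itself):

    constexpr int TILE = 32;

    __global__ void transpose(float* dst, const float* src, int rows, int cols)
    {
        // +1 padding column avoids shared-memory bank conflicts
        __shared__ float tile[TILE][TILE + 1];

        int c = blockIdx.x * TILE + threadIdx.x;
        int r = blockIdx.y * TILE + threadIdx.y;
        if (r < rows && c < cols)
            tile[threadIdx.y][threadIdx.x] = src[r * cols + c];
        __syncthreads();

        // block (x, y) writes the transposed tile at block (y, x)
        c = blockIdx.y * TILE + threadIdx.x;
        r = blockIdx.x * TILE + threadIdx.y;
        if (r < cols && c < rows)
            dst[r * rows + c] = tile[threadIdx.x][threadIdx.y];
    }
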
Alexander Alekhin
b8e0898c7c Merge pull request #16082 from YashasSamaga:cuda4dnn-roi-pooling 2019-12-18 14:41:58 +00:00
Yashas Samaga B L
17c485eb03 Merge pull request #16092 from YashasSamaga:cuda4dnn-conv-act-fuse
cuda4dnn: fuse activations with convolutions

* fuse ReLU, ReLU6, TanH, Sigmoid with conv

* fix OpenCL errors

* improve ReLU, add power, swish and mish

* fix missing fusion entries

* fix handling of unsetAttached

* remove whole file indentation

* optimize power = 1.0, use IDENTITY instead of NONE

* handle edge case: change backend and then clear
2019-12-14 22:26:58 +03:00
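
Fusing an activation into the convolution runs it as an epilogue on the convolution output rather than as a separate pass, saving a kernel launch plus a full read and write of the intermediate tensor. A schematic epilogue (the enum and dispatch are illustrative, not the cuda4dnn code):

    enum class Activation { IDENTITY, RELU, RELU6, TANH, SIGMOID };

    // Applied in-place to the convolution output; dead branches are
    // eliminated at compile time since `act` is a template parameter.
    template <Activation act>
    __global__ void activation_epilogue(float* inout, int n, float slope)
    {
        for (int i = blockIdx.x * blockDim.x + threadIdx.x;
             i < n; i += gridDim.x * blockDim.x)
        {
            float x = inout[i];
            if (act == Activation::RELU)    x = x > 0.f ? x : slope * x;
            if (act == Activation::RELU6)   x = fminf(fmaxf(x, 0.f), 6.f);
            if (act == Activation::TANH)    x = tanhf(x);
            if (act == Activation::SIGMOID) x = 1.f / (1.f + expf(-x));
            inout[i] = x;
        }
    }
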
Yashas Samaga B L
3fddd3bf93 Merge pull request #16069 from YashasSamaga:cuda4dnn-crop_and_resize
add CropAndResize layer for CUDA backend

* add CropAndResize layer

* process multiple channels per iteration
2019-12-09 22:26:58 +03:00
Alexander Alekhin
b505cf84de Merge pull request #16096 from YashasSamaga:cuda4dnn-region-optimize 2019-12-09 14:34:48 +00:00
Yashas Samaga B L
476a02739e Merge pull request #16097 from YashasSamaga:cuda4dnn-optimize-resize-bilinear
cuda4dnn(resize): process multiple channels each iteration

* resize bilinear: process multiple chans. per iter.

* remove unused headers

* correct dispatch logic

* resize_nn: process multiple chans. per iter.
2019-12-09 17:31:27 +03:00
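
The optimization in both resize kernels exploits the fact that interpolation coordinates and weights depend only on the spatial position, not the channel: compute them once per output (y, x) and reuse them across channels. A sketch for the bilinear case (single image, NCHW; half-pixel offset conventions omitted):

    __global__ void resize_bilinear(float* dst, const float* src, int channels,
                                    int in_h, int in_w, int out_h, int out_w,
                                    float scale_y, float scale_x)
    {
        int x = blockIdx.x * blockDim.x + threadIdx.x;
        int y = blockIdx.y * blockDim.y + threadIdx.y;
        if (x >= out_w || y >= out_h) return;

        float fy = y * scale_y, fx = x * scale_x;
        int y0 = (int)fy, x0 = (int)fx;
        int y1 = min(y0 + 1, in_h - 1), x1 = min(x0 + 1, in_w - 1);
        float wy = fy - y0, wx = fx - x0;   // computed once, reused below

        for (int c = 0; c < channels; ++c)
        {
            const float* in = src + c * in_h * in_w;
            float top = in[y0 * in_w + x0] * (1 - wx) + in[y0 * in_w + x1] * wx;
            float bot = in[y1 * in_w + x0] * (1 - wx) + in[y1 * in_w + x1] * wx;
            dst[(c * out_h + y) * out_w + x] = top * (1 - wy) + bot * wy;
        }
    }
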
Yashas
dd3f517fe9 optimize region kernels 2019-12-08 21:03:30 +05:30
Alexander Alekhin
202ba124a5 Merge pull request #16087 from YashasSamaga:cuda4dnn-eltwise-div 2019-12-06 18:33:55 +00:00
YashasSamaga
a91eca6ec2 add DIV support to EltwiseOp 2019-12-06 21:28:36 +05:30
YashasSamaga
9b8ddba4d1 add ROIPoolingOp 2019-12-06 18:19:37 +05:30
YashasSamaga
fbb3f64a1a fix expm1 and log1p for __half/__half2 2019-12-03 15:25:35 +05:30
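
__half has no native expm1/log1p, and computing exp(x) - 1 directly in half precision loses all significant bits near x = 0, exactly where expm1 matters. The usual remedy, sketched here, is to promote to float, use the accurate float intrinsic, and convert back:

    #include <cuda_fp16.h>

    namespace device {
        __device__ __half expm1(__half x)
        { return __float2half(expm1f(__half2float(x))); }

        __device__ __half log1p(__half x)
        { return __float2half(log1pf(__half2float(x))); }
    }
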
Manjunath Bhat
78c5e41c23 Merge pull request #15808 from thebhatman:Mish_swish
* Added Swish and Mish activations

* Fixed whitespace errors

* Kernel implementation done

* Added function for launching kernel

* Changed type of 1.0

* Attempt to add test for Swish and Mish

* Resolving type mismatch for log

* exp from device

* Use log1pexp instead of adding 1

* Added openCL kernels
2019-12-02 00:06:17 +03:00
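
For reference, the two activations added here are Swish(x) = x * sigmoid(x) and Mish(x) = x * tanh(softplus(x)), with softplus evaluated via log1pexp as the commits note. Sketched as device functions (a stable piecewise log1pexp is shown after the CUDA backend merge entry below):

    __device__ float log1pexp(float x)       // softplus: log(1 + exp(x))
    { return x > 0.f ? x + log1pf(expf(-x)) : log1pf(expf(x)); }

    __device__ float swish(float x) { return x / (1.0f + expf(-x)); }
    __device__ float mish(float x)  { return x * tanhf(log1pexp(x)); }
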
Brian Wignall
9276f1910b Fix some typos 2019-11-25 19:55:07 -05:00
Yashas Samaga B L
613c12e590 Merge pull request #14827 from YashasSamaga:cuda4dnn-csl-low
CUDA backend for the DNN module

* stub cuda4dnn design

* minor fixes for tests and doxygen

* add csl public api directory to module headers

* add low-level CSL components

* add high-level CSL components

* integrate csl::Tensor into backbone code

* switch to CPU iff unsupported; otherwise, fail on error

* add fully connected layer

* add softmax layer

* add activation layers

* support arbitrary rank TensorDescriptor

* pass input wrappers to `initCUDA()`

* add 1d/2d/3d-convolution

* add pooling layer

* reorganize and refactor code

* fixes for gcc, clang and doxygen; remove cxx14/17 code

* add blank_layer

* add LRN layer

* add rounding modes for pooling layer

* split tensor.hpp into tensor.hpp and tensor_ops.hpp

* add concat layer

* add scale layer

* add batch normalization layer

* split math.cu into activations.cu and math.hpp

* add eltwise layer

* add flatten layer

* add tensor transform api

* add asymmetric padding support for convolution layer

* add reshape layer

* fix rebase issues

* add permute layer

* add padding support for concat layer

* refactor and reorganize code

* add normalize layer

* optimize bias addition in scale layer

* add prior box layer

* fix and optimize normalize layer

* add asymmetric padding support for pooling layer

* add event API

* improve pooling performance for some padding scenarios

* avoid over-allocation of compute resources to kernels

* improve prior box performance

* enable layer fusion

* add const layer

* add resize layer

* add slice layer

* add padding layer

* add deconvolution layer

* fix channelwise ReLU initialization

* add vector traits

* add vectorized versions of relu, clipped_relu, power

* add vectorized concat kernels

* improve concat_with_offsets performance

* vectorize scale and bias kernels

* add support for multi-billion element tensors

* vectorize prior box kernels

* fix address alignment check

* improve bias addition performance of conv/deconv/fc layers

* restructure code for supporting multiple targets

* add DNN_TARGET_CUDA_FP64

* add DNN_TARGET_FP16

* improve vectorization

* add region layer

* improve tensor API, add dynamic ranks

1. use ManagedPtr instead of a Tensor in backend wrapper
2. add new methods to tensor classes
  - size_range: computes the combined size for a given axis range
  - tensor span/view can be constructed from a raw pointer and shape
3. the tensor classes can change their rank at runtime (previously rank was fixed at compile-time)
4. remove device code from tensor classes (as it is unused)
5. enforce strict conditions on tensor class APIs to improve debugging ability

* fix parametric relu activation

* add squeeze/unsqueeze tensor API

* add reorg layer

* optimize permute and enable 2d permute

* enable 1d and 2d slice

* add split layer

* add shuffle channel layer

* allow tensors of different ranks in reshape primitive

* patch SliceOp to allow Crop Layer

* allow extra shape inputs in reshape layer

* use `std::move_backward` instead of `std::move` for insert in resizable_static_array

* improve workspace management

* add spatial LRN

* add nms (cpu) to region layer

* add max pooling with argmax (and a fix to limits.hpp)

* add max unpooling layer

* rename DNN_TARGET_CUDA_FP32 to DNN_TARGET_CUDA

* update supportBackend to be more rigorous

* remove stray include that was breaking the non-CUDA build

* include op_cuda.hpp outside of the #if condition

* refactoring, fixes and many optimizations

* drop DNN_TARGET_CUDA_FP64

* fix gcc errors

* increase max. tensor rank limit to six

* add Interp layer

* drop custom layers; use BackendNode

* vectorize activation kernels

* fixes for gcc

* remove wrong assertion

* fix broken assertion in unpooling primitive

* fix build errors in non-CUDA build

* completely remove workspace from public API

* fix permute layer

* enable accuracy and perf. tests for DNN_TARGET_CUDA

* add asynchronous forward

* vectorize eltwise ops

* vectorize fill kernel

* fixes for gcc

* remove CSL headers from public API

* remove csl header source group from cmake

* update min. cudnn version in cmake

* add numerically stable FP32 log1pexp

* refactor code

* add FP16 specialization to cudnn based tensor addition

* vectorize scale1 and bias1 + minor refactoring

* fix doxygen build

* fix invalid alignment assertion

* clear backend wrappers before allocateLayers

* ignore memory lock failures

* do not allocate internal blobs

* integrate NVTX

* add numerically stable half precision log1pexp

* fix indentation, follow coding style, improve docs

* remove accidental modification of IE code

* Revert "add asynchronous forward"

This reverts commit 1154b9da9da07e9b52f8a81bdcea48cf31c56f70.

* [cmake] throw error for unsupported CC versions

* fix rebase issues

* add more docs, refactor code, fix bugs

* minor refactoring and fixes

* resolve warnings/errors from clang

* remove haveCUDA() checks from supportBackend()

* remove NVTX integration

* changes based on review comments

* avoid exception when no CUDA device is present

* add color code for CUDA in Net::dump
2019-10-21 14:28:00 +03:00
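
One item above worth expanding: a naive log(1 + exp(x)) overflows once exp(x) exceeds FLT_MAX (x > ~88.7), even though the true result there is simply ~x. The standard piecewise evaluation (thresholds approximate, after Maechler's accurate log1pexp note; not necessarily the exact constants used in cuda4dnn):

    // Numerically stable log1pexp(x) = log(1 + exp(x)) for fp32.
    __device__ float log1pexp(float x)
    {
        if (x <= -20.0f) return expf(x);          // log1p(e) ~ e for tiny e
        if (x <= 9.0f)   return log1pf(expf(x));  // direct form is safe here
        if (x <= 14.6f)  return x + expf(-x);     // log(1+e^x) ~ x + e^-x
        return x;                                 // e^-x below fp32 epsilon
    }
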
Yashas Samaga B L
ae279966c2 Merge pull request #14660 from YashasSamaga:dnn-cuda-build
add cuDNN dependency and set up build for cuda4dnn (#14660)

* update cmake for cuda4dnn

- Adds FindCUDNN
- Adds new options:
   * WITH_CUDA
   * OPENCV_DNN_CUDA
- Adds CUDA4DNN preprocessor symbol for the DNN module

* FIX: append EXCLUDE_CUDA instead of overwrite

* remove cuDNN dependency for user apps

* fix unused variable warning
2019-06-02 14:47:15 +03:00
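
With FindCUDNN and the new options in place, a CUDA-enabled DNN build is configured along these lines (the source path and architecture list are illustrative; CUDA_ARCH_BIN is OpenCV's usual architecture selector):

    cmake -DWITH_CUDA=ON \
          -DOPENCV_DNN_CUDA=ON \
          -DCUDA_ARCH_BIN=7.5 \
          <path-to-opencv-source>
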