Commit Graph

21 Commits

Author SHA1 Message Date
Alexander Smorkalov
cb3af0a08f Merge branch 4.x 2024-09-23 14:18:25 +03:00
Maksim Shabunin
4c81e174bf
Merge pull request #25901 from mshabunin:fix-riscv-aarch-baseline
RISC-V/AArch64: disable CPU features detection #25901

This PR is the first step in fixing current issues with NEON/RVV, FP16, BF16 and other CPU features on AArch64 and RISC-V platforms.

On AArch64 and RISC-V platforms the target platform is usually set by the compiler's built-in defaults, by the cmake toolchain file, or by the user via CMAKE_CXX_FLAGS. There are two ways to set platform options: a) "-mcpu=<some_cpu>"; b) "-march=<arch description>" (e.g. "rv64gcv"). Furthermore, there are no optimization "levels" similar to x86_64; instead we have features (RVV, FP16, ...) which can be enabled or disabled individually. So, for example, if a user has "rv64gc" set by the toolchain and we want to enable RVV, we need to somehow parse their current feature set and append "v" (the vector extension) to this string. This task is quite hard and the whole procedure is prone to errors.
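To see why such merging is fragile, here is a deliberately naive sketch of the kind of string surgery it would require; the variable names are hypothetical and this is not OpenCV's actual code:

```cmake
# Hypothetical sketch: append the RVV extension to an existing -march value.
# It assumes the flag appears exactly once and in canonical form -- neither
# holds in general, which is why the whole approach is error-prone.
string(REGEX MATCH "-march=([a-z0-9_]+)" _march_match "${CMAKE_CXX_FLAGS}")
if(_march_match)
  set(_arch "${CMAKE_MATCH_1}")   # e.g. "rv64gc"
  if(NOT _arch MATCHES "v")       # already wrong: "rv64gc" contains "v" in "rv"
    string(REPLACE "-march=${_arch}" "-march=${_arch}v"
           CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS}")
  endif()
endif()
```

It also silently does nothing when the platform comes from `-mcpu` or from the compiler's built-in default, where there is no `-march` string to patch.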

I propose to use "CPU_BASELINE=DETECT" by default on AArch64 and RISC-V platforms, and to remove the other feature options or make them read-only/detect-only, so that OpenCV doesn't add any extra "-march" flags to the default configuration. We would rely only on the flags provided by the compiler and the cmake toolchain file, and we can ship some predefined configurations in our cmake toolchain files.

Changes made by this PR:
- `CMakeLists.txt`: 
  - use `CMAKE_CROSSCOMPILING` instead of `CMAKE_TOOLCHAIN_FILE` to detect cross-compilation; this matters when a toolchain file is used for native compilation
  - removed obsolete variables `ENABLE_NEON` and `ENABLE_VFPV3`; the first one had been turned ON by default on the AArch64 platform, which caused `CPU_BASELINE=NEON` to be set
  - raised the minimum allowed cmake version to 3.7 to permit using `CMAKE_CXX_FLAGS_INIT` in toolchain files
- added separate files with arch flags for native compilation on AArch64 and RISC-V, these files will be used in our toolchain files and in regular cmake
- use `DETECT` as the default value for `CPU_BASELINE`; also allow `NATIVE`, and warn the user if other values are used (only for AArch64 and RISC-V)
- for each feature listed in `CPU_DISPATCH`, check whether the corresponding `CPU_${opt}_FLAGS_ON` has been provided and warn the user if it is empty (only for AArch64 and RISC-V)
- use `CPU_BASELINE_DISABLE` variable to actually turn off macros responsible for corresponding features even if they are enabled by compiler
- removed the AArch64 feature merge procedure (it didn't support `-mcpu` and built-in `-march`)
- reworked AArch64 and two RISC-V cmake toolchain files (does not affect Android/OSX/iOS/Win):
  - use `CMAKE_CXX_FLAGS_INIT` to set compiler flags
  - use variables `ENABLE_BF16`, `ENABLE_DOTPROD`, `ENABLE_RVV`, `ENABLE_FP16` to control `-march` (see the sketch after this list)
  - AArch64: removed other compiler and linker flags
    - `-fdata-sections`, `-fsigned-char`, `-Wl,--no-undefined`, `-Wl,--gc-sections` - already set by OpenCV
    - `-Wa,--noexecstack`, `-Wl,-z,noexecstack`, `-Wl,-z,relro`, `-Wl,-z,now` - can be enabled by OpenCV via `ENABLE_HARDENING`
    - `-Wno-psabi` - this option was used to disable some warnings on older ARM platforms; shouldn't harm
  - ARM: removed the same common flags as for AArch64, but kept `-mthumb`, `--fix-cortex-a8`, and `-z nocopyreloc`
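As a concrete illustration of the reworked toolchain files, here is a minimal sketch of a RISC-V toolchain file along these lines; the exact contents are illustrative, not a copy of the shipped files:

```cmake
# Illustrative RISC-V cross-compilation toolchain file (not the shipped one).
set(CMAKE_SYSTEM_NAME Linux)
set(CMAKE_SYSTEM_PROCESSOR riscv64)
set(CMAKE_C_COMPILER   riscv64-unknown-linux-gnu-gcc)
set(CMAKE_CXX_COMPILER riscv64-unknown-linux-gnu-g++)

# ENABLE_RVV toggles the vector extension in -march; CMAKE_CXX_FLAGS_INIT
# (hence the cmake 3.7 minimum) seeds the flags without clobbering any
# flags the user passes on the command line.
if(ENABLE_RVV)
  set(CMAKE_C_FLAGS_INIT   "-march=rv64gcv")
  set(CMAKE_CXX_FLAGS_INIT "-march=rv64gcv")
else()
  set(CMAKE_C_FLAGS_INIT   "-march=rv64gc")
  set(CMAKE_CXX_FLAGS_INIT "-march=rv64gc")
endif()
```

With `CPU_BASELINE=DETECT`, OpenCV then probes what this configuration actually provides instead of appending its own `-march` flags.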
2024-09-12 18:07:24 +03:00
Alexander Smorkalov
1a3523d2d8 Drop Python2 support. 2023-07-14 15:06:53 +03:00
Alexander Alekhin
70f69cb265 highgui: backends and plugins 2021-05-24 16:12:02 +00:00
Yashas Samaga B L
613c12e590 Merge pull request #14827 from YashasSamaga:cuda4dnn-csl-low
CUDA backend for the DNN module

* stub cuda4dnn design

* minor fixes for tests and doxygen

* add csl public api directory to module headers

* add low-level CSL components

* add high-level CSL components

* integrate csl::Tensor into backbone code

* switch to CPU iff unsupported; otherwise, fail on error

* add fully connected layer

* add softmax layer

* add activation layers

* support arbitrary rank TensorDescriptor

* pass input wrappers to `initCUDA()`

* add 1d/2d/3d-convolution

* add pooling layer

* reorganize and refactor code

* fixes for gcc, clang and doxygen; remove cxx14/17 code

* add blank_layer

* add LRN layer

* add rounding modes for pooling layer

* split tensor.hpp into tensor.hpp and tensor_ops.hpp

* add concat layer

* add scale layer

* add batch normalization layer

* split math.cu into activations.cu and math.hpp

* add eltwise layer

* add flatten layer

* add tensor transform api

* add asymmetric padding support for convolution layer

* add reshape layer

* fix rebase issues

* add permute layer

* add padding support for concat layer

* refactor and reorganize code

* add normalize layer

* optimize bias addition in scale layer

* add prior box layer

* fix and optimize normalize layer

* add asymmetric padding support for pooling layer

* add event API

* improve pooling performance for some padding scenarios

* avoid over-allocation of compute resources to kernels

* improve prior box performance

* enable layer fusion

* add const layer

* add resize layer

* add slice layer

* add padding layer

* add deconvolution layer

* fix channelwise ReLU initialization

* add vector traits

* add vectorized versions of relu, clipped_relu, power

* add vectorized concat kernels

* improve concat_with_offsets performance

* vectorize scale and bias kernels

* add support for multi-billion element tensors

* vectorize prior box kernels

* fix address alignment check

* improve bias addition performance of conv/deconv/fc layers

* restructure code for supporting multiple targets

* add DNN_TARGET_CUDA_FP64

* add DNN_TARGET_FP16

* improve vectorization

* add region layer

* improve tensor API, add dynamic ranks

1. use ManagedPtr instead of a Tensor in backend wrapper
2. add new methods to tensor classes
  - size_range: computes the combined size for a given axis range
  - tensor span/view can be constructed from a raw pointer and shape
3. the tensor classes can change their rank at runtime (previously rank was fixed at compile-time)
4. remove device code from tensor classes (as it is unused)
5. enforce strict conditions on tensor class APIs to improve debugging ability

* fix parametric relu activation

* add squeeze/unsqueeze tensor API

* add reorg layer

* optimize permute and enable 2d permute

* enable 1d and 2d slice

* add split layer

* add shuffle channel layer

* allow tensors of different ranks in reshape primitive

* patch SliceOp to allow Crop Layer

* allow extra shape inputs in reshape layer

* use `std::move_backward` instead of `std::move` for insert in resizable_static_array

* improve workspace management

* add spatial LRN

* add nms (cpu) to region layer

* add max pooling with argmax (and a fix to limits.hpp)

* add max unpooling layer

* rename DNN_TARGET_CUDA_FP32 to DNN_TARGET_CUDA

* update supportBackend to be more rigorous

* remove stray include that prevented the non-CUDA build

* include op_cuda.hpp outside the #if condition

* refactoring, fixes and many optimizations

* drop DNN_TARGET_CUDA_FP64

* fix gcc errors

* increase max. tensor rank limit to six

* add Interp layer

* drop custom layers; use BackendNode

* vectorize activation kernels

* fixes for gcc

* remove wrong assertion

* fix broken assertion in unpooling primitive

* fix build errors in non-CUDA build

* completely remove workspace from public API

* fix permute layer

* enable accuracy and perf. tests for DNN_TARGET_CUDA

* add asynchronous forward

* vectorize eltwise ops

* vectorize fill kernel

* fixes for gcc

* remove CSL headers from public API

* remove csl header source group from cmake

* update min. cudnn version in cmake

* add numerically stable FP32 log1pexp (see the identity sketched at the end of this list)

* refactor code

* add FP16 specialization to cudnn based tensor addition

* vectorize scale1 and bias1 + minor refactoring

* fix doxygen build

* fix invalid alignment assertion

* clear backend wrappers before allocateLayers

* ignore memory lock failures

* do not allocate internal blobs

* integrate NVTX

* add numerically stable half precision log1pexp

* fix indentation, follow coding style, improve docs

* remove accidental modification of IE code

* Revert "add asynchronous forward"

This reverts commit 1154b9da9da07e9b52f8a81bdcea48cf31c56f70.

* [cmake] throw error for unsupported CC versions

* fix rebase issues

* add more docs, refactor code, fix bugs

* minor refactoring and fixes

* resolve warnings/errors from clang

* remove haveCUDA() checks from supportBackend()

* remove NVTX integration

* changes based on review comments

* avoid exception when no CUDA device is present

* add color code for CUDA in Net::dump
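For reference on the numerically stable log1pexp mentioned in the list above, this is the standard identity (the actual CUDA kernels are not shown in this log):

$$\operatorname{log1pexp}(x) = \log\bigl(1 + e^{x}\bigr) = \max(x, 0) + \operatorname{log1p}\bigl(e^{-\lvert x \rvert}\bigr)$$

The naive form overflows $e^{x}$ for moderately large $x$ (around 89 in FP32, around 11 in FP16), while the right-hand side keeps the exponent non-positive, so the result degrades gracefully to $x$ for large $x$ and to $e^{x}$ for very negative $x$.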
2019-10-21 14:28:00 +03:00
Alexander Alekhin
19a4b51371 Merge remote-tracking branch 'upstream/3.4' into merge-3.4 2019-08-16 18:48:08 +03:00
Alexander Alekhin
5ef548a985 cmake: update initialization 2019-08-08 15:23:16 +03:00
Yashas Samaga B L
ae279966c2 Merge pull request #14660 from YashasSamaga:dnn-cuda-build
add cuDNN dependency and setup build for cuda4dnn (#14660)

* update cmake for cuda4dnn

- Adds FindCUDNN
- Adds new options:
   * WITH_CUDA
   * OPENCV_DNN_CUDA
- Adds CUDA4DNN preprocessor symbol for the DNN module (see the sketch after this list)

* FIX: append to EXCLUDE_CUDA instead of overwriting it

* remove cuDNN dependency for user apps

* fix unused variable warning
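A rough sketch of how the pieces named above fit together; the option names come from the message, while the wiring itself is illustrative rather than OpenCV's actual cmake code:

```cmake
# Illustrative wiring of the new CUDA options (not the actual OpenCV code).
option(WITH_CUDA       "Include NVIDIA CUDA support"              OFF)
option(OPENCV_DNN_CUDA "Build the DNN module with a CUDA backend" OFF)

if(WITH_CUDA AND OPENCV_DNN_CUDA)
  find_package(CUDNN REQUIRED)   # resolved by the new FindCUDNN module
  add_definitions(-DCUDA4DNN=1)  # preprocessor symbol for the DNN module
endif()
```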
2019-06-02 14:47:15 +03:00
Alexander Alekhin
dada5a422d Merge remote-tracking branch 'upstream/3.4' into merge-3.4 2018-10-09 21:20:15 +00:00
Alexander Alekhin
913c4151bf
Merge pull request #12725 from alalek:cmake_python_win32
* cmake: don't ignore Python from PATH environment variable

- this breaks selection between 32/64-bit Python
- this breaks Anaconda/Conda environments
- it is not the CMake default behavior, expected by many projects

* cmake: add Python version check, fallback path on CMake 3.12+

* cmake: drop Python 2.6, allow version selection for Python 3.x
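A hedged sketch of the described behavior: prefer the interpreter found on PATH, validate its version, and fall back to CMake's newer detection on 3.12+. Variable names and the exact fallback are illustrative, not the actual OpenCVDetectPython.cmake logic:

```cmake
# Illustrative only; not the actual OpenCVDetectPython.cmake logic.
find_program(PYTHON_EXECUTABLE NAMES python3 python)  # PATH is consulted first
execute_process(COMMAND "${PYTHON_EXECUTABLE}" -c
                "import sys; print('%d.%d' % sys.version_info[:2])"
                OUTPUT_VARIABLE _py_version
                OUTPUT_STRIP_TRAILING_WHITESPACE)
if(_py_version VERSION_LESS "2.7")  # Python 2.6 support is dropped
  if(NOT CMAKE_VERSION VERSION_LESS "3.12")
    find_package(Python3 COMPONENTS Interpreter)  # fallback path on CMake 3.12+
  else()
    message(FATAL_ERROR "Python ${_py_version} is too old")
  endif()
endif()
```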
2018-10-08 17:45:50 +03:00
Alexander Alekhin
d4688e6474 cmake: require C++11 and CMake 3.5.1+ 2018-04-10 18:09:54 +03:00
Alexander Alekhin
92f182bc3b minimal CMake version => 2.8.12.2 2017-07-21 14:06:19 +03:00
Vladislav Vinogradov
112903c2bd increase minimal supported CUDA toolkit to 6.5 2016-07-13 13:02:13 +03:00
Michael Pratt
cac1218eef Build both Python 2 and Python 3 bindings
If both Python 2 and Python 3 are found, then build bindings for both of
them during the build process.  Currently, one version of Python is
detected automatically, and building for the other requires changes to the
CMake config.

The largest chunk of this change generalizes OpenCVDetectPython.cmake to
find both a Python 2 and Python 3 version of Python.  Secondly, the
opencv_python module is split into two modules, opencv_python2 and
opencv_python3. Both are built from the same source, but for different
versions of Python.
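A minimal sketch of the "same source, two interpreters" idea; the `PYTHON2_*`/`PYTHON3_*` variable names and the plain `add_library` call are illustrative, since the real build goes through OpenCV's module machinery:

```cmake
# Illustrative: build the binding source once per detected interpreter.
foreach(ver 2 3)
  if(PYTHON${ver}_EXECUTABLE)  # assumed to be set by the detection script
    add_library(opencv_python${ver} MODULE src2/cv2.cpp)
    target_include_directories(opencv_python${ver} PRIVATE
        ${PYTHON${ver}_INCLUDE_DIRS})
    target_link_libraries(opencv_python${ver} ${PYTHON${ver}_LIBRARIES})
  endif()
endforeach()
```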
2014-06-29 20:08:13 -04:00
Tony
46ba9d30b9 Merge remote-tracking branch 'upstream/master' 2014-03-25 20:44:06 +00:00
Tony
69dc840583 Improve Gtk2/3 options in cmake
Updates to cmake files to include minimum versions, and to tidy up the gtk handling.

Files updated:
CMakeLists.txt:
  WITH_GTK now uses Gtk3 by default. If not found then Gtk2 is used.
  WITH_GTK_2_X forces Gtk2.x use

cmake/OpenCVFindLibsGUI.cmake
  Updated selection logic to implement methodology described above.
  Implemented warning if Gtk3 not found (and not overridden)
  Implemented error if Gtk does not meet minimum required version

cmake/OpenCVMinDepVersions.cmake
  Added minimum Gtk version of 2.18.0
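A rough sketch of the selection logic described above, using stock pkg-config lookups; the real implementation in cmake/OpenCVFindLibsGUI.cmake differs in detail:

```cmake
# Illustrative GTK selection: prefer Gtk3, fall back to Gtk2 (>= 2.18.0).
find_package(PkgConfig REQUIRED)
if(WITH_GTK AND NOT WITH_GTK_2_X)
  pkg_check_modules(GTK3 gtk+-3.0)
  if(NOT GTK3_FOUND)
    message(WARNING "Gtk3 was not found; falling back to Gtk2")
  endif()
endif()
if(WITH_GTK AND NOT GTK3_FOUND)
  pkg_check_modules(GTK2 gtk+-2.0>=2.18.0)
  if(NOT GTK2_FOUND)
    message(FATAL_ERROR "Gtk 2.18.0 or later is required")
  endif()
endif()
```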
2013-11-26 21:35:03 +00:00
Roman Donchenko
49c6533227 Move the minimal CUDA version into the minimal version list. 2013-10-22 14:32:13 +04:00
Roman Donchenko
dbb684b85f Bumped minimal Python version to 2.6.
Rationale: we already depend on it (e.g. some scripts use print_function).
2013-08-23 18:46:54 +04:00
Roman Donchenko
9c01a96b14 Set minimal zlib version to 1.2.3.
Rationale: 1.2.3 was a security update, and we should avoid using
versions with known security vulnerabilities.
2013-08-22 18:17:19 +04:00
Roman Donchenko
a87756e9b3 Bumped minimal CMake version to 2.8.7. 2013-08-08 12:03:30 +04:00
Roman Donchenko
a495bbb967 Added a new file for recording minimal dependency versions. 2013-08-07 13:56:09 +04:00