opencv/.github/workflows/PR-5.x.yaml


name: PR:5.x
on:
  pull_request:
    branches:
      - 5.x
jobs:
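  # Each job below calls a reusable workflow hosted in the opencv/ci-gha-workflow repository, pinned to its main branch.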
  Ubuntu2004-ARM64:
    uses: opencv/ci-gha-workflow/.github/workflows/OCV-PR-5.x-ARM64.yaml@main
  Ubuntu2004-ARM64-Debug:
    uses: opencv/ci-gha-workflow/.github/workflows/OCV-PR-5.x-ARM64-Debug.yaml@main
  Ubuntu2004-x64:
    uses: opencv/ci-gha-workflow/.github/workflows/OCV-PR-5.x-U20.yaml@main
  Ubuntu2004-x64-OpenVINO:
    uses: opencv/ci-gha-workflow/.github/workflows/OCV-PR-5.x-U20-OpenVINO.yaml@main
  Ubuntu2204-x64:
    uses: opencv/ci-gha-workflow/.github/workflows/OCV-PR-5.x-U22.yaml@main
  Ubuntu2404-x64:
    uses: opencv/ci-gha-workflow/.github/workflows/OCV-PR-5.x-U24.yaml@main
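  # The CUDA job below runs only when the pull request carries the 'category: dnn' or 'category: dnn (onnx)' label.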
  Ubuntu2004-x64-CUDA:
    if: ${{ contains(github.event.pull_request.labels.*.name, 'category: dnn') || contains(github.event.pull_request.labels.*.name, 'category: dnn (onnx)') }}
    uses: opencv/ci-gha-workflow/.github/workflows/OCV-PR-5.x-U20-Cuda.yaml@main
  Windows10-x64:
    uses: opencv/ci-gha-workflow/.github/workflows/OCV-PR-5.x-W10.yaml@main
  Windows10-ARM64:
    uses: opencv/ci-gha-workflow/.github/workflows/OCV-PR-5.x-W10-ARM64.yaml@main
  # Vulkan configuration is disabled because the DNN Vulkan backend does not support int/int64 yet
  # Details: https://github.com/opencv/opencv/issues/25110
  # Windows10-x64-Vulkan:
  #   uses: opencv/ci-gha-workflow/.github/workflows/OCV-PR-5.x-W10-Vulkan.yaml@main
  macOS-ARM64:
    uses: opencv/ci-gha-workflow/.github/workflows/OCV-PR-5.x-macOS-ARM64.yaml@main
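  # Also disabled; see the Vulkan note above (https://github.com/opencv/opencv/issues/25110)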
  # macOS-ARM64-Vulkan:
  #   uses: opencv/ci-gha-workflow/.github/workflows/OCV-PR-5.x-macOS-ARM64-Vulkan.yaml@main
  macOS-x64:
    uses: opencv/ci-gha-workflow/.github/workflows/OCV-PR-5.x-macOS-x86_64.yaml@main
  iOS:
    uses: opencv/ci-gha-workflow/.github/workflows/OCV-PR-5.x-iOS.yaml@main
  Android:
    uses: opencv/ci-gha-workflow/.github/workflows/OCV-PR-5.x-Android.yaml@main
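  # TIM-VX backend tests currently reuse the 4.x workflow file.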
  TIM-VX:
    uses: opencv/ci-gha-workflow/.github/workflows/OCV-timvx-backend-tests-4.x.yml@main
  docs:
    uses: opencv/ci-gha-workflow/.github/workflows/OCV-PR-5.x-docs.yaml@main
  Linux-RISC-V-Clang:
    uses: opencv/ci-gha-workflow/.github/workflows/OCV-PR-5.x-RISCV.yaml@main