Mirror of https://github.com/opencv/opencv.git (synced 2025-06-21 02:20:50 +08:00)
# Feature: Add OpenVINO NPU support #27363

## Why

- OpenVINO now supports inference on the integrated NPU devices in Intel's Core Ultra series processors.
- The NPU is sometimes as fast as the GPU, but should use considerably less power.

## How

- The NPU plugin is now reported as "NPU" by OpenVINO's `ov::Core::get_available_devices()`.
- Removed the guards and checks that excluded NPU from the available targets for the Inference Engine backend (see the availability-check sketch after the checklist below).

## Test example

### Pre-requisites

- Intel [Core Ultra series processor](https://www.intel.com/content/www/us/en/products/details/processors/core-ultra/edge.html#tab-blade-1-0)
- [Intel NPU driver](https://github.com/intel/linux-npu-driver/releases)
- OpenVINO 2023.3.0+ (tested on 2025.1.0)

### Example

```cpp
#include <opencv2/dnn.hpp>
#include <iostream>

int main()
{
    // Load an OpenVINO IR model (YOLOv8s exported to .xml/.bin).
    cv::dnn::Net net = cv::dnn::readNet("../yolov8s-openvino/yolov8s.xml",
                                        "../yolov8s-openvino/yolov8s.bin");
    cv::Size net_input_shape = cv::Size(640, 480);

    std::cout << "Setting backend to DNN_BACKEND_INFERENCE_ENGINE and target to DNN_TARGET_NPU" << std::endl;
    net.setPreferableBackend(cv::dnn::DNN_BACKEND_INFERENCE_ENGINE);
    net.setPreferableTarget(cv::dnn::DNN_TARGET_NPU);

    // Random input image, just to exercise the forward pass.
    cv::Mat image(net_input_shape, CV_8UC3);
    cv::randu(image, cv::Scalar(0, 0, 0), cv::Scalar(255, 255, 255));

    cv::Mat blob = cv::dnn::blobFromImage(image, 1, net_input_shape,
                                          cv::Scalar(0, 0, 0), true, false, CV_32F);
    net.setInput(blob);

    std::cout << "Running forward" << std::endl;
    cv::Mat result = net.forward();

    std::cout << "Output shape: " << result.size << std::endl;  // Output shape: 1 x 84 x 6300
    return 0;
}
```

A sketch of decoding this raw output follows the checklist below.

Model files [here](https://limewire.com/d/bPgiA#BhUeSTBnMc).

Docker image used to build OpenCV: [ghcr.io/mro47/opencv-builder](https://github.com/MRo47/opencv-builder/blob/main/Dockerfile)

Closes #26240

### Pull Request Readiness Checklist

See details at https://github.com/opencv/opencv/wiki/How_to_contribute#making-a-good-pull-request

- [x] I agree to contribute to the project under Apache 2 License.
- [x] To the best of my knowledge, the proposed patch is not based on code under GPL or another license that is incompatible with OpenCV
- [x] The PR is proposed to the proper branch
- [x] There is a reference to the original bug report and related work
- [ ] There is an accuracy test, a performance test and test data in the opencv_extra repository, if applicable. The patch to opencv_extra has the same branch name.
- [ ] The feature is well documented and sample code can be built with the project CMake
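As a sanity check for the change above, the following minimal sketch enumerates OpenVINO's devices and queries OpenCV's available DNN targets. It assumes OpenVINO's C++ headers are on the include path and the NPU driver is installed; `ov::Core::get_available_devices()` and `cv::dnn::getAvailableTargets()` are public APIs of OpenVINO and OpenCV respectively.

```cpp
#include <openvino/openvino.hpp>
#include <opencv2/dnn.hpp>
#include <iostream>

int main()
{
    // OpenVINO side: with the NPU driver installed, the device list
    // should include "NPU" alongside e.g. "CPU" and "GPU".
    ov::Core core;
    for (const std::string& device : core.get_available_devices())
        std::cout << "OpenVINO device: " << device << std::endl;

    // OpenCV side: with this patch, DNN_TARGET_NPU should be reported
    // among the Inference Engine backend's targets.
    for (auto target : cv::dnn::getAvailableTargets(cv::dnn::DNN_BACKEND_INFERENCE_ENGINE))
        if (target == cv::dnn::DNN_TARGET_NPU)
            std::cout << "DNN_TARGET_NPU is available" << std::endl;
    return 0;
}
```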
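The `1 x 84 x 6300` output is the standard YOLOv8 layout: 4 box coordinates plus 80 class scores per anchor position, with 6300 = 80·60 + 40·30 + 20·15 positions for a 640x480 input at strides 8/16/32. Below is a minimal decoding sketch assuming that layout; the `decode` helper and the thresholds are illustrative, not part of this patch.

```cpp
#include <opencv2/dnn.hpp>
#include <iostream>
#include <vector>

// `result` is the 1 x 84 x 6300 blob returned by net.forward():
// rows 0-3 hold cx, cy, w, h in input pixels, rows 4-83 the class scores.
void decode(const cv::Mat& result, float conf_thr = 0.25f, float nms_thr = 0.45f)
{
    const int num_attrs   = result.size[1];  // 84
    const int num_anchors = result.size[2];  // 6300

    // View the 3D blob as a 2D 84 x 6300 matrix (the data is contiguous).
    cv::Mat preds(num_attrs, num_anchors, CV_32F, (void*)result.ptr<float>());

    std::vector<cv::Rect> boxes;
    std::vector<float> scores;
    std::vector<int> class_ids;
    for (int i = 0; i < num_anchors; ++i)
    {
        // Best class score for this anchor position.
        cv::Mat cls = preds(cv::Range(4, num_attrs), cv::Range(i, i + 1));
        cv::Point max_loc;
        double max_score = 0;
        cv::minMaxLoc(cls, nullptr, &max_score, nullptr, &max_loc);
        if (max_score < conf_thr)
            continue;

        const float cx = preds.at<float>(0, i), cy = preds.at<float>(1, i);
        const float w  = preds.at<float>(2, i), h  = preds.at<float>(3, i);
        boxes.emplace_back(int(cx - w / 2), int(cy - h / 2), int(w), int(h));
        scores.push_back((float)max_score);
        class_ids.push_back(max_loc.y);  // row index within the class-score block
    }

    // Standard non-maximum suppression over the surviving candidates.
    std::vector<int> keep;
    cv::dnn::NMSBoxes(boxes, scores, conf_thr, nms_thr, keep);
    for (int idx : keep)
        std::cout << "class " << class_ids[idx] << " score " << scores[idx]
                  << " box " << boxes[idx] << std::endl;
}
```

Note that the decoded coordinates are relative to the 640x480 network input; scale them back to the original image size before drawing.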
Directory listing (`modules/dnn/src`):

Subdirectories:

- caffe
- cuda
- cuda4dnn
- darknet
- int8layers
- layers
- ocl4dnn
- onnx
- opencl
- tensorflow
- tflite
- torch
- vkcom
- webnn

Files:

- backend.cpp
- backend.hpp
- debug_utils.cpp
- dnn_common.hpp
- dnn_params.cpp
- dnn_read.cpp
- dnn_utils.cpp
- dnn.cpp
- factory.hpp
- graph_simplifier.cpp
- graph_simplifier.hpp
- halide_scheduler.cpp
- halide_scheduler.hpp
- ie_ngraph.cpp
- ie_ngraph.hpp
- init.cpp
- layer_factory.cpp
- layer_internals.hpp
- layer.cpp
- legacy_backend.cpp
- legacy_backend.hpp
- math_utils.hpp
- model.cpp
- net_cann.cpp
- net_impl_backend.cpp
- net_impl_fuse.cpp
- net_impl.cpp
- net_impl.hpp
- net_openvino.cpp
- net_quantization.cpp
- net.cpp
- nms.cpp
- nms.inl.hpp
- op_cann.cpp
- op_cann.hpp
- op_cuda.cpp
- op_cuda.hpp
- op_halide.cpp
- op_halide.hpp
- op_inf_engine.cpp
- op_inf_engine.hpp
- op_timvx.cpp
- op_timvx.hpp
- op_vkcom.cpp
- op_vkcom.hpp
- op_webnn.cpp
- op_webnn.hpp
- plugin_api.hpp
- plugin_wrapper.impl.hpp
- precomp.hpp
- registry.cpp