opencv/modules/dnn/test/npy_blob.cpp
alexlyulkov 1d1faaabef
Merge pull request #24411 from alexlyulkov:al/dnn-type-inference
Added int32, int64 support and type inference to dnn #24411

**Added type inference to dnn, similar to the existing shape inference, and added int32 and int64 support.**

- Added a getTypes method for layers that computes the types of the layer's outputs and internals from its input types (similar to getMemoryShapes). By default, output and internal types equal the type of input[0]; a minimal sketch of this default rule follows this list.
- Added a type inference pipeline similar to the shape inference pipeline. The LayersShapes struct (used in the shape inference pipeline) now contains both shapes and types.
- All layer output blobs are now allocated using the types computed by the type inference.
- Inputs and constants with int32 and int64 types are no longer automatically converted to float32.
- Added int32 and int64 support for all layers that use indexing and for all layers required by the tests.
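A minimal sketch of the default type rule described above; the function name and signature here are hypothetical illustrations, not the actual getTypes interface added by this PR (which lives in the Layer API):

```cpp
#include <opencv2/core.hpp>
#include <vector>

// Hypothetical helper illustrating only the default rule; not the PR's getTypes signature.
// Types are plain OpenCV depths such as CV_32F, CV_32S, CV_64S.
static void inferTypesDefault(const std::vector<int>& inputTypes,
                              int requiredOutputs, int requiredInternals,
                              std::vector<int>& outputTypes,
                              std::vector<int>& internalTypes)
{
    CV_Assert(!inputTypes.empty());
    // Default behaviour: every output and internal blob inherits the type of input[0].
    outputTypes.assign(requiredOutputs, inputTypes[0]);
    internalTypes.assign(requiredInternals, inputTypes[0]);
}
```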

Added int32 and int64 support for CUDA:
- Added host<->device data transfer for int32 and int64
- Added int32 and int64 support for several layers (mostly just extending the existing CUDA C++ templates); see the dispatch sketch after this list
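To illustrate the "templates" point, here is a generic, purely illustrative sketch of how a templated element-wise routine is extended to the new integer depths by adding dispatch cases; it is not OpenCV's actual CUDA backend code, and CV_64S is the 64-bit integer depth used on this branch:

```cpp
#include <opencv2/core.hpp>
#include <cstdint>

// Illustrative only: one templated body serves every element type, so int support
// mostly means adding CV_32S/CV_64S cases to the depth dispatch.
template <typename T>
static void elementwiseCopySketch(const T* src, T* dst, size_t n)
{
    for (size_t i = 0; i < n; ++i)
        dst[i] = src[i];  // placeholder for the real per-element computation
}

static void runSketch(const cv::Mat& src, cv::Mat& dst)
{
    CV_Assert(src.isContinuous() && dst.isContinuous() && src.total() == dst.total());
    const size_t n = src.total();
    switch (src.depth())
    {
    case CV_32F: elementwiseCopySketch(src.ptr<float>(),   dst.ptr<float>(),   n); break;
    case CV_32S: elementwiseCopySketch(src.ptr<int32_t>(), dst.ptr<int32_t>(), n); break;  // added int32 case
    case CV_64S: elementwiseCopySketch(src.ptr<int64_t>(), dst.ptr<int64_t>(), n); break;  // added int64 case
    default:     CV_Error(cv::Error::BadDepth, "Unsupported depth in this sketch");
    }
}
```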

All accuracy tests pass on CPU, OCL, OCL_FP16, CUDA, and CUDA_FP16 (except the RAFT model).

**CURRENT PROBLEMS**:
- The ONNX parser always converts int64 constants and layer attributes to int32, so some models with int64 constants don't work (e.g. RAFT). The solution is to disable the int64->int32 conversion and fix attribute reading in many of the ONNX layer parsers (https://github.com/opencv/opencv/issues/25102)
- Type inference and int support were not added to the Vulkan backend, so it does not work at all right now.
- Some layers don't support int yet, so some untested models may not work.

**CURRENT WORKAROUNDS**:
- CPU arg_layer indices are computed in int32 and then converted to int64 (the master branch has the same workaround with an int32->float conversion); see the conversion sketch after this list
- CPU and OCL pooling_layer indices are computed in float and then converted to int64
- CPU gather_layer indices are implemented in int32, so int64 indices are converted to int32 (the master branch has the same workaround with a float->int32 conversion)
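A minimal sketch of the index-widening workaround described above, assuming Mat::convertTo accepts the CV_64S depth on this branch; the function name is made up:

```cpp
#include <opencv2/core.hpp>

// Illustrative only: the layer computes indices in int32 and widens them at the end.
// Assumes Mat::convertTo handles CV_64S on this branch; the name is hypothetical.
static cv::Mat widenIndicesToInt64(const cv::Mat& indices32)
{
    CV_Assert(indices32.depth() == CV_32S);
    cv::Mat indices64;
    indices32.convertTo(indices64, CV_64S);  // int32 -> int64, values preserved exactly
    return indices64;
}
```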

**DISABLED TESTS**:
- RAFT model

**REMOVED TESTS**:
- Greater_input_dtype_int64 (it does not follow the ONNX spec: Greater requires both inputs to have the same type, and the whole test just compares a float tensor with an int constant)

**TODO IN NEXT PULL REQUESTS**:
- Add int64 support for ONNX parser
- Add int support for more layers
- Add int support for OCL (currently int layers just run on CPU)
- Add int tests
- Add int support for other backends
2024-03-01 17:07:38 +03:00


// This file is part of OpenCV project.
// It is subject to the license terms in the LICENSE file found in the top-level directory
// of this distribution and at http://opencv.org/license.html.
//
// Copyright (C) 2017, Intel Corporation, all rights reserved.
// Third party copyrights are property of their respective owners.
#include "test_precomp.hpp"
#include "npy_blob.hpp"
namespace cv
{

// Parses the 'descr' field of the NPY header, e.g. "<f4", "<i4" or "<i8".
static std::string getType(const std::string& header)
{
    std::string field = "'descr':";
    int idx = header.find(field);
    CV_Assert(idx != -1);

    int from = header.find('\'', idx + field.size()) + 1;
    int to = header.find('\'', from);
    return header.substr(from, to - from);
}

// Parses the 'fortran_order' field of the NPY header ("True" or "False").
static std::string getFortranOrder(const std::string& header)
{
    std::string field = "'fortran_order':";
    int idx = header.find(field);
    CV_Assert(idx != -1);

    int from = header.find_last_of(' ', idx + field.size()) + 1;
    int to = header.find(',', from);
    return header.substr(from, to - from);
}

// Parses the 'shape' tuple of the NPY header into a vector of dimensions.
static std::vector<int> getShape(const std::string& header)
{
    std::string field = "'shape':";
    int idx = header.find(field);
    CV_Assert(idx != -1);

    int from = header.find('(', idx + field.size()) + 1;
    int to = header.find(')', from);

    std::string shapeStr = header.substr(from, to - from);
    if (shapeStr.empty())
        return std::vector<int>(1, 1);  // Scalar (0-d) array: represent as a single-element blob.

    // Remove all commas.
    shapeStr.erase(std::remove(shapeStr.begin(), shapeStr.end(), ','),
                   shapeStr.end());

    std::istringstream ss(shapeStr);
    int value;
    std::vector<int> shape;
    while (ss >> value)
    {
        shape.push_back(value);
    }
    return shape;
}

Mat blobFromNPY(const std::string& path)
{
    std::ifstream ifs(path.c_str(), std::ios::binary);
    CV_Assert(ifs.is_open());

    std::string magic(6, '*');
    ifs.read(&magic[0], magic.size());
    CV_Assert(magic == "\x93NUMPY");

    ifs.ignore(1);  // Skip major version byte.
    ifs.ignore(1);  // Skip minor version byte.

    // NPY v1.x stores the header length as a 2-byte little-endian integer.
    unsigned short headerSize;
    ifs.read((char*)&headerSize, sizeof(headerSize));

    std::string header(headerSize, '*');
    ifs.read(&header[0], header.size());

    // Extract data type.
    int matType;
    if (getType(header) == "<f4")
        matType = CV_32F;  // little-endian float32
    else if (getType(header) == "<i4")
        matType = CV_32S;  // little-endian int32
    else if (getType(header) == "<i8")
        matType = CV_64S;  // little-endian int64
    else
        CV_Error(Error::BadDepth, "Unsupported numpy type");
    CV_Assert(getFortranOrder(header) == "False");

    std::vector<int> shape = getShape(header);

    Mat blob(shape, matType);
    ifs.read((char*)blob.data, blob.total() * blob.elemSize());
    CV_Assert((size_t)ifs.gcount() == blob.total() * blob.elemSize());

    return blob;
}

} // namespace cv
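For context, a hypothetical use of this helper in a dnn test could look like the following; the file name and the gtest-style check are illustrative assumptions, not code from this file:

// Hypothetical usage of blobFromNPY (file name is made up).
#include "test_precomp.hpp"
#include "npy_blob.hpp"

TEST(NPYBlob, ReadInt64)
{
    cv::Mat blob = cv::blobFromNPY("indices_int64.npy");
    // With this PR, "<i8" arrays load as CV_64S instead of being converted to float.
    EXPECT_EQ(blob.depth(), CV_64S);
}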