dnn: better net exporter that works with netron #25582

The current net exporters `dump` and `dumpToFile` export the network structure (and its parameters) to a `.dot` file that works with `graphviz`. This is hard to use and not friendly to new users. What's worse, the produced picture does not look pretty.

This PR introduces a new exporter, `dumpToPbtxt`, and uses it by default when the environment variable `OPENCV_DNN_NETWORK_DUMP` is set. It mimics the string output of an ONNX model, modified with dnn-specific changes; see below for an example.
![image](https://github.com/opencv/opencv/assets/17219438/0644bed1-da71-4019-8466-88390698e4df)
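For comparison, the existing `.dot`-based export can be used roughly as follows (a minimal sketch; the model path is a placeholder, and the `dot` command in the comment is the usual graphviz workflow, not part of OpenCV):

```cpp
// Minimal sketch of the existing .dot export, for comparison with dumpToPbtxt.
// The model path is a placeholder; an input must be set before dumping.
#include <opencv2/dnn.hpp>
#include <vector>

int main() {
    cv::dnn::Net net = cv::dnn::readNet("/path/to/model.onnx");
    cv::Mat input(std::vector<int>{1, 3, 640, 480}, CV_32F);
    net.setInput(input);

    net.dumpToFile("net.dot");  // writes the graphviz .dot description
    // Render it separately, e.g.: dot -Tpng net.dot -o net.png
    return 0;
}
```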
## Usage
Call `cv::dnn::Net::dumpToPbtxt`:
```cpp
TEST(DumpNet, dumpToPbtxt) {
    std::string path = "/path/to/model.onnx";
    auto net = readNet(path);

    // An input must be set before the network can be dumped
    Mat input(std::vector<int>{1, 3, 640, 480}, CV_32F);
    net.setInput(input);

    net.dumpToPbtxt("yunet.pbtxt");
}
```
Alternatively, set the environment variable (`export OPENCV_DNN_NETWORK_DUMP=1`); the dump is then produced automatically when the network runs:
```cpp
TEST(DumpNet, env) {
    std::string path = "/path/to/model.onnx";
    auto net = readNet(path);

    Mat input(std::vector<int>{1, 3, 640, 480}, CV_32F);
    net.setInput(input);

    // With OPENCV_DNN_NETWORK_DUMP set, the dump is produced during inference
    net.forward();
}
```
---
Note:
- `pbtxt` is registered as one of the ONNX model suffixes in Netron, so you will see `module: ai.onnx` and similar fields in the model.
- We can get the string output of an ONNX model with the following script:
```python
import onnx

# Dump the model's protobuf text representation to a file
net = onnx.load("/path/to/model.onnx")
with open("/path/to/model.pbtxt", "w") as f:
    f.write(str(net))
```
### Pull Request Readiness Checklist
See details at https://github.com/opencv/opencv/wiki/How_to_contribute#making-a-good-pull-request
- [x] I agree to contribute to the project under Apache 2 License.
- [x] To the best of my knowledge, the proposed patch is not based on a code under GPL or another license that is incompatible with OpenCV
- [x] The PR is proposed to the proper branch
- [ ] There is a reference to the original bug report and related work
- [ ] There is accuracy test, performance test and test data in opencv_extra repository, if applicable
Patch to opencv_extra has the same branch name.
- [x] The feature is well documented and sample code can be built with the project CMake
---
Add per_tensor_quantize to int8 quantize
- Add per_tensor_quantize to the dnn int8 module.
- Change the API flag from perTensor to perChannel, and recognize the quantization type in the ONNX importer (see the sketch below).
- Change the default to hpp.
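For illustration, per-tensor quantization could then be requested roughly like this (a minimal sketch; the model path and calibration data are placeholders, and the `perChannel` argument on `Net::quantize` is assumed from the description above):

```cpp
// Minimal sketch: request per-tensor (rather than per-channel) quantization.
// Model path and calibration data are placeholders; the perChannel flag on
// Net::quantize is assumed from the change described above.
#include <opencv2/dnn.hpp>
#include <vector>

int main() {
    cv::dnn::Net net = cv::dnn::readNet("/path/to/model.onnx");

    // Calibration samples used to estimate the quantization parameters.
    std::vector<cv::Mat> calibData;
    calibData.push_back(cv::Mat(std::vector<int>{1, 3, 640, 480}, CV_32F, cv::Scalar(0)));

    // perChannel = false selects per-tensor quantization.
    cv::dnn::Net int8Net = net.quantize(calibData, CV_32F, CV_32F, /*perChannel=*/false);
    int8Net.setInput(calibData[0]);
    int8Net.forward();
    return 0;
}
```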