Updated Android tutorial for MobileNet-SSD detector

- Refreshed images, links, and OpenCV API usage.
- Added more details to the Android MobileNet sample.
- Moved the tutorial to a new location and re-linked related tutorials.
@@ -1,107 +1,3 @@
 # How to run deep networks on Android device {#tutorial_dnn_android}
 
-@tableofcontents
-
-@prev_tutorial{tutorial_dnn_openvino}
-@next_tutorial{tutorial_dnn_yolo}
-
-| | |
-| -: | :- |
-| Original author | Dmitry Kurtaev |
-| Compatibility | OpenCV >= 3.3 |
-
-## Introduction
-In this tutorial you'll know how to run deep learning networks on Android device
-using OpenCV deep learning module.
-
-Tutorial was written for the following versions of corresponding software:
-- Android Studio 2.3.3
-- OpenCV 3.3.0+
-
-## Requirements
-
-- Download and install Android Studio from https://developer.android.com/studio.
-
-- Get the latest pre-built OpenCV for Android release from https://github.com/opencv/opencv/releases and unpack it (for example, `opencv-4.X.Y-android-sdk.zip`).
-
-- Download MobileNet object detection model from https://github.com/chuanqi305/MobileNet-SSD. We need a configuration file `MobileNetSSD_deploy.prototxt` and weights `MobileNetSSD_deploy.caffemodel`.
-
-## Create an empty Android Studio project
-- Open Android Studio. Start a new project. Let's call it `opencv_mobilenet`.
-
-
-- Keep default target settings.
-
-
-- Use "Empty Activity" template. Name activity as `MainActivity` with a
-corresponding layout `activity_main`.
-
-
-
-
-- Wait until a project was created. Go to `Run->Edit Configurations`.
-Choose `USB Device` as target device for runs.
-
-
-Plug in your device and run the project. It should be installed and launched
-successfully before we'll go next.
-
-@note Read @ref tutorial_android_dev_intro in case of problems.
-
-
-
-## Add OpenCV dependency
-
-- Go to `File->New->Import module` and provide a path to `unpacked_OpenCV_package/sdk/java`. The name of module detects automatically.
-Disable all features that Android Studio will suggest you on the next window.
-
-
-
-- Open two files:
-
-    1. `AndroidStudioProjects/opencv_mobilenet/app/build.gradle`
-
-    2. `AndroidStudioProjects/opencv_mobilenet/openCVLibrary330/build.gradle`
-
-    Copy both `compileSdkVersion` and `buildToolsVersion` from the first file to
-    the second one.
-
-    `compileSdkVersion 14` -> `compileSdkVersion 26`
-
-    `buildToolsVersion "25.0.0"` -> `buildToolsVersion "26.0.1"`
-
-- Make the project. There is no errors should be at this point.
-
-- Go to `File->Project Structure`. Add OpenCV module dependency.
-
-
-
-- Install once an appropriate OpenCV manager from `unpacked_OpenCV_package/apk`
-to target device.
-@code
-adb install OpenCV_3.3.0_Manager_3.30_armeabi-v7a.apk
-@endcode
-
-- Congratulations! We're ready now to make a sample using OpenCV.
-
-## Make a sample
-Our sample will takes pictures from a camera, forwards it into a deep network and
-receives a set of rectangles, class identifiers and confidence values in `[0, 1]`
-range.
-
-- First of all, we need to add a necessary widget which displays processed
-frames. Modify `app/src/main/res/layout/activity_main.xml`:
-@include android/mobilenet-objdetect/res/layout/activity_main.xml
-
-- Put downloaded `MobileNetSSD_deploy.prototxt` and `MobileNetSSD_deploy.caffemodel`
-into `app/build/intermediates/assets/debug` folder.
-
-- Modify `/app/src/main/AndroidManifest.xml` to enable full-screen mode, set up
-a correct screen orientation and allow to use a camera.
-@include android/mobilenet-objdetect/gradle/AndroidManifest.xml
-
-- Replace content of `app/src/main/java/org/opencv/samples/opencv_mobilenet/MainActivity.java`:
-@include android/mobilenet-objdetect/src/org/opencv/samples/opencv_mobilenet/MainActivity.java
-
-- Launch an application and make a fun!
-
-
+The page was moved to @ref tutorial_android_dnn_intro
@@ -2,7 +2,7 @@ OpenCV usage with OpenVINO {#tutorial_dnn_openvino}
 =====================
 
 @prev_tutorial{tutorial_dnn_halide_scheduling}
-@next_tutorial{tutorial_dnn_android}
+@next_tutorial{tutorial_dnn_yolo}
 
 | | |
 | -: | :- |
@@ -3,7 +3,7 @@ YOLO DNNs {#tutorial_dnn_yolo}
 
 @tableofcontents
 
-@prev_tutorial{tutorial_dnn_android}
+@prev_tutorial{tutorial_dnn_openvino}
 @next_tutorial{tutorial_dnn_javascript}
 
 | | |
@@ -5,7 +5,6 @@ Deep Neural Networks (dnn module) {#tutorial_table_of_content_dnn}
 - @subpage tutorial_dnn_halide
 - @subpage tutorial_dnn_halide_scheduling
 - @subpage tutorial_dnn_openvino
-- @subpage tutorial_dnn_android
 - @subpage tutorial_dnn_yolo
 - @subpage tutorial_dnn_javascript
 - @subpage tutorial_dnn_custom_layers
@@ -0,0 +1,85 @@
+# How to run deep networks on Android device {#tutorial_android_dnn_intro}
+
+@tableofcontents
+
+@prev_tutorial{tutorial_dev_with_OCV_on_Android}
+@next_tutorial{tutorial_android_ocl_intro}
+
+@see @ref tutorial_table_of_content_dnn
+
+| | |
+| -: | :- |
+| Original author | Dmitry Kurtaev |
+| Compatibility | OpenCV >= 4.9 |
+
+## Introduction
+
+In this tutorial you'll learn how to run deep learning networks on an Android device
+using the OpenCV deep learning module.
+The tutorial was written for Android Studio 2022.2.1.
+
+## Requirements
+
+- Download and install Android Studio from https://developer.android.com/studio.
+
+- Get the latest pre-built OpenCV for Android release from https://github.com/opencv/opencv/releases
+  and unpack it (for example, `opencv-4.X.Y-android-sdk.zip`).
+
+- Download the MobileNet object detection model from https://github.com/chuanqi305/MobileNet-SSD.
+  The configuration file `MobileNetSSD_deploy.prototxt` and the model weights `MobileNetSSD_deploy.caffemodel`
+  are required.
+
+## Create an empty Android Studio project and add OpenCV dependency
+
+Use the @ref tutorial_dev_with_OCV_on_Android tutorial to initialize your project and add OpenCV.
+
+## Make an app
+
+Our sample takes pictures from a camera, forwards them into a deep network and
+receives a set of rectangles, class identifiers and confidence values in the range [0, 1].
+
+- First of all, we need to add a widget which displays processed
+  frames. Modify `app/src/main/res/layout/activity_main.xml`:
+@include android/mobilenet-objdetect/res/layout/activity_main.xml
+
+- Modify `/app/src/main/AndroidManifest.xml` to enable full-screen mode, set up
+  a correct screen orientation and allow camera use:
+@code{.xml}
+<?xml version="1.0" encoding="utf-8"?>
+<manifest xmlns:android="http://schemas.android.com/apk/res/android">
+
+    <application
+        android:label="@string/app_name">
+@endcode
+@snippet android/mobilenet-objdetect/gradle/AndroidManifest.xml mobilenet_tutorial
+
+- Replace the content of `app/src/main/java/com/example/myapplication/MainActivity.java` and set a custom package name if necessary:
+
+@snippet android/mobilenet-objdetect/src/org/opencv/samples/opencv_mobilenet/MainActivity.java mobilenet_tutorial_package
+@snippet android/mobilenet-objdetect/src/org/opencv/samples/opencv_mobilenet/MainActivity.java mobilenet_tutorial
+
+- Put the downloaded `deploy.prototxt` and `mobilenet_iter_73000.caffemodel`
+  into the `app/src/main/res/raw` folder. The OpenCV DNN module is mainly designed to load ML and DNN models
+  from file. Modern Android does not allow it without extra permissions, but provides a Java API to load
+  bytes from resources. The sample therefore uses an alternative DNN API that initializes a model from an in-memory
+  buffer rather than a file. The following function reads the model file from resources and converts it to a
+  `MatOfByte` (an analog of `std::vector<char>` in the C++ world) object suitable for the OpenCV Java API:
+
+@snippet android/mobilenet-objdetect/src/org/opencv/samples/opencv_mobilenet/MainActivity.java mobilenet_tutorial_resource
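The heart of such a resource loader is plain Java I/O: drain the whole `InputStream` into a `byte[]` before wrapping it in a `MatOfByte`. Below is a minimal sketch of just that step, with the Android `Resources` and OpenCV parts stripped out so it runs anywhere; the class and method names are hypothetical, not part of the sample.

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;

// Hypothetical helper: drains an InputStream into a byte[] in fixed-size
// chunks — the same pattern a resource loader uses before wrapping the
// bytes for the OpenCV Java API.
class RawStreamReader {
    static byte[] readAll(InputStream is) throws IOException {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        byte[] chunk = new byte[4096];
        int n;
        while ((n = is.read(chunk)) != -1) {
            out.write(chunk, 0, n); // accumulate until end of stream
        }
        return out.toByteArray();
    }
}
```

On Android the stream would come from `getResources().openRawResource(id)`; here any `InputStream` works.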
+
+  The network initialization is then done with the following lines:
+
+@snippet android/mobilenet-objdetect/src/org/opencv/samples/opencv_mobilenet/MainActivity.java init_model_from_memory
+
+  See also the [Android documentation on resources](https://developer.android.com/guide/topics/resources/providing-resources.html).
+
+- Take a look at how the DNN model input is prepared and how the inference result is interpreted:
+
+@snippet android/mobilenet-objdetect/src/org/opencv/samples/opencv_mobilenet/MainActivity.java mobilenet_handle_frame
+
+  `Dnn.blobFromImage` converts the camera frame to a neural network input tensor. Resizing and statistical
+  normalization are applied. Each line of the network output tensor contains information on one detected
+  object in the following order: confidence in the range [0, 1], class id, and the left, top, right, bottom box
+  coordinates. All coordinates are in the range [0, 1] and should be scaled to the image size before rendering.
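Numerically, the input preprocessing is just `(pixel - mean) * scalefactor` per channel after resizing. A standalone sketch of that arithmetic follows; the constant values mirror the sample's `IN_SCALE_FACTOR` and `MEAN_VAL` and should be treated as an assumption taken from the sample code.

```java
// Sketch of the per-pixel normalization Dnn.blobFromImage performs here:
// value = (pixel - MEAN_VAL) * IN_SCALE_FACTOR.
// Constants are assumed from the sample; 0.007843 is approximately 2/255.
class BlobNormalization {
    static final double IN_SCALE_FACTOR = 0.007843;
    static final double MEAN_VAL = 127.5;

    static double normalize(double pixel) {
        return (pixel - MEAN_VAL) * IN_SCALE_FACTOR;
    }
}
```

This maps 8-bit pixel values from [0, 255] to roughly [-1, 1], the input range MobileNet-SSD expects.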
+
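Since the reported box coordinates are normalized, scaling them to pixels is one multiplication per coordinate. A tiny sketch of that step (the helper name `scaleBox` is hypothetical; the sample does this inline when drawing rectangles):

```java
// Hypothetical helper: map a normalized [0, 1] detection box to pixel
// coordinates of a frame before drawing it.
class BoxScaling {
    static int[] scaleBox(double left, double top, double right, double bottom,
                          int frameWidth, int frameHeight) {
        return new int[] {
                (int) (left * frameWidth),    // x of top-left corner
                (int) (top * frameHeight),    // y of top-left corner
                (int) (right * frameWidth),   // x of bottom-right corner
                (int) (bottom * frameHeight)  // y of bottom-right corner
        };
    }
}
```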
+- Launch the application and have fun!
+
+
@@ -1,7 +1,7 @@
 Use OpenCL in Android camera preview based CV application {#tutorial_android_ocl_intro}
 =====================================
 
-@prev_tutorial{tutorial_dev_with_OCV_on_Android}
+@prev_tutorial{tutorial_android_dnn_intro}
 @next_tutorial{tutorial_macos_install}
 
 | | |
@@ -2,7 +2,7 @@ Android Development with OpenCV {#tutorial_dev_with_OCV_on_Android}
 ===============================
 
 @prev_tutorial{tutorial_O4A_SDK}
-@next_tutorial{tutorial_android_ocl_intro}
+@next_tutorial{tutorial_android_dnn_intro}
 
 | | |
 | -: | :- |
@@ -23,6 +23,7 @@ Introduction to OpenCV {#tutorial_table_of_content_introduction}
 - @subpage tutorial_android_dev_intro
 - @subpage tutorial_O4A_SDK
 - @subpage tutorial_dev_with_OCV_on_Android
+- @subpage tutorial_android_dnn_intro
 - @subpage tutorial_android_ocl_intro
 
 ##### Other platforms
@@ -5,7 +5,7 @@
     <application
         android:label="@string/app_name"
         android:icon="@drawable/icon">
+        <!-- //! [mobilenet_tutorial] -->
         <activity
             android:exported="true"
             android:name=".MainActivity"
@@ -25,3 +25,4 @@
     <uses-feature android:name="android.hardware.camera.front.autofocus" android:required="false"/>
 
 </manifest>
+<!-- //! [mobilenet_tutorial] -->
@@ -1,5 +1,11 @@
 package org.opencv.samples.opencv_mobilenet;
+
+/*
+// snippet was added for the Android tutorial
+//! [mobilenet_tutorial_package]
+package com.example.myapplication;
+//! [mobilenet_tutorial_package]
+*/
+
+//! [mobilenet_tutorial]
 import android.content.Context;
 import android.content.res.AssetManager;
 import android.os.Bundle;
@@ -47,6 +53,7 @@ public class MainActivity extends CameraActivity implements CvCameraViewListener
             return;
         }
 
+        //! [init_model_from_memory]
         mModelBuffer = loadFileFromResource(R.raw.mobilenet_iter_73000);
         mConfigBuffer = loadFileFromResource(R.raw.deploy);
         if (mModelBuffer == null || mConfigBuffer == null) {
@@ -54,9 +61,9 @@ public class MainActivity extends CameraActivity implements CvCameraViewListener
         } else
             Log.i(TAG, "Model files loaded successfully");
 
 
         net = Dnn.readNet("caffe", mModelBuffer, mConfigBuffer);
         Log.i(TAG, "Network loaded successfully");
+        //! [init_model_from_memory]
 
         setContentView(R.layout.activity_main);
@@ -106,6 +113,7 @@ public class MainActivity extends CameraActivity implements CvCameraViewListener
         Imgproc.cvtColor(frame, frame, Imgproc.COLOR_RGBA2RGB);
 
         // Forward image through network.
+        //! [mobilenet_handle_frame]
         Mat blob = Dnn.blobFromImage(frame, IN_SCALE_FACTOR,
                 new Size(IN_WIDTH, IN_HEIGHT),
                 new Scalar(MEAN_VAL, MEAN_VAL, MEAN_VAL), /*swapRB*/false, /*crop*/false);
@@ -143,11 +151,14 @@ public class MainActivity extends CameraActivity implements CvCameraViewListener
                         Imgproc.FONT_HERSHEY_SIMPLEX, 0.5, new Scalar(0, 0, 0));
             }
         }
+        //! [mobilenet_handle_frame]
 
         return frame;
     }
 
     public void onCameraViewStopped() {}
 
+    //! [mobilenet_tutorial_resource]
     private MatOfByte loadFileFromResource(int id) {
         byte[] buffer;
         try {
@@ -167,6 +178,7 @@ public class MainActivity extends CameraActivity implements CvCameraViewListener
 
         return new MatOfByte(buffer);
     }
+    //! [mobilenet_tutorial_resource]
 
     private static final String TAG = "OpenCV-MobileNet";
     private static final String[] classNames = {"background",
@@ -181,3 +193,4 @@ public class MainActivity extends CameraActivity implements CvCameraViewListener
     private Net net;
     private CameraBridgeViewBase mOpenCvCameraView;
 }
+//! [mobilenet_tutorial]