Plugins API
This guide consists of multiple sections: first we discuss Wikitude SDK Plugins in general, then we cover platform specifics and how to register a plugin with the Wikitude SDK, and finally we go through each of the sample plugins included with the Wikitude Example Applications.
About Wikitude SDK Plugins
A plugin is a class, or rather a set of classes, written in C++ that allows extending the functionality of the Wikitude SDK. While there is a Plugin base class that offers some of the main functionality, a plugin itself can have multiple optional modules that allow for more complex concepts to be implemented. The following table gives a very brief overview of the distribution of responsibilities among the plugin related classes.
class | purpose |
---|---|
Plugin | Main class of a plugin implementation. Derive from this class to create your plugin. It handles the application lifecycle and main plugin functionality, provides access to various parameters of the Wikitude SDK as well as to the camera frame and the recognized targets, and further owns and manages all the optional plugin modules. |
ImageTrackingPluginModule | Optional module to allow for custom image tracking implementations. Derive from this class to implement your own image tracking algorithm to work in conjunction with the Wikitude SDK algorithms. |
InstantTrackingPluginModule | Optional module to allow for custom instant tracking implementations. Derive from this class to implement your own instant tracking algorithm to work in conjunction with the Wikitude SDK algorithms. |
ObjectTrackingPluginModule | Optional module to allow for custom object tracking implementations. Derive from this class to implement your own object tracking algorithm to work in conjunction with the Wikitude SDK algorithms. |
CameraFrameInputPluginModule | Optional module to allow for frame data to be input into the Wikitude SDK. Derive from this class to implement your own camera frame acquisition. The supplied frame data can be processed and rendered by the Wikitude SDK. |
DeviceIMUInputPluginModule | Optional module to allow for sensor data to be input into the Wikitude SDK. Derive from this class to implement your own sensor data acquisition. The supplied sensor data will be used by the Wikitude SDK for its tracking algorithms where applicable. |
OpenGLESRenderingPluginModule | Optional module to allow for custom OpenGL ES rendering. Available on iOS and Android. |
MetalRenderingPluginModule | Optional module to allow for custom Metal rendering. Only available on iOS. |
DirectXRenderingPluginModule | Optional module to allow for custom DirectX rendering. Only available on Windows. |
Each of the optional modules can be registered by calling the corresponding function of the Plugin class.
An important thing to remember when working with plugins is that they need to have a unique identifier. If an attempt is made to register a plugin with an identifier that is already known to the Wikitude SDK, the register method call will return false.
Plugin Base Class
class Plugin {
public:
Plugin(std::string identifier_);
virtual ~Plugin();
virtual void initialize(const std::string& temporaryDirectory_, PluginParameterCollection& pluginParameterCollection_);
virtual void pause();
virtual void resume(unsigned int pausedTime_);
virtual void destroy();
virtual void cameraFrameAvailable(common_code::ManagedCameraFrame& managedCameraFrame_) = 0;
virtual void update(const RecognizedTargetsBucket& recognizedTargetsBucket_) = 0;
virtual const std::string& getIdentifier() const;
virtual void setEnabled(bool enabled_);
virtual bool isEnabled() const;
virtual bool canPerformTrackingOperationsAlongOtherPlugins();
virtual bool canUpdateMultipleTrackingInterfacesSimultaneously();
ImageTrackingPluginModule* getImageTrackingPluginModule() const;
InstantTrackingPluginModule* getInstantTrackingPluginModule() const;
ObjectTrackingPluginModule* getObjectTrackingPluginModule() const;
CameraFrameInputPluginModule* getCameraFrameInputPluginModule() const;
DeviceIMUInputPluginModule* getDeviceIMUInpputPluginModule() const;
OpenGLESRenderingPluginModule* getOpenGLESRenderingPluginModule() const;
MetalRenderingPluginModule* getMetalRenderingPluginModule() const;
DirectXRenderingPluginModule* getDirectXRenderingPluginModule() const;
protected:
void setImageTrackingPluginModule(std::unique_ptr<ImageTrackingPluginModule> imageTrackingPluginModule_);
void setObjectTrackingPluginModule(std::unique_ptr<ObjectTrackingPluginModule> objectTrackingPluginModule_);
void setInstantTrackingPluginModule(std::unique_ptr<InstantTrackingPluginModule> instantTrackingPluginModule_);
void setCameraFrameInputPluginModule(std::unique_ptr<CameraFrameInputPluginModule> cameraFrameInputPluginModule_);
void setDeviceIMUInputPluginModule(std::unique_ptr<DeviceIMUInputPluginModule> deviceIMUInputPluginModule_);
void setOpenGLESRenderingPluginModule(std::unique_ptr<OpenGLESRenderingPluginModule> openGLESRenderingPluginModule_);
void setMetalRenderingPluginModule(std::unique_ptr<MetalRenderingPluginModule> metalRenderingPluginModule_);
void setDirectXRenderingPluginModule(std::unique_ptr<DirectXRenderingPluginModule> directXRenderingPluginModule_);
void iterateEnabledPluginModules(std::function<void(PluginModule& activePluginModule_)> activePluginModuleIteratorHandle_);
protected:
std::string _identifier;
bool _enabled;
private:
std::unique_ptr<ImageTrackingPluginModule> _imageTrackingModule;
std::unique_ptr<InstantTrackingPluginModule> _instantTrackingModule;
std::unique_ptr<ObjectTrackingPluginModule> _objectTrackingModule;
std::unique_ptr<CameraFrameInputPluginModule> _cameraFrameInputModule;
std::unique_ptr<DeviceIMUInputPluginModule> _deviceIMUInputPluginModule;
std::unique_ptr<OpenGLESRenderingPluginModule> _openGlesRenderingModule;
std::unique_ptr<MetalRenderingPluginModule> _metalRenderingModule;
std::unique_ptr<DirectXRenderingPluginModule> _directXRenderingModule;
mutable std::mutex _pluginModuleAccessMutex;
std::set<PluginModule*> _availablePluginModules;
};
While we will not go over every function of the Plugin class and all the optional module classes, the following sections will present sample plugins that should convey most of the concepts and methods involved in creating your own plugin.
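To give a rough idea of the overall shape before looking at the samples, a minimal plugin could look like the following sketch. The identifier string and the commented-out module are made up for illustration; cameraFrameAvailable and update are the two pure virtual methods every plugin has to override.
class MyPlugin : public wikitude::sdk::Plugin {
public:
    MyPlugin()
    : Plugin("com.example.myPlugin") /* the identifier has to be unique across all registered plugins */ {
        /* optional modules would be registered here through the protected setters, e.g.
           setCameraFrameInputPluginModule(std::make_unique<MyCameraFrameInputModule>()); */
    }
    void cameraFrameAvailable(wikitude::common_code::ManagedCameraFrame& managedCameraFrame_) override {
        /* inspect or process the current camera frame */
    }
    void update(const wikitude::sdk::RecognizedTargetsBucket& recognizedTargetsBucket_) override {
        /* react to the targets the Wikitude SDK recognized in this frame */
    }
};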
Information about Recognized Targets
If the Wikitude SDK is running with active image, instant or object recognition, the plugins API will populate the RecognizedTargetsBucket in the update method with the currently recognized targets. The plugin may then use the corresponding target objects to acquire data, most importantly the pose, and use it for further processing.
class RecognizedTargetsBucket {
public:
RecognizedTargetsBucket(const RecognizedTargetsBucketConnector& recognizedTargetsBucketConnector_);
~RecognizedTargetsBucket() = default;
const std::list<ImageTarget>& getImageTargets() const;
const std::list<ObjectTarget>& getObjectTargets() const;
const std::list<InstantTarget>& getInstantTargets() const;
private:
RecognizedTargetsBucketConnector _recognizedTargetsBucketConnector;
};
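As a sketch of how a plugin could consume this data, the image targets can be iterated in the update method as shown below. Which accessors the ImageTarget class offers (name, pose matrix, etc.) has to be checked against the actual SDK headers, so the loop body is only indicated by a comment.
void MyPlugin::update(const wikitude::sdk::RecognizedTargetsBucket& recognizedTargetsBucket_) {
    for (const auto& imageTarget : recognizedTargetsBucket_.getImageTargets()) {
        /* read the required data from imageTarget here, e.g. its name and pose,
           and forward it to your own processing or rendering code */
    }
}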
Platform Specifics
To be able to use a C++ Wikitude plugin on Android, it is necessary to create a binary from the C++ code for each supported CPU architecture. To make this process as easy as possible we prepared a CMake file.
Please note that if you would like to use multiple C++ plugins in your app, you will need to package all plugins in one shared library. This is necessary because we use JNI to register C++ plugins with the Wikitude SDK and the symbol to do that has to be unique.
Android C++ Wikitude Plugin Library Build
The following CMake file shows how plugins can be added and built. The class used as plugin has to derive from wikitude::sdk::Plugin.
Note that the libnativesdk.so library is not shipped with the SDK package directly, but needs to be extracted from the .aar file, either manually or automatically.
cmake_minimum_required(VERSION 3.6)
add_library(lib_nativeSDK SHARED IMPORTED)
set_target_properties(lib_nativeSDK PROPERTIES IMPORTED_LOCATION ${WIKITUDE_NATIVE_PATH}/${ANDROID_ABI}/libnativesdk.so)
add_library(yourPlugins SHARED
src/jniHelper.cpp
src/JniRegistration.cpp
src/__YOUR_PLUGIN__.cpp
)
target_include_directories(yourPlugins
PRIVATE
include/wikitude
src
)
target_link_libraries(yourPlugins
lib_nativeSDK
)
This Gradle snippet shows a possible way to enable automatic extraction of the nativeSDK shared library from the .aar file. It assumes that the Gradle script this code is added to is able to build and run with the Java Native SDK (as explained in the Setup Guide Android Native API).
/* Defines the temporary path for the NativeSDK shared library and Plugins shared library.*/
def wikitudeTmpPath = "${buildDir}/wikitude"
/* Creates a new configuration to extract the shared library from the NativeSDK for the Plugins to link against. */
configurations { libraryExtraction }
android {
...
defaultConfig {
...
externalNativeBuild {
cmake {
/* Only build for supported architectures. */
abiFilters 'x86', 'armeabi-v7a', 'arm64-v8a'
arguments "-DANDROID_TOOLCHAIN=clang",
"-DANDROID_STL=c++_shared", /* Can also be c++_static. */
"-DANDROID_NATIVE_API_LEVEL=19",
/* Provides the path to the extracted nativesdk.so to CMake */
"-DWIKITUDE_NATIVE_PATH=${wikitudeTmpPath}/jni"
cppFlags "-std=c++14"
}
}
}
...
externalNativeBuild {
cmake {
path "src/main/cpp/CMakeLists.txt"
}
}
}
dependencies {
...
/* Extract the native sdk shared library from the aar. */
libraryExtraction (name:'wikitude-native-sdk', ext:'aar')
}
/* Task to extract the nativesdk shared library from the aar. */
task extractNativeLibraries() {
doFirst {
configurations.libraryExtraction.files.each { file ->
copy {
from zipTree(file)
into wikitudeTmpPath
include "jni/**/*"
}
}
}
}
tasks.whenTaskAdded {
task ->
if (task.name.contains("external") && !task.name.contains("Clean")) {
/* The externalNativeBuild depends on the extraction task to be able to link properly. */
task.dependsOn(extractNativeLibraries)
}
}
To connect the C++ plugin to the SDK, the Java_com_wikitude_common_plugins_internal_PluginManagerInternal_createNativePlugins function has to be implemented. This function has to have three parameters (JNIEnv*, jobject and jstring) and has to return a jlongArray. The third parameter is the name of the plugin that is being registered, in case there are several plugins. The returned array contains the pointers to the plugins that should be registered, cast to jlong.
This can be seen in the JNIRegistration.cpp file in the example app code.
JavaVM* pluginJavaVM;
extern "C" JNIEXPORT jlongArray JNICALL Java_com_wikitude_common_plugins_internal_PluginManagerInternal_createNativePlugins(JNIEnv *env, jobject thisObj, jstring jPluginName) {
env->GetJavaVM(&pluginJavaVM);
int numberOfPlugins = 1;
jlong cPluginsArray[numberOfPlugins];
JavaStringResource pluginName(env, jPluginName);
if (pluginName.str == "face_detection") {
FaceDetectionPluginConnector* connector = new FaceDetectionPluginConnector();
cPluginsArray[0] = (jlong) new FaceDetectionPlugin(640, 480, connector);
} else if (pluginName.str == "barcode") {
cPluginsArray[0] = (jlong) new BarcodePlugin(640, 480);
} else if ( pluginName.str == "customcamera" ) {
cPluginsArray[0] = (jlong) new YUVFrameInputPlugin();
} else if ( pluginName.str == "simple_input_plugin" ) {
cPluginsArray[0] = (jlong) new SimpleInputPlugin();
}
jlongArray jPluginsArray = env->NewLongArray(numberOfPlugins);
if (jPluginsArray != nullptr) {
env->SetLongArrayRegion(jPluginsArray, 0, numberOfPlugins, cPluginsArray);
}
return jPluginsArray;
}
Registering Plugins
Register C++ Plugin
To register a C++ plugin with the Wikitude Native SDK, get the PluginManager from the WikitudeSDK instance and call registerNativePlugins, passing the name of your library. Do not add lib in front of the name or append the .so extension. If you register your plugin in the onCreate method of your activity, please also make sure to call the onCreate method of the WikitudeSDK first. The following snippet comes from the BarcodePluginActivity of the Wikitude Native SDK Example application.
@Override
protected void onCreate(Bundle savedInstanceState) {
...
wikitudeSDK.onCreate(getApplicationContext(), startupConfiguration);
...
wikitudeSDK.getPluginManager().registerNativePlugins("wikitudePlugins", "barcode", new ErrorCallback() {
@Override
public void onError(@NonNull WikitudeError error) {
Toast.makeText(BarcodePluginActivity.this, "Could not load plugin. Reason: " + error.getMessage(), Toast.LENGTH_LONG).show();
Log.v(TAG, "Plugin failed to load. Reason: " + error.getMessage());
}
});
}
Barcode and QR code reader
This sample shows a full integration of the popular barcode library ZBar into the Wikitude SDK. As ZBar is licensed under the LGPL 2.1, this sample can also be used for other projects.
ZBar is an open source software suite for reading bar codes from various sources, such as video streams, image files and raw intensity sensors. It supports many popular symbologies (types of bar codes) including EAN-13/UPC-A, UPC-E, EAN-8, Code 128, Code 39, Interleaved 2 of 5 and QR Code.
In the BarcodePluginActivity.onCreate method we register the bar code C++ plugin by getting the PluginManager from the Wikitude SDK and calling registerNativePlugins, passing the name of the native library containing our C++ plugin. Right after that we call initNative(), which we declared as a native method and implement in the C++ plugin, to initialize the JavaVM pointer held by the native plugin. We also implement the method onBarcodeDetected to display the contents of the scanned bar code. We'll later call this method from the bar code plugin.
public class BarcodePluginActivity extends Activity implements ImageTrackerListener, ExternalRendering {
@Override
protected void onCreate(Bundle savedInstanceState) {
[...]
wikitudeSDK.getPluginManager().registerNativePlugins("wikitudePlugins", "barcode", new ErrorCallback() {
@Override
public void onError(@NonNull WikitudeError error) {
Toast.makeText(BarcodePluginActivity.this, "Could not load plugin. Reason: " + error.getMessage(), Toast.LENGTH_LONG).show();
Log.v(TAG, "Plugin failed to load. Reason: " + error.getMessage());
}
});
initNative();
}
...
public void onBarcodeDetected(final String codeContent) {
runOnUiThread(new Runnable() {
@Override
public void run() {
dropDownAlert.removeAllImages();
dropDownAlert.setTextWeight(1);
dropDownAlert.setText("Scan result: " + codeContent);
}
});
}
private native void initNative();
}
Now let's move to the plugin's C++ code. First we'll have a look at the BarcodePlugin.h file. To create the bar code plugin we derive our BarcodePlugin class from wikitude::sdk::Plugin and override initialize, destroy, cameraFrameAvailable and update. We also declare the following member variables: _worldNeedsUpdate, _image, _imageScanner and _methodId. The _worldNeedsUpdate variable will later be used as an indicator of whether we need to update the View, _image and _imageScanner are classes from ZBar which we'll use to scan for bar codes, and _methodId will hold the method id of the onBarcodeDetected Java method.
extern JavaVM* pluginJavaVM;
class BarcodePlugin : public wikitude::sdk::Plugin {
public:
BarcodePlugin(unsigned int cameraFrameWidth, unsigned int cameraFrameHeight);
virtual ~BarcodePlugin();
void initialize(const std::string& temporaryDirectory_, wikitude::sdk::PluginParameterCollection& pluginParameterCollection_) override;
void destroy() override;
void cameraFrameAvailable(wikitude::common_code::ManagedCameraFrame& managedCameraFrame_) override;
void update(const wikitude::sdk::RecognizedTargetsBucket& recognizedTargetsBucket_) override;
protected:
int _worldNeedsUpdate;
zbar::Image _image;
zbar::ImageScanner _imageScanner;
private:
jmethodID _methodId;
bool _jniInitialized;
};
We declare two variables in the global namespace: one which will hold a pointer to the JavaVM and one which will hold a reference to our activity. To initialize those two variables we declared the initNative native method in the BarcodePluginActivity and implement it like in the following code snippet. All we do is get the pointer to the JavaVM from the JNIEnv and create a new global reference to the calling activity instance.
jobject barcodeActivityObj;
extern "C" JNIEXPORT void JNICALL
Java_com_wikitude_samples_plugins_BarcodePluginActivity_initNative(JNIEnv* env, jobject obj) {
env->GetJavaVM(&pluginJavaVM);
barcodeActivityObj = env->NewGlobalRef(obj);
}
In the constructor we set _worldNeedsUpdate to zero, indicating that no update is necessary, and initialize the zbar::Image member variable, passing its constructor the width and height of the camera frame, the image format Y800, a null data pointer and a data length of zero. In the destructor we create an instance of JavaVMResource, a helper class to manage the JavaVM which we provide in the file jniHelper.cpp, and use its Java environment to delete the global reference to the activity object. The _methodId member is initialized lazily the first time cameraFrameAvailable is called.
BarcodePlugin::BarcodePlugin(unsigned int cameraFrameWidth, unsigned int cameraFrameHeight) :
Plugin("com.wikitude.android.barcodePlugin"),
_worldNeedsUpdate(0),
_image(cameraFrameWidth, cameraFrameHeight, "Y800", nullptr, 0),
_jniInitialized(false) {
}
BarcodePlugin::~BarcodePlugin() {
JavaVMResource vm(pluginJavaVM);
vm.env->DeleteGlobalRef(barcodeActivityObj);
}
In the initialize method we configure the zbar::ImageScanner by calling set_config, enabling all supported bar codes. If you are only interested in one or a few particular types of codes, it is better to first disable all bar code types and then manually enable each type you need; that way performance can be improved considerably.
void BarcodePlugin::initialize(const std::string& temporaryDirectory_, wikitude::sdk::PluginParameterCollection& pluginParameterCollection_) {
_imageScanner.set_config(zbar::ZBAR_NONE, zbar::ZBAR_CFG_ENABLE, 1);
}
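If, for example, only QR codes are of interest, the body of initialize could instead disable all symbologies and re-enable just the one that is needed. This sketch uses ZBar's standard configuration constants; adapt the symbol types to your use case.
_imageScanner.set_config(zbar::ZBAR_NONE, zbar::ZBAR_CFG_ENABLE, 0);   /* disable all symbologies */
_imageScanner.set_config(zbar::ZBAR_QRCODE, zbar::ZBAR_CFG_ENABLE, 1); /* re-enable QR codes only */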
We react to the destroy event by setting the current data pointer of the zbar::Image member to null and the length to zero.
void BarcodePlugin::destroy() {
_image.set_data(nullptr, 0);
}
The last but most interesting method is cameraFrameAvailable. In it we set the data of our previously initialized zbar::Image member variable to the luminance data of the frame we just received, together with its size, by calling set_data. We then start the scanning process by calling the scan method of our zbar::ImageScanner, passing the zbar::Image member instance. The zbar::ImageScanner::scan method returns the number of detected bar codes in the image frame; we save this number in a local variable n. If n is not equal to the result of the last frame, which we saved to the _worldNeedsUpdate member variable, we know that either a new bar code was detected (meaning there was no bar code in the last frame) or that there was a bar code in the last frame and now there isn't. When that's the case, we check whether a bar code was actually detected this frame and, if so, call the onBarcodeDetected Java method, passing the code content.
void BarcodePlugin::cameraFrameAvailable(wikitude::common_code::ManagedCameraFrame& cameraFrame_) {
if (!_jniInitialized) {
JavaVMResource vm(pluginJavaVM);
jclass clazz = vm.env->FindClass("com/wikitude/samples/plugins/BarcodePluginActivity");
_methodId = vm.env->GetMethodID(clazz, "onBarcodeDetected", "(Ljava/lang/String;)V");
_jniInitialized = true;
}
const wikitude::sdk::CameraFramePlane& luminancePlane = cameraFrame_.get()[0];
_image.set_data(luminancePlane.getData(), luminancePlane.getDataSize());
int n = _imageScanner.scan(_image);
if (n != _worldNeedsUpdate) {
if (n) {
JavaVMResource vm(pluginJavaVM);
zbar::Image::SymbolIterator symbol = _image.symbol_begin();
jstring codeContent = vm.env->NewStringUTF(symbol->get_data().c_str());
vm.env->CallVoidMethod(barcodeActivityObj, _methodId, codeContent);
}
}
_worldNeedsUpdate = n;
}
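The sample only reports the first detected symbol. If several codes can appear in one frame, all results could be walked with ZBar's symbol iterator, as in the following sketch; the Java callback shown above would then have to accept and display multiple results.
for (zbar::Image::SymbolIterator symbol = _image.symbol_begin(); symbol != _image.symbol_end(); ++symbol) {
    /* symbol->get_type_name() identifies the symbology, symbol->get_data() holds the decoded content */
    std::string result = symbol->get_type_name() + ": " + symbol->get_data();
    /* forward each result to Java, analogous to the single-result call above */
}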
Face Detection
This sample shows how to add face detection to your Wikitude augmented reality experience using OpenCV.
The Face Detection Plugin Example consists of the C++ classes FaceDetectionPlugin and FaceDetectionPluginConnector and the Java class FaceDetectionPluginActivity. We will use OpenCV to detect faces in the current camera frame and OpenGL calls in Java to render a frame around detected faces.
The FaceDetectionPluginConnector acts as our interface between native code and Java and contains some JNI code; since JNI is not the focus of this example, we won't go into detail about the implementation. If you would like to have a look at the complete code, feel free to browse the source code in the Wikitude SDK release package.
We implement the Java native method initNative; it will be used to initialize the plugin with the path to an OpenCV database. The other methods faceDetected, faceLost and projectionMatrixChanged will be called by the plugin to update the Java Android activity, which controls the rendering.
extern "C" JNIEXPORT void JNICALL
Java_com_wikitude_samples_plugins_FaceDetectionPluginActivity_initNative(JNIEnv* env, jobject obj, jstring databasePath_) {
env->GetJavaVM(&pluginJavaVM);
faceDetectionActivityObj = env->NewGlobalRef(obj);
FaceDetectionPlugin::_databasePath = env->GetStringUTFChars(databasePath_, NULL);
}
void FaceDetectionPluginConnector::faceDetected(const float* modelViewMatrix) {
JavaVMResource vm(pluginJavaVM);
jclass clazz = vm.env->FindClass("com/wikitude/samples/plugins/FaceDetectionPluginActivity");
_faceDetectedId = vm.env->GetMethodID(clazz, "onFaceDetected", "([F)V");
jfloatArray jModelViewMatrix = vm.env->NewFloatArray(16);
vm.env->SetFloatArrayRegion(jModelViewMatrix, 0, 16, modelViewMatrix);
vm.env->CallVoidMethod(faceDetectionActivityObj, _faceDetectedId, jModelViewMatrix);
}
void FaceDetectionPluginConnector::faceLost() {
JavaVMResource vm(pluginJavaVM);
jclass clazz = vm.env->FindClass("com/wikitude/samples/plugins/FaceDetectionPluginActivity");
_faceLostId = vm.env->GetMethodID(clazz, "onFaceLost", "()V");
vm.env->CallVoidMethod(faceDetectionActivityObj, _faceLostId);
}
void FaceDetectionPluginConnector::projectionMatrixChanged(const float* projectionMatrix) {
if (faceDetectionActivityObj) {
JavaVMResource vm(pluginJavaVM);
jclass clazz = vm.env->FindClass("com/wikitude/samples/plugins/FaceDetectionPluginActivity");
_projectionMatrixChangedId = vm.env->GetMethodID(clazz, "onProjectionMatrixChanged", "([F)V");
jfloatArray jProjectionMatrix = vm.env->NewFloatArray(16);
vm.env->SetFloatArrayRegion(jProjectionMatrix, 0, 16, projectionMatrix);
vm.env->CallVoidMethod(faceDetectionActivityObj, _projectionMatrixChangedId, jProjectionMatrix);
}
}
Next we have a look at the FaceDetectionPlugin class. Again we will leave out implementation details and focus on how we use the plugin itself. In the cameraFrameAvailable method we use OpenCV to detect faces in the current camera frame, which the Wikitude SDK passes to the plugin. We then call the observer, which is an instance of the FaceDetectionPluginConnector, to notify the Java activity about the result.
void FaceDetectionPlugin::cameraFrameAvailable(wikitude::common_code::ManagedCameraFrame& cameraFrame_) {
if (!_isDatabaseLoaded) {
_isDatabaseLoaded = _cascadeDetector.load(_databasePath);
if (!_isDatabaseLoaded) {
return;
}
}
wikitude::sdk::Size<int> frameSize = cameraFrame_.getColorMetadata().getPixelSize();
std::memcpy(_grayFrame.data, cameraFrame_.get()[0].getData(), cameraFrame_.get()[0].getDataSize());
cv::Mat smallImg = cv::Mat(frameSize.height * 0.5f, frameSize.width * 0.5f, CV_8UC1);
cv::resize(_grayFrame, smallImg, smallImg.size(), 0, 0, CV_INTER_AREA);
/* Depending on the device orientation, the camera frame needs to be rotated in order to detect faces in it */
float currentCameraToSurfaceAngle;
{ // auto release scope
std::unique_lock<std::mutex> lock(_cameraToSurfaceAngleMutex);
currentCameraToSurfaceAngle = _cameraToSurfaceAngle;
}
if (currentCameraToSurfaceAngle == 90) {
cv::transpose(smallImg, smallImg);
cv::flip(smallImg, smallImg, 1);
} else if (currentCameraToSurfaceAngle == 180) {
cv::flip(smallImg, smallImg, 0);
} else if (currentCameraToSurfaceAngle == 270) {
cv::transpose(smallImg, smallImg);
cv::flip(smallImg, smallImg, -1);
} else if (currentCameraToSurfaceAngle == 0) {
// nop for landscape right
}
cv::Rect crop = cv::Rect(smallImg.cols / 4, smallImg.rows / 4, smallImg.cols / 2, smallImg.rows / 2);
cv::Mat croppedImg = smallImg(crop);
_result.clear();
_cascadeDetector.detectMultiScale(croppedImg, _result, 1.1, 2, 0, cv::Size(20, 20));
if (_result.size()) {
convertFaceRectToModelViewMatrix(croppedImg, _result.at(0), currentCameraToSurfaceAngle);
_observer->faceDetected(_modelViewMatrix);
} else {
_observer->faceLost();
}
}
In the FaceDetectionPluginActivity we override onCreate and initialize the plugin by calling the initNative native method, passing the path to the database file. To render a frame around detected faces we create an instance of the GLRendererFaceDetectionPlugin class, which takes care of rendering a rectangle around faces and all targets of the also active ImageTracker. When the plugin detects a face, loses it or recalculates the projection matrix, it will call the appropriate Java methods, which we use to update the Renderer instance.
@Override
protected void onCreate(Bundle savedInstanceState) {
[...]
wikitudeSDK.getPluginManager().registerNativePlugins("wikitudePlugins", "face_detection", new ErrorCallback() {
@Override
public void onError(WikitudeError error) {
Log.v(TAG, "Plugin failed to load. Reason: " + error.getMessage());
}
});
try {
// load cascade file from application resources
InputStream is = getResources().openRawResource(R.raw.high_database);
File cascadeDir = getDir("cascade", Context.MODE_PRIVATE);
final File cascadeFile = new File(cascadeDir, "lbpcascade_frontalface.xml");
FileOutputStream os = new FileOutputStream(cascadeFile);
byte[] buffer = new byte[4096];
int bytesRead;
while ((bytesRead = is.read(buffer)) != -1) {
os.write(buffer, 0, bytesRead);
}
is.close();
os.close();
initNative(cascadeFile.getAbsolutePath());
cascadeDir.delete();
} catch (IOException e) {
e.printStackTrace();
Log.e(TAG, "Failed to load cascade. Exception thrown: " + e);
}
[...]
}
[...]
public void onFaceDetected(float[] modelViewMatrix) {
faceTarget.setViewMatrix(modelViewMatrix);
glRenderer.setCurrentlyRecognizedFace(faceTarget);
[...]
}
public void onFaceLost() {
glRenderer.setCurrentlyRecognizedFace(null);
}
public void onProjectionMatrixChanged(float[] projectionMatrix) {
faceTarget.setProjectionMatrix(projectionMatrix);
glRenderer.setCurrentlyRecognizedFace(faceTarget);
}
private native void initNative(String cascadeFilePath);
}
If you are interested in the implementation details of the FaceDetectionPluginActivity or the StrokedRectangle class, you can find both classes in our Wikitude SDK Example Application.