Documentation

Plugins API

This guide consists of multiple sections. First we discuss Wikitude SDK plugins in general, then we cover platform specifics and how to register a plugin with the Wikitude SDK, and finally we walk through each of the sample plugins included in the Wikitude Example Applications.

About Wikitude SDK Plugins

A plugin is a class, or rather a set of classes, written in C++ that extends the functionality of the Wikitude SDK. While the Plugin base class offers some of the main functionality, a plugin can additionally own several optional modules that allow more complex concepts to be implemented. The following list gives a brief overview of the plugin related classes and their responsibilities.

Plugin: Main class of a plugin implementation. Derive from this class to create your plugin. It handles the application lifecycle and the main plugin functionality, provides access to various parameters of the Wikitude SDK as well as to the camera frame and the recognized targets, and owns and manages all the optional plugin modules.
ImageTrackingPluginModule: Optional module that allows for a custom image tracking implementation. Derive from this class to implement your own image tracking algorithm to work in conjunction with the Wikitude SDK algorithms.
InstantTrackingPluginModule: Optional module that allows for a custom instant tracking implementation. Derive from this class to implement your own instant tracking algorithm to work in conjunction with the Wikitude SDK algorithms.
ObjectTrackingPluginModule: Optional module that allows for a custom object tracking implementation. Derive from this class to implement your own object tracking algorithm to work in conjunction with the Wikitude SDK algorithms.
CameraFrameInputPluginModule: Optional module that allows frame data to be fed into the Wikitude SDK. Derive from this class to implement your own camera frame acquisition. The supplied frame data can be processed and rendered by the Wikitude SDK.
DeviceIMUInputPluginModule: Optional module that allows sensor data to be fed into the Wikitude SDK. Derive from this class to implement your own sensor data acquisition. The supplied sensor data will be used by the Wikitude SDK for its tracking algorithms where applicable.
OpenGLESRenderingPluginModule: Optional module that allows for custom OpenGL ES rendering. Available on iOS and Android.
MetalRenderingPluginModule: Optional module that allows for custom Metal rendering. Only available on iOS.
DirectXRenderingPluginModule: Optional module that allows for custom DirectX rendering. Only available on Windows.

Each of the optional modules can be registered by calling the corresponding function of the Plugin class.

An important thing to remember when working with plugins is that they need to have a unique identifier. If an attempt is made to register a plugin with an identifier that is already known to the Wikitude SDK, the register method call will return false.
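To make these concepts concrete, the following minimal sketch shows a derived plugin. The class name and identifier are made up for illustration, the namespaces follow the sample code later in this guide, and only the two pure virtual methods of the Plugin base class (shown in the next section) are overridden:

class MySamplePlugin : public wikitude::sdk::Plugin {
public:
    MySamplePlugin()
        /* the identifier has to be unique across all registered plugins */
        : wikitude::sdk::Plugin("com.example.mySamplePlugin") {
        /* optional modules would be registered here, e.g. by calling
           setCameraFrameInputPluginModule(...) with your own module instance */
    }

    void cameraFrameAvailable(wikitude::sdk::ManagedCameraFrame& managedCameraFrame_) override {
        /* inspect or process the current camera frame */
    }

    void update(const wikitude::sdk::RecognizedTargetsBucket& recognizedTargetsBucket_) override {
        /* react to the currently recognized targets */
    }
};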

Plugin Base Class

class Plugin {
public:
    Plugin(std::string identifier_);
    virtual ~Plugin();

    virtual void initialize(const std::string& temporaryDirectory_, PluginParameterCollection& pluginParameterCollection_);
    virtual void setSDKEdition(SDKEdition sdkEdition_);
    virtual void pause();
    virtual void resume(unsigned int pausedTime_);
    virtual void destroy();

    virtual void cameraFrameAvailable(common_code::ManagedCameraFrame& managedCameraFrame_) = 0;
    virtual void deviceRotationEventAvailable(const DeviceRotationEvent& deviceRotationEvent_);
    virtual void deviceOrientationEventAvailable(const DeviceOrientationEvent& deviceOrientationEvent_);
    virtual void prepareUpdate();
    virtual void update(const RecognizedTargetsBucket& recognizedTargetsBucket_) = 0;

    virtual const std::string& getIdentifier() const;

    virtual void setEnabled(bool enabled_);
    virtual bool isEnabled() const;

    virtual PluginType getPluginType() const;

    virtual bool canPerformTrackingOperationsAlongOtherPlugins();
    virtual bool canUpdateMultipleTrackingInterfacesSimultaneously();

    ImageTrackingPluginModule* getImageTrackingPluginModule() const;
    InstantTrackingPluginModule* getInstantTrackingPluginModule() const;
    ObjectTrackingPluginModule* getObjectTrackingPluginModule() const;

    CameraFrameInputPluginModule* getCameraFrameInputPluginModule() const;
    DeviceIMUInputPluginModule* getDeviceIMUInpputPluginModule() const;

    OpenGLESRenderingPluginModule* getOpenGLESRenderingPluginModule() const;
    MetalRenderingPluginModule* getMetalRenderingPluginModule() const;
    DirectXRenderingPluginModule* getDirectXRenderingPluginModule() const;

protected:
    void setImageTrackingPluginModule(std::unique_ptr<ImageTrackingPluginModule> imageTrackingPluginModule_);
    void setObjectTrackingPluginModule(std::unique_ptr<ObjectTrackingPluginModule> objectTrackingPluginModule_);
    void setInstantTrackingPluginModule(std::unique_ptr<InstantTrackingPluginModule> instantTrackingPluginModule_);

    void setCameraFrameInputPluginModule(std::unique_ptr<CameraFrameInputPluginModule> cameraFrameInputPluginModule_);
    void setDeviceIMUInputPluginModule(std::unique_ptr<DeviceIMUInputPluginModule> deviceIMUInputPluginModule_);

    void setOpenGLESRenderingPluginModule(std::unique_ptr<OpenGLESRenderingPluginModule> openGLESRenderingPluginModule_);
    void setMetalRenderingPluginModule(std::unique_ptr<MetalRenderingPluginModule> metalRenderingPluginModule_);
    void setDirectXRenderingPluginModule(std::unique_ptr<DirectXRenderingPluginModule> directXRenderingPluginModule_);

    void iterateEnabledPluginModules(std::function<void(PluginModule& activePluginModule_)> activePluginModuleIteratorHandle_);

protected:
    std::string     _identifier;
    bool            _enabled;

    mutable std::mutex          _pluginModuleAccessMutex;
    std::set<PluginModule*>     _availablePluginModules;

private:
    std::unique_ptr<ImageTrackingPluginModule> _imageTrackingModule;
    std::unique_ptr<InstantTrackingPluginModule> _instantTrackingModule;
    std::unique_ptr<ObjectTrackingPluginModule> _objectTrackingModule;

    std::unique_ptr<CameraFrameInputPluginModule>   _cameraFrameInputModule;
    std::unique_ptr<DeviceIMUInputPluginModule>     _deviceIMUInputPluginModule;

    std::unique_ptr<OpenGLESRenderingPluginModule> _openGlesRenderingModule;
    std::unique_ptr<MetalRenderingPluginModule> _metalRenderingModule;
    std::unique_ptr<DirectXRenderingPluginModule> _directXRenderingModule;
};

While we will not go over every function of the Plugin class and all the optional module classes, the following sections will present sample plugins that should convey most of the concepts and methods involved in creating your own plugin.

Information about Recognized Targets

If the Wikitude SDK is running with active image, instant or object recognition, the Plugins API will populate the RecognizedTargetsBucket that is passed to the update method with the currently recognized targets. The plugin may then use the corresponding target objects to acquire data, most importantly the pose, and use it for further processing.

class RecognizedTargetsBucket {
public:
    virtual ~RecognizedTargetsBucket() = default;

    virtual const std::vector<ImageTarget*>& getImageTargets() const = 0;
    virtual const std::vector<ObjectTarget*>& getObjectTargets() const = 0;

    virtual const std::vector<InstantTarget*>& getInstantTargets() const = 0;
    virtual const std::vector<InitializationPose*>& getInitializationPoses() const = 0;

    virtual const std::vector<Plane*>& getPlanes() const = 0;
    virtual const Matrix4& getViewMatrix() const = 0;
};
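As a small sketch of how a plugin could consume this data in its update method (illustrative only, using only the accessors shown above and the namespaces used in the samples):

void MySamplePlugin::update(const wikitude::sdk::RecognizedTargetsBucket& recognizedTargetsBucket_) {
    for (wikitude::sdk::ImageTarget* imageTarget : recognizedTargetsBucket_.getImageTargets()) {
        /* read the target's data here, most importantly its pose,
           and forward it to your own processing or rendering code */
    }

    /* the current view matrix is available as well, e.g. for custom rendering */
    const wikitude::sdk::Matrix4& viewMatrix = recognizedTargetsBucket_.getViewMatrix();
}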

Platform Specifics

C++ types cannot be passed through the public API of a WinRT component, so C++ plugins need to be wrapped in a WinRT compatible object first. In our examples, we create a C++/CX class deriving from wikitude::sdk::uwp::IPlugin that holds a std::shared_ptr to a C++ class deriving from wikitude::sdk::Plugin.

The C++ plugin is then retrieved by the SDK by calling IPlugin::getPluginPointerAsInt, which returns the address of the C++ plugin's shared_ptr.
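A sketch of such a wrapper is shown below. The class and member names are placeholders and the getIdentifier return type is an assumption; the actual wrapper classes ship with the example application:

public ref class PluginWrapper sealed : wikitude::sdk::uwp::IPlugin
{
public:
    PluginWrapper() : cppPlugin(std::make_shared<CorePlugin>()) { }

    /* returns the address of the shared_ptr holding the C++ plugin */
    virtual uint64 getPluginPointerAsInt();
    /* returns the identifier of the wrapped C++ plugin */
    virtual Platform::String^ getIdentifier();

private:
    std::shared_ptr<CorePlugin> cppPlugin;
};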

Registering Plugins

Register C++ Plugin

To register a C++ plugin, the Wikitude Native SDK for UWP offers a WinRT wrapper class.

Registering a C++ Plugin

Create a class that inherits from wikitude::sdk::uwp::IPlugin. You will have to implement two methods: IPlugin::getPluginPointerAsInt and IPlugin::getIdentifier. The first one is used to pass the address of a shared_ptr to the C++ plugin:

uint64 PluginWrapper::getPluginPointerAsInt()
{
    return reinterpret_cast<uint64>(&cppPlugin);
}

The latter is used to retrieve the plugin identifier.
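A possible implementation, assuming getIdentifier returns a Platform::String^ and the identifier only contains ASCII characters, could look like this:

Platform::String^ PluginWrapper::getIdentifier()
{
    /* convert the std::string identifier of the wrapped C++ plugin into a WinRT string */
    const std::string& identifier = cppPlugin->getIdentifier();
    std::wstring wideIdentifier(identifier.begin(), identifier.end());
    return ref new Platform::String(wideIdentifier.c_str());
}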

Removing a C++ Plugin

To remove an already registered C++ plugin, call the PluginManager::removePlugin method with the plugin identifier.

_sdk->getPluginManager()->removePlugin(barcodePlugin->getIdentifier());

Barcode and QR code reader

This sample shows a full integration of the popular barcode library ZBar into the Wikitude SDK. As ZBar is licensed under LGPL 2.1, this sample can also be used for other projects.

ZBar is an open source software suite for reading bar codes from various sources, such as video streams, image files and raw intensity sensors. It supports many popular symbologies (types of bar codes) including EAN-13/UPC-A, UPC-E, EAN-8, Code 128, Code 39, Interleaved 2 of 5 and QR Code.

The C++ barcode plugin CoreBarcodePlugin is created when the wrapper plugin BarcodePlugin is constructed in the OnNavigatedTo event handler. The wrapper plugin is then passed to the SDK.

auto barcodePlugin = ref new NativeSDKExamplesPlugins::BarcodePlugin();
barcodePlugin->ScannedBarcode += ref new NativeSDKExamplesPlugins::ScannedBarcodeEventHandler(this, &BarcodePluginPage::onDecodedData);
_sdk->getPluginManager()->addPlugin(barcodePlugin);

Now let's move on to the plugin C++ code. First we'll have a look at the BarcodePlugin.cpp file. To create the barcode plugin we derive our CoreBarcodePlugin class from wikitude::sdk::Plugin and override initialize, destroy, cameraFrameAvailable and update.

class CoreBarcodePlugin : public wikitude::sdk::Plugin {
public:
   CoreBarcodePlugin();
   // Inherited via Plugin
   virtual void cameraFrameAvailable(wikitude::sdk::ManagedCameraFrame& managedCameraFrame_) override;
   virtual void update(const wikitude::sdk::RecognizedTargetsBucket& recognizedTargetsBucket_) override;
   void initialize(const std::string& temporaryDirectory_, wikitude::sdk::PluginParameterCollection& pluginParameterCollection_) override;
   void destroy() override;

   void setScannedHandler(std::function<void(Platform::String^)> scannedHandler_);
private:
   int _currentlyRecognizedBarcodes;
   std::function<void(Platform::String^)> _scannedHandler;
};
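The BarcodePlugin wrapper connects this native callback to the WinRT ScannedBarcode event that the page subscribes to. The wrapper is not reproduced in this guide; a sketch of that connection, assuming the wrapper owns the C++ plugin in a cppPlugin member, could look like this:

BarcodePlugin::BarcodePlugin()
    : cppPlugin(std::make_shared<CoreBarcodePlugin>())
{
    /* forward decoded strings from the C++ plugin to the WinRT event */
    cppPlugin->setScannedHandler([this](Platform::String^ decodedData_) {
        ScannedBarcode(decodedData_);
    });
}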

The most interesting method is cameraFrameAvailable. In this method, we instantiate a ZXing::BarcodeReader, copy the luminance plane into a compatible data format, and call BarcodeReader::Decode to analyze the frame. If a barcode or QR code is detected, we pass the decoded text to the scanned handler.

void CoreBarcodePlugin::cameraFrameAvailable(wikitude::sdk::ManagedCameraFrame & managedCameraFrame_)
{
   const wikitude::sdk::CameraFramePlane& luminancePlane = managedCameraFrame_.get()[0];

   auto reader = ref new ZXing::BarcodeReader;

   Platform::Array<unsigned char>^ data = ref new Platform::Array<unsigned char>((unsigned char*)luminancePlane.getData(), luminancePlane.getDataSize());

   auto metadata = managedCameraFrame_.getColorMetadata();
   auto result = reader->Decode(data, metadata.getPixelSize().width, metadata.getPixelSize().height, ZXing::BitmapFormat::Gray8);

   if (result != nullptr)
   {
      auto type = result->BarcodeFormat.ToString();
      auto content = result->Text;
      if (_scannedHandler)
      {
            _scannedHandler(content);
      }
   }

}

Back in the BarcodePluginPage code, we attach a handler to the BarcodePlugin::ScannedBarcode event and display the decoded result in a TextBlock.

void BarcodePluginPage::onDecodedData(Platform::String ^ decodedData_)
{
   _codeScannedTime = high_resolution_clock::now();
   _dispatcher->RunAsync(CoreDispatcherPriority::Normal, ref new DispatchedHandler([=]() {
      DecodedData->Text = "Scan result : " + decodedData_;
   }));
}

Face Detection

This sample shows how to add face detection to your Wikitude augmented reality experience using OpenCV.

The face detection plugin example consists of the C++ class CoreFaceDetectionPlugin, the C++/CX wrapper class FaceDetectionPlugin and the C++/CX view class FaceDetectionPluginPage. We use OpenCV to detect faces in the current camera frame and Direct3D to render a rectangle around detected faces. FaceDetectionPlugin connects the CoreFaceDetectionPlugin class with the FaceDetectionPluginPage. Since the wrapper class mainly exists to ease the implementation of a cross platform plugin, we will not go into its implementation details, nor into any OpenCV or Direct3D details. If you are interested in those topics, the full source code is part of the Wikitude Native SDK example application.

FaceDetectionPluginPage handles the face detection plugin creation and registration exactly as described in the previous barcode example.

New in this example is a dependency on the camera frame orientation: the face detection algorithm expects faces to be upright, so we query the camera-to-surface angle from the Wikitude SDK and rotate the camera frame accordingly.

void CoreFaceDetectionPlugin::initialize(const std::string& temporaryDirectory_, wikitude::sdk::PluginParameterCollection& pluginParameterCollection_) {
    wikitude::sdk::RuntimeParameters& runtimeParameters = pluginParameterCollection_.getRuntimeParameters();
    auto& cameraParameters = pluginParameterCollection_.getCameraParameters();
    cameraParameters.addCameraFrameSizeChangedHandler(reinterpret_cast<std::uintptr_t>(this), std::bind(&CoreFaceDetectionPlugin::cameraFrameSizeChanged, this, std::placeholders::_1));
    _runtimeParameters = &runtimeParameters;
}
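The registered cameraFrameSizeChanged handler is not shown in this guide. Assuming it simply stores the new frame size in the _frameSize member that is used later when converting the face position, it could look like this:

void CoreFaceDetectionPlugin::cameraFrameSizeChanged(const wikitude::sdk::Size<int>& cameraFrameSize_) {
    /* remember the current camera frame size for later use */
    _frameSize = cameraFrameSize_;
}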

Next we have a look at the CoreFaceDetectionPlugin class. Again, we will leave out implementation details and focus on how we use the plugin itself. In the cameraFrameAvailable method we use OpenCV to detect faces in the current camera frame, which the Wikitude SDK passes to the plugin. We then call the observer, an instance of FaceDetectionPluginObserverWrapper, to notify the view about the result.

if (!_isDatabaseLoaded) {
    _isDatabaseLoaded = _cascadeDetector.load(_databasePath);
    if (!_isDatabaseLoaded) {
        return;
    }
}

wikitude::sdk::Size<int> cameraFrameSize = cameraFrame_.getColorMetadata().getPixelSize();
cv::Mat greyFrame{ cv::Size(cameraFrameSize.width, cameraFrameSize.height), CV_8UC1, const_cast<void*>(cameraFrame_.get()[0].getData())};

cv::Mat smallImg = cv::Mat(cv::Size(cameraFrameSize.width/2, cameraFrameSize.height/2), CV_8UC1);
cv::resize(greyFrame, smallImg, smallImg.size(), 0, 0, cv::INTER_AREA);

/* Depending on the device orientation, the camera frame needs to be rotated in order to detect faces in it */
float currentCameraToSurfaceAngle = _runtimeParameters->getCameraToSurfaceAngle();
if (currentCameraToSurfaceAngle == 90) {
    cv::transpose(smallImg, smallImg);
    cv::flip(smallImg, smallImg, 1);
}
else if (currentCameraToSurfaceAngle == 180) {
    cv::flip(smallImg, smallImg, -1);
}
else if (currentCameraToSurfaceAngle == 270) {
    cv::transpose(smallImg, smallImg);
    cv::flip(smallImg, smallImg, 0);
}

auto smallImgSize = smallImg.size();
cv::Rect crop = cv::Rect(smallImgSize.width / 4, smallImgSize.height / 4, smallImgSize.width / 2, smallImgSize.height / 2);
cv::Mat croppedImg = smallImg(crop);

std::vector<cv::Rect> results;
_cascadeDetector.detectMultiScale(croppedImg, results, 1.1, 2, 0, cv::Size(20, 20));

if (results.size()) {
    auto matrix = convertFacePositionToViewMatrix(croppedImg, results.at(0), currentCameraToSurfaceAngle, _frameSize);
    _observer->faceDetected(matrix);
} else {
    _observer->faceLost();
}

To render a frame around detected faces we created an instance of the StrokedRectangle class, which takes care of rendering a rectangle around detected faces as well as around all targets of the ImageTracker that is active at the same time. When the plugin detects a face, loses it or recalculates the projection matrix, it calls the appropriate view methods, which we use to update the StrokedRectangle instance.

void FaceDetectionPluginPage::OnFaceDetected(wikitude::sdk::uwp::Matrix ^ matrix)
{
    auto outputSize = _renderer->deviceResources()->GetOutputSize();
    if (!_faceRectangle) {
        _faceRectangle = new StrokedRectangle(_renderer->deviceResources()->GetD3DDevice());
        _faceRectangle->setColor(0.f, 1.f, 0.f);
    }
    _faceRectangle->updateModelMatrix(matrix);
}
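The counterpart that is called when the face is lost is not shown above. Assuming the page simply discards the rectangle until the next detection, a sketch could be:

void FaceDetectionPluginPage::OnFaceLost()
{
    /* stop drawing the rectangle until the next face is detected */
    delete _faceRectangle;
    _faceRectangle = nullptr;
}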