In combination with the Plugins API, the Wikitude SDK allows renderables defined with the JavaScript API to be positioned directly, without using the built-in tracking mechanisms. This makes it possible to take advantage of the rendering capabilities of the Wikitude SDK while supplying custom tracking algorithms. This example takes you through the process of implementing such a custom algorithm and highlights the intricacies involved. Specifically, a marker tracking plugin is implemented using the OpenCV and ArUco libraries.


To understand this example and utilise the AR.Positionable object, one must first understand how it is implemented in the Wikitude SDK. This section serves as a quick introduction to the topic.

Within the JavaScript API an AR.Positionable can be defined. This definition in turn triggers the instantiation of a complementary C++ object, a reference to which is provided in the updatePositionables function of the wikitude::sdk::Plugin, allowing it to be manipulated therein. A custom plugin utilising the positionable feature can therefore be implemented by deriving from said class and overriding the updatePositionables member function accordingly. After the updatePositionables function has performed its alterations, the AR.Positionable objects are submitted for rendering each frame. Conceptually, a positionable is therefore a plugin-mutable wrapper around a renderable in the Wikitude SDK. This enables the extension of the JavaScript API through the Plugins API in a simple manner.
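The pattern can be sketched with stand-in types. Note that PositionableWrapper and Plugin below are simplified mock-ups, not the real wikitude::sdk classes, which carry considerably more state:

```cpp
#include <string>
#include <unordered_map>

// Simplified stand-in for wikitude::sdk_core::impl::PositionableWrapper
struct PositionableWrapper {
    float worldMatrix[16];
    void setWorldMatrix(const float* m) {
        for (int i = 0; i < 16; ++i) { worldMatrix[i] = m[i]; }
    }
};

// Simplified stand-in for wikitude::sdk::Plugin
class Plugin {
public:
    virtual ~Plugin() = default;
    // called by the SDK each frame before rendering
    virtual void updatePositionables(const std::unordered_map<std::string, PositionableWrapper*>& positionables_) = 0;
};

// A custom plugin overrides updatePositionables and mutates the wrapper
// that corresponds to the AR.Positionable defined in JavaScript.
class CustomTrackingPlugin : public Plugin {
public:
    void updatePositionables(const std::unordered_map<std::string, PositionableWrapper*>& positionables_) override {
        auto it = positionables_.find("myPositionable");
        if (it == positionables_.end()) {
            return;
        }
        const float identity[16] = {1, 0, 0, 0,  0, 1, 0, 0,  0, 0, 1, 0,  0, 0, 0, 1};
        it->second->setWorldMatrix(identity);
    }
};
```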


For this example the following resources are recommended.

Plugin example

Have a look at the Plugins API example on this page if you are not familiar with it yet.

ArUco marker

If you would like to create your own ArUco markers, please refer to the utilities accompanying the ArUco library package. It can be downloaded from SourceForge.

A marker specific to the ArUco augmented reality library with ID #303.

ArUco and OpenCV documentation

If you would like to delve into the details of the tracking algorithm, the ArUco website and the OpenCV documentation pages on camera calibration and 3d reconstruction are the recommended starting points.

JavaScript implementation

Similar to AR.ImageTrackable and AR.GeoObject, an AR.Positionable can be created within the JavaScript API. It requires a string identifier and a renderable as its input parameters. For this example, an AR.Model is used. Notice that no tracker can be specified, as the tracking will be provided by the plugin instead.

var World = {
    _myPositionable: null,

    init: function initFn() {
        this.createOverlays();
    },

    createOverlays: function createOverlaysFn() {
        var myModel = new AR.Model("assets/car.wt3", {
            onLoaded: this.loadingStep,
            scale: {
                x: 0.01,
                y: 0.01,
                z: 0.01
            }
        });

        World._myPositionable = new AR.Positionable("myPositionable", {
            drawables: {
                cam: myModel
            }
        });
    }
};

World.init();

Plugin implementation

To implement a custom tracking algorithm we use the marker tracking capabilities of the ArUco library, which is based on the OpenCV library. It allows ArUco markers to be recognised within the camera frame and additionally makes it possible to compute their 3D position relative to the camera, enabling placement of the model onto the tracked marker. Although the ArUco and OpenCV libraries do most of the heavy lifting, quite a few things need to be considered and done for everything to work correctly. These considerations are important for most practical plugins and will be presented in the following sections.

Ultimately, however, all the custom plugin has to do is set the world matrix, view matrix, and projection matrix of the AR.Positionable object. How these matrices are to be set differs based on whether a 3D renderable or a 2D renderable is attached.

// transformation matrices for a 3D renderable
positionable->setWorldMatrix(modelViewMatrix.get());
positionable->setViewMatrix(identity.get());
positionable->setProjectionMatrix(projectionMatrix.get());

// transformation matrices for a 2D renderable
positionable->setWorldMatrix((projectionMatrix * modelViewMatrix).get());
positionable->setViewMatrix(identity.get());
positionable->setProjectionMatrix(identity.get());

The header file

Please see below the content of the MarkerTrackingPlugin.h file. We derive from the wikitude::sdk::Plugin class and override the cameraFrameAvailable function and the updatePositionables function.

Regarding member variables, there are some additions as well. The aruco::MarkerDetector is the main class of the ArUco library; it performs all the steps of the tracking algorithm. The std::vector<aruco::Marker> members are containers that hold the detected markers. The remaining member variables should be self-explanatory with the exception of the std::mutex, which will be explained as it becomes relevant.

class MarkerTrackingPlugin : public wikitude::sdk::Plugin {
public:
    virtual void surfaceChanged(wikitude::sdk::Size<int> renderSurfaceSize_, wikitude::sdk::Size<float> cameraSurfaceScaling_, wikitude::sdk::InterfaceOrientation interfaceOrientation_);

    virtual void cameraFrameAvailable(const wikitude::sdk::impl::Frame& cameraFrame_);

    virtual void update(const std::list<wikitude::sdk::impl::RecognizedTarget>& recognizedTargets_);

    virtual void updatePositionables(const std::unordered_map<std::string, wikitude::sdk_core::impl::PositionableWrapper*>& positionables_);

protected:
    aruco::MarkerDetector _detector;
    std::vector<aruco::Marker> _markers;
    std::vector<aruco::Marker> _markersPrev;
    std::vector<aruco::Marker> _markersCurr;
    std::vector<aruco::Marker> _markersPrevUpdate;
    std::vector<aruco::Marker> _markersCurrUpdate;

    bool _projectionInitialized;
    float _width;
    float _height;
    float _scaleWidth;
    float _scaleHeight;

    std::mutex _markerMutex;
    bool _updateDone;

    float _viewMatrixData[16];
    wikitude::sdk::Matrix4 _projectionMatrix;

    std::mutex _interfaceOrientationMutex;
    wikitude::sdk::InterfaceOrientation _currentInterfaceOrientation;
};

The cameraFrameAvailable function

In the cameraFrameAvailable function the _detector.detect() call performs the marker tracking on the luminance camera frame given a set of input parameters. While most of the parameters should be self-explanatory, the cameraMatrix parameter is not. It contains the data required to calculate the 3D position of the marker relative to the camera. Traditionally, the camera parameters along with distortion coefficients are precomputed by a separate camera calibration process. For the sake of this example, however, the parameters are simply estimated from the specifications of the iPhone 5. While the results suffer slightly, they should suffice for this simple demonstration. Even on different devices, the application still performs well. Should this not be the case for your device, you may need to alter the focal length or CCD sensor sizes accordingly.

// calculate the focal length in pixels (fx, fy)
const float focalLengthInMillimeter = 4.12f;
const float CCDWidthInMillimeter = 4.536f;
const float CCDHeightInMillimeter = 3.416f;

const float focalLengthInPixelsX = _width * focalLengthInMillimeter / CCDWidthInMillimeter;
const float focalLengthInPixelsY = _height * focalLengthInMillimeter / CCDHeightInMillimeter;

cv::Mat cameraMatrix = cv::Mat::zeros(3, 3, CV_32F);

// set the focal length (fx, fy)
cameraMatrix.at<float>(0, 0) = focalLengthInPixelsX;
cameraMatrix.at<float>(1, 1) = focalLengthInPixelsY;

// calculate the frame center (cx, cy)
cameraMatrix.at<float>(0, 2) = 0.5f * _width;
cameraMatrix.at<float>(1, 2) = 0.5f * _height;

// always 1
cameraMatrix.at<float>(2, 2) = 1.0f;

const float markerSizeInMeters = 0.1f;

_detector.detect(frameLuminance, _markers, cameraMatrix, cv::Mat(), markerSizeInMeters);
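The pinhole relation behind the fx and fy entries can be checked in isolation. The helper below merely restates the arithmetic from the snippet above as a self-contained function:

```cpp
// Pinhole camera model: a focal length given in millimetres is converted
// to pixels by scaling with the image size over the physical sensor size.
float focalLengthInPixels(float imageSizeInPixels, float focalLengthInMillimeter, float sensorSizeInMillimeter) {
    return imageSizeInPixels * focalLengthInMillimeter / sensorSizeInMillimeter;
}
```

For a 640 pixel wide frame and the iPhone 5 numbers above, this yields roughly 581 pixels for fx.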

Once markers are detected, a matrix is calculated that transforms the origin into the center of the tracked marker. Note that the tracking is restricted to a specific marker ID in this case to avoid ambiguities.

double viewMatrixData[16];
for (auto& marker : _markers) {
    // consider only marker 303
    if ( == 303) {
        marker.calculateExtrinsics(markerSizeInMeters, cameraMatrix, cv::Mat(), false);
        marker.glGetModelViewMatrix(viewMatrixData);
    }
}

Additionally, a projection matrix is computed that will be used by the updatePositionables function. The input parameters are, again, chosen to coincide with the specifications of the iPhone 5. Should your device have different characteristics, please change the vertical field of view value accordingly.

if (!_projectionInitialized) {
    const float fieldOfViewYDegree = 50.0f;
    const float nearZ = 0.1f;
    const float farZ = 100.0f;
    _projectionMatrix.perspective(fieldOfViewYDegree, _width / _height, nearZ, farZ);
    _projectionInitialized = true;
}
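To illustrate what such a perspective call produces, here is a plain C++ construction of a column-major, OpenGL-style projection matrix from a vertical field of view. This is a generic sketch; the exact internal layout of wikitude::sdk::Matrix4 is an assumption here:

```cpp
#include <cmath>

// Column-major OpenGL-style perspective projection from a vertical field
// of view in degrees. out[5] holds the vertical focal term 1 / tan(fovy / 2).
void perspective(float fovYDegree, float aspect, float nearZ, float farZ, float out[16]) {
    const float pi = 3.14159265358979f;
    const float f = 1.0f / std::tan(fovYDegree * pi / 360.0f);
    for (int i = 0; i < 16; ++i) { out[i] = 0.0f; }
    out[0]  = f / aspect;
    out[5]  = f;
    out[10] = (farZ + nearZ) / (nearZ - farZ);
    out[11] = -1.0f;
    out[14] = 2.0f * farZ * nearZ / (nearZ - farZ);
}
```

With a 90 degree vertical field of view the focal term is exactly 1, since tan(45°) = 1.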

As we want to have access to the AR.Positionable we defined earlier with the JavaScript API, we need to continue our algorithm within the updatePositionables function. There is, however, an important issue that needs to be considered: the cameraFrameAvailable function and the updatePositionables function are executed concurrently. We therefore need to introduce synchronisation measures to allow data to be passed from one to the other.

This is where the previously mentioned std::mutex becomes relevant. With it we ensure that the two threads never access the shared data simultaneously. Additionally, we utilise the _updateDone boolean flag to signal the update method that new data is available for processing.

/* critical section begin */
{
    std::lock_guard<std::mutex> lock(_markerMutex);

    if (_updateDone) {

        _markersPrev = _markersCurr;
        _markersCurr = _markers;

        for (unsigned int i = 0; i < 16; ++i) {
            _viewMatrixData[i] = static_cast<float>(viewMatrixData[i]);
        }

        _updateDone = false;
    }
}
/* critical section end */
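The handoff can be reduced to a small, self-contained pattern. The types below are illustrative stand-ins (plain ints instead of aruco::Marker), but the flag logic mirrors the two critical sections of the plugin:

```cpp
#include <mutex>
#include <vector>

struct SharedMarkers {
    std::mutex mutex;
    std::vector<int> markers;  // stand-in for std::vector<aruco::Marker>
    bool updateDone = true;    // true: consumer is ready for new data
};

// camera thread side: publish a new detection batch if the previous one
// has already been consumed, otherwise drop this frame
bool publish(SharedMarkers& shared_, const std::vector<int>& detected_) {
    std::lock_guard<std::mutex> lock(shared_.mutex);
    if (!shared_.updateDone) {
        return false;
    }
    shared_.markers = detected_;
    shared_.updateDone = false;
    return true;
}

// update thread side: consume the pending batch if one is available
bool consume(SharedMarkers& shared_, std::vector<int>& out_) {
    std::lock_guard<std::mutex> lock(shared_.mutex);
    if (shared_.updateDone) {
        return false;
    }
    out_ = shared_.markers;
    shared_.updateDone = true;
    return true;
}
```

Frames that arrive while the consumer is still busy are simply skipped, which is acceptable for tracking since the next camera frame supersedes the dropped one.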

The updatePositionables function

The updatePositionables method fulfils two tasks. Firstly, it determines whether any markers have been newly found that were not present in the previous frame, and whether any markers that were present in the previous frame have been lost. It then calls the enteredFieldOfVision and exitedFieldOfVision trigger functions accordingly, which enables the use of these triggers within the JavaScript API.

std::unordered_map<std::string, wikitude::sdk_core::impl::PositionableWrapper*>::const_iterator it = positionables_.find("myPositionable");

if (it == positionables_.end()) {
    return;
}

/* critical section start */
{
    std::lock_guard<std::mutex> lock(_markerMutex);

    if (!_updateDone) {

        _markersPrevUpdate = _markersPrev;
        _markersCurrUpdate = _markersCurr;

        for (const auto& marker : _markersCurrUpdate) {
            auto itFound = std::find_if(_markersPrevUpdate.begin(), _markersPrevUpdate.end(), [&](const aruco::Marker& other) -> bool { return ==; });

            if (itFound != _markersPrevUpdate.end()) {
                // marker was present in the previous frame as well; no trigger required
            } else {
                it->second->enteredFieldOfVision();
            }
        }

        for (const auto& marker : _markersPrevUpdate) {
            auto itFound = std::find_if(_markersCurrUpdate.begin(), _markersCurrUpdate.end(), [&](const aruco::Marker& other) -> bool { return ==; });

            if (itFound == _markersCurrUpdate.end()) {
                it->second->exitedFieldOfVision();
            }
        }

        _updateDone = true;
    }
}
/* critical section end */
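Stripped of the SDK types, the enter/exit bookkeeping is just a set difference on marker ids. The sketch below uses plain ints in place of aruco::Marker:

```cpp
#include <algorithm>
#include <vector>

// ids present in the current frame but not in the previous one have just
// entered the field of vision
std::vector<int> enteredIds(const std::vector<int>& previous_, const std::vector<int>& current_) {
    std::vector<int> result;
    for (int id : current_) {
        if (std::find(previous_.begin(), previous_.end(), id) == previous_.end()) {
            result.push_back(id);
        }
    }
    return result;
}

// ids present previously but missing now have exited; this is the same
// computation with the roles of the two frames swapped
std::vector<int> exitedIds(const std::vector<int>& previous_, const std::vector<int>& current_) {
    return enteredIds(current_, previous_);
}
```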

Secondly, it composes a model view matrix that transforms the origin of the coordinate system into the marker center, enabling our model to be drawn on top of it. The matrix is aligned such that the X-axis and Y-axis lie in the marker plane, with the Z-axis perpendicular thereto such that the positive half space is in front of the marker.

To produce this matrix, several transformations have to be composed. The ArUco generated view matrix assumes a left handed coordinate system, while the Wikitude SDK assumes a right handed coordinate system. To correct this discrepancy, the Y-axis is flipped. As this application is intended to run on a mobile device, we also need to account for the different device orientations. This is a twofold issue, as it requires rotations to be applied depending on the current interface orientation, as well as a correction of the aspect ratio for portrait orientations. Additionally, mobile devices have different screen and video capturing characteristics, so another corrective matrix is required to account for the aspect ratio.

    wikitude::sdk::Matrix4 rotationToLandscapeLeft;
    rotationToLandscapeLeft.rotateZ(180.0f);

    wikitude::sdk::Matrix4 rotationToPortrait;
    rotationToPortrait.rotateZ(270.0f);

    wikitude::sdk::Matrix4 rotationToUpsideDown;
    rotationToUpsideDown.rotateZ(90.0f);

    wikitude::sdk::Matrix4 aspectRatioCorrection;
    aspectRatioCorrection.scale(_scaleWidth, _scaleHeight, 1.0f);

    wikitude::sdk::Matrix4 portraitAndUpsideDownCorrection;
    const float aspectRatio = _width / _height;
    portraitAndUpsideDownCorrection.scale(aspectRatio, 1.0f / aspectRatio, 1.0f);

    wikitude::sdk::Matrix4 viewMatrix(_viewMatrixData);
    // OpenCV left handed coordinate system to OpenGL right handed coordinate system
    viewMatrix.scale(1.0f, -1.0f, 1.0f);

    wikitude::sdk::Matrix4 modelViewMatrix;

    wikitude::sdk::InterfaceOrientation currentInterfaceOrientation;
    {
        std::lock_guard<std::mutex> lock(_interfaceOrientationMutex);
        currentInterfaceOrientation = _currentInterfaceOrientation;
    }

    if (currentInterfaceOrientation == wikitude::sdk::InterfaceOrientation::InterfaceOrientationPortrait || currentInterfaceOrientation == wikitude::sdk::InterfaceOrientation::InterfaceOrientationPortraitUpsideDown) {
        modelViewMatrix *= portraitAndUpsideDownCorrection;
    }

    modelViewMatrix *= aspectRatioCorrection;

    switch (currentInterfaceOrientation) {
        case wikitude::sdk::InterfaceOrientation::InterfaceOrientationLandscapeRight:
            // nop
            // we don't like warnings and not having this case included would cause one
            break;
        case wikitude::sdk::InterfaceOrientation::InterfaceOrientationLandscapeLeft:
            modelViewMatrix *= rotationToLandscapeLeft;
            break;
        case wikitude::sdk::InterfaceOrientation::InterfaceOrientationPortrait:
            modelViewMatrix *= rotationToPortrait;
            break;
        case wikitude::sdk::InterfaceOrientation::InterfaceOrientationPortraitUpsideDown:
            modelViewMatrix *= rotationToUpsideDown;
            break;
    }

    modelViewMatrix *= viewMatrix;
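The handedness correction deserves a closer look: scaling one axis by -1 negates the determinant of the transformation, which is precisely the difference between a left handed and a right handed basis. A minimal 3x3 illustration:

```cpp
// determinant of a row-major 3x3 matrix
float det3(const float m[9]) {
    return m[0] * (m[4] * m[8] - m[5] * m[7])
         - m[1] * (m[3] * m[8] - m[5] * m[6])
         + m[2] * (m[3] * m[7] - m[4] * m[6]);
}

// flip the Y axis, i.e. scale the second row by -1
void flipY(float m[9]) {
    m[3] = -m[3];
    m[4] = -m[4];
    m[5] = -m[5];
}

// flipping Y turns the right handed identity (determinant +1) into a
// left handed basis (determinant -1)
float flippedIdentityDeterminant() {
    float m[9] = {1.0f, 0.0f, 0.0f,  0.0f, 1.0f, 0.0f,  0.0f, 0.0f, 1.0f};
    flipY(m);
    return det3(m);
}
```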

Once the model view matrix and the projection matrix have been generated, they can be applied to the positionable.

wikitude::sdk::Matrix4 identity;

// 3d renderable
it->second->setWorldMatrix(modelViewMatrix.get());
it->second->setViewMatrix(identity.get());
it->second->setProjectionMatrix(_projectionMatrix.get());

Native implementation

As the plugin instantiation and registration is covered by the Plugins API example, a detailed description on this subject is omitted here.

Running the sample with the ArUco marker provided in the resources section should present you with the car model being placed neatly on top of the marker.