Documentation

3D Rendering

This example shows how to augment a target image with 3D content. It starts by displaying a 3D model on a target, then adds an appearing animation and interactivity, and finally demonstrates the snap-to-screen functionality.

If you are not yet familiar with how to create a vision-based augmented reality scene (based on image recognition and tracking), please have a look at the previous example, Image Recognition.

3D content within Wikitude can only be loaded from Wikitude 3D Format files (.wt3). This is a compressed binary format for describing 3D content which is optimized for fast loading and handling of 3D content on a mobile device. You can still use 3D models from your favorite 3D modeling tools (Autodesk® Maya® or Blender), but you'll need to convert them into the .wt3 file format. The Wikitude 3D Encoder desktop application (Windows and Mac) encodes your 3D source file; it can handle Autodesk® FBX® files (.fbx) for encoding to .wt3.

For more details on how to convert your 3D content please see the Wikitude 3D Encoder section. In this example the .wt3 file has already been prepared and saved to assets/bee.wt3.

Rendering of bee model in Wikitude 3D Encoder

The following fictional print advertisement is used as the target image; it will be augmented with a 3D model of the bee pictured in it.

Print ad used as image target

3D Model on Image Target

First of all, create an AR.Model and pass the URL of the actual .wt3 model file to its constructor. Additional options allow for scaling, rotating and positioning the model in the scene.

this.modelBee = new AR.Model("assets/bee.wt3", {
    onLoaded: this.loadingStep,
    scale: {
        x: 0.045,
        y: 0.045,
        z: 0.045
    },
    translate: {
        x: 0.0,
        y: 0.05,
        z: 0.0
    },
    rotate: {
        z: -25
    }
});
view source code on GitHub

In this example a function is attached to the onLoaded trigger to receive a notification once the 3D model is fully loaded. Depending on the size of the model and where it is stored (locally or remotely), loading might take some time, so it is recommended to inform the user about it.
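A hypothetical loadingStep implementation might simply set a flag and hide a loading indicator; the element id used here is a placeholder and not part of the sample.

loadingStep: function loadingStepFn() {
    // Mark the model as ready; the flag is checked before starting animations.
    World.loaded = true;
    // Hide a (hypothetical) loading indicator in the HTML overlay.
    var loadingMessage = document.getElementById("loadingMessage");
    if (loadingMessage) {
        loadingMessage.style.display = "none";
    }
},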

Similar to 2D content, the 3D model is added to the drawables.cam property of an AR.ImageTrackable.

var trackable = new AR.ImageTrackable(this.tracker, "*", {
    drawables: {
        cam: [this.modelBee]
    }
});
view source code on GitHub

This is everything that is needed to make the 3D model appear on an image target. To adjust the scaling and position of the model, pass the scale and translate properties as options to the AR.Model.

To view the sample you can use the image on this page.

Appearing Animation

As a next step, an appearing animation is added which scales up the 3D model once the target is inside the field of vision. Creating an animation on a single property of an object is done using an AR.PropertyAnimation. Since the bee model needs to be scaled up on all three axes, three animations are needed. These animations are grouped together using an AR.AnimationGroup, which allows them to be played in parallel.

var sx = new AR.PropertyAnimation(model, "scale.x", 0, scale, 1500, {
    type: AR.CONST.EASING_CURVE_TYPE.EASE_OUT_QUAD
});
var sy = new AR.PropertyAnimation(model, "scale.y", 0, scale, 1500, {
    type: AR.CONST.EASING_CURVE_TYPE.EASE_OUT_QUAD
});
var sz = new AR.PropertyAnimation(model, "scale.z", 0, scale, 1500, {
    type: AR.CONST.EASING_CURVE_TYPE.EASE_OUT_QUAD
});

return new AR.AnimationGroup(AR.CONST.ANIMATION_GROUP_TYPE.PARALLEL, [sx, sy, sz]);
view source code on GitHub

Each AR.PropertyAnimation targets one of the three axes and scales the model from 0 to the value passed in the scale variable. An EASE_OUT_QUAD easing curve is used to make the animation more dynamic.

To get a notification once the image target is inside the field of vision, the onEnterFieldOfVision trigger of the AR.ImageTrackable is used. In the example the function appear() is attached.

appear: function appearFn() {
    World.trackableVisible = true;
    if (World.loaded) {
        World.appearingAnimation.start();
    }
},
view source code on GitHub

Within the appear function, the previously created AR.AnimationGroup is started by calling its start() function, which plays the animation once.
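A counterpart can reset the visibility flag once the image target leaves the field of vision again. A minimal sketch, assuming it is attached to the trackable's onExitFieldOfVision trigger (the name disappear is a placeholder):

disappear: function disappearFn() {
    // Prevent the appearing animation from being started while the target is not visible.
    World.trackableVisible = false;
},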

Interactivity

To add more functionality, a rotating animation is added to the 3D model. It is started and paused by clicking on the button or on the 3D model.

In addition to the 3D model, an image that acts as a button is added to the image target. This can be accomplished by loading an AR.ImageResource and creating a drawable from it.

var imgRotate = new AR.ImageResource("assets/rotateButton.png");
var buttonRotate = new AR.ImageDrawable(imgRotate, 0.2, {
    translate: {
        x: 0.35,
        y: 0.45
    },
    onClick: this.toggleAnimateModel
});
view source code on GitHub

To add the AR.ImageDrawable to the image target together with the 3D model, both drawables are supplied to the AR.ImageTrackable.

var trackable = new AR.ImageTrackable(this.tracker, "*", {
    drawables: {
        cam: [this.modelBee, buttonRotate]
    },
    onEnterFieldOfVision: this.appear
});
view source code on GitHub

The rotation animation for the 3D model is created by defining an AR.PropertyAnimation for the rotate.z property.

// Rotation Animation
this.rotationAnimation = new AR.PropertyAnimation(this.modelBee, "rotate.z", -25, 335, 10000);
view source code on GitHub

The drawables are made clickable by setting their onClick triggers, which can be done in the options when the drawable is created. For the 3D model, onClick: this.toggleAnimateModel is set in the options passed to the AR.Model constructor; similarly, the button's onClick: this.toggleAnimateModel trigger is set in the options passed to the AR.ImageDrawable constructor. toggleAnimateModel() is therefore called whenever the 3D model or the button is clicked.
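For the bee model this means extending the AR.Model constructor call shown at the beginning of the example; a sketch with the remaining options elided:

this.modelBee = new AR.Model("assets/bee.wt3", {
    onLoaded: this.loadingStep,
    onClick: this.toggleAnimateModel,
    ...
});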

Inside the toggleAnimateModel() function, the current animation state is checked to decide whether the rotation should be started, resumed or paused.

toggleAnimateModel: function toggleAnimateModelFn() {
    if (!World.rotationAnimation.isRunning()) {
        if (!World.rotating) {
            World.rotationAnimation.start(-1);
            World.rotating = true;
        } else {
            World.rotationAnimation.resume();
        }
    } else {
        World.rotationAnimation.pause();
    }

    return false;
}
view source code on GitHub

Starting an animation with .start(-1) will loop it indefinitely.

Snap to Screen

To finish things up, the snap-to-screen feature is added so that the 3D model can be explored in a more immersive way. Snap to screen brings the drawables attached to an AR.ImageTrackable out of the augmented reality scene and directly onto the screen. Once snapped, the drawables stay on the screen as long as they are not set back into the augmented reality context, so users can explore the content even when they are no longer looking at the target image.

The snap position on the screen is defined through a div element. During the AR.ImageTrackable creation, the div is passed as an additional option. In this example a div with the id snapContainer is used.

this.trackable = new AR.ImageTrackable(this.tracker, "*", {
    drawables: {
        ...
    },
    snapToScreen: {
        snapContainer: document.getElementById('snapContainer')
    },
    ...
});
view source code on GitHub

Snapping is then enabled through an additional button. The button is created and set up just the same way as the rotate button; the only difference is that the onClick trigger of the new button points to a different function.
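A sketch of how such a button could be created, assuming a placeholder asset name and position:

var imgSnap = new AR.ImageResource("assets/snapButton.png");
var buttonSnap = new AR.ImageDrawable(imgSnap, 0.2, {
    translate: {
        x: -0.35,
        y: 0.45
    },
    onClick: this.toggleSnapping
});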

toggleSnapping: function toggleSnappingFn() {
    if (World.appearingAnimation.isRunning()) {
        World.appearingAnimation.stop();
    }
    World.snapped = !World.snapped;
    World.trackable.snapToScreen.enabled = World.snapped;

    if (World.snapped) {
        World.applyLayout(World.layout.snapped);
    } else {
        World.applyLayout(World.layout.normal);
    }
}
view source code on GitHub

To enable snapping, set the AR.ImageTrackable property snapToScreen.enabled accordingly (either true or false). Based on the current snapping state, the drawables are positioned and scaled differently.
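The applyLayout helper used in toggleSnapping() is not part of the SDK; a minimal sketch with placeholder layout values could look like this:

World.layout = {
    normal: { scale: 0.045, translateY: 0.05 },
    snapped: { scale: 0.2, translateY: 0.0 }
};

World.applyLayout = function applyLayoutFn(layout) {
    // Apply the scale and vertical offset of the given layout to the bee model.
    World.modelBee.scale.x = layout.scale;
    World.modelBee.scale.y = layout.scale;
    World.modelBee.scale.z = layout.scale;
    World.modelBee.translate.y = layout.translateY;
};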

In the sample the 3D model can be rotated and scaled through gestures once it is snapped to the screen. To apply the new rotation, position and scale values, the gesture callbacks onScale, onDrag and onRotation are used.
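A minimal sketch of such gesture handling, using the callback names from the text; the exact trigger signatures (rotation delta in degrees, relative scale factor) are assumptions:

this.modelBee.onRotation = function onRotationFn(angleInDegrees) {
    // Only react to gestures while the model is snapped to the screen.
    if (World.snapped) {
        World.modelBee.rotate.z += angleInDegrees;
    }
};

this.modelBee.onScale = function onScaleFn(scaleFactor) {
    if (World.snapped) {
        World.modelBee.scale.x *= scaleFactor;
        World.modelBee.scale.y *= scaleFactor;
        World.modelBee.scale.z *= scaleFactor;
    }
};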

Custom Animations

A 3D model consists of a set of triangle meshes, which can be further subdivided into mesh parts. Each mesh or mesh part stores material properties and transformations which determine its appearance and spatial position. This grouping of meshes makes it possible to animate parts of the 3D model independently.

Meshes and mesh parts can have identifiers, which are passed to the onClick trigger function of the AR.Model as the parameter modelPart. This makes it possible to apply different actions when certain parts of a 3D model are clicked/touched by the user. In the code snippet shown below the parameter modelPart is used in a switch statement.

this.modelBee = new AR.Model("assets/bee.wt3", {
    ...
});

this.animationBee = new AR.ModelAnimation(this.modelBee, "chest_low_animation");

this.modelBee.onClick = function(drawable, modelPart) {
    switch (modelPart) {
        case 'chest_low':
            World.animationBee.start();
            break;
        ...
    }
};
The identifiers of the mesh parts are provided by the 3D model. They are specified in the modeling tool the 3D model was created with (e.g. 3ds Max, Maya, Blender, ...). A list of meshes and mesh parts for a 3D model can be obtained from the Wikitude 3D Encoder.