
Cloud Recognition Sample

This example shows how to recognize images on a cloud server and then overlay them with augmentations using the AR.ImageTracker class.

The sample is based on a use case of recognizing wine labels directly on wine bottles. We have set up a target collection on the Wikitude server hosting several wine labels from around the world.

Single Image Recognition

The goal of this and the following samples in this section is to recognize and augment the wine labels in the image below. All three samples build on each other, and functionality is added or improved in each one.

Please note that a public cloud archive is used in this section. See the documentation for the Manager API for instructions on how to create your own cloud archives for use with the Wikitude SDK.

Regional server endpoints

Before we get started, please note that you have to choose which regionally distributed Wikitude server the SDK should contact.

The cloud recognition server region can be selected by calling the AR.context.setCloudRecognitionServerRegion function from JavaScript with one of the following constants.

AR.CONST.CLOUD_RECOGNITION_SERVER_REGION.AMERICAS
AR.CONST.CLOUD_RECOGNITION_SERVER_REGION.EUROPE

The default region is Europe. If an invalid value is passed, the SDK silently falls back to Europe.
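
For example, selecting the Americas endpoint could look like this (presumably the call should be made before the AR.CloudRecognitionService is created):

// select the regional cloud recognition endpoint
AR.context.setCloudRecognitionServerRegion(AR.CONST.CLOUD_RECOGNITION_SERVER_REGION.AMERICAS);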

Now let's get on with the first sample and have a look at the first part of the JavaScript code - the init function.

init: function initFn() {
    this.createTracker();
    this.createOverlays();
},

Once the wine is recognized we want to display a banner showing a rating, the wine label and, in later chapters, the name of the recognized wine. To keep this example simple we reuse the same banner image for every target. Because of that we can load the image once and reuse it again and again. This is done in the createOverlays function, the second call in the init function above.

createOverlays: function createOverlaysFn() {
    this.bannerImg = new AR.ImageResource("assets/banner.jpg");
    this.bannerImgOverlay = new AR.ImageDrawable(this.bannerImg, 0.4, {
        translate: {
            y: -0.6
        }
    });
},

First an image resource is created and then passed to an AR.ImageDrawable. A drawable is a visual component that can be connected to a recognized image target (AR.ImageTrackable) or a geolocated object (AR.GeoObject). The AR.ImageDrawable is initialized with the image and its size. Optional parameters allow positioning it relative to the recognized target.
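
As a minimal sketch of these positioning options (the asset name is hypothetical; offsets are relative to the size of the recognized target):

// hypothetical asset, used purely for illustration
var ratingImg = new AR.ImageResource("assets/rating.png");
var ratingOverlay = new AR.ImageDrawable(ratingImg, 0.2, {
    // shift the drawable to the right of and below the target center
    translate: {
        x: 0.5,
        y: -0.6
    },
    // draw on top of drawables with a lower zOrder (default 0)
    zOrder: 1
});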

Having laid the groundwork in the previous function, let's move on to the first call in the init function, the createTracker function.

createTracker: function createTrackerFn() {
    World.cloudRecognitionService = new AR.CloudRecognitionService("b277eeadc6183ab57a83b07682b3ceba", "54e4b9fe6134bb74351b2aa3", {
        onInitialized: this.trackerLoaded,
        onError: this.trackerError
    });

    World.tracker = new AR.ImageTracker(World.cloudRecognitionService, {
        onError: this.trackerError
    });
},

As you can see in the code above, we pass three parameters to the AR.CloudRecognitionService. The first parameter is the Client API authentication token; in the example above we use the public Wikitude authentication token. Read more about authentication and tokens here. The second parameter is the target collection id. This unique id identifies which of the cloud archives connected to your authentication token the AR.CloudRecognitionService will use. Optional parameters are passed as an object in the last argument; in this case callback functions for the onInitialized and onError triggers are set.

Once the server has fully loaded the AR.CloudRecognitionService, the onInitialized() function is called. If there was a problem during initialization, the SDK calls the trackerError() function instead. Note that initialization can take a few seconds, especially when working with large cloud archives.
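
Neither callback is shown in the snippet above; a minimal sketch of what they could look like follows (the updateStatusMessage helper is hypothetical, and the exact argument the SDK passes to onError may differ):

trackerLoaded: function trackerLoadedFn() {
    // the service is ready; tell the user what to do next (hypothetical UI helper)
    World.updateStatusMessage("Press 'Scan' to recognize a wine label.");
},

trackerError: function trackerErrorFn(errorMessage) {
    // assumption: the SDK passes a human-readable error description
    alert("Cloud recognition could not be initialized: " + errorMessage);
},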

After the SDK calls the onInitialized() function we continue with our wine sample and display a 'Scan' button to the user. Clicking this button starts the image recognition process by sending the current camera frame to the cloud recognition server. The next code fragment contains the onClick listener function for this button.

scan: function scanFn() {
    World.cloudRecognitionService.recognize(World.onRecognition, World.onRecognitionError);
},
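
The wiring between the HTML button and this function is not part of the sample code shown here; a minimal sketch, assuming a plain HTML button with the hypothetical id scanButton, could look like this:

// hypothetical button wiring; the id 'scanButton' is an assumption
document.getElementById('scanButton').addEventListener('click', function() {
    World.scan();
});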

After the user clicks the 'Scan' button, the recognize function of the previously created AR.CloudRecognitionService is called. It is passed two callback functions: the first is called by the SDK after each recognition cycle; the second is called if something is wrong with the specified cloud archive.

The next code snippet contains the first callback function, onRecognition.

onRecognition: function onRecognitionFn(recognized, response) {
    if (recognized) {
        if (World.wineLabel !== undefined) {
            World.wineLabel.destroy();
        }

        if (World.wineLabelOverlay !== undefined) {
            World.wineLabelOverlay.destroy();
        }

        World.wineLabel = new AR.ImageResource("assets/" + response.targetInfo.name + ".jpg");
        World.wineLabelOverlay = new AR.ImageDrawable(World.wineLabel, 0.3, {
            translate: {
                x: -0.5,
                y: -0.6
            },
            zOrder: 1
        });

        if (World.wineLabelAugmentation !== undefined) {
            World.wineLabelAugmentation.destroy();
        }

        World.wineLabelAugmentation = new AR.ImageTrackable(World.tracker, response.targetInfo.name, {
            drawables: {
                cam: [World.bannerImgOverlay, World.wineLabelOverlay]
            }
        });
    } else {
        document.getElementById('errorMessage').innerHTML = "<div class='errorMessage'>Recognition failed, please try again!</div>";

        setTimeout(function() {
            var e = document.getElementById('errorMessage');
            e.removeChild(e.firstChild);
        }, 3000);
    }        
},

The first parameter of this callback function is a boolean value indicating whether the server was able to recognize the target; its value will be true or false depending on the outcome. The second parameter is a JSON object containing metadata about the recognized target. If a target was recognized, this JSON object contains another JSON object named targetInfo, which consists of the target name (name), its star rating (rating) and, optionally, its physical height. If no target was recognized, the JSON object is empty. More information on the response object follows in the next chapters.
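
For a recognized target, the response could look roughly like the following (the field values, and the exact name of the physical height field, are illustrative):

{
    "targetInfo": {
        "name": "lote43",
        "rating": 3,
        "physicalHeight": 0.15
    }
}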

If the recognition was successful, we display the banner augmentation. To display the label of the recognized wine on top of the previously created banner, another overlay is defined. The property targetInfo.name contained in the response object is used to load the image file of the same name. The zOrder property (default 0) is set to 1 to make sure the label is drawn on top of the banner.

After that, we combine everything by creating an AR.ImageTrackable using the ImageTracker, the name of the image target (targetInfo.name) and the drawables that should augment the recognized image.

If, on the other hand, the recognition failed, we show an error message to the user.
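
The onRecognitionError callback passed to recognize is not shown in the sample; a minimal sketch, assuming the SDK passes a description of the error, could look like this:

onRecognitionError: function onRecognitionErrorFn(error) {
    // assumption: 'error' describes what went wrong (e.g. a connection problem)
    alert("Recognition error: " + JSON.stringify(error));
},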

Continuous Image Recognition

This chapter will build upon the first chapter. Only relevant changes will be shown, please read the previous chapter before continuing.

In the first sample of this section we triggered the recognition manually ("Tap To Scan"). This is useful in some situations, but sometimes you will probably want to use the continuous mode ("Continuous Search") explained in this chapter. The main difference is that recognition is now triggered continuously at a defined time interval instead of once by a manual click.

Let's look at the changes necessary to enable this functionality.

The first change takes place in the 'trackerLoaded' function. In the previous sample we only showed some instructions to the user; now we also start the continuous recognition mode.

trackerLoaded: function trackerLoadedFn() {
    World.startContinuousRecognition(750);
    World.showUserInstructions();
},

We call the function startContinuousRecognition with the parameter 750. This parameter is the time interval, in milliseconds, at which the SDK should start a new recognition. The snippet below shows the code of the startContinuousRecognition function.

startContinuousRecognition: function startContinuousRecognitionFn(interval) {
    this.cloudRecognitionService.startContinuousRecognition(interval, this.onInterruption, this.onRecognition, this.onRecognitionError);
},

In the function above we start the continuous recognition by calling the startContinuousRecognition function of the AR.CloudRecognitionService. It is passed four parameters. The first is the already mentioned time interval at which a new recognition is started; it is set in milliseconds and the minimum value is 500. The second parameter defines a callback function which is called by the SDK if the chosen interval is too short for the current network speed. The third parameter defines a callback function that is called after each completed recognition cycle. The fourth parameter defines the onRecognitionError callback.

We will now take a look at the changes to the callback functions. The onRecognition function has changed slightly, the onRecognitionError function stays the same, and there is a new callback called onInterruption.

First the onRecognition function.

onRecognition: function onRecognitionFn(recognized, response) {
    if (recognized) {
        if (World.wineLabel !== undefined) {
            World.wineLabel.destroy();
        }

        if (World.wineLabelOverlay !== undefined) {
            World.wineLabelOverlay.destroy();
        }

        World.wineLabel = new AR.ImageResource("assets/" + response.targetInfo.name + ".jpg");
        World.wineLabelOverlay = new AR.ImageDrawable(World.wineLabel, 0.27, {
            translate: {
                x: -0.5,
                y: -0.6
            },
            zOrder: 1
        });

        if (World.wineLabelAugmentation !== undefined) {
            World.wineLabelAugmentation.destroy();
        }

        World.wineLabelAugmentation = new AR.ImageTrackable(World.tracker, response.targetInfo.name, {
            drawables: {
                cam: [World.bannerImgOverlay, World.wineLabelOverlay]
            }
        });
    }
},

The only change is that we removed the error message shown when nothing was recognized, since this will happen fairly often while the user is not pointing the camera at an actual target.

The next function, onInterruption, wasn't necessary in the previous example. Take a look at it in the next snippet.

onInterruption: function onInterruptionFn(suggestedInterval) {
    World.cloudRecognitionService.stopContinuousRecognition();
    World.startContinuousRecognition(suggestedInterval);
},

If the current network speed isn't fast enough for the set interval, the Wikitude SDK calls this callback function with a new suggested interval better suited to the current network speed. To apply the new interval, the continuous recognition is stopped and restarted through the startContinuousRecognition helper defined above, so that the callbacks are registered again.

This example showed how to enable the continuous mode of the AR.CloudRecognitionService. In the next sample we will take a look at how to use the server response object and custom metadata.

Using MetaInformation in the response

Like the previous chapter, this chapter builds upon the chapters before it. Again, please read the first two chapters before you get started with this one.

In this section we add another augmentation for the end user. Again the image overlay does not change depending on the recognized target, so we create it once in the createOverlays function. Let's have a look.

createOverlays: function createOverlaysFn() {
    this.bannerImg = new AR.ImageResource("assets/bannerWithNameField.jpg");
    this.bannerImgOverlay = new AR.ImageDrawable(this.bannerImg, 0.4, {
        translate: {
            y: 0.6
        }
    });

    this.orderNowButtonImg = new AR.ImageResource("assets/orderNowButton.png");
    this.orderNowButtonOverlay = new AR.ImageDrawable(this.orderNowButtonImg, 0.3, {
        translate: {
            y: -0.6
        }
    });
},

The new augmentation we will display is an "Order Now" button. It is created in the same manner as the previous augmentations.

All other changes took place in the 'onRecognition' function shown below.

onRecognition: function onRecognitionFn(recognized, response) {
    if (recognized) {
        if (World.wineLabel !== undefined) {
            World.wineLabel.destroy();
        }

        if (World.wineLabelOverlay !== undefined) {
            World.wineLabelOverlay.destroy();
        }

        World.wineLabel = new AR.ImageResource("assets/" + response.targetInfo.name + ".jpg");
        World.wineLabelOverlay = new AR.ImageDrawable(World.wineLabel, 0.2, {
            translate: {
                x: -0.37,
                y: 0.55
            },
            zOrder: 1
        });

        if (World.wineName !== undefined) {
            World.wineName.destroy();
        }

        World.wineName = new AR.Label(response.metadata.name, 0.06, {
            translate: {
                y: 0.72
            },
            zOrder: 2
        });

        if (World.wineLabelAugmentation !== undefined) {
            World.wineLabelAugmentation.destroy();
        }

        World.wineLabelAugmentation = new AR.ImageTrackable(World.tracker, response.targetInfo.name, {
            drawables: {
                cam: [World.bannerImgOverlay, World.wineLabelOverlay, World.wineName]
            }
        });

        World.orderNowButtonOverlay.onClick = function() {
            AR.context.openInBrowser(response.metadata.shop_url);
        };

        if (World.orderNowAugmentation !== undefined) {
            World.orderNowAugmentation.destroy();
        }

        World.orderNowAugmentation = new AR.ImageTrackable(World.tracker, response.targetInfo.name, {
            drawables: {
                cam: World.orderNowButtonOverlay
            }
        });
    }
},

When the cloud archive was created, custom metadata was defined for every target. You are free to choose the content of the metadata depending on your needs. See the Manager API documentation on how to add metadata to a target. For this example, we created two fields:

  • metadata.name, which represents the real name of the wine, and
  • metadata.shop_url, a URL to a webshop stocking the particular wine.

The corresponding JSON when creating targets on the Manager API looks like the following:

    "metadata":{
         "name":"Lote 43 Cabernet Sauvignon-Merlot",
        "shop_url":"http://loja.miolo.com.br/ch/index.aspx"
     }

To display the name of the wine in the banner overlay, an AR.Label is created. The first parameter defines the text of the label, the second its height in SDUs, and the third parameter contains optional settings. To set the first parameter of the AR.Label we read the aforementioned name field from the custom metadata object. Since the response object returned by the server is a JSON object, it can be navigated using dot notation.

Like the AR.ImageDrawable objects we created before, we add the AR.Label to the AR.ImageTrackable which combines everything for our banner.

Next we add an onClick handler to the orderNowButtonOverlay, where we make use of the AR.context class to open the shop's website in the browser. Again we utilize the server response object and our custom metadata to read the URL for the current target from shop_url.