Documentation

Cloud Recognition

The documentation for cloud recognition is split into two parts:

  1. Documentation of the server-side component (Studio and Studio API)
  2. Documentation of the SDK-side implementation, which follows below in more detail
Server-side documentation - Studio

Make sure to read the documentation about Studio and the Studio API when using the cloud recognition feature.

This example shows how to recognize images on a cloud server and then overlay them with augmentations using the ImageTracker and CloudRecognitionService classes.

For a better understanding, here are some terms related to vision-based augmented reality that will be used in this and other sections of this documentation.

  • Target: An image and its associated extracted data that is used to recognize that image.

  • Target Collection: A group of targets that are searched together. Think of it as a directory containing all the images you want to search. The Wikitude SDK can work with two different kinds of Target Collections:

    • On-device Target Collection: A static wtc file containing the extracted data of your images. Can consist of up to 1,000 images.
    • Cloud Target Collection: A target collection stored on the Wikitude server. See Cloud Archive below. Can consist of up to 50,000 images.
  • Cloud Archive: An archive stored on the server that is optimized for cloud-based recognition. It is generated from a Target Collection and is used in combination with the CloudRecognitionService.

  • CloudRecognitionService: Instead of analysing the live camera feed directly on the device, the CloudRecognitionService sends the image(s) taken by the camera to the Wikitude Cloud Recognition server. The server then does the hard work of matching the image with your targets in the specified cloud archive. Besides the benefit of searching a large image database, using the CloudRecognitionService also has a positive impact on general performance in most cases, especially when using a large target collection or running on older devices.

Cloud Recognition Sample

For both Cloud Recognition samples below we will use external rendering. If you don't know what that means, please go through the section on rendering before starting here.

The CloudRecognitionService can run in two modes, called on-click and continuous. In on-click mode a single recognition cycle is executed, while in continuous mode recognition runs repeatedly at a configurable interval.

Both view controllers are located in Controller/CloudTracking.

On-Click Cloud Tracking

The starting point for on-click recognition is WTOnClickCloudRecognition. In -viewDidLoad we create a new instance of WTWikitudeNativeSDK and initialize it with a valid license key.

WTMetalRenderingConfiguration *metalRenderingConfiguration = [WTWikitudeNativeSDK createExternalMetalRenderingConfiguration:self];
self.wikitudeSDK = [[WTWikitudeNativeSDK alloc] initWithRenderingConfiguration:metalRenderingConfiguration delegate:self];
[self.wikitudeSDK setLicenseKey:kWTLicenseKey];

Then, in -viewWillAppear:, the Wikitude Native SDK is started using the -start:completion: method. If the SDK was started successfully, an object of type WTCloudRecognitionService is created using the WTTrackerManager factory method -createCloudRecognitionServiceWithClientToken:targetCollectionId:completion:. The returned pointer needs to be retained in order to keep it alive, so assign it to a strong property. In the completion block we check whether the cloud recognition service was successfully initialized, and if so, we create an image tracker with it using the -createImageTrackerFromCloudRecognitionService:delegate:configuration: factory method.

[self.wikitudeSDK start:nil completion:^(BOOL isRunning, NSError * __nonnull error) {
    if ( !isRunning ) {
        NSLog(@"Wikitude SDK is not running. Reason: %@", [error localizedDescription]);
    }
    else
    {
        __weak typeof(self) weakSelf = self;

        self.cloudRecognitionService = [self.wikitudeSDK.trackerManager createCloudRecognitionServiceWithClientToken:@"b277eeadc6183ab57a83b07682b3ceba" targetCollectionId:@"54e4b9fe6134bb74351b2aa3" completion:^(BOOL success, NSError * error) {
            if ( !success )
            {
                NSLog(@"Cloud recognition service failed to initialize with error: %@", [error localizedDescription]);
            }
            else
            {
                NSLog(@"Creating image tracker");
                weakSelf.imageTracker = [weakSelf.wikitudeSDK.trackerManager createImageTrackerFromCloudRecognitionService:weakSelf.cloudRecognitionService delegate:weakSelf configuration:nil];
            }
        }];
    }
}];

When an image tracker is initialized with a cloud recognition service, -imageTrackerDidLoadTargets: and -imageTracker:didFailToLoadTargets: are not called during initialization, but only after a valid response was received from the server. They inform us whether the tracker was able to load and initialize the targets it received from the server.

- (void)imageTrackerDidLoadTargets:(nonnull WTImageTracker *)imageTracker
{
    NSLog(@"Image tracker loaded");
}

- (void)imageTracker:(WTImageTracker * __nonnull)imageTracker didFailToLoadTargets:(NSError * __nonnull)error
{
    NSLog(@"Image tracker failed to load with error: %@", [error localizedDescription]);
}

To start a single recognition, call the -recognize: method. The first parameter of its completion block is the response returned from the cloud recognition service; the second is an error object describing why the request failed. In the sample, a UIButton is set up to call a specific method once it is touched, which in turn issues the -recognize: call.

- (IBAction)sendCloudRecognitionRequest:(id)sender
{
    if ( self.cloudRecognitionService && self.cloudRecognitionService.initialized )
    {
        [self.cloudRecognitionService recognize:^(WTCloudRecognitionServiceResponse * _Nullable response, NSError * _Nullable error) {
            if ( response )
            {
                if ( response.recognized )
                {
                    NSLog(@"Recognized target '%@' which has a rating of '%ld'", [response.targetInformations objectForKey:WTCloudRecognitionServiceResponseKey_TargetName], (long)[[response.targetInformations objectForKey:WTCloudRecognitionServiceResponseKey_TargetRating] integerValue]);
                    NSLog(@"Associated (custom) metadata: %@", response.metadata);
                }
                else
                {
                    NSLog(@"No target image found in the analysed camera frame.\nKeep Calm \n\t&\nKeep Looking");
                }
            }
            else
            {
                NSLog(@"Cloud recognition recognition failed. Reason: %@", [error localizedDescription]);
            }
        }];
    }
}

Please note that a recognize call is only made if the cloud recognition service has finished initializing. If the server communication was successful, the first parameter in the block is a valid object of type WTCloudRecognitionServiceResponse that represents the server result. This object contains information about whether an image target was found, its name and, if specified, its associated metadata. In case of an error, the second parameter contains a valid NSError object with detailed information about why the communication failed.
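As a minimal sketch, a specific value could be read from the custom metadata inside the response handler shown above. The @"productId" key is purely hypothetical, and the dictionary check is an assumption about how the metadata is exposed:

if ( response.recognized && [response.metadata isKindOfClass:[NSDictionary class]] )
{
    // Sketch only: @"productId" is a made-up key; use whatever keys were attached to the target in Studio.
    NSDictionary *metadata = (NSDictionary *)response.metadata;
    NSString *productId = metadata[@"productId"];
    if ( productId )
    {
        NSLog(@"Recognized target refers to product '%@'", productId);
    }
}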

Since both simple image tracking and cloud recognition use the WTImageTrackerDelegate protocol, all tracking-related methods to draw an augmentation around the recognized target are available as well.
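As an illustration, tracking callbacks along these lines could be used to drive the rendering. The delegate method names and the WTImageTarget name property shown here are assumptions; please refer to the simple image tracking sample for the exact signatures:

// Sketch only: method names and the 'name' property are assumptions based on the
// shape of the WTImageTrackerDelegate protocol; check the simple image tracking sample.
- (void)imageTracker:(nonnull WTImageTracker *)imageTracker didRecognizeImage:(nonnull WTImageTarget *)recognizedTarget
{
    NSLog(@"Recognized target '%@'", recognizedTarget.name);
}

- (void)imageTracker:(nonnull WTImageTracker *)imageTracker didTrackImage:(nonnull WTImageTarget *)trackedTarget
{
    // Forward the target's transformation to the external renderer here to position
    // the augmentation on top of the recognized image.
}

- (void)imageTracker:(nonnull WTImageTracker *)imageTracker didLoseImage:(nonnull WTImageTarget *)lostTarget
{
    NSLog(@"Lost target '%@'", lostTarget.name);
}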

Continuous Cloud Tracking

On-click recognition is useful in some particular cases, but more often than not you will probably want to use continuous recognition. For continuous cloud recognition we set an interval at which the CloudRecognitionService automatically calls the recognize function.

To start continuous recognition, simply call the cloud recognition service's -startContinuousRecognitionWithInterval:interruptionHandler:responseHandler: method once it has finished initializing.

if ( self.cloudRecognitionService && self.cloudRecognitionService.initialized )
{
    __weak typeof(self) weakSelf = self;
    [self.cloudRecognitionService startContinuousRecognitionWithInterval:1.5 interruptionHandler:nil responseHandler:^(WTCloudRecognitionServiceResponse *response, NSError * error) {
        if (response)
        {
            NSLog(@"received continuous response...");
            if ( response.recognized ) {
                NSLog(@"target recognized...");
                dispatch_async(dispatch_get_main_queue(), ^{
                    [weakSelf.cloudRecognitionService stopContinuousRecognition];
                    [weakSelf.continuousRecognitionButton setTitle:@"Start Continuous Recognition" forState:UIControlStateNormal];
                });
            }
            else
            {
                NSLog(@"target NOT recognized...");
            }
        }
        else
        {
            NSLog(@"Cloud Recognition Service error %ld occurred. %@", (long)error.code, [error localizedDescription]);
        }
    }];
    [self.continuousRecognitionButton setTitle:@"Stop Continuous Recognition" forState:UIControlStateNormal];
}
else
{
    NSLog(@"Cloud Recognition Service is not ready yet.");
}

As before, this method is called once a UIButton is touched.

The first parameter defines the interval at which new camera frames are sent to the server for processing. The second parameter is an optional interruption handler, which is not used in this sample. The last parameter is again a response handler with information about the recognized target and behaves the same way as in the case of on-click recognition.

To stop a continuous recognition session, e.g. once a target was found, simply call the -stopContinuousRecognition method.
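A minimal sketch of a corresponding button action could look like this; the continuousRecognitionIsRunning flag is a hypothetical property of the view controller, not part of the SDK:

- (IBAction)stopContinuousRecognitionRequest:(id)sender
{
    // continuousRecognitionIsRunning is a hypothetical flag maintained by this view controller.
    if ( self.continuousRecognitionIsRunning )
    {
        [self.cloudRecognitionService stopContinuousRecognition];
        [self.continuousRecognitionButton setTitle:@"Start Continuous Recognition" forState:UIControlStateNormal];
        self.continuousRecognitionIsRunning = NO;
    }
}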