Categories
Dev to Dev

Product update: Titanium Module

Over six years ago, Wikitude launched its Titanium Module for the Wikitude SDK, enabling Appcelerator developers to easily embed augmented reality into their Titanium projects. Since then, hundreds of developers have used Wikitude and Titanium to create and publish their apps.

As the AR market evolved, we saw a strong shift in download patterns and declining demand from Titanium developers over the past year.

As we aim to expand our offering to better serve AR developers with a broader range of innovative products, we find it necessary to discontinue our support for Titanium.

What does this mean?

The final update of the Module is scheduled for February 2019.

Our intent is to support existing subscription customers using Titanium for a limited time in order to ease migration. Wikitude will fix critical issues in the Titanium Module at its discretion until June 1, 2019.

Other development frameworks

Alongside the officially maintained extensions such as Unity, Cordova, and Xamarin, the Wikitude developer community has created several extensions in the past, such as Ionic, React Native, and Adobe Flash. We are also investigating support for new development platforms in the future.

You can check all supported platforms and details on how to get started in our 101 AR series and in our documentation section.

We apologize for any inconvenience this announcement may cause and are eager to meet our community’s future product requirements with our award-winning AR SDK.

Our team is here to help and clear any questions you might have.


Download now: Wikitude SDK 8.2

Our team is proud to announce the availability of Wikitude SDK 8.2 today. This update brings some exciting new features and performance improvements such as:

  • Support for Windows in Native API
  • Increased performance of Wikitude’s computer vision (CV) engine
  • Bug fixes and general improvements on the SDK

Run Wikitude Apps on Surface and other Windows devices

Back in summer 2018, Wikitude made its first step into the Windows universe by adding support for Windows-based apps (UWP) through Unity. While Unity 3D is a great rendering engine and a comprehensive tool to create AR apps, more and more developers need to be in full control of their apps.

With SDK 8.2, developers can use the UWP Native API to embed augmented reality features directly into their UWP apps. We're launching this as a beta today and look forward to your feedback on using the new Native APIs and running the SDK on more UWP hardware – based on customer feedback, our primary target device is the Microsoft Surface tablet.

As with all Wikitude SDK products, the trial version of the UWP Native API is entirely free and can be used to test the SDK against your specific business use case.

Newest version of the Wikitude CV engine

From the beginning, Wikitude has placed great emphasis on its own independent computer vision engine, which powers the features of the Wikitude SDK.

SDK 8.2 incorporates the newest version of the engine with improvements in the following areas:

Instant Tracking: More accurate and faster mapping for all scenes;

Object Recognition and Tracking: Faster and more accurate recognition with a significantly enhanced tracking behavior;

Plane Detection: Since its first experimental appearance in SDK 8.1, we have run many tests with the goal of improving the overall performance of plane detection. In 8.2 we still label this feature experimental, but you will find many improvements in detection speed and accuracy.

Note that the startup behavior has changed: the initial area is no longer assumed to be a plane, so it will take slightly longer for planes to appear. This prevents the "plane blinking" effect, in which false planes are detected, removed, and replaced by correct detections within seconds of starting.

Quality, Quality, Quality

Wikitude has invested a lot of time in ensuring a stable product and increased quality. This release includes the first results of that investment, with numerous fixes and enhancements throughout the code base. Some of them are listed below:

  • Camera2 API (Android) now accessed faster
  • Gestures now behave correctly when targets and objects are rotated
  • Fix for 3D model animations when a target is lost/found
  • Fix for the Unity Editor where the indicator for target quality was wrongly calculated
  • Fix a networking issue for iOS when loading content from https servers using SNI

Start developing with Wikitude

Getting started with SDK 8.2 has never been easier! Here’s how:

We can’t wait to hear your feedback! Spread the news about SDK 8.2 on Twitter, Facebook or Instagram using the hashtag #Wikitude.


Track, Save and Share: A Developer View on SDK 8

More than 6 years ago, in April 2012, the very first version of the Wikitude SDK found its way to our first customers. So now, after XYZ releases in total, what can one expect from the 8th major version of an augmented reality SDK? In short: a lot. Wikitude SDK 8 represents a change as significant as SDK 5 was when we started offering native APIs for the Wikitude SDK. Some features in this 8th version made it necessary to rethink past software architecture choices and rewrite major parts of the SDK to give it a structure that supports future challenges and meets the requirements of future projects.

In this article, we will share insights about the technical background of the key changes and features that are new in Wikitude SDK 8.

Instant Targets – making augmented reality persistent

Back in early 2017, the Wikitude SDK was the first to give developers Instant Tracking (also known as markerless or positional tracking) for a wide range of devices. Some of this functionality is now available as part of our SMART feature, which wraps ARKit, ARCore, and Instant Tracking into a single API. Positional tracking is a fantastic capability, but it is not persistent: users can place content freely but have to redo this every time they start another session.

Wikitude's Instant Tracking has included visual re-localization from the beginning (once tracking was lost, it would pick up again when it detected an already-mapped area). With SDK 8, Wikitude introduces the ability to save Instant Tracking sessions in the form of Instant Targets. The API lets you serialize the current session and load it later as a target – the same way the other trackers in the Wikitude SDK work: the Object Tracker recognizes Object Targets, the Image Tracker recognizes Image Targets, and now the Instant Tracker recognizes Instant Targets.
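Conceptually, saving and restoring a session boils down to a serialize/deserialize round trip. The sketch below models that flow in plain JavaScript, with a stub object standing in for the Wikitude AR namespace – the method names `saveCurrentInstantTarget` and `loadExistingInstantTarget` are illustrative assumptions, not the exact SDK API.

```javascript
// Stub "Instant Tracker" that serializes its current session map to a string,
// analogous to exporting a .wto file. Method names are illustrative only.
function makeInstantTracker() {
  let sessionMap = null;
  return {
    startSession(points) { sessionMap = { points }; },
    // Serialize the current session so it can persist between app sessions.
    saveCurrentInstantTarget() {
      if (!sessionMap) throw new Error("no active session to save");
      return JSON.stringify(sessionMap);
    },
    // Load a previously saved target so re-localization can pick it up again.
    loadExistingInstantTarget(serialized) {
      sessionMap = JSON.parse(serialized);
      return sessionMap;
    },
  };
}

const tracker = makeInstantTracker();
tracker.startSession([[0, 0, 0], [1, 0, 2]]);
const saved = tracker.saveCurrentInstantTarget();

// A later session restores the same map instead of re-scanning from scratch.
const laterTracker = makeInstantTracker();
const restored = laterTracker.loadExistingInstantTarget(saved);
console.log(restored.points.length); // 2
```

In a real app, the serialized blob would be written to disk as a .wto-style file and handed back to the tracker in a later session.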



An Instant Target is a 3D representation of whatever a user tracked in a session – in fact, the file format in which the information is stored is identical to the files generated for the Object Tracker (.wto). This could be almost anything – internally we tried plain images, objects, and smaller scenes, all with good results.

Reworked AR engine

The introduction of Instant Targets is accompanied by a major upgrade to our Instant Tracking engine – actually to the entire 3D SLAM engine that is powering Instant Tracking, Object Tracking, and all Extended Tracking modes in the background. Anything SLAM-related was revisited and improved – the algorithms have been updated to the latest generation of SLAM algorithms and incorporate findings from recent research.

While ARKit and ARCore deploy sensor-fusion-based systems, we saw that there is still great potential in vision-only systems, which don't require detailed calibration of the camera and IMU. Of course, sensor-fusion systems have their advantages when it comes to fast, rotation-dominant movements like quick 180° turns.

Apart from this, the new 3D SLAM engine reaches extremely good results in tracking benchmarks even compared to ARKit and ARCore. In our tests, we could see a reduction of tracking error by up to 60% compared to SDK 7.2.

The general benefits you will enjoy from the new SLAM engine are:

  • Higher precision of tracking the environment in general
  • Instant Tracking is more robust for purely lateral movements with low parallax at the beginning
  • Tracking can cover greater areas

Going beyond Object Recognition with Extended Object Tracking

SDK 8 includes many changes to Object Recognition. Both the quality and accuracy of the initial recognition and of tracking have improved substantially, particularly for 360° tracking scenarios. The version of Object Recognition in SDK 8 is fundamentally better than our previous versions. Part of the improvement comes from the new 3D SLAM engine, but a great part also comes from the new recording process for Object Targets introduced with SDK 8.

Previously, Object Targets were created by uploading a video of the object. While this method worked properly for many of our customers, we identified weaknesses with this approach:

  • Extending Object Targets was cumbersome, as it required producing a new video of the entire object
  • The size of the video files was limited, and as a direct consequence the video quality could only be average
  • For some objects, it is hard to create a proper video

So with SDK 8, we are introducing a different way to create Object Targets: meet image-based conversion. Object Targets for SDK 8 are now created by uploading images of the object to Studio Manager, which then converts the images into an Object Target.

With this approach, the quality of the resulting Object Targets will already be a lot better compared to the video-based approach (e.g. if you create an Object Target from a video and compare it with an Object Target created from still images of that very same video, the new image-based conversion will produce a more accurate Object Target).

More precise Object Targets obviously also result in higher tracking accuracy. However, the new conversion method made it necessary to adapt the internals of the .wto file format, and .wto files created using image-based conversion are not backward compatible.





The new image-based conversion also makes it a lot easier to add previously uncovered parts: just add images to the existing map and recreate it using the new images.

We also saw that many customer objects consist of mirrored faces or have some sort of symmetry in them. The long sides of the toy fire truck in Wikitude's sample app, for example, are nearly identical but mirrored. The new image-based conversion has been developed to detect these cases and identify symmetrical faces correctly.

Speaking of new images: this method also allows you to upload images of the object taken under different conditions. Using images of the same object in front of a light background and in front of a dark background (or adding variants of an object in direct sunlight and in overcast conditions) will increase the stability of recognition in changing light conditions.

Scene Recognition – tracking LARGE objects and scenes

This brings me to the next great addition to Object Recognition in SDK 8 – Scene Recognition. As a direct consequence of the new conversion method, suitable objects can now be considerably larger. In our tests, we mapped objects like monuments, entire house facades, and castles covering areas of over 2,400 square meters.





You might already get the feeling that SDK 8 is more than just another feature release of the Wikitude SDK. With SDK 8 we rebuilt many parts of the SDK to make the architecture fit the growing requirements. Similar to SDK 5 more than 3 years ago, SDK 8 includes another major software architecture redesign. Under the hood now runs what we internally dub the "Universal SDK": a platform- and OS-agnostic SDK wrapping the core functionality in C++. While the Wikitude SDK has always consisted of a C++ core, we took this approach to the extreme with SDK 8.

Unity Live Preview

As a result of the Universal SDK, we can now finally offer Live Preview for Unity, using either attached cameras or Unity Remote, for testing your experiences directly within the IDE. No more building and transferring your app to an attached device just for testing. Live Preview works for all tracker types – even Instant Tracking via the Unity Remote application – and is available to Unity developers on both macOS and Windows machines.

Introducing support for Windows

Windows! Finally, the Wikitude SDK runs on Windows 10 UWP computers as well – not only for Live Preview, but as a first-class citizen among the operating systems supported by Wikitude. Again, the Wikitude SDK for Windows is powered by the Universal C++ core. For a start, you can build augmented reality experiences powered by Wikitude through Unity or through a Native API, so you can build pure UWP AR apps without needing to learn Unity. Our JavaScript API and support for HoloLens will follow later. Given the unfortunate developments in the Windows Phone ecosystem, we decided that the initial release will support only devices with Intel™ chips. As Microsoft and ARM are gearing up to run newer Microsoft hardware on ARM-based chips as well, we will monitor the market and add support for ARM-based devices later on.


Some additional cool features


The camera obviously plays a vital role in every augmented reality experience. With the Wikitude SDK, you already have several ways to control the camera streams available on a device – from switching cameras (back/front) to setting focal distances. With SDK 8 we are adding two additional controls you can use manually if needed: developers can now tell the camera which area of the camera image should be used to calculate exposure time, and where to set focus. In a sample, we demonstrate this usage in a tap-to-focus scenario.
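As a rough sketch of the app-side plumbing involved, the helper below converts a tap position into a normalized point of interest of the kind camera focus/exposure APIs typically expect. The normalization scheme here is an assumption for illustration; the actual Wikitude API may take coordinates in a different form.

```javascript
// Map a screen tap to a normalized (0..1, 0..1) point of interest that could
// be handed to a focus/exposure-metering API. Purely illustrative plumbing.
function tapToNormalizedPoint(tapX, tapY, viewWidth, viewHeight) {
  // Clamp so taps on the very edge stay inside the valid 0..1 range.
  const clamp = (v) => Math.min(1, Math.max(0, v));
  return {
    x: clamp(tapX / viewWidth),
    y: clamp(tapY / viewHeight),
  };
}

// Tap dead center on a 1080x1920 portrait view.
const poi = tapToNormalizedPoint(540, 960, 1080, 1920);
console.log(poi); // { x: 0.5, y: 0.5 }
```

The same normalized point can serve for both the focus area and the exposure-metering area, which is exactly the pairing a tap-to-focus gesture needs.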

With this release, Android developers can use Gradle 3 (used in Android Studio 3) without any limitations.

Endless AR possibilities

Wikitude SDK 8 is a big step for us to make the life of AR developers easier and help them create new ways of AR experiences. We are proud to announce the beta version of SDK 8 today and will continue to increase stability and performance in the upcoming weeks.

Start developing with Wikitude SDK 8

Getting started with SDK 8 has never been easier! Here’s how:

Download SDK 8 and sample app (included in the download package)

Select your license plan

Got questions? Our developers are here for you!

Help us spread the news on Twitter, Facebook, and LinkedIn using the hashtags #SDK8 and #Wikitude.


Wikitude SDK 7: A developer insight

Wikitude SDK 7 includes a long list of changes and additions to our augmented reality SDK. In this blog post, we will go through the modifications in more detail and what they offer for developers and users.

As you will see, SDK 7 has 3 key areas of improvement: Object Recognition and Tracking based on SLAM, multiple image recognition, and enhancements for iOS developers.

Bring your objects into your augmented reality scene

Let's get started with the biggest addition in this release: Object Recognition and Tracking for augmented reality. With this, we introduce a new tracker type alongside our existing Image and Instant Tracking. The Object Tracker in the SDK gives you the ability to recognize and track arbitrarily shaped objects. The idea behind it is very similar to our Image Tracker, but instead of recognizing images and planar surfaces, the Object Tracker works with three-dimensional structures and objects (tools, toys, machinery…). As you may have noticed, we don't claim that the Object Tracker works on any kind of object: there are some restrictions you should be aware of, and some types of objects work a lot better than others. The SDK 7 documentation has a separate chapter on that.

In short, objects should be well structured and their surface well textured to play nicely with object recognition. API-wise, the Object Tracker is set up the same way as the Image Tracker.

The Object Tracker works according to the same principle as the Image Tracker: it detects a pre-recorded reference of the object (the reference is actually a pre-recorded SLAM map). Once detected in the camera, the object is continuously tracked. While providing references for Image Targets is straightforward (an image upload), creating a reference for an object is a little more complex.

Scalable generation of object references

We decided to go for an approach that is scalable and usable for many users. This ruled out a recording application used to capture your object, which would also require each object to be physically present. Considering this, we went for server-side generation of Object Targets (sometimes also referred to as maps). Studio Manager, our web tool for converting Image Targets, has been adapted to convert regular video files into Object Targets. You will find a new project type in Studio Manager that will produce Object Targets for you. Here's a tutorial on how to successfully record objects.

https://www.youtube.com/watch?v=eY8B2A_OYF8

After you have uploaded your video, the backend will try to find the best possible Object Target over several computation runs. We can utilize the power of the server to run computationally intensive algorithms and reach a more accurate result than a pure on-device solution that has to operate in real time. Have a look at the chapter "How to create an Object Target" in the SDK 7 documentation for a deeper understanding of the process. Server-side generation also gives us the ability to roll out improvements to the recording process without the need for a new SDK version.

Rendering Upgrade: Working with occlusion models

When moving from Image Targets to Object Targets, the requirements for rendering change as well. When the object has a solid body with different sides, it is particularly important to reflect that when rendering the augmentations. SDK 7 introduces a new type in the JavaScript API called AR.Occluder, which can take any shape. It acts as an occlusion model in the 3D rendering engine, so you can hide augmentations or make them a lot more realistic. For your convenience, the occluder can either use standard pre-defined geometric shapes or take the form of any 3D model/shape (in wt3 format). And not only Object Tracking benefits from this: occluders can, of course, be used in combination with Image Targets as well – think of an Image Target on your wrist used for trying on watches. For a proper result, parts of the watch need to be hidden behind your actual arm.
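A minimal sketch of what an occlusion model does during rendering, assuming a standard depth test: the occluder writes depth but no color, so any augmentation fragment that lies behind it at a given pixel is discarded. The single-fragment function below is a conceptual stand-in, not SDK code.

```javascript
// Per-pixel occlusion decision: smaller depth = closer to the camera.
// A fragment of the augmentation survives only if it is in front of the
// (invisible) occluder geometry at that pixel.
function isFragmentVisible(fragmentDepth, occluderDepth) {
  return fragmentDepth < occluderDepth;
}

// Watch band passing behind the wrist occluder: hidden.
console.log(isFragmentVisible(0.8, 0.5)); // false
// Watch face in front of the wrist: visible.
console.log(isFragmentVisible(0.3, 0.5)); // true
```

In a real renderer this comparison happens in the depth buffer for every pixel; the occluder mesh simply pre-fills depth without drawing color.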

Updated SLAM engine enhancing Instant Tracking

Object Recognition and Tracking are based on the same SLAM engine that powers Instant Tracking and Extended Tracking. To make Object Recognition work, we upgraded the SLAM engine with several improvements, algorithm changes, and bug fixes. This means SDK 7 carries an entirely revamped SLAM engine. You as a developer – and your users – will notice this in several ways:

1. Higher degree of accuracy in Instant Tracking and Extended Tracking
2. More stable tracking when it comes to rotation
3. Less memory consumption
4. Less power consumption

All in all, that means that devices running 32-bit CPUs (ARMv7 architecture) will see a performance boost and perform considerably better.

Instant Tracking also comes with two new API additions. Setting trackingPlaneOrientation on the InstantTracker lets you freely define which kind of plane the Instant Tracker should start on (wall, floor, ramp…). The other addition, the hit testing API, lets you query the depth value of any given screen point (x, y). It returns the 3D coordinates of the corresponding point in the currently tracked scene, which is useful for placing augmentations at the correct depth. The SDK returns an estimate based on the surrounding tracked points. The video below gives you an idea of how the hit testing API can be used.
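To illustrate what such a hit test returns conceptually, the sketch below unprojects a screen point with an estimated depth into 3D camera-space coordinates using a simple pinhole model. The intrinsics are made-up values; the SDK performs this internally and simply hands you the resulting 3D point.

```javascript
// Unproject a screen pixel (px, py) at an estimated depth (meters) into
// camera-space coordinates, using assumed pinhole intrinsics:
// fx/fy = focal lengths in pixels, cx/cy = principal point.
function hitTestToCameraSpace(px, py, depth, intrinsics) {
  const { fx, fy, cx, cy } = intrinsics;
  return {
    x: ((px - cx) / fx) * depth,
    y: ((py - cy) / fy) * depth,
    z: depth,
  };
}

const intrinsics = { fx: 1000, fy: 1000, cx: 540, cy: 960 };
// Tap at the principal point with an estimated depth of 1.5 m:
// the point lies straight ahead of the camera.
const p = hitTestToCameraSpace(540, 960, 1.5, intrinsics);
console.log(p); // { x: 0, y: 0, z: 1.5 }
```

An augmentation anchored at this point then renders at the correct depth relative to the tracked scene.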

1,2,3…. Multiple targets now available

Additionally, our computer vision experts worked hard to make our image CV engine even better. The most noticeable change is the ability to recognize and track multiple images at the same time in the camera frame. The engine can detect multiple different images, as well as multiple copies of the same image (e.g. for counting purposes). Images can overlap or even superimpose each other. The SDK has no hard-coded limit on the number of images it can track – only the processing power of the phone restricts it. With modern smartphones, it is easily possible to track 8 or more images.

Furthermore, the SDK offers developers more information about the targets in relation to each other: APIs tell you how far apart targets are and how they are oriented towards each other, and callbacks let developers react to changes in the relationship between targets. Developers can also define the maximum number of targets, so the application does not waste power searching for more. The image below gives you an idea of what this feature can look like for a simple interactive card game.
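The kind of spatial relations these APIs report can be sketched with plain vector math. The pose format below (`{x, y, z, yaw}`) is an assumption for illustration, not the SDK's actual data structure.

```javascript
// Euclidean distance between two tracked target positions (same units as poses).
function targetDistance(a, b) {
  const dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
  return Math.sqrt(dx * dx + dy * dy + dz * dz);
}

// Relative in-plane rotation between two targets, normalized into [-180, 180)
// so "facing each other" cases are symmetric.
function relativeYawDegrees(a, b) {
  let d = (b.yaw - a.yaw) % 360;
  if (d < -180) d += 360;
  if (d >= 180) d -= 360;
  return d;
}

// Two cards on a table, e.g. in an interactive card game.
const cardA = { x: 0, y: 0, z: 0, yaw: 10 };
const cardB = { x: 3, y: 4, z: 0, yaw: 350 };
console.log(targetDistance(cardA, cardB));     // 5
console.log(relativeYawDegrees(cardA, cardB)); // -20
```

A callback comparing these values between frames is enough to react to cards being moved closer together or rotated towards each other.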

Boosting recognition to unparalleled distances

All developers and users who need images to be recognized from a far distance in their augmented reality scene should take a look at the extended range recognition feature included in SDK 7. By using more information from the camera frame, SDK 7 triples the recognition distance compared to previous SDK versions. This means an A4/US-letter sized target can be detected from 2.4 meters/8 feet. Put differently, images that cover just 1% of the screen area can still be accurately recognized and a valid pose successfully calculated. The SDK enables this mode automatically on capable devices (auto mode); alternatively, developers can manually enable or disable it. When testing the feature against competing SDKs, we did not find any other implementation delivering this kind of recognition distance. All in all, this means easier handling for your users and more successfully recognized images.
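The 1% figure can be sanity-checked with a back-of-the-envelope pinhole model: the fraction of the frame a target covers falls with the square of the distance. The camera intrinsics below are assumed values, not SDK parameters.

```javascript
// Distance at which a target's projected area drops to a given fraction of
// the camera frame, under a pinhole model.
// Projected size in pixels at distance d: (f * targetW / d) x (f * targetH / d),
// so areaFraction = (f^2 * targetW * targetH) / (d^2 * frameW * frameH).
// Solving for d at the minimum fraction:
function maxRecognitionDistance(targetW, targetH, minAreaFraction, cam) {
  const { focalPx, frameW, frameH } = cam;
  return focalPx * Math.sqrt(
    (targetW * targetH) / (minAreaFraction * frameW * frameH)
  );
}

// A4 target (0.297 m x 0.210 m), 1% minimum coverage, assumed 1080p camera
// with an assumed focal length of 1500 px.
const cam = { focalPx: 1500, frameW: 1920, frameH: 1080 };
const d = maxRecognitionDistance(0.297, 0.21, 0.01, cam);
console.log(d.toFixed(2)); // 2.60 – in the same ballpark as the 2.4 m figure above
```

The exact distance of course depends on the device's real focal length and resolution; the point is that a 1% area threshold lands in the meters range for an A4 sheet.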

Bitcode, Swift, Metal – iOS developers rejoice

This brings us to a chapter dedicated to iOS developers, as SDK 7 brings several changes and additions for this group. First of all, the Wikitude SDK now requires iOS 9 or later, which shouldn't be a big hurdle for the majority of apps (currently, nearly 95% of devices meet this requirement). With SDK 7, iOS developers can now build apps including the Wikitude SDK using the bitcode option. Apps built with bitcode have the benefit of being smaller, as only the version necessary for the actual device architecture (armv7, armv7s, armv8) is delivered to the user rather than a fat binary including all architectures.
As a more than welcome side effect of restructuring our build dependencies to be compatible with bitcode, the Wikitude SDK can now also run in the iOS simulator. You still won't see a camera image from your webcam in the simulator, but you can work with pre-recorded movie files as input.

In SDK 6.1 we introduced support for OpenGL ES 3 as a graphics API. SDK 7 now also lets you use Metal as your rendering API in Native and Unity projects. Speaking of new things, Wikitude SDK 7 also includes an extensive Swift sample explaining how to integrate the Wikitude SDK in a Swift application. Note that the API itself is still an Obj-C API, but the sample makes it much clearer how to use the API within a Swift environment.

We haven’t forgotten Android

Android developers will be happy to hear that the Android version of Wikitude SDK 7 uses a different sensor implementation for Geo AR experiences. The result is smoother and more accurate tracking when displaying geo-related content. For Android, we are also following the trend and raising the minimum Android version slightly by requiring Android 4.4 or later, which covers at least 90% of Android devices.

We hope you can put SDK 7 and its additions to good use in your AR project. We'd love to hear from you and are keen to receive suggestions on how to make the Wikitude SDK even more useful to you!

Start developing with Wikitude SDK 7

Getting started with SDK 7 has never been easier! Here’s how:

Help us spread the news on Twitter, Facebook, and LinkedIn using the hashtags #SDK7 and #Wikitude.


Here comes SDK 6.1

Update (August 2017): Object recognition, multi-target tracking and SLAM: Track the world with SDK 7

When we launched Wikitude SDK 6 nearly 2 months ago, we were excited to see developers jump onto markerless SLAM tracking and create fascinating augmented and mixed reality experiences. Today we are proudly releasing an update to SDK 6 with stability updates and a few new features. Check out what's new:

Support for OpenGL ES 3.x Graphics API:

Over the past weeks, we have seen that many augmented reality projects are based on more modern graphics APIs than OpenGL ES 2.0. Developers using Wikitude SDK 6.1 can now make use of OpenGL ES 3.x. Support for the Metal graphics API is currently being worked on, and a new iOS rendering API will be included in our next release.

Improved stability for image tracking:

This release comes with an updated computer vision engine for image tracking, which delivers a smoother AR experience. Particularly when holding the device still or using larger augmentations, developers will notice more stable tracking and little to no jitter. See the video below for a performance comparison.

Reworked communication from JavaScript API to native code:

An essential part of the JavaScript API is the ability to communicate with parts of the app that are not involved in the augmented reality experience as such – often written in Obj-C or Java. This communication used to be based on a custom URL protocol for sending and receiving data. In Wikitude SDK 6.1 we are introducing a different approach to the communication between the JavaScript API and native code, based on exchanging JSON objects directly.
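The idea can be sketched as a tiny bridge: the JavaScript side serializes a message, and the native side (stubbed here in plain JavaScript, where it would really be Obj-C or Java) parses it and replies with JSON. The handler registration below is an illustrative stand-in, not the actual Wikitude API.

```javascript
// Minimal JSON bridge sketch: structured objects cross the boundary instead
// of data packed into a custom URL scheme.
function makeBridge(nativeHandler) {
  return {
    // JS -> native: serialize once, hand over, parse the JSON reply.
    callNative(message) {
      const reply = nativeHandler(JSON.stringify(message));
      return JSON.parse(reply);
    },
  };
}

// Stub "native side" standing in for the Obj-C/Java part of the app.
const bridge = makeBridge((json) => {
  const msg = JSON.parse(json);
  if (msg.action === "addToCart") {
    return JSON.stringify({ ok: true, items: msg.items.length });
  }
  return JSON.stringify({ ok: false });
});

const result = bridge.callNative({ action: "addToCart", items: ["poster"] });
console.log(result); // { ok: true, items: 1 }
```

Compared with a URL protocol, nothing needs to be percent-encoded and nested data structures survive the round trip intact.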

Several stability updates:

SDK 6.1 comes with many stability updates and improvements – the most noticeable being the fix of a nasty bug that prevented 2D and 3D augmentations from being rendered separately. With this fix, the z-order of augmentations is properly respected. Additionally, developers can now use the ADE.js script again to debug experiences in the browser.

For a full list of improvements and fixes, make sure to check out the release notes for all supported platforms and extensions:

For customers with an active subscription, the update is free – depending on your current license key, it might be necessary to issue a new key. Please reach out via email for any license key related issues.

All other existing customers can try out Wikitude's cross-platform SDK 6.1 for free and purchase an upgrade to Wikitude SDK 6.1 at any time.

The updated SDK package is available on our download page for all supported platforms, extensions, and operating systems. If you have any questions, feel free to reach out to our developers via the Wikitude forum.


Product update: Wikitude Studio and Wikitude App

It's been a long ride since we first launched the Wikitude App, the world's first AR mobile app, and Wikitude Studio, the easiest AR content management tool on the market. With the launch of SDK 6, Wikitude started a new chapter in its history, focusing on the development of powerful tools for developers to create their own augmented reality apps in a single platform.

This blog post shares important dates and details related to the upcoming changes in the Wikitude product suite, in accordance with our Terms and Services. We are sharing this information ahead of time so our community has enough time to plan for the changes.

Wikitude is terminating Wikitude Studio (studio.wikitude.com) and replacing it with our newly developed Studio Editor. Additionally, Wikitude will terminate support for Geo-Worlds hosted in the Wikitude App.

Our team has been building the new-generation AR content management tool, called Studio Editor, which will continue to provide the same features you loved in Wikitude Studio. Studio Editor is now available for a free trial. Exact migration instructions and required actions will follow via email to all customers within the next weeks.


Migration & hosting 
  • Automatic migration of AR experiences from Wikitude Studio to Studio Editor will be provided for all customers later this year. 

  • AR experiences hosted in Wikitude Studio will be upgraded for compatibility with Studio Editor. 

Important dates (in chronological order) 

  • Customers using Wikitude Studio hosting in combination with an SDK version lower than 4.1 should contact our team for further information by 2017-03-30 at the latest.

  • Wikitude Studio will export worlds only in SDK version 4.1 or higher starting 2017-04-04 (if you use the Wikitude App as your publishing channel, this does not affect you).

  • Wikitude Studio (studio.wikitude.com) will be discontinued from 2017-09-30 onwards.

  • After 2017-09-30 AR experiences hosted in Wikitude Studio will not be editable (read-only). 

  • Geo-Worlds in the Wikitude App will not be available after 2017-09-30. 

  • Existing tools offering the 'Publish in Wikitude' feature will not be available after 2017-09-30, except for the upcoming new features in Studio Editor.

  • AR experiences hosted in Wikitude Studio will be deleted after 2017-12-31.

Wikitude is committed to creating powerful tools that allow anyone to build ultimate AR experiences in just a few clicks. 

Should you have any questions, please don't hesitate to contact our team at sales@wikitude.com.

The Wikitude Team


Wikitude SDK 6 – A technical insight

Update (August 2017): Object recognition, multi-target tracking and SLAM: Track the world with SDK 7

In this blog post, Wikitude CTO Philipp Nagele shares some insights into the technical background of SDK 6 and the changes that go along with the new version of Wikitude’s augmented reality SDK.

The planning and work for SDK 6 reach back to 2015. While working on the previous release, 5.3, to add support for Android 7.0 and iOS 10, we already had a clear plan for what the next major release of our SDK should include. It is very gratifying that we can now lift the curtain on the scope and details of what we think is the most comprehensive release in the history of Wikitude's augmented reality SDK.

While Instant Tracking is without doubt the highlight feature of this release, there are many more changes included that are worth mentioning. In this blog post, I'll try to summarize the noteworthy changes along with some additional information and insights.

Pushing the boundaries of image recognition

Most of our customers choose the Wikitude SDK for the ability to recognize and track images for various purposes. The engine that powers it has been refined over the years, and, already with SDK 5, reached a level of performance (both in speed and reliability) that puts it at the top of augmented reality SDKs today.
With Wikitude SDK 6, our computer vision engine took another major step forward. In more detail, the .wtc file format now uses a different approach to generating search indexes, which improves the recognition rate. Measured on the MVS Stanford data set, SDK 5 delivered a recognition rate of around 86%, while SDK 6 now recognizes 94 out of 100 images correctly. Moreover, the recognition rate stays above 90% independent of the size of the .wtc file – so no matter whether your .wtc file includes 50 or 1000 images, users will successfully recognize your target images.

Another development we have been working on lately is embracing the power of genetic algorithms to optimize our computer vision algorithms. Several thousand experiments and many hours on our servers led to an optimized configuration of our algorithms. The result is a 2D image tracking engine that tracks targets in many more sceneries and under more varied light conditions. You can get a first impression of the increased robustness in this direct comparison between the 2D engine in SDK 5 and SDK 6. The footage is unedited and shows a setup with several challenging factors:

  • Low-light condition (single spot light source)
  • Several occluding objects
  • Strong shadows further occluding the images
  • Busy scenery in general
  • Reflections and refractions

The improvements in SDK 6 make the 2D engine more robust while keeping the same performance in terms of battery consumption and speed.

Instant Tracking – SLAM in practice

You might have seen our previous announcements and advancements in not only recognizing and tracking two-dimensional images, but instead, working with entire maps of three-dimensional scenes. It is no secret that Wikitude has been working on several initiatives in the area of 3D tracking.

For the first time, Wikitude SDK 6 includes a generally available feature based on a 3D computer vision engine that has been developed in-house over the past two years. Instant Tracking is based on a SLAM approach that tracks the surroundings of the device and localizes the device within them. In contrast to image recognition, Instant Tracking does not recognize previously recorded items; instead, it instantaneously tracks the user’s surroundings. While the user keeps moving, the engine extends the recorded map of the scenery. If tracking is lost, the engine immediately tries to re-localize and start tracking again, with no need for user input.
Instant Tracking is true markerless tracking. No reference image or marker is needed. The initialization phase is instant and does not require a special initialization movement or pattern (e.g. translation movement with PTAM).
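
Conceptually, the life cycle described above can be pictured as a small state machine: the tracker starts tracking instantly, extends its map while tracking, and falls back to a re-localization state when tracking is lost. The sketch below is purely illustrative (the states, the `featuresMatched` input, and the class are hypothetical, not Wikitude's actual implementation):

```cpp
// Illustrative sketch of the Instant Tracking life cycle described above.
// States and transitions are hypothetical; the real SLAM engine is far more involved.
enum class TrackingState { Initializing, Tracking, Lost };

class InstantTrackerSketch {
public:
    TrackingState state = TrackingState::Initializing;
    int mapPoints = 0; // size of the recorded map of the scenery

    // Called once per camera frame; `featuresMatched` stands in for how well
    // the current frame matches the recorded map.
    void onFrame(int featuresMatched) {
        switch (state) {
            case TrackingState::Initializing:
                // No initialization movement or pattern required:
                // tracking starts instantly on the first frame.
                state = TrackingState::Tracking;
                mapPoints += featuresMatched;
                break;
            case TrackingState::Tracking:
                if (featuresMatched == 0) {
                    state = TrackingState::Lost; // tracking lost
                } else {
                    mapPoints += featuresMatched; // extend the map while moving
                }
                break;
            case TrackingState::Lost:
                // Re-localize against the existing map, no user input needed.
                if (featuresMatched > 0) {
                    state = TrackingState::Tracking;
                }
                break;
        }
    }
};
```

The key point the sketch captures is that the map survives tracking loss, so re-localization can resume against everything recorded so far.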

The same engine used for Instant Tracking is used in the background for Extended Tracking. Extended Tracking uses an image from the 2D computer vision engine as an initialization starting point instead of an arbitrary surface as in the case of Instant Tracking. After a 2D image has been recognized, the 3D computer vision engine starts recording the environment and stays in tracking mode even when the user is no longer viewing the image.

For details on how to get started with Instant Tracking, visit our documentation, see our sample app (included in the download package) and watch the instruction video to learn how to track the environment. SDK 6’s markerless augmented reality feature is available for Unity, JavaScript, Native, extensions and smart glasses (Epson and ODG).

Putting the Pieces Together – Wikitude SDK and API

What is the strength of the Wikitude SDK? It is much more than a collection of computer vision algorithms. When planning a new release, the product and engineering team aim to create a cross-platform SDK that is highly usable. We try to think of use-cases for our technology, then identify missing features. So it should come as no surprise that Wikitude SDK 6 is packed with changes and new features beyond the computer vision features.

The most obvious and noticeable change, especially for your users, is FullHD rendering of the camera image. Previously, Wikitude rendered the camera stream in Standard Definition (SD) quality, which was perfect back in 2012 when the Wikitude SDK hit the market. Since then, device manufacturers have introduced Retina displays and pixel densities beyond what the eye can distinguish. An image rendered in VGA resolution on this kind of display just doesn’t look right anymore. In Wikitude SDK 6, developers can now choose between SD, HD or FullHD rendering of the camera stream.

Additionally, on some devices, users can now enjoy a smoother rendering experience, as the rendering frequency can be increased to 60fps. For Android, these improvements are based on new support for the Android Camera2 API, which, since Android 5.0, is the successor to the previous camera API (technically, more than 60% of Android devices as of 1/1/2017 should run the Camera2 API). It allows fine-grained control of and access to the camera and its capabilities. While the API and the idea behind it are a welcome improvement, the implementations of the Camera2 API across the various Android vendors are diverse. Different implementations of an API are never a good thing, so support for the new camera features is limited to participating Android devices.

“Positioning” was another feature needed to allow users to interact with augmentations. This feature is ideal for placing virtual objects in unknown environments. With Wikitude SDK 6, developers now have a consistent way to react to and work with multi-touch gestures. Dragging, panning, rotating – the most commonly used gestures on touch devices are now captured by the SDK and exposed in easy-to-understand callbacks. This feature has been implemented in a way that lets you use it in combination with any drawable in any of the different modes of the Wikitude SDK – be it Geo AR, Image Recognition, or Instant Tracking.
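
To make the gesture handling concrete, the values a pinch/rotate callback typically reports can be derived from the vector between the two touch points. Everything below (`Point`, `pinchScale`, `rotationAngle`) is an illustrative sketch, not part of the Wikitude API:

```cpp
#include <cmath>

// Hypothetical two-finger gesture math: given the two touch points at the
// start of the gesture (a0, b0) and now (a1, b1), derive the scale factor
// and rotation angle a pinch/rotate callback would report.
struct Point { double x, y; };

double fingerDistance(Point a, Point b) {
    return std::hypot(b.x - a.x, b.y - a.y);
}

// Scale factor: ratio of current finger distance to initial finger distance.
double pinchScale(Point a0, Point b0, Point a1, Point b1) {
    return fingerDistance(a1, b1) / fingerDistance(a0, b0);
}

// Rotation in radians: how much the vector between the two fingers turned.
double rotationAngle(Point a0, Point b0, Point a1, Point b1) {
    double before = std::atan2(b0.y - a0.y, b0.x - a0.x);
    double after  = std::atan2(b1.y - a1.y, b1.x - a1.x);
    return after - before;
}
```

An SDK would then apply the reported scale and rotation to the drawable the gesture started on.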

The new tracking technology (Instant Tracking) led us to another change, which developers will encounter quite quickly when using SDK 6. Our previously used scheme of ClientTracker and CloudTracker no longer fit an SDK with a growing number of tracker types. SDK 6 therefore introduces a different tracking scheme with more intuitive naming. For now, you will encounter ImageTracker, with various resources (local or cloud-based), and InstantTracker, with more tracker types coming soon. We are introducing this change now in SDK 6 while keeping it fully backward compatible with the SDK 5 API, although parts of the SDK 5 API are deprecated. The SDK comes with an extensive migration guide for all platforms, detailing the changes.
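
The renamed scheme can be pictured as a small class hierarchy. This is illustrative C++ only (the actual SDK classes and constructors differ per platform): an ImageTracker is configured from a local or cloud resource, while an InstantTracker needs no resource at all.

```cpp
#include <string>

// Illustrative-only sketch of the SDK 6 tracker naming scheme.
struct Tracker {
    virtual ~Tracker() = default;
    virtual std::string name() const = 0;
};

// An ImageTracker is configured with a target collection resource,
// which may live locally (a .wtc file) or in the cloud.
struct ImageTracker : Tracker {
    enum class Resource { Local, Cloud };
    explicit ImageTracker(Resource r) : resource(r) {}
    std::string name() const override {
        return resource == Resource::Local ? "ImageTracker(local)"
                                           : "ImageTracker(cloud)";
    }
    Resource resource;
};

// An InstantTracker tracks an arbitrary surface and needs no resource.
struct InstantTracker : Tracker {
    std::string name() const override { return "InstantTracker"; }
};
```

The old ClientTracker/CloudTracker pair collapses into one ImageTracker type whose resource decides where the targets come from, which leaves room for further tracker types later.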

Last, I don’t want to miss the opportunity to talk about two minor changes that I think can have a great impact on several augmented reality experiences. Both are related to visualizing drawables. The first change affects the way 2D drawables are rendered when they are attached to geo-locations. So far, 2D drawables have always been aligned to the user when attached to a geo-location. Now, developers have the ability to align drawables as they wish (e.g. facing north), and the drawables will keep that alignment. The second change also affects 2D drawables. The new SDK 6 API unifies how 2D and 3D drawables can be positioned, which adds the ability to position 2D drawables along the z-axis.

Naturally, all of our official extensions are compatible with the newest features. The Titanium (Titanium 6.0) and Unity (Unity3D 5.5) extensions now support the latest releases of their development environments, and x86 builds are now available in Unity.

The release comes with cross-platform samples (e.g. gestures are demonstrated in a SnapChat-like photo-booth experience) and documentation for each of the new features, so you can immediately work with the new release.

Start developing with Wikitude SDK 6

Getting started with Wikitude’s new SLAM-based SDK is super easy! Here’s how:

  1. Download SDK 6 and sample app
  2. Check out our documentation
  3. Select your license plan
  4. Got questions? Our developers are here for you! 

We are excited to see what you will build with our new SDK. Let us know what your favorite feature is via Twitter and Facebook using the hashtags #SDK6 and #Wikitude!

Categories
SDK releases

Introducing: Wikitude SDK 5

Update (August 2017): Object recognition, multi-target tracking and SLAM: Track the world with SDK 7

We would like to share the details of version 5 of our cross-platform Wikitude SDK with you.
We have been working on this release for some time now and consider this our most ambitious release since launching the first version of the Wikitude SDK more than three years ago.

Continue to read the details and you will easily understand why.

For our developers, the Wikitude SDK 5 brings increased flexibility when it comes to choosing a development environment.

Besides the existing options to work with Wikitude’s JavaScript API and the well-established extensions for Cordova, Titanium and Xamarin, developers are now able to embed augmented reality features using the new Wikitude Native API for Android and iOS. The Native API gives access to all computer-vision related features, such as 2D Markerless Image Recognition and Tracking and 2D Cloud Markerless Image Recognition and Tracking.

Speaking of 2D image tracking, the new SDK extends – literally – its functionality here as well. Extended Image Tracking, available in both the Native and JavaScript API, is a new tracking mode that keeps tracking going even when the original target image can no longer be seen by the camera.

The Native API is also the basis for the new Unity3D plugin for the Wikitude SDK. With the Unity3D plugin, developers are able to add 2D target images to their Unity3D-based applications.

Starting with this SDK release, developers are able to create and use custom plugins for the Wikitude SDK. Plugins under this framework receive a shared camera frame plus additional information about recognized images – like the pose and distance. Plugins can be written in C++, Java or ObjC and can communicate with your augmented reality experience.

Furthermore, the Wikitude SDK 5 brings full compatibility with Android Studio (an interim set-up guide is already available).

(Figure: Wikitude SDK architecture, version 5)

Extended Image Tracking

The Extended Image Tracking option is an advanced mode for 2D markerless tracking that continues to track your target image even when it can no longer be seen by the camera. Users scan your target image as before, but can then leave the target and continue to move around while tracking of the entire 3D scene is kept.
Extended Tracking is the first release of Wikitude’s new 3D Tracking engine and supplements Wikitude’s 2D image tracking capabilities. The new mode is fantastic for larger 3D model sceneries, or for smaller image targets on larger surfaces, where the user can move around more freely.

Native API for Wikitude SDK

For all developers who want to use the Wikitude SDK at its core, Wikitude is branching off its computer vision core technology. The Native API contains the full computer vision engine of the Wikitude SDK, but can be integrated using native programming languages for Android and iOS (Java, ObjC).

The Native API features:

  • Plugin Framework
  • 2D Image Recognition and Tracking (Offline)
  • 2D Cloud Recognition and Tracking (Online)
  • 2D Extended Image Tracking

Unity3D Plugin for Wikitude SDK

Based on the new Native API, Wikitude offers a plugin for Unity3D so you can integrate Wikitude’s computer vision engine into a game or application fully based on Unity3D. This means you can work with target images and image recognition in your Unity3D app and benefit from the full feature set of the Unity3D development environment. Combining the power of Wikitude’s SDK with the advanced capabilities of Unity3D makes for an unbeatable duo.

Plugin Framework

The new Plugin Framework allows developers to extend the Wikitude SDK with third-party functionality. Plugins for the Wikitude SDK have access to the camera frames and to information about recognized images (pose, distance). This is perfect for additional functionality that also requires the camera image. Plugins are written in C++, Java or ObjC and can communicate with both the JavaScript API and the Native API.

The SDK includes two samples for plugins:

  • Barcode and QR Scanner
  • Face Detection

Full Android Studio compatibility

Android Studio is increasingly becoming the preferred IDE for developing Android apps.
While Wikitude SDK 4.1 can already run in Android Studio, Wikitude SDK 5.0 has now been optimized to work nicely with it.

  • Updated library format .aar
  • Sample App for Android Studio

Availability

Update (August 2017): The SDK 7.0 official release is now available for download. Customers with a valid Wikitude subscription license will receive future updates for free upon release.

Oh, there is one more thing…3D Tracking!

Wikitude will publicly release SLAM based 3D tracking capabilities soon! Please check wikitude.com/SLAM for details. Here is a quick video demo to give you a glimpse of what’s coming.

Categories
SDK releases

Preview on the wikitude SDK Plugins API

The SDK 5 is now available for download!

In the past months our development focus for the Wikitude SDK was on executing our ambitious plans for the next major version – 5.0, released at the end of August. Today we are lifting the curtain on one particular feature: the Wikitude SDK Plugins API!

Over and over, customers approached us with the desire to enhance the functionality of the Wikitude SDK with features from other areas of computer vision. Optical character recognition (OCR), face detection and recognition of QR codes were requested repeatedly. All these features have two things in common: first, they also require access to the camera image, and second, they can complement the AR experience built with the Wikitude SDK.

Embedding and integrating some of those libraries on our own would have been an option, but it would have bloated the SDK, and we were sure that we wouldn’t be able to cover all customer requirements. The idea was thus born to link external libraries to the SDK through a common interface, which then led to the concept of plugins for the Wikitude SDK.

Technically, a plugin is a class, written in either C++, Java or ObjC, that is derived from the Wikitude Plugin base class. Besides lifecycle handling and options to enable and disable the plugin, the Plugin class has two main methods that you can override:

  • cameraFrameAvailable, which is called each time the camera has a new frame
  • update, which is called each time the Wikitude SDK renders a new frame

/* Derive from this class for custom plugin implementations */
class Plugin {
   public:
      Plugin();
      virtual ~Plugin();
      string identifier() const; // returns a unique plugin identifier
      bool processesColorCameraFrames(); // returns true if the plugin wants to process color frames instead of bw
 
      void setEnabled(bool enabled_);
      bool isEnabled();
 
      string callJavaScript(string javaScriptSnippet); // evaluates the given JavaScript snippet in the currently loaded ARchitect World context
 
   protected:
      virtual void initialize(); // called when the plugin is initially added to the Wikitude SDK
      virtual void pause(); // called when the Wikitude SDK is paused, e.g. the application state changes from active to background
      virtual void resume(uint pausedTime_); // called when the Wikitude SDK resumes, e.g. from background to active state. pausedTime_ represents the time in milliseconds that the plugin was not updated
      virtual void destroy(); // called when the plugin is removed from the Wikitude SDK
 
      virtual void cameraFrameAvailable(const Frame& cameraFrame_); // called each time the camera has a new frame
      virtual void update(const vector<RecognizedTarget>& recognizedTargets_); // called each time the Wikitude SDK renders a new frame
 
   protected:
      string      _identifier;
      bool        _enabled;
}; 
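
To make the override points concrete, here is a minimal, self-contained sketch of a plugin that simply counts camera frames. The Frame type and the reduced base class below are stand-ins for the real Wikitude headers, not the actual API:

```cpp
#include <string>

// Reduced stand-ins for the real Wikitude types, just enough to show
// how a custom plugin overrides cameraFrameAvailable.
struct Frame { int width; int height; };

class PluginBase {
public:
    virtual ~PluginBase() = default;
    virtual void cameraFrameAvailable(const Frame& cameraFrame_) = 0;
protected:
    std::string _identifier;
    bool _enabled = true;
};

// A toy plugin that counts incoming camera frames; a real plugin would run
// e.g. a barcode scanner or face detector on each frame here.
class FrameCounterPlugin : public PluginBase {
public:
    FrameCounterPlugin() { _identifier = "com.example.framecounter"; }
    void cameraFrameAvailable(const Frame& cameraFrame_) override {
        if (_enabled) {
            ++framesSeen;
            lastWidth = cameraFrame_.width;
        }
    }
    int framesSeen = 0;
    int lastWidth = 0;
};
```

In the real SDK, the plugin would be registered with the Wikitude SDK, which then drives cameraFrameAvailable from its camera pipeline.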

With those methods in place, your plugin can read the full camera image for its own purposes; the same YUV image is also processed by Wikitude’s computer vision engine.

If the Wikitude SDK is running with ongoing image recognition, the plugin API will pass RecognizedTarget objects to the update method once an image has been recognized. The plugin can then work with the RecognizedTarget class, which wraps the details of the target image in the camera view. With that, you can read out the pose of the target image and use it for your own purposes. Additionally, each target contains the calculated distance to the camera.

class RecognizedTarget {
   public:
      const string&    getIdentifier() const; // the identifier of the target. The identifier is defined when the target is added to a target collection
      const Mat4&      getModelViewMatrix() const; // the model view matrix that defines the transformation of the target in the camera frame (translation, rotation, scale)
      const Mat4&      getProjectionMatrix() const;
      const float      getDistanceToCamera() const; // represents the distance from the target to the camera in millimeters
};
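
As an aside on what a pose gives you: the translation part of a 4×4 model view matrix encodes the target’s position in camera space, so the distance reported by getDistanceToCamera() should roughly equal the length of that translation vector. The sketch below assumes a column-major layout (as in OpenGL); the real Mat4 layout is defined by the SDK:

```cpp
#include <array>
#include <cmath>

// Hypothetical column-major 4x4 matrix, as used by OpenGL-style renderers.
// Elements 12, 13 and 14 hold the translation: the target's position in
// camera coordinates, here assumed to be in millimeters.
using Mat4 = std::array<float, 16>;

float distanceToCameraMm(const Mat4& modelView) {
    float tx = modelView[12];
    float ty = modelView[13];
    float tz = modelView[14];
    return std::sqrt(tx * tx + ty * ty + tz * tz);
}
```

For a target sitting 300 mm straight in front of the camera, the translation column is (0, 0, -300) and the computed distance is 300 mm.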

Passing values from within the plugin to the JavaScript part of your augmented reality experience is done via the addToJavaScriptQueue() method of the Plugin class. Using this function will execute any JavaScript code in the context of your augmented reality experience.

We hope you like this first release of the Plugins API and can build powerful extensions for the Wikitude SDK. We already have ideas for developing the concept further, such as generic Positionables that you can pass to the Wikitude SDK (e.g. the pose of something you recognize and track yourself), or sharing the render loop with plugins.

Categories
Dev to Dev

Wikitude SDK and Android Studio

Update (August 2017): A developer insight into Wikitude SDK 7 – object recognition, multi-target tracking and SLAM-based instant tracking

Android Studio, the new IDE by Google for developing Android apps, was announced back at Google I/O 2013. The IDE, which at its core is based on IntelliJ IDEA, has been available since December last year. This year’s Google I/O also brought the news that the Android NDK will be fully supported as well – something we are looking forward to using internally.

Google made it clear from the beginning that Android Studio would replace the existing Android Developer Tools (ADT) for Eclipse and that Google intends to cease support for Eclipse at some point in the future. As we have known since the 26th of June from a blog post by Jamal Eason, Product Manager for Android at Google, support for Eclipse will stop entirely by the end of this year (2015).

How does that relate to you as a developer?

  • Migrate your projects to Android Studio projects
  • Get familiar with Android Studio. For example, check out the new memory or CPU profiler – an improvement over the DDMS feature in Eclipse
  • Get to know the new build system, Gradle, and say goodbye to Ant. You can start using the Gradle plugin in Eclipse as a first step of the migration.
  • Have a look at the new parameter applicationId, which replaces the package name
  • The format of Android libraries changes from a Java Archive (.jar) to an Android Archive Library (.aar); this has many advantages, especially if you plan to distribute your application via the Google Play Store.

The last point in the list above of course also affects us at Wikitude. Internally, we have already made the switch and are using Android Studio exclusively for all Android application development. For the Wikitude SDK, we will make this switch starting with SDK 5, where we are fully embracing Android Studio. The Android library will be shipped as an Android Archive Library (.aar file), and the sample projects for Android will be based on Android Studio.
Releasing the SDK by the end of July should give you enough time to switch your own projects to Android Studio without having to worry that the Wikitude SDK will not work.