Categories
3D, Dev to Dev

Creating 3D content for augmented reality

Content is constantly changing. Designed for TVs and handheld devices in the early 2000s, it now transcends the 2D realm and spills into the world around us. 3D augmented reality content needs to be as immersive as VR advocates ever dreamed, minus the isolation from the outside world.

The more AR becomes part of our lives, the greater the need for content to adapt to the 3D world. That means content needs to be realistic, spatial, and engaging. And while there are thousands of apps online, most companies are still figuring out what compelling content looks like in AR.

In this post, we’re diving into the role of content in augmented reality, the challenges the industry faces, and the future of spatial content.

Augmented reality content basics

Augmented reality content is the computer-generated input used to enhance parts of users’ physical world through mobile phones, tablets, or smart glasses. It can be user-generated (think of social media face filters) or professionally produced by designers working for brands and specialized agencies.

AR content often comes in the form of 3D models but can also be delivered as images, video, or audio.

Whether you are using AR to buy a new IKEA sofa or play a game, the quality of the content you see in the app will make (or break) the AR experience.

Image source: IKEA

The role of 3D content in augmented reality experiences

Among the thousands of AR apps on the market today, the most successful ones have one thing in common: high-quality, engaging AR content. Fail to deliver that, and your project risks joining the astonishing 99.9% of apps that flop or become irrelevant in the app stores.

Content is the heart of augmented reality. It ensures users have a reason to keep coming back.

Users might be thrilled to scan an augmented wine bottle a few times and share the experience with friends. But how many times can we expect them to go back and watch the same video? 

Companies must see AR content as a critical component of long-term, well-thought-through digital strategies to ensure app longevity. It means constantly delivering fresh, contextual, and personalized content. 

Easier said than done. From high production costs to a scarcity of skilled professionals, building AR content at scale is one of the biggest challenges companies face, and it blocks them from keeping their apps relevant in the long run.

Challenges of building 3D content for augmented reality

3D models need to be faithful digital twins of the real world. Combined with other rendering elements (e.g., animation, audio, and physics), they make up AR’s most used type of content and add an immersive layer to the user experience.

What the user doesn’t see is the relatively complex process of creating such realistic visual assets. Production methods range from detailed manual modeling, to reusing computer-aided design data, to photogrammetry-based capture.

Size limits, file formats, and the total size of the application are just some of the many requirements developers need to understand to build great AR experiences. In addition, the lack of industry standards for AR content and a limited qualified workforce impose significant challenges on the industry.

Building 3D assets: 3D modeling versus 3D scanning

Before we jump into the technicalities of creating content for AR, there are some basic concepts we need to clarify.

3D modeling vs. 3D scanning

3D modeling and 3D scanning are two ways of building 3D assets for augmented reality. 

3D modeling uses computer graphics to create a 3D representation of any object or surface. This technology is beneficial when used to recreate physical objects because “it does not require physical contact with the object since everything is done by the computer” (Skywell Software). Therefore, 3D modeling is ideal for creating virtual objects, scenes, and characters that don’t exist in the real world (think of Pokémon and other fantasy AR games).

3D scanning uses real-world objects and scenes as a base for the production of AR assets. Using this method, the content creators don’t craft the model from scratch using a program. Instead, they scan the object using one of two different methods: photogrammetry or scanning through a 3D scanner device (LiDAR or similar). 

GIF source: Apple.com

The main difference between the two is how they capture the data of the object. While photogrammetry uses images captured by regular smartphones, smart glasses, or tablets, scanning requires special devices equipped with depth sensors to map the object. 

This makes photogrammetry more accessible to the broader developer crowd when creating AR content, as no special equipment is required. On the flip side, 3D scanners are more reliable.

Using either of the two approaches, a point cloud can be extracted and applied in the AR experience. You can read more on the advantages of each method in the 3D point cloud section below.

Ultimately, you can decide between 3D modeling and 3D scanning by assessing whether you have access to the physical object to scan. If the selected AR object target is not available, 3D modeling is the way to go.

How is 3D content created for augmented reality?

There are plenty of AR content creation tools available on the market. Some are simple drag-and-drop editors that don’t require coding skills. Others are much more complex and target experienced professionals.

Here’s an overview of the different possibilities:

Image source: DevTeam.Space


3D point cloud: In AR, a point cloud is a virtual representation of the geometry of real-world objects using a collection of points. Generated via photogrammetry software or 3D scanners, these points are captured based on the external surfaces of objects.

Because photogrammetry allows gathering 3D information from 2D images, this method makes content creation more accessible. It overcomes ownership issues often faced with 3D models. As a result, anyone can create 3D models by simply recording or scanning the real object. 3D scanners (for example, LiDAR-enabled devices) are gradually becoming more available on the market and provide more detailed point clouds thanks to their depth sensors.
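To make the point-cloud idea concrete, here is a minimal sketch (in TypeScript, with an illustrative file name) of reading an ASCII PLY export, a format many photogrammetry tools can produce, into a plain array of XYZ points:

```typescript
// A minimal sketch of reading an ASCII PLY point cloud. The file name is
// an illustrative placeholder; real exports may also carry per-point
// color and normal data after the XYZ coordinates.
import { readFileSync } from "fs";

type Point = { x: number; y: number; z: number };

function loadAsciiPly(path: string): Point[] {
  const lines = readFileSync(path, "utf8").split("\n").map((l) => l.trim());
  const headerEnd = lines.indexOf("end_header");
  const vertexLine = lines.find((l) => l.startsWith("element vertex"));
  const count = vertexLine ? parseInt(vertexLine.split(/\s+/)[2], 10) : 0;

  // Each vertex row starts with the XYZ coordinates of one surface point.
  return lines.slice(headerEnd + 1, headerEnd + 1 + count).map((line) => {
    const [x, y, z] = line.split(/\s+/).map(Number);
    return { x, y, z };
  });
}

const cloud = loadAsciiPly("office-desk-scan.ply");
console.log(`Loaded ${cloud.length} points`);
```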

Commercial tools such as Wikitude Studio, Apple Object Capture, and Amazon Sumerian are examples of photogrammetry-based programs.

AR Object Target Transformation in Wikitude Studio Editor

CAD (Computer-Aided Design): CAD models are commonly the first step to prototyping physical goods, bringing a first product view to life in the digital world. Assisted by software applications, AR developers can repurpose legacy CAD models for augmented reality-based solutions. Existing CAD data can then be used as the input method to create digital representations of the object or environment to be augmented.

Once uploaded into the selected program, CAD data is converted to a format compatible with AR on phones, tablets, and smart glasses. CAD models typically provide accurate information about the object, maximizing the potential for a reliable AR experience. Although prevalent in the industrial sector, CAD-based AR experiences are progressively gaining popularity in consumer-facing apps.
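What that conversion involves varies by toolchain, but two steps are almost always present: converting CAD units (often millimeters) into the meters that AR runtimes expect, and recentering the geometry around a usable pivot. A hedged sketch of those two steps:

```typescript
// Illustrative sketch: normalizing CAD-derived geometry for AR use.
// CAD exports are frequently in millimeters with an arbitrary origin,
// while AR runtimes such as ARKit and ARCore work in meters.
type Vec3 = { x: number; y: number; z: number };

function normalizeForAR(vertices: Vec3[], unitScale = 0.001): Vec3[] {
  // Find the bounding box so the model can pivot around its own middle.
  const min = { x: Infinity, y: Infinity, z: Infinity };
  const max = { x: -Infinity, y: -Infinity, z: -Infinity };
  for (const v of vertices) {
    min.x = Math.min(min.x, v.x); max.x = Math.max(max.x, v.x);
    min.y = Math.min(min.y, v.y); max.y = Math.max(max.y, v.y);
    min.z = Math.min(min.z, v.z); max.z = Math.max(max.z, v.z);
  }
  const center = {
    x: (min.x + max.x) / 2,
    y: (min.y + max.y) / 2,
    z: (min.z + max.z) / 2,
  };
  // Recenter, then convert millimeters to meters.
  return vertices.map((v) => ({
    x: (v.x - center.x) * unitScale,
    y: (v.y - center.y) * unitScale,
    z: (v.z - center.z) * unitScale,
  }));
}
```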

Games, computer graphics: authoring software tools such as Blender, 3ds Max, and Maya are popular 3D design applications used by AR content creators. Unity, Unreal Engine, and even Apple’s Reality Composer are great tools to assemble the pieces of content and make them work together for augmented reality.

Other 3D models: beyond CAD, other popular 3D model formats can be leveraged to power augmented reality solutions, for example, glTF 2.0, FBX, OBJ, etc. Compatible file formats will depend on the program used to build the augmented reality experience.
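As an illustration of how little code it takes to put one of these formats to work, here is a sketch of loading a glTF 2.0 asset into a web scene using the three.js GLTFLoader (the model path is a placeholder; the same asset could equally feed a native AR pipeline):

```typescript
import * as THREE from "three";
import { GLTFLoader } from "three/examples/jsm/loaders/GLTFLoader.js";

const scene = new THREE.Scene();
const loader = new GLTFLoader();

// Asynchronously load a glTF 2.0 asset and attach its scene graph.
loader.load(
  "models/product.glb",             // placeholder path to a binary glTF
  (gltf) => scene.add(gltf.scene),  // success: add the model to the scene
  undefined,                        // progress callback (unused here)
  (error) => console.error("Failed to load glTF:", error)
);
```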

On the one hand, this wide variety of 3D asset formats has opened the doors for creators in many areas to put their existing models to work for AR. On the other hand, it creates confusion among developers, fueling the debate around the need for standardization in the AR industry and the creation of alternative tools that are intuitive and code-free.

What’s next for AR content creation?

With increased interest in augmented reality, we will see more tools emerge that help create content, overcome workforce scarcity, and deliver real value through the technology.

To facilitate content creation, AR companies are investing in platforms that don’t require technical skills (thereby closing the workforce gap) to help brands streamline the AR content creation process.

An example is Apple’s latest release: RealityKit 2. This new framework includes the much-awaited Object Capture feature, which allows developers to snap photos of real-world objects and create 3D models using photogrammetry.

But if Apple’s announcement gives you déjà vu, you are not wrong. Last year, the AR media went crazy about an app that lets you copy and paste the real world with your phone using augmented reality.  

The topic of interoperability of experiences across platforms and devices is equally important. The ability to code an AR app once and deploy it in several devices and operating systems helps companies bring their projects to market as fast as possible.

The final and most crucial aspect is understanding how 3D content in augmented reality can deliver value to its users. That means getting clear goals for the AR project, understanding how it fits into your digital strategy, and having a deep knowledge of your customer. 

What are some of the trends you see in AR content creation? Let us know via social media (Twitter, Facebook, and LinkedIn) and tag @wikitude to join the conversation.

Categories
SDK releases

Object Tracking 30% faster: Download SDK 9.5

Wikitude SDK 9.5 is out now, delivering unparalleled Object Tracking experiences.

Please welcome our last release of the year. SDK 9.5 brings support for Android 11, two new AR samples, and significant improvements in our computer vision engine for developers to enjoy in both the Professional and Expert editions of the Wikitude SDK.

You can test SDK 9.5 by downloading your SDK of choice here.  

Do you have a subscription license? Download our latest release to update your platform free of charge.

Wikitude SDK 9.5 highlights

30% faster Object Tracking

With this release, the Wikitude computer vision engine gets a significant upgrade, resulting in 30% faster object tracking.

This upgrade delivers unparalleled AR experiences based on real-world objects and scenes.

The latest Object Tracking update delivers:

  • 30% speed improvements for object tracking
  • More accurate tracking under challenging conditions (poor lighting, noisy environments, etc.)
  • Faster initial recognition of objects
  • Overall performance improvements for recognition and tracking

Object & Scene Tracking covers a wide variety of use cases, from supporting maintenance and remote assistance solutions, to augmenting museum pieces, enhancing consumer products like toys, and much more.

Watch our latest Unity tutorial on how to work with multiple object tracking to get started.

Developers can create object tracking AR experiences using images or 3D models (such as CAD, glTF 2.0, and more) as input methods.
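As a rough illustration, setting up object tracking with the Wikitude JavaScript API looks something like the sketch below (asset names are placeholders, and the exact options vary by SDK version):

```typescript
declare const AR: any; // global namespace provided by the Wikitude JS API

// Sketch: "firetruck.wto" is a placeholder Object Target Collection
// (generated from images or a 3D model); "overlay.wt3" is a placeholder
// 3D augmentation.
const targetCollection = new AR.TargetCollectionResource("assets/firetruck.wto");

const objectTracker = new AR.ObjectTracker(targetCollection, {
  onTargetsLoaded: () => console.log("Object targets loaded"),
  onError: (error: any) => console.error(error),
});

const overlay = new AR.Model("assets/overlay.wt3", {
  scale: { x: 0.5, y: 0.5, z: 0.5 },
});

// "*" attaches the augmentation to every target in the collection.
const trackable = new AR.ObjectTrackable(objectTracker, "*", {
  drawables: { cam: [overlay] },
  onObjectRecognized: () => console.log("Object recognized"),
  onObjectLost: () => console.log("Object lost"),
});
```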

Check out our interactive AR tracking guide to see the best feature for your specific use case.

New Wikitude samples for Expert Edition

The Wikitude sample app gets an upgrade with SDK 9.5. Unity developers can hit the ground running with two new samples:

  • Advanced rendering making use of ARKit 4 and ARCore advanced functionality

With this sample, you will be able to use ARKit, ARCore, and Wikitude capabilities in a single development framework, making the most of Unity’s AR Foundation. With advanced rendering, AR experiences become more immersive and realistic – leveraging people occlusion, motion capture, scene geometry, and more.

  • Multiple extended images

Bring image tracking experiences to a whole new level.

This new sample enables creating more interactive AR experiences by allowing targets to interact with each other and perform specific actions according to the developer’s needs. It’s ideal for games, sales material, marketing campaigns, and enterprise use cases such as training and documentation.
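The sample itself targets the Unity Expert Edition; as a rough JavaScript-API analogue, the sketch below tracks several image targets at once (asset names are placeholders, and option names may differ between SDK versions):

```typescript
declare const AR: any; // global namespace provided by the Wikitude JS API

// Sketch: "targets.wtc" is a placeholder image target collection.
const resource = new AR.TargetCollectionResource("assets/targets.wtc");

const imageTracker = new AR.ImageTracker(resource, {
  maximumNumberOfConcurrentlyTrackableTargets: 5,
  onTargetsLoaded: () => console.log("Image targets loaded"),
});

// "*" matches every target; each recognized image gets its own overlay.
const trackable = new AR.ImageTrackable(imageTracker, "*", {
  drawables: { cam: [new AR.Circle(0.2, { style: { fillColor: "#00AAFF" } })] },
  onImageRecognized: (target: any) => console.log(`Recognized ${target.name}`),
  onImageLost: (target: any) => console.log(`Lost ${target.name}`),
});
```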

For full details on this release, check out our change log.

How to get SDK 9.5?

Ready to start developing? Wikitude makes it easy. Select an SDK, get a free trial, and start with your next augmented reality experience today.

Ready to launch your project? Seize the chance to take advantage of our Cyber Week (until 4.12.2020). Use code cyberweek15 to get 15% OFF your Wikitude SDK via our online store.

New to Wikitude? Welcome! We have a free Wikitude trial version for testing purposes. 

If you’re already a user and like what you’re testing, reach out to our team to discuss your upgrade to SDK 9.5. 

Download Now

Categories
SDK releases

New in: Explore Wikitude SDK 9.3

Update: SDK 9.3.1 is now available including iOS 14 compatibility.

August flew by, and a new release of the Wikitude SDK is already available. Today we’re bringing you version 9.3 of our AR toolkit, the last update for the summer. 

For AR developers, this means more stability for experiences based on 3D models as an input method, lots of improvements for Unity Editor, NatCorder support, and expansion of our Alignment Initialization feature for Professional Edition (PE). 

You can test SDK 9.3 by downloading your SDK of choice here.

Shout out to our awesome community for all the feedback and interaction via the forum this month. Want to have your say? Share your feedback about this release and what you’d like us to improve in the next SDK. 

Got a subscription license? Download Wikitude SDK 9.3 to update your platform free of charge.

What’s new in the Wikitude SDK 9.3?

Tracking improvements for 3D model-based AR experiences (Expert Edition)

In our last update, we introduced 3D models as a new input method for developers working with Object Tracking. This means you can use glTF 2.0 files, CAD models, and other 3D model formats, exclusively or in addition to images, to build AR experiences based on all kinds of physical objects.

SDK 9.3 delivers a smoother tracking behavior and better alignment with real objects. 

In addition to significant stability improvements, we are rolling out a new SDK mode called ‘Assisted Tracking’ to complement Object Tracking. This mode gathers complementary information from native AR frameworks (ARCore, ARKit) to provide further tracking stability in conditions that are typically rough for the engine, such as heavy shaking or erratic movements.

With this update, the Wikitude SDK gives you everything you need to build robust AR for the widest variety of objects possible: from buildings to industrial machines, toys, home appliances, and more.

As we bring this new method of building AR experiences to the finish line (public release), you can request early access by applying to our beta program. All you need to do is provide details on what kind of object you’re looking to augment, your project goal, and company details.

Stay tuned for what’s to come! These are the initial steps that are preparing our technology for future improvements, expansion of objects that can be tracked, and enhancements in our Object Tracking feature moving forward.

APPLY FOR 3D MODEL OBJECT TRACKING BETA TESTING

3D Model Object Tracking benefits at a glance:

  • More Object Target Input methods available
  • Input source materials are interchangeable and combinable (images + 3D models)
  • Broader range of recognizable and trackable physical objects
  • Improved recognition and tracking performance

Find out which method (3D Model or Image) is best for your specific use case by using our AR Tracking Guide.

Alignment Initializer now available for Professional Edition

Our Alignment Initializer feature is now available in the Wikitude Professional Edition and has received a performance boost in the Expert Edition.

Based on initial experiments by some of our early beta testers, SDK 9.3 includes additional improvements in initialization behavior. In particular, the new release features improved accuracy when initialization is triggered. The alignment initialization process has become more selective, which guarantees a significantly improved customer experience. With a smoother and more intuitive process, developers can expect more reliability when tracking objects that look alike.

When pointing a device to recognize an object, the improved Alignment Initialization guides the user towards a pre-defined viewpoint, rapidly kicking off the AR experience.

You’ll benefit from more stability when tracking objects and less jitter for Object Tracking experiences based on 3D Models (Expert Edition) and images (Professional and Expert Editions). 

Improved experience for Unity developers

The Wikitude Unity plugin makes it easy to create cross-platform apps with immersive augmented reality functionalities within a single development environment. 

Thanks to the feedback of our devoted community in the forum and customer discussions, we were able to identify and improve several features within the Unity Editor, delivering a smoother development experience for Wikitude users.

SDK 9.3 brings the following improvements:

  • Point Cloud preview is now properly destroyed upon unloading of a scene 
  • Fixed Live Preview webcam augmentation pose
  • Removed duplicate event setter in all tracker inspectors (Expert Edition only)

Additionally, our team took the opportunity to include compatibility for the popular NatCorder, a lightweight API that allows you to record videos in just a few steps in Unity. 

Other main features developers can now enjoy include recording MP4 videos, animated GIF images, JPEG image sequences, and WAV audio files, as well as recording any texture (or anything that can be rendered to texture) or any pixel data.

Support for iOS 14

It’s official: Apple has released its Golden Master to third-party developers, and Wikitude developers get access fast. SDK 9.3.1 includes full compatibility for iOS JavaScript, iOS Native, Unity (Professional and Expert Editions), Flutter, Cordova, and Xamarin.

How do I get SDK 9.3?

It’s easy! Just pick your SDK of choice, get a free trial, and start on your next augmented reality experience today.

New to Wikitude? Welcome! We have a free Wikitude trial version for testing purposes. 

If you’re already a user and like what you’re testing, reach out to our team to discuss your upgrade to SDK 9.3. 

Wikitude Download Page

Categories
News

Augmented World Expo 2016 was the biggest and best AR event ever – and here’s why

Couldn’t make it to Santa Clara for AWE 2016? Take five minutes to find out what you missed.

The biggest Augmented Reality event in the world has come and gone for 2016 – and there’s a whole lot to talk about for anyone interested in seeing the future. Wikitude was onsite to demonstrate its latest technology advancements – and, of course, see what everyone else in the AR/VR/wearable industry was up to.

https://www.youtube.com/watch?v=16KPWVPcDi0&feature=youtu.be

Show me the money

The first thing you noticed? Money. Not literally – but lurking quietly below the surface. Booths were bigger, presentations were slicker, and everything and everyone was more professional. It’s a sign that people in the know are putting investments on the line – with full expectations of real returns. We’ve had our first glimpses of the future – and it’s one full of possibilities for the AR world. Of course, that was also reflected in another metric – people. Says Wikitude’s Philipp Nagele: “There was just so much more happening this year! I think the show must have doubled in size since last year.”

Interactivity is evolving

The layman thinks of AR as a new way to consume content or information about the world – but what some of the visionaries in the field are most excited about is how it will change the ways we interface with computers. The computer has always required tactile interaction. Keyboards and mice have evolved into touch screens, but what’s next? The answer: nothing. Augmented reality devices will let our fingers, hands, and eyes interact with digital images in ways never seen before. Check out ODG’s R-7 smartglasses – for which we’ve designed an optimized version of the Wikitude SDK that makes sure AR scenarios work flawlessly on the ODG hardware. Read the official ODG / Wikitude partnership announcement.

We made a scene at the Auggies

What the Oscars are to the film industry, the Auggies are to the AR industry. Not only was Wikitude a finalist in the Best AR Tool category, Wikitude also stole the show by augmenting the Auggie itself this year – yep, we augmented the Auggie. After all, it makes sense, right? Watch Ori Inbar, Founder and CEO of Augmented World Expo, live on stage during the keynote and Auggie Awards ceremony.

See what we had to show off

For the people that work behind the scenes on AR apps, Wikitude had plenty to offer from CTO Philipp Nagele – most importantly, technical insight and an in-depth tutorial on our SDK, plus an exploration of the various complementary and powerful tools Wikitude offers, including Studio, Cloud Recognition, and the Plugins API, which lets the Wikitude SDK work with other libraries to create powerful, custom-built apps with features like QR and barcode recognition as well as OCR.

CEO Martin Herdina discussed one of the most important issues for everyone in the AR space: 3D recognition and tracking of objects, rooms, spaces, and structures. With the discipline still very much in its infancy, Herdina offered rare bits of real-world experience on best practices with devices already on the market.

And, in the parlance of the film industry – that’s a wrap. AWE 2016 was an incredible event, highly indicative of an incredible future. What will we be talking about next year? Whatever it is, it’s surely going to be even more exciting. See you in the near future!

Categories
Dev to Dev

3D Tracking for large-scale scenes

Update (August 2017): Object recognition, multi-target tracking and SLAM: Track the world with SDK 7

As part of our mission to keep augmenting the world around us, our tech team has been extending the capabilities of our 3D tracking. We started simple with the first public beta version of this feature a few months back, recognizing and tracking small-scale spaces. As things progress very quickly, Wikitude’s computer vision team is taking a step forward to track and map bigger environments, or “large-scale” scenes as we like to call them.

In this post, we will provide insight into Wikitude’s 3D tracking technology, share our next moves on the 3D tracking roadmap, and provide a hands-on demo of our large-scale feature: the WikiWings app for Android.

The Wikitude cross-platform 3D Tracking technology

Tracking objects and environments in 3D is a complex task, particularly when it is done without depth sensors, using just the single camera found in the majority of smartphones out there today. We humans are actually at an advantage, as we are equipped with two eyes (cameras) for sensing depth and understanding the third dimension.

The Wikitude team has been working tirelessly over the past three years to create a common base for recognizing and tracking three-dimensional objects, structures, and spaces. Market requirements range from recognizing objects a few centimeters in size up to positioning a device in a sequence of rooms and corridors stretching several hundred meters. There is no “one” computer vision algorithm out there that can support this broad set of requirements and use cases – yet. As the pioneer in the mobile augmented reality industry with a razor-sharp focus on technology, we will continue to address the demand for varying “flavours” of 3D recognition and tracking. The result of this approach is a strong common core for 3D tracking, which serves as the common base for a number of use cases.

The Wikitude 3D Tracking engine

In the past weeks, another building block of our 3D tracking has evolved, and today we are excited to share with you the second part of our journey to map and track the world around us.

From “small-scale” to “large-scale”

Wikitude’s first 3D tracking beta, released a couple of months ago, was the initial public release for mapping and tracking objects and environments on a small scale, such as an office desk, as previously described in our blog. We have gathered feedback from our developer community in the past weeks and worked on an updated version of our small-scale 3D tracking. Interested developers can request our updated (December 1st, 2015) version of the beta on our 3D tracking feature page.

As a next step, we are preparing our next public releases for tracking and mapping larger indoor spaces to navigate users, display augmentations in rooms or show points of interest inside buildings.

To demonstrate the basics of our large-scale 3D tracking feature, our development team used the Wikitude office as an initial test environment. The video below is a first hands-on example of the current capabilities of our SLAM-based 3D tracking applied to indoor navigation and localization.

The first step of the above demo is to identify objects and physical structures of the room that provide key feature points to be tracked. As the user moves around, feature points are captured and become the base for the map forming on the device (see the box in the bottom-right corner of the device screen in the video above). Once the algorithm has tracked key features of our office, it’s time to augment the scene! In the technical demo, we show a simple augmentation of an animated 3D model using our Native iOS SDK.

Large scale in action: use cases with the Wikitude 3D tracking

The demand for technology that “understands” and enhances the larger spaces around us (shopping malls, public buildings, airports, train stations, etc.) is tremendous. Oftentimes we wonder about the shortest way to a departure gate inside an airport or train station, where to get the best deals in a large shopping mall, how to find the nearest Starbucks, or where to locate a piece of machinery on a complex industrial site; even indoor gaming is frequently requested. Here are some of the many use cases where Wikitude 3D tracking can be applied.

Gaming

One of the coolest applications of our large-scale 3D tracking is the ability to make rooms highly interactive. This feature allows you to hunt for flying dragons in your living room, fight living creatures in your kitchen, or follow an alien in a shopping mall. Any room can become the scene of your game!

 



Architecture

What if we could see a design idea of a building structure in real time? Or plan industrial spaces before a single brick is moved? Architects can use Wikitude’s large-scale 3D tracking to display their plans on a client’s tablet, helping them easily visualise what things will look like upon completion of the project.

 



Indoor navigation (proof of concept)

Tracking and mapping indoor spaces enables powerful indoor navigation. Locating deals in the maze of a big shopping center or leading passengers to their boarding gates inside an airport is only the beginning.

 

 

Check out WikiWings, Wikitude’s large-scale demo app

The Wikitude large-scale capabilities will be available in our SDK soon; in the meantime, you can get an early taste by downloading the WikiWings demo app for Android.



(We have already shot quite a number of dragons at our office ;)

Update (August 2017): Start developing with Wikitude SDK 7

Getting started with SDK 7 has never been easier! Here’s how:

Help us spread the news on Twitter, Facebook and Linkedin using the hashtag #Wikitude.

Categories
AR features

Wikitude 3D Tracking (Instant Tracking)

2020 Update: Wikitude SDK now supports 3D tracking, Object tracking and 3D Model as input method (CAD). Learn more.

With this post, we are opening a new chapter on Wikitude’s journey towards augmenting the world! We are happy to share the first version of the all-new Wikitude 3D tracking technology. For our team, augmenting the rooms, spaces, and objects around us is a natural progression after mastering augmentations on 2D surfaces. Clearly, tracking in 3D is a much more complex task, as algorithms must be optimized for a variety of use cases and different conditions. With this release of our 3D tracking technology, developers will be able to map areas and objects of a rather small scale and place 3D content into the scene.

This is the first step of a sequence of releases Wikitude will roll out as our SLAM based 3D recognition and tracking technology evolves. The 3D tracking (instant tracking) feature is now available as a free trial and packaged in our SDK PRO products. This feature is currently available for the SDK 5 Native APIs only.

How does Wikitude 3D tracking work?

The Wikitude SDK tracks 3D scenes by identifying feature points of objects and environments. In feature-rich environments, the SDK maps the scene by displaying a point cloud over the different feature points.
As an example of how Wikitude 3D tracking works in a small scene, we will use the scenario of an office table. The richer the scene is in feature points, the better the mapping and tracking will be.


In order to track and map the scene the following steps should be taken:

  1. Launch the Wikitude sample app, which is included in the Native SDK (iOS and Android) download package
  2. Record a tracking map by slowly moving the device from one side of the scene to the other, covering the whole area
  3. 3D point clouds will appear on the screen capturing key feature points of the scenario
  4. Save the tracking map
  5. Load the map in your augmented reality experience to relocalize the scene and visualize the augmentation in real time.

    Download 3D Tracking Trial!

The video below demonstrates the steps described above.

https://youtu.be/GWs2KK-Pv0Q

Important note: It is not possible to use both 2D and 3D tracking within one experience. If you use 3D tracking, recognition and tracking of 2D markers will not be launched.

Developers can now try Wikitude 3D tracking together with the SDK 5 free trial. A trial key generated on Oct 15, 2015 or later is required. License keys from before that date will show an “Unlicensed feature” warning when you try to use 3D tracking. If you generated a license key earlier than October 15th, just revisit the license page and a new key will be generated automatically.

The Wikitude 3D Tracking is included in the Native SDK download packages for iOS and Android. In our documentation section you can find all details of this new cross-platform technology and follow the step-by-step setup guide to get started (Android, iOS). We can’t wait to get your feedback and are happy to answer any questions at sales@wikitude.com or in our developer forum.

And before we close this post, here is a sneak peek of what is coming next! Subscribe to our newsletter and stay tuned to our developments on our dedicated SLAM page.

Update 2017: 3D Instant Tracking now available for Multiple Platforms and Development Frameworks

The Wikitude SDK is available for both Android and iOS devices, as well as a number of leading augmented reality smart glasses.
Developers can choose from a wide selection of AR development frameworks, including Native API, JavaScript API or any of the supported extensions and plugins available.

Among the extensions based on Wikitude’s JavaScript API are Cordova (PhoneGap), Xamarin and Titanium. These extensions include the same features available in the JS API, such as location-based AR, image recognition and tracking and SLAM.
Unity is the sole plugin based on the Wikitude Native SDK and includes image recognition and tracking, SLAM, as well as a plugin API, which allows you to connect the Wikitude SDK to third-party libraries.
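For developers on the JavaScript API, a minimal instant tracking (SLAM) setup looks roughly like the sketch below (the asset path is a placeholder, and details vary by SDK version):

```typescript
declare const AR: any; // global namespace provided by the Wikitude JS API

// Sketch: SLAM-based instant tracking with the JavaScript API.
const instantTracker = new AR.InstantTracker({
  deviceHeight: 1.0, // estimated height of the device above ground, in meters
  onChangedState: (state: any) => console.log("Tracker state:", state),
});

const model = new AR.Model("assets/furniture.wt3", {
  scale: { x: 1, y: 1, z: 1 },
});

const trackable = new AR.InstantTrackable(instantTracker, {
  drawables: { cam: [model] },
});

// Switch from plane-finding initialization to tracking, e.g. on user tap.
function startTracking(): void {
  if (instantTracker.state === AR.InstantTrackerState.INITIALIZING) {
    instantTracker.state = AR.InstantTrackerState.TRACKING;
  }
}
```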

Categories
AR features

Shaping the future of technology with SLAM (simultaneous localization and mapping)

Update (August 2017): Object recognition, multi-target tracking and SLAM: Track the world with SDK 7

Wikitude’s SLAM (simultaneous localization and mapping) SDK is ready for download!

The world is not flat, so as our technology begins to spill out from behind the screen into the physical world, it is increasingly important that it interact with that world in three dimensions. To do this, we are waking up our technology, equipping it with sensors to give it the ability to feel out its surroundings. But seeing the world as we do is only half the solution.

One secret ingredient driving the future of a 3D technological world is a computational problem called SLAM.

What is SLAM?

SLAM, or simultaneous localization and mapping, is a series of complex computations and algorithms that use sensor data to construct a map of an unknown environment while simultaneously identifying where the sensor is located within it. This, of course, is a chicken-and-egg problem: for SLAM to work, the technology needs to build a map of its surroundings and then orient itself within that map to refine it. The concept of SLAM has been around since the late ’80s, but we are just starting to see some of the ways this powerful mapping process will enable the future of technology. SLAM is a key driver behind unmanned vehicles and drones, self-driving cars, robotics, and augmented reality applications.
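To see the chicken-and-egg loop in miniature, consider a toy one-dimensional sketch (illustrative numbers only): the estimate of the device’s position and the estimate of a landmark’s position are corrected together, each borrowing confidence from the other:

```typescript
// A toy 1D illustration of the SLAM feedback loop with Gaussian-ish noise
// and a single landmark. Real SLAM systems use far more sophisticated
// machinery (EKFs, pose graphs), but the simultaneous update is the same.
function randNoise(scale: number): number {
  return (Math.random() - 0.5) * 2 * scale;
}

let truePose = 0;
let estimatedPose = 0;
const trueLandmark = 10.0;
let landmarkEstimate = 10.5; // rough prior; the true landmark sits at 10.0

for (let step = 0; step < 100; step++) {
  // 1. Move: odometry is noisy, so the pose estimate drifts over time.
  const commanded = 0.1;
  truePose += commanded + randNoise(0.02);
  estimatedPose += commanded;

  // 2. Observe: measure the distance to the landmark (also noisy).
  const measured = trueLandmark - truePose + randNoise(0.05);

  // 3. Update: split the innovation between pose and map, refining
  //    both the localization and the mapping at the same time.
  const innovation = (landmarkEstimate - estimatedPose) - measured;
  estimatedPose += 0.5 * innovation;    // trust the map a little
  landmarkEstimate -= 0.5 * innovation; // trust the pose a little
}

console.log({ truePose, estimatedPose, landmarkEstimate });
```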

“As we are only at the very beginning of augmenting the physical world around us, visual SLAM is currently very well suited for tracking in unknown environments, rooms and spaces,” explains Andy Gstoll, CMO at Wikitude. “The technology continuously scans and “learns” about the environment it is in allowing you to augment it with useful and value-adding digital content depending on your location within this space.”

Prototype of Google’s self-driving car. Source: Google

SLAM use-cases

Google’s self-driving car is a good example of technology making use of SLAM. A project under Google X, Google’s experimental “moonshot” division, the driverless car, void of a steering wheel or pedals, uses high-definition, inch-precision mapping to navigate. More specifically, it relies on a range finder mounted on top of the car, which emits laser beams to create detailed 3D maps of its environment. It then combines these maps with existing maps of the world to drive itself autonomously.

From the roads to the skies, drones are also using SLAM to make sense of the world around them in order to add value. A great example of this is Skycall, a concept from the MIT research group Senseable City Laboratory. Skycall employs a drone to help students navigate around the MIT campus. The concept sees a student call a drone using a smartphone application and tell it where they want to go on campus. The drone then asks the student to “follow me” and guides them to the location. Using a combination of auto-pilot, GPS, sonar sensing, and Wi-Fi connectivity, the drone is able to sense its environment in order to guide users along pre-defined paths or to specific destinations requested by the user.

Track the World with Wikitude Augmented Reality SDK

Here at Wikitude, we are using SLAM in our product lineup to further the capabilities of augmented reality in the real world, which will open up new possibilities for the use of AR in large-scale and outdoor environments. From architectural projects to Hollywood film productions, SLAM technology will enable a variety of industries to position complex 3D models in tracked scenes, ensuring complete visualisation and optimal positioning in the environment. In the demo below, we show how SLAM was used to augment a 3D model of a church steeple that had been destroyed in World War II, allowing users to see what it looked like before the war.

“As the leader in augmented reality technology, it is a natural process for us to expand from 2D to 3D AR solutions as our mission is to augment the world, not only magazines and billboards. Visual SLAM and our very unique approach, you may call it our “secret sauce”, allow us to provide the state of the art AR technology our rapidly growing developer customer base demands,” says Gstoll.

The use of SLAM in our augmented reality product line will be the focus of our talk and demo at this year’s Augmented World Expo which takes place in Silicon Valley June 8-10. We encourage you to come by our booth to see this technology in action!

Start developing with an award-winning AR SDK

Getting started with SDK 7 has never been easier! Here’s how:

Multiple Platforms and Development Frameworks

The Wikitude SDK is available for both Android and iOS devices, as well as a number of leading augmented reality smart glasses.

Developers can choose from a wide selection of AR development frameworks, including Native API, JavaScript API or any of the supported extensions and plugins available.

Among the extensions based on Wikitude’s JavaScript API are Cordova (PhoneGap), Xamarin and Titanium. These extensions include the same features available in the JS API, such as location-based AR, image recognition and tracking and SLAM.
Unity is the sole plugin based on the Wikitude Native SDK and includes image recognition and tracking, SLAM, as well as a plugin API, which allows you to connect the Wikitude SDK to third-party libraries.


– – – –

This post was written by Tom Emrich, co-producer of the sixth annual Augmented World Expo, or AWE. AWE takes place June 8-10 at the Santa Clara Convention Center in California. The largest event of its kind, AWE brings together over 3,000 professionals and 200 participating companies to showcase augmented and virtual reality, wearable technology, and the Internet of Things.

Help us spread the news on Twitter, Facebook and Linkedin using the hashtag #Wikitude.