Shaping the future of technology with SLAM (simultaneous localization and mapping)

Paula

Update (August 2017): Object recognition, multi-target tracking and SLAM: Track the world with SDK 7

Wikitude’s SLAM (simultaneous localization and mapping) SDK is ready for download!

The world is not flat, and as our technology begins to spill out from behind the screen into the physical world, it is increasingly important that it interact with that world in three dimensions. To do this, we are waking up our technology, equipping it with sensors that let it feel out its surroundings. But seeing the world as we do is only half the solution.

One secret ingredient driving the future of a 3D technological world is a computational problem called SLAM.

What is SLAM?

SLAM, or simultaneous localization and mapping, is a family of algorithms that use sensor data to construct a map of an unknown environment while simultaneously working out where the sensor is located within that map. This is, of course, a chicken-and-egg problem: an accurate map is needed to localize, and an accurate position is needed to build the map. SLAM systems therefore build the map incrementally, re-estimating the device’s position with every new sensor reading and using that position to refine the map. The concept of SLAM has been around since the late 1980s, but we are just starting to see some of the ways this powerful mapping process will enable the future of technology. SLAM is a key driver behind unmanned vehicles and drones, self-driving cars, robotics, and augmented reality applications.
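
To make the chicken-and-egg loop concrete, here is a deliberately simplified sketch of one SLAM step: predict the pose from odometry, add newly seen landmarks to the map, and use re-observed landmarks to correct both the pose and the map. The gain values and the averaging scheme are illustrative assumptions only; real systems use an extended Kalman filter, a particle filter, or pose-graph optimization instead.

```typescript
// A deliberately simplified sketch of the SLAM loop, for intuition only.
// Not a production algorithm: real SLAM uses an EKF, a particle filter,
// or pose-graph optimization, and models uncertainty explicitly.

type Pose = { x: number; y: number };                       // robot position estimate
type Observation = { id: number; dx: number; dy: number };  // landmark seen relative to the robot

const landmarks = new Map<number, Pose>();  // the map being built: landmark id -> position
let pose: Pose = { x: 0, y: 0 };            // the device starts at the origin by definition

function slamStep(odometry: Pose, observations: Observation[], gain = 0.3): void {
  // 1. Predict: dead-reckon the new pose from odometry (this estimate drifts).
  pose = { x: pose.x + odometry.x, y: pose.y + odometry.y };

  for (const obs of observations) {
    const known = landmarks.get(obs.id);
    if (!known) {
      // 2a. Mapping: a new landmark is placed where the current pose implies it is.
      landmarks.set(obs.id, { x: pose.x + obs.dx, y: pose.y + obs.dy });
    } else {
      // 2b. Localization: a re-observed landmark corrects the drifting pose.
      // The innovation is the gap between where the map says the landmark is
      // and where this observation implies it is; we split the correction
      // between the pose and the map (the "simultaneous" part of SLAM).
      const ex = known.x - (pose.x + obs.dx);
      const ey = known.y - (pose.y + obs.dy);
      pose.x += gain * ex;
      pose.y += gain * ey;
      known.x -= (1 - gain) * ex * 0.1;  // refine the map a little as well
      known.y -= (1 - gain) * ey * 0.1;
    }
  }
}

// Example: the robot moves one meter in x and re-observes landmark 7.
slamStep({ x: 1, y: 0 }, [{ id: 7, dx: 2.1, dy: 0.4 }]);
```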

“As we are only at the very beginning of augmenting the physical world around us, visual SLAM is currently very well suited for tracking in unknown environments, rooms and spaces,” explains Andy Gstoll, CMO at Wikitude. “The technology continuously scans and ‘learns’ about the environment it is in, allowing you to augment it with useful and value-adding digital content depending on your location within this space.”

Prototype of Google’s self-driving car. Source: Google

SLAM use-cases

Google’s self-driving car is a good example of technology making use of SLAM. A project under Google X, Google’s experimental “moonshot” division, the driverless car, void of a steering wheel or pedals, uses high-definition, inch-precision mapping to navigate. More specifically, it relies on a range finder mounted on the roof of the car, which sweeps laser beams across the surroundings to create detailed 3D maps of its environment. The car then combines these freshly built maps with existing maps of the world to drive itself autonomously.

From the roads to the skies, drones are also using SLAM to make sense of the world around them in order to add value. A great example is Skycall, a concept from the MIT research group Senseable City Laboratory, which uses a drone to help students navigate the MIT campus. A student calls a drone with a smartphone application and tells it where they want to go; the drone then asks the student to follow it and guides them to the destination. Using a combination of auto-pilot, GPS, sonar sensing and Wi-Fi connectivity, the drone is able to sense its environment and guide users along pre-defined paths or to the specific destinations they request.

Track the World with Wikitude Augmented Reality SDK

Here at Wikitude, we are using SLAM in our product lineup to extend the capabilities of augmented reality, opening up new possibilities for AR in large-scale and outdoor environments. From architectural projects to Hollywood film productions, SLAM technology will enable a variety of industries to position complex 3D models in tracked scenes, ensuring that they are fully visualized and correctly placed in the environment. In the demo below, we show how SLAM helped us augment a 3D model of a church steeple that had been destroyed in World War II, allowing users to see what it looked like before the war.

“As the leader in augmented reality technology, it is a natural process for us to expand from 2D to 3D AR solutions, as our mission is to augment the world, not only magazines and billboards. Visual SLAM and our very unique approach, you may call it our ‘secret sauce’, allow us to provide the state-of-the-art AR technology our rapidly growing developer customer base demands,” says Gstoll.

The use of SLAM in our augmented reality product line will be the focus of our talk and demo at this year’s Augmented World Expo which takes place in Silicon Valley June 8-10. We encourage you to come by our booth to see this technology in action!

Start developing with an award-winning AR SDK

Getting started with SDK 7 has never been easier! Here’s how:

Multiple Platforms and Development Frameworks

The Wikitude SDK is available for both Android and iOS devices, as well as a number of leading augmented reality smart glasses.

Developers can choose from a wide selection of AR development frameworks, including the Native API, the JavaScript API, or any of the supported extensions and plugins.

Among the extensions based on Wikitude’s JavaScript API are Cordova (PhoneGap), Xamarin and Titanium. These extensions include the same features available in the JS API, such as location-based AR, image recognition and tracking, and SLAM.

Unity is the sole plugin based on the Wikitude Native SDK and includes image recognition and tracking, SLAM, as well as a plugin API which allows you to connect the Wikitude SDK to third-party libraries.
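
For developers starting with the JavaScript API, the sketch below shows roughly what SLAM-based instant tracking looks like in an SDK 7 AR experience. The class and property names (AR.InstantTracker, AR.InstantTrackable, AR.InstantTrackerState, the deviceHeight option) follow our reading of the SDK 7 samples, and the model path is a made-up placeholder, so treat the details as assumptions and verify them against the current Wikitude documentation.

```typescript
// Sketch of SLAM-based instant tracking with the SDK 7 JavaScript API.
// Names follow the SDK 7 samples as we recall them; treat them as
// assumptions and verify against the current Wikitude documentation.
declare const AR: any; // the AR namespace is provided by the Wikitude SDK at runtime

// The instant tracker runs visual SLAM on the camera feed: no marker or target image needed.
const tracker = new AR.InstantTracker({
  deviceHeight: 1.0, // assumed height of the device above the ground, in meters
  onChangedState: (state: number) => {
    if (state === AR.InstantTrackerState.TRACKING) {
      console.log("SLAM map initialized; now tracking the environment");
    }
  },
});

// A 3D model (hypothetical asset path) to anchor into the tracked scene.
const steeple = new AR.Model("assets/steeple.wt3", {
  scale: { x: 0.05, y: 0.05, z: 0.05 },
});

// The trackable ties the drawable to the SLAM-tracked scene.
const trackable = new AR.InstantTrackable(tracker, {
  drawables: { cam: [steeple] },
});

// Switch from initialization to tracking once the user has aligned the starting pose.
function startTracking(): void {
  tracker.changeState(AR.InstantTrackerState.TRACKING);
}
```

The two-state design (initialization first, then tracking) reflects how instant tracking typically hands control to the user: the SLAM map only starts being built and used for tracking once the user has confirmed the starting pose.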

Wikitude Augmented Reality SDK development frameworks

– – – –

This post was written by Tom Emrich, co-producer of the sixth annual Augmented World Expo (AWE). AWE takes place June 8-10 at the Santa Clara Convention Center in California. The largest event of its kind, AWE brings together over 3,000 professionals and 200 participating companies to showcase augmented and virtual reality, wearable technology and the Internet of Things.

Help us spread the news on Twitter, Facebook and LinkedIn using the hashtag #Wikitude.
