Categories
Dev to Dev

Augmented Reality Glossary: from A to Z

Augmented reality technology is becoming a driving force behind sweeping changes in how businesses operate. We created a comprehensive AR glossary with the most common terms and definitions to help you understand the lingo better.

Augmented Reality Glossary

A B C D E F G H I J K L M N O P Q R S T U V W X Y Z

A

Augmented Reality (AR)

Technology that uses software to superimpose various forms of digital content – such as videos, photos, links, and 3D models – onto the real environment, predefined images, or object targets. Realistic augmentation is achieved by making use of the device's camera and sensors.

AR Bridge

A feature that allows developers to integrate native AR SDKs such as ARKit and ARCore with advanced image and object tracking functionality from the Wikitude SDK. When enabled, the camera configured as AR Camera will be driven by AR Bridge, while the Drawables will be driven by the Wikitude SDK. The Wikitude SDK provides a built-in connection to these native SDKs through the Internal AR Bridge implementation. This is a ready-made solution that just needs to be enabled. As an alternative, a Plugin implementation can be chosen, which allows the developer to integrate with other positional tracking SDKs.

AR Overlay

The overlay principle is fundamental to augmented reality technology. An overlay occurs when content such as images, videos, or 3D models is superimposed over an Image or Object Target.

ARKit and ARCore

These are, respectively, Apple's and Google's AR development platforms. Fully integrable with the Wikitude SDK, ARKit and ARCore can be extended with features that are not natively available in those AR frameworks or that come with different quality standards (compared to the implementation in the Wikitude SDK).

Automatic Initialization

Automatic initialization is the default mode of the Wikitude SDK for both image and object targets. It is the most natural behavior for users: as they point the camera at the target, its position and orientation are detected automatically and tracking starts seamlessly.

 

Alignment Initialization

The alignment initializer is a visual element in the UI that shows the user from which viewpoint (perspective) the object can be recognized so that tracking can start. This feature is useful for objects that are hard to recognize automatically (usually objects with unclear or unreliable texture). An unreliable texture could belong to an object with varying colors or areas that keep changing (e.g. mud, stickers).

Assisted Reality 

Assisted Reality is a non-immersive visualization of various content (e.g. text, diagrams, images, simple videos). Considered part of the augmented reality spectrum, assisted reality is often delivered through wearable hardware and serves to enhance personal awareness in a given situation or scene.

Assisted Tracking

Assisted tracking describes a technique in which the performance of Image, Cylinder, and Object Trackers benefits from a native AR framework running in parallel. This results in increased stability for these trackers, even when the tracked targets move independently. Assisted tracking is enabled by default when using AR Bridge or AR Foundation.

B

C

Computer-Aided Design (CAD)

CAD (computer-aided design), also called computer-aided design and drafting (CADD), is a technology for design and technical documentation. In AR, CAD is a common asset format used as an input method for augmented reality experiences. The format digitizes and automates designs and technical specifications for built or manufactured products.

Combine Trackers

A feature that allows developers to combine different trackers, such as Positional Tracking from ARKit/ARCore, Image Tracking, and Object Tracking, in a single AR experience.

Computer Vision (CV)

Computer vision is the ability of machines to recognize, process, and understand digital images and objects, as well as scenes of the world around us. CV is one of the foundations of augmented reality and the core of Wikitude's AR SDK.

Cloud Recognition

Cloud Recognition is a cloud-based service that hosts predefined images online and allows recognition of many targets through a smartphone or smart glasses. This service allows fast, scalable, and reliable online recognition for ever-changing, dynamic, and large-scale projects.

Cylinder Tracker

A Cylinder Tracker (working with cylinder targets) is a special form of Image Tracker. With it, images wrapped around a cylindrical shape can be recognized and tracked, ranging from labels on a wine bottle to prints on a can or any other graphical content. Cylinder Recognition and Tracking extends the capabilities of the Wikitude SDK to recognize cylindrical objects. The feature is based on the Image Tracking module, but instead of recognizing planar images, it recognizes cylindrical objects, such as cans, through the images wrapped around them.

D

Drawable

An instance of an augmentation prefab that is instantiated in the scene when a target is detected.

E

Extended Tracking

Extended Tracking allows digital augmentations, attached to objects, scenes, or images, to persist in the user’s field of view even when the initial target is no longer in the frame. That is particularly useful when showing large augmentations that exceed the target. 

F

Field of view

The field of view is the area that can be observed, either directly by a person or through a device lens. Depending on the lens focus, the field of view can vary in size.

G

Geo AR

Location-based augmented reality allows developers to attach interactive and useful digital content to geo-based markers. Unlike typical marker-based AR features such as Image Tracking and Object Tracking, Geo AR does not need a physical target to trigger the AR experience. Wikitude has been developing augmented reality technology since 2008 and pioneered the field by launching the world's first location-based AR app for mobile.

H

Holograms 

A hologram is digital content formed by light that is projected on a transparent display or into open space. This type of content can be static or interactive, is usually three-dimensional, and is commonly used on smart glasses/mixed reality devices such as HoloLens.

HoloLens 

HoloLens is Microsoft's head-mounted display, also referred to as mixed reality smart glasses. It is a popular device for industrial use cases and is compatible with the Wikitude SDK.

I

Instant Targets

Instant Targets is a feature within AR Instant Tracking that allows end users to save and load their AR sessions. Digital notes, directions, visual augmentations, and the whole AR experience itself can be accessed and experienced by multiple users across devices and operating systems (iOS, Android, and UWP) at different points in time. This makes sharing and revisiting the AR experience easy and meaningful. Instant Targets also allows users to load, edit, and re-save the AR experience on the fly. This versatility makes the feature very practical for remote assistance and maintenance use cases.

Image Target

An Image Target is a known planar image that triggers an AR experience when recognized through the camera view of a smartphone or smart glasses. Targets are preloaded into the Wikitude system and associated with a target collection for recognition.

Image Recognition and Tracking

This feature enables the Wikitude SDK to recognize and track known images (single or multiple) to trigger augmented reality experiences. Recognition works best for images with the characteristics described in Wikitude's best-practice Image Target guideline. Suitable images can be found on product packaging, books, magazines, outdoor advertising, paintings, and other 2D targets.

Instant Tracking 

Instant Tracking technology makes it possible for AR applications to overlay interactive digital content onto physical surfaces without requiring a predefined marker to kick off the AR experience. Instead of recognizing a predefined target to start the tracking procedure, it initializes by tracking the physical environment itself. This markerless augmented reality is possible thanks to SLAM – Simultaneous Localization and Mapping technology.

J

K

L

M

Machine Learning

Machine learning is a subset of artificial intelligence that gives computer algorithms the ability to learn and continually improve their results based on the data collected.

Markup 

Markup is the method of composing a scene by combining augmentations, triggers, and other information.

N

O

Object Target 

Objects can be used as targets to trigger an AR experience upon recognition via the camera view. The target is a pre-recorded map of the object. Object Targets can be created in two different ways, using either images or 3D models as the input method. In both cases, the source material is converted into a Wikitude Object Target Collection, which is stored as a .wto file.

Object Recognition and Tracking

This feature enables the Wikitude SDK to recognize and track arbitrary objects for augmented reality experiences. Object Recognition and Tracking lets users detect predefined objects and entire scenes. Recognition works best for objects that have only a limited number of changing/dynamic parts. Suitable objects for recognition and tracking include toys, monuments, industrial objects, tools, and household supplies.

Optical character recognition (OCR) 

OCR, or optical character recognition, is the electronic conversion of images of handwritten or printed text into machine-encoded text.

P

Positional Tracker (from Native AR frameworks)

The Wikitude SDK can use native AR frameworks (like ARKit or ARCore) in parallel with other trackers, either through an existing connection to Unity's AR Foundation or through Wikitude's own AR Bridge. Positional tracking is the process by which a device continuously tracks its own position and orientation. It is sometimes referred to as World Tracking (Apple), Motion Tracking (Google), Head Tracking (VR headsets), or Instant Tracking (Wikitude Professional Edition).

Q

R

Range Extension

The Wikitude SDK Image Recognition engine can make use of HD camera frames to detect images from further away – roughly 3× the distance achievable without this mode (e.g. an A4-sized target can reach recognition distances of around 2.4 meters / 8 feet). This feature is called Image Recognition Range Extension and can be activated through a setting in the Image Tracker class.
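As a quick arithmetic sketch of the figures above (the ~0.8 m baseline for an A4 target is back-calculated from the stated 3× factor and 2.4 m result, so treat it as an assumption rather than a documented SDK value):

```python
# Rough estimate of recognition distance with Range Extension enabled.
# The 3x factor and the ~2.4 m A4 figure come from the text; the 0.8 m
# baseline is inferred from them (an assumption, not an SDK constant).

BASELINE_A4_DISTANCE_M = 0.8   # approx. recognition distance without HD frames
RANGE_EXTENSION_FACTOR = 3     # HD camera frames extend the range ~3x

def extended_range_m(baseline_m: float, factor: float = RANGE_EXTENSION_FACTOR) -> float:
    """Approximate recognition distance once range extension is enabled."""
    return baseline_m * factor

print(f"A4 target: ~{extended_range_m(BASELINE_A4_DISTANCE_M):.1f} m")  # ~2.4 m
```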

Real-world Scale

The Wikitude SDK can be configured to work with a real-world scale, which has the benefit that augmentations can be authored with a consistent scale that will be preserved when used on different targets.

Recognition

Recognition describes the process of finding an image or object in the camera viewfinder. For augmented reality purposes, it is not enough to identify only the object or its bounding box; the position and orientation of the object must be detected accurately as well. This capability significantly distinguishes AR recognition from other recognition or classification services. Recognition acts as the starting point for tracking the object in real time – this is also referred to as initialization. The Wikitude SDK has two recognition methods: Automatic Initialization and Alignment Initialization.

Remote Assistance

Remote Assistance, in the context of augmented reality, refers to platforms or applications offering features such as live video streaming with shared images and videos. Digital content is overlaid on the user's view of the real-world environment, making it essential for frontline and field workers in various industries.

S

Scene Recognition

The object recognition engine of the Wikitude SDK can be used to recognize and track larger structures that go beyond table-sized objects; the name Scene Recognition reflects this. The feature is ideal for augmented reality experiences that use rooms, building facades, squares, and courtyards as targets.

Software Development Kit (SDK)

A group of development tools used to build applications for a specific platform.

Spatial Computing

This term is defined as human interaction with a machine in which the machine retains and manipulates referents to real objects and spaces.

SLAM

SLAM is an abbreviation for Simultaneous Localization and Mapping. It is a technique that computer vision systems use to build a map of the physical world from visual data (usually in the form of tracked points) while simultaneously locating the device within it. Devices then use this understanding to interact appropriately with the environment.

SMART  

SMART is a seamless API within Instant Tracking that integrates ARKit, ARCore, and Wikitude's SLAM engine in a single cross-platform AR SDK. With it, developers do not have to deal with ARKit- or ARCore-specific code and can create their projects in JavaScript, Unity, Xamarin, or Cordova. SMART works by dynamically identifying the end user's device and deciding which tracking engine should be used in each particular case.
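A minimal sketch of the selection logic described above. The function and engine names here are illustrative only, not actual Wikitude API identifiers; the real SDK hides this choice from the developer entirely:

```python
# Illustrative sketch of SMART's runtime decision: prefer the platform's
# native AR engine when it is available on the device, otherwise fall back
# to Wikitude's own SLAM engine. Names are hypothetical.

def pick_tracking_engine(platform: str, native_ar_available: bool) -> str:
    """Choose a positional tracking engine for the current device."""
    if native_ar_available and platform == "ios":
        return "ARKit"
    if native_ar_available and platform == "android":
        return "ARCore"
    return "Wikitude SLAM"  # cross-platform fallback on unsupported devices

print(pick_tracking_engine("ios", True))       # ARKit
print(pick_tracking_engine("android", False))  # Wikitude SLAM
```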

T

Target

A target is an image (or object) together with associated extracted data that the tracker uses for recognition.

Target collection

An archive storing a collection of targets that can be recognized by the tracker. A target collection can come from two different resource types: plain (a regular ZIP file containing images in JPG or PNG format) or preprocessed (regular images converted into a WTC file – Wikitude Target Collection – for faster processing and optimized offline storage).
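A small sketch of assembling the plain variant (a regular ZIP of JPG/PNG images) using only the Python standard library. The file names and image bytes are placeholders, and the conversion to the optimized WTC format is done by Wikitude's tooling, which is not shown here:

```python
import io
import zipfile

# Pack {filename: image bytes} into an in-memory ZIP archive, i.e. the
# "plain" target collection format described above. Placeholder byte
# strings stand in for real JPG/PNG data.

def build_plain_target_collection(images: dict) -> bytes:
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
        for name, data in images.items():
            if not name.lower().endswith((".jpg", ".jpeg", ".png")):
                raise ValueError(f"unsupported target image format: {name}")
            zf.writestr(name, data)
    return buf.getvalue()

archive = build_plain_target_collection({"poster.jpg": b"...", "label.png": b"..."})
print(zipfile.ZipFile(io.BytesIO(archive)).namelist())  # ['poster.jpg', 'label.png']
```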

Tracking 

To anchor content to a specific object, the AR experience must “understand and follow” where that object is placed in the real world. This process is commonly referred to as tracking. Ideally, tracking happens in real time (at least every 33 ms, i.e. roughly 30 times per second) so that the object is followed accurately. Many trackers are available today, ranging from trackers that follow a face, hands, fingers, or images to those that follow generic objects. All of them are based on a reference that the software later recognizes.
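The 33 ms figure above is simply the frame interval of a ~30 fps camera feed; the correspondence can be checked directly:

```python
# Convert a tracking frame interval (in ms) into an update rate (Hz / fps).
# A 33 ms interval works out to roughly 30 updates per second, which is why
# real-time tracking is commonly tied to a 30 fps camera feed.

def update_rate_hz(frame_interval_ms: float) -> float:
    return 1000.0 / frame_interval_ms

print(round(update_rate_hz(33)))  # 30
```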

U

Unity3d

Unity is a cross-platform game engine developed by Unity Technologies.

V

W

Wikitude SDK

The Wikitude SDK script handles the initialization and destruction of the SDK and its native components. It additionally needs to be configured with the correct license key for your application. You can either buy a commercial license from the Wikitude web page or download a free trial license key to experiment with the SDK.

X

XR (Extended reality)

Extended Reality is an umbrella term that covers all computer-generated environments, either superimposed onto the physical world or creating immersive experiences for the user. XR includes AR, VR, MR, and any other emerging technologies of the same type.

Y

Z

Zebra 

Zebra's barcode scanning software can be combined with the Wikitude SDK via the Plugins API, allowing developers to integrate barcode identification into AR apps.

3D model based generation 

3D models of objects are a great source of information that can be used as a reference for recognizing and tracking an object for augmented reality experiences. The huge variety of 3D models on today's market – ranging from precise CAD/CAM data for manufacturing to runtime assets defined in FBX, glTF, or other formats – led Wikitude to launch this feature in closed beta. For more details, please contact Wikitude directly.

Would you like us to include other terms and concepts in Augmented Reality Glossary? Let us know.

Contact us

Categories
AR features

Object & Scene Tracking: Augmented Reality Use Cases and How-to

New: Create object tracking AR experiences using 3D models as an input method (such as CAD, glTF 2.0 and more). Get started with CAD tracking.

As augmented reality technology expands its capabilities it is important, as a developer, to be up to date with which AR features are currently available.

In this first edition of our new AR-technology series, Wikitude is presenting its main augmented reality features one by one. Starting off with Object & Scene Tracking AR.

Object & Scene Recognition and Tracking augmented reality

Object & Scene Tracking AR Use Cases

Before we start with formal introductions, here is a video containing different Object & Scene Tracking AR features being used in context.

As seen in the video, Object & Scene Tracking has a wide variety of use cases: maintenance and remote assistance, tourism, museums, gaming, consumer products, toys and more.

For this marker-based AR experience to trigger, it needs to detect a target. The target is a pre-recorded map of the object. Let’s break down the AR feature categories and talk about types of object targets that work well.

Object Tracking

This AR feature is used to recognize and track smaller arbitrary objects, superimposing digital content to produce augmented reality experiences.

Objects that can be pre-mapped as AR targets include but are not limited to:

  • Toys
  • Monuments and statues
  • Industrial objects
  • Tools
  • Household supplies

Scene Tracking

This AR feature is used to recognize and track larger structures that go beyond small-sized objects, as well as area targets and machinery. Digital content can be added in the form of annotations, videos, step-by-step instructions, links, directions, text, 3D augmentations, and more.



Structures that can be pre-mapped as AR targets include but are not limited to:

  • Factory floors and industrial sites
  • Large complex machinery
  • Large indoor spaces
  • Exhibition booths and showrooms
  • Rooms and apartments
  • Building façades
  • Museums
  • Squares, fountains, courtyards

Scene tracking enables the creation of continuous and persistent AR experiences for a scanned space or large-scale object. It identifies and tracks the features in your chosen scene/area to be accessed on a wide variety of phones, tablets, and AR smartglasses.

For optimal performance, scanned spaces should have little to no variation compared to the 3D map generated for the target. Extension or alterations of maps are possible to reflect changes in the environment (learn more).

Object Targets: how to create a 3D target map reference

In order to build Object Targets, it is necessary to create a pre-recorded map of the object that will then be used to trigger the AR experience.

The Object Target map creation workflow is simple:

  • Collect images* or 3D models of the object or scene (best practices)
  • Easily convert images into a Wikitude Object Target Collection (.wto) using Studio Editor
  • Use the .wto file in your AR app project

Once the reference map is done, developers still have the option of extending the map with images taken against different backgrounds and covering additional areas of the object to increase recognition accuracy.

For detailed instructions, access the how to create Object Targets section of the Wikitude Documentation.

*Keep in mind the new and improved object mapping process (SDK 8 and up) uses images or 3D models such as CAD, glTF 2.0, and others, as source material. Previous SDK versions use video-based materials instead.

Object & Scene Tracking technology is progressively evolving to include a wider variety of real-world environments and gadgets. Going beyond objects, it is even possible to use Extended Tracking to continue viewing the AR experience when the target is no longer in sight.

Download the Wikitude SDK

To start creating your AR projects with a free Wikitude SDK trial key, create a Wikitude account and download the platform of your choice. This account also gives you access to Studio Editor, our web-based tool that allows you to generate, manage and publish your AR experiences without coding.

For commercial purposes, access our store to choose your package or contact our team to discuss which license is the best match for your AR project.

Check the articles below to review other AR features in detail:

Categories
AR features

3D Model Tracking: Leveraging CAD Models for Augmented Reality

3D models are powerful assets to design, assemble, and visualize products. But they are also unique resources that can be transformed into immersive augmented reality experiences in and out of the factory floor. 

Object and scene tracking technologies connect people to places and objects around them. 

Whether you’re a toy producer looking to create augmented play or a manufacturing process engineer seeking workflow optimization, this technology can help you leverage existing CAD data and other 3D models to achieve your business goals.

What is CAD?

Widely used in architecture, engineering, construction, manufacturing, and product design, CAD (Computer-Aided Design) files allow engineers to build realistic models of machinery and general products. CAD also helps increase designers' productivity, improve design quality, strengthen communication through documentation, and create a manufacturing database.

CAD models can be produced by digital designers in-house or ordered from external partners specialized in CAD modeling. According to MJSA, CAD models can range from $150 to $3,800, depending on the item's complexity. Beyond construction and documentation, enterprises can make the most of their CAD investment by repurposing models for augmented reality-based solutions.

In addition to CAD, other popular 3D model formats can equally be leveraged to power augmented reality solutions – for example, glTF 2.0, FBX, OBJ, and more.

Leveraging CAD-based augmented reality in the industry sector

IT leaders across industries are embracing augmented reality as part of their digitization process.  

For the past five years, AR has proved its value by helping optimize workflows, increase safety and productivity, and facilitate knowledge sharing along the production chain.

CAD data and other 3D models can be used as the input method to create digital representations of the object or environment to be augmented. This can help optimize the production of these goods and machinery, build documentation around them, and support assembling, operating, inspecting, using, and maintaining them.

With AR graduating from the Gartner Hype Cycle, enterprises are ready to move from POCs to commercially relevant solutions tightly integrated into their business workflows.

In this sense, AR is shifting the way enterprises utilize their physical assets and environments by creating immersive workspaces that are layered with digital content. A few examples include:

  • Machines enhanced with step-by-step guides;
  • Factory floors embedded with AR navigation systems;
  • Mobile devices helping workers register and communicate issues across shifts;
  • CAD data repurposed for AR inspection and training;
  • Smart glasses remotely connecting experts to workers, and more.

Leveraging CAD-based augmented reality in the consumer-facing sector

    Augmented reality opens new opportunities for retail, consumer goods, toys, and entertainment industries to engage with their target audience. In fact, 40% of shoppers would pay more for a product if they could experience it in AR, according to research from Retail Perceptions.

    Similarly to the industrial sector, consumer-facing industries turn to 3D models to prototype, visualize, produce, assemble, and create instructions for a wide variety of goods.   




    This allows for fewer iterations and testing before a particular toy or electrical domestic appliance are ready for production. 

    AR allows companies to explore digital channels’ untapped potential to deliver information compellingly, enhance storytelling, and effectively capture customer attention. Here are a few examples deployed in the market today:

  • Augmented reality instruction manuals for home appliances such as coffee machines, vacuum cleaners, air conditioners, and more;
  • Augmented toys with interactive play, such as LEGO's Hidden Side and Disney's AR Mech-X4 robot;
  • Car feature demonstrations with AR, such as the Nissan LEAF;
  • Product enhancements and variations with AR view;
  • Product recognition and description with digital layering in retail stores;
  • Step-by-step guidance on how to use appliances or environment features;
  • Augmentation of buildings and façades to attract visitors (see Mumok);
  • Augmented reality art using urban landscapes, and more.

Advantages of using 3D model-based augmented reality

    An essential dependency of all these complex business solutions is accurately recognizing and tracking objects and scenes (areas). 

    CAD and other 3D models typically provide accurate information about the object, maximizing the potential for reliable AR experience. 

    Furthermore, using 3D models as an input method for AR expands the variety of objects and scenes to be recognized. Some advantages include:

  • 3D models provide accurate information about the object
  • They allow recognition and tracking of objects with varying colors
  • They can support texture-less objects with uniform surfaces and many reflective parts
  • They are easy to integrate into existing CAD/CAM workflows
  • They deliver robustness against lighting changes

Unlock the power of CAD + AR

    Interested in integrating AR in your process or product using CAD data or other 3D models? Wikitude’s beta program is open for testers! 

    Applicants will receive dedicated support from our expert engineers and win a free customized Object Target map for testing your solution. Start working with 3D Model-based Object Tracking today.



    No CAD or 3D models in-house? Try Wikitude’s image-based object and scene recognition and tracking technology. With this alternative, anyone is able to work with AR combined with physical assets.

    If you need help choosing which method is best for your particular solution, check out our handy AR tracking guide.

    Categories
    AR features

    Scene Recognition and Tracking: Augmented Reality Use Cases and How-to

    From large industrial machinery to showrooms, scene tracking enables the creation of continuous and persistent AR experiences for areas and large-scale objects.

    As the complexity of augmented reality use cases grows, computer vision technology evolves to fulfill new requirements and expand the understanding of the world around us.

    Catalyzed by COVID-19, immersive outdoor environments, homes, and workspaces have gained significant momentum, opening the possibility for people to connect remotely and digitally enhance their surroundings.

    A crucial functionality in augmented reality and spatial computing is Scene Tracking – a unique feature that can be used to track pre-determined environments and large-scale objects.

    By identifying reference points and features in your chosen scene or area, augmented reality content can be displayed and accessed on a wide variety of phones, tablets, and AR smartglasses.

    Scene Recognition and Tracking augmented reality

    Scene Tracking AR feature and supported targets

    Scene Tracking, sometimes referred to in the market as Area Targets, empowers a wide variety of use cases: maintenance and remote assistance, training and onboarding, and digitalization of the visitor/user experience in the context of retail, tourism, museums, gaming, and more.

    In order to trigger this marker-based AR, the device must detect a known target; therefore, mapping the object or scene is needed.

    Structures that can be mapped as AR targets include but are not limited to:

    • Exhibition booths and showrooms
    • Selected environments in retail stores, rooms, and apartments
    • Large complex machinery on factory floors
    • Building façades
    • Sections of indoor spaces
    • Museums
    • Squares, fountains, feature-rich courtyards

    Scene Tracking Use Cases

    This AR feature is used to recognize and track larger structures that go beyond small-sized objects, as well as area targets and machinery. Digital content can be added in the form of annotations, videos, step-by-step instructions, links, directions, text, 3D augmentations, and more.

    Enterprise and Industrial Setting

    Scene Tracking can help digitalize workspaces by providing frontline workers with access to immersive workflows and real-time information on the industrial setting. On-demand AR instructions and assistance on the factory floor help staff work faster and safer.

    Navigation on the factory floor can help teams involved in multiple procedures to work more intuitively. In addition, step-by-step guidance and on-the-fly virtual notes can be attached to machines, facilitating knowledge transfer and streamlining communication across shifts.

    AR-enabled training and onboarding can help companies save time and money when welcoming a new workforce by guiding technicians through tasks and helping them connect with remote experts via video calls.

    AR-Enabled User Experience

    Scene Tracking is a powerful tool to connect people to places. Hotels, museums and retail stores can attract visitors by digitally enhancing the exterior of the building or store façade with a digital sneak peek at what’s inside. Promotion, gamification, and additional AR touchpoints incentivize visitors to come inside.

    Tourist destinations can offer information on demand in multiple languages, so visitors can, for example, learn about historical sites, monuments, squares, fountains, and more.

    Object Targets with scenes: how to create a 3D target map reference

    In order to build Object Targets based on scenes, it is necessary to create a pre-recorded map of the scene that will then be used to trigger the AR experience.

    The map creation workflow is simple:

    • Collect images* or 3D models of the object or scene (best practice guide)
    • Convert images into a Wikitude Object Target Collection (.wto) using Studio Editor
    • Use the .wto file in your AR app project

    For optimal performance, scanned spaces should have little to no variation compared to the 3D map generated for the target. Extensions or alterations of maps are possible to reflect changes in the environment (learn more) – developers also have the option of extending the map with images taken against different backgrounds and covering additional areas of the object to increase recognition accuracy.

    For detailed instructions, access the how to create Object Targets section of the Wikitude Documentation.

    *Keep in mind the new and improved object mapping process (SDK 8 and up) uses images as source material. Previous SDK versions use video-based materials instead.

    Get Started with Scene Recognition and Tracking

    To start creating your AR projects with a free Wikitude SDK trial key, create a Wikitude account and download the platform of your choice.

    This account also gives you access to Studio Editor, our web-based tool that allows you to generate, manage and publish your AR experiences without coding.

    For commercial purposes, access our store to choose your package or contact our team to discuss which license is the best match for your AR project.

    Check the articles below to review other AR features in detail:

    Categories
    Toys & Games

    Augmented Reality Toys

    Building the next generation of immersive play.

    Augmented Reality Toy experience feat. Remote Control Carrera® Car

    Augmented Reality is shaping the future of play. From physical toys to card games, 2020 has seen a rapid expansion of AR-powered experiences for kids.

    To demonstrate how toy manufacturers can leverage the technology, Wikitude created an impressive Augmented Reality Toy experience for a remote control car. It not only shows the reliability of Object Tracking but also introduces innovative simultaneous AR Tracking functions (Object + Image + Positional Tracking).

    What is real and what is digitally augmented?

    The objective of the toy racer is to complete two full laps in the least amount of time without surpassing the digital boundaries of the augmented track.

    Augmented Reality Toy Experience Analysis

    Technology Used in the AR Experience

    The interaction between the real-world toy car and digital elements is powered by Wikitude Object and Image Tracking augmented reality combined with ARKit and HoloLens Positional Tracking (also compatible with tablets and smartphones).

    Augmented Reality Demo with Simultaneous AR Tracking: Object + Image + Positional

    Wikitude Object Tracking

    Object Target: Carrera® Remote Control Toy Car

    With the advances in Wikitude Object Tracking technology, it is possible to create reliable AR experiences that perform exceptionally well even under challenging circumstances.

    The object is detected and continues to be precisely tracked. The interactive augmentations appear in accordance with the toy's position, speed, and movements.

    Augmented Reality Demo: Object Tracking

    Object augmentations to take notice of:

    • blue target indicating that the object is being tracked
    • exhaust fumes released when the car is parked
    • exhaust flames that react according to direction and speed
    • tire marks on the asphalt

    Wikitude Image Tracking

    Image Target: Start/Finish Line

    The image target plays an important role in the AR demo as it functions as an anchor point to properly position and lock the digital race track in the desired place.

    The augmented finish line board also records lap times and lap turns. A ranking board appears at the end of the race as well.

    Augmented Reality Demo: Image Tracking

    External Positional Tracking 

    For this demo, ARKit and HoloLens positional tracking was used. Keep in mind, however, that other Positional Tracking providers like ARCore or our own Wikitude Instant Tracking SLAM technology could have been used as well.

    ARKit Positional Tracking: Digital Race Track

    The augmented race track and other digital elements remain steady on the physical asphalt.

    Augmented Reality Demo: ARKit Positional Tracking

    Microsoft HoloLens Positional Tracking

    Wikitude has optimized its AR SDK to support Microsoft HoloLens 1 mixed reality headset devices. In this demo it allows racers to visualize the AR experience in real time.

    Augmented Reality Demo: Microsoft HoloLens Positional Tracking

    Interactive Digital Augmentations

    As seen in the images, the race track has a series of extra digital components apart from the track boundaries itself. 

    Observe how the toy car knocks down piles of tires, flags, signs and any digital content in its path.

    Augmented Reality Demo: Interactive 3D augmentations Tracking

    AR Features Running Simultaneously

    Below you can see all AR components – object, image and the positional feature being displayed, tracked and working simultaneously.

    Check how the flags on the track turn green as the car passes by.

    Augmented Reality Demo: Wikitude Object Tracking + Wikitude Image Tracking + External Positional Tracking

    360° Object + Image + Positional AR Tracking

    Tracking and augmentations remain steady and persistent at any angle.

    360 degree Object + Image + Positional Tracking

    First seen at AWE – Live AR Demonstration

    Many of the attendees of AWE USA 2019 got to see this awesome AR toy experience in action at the Wikitude booth, as well as during Wikitude CTO Phil Nagele's talk.

    AWE USA 2019 - Live AR Demonstration

    Interested in creating an AR project of your own? Talk to one of our specialists and learn how to get started.