Categories
AR features

Markerless AR: how and where to use it

Markerless AR functionality allows developers to create digital applications that overlay interactive augmentations on physical surfaces, without the need for a marker.

We can all agree that computer vision is a key part of the future of augmented reality, mobile or not. That’s why we’ve been working so hard on our Instant Tracking feature over the past few years. If you are not yet familiar with it, Instant Tracking lets users place digital augmentations on real-world surfaces anytime, anywhere, without having to scan any image.

Instant Tracking is also the first feature using Wikitude’s Simultaneous Localization and Mapping (SLAM) technology. SLAM identifies the user’s precise location within an unknown environment by simultaneously mapping the area during the Instant Tracking AR experience.

This allows developers to easily map environments and display augmented reality content without the need for target images or objects (markers). Wikitude’s SLAM markerless augmented reality tracking is one of the most versatile cross-platform 3D-tracking systems available for mobile.

The Wikitude SDK’s SLAM-based Instant Tracking can also be dynamically connected to ARKit and ARCore through Wikitude SMART. SMART is a seamless API that integrates ARKit, ARCore, and Wikitude’s SLAM engine in a single cross-platform AR SDK.

This feature helps developers create their projects in JavaScript, Unity, Xamarin, PhoneGap, or Flutter without needing to deal with ARKit/ARCore-specific code. SMART dynamically identifies the end user’s device and decides whether ARKit, ARCore, or Wikitude SLAM should be used in each particular case.
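To make this concrete, here is an illustrative Instant Tracking snippet in the style of the Wikitude JavaScript API. Treat it as a sketch rather than a drop-in sample: the asset path is a placeholder, and the code only runs inside a Wikitude AR World, where the `AR` namespace is provided by the SDK.

```js
// Markerless Instant Tracking sketch (Wikitude JavaScript API).
// Runs inside a Wikitude AR World; "assets/chair.wt3" is a placeholder.
var tracker = new AR.InstantTracker();

// A 3D model to anchor onto the detected surface.
var chair = new AR.Model("assets/chair.wt3", {
    scale: { x: 0.05, y: 0.05, z: 0.05 }
});

new AR.InstantTrackable(tracker, {
    drawables: { cam: chair }
});

// Once the user confirms placement (e.g. a button tap), switch the tracker
// from its initialization state to tracking. With SMART, the SDK decides
// internally whether ARKit, ARCore, or Wikitude SLAM does the work.
function onPlacementConfirmed() {
    tracker.state = AR.InstantTrackerState.TRACKING;
}
```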

Here are some interesting use cases for Wikitude’s markerless augmented reality feature:

Where you need to grab someone’s attention, immediately

Getting someone to look at your product is the first step of a good marketing strategy. For both marketing and retail implementations, augmented reality offers immense opportunity to do that. It’s new, easy to understand, and impossible to ignore.

Do you know when most people alive today first saw the concept of augmented reality (even if they probably didn’t recognize it as such at the time)? In this scene with Michael J. Fox in Back to the Future Part II.

Maybe it’s not as slick and refined as today’s visual effects, but back in 1989 it was certainly surprising – and attention-grabbing. That’s still part of how AR works today – especially over the next couple of years, while widespread adoption continues to grow. The most important thing to remember? If you truly surprise someone, they’ll be sure to tell everyone they know about it.

The potential here for retail outlets (online and in the physical world) is clear – customers can interact directly with the product and come as close as they can to touching and feeling it without having it in their actual hands.

Even more opportunity exists in gaming and entertainment – check out how Socios.com gives sports fans an opportunity to collect crypto tokens, earn reward points, and unlock experiences with their favorite sports clubs.

When you need to add one small piece of information

AR is at its best when it does just what it says: augment. AR can turn your phone into a diagnostic tool of unparalleled power – perceptive and reactive, hooked into the infinite database of the world wide web.

Adding a few small, easy-to-understand bits of information to a real scene can help our minds process information much more quickly – and clearly. Here’s a great example in which an automobile roadside assistance service helps a customer diagnose a problem – without actually being anywhere near the roadside.

The opportunities here are endless – factory floor managers, warehouse workers, assembly-line technicians – anyone who needs real-time information, in a real-world setting. It’s a huge technological leap forward for the enterprise – just like when touchscreen mobile devices with third-party apps first appeared.

Where you need to show a physical object in relation to other objects

There’s a reason this idea keeps coming up – it solves a real-world problem, instantly, today.
Architecture, interior design – any creative profession that works in real world spaces can take advantage of augmented reality.

From visualizing artworks to virtually fitting furniture in your living room, the benefit here is clear – we understand how a potential real-world space will look and function much better when we can actually see the objects we’re thinking of putting there – while we are there.

This last bit is why mobile AR is so important – if we want to make AR a practical technology, we have to be able to use it where we live, work, build and play, and we don’t want to drag a computer (or at least, a computer larger than your smartphone) everywhere we go to do it.

Here’s an example of placing designer clothing in a real-world setting, created with the ARe app and powered by Wikitude:

Opening up endless opportunities to showcase products of any size (from industrial machines to cars and jewellery), markerless AR enables a new level of shopping experience that can take place directly on the customer’s mobile device at any time. Options such as a 360-degree product viewer, custom feature annotations, and 24/7 access allow customers to configure and compare products, communicate with merchants, and shop in the comfort of their homes.

So be creative in your AR applications – and do something surprising. Developers all over the world are already using Wikitude technology to build AR apps that grab attention and customers – and it’s already making their lives easier.

Markerless AR infographic

Want to dig deeper? We’ve collected a few of our favorite use cases in the infographic and a list of apps already using the technology in this YouTube playlist. Have a look and see what inspires you!

Looking to get started with Markerless AR?

Interested in creating an AR project of your own?
Talk to one of our specialists and learn how to get started.

Contact The Wikitude Team

Object & Scene Tracking: Augmented Reality Use Cases and How-to

New: Create object tracking AR experiences using 3D models as an input method (such as CAD, glTF 2.0 and more). Get started with CAD tracking.

As augmented reality technology expands its capabilities, it is important for developers to stay up to date on which AR features are currently available.

In this first edition of our new AR-technology series, Wikitude is presenting its main augmented reality features one by one. Starting off with Object & Scene Tracking AR.

Object & Scene Recognition and Tracking augmented reality

Object & Scene Tracking AR Use Cases

Before we start with formal introductions, here is a video containing different Object & Scene Tracking AR features being used in context.

As seen in the video, Object & Scene Tracking has a wide variety of use cases: maintenance and remote assistance, tourism, museums, gaming, consumer products, toys and more.

For this marker-based AR experience to trigger, it needs to detect a target. The target is a pre-recorded map of the object. Let’s break down the AR feature categories and talk about types of object targets that work well.

Object Tracking

This AR feature is used to recognize and track smaller arbitrary objects, superimposing digital content to produce augmented reality experiences.

Objects that can be pre-mapped as AR targets include but are not limited to:

  • Toys
  • Monuments and statues
  • Industrial objects
  • Tools
  • Household supplies

Scene Tracking

This AR feature is used to recognize and track larger structures that go beyond small-sized objects, including area targets and machinery. Digital content can be added in the form of annotations, videos, step-by-step instructions, links, directions, text, 3D augmentations, and more.



Structures that can be pre-mapped as AR targets include but are not limited to:

  • Factory floors and industrial sites
  • Large complex machinery
  • Large indoor spaces
  • Exhibition booths and showrooms
  • Rooms and apartments
  • Building façades
  • Museums
  • Squares, fountains, courtyards

Scene tracking enables the creation of continuous and persistent AR experiences for a scanned space or large-scale object. It identifies and tracks the features in your chosen scene/area to be accessed on a wide variety of phones, tablets, and AR smartglasses.

For optimal performance, scanned spaces should have little to no variation compared to the 3D map generated for the target. Extension or alterations of maps are possible to reflect changes in the environment (learn more).

Object Targets: how to create a 3D target map reference

In order to build Object Targets, it is necessary to create a pre-recorded map of the object that will then be used to trigger the AR experience.

The Object Target map creation workflow is simple:

  • Collect images* or 3D models of the object or scene (best practices)
  • Easily convert images into a Wikitude Object Target Collection (.wto) using Studio Editor
  • Use the .wto file in your AR app project
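The steps above can be sketched in code. The snippet below, in the style of the Wikitude JavaScript API, loads a .wto collection and attaches an augmentation to recognized objects; asset names are placeholders, and the code only runs inside a Wikitude AR World where the SDK provides the `AR` namespace.

```js
// Load the Object Target Collection (.wto) produced by Studio Editor.
var targetCollection = new AR.TargetCollectionResource("assets/firetruck.wto");

var tracker = new AR.ObjectTracker(targetCollection, {
    onError: function (error) {
        // e.g. surface a loading/parsing problem to the user
    }
});

// A 3D augmentation shown while the physical object is tracked.
var overlay = new AR.Model("assets/annotations.wt3");

// "*" reacts to every target in the collection; a specific target
// name can be used instead.
new AR.ObjectTrackable(tracker, "*", {
    drawables: { cam: overlay },
    onObjectRecognized: function () { /* e.g. show UI hints */ },
    onObjectLost: function () { /* e.g. hide UI hints */ }
});
```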

Once the reference map is done, developers still have the option of extending it with images taken against different backgrounds or covering additional areas of the object, to increase recognition accuracy.

For detailed instructions, access the how to create Object Targets section of the Wikitude Documentation.

*Keep in mind the new and improved object mapping process (SDK 8 and up) uses images or 3D models such as CAD, glTF 2.0, and others, as source material. Previous SDK versions use video-based materials instead.

Object & Scene Tracking technology is progressively evolving to include a wider variety of real-world environments and gadgets. Going beyond objects, it is even possible to use Extended Tracking to continue viewing the AR experience when the target is no longer in sight.

Download the Wikitude SDK

To start creating your AR projects with a free Wikitude SDK trial key, create a Wikitude account and download the platform of your choice. This account also gives you access to Studio Editor, our web-based tool that allows you to generate, manage and publish your AR experiences without coding.

For commercial purposes, access our store to choose your package or contact our team to discuss which license is the best match for your AR project.

Check the articles below to review other AR features in detail:


3D Model Tracking: Leveraging CAD Models for Augmented Reality

3D models are powerful assets to design, assemble, and visualize products. But they are also unique resources that can be transformed into immersive augmented reality experiences in and out of the factory floor. 

Object and scene tracking technologies connect people to places and objects around them. 

Whether you’re a toy producer looking to create augmented play or a manufacturing process engineer seeking workflow optimization, this technology can help you leverage existing CAD data and other 3D models to achieve your business goals.

What is CAD?

Widely used in architecture, engineering, construction, manufacturing, and product design, CAD (Computer-Aided Design) files allow engineers to build realistic models of machinery and general products. CAD also helps increase designers’ productivity, improve design quality, improve communication through documentation, and create a manufacturing database.

CAD models can be produced by digital designers in-house or ordered from external partners specializing in CAD modeling. According to MJSA, CAD models can range from $150 to $3,800, depending on the item’s complexity. Beyond construction and documentation, enterprises can make the most of their CAD investment by repurposing models for augmented reality-based solutions.

In addition to CAD, other popular 3D model formats can equally be leveraged to power augmented reality solutions – for example, glTF 2.0, FBX, OBJ, and more.

Leveraging CAD-based augmented reality in the industry sector

IT leaders across industries are embracing augmented reality as part of their digitization process.  

Over the past five years, AR has proven its value by helping optimize workflows, increase safety and productivity, and facilitate knowledge sharing along the production chain.

CAD data and other 3D models can be used as the input method to create digital representations of the object or environment to be augmented. This can help optimize the production of these goods and machinery, build documentation around them, and support assembling, operating, inspecting, and maintaining them.

With AR graduating from the Gartner Hype Cycle, enterprises are ready to move from proof of concept (POC) to commercially relevant solutions tightly integrated into business workflows.

In this sense, AR is shifting the way enterprises utilize their physical assets and environments by creating immersive workspaces that are layered with digital content. A few examples include:

  • Machines enhanced with step-by-step guides;
  • Factory floors embedded with AR navigation systems;
  • Mobile devices helping workers register and communicate issues across shifts;
  • CAD data repurposed for AR inspection and training;
  • Smart glasses remotely connecting experts to workers, and more.

Leveraging CAD-based augmented reality in the consumer-facing sector

    Augmented reality opens new opportunities for retail, consumer goods, toys, and entertainment industries to engage with their target audience. In fact, 40% of shoppers would pay more for a product if they could experience it in AR, according to research from Retail Perceptions.

    Similarly to the industrial sector, consumer-facing industries turn to 3D models to prototype, visualize, produce, assemble, and create instructions for a wide variety of goods.   




    This allows for fewer iterations and testing before a particular toy or electrical domestic appliance are ready for production. 

    AR allows companies to explore digital channels’ untapped potential to deliver information compellingly, enhance storytelling, and effectively capture customer attention. Here are a few examples deployed in the market today:

  • Augmented reality instruction manuals for home appliances such as coffee machines, vacuum cleaners, air conditioners, and more;
  • Augmented toys with interactive play, such as LEGO’s Hidden Side and Disney’s AR Mech-X4 robot;
  • Car feature demonstrations with AR, like the Nissan LEAF;
  • Product enhancements and variations with AR view;
  • Product recognition and description with digital layering in retail stores;
  • Step-by-step guidance on how to use appliances or environment features;
  • Augmentation of buildings and façades to attract visitors (see Mumok);
  • Augmented reality art using urban landscapes, and more.

Advantages of using 3D model-based augmented reality

    An essential dependency of all these complex business solutions is accurately recognizing and tracking objects and scenes (areas). 

    CAD and other 3D models typically provide accurate information about the object, maximizing the potential for reliable AR experience. 

    Furthermore, using 3D models as an input method for AR expands the variety of objects and scenes to be recognized. Some advantages include:

  • 3D models provide accurate information about the object
  • Allow recognition and tracking of objects with varying colors
  • Can support texture-less objects with uniform surfaces and a high amount of reflective parts
  • Easy to integrate into existing CAD/CAM workflows
  • Robust against lighting changes

Unlock the power of CAD + AR

    Interested in integrating AR in your process or product using CAD data or other 3D models? Wikitude’s beta program is open for testers! 

    Applicants will receive dedicated support from our expert engineers and win a free customized Object Target map for testing your solution. Start working with 3D Model-based Object Tracking today.



    No CAD or 3D models in-house? Try Wikitude’s image-based object and scene recognition and tracking technology. With this alternative, anyone is able to work with AR combined with physical assets.

    If you need help choosing which method is best for your particular solution, check out our handy AR tracking guide.


    Scene Recognition and Tracking: Augmented Reality Use Cases and How-to

    From large industrial machinery to showrooms, scene tracking enables the creation of continuous and persistent AR experiences for areas and large-scale objects.

    As the complexity of augmented reality use cases grows, computer vision technology evolves to fulfill new requirements and expand the understanding of the world around us.

    Catalyzed by COVID-19, immersive outdoor environments, homes, and workspaces have gained significant momentum, opening the possibility for people to connect remotely and digitally enhance their surroundings.

    A crucial functionality in augmented reality and spatial computing is Scene Tracking – a unique feature that can be used to track pre-determined environments and large-scale objects.

    By identifying reference points and features in your chosen scene or area, augmented reality content can be displayed and accessed on a wide variety of phones, tablets, and AR smartglasses.

    Scene Recognition and Tracking augmented reality

    Scene Tracking AR feature and supported targets

    Scene Tracking, sometimes referred to in the market as Area Targets, empowers a wide variety of use cases: maintenance and remote assistance, training and onboarding, and digitalization of the visitor/user experience in the context of retail, tourism, museums, gaming, and more.

    In order to trigger this marker-based AR experience, the device must detect a known target; therefore, the object or scene needs to be mapped in advance.

    Structures that can be mapped as AR targets include but are not limited to:

    • Exhibition booths and showrooms
    • Selected environments in retail stores, rooms, and apartments
    • Large complex machinery on factory floors
    • Building façades
    • Sections of indoor spaces
    • Museums
    • Squares, fountains, feature-rich courtyards

    Scene Tracking Use Cases

    This AR feature is used to recognize and track larger structures that go beyond small-sized objects, including area targets and machinery. Digital content can be added in the form of annotations, videos, step-by-step instructions, links, directions, text, 3D augmentations, and more.

    Enterprise and Industrial Setting

    Scene Tracking can help digitalize workspaces by providing frontline workers access to immersive workflows with real-time information in the industrial setting. On-demand AR instructions and assistance on the factory floor help staff work faster and more safely.

    Navigation on the factory floor can help teams involved in multiple procedures to work more intuitively. In addition, step-by-step guidance and on-the-fly virtual notes can be attached to machines, facilitating knowledge transfer and streamlining communication across shifts.

    AR-enabled training and onboarding can help companies save time and money when welcoming a new workforce, guiding technicians through tasks and helping them connect with remote experts via video calls.

    AR-Enabled User Experience

    Scene Tracking is a powerful tool to connect people to places. Hotels, museums and retail stores can attract visitors by digitally enhancing the exterior of the building or store façade with a digital sneak peek at what’s inside. Promotion, gamification, and additional AR touchpoints incentivize visitors to come inside.

    Touristic destinations can offer information on demand in multiple languages, so visitors can, for example, learn about historical sites, monuments, squares, fountains and more.

    Object Targets with scenes: how to create a 3D target map reference

    In order to build Object Targets based on scenes, it is necessary to create a pre-recorded map of the object that will then be used to trigger the AR experience.

    The map creation workflow is simple:

    • Collect images* or 3D models of the object or scene (best practice guide)
    • Convert images into a Wikitude Object Target Collection (.wto) using Studio Editor
    • Use the .wto file in your AR app project

    For optimal performance, scanned spaces should have little to no variation compared to the 3D map generated for the target. Extensions or alterations of maps are possible to reflect changes in the environment (learn more) – developers also have the option of extending the map with images taken against different backgrounds or covering additional areas of the object, to increase recognition accuracy.

    For detailed instructions, access the how to create Object Targets section of the Wikitude Documentation.

    *Keep in mind the new and improved object mapping process (SDK 8 and up) uses images or 3D models as source material. Previous SDK versions use video-based materials instead.

    Get Started with Scene Recognition and Tracking

    To start creating your AR projects with a free Wikitude SDK trial key, create a Wikitude account and download the platform of your choice.

    This account also gives you access to Studio Editor, our web-based tool that allows you to generate, manage and publish your AR experiences without coding.

    For commercial purposes, access our store to choose your package or contact our team to discuss which license is the best match for your AR project.

    Check the articles below to review other AR features in detail:


    Image Recognition and Tracking: Augmented Reality Use Cases And How-to

    Image Recognition and Tracking is a Wikitude augmented reality feature that gives apps the ability to detect 2D images, triggering digital content to appear in the form of videos, slideshows, 360° panoramas, sound, text, 3D animations, and more.

    This article contains impressive Image Recognition and Tracking AR use cases, helpful best practices, and documentation guides, and allows developers to explore the various image-based augmented reality features made possible with the Wikitude AR SDK.

    Image Recognition and Tracking Augmented Reality Use Cases

    Before we get into specifics, let’s start with classic Image Recognition and Tracking examples.

    The video below contains a selection of Image Recognition AR showcases powered with Wikitude augmented reality technology.

    Brands are using Image Recognition and Tracking augmented reality technology to tell their stories (Jack Daniel’s), display their products (Busch-Jaeger), explain their technology (Lufthansa Technik), entertain (Mirage By City Social), augment printed magazines (Abenteuer und Reisen), and even to deliver product messages (Francesco Rinaldi).

    Museums and Cultural Institutions are using Image Recognition and Tracking to digitally display insects (Butterfly Pavilion) and handmade artifacts (Terracotta Warriors), to provide exposition and art piece overviews (Grand Palais), and much more.

    Image Recognition and Tracking AR Technology Introduction

    As seen in the use case section above and GIF below, Image Recognition and Tracking is the AR feature that enables apps to recognize and track specific images to properly superimpose digital content onto them.

    Without getting too deep into technical aspects, Image Tracking relies on advanced Computer Vision technology to detect and augment images. To implement this functionality, developers must first predetermine which images they would like to use as AR triggers – also known as Image Targets.

    The pre-established Image Targets are stored in the form of a Target Collection and are identified by the Wikitude ImageTracker, which is responsible for analyzing the live camera view.

    For an in-depth explanation and review, access our documentation section according to your platform of choice.
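A minimal sketch of this setup in the Wikitude JavaScript API is shown below. Asset and target names are placeholders, and the snippet runs inside a Wikitude AR World, where the SDK provides the `AR` namespace.

```js
// Load the Target Collection (.wtc) containing the Image Targets.
var targetCollection = new AR.TargetCollectionResource("assets/magazine.wtc");

// The ImageTracker analyzes the live camera feed against the collection.
var tracker = new AR.ImageTracker(targetCollection, {
    onTargetsLoaded: function () { /* ready to recognize */ }
});

// A 2D overlay, scaled relative to the recognized target.
var overlay = new AR.ImageDrawable(new AR.ImageResource("assets/button.png"), 0.5);

// "pageOne" must match a target name in the collection; "*" matches any.
new AR.ImageTrackable(tracker, "pageOne", {
    drawables: { cam: overlay },
    onImageRecognized: function () { /* target visible */ },
    onImageLost: function () { /* target out of view */ }
});
```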

    Keep in mind the Image Target to be recognized and augmented can be the identification tag of an electronic device or industrial machine, a cereal box, or the label of an everyday product like the shampoo bottle example below:

    Herbal Essences AR Experience – Google Play / App Store

    Image Recognition distance

    The Wikitude SDK performs exceptionally well with Image Targets that are not close to the user. Targets (size: A4 / US-letter) can be recognized from more than three meters away. Moreover, image targets can be recognized and augmented even when they occupy a mere 1% of the device screen area.

    This makes it ideal for use cases in which users do not have the target image within hand’s reach. Think electronic screens at concerts which could display a target to be scanned by the audience, marketing posters in the distance, industrial MRO practices with image targets spread out along the production line indicating the next steps to ensure quality and compliance, and so on.

    Transparent Areas in Image Targets

    The Transparent Area feature in Image Tracking is used to support those particular types of image targets that do not fit the typical rectilinear image shape.

    A good example is the personalized Wikitude wallet ninja, containing cutouts and irregular outlines, as seen in the image and demo below.

    Wikitude wallet ninja

    The importance of this Image Recognition function lies in the fact that the transparent areas of the image target vary according to the background they are placed against. This variability can negatively affect the stability of the AR experience.

    The demo below compares the AR image-based performance of platforms with and without support for Transparent Areas for Image Targets. The stability of the augmentation speaks for itself.

    AR demo using Image Target with Transparent Areas

    Other examples of images containing transparent areas beyond the main outlines include tattoos, stickers, logos, images with cutouts and basically any image file containing parts with alpha channel transparency.

    The Image Recognition and Tracking use cases displayed above involve augmenting one single image target at a time. However, the Wikitude SDK allows developers to create image-based AR experiences that go beyond.

    Multiple Image Recognition and Tracking

    As the name suggests, this AR feature is not limited to working with one single image at a time.

    AR developers can use Multiple Image Recognition and Tracking to simultaneously recognize and track several different or identical image targets. The augmentations can be static or interactive, with the option to adjust distance and transformation (translation and rotation) settings during development.
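In the JavaScript API, tracking several targets at once is mainly a matter of raising the tracker’s concurrent-target limit. The sketch below uses property names from recent SDK versions; verify them against the documentation for your release, and note the asset names are placeholders.

```js
var targetCollection = new AR.TargetCollectionResource("assets/cards.wtc");

// Recognize and track up to five image targets at the same time.
var tracker = new AR.ImageTracker(targetCollection, {
    maximumNumberOfConcurrentlyTrackableTargets: 5
});

// The "*" wildcard augments every recognized target; identical targets
// each receive their own augmentation instance.
new AR.ImageTrackable(tracker, "*", {
    drawables: { cam: new AR.ImageDrawable(new AR.ImageResource("assets/frame.png"), 1) }
});
```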

    The best way to grasp this concept is by checking some visual representations:

    Static and Identical Augmentations on Multiple Image Targets

    Interactive Augmentations on Multiple Image Targets

    Multiple Image Targets shelving solution in retail

    Having the ability to augment multiple images at a time, identical or not, greatly expands the AR use case possibilities and solutions. But it doesn’t stop there. The Wikitude SDK allows AR experiences to persist beyond the initial image target.

    Extended Tracking

    There are certain AR use cases in which the digital augmentations should remain even when the image target is no longer in sight. That is when Extended Tracking steps in. Developers can activate this function for each target individually when needed.
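Enabling this per target is a single option on the trackable in the JavaScript API. The property and trigger names below follow recent SDK versions but should be checked against the documentation for your release; asset and target names are placeholders.

```js
var targetCollection = new AR.TargetCollectionResource("assets/wall_plan.wtc");
var tracker = new AR.ImageTracker(targetCollection);

// Extended Tracking keeps the augmentation anchored after the image
// target leaves the camera view.
new AR.ImageTrackable(tracker, "pipelinePlan", {
    drawables: { cam: new AR.Model("assets/pipes.wt3") },
    extendedTracking: true,
    onExtendedTrackingQualityChanged: function (target, oldQuality, newQuality) {
        // Quality feedback can prompt the user to move the device slowly, etc.
    }
});
```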

    Extended Tracking showing augmentations beyond the Target

    This Extended Image Recognition functionality is ideal for digitally projecting subsurface utilities, like underground pipelines to avoid during excavations or tubulation systems behind walls, as seen above. It can also be used for displaying augmented instructions and path guides, adding digital continuation to paintings, and more.

    Cloud Recognition

    Augmented reality technology works perfectly fine on device and offline. As a matter of fact, with the Wikitude SDK, apps have the ability to recognize up to 1,000 images without a network connection. However, for bigger projects that surpass this number, we offer Cloud Recognition.

    Cloud Recognition is the online storage solution for large-scale AR projects which allows developers to host up to 100,000 target images in one collection to enable fast, reliable and scalable cloud-based online AR experiences. Moreover, cloud recognition allows you to change the target images and augmentations without having to republish the app in the stores.
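Wired into code, a cloud-recognition session looks roughly like the sketch below (Wikitude JavaScript API style; the token, collection ID, and callback signatures are placeholders to check against the documentation).

```js
// Cloud Recognition sketch: the tracker consults a hosted target archive
// instead of an on-device .wtc file.
var cloudService = new AR.CloudRecognitionService("<client-token>", "<target-collection-id>", {
    onInitialized: function () {
        // Trigger a single recognition attempt against the cloud archive.
        cloudService.recognize(onRecognition, onRecognitionError);
    },
    onError: function (error) { /* handle connectivity issues */ }
});

var tracker = new AR.ImageTracker(cloudService);

function onRecognition(recognized, response) {
    if (recognized) {
        // response carries metadata, e.g. the recognized target's name.
    }
}

function onRecognitionError(code, message) { /* retry or inform the user */ }
```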

    Wine label recognition with large target database requirements

    Regardless of which Image Recognition feature you choose to work with, be it cloud-based, classic single target, extended, or multiple, it is important to have a solid Image Target base.

    Image Target Guidelines

    At the root of all Image Recognition AR experiences lies a target. Therefore, to achieve the best Image Recognition & Tracking AR results it is important to work within the Image Target guidelines.

    For high-performing image-based AR experiences, access: Image Targets: Guidelines, Tips, and Tricks to learn more about optimal image dimensions, aspect ratios, contrast, patterns, textured areas, image ratings and more.

    Once you have the ideal Image Targets in hand, the Target Management documentation will guide you through the Target Collection creation process.

    In the video below, we explain the ground rules of Image Tracking and how you can use it.

    Wikitude AR SDK download (free trial)

    To create an Image Recognition and Tracking AR experience yourself, download a free trial of the Wikitude SDK – and follow the instructions listed in the Sample section included in each set-up guide.

    Visit our download page to view other platforms and smart glass SDK options like HoloLens.

    How-to: sample instructions

    Access the links below for detailed code and instructions on how-to enable Image Recognition and Tracking AR experiences.

    Image Recognition and Tracking augmented reality is ideal for augmenting magazines, books, user manuals, packaging, catalogs, coasters, posters, gaming cards, machinery labels, logistic tags, you name it!

    However, if your AR project calls for augmenting objects & larger structures, or should your augmentations need to seamlessly appear out of the blue – without a target marker, check out these articles:

    For commercial purposes, access our store to choose your package or contact our team to discuss which license is the best match for your AR project.


    Geo AR: Location-based Augmented Reality Use Cases and How-to

    Location-based augmented reality allows developers to attach interactive and useful digital content to geo-based markers. Continue reading to learn how Geo AR is used, review its technology and origins, gain access to free SDK trial links, and follow geographic-based AR sample instructions.

    • Geo AR Use Cases
    • Location-based augmented reality technology
    • Download AR SDK (free trial)
    • How-to Geo AR: sample instructions

    Geo AR Use Cases

    Wikitude has been developing augmented reality technology since 2008 and is responsible for having launched the world’s first pedestrian and car navigation system that integrated an AR display – eliminating the need for a map.

    Launched over a decade ago, it was awarded multiple times for being a “revolutionary step forward” in the navigation and guidance field.


    Even though augmented reality has been around for quite a while (with its first recorded conceptual thoughts dating back to 1901), it was a Geo AR location-based mobile game that put the technology on the map in 2016.

    Ever heard of Pokémon GO?


    Pokémon GO is not only a big hit in the AR community; it is actually one of the most successful mobile games ever created.

    Players across the globe use the app to discover digital Pokémon attached to specific points of interest as they explore the world around them. The Pokémon GO Geo AR app broke impressive records, taking just 20 days to gross $100 million in revenue after its launch.

    The success of Pokémon GO spurred investment in Geo AR gaming and in location-based AR use cases in general. That is why geo-based augmented reality apps can now be found in various sectors: entertainment, navigation, retail, tourism, communication, real estate, and more.

    Have a Geo AR game idea and need a little push to get started? Learn how to build an app like Pokémon GO in three simple steps.

    Geo AR: Location-based Augmented Reality Technology

    As seen above, Geo AR technology allows developers to add digital content to geographical points of interest. This means that, unlike typical marker-based AR features such as Image Tracking and Object Tracking, Geo AR does not need a physical target to trigger the AR experience.

    The augmentations are attached to and appear at specific predefined geolocations.

    With a smart device, users can scan geographical locations to view or interact with various types of content: 3D augmentations, video, text, audio, links, and more.

    The Wikitude AR SDK includes convenient features that make geo-referenced data easy to work with. Depending on the use case, the sensor-based location tracking will be activated via GPS, compass and accelerometer, network, or beacon. 
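    To make the idea of geo-referenced content concrete, the sketch below computes the great-circle (haversine) distance between the device and a geo-marker and checks whether the marker is in range. This is plain JavaScript with invented helper names, not Wikitude API code – the SDK handles this kind of calculation internally.

    ```javascript
    // Haversine great-circle distance between two lat/lon points, in meters.
    // Illustrative only -- the Wikitude SDK performs this internally.
    function haversineMeters(lat1, lon1, lat2, lon2) {
      const R = 6371000; // mean Earth radius in meters
      const toRad = (deg) => (deg * Math.PI) / 180;
      const dLat = toRad(lat2 - lat1);
      const dLon = toRad(lon2 - lon1);
      const a =
        Math.sin(dLat / 2) ** 2 +
        Math.cos(toRad(lat1)) * Math.cos(toRad(lat2)) * Math.sin(dLon / 2) ** 2;
      return 2 * R * Math.asin(Math.sqrt(a));
    }

    // Show a geo-marker only when the user is within `radiusMeters` of it.
    function isMarkerInRange(device, marker, radiusMeters) {
      return (
        haversineMeters(device.lat, device.lon, marker.lat, marker.lon) <=
        radiusMeters
      );
    }
    ```

    An app would typically run a check like this whenever the GPS position updates, showing or hiding markers as the user moves.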

    Download Wikitude AR SDK (free trial)

    The best way to get started with Geo AR is by downloading a free Wikitude SDK trial and following the instructions listed in the sample section below.

    Click the button to view all downloading options:


    Wikitude Geo AR is available on iOS JavaScript, Android JavaScript, Cordova, Xamarin, Flutter, Epson, Vuzix, and via 3rd-party plugins (such as LBAR for Unity, Ionic, or ReactNative).

    How-to Geo AR: Sample Instructions

    After downloading the Wikitude SDK, the sample instructions will explain how to create and place a marker at specific geolocations. 

    The sample is split into four interconnected sections. By the end of the series, you will have a complete and reusable marker with a title, a description, and a selected and an idle state that animate smoothly from one to the other.
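    Conceptually, the marker’s two states can be modeled as a target value that the current appearance eases toward on every frame. The sketch below is our own simplified illustration of that idea (class and property names are invented, not the sample’s actual code):

    ```javascript
    // A minimal marker with an idle and a selected state whose scale
    // eases smoothly toward the active state's target value.
    class GeoMarker {
      constructor(title, description) {
        this.title = title;
        this.description = description;
        this.selected = false;
        this.scale = 1.0; // current animated value
      }
      select() { this.selected = true; }
      deselect() { this.selected = false; }
      // Call once per frame; `t` in (0, 1] controls animation speed.
      update(t = 0.25) {
        const target = this.selected ? 1.5 : 1.0; // enlarge when selected
        this.scale += (target - this.scale) * t;
      }
    }
    ```

    Calling `update()` from the render loop yields the smooth transition between the idle and selected appearance that the sample builds up to.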

    For additional SDK information and help, feel free to access our documentation section and forum.

    Apart from working with location-based augmented reality, developers can also explore Wikitude’s wide variety of AR features and check the articles below to review other AR features in detail:

    For commercial purposes, access our store to choose your package or contact our team to discuss which license is the best match for your AR project.


    Instant Tracking: best practices and tracked environment guidelines

    Unlike Image and Object Recognition, which rely on pre-mapped targets to trigger the display of digitally augmented elements, Instant Tracking is markerless. Instead of requiring a marker, it tracks features of the physical environment itself to overlay AR content. SLAM-based Instant Tracking is, therefore, highly dependent on the characteristics of the physical scene in which the AR experiences take place.

    After discussing Image Target guidelines, in this post we will briefly talk about how Instant Tracking works, sharing best use practices and the characteristics that make up for a good tracking environment.

    Instant Tracking: best use practices and environment guidelines

    The ability to track arbitrary environments without the need for a marker is a trait of the Instant Tracking algorithm that enables very specific use cases. A classic application is furniture product placement, as implemented in this 3D Model on Plane example. But how does it work?

    The Instant Tracking algorithm works in two distinct states:

    • Initialization State: the origin of the tracking procedure is defined by pointing the device and aligning an indicator. The user must actively confirm when the alignment is satisfactory before transitioning to the tracking state.
    • Tracking State: in this state the environment is being tracked continuously, allowing augmentations to be placed within the scene.
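    The two-state flow can be pictured as a small state machine. The sketch below is a hypothetical model with invented names, not the actual Wikitude API – its point is simply that augmentations can only be placed after the user has confirmed initialization:

    ```javascript
    // Minimal model of Instant Tracking's two states. Augmentations may
    // only be placed once the user has confirmed initialization.
    const State = { INITIALIZING: "initializing", TRACKING: "tracking" };

    class InstantTrackerModel {
      constructor() {
        this.state = State.INITIALIZING;
        this.augmentations = [];
      }
      // User confirms the alignment indicator looks satisfactory.
      confirmInitialization() {
        this.state = State.TRACKING;
      }
      placeAugmentation(model) {
        if (this.state !== State.TRACKING) {
          throw new Error("Cannot place augmentations while initializing");
        }
        this.augmentations.push(model);
      }
    }
    ```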

    Ideal Scene

    For best results during the initialization state, a good tracking environment must be used. Since instant tracking creates a point cloud of the scene, it relies on detecting capturable features. For that reason, an ideal scene is one that contains distinguishable elements and good lighting.

    [Image: Wikitude Instant Tracking – ideal scene]

    A structured floor and/or carpet work best to detect ground planes, while clean surfaces might be a problem.

    Scene Mapping

    To get a good map of the scene, different viewing angles and perspectives are necessary.

    [Image: Wikitude Instant Tracking – scene mapping]

    For best results, take a step back after starting initialization and move left and right around the scene.

    Augmentation Placement

    Typically, one can walk 90 degrees to the left and 90 degrees to the right of a 3D model placed in the scene (180 degrees). However, there are cases in which a full 360-degree rotation is possible. It will depend on the trackability of the environment and the performance of the device being used.

    Learn more about setting up an ideal sample scene for Instant Tracking with the Wikitude SDK in Unity by watching this Unity video tutorial.

    Following these guidelines should yield great Instant Tracking results. For best performance, update your Wikitude SDK to the latest available version and make sure to use supported devices during the AR experience.

    To continue reading the Wikitude AR Guide Series, access the Image Target installment.

    For more AR information, access the Wikitude Forum to browse through various AR topics discussed by active developers worldwide. Should you have any further questions, please contact studio@wikitude.com for extra support.





    Interested in creating an AR project of your own?
    Talk to one of our specialists and learn how to get started.

    Contact The Wikitude Team


    Image Recognition & Tracking: best practices and target guidelines

    Augmented reality experiences come in all shapes and sizes. Some have a location-based trigger, while others are activated by features of the physical scenery itself. Some rely on a 3D object to make the augmentation come to life, while others depend on a 2D target image to reveal the digital content. Within the various AR-activating mechanisms, however, it is safe to say that Image Recognition & Tracking is the classic go-to feature when the subject is augmented reality.

    If you are working with the Wikitude SDK, the Wikitude App or the Wikitude Studio, this Wikitude AR Guide series is for you. In this first installment, Wikitude shares Target Image guidelines as well as tips and tricks to achieve the best AR results for Image Recognition & Tracking.

    And make sure to take advantage of the star rating available on Wikitude Studio, which indicates the quality of the image target. The more stars a target has, the better it can be recognized and tracked. You just need to log in to Studio in order to use it for free.

    Image Targets: Guidelines, Tips, and Tricks

    Optimal Image Dimensions (500–1000 pixels)

    Images sized between 500 and 1000 pixels in each direction (width or height) are within the optimal range for achieving high-performing recognition and tracking.

    Smaller images do not contain enough graphical information to extract the so-called feature points, and larger images do not improve the tracking quality. The uniqueness, amount, and distribution of feature points are the key indicators of good detection and tracking quality.

    Aspect Ratio (1:1)

    A squarish aspect ratio (around 1:1) is ideal for optimal AR results.
    Other aspect ratios, such as 3:4, 2:3, and up to 16:9, will also perform well.

    Panorama images or other images with extreme aspect ratios, on the other hand, won’t deliver an optimal tracking performance.

    Tip: Crop the most prominent squarish part of your image to use as the target image.
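    The dimension and aspect-ratio guidelines above can be turned into a quick pre-check before uploading a target. The helper below is our own illustration (not part of the Wikitude SDK or Studio):

    ```javascript
    // Rough pre-check of an image target against the guidelines:
    // 500-1000 px per side is optimal, and aspect ratios between
    // 1:1 and roughly 16:9 perform well.
    function checkTargetImage(width, height) {
      const warnings = [];
      if (Math.min(width, height) < 500) {
        warnings.push("too small: may not contain enough feature points");
      }
      if (Math.max(width, height) > 1000) {
        warnings.push("larger images do not improve tracking quality");
      }
      const ratio = Math.max(width, height) / Math.min(width, height);
      if (ratio > 16 / 9) {
        warnings.push("extreme aspect ratio: crop a squarish region instead");
      }
      return warnings; // empty array means the image passes the pre-check
    }
    ```

    A check like this cannot replace the Studio star rating – it only catches the obvious size and proportion issues before upload.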

    Image Contrast (high)

    Images with high local contrast and a large amount of rich textured areas are best suited for reliable detection and tracking.

    Color contrast (e.g., a green-to-red edge) appears as high contrast only to the human eye; it is not discriminative to computer vision algorithms, since they operate on grayscale images.


    Tip: Use a photo editing tool to increase the contrast of a low-contrast target image to improve detection and tracking quality. Keep in mind that the digital and printed versions of the image should be exactly the same.

    Distribution of textured areas (even)

    Images with evenly distributed textured areas are good candidates for reliable detection and tracking. This may be the hardest property to control, and often it can’t be changed.


    Tip: Check the feature distribution by using the heat map function of Studio and crop the most prominent part of your image to use as the target image.

    Whitespace (minimum)

    Single-colored areas or smooth color transitions often found in backgrounds do not exhibit graphical information suitable for detection and tracking.


    Tip: Crop the most prominent part of your image to use as the target image.

    Vector-based graphics

    Logos and vector-based graphics usually consist of very few areas with high local contrast and/or textured structures and are, therefore, hard to detect and track.


    Tip: Try adding additional elements to the graphic, such as a logotype.

    Images with a lot of text

    Images consisting primarily of large areas of text are hard to detect and track.


    Tip: Try to have at least some graphical material and/or images next to your text for your target image.

    Repetitive patterns

    Repetitive patterns exhibit the same graphical information at each feature point and therefore cannot be localized reliably.

    Images with slightly irregular structures can convey similar information to the target audience while providing enough unique feature points to be detected.


    Tip: Try a different selection of your image including non-patterned parts or use images with irregular patterns.

    Star rating – Wikitude Studio

    The star rating that appears when an image is uploaded to our target management tool is only a first estimate of how well the target is expected to work. Even images with a low initial rating (1 star) may work fairly well; an image does not need 3 stars to perform well. A 2-star rating is already very good and will deliver good results in most conditions.


    Tip: Test your image even if it gets an initial 1-star rating. Images with 2 stars do not need any further optimization for most use cases. The heat map tool can help differentiate between a workable 0-star image and an image that would not work at all.

    Supported Devices with optimal performance

    Fine-tuning your target image within these guidelines should result in a smooth and steady AR experience. Update to the latest Wikitude SDK version and make sure to use supported devices for maximum performance.

    For more AR information, access the Wikitude Forum to browse through various AR topics discussed by active developers worldwide. Should you have any further questions, please contact studio@wikitude.com for extra support.

    Stay tuned for the next installments of the Wikitude AR Guide series to learn more about best practices regarding Object Recognition and Tracking and Instant Tracking.







    Instant Tracking: Augmented Reality Use Cases and How-to

    Instant Tracking augmented reality technology makes it possible for AR applications to overlay interactive digital content onto physical surfaces without requiring the use of a predefined marker to kick off the AR experience.

    To better understand how Instant Tracking works and what is possible to create with it, continue reading to review the following topics:

    • Use Cases
    • Introduction
    • Instant Targets
    • SMART – Seamless AR Tracking with ARKit and ARCore
    • Download AR SDK (free trial links)
    • How-to: sample instructions

    Instant Tracking Augmented Reality Use Cases

    The video below contains several segments of augmented reality use cases using Instant Tracking AR technology.

    As seen in the video, Instant Tracking technology can be used for various applications: retail, journalism, marketing campaigns, furniture placement, museums – or simply just for fun, like the majestic sea turtle swimming about in the air.

    Instant Tracking AR Technology Introduction


    Unlike Object & Scene Tracking – covered in the first article of the Wikitude AR-technology series – Instant Tracking does not need to recognize a predefined target before starting the tracking procedure.

    Instead, it initializes by tracking the physical environment itself. This markerless augmented reality is possible thanks to SLAM – Simultaneous Localization and Mapping technology.

    SLAM is a computer vision technique that maps the physical world from visual input (usually in the form of tracked feature points) while simultaneously locating the device within it. Devices use this understanding to interact appropriately with the environment.

    To achieve this, the algorithm behind Instant Tracking works in two distinct states:

    • The initialization state: the end user is required to define the origin of the tracking procedure by pointing the device to align an indicator. Once the user confirms the alignment is satisfactory, a transition to the tracking state takes place.
    • The tracking state: the environment is being continuously tracked, allowing augmentations to be properly placed within the physical scene.

    This environment tracking capability enables very specific use cases, like the ones demonstrated in the video above.

    Towards the end of this article, we will share instructions on how to create a furniture placement sample app that will help you understand and explore the full potential of Instant Tracking technology.

    But first, let’s talk about Instant Targets and SMART, two important Instant Tracking AR features.

    Instant Targets

    Instant Targets is a feature within AR Instant Tracking which allows end users to save and load their AR sessions.

    This means important digital notes, directions, visual augmentations – and the whole AR experience itself – can be accessed and experienced by multiple users across devices and operating systems (iOS, Android, and UWP) at different points in time.

    This makes sharing and revisiting the AR experience easy and meaningful. Instant Targets also allows users to load, edit, and resave the AR experience on the fly. Very practical, especially for remote assistance and maintenance use cases.

    While Instant Targets helps users share AR experiences, SMART greatly expands device AR capability.

    SMART – Seamless AR Tracking – with ARKit and ARCore

    SMART is a seamless API within Instant Tracking which integrates ARKit, ARCore and Wikitude’s SLAM engine in a single cross-platform AR SDK.

    With it, developers do not have to deal with specific ARKit/ARCore code and can create their projects in JavaScript, Unity, Xamarin, or Cordova. SMART works by dynamically identifying the end user’s device and deciding which tracking engine should be used in each particular case.
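    SMART’s runtime decision can be pictured roughly as follows. This is a deliberately simplified sketch with invented names – the real selection logic is internal to the SDK:

    ```javascript
    // Simplified model of how a SMART-style API might pick a tracking
    // provider at runtime: prefer the platform's native framework when
    // the device supports it, otherwise fall back to Wikitude SLAM.
    function pickTrackingProvider(device) {
      if (device.os === "ios" && device.supportsARKit) return "ARKit";
      if (device.os === "android" && device.supportsARCore) return "ARCore";
      return "Wikitude SLAM";
    }
    ```

    The fallback branch is what widens device coverage: devices without ARKit or ARCore support still get a markerless experience through Wikitude’s own SLAM engine.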

    One of its biggest advantages, apart from sparing developers from platform-specific code during the development phase, is expanded compatibility with a wider range of devices available in the market.

    Wikitude AR SDK download (free trial)

    To create an Instant Tracking experience yourself, download a free trial of the Wikitude SDK – and follow the instructions listed in the Sample section below.

    Wikitude SDK for Android

    Wikitude SDK for iOS

    Wikitude SDK for Windows

    Wikitude SDK for Unity

    Wikitude SDK for Cordova

    Wikitude SDK for Xamarin

    How-to: Instant Tracking Sample and Instructions

    Access the Wikitude documentation of your preferred platform to follow instructions on how to create an Instant Tracking sample experience.

    The instructions start with a simple implementation for basic understanding, move on to 3D model additions and preliminary interaction, and work their way up to the final, fully fledged furniture placement use case.

    Create an Instant Tracking AR experience

    Learning how to work with Instant Tracking technology is a must in the modern AR world. It not only allows AR projects to go beyond targeted locations, images, and objects, but also enables AR experiences to happen anytime, anywhere, across different devices and platforms.

    For commercial purposes, access our store to choose your package or contact our team to discuss which license is the best match for your AR project.


    Get SMART – Seamless AR Tracking with ARCore and ARKit

    Simultaneous localization and mapping (SLAM) has been one of the most impactful technologies of recent years. It opened up a wider spectrum of augmented reality experiences, going beyond image and object targets by instantly tracking real-world scenes and placing digital layers in everyday environments.

    With the release of Wikitude’s markerless AR feature, followed by Apple’s ARKit and Google’s ARCore, lots of new eyes turned to augmented reality. The impressive markerless apps developed in the past year spoke for themselves: SLAM is here to stay.

    Now that the dust has settled, an inconvenient truth arises: dealing with different SDKs and APIs for different platforms is cumbersome, time-consuming and not cost-efficient.

    Determined to solve developers’ hardship of dealing with multiple APIs, our team created SMART – Wikitude’s ‘Seamless AR Tracking’ Feature for the Wikitude SDK.

    Track, save and share: download the new Wikitude SDK 8.

    SMART – Wikitude’s ‘Seamless AR Tracking’

    SMART is a seamless API which integrates ARKit and ARCore on top of Wikitude’s SLAM engine in a single cross-platform augmented reality SDK for any device. This new SDK feature does more than simply fuse augmented reality platforms: it wraps them all in Wikitude’s intuitive API. This means developers won’t have to bother with specific ARKit/ARCore code. The Wikitude SDK will dynamically identify the end user’s device and decide whether ARKit, ARCore, or Wikitude’s SLAM should be used in each particular case.

    The SMART feature is part of SDK 7.2 (and later versions) and makes ARKit and ARCore accessible to developers working with JavaScript, Unity, Xamarin, Titanium, PhoneGap, and Cordova. If you are just getting started on your AR development journey, don’t miss our blog post covering all the development tools and extensions supported by Wikitude.

    A free trial of SMART is now available for download.

    Expanded Device Compatibility

    ARKit and ARCore surely contribute to making AR more readily available for developers, but up to now, both SDKs have restricted device compatibility.

    SMART ensures the delivery of the best possible augmented reality experience on a wider range of devices, covering 92.6% of iOS devices and about 95% of all suitable Android devices available in the market. This makes Wikitude the first toolkit in the market to truly democratize markerless AR for developers.

    No matter if you are tracking horizontal or vertical planes, SMART is able to place 3D objects, videos, buttons, widgets and more using Wikitude’s SLAM engine.

    For those interested in a wider variety of augmented reality experiences, check out our SDK’s geo-location, image recognition and tracking, multi-target recognition, object recognition and more with ample device coverage (smartphones, smart glasses, and tablets) and multiple platform deployment (iOS, Android).

    Are you ready to get SMART?

    • Download the SDK of your choice
    • Read all about SMART (part of Wikitude’s Instant Tracking feature)
    • Get the setup guide
    • Use Wikitude’s sample app to try your project

    We can’t wait to hear your feedback! Please let us know via Twitter, Facebook and in the comments below how you like SMART. Share the good news using #SMART & #wikitude.