Dev to Dev

7 statistics every app developer should know about augmented reality

Augmented reality statistics and predictions

The concept of augmented reality is not as new as you might think. As a matter of fact, back in 1901, Lyman Frank Baum first mentioned the idea of an electronic display that could overlay data onto real-life objects, naming it a ‘character marker’. The maturity and history of augmented reality, however, is a topic that deserves a post of its own.

For now, let’s fast-forward 100+ years and better understand why Apple, Google, Facebook, Amazon, Microsoft, Adobe and other tech giants are fiercely investing in augmented reality as it is known today.

Let’s start with what the tech leaders themselves are saying about AR:

• Apple CEO Tim Cook has repeatedly declared himself AR’s biggest fan and called the technology “critically important”:

“I am so excited about AR. I think AR is one of these very few profound technologies that we will look back on one day and wonder, ‘How did we live without it?’ Simple things today that you can use it for, like if you’re shopping for a sofa or a chair or a lamp, in terms of experiencing it in your place… We’ve never been able to do that before until the last couple of years or so. And that’s at the early innings of AR, it will only get better”.

• Meta’s CEO Mark Zuckerberg bet the company’s future on a metaverse that will have augmented reality as an essential part:

“I believe that augmented and virtual reality are going to enable a deeper sense of presence and social connection than any existing platform. And they’re going to be an important part of how we will interact with computers in the future.”

It’s undeniable that AR has gained serious momentum in recent years. Still, if your client is hesitant about giving augmented reality a go, here are 7 stats that will help you win your next app development project.

7 augmented reality statistics every app developer should know

      1. By 2024 there will be an estimated 1.7 billion mobile augmented reality (AR) users worldwide, a rise of 1.5 billion from the 200 million seen in 2015 (Statista).

      2. Consumers believe AR is the most disruptive emerging technology, overtaking AI (GlobalData).

      3. Around 75% of the global population and almost 100% of the smartphone user population are expected to become frequent AR users by 2025 (Snap Consumer Global AR Report).

      4. The smartphone and handheld device display segment is predicted to see the highest CAGR through 2028 (Grand View Research).

      5. Expectations for AR technologies continue to outpace those for VR in terms of expected revenue, market penetration, and consumer adoption, with 3/4 of respondents expecting the AR market to eventually surpass VR in total revenue (Perkins Coie).

      6. The retail sector continues to benefit from AR, with the virtual fitting room market expected to exhibit a 20.6% CAGR through 2028 (Fortune Business Insights).

      7. The metaverse market may reach $783.3 billion in 2024, up from $478.7 billion in 2020, representing a compound annual growth rate of 13.1% (Bloomberg).
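
The compound growth figure in stat 7 is easy to sanity-check. A few lines of Python confirm that Bloomberg's 2020 and 2024 figures do imply roughly a 13.1% CAGR:

```python
def cagr(start: float, end: float, years: int) -> float:
    """Compound annual growth rate between a start and end value."""
    return (end / start) ** (1 / years) - 1

# Bloomberg's metaverse figures: $478.7B in 2020 to $783.3B in 2024
rate = cagr(478.7, 783.3, 4)
print(f"{rate:.1%}")  # -> 13.1%
```

The same helper is handy for checking any of the market forecasts above before quoting them to a client.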

As the data shows, augmented reality is growing rapidly. New creative AR apps, solutions, and devices are stirring up the market and gaining the trust and interest of users worldwide. A great part of AR’s success is due to its versatility.

AR technology is proving to be useful in practically every sector: retail, logistics, journalism, maintenance, fashion, tourism, gaming, decor and more. Check the top AR use-cases to see how mobile developers are using augmented reality to boost their apps and improve the overall user experience.

Already working with augmented reality and want to increase your chances of getting your AR project approved? Take a look at some tips we’ve collected for you on how to pitch augmented reality for your next project.

With augmented reality currently expanding, the next big AR hit after the Pokémon Go frenzy is still up for grabs. It’s the perfect time for innovative app developers to make their moves. Are you up for the challenge?

Talk to one of our specialists and learn how to get started with AR today.

Dev to Dev

APIs: scaling up AR capabilities

An API (Application Programming Interface) allows applications to communicate. Serving as an access point to an app, an API enables users to access a database, request and retrieve information, or even alter data in other applications.

In this article, we familiarize you with the functionality of Wikitude Studio and explain how you can benefit from using the Studio API in your AR app.

Introduction to Wikitude Studio API

Wikitude Studio is an easy-to-use, drag-and-drop, web-based AR content manager. Using Studio, you can easily create two types of targets, image targets (2D items) and object targets (3D items), and further augment them in your JS-based app. On top of that, you can add simple augmentations to test your targets and their position.

Not sure if the image target quality is good enough? Use the quality rating of each image target as a guide. Wikitude Studio API also enables the conversion of image target collections to cloud archives and their management, making it possible to work with cloud-based recognition instead of on-app recognition. You can create and host your AR projects in Studio and link them directly to your app without exporting and pushing app updates.

What does the Studio API do? 

Studio API allows you to access all the functionality mentioned above without logging in to Wikitude Studio. You can have your app or system programmatically communicate with the engine behind Wikitude Studio. The keyword here is “programmatically,” meaning the flow enables simplified app development, design, and administration and provides more flexibility. In practical terms, it allows users to quickly scale up and integrate AR capabilities into existing architecture. 
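
As a sketch of what “programmatically” means in practice, the snippet below assembles the pieces of an authenticated REST call. The host, route, and header names are illustrative placeholders, not the documented Studio API, so consult the official reference for the real endpoints:

```python
def build_studio_request(token: str, project_id: str, action: str) -> dict:
    """Assemble an authenticated REST call against a content-manager
    endpoint. URL and header names are placeholders, not real API routes."""
    base = "https://studio.example.com/api/v1"  # placeholder host
    return {
        "method": "POST",
        "url": f"{base}/projects/{project_id}/{action}",
        "headers": {"X-Api-Token": token, "Content-Type": "application/json"},
    }

req = build_studio_request("my-secret-token", "proj-42", "publish")
print(req["url"])  # -> https://studio.example.com/api/v1/projects/proj-42/publish
```

Your CMS or backend would hand a structure like this to its HTTP client, letting target creation and publishing run without anyone opening the Studio web UI.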

How can Studio API benefit your business: use cases

Now, let’s see some real situations where Studio API can come in handy.

  • Create a project for each of the targets your customers upload in your CMS

Studio API can be integrated into your own CMS, making it easier to maintain collections and content automatically. Say you run a photo printing service and an accompanying app. The end-customer can upload pictures and add digital content associated with each photo: a video, a song, an animation, or a GIF. By scanning a printed image with the app, the customer can access an AR experience that enhances a memory or a moment captured in the photo.

Creating the image targets, assigning augmentations to the targets, and publishing content can be managed programmatically, enabling you to design the user interaction the way you want to. 

Similar functionality could be used by a postcard service, corporate merchandise producers, and other services. 

  • Easily manage image targets and have your app make updates in the background  

When working with fast-changing content, numerous images, and heavy augmentations, we discourage storing your targets and augmentations in the app. Offline recognition will force you to redeploy the app frequently and make the size of the app massive. That’s where Wikitude cloud-based recognition comes to the rescue.

Imagine a publisher issuing analog books and magazines with an extra AR layer. Such a service can have one app giving access to all the AR experiences associated with each printed item. As new books and magazines are published, the publisher simply adds fresh digital content to the server programmatically, making it available to users in the app via cloud-based recognition.

Wikitude cloud-based recognition provides an opportunity to work with a target collection containing up to 50,000 images. Otherwise, you are limited to 1,000 target images per active target collection, and only one collection can be active at a time. That flow can slow down recognition, and the end-user will need to switch between collections manually. The functionality could be extended to many other fields, such as education, tourism, art, and culture.
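
If a catalog outgrows the 1,000-images-per-active-collection limit mentioned above, the targets have to be partitioned into multiple collections. A minimal, library-agnostic sketch of that bookkeeping:

```python
def partition_targets(target_ids, max_per_collection=1000):
    """Split a flat list of image-target IDs into collections that
    respect a per-collection cap (1,000 for an active target collection)."""
    return [
        target_ids[i:i + max_per_collection]
        for i in range(0, len(target_ids), max_per_collection)
    ]

collections = partition_targets([f"img-{n}" for n in range(2500)])
print([len(c) for c in collections])  # -> [1000, 1000, 500]
```

With cloud recognition and its 50,000-image ceiling, this partitioning step (and the manual collection switching it implies) disappears entirely.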

  • Integrate AR functionality with existing architecture and automatically grab data from that closed system

The Studio API can also be used for 3D items. If you have the 3D models and material files of a machine, a robot, or part of an assembly line, you can use that information to render images of that specific machinery. The Studio engine will automatically process those images via the API to create object targets, while the API will help position augmentations.

Using such an AR experience lets employees detect and precisely localize malfunctions on the production line by grabbing data from other parts of the system, such as live sensors, measurements, and machinery history. The factory can leverage its existing training material or repair specifications and overlay AR instructions on the machines, reducing the time required to identify and fix issues.

Wikitude Plugins API 

The Plugins API allows extending the Wikitude SDK with third-party functionality. It enables users to implement external tracking algorithms (such as image, instant, and object tracking) to work in conjunction with the Wikitude SDK algorithms. Additionally, you can feed camera frame data and sensor data into the Wikitude SDK, which will take care of the processing and rendering. Our compatible plugins are written in C++, Java, or ObjC and can communicate with JavaScript, Native, and Unity plugins. Please note that we currently don’t provide support for the extension SDKs such as Cordova, Xamarin, and Flutter.

  • Integrate with OCR and code readers 

What else can you achieve? The Plugins API can trigger AR content via a QR code and barcode reader or add text recognition. Our client Anyline’s text recognition API allows apps to read text, gift card codes, bank slips, numbers, energy meters, and much more. The company’s solutions have been used by Red Bull Mobile, PepsiCo, and The World Food Program.

Anyline barcode reader

  • Build remote assistance app by leveraging Wikitude’s instant tracking  

Typically, our engine is set up to recognize targets in the camera feed. With the Plugins API, you can set specific images as input rather than grabbing them from the camera feed. Where does that come in handy? It is specific to remote support implementations, where one user’s screen needs to be broadcast to another user. Scope AR used this functionality when launching WorkLink Remote Assistance, their AR remote assistance tool. They required a markerless tracking provider to complement the plugin they created, and we were happy to support it with our technology.

  • Augment the human body 

Another use case that we’ve often encountered is adding face, hand, or body detection. To use it, you need a library specialized in one of those detection functions and plug Wikitude into it. Wikitude will then take over the processing and rendering of AR content. Watch our detailed face detection sample to learn more.

Connecting with a face tracking library via the Plugins API is not the only way to create this type of AR experience in combination with Wikitude. Alternatively, you could access Face and Body tracking from ARKit or ARCore via AR Bridge in our Unity Expert Edition SDK.

Wikitude AR Bridge

As you already know, an API (Application Programming Interface) allows applications to communicate with one another. Wikitude’s AR Bridge, part of the Unity Expert Edition SDK, has similar functionality: it provides access to the tracking facilities of other native platforms, e.g., ARKit and ARCore. The AR Bridge enhances Wikitude’s image and object tracking by tapping into external tracking capabilities. There are two options:

  • Internal: a direct connection to ARKit and ARCore maintained by Wikitude; at the moment, it offers basic positional tracking (no plane detection or more advanced tracking);
  • Plugin: allows an indirect connection to any tracking facility, pre-existing or written by developers. As an example, we provide integration with Unity’s AR Foundation plugin.

We provide a ready-made plugin for Unity’s AR Foundation that developers can use immediately. The plugin uses AR Bridge to inform the SDK about tracking. All current and future AR Foundation features work similarly to what we referred to as third-party libraries in the Plugins API context.

However, plugins can be developed by anyone, not just by Wikitude. For example, imagine a company building glasses with its own tracking system that wants to integrate with Wikitude. Since they are not using ARKit or ARCore, the internal AR Bridge won’t work by default. Instead, they can create their own plugin and have this custom solution work fast inside our SDK.
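
The choice between the two AR Bridge modes can be reduced to simple decision logic. The sketch below is illustrative only, not actual SDK API:

```python
from typing import Optional

def select_bridge_mode(platform_sdk: Optional[str], has_custom_tracker: bool) -> str:
    """Illustrative decision logic, not Wikitude SDK code: the internal
    AR Bridge covers ARKit/ARCore; any other tracker needs a plugin."""
    if has_custom_tracker or platform_sdk not in ("ARKit", "ARCore"):
        return "plugin"
    return "internal"

print(select_bridge_mode("ARKit", False))  # -> internal
print(select_bridge_mode(None, True))      # -> plugin
```

In other words: if you are on a stock iOS or Android device and don't bring your own tracking, the internal bridge is the ready-made path; everything else goes through a plugin.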

Ready to start building today? Access our store to choose your package or contact our team to discuss which license is the best match for your AR project.

Dev to Dev

How to apply UX design principles in augmented reality

If you are a UX/UI designer who builds user experiences in digital environments, chances are you will be working with augmented reality sooner than you think. As AR applications rapidly break into the mainstream, making the user feel in control of a product becomes even more critical in user experience design.

This article breaks down the role that user experience design principles play in augmented reality application development, with a specific focus on UI design.

The article is based on a presentation by our senior software engineer, Gökhan Özdemir, for the “UX for AR” webinar. Watch the full recording here.

What is UX design for augmented reality? 

User Experience Design, or UX, is the process of designing a product that considers the needs of the users and makes the user flow as seamless and intuitive as possible. Good UX always starts with putting the user at the center of the design process. It also relies on the principles of human psychology and empathy.

Now, what about UX for AR?  In augmented reality apps, success means offering a great user experience through a seamless blend of hardware and software. 

Augmented reality experiences are overlaid on the real environment, so the user experience is spatial and highly contextual. It makes designing UX for AR more challenging as designers need to think through spatial experiences. Getting it wrong can mean users have a less than stellar experience – and no one wants that. 

Getting started

UX design can be tricky. Designing for a new technology that is only just gaining traction? Even trickier! Let’s explore the role of user experience (UX) design in AR applications — how to think through your user experience as a designer and navigate the technical decisions when creating an AR app.

You will learn how to create a compelling user experience for your AR application that considers the physical space and natural human interaction. 

Five pillars of UX design for augmented reality

Users prefer to interact with the elements of an interface discreetly, without being constantly reminded of what the interface contains. This is different from the traditional user experience (UX) of conventional websites and mail applications. UX for augmented reality (also known as 3D user interface design) emphasizes interaction and visual interest above all else: users are interested in entering the virtual space and should not be distracted by elements that are not part of it.

Our five common UX design pillars for AR will help you define the considerations you’ll need to make when designing your UI and experiences for virtual objects.

Kick-off your design process by considering these criteria:

  • Environment
  • Movement
  • Onboarding
  • Interaction
  • UI (User Interface)

While it’s crucial to consider the first two pillars (environment and movement) when designing for AR, the last three (onboarding, interaction, and UI) are equally crucial for both 3D and traditional 2D screen-space UI.


Environment

As augmented reality experiences are spatial and always interconnected with the real world, the environment plays a key role in the design process. The environment can be broken down into the four most common categories of space, defined by the distance from the user.

Image source: Wikipedia

Examples of AR in the intimate space include face filters (like Snapchat or Instagram), hand tracking, or hand augmentations if you use a head-mounted AR display. 

Moving to personal space, augmented reality experiences might feature real objects, people, or the area around you. In the video below, you can see a learning-focused AR experience that uses educational models to animate chemistry concepts through an interactive digital layer.

AR experience in personal space

Other examples of augmented reality in personal space are popular table-top and card games and augmented packaging. Think augmented pizza boxes or collectible cards with augmented characters that interact with each other.

Next up is the social space. If you pan the camera further away, you will target the area that can be occupied by other people, unlike in a personal space where you have more privacy. This space segment is widely used for multiplayer AR games or augmenting objects on a scale, from the furniture to monuments and buildings.

In many cases, AR experiences in public space are anchored to specific locations with enough area to place an augmentation or sites that should be tracked in AR. The mumok AR experience in Vienna is a perfect example of AR in public space, where the entire building is tracked using the Wikitude Scene Tracking feature.

mumok AR


Movement

The success of any new product or service depends directly on how well it fits into today’s users’ lives, both physically and psychologically. Movement is the next UX design pillar. When you design the experience, you want to use the area around the user most of the time.

As smartphones and head-worn devices give a limited view into the environment, the designer’s primary task is to guide the user. By including the navigation elements on the screen, you will be steering the user’s gaze, helping them get around and move along the experience. 

While you are visually guiding the user, it’s essential not to dictate that they move in specific directions. Doing so might lead to unwanted hiccups in the experience or even cause accidents.


Onboarding

The next pillar we are going to explore is user onboarding. Creating user-friendly and engaging augmented reality experiences can be a challenge. It’s not enough to just put some markers around your location or overlay some information on top of an image. You need to understand what the user is looking at and how they are using it. When creating AR experiences, keep in mind that the most important thing for your users is not accuracy but usability.

Another factor to consider is that different devices have various technical limitations in supporting AR features. Markerless AR, for instance, would require the user to move the device around so that computer vision algorithms can detect different feature points across multiple poses to calculate surfaces.

The scanning process takes no time on newer devices with a built-in LiDAR sensor (like the iPad Pro). But for other devices, your users might appreciate a comprehensive onboarding UI. A pop-up menu or instructions should guide the user through the steps needed to successfully launch and run an AR experience.

To launch a tracking algorithm, you might want to use a sketched silhouette of the desired object as a clue to its shape and pose, prompting the user to align the view with the real object. Read more about the Alignment Initializer feature in our documentation.

Alignment initialization

Taking onboarding offline, physical methods like signage are sometimes used to communicate about the AR app, provide a QR code for quick download, and mark the exact standpoint for an optimal experience.


Interaction

Once the AR experience is launched, we transition to another UX design staple: interaction. During this phase, your user will benefit from intuitive and responsive interaction. When designing for touch, you will most likely be using these common gestures and prompts:

  • Tap to select
  • Drag starting from the center of the object to translate
  • Drag starting from the edge of the object to rotate
  • Pinch to scale
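
The drag gestures above are usually disambiguated by where the touch starts relative to the object. A toy sketch of that dispatch, with an arbitrarily chosen edge-band threshold:

```python
def classify_drag(start_offset: float, object_radius: float, edge_band: float = 0.2) -> str:
    """Map a drag gesture to translate or rotate based on where it starts:
    the inner region of the object translates, the outer edge band rotates.
    The 20% edge band is an assumption, not a platform convention."""
    if start_offset <= object_radius * (1.0 - edge_band):
        return "translate"
    return "rotate"

print(classify_drag(0.1, 1.0))   # -> translate
print(classify_drag(0.95, 1.0))  # -> rotate
```

In a real app the offset would come from projecting the touch point into the object's local space; the dispatch logic itself stays this simple.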

Responsive interaction means taking into account the distance from the desired object to the camera, which defines how easy or difficult it is for the user to interact with it. To facilitate interaction with objects placed farther away, consider increasing the object’s bounding sphere to make interaction less dependent on the distance to the camera.
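
One way to implement that is to grow the pick radius linearly with camera distance, so the touchable screen-space footprint stays roughly constant. A sketch, not SDK code:

```python
def pick_radius(base_radius: float, distance: float, reference_distance: float = 1.0) -> float:
    """Grow an object's bounding-sphere pick radius linearly with camera
    distance so its touchable screen-space footprint stays roughly constant.
    The 1 m reference distance is an assumption for illustration."""
    return base_radius * max(distance / reference_distance, 1.0)

# An object 4 m away gets a 4x larger pick radius than one at the 1 m reference.
print(pick_radius(0.1, 4.0))  # -> 0.4
```

The `max(..., 1.0)` clamp keeps nearby objects at their true size, so only distant objects get the enlarged hit area.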

Minimizing finger input might also be a good idea, especially when designing for tablet users. As most tablets are held with two hands, UI or interaction elements placed in the middle of the screen will be very hard to reach. Instead, use gaze input, such as triggering intros, interactions, or buttons in the augmented space by looking at them long enough. You might know this from VR, where you don’t have any controllers and experiences are mostly driven by gaze.

Consider using accessibility features, especially if you are designing for a broader audience. This way, you let the user rotate or reset the position of an augmentation instead of walking around it.

UI (User Interface)

The final principle we want to highlight is UI, which consists of augmented and traditional screen space. Depending on the use case, you will use them interchangeably. While UI in the augmented space boosts immersion, as the user perceives it as part of the experience, screen space UI is sometimes easier to read and interact with.

Designing with humans in mind

AR can improve people’s lives simply by allowing them to experience something that wasn’t possible before. Applying UX principles to AR can help designers create experiences that are clear, integrate easily into daily life, and create powerful emotional responses.

The guidelines we’ve shared aren’t magic bullets, but they do place fundamental guidance around where designers should be focusing their attention when crafting an experience for a user of any age.

What is your take on using UX principles when designing AR experiences? Let us know via social media (Twitter, Facebook, and LinkedIn) and tag @wikitude to join the conversation.

Dev to Dev

Creating 3D content for augmented reality

Content is constantly changing. Designed for TVs and devices in the early 2000s, it now transcends the 2D realm and spills into the world around us. 3D augmented reality content needs to be as immersive as VR advocates ever dreamed, minus the isolation from the outside world.

The more AR becomes part of our lives, the higher the need for content to adapt to the 3D world. It means the content needs to be realistic, spatial, and engaging. And while there are thousands of apps online, most companies are still figuring out what compelling content looks like in AR.

In this post, we’re diving into the role of content in augmented reality, the challenges the industry faces, and the future of spatial content.

Augmented reality content basics

Augmented reality content is the computer-generated input used to enhance parts of users’ physical world through mobile, tablet, or smart glasses. It can be user-generated (think of social media face filters) or professionally produced by designers working for brands and specialized agencies. 

AR content often comes as 3D models but can also come in image, video, or audio format.

Whether you are using AR to buy a new IKEA sofa or play a game, the quality of the content you see in the app will make (or break) the AR experience.

Image source: IKEA

The role of 3D content in augmented reality experiences

Among the thousands of AR apps in the market today, the most successful ones have one thing in common: high-quality, engaging AR content. Fail to deliver that, and your project will risk joining the astonishing 99.9% of apps that flop or become irrelevant in the app stores.

Content is the heart of augmented reality. It ensures users have a reason to keep coming back.

Users might be thrilled to scan an augmented wine bottle a few times and share the experience with friends. But how many times can we expect them to go back and watch the same video? 

Companies must see AR content as a critical component of long-term, well-thought-through digital strategies to ensure app longevity. It means constantly delivering fresh, contextual, and personalized content. 

Easier said than done. From high production costs to a scarcity of skilled professionals, building AR content at scale is one of the biggest challenges companies face, blocking them from keeping their apps relevant in the long run.

Challenges of building 3D content for augmented reality

3D models need to be convincing digital twins of the real world. Combined with other rendering elements (e.g., animation, audio, and physics), they make up AR’s most used type of content and provide an additional immersive layer for the user experience.

What the user doesn’t see is the relatively complex process of creating such realistic visual assets. Their production can range from a detailed manual process and the reuse of computer-aided design data to photogrammetry-based creation.

Size limits, file formats, and the total size of the application are just some of the many requirements developers need to understand to build great AR experiences. In addition, a lack of industry standards for AR content and a limited qualified workforce impose significant challenges on the industry.

Building 3D assets: 3D model versus 3D scanning

Before we jump into the technicalities of creating content for AR, there are some basic concepts we need to clarify.

3D modeling vs. 3D scanning

3D modeling and 3D scanning are two ways of building 3D assets for augmented reality. 

3D modeling uses computer graphics to create a 3D representation of any object or surface. This technology is beneficial when used to recreate physical objects because “it does not require physical contact with the object since everything is done by the computer” (Skywell Software). Therefore, 3D modeling is ideal for creating virtual objects, scenes, and characters that don’t exist in the real world (think of Pokémon and other fantasy AR games).

3D scanning uses real-world objects and scenes as a base for the production of AR assets. Using this method, the content creators don’t craft the model from scratch using a program. Instead, they scan the object using one of two different methods: photogrammetry or scanning through a 3D scanner device (LiDAR or similar). 


The main difference between the two is how they capture the data of the object. While photogrammetry uses images captured by regular smartphones, smart glasses, or tablets, scanning requires special devices equipped with depth sensors to map the object. 

It makes photogrammetry more accessible to the broader developer crowd when creating AR content, as no special equipment is required. On the flip side, 3D scanners are more reliable. 

Using either of the two approaches, a point cloud can be extracted and applied in the AR experience. You can read more on the advantages of each method in the 3D point cloud section below.

Ultimately, you can decide between 3D modeling and 3D scanning by assessing whether you have access to the physical object to scan. If the selected AR object target is not available, then 3D modeling is the way to go.

How is 3D content created for augmented reality?

There are plenty of AR content creation tools available on the market. Some are simple drag-and-drop tools that don’t require coding skills. Others are much more complex and target experienced professionals.

Here’s an overview of the different possibilities:

Image source: DevTeam.Space

3D point cloud: In AR, a point cloud is a virtual representation of the geometry of real-world objects using a collection of points. Generated via photogrammetry software or 3D scanners, these points are captured based on the external surfaces of objects.

Because photogrammetry allows gathering 3D information from 2D images, this method makes content creation more accessible. It overcomes ownership issues often faced with 3D models. As a result, anyone can create 3D models by simply recording or scanning the real object. 3D scanners (for example, LiDAR-enabled devices) are gradually becoming more available in the market and provide more detailed point clouds thanks to depth sensors.
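
Whatever tool produces it, a point cloud is just an array of 3D samples, so basic properties like the centroid and axis-aligned bounding box fall out directly. A toy sketch, independent of any particular scanner or SDK:

```python
def cloud_stats(points):
    """Centroid and axis-aligned bounding box of a point cloud,
    given as a list of (x, y, z) tuples."""
    n = len(points)
    centroid = tuple(sum(p[i] for p in points) / n for i in range(3))
    lo = tuple(min(p[i] for p in points) for i in range(3))
    hi = tuple(max(p[i] for p in points) for i in range(3))
    return centroid, (lo, hi)

cloud = [(0.0, 0.0, 0.0), (2.0, 0.0, 0.0), (0.0, 2.0, 2.0), (2.0, 2.0, 2.0)]
centroid, (lo, hi) = cloud_stats(cloud)
print(centroid)  # -> (1.0, 1.0, 1.0)
```

Real clouds contain thousands or millions of points, but the same aggregate properties are what an AR engine uses to anchor and scale augmentations around the scanned object.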

Commercial tools such as Wikitude Studio, Apple Object Capture, and Amazon Sumerian are examples of photogrammetry-based programs.

AR Object Target Transformation in Wikitude Studio Editor

CAD (Computer-Aided Design): CAD models are commonly the first step to prototyping physical goods, bringing a first product view to life in the digital world. Assisted by software applications, AR developers can repurpose legacy CAD models for augmented reality-based solutions. Existing CAD data can then be used as the input method to create digital representations of the object or environment to be augmented.

Once uploaded into the selected program, CAD data is converted to an AR-compatible format for phones, tablets, and smart glasses. CAD models typically provide accurate information about the object, maximizing the potential for a reliable AR experience. While prevalent in the industrial sector, CAD-based AR experiences are progressively gaining popularity in consumer-facing apps.

Games, computer graphics: authoring software tools such as Blender, 3ds Max, and Maya are popular 3D design applications used by AR content creators. Unity, Unreal Engine, and even Apple’s Reality Composer are great tools to assemble the pieces of content and make them work together for augmented reality.

Other 3D models: beyond CAD, other popular 3D model formats can be leveraged to power augmented reality solutions, for example, glTF 2.0, FBX, OBJ, etc. Compatible file formats will depend on the program used to build the augmented reality experience.

On the one hand, this wide variety of 3D asset formats has opened the doors for creators from many areas to put their existing models to work for AR. On the other hand, it creates confusion among developers, fueling the debate around the need for standardization in the AR industry and the creation of alternative tools that are intuitive and code-free.

What’s next for AR content creation?

With increased interest in augmented reality, we will see more tools emerging that help create content, overcome workforce scarcity, and deliver actual value through the technology.

To facilitate content creation, AR companies invest in building platforms that don’t require technical skills (therefore closing the workforce gaps) to help brands optimize the AR content creation process. 

An example is Apple’s latest release, RealityKit 2. This new framework includes the much-awaited Object Capture feature, which allows developers to snap photos of real-world objects and create 3D models using photogrammetry.

But if Apple’s announcement gives you déjà vu, you are not wrong. Last year, the AR media went crazy about an app that lets you copy and paste the real world with your phone using augmented reality.  

The topic of interoperability of experiences across platforms and devices is equally important. The ability to code an AR app once and deploy it in several devices and operating systems helps companies bring their projects to market as fast as possible.

The final and most crucial aspect is understanding how 3D content in augmented reality can deliver value to its users. That means setting clear goals for the AR project, understanding how it fits into your digital strategy, and having a deep knowledge of your customer.

What are some of the trends you see in AR content creation? Let us know via social media (Twitter, Facebook, and LinkedIn) and tag @wikitude to join the conversation.

Dev to Dev

Augmented Reality Glossary: from A to Z

Augmented reality technology is becoming a driving force behind sweeping changes in how businesses operate. We created a comprehensive AR glossary with the most common terms and definitions to help you understand the lingo better.

Augmented Reality Glossary



Augmented Reality (AR)

Technology that uses software to superimpose various forms of digital content – such as videos, photos, links, and 3D models – onto the real environment, predefined images, or object targets. The realistic augmentation is achieved by making use of the device’s camera and sensors.

AR Bridge

A feature that allows developers to integrate native AR SDKs such as ARKit and ARCore with advanced image and object tracking functionality from the Wikitude SDK. When enabled, the camera configured as AR Camera will be driven by AR Bridge, while the Drawables will be driven by the Wikitude SDK. The Wikitude SDK provides a built-in connection to these native SDKs through the Internal AR Bridge implementation. This is a ready-made solution that just needs to be enabled. As an alternative, a Plugin implementation can be chosen, which allows the developer to integrate with other positional tracking SDKs.

AR Overlay

The overlay principle is fundamental to augmented reality technology. An overlay occurs when content such as images, videos, or 3D models is superimposed over an Image or Object Target.

ARKit and ARCore

These are, respectively, Apple’s and Google’s AR development platforms. Fully integrable with the Wikitude SDK, ARKit and ARCore can be extended with features that are not natively available in those frameworks or that come with different quality standards compared to the Wikitude SDK’s implementation.

Automatic Initialization

Automatic initialization is the default mode of the Wikitude SDK for both image and object targets. It is the most natural behavior for users: as they point the camera towards the target, position and orientation are detected automatically and tracking starts seamlessly.


Alignment Initialization

The alignment initializer is a visual element in the UI that signals to the user from which viewpoint (perspective) the object can be recognized and tracking can be started. This feature can be used for objects that are hard to recognize automatically (usually objects with unclear or unreliable texture). An unreliable texture could be an object with areas whose colors or contents keep changing (e.g. mud, stickers).

Assisted Reality 

Assisted Reality is a non-immersive visualization of various content (e.g. text, diagrams, images, simple videos). Considered an experience within the augmented reality spectrum, assisted reality is often delivered through wearable hardware and serves to enhance personal awareness in a given situation or scene.

Assisted Tracking

Assisted tracking is a term describing a technology in which the performance of Image, Cylinder, and Object Targets benefits from a native AR framework running in parallel. This results in increased stability of those trackers, even when they move independently. Assisted tracking is enabled by default when using AR Bridge or AR Foundation.



Computer-Aided Design (CAD)

CAD, or computer-aided design and drafting (CADD), is a technology for design and technical documentation. In AR, CAD is a common asset format used as an input method for augmented reality experiences. The format digitizes and automates designs and technical specifications for built or manufactured products.

Combine Trackers

A feature that allows developers to combine different trackers – such as Positional Tracking from ARKit/ARCore, Image Tracking, and Object Tracking – in a single AR experience.

Computer Vision (CV)

Computer vision is the ability of machines to recognize, process, and understand digital images and objects, as well as scenes of the world around us. CV is one of the foundations of augmented reality and the core of Wikitude’s AR SDK.

Cloud Recognition

Cloud Recognition is a cloud-based service that hosts predefined images online and allows recognition of many targets through a smartphone or smart glasses. This service allows fast, scalable, and reliable online recognition for ever-changing, dynamic, and large-scale projects.

Cylinder Tracker

Cylinder Tracker (or cylinder targets) is a special form of Image Target with which images wrapped around a cylindrical shape can be recognized and tracked. This can range from labels on a wine bottle to prints on a can or any other graphical content. Cylinder Recognition and Tracking extend the capabilities of the Wikitude SDK to recognize cylindrical objects. The feature is based on the Image Tracking module, but instead of recognizing planar images, it recognizes cylindrical objects, such as cans, through their images.



Drawable

An instance of an augmentation prefab that is instantiated in the scene when a target is detected.


Extended Tracking

Extended Tracking allows digital augmentations, attached to objects, scenes, or images, to persist in the user’s field of view even when the initial target is no longer in the frame. That is particularly useful when showing large augmentations that exceed the target. 


Field of view

The field of view is the area that can be observed either physically by a person or through a device lens. Depending on the lens focus, the field of view can be adapted and can vary in size.


Geo AR

Location-based augmented reality allows developers to attach interactive and useful digital content to geo-based markers. Unlike typical marker-based AR features – like Image Tracking and Object Tracking – Geo AR does not need a physical target to trigger the AR experience. Wikitude has been developing augmented reality technology since 2008 and pioneered the field by launching the world’s first location-based AR app for mobile.



Hologram

A hologram is digital content formed by light, projected onto a transparent display or into open space. This type of content can be static or interactive, is usually three-dimensional, and is commonly used with smart glasses/mixed reality devices such as HoloLens.


HoloLens

HoloLens is Microsoft’s head-mounted display, also referred to as mixed reality smart glasses. It is a popular device for industrial use cases and is compatible with the Wikitude SDK.


Instant Targets

Instant Targets is a feature within AR Instant Tracking that allows end users to save and load their AR sessions. Important digital notes, directions, visual augmentations – and the whole AR experience itself – can be accessed and experienced by multiple users across devices and operating systems (iOS, Android, and UWP) at different points in time. This makes sharing and revisiting the AR experience easy and meaningful. Instant Targets also allows users to load, edit, and re-save the AR experience on the fly. This versatility makes the feature very practical for remote assistance and maintenance use cases.

Image Target

Image Target is a known planar image which will trigger an AR experience when recognized through the camera view from a smartphone or smart glasses. Targets are preloaded to the Wikitude system and are associated with a target collection for recognition.

Image Recognition and Tracking

This feature enables the Wikitude SDK to recognize and track known images (single or multiple) to trigger augmented reality experiences. Recognition works best for images with characteristics described on Wikitude’s best practice Image Target guideline. Suitable images can be found on product packaging, books, magazines, outdoors, paintings, and other 2D targets.  

Instant Tracking 

Instant Tracking technology (also known as ‘dead reckoning’) makes it possible for AR applications to overlay interactive digital content onto physical surfaces without requiring the use of a predefined marker to kick off the AR experience. Instant Tracking does not need to recognize a predefined target to start the tracking procedure thereafter. Instead, it initializes by tracking the physical environment itself. This markerless augmented reality is possible thanks to SLAM – Simultaneous Localization and Mapping technology. 





Machine Learning

Machine learning is a subset of artificial intelligence that gives computer algorithms the ability to learn and continuously improve their outcomes based on the knowledge collected.


Markup

Markup is the method of creating a composed scene using augmentations, triggers, or other information.



Object Target 

Objects can be used as targets to trigger an AR experience upon recognition via the camera view. The target is a pre-recorded map of the object. Object Targets can be created in two different ways: using images or 3D models as the input method. In both cases, the source material is converted into a Wikitude Object Target Collection, which is stored as a .wto file.

Object Recognition and Tracking

This feature enables the Wikitude SDK to recognize and track arbitrary objects for augmented reality experiences. Object Recognition and Tracking let users detect objects and entire scenes that were predefined. Recognition works best for objects that have only a limited number of changing/dynamic parts. Suitable objects for recognition and tracking include toys, monuments, industrial objects, tools, and household supplies.

Optical character recognition (OCR) 

OCR, or optical character recognition, is the electronic conversion of images of handwritten or printed text into machine-encoded text.


Positional Tracker (from Native AR frameworks)

The Wikitude SDK can use native AR frameworks (like ARKit or ARCore) in parallel with other trackers, either through an existing connection to Unity’s AR Foundation or through Wikitude’s own AR Bridge. Positional tracking is the process by which the device continuously tracks its own position and orientation. This is sometimes referred to as World Tracking (Apple), Motion Tracking (Google), Head Tracking (VR headsets), or Instant Tracking (Wikitude Professional Edition).



Range Extension

The Wikitude SDK Image Recognition engine can make use of HD camera frames to detect images from further away – up to 3x further than with the mode disabled (e.g. an A4-sized target can be recognized at distances of around 2.4 meters / 8 feet). This feature is called Image Recognition Range Extension and can be activated through a setting in the Image Tracker class.

Real-world Scale

The Wikitude SDK can be configured to work with a real-world scale, which has the benefit that augmentations can be authored with a consistent scale that will be preserved when used on different targets.


Recognition

Recognition describes the process of finding an image or object in the camera viewfinder. For augmented reality purposes, it is not enough to identify only the object or its bounding box; the position and orientation of the object must also be detected accurately. This capability significantly distinguishes AR recognition from other recognition or classification services. Recognition acts as the starting point for tracking the object in real time – also referred to as initialization. The Wikitude SDK has two recognition methods: Automatic Initialization and Alignment Initialization.

Remote Assistance

Remote Assistance, in the context of augmented reality, is a platform or application offering features such as live video streaming and the sharing of images and videos. The digital content is overlaid on the user’s view of the real-world environment, making it essential for frontline and field workers in various industries.


Scene Recognition

The object recognition engine of the Wikitude SDK is used to recognize and track larger structures that go beyond table-sized objects. The name Scene Recognition reflects this in particular. The feature is ideal for augmented reality experiences using rooms, building facades, as well as squares and courtyards as targets.

Software Development Kit (SDK)

A group of development tools used to build applications for a specific platform.

Spatial Computing

This term is defined as human interaction with a machine in which the machine retains and manipulates referents to real objects and spaces.


SLAM

SLAM is an abbreviation for Simultaneous Localization and Mapping, a technique computer vision systems use to receive visual data from the physical world (usually in the form of tracked points). Devices then use this visual input to understand and appropriately interact with the environment.


SMART

SMART is a seamless API within Instant Tracking that integrates ARKit, ARCore, and Wikitude’s SLAM engine in a single cross-platform AR SDK. With it, developers do not have to deal with ARKit- or ARCore-specific code and can create their projects in JavaScript, Unity, Xamarin, or Cordova. SMART works by dynamically identifying the end user’s device and deciding which tracking framework should be used in each particular case.
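The decision SMART makes at runtime can be sketched generically. Note that the function and field names below are illustrative, not Wikitude’s actual internal API:

```javascript
// Hypothetical sketch of the runtime decision SMART performs: prefer the
// platform's native tracking engine when the device supports it, otherwise
// fall back to Wikitude's own SLAM engine.
function pickTrackingEngine(device) {
  if (device.os === "iOS" && device.supportsARKit) return "ARKit";
  if (device.os === "Android" && device.supportsARCore) return "ARCore";
  return "Wikitude SLAM"; // fallback for devices without native AR support
}
```

For example, an Android phone without ARCore support would be routed to Wikitude’s SLAM engine, while an ARKit-capable iPhone would use ARKit.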



Target

A target image and its associated extracted data, used by the tracker to recognize an image.

Target collection

An archive storing a collection of targets to be recognized by the tracker. A target collection can come from two different resource types: plain (a regular ZIP file containing images in plain JPG or PNG format) or preprocessed (regular images converted into a WTC file – Wikitude Target Collection – for faster processing and optimized offline storage).


Tracking

An AR experience should “understand” and follow where a specific object is placed in the real world in order to anchor content to it. This process is commonly referred to as tracking. In ideal cases, tracking happens in real time (at least every 33 ms, i.e. roughly 30 frames per second) so the object is followed very accurately. Many trackers are available today, ranging from trackers that follow a face, hands, fingers, or images to those that follow generic objects. All of them are based on a reference that is later understood by the software.



Unity

Unity is a cross-platform game engine developed by Unity Technologies.



Wikitude SDK

The Wikitude SDK script handles the initialization and destruction of the SDK and its native components. It additionally needs to be configured with the correct license key for your application. You can either buy a commercial license from our web page or download a free trial license key and play around with our SDK.


XR (Extended reality)

Extended Reality is an umbrella term that covers all computer-generated environments, either superimposed onto the physical world or creating immersive experiences for the user. XR includes AR, VR, MR, and any other emerging technologies of the same type.




Barcode Scanning

Barcode scanning software can be combined with the Wikitude SDK via the Plugins API, allowing developers to integrate barcode identification into AR apps.

3D model based generation 

3D models of objects are a great source of information that can be used as a reference for recognizing and tracking an object for augmented reality experiences. The huge variety of 3D models in today’s market – ranging from precise CAD/CAM data for manufacturing to runtime assets defined in FBX, glTF, and other formats – led us to launch this feature in closed beta. For more details, please contact Wikitude directly.

Would you like us to include other terms and concepts in Augmented Reality Glossary? Let us know.

Contact us

Dev to Dev SDK releases

Power your Flutter app with augmented reality

Wikitude SDK 9.9 unlocks the power of augmented reality for Flutter 2.2

Wikitude was the very first AR platform to offer official support for Flutter. And now you can bring even more powerful augmented reality features to your Flutter-based apps. 

Our latest release, SDK 9.9, delivers location-based AR, image and object tracking, markerless AR, and a wide range of features allowing Flutter developers to augment the world around them in just a few hours. 

Wikitude’s Flutter Plugin is based on the JavaScript API and comes with the complete package: comprehensive AR library/framework, sample app, and documentation.

In combination with the Wikitude SDK, Flutter’s 2.2 release brings more options for polish and optimization, iOS performance improvements, Android deferred components, and more.

For those hearing about Flutter for the first time, it is Google’s open-source mobile application UI development framework used to build natively-compiled apps for iOS and Android.

The ease of use and constant improvements to this framework make Flutter one of the most popular development tools in the community, surpassing an astonishing 2 million users in just 3 years.

Get started with augmented reality for your Flutter-based application. 

Curious what else is new in the Wikitude SDK 9.9? Head over to our release notes to learn more.

Download Wikitude SDK
Dev to Dev

Augmented Reality 101: what is AR and how it works

The tech giants are heavily betting on augmented reality. A few years back, Apple brought attention to this technology by launching ARKit, an AR software development kit for iOS. Following in their footsteps, Google soon released ARCore, an AR SDK for Android. Facebook, Microsoft, and Amazon have also jumped on the bandwagon.

Starting from the basics

Simply put, augmented reality is computer-generated elements (graphics, 3D animations, videos, etc.) digitally layered on top of a user’s view of the real world. Currently, AR can be experienced through smart glasses, tablets, and smartphones.

To get a better grasp of the concept, think of the characters played by Arnold Schwarzenegger in “Terminator”, Robert Downey Jr. in “Iron Man”, and Ryan Reynolds in “Free Guy”. All of them have their vision of the physical world enhanced with resourceful, instantly superimposed digital data – also known as AR.

Now, let’s move from the Hollywood-style examples to the nitty-gritty of augmented reality. We’ll begin with the types of augmented reality technology that are actually available today.

Augmented reality features

Make no mistake; not all AR experiences are alike. Some are triggered by flat images, others by three-dimensional objects. Some require geo-location data to prompt the AR content, while others let you spontaneously augment the world around you. It all depends on what the developer wants to achieve. Let’s take a quick look at some of the most common types of AR technologies in use today:

Geo-location AR

This feature triggers the AR experience by a predefined location or by the user’s position. Users can visualize and interact with AR content that has been placed in specific locations worldwide. The usage varies from restaurant reviews left by customers all the way to Pokémon hunting journeys. Speaking of this AR craze, check this awesome tutorial to learn how to create your own Pokémon Go-like app.

Fun fact: AR technology started with location-based experiences and, almost a decade ago, Wikitude was the company that launched the very first location-based AR app.

Image Recognition and Tracking

The AR experience is triggered by a target image (or multiple target images). Users scan predetermined, recognizable images to view and interact with AR content. Once a target image is recognized, users can move their device around the subject and continue viewing the AR content. Multiple targets can also interact with one another (and it’s frankly very impressive!).

Nowadays, this technology is widely used in marketing and e-commerce. The use cases vary from AR enriched catalogs, board games, museum guides, and more. We have a complete guide to the best practices and target guidelines to make it easy for you to start.

Object Recognition and Tracking

The AR experience is triggered by various objects that can be completely different in shape, size, and texture. Object tracking allows users to scan previously mapped real-world objects (e.g. toys, sculptures, architectural models, product packaging, industrial machines) in order to view and interact with AR content in a 360-degree manner.

This ever-evolving technology has been proving its usefulness in product modeling, design, assembly, and other areas. You can dig deeper into object tracking use cases and step-by-step instructions here.

Instant tracking (also known as positional tracking)

Instant tracking is an algorithm that, contrary to those previously introduced, does not aim to recognize a predefined target. Instead, it immediately starts tracking in an arbitrary environment. The core of this feature lies in SLAM (simultaneous localization and mapping), a technology that makes it possible to instantly create a map of what is in the camera view and to understand where the device is relative to the world around it.

It is most commonly used in client engagement campaigns to promote products. We also see more home decor apps picking it up, allowing users to place virtual furniture in their home environment before the actual purchase. You can learn all about instant tracking use cases and how to get started here.

Which AR engine to choose?

Your choice of AR engine will most likely be determined by budget, AR feature needs, system integration requirements, and preferred development platform. Despite the variety of features in use today, Google ARCore’s and Apple ARKit’s most substantial focus is on markerless AR technology, and apps created with their SDKs can only be deployed on their respective platforms (Android or iOS).

More posts from the AR 101 series:

This first part of Wikitude’s Augmented Reality 101 series tackled the basics. After explaining what AR is and going through the most common types of AR technologies, we will next explore how augmented reality is being used in the world nowadays.

To cut straight to the chase and get some hands-on action, download our powerful award-winning SDK today to put it to the test!

Dev to Dev

Top Augmented Reality (AR) Tutorials

If you are anything like us, striving to become a better programmer is definitely among your goals. That’s why Wikitude has decided to pitch in our choice of AR tutorials and encourage those who wish to follow through.

Don’t break the motivational flow and keep yourself busy by reviewing this fine selection of augmented reality programming tutorials.

Markerless AR tutorial with Wikitude, Unity, and SLAM technology

In this AR tutorial, you will learn how to detect ground planes, place augmented objects and set up the sample scene for instant tracking or SLAM technology. 

3D Object Tracking 

Learn how to record your 3D objects, build object trackers, and add effects to the AR scene.

Object Recognition from a distance

Learn how to create an AR app that can recognize objects from a distance.

Multiple Target Tracking with Unity

Learn how to create a simple AR tower defense game using Wikitude.

Mobile AR with Object Tracking in Unity

Learn how to create an animated AR Aquarium.

Geo-based AR app

This AR tutorial shows how to make a Pokémon Go type app in three easy steps.

Instant Tracking with Unity

Remember the famous WikiTurtle app? This tutorial teaches you step-by-step how to create a floating 3D markerless AR turtle using Unity and Wikitude’s instant tracking (SLAM).

To put these tutorials to the test download a free trial of the Wikitude SDK to start working with a wide range of AR features, including markerless tracking, object tracking, image tracking, and geo-location AR.

The Wikitude SDK allows developers to choose between using Native API, JavaScript API, or other supported extensions and plugins such as Cordova, Unity, Xamarin and Flutter.

Have specific AR questions? Access the Wikitude Forum to get extra support from a broad network of active developers worldwide.

For extra inspiration, navigate through Wikitude’s YouTube channel for use cases, presentations, AR features and much more.

Good luck with your AR tutorials endeavors!

Interested in creating an AR project of your own?
Talk to one of our specialists and learn how to get started.

Contact The Wikitude Team

Dev to Dev

Augmented Reality 101: development tools and extensions for beginners

Wikitude is well known for supporting a wide range of development frameworks. Our SDK is constantly updated to meet the demands, needs, and wants of software developers worldwide. But for beginners, learning Augmented Reality can often be overwhelming.

With this in mind, we decided to shed some light on the matter, hoping to clarify a few basic AR concepts and tools, and give insights about the available development frameworks compatible with the Wikitude SDK. AR has never been more accessible, thanks to our Augmented Reality 101 guide for beginners.

First things first:

Software Development Kit (SDK)

An SDK is a set of tools, software, code samples, guides, technical documentation, libraries, processes, and everything needed to develop software applications for specific platforms. A go-to tool for Augmented Reality beginners and developers.

Computer Vision Engine (CV Engine)

The computer vision engine provides the core functionality of the features available in the Wikitude SDK and is used by all platforms. It comprises three major parts: the SLAM Engine, the Image Recognition Engine, and the Cloud Recognition Engine. Wikitude’s CV engine is not directly accessible but is wrapped either by the Native API (Java, Obj-C) or the JavaScript API.

One of the most popular AR features, widely adopted by the market, is Image Recognition.

Application Programming Interface (API)

Simply put, an API is an interface that allows two programs to communicate with each other following a set of commands, functions, objects, and protocols that developers can use to create software.

Wikitude’s SDK allows developers to choose between using Native API, JavaScript API, or any of the supported extensions and plugins available and detailed below.

Native API

The Native API is used to create apps developed specifically for their respective platforms, and therefore cannot deliver cross-platform AR experiences like the JavaScript API (see below). This means the code base of native apps needs to be written in the programming language of the corresponding operating system: Obj-C for iOS, Java for Android, and C++ for Windows.

The Native API does not integrate a separate rendering engine; developers have full control and, therefore, maximum flexibility to use the rendering engine of their choice. This “tailor-made” characteristic, consequently, requires more advanced programming knowledge.

JavaScript API

Wikitude’s JavaScript (JS) API is the most used SDK among our developer community. Using simple web technologies, this API lets developers create augmented reality content defined in HTML and JavaScript. Developers and apps thus benefit from simple cross-platform development with Wikitude’s JavaScript SDK.

Unlike the Native one, with this API the entire experience is controlled and defined in JavaScript. It incorporates a fully functional rendering engine with broad support for augmented reality content such as 3D models, videos, images, etc.

Since JavaScript is typically known to be easier to learn, edit, implement and debug, this API has not only been able to hold its ground over the years, but its usage is actually still increasing.
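As a rough sketch of what an ARchitect World built on the JavaScript API looks like, the snippet below loads a target collection and attaches a 2D overlay to every image target in it. The class names follow Wikitude’s published JavaScript API, but exact signatures may vary by SDK version, and the `AR` namespace is normally injected by the SDK at runtime – the tiny stub at the top stands in for it so the sketch can be read standalone:

```javascript
// The AR namespace is normally provided by the Wikitude SDK inside an
// ARchitect World; this minimal stub stands in for it here.
const AR = {
  TargetCollectionResource: class { constructor(path) { this.path = path; } },
  ImageTracker: class { constructor(resource) { this.resource = resource; } },
  ImageResource: class { constructor(path) { this.path = path; } },
  ImageDrawable: class { constructor(resource, height) { this.resource = resource; this.height = height; } },
  ImageTrackable: class {
    constructor(tracker, targetName, options) {
      this.tracker = tracker;
      this.targetName = targetName;
      this.drawables = options.drawables;
    }
  },
};

// Sketch of an ARchitect World: load a target collection, create an image
// tracker, and attach a 2D overlay to every target in the collection ("*").
const World = {
  init() {
    this.resource = new AR.TargetCollectionResource("assets/magazine.wtc");
    this.tracker = new AR.ImageTracker(this.resource);
    const overlay = new AR.ImageDrawable(new AR.ImageResource("assets/overlay.png"), 1);
    this.trackable = new AR.ImageTrackable(this.tracker, "*", { drawables: { cam: overlay } });
    return this.trackable;
  },
};

const trackable = World.init();
```

The asset paths (`magazine.wtc`, `overlay.png`) are placeholders; in a real app they point to a target collection generated in Wikitude Studio and your own overlay artwork.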

You can have a complete overview of which features are included in each of the Wikitude SDKs with this feature list.

Wikitude extensions and plugins

Wikitude offers different extensions and plugins to combine the available SDK features with other development frameworks. Among the extensions based on Wikitude’s JavaScript API are Cordova (PhoneGap), Xamarin, and Flutter. These extensions include the same features available in the JS API, such as location-based AR, image recognition and tracking, and SLAM.

Unity is the sole plugin based on the Wikitude Native SDK and includes image recognition and tracking, SLAM, as well as a plugin API, which allows you to connect the Wikitude SDK to third-party libraries.

Unity Plugin

Unity is a cross-platform game engine and one of the world’s most popular platforms for mobile game development. Wikitude expanded its SDK to support Unity, and it quickly became very popular among our developer community.

On top of the Native API, the Unity plugin allows developers to integrate Wikitude’s computer vision engine into a game or application fully based on Unity.

This extension provides a very easy-to-use interface that allows developers to preview content and run their apps on iOS, Android, and Windows using the same code base.

Since 2020, Wikitude has offered two Unity editions: SDK Expert Edition (EE) and SDK Professional Edition (PE). Developers can use ARKit/ARCore’s positional tracking via SMART in PE and via the AR Bridge feature in EE. A list covering the differences between EE and PE can be found here.

Unity is also a popular choice for games (and apps) with advanced graphics, which enables the development of highly realistic AR experiences on mobile. It has a very strong online developer community with numerous tutorials, forums, support channels, and rich documentation, making it an appealing option for developers interested in building AR experiences.

Cordova Plugin

Using the JavaScript API in the background, the Cordova plugin allows developers to work cross-platform using HTML and CSS. Apart from being attractive for its “write once, run anywhere” aspect, this API also results in lower development and maintenance costs. Apache Cordova stands out for its simplicity, empowering web developers to create AR apps.

Other supported development frameworks based on Wikitude’s Cordova extension are PhoneGap, Intel® XDK HTML5, SAP Mobile Platform, and Telerik app builder.

Flutter Plugin

Flutter is Google’s open-source UI SDK. Launched in 2017, this framework has gained significant momentum among developers looking to build cross-platform apps. Wikitude’s Flutter plugin runs on our JavaScript API and enables the full feature set of the Wikitude SDK, powering Flutter projects with AR. It includes ARKit and ARCore support (via SMART) on supported devices.

Xamarin Plugin

Xamarin is for the lovers of C#. Acquired by Microsoft in 2016, this app development framework is particularly popular for its runtime environment, ease of use, and good performance.

The Wikitude Xamarin Component enables C# developers to embed an augmented reality view into their Xamarin project. One can create a fully-featured app with advanced augmented reality features, including image recognition and tracking.

Architecture of the Wikitude SDK

To better understand the architecture of the basic components which comprise Wikitude’s SDK, please refer to the overview below:

As shown above, Wikitude provides a wide selection of programming possibilities for our developers, including augmented reality for beginners. If you found this article helpful and are thinking about including an AR feature in your project, get inspired by the second part of our Augmented Reality 101 series for beginners, where we share an extensive list of real-world AR use cases.

If you are curious about the technical side of the most common types of AR technologies in use today, we recommend navigating through the first part of our Augmented Reality 101 for beginners series.

More posts from the AR 101 series:

To cut straight to the chase and get some hands-on action, download our powerful award-winning SDK right now to put it to the test!

Download and start testing with the free trial
Dev to Dev

How to build an AR app like Pokémon Go in three simple steps

If you haven’t been living under a rock, you’ve heard about the Pokémon Go app. This game picked up more users than Twitter and helped push Nintendo’s market capitalization over $69 billion in 2020 (and counting!). The Pokémon Go app is also a great example of a location-based (LBS) game and geo-based augmented reality.

Much of that success is due to the phenomenal fan base Pokémon already had in place. Another component is the clever use of smartphone technology, including very easy-to-use AR. And here’s the great news for developers: all the tools to build your own location-based AR game are already out there. We’re going to show you how to use them in three simple steps.

OK, it’s a little more complicated than that, but once we break it down to fundamentals, you’ll see how simple it really is to build your own kind of Pokémon Go app.

Ready? Keep on reading!

Step 1: Set up a basic Geo-based infrastructure

Setting up the server infrastructure for such a game requires some work. You need to manage user accounts and arrange a smart distribution of Pokémon characters (or objects of your choice), in addition to some contextually aware components in your system.

For simplicity, let’s assume you have already created the server backend required to fetch the five closest creatures for any given user position, so that once you know where the user is, you receive a list of them in JSON format (which is easy to parse in your JavaScript code).

Placing objects (such as Pokémon) around users based on their GPS position has been supported by the Wikitude SDK since 2012; it was actually the SDK’s very first feature.
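To make Step 1 concrete, here is a minimal sketch of what such a backend response could look like and how the client might parse it. The JSON field names and creature data below are purely illustrative assumptions, not part of any real Wikitude or Pokémon API:

// Hypothetical response from the Step 1 backend: the five closest
// creatures for a given user position. Field names are assumptions.
const responseBody = JSON.stringify([
    { "id": 17, "name": "Pidgeotto", "latitude": 47.8095, "longitude": 13.0550, "altitude": 424.0 },
    { "id": 54, "name": "Psyduck",   "latitude": 47.8101, "longitude": 13.0472, "altitude": 420.5 }
]);

// Parse the payload and keep only what the AR scene needs.
function parseCreatures(json) {
    return JSON.parse(json).map(function (c) {
        return { name: c.name, latitude: c.latitude, longitude: c.longitude, altitude: c.altitude };
    });
}

const creatures = parseCreatures(responseBody);
console.log(creatures[0].name); // prints "Pidgeotto"

In a real app the JSON string would of course come from an HTTP request to your server rather than a local constant.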

Step 2: Build a geo-AR app

The Wikitude SDK JavaScript API also offers features to fetch the user’s location and place videos, 2D content, and 3D content in geo-spaces. You can query content from a remote origin to create a personalized geo-AR scene on the fly.

The Wikitude JavaScript API offers so-called AR.GeoObjects, which allow you to place virtual objects in the real world. A callback function informs you about user movements, delivering latitude, longitude, and altitude values. In the following sample, the user’s first location is stored for further use, using Wikitude’s JS SDK.

Note that this code must run inside a Wikitude AR-View, the central component of the Wikitude JavaScript SDK.

AR.context.onLocationChanged = function(lat, lon, alt, accuracy) {
    // store the user's location so you have access to it at any time
    World.userLocation = { "latitude": lat, "longitude": lon, "altitude": alt };
};

The following function uses coordinates to create an AR.GeoObject out of a 3D Model (in Wikitude’s wt3 file format) and an AR.GeoLocation.

createModelAtLocation: function (location) {

    // place the model slightly offset from the given coordinates
    World.modelGeoLocation = new AR.GeoLocation(
        location.latitude,
        location.longitude + 0.0005
    );

    // load model from relative path or url
    World.model = new AR.Model(World.PATH_MODEL_WT3, {

        // fired when the 3D model has loaded successfully
        onLoaded: function() {

            // define model as geoObject
            World.GeoObject = new AR.GeoObject(World.modelGeoLocation, {
                drawables: {
                    cam: [World.model]
                },
                onEnterFieldOfVision: function() {
                    console.log('model visible');
                    World.modelVisible = true;
                },
                onExitFieldOfVision: function() {
                    console.log('model no longer visible');
                    World.modelVisible = false;
                },
                onClick: function() {
                    console.log('model clicked');
                }
            });
        },

        onError: function(err) {
            console.error('unexpected error occurred ' + err);
        }
    });
}

Step 3: Make the rules of your game

Now that you can place virtual 3D models in the real world, you can get as creative as you like and define the rules of your augmented reality game.
Here are some examples of what’s possible:

    • Similar to the Pokémon Go app, you may require the user to be a maximum of 10 meters (33 ft.) away from a 3D model, otherwise the player cannot collect or see it. On top of that, you could also use altitude values to make sure a user has to get to the top of a building, or other unique locations.
    • Mix up the content! In addition to 3D models, you could also use videos, text, icons, or buttons and place them in any geo-location you like.
    • In addition to geo AR, you could also use 2D image recognition and tracking: use signs, billboards, print, walls, or any other 2D surface as a “stage” for interesting and cool AR augmentations that are “glued” onto these surfaces, as seen in this video.
    • If you want to take your game to the next level, use the environment around your user to extend the play with markerless AR or even include physical toys with object tracking to create immersive hybrid play experiences.
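The proximity rule from the first bullet can be sketched with a plain haversine distance check. The function names and the 10-meter threshold below are illustrative assumptions, not part of the Wikitude API:

// Great-circle distance between two GPS coordinates (haversine formula).
function distanceInMeters(lat1, lon1, lat2, lon2) {
    var R = 6371000; // mean Earth radius in meters
    var toRad = function (deg) { return deg * Math.PI / 180; };
    var dLat = toRad(lat2 - lat1);
    var dLon = toRad(lon2 - lon1);
    var a = Math.sin(dLat / 2) * Math.sin(dLat / 2) +
            Math.cos(toRad(lat1)) * Math.cos(toRad(lat2)) *
            Math.sin(dLon / 2) * Math.sin(dLon / 2);
    return R * 2 * Math.atan2(Math.sqrt(a), Math.sqrt(1 - a));
}

// Game rule: a creature is collectible only within 10 meters of the user.
function canCollect(userLocation, creatureLocation) {
    return distanceInMeters(
        userLocation.latitude, userLocation.longitude,
        creatureLocation.latitude, creatureLocation.longitude
    ) <= 10;
}

You would call a check like this whenever the SDK reports a location change, before allowing the collect interaction to fire.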

Get an app template!

Need an app template to get you started? No problem: just download the Wikitude SDK package, which includes an application called “SDK Examples”. One of the many AR templates you will find in there is called “3D Model At Geo Location”. Simply use it as a starting point for your code.

You can also download Wikitude’s sample app (included in the SDK download package) for examples of how to build your own geo-AR experiences. Also, have a look at this helpful video tutorial made by a developer from our community:

Additionally, check out this integration tutorial by LBAR, a third-party plugin built on top of Wikitude’s comprehensive Unity plugin that offers location-based AR for Unity:

That’s it! We hope you will build great geo-based AR apps with our SDK.

Wikitude provides a wide selection of development frameworks. Developers can choose between the Native API, the JavaScript API, or any of the supported extensions and plugins available. Next to JavaScript on Android and iOS, the extensions based on Wikitude’s JavaScript API include Cordova (PhoneGap), Xamarin, and Flutter. These extensions offer the same features available in the JS API, such as location-based AR, image recognition, object tracking, and instant tracking.

In case you have any questions, don’t hesitate to contact us via the Wikitude forum. We have a broad network of developers who can help you create the next big AR app!

Interested in creating an AR project of your own?
Talk to one of our specialists and learn how to get started.

Contact The Wikitude Team