Dev to Dev

APIs: scaling up AR capabilities

An API (Application Programming Interface) allows applications to communicate with one another. Serving as an access point to an app, an API enables users to access a database, request and retrieve information, or even alter data in other applications.

In this article, we introduce the functionality of Wikitude Studio and explain how you can benefit from using the Studio API in your AR app.

Introduction to Wikitude Studio API

Wikitude Studio is an easy-to-use, drag-and-drop, web-based AR content manager. Using Studio, you can easily create two types of targets for your JS-based app to augment: image targets (2D items) and object targets (3D items). On top of that, you can add simple augmentations to test your targets and their positioning.

Not sure if an image target’s quality is good enough? Use the rating that indicates the quality of an image target as a guide. The Wikitude Studio API also lets you convert image target collections into cloud archives and manage them, making it possible to work with cloud-based recognition instead of on-app recognition. You can create and host your AR projects in Studio and link them directly to your app, without exporting them or pushing app updates.

What does the Studio API do? 

Studio API allows you to access all the functionality mentioned above without logging in to Wikitude Studio. You can have your app or system programmatically communicate with the engine behind Wikitude Studio. The keyword here is “programmatically,” meaning the flow enables simplified app development, design, and administration and provides more flexibility. In practical terms, it allows users to quickly scale up and integrate AR capabilities into existing architecture. 
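To make “programmatically” concrete, here is a minimal sketch of talking to the Studio engine over HTTP from Python. The base URL, endpoint path, and the X-Api-Token header follow the conventions of Wikitude’s Cloud Recognition Manager API, but treat them as assumptions and check the current Studio API reference before relying on them.

```python
import json
import urllib.request

API_BASE = "https://api.wikitude.com/cloudrecognition"  # assumed base URL

def build_create_collection_request(api_token: str, name: str) -> urllib.request.Request:
    """Build an authenticated POST request that creates a new target collection.

    The endpoint path and header names mirror Wikitude's Manager API
    conventions; verify them against the current Studio API docs.
    """
    body = json.dumps({"name": name}).encode("utf-8")
    return urllib.request.Request(
        url=f"{API_BASE}/targetCollection",
        data=body,
        headers={
            "Content-Type": "application/json",
            "X-Api-Token": api_token,  # your Studio/Manager API token
        },
        method="POST",
    )

# Actually sending it needs a valid token and network access:
# with urllib.request.urlopen(build_create_collection_request(token, "postcards")) as resp:
#     collection = json.load(resp)  # response carries the new collection's id
```

From there, the same pattern (a token header plus JSON bodies) covers adding targets, assigning metadata, and publishing, which is exactly what lets a backend drive Studio without anyone logging in.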

How can Studio API benefit your business: use cases

Now, let’s see some real situations where Studio API can come in handy.

  • Create a project for each target your customers upload in your CMS

The Studio API can be integrated into your own CMS, making it easier to maintain collections and content automatically. Say you run a photo printing service and an accompanying app. The end customer can upload pictures and add digital content associated with each photo: a video, a song, an animation, or a GIF. By scanning a printed image with the app, the customer can access an AR experience that enhances a memory or a moment captured in the photo.

Creating the image targets, assigning augmentations to the targets, and publishing content can be managed programmatically, enabling you to design the user interaction the way you want to. 
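As a sketch of that flow, the snippet below builds one “add target” request per customer upload. The endpoint path and payload keys (name, imageUrl, metadata) mirror Wikitude’s Manager API documentation but should be verified against the current docs; the upload records and URLs are hypothetical.

```python
import json
import urllib.request

API_BASE = "https://api.wikitude.com/cloudrecognition"  # assumed base URL

def build_add_target_request(api_token: str, collection_id: str,
                             upload: dict) -> urllib.request.Request:
    """Build a POST request that turns one uploaded photo into an image target."""
    body = json.dumps({
        "name": upload["order_id"],        # must be unique within the collection
        "imageUrl": upload["photo_url"],   # publicly reachable URL of the photo
        "metadata": {"content": upload["ar_content_url"]},  # returned on recognition
    }).encode("utf-8")
    return urllib.request.Request(
        url=f"{API_BASE}/targetCollection/{collection_id}/target",
        data=body,
        headers={"Content-Type": "application/json", "X-Api-Token": api_token},
        method="POST",
    )

# One request per upload coming out of the CMS (hypothetical records):
uploads = [
    {"order_id": "order-1041",
     "photo_url": "https://cms.example.com/p/1041.jpg",
     "ar_content_url": "https://cms.example.com/v/1041.mp4"},
]
requests_to_send = [build_add_target_request("demo-token", "col-123", u) for u in uploads]
```

Storing the customer’s chosen video in the target’s metadata is one way to let the app decide, at recognition time, which augmentation to play.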

Similar functionality could be used by a postcard service, corporate merchandise producers, and other services. 

  • Easily manage image targets and have your app make updates in the background  

When working with fast-changing content, numerous images, and heavy augmentations, we discourage storing your targets and augmentations in the app. Offline recognition will force you to redeploy the app frequently and inflate the app’s size. That’s where Wikitude cloud-based recognition comes to the rescue.

Imagine a publisher issuing analog books and magazines with an extra AR layer. Such a service can have one app giving access to all the AR experiences associated with each printed item. As new books and magazines are published, the publisher simply adds fresh digital content to the server programmatically, making it available to users in the app via cloud-based recognition.

Wikitude cloud-based recognition lets you work with a target collection containing up to 50,000 images. With on-device recognition, by contrast, you are limited to 1,000 target images per active target collection, and only one collection can be active at a time, which can slow recognition and force the end user to switch between collections manually. The functionality could be extended to many other fields, such as education, tourism, art, and culture.
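Whenever targets change, cloud recognition needs a regenerated cloud archive before the new targets become recognizable. A sketch of that step, again assuming the Manager API’s endpoint layout (generation is asynchronous on Wikitude’s side, so the real API answers with a status resource to poll):

```python
import urllib.request

API_BASE = "https://api.wikitude.com/cloudrecognition"  # assumed base URL

def build_generate_archive_request(api_token: str,
                                   collection_id: str) -> urllib.request.Request:
    """Build the POST that asks Studio to (re)generate a collection's cloud archive.

    Generation runs asynchronously: after sending this, poll the status URL from
    the response until the archive is ready, at which point the app can recognize
    the updated collection via cloud recognition. The path is an assumption;
    check the current Studio API docs.
    """
    return urllib.request.Request(
        url=f"{API_BASE}/targetCollection/{collection_id}/generation/cloudarchive",
        data=b"",  # empty body; the collection id in the path is enough
        headers={"X-Api-Token": api_token},
        method="POST",
    )
```

Triggering this from the publisher’s backend after each new issue is what keeps the app’s recognizable content fresh without an app-store update.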

  • Integrate the AR functionality with already existing architecture and automatically grab data from that closed system 

The Studio API can also be used for 3D items. If you have the 3D models and material files of a machine, a robot, or part of an assembly line, you can use that information to render images of that specific machinery. The Studio engine will automatically process those images via the API to create object targets, while the API will help position augmentations.

Such an AR experience lets employees detect and precisely localize malfunctions on the production line by pulling data from other parts of the system, such as live sensors, measurements, and machinery history. The factory can leverage its existing training material or repair specifications and overlay AR instructions on the machines, reducing the time required to identify and fix issues.

Wikitude Plugins API 

The Plugins API allows extending the Wikitude SDK with third-party functionality. It enables users to implement external tracking algorithms (such as image, instant, and object tracking) that work in conjunction with the Wikitude SDK’s own algorithms. Additionally, you can feed camera frame data and sensor data into the Wikitude SDK, which will take care of the processing and rendering. Compatible plugins are written in C++, Java, or Objective-C and can communicate with JavaScript, Native, and Unity plugins. Please note that we currently don’t provide support for extension SDKs like Cordova, Xamarin, and Flutter.

  • Integrate with OCR and code readers 

What else can you achieve? The Plugins API can trigger AR content via a QR code or barcode reader, or add text recognition. Our client Anyline’s text recognition API allows apps to read text, gift card codes, bank slips, numbers, energy meters, and much more. The company’s solutions have been used by Red Bull Mobile, PepsiCo, and The World Food Program.

Anyline barcode reader

  • Build a remote assistance app by leveraging Wikitude’s instant tracking

Typically, our engine is set up to recognize targets in the camera feed. With the Plugins API, you can set specific images as input instead of grabbing them from the camera feed. Where does that come in handy? It is a common requirement for remote support solutions, where one user’s screen needs to be broadcast to another. Scope AR used this functionality when launching WorkLink Remote Assistance, their AR remote assistance tool. They required a markerless tracking provider to complement the plugin they created, and we were happy to support them with our technology.

  • Augment the human body 

Another use case that we’ve often encountered is adding face, hand, or body detection. To use it, you need a library specialized in one of those detection functions and plug Wikitude into it; Wikitude then takes over the processing and rendering of the AR content. Watch our detailed face detection sample to learn more.

Connecting with a face tracking library via the Plugins API is not the only way to create this type of AR experience in combination with Wikitude. Alternatively, you could access face and body tracking from ARKit or ARCore via AR Bridge in our Unity Expert Edition SDK.

Wikitude AR Bridge

As you already know, an API (Application Programming Interface) allows applications to communicate with one another. Wikitude’s AR Bridge, part of the Unity Expert Edition SDK, has similar functionality: it provides access to the tracking facilities of other native platforms, e.g., ARKit and ARCore. The AR Bridge enhances Wikitude’s image and object tracking by tapping into external tracking capabilities. There are two options:

  • Internal: a direct communication with ARKit and ARCore maintained by Wikitude; at the moment, it offers basic positional tracking (no plane detection or more advanced tracking);
  • Plugin: allows an indirect connection to any tracking facility, pre-existing or written by developers. As an example, we provide integration with Unity’s AR Foundation plugin.

We provide a ready-made plugin for Unity’s AR Foundation that developers can use immediately. The plugin uses AR Bridge to inform the SDK about tracking. All current and future AR Foundation features work similarly to the third-party libraries we described in the Plugins API context.

However, plugins can be developed by anyone, not just by Wikitude. For example, imagine a company building glasses with its own tracking system that wants to integrate with Wikitude. Since it is not using ARKit or ARCore, the internal AR Bridge won’t apply by default. Instead, the company can create its own plugin and have this custom solution run fast inside our SDK.

Ready to start building today? Access our store to choose your package or contact our team to discuss which license is the best match for your AR project.


Augmenting the future: interview with Martin Herdina

Martin Herdina talks about Wikitude joining the Qualcomm family, growing together with the developer community, and why the future of augmented reality is headworn.

Running a start-up takes strong vision, grit, and persistence. Running an augmented reality start-up? Double all that and mix in a profound belief in an extraordinary team that can accomplish anything.

It started with a vision

Thirteen years ago, we set out on a mission to pioneer the augmented reality industry. As a team of engineers, researchers, and product and business people from all walks of life, we came together under Wikitude’s roof to pursue our curiosity and see what happens if we take another step towards our vision.

Our belief has always been that AR will drastically shape the future of how we consume information, and we worked hard to make that vision a reality.

A fair share of wins (some smaller, some larger) in the market showed us that we were on the right path, even though things were tough at times. Wikitude spearheaded the industry when we launched the world’s first mobile AR app. We’ve created tools that have become the go-to technology for developers worldwide.

Using our AR SDK, Wikitude customers and developers have applied augmented reality across industry verticals, creating countless apps and use cases. Through the community’s tireless efforts, our vision of augmented reality has been taking shape!

The ultimate dream

But the final frontier was still ahead – not only making augmented reality accessible for everyone but turning it into the most natural experience that hardware can allow. Since 2013, when Wikitude started supporting wearable devices, we’ve been dreaming of headworn AR.

While smartphones serve as an important step, smart glasses would completely remove the friction of looking down at a small screen.

And this is where Qualcomm steps in. The company plays a special role in the XR ecosystem, having continuously shown interest in the XR segment and invested in the next generation of chips and reference hardware. We’ve been working together since 2019, integrating our AR SDK into the Snapdragon® XR Platform and showing a glimpse of what the next generation of spatial computing will look like.

Now that augmented reality hardware and technology have advanced to the point where both are gaining commercial traction, we are excited to join forces and accelerate the enablement of custom experiences powering the next generation of AR glasses. It’s a very exciting journey ahead, where together we’ll set the stage for a thriving AR ecosystem and mass-market adoption.

The future of AR is headworn

For years, we’ve been tailoring our SDK to support a number of headworn devices to enable flawless tracking and help users discover the potential headworn AR experiences can bring.

Why headworn? We believe it provides a basis to experience the true immersiveness that augmented reality is all about, something that no smartphone can ever bring. Using AR headsets, users can see the augmented world around them the same way they experience the real world. See-through displays allow a wide field of view while keeping your hands free, so you can move around, collaborate, work, and play with immersive experiences.

The absence of friction that headworn AR can provide will pave the way to the metaverse, where we will eventually interact and socialize just like we do in the real world (plus the endless opportunities the digital universe can bring).

Driving adoption
While expectations for AR hardware grow and the industry slowly gets to the point of meeting them, we believe the world won’t switch to all-in-one AR devices in the near future. Instead, we are leaning into the approach Qualcomm Technologies takes: connecting a lightweight viewer device to the smartphone, combining ultra-low-power technology with advanced rendering.

Powered by 5G, this is a pragmatic step toward enabling headworn AR tomorrow, making the innovation accessible for everyone who can’t wait to experience headworn AR.

What’s next?
Having become part of the Qualcomm family, Wikitude will continue doing what we do best: working on our cutting-edge AR SDK and growing a thriving developer community. Our expertise in well-designed AR experiences, our robust tools and strong knowledge of our developer audience, and Qualcomm Technologies’ XR innovation will help strengthen the XR sector and accelerate the enablement of custom AR experiences, making ours the toolkit of choice for headworn AR glasses.

United in the horizontal-platform approach, we share the vision of running a platform for headworn AR that will open up endless opportunities. And Wikitude developers will be the first to make a difference and start creating and experimenting with the new tools.

Introducing Snapdragon Spaces

Today we are unveiling a new beginning: Snapdragon Spaces XR Developer Platform. This developer-first platform is tailored to remove friction for developers and unlock the full potential of wearable immersive AR.

Backed by Wikitude’s 9th-generation AR technology and Qualcomm Technologies’ leadership in the XR ecosystem, the Snapdragon Spaces XR Developer Platform paves the way to a new frontier of spatial computing and empowers developers to create experiences for AR glasses that transform the spaces around us.

Learn more about the Snapdragon Spaces XR Developer Platform to stay in the know.

Snapdragon and Snapdragon Spaces are products of Qualcomm Technologies, Inc. and/or its subsidiaries.

Dev to Dev

How to apply UX design principles in augmented reality

If you are a UX/UI designer who builds user experiences in digital environments, chances are you will be working with augmented reality sooner than you think. As AR applications rapidly break into the mainstream, making the user feel in control of a product becomes even more critical in user experience design.

This article breaks down the role that user experience design principles play in augmented reality application development, with a specific focus on UI design.

The article is based on a presentation by our senior software engineer, Gökhan Özdemir, for the “UX for AR” webinar. Watch the full recording here.

What is UX design for augmented reality? 

User Experience Design, or UX, is the process of designing a product that considers the needs of the users and makes the user flow as seamless and intuitive as possible. Good UX always starts with putting the user at the center of the design process. It also relies on the principles of human psychology and empathy.

Now, what about UX for AR?  In augmented reality apps, success means offering a great user experience through a seamless blend of hardware and software. 

Augmented reality experiences are overlaid on the real environment, so the user experience is spatial and highly contextual. It makes designing UX for AR more challenging as designers need to think through spatial experiences. Getting it wrong can mean users have a less than stellar experience – and no one wants that. 

Getting started

User experience design can be tricky. Designing for a new technology that is only just gaining traction? Even trickier! Let’s explore the role of UX design in AR applications: how to think through your user experience as a designer and navigate the technical decisions when creating an AR app.

You will learn how to create a compelling user experience for your AR application that considers the physical space and natural human interaction. 

Five pillars of UX design for augmented reality

Unlike the traditional user experience of conventional websites and mail applications, UX for augmented reality (also known as 3D user interface design) emphasizes spatial interaction and visual interest above all else. Users want to engage with interface elements unobtrusively and enter the blended virtual space without being distracted by the interface itself.

Our five common UX design pillars for AR will help you define the considerations you’ll need to make when designing your UI and experiences for virtual objects.

Kick-off your design process by considering these criteria:

  • Environment
  • Movement
  • Onboarding
  • Interaction
  • UI (User Interface)

While the first two pillars (environment and movement) are specific considerations when designing for AR, the last three (onboarding, interaction, and UI) are equally crucial for both 3D and traditional 2D screen-space UI.


Environment

As augmented reality experiences are spatial and always interconnected with the real world, the environment plays a key role in the design process. The environment can be broken down into four common categories of space, defined by their distance from the user.

Image source: Wikipedia

Examples of AR in the intimate space include face filters (like Snapchat or Instagram), hand tracking, or hand augmentations if you use a head-mounted AR display. 

Moving to personal space, augmented reality experiences might feature real objects, people, or the area around you. Featured in the video below is a learning-focused AR experience that uses educational models to animate chemistry concepts through an interactive digital layer.

AR experience in personal space

Other examples of augmented reality in personal space are popular table-top and card games and augmented packaging. Think augmented pizza boxes, or collectible cards with augmented characters that interact with each other.

Next up is the social space. If you pan the camera further away, you will target an area that can be occupied by other people, unlike personal space, where you have more privacy. This segment is widely used for multiplayer AR games and for augmenting objects at scale, from furniture to monuments and buildings.

In many cases, AR experiences in public space are anchored to specific locations with enough area to place an augmentation, or to sites that should be tracked in AR. The mumok AR experience in Vienna is a perfect example of AR in public space: the entire building is tracked using the Wikitude Scene Tracking feature.

mumok AR


Movement

The success of any new product or service depends directly on how well it integrates into users’ lives, both physically and psychologically. Movement is the next UX design pillar: when designing the experience, you want to use the area around the user most of the time.

As smartphones and head-worn devices give a limited view of the environment, the designer’s primary task is to guide the user. By including navigation elements on the screen, you steer the user’s gaze, helping them get around and move through the experience.

While you are visually guiding the user, it’s essential not to force them to move in specific directions. Doing so might lead to unwanted hiccups in the experience or even cause accidents.


Onboarding

The next pillar we are going to explore is user onboarding. Creating user-friendly and engaging augmented reality experiences can be a challenge. It’s not enough to just put some markers around your location or overlay some information on top of an image. You need to understand what the user is looking at and how they are using it. When creating AR experiences, keep in mind that the most important thing for your users is not accuracy but usability.

Another factor to consider is that different devices have different technical limitations in supporting AR features. Markerless AR, for instance, requires the user to move the device around so that computer vision algorithms can detect feature points across multiple poses to calculate surfaces.

The scanning process takes no time on newer devices with a built-in LiDAR sensor (like the iPad Pro). But on other devices, your users might appreciate a comprehensive onboarding UI. A pop-up menu or instructions should guide the user through the steps needed to successfully launch and run an AR experience.

To launch a tracking algorithm, you might want to show a sketched silhouette of the desired object, giving a clue about its shape and pose and prompting the user to align the view with the real object. Read more about the Alignment Initializer feature in our documentation.

Alignment initialization

Taking onboarding offline, physical methods like signage are sometimes used to communicate about the AR app, provide a QR code for quick download, and mark the exact standpoint for an optimal experience.


Interaction

Once the AR experience is launched, we transition to another UX design staple: interaction. During this phase, your user will benefit from intuitive and responsive interaction. When designing for touch, you will most likely be using these common gestures and prompts:

  • Tap to select
  • Drag starting from the center of the object to translate
  • Drag starting from the edge of the object to rotate
  • Pinch to scale

Responsive interaction means taking into account the distance from the desired object to the camera, which defines how easy or difficult it is for the user to interact with it. To facilitate interaction with objects placed farther away, consider enlarging the object’s bounding sphere so that hit detection is less dependent on the distance to the camera.
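The bounding-sphere advice can be made concrete with a little pinhole-camera math: a sphere of radius r at distance d projects to roughly f·r/d pixels, so distant objects shrink below what a finger can reliably hit unless you clamp the hit radius. Here is a small sketch; the 44 px minimum echoes common touch-target guidelines and is an assumption, not a Wikitude value.

```python
import math

def touch_radius_px(object_radius_m: float, distance_m: float,
                    focal_px: float, min_touch_px: float = 44.0) -> float:
    """Screen-space hit radius for a bounding sphere, clamped to a finger-friendly minimum.

    Pinhole model: a sphere of radius r at distance d projects to about f * r / d
    pixels. Far-away objects would otherwise become tiny touch targets, so the
    result is clamped to min_touch_px.
    """
    projected = focal_px * object_radius_m / max(distance_m, 1e-6)
    return max(projected, min_touch_px)

def is_hit(touch_xy, object_center_xy, radius_px: float) -> bool:
    """True if the touch lands within the (clamped) projected radius."""
    dx = touch_xy[0] - object_center_xy[0]
    dy = touch_xy[1] - object_center_xy[1]
    return math.hypot(dx, dy) <= radius_px
```

With a focal length of 1000 px, a 10 cm object at 1 m projects to about 100 px, while the same object at 10 m projects to about 10 px and gets clamped up to the 44 px minimum, keeping it tappable.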

Minimizing finger input might also be a good idea, especially when designing for tablet users. As most tablets are held with two hands, UI or interaction elements placed in the middle of the screen are very hard to reach. Instead, use gaze input: trigger intros, interactions, or buttons in the augmented space by having the user look at them long enough. You might know this from VR, where you don’t have any controllers and experiences are mostly gaze-driven.

Consider adding accessibility features, especially if you are designing for a broader audience. This way, you let the user rotate or reset the position of an augmentation instead of walking around it.

UI (User Interface)

The final pillar we want to highlight is UI, which consists of augmented space and traditional screen space. Depending on the use case, you will use them interchangeably. While UI in the augmented space boosts immersion, as the user perceives it as part of the experience, screen-space UI is often easier to read and interact with.

Designing with humans in mind

AR can improve people’s lives simply by allowing them to experience something that wasn’t possible before. Applying UX principles to AR can help designers create experiences that are clear, integrate easily into daily life, and create powerful emotional responses.

The guidelines we’ve shared aren’t magic bullets, but they do place fundamental guidance around where designers should be focusing their attention when crafting an experience for a user of any age.

What is your take on using UX principles when designing AR experiences? Let us know via social media (Twitter, Facebook, and LinkedIn) and tag @wikitude to join the conversation.

3D, Dev to Dev

Creating 3D content for augmented reality

Content is constantly changing. Designed for TVs and handheld devices in the early 2000s, it now transcends the 2D realm and spills into the world around us. 3D augmented reality content needs to be as immersive as VR advocates ever dreamed, minus the isolation from the outside world.

The more AR becomes part of our lives, the higher the need for content to adapt to the 3D world. It means the content needs to be realistic, spatial, and engaging. And while there are thousands of apps online, most companies are still figuring out what compelling content looks like in AR.

In this post, we’re diving into the role of content in ​​augmented reality, the challenges the industry faces, and the future of spatial content. 

Augmented reality content basics

Augmented reality content is the computer-generated input used to enhance parts of users’ physical world through mobile, tablet, or smart glasses. It can be user-generated (think of social media face filters) or professionally produced by designers working for brands and specialized agencies. 

AR content often comes as 3D models but can also come in visual, video, or audio format.

Whether you are using AR to buy a new IKEA sofa or play a game, the quality of the content you see in the app will make (or break) the AR experience.

Image source: IKEA

The role of 3D content in augmented reality experiences

Among the thousands of AR apps in the market today, the most successful ones have one thing in common: high-quality, engaging AR content. Fail to deliver that, and your project will risk joining the astonishing 99.9% of apps that flop or become irrelevant in the app stores.

Content is the heart of ​​augmented reality. It ensures users have a reason to keep coming back. 

Users might be thrilled to scan an augmented wine bottle a few times and share the experience with friends. But how many times can we expect them to go back and watch the same video? 

Companies must see AR content as a critical component of long-term, well-thought-through digital strategies to ensure app longevity. It means constantly delivering fresh, contextual, and personalized content. 

Easier said than done. From high production costs to a scarcity of skilled professionals, building AR content at scale is one of the biggest challenges companies face, blocking them from keeping their apps relevant in the long run.

Challenges of building 3D content for augmented reality

3D models need to be perfect digital twins of the real world. Combined with other rendering elements (e.g., animation, audio, and physics), they are AR’s most-used type of content and provide an additional immersive layer for the user experience.

What the user doesn’t see is the relatively complex process of creating such realistic visual assets. Production can range from detailed manual modeling and the reuse of computer-aided design data to a photogrammetry-based creation process.

Size limits, file formats, and the total size of the application are just some of the many requirements developers need to understand to build great AR experiences. In addition, the lack of industry standards for AR content and a limited qualified workforce impose significant challenges on the industry.

Building 3D assets: 3D model versus 3D scanning

Before we jump into the technicalities of creating content for AR, there are some basic concepts we need to clarify.

3D modeling vs. 3D scanning

3D modeling and 3D scanning are two ways of building 3D assets for augmented reality. 

3D modeling uses computer graphics to create a 3D representation of any object or surface. This technology is beneficial when used to recreate physical objects because “it does not require physical contact with the object since everything is done by the computer” (Skywell Software). Therefore, 3D modeling becomes ideal for creating virtual objects, scenes, and characters that don’t exist in the real world (think of Pokémons and other fantasy AR games).

3D scanning uses real-world objects and scenes as the base for producing AR assets. Using this method, content creators don’t craft the model from scratch in a program. Instead, they scan the object using one of two methods: photogrammetry or scanning with a 3D scanner device (LiDAR or similar).


The main difference between the two is how they capture the object’s data. While photogrammetry uses images captured by regular smartphones, smart glasses, or tablets, scanning requires special devices equipped with depth sensors to map the object.

This makes photogrammetry more accessible to the broader developer crowd creating AR content, as no special equipment is required. On the flip side, 3D scanners are more reliable.

With either of the two approaches, a point cloud can be extracted and applied in the AR experience. You can read more on the advantages of each method in the 3D point cloud section below.

Ultimately, you can decide between 3D modeling and 3D scanning by assessing whether the physical object is available to scan. If the selected AR object target is not available, then 3D modeling is the way to go.

How is 3D content created for augmented reality?

There are plenty of AR content creation tools available on the market. Some are easy drag-and-drop tools that don’t require coding skills. Others are much more complex and target experienced professionals.

Here’s an overview of the different possibilities:

Image source: DevTeam.Space

3D point cloud: In AR, a point cloud is a virtual representation of the geometry of real-world objects using a collection of points. Generated via photogrammetry software or 3D scanners, these points are captured based on the external surfaces of objects.

Because photogrammetry allows gathering 3D information from 2D images, this method makes content creation more accessible. It overcomes the ownership issues often faced with 3D models: anyone can create a 3D model by simply recording or scanning the real object. 3D scanners (for example, LiDAR-enabled devices) are gradually becoming more available on the market and provide more detailed point clouds thanks to depth sensors.

Commercial tools such as Wikitude Studio, Apple Object Capture, and Amazon Sumerian are examples of photogrammetry-based programs.

AR Object Target Transformation in Wikitude Studio Editor

CAD (Computer-Aided Design): CAD models are commonly the first step to prototyping physical goods, bringing a first product view to life in the digital world. Assisted by software applications, AR developers can repurpose legacy CAD models for augmented reality-based solutions. Existing CAD data can then be used as the input method to create digital representations of the object or environment to be augmented.

Once uploaded into the selected program, CAD data is converted to a format compatible with AR on phones, tablets, and smart glasses. CAD models typically provide accurate information about the object, maximizing the potential for a reliable AR experience. While prevalent in the industrial sector, CAD-based AR experiences are progressively gaining popularity in consumer-facing apps.

Games and computer graphics: authoring tools such as Blender, 3ds Max, and Maya are popular 3D design applications used by AR content creators. Unity, Unreal Engine, and even Apple’s Reality Composer are great tools to assemble the pieces of content and make them work together for augmented reality.

Other 3D models: beyond CAD, other popular 3D model formats can be leveraged to power augmented reality solutions, for example glTF 2.0, FBX, and OBJ. Compatible file formats depend on the program used to build the augmented reality experience.

On the one hand, this wide variety of 3D asset formats has opened the doors for creators in many areas to put their existing models to work for AR. On the other hand, it creates confusion among developers, fueling the debate around the need for standardization in the AR industry and the creation of alternative tools that are intuitive and code-free.

What’s next for AR content creation?

With increased interest in augmented reality, we will see more tools emerging that help to create content, overcome workforce scarcity and deliver actual value through the technology. 

To facilitate content creation, AR companies invest in building platforms that don’t require technical skills (thereby helping to close the workforce gap) to help brands optimize the AR content creation process.

An example is Apple’s latest release, RealityKit 2. This new framework includes a much-awaited Object Capture feature that allows developers to snap photos of real-world objects and create 3D models using photogrammetry.

But if Apple’s announcement gives you déjà vu, you are not wrong. Last year, the AR media went crazy about an app that lets you copy and paste the real world with your phone using augmented reality.  

The topic of interoperability of experiences across platforms and devices is equally important. The ability to code an AR app once and deploy it in several devices and operating systems helps companies bring their projects to market as fast as possible.

The final and most crucial aspect is understanding how 3D content in augmented reality can deliver value to its users. That means setting clear goals for the AR project, understanding how it fits into your digital strategy, and having a deep knowledge of your customer.

What are some of the trends you see in AR content creation? Let us know via social media (Twitter, Facebook, and LinkedIn) and tag @wikitude to join the conversation.


In focus: interview with Markus Eder

At Wikitude, we are proud to say that we have a talent for finding (and keeping) talent. To show you how, we are introducing our new interview series, In Focus. Each article will tell the story of a team member who makes Wikitude what the company is today. Let us introduce Markus Eder, Head of the Computer Vision team, who has celebrated nine years at Wikitude.

What made you take the leap and start your AR/Computer Vision journey?

When I studied computer science for my Bachelor’s degree, many things started shifting in the industry. The first smartphones were just being introduced, and their new capabilities (especially those of the iPhone) allowed a whole new category of applications.

A concept that immediately caught my attention was an app called London Tube. It overlaid the location of the next tube station in London on the live camera feed. I liked the concept and its potential and started reading about the technology and theory behind it. That search first brought me to augmented reality and then to the broader field of Computer Vision (CV). So much so that I decided to focus my Master’s on it.

After doing a semester abroad in Australia, I looked for opportunities to get a paid Master’s thesis. Such an opportunity arose in Salzburg at a local research center. There I developed a concept for AR-assisted pedestrian navigation on a mobile device, combining Geo AR with Computer Vision algorithms to improve the user experience. By the time the thesis was completed, I realized that I wanted to stay in this field.

At the time, the job openings in Computer Vision were scarce, but luckily I landed one directly in Salzburg – at Wikitude.

What were your initial expectations?

When I joined the company, I thought I could continue in the same area as my thesis. But soon I realized there is a clear gap between scientific research and product development. It is one thing to show a prototype proving that a particular technology works; developing that prototype into a market-ready product that works under all circumstances is a whole other matter.

From a research perspective, those days were really exciting. Many Computer Vision concepts and groundworks applied in current AR frameworks were conceived around that time. As research topics like SLAM, Structure-from-Motion, and 3D Reconstruction evolved, it became apparent that these ideas would enable a new generation of AR capabilities (even on a mobile phone). Wikitude, Imagination, and Metaio (now part of Apple) were the first movers to integrate some of these ideas into their products.

What challenges or setbacks did you face along the way?

In 2012, a few AR apps in the market used the device’s sensors (GPS, accelerometer, compass) solely to overlay AR content on the phone’s camera stream. Back then, the focus was on marketing the existing solution rather than improving the technology. So the first technical challenge was shifting from sensor-based to Computer Vision-based algorithms for AR visualization. But once we did, it opened the door to an entirely new category of AR use cases, which have shaped the market as we know it today.

A second challenge was operating in a highly competitive environment. To this day, the leading AR companies are far larger than we are. Despite that, we have stayed at the forefront of the space. I believe this drives all Wikituders to compete with those companies and offer better features and quality.

How did Wikitude transition from an app to an AR SDK provider? 

When Wikitude was still a young startup, the atmosphere encouraged the exploration of new ideas and facilitated the integration of new features into the application. It soon became clear that there was massive potential in allowing other developers to integrate AR technology into their own products. This shift in focus meant restructuring the tech teams to optimize our resources and develop a fully fledged AR SDK.

As you progressed, did any part of your journey change? How?

Initially, the focus lay mainly on the Geo AR-based Wikitude app, so I worked on the Android side. The more we realized CV and AR’s potential, the more my focus shifted in that direction. At this point, we decided to create a new team to focus solely on R&D in that area. For me, it meant that I could work on the CV-based research aspects of the SDK. At the same time, we conducted state-funded research projects with several Austrian universities, which are among the best in AR and computer vision. The research results, together with expanding our team through international hires, gave us a jump-start in the right direction.

Can you share any tips on how to build a successful CV team?

With increased demand for Computer Vision-based features in the SDK, we continued to hire more people. It was hard to find people with a computer science background in general, let alone with the particular skill set we needed. We quickly realized that we had to look for candidates internationally, as it was still a very specialized field.

As a tight-knit collective, we cannot afford to hire the wrong people. I guess I always go with my gut feeling, and time has shown that it works.

There is also an exceptional working atmosphere at Wikitude, where each team member feels they can bring something to the table, which encourages people to do their best.

How were your expectations met throughout your journey at Wikitude?

When I think back to where we started, the whole journey exceeded my expectations – especially considering the market changes in the last couple of years. After all these years, we are one of the leading AR technology providers in the market with a vast customer base. The profile of our customers changed drastically over the years. 

In the early stages of our SDK, customers were mostly interested in creating a “wow” effect by showcasing information in AR. This approach has changed – now, customers come with business cases in mind, where AR has a clear benefit for the users. Just showing something in AR is not enough. Relying heavily on constant communication with our clients, we have significantly changed our offering and the feature set we provide to developers.

Did you achieve what you wanted to? 

After completing my studies, my goal was to deepen my knowledge and continue to work and research in that field. Since then, AR has progressed so far that it influences people’s lives and assists in daily use cases.

The technology hasn’t reached its full potential yet, and I won’t stop working in this field until it has.

From your perspective, what will the future of AR and Computer Vision look like?

I believe that in the future, we’ll move away from the web-based AR experiences where users have to hold a device in their hands as it restricts the use cases and interaction. As technology evolves, we’ll see a more natural and less intrusive way to interact with augmented reality content that is comfortable and intuitive for users. 

Another aspect that will change is the level of immersion. At the moment, in most cases, users statically look at an augmented experience. With hardware that handles heavy computing and advanced optics, augmented experiences will evolve to offer more advanced graphics and interaction. We can already see how devices have solved the problem of localization; next up will be detecting and recognizing the objects in the user’s environment to create context.

As a company, we are working hard on solving these challenges to support more use cases and make the engine smart enough to recognize more complex objects.

What would be your advice for professionals who want to enter and succeed in the area of Computer Vision and Augmented reality? 

Many paths can lead into this area. A solid background in advanced mathematics will help you understand the fundamental concepts of computer vision and AR. The field has become more accessible in the last couple of years, thanks to dedicated programs focused on computer vision and the progress of deep learning and machine learning.

Another helpful step is defining which specific area you’re interested in – whether it’s visualization (an essential aspect of AR) or other areas that build on the fundamentals of computer vision and machine learning. There is much research happening in this area, but trying your hand at the actual technology can be very useful.

Get more insights about our team: Read the interview with Wikitude COO Nicola Radacher

SDK releases

Develop powerful AR apps with the new Wikitude SDK 8.10

The latest Wikitude AR platform release includes updated support for Epson Moverio smart glasses, new Flutter sample app, and stability improvements

The Wikitude AR platform goes through regular quality assurance tests and maintenance and development processes to ensure you have access to the level of quality you need to create high-performing augmented reality experiences.

SDK 8.10 contains all the latest platform maintenance and stability improvements; brings our optimized Epson Moverio AR SDK up to date; and includes the new, highly requested, Flutter sample app.

Epson Moverio

SDK 8.10 support for BT-300 and BT-350 Epson Moverio smart glasses

Starting off with the development that many of you have been waiting for: updated Epson Moverio Wikitude AR SDK.

The Epson Moverio devices are used by enterprises and consumers worldwide to deliver hands-free augmented reality experiences. Wikitude SDK 8.10 offers a fully optimized AR SDK for Moverio BT-300 and BT-350 smart glasses.

The Wikitude AR platform is adapted to make the best out of the unique features of both devices, ensuring optimal performance in a variety of environments and use cases. Among these customizations are:

  • Intel SSE optimization: providing best processing power and performance for both devices;
  • Optimization for stereoscopic view: enabling full 3D see-through (side-by-side view) support of Moverio smart glasses;
  • Personal calibration: enabling perfect alignment between the real world and AR content.

Watch the video to view an Epson Moverio hands-free supported remote assistance use case combined with Wikitude Object Recognition and Tracking technology:

Access the link below to give the Epson Moverio AR SDK a try (download redirects to the signup page for a free trial). Subscription license users are entitled to this free update.  


Sample app for Flutter. Another request from our awesome AR community

Wikitude was the very first AR platform to offer official support for Flutter. For those of you who are unfamiliar with it, Flutter is an open-source UI development toolkit created by Google. It is used to develop natively compiled applications for iOS and Android from a single codebase.

The Wikitude documentation, with SDK 8.10, introduces a new Flutter sample app to help you add augmented reality technology to your projects.

The Flutter Plugin is based on our JavaScript API and includes the Wikitude AR library/framework, sample app, and documentation.

AR SDK Performance and Stability Enhancements

The Wikitude SDK is regularly inspected by our quality assurance team and optimized by our technical team so you can have access to high-performing tools that are always up to date.

As updates are frequent – and ideal for app maintenance and device compatibility reasons – we recommend choosing our subscription license, which includes one year of SDK update releases.

Download Wikitude SDK 8.10

Active Wikitude SDK subscribers are entitled to all SDK version updates released throughout their term. Follow the links below to update your SDK:

New to Wikitude? Download a free Wikitude SDK 8.10 trial version for testing purposes and contact our team to discuss upgrade possibilities.

To explore all SDK options, including smart glasses, plugins, and other dev tools, please access our download page:

Interested in creating an AR project of your own? Access our store to choose your package or contact our team to discuss your specific AR requirements in detail.

SDK releases

Wikitude SDK 8.7: new UWP feature, Extended Image Tracking range, and AR platform improvements

Wikitude SDK 8.7 is out! This update adds some pretty impressive performance enhancements and a few new features you can’t miss

Every 6 weeks, Wikitude runs an effective AR SDK maintenance and quality assurance program to ensure our developers always have access to a high-performing AR platform. Here’s what’s new in our latest release:

  • Increased Image Tracking detection range (+50%)
  • Faster camera frame processing and stability enhancements
  • Preparation to support Apple’s iOS 13 
  • External camera support for Windows in Unity Editor

Faster camera frame processing and increased image tracking detection range (+50%)

The Wikitude AR technology development teams work closely together with our quality assurance unit to programmatically test, tweak, and optimize our AR platform. With every update, our customers can expect high performance and general enhancements.

SDK 8.7 improves the detection range of image targets across devices by 50%. A4-sized image targets can reach impressive recognition distances in the area of 4.7 meters (15.42 feet).
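
As a back-of-the-envelope illustration (our assumption, not an official Wikitude formula), detection distance tends to grow roughly linearly with the physical width of the printed target. Calibrating a quick estimator against the A4 figure above:

```javascript
// Rough rule of thumb (an assumption for illustration, not an official
// Wikitude formula): detection range scales roughly linearly with the
// physical width of the printed image target. Calibrated against the
// A4 figure above (0.21 m wide, detected at ~4.7 m).
const A4_WIDTH_M = 0.21;
const A4_RANGE_M = 4.7;

function approxDetectionRange(targetWidthM) {
  return (targetWidthM / A4_WIDTH_M) * A4_RANGE_M;
}

console.log(approxDetectionRange(0.42).toFixed(1)); // ≈ 9.4 m for a target twice as wide as A4
```

Actual range also depends on the target’s feature quality, lighting, and the device camera, so treat this purely as a planning estimate.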

Additionally, internal changes in the processing pipeline of Wikitude SDK 8.7 resulted in faster camera frame access for all related components and includes a series of additional fixes and stability improvements. Please review the release notes for your platform for an in-depth report.

iOS 13 Preparations

At the Worldwide Developers Conference held in June, Apple announced its new iOS 13 mobile operating system. Even though iOS 13 is scheduled to be officially released in the fall of 2019, Wikitude has already started optimizing its AR SDK.

People occlusion, motion capture, and collaborative sessions are some of the much-awaited ARKit 3 features which you will be able to enjoy. Stay tuned to future Wikitude updates to get the latest and greatest AR for Apple’s iOS 13!

External camera support for Windows in Unity Editor 

Designed to facilitate UWP app development in Unity, this new feature allows developers to plug an external web camera into any UWP-based tablet and use its camera stream with the Wikitude SDK.

The additional camera view helps developers to have a realistic idea of how the AR project is progressing from another perspective in Unity, hassle-free.

Get started with SDK 8.7 for your preferred development platform:

Download Wikitude SDK 8.7

Active Wikitude SDK subscribers are entitled to all SDK version updates released throughout their term. Follow the links below to update your AR SDK:

New to Wikitude? Download a free Wikitude SDK 8.7 trial version for testing purposes or contact our team to discuss upgrade possibilities.

To explore all SDK options, including smart glasses, plugins, and other dev tools, please access our download page:

Interested in creating an AR project of your own? Access our store to choose your package or contact our team to discuss your specific AR requirements in detail.


Wikitude is among the top 1% rated startups by Early Metrics

Wikitude is among the top 1% rated startups, according to European-based rating agency Early Metrics. As a pioneer in the augmented reality industry, Wikitude was awarded 83 out of 100 points, placing the company in Early Metrics’ prestigious club of five-star startups.

Early Metrics’ ratings, which focus on key non-financial metrics, provide an independent assessment of a venture’s growth potential. The ratings support decision-makers, such as investors and corporations, in identifying and understanding in depth the most innovative startups across Europe.

Wikitude is the world’s leading independent AR technology provider with a robust ecosystem of over 100,000 registered developers and 20,000 published apps covering a wide variety of industries and use-cases. Its fully in-house developed SDK enables enterprises, agencies, and developers to create powerful AR solutions for mobile devices and smart glasses that delight users and provide tangible ROI.

With the addition of SLAM technology in January 2017, Wikitude’s SDK is the AR market’s most comprehensive developer tool with a combination of location-based, image-based recognition and tracking, and 3D tracking capabilities. Wikitude’s SLAM-based markerless tracking is the most versatile 3D tracking system available for mobile today.

Martin Herdina, CEO at Wikitude, says: “We are delighted by this award. Over $2 billion of investments went into AR/VR in 2016. Being among the top 1% of startups rated by an independent third party shows that we are headed in the right direction in a super exciting market segment. You can expect even more exciting news from Wikitude this year”.

About Wikitude®
Wikitude is the world’s mobile augmented reality (AR) pioneer and leading AR technology provider for smartphones, tablets and digital eyewear on both iOS and Android. Its fully in-house developed AR technology is available through its SDK, Cloud Recognition and Studio products enabling brands, agencies and developers to achieve their AR goals. Wikitude® is a registered trademark of Wikitude GmbH, for more information please visit:

About Early Metrics
Early Metrics is the pan-European rating agency for startups and innovative SMEs, analyzing non-financial metrics to assess their growth potential. Ratings are free for entrepreneurs and provide them with a third-party assessment, supporting their growth development. Established in London, Paris, and Tel Aviv, Early Metrics works on behalf of private and institutional investors as well as corporate ventures and business units. To get rated or to access rating reports:


Digital agencies

Roomle makes furniture shopping so much more fun

Let’s face it – despite the bright colors, clever Swedish design, and exceedingly happy salespeople, few of us would actually choose to spend our entire Saturday at IKEA. It’s a chore! But good thing for you – we now practically live in Back to the Future, and augmented reality is, in this case, going to get you to spend less time on life administration by streamlining the time you spend planning, designing, and purchasing furniture for your home.

It’s not only one of the best use cases of AR, it’s also one of the most obvious: planning and designing interior spaces using easy-to-understand visuals – while you stand in the space you’re planning. Of course, IKEA already tried it back in 2013 – but the technology has advanced significantly. Now start-up app Roomle is making the process even easier – using Wikitude’s SLAM 3D tracking.

Check out the short demo below:

The benefits are clear, for everyone involved: less hassle, less travel, quicker and more intuitive understanding of how a space will look and feel. Customers love it because it makes their lives easier; retailers love it because it means more sales, and less overhead on showrooms and stores. And those are just the big benefits – here’s a few more:

  • Real-time supply with up-to-date and individually relevant product information
  • Visualization of residential environments and interior architecture
  • Interactive interface creates strong brand connection
  • The personalization factor is enhanced by the unique usability
  • Simpler presentation of complex products
  • Products ‘stick’ in the consumer’s memory and are recognized more quickly

So what makes Roomle the AR room design app of the future? The stuff behind the scenes. It’s got an incredibly simple user interface – users can jump on the app and start designing rooms and spaces intuitively. In the home, they can simply select a product from the catalogue and use their phone’s camera to see it in the live space. Key here is our SLAM 3D markerless tracking tech – without ‘seeing’ the room, the app wouldn’t be able to place the object in it for you to see.

Using Roomle is this easy

Screenshot of the Roomle app, with a 3D white chair overlaid on the floor
Roomle is even more impressive in the hands of a trained professional (that’s a nice way of saying ‘salesperson’!). It turns an iPad into a custom furniture showroom. Sales staff can pick furniture from the brand catalog, configure it according to the customer’s preferences, and demonstrate the result in convincing 3D or augmented reality views, live in every room. See the longer explanation about how Roomle works, here.

So now that we’ve arrived at the future, what’s the future of the future? Good question – for one, we can imagine one-click ordering (à la Amazon) combined with the flat-packing genius of IKEA to facilitate home shopping even more – take a picture, pick your product, click ‘purchase’, and it shows up at your door one day later. What follows? Pre-fabbed house construction – calculate the price of a new floor, painting a room, or installing an addition to your home.

If you’ve been thinking about making some changes around the house, but the hassle of getting out the measuring tape, doing the research, and going shopping has been holding you back – wait no more, give Roomle a try!

Roomle is powered by Wikitude. Get started with the Wikitude SDK today!

SDK releases

Here comes SDK 6.1

Update (August 2017): Object recognition, multi-target tracking and SLAM: Track the world with SDK 7

When we launched Wikitude SDK 6 nearly two months ago, we were excited to see developers jump into markerless SLAM tracking and create fascinating augmented and mixed reality experiences. Today we are proud to release an update for SDK 6 with stability updates and a few new features. Check out what’s new:

Support for OpenGL ES 3.x Graphics API:

Over the past weeks, we have seen that many augmented reality projects are based on graphics APIs more modern than OpenGL ES 2.0. Developers using Wikitude SDK 6.1 can now make use of OpenGL ES 3.x. Support for the Metal Graphics API is currently being worked on, and a new iOS rendering API will be included in our next release.

Improved stability for image tracking:

This release comes with an updated computer-vision engine for image tracking, which delivers a smoother AR experience. Particularly when holding the device still or using larger augmentations, developers will notice more stable tracking and little to no jitter. See the video below for a performance comparison.

Reworked communication from JavaScript API to native code:

An essential part of the JavaScript API is the ability to communicate with parts of the app that are not involved in the augmented reality experience as such, often written in Obj-C or Java. This communication has been based on a custom URL protocol to send and receive data. In Wikitude SDK 6.1 we are introducing a different approach to the communication between JavaScript API and native code based on exchanging JSONObjects directly.
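
A minimal sketch of this JSON-message pattern is shown below. The function and field names are illustrative assumptions, not the actual Wikitude SDK 6.1 API:

```javascript
// Sketch of the JSON-message pattern described above. Function and
// field names are illustrative, not the actual Wikitude SDK 6.1 API.

// AR-experience side: build a plain object to send across the bridge.
function buildMessage(action, payload) {
  return { action, payload };
}

// Simulated native side: in a real app this handler would live in
// Obj-C or Java; here we only parse what crosses the boundary.
function nativeBridge(jsonString) {
  const msg = JSON.parse(jsonString);
  return `handled ${msg.action}`;
}

const msg = buildMessage("openProductPage", { sku: "chair-42" });
console.log(nativeBridge(JSON.stringify(msg))); // → handled openProductPage
```

Compared with encoding data into a custom URL, passing structured JSON avoids manual escaping and keeps nested payloads intact.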

Several stability updates:

SDK 6.1 comes with many stability updates and improvements – the most noticeable being the fix of a nasty bug that prevented 2D and 3D augmentations from being rendered separately. With this fix, the z-order of augmentations is properly respected. Additionally, developers can now use the ADE.js script again to debug experiences in the browser.

For a full list of improvements and fixes, make sure to check out the release notes for all supported platforms and extensions:

For customers with an active subscription, the update is free – depending on your current license key, it might be necessary to issue a new key. Please reach out via email for any license key related issues.

All other existing customers can try out Wikitude’s cross-platform SDK 6.1 for free and purchase an upgrade for Wikitude SDK 6.1 anytime.

The update SDK package is available on our download page for all supported platforms, extensions and operating systems. If you have any questions feel free to reach out to our developers via the Wikitude forum.