Wikitude reveals SLAM technology at CES in Las Vegas
After a long period of confidentiality, Wikitude is proud to reveal a long-held secret and is making a SLAM dunk at CES 2015 in Las Vegas, quite literally.
Our skilled team of software engineers has finally given us the GO to spread the word about our Simultaneous Localization And Mapping (SLAM) technology. If you’re not already familiar with it, SLAM essentially does two things at the same time. On one hand, it scans a 3D scene or any real-life environment, allowing the device capturing the data to localize itself. On the other, it simultaneously maps this environment, allowing digital content to be augmented into the scene.
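To make the "two things at once" idea concrete, here is a deliberately simplified 1-D sketch of a SLAM-style loop. This is a hypothetical illustration, not Wikitude's actual algorithm: the `slam_step` function, the 0.5 blending weights, and the known landmark associations are all illustrative assumptions. The point is simply that pose estimation (localization) and landmark estimation (mapping) feed each other in every iteration.

```python
# Toy 1-D SLAM sketch (illustrative only, NOT Wikitude's engine):
# the device estimates its own position while building a map of
# landmarks from range observations, refining both as it moves.

def slam_step(pose, control, observations, landmark_map):
    """One SLAM iteration: predict pose from motion, then update
    pose and map from landmark observations (known associations)."""
    # Localization, prediction: dead-reckon with the motion command.
    pose += control

    # Mapping + correction: each observation is (landmark_id, measured_range).
    for lid, rng in observations:
        if lid in landmark_map:
            # Known landmark: its mapped position corrects the pose estimate.
            correction = (landmark_map[lid] - rng) - pose
            pose += 0.5 * correction  # blend prediction with measurement
            # ...and the corrected pose refines the landmark in turn.
            landmark_map[lid] = 0.5 * landmark_map[lid] + 0.5 * (pose + rng)
        else:
            # New landmark: map it relative to the current pose estimate.
            landmark_map[lid] = pose + rng
    return pose, landmark_map


# Usage: move forward one unit per step, observing landmark "A".
pose, world = 0.0, {}
pose, world = slam_step(pose, 1.0, [("A", 4.0)], world)  # A first mapped at 5.0
pose, world = slam_step(pose, 1.0, [("A", 3.0)], world)  # map and pose agree
```

Real SLAM systems work in 3D, handle noisy data and unknown associations, and typically use probabilistic filters or graph optimization rather than the fixed blending above, but the mutual refinement of pose and map is the same principle.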
Demonstrated in the video below, the core Wikitude engine is now capable of augmenting a 3D model in real time while simultaneously keeping track of its position in relation to its surroundings. The algorithm scans and “understands” the basketball court scene, then augments the 3D model of the red Lamborghini next to the physical Mercedes. The second augmentation is the scoreboard on the plain white wall above the court. The scoreboard remains mounted to the wall in a stable position even in this very “low-feature environment”: it stays put even when only a few details of the basketball players and basket are within the device’s field of view.
As you can imagine, the possible SLAM use cases are endless in both the enterprise and consumer space. We’re truly excited to finally announce and demonstrate our application of this technology to you today. We’ll continue advancing it, and our R&D team is working ‘round the clock to make 3D object and environment recognition part of our SDK. We expect to make SLAM available as a product soon.