Azure Object Anchors – Object understanding with HoloLens
Azure Object Anchors is an Azure framework that enables automatic alignment and understanding of objects without manual adjustment, QR codes, or other markers.
Bouvet, as a Mixed Reality Partner, got early access to Azure Object Anchors. This technology enables advanced object tracking and automatic alignment on HoloLens 2.
Working in tech, we have seen our phones track objects for years. Face filters in apps like Messenger and Snapchat are commonplace thanks to the tracking capabilities of ARKit (iOS) and ARCore (Android). The HoloLens, on the other hand, has until now depended on external markers like QR codes, on external APIs, or on cloud-based services for image recognition.
Azure Object Anchors changes this. It is an Azure framework targeted at the HoloLens 2 that enables automatic object understanding without manual adjustments or QR codes. Even better, it can do this without an internet connection.
Getting started with Object Understanding
Object Understanding consists of two parts: training and execution.
The HoloLens needs to understand what kind of object it is detecting. We train it using a 3D model of the object in question: the model is run through the Azure cloud-based training framework, and the result is a training file containing a reference representation of the object.
Step two is to bundle this training file with the app on the HoloLens. With it, the app can detect and track physical objects in the spatial map generated by the HoloLens, without the use of external markers.
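The two steps above can be sketched as follows. This is an illustrative outline, not the real Object Anchors SDK (the actual conversion service and on-device runtime are exposed through Microsoft's Object Anchors SDKs); all function names here are hypothetical placeholders.

```python
# Hedged sketch of the two-step Object Understanding workflow.
# Function names are illustrative placeholders, not real SDK calls.

def train_model_in_cloud(cad_model_path: str) -> str:
    """Step 1 (cloud): submit a 3D model to the training service and
    get a trained object model file back."""
    # Placeholder: a real implementation would upload the model and
    # poll the Azure conversion job until it completes.
    return cad_model_path.rsplit(".", 1)[0] + ".ou"

def detect_objects(trained_model: str, spatial_map: list) -> list:
    """Step 2 (on device): match the trained model against the HoloLens
    spatial map; the real runtime returns detected instances with poses."""
    # Placeholder match: the real runtime searches the spatial mesh for
    # geometry matching the trained model, with no markers or internet needed.
    return [feature for feature in spatial_map if feature == trained_model]

trained = train_model_in_cloud("machine.obj")   # bundled with the app
detections = detect_objects(trained, ["machine.ou", "pallet.ou"])
```

The point of the split is that only step 1 needs the cloud; once the training file ships with the app, detection in step 2 runs entirely on the device.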
Possibilities with Azure Object Understanding
Showing contextual information is a central part of augmenting, or enhancing, reality: we can augment the user’s surroundings based on where he or she is, presenting the right information to the right user at the right time.
Another advantage of the HoloLens, compared to mobile devices, is that users have both hands free while doing the actual work. The HoloLens supports input methods like gaze-to-click and voice, letting users interact even with both hands occupied. Data related to work logs or the task at hand is shown to the user as holograms.
There are huge potential gains within operation, training, and visual inspection. Imagine a production facility where a worker approaches a machine: the HoloLens understands which machine it is and prompts the worker with the correct contextual information, such as the machine’s maintenance documents, 3D models, or planned maintenance instructions. The worker can also immediately visualize the machine’s real-time data (pressure, production capacity, temperature, etc.) as informative information layers.
Proof of Concept – EV battery status
In our test project we wanted to visualize an EV’s remaining battery charge and anchor this visualization on top of where the actual battery is placed in the car. At the same time, we wanted to access the EV’s API for a deeper integration and additional contextual information.
Azure Object Anchors is a supplement, meant to be used together with other APIs like Azure Remote Rendering (ARR). ARR renders high-quality 3D models in the cloud and streams the result to the (relatively) low-powered HoloLens device. In the video above you see Object Anchors, Remote Rendering, and the Tesla API all working together.
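Anchoring the battery hologram works because a detected object comes with a pose, a rigid transform (rotation plus translation) from the object's model space into the world. Knowing where the battery sits in the car's model space, we transform that offset into world coordinates and place the hologram there. The sketch below shows the math with plain Python; the numbers and names are illustrative assumptions, not values from our project.

```python
import math

def transform_point(rotation, translation, point):
    """Apply a 3x3 rotation matrix and a translation vector to a 3D point,
    i.e. map a model-space point into world space."""
    return tuple(
        sum(rotation[r][c] * point[c] for c in range(3)) + translation[r]
        for r in range(3)
    )

# Assumed example pose: the car is detected 2 m in front of the user,
# rotated 90 degrees about the vertical (y) axis.
theta = math.radians(90)
rotation = [
    [math.cos(theta), 0.0, math.sin(theta)],
    [0.0, 1.0, 0.0],
    [-math.sin(theta), 0.0, math.cos(theta)],
]
translation = (0.0, 0.0, 2.0)       # detected object position, metres

# Assumed battery location in the car's own model space.
battery_offset = (0.0, 0.2, -0.5)

# World-space position where the battery hologram should be anchored.
world_pos = transform_point(rotation, translation, battery_offset)
```

In a real app, a rendering engine such as Unity does this transform for you when the hologram is parented to the detected object, so the overlay follows the car if the detection updates.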
Finally, the HoloLens has a solid foundation for Object Understanding. Contact us today and challenge us on how this technology can solve your pain points.