Monday, December 13, 2021


Azure Object Anchors:

This article continues a series on the operational engineering aspects of the Azure public cloud. Here we take a break to discuss a preview feature named Azure Object Anchors. An Azure preview feature is available for customers to use, but it does not necessarily carry the same service level as services that have reached general availability.

Azure Object Anchors is a service in the mixed-reality category. It detects an object in the physical world using a 3D model of that object. The model may not line up with its physical counterpart without a translation and a rotation; this transform is referred to as the pose of the model and carries the acronym 6DoF, for the six degrees of freedom. The service accepts a 3D object model and outputs an Azure Object Anchors model. The generated model can be used alongside a runtime SDK to let a HoloLens application load the object model and detect and track instances of it in the physical world.
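The idea of a pose can be sketched as a rotation plus a translation applied to each vertex of the model. The snippet below is a simplified illustration in Python, not the SDK's API; the function name and the restriction to a single rotation axis are assumptions made for brevity.

```python
import math

def apply_pose(points, yaw, translation):
    """Apply a simplified 6DoF pose (a rotation about the vertical axis
    plus a translation) to a list of 3D model points. A full pose would
    use a 3x3 rotation matrix or a quaternion to cover all three
    rotational degrees of freedom."""
    c, s = math.cos(yaw), math.sin(yaw)
    tx, ty, tz = translation
    posed = []
    for x, y, z in points:
        # Rotate about the z (vertical) axis, then translate.
        posed.append((c * x - s * y + tx, s * x + c * y + ty, z + tz))
    return posed

# A point one meter along x, rotated 90 degrees and shifted one meter along x,
# ends up near (1, 1, 0).
print(apply_pose([(1.0, 0.0, 0.0)], math.pi / 2, (1.0, 0.0, 0.0)))
```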

Some example use cases enabled by this model include:

1) Training: it creates a mixed-reality training experience for workers without the need to place markers or adjust hologram alignments. It can also augment Mixed Reality training experiences with automated detection and tracking.

2) Task guidance: a set of tasks can be simplified for the user when presented through Mixed Reality.

This is different from object embedding, which finds salient objects in a vector space. Object Anchors is an overlay, or superimposition, of a model on top of video of the existing physical world, and it requires a specific object that has been converted to the vector space. A service that performs the automated embedding and the overlay together is not yet available.
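To make the contrast concrete, object embedding matches by distance in a vector space rather than by pose in physical space. The toy lookup below is purely illustrative and is not part of Object Anchors; the catalog, vectors, and function names are invented for the example.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def most_similar(query, catalog):
    """Return the catalog entry whose embedding is closest to the query."""
    return max(catalog, key=lambda name: cosine_similarity(query, catalog[name]))

# Hypothetical embeddings for two object classes.
catalog = {"chair": (0.9, 0.1, 0.0), "table": (0.1, 0.9, 0.2)}
print(most_similar((0.8, 0.2, 0.1), catalog))  # closest match in the catalog
```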

A conversion service transforms the 3D asset into the model. The asset can come from a Computer-Aided Design diagram or from a scan. It must be in one of the supported file formats: fbx, ply, obj, glb, or gltf. The unit of measurement for the 3D model must be one of the Azure.MixedReality.ObjectAnchors.Conversion.AssetLengthUnit enumeration values. A gravity vector is provided as an axis. A console application in the samples converts this 3D asset into an Azure Object Anchors model. An upload requires the account ID (a GUID), the account domain that is the named qualifier for the resource to be uploaded, and an account key.
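A pre-flight check before uploading might look like the following. This is a hypothetical helper, not part of the Azure.MixedReality.ObjectAnchors.Conversion SDK, which performs its own validation; the supported-format set is taken from the list above, and the gravity check is an assumption.

```python
# Supported asset formats per the conversion service's documentation.
SUPPORTED_FORMATS = {"fbx", "ply", "obj", "glb", "gltf"}

def validate_asset(path, gravity):
    """Hypothetical pre-flight check: verify the asset's file extension is
    supported and the gravity vector is non-zero before uploading."""
    ext = path.rsplit(".", 1)[-1].lower()
    if ext not in SUPPORTED_FORMATS:
        raise ValueError(f"unsupported asset format: {ext}")
    if not any(abs(component) > 0 for component in gravity):
        raise ValueError("gravity vector must be non-zero")
    return ext

# A glb asset with gravity pointing down the y axis passes the check.
print(validate_asset("chair_model.glb", (0.0, -1.0, 0.0)))
```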

The converted model can also be downloaded and visualized using its mesh. Instead of building a scene to visualize it, we can simply open the “VisualizeScene” scene and add it to the scene build list. Only VisualizeScene must be included in the Build Settings; all other scenes should be excluded. Next, from the Hierarchy panel, select the Visualize GameObject and then press the Play button at the top of the Unity editor. Ensure that the Scene view is selected. With the Scene view’s navigation controls, we can then inspect the Object Anchors model.

After the model has been reviewed, it can be copied to a HoloLens device that has the runtime SDK for Unity, which can then detect and track physical objects that match the original model.

Thanks
