Using photogrammetry for content production in our AR/VR/MR workflow allows us to create detailed and photorealistic models in a fraction of the time. Here we will discuss in more detail what it is and how it can be used.

Creating detailed 3D assets for VR/AR/MR is already a time-consuming process, but creating detailed assets of existing objects and spaces is even harder.  Photogrammetry lets us capture those objects and spaces as detailed, photorealistic models in a fraction of the time.

What is photogrammetric processing and why use it?

To keep a long story short, photogrammetric processing is a method of creating a 3D model from a series of photographs of the actual object.  When we get a request to create 3D assets for real-time applications, there are a number of factors to consider in how we create them.  Time and budget are the two biggest limiting factors that can clip the wings of what can be created for these projects.  Modeling, UV unwrapping, sculpting, retopologizing, and texturing each model can quickly deplete available resources.  When a pre-existing object or space needs to be replicated, photogrammetry is a great option for producing photorealistic results fairly quickly (compared to a standard 3D pipeline).

Photogrammetry_Inline_Dinosaur1

The basic idea of this process is that we take a large quantity of still photographs of an object and then use software to create a 3D model based on those photos.  The number of photos, the quality of the photos taken, and proper lighting are some of the many considerations required to get a model working properly down the post-production pipeline.  These photos then go through a number of processing steps before entering the software, which stitches them all together.  Once in the photogrammetry application, the model is built out in stages.  Each stage builds on information from the previous one and gets closer to a final model with textures ready for export.
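To make the staged idea concrete, here is a minimal sketch of that pipeline in Python.  The stage names follow the steps common to photogrammetry tools (feature detection, matching, sparse reconstruction, dense reconstruction, meshing, texturing); every function body here is a placeholder stand-in, not any particular application's real processing, and the numbers it produces are illustrative only.

```python
# Placeholder sketch of a staged photogrammetry pipeline: each stage
# consumes the output of the previous one, mirroring how real tools
# build toward a final textured model.

def detect_features(photos):
    # A real tool would find distinctive keypoints in every photo.
    return {"photos": photos, "features": [f"kp:{p}" for p in photos]}

def match_features(data):
    # Matches keypoints across overlapping photos to align the cameras.
    data["matches"] = len(data["features"]) - 1
    return data

def build_sparse_cloud(data):
    data["sparse_points"] = data["matches"] * 100   # illustrative count
    return data

def build_dense_cloud(data):
    data["dense_points"] = data["sparse_points"] * 50
    return data

def build_mesh(data):
    data["triangles"] = data["dense_points"] // 2
    return data

def bake_texture(data):
    data["textured"] = True
    return data

STAGES = [detect_features, match_features, build_sparse_cloud,
          build_dense_cloud, build_mesh, bake_texture]

def run_pipeline(photos):
    data = photos
    for stage in STAGES:   # each stage builds on the previous one
        data = stage(data)
    return data

result = run_pipeline(["IMG_001.jpg", "IMG_002.jpg", "IMG_003.jpg"])
print(result["textured"], result["triangles"])
```

The point of the structure is simply that the stages are strictly ordered: a failure early (bad photos, poor matches) propagates through everything downstream, which is why photo quantity, quality, and lighting matter so much up front.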

Photogrammetry_Inline_Model1-process

One caveat to this basic idea comes in the form of emerging technology like Microsoft's HoloLens mixed reality head mounted displays (HMDs), Magic Leap, or the Daqri Smart Helmet.  These "MR" HMDs use infrared and depth cameras to become spatially aware, meaning they scan and create a 3D map of the environment around you.  The head mounted displays use these maps to accurately place holograms in your environment.  However, with access to those maps (machine-made 3D meshes), 3D specialists gain a head start in the photogrammetry process simply by looking at objects in a room.  The maps can then be textured using the HMD's normal light spectrum camera feed.  This area of exploration, while still in its infancy, promises to greatly streamline the process of 3D capture for use directly in mixed reality experiences.

SurfaceReconstruction_Inline

The tricky part of getting a game-ready (VR/AR/MR) asset using photogrammetry is figuring out how to create a low poly model from the high resolution model that is exported.  Normally this is done through a time-consuming process called retopologizing, which optimizes the model for a realtime application.  Finding automated solutions for this step, although not easy, is integral to completing the process in a timely manner.
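One of the simplest automated alternatives to hand retopologizing is vertex-clustering decimation: snap vertices onto a coarse grid, merge the vertices that land in the same cell, and drop any triangle that collapses.  The sketch below is a minimal pure-Python illustration of that idea; production tools use far more shape-preserving methods (e.g. quadric edge collapse), so treat this as a teaching example, not a usable decimator.

```python
# Vertex-clustering decimation sketch: merge vertices by grid cell,
# then rebuild the triangle list, discarding degenerate triangles.

def decimate(vertices, triangles, cell=1.0):
    cell_of = {}        # original vertex index -> grid cell key
    merged = {}         # grid cell key -> new (merged) vertex index
    new_vertices = []
    for i, (x, y, z) in enumerate(vertices):
        key = (int(x // cell), int(y // cell), int(z // cell))
        cell_of[i] = key
        if key not in merged:
            merged[key] = len(new_vertices)
            new_vertices.append((x, y, z))   # keep first vertex per cell
    new_triangles = []
    for a, b, c in triangles:
        ta, tb, tc = merged[cell_of[a]], merged[cell_of[b]], merged[cell_of[c]]
        if ta != tb and tb != tc and ta != tc:   # drop collapsed triangles
            new_triangles.append((ta, tb, tc))
    return new_vertices, new_triangles

# Tiny demo: two nearly coincident vertices fall in the same cell and merge,
# so one triangle collapses and is removed.
verts = [(0.0, 0.0, 0.0), (0.1, 0.0, 0.0), (2.0, 0.0, 0.0), (0.0, 2.0, 0.0)]
tris = [(0, 1, 2), (0, 2, 3), (1, 2, 3)]
nv, nt = decimate(verts, tris, cell=1.0)
print(len(nv), len(nt))   # fewer vertices and triangles than we started with
```

The trade-off is clear even in this toy: the coarser the grid cell, the lower the poly count, but the more surface detail (and clean edge flow) is lost, which is exactly why fully automated decimation of photogrammetry scans remains "not easy."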

“One of the benefits of using photogrammetry to create a space, instead of using 360° video, is the ability to walk around and interact with the space.”

How can these assets be used?

Depending on the objective, there are a number of uses where this process makes sense.  Looking to create a custom "model" configurator or paint your own?  We can create an asset from an existing object and manipulate areas of the image data for our needs.  For one paint demo, we created a number of models, removed all of the texture data, and used the models as a blank canvas.  This allowed us to paint on them live in a VR environment and visualize the object in real time.  Another example would be recreating an actual object for a safety training demo.  Instead of modeling the tools and objects needed for a real-life simulation, we can use photogrammetry to create a virtual version of the object that we can then use in a VR environment.
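At its core, the "blank canvas" trick is just writing colors into the model's texture at the UV coordinate being touched.  The sketch below shows that mapping in plain Python, with a texture represented as a grid of RGB tuples; in the actual VR demo this kind of painting happens on the GPU every frame, so the texture size, brush shape, and function names here are all illustrative assumptions.

```python
# Texture painting sketch: UV coordinates in [0, 1] map to pixel
# indices in a W x H texture, and a small square "brush" stamps a color.

W, H = 256, 256
texture = [[(255, 255, 255) for _ in range(W)] for _ in range(H)]  # blank white canvas

def paint(texture, u, v, color, radius=2):
    # Convert UV (0..1) to pixel coordinates, then stamp a square brush.
    px = min(int(u * W), W - 1)
    py = min(int(v * H), H - 1)
    for y in range(max(0, py - radius), min(H, py + radius + 1)):
        for x in range(max(0, px - radius), min(W, px + radius + 1)):
            texture[y][x] = color

paint(texture, 0.5, 0.5, (200, 30, 30))   # paint a red dab mid-texture
```

Because the photogrammetry model already carries clean UVs from the texturing stage, stripping its photo-based texture leaves a ready-made canvas for exactly this kind of live painting.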

Environments and spaces can be recreated using photogrammetry as well.  One of the benefits of using photogrammetry to create a space, instead of using 360° video, is the ability to walk around and interact with the space.  With this method, the user is not fixed to a set viewing angle or moving rail system, but instead the user can walk around and explore the space at their leisure.  From there, more objects and detail can be added to fit the needs of the experience.

“When a pre-existing object or space needs to be replicated, photogrammetry is a great option for producing photorealistic results fairly quickly (compared to a standard 3D pipeline).”

Photogrammetry is by no means the end-all-be-all solution for creating 3D assets.  There are many challenges that arise in this process and many situations where photogrammetry may not be the best solution.  However, when we are looking to create photo based models in a shorter amount of time, especially for rapid prototyping in VR, this process is a great tool to have in our digital toolbox.
