3D Photogrammetry #

3D models are a relatively recent addition to ambientCG.

Capturing process #

Setup #

img/IMG_20210320_095150.jpg

I create the photos for 3D scans in an Amazon Basics Mobile Photo Studio (which I use without the front curtain).

Initially I would just place the objects on the ground inside it, but despite the uniform lighting the objects still showed noticeable shadows near the ground. For that reason I copied an approach from Kevin Parry’s Fruit stopmotion video and added a plexiglass pane to remove all shadows.

This is what the full setup looks like. Instead of placing the target on a turntable, I rotate the object itself, which has the same effect. The white background ensures that the software has nothing else to track and therefore reconstructs only the target object.

The full setup while shooting bread.

img/IMG_20210320_111911.jpg

The fact that I have to reach into the box makes it impossible to close the front curtain. To counteract this I place my Nanguang LuxPad 23 right below the camera to replace the light that would normally be reflected from the white curtain.

img/WIN_20210320_10_06_58_Pro.jpg

Support light turned off.

img/WIN_20210320_10_07_03_Pro.jpg

Support light turned on.

Execution #

For most objects I follow a similar process.

The entire capturing process of one object in a video.

Here is the entire set of images:

img/Scene.png

Keeping objects in place #

img/IMG_20210320_112942.jpg

For some objects it can be difficult to perform the rotation described above. While it is generally possible to record two disconnected chunks of the same object, I wouldn’t recommend it because merging these chunks in Metashape often fails.

So to keep the advantages of one continuous sequence of images, I like to use modeling clay or Patafix to keep objects from rolling away or tipping over.

Processing in Metashape #

Import all the images into one chunk in Metashape.

Alignment #

Start by aligning the images. Here are the settings that I use. You might want to play around with the number of key points and tie points; 75K usually works pretty well.

img/Untitled.png
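The same alignment step can also be scripted through Metashape Pro’s Python API. The sketch below is an illustration, not my exact configuration: the keyword names follow the Metashape API (`keypoint_limit` maps to the “Key point limit” field in the Align Photos dialog, and a `tiepoint_limit` of 0 means “unlimited”), and the chunk is kept as a parameter so the sketch has no hard dependency on the Metashape import.

```python
# Sketch of the Align Photos step via the Metashape Pro Python API.
# "keypoint_limit" corresponds to the "Key point limit" dialog field.

def align_photos(chunk, keypoint_limit=75_000, tiepoint_limit=0):
    """Match features and align cameras on a Metashape chunk.

    `chunk` is expected to be a Metashape.Chunk; it is passed in as a
    parameter so this sketch stays free of the Metashape import.
    """
    chunk.matchPhotos(keypoint_limit=keypoint_limit,
                      tiepoint_limit=tiepoint_limit)
    chunk.alignCameras()
```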

Because everything was recorded in one continuous motion against a completely white background, it can be assembled as one chunk. But there are still artifacts that were caused by my hands when I had to hold the bread to rotate it over the span of several photos.

img/Untitled%201.png

To remove them I start by building a mesh from the sparse cloud.

img/Untitled%202.png

img/Untitled%203.png

The resulting mesh obviously contains some artifacts from my fingers. I use the Free-Form Selection to only select the bread itself and then click on Crop Selection to only keep the selected area.

img/Untitled%204.png

img/Untitled%205.png

img/Untitled%206.png

After completing these steps I have a mesh of just the bread. Its quality is pretty low since it was created from just the sparse cloud, but it is good enough to create masks from.

I then go to Import → Import Masks and generate masks from the original model.

img/Untitled%208.png

The subject is now perfectly masked in every photo - no more artifacts from the background.

img/Untitled%209.png

But one problem remains. The mask does not exclude the fingers in the few frames where I had to hold the subject in my hands.

img/Untitled%2010.png

To fix this I use the “Intelligent Scissors” and manually remove the fingers from the few frames where it is necessary.

img/Untitled%2011.png

img/Untitled%2012.png

img/Untitled%2013.png

img/Untitled%2014.png

After that I can start the “real” alignment process.

img/Untitled%2015.png

This time the sparse cloud is nice and clean.

img/Untitled%2016.png

Building the “master” model #

Dense Cloud, Mesh & Texture #

With the alignment finished it’s now time to build the real dense cloud. I usually use the highest quality setting.

img/Untitled%2017.png

img/Untitled%2018.png
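Scripted, the dense cloud build is two calls. In this sketch I assume “highest quality” means the Ultra High preset, which corresponds to `downscale=1` in the API (2 would be High, 4 Medium); note that `buildDenseCloud()` was renamed `buildPointCloud()` in Metashape 2.x.

```python
# Sketch: dense cloud build via the Python API. downscale=1 is assumed to
# mean the "Ultra High" quality preset (2 = High, 4 = Medium).

def build_dense(chunk, downscale=1):
    """Build depth maps, then the dense cloud, on a Metashape chunk."""
    chunk.buildDepthMaps(downscale=downscale)
    chunk.buildDenseCloud()  # buildPointCloud() in Metashape 2.x
```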

There might still be some tiny artifacts around the dense cloud, so I select and delete them.

img/Untitled%2019.png

After that I build the mesh (via the “Workflow” tab).

img/Untitled%2020.png

The mesh might contain some holes. To fix this I go to Tools > Mesh > Close Holes….

img/Untitled%2021.png

Another potential issue is loose parts. Gradual Selection can help with that by selecting unconnected geometry so that it can be deleted.

img/Untitled%2022.png

The next step is the texture for the master model. Building textures also happens via the Workflow menu. This step can pretty much be run with the default settings. I like to set the texture quality to slightly above what I want to have for the final models. For example, when aiming for 4K I use 6K:

img/Untitled%2023.png
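The oversampling idea can be written down as a tiny helper. The 1.5× factor is inferred from the single 4K → 6K example above, not a rule Metashape imposes, so treat it as a tunable assumption.

```python
def bake_size(target_px, oversample=1.5):
    """Resolution to bake the master texture at, given the final target.

    Baking slightly above the target (6K for a 4K goal) keeps the texture
    sharp after the final downscale. The 1.5x factor is inferred from the
    4K -> 6K example and can be tuned.
    """
    return int(target_px * oversample)

# A 4K (4096 px) target -> bake at 6K (6144 px)
```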

This completes the master model. It has extremely dense geometry and a high-res texture.

img/Untitled%2024.png

Orientation #

img/Untitled%2025.png

Metashape usually generates the model with a completely random orientation. This is not immediately obvious in the software because the viewport gives me no real frame of reference. To change this, I go to Model > Show/Hide Items > Show Grid. This adds a “floor” to the scene which can be used to align the object in 3D space.

Here is the initial position:

img/Untitled%2027.png

I then use the Move/Rotate Object tools to align the model to the grid.

img/Untitled%2028.png

Creating LOD versions #

After generating, texturing and aligning the model I can finally get started on creating the different LOD (Level of Detail) versions of the model.

The usual versions that I like to create are 50,000 / 5,000 / 500 polygons.

Models #

I use the Decimate Mesh function to reduce the model to 50,000 faces. It’s important not to replace the default model (select “No”).

img/Untitled%2029.png

img/Untitled%2030.png

For some meshes it might be a good idea to also apply a bit of smoothing.

img/Untitled%2031.png

By repeating this step I generate all three versions.

img/Untitled%2032.png
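The decimate-and-smooth cycle can also be scripted. This sketch makes two assumptions worth flagging: `decimateModel` is assumed to replace the active model in place, so each level is handed to an `exporter` callback (a placeholder for e.g. a wrapper around `chunk.exportModel`, or logic that duplicates the model) before decimating further, and `smoothModel` stands in for the optional smoothing pass.

```python
# Sketch: producing the 50,000 / 5,000 / 500-face LOD chain via the API.
# Assumes decimateModel() replaces the active model in place, so each
# level must be saved (via `exporter`) before the next, coarser pass.

def make_lods(chunk, exporter, face_counts=(50_000, 5_000, 500),
              smooth_strength=None):
    """Decimate the master model down through each LOD level in turn."""
    for faces in face_counts:
        chunk.decimateModel(face_count=faces)
        if smooth_strength is not None:
            chunk.smoothModel(smooth_strength)  # optional smoothing pass
        exporter(chunk, faces)  # exporter is a hypothetical callback
```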

Texturing #

And finally I generate textures for all three versions. This is fairly easy because I can simply bake the diffuse, normal and occlusion maps from the original model onto the LOD versions. Here are the settings, which I run on all three versions:

img/Untitled%2033.png

img/Untitled%2034.png

img/Untitled%2035.png

And with that the model is finished.

img/3DBread007_PREVIEW.jpg