Photogrammetry vs. NeRFs vs. Gaussian splatting – pros and cons
A comparison of photogrammetry, NeRFs, and Gaussian splatting for 3D modeling, with Teleport by Varjo offering an intuitive iOS app for easy, high-quality 3D scanning and processing.
The oldest of the three, photogrammetry is a technique based on large sets of photographs. It relies on multiple images of the same object taken from different angles. Photogrammetry software then detects the points where these images should join together so a 3D model can be constructed.
As long as the source images themselves are high-quality, the result is a highly detailed and accurate 3D representation of the object, which can be anything from a small item to something very large, such as terrain or a building. A 3D model created with photogrammetry has a true scale and can be measured, and the lighting conditions of the source images are baked into the model.
The primary advantages of photogrammetry are its simplicity and the high fidelity of the textures in the 3D models it produces – again, as long as the source images are of high quality. You can, for example, photograph a small object with a high-end camera or use drone footage of a large one, and that detail will be preserved in the model.
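To illustrate the geometry underneath photogrammetry: the position of a feature seen in two photos can be recovered by intersecting the viewing rays from the two camera positions. The sketch below does this in 2D with plain Python; real photogrammetry pipelines solve the same kind of problem in 3D, across thousands of matched features, with the camera poses themselves also estimated from the images.

```python
import math

def bearing(camera, point):
    """Unit direction from a camera position toward an observed feature."""
    dx, dy = point[0] - camera[0], point[1] - camera[1]
    n = math.hypot(dx, dy)
    return (dx / n, dy / n)

def intersect_rays(c1, d1, c2, d2):
    """Intersect two 2D rays p = c + t*d by solving t*d1 - s*d2 = c2 - c1
    (Cramer's rule on the 2x2 system)."""
    det = d1[0] * (-d2[1]) - (-d2[0]) * d1[1]
    if abs(det) < 1e-12:
        raise ValueError("rays are parallel; no unique intersection")
    bx, by = c2[0] - c1[0], c2[1] - c1[1]
    t = (bx * (-d2[1]) - (-d2[0]) * by) / det
    return (c1[0] + t * d1[0], c1[1] + t * d1[1])

# Two "photos" of the same landmark, taken from different positions:
landmark = (2.0, 3.0)
cam_a, cam_b = (0.0, 0.0), (4.0, 0.0)
p = intersect_rays(cam_a, bearing(cam_a, landmark),
                   cam_b, bearing(cam_b, landmark))
# p recovers the landmark's position from the two viewing directions alone
```

Note that a single camera position only constrains the direction to the feature, not its distance – which is exactly why photogrammetry needs multiple images from different angles.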
Neural radiance fields (NeRFs) are a more recent technology that creates 3D models from 2D images using deep learning. Unlike photogrammetry, which builds 3D models from large, detailed sets of images, a NeRF can create models from a much more limited set of photos.
NeRF models the light and color information as it moves through space, creating a continuous volumetric scene. This means it can also be used to easily generate new viewpoints that you did not originally capture in your images. As NeRF can render scenes with complex lighting and viewpoints with a more limited set of images, it’s particularly useful for creating virtual reality environments where the user can move around.
One advantage NeRFs have over photogrammetry is that they are better suited to capturing the environment around an object in addition to the object itself, so skies, objects on the horizon, ceilings of indoor spaces, and the like are much easier to show.
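The "light and color moving through space" idea comes down to volume rendering: sample density and color along each camera ray and alpha-composite the samples front to back. Below is a minimal grayscale sketch, assuming uniform step size; in a real NeRF, the densities and colors at each sample point are predicted by a neural network rather than given directly.

```python
import math

def composite_ray(densities, colors, step):
    """Accumulate color along one camera ray (emission-absorption model).

    densities: volume density (sigma) at each sample point along the ray
    colors:    grayscale emission at the same points
    step:      distance between consecutive samples
    """
    transmittance = 1.0  # fraction of light not yet absorbed
    result = 0.0
    for sigma, color in zip(densities, colors):
        alpha = 1.0 - math.exp(-sigma * step)  # opacity of this segment
        result += transmittance * alpha * color
        transmittance *= 1.0 - alpha  # later samples are partly occluded
    return result

# A ray through empty space that then hits a dense, bright surface:
hit = composite_ray([0.0, 0.0, 50.0], [0.0, 0.0, 1.0], step=0.1)
# A ray through empty space contributes nothing, regardless of color:
miss = composite_ray([0.0, 0.0, 0.0], [1.0, 1.0, 1.0], step=0.1)
```

Because this function works for any ray origin and direction, the same trained scene can be rendered from viewpoints that were never photographed – the property that makes NeRFs attractive for free-movement VR environments.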
Gaussian splatting, the most recent of the three, is a technique primarily used to visualize volumetric data. It models scenes as a collection of points that together create the visual appearance of surfaces: data points are projected into the visualization space and smoothed using a Gaussian function, turning discrete points into a continuous representation.
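The projection-and-smoothing step can be sketched in one dimension: each discrete point spreads its value over nearby grid cells with a Gaussian weight, so sparse samples blend into a smooth signal. This is a toy illustration only – real 3D Gaussian splatting uses anisotropic 3D Gaussians with learned positions, covariances, colors, and opacities.

```python
import math

def splat_1d(points, values, grid_size, sigma=1.0):
    """Render discrete samples onto a grid by Gaussian splatting.

    Each (point, value) pair contributes to every grid cell, weighted by
    a Gaussian kernel centered on the point, so the discrete samples
    blend into a continuous signal.
    """
    grid = [0.0] * grid_size
    for p, v in zip(points, values):
        for x in range(grid_size):
            weight = math.exp(-((x - p) ** 2) / (2.0 * sigma ** 2))
            grid[x] += weight * v
    return grid

# Two sparse samples become a smooth field over a 10-cell grid:
field = splat_1d(points=[3.0, 6.5], values=[1.0, 0.5], grid_size=10)
```

Because each splat is just a weighted additive contribution, this kind of rendering parallelizes well, which is one reason the technique reaches real-time frame rates.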
While Gaussian splatting and NeRFs are both volumetric techniques, they have major differences. NeRFs model the entire scene as a continuous volumetric function, while Gaussian splatting represents the scene as a collection of discrete “splats” – point clouds with Gaussian profiles that blend the data smoothly. The Gaussian approach reduces computational requirements and so allows fast, real-time visualization, but it cannot model intricate lighting interactions and tends to be less detailed than NeRFs.
Gaussian splatting is particularly useful for rendering fuzzy objects or semi-transparent materials, where you need to visualize density or intensity variations within a volume. Originally, it was less about creating accurate 3D models from images and more about effectively displaying complex 3D data.
A major benefit of Gaussian splatting is that it is computationally lighter than the other methods, making it well suited for real-time applications and for modeling large environments that would otherwise require processing vast amounts of data. As a new technology, Gaussian splatting is also under constant development and will be capable of more in the future.
The three methods outlined above are each well suited to specific tasks, but all of them require rendering on a dedicated computer, and getting the best results often means taking numerous photos with a high-end camera.
Teleport removes these restrictions by allowing you to scan and upload environments using an intuitive iOS application, without needing specialized cameras. Scans are processed in the cloud using advanced machine learning, removing the need for manual processing or editing.
With Teleport, you can easily create photorealistic digital twins of any environment or object, from single rooms to town squares, with accurate lighting, shading, textures, reflections, and more. Once the scans are processed, you can explore your capture in a web browser, with a desktop app, or at real-world scale with a VR headset.