The Difference Between NeRF And Photogrammetry 3D Scan

Published 2022-11-02
Last week, I made my first 3D scan using Polycam. It uses a technology called photogrammetry to generate a 3D model from a series of photos taken from multiple angles. This 3D model can then be used in AR or VR applications, which is what makes it so interesting.
 
Recently, a new technology called NeRF (Neural Radiance Fields) appeared and made a ton of headlines. It's similar to photogrammetry in that it also visualises a 3D scene or object from a set of input images, but it differs from photogrammetry a lot.
 
The main difference between these two technologies is that photogrammetry generates a 3D model with meshes and textures, stored in a format that traditional 3D tools can read. That means we can use it in 3D animation, games, or VR and AR applications.
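To see why that portability matters, here is a minimal sketch (my own illustration, not something Polycam produces) of the kind of plain mesh data a photogrammetry scan boils down to. The Wavefront OBJ format below is one of the simplest formats that Blender, Unity, and Unreal can all open:

```python
# A photogrammetry result is ultimately just vertices and faces,
# here written in the plain-text Wavefront OBJ format.
# (Illustrative only; a real scan has thousands of textured triangles.)

def write_obj(path, vertices, faces):
    """Write a mesh as a Wavefront OBJ file (faces are 1-indexed)."""
    with open(path, "w") as f:
        for x, y, z in vertices:
            f.write(f"v {x} {y} {z}\n")
        for a, b, c in faces:
            f.write(f"f {a} {b} {c}\n")

# A single triangle as a stand-in for a scanned mesh.
write_obj("scan.obj", [(0, 0, 0), (1, 0, 0), (0, 1, 0)], [(1, 2, 3)])
```

Because the file is just geometry, any traditional 3D tool can animate it, light it, or drop it into a game engine.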

A NeRF generates a 'radiance field' instead of a traditional 3D model, so the way the scene is stored is very different.

NeRF uses machine learning to create this "radiance field". With it, you can render an object from viewpoints and angles that were never photographed, so as you move the model around, it appears fully three-dimensional to your eyes. The radiance field has learned what the object looks like from any angle and renders the image you see on your screen.
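The idea above can be sketched in a few lines. A real NeRF learns the radiance field with a neural network; in this toy version (my own stand-in, not Luma's or any real NeRF implementation) the field is hand-written, but the rendering step is the same classic volume rendering: march along a camera ray and accumulate colour weighted by how much light survives.

```python
import numpy as np

# Toy "radiance field": maps a 3D point (and view direction) to a
# colour and a density. A real NeRF learns this mapping from photos.
def radiance_field(point, direction):
    density = 5.0 if np.linalg.norm(point) < 0.5 else 0.0  # a fuzzy sphere
    colour = np.array([1.0, 0.4, 0.2])                     # constant orange
    return colour, density

# Volume rendering: sample the field along a camera ray and blend
# the samples front-to-back using transmittance (surviving light).
def render_ray(origin, direction, near=0.0, far=3.0, n_samples=64):
    ts = np.linspace(near, far, n_samples)
    delta = ts[1] - ts[0]
    colour_out = np.zeros(3)
    transmittance = 1.0
    for t in ts:
        c, sigma = radiance_field(origin + t * direction, direction)
        alpha = 1.0 - np.exp(-sigma * delta)   # opacity of this segment
        colour_out += transmittance * alpha * c
        transmittance *= 1.0 - alpha
    return colour_out

# One pixel of a novel view: a ray from a camera the "photos" never
# occupied still produces a plausible colour.
pixel = render_ray(np.array([0.0, 0.0, -1.5]), np.array([0.0, 0.0, 1.0]))
```

Rendering a full image just means shooting one such ray per pixel, which is why novel viewpoints come "for free" once the field is learned.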
 
To give an example: ten years ago, we used a series of images with a slider to make an object on a website appear three-dimensional. I remember the cool slider on the Apple website that let you see the iPod touch from multiple angles. When twirling it around, it almost seems like a 3D model, right?

But if I want to see the iPod from an angle that was not captured in any of the pictures, I am out of luck. With NeRF, we can train the machine learning model and then use the resulting radiance field to generate images of the iPod from new perspectives too.

I recreated the iPod touch slider at home and used the images in Luma; this was the result. The original iPod slider images did not work, because they have no background.
 
The nice thing about NeRF is that it captures reflections and lighting effects very accurately. Water, glass, and shiny surfaces usually don't work well with photogrammetry and the traditional 3D models it creates.
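The reason is that a textured mesh stores one fixed colour per surface point, while a radiance field can return a different colour for every viewing direction. That's how shiny highlights can move as you orbit the object. A hypothetical toy comparison (the "specular" term here is my own illustration, not how NeRF models reflections internally):

```python
import numpy as np

def mesh_texture_colour(point):
    # A baked texture: the same grey no matter where you look from.
    return np.array([0.8, 0.8, 0.8])

def radiance_field_colour(point, view_dir):
    # View-dependent colour: a toy specular highlight that brightens
    # when you look straight down the (assumed) surface normal.
    normal = np.array([0.0, 0.0, 1.0])
    highlight = max(0.0, float(view_dir @ normal)) ** 8
    return np.array([0.8, 0.8, 0.8]) + 0.2 * highlight

# Looking head-on vs. at a grazing angle gives different colours
# from the field, but identical colours from the baked texture.
head_on = radiance_field_colour(np.zeros(3), np.array([0.0, 0.0, 1.0]))
grazing = radiance_field_colour(np.zeros(3), np.array([1.0, 0.0, 0.0]))
```

This is also why exporting a NeRF to a plain mesh (as one commenter notes below) throws the reflection information away: a texture simply has nowhere to store it.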

Currently, a downside of NeRF is that it isn't easily used in AR or VR applications yet! That will improve over time, with better export tools and dedicated viewing applications.
 
For my own experiments, I used Luma and Polycam. I edited the AR scenes with www.wintor.com
Follow me for more insights about augmented, mixed and virtual reality. Bye!

All Comments (21)
  • @mrspazzout1
    Nerf was new to me but looks like it has a lot of room to grow as the ML models get better over time. Very clear explanation, thanks!
  • after watching a few videos I think I am finally understanding what NeRFs are. from the videos I have seen it is within Luma where it is real-time "3D models" like Unreal, with lighting and reflections
  • @RR-gx4ec
    The main interesting thing about NeRFs is the ability to capture view-dependent lighting (reflections). And then Luma Labs goes "look, you can export NeRFs to your favorite 3D software like Blender and Unreal!" The trick? They never mention that all reflection information is gone once you do that. A waste of time.
  • @JasonSipe16
    Thanks for this video. It has come a long way in less than a year! Also, Luma Ai has a new UE5 Plugin!
  • @ozibuyensin
    i am sorry if this is dumb but does this means we can use nerf created images and volumes to create a more detailed 3d model using the classic photogrammetry method? I am sure we will see more creative uses of it once its open for people to use but other that and potential social media usage, nerf's area of utilization seems pretty narrow compared to photogrammetry.
  • which one can we use for scanning environments like office or home interior for using in VR?
  • @Draconic404
    Those two apps seem to not be available on android, what are some alternatives for both types of 3d scanning?
  • @fintech1378
    how can we create 3d asset from a product image? and insert that to an existing video?
  • @bensmirmusic
    will NeRF be able to generate a 3D environment from a midjourney 2d art ?
  • @UgurEnginDeniz
    Both nerf and photogrammetry starts from a point cloud. Both can be meshed.
  • What’s the name of the app and If I can find this in the IOS App Store.
  • Err, no. "photogrammetry" means any method that measures things using light. NeRF is a new technique of photogrammetry.
  • @SuperAnirock
    NeRF scans also convert into 3D model and work in AR/VR applications........with better output :)