Translator | Zhu Xianzhong
Reviewer | Sun Shujuan
Figure 1: Cover
Generating a 3D model can be time-consuming, or require a large number of reference images. One way around this is neural radiance fields (NeRF), an AI method for generating images. The main idea of NeRF is to take a small set of 2D images of an object or scene you photographed, and use those 2D images to efficiently build a 3D representation. This is achieved by learning to transform between the existing images. This interpolation technique then lets you render images of the object from entirely new viewpoints!
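To make the idea concrete, here is a tiny, self-contained sketch of the volume-rendering step at the heart of NeRF: a function maps a 3D point to a color and a density, and a pixel is produced by integrating those values along the camera ray. The `field` function below is a hand-written stand-in for the trained network (a fuzzy sphere invented purely for illustration), not anything from NVIDIA's code:

```python
# Toy illustration of the NeRF idea: a function maps a 3D point to a
# color and a density, and a pixel is rendered by integrating those
# values along the camera ray. The "field" here is a hand-made stand-in
# for the learned network, not NVIDIA's implementation.
import numpy as np

def field(points):
    """Stand-in for a trained network: density + grey color per point."""
    # a fuzzy sphere of radius 1 centered at the origin
    dist = np.linalg.norm(points, axis=-1)
    density = np.exp(-4.0 * (dist - 1.0) ** 2)   # high near the surface
    color = np.clip(1.0 - dist / 2.0, 0.0, 1.0)  # brighter near the center
    return density, color

def render_ray(origin, direction, n_samples=64, t_near=0.5, t_far=4.0):
    """Classic volume rendering: accumulate color weighted by opacity."""
    t = np.linspace(t_near, t_far, n_samples)
    points = origin + t[:, None] * direction
    density, color = field(points)
    delta = t[1] - t[0]
    alpha = 1.0 - np.exp(-density * delta)               # opacity per segment
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1]]))
    weights = trans * alpha                              # contribution per sample
    return np.sum(weights * color)                       # final pixel value

pixel = render_ray(np.array([0.0, 0.0, -3.0]), np.array([0.0, 0.0, 1.0]))
print(f"rendered pixel intensity: {pixel:.3f}")
```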
Sounds good, right? From a small set of images, you get a 3D model! This beats standard photogrammetry, which needs a huge library of images, shot from every angle, to generate a model. However, NVIDIA did initially promise that NeRFs would be fast, and until recently this was not the case: NeRFs used to take a very long time to learn how to convert a set of images into a 3D model.
But today, this is no longer true. NVIDIA recently released its Instant NeRF software, instant-ngp, which leverages GPU hardware to run the necessary complex calculations. This reduces the time required to create a model from days to seconds! NVIDIA makes many exciting claims about the usability and speed of instant-ngp, and the results and examples they provide are very impressive:
Figure 2: NeRF image display – NVIDIA has a cool robotics lab
I find it hard not to be impressed by this demo – it looks amazing! I wanted to see how easily this would transfer to my own images, so I decided to install the software and generate my own NeRF models. In this article I will describe my experience with the experiment and detail the models I made!
Breaking Down the Task
So what do we need to do? Roughly, the work divides into the following steps:
- First of all, we need to capture some footage. Let's go record a video of something we want to turn into 3D!
- Next, we convert the captured video of the scene into a series of still images.
- We feed that image sequence to instant-ngp. The AI is then trained to understand the space between the images, which in effect is the same as building a 3D model.
- Finally, we want a video showing off our creation! In NVIDIA's software, we draw a path for the camera to fly through our model, and then render the video. (The whole pipeline is sketched in code right after this list.)
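For reference, here is roughly what that pipeline looks like when driven from Python. The helper scripts (scripts/colmap2nerf.py and scripts/run.py) come from the instant-ngp repository, but the exact flags vary between versions and the paths below are hypothetical, so treat this as an outline rather than a copy-paste recipe:

```python
# A minimal sketch of the instant-ngp pipeline. Script names come from
# the instant-ngp repo; flags vary by version and paths are hypothetical.
import subprocess

# 1. Turn the captured video into frames plus estimated camera poses
#    (colmap2nerf.py calls ffmpeg and COLMAP under the hood).
subprocess.run([
    "python", "scripts/colmap2nerf.py",
    "--video_in", "data/lego_car/capture.mp4",  # hypothetical path
    "--video_fps", "2",           # extract ~2 frames per second
    "--run_colmap",               # estimate camera poses
    "--aabb_scale", "4",          # rough size of the scene
], check=True)

# 2. Train a NeRF on the resulting scene and save a snapshot of the model.
subprocess.run([
    "python", "scripts/run.py",
    "--scene", "data/lego_car",
    "--n_steps", "2000",
    "--save_snapshot", "data/lego_car/model.msgpack",
], check=True)
```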
I won’t go into detail about how all of this works under the hood, but I will link to many resources I found helpful. Instead, I'm going to focus on the videos I made, and some tidbits of knowledge I stumbled upon along the way.
Start My Experiment
NVIDIA's Instant NeRF software is not easy to install. While the instructions are clear, they leave very little wiggle room regarding the specific software versions you need. I could not get CUDA 11.7 or VS2022 to work; switching back to CUDA 11.6 and VS2019 is what finally made the installation succeed. Along the way I hit many errors, such as "CUDA_ARCHITECTURES is empty for target", mostly because CUDA and Visual Studio do not cooperate gracefully. I sincerely recommend that interested readers refer to the video and the repository resources on GitHub to help get everything set up smoothly!
Other than that, the process went smoothly. NVIDIA also provides Python scripts that guide you through converting the captured video into images, and subsequently into a model and a video.
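Under the hood, the video-to-images step boils down to sampling frames at a fixed rate with ffmpeg, something like the call below (the paths and frame rate are illustrative, and this is my reading of what the conversion script does internally):

```python
# Extract still frames from a video at a fixed rate with ffmpeg.
# Create the images/ directory first; paths and fps are illustrative.
import subprocess

subprocess.run([
    "ffmpeg", "-i", "capture.mp4",   # input video
    "-vf", "fps=2",                  # keep 2 frames per second
    "-qmin", "1", "-q:v", "1",       # favor image quality
    "images/%04d.jpg",               # numbered output frames
], check=True)
```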
Experiment 1: LEGO Car
At first, I tried to NeRF-ify a small LEGO car in my office. My photography skills turned out to be nowhere near good enough: I simply couldn't create any meaningful image, just a weird 3D smudge. So let's set that aside and look at an example provided by NVIDIA. Note the positions of the cameras in the picture:
Figure 3: The "camera" position of the default NeRF model of the excavator provided by NVIDIA
A setup that works well for training looks like the picture above: the "cameras" placed around the scene are the positions the software thinks you were shooting from while recording the video, and they should form a nice circle. My first LEGO car's cameras didn't look like this at all – they formed a squashed semicircle. (A quick way to sanity-check this is sketched below.)
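If you want to check this before burning hours on training, the transforms.json that the conversion script writes contains a "frames" list with a 4×4 "transform_matrix" per frame, and each camera's position is the translation column of that matrix. A minimal check, with my own rough heuristic for "squashedness" (the path is hypothetical):

```python
# Sanity-check the estimated camera poses: for a good capture they
# should trace a rough ring around the subject. The transforms.json
# layout matches instant-ngp's data format; the spread heuristic is mine.
import json
import numpy as np

with open("data/lego_car/transforms.json") as f:
    transforms = json.load(f)

# camera origin = translation column of each camera-to-world matrix
origins = np.array([
    np.array(frame["transform_matrix"])[:3, 3]
    for frame in transforms["frames"]
])

center = origins.mean(axis=0)
radii = np.linalg.norm(origins - center, axis=1)
print(f"{len(origins)} cameras, "
      f"mean radius {radii.mean():.2f}, spread {radii.std():.2f}")
# A large spread relative to the mean radius suggests a "squashed"
# capture path like my first LEGO attempt.
```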
Experiment 2: Slightly Larger LEGO Car
Learning from the first experiment, I found a table I could move all the way around and a larger LEGO car. I also tried to make sure I captured footage for longer than before, and ended up with a smooth one-minute video shot from all angles. In total, training the model took less than 30 seconds. Here's the video I made after 4 hours of rendering at 720p:
Figure 4: My second NeRF model – a LEGO Technic car!
Experiment 3: Plants
Experiment 2 proved that the approach is at least technically feasible. However, there is still a strange fog in the result, which is certainly not ideal. For my next experiment, I tried shooting from further back (my assumption being that the fog comes from the AI being "confused" about what is there). I also took more control over the aabb_scale parameter (which tells the software how big the scene is – more on tweaking it below) and trained for a few minutes longer. After rendering, I got the video shown here:
Figure 5: A NeRF model I made from a plant on the living room table
Much better! It's impressive how precisely it captures the intricacies of the crocheted plant pot, the grooves in the wood, and the foliage. Look at the camera swooping over the leaves!
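A side note on aabb_scale, since it mattered here: it is a top-level field in transforms.json, so you can adjust it without re-running the pose estimation. My understanding is that valid values are powers of two, with the supported maximum varying across versions of the repo (the path below is illustrative):

```python
# aabb_scale lives at the top level of transforms.json, so it can be
# tweaked without re-running COLMAP. Larger values let rays travel
# further, at the cost of slower training and rendering.
import json

path = "data/plant/transforms.json"   # illustrative path
with open(path) as f:
    scene = json.load(f)

scene["aabb_scale"] = 4               # small indoor scene
with open(path, "w") as f:
    json.dump(scene, f, indent=2)
```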
Experiment 4: Outdoors
Now the results were getting better and better! Next, though, I wanted an outdoor video. I shot just under 2 minutes of video outside my apartment and started processing it. This one was especially heavy to train and render. My guess is that because my aabb_scale value was quite high (8), the rendering "rays" have to travel much further (i.e., there is more stuff to render). I had to switch to 480p and lower the rendering frame rate from 30 to 10 FPS; these parameter choices really do affect rendering time (a rough calculation follows the figure below). After 8 hours of rendering, I ended up with the following:
Figure 6: A NeRF model I used outside my apartment
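To put that tradeoff in rough numbers: render cost scales with pixels per frame times the number of output frames (times how far the rays travel, which aabb_scale influences), so the resolution and frame-rate change alone buys a large factor:

```python
# Back-of-envelope: render cost ~ pixels per frame * output frame rate.
px_720p, px_480p = 1280 * 720, 854 * 480
cost_hi = px_720p * 30    # 720p at 30 fps
cost_lo = px_480p * 10    # 480p at 10 fps
print(f"speedup factor: {cost_hi / cost_lo:.1f}x")  # ~6.7x fewer ray samples
```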
However, I think the third experiment is still my favorite; I feel I could have done this fourth one a little better. Once render times get very long, it becomes difficult to iterate through versions and experiment with different rendering and training settings. Even setting the camera path for rendering became difficult, because the program had grown extremely slow.
Still, this is a truly impressive output, given that only a minute or two of video data went into it. I finally have a detailed, realistic 3D model!
Pros and cons analysis
What I find most impressive is that from 1–2 minutes of footage, someone with absolutely no photogrammetry training (me) can create a workable 3D model. The process does require some technical know-how, but once everything is set up, it's easy to use. The Python script for converting videos to images works great, and after that, feeding the data into the AI goes smoothly.
However, while it's hard to fault NVIDIA for this, I feel I should bring it up: this thing requires a pretty powerful GPU. I have a T500 in my laptop, and this task pushed it to its absolute limits. Training took much longer than the advertised 5 seconds, and trying to render at 1080p crashed the program (I let it render dynamically at around 135×74 instead). This is still a huge improvement, though, as previous NeRF experiments took several days.
I don't think everyone has an RTX 3090 lying around for a project like this, so it's worth mentioning: on a low-powered machine the program was difficult to use, especially when I was trying to "fly" the camera around to set up a good path for rendering video. Still, the results of the process are impressive.
Another problem I faced was not being able to find render.py (which, as you might guess, is crucial for rendering videos). Strangely, it is not in the official open-source repository, despite being mentioned heavily in most promotional articles and other documentation. I had to dig this treasure out of the link https://www.php.cn/link/b943325cc7b7422d2871b345bf9b067f.
Finally, I would also love to convert these 3D models into .obj files. Maybe that is already possible.
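From what I can tell, recent versions of scripts/run.py in the instant-ngp repo expose a marching-cubes mesh export; I haven't verified this on my setup, so the flag names may differ in your version, and the paths here are hypothetical:

```python
# Hedged sketch: export a mesh from a trained snapshot via run.py's
# marching-cubes export. Flag names may vary across repo versions.
import subprocess

subprocess.run([
    "python", "scripts/run.py",
    "--scene", "data/plant",
    "--load_snapshot", "data/plant/model.msgpack",  # trained model from earlier
    "--save_mesh", "data/plant/plant.obj",
    "--marching_cubes_res", "256",
], check=True)
```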
Figure 7: GIF animation of a fox – this one is not mine, it was made by NVIDIA. Not bad, right?
Summary and next thoughts
Personally, I am looking forward to more experiments in this area. I would love to generate super-realistic models and drop them into AR/VR; based on this technology you could even host web meetings – wouldn't that be fun? All it takes is the camera on your phone, hardware most users already have. Overall, I'm impressed. It's great to be able to record a one-minute video on your phone and turn it into a model you can step through. Although it takes a while to render and is a bit difficult to install, it works well, and after a few experiments I got some pretty cool output. I'm looking forward to experimenting more!
References
NVIDIA Git
NVIDIA blog
Supplemental Git
Translator introduction: Zhu Xianzhong, 51CTO community editor, 51CTO expert blogger and lecturer, and a computer teacher at a university in Weifang with long experience in freelance programming. In his early career he focused on various Microsoft technologies (compiling three technical books related to ASP.NET AJAX and Cocos2d-X); in the past ten years he has devoted himself to the open-source world (he is familiar with popular full-stack web development technology), to IoT development technologies such as OneNet/AliOS + Arduino/ESP32/Raspberry Pi, and to big-data technologies such as Scala + Hadoop + Spark + Flink.
Original title: Using AI to Generate 3D Models, Fast!, author: Andrew Blance