


With a single sentence, a 3D model can be given a realistic appearance style, down to photo-level detail.
Creating 3D content from a given input (e.g., text prompts, images, or 3D shapes) has important applications in computer vision and graphics, but it remains a challenging problem. In practice, it usually requires professional technical artists to spend a great deal of time and money to create 3D content. At the same time, the assets in many online 3D model libraries are usually bare 3D models without any materials; to use them in a modern rendering engine, a technical artist must create high-quality materials, lights, and normal maps for them. It would therefore be very promising to have a way of generating 3D model assets that is automated, diverse, and realistic.
To this end, research teams from South China University of Technology, Hong Kong Polytechnic University, Cross-dimensional Intelligence, Pengcheng Laboratory, and other institutions have proposed TANGO, a text-driven 3D model stylization method. Given a 3D model and a text prompt, TANGO automatically generates realistic SVBRDF materials, normal maps, and lighting, and it is more robust to low-quality 3D models than prior methods. The work has been accepted at NeurIPS 2022.
Project homepage: https://cyw-3d.github.io/tango/
Model Effect
For a given text input and 3D model, TANGO can produce finer, photorealistic details without self-intersection issues on the surface of the 3D model. As shown in Figure 1 below, TANGO not only produces realistic reflections on smooth materials (such as gold and silver), but can also estimate point-wise normals for uneven materials (such as bricks) to render a bumpy surface effect.
Figure 1. Stylized results of TANGO
The key to TANGO's ability to generate realistic rendering results is that it accurately separates each component of the shading model (SVBRDF, normal map, and lighting) and learns them separately. The separated components are then combined by a differentiable spherical Gaussian renderer, and the rendered image is sent, together with the input text, to CLIP to compute the loss. To demonstrate the rationale for decoupling the components, the study visualizes each of them. Figure 2 (a) shows the stylized result for "a pair of shoes made of bricks", (b) shows the original normal directions of the 3D model, (c) shows the normal directions TANGO predicts for each point on the model, (d), (e), and (f) show the diffuse, roughness, and specular parameters of the SVBRDF respectively, and (g) shows the environment lighting, expressed with spherical Gaussian functions, that TANGO predicts.
Figure 2 Visualization of decoupled rendering components
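To make the role of each decoupled component concrete, the following is a minimal numpy sketch of how diffuse, roughness, specular, normal, and light values could be combined at a single surface point. It uses a plain Blinn-Phong-style combination under one directional light purely for illustration; TANGO itself integrates the SVBRDF against spherical Gaussian lights with a differentiable renderer, and the function and parameter names here are assumptions, not the paper's code.

```python
import numpy as np

def normalize(v):
    return v / np.linalg.norm(v, axis=-1, keepdims=True)

def shade_point(diffuse, roughness, specular, normal, view_dir, light_dir, light_rgb):
    """Combine decoupled components at one surface point.

    A simplified Blinn-Phong-style combination under a single directional
    light, used only to show how diffuse / roughness / specular / normal act
    as separate inputs to the shading; TANGO's real renderer integrates an
    SVBRDF against spherical Gaussian environment lights.
    """
    n = normalize(normal)
    l = normalize(light_dir)
    v = normalize(view_dir)
    h = normalize(l + v)                       # half vector
    ndotl = max(float(n @ l), 0.0)
    shininess = 2.0 / max(roughness, 1e-3) ** 2  # lower roughness -> tighter highlight
    highlight = max(float(n @ h), 0.0) ** shininess
    return light_rgb * (diffuse * ndotl + specular * highlight)

color = shade_point(
    diffuse=np.array([0.8, 0.6, 0.2]),    # (d) diffuse albedo
    roughness=0.3,                         # (e) roughness
    specular=0.5,                          # (f) specular reflectance
    normal=np.array([0.0, 0.0, 1.0]),      # (c) predicted per-point normal
    view_dir=np.array([0.0, 0.0, 1.0]),
    light_dir=np.array([0.3, 0.3, 1.0]),
    light_rgb=np.array([1.0, 1.0, 1.0]),   # (g) lighting (SG environment in the paper)
)
print(color)
```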
The results output by TANGO can also be edited. For example, in Figure 3 the TANGO result is re-lit with other light maps, and in Figure 4 the roughness and specular reflectance parameters are edited to change the degree of reflection on the object's surface.
Figure 3 Re-lighting the TANGO stylized result
Figure 4 Editing the material of the object
In addition, because TANGO uses predicted normal maps to add surface detail, it is also very robust to 3D models with a small number of faces. As shown in Figure 5, the original lamp and alien models have 41,160 and 68,430 faces respectively. The researchers downsampled the original models to obtain versions with only 5,000 faces. TANGO performs essentially the same on the original and downsampled models, while Text2Mesh exhibits severe self-intersections on the low-quality models.
Figure 5 Robustness Test
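For readers who want to reproduce a similar low-face-count test input, the snippet below shows one way to downsample a mesh to roughly 5,000 faces. The authors' exact downsampling tool is not specified here; this sketch assumes Open3D's quadric decimation and a hypothetical input file name.

```python
import open3d as o3d

# Downsample a mesh to ~5000 faces, similar to the low-quality test models.
# "lamp.obj" / "lamp_5000.obj" are hypothetical file names for illustration.
mesh = o3d.io.read_triangle_mesh("lamp.obj")
low = mesh.simplify_quadric_decimation(target_number_of_triangles=5000)
low.compute_vertex_normals()
print(f"{len(mesh.triangles)} faces -> {len(low.triangles)} faces")
o3d.io.write_triangle_mesh("lamp_5000.obj", low)
```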
Principle and Method
TANGO focuses on text-guided stylization of 3D objects. The most closely related work is Text2Mesh, which uses the pre-trained CLIP model as guidance to predict per-vertex colors and position offsets on the surface of a 3D model. However, simply predicting surface vertex colors often produces unrealistic rendering results, and irregular vertex offsets can cause severe self-intersections. This work therefore draws on the traditional physically based rendering pipeline, decoupling the rendering process into the prediction of SVBRDF materials, normal maps, and lighting, and expressing the decoupled elements with spherical Gaussian functions. This physics-based decoupling allows TANGO to produce realistic rendering results and gives it good robustness.
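A spherical Gaussian is a simple lobe-shaped function on the sphere, G(v) = μ · exp(λ(v·ξ − 1)), where ξ is the lobe axis, λ the sharpness, and μ the amplitude; environment lighting can be approximated as a sum of such lobes. The numpy sketch below only illustrates this representation; the parameterization and variable names are assumptions, not the paper's implementation.

```python
import numpy as np

def spherical_gaussian(v, lobe_axis, sharpness, amplitude):
    """Evaluate G(v) = amplitude * exp(sharpness * (dot(v, lobe_axis) - 1)).

    v:         (..., 3) unit direction(s) at which to evaluate the lobe
    lobe_axis: (3,) unit vector xi, the lobe center
    sharpness: scalar lambda >= 0; larger means a narrower lobe
    amplitude: scalar or (3,) RGB amplitude mu
    """
    cos_angle = np.sum(v * lobe_axis, axis=-1, keepdims=True)
    return amplitude * np.exp(sharpness * (cos_angle - 1.0))

def sg_environment(v, lobes):
    """Environment lighting approximated as a sum of spherical Gaussian lobes."""
    return sum(spherical_gaussian(v, *lobe) for lobe in lobes)

# Example: a warm light from +z plus a dim bluish fill from +x.
lobes = [
    (np.array([0.0, 0.0, 1.0]), 20.0, np.array([1.0, 0.9, 0.7])),
    (np.array([1.0, 0.0, 0.0]),  5.0, np.array([0.2, 0.2, 0.3])),
]
directions = np.array([[0.0, 0.0, 1.0], [0.7071, 0.0, 0.7071]])
print(sg_environment(directions, lobes))  # incoming radiance along each query direction
```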
Figure 6 TANGO flow chart
Figure 6 shows TANGO's workflow. Given a 3D model and a text prompt (such as "a shoe made of gold" in the figure), the method first scales the 3D model into a unit sphere and then samples a camera position near the model. Rays are emitted from this camera position to find each intersection point x_p with the 3D model and the normal direction n_p at that point. Next, x_p and n_p are fed into the SVBRDF network and the Normal network to predict the material parameters and normal direction of the point, while the lighting in the scene is expressed with multiple spherical Gaussian functions. In each training iteration, the method renders an image with the differentiable spherical Gaussian renderer, encodes the augmented image with the CLIP image encoder, and backpropagates gradients through the CLIP loss to update all learnable parameters.
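The last step of this loop, using CLIP similarity to the text as the training loss, can be illustrated with a small self-contained PyTorch sketch. To keep it runnable, the differentiable spherical Gaussian renderer and TANGO's SVBRDF/normal/light parameters are replaced by a single directly optimized image tensor; everything except the CLIP calls (from the openai/CLIP package) is a stand-in for illustration, not the authors' code.

```python
import torch
import clip  # pip install git+https://github.com/openai/CLIP.git

device = "cuda" if torch.cuda.is_available() else "cpu"
clip_model, _ = clip.load("ViT-B/32", device=device)
clip_model.eval()
for p in clip_model.parameters():
    p.requires_grad_(False)  # CLIP stays frozen; it only supervises the loss

prompt = "a shoe made of gold"
with torch.no_grad():
    text_feat = clip_model.encode_text(clip.tokenize([prompt]).to(device)).float()
    text_feat = text_feat / text_feat.norm(dim=-1, keepdim=True)

# Stand-in for the rendered view: in TANGO this image would come from the
# differentiable SG renderer, and the learnable parameters would be the
# SVBRDF network, Normal network, and spherical Gaussian light parameters.
image = torch.rand(1, 3, 224, 224, device=device, requires_grad=True)
optimizer = torch.optim.Adam([image], lr=0.05)

for step in range(200):
    # (In the real method, random augmentations are applied before encoding.)
    image_feat = clip_model.encode_image(image.clamp(0.0, 1.0)).float()
    image_feat = image_feat / image_feat.norm(dim=-1, keepdim=True)

    loss = 1.0 - (image_feat * text_feat).sum(dim=-1).mean()  # 1 - cosine similarity
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```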
Summary
This paper proposes TANGO, a new method that generates realistic appearance styles for 3D models from input text and is robust to low-quality models. By decoupling the appearance style into an SVBRDF, local geometric variation (point-wise normals), and lighting conditions, representing these components with spherical Gaussian functions, and rendering them differentiably, the whole pipeline can be learned with CLIP as the loss supervision.
Compared with existing methods, TANGO is robust even to low-quality 3D models. However, providing geometric detail through point-wise normals, while avoiding self-intersections, also slightly limits the degree of surface bumpiness that can be expressed. The authors believe that TANGO and the vertex-offset-based Text2Mesh are each good preliminary attempts in their respective directions and will inspire more follow-up research.