MVDiffusion: High-quality multi-view image generation and accurate reproduction of scene materials

Realistic image generation has wide applications in fields such as virtual reality, augmented reality, video games and film production.

With the rapid development of diffusion models over the past two years, major breakthroughs have been made in image generation. A series of open-source and commercial models derived from Stable Diffusion, which generate images from text descriptions, have had a huge impact on design, games, and other fields.

However, generating high-quality multi-view images from a given text prompt or other conditions remains a challenge: existing methods have obvious flaws in multi-view consistency.

Currently, common methods can be roughly divided into two categories.

The first class of methods generates scene images together with depth maps and derives the corresponding mesh; examples are Text2Room and SceneScape. These first use Stable Diffusion to generate an initial image, then generate subsequent images and depth maps autoregressively via image warping and image inpainting.

However, such a pipeline easily accumulates errors over the sequence of generated images, and usually suffers from loop-closure problems: when the camera rotates in a full circle and returns near the starting position, the generated content is not consistent with the first image. Performance therefore degrades when the scene is large or the viewpoint change between images is large.

The second class of methods extends the diffusion model's sampling algorithm to generate multiple images simultaneously, producing richer content than a single image (for example, a 360-degree panorama, or an image extrapolated indefinitely to both sides); examples are MultiDiffusion and DiffCollage. However, because no camera model is considered, the results of these methods are not true panoramas.

The goal of MVDiffusion is to generate multi-view images that conform to a given camera model, with strictly consistent content and globally unified semantics. The core idea is to denoise all views simultaneously while learning the correspondences between images to maintain consistency.


Paper: https://arxiv.org/abs/2307.01097

Project website: https://mvdiffusion.github.io/

Demo: https://huggingface.co/spaces/tangshitao/MVDiffusion

Code: https://github.com/Tangshitao/MVDiffusion

Published at: NeurIPS 2023 (Spotlight)

The goal of MVDiffusion is to generate multi-view images with highly consistent content and unified global semantics, achieved by denoising all views simultaneously with global awareness of the correspondences between images.

Specifically, the researchers extend an existing text-to-image diffusion model (such as Stable Diffusion): first, the model is made to process multiple images in parallel; then an additional "Correspondence-aware Attention" (CAA) mechanism is added to the original UNet to learn cross-view consistency and global unity.

By fine-tuning on a small amount of multi-view training data, the resulting model can simultaneously generate multi-view images with highly consistent content.

MVDiffusion has achieved good results in three different application scenarios:

1. Generate multiple views from text, then stitch them into a panorama;

2. Extrapolate (outpaint) a single perspective image into a complete 360-degree panorama;

3. Generate textures for a given scene mesh.

Application scenarios

Application 1: Panorama generation (from text)

Take panorama generation as an example: given a text description of a scene, MVDiffusion can generate multiple perspective views of that scene.

Entering the following prompt yields 8 multi-view images: "This kitchen is a charming blend of country and modern, featuring a large reclaimed wood island with marble countertops, and a sink surrounded by cabinets. To the left of the island is a tall stainless steel refrigerator. To the right of the sink are built-in wooden cabinets painted in pastel colors."


These 8 images can then be stitched into a single panorama.
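Stitching works because each perspective view corresponds to a known slice of the equirectangular panorama. As a minimal sketch (not the authors' code), the following maps a pixel of one perspective view to panorama coordinates, assuming a pinhole camera with a 90-degree field of view rotated about the vertical axis; the image and panorama sizes are illustrative:

```python
import numpy as np

def perspective_to_panorama_uv(px, py, yaw_deg, fov_deg=90.0, size=512,
                               pano_w=2048, pano_h=1024):
    """Map pixel (px, py) of one perspective view to (u, v) in an
    equirectangular panorama. Sketch only: square pinhole image,
    camera rotated about the vertical axis by yaw_deg."""
    f = (size / 2.0) / np.tan(np.radians(fov_deg) / 2.0)  # focal length (px)
    # Viewing ray in camera coordinates (x right, y down, z forward)
    d = np.array([(px - size / 2.0) / f, (py - size / 2.0) / f, 1.0])
    d /= np.linalg.norm(d)
    # Rotate the ray by the camera yaw
    t = np.radians(yaw_deg)
    R = np.array([[np.cos(t), 0.0, np.sin(t)],
                  [0.0, 1.0, 0.0],
                  [-np.sin(t), 0.0, np.cos(t)]])
    dx, dy, dz = R @ d
    lon = np.arctan2(dx, dz)              # longitude in [-pi, pi]
    lat = np.arcsin(np.clip(dy, -1, 1))   # latitude in [-pi/2, pi/2]
    u = (lon / (2 * np.pi) + 0.5) * pano_w
    v = (lat / np.pi + 0.5) * pano_h
    return u, v
```

With 8 views spaced 45 degrees apart in yaw, adjacent views overlap, which is what allows the consistency learned by MVDiffusion to produce seamless stitches.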


MVDiffusion also supports providing a different text description for each image, as long as the descriptions are semantically consistent with one another.

Application 2: Panorama generation (from a single perspective image)

MVDiffusion can extrapolate (outpaint) a single perspective image into a complete 360-degree panorama.

For example, suppose we input a single perspective image of a scene.


MVDiffusion can then generate the full panorama from it.


As can be seen, the generated panorama semantically extends the input image, and the leftmost and rightmost contents connect seamlessly (no loop-closure problem).

Application 3: Generating Scene Materials

MVDiffusion can be used to generate materials (textures) for a given untextured scene mesh.

Specifically, we first obtain multi-view depth maps by rendering the mesh. From the camera poses and depth maps, we can obtain pixel-level correspondences between the multi-view images.

Next, MVDiffusion uses the multi-view depth map as a condition to simultaneously generate consistent multi-view RGB images.

Because the generated multi-view images maintain a high degree of content consistency, projecting them back onto the mesh yields a high-quality textured mesh.
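The depth-and-pose correspondence mentioned above is the standard unproject-transform-project chain. A minimal numpy sketch (not the authors' code; the intrinsics and pose below are illustrative assumptions):

```python
import numpy as np

def correspondence(p_a, depth, K, T_a2b):
    """Given pixel p_a = (u, v) in view A with known depth,
    shared intrinsics K, and a 4x4 relative pose T_a2b taking
    A-camera coordinates to B-camera coordinates, return the
    corresponding pixel in view B."""
    u, v = p_a
    # Unproject to a 3D point in A's camera frame
    p_cam = depth * (np.linalg.inv(K) @ np.array([u, v, 1.0]))
    # Transform into B's camera frame
    p_b = (T_a2b @ np.append(p_cam, 1.0))[:3]
    # Project with the intrinsics
    uvw = K @ p_b
    return uvw[:2] / uvw[2]
```

For example, with identity relative pose the pixel maps to itself; translating the camera shifts the correspondence by an amount inversely proportional to depth, which is exactly the disparity signal that CAA later exploits.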


Below are more example results.

Panorama generation (from text)


In this application it is worth noting that although the multi-view training data for MVDiffusion all come from panoramas of indoor scenes, in a single style, MVDiffusion does not change the original Stable Diffusion parameters and only trains the newly added Correspondence-aware Attention. As a result, the model can still generate multi-view images in a variety of styles (outdoor, cartoon, etc.) from the given text.

Single-view extrapolation


Scene material generation


Below, we first introduce MVDiffusion's image generation process in the three tasks, and then present the core of the method, the "Correspondence-aware Attention" (CAA) module. Figure 1 shows an overview of MVDiffusion.

1. Panorama generation (from text)

MVDiffusion simultaneously generates 8 overlapping perspective images and then stitches them into a panorama. Between each pair of these 8 perspective views, a 3x3 homography matrix determines the pixel correspondence.
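Because the 8 views share a camera center and differ only by a rotation, the homography has the closed form H = K R K^{-1}. A minimal sketch of computing it and warping a pixel (illustrative intrinsics, not from the paper):

```python
import numpy as np

def rotation_homography(K, yaw_deg):
    """3x3 homography between two perspective views that share a
    camera center and differ by a pure rotation about the vertical
    axis (the panorama setting): H = K @ R @ inv(K)."""
    t = np.radians(yaw_deg)
    R = np.array([[np.cos(t), 0.0, np.sin(t)],
                  [0.0, 1.0, 0.0],
                  [-np.sin(t), 0.0, np.cos(t)]])
    return K @ R @ np.linalg.inv(K)

def warp_pixel(H, u, v):
    """Apply a homography to pixel (u, v) in homogeneous coordinates."""
    p = H @ np.array([u, v, 1.0])
    return p[:2] / p[2]
```

With a 90-degree field of view, the principal point of one view lands well inside the neighboring view rotated by 45 degrees, which is the overlap region where correspondences are defined.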

In the generation process, MVDiffusion first initializes the 8 views with Gaussian random noise. These 8 images are then fed into a multi-branch UNet from a pre-trained Stable Diffusion model and denoised synchronously to obtain the final result.

A new "Correspondence-aware Attention" module (the light blue part in Figure 1) is added to the UNet to learn geometric consistency across views, so that the 8 images can be stitched into a consistent panorama.

2. Panorama generation (from a perspective image)

MVDiffusion can also complete a single perspective view into a full panorama. It feeds 8 randomly initialized perspective views (including the view corresponding to the given perspective image) into a multi-branch UNet from the pre-trained Stable Diffusion inpainting model.

In the Stable Diffusion inpainting model, the UNet takes an additional mask input to distinguish the conditioning image from the images to be generated.

For the view corresponding to the given perspective image, the mask is set to 1, and that branch's UNet directly reconstructs the view. For the other views, the mask is set to 0, and the corresponding branches generate new views.
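As a rough sketch of how such a mask conditions each branch: the Stable Diffusion inpainting UNet concatenates the noisy latent, the mask, and the masked conditioning latent along the channel dimension. The exact channel layout below follows the common 9-channel inpainting setup and uses the mask convention described in this article (mask = 1 for the conditioning view); treat it as an assumption, not the authors' exact code:

```python
import numpy as np

def inpainting_unet_input(noisy_latent, cond_latent, mask):
    """Assemble a per-branch UNet input for inpainting-style
    conditioning (sketch): 4 noisy latent channels, 1 mask channel,
    and 4 channels of the mask-gated conditioning latent.
    noisy_latent, cond_latent: (4, H, W); mask: (1, H, W)."""
    return np.concatenate([noisy_latent, mask, mask * cond_latent], axis=0)
```

A branch whose mask is all ones sees its conditioning latent and reproduces it; a branch whose mask is all zeros sees only noise in those channels and generates new content.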

As before, MVDiffusion uses the "Correspondence-aware Attention" module to learn geometric consistency and semantic unity between the generated images and the conditioning image.

3. Scene material generation

MVDiffusion first generates RGB images along a camera trajectory, conditioned on the depth maps and camera poses, and then fuses the generated RGB images with the given depth maps into a textured mesh using TSDF fusion.

The pixel correspondences between the RGB images are obtained from the depth maps and camera poses.

We again use a multi-branch UNet and insert "Correspondence-aware Attention" to learn geometric consistency across views.

4. Correspondence-aware Attention mechanism

"Correspondence-aware Attention" (CAA) is the core of MVDiffusion, used to learn geometric consistency and semantic unity between multiple views.

MVDiffusion inserts a CAA block after each UNet block of Stable Diffusion. A CAA block operates on one source feature map and N target feature maps.

For each location in the source feature map, the attention output is computed from the corresponding pixel and its neighborhood in each target feature map.


Specifically, for each target pixel t^l, MVDiffusion adds integer displacements (dx, dy) to its (x, y) coordinates to cover a K x K neighborhood, where dx and dy are the displacements in the x and y directions.

In practice, MVDiffusion uses K=3, i.e. a 9-point neighborhood, to improve panorama quality. For geometry-conditioned multi-view generation, K=1 is used instead to improve efficiency.


The CAA module follows the standard attention mechanism, where W_Q, W_K, and W_V are the learnable weights of the query, key, and value projections; since the corresponding target positions are not located at integer coordinates, the target features are obtained through bilinear interpolation.
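A minimal numpy sketch of this attention step for a single source position and one target map (the position-encoding term is omitted here for brevity; shapes and weight matrices are illustrative assumptions, not the paper's exact implementation):

```python
import numpy as np

def bilinear(feat, x, y):
    """Sample feature map feat of shape (H, W, C) at fractional (x, y)."""
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    x1, y1 = x0 + 1, y0 + 1
    wx, wy = x - x0, y - y0
    return ((1 - wx) * (1 - wy) * feat[y0, x0] +
            wx * (1 - wy) * feat[y0, x1] +
            (1 - wx) * wy * feat[y1, x0] +
            wx * wy * feat[y1, x1])

def caa_output(src_feat, tgt_feat, corr_xy, W_Q, W_K, W_V, K=3):
    """Correspondence-aware attention for one source position (sketch).
    src_feat: (C,) source feature; tgt_feat: (H, W, C) target map;
    corr_xy: fractional (x, y) of the corresponding target pixel.
    Attends over the K x K neighborhood around the correspondence,
    sampling target features bilinearly."""
    q = W_Q @ src_feat
    r = K // 2
    keys, vals = [], []
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            t = bilinear(tgt_feat, corr_xy[0] + dx, corr_xy[1] + dy)
            keys.append(W_K @ t)
            vals.append(W_V @ t)
    keys, vals = np.stack(keys), np.stack(vals)
    logits = keys @ q / np.sqrt(len(q))          # scaled dot product
    w = np.exp(logits - logits.max()); w /= w.sum()  # softmax
    return w @ vals
```

In the full model this is computed for every source position against all N target maps, and the result is added back into the UNet features.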

The key difference from standard attention is that a position encoding is added to the target features, based on the 2D displacement (panorama) or 1D depth error (geometry) between the corresponding positions s^l and s in the source image.

In panorama generation (Applications 1 and 2), this displacement provides the relative position within the local neighborhood.

In depth-to-image generation (Application 3), the disparity provides clues about depth discontinuities and occlusions, which is important for high-fidelity image generation.

Note that the displacement is a 2D vector (pixel displacement) or a 1D value (depth error). MVDiffusion applies a standard frequency encoding to the x and y coordinates of the displacement.
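A minimal sketch of such a sinusoidal frequency encoding, applied per component of the displacement (the number of frequency bands is an illustrative assumption):

```python
import numpy as np

def freq_encode(d, num_freqs=4):
    """Sinusoidal frequency encoding applied to each component of a
    displacement d: a 2D pixel displacement for panoramas, or a 1D
    depth error for geometry-conditioned generation. Returns
    2 * num_freqs values per input component."""
    d = np.atleast_1d(np.asarray(d, dtype=float))
    out = []
    for k in range(num_freqs):
        out.append(np.sin((2.0 ** k) * np.pi * d))
        out.append(np.cos((2.0 ** k) * np.pi * d))
    return np.concatenate(out)
```

The encoded vector is what gets added to the target features inside the CAA block, letting the attention distinguish positions within the neighborhood.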


Source: reproduced from 51CTO.COM.