Efficiency crushes DALL·E 2 and Imagen: Google's new model achieves new SOTA, and can also handle PS in one sentence

At the beginning of the new year, Google AI has begun to work on text-image generation models again.

This time, their new model Muse reached a new SOTA (currently the best level) on the CC3M data set.

And its efficiency far exceeds that of the globally popular DALL·E 2 and Imagen (both of which are diffusion models), as well as Parti (which is an autoregressive model).

The generation time for a single 512x512-resolution image is compressed to just 1.3 seconds.


In terms of image editing, you can edit the original image with just a text command.

(Looks like you no longer have to worry about learning PS~)


If you want the effect to be more precise, you can also select a mask region and edit a specific area; for example, replace the buildings in the background with hot air balloons.


Once Muse was officially announced, it quickly attracted a lot of attention. The original post has already received 4,000 likes.


Seeing another masterpiece from Google, some people have even begun to predict:

The competition among AI developers is very fierce right now. It looks like 2023 is going to be a really exciting year.


More efficient than DALL·E 2 and Imagen

Let’s talk about the Muse just released by Google.

First of all, in terms of quality, most of Muse's generated images are sharp and natural-looking.

Let’s take a look at more examples to get a feel for it~

For example, a baby sloth wearing a woolen hat operating a computer, or a sheep in a wine glass:


Subjects that would normally never appear together coexist harmoniously in one picture, without any sense of dissonance.

If you think these can only be regarded as the basic operations of AIGC, then you might as well take a look at the editing function of Muse.

For example, one-click outfit change (you can also change gender):


This does not require any masking and can be done in one sentence.

And if you use a mask, you can perform even more operations, including switching the background with one click: from the original location to New York, to Paris, and then to San Francisco.


You can also go from the seaside to London, to a sea of flowers, or even fly to the rings of Saturn to pull off an exciting skateboarding dolphin jump.


(Good guy, not only can you travel anywhere with one click, you can even fly into space...)

The effect is really outstanding. So what technical support is behind Muse? Why is the efficiency higher than DALL·E 2 and Imagen?

An important reason is that DALL·E 2 and Imagen need to store all learned knowledge in the model parameters during the training process.

As a result, they require larger and larger models and more and more training data to acquire more knowledge, tying "better" to "bigger".

The cost is a huge parameter count, and efficiency suffers as well.

According to the Google AI team, the main method they use is called: Masked image modeling.

This is an emerging self-supervised pre-training method. The basic idea, simply put, is:

Parts of the input image are randomly masked out, and the model learns to reconstruct them.

Concretely, Muse is trained on a masked modeling task in discrete token space: conditioned on text embeddings extracted from a pre-trained large language model, it learns to predict randomly masked image tokens.
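The masking step can be sketched in a few lines. This is an illustrative toy, not Muse's actual configuration: the 16x16 grid, 1024-entry codebook, and 50% mask ratio are made-up numbers.

```python
import numpy as np

# Toy sketch of masked image modeling over discrete tokens (hypothetical
# numbers: grid size, codebook size, and mask ratio are illustrative only).
rng = np.random.default_rng(0)

def mask_tokens(tokens, mask_ratio, mask_id, rng):
    """Replace a random fraction of discrete image tokens with a MASK id."""
    masked = tokens.copy()
    flat = masked.reshape(-1)  # view into the copy, so writes go through
    idx = rng.choice(flat.size, size=int(round(mask_ratio * flat.size)),
                     replace=False)
    flat[idx] = mask_id
    return masked, idx

# A 16x16 grid of image tokens drawn from a 1024-entry codebook; the model
# would be trained to predict the tokens hidden behind MASK (id 1024),
# conditioned on text embeddings from a pre-trained language model.
image_tokens = rng.integers(0, 1024, size=(16, 16))
masked, masked_idx = mask_tokens(image_tokens, mask_ratio=0.5,
                                 mask_id=1024, rng=rng)
print(masked_idx.size)  # 128 of the 256 tokens are masked
```

The training loss would then be computed only on the masked positions, which is what lets the model later fill in tokens for a fully masked grid at generation time.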


(Muse's architecture, from top to bottom: pre-trained text encoder, base model, super-resolution model)

The Google team found that using a pre-trained large language model makes the AI's understanding of language more detailed and thorough.

On the output side, because the model grasps objects' spatial relationships, poses, and other elements well, the generated images can be high-fidelity.

Compared with pixel-space diffusion models such as DALL·E 2 and Imagen, Muse uses discrete tokens and needs fewer sampling iterations.

In addition, compared with autoregressive models such as Parti, Muse uses parallel decoding, which is more efficient.
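The difference is easy to see in a toy sketch: an autoregressive decoder needs one forward pass per token, while parallel decoding fills in many masked positions per pass and keeps only the most confident predictions each round. Here a random stand-in plays the role of the transformer, and the sizes and schedule are illustrative, not Muse's actual settings.

```python
import numpy as np

# Toy sketch of iterative parallel decoding. An autoregressive model would
# need NUM_TOKENS forward passes; here all 256 tokens are filled in within
# STEPS passes by keeping only the most confident predictions each round.
rng = np.random.default_rng(0)
NUM_TOKENS, VOCAB, MASK, STEPS = 256, 1024, -1, 8

tokens = np.full(NUM_TOKENS, MASK)
passes = 0
for step in range(STEPS):
    masked_pos = np.flatnonzero(tokens == MASK)
    if masked_pos.size == 0:
        break
    passes += 1
    # Stand-in for the model: one "forward pass" proposes a token and a
    # confidence for every masked position simultaneously.
    proposals = rng.integers(0, VOCAB, size=masked_pos.size)
    confidence = rng.random(masked_pos.size)
    # Schedule: unmask a growing share of the remaining positions each step.
    keep = max(1, int(np.ceil(masked_pos.size * (step + 1) / STEPS)))
    order = np.argsort(-confidence)[:keep]
    tokens[masked_pos[order]] = proposals[order]

print(passes, int((tokens == MASK).sum()))  # far fewer passes than 256 tokens
```

With a real model, the confidence would come from the predicted token probabilities rather than random numbers, but the control flow is the same: a handful of parallel passes instead of one pass per token.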

A SOTA FID score

As mentioned earlier, Muse is not only more efficient; the quality of its generated images is also very good.

The researchers compared it with DALL·E, LAFITE, LDM, GLIDE, DALL·E 2, as well as Google's own Imagen and Parti, and tested their FID and CLIP scores.

(The FID score evaluates the quality of generated images: the lower the score, the better. The CLIP score measures how well the text and the image match: the higher the score, the better.)
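For intuition, FID is the Fréchet distance between Gaussians fitted to feature statistics of real and generated images. Below is an illustrative computation under a simplifying diagonal-covariance assumption, with made-up statistics; it is not the evaluation code used in the paper.

```python
import numpy as np

# Illustrative FID computation. The general formula is
#   FID = ||mu_r - mu_g||^2 + Tr(C_r + C_g - 2 * (C_r @ C_g)^(1/2))
# Assuming diagonal covariances here, the matrix square root becomes
# elementwise and no linear-algebra package is needed.

def fid_diagonal(mu_r, var_r, mu_g, var_g):
    mean_term = np.sum((mu_r - mu_g) ** 2)
    cov_term = np.sum(var_r + var_g - 2.0 * np.sqrt(var_r * var_g))
    return float(mean_term + cov_term)

# Made-up 4-dimensional feature statistics for two image distributions.
mu_r, var_r = np.zeros(4), np.ones(4)
mu_g, var_g = np.full(4, 0.5), np.full(4, 2.0)

print(fid_diagonal(mu_r, var_r, mu_g, var_g))  # ~1.686; 0.0 if identical
```

The CLIP score, by contrast, is simply the cosine similarity between CLIP's text and image embeddings, averaged over prompt-image pairs.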

The results show that the Muse-3B model's zero-shot FID-30K score on the COCO validation set is 7.88, second only to the larger Imagen-3.4B and Parti-20B models.


Even better, the Muse-900M model achieved a new SOTA on the CC3M dataset with an FID score of 6.06, the best reported on that benchmark.

At the same time, the model's CLIP score of 0.26 was also the highest reported at the time.


In addition, to further confirm Muse's generation efficiency, the researchers compared how long each model takes to generate a single image:

Muse was the fastest at both 256x256 and 512x512 resolution: 0.5 s and 1.3 s respectively.


Research Team

Muse's research team comes from Google, and its two co-first authors are Huiwen Chang and Han Zhang.


Huiwen Chang is currently a senior researcher at Google.

She did her undergraduate studies at Tsinghua University, received her PhD from Princeton University, and has interned at Adobe, Facebook, and elsewhere.


Han Zhang received his undergraduate degree from China Agricultural University, his master's degree from Beijing University of Posts and Telecommunications, and his PhD in computer science from Rutgers University.

His research interests are computer vision, deep learning, and medical image analysis.


However, it is worth mentioning that Muse has not been officially released yet.


Some netizens joked that although Muse looks very appealing, given Google's habit of shelving its announcements, the official release may still be a long way off; after all, they still have AI announced back in 2018 that has never been released.


Speaking of which, what do you think of the effect of Muse?

Are you looking forward to its official release?

Portal: https://www.php.cn/link/854f1fb6f65734d9e49f708d6cd84ad6

Reference link: https://twitter.com/AlphaSignalAI/status/1610404589966180360


Statement: This article is reproduced from 51CTO.COM.