
Google and MIT propose a unified framework MAGE: representation learning surpasses MAE, and unsupervised image generation surpasses Latent Diffusion

WBOY · 2023-04-14 20:28

Recognition and generation are two core tasks in artificial intelligence, and merging them into a unified system should let the two complement each other. In natural language processing, models such as BERT [1] can both generate high-quality text and extract features from it.

In computer vision, however, image generation models and recognition models are still mostly trained separately, leaving the synergy between the two tasks unexploited. This is largely because the two kinds of models usually differ fundamentally in structure: image generation takes low-dimensional features or noise as input and outputs a high-dimensional original image, whereas image recognition takes a high-dimensional original image as input and outputs low-dimensional features.

Recently, researchers from MIT and Google Research proposed a representation learning method based on masking image semantic tokens that, for the first time, unifies image generation and representation learning in a single framework and achieves SOTA performance on multiple datasets. The paper has been accepted to CVPR 2023, and the code and pre-trained models have been open-sourced.


  • Paper address: https://arxiv.org/abs/2211.09117
  • Code address: https://github.com/LTH14/mage

At CVPR 2022, MAE [2] introduced masked image modeling (MIM), a representation learning method that achieved strong results on multiple downstream tasks. Even at a masking rate as high as 75%, MAE can reconstruct an image whose semantics closely match the original, allowing the network to learn image features in a self-supervised way. However, as shown in Figure 1, although the images reconstructed by MAE carry semantic information similar to the original, they suffer from severe blurring and distortion, and similar issues appear in all MIM-based representation learning methods. Meanwhile, current generative models, whether diffusion models or GANs, lack the ability to extract high-quality image features.
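For background, a minimal sketch of the MIM-style random patch masking described above (hiding a large fraction of image patches, e.g. 75%, before encoding) might look as follows; the function name and tensor shapes are illustrative assumptions, not MAE's actual implementation:

```python
import torch

def random_patch_mask(image, patch=16, mask_ratio=0.75):
    """Hide a random 75% of non-overlapping patches, keeping only the visible ones."""
    B, C, H, W = image.shape
    # Cut the image into (patch x patch) tiles: (B, C, H/p, W/p, p, p) -> (B, C, N, p, p)
    patches = image.unfold(2, patch, patch).unfold(3, patch, patch)
    patches = patches.reshape(B, C, -1, patch, patch)
    n = patches.shape[2]
    keep = int(n * (1 - mask_ratio))
    # Random permutation per image; the first `keep` indices stay visible.
    idx = torch.rand(B, n).argsort(dim=1)[:, :keep]
    gather_idx = idx[:, None, :, None, None].expand(-1, C, -1, patch, patch)
    visible = torch.gather(patches, 2, gather_idx)
    return visible, idx  # the encoder sees only `visible`; the rest must be reconstructed

# Example: a batch of two 224x224 RGB images yields 196 patches, of which 49 stay visible.
vis, idx = random_patch_mask(torch.randn(2, 3, 224, 224))
```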


Figure 1: Comparison of MAE and MAGE reconstruction

Method Overview

To address these problems, the authors propose MAGE (Masked Generative Encoder), the first model to unify image generation and feature extraction. Unlike MIM methods, which apply masking directly to image pixels, MAGE performs masked modeling over image semantic tokens. As shown in Figure 2, MAGE first uses a VQGAN [3] encoder to convert the original image into discrete semantic tokens. It then randomly masks some of these tokens and uses a transformer-based encoder-decoder to reconstruct the masked tokens; the reconstructed tokens can be decoded back into an image through the VQGAN decoder. By varying the masking rate during training, MAGE can train both a generative model (masking rates approaching 100%) and a representation learner (masking rates of 50%-80%). As shown in Figure 1, images reconstructed by MAGE not only preserve the semantic information of the original image but also remain diverse and realistic.
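To make the pipeline concrete, here is a minimal sketch of one MAGE-style training step following the description above. The `vqgan` (with `encode`/`decode`), the `encoder_decoder` transformer, the `mask_token_id`, and all shapes are assumptions for illustration, not the authors' released implementation:

```python
import torch
import torch.nn.functional as F

def mage_training_step(image, vqgan, encoder_decoder, mask_token_id,
                       mask_ratio_range=(0.5, 1.0)):
    # 1. Tokenize: the VQGAN encoder turns the image into discrete semantic tokens.
    tokens = vqgan.encode(image)                               # (B, N) integer token ids

    # 2. Sample a masking ratio and hide that fraction of tokens.
    #    Ratios near 1.0 train generation; 0.5-0.8 train representation learning.
    ratio = torch.empty(1).uniform_(*mask_ratio_range).item()
    mask = torch.rand_like(tokens, dtype=torch.float) < ratio  # (B, N) bool
    masked_tokens = tokens.masked_fill(mask, mask_token_id)

    # 3. A transformer encoder-decoder predicts the original tokens at masked positions.
    logits = encoder_decoder(masked_tokens)                    # (B, N, vocab_size)
    loss = F.cross_entropy(logits[mask], tokens[mask])         # loss only on masked tokens

    # 4. At inference, predicted tokens are mapped back to pixels:
    #    image_out = vqgan.decode(logits.argmax(-1))
    return loss
```

In this view, switching between a generator and a feature extractor comes down to the masking-ratio range used during training, which is the key idea of the unified framework.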



Figure 2: MAGE structure diagram

Experimental results

MAGE has reached or exceeded SOTA on multiple image generation and image recognition tasks.


On unsupervised image generation on ImageNet, MAGE lowers the FID from the previous best of over 20 to 7.04, even reaching the level of supervised image generation (supervised Latent Diffusion achieves an FID of 3.60 on ImageNet).
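For context on the numbers quoted above, FID is the Fréchet distance between Gaussian fits of Inception features extracted from real and generated images. A minimal sketch of the computation is below, assuming pre-extracted feature arrays of shape (num_samples, dim); this is not the evaluation code used in the paper:

```python
import numpy as np
from scipy import linalg

def fid(real_feats, fake_feats):
    """Fréchet distance between Gaussian fits of two feature sets."""
    mu_r, mu_f = real_feats.mean(axis=0), fake_feats.mean(axis=0)
    cov_r = np.cov(real_feats, rowvar=False)
    cov_f = np.cov(fake_feats, rowvar=False)
    covmean, _ = linalg.sqrtm(cov_r @ cov_f, disp=False)
    if np.iscomplexobj(covmean):
        covmean = covmean.real  # numerical noise can introduce tiny imaginary parts
    return float(np.sum((mu_r - mu_f) ** 2) + np.trace(cov_r + cov_f - 2 * covmean))
```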


Figure 3: MAGE unsupervised image generation examples

MAGE can also perform a variety of image editing tasks, including inpainting, outpainting, and uncropping (Figure 4).
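A hedged sketch of how such editing can work in a token-masking framework: the tokens covering the edit region are replaced with the mask token and the model fills them in. The `vqgan`, `encoder_decoder`, `mask_token_id`, and grid size are illustrative assumptions, and a single greedy fill-in is used here for brevity (iterative decoding is common in practice):

```python
import torch
import torch.nn.functional as F

def inpaint(image, region_mask, vqgan, encoder_decoder, mask_token_id, grid=16):
    tokens = vqgan.encode(image)                                    # (B, N) with N = grid * grid
    # Map the pixel-level edit region onto the token grid
    # (assumes the tokens form a regular spatial grid).
    token_mask = F.interpolate(region_mask[:, None].float(),
                               size=(grid, grid)).flatten(1) > 0.5  # (B, N) bool
    masked = tokens.masked_fill(token_mask, mask_token_id)
    logits = encoder_decoder(masked)                                # predict the hidden tokens
    filled = torch.where(token_mask, logits.argmax(-1), tokens)     # keep unedited tokens as-is
    return vqgan.decode(filled)                                     # decode back to pixels
```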


Figure 4: MAGE image editing examples

In terms of representation learning, MAGE substantially improves over current MIM methods on tasks such as ImageNet linear probing, few-shot learning, and transfer learning, and matches or exceeds the best current self-supervised learning methods.
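For reference, linear probing, one of the evaluation protocols mentioned above, freezes the pretrained encoder and trains only a linear classifier on top of its features. A minimal sketch under that assumption (not the paper's exact training recipe):

```python
import torch
import torch.nn as nn

def linear_probe(encoder, loader, feat_dim, num_classes, epochs=10, lr=1e-3):
    encoder.eval()                                    # the pretrained backbone stays frozen
    head = nn.Linear(feat_dim, num_classes)
    opt = torch.optim.Adam(head.parameters(), lr=lr)
    for _ in range(epochs):
        for images, labels in loader:
            with torch.no_grad():
                feats = encoder(images)               # features from the frozen encoder
            loss = nn.functional.cross_entropy(head(feats), labels)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return head  # probe accuracy is then measured with this linear head on the validation set
```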


Conclusion

This work aims to unify image generation and representation learning. To this end, the authors propose MAGE, a self-supervised learning framework based on masking image semantic tokens. The framework is simple and efficient, and for the first time reaches or exceeds SOTA performance in both image generation and representation learning. Interested readers can consult the paper for more details.


Statement: This article is reproduced from 51cto.com. If there is any infringement, please contact admin@php.cn for removal.