He Kaiming cooperates with MIT: A simple framework achieves the latest breakthrough in unconditional image generation
He Kaiming has not yet officially joined MIT, but his first collaborative research with MIT is already out:
Together with MIT faculty and students, he developed a self-conditioned image generation framework named RCG (the code has been open-sourced).
The framework is structurally simple but remarkably effective: it directly sets a new SOTA for unconditional image generation on the ImageNet-1K dataset.
The images it generates require no human annotations (that is, no prompts or class labels), yet achieve both fidelity and diversity.
It thus not only substantially raises the level of unconditional image generation, but also rivals the best current conditional generation methods.
In the words of He Kaiming’s team:
The long-standing performance gap between conditional and unconditional generation tasks has finally been closed.
So, how exactly is it done?
First, some background: unconditional generation means the model captures the data distribution and generates content directly, without the help of any input signal.
Training this way is difficult, so a large performance gap has long separated it from conditional generation, much as unsupervised learning once lagged behind supervised learning.
And just as the emergence of self-supervised learning changed that situation for unsupervised learning, the field of unconditional image generation has an analogous idea: self-conditioned generation.
Instead of simply mapping a noise distribution to the image distribution, as traditional unconditional generation does, this approach conditions the pixel generation process on a representation distribution derived from the data distribution itself.
It is expected to go beyond conditional image generation and advance applications such as molecular design and drug discovery that lack human annotations (which is why, even with conditional image generation so well developed, unconditional generation still deserves attention).
Now, building on this idea of self-conditioned generation, He Kaiming's team first developed a representation diffusion model, RDM.
It is used to generate low-dimensional self-supervised image representations, which are extracted from images by a self-supervised image encoder. Its core architecture is as follows: first an input layer, which projects the representation to a hidden dimension C; then N fully connected blocks; and finally an output layer, which reprojects the hidden-layer latent features back to the original representation dimension.
Each block includes a LayerNorm layer, a SiLU layer, and a linear layer.
Such an RDM has two advantages: it produces highly diverse representations, and it incurs very little computational overhead.
With RDM in hand, the team then proposed today's protagonist: RCG, the Representation-Conditioned image Generation framework. It is a simple self-conditioned generation framework consisting of three components:
One is the SSL image encoder, used to convert the image distribution into a compact representation distribution.
One is the RDM, used to model and sample from that representation distribution.
The last is the pixel generator, MAGE, used to generate image pixels conditioned on the representation.
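To make the three-component pipeline concrete, here is a minimal NumPy sketch. All pieces are hypothetical stand-ins (random weights, toy sizes, a crude iterative "sampler"), not the paper's actual models; the RDM stand-in mirrors the backbone described above (input projection, N fully connected blocks of LayerNorm/SiLU/linear, output projection), with the residual connection being an assumption.

```python
import numpy as np

rng = np.random.default_rng(0)

def silu(x):
    return x / (1.0 + np.exp(-x))

def layer_norm(x, eps=1e-5):
    return (x - x.mean(-1, keepdims=True)) / np.sqrt(x.var(-1, keepdims=True) + eps)

REP_DIM, HIDDEN, BLOCKS = 16, 64, 4   # toy sizes, not the paper's C and N

# 1) SSL image encoder stand-in: images -> compact representations.
W_ENC = rng.normal(0, 0.1, (32 * 32, REP_DIM))
def ssl_encoder(images):
    return images.reshape(len(images), -1) @ W_ENC

# 2) RDM backbone stand-in: input projection, N fully connected blocks
#    (LayerNorm -> SiLU -> linear; residual added here by assumption),
#    then an output projection back to the representation dimension.
W_IN = rng.normal(0, 0.1, (REP_DIM, HIDDEN))
W_BLOCKS = [rng.normal(0, 0.1, (HIDDEN, HIDDEN)) for _ in range(BLOCKS)]
W_OUT = rng.normal(0, 0.1, (HIDDEN, REP_DIM))

def rdm_denoise(z):
    h = z @ W_IN
    for w in W_BLOCKS:
        h = h + silu(layer_norm(h)) @ w
    return h @ W_OUT

def rdm_sample(n, steps=5):
    # Crude placeholder for diffusion sampling: start from noise and
    # repeatedly apply the denoiser.
    z = rng.normal(size=(n, REP_DIM))
    for _ in range(steps):
        z = rdm_denoise(z)
    return z

# 3) Pixel generator stand-in (MAGE in the paper): representation -> image.
W_GEN = rng.normal(0, 0.1, (REP_DIM, 32 * 32))
def pixel_generator(reps):
    return np.tanh(reps @ W_GEN).reshape(-1, 32, 32)

# Unconditional generation with RCG: no labels or prompts anywhere.
images = pixel_generator(rdm_sample(4))
```

The key point the sketch illustrates is the data flow: at sampling time, nothing human-annotated enters the pipeline; the pixel generator is conditioned only on representations drawn from the RDM.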
MAGE works by adding a random mask to the tokenized image and asking the network to reconstruct the missing tokens, conditioned on a representation extracted from the same image.
Testing showed that although this self-conditioned generation framework is structurally simple, it is very effective: on ImageNet 256×256, RCG achieved an FID of 3.56 and an IS (Inception Score) of 186.9.
By comparison, the strongest prior unconditional generation method scored an FID of 7.04 and an IS of 123.5. RCG thus not only excels at unconditional generation, but also matches or even exceeds leading conditional generation baselines.
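The masked-token setup that MAGE trains on can be sketched as follows. This is an illustrative toy, not MAGE's actual code: the vocabulary size, grid size, and mask ratio are arbitrary choices, and the sentinel value for masked positions is an assumption.

```python
import numpy as np

rng = np.random.default_rng(0)

MASK_TOKEN = -1  # hypothetical sentinel for a masked position

def random_mask(tokens, mask_ratio=0.75):
    """Randomly replace a fraction of image tokens with a mask token,
    in the spirit of MAGE-style masked-token training (ratio is illustrative)."""
    tokens = tokens.copy()
    n = tokens.size
    idx = rng.choice(n, size=int(n * mask_ratio), replace=False)
    tokens.flat[idx] = MASK_TOKEN
    return tokens

tokens = rng.integers(0, 1024, size=(16, 16))   # a 16x16 grid of token ids
masked = random_mask(tokens)
# The network is then trained to reconstruct the masked positions,
# conditioned on a representation extracted from the same image.
```

Because the conditioning representation comes from the image itself (via the SSL encoder) rather than from a label, this reconstruction objective requires no human annotation.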
Finally, with classifier-free guidance, RCG's results improve further, to an FID of 3.31 and an IS of 253.4.
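For context, classifier-free guidance is conventionally implemented by blending the model's conditional and unconditional outputs with a guidance weight; the function below sketches that standard blend (the variable names are illustrative, and this is the generic technique, not RCG-specific code).

```python
import numpy as np

def guided_output(out_cond, out_uncond, w):
    """Classifier-free guidance blend: w = 0 recovers the unconditional
    output; larger w pushes samples toward the conditioning signal."""
    return out_uncond + w * (out_cond - out_uncond)

# Toy example with 2-dimensional model outputs.
cond = np.array([1.0, 2.0])
uncond = np.array([0.0, 0.0])
blended = guided_output(cond, uncond, 0.5)
```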
The team expressed:
These results show that self-conditioned image generation models have great potential and may herald a new era for the field.
The paper has three authors:
The first author is Li Tianhong, a doctoral student at MIT who did his undergraduate studies in the Yao Class at Tsinghua University; his research focuses on cross-modal integrated sensing technology.
His personal homepage is quite interesting and even includes a collection of recipes: research and cooking are the two things he is most passionate about.
Another author is Dina Katabi, a professor in the Department of Electrical Engineering and Computer Science (EECS) at MIT and director of the MIT Center for Wireless Networks and Mobile Computing. She is a winner of this year's Sloan Prize and has been elected a member of the National Academy of Sciences.
Finally, the corresponding author, He Kaiming, will officially return to academia next year, leaving Meta to join the Department of Electrical Engineering and Computer Science at MIT, where he will become a colleague of Dina Katabi.
Please click the following link to view the paper: https://arxiv.org/abs/2312.03701