Hinton is on the list! Taking stock of the 10-year history of AI image synthesis, papers and names worth remembering
Now, it is the end of 2022.
Deep learning models have become remarkably good at generating images, and they will clearly bring more surprises in the future.
How did we get to where we are today in ten years?
In the timeline below, we trace the milestone moments: when the papers, architectures, models, datasets, and experiments that shaped AI image synthesis appeared.
Everything began in that "summer" ten years ago.
After the advent of deep neural networks, researchers realized they would completely change image classification.
At the same time, some began exploring the opposite direction: what would happen if images were produced using techniques that work so well for classification, such as convolutional layers?
This was the beginning of the "Summer of Artificial Intelligence".
December 2012
It all started here.
That month, the paper "ImageNet Classification with Deep Convolutional Neural Networks" (better known as the AlexNet paper) was published.
One of its authors is Geoffrey Hinton, one of the "three giants of AI".
For the first time, it combined deep convolutional neural networks (CNNs), GPU training, and a huge Internet-sourced dataset (ImageNet).
December 2014
Ian Goodfellow and co-authors published the epic paper "Generative Adversarial Networks". The GAN was the first modern neural network architecture dedicated to image synthesis rather than analysis (where "modern" means post-2012).
It introduced a unique, game-theoretic learning method in which two sub-networks, a "generator" and a "discriminator", compete with each other.
In the end, only the "generator" is kept; it is what is used for image synthesis.
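To make the game concrete, here is a minimal, illustrative PyTorch sketch of the adversarial training loop; the tiny linear networks and random data are placeholders, not the architecture from the 2014 paper.

```python
import torch
import torch.nn as nn

# Placeholder networks: any generator mapping noise -> image and any
# binary classifier mapping image -> real/fake logit will do here.
G = nn.Sequential(nn.Linear(100, 784), nn.Tanh())  # noise -> flat 28x28 image
D = nn.Sequential(nn.Linear(784, 1))               # image -> real/fake logit

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(real):  # real: (batch, 784) tensor of training images
    batch = real.size(0)
    fake = G(torch.randn(batch, 100))

    # Discriminator: push real images toward 1, generated images toward 0.
    d_loss = bce(D(real), torch.ones(batch, 1)) + \
             bce(D(fake.detach()), torch.zeros(batch, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator: try to fool the discriminator into outputting 1.
    g_loss = bce(D(fake), torch.ones(batch, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

# One step on random stand-in "data", just to show the loop runs:
train_step(torch.rand(32, 784))
```

Note that the discriminator exists only to provide the learning signal; after training, the generator alone produces the images.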
Hello World! GAN-generated face samples from Goodfellow et al.'s 2014 paper. The model was trained on the Toronto Face Dataset, which has since been removed from the web
Five Years of GAN (2015-2020)

November 2015
The seminal paper "Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks" was published. In it, the authors describe the first practically usable GAN architecture (DCGAN). The paper also raised, for the first time, the question of latent space manipulation: do concepts map to directions in latent space? The sketch below illustrates the idea.
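This is a hypothetical illustration, assuming a trained DCGAN-style generator G; the "direction" vector here is a stand-in for one estimated from data (for example, the mean latent code of smiling faces minus that of non-smiling ones).

```python
import torch

# Assumed: a trained DCGAN-style generator G(z) -> image, with z of size 100.
def manipulate(G, z, direction, strength=1.5):
    """Move a latent code along a semantic direction and decode the result."""
    return G(z + strength * direction)

z = torch.randn(1, 100)                   # a random point in latent space
direction = torch.randn(1, 100)           # stand-in for a learned direction
direction = direction / direction.norm()  # normalize to a unit vector
# image = manipulate(G, z, direction)     # uncomment with a real generator
```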
During these five years, GANs were applied to all kinds of image processing tasks, such as style transfer, inpainting, denoising, and super-resolution, and papers on GAN architectures began to explode.

Project address: https://github.com/nightrome/really-awesome-gan

At the same time, artistic experiments with GANs began to emerge, with the first works by Mike Tyka, Mario Klingemann, Anna Ridler, Helena Sarin, and others. The first "AI art" scandal occurred in 2018, when three French students used "borrowed" code to generate an AI portrait, which became the first AI portrait auctioned at Christie's.

Meanwhile, the Transformer architecture was revolutionizing NLP. In the near future, this would have a significant impact on image synthesis.

June 2017

The paper "Attention Is All You Need" was published (see "Transformers, Explained: Understand the Model Behind GPT-3, BERT, and T5" for a detailed explanation). Since then, the Transformer architecture (in the form of pre-trained models like BERT) has revolutionized the field of natural language processing (NLP).

July 2018

The paper "Conceptual Captions: A Cleaned, Hypernymed, Image Alt-text Dataset for Automatic Image Captioning" was published.
This and other multimodal datasets would become extremely important for models like CLIP and DALL-E.
In 2018-2020, NVIDIA researchers completely revamped the GAN architecture in a series of papers (the StyleGAN series).
In the paper "Training Generative Adversarial Networks Using Limited Data", the latest StyleGAN2-ada is introduced. For the first time, GAN-generated images become indistinguishable from natural images, at least for highly optimized datasets like Flickr-Faces-HQ (FFHQ) so. Mario Klingenmann, Memories of Passerby I, 2018. The baconesque faces are typical of AI art in the region, where the non-realistic nature of the generative models It is the focus of artistic exploration May 2020 The paper "Language Model is Small Sample Learners" published. OpenAI’s LLM Generative Pre-trained Transformer 3 (GPT-3) demonstrates the power of the transformer architecture. ##December 2020
The paper "Taming Transformers for High-Resolution Image Synthesis" was published. ViT had shown that the Transformer architecture can be used for images; VQGAN, the method presented in this paper, applied it to image synthesis and produced SOTA results in benchmark tests.

The quality of GAN architectures from the late 2010s was mainly evaluated on aligned face images, with limited results for more heterogeneous datasets. The human face therefore remained an important reference point in academic, industrial, and artistic experiments.

The Era of Transformers (2020-2022)

From this point on, the field of image synthesis began to abandon GANs. "Multimodal" deep learning integrated NLP and computer vision techniques, and "prompt engineering" replaced model training and fine-tuning as the artistic method of image synthesis.

In the paper "Learning Transferable Visual Models From Natural Language Supervision", the CLIP architecture was proposed.
It can be said that the current image synthesis craze is driven by the multimodal capability first introduced by CLIP.
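At its core, CLIP's shared latent space comes from training two encoders so that matching image/caption pairs have high cosine similarity. A minimal sketch of that symmetric contrastive objective, with random tensors standing in for the ViT and Transformer encoder outputs:

```python
import torch
import torch.nn.functional as F

def clip_style_loss(img_emb, txt_emb, temperature=0.07):
    """Symmetric contrastive loss over a batch of matching image/text pairs."""
    img_emb = F.normalize(img_emb, dim=-1)        # unit-length embeddings
    txt_emb = F.normalize(txt_emb, dim=-1)
    logits = img_emb @ txt_emb.t() / temperature  # cosine similarity matrix
    targets = torch.arange(len(logits))           # i-th image matches i-th text
    return (F.cross_entropy(logits, targets) +
            F.cross_entropy(logits.t(), targets)) / 2

# Placeholder embeddings standing in for the two encoders' outputs:
loss = clip_style_loss(torch.randn(8, 512), torch.randn(8, 512))
```

Because images and captions land in the same space, the trained model can score any caption against any image, which is what makes it usable for labeling and for guiding generation.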
CLIP architecture, from the paper

January 2021

The paper "Zero-Shot Text-to-Image Generation" was published (see also OpenAI's blog post), introducing the first version of DALL-E, which was about to take the world by storm. This version works by combining text and images (compressed into tokens by a discrete VAE) in a single data stream; the model simply "continues" the "sentence". The training data (250M images) includes text-image pairs from Wikipedia, Conceptual Captions, and a filtered subset of YFCC100M.

January 2021

The paper "Learning Transferable Visual Models From Natural Language Supervision" was published. It introduces CLIP, a multimodal model that combines a ViT with an ordinary Transformer. CLIP learns a "shared latent space" for images and captions, so it can label images. The model was trained on a large dataset listed in Appendix A.1 of the paper. CLIP laid the foundation for the "multimodal" approach to image synthesis.

June 2021

The paper "Diffusion Models Beat GANs on Image Synthesis" was published. Diffusion models introduce an image synthesis approach that differs from the GAN approach: they learn by reconstructing images from artificially added noise (a minimal sketch of this objective appears after the May-June 2022 entry below), and they are related to variational autoencoders (VAEs).

July 2021

DALL-E mini was released. It is a replica of DALL-E (smaller, with a few adjustments to the architecture and data). The data includes Conceptual 12M, Conceptual Captions, and the same filtered subset of YFCC100M that OpenAI used for the original DALL-E model. Without any content filters or API restrictions, DALL-E mini offered huge potential for creative exploration and led to an explosion of "weird DALL-E" images on Twitter.

2021-2022

Katherine Crowson released a series of Colab notebooks exploring CLIP-guided generative models, for example 512x512 CLIP-guided diffusion and VQGAN-CLIP (the paper "VQGAN-CLIP: Open Domain Image Generation and Editing with Natural Language Guidance" was only released as a preprint in 2022, but public experiments appeared as soon as VQGAN was released). Just as in the early days of GANs, artists and developers made significant improvements to existing architectures with very limited means; these were then streamlined by companies and finally commercialized by "startups" such as wombo.ai.

April 2022

The paper "Hierarchical Text-Conditional Image Generation with CLIP Latents" was published, introducing DALL-E 2. It builds on the GLIDE paper published just a few weeks earlier ("GLIDE: Towards Photorealistic Image Generation and Editing with Text-Guided Diffusion Models"). Meanwhile, because access to DALL-E 2 was restricted and its capabilities intentionally limited, there was renewed interest in DALL-E mini. According to the model card, the data includes "a combination of publicly available resources and our licensed resources" and, according to the paper, the complete CLIP and DALL-E datasets.

"Portrait photo of a blonde girl, taken with a DSLR camera, neutral background, high resolution", generated with DALL-E 2. Transformer-based generative models match the realism of late GAN architectures such as StyleGAN 2, but allow a far wider variety of subjects and styles

May-June 2022

In May, the paper "Photorealistic Text-to-Image Diffusion Models with Deep Language Understanding" was published. In June, the paper "Scaling Autoregressive Models for Content-Rich Text-to-Image Generation" was published. These two papers introduced Imagen and Parti, Google's answers to DALL-E 2.
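The June 2021 entry above describes the core diffusion idea: learn to undo artificially added noise. A minimal, illustrative sketch of that denoising objective, using a toy noise schedule and a placeholder network in place of the U-Net these papers actually use:

```python
import torch
import torch.nn as nn

# Placeholder denoiser; real diffusion models use a U-Net that is also
# conditioned on the timestep t (and, for text-to-image, on the prompt).
model = nn.Sequential(nn.Linear(784, 784))

def diffusion_loss(x0, num_steps=1000):
    """DDPM-style objective: predict the noise that was added to clean images."""
    t = torch.randint(1, num_steps, (x0.size(0),))               # random timestep
    alpha_bar = torch.cos(t.float() / num_steps * 1.5708) ** 2   # toy cosine schedule
    alpha_bar = alpha_bar.unsqueeze(1)
    noise = torch.randn_like(x0)
    x_t = alpha_bar.sqrt() * x0 + (1 - alpha_bar).sqrt() * noise  # noised image
    return ((model(x_t) - noise) ** 2).mean()  # reconstruct the added noise

loss = diffusion_loss(torch.rand(16, 784))
```

Sampling then runs this in reverse: starting from pure noise, the model removes a little predicted noise at each step until an image emerges.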
"Do you know why I stopped you today?", generated by DALL-E 2. "Prompt engineering" has since become the main method of artistic image synthesis

AI Photoshop (2022 to present)

Users continued to experiment with smaller models such as DALL-E mini. Then, with the release of the groundbreaking Stable Diffusion, all of this changed. It can be said that Stable Diffusion marks the beginning of the "Photoshop era" of image synthesis.

"Still life with four bunches of grapes, trying to create grapes as lifelike as those of the ancient painter Zeuxis" (after Juan Fernandez "El Labrador", 1636, Prado, Madrid), six variations produced by Stable Diffusion

August 2022

Stability.ai released the Stable Diffusion model, grandly launched with the paper "High-Resolution Image Synthesis with Latent Diffusion Models". The model achieves photorealism on par with DALL-E 2. Unlike DALL-E 2, it was made available to the public almost immediately and could be run on the Colab and Hugging Face platforms (a minimal usage sketch appears at the end of this section).

August 2022

Google published the paper "DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation". DreamBooth provides increasingly fine-grained control over diffusion models. However, even without such additional technical intervention, it has become feasible to use generative models like Photoshop: starting from a sketch and adding generated modifications layer by layer.

October 2022

Shutterstock, one of the largest stock photo companies, announced a partnership with OpenAI to provide and license generated images. It can be expected that the stock photo market will be seriously affected by generative models such as Stable Diffusion.
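As promised above, here is how little code it takes to run the public model. A usage sketch with Hugging Face's diffusers library; the pipeline class is real, but treat the model id and hardware assumptions as illustrative:

```python
# pip install diffusers transformers accelerate
import torch
from diffusers import StableDiffusionPipeline

# Model id is illustrative; any published Stable Diffusion checkpoint works.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")  # assumes a CUDA GPU; use .to("cpu") with float32 otherwise

image = pipe("Still life with four bunches of grapes, oil painting").images[0]
image.save("grapes.png")
```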