MIT and Google jointly research new technology StableRep: using synthetic images to train AI image models
Key points:
- Researchers have proposed a new technique called StableRep that uses AI-generated images to train highly detailed AI image models.
- StableRep is trained on millions of labeled synthetic images and adopts a "multi-positive contrastive learning" method to improve the learning process, building on the open source text-to-image model Stable Diffusion.
- Despite strong results on ImageNet classification, StableRep generates images slowly and can suffer from semantic mismatch between text prompts and generated images.
Webmaster's Home (ChinaZ.com), November 28: Researchers from MIT and Google have developed a new technique called StableRep, which uses AI-generated images to train more detailed and efficient AI image models. Applied to the open source text-to-image model Stable Diffusion, the technique has produced a series of notable results.
StableRep uses a method called "multi-positive contrastive learning," in which multiple images generated from the same text prompt are treated as positives of one another to enhance the learning process. For example, given a landscape text prompt, the model compares the multiple generated landscape images against one another and against the prompt, learning the fine-grained differences among them and using that signal to produce a highly detailed representation.
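The multi-positive idea can be sketched as a contrastive loss whose target distribution is uniform over all other images generated from the same prompt, rather than a single positive. The following is a minimal NumPy illustration under that assumption; the function name and the simple cross-entropy formulation are illustrative, not the authors' implementation:

```python
import numpy as np

def multi_positive_contrastive_loss(embeddings, caption_ids, temperature=0.1):
    """Multi-positive contrastive loss: all images generated from the same
    text prompt (same caption_id) are treated as positives of one another.

    embeddings  : (N, D) array of image embeddings from the encoder
    caption_ids : (N,) array giving the prompt each image came from
    Each image must share its caption with at least one other image.
    """
    z = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    logits = z @ z.T / temperature          # pairwise cosine similarities
    np.fill_diagonal(logits, -1e9)          # exclude self-similarity

    # target distribution: uniform over the other images with the same caption
    same = caption_ids[:, None] == caption_ids[None, :]
    np.fill_diagonal(same, False)
    targets = same / same.sum(axis=1, keepdims=True)

    # cross-entropy between the softmax over similarities and the targets
    logits = logits - logits.max(axis=1, keepdims=True)   # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return float(-(targets * log_probs).sum(axis=1).mean())
```

When embeddings of same-prompt images are close and different-prompt images are far apart, this loss is small, which is exactly the structure the training objective encourages.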
The researchers note that their method excels at treating multiple images as expressions of the same underlying content, rather than merely as collections of pixels. In experiments, StableRep achieved 76.7% linear-probe accuracy on the ImageNet classification task using a Vision Transformer model. Furthermore, with added language supervision, a StableRep model trained on 20 million synthetic images outperformed a CLIP model trained on 50 million real images.
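The "linear accuracy" figure comes from the standard linear-probe protocol: the pretrained encoder is frozen and only a single linear classifier is trained on its output features. A minimal sketch of that protocol on toy features follows; the function and its training loop are illustrative assumptions, not the paper's evaluation code:

```python
import numpy as np

def linear_probe_accuracy(train_x, train_y, test_x, test_y, lr=0.5, epochs=300):
    """Fit a single linear (softmax) classifier on frozen encoder features
    and report test accuracy -- the linear-probe evaluation protocol."""
    n_classes = int(train_y.max()) + 1
    W = np.zeros((train_x.shape[1], n_classes))
    b = np.zeros(n_classes)
    onehot = np.eye(n_classes)[train_y]
    for _ in range(epochs):
        logits = train_x @ W + b
        logits -= logits.max(axis=1, keepdims=True)   # numerical stability
        probs = np.exp(logits)
        probs /= probs.sum(axis=1, keepdims=True)
        grad = (probs - onehot) / len(train_x)        # softmax cross-entropy gradient
        W -= lr * (train_x.T @ grad)
        b -= lr * grad.sum(axis=0)
    preds = (test_x @ W + b).argmax(axis=1)
    return float((preds == test_y).mean())
```

The probe's accuracy measures how linearly separable the classes are in the frozen feature space, which is why it is a common proxy for representation quality.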
However, StableRep is not without flaws. Image generation is slow, and there can be semantic mismatches between text prompts and the generated images. In addition, Stable Diffusion, the underlying generative model, must first be trained on real data, so producing the synthetic training images takes extra time and can be costly.
StableRep has been open sourced on GitHub under the Apache 2.0 license and is available for commercial use. Users may use it and create derivative works, but redistributions and derivative works must include a copy of the Apache License and a notice of any changes made. The license also limits contributors' liability for damages arising from use of the licensed work.
This research from MIT and Google represents an innovation in AI image generation. Despite its flaws, it offers a new method and direction for training models on synthetic data to produce high-quality images.