OPPO proposes GlyphDraw: one-click generation of images with Chinese characters, diffusion model to output emoticons
In recent years, text-to-image generation has seen many surprising breakthroughs, and many models can now create high-quality, diverse images from text instructions. Although the generated images are already very realistic, current models are usually good at producing physical content such as landscapes and objects, but struggle with images that require highly coherent fine detail, such as images containing complex glyph text like Chinese characters.
To solve this problem, researchers from OPPO and other institutions have proposed a general learning framework called GlyphDraw. The goal of the framework is to enable models to generate images with coherent embedded text. This work is the first in the field of image synthesis to tackle the problem of Chinese character generation.
Paper link: https://arxiv.org/abs/2303.17870
Project home page link: https://1073521013.github.io/glyph-draw.github.io/
Let's first look at the generation results, for example warning signs for an exhibition hall:
Making billboards:
A brief text description can be added to the image, and the text style can also be varied:
Another interesting and practical example is to generate emoticons:
Although the results still have some flaws, they are excellent overall. The main contributions of this research include:
This research proposes GlyphDraw, an image generation framework for Chinese characters. Throughout the generation process, the framework uses auxiliary information such as Chinese character glyphs and positions to provide fine-grained guidance, so that Chinese character text can be embedded seamlessly and with high quality into the generated image.
This study proposes an effective training strategy that limits the number of trainable parameters in the pre-trained model to prevent overfitting and catastrophic forgetting, successfully maintaining the model's strong open-domain generation performance while enabling accurate generation of Chinese character images.
This study details the construction of the training dataset and proposes a new benchmark for evaluating the quality of Chinese character image generation. On this benchmark, GlyphDraw reaches a generation accuracy of 75%, significantly better than previous image synthesis methods.
Model introduction:
First, the study designed a sophisticated image-text dataset construction strategy. Then, building on the open-source image synthesis algorithm Stable Diffusion, it proposes the general learning framework GlyphDraw, as shown in Figure 2.
The overall training objective of Stable Diffusion can be expressed as the following formula:
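In its commonly used form, this objective trains a noise-prediction network ε_θ to recover the Gaussian noise added to the latent z_t given the condition c; the standard latent-diffusion objective is reproduced here for reference (notation assumed, not copied from the paper):

$$ L_{LDM} = \mathbb{E}_{z \sim \mathcal{E}(x),\, c,\, \epsilon \sim \mathcal{N}(0,1),\, t} \Big[ \big\lVert \epsilon - \epsilon_\theta(z_t, t, c) \big\rVert_2^2 \Big] $$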
GlyphDraw is built on the cross-attention mechanism in Stable Diffusion: it concatenates the latent vector z_t of the original input image with the text mask l_m and the glyph image l_g.
In addition, a domain-specific fusion module equips the condition C with mixed glyph and text features. Introducing the text mask and glyph information gives the whole training process fine-grained diffusion control, which is the key component for improving model performance and ultimately generating images containing Chinese character text.
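As a rough illustration of this conditioning scheme (a sketch, not the authors' code; tensor shapes and channel counts are assumptions), the snippet below concatenates the noisy latent with a downsampled text mask and a glyph latent before passing them to the denoising UNet:

```python
import torch

def build_unet_input(z_t, text_mask, glyph_latent):
    """Concatenate the noisy latent z_t with the text mask l_m and the
    glyph image latent l_g along the channel dimension, forming the
    spatial input of the denoising UNet."""
    return torch.cat([z_t, text_mask, glyph_latent], dim=1)

# Dummy tensors at Stable Diffusion's 64x64 latent resolution (illustrative shapes):
z_t = torch.randn(1, 4, 64, 64)   # noisy image latent
l_m = torch.zeros(1, 1, 64, 64)   # binary mask marking the text region
l_g = torch.randn(1, 4, 64, 64)   # latent of the rendered glyph image
x_in = build_unet_input(z_t, l_m, l_g)
print(x_in.shape)  # torch.Size([1, 9, 64, 64]) -> fed to a UNet with a widened input conv
```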
Specifically, the pixel representation of text, especially of complex text forms such as pictographic Chinese characters, differs significantly from that of natural objects. For example, the Chinese word for "sky" is composed of multiple strokes arranged in a two-dimensional structure, whereas the corresponding natural image is "a blue sky dotted with white clouds". Chinese characters also have very fine-grained features: even small shifts or deformations can cause the text to render incorrectly and make the generated image unusable.
Embedding characters into natural image backgrounds also raises a key issue: the generation of text pixels must be controlled precisely without affecting adjacent natural image pixels. To render Chinese characters cleanly onto natural images, the authors designed two key components, position control and glyph control, and integrated them into the diffusion synthesis model.
Unlike the global conditional input of other models, character generation requires more attention to specific local regions of the image, because the underlying feature distribution of character pixels differs greatly from that of natural image pixels. To prevent training from collapsing, the study proposes fine-grained position-area control to decouple the distributions of different regions.
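A minimal sketch of how such a position mask could be constructed from a bounding box marking the intended text region (the coordinate convention and latent resolution are assumptions for illustration):

```python
import torch

def make_position_mask(box, latent_size=64):
    """Build a binary position mask l_m over the latent grid: 1 inside the
    intended text region, 0 elsewhere. `box` is (x0, y0, x1, y1) in
    latent-grid coordinates (an assumed convention)."""
    mask = torch.zeros(1, 1, latent_size, latent_size)
    x0, y0, x1, y1 = box
    mask[:, :, y0:y1, x0:x1] = 1.0
    return mask

# e.g. confine the characters to the lower-left part of the image
l_m = make_position_mask((4, 40, 28, 60))
```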
In addition to position control, another important issue is fine control over the synthesis of Chinese character strokes. Given the complexity and diversity of Chinese characters, it is very difficult to learn them purely from a large image-text dataset without any explicit prior knowledge. To generate Chinese characters accurately, the study introduces explicit glyph images as additional conditional information in the model's diffusion process.
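Using an explicit glyph prior means first rendering the target characters as an image that can be fed to the model as a condition. A minimal sketch with Pillow follows; the font file and layout parameters are assumptions, and any CJK-capable font would do:

```python
from PIL import Image, ImageDraw, ImageFont

def render_glyph_image(text, size=256, font_path="NotoSansCJK-Regular.ttc"):
    """Render the target Chinese characters as a white-on-black glyph image
    that can later be encoded and used as conditional input."""
    canvas = Image.new("L", (size, size), color=0)
    draw = ImageDraw.Draw(canvas)
    font = ImageFont.truetype(font_path, size // max(1, len(text)))
    draw.text((10, size // 3), text, fill=255, font=font)
    return canvas

# glyph = render_glyph_image("你好")  # requires a CJK-capable font on disk
```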
Research design and experimental results:
Since no previous dataset was dedicated to Chinese character image generation, the study first created a benchmark dataset, ChineseDrawText, for qualitative and quantitative evaluation. The researchers then tested the generation accuracy of several methods on ChineseDrawText, measured with an OCR recognition model.
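The accuracy metric can be reproduced in spirit as follows; `ocr_recognize` is a hypothetical placeholder for whatever OCR model is used, not an API from the paper:

```python
def character_accuracy(generated_images, target_texts, ocr_recognize):
    """Fraction of generated images whose OCR output exactly matches the
    target text. `ocr_recognize` is a placeholder callable: image -> string."""
    correct = 0
    for image, target in zip(generated_images, target_texts):
        if ocr_recognize(image).strip() == target.strip():
            correct += 1
    return correct / max(1, len(target_texts))
```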
The proposed GlyphDraw model makes full use of the auxiliary glyph and position information and achieves an excellent average accuracy of 75%, demonstrating its strong ability in character image generation. The figure below shows a visual comparison of several methods.
In addition, by limiting the trainable parameters, GlyphDraw also maintains open-domain image synthesis performance: on MS-COCO FID-10k, general image synthesis performance drops by only 2.3 in FID.
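A minimal sketch of this kind of parameter-limiting strategy in PyTorch, where the pre-trained UNet is frozen and only a newly added module stays trainable (the module name is a hypothetical stand-in, not taken from the paper):

```python
from torch import nn

def freeze_pretrained_except(unet: nn.Module, trainable_keywords=("fusion_module",)):
    """Freeze all pre-trained weights except parameters whose names contain
    one of the given keywords (here a hypothetical glyph/text fusion module)."""
    trainable = []
    for name, param in unet.named_parameters():
        if any(k in name for k in trainable_keywords):
            param.requires_grad = True
            trainable.append(name)
        else:
            param.requires_grad = False
    return trainable

# Only the small trainable subset is handed to the optimizer, which limits
# overfitting and catastrophic forgetting of open-domain generation:
# optimizer = torch.optim.AdamW(
#     (p for p in unet.parameters() if p.requires_grad), lr=1e-5)
```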
Interested readers can refer to the original paper for more research details.