Google releases its latest "screen reading" AI! PaLM 2-S automatically generates data, setting new SOTA on multiple understanding tasks

WBOY
2024-03-06 18:30:03

The large model everyone wants is one that is truly intelligent...

To that end, the Google team has developed a powerful "screen reading" AI.

The researchers call it ScreenAI, a new visual language model for understanding user interfaces and infographics.

Paper address: https://arxiv.org/pdf/2402.04615.pdf

At its core, ScreenAI uses a new textual representation of screenshots that identifies the type and position of UI elements.

The researchers used the Google language model PaLM 2-S to generate synthetic training data, which was then used to train the model to answer questions about screen information, navigate screens, and summarize screen content. It is worth noting that this approach offers new ideas for improving model performance on screen-related tasks.

For example, if you open a page in a music app, you can ask: "How many songs are less than 30 seconds long?"

ScreenAI gave a simple answer: 1.

Another example: you can instruct ScreenAI to open the menu, and it selects the corresponding element.

Source of architectural inspiration - PaLI

Figure 1 shows the ScreenAI model architecture. The researchers were inspired by the architecture of the PaLI family of models, which consists of a multimodal encoder block.

The encoder block contains a ViT-like visual encoder and an mT5 language encoder consuming image and text input, followed by an autoregressive decoder.

The input image is converted by the visual encoder into a series of embeddings, which are combined with the input text embedding and fed into the mT5 language encoder.

The output of the encoder is passed to the decoder, which produces text output.

This generalized formulation can use the same model architecture to solve various visual and multi-modal tasks. These tasks can be reformulated as text-image (input) to text (output) problems.
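
To make this concrete, the following are hypothetical examples of how different screen tasks can all be cast as image-plus-text input and text output; the file names, prompt wording, and targets are invented for illustration and do not come from the paper.

```python
# Hypothetical task framings as (image, input text) -> output text.
# File names, prompts, and targets below are illustrative only.
examples = [
    {"image": "music_app.png",
     "input": "question: How many songs are less than 30 seconds long?",
     "target": "1"},
    {"image": "settings_screen.png",
     "input": "instruction: open the menu",
     "target": "tap the menu button in the top-left corner"},
    {"image": "news_page.png",
     "input": "summarize the content of this screen",
     "target": "A news article about ..."},
]
```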

Compared to text input, image embeddings form a significant part of the input length to multi-modal encoders.

In short, this model uses an image encoder and a language encoder to extract image and text features, fuse the two and then input them into the decoder to generate text.
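
As a rough sketch of this flow (not the actual ScreenAI code), the forward pass could look like the following, assuming hypothetical vision-encoder, text-embedding, mT5-encoder, and decoder modules are supplied:

```python
import torch
import torch.nn as nn

class ScreenAILikeModel(nn.Module):
    """Minimal sketch of the PaLI-style encoder-decoder flow described above."""

    def __init__(self, vision_encoder, text_embedder, mt5_encoder, decoder):
        super().__init__()
        self.vision_encoder = vision_encoder  # ViT-like: pixels -> patch embeddings
        self.text_embedder = text_embedder    # token ids -> text embeddings
        self.mt5_encoder = mt5_encoder        # fuses image + text embeddings
        self.decoder = decoder                # autoregressive text decoder

    def forward(self, image, input_ids, decoder_input_ids):
        image_embeds = self.vision_encoder(image)             # (B, N_img, D)
        text_embeds = self.text_embedder(input_ids)           # (B, N_txt, D)
        fused_input = torch.cat([image_embeds, text_embeds], dim=1)
        encoder_states = self.mt5_encoder(fused_input)        # contextualized states
        return self.decoder(decoder_input_ids, encoder_states)  # text logits
```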

This construction method can be widely applied to multi-modal tasks such as image understanding.

In addition, the researchers further extended PaLI’s encoder-decoder architecture to accept various image patching schemes.

The original PaLI architecture only accepts image patches in a fixed grid pattern when processing input images. However, screen-related data spans a wide variety of resolutions and aspect ratios.

In order for a single model to adapt to all screen shapes, it is necessary to use a tiling strategy that works for images of various shapes.

To this end, the Google team borrowed a technique introduced in Pix2Struct, which generates image patches on an arbitrary grid based on the input image shape and a predefined maximum number of patches, as shown in Figure 1.

This makes it possible to accommodate input images of various formats and aspect ratios without padding or stretching the image to a fixed shape, making the model more versatile and able to handle both mobile (i.e., portrait) and desktop (i.e., landscape) image formats.
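
A simplified sketch of this kind of aspect-ratio-preserving patching, loosely following the Pix2Struct idea (the patch size, maximum patch count, and function name are illustrative, not the exact values used by ScreenAI):

```python
import math

def flexible_grid(height, width, patch_size=16, max_patches=1024):
    """Choose a patch grid that roughly preserves the input aspect ratio.

    Simplified sketch of Pix2Struct-style patching: scale the image so that
    rows * cols stays within max_patches, instead of padding or stretching
    it to a fixed square grid.
    """
    scale = math.sqrt(max_patches * (patch_size / height) * (patch_size / width))
    rows = max(1, math.floor(scale * height / patch_size))
    cols = max(1, math.floor(scale * width / patch_size))
    return rows, cols, (rows * patch_size, cols * patch_size)

# A desktop (landscape) and a phone (portrait) screenshot get different grids:
print(flexible_grid(1080, 1920))  # e.g. (24, 42, (384, 672))
print(flexible_grid(2400, 1080))  # e.g. (47, 21, (752, 336))
```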

Model configuration

The researchers trained 3 models of different sizes, containing 670M, 2B and 5B parameters.

For the 670M and 2B parameter models, the researchers started with pre-trained unimodal checkpoints for the visual encoder and encoder-decoder language model.

For the 5B parameter model, they started from the multimodal pre-training checkpoint of PaLI-3, in which the ViT is trained together with a UL2-based encoder-decoder language model.

The parameter distribution between the visual and language models can be seen in Table 1.

Automatic data generation

The researchers note that the pre-training phase of model development depends heavily on access to large and diverse datasets.

However, manually labeling extensive data sets is impractical, so the Google team’s strategy is to automatically generate data.

This approach leverages specialized small models, each of which is good at generating and labeling data efficiently and with high accuracy.

Compared to manual annotation, this automated approach is not only efficient and scalable, but also ensures a certain level of data diversity and complexity.

The first step is to give the model a comprehensive understanding of text elements, various screen components, and their overall structure and hierarchy. This fundamental understanding is critical to the model's ability to accurately interpret and interact with a variety of user interfaces.

Here, researchers collected a large number of screenshots from a variety of devices, including desktops, mobile devices, and tablets, by crawling applications and web pages.

These screenshots are then annotated with detailed tags that describe the UI elements, their spatial relationships, and other descriptive information.
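
To give a sense of what such an annotation might look like, here is a hypothetical, heavily simplified example; the field names, element types, and values are illustrative and not the exact schema used by the Google team:

```python
# Hypothetical screen annotation for a single screenshot (illustrative only).
screen_annotation = {
    "screen_size": [1080, 2400],  # width, height in pixels
    "elements": [
        {"type": "TEXT",      "text": "Now Playing",  "bbox": [40, 120, 480, 180]},
        {"type": "BUTTON",    "text": "Shuffle",      "bbox": [40, 220, 300, 300]},
        {"type": "LIST_ITEM", "text": "Song A  0:28", "bbox": [40, 340, 1040, 420]},
        {"type": "LIST_ITEM", "text": "Song B  3:15", "bbox": [40, 440, 1040, 520]},
    ],
}
```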

In addition, to inject greater diversity into the pre-training data, the researchers also leveraged the power of language models, specifically PaLM 2-S, to generate QA pairs in two stages.

They start by generating the screen schema described previously. The authors then design a prompt containing the screen schema to guide the language model to generate synthetic data.

After a few iterations, a prompt can be identified that effectively generates the required tasks, as shown in Appendix C.
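
The two-stage idea can be sketched roughly as follows; the prompt wording is illustrative (the paper's real prompts are given in its Appendix C), and `generate` is a placeholder for a call to a language model such as PaLM 2-S, not a real client library:

```python
import json

def build_qa_prompt(screen_annotation: dict) -> str:
    """Stage 2 sketch: combine the screen schema with a task instruction."""
    return (
        "You are given a textual description of a screen.\n"
        f"Screen schema:\n{json.dumps(screen_annotation, indent=2)}\n\n"
        "Generate five question-answer pairs a user might ask about this screen. "
        "Return them as a JSON list with 'question' and 'answer' fields."
    )

def generate(prompt: str) -> str:
    """Placeholder for an LLM call (e.g. PaLM 2-S); plug in a real client here."""
    raise NotImplementedError

# qa_pairs_json = generate(build_qa_prompt(screen_annotation))
```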

To assess the quality of these generated responses, the researchers performed manual verification on a subset of the data to ensure that predetermined quality requirements were met.

This method, illustrated in Figure 2, greatly improves the depth and breadth of the pre-training dataset.

By leveraging the natural language processing capabilities of these models, combined with structured screen schemas, a variety of user interactions and scenarios can be simulated.

Two different sets of tasks

Next, the researchers defined two different sets of tasks for the model: an initial set of pre-training tasks and a subsequent set of fine-tuning tasks.

The difference between these two groups mainly lies in two aspects:

- Source of ground-truth data: For fine-tuning tasks, labels are provided or verified by human raters. For pre-training tasks, labels are inferred using self-supervised learning methods or generated by other models.

- Size of the dataset: Pre-training tasks usually contain a much larger number of samples, and these tasks are therefore used to train the model over a more extended series of steps.

Table 2 shows a summary of all pre-training tasks.

In the data mixture, each dataset is weighted in proportion to its size, with a maximum weight allowed for each task.
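
A minimal sketch of size-proportional weighting with a per-task cap might look like the following; the cap value and dataset names are illustrative, not the paper's actual configuration:

```python
def mixture_weights(dataset_sizes: dict, max_weight: float = 0.5) -> dict:
    """Weight each dataset in proportion to its size, capping any single task.

    Illustrative sketch: capped mass is redistributed among the remaining
    datasets in proportion to their sizes.
    """
    capped = set()
    while True:
        free = 1.0 - max_weight * len(capped)
        rest = sum(s for name, s in dataset_sizes.items() if name not in capped)
        weights = {name: (max_weight if name in capped else size / rest * free)
                   for name, size in dataset_sizes.items()}
        over = {name for name, w in weights.items() if w > max_weight}
        if not over:
            return weights
        capped |= over

# Example: the largest dataset is clipped to the cap; the rest keep their ratios.
print(mixture_weights({"screen_annotation": 8e6, "screen_qa": 3e6, "web_caption": 1e6}))
```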

Incorporating multimodal sources into multi-task training, from language processing to visual understanding and web page content analysis, enables the model to handle different scenarios effectively and enhances its overall versatility and performance.

The researchers use a variety of tasks and benchmarks to evaluate the quality of the model during fine-tuning. Table 3 summarizes these benchmarks, including the main existing benchmarks for screen, infographic, and document understanding.

Experimental results

Figure 4 shows the performance of the ScreenAI model and compares it with the latest SOTA results on screen- and infographic-related tasks.

You can see that ScreenAI has achieved leading performance on different tasks.

In Table 4, the researchers present the results of single-task fine-tuning using OCR data.

For QA tasks, adding OCR can improve performance (e.g. up to 4.5% on Complex ScreenQA, MPDocVQA and InfoVQA).

However, using OCR slightly increases the input length, resulting in slower overall training. It also requires obtaining OCR results at inference time.

Additionally, the researchers conducted single-task experiments using the following model sizes: 670 million parameters, 2 billion parameters, and 5 billion parameters.

It can be observed in Figure 4 that for all tasks, increasing the model size improves performance, and the improvement at the largest scale has not yet saturated.

For tasks requiring more complex visual-text and arithmetic reasoning (such as InfoVQA, ChartQA, and Complex ScreenQA), the improvement from the 2-billion-parameter model to the 5-billion-parameter model is significantly larger than the improvement from the 670-million-parameter model to the 2-billion-parameter model.

Finally, Figure 5 shows that for images with aspect ratio > 1.0 (landscape-mode images), the Pix2Struct segmentation strategy is significantly better than fixed-grid segmentation.

For portrait mode images, the trend is opposite, but fixed grid segmentation is only slightly better.

Given that the researchers wanted the ScreenAI model to work on images with different aspect ratios, they chose to use the Pix2Struct segmentation strategy.

Google researchers said that on some tasks, more research is still needed to close the gap between ScreenAI and larger models such as GPT-4 and Gemini.
