Google releases the latest 'screen reading' AI! PaLM 2-S automatically generates data, and multiple understanding tasks refresh SOTA

The big model everyone wants is one that is truly intelligent...

To that end, the Google team has developed a powerful "screen reading" AI.

The researchers call it ScreenAI, a new visual language model for understanding user interfaces and infographics.


Paper address: https://arxiv.org/pdf/2402.04615.pdf

At ScreenAI's core is a new textual representation of screenshots that identifies the type and position of UI elements.

The researchers used Google's language model PaLM 2-S to generate synthetic training data, which was used to train the model to answer questions about screen information, screen navigation, and screen content summarization. It is worth mentioning that this method offers new ideas for improving model performance on screen-related tasks.


For example, if you open a music app page, you can ask: "How many songs are less than 30 seconds long?"

ScreenAI gave a simple answer: 1.


Another example: instruct ScreenAI to open the menu, and it can select the corresponding item.


Source of architectural inspiration - PaLI

Figure 1 shows the ScreenAI model architecture. The researchers were inspired by the architecture of the PaLI family of models, which consists of a multimodal encoder block.

The encoder block contains a ViT-like visual encoder and an mT5 language encoder consuming image and text input, followed by an autoregressive decoder.

The input image is converted by the visual encoder into a series of embeddings, which are combined with the input text embedding and fed into the mT5 language encoder.

The output of the encoder is passed to the decoder, which produces text output.

This generalized formulation can use the same model architecture to solve various visual and multi-modal tasks. These tasks can be reformulated as text-image (input) to text (output) problems.

Compared to text input, image embeddings form a significant part of the input length to multi-modal encoders.

In short, this model uses an image encoder and a language encoder to extract image and text features, fuse the two and then input them into the decoder to generate text.
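The fusion described above can be sketched end to end. The toy NumPy code below mirrors the PaLI-style flow: patch embeddings are concatenated with text embeddings and passed through a shared encoder before decoding. All dimensions, the random projections, and the function names are illustrative stand-ins, not the real ViT/mT5 components.

```python
import numpy as np

rng = np.random.default_rng(0)

D = 16  # shared embedding width (toy value; the real models are far larger)

def visual_encoder(image_patches):
    # Stand-in for the ViT-style encoder: project each flattened patch to D dims.
    W = rng.normal(size=(image_patches.shape[-1], D))
    return image_patches @ W  # (num_patches, D)

def text_embedder(token_ids, vocab=1000):
    # Stand-in for the mT5 input embedding table.
    table = rng.normal(size=(vocab, D))
    return table[token_ids]  # (num_tokens, D)

def multimodal_encoder(seq):
    # Stand-in for the mT5 encoder stack: a trivial mixing operation.
    return seq + 0.1 * seq.mean(axis=0)

# 196 image patches of 48 raw features each, plus a 12-token text prompt.
patches = rng.normal(size=(196, 48))
tokens = rng.integers(0, 1000, size=12)

# Image embeddings are concatenated with text embeddings before the encoder,
# so they dominate the input length (196 vs. 12 positions here).
fused = np.concatenate([visual_encoder(patches), text_embedder(tokens)], axis=0)
encoded = multimodal_encoder(fused)  # this would be passed to the decoder
print(encoded.shape)
```

Note how the image side contributes most of the sequence length, matching the observation that image embeddings form a significant part of the encoder input.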

This construction method can be widely applied to multi-modal tasks such as image understanding.


In addition, the researchers further extended PaLI’s encoder-decoder architecture to accept various image blocking modes.

The original PaLI architecture only accepts image patches in a fixed grid pattern. However, screen-related data spans a wide variety of resolutions and aspect ratios.

In order for a single model to adapt to all screen shapes, it is necessary to use a tiling strategy that works for images of various shapes.

To this end, the Google team borrowed a technique introduced in Pix2Struct that generates image patches on an arbitrary grid based on the input image's shape and a predefined maximum number of patches, as shown in Figure 1.

This adapts to input images of various formats and aspect ratios without padding or stretching the image into a fixed shape, making the model more versatile and able to handle both mobile (i.e., portrait) and desktop (i.e., landscape) image formats.
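The Pix2Struct-style idea can be sketched as a small function that picks a patch grid from the image's aspect ratio and a patch budget. The exact resizing rule below is an assumption for illustration; only the principle (aspect-ratio-preserving patching with a capped patch count, instead of a fixed grid) comes from the text.

```python
import math

def arbitrary_grid(width, height, patch_size=16, max_patches=196):
    """Pick a rows x cols patch grid that preserves aspect ratio.

    Sketch of the Pix2Struct-style idea: scale the image so that
    rows * cols <= max_patches while keeping width/height proportions,
    rather than padding or stretching to a fixed square grid.
    """
    # Scale factor so the resized image holds at most max_patches patches.
    scale = math.sqrt(max_patches * (patch_size / width) * (patch_size / height))
    cols = max(1, min(max_patches, math.floor(width * scale / patch_size)))
    rows = max(1, min(max_patches, math.floor(height * scale / patch_size)))
    return rows, cols

# Portrait phone screenshot vs. landscape desktop screenshot:
print(arbitrary_grid(1080, 2400))  # tall image -> more rows than columns
print(arbitrary_grid(2560, 1440))  # wide image -> more columns than rows
```

The same patch budget is spent differently on portrait and landscape inputs, which is what lets one model serve both mobile and desktop screens.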

Model configuration

The researchers trained three models of different sizes, with 670M, 2B, and 5B parameters.

For the 670M and 2B parameter models, the researchers started with pre-trained unimodal checkpoints for the visual encoder and encoder-decoder language model.

For the 5B parameter model, they started from the multimodal pre-training checkpoint of PaLI-3, in which the ViT is trained together with a UL2-based encoder-decoder language model.

The parameter distribution between the visual and language models can be seen in Table 1.


Automatic data generation

The researchers note that the pre-training phase of model development depends largely on access to large and diverse datasets.

However, manually labeling extensive data sets is impractical, so the Google team’s strategy is to automatically generate data.

This approach leverages specialized small models, each of which is good at generating and labeling data efficiently and with high accuracy.

Compared to manual annotation, this automated approach is not only efficient and scalable, but also ensures a certain level of data diversity and complexity.

The first step is to give the model a comprehensive understanding of text elements, various screen components, and their overall structure and hierarchy. This fundamental understanding is critical to the model's ability to accurately interpret and interact with a variety of user interfaces.

Here, researchers collected a large number of screenshots from a variety of devices, including desktops, mobile devices, and tablets, by crawling applications and web pages.

These screenshots are then annotated with detailed tags that describe the UI elements, their spatial relationships, and other descriptive information.
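One plausible way to render such annotations as text for a language model is sketched below: each UI element becomes a line with its type, bounding box, and text, with children indented to convey hierarchy. The element names, coordinate convention, and layout here are hypothetical; the paper defines its own schema.

```python
def serialize_element(el, depth=0):
    # Render one UI element (and its children) as indented schema text.
    indent = "  " * depth
    box = " ".join(str(v) for v in el["bbox"])  # x0 y0 x1 y1, relative coords
    text = f' text:"{el["text"]}"' if el.get("text") else ""
    line = f'{indent}{el["type"]} {box}{text}'
    children = [serialize_element(c, depth + 1) for c in el.get("children", [])]
    return "\n".join([line] + children)

# Hypothetical annotation for a fragment of a music-app screenshot:
screen = {
    "type": "LIST", "bbox": [0, 10, 100, 90],
    "children": [
        {"type": "LIST_ITEM", "bbox": [0, 10, 100, 30], "text": "Song A (0:25)"},
        {"type": "LIST_ITEM", "bbox": [0, 30, 100, 50], "text": "Song B (3:41)"},
    ],
}
print(serialize_element(screen))
```

A representation like this encodes both the spatial relationships and the textual content of the screen, which is exactly what the annotation step needs to capture.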

In addition, to inject greater diversity into the pre-training data, the researchers also leveraged the power of language models, specifically PaLM 2-S, to generate QA pairs in two stages.

The first stage generates the screen schema described previously. The authors then design a prompt containing the screen schema to guide the language model in generating synthetic data.

After a few iterations, a prompt can be identified that effectively generates the required tasks, as shown in Appendix C of the paper.
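A minimal sketch of how such a schema-conditioned prompt might be assembled is shown below. The template wording is invented for illustration; the actual prompts are given in the paper's Appendix C.

```python
# Illustrative template for the second stage of the synthetic-data recipe:
# the screen schema is inserted into a prompt asking the language model
# (PaLM 2-S in the paper) to produce QA pairs about the screen.
QA_PROMPT = """You are given a textual schema of an app screenshot.
Generate question-answer pairs a user might ask about this screen.

Schema:
{schema}

Output one JSON object per line: {{"question": ..., "answer": ...}}"""

def build_prompt(schema: str) -> str:
    return QA_PROMPT.format(schema=schema)

schema = 'LIST_ITEM 0 10 100 30 text:"Song A (0:25)"'
prompt = build_prompt(schema)
print(prompt)
```

The model's responses would then be parsed into QA training examples, with a subset checked by hand for quality.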


To assess the quality of these generated responses, the researchers performed manual verification on a subset of the data to ensure that predetermined quality requirements were met.

This method, illustrated in Figure 2, greatly improves the depth and breadth of the pre-training dataset.

By leveraging the natural language processing capabilities of these models, combined with structured screen schemas, a variety of user interactions and scenarios can be simulated.


Two different sets of tasks

Next, the researchers defined two different sets of tasks for the model: an initial set of pre-training tasks and a subsequent set of fine-tuning tasks.

The difference between these two groups mainly lies in two aspects:

- Source of ground-truth data: for fine-tuning tasks, labels are provided or verified by human raters; for pre-training tasks, labels are inferred using self-supervised learning methods or generated by other models.

- Size of the dataset: pre-training tasks usually contain many more samples, so these tasks are used to train the model over a more extended series of steps.

Table 2 shows a summary of all pre-training tasks.

In the data mixture, each dataset is weighted in proportion to its size, with a maximum weight allowed per task.
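One plausible reading of this weighting scheme is: weights proportional to dataset size, clipped at a per-task cap and then renormalized. The cap value, the renormalization step, and the dataset names below are assumptions for illustration, not taken from the paper.

```python
def mixture_weights(sizes, max_weight=0.25):
    """Size-proportional mixture weights with a per-task cap (sketch)."""
    total = sum(sizes.values())
    w = {k: s / total for k, s in sizes.items()}
    # Clip dominant tasks at the cap, then renormalize so weights sum to 1.
    clipped = {k: min(v, max_weight) for k, v in w.items()}
    norm = sum(clipped.values())
    return {k: v / norm for k, v in clipped.items()}

# Hypothetical task sizes: one very large pre-training task and three smaller ones.
sizes = {"screen_annotation": 80_000_000, "screen_qa": 10_000_000,
         "navigation": 5_000_000, "summarization": 5_000_000}
weights = mixture_weights(sizes)
print(weights)  # the largest task is clipped to the cap, then renormalized
```

The cap keeps the largest dataset from drowning out the smaller tasks while still letting bigger datasets contribute proportionally more.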


Incorporating multimodal sources into multi-task training, from language processing to visual understanding and web page content analysis, enables the model to handle different scenarios effectively and enhances its overall versatility and performance.


The researchers use various tasks and benchmarks to evaluate the model during fine-tuning. Table 3 summarizes these benchmarks, including the main existing screen, infographic, and document comprehension benchmarks.


Experimental results

Figure 4 shows the performance of the ScreenAI model compared with the latest SOTA results on screen- and infographic-related tasks.

You can see that ScreenAI has achieved leading performance on different tasks.

In Table 4, the researchers present the results of single-task fine-tuning using OCR data.

For QA tasks, adding OCR can improve performance (e.g. up to 4.5% on Complex ScreenQA, MPDocVQA and InfoVQA).

However, using OCR slightly increases the input length, resulting in slower overall training. It also requires obtaining OCR results at inference time.


Additionally, the researchers conducted single-task experiments using the following model sizes: 670 million parameters, 2 billion parameters, and 5 billion parameters.

It can be observed in Figure 4 that for all tasks, increasing the model size improves performance, and the improvement at the largest scale has not yet saturated.

For tasks requiring more complex visual text and arithmetic reasoning (such as InfoVQA, ChartQA, and Complex ScreenQA), the improvement from the 2 billion to the 5 billion parameter model is significantly greater than that from the 670 million to the 2 billion parameter model.


Finally, Figure 5 shows that for images with aspect ratio >1.0 (landscape-mode images), the pix2struct segmentation strategy is significantly better than fixed-grid segmentation.

For portrait mode images, the trend is opposite, but fixed grid segmentation is only slightly better.

Given that the researchers wanted the ScreenAI model to work on images with different aspect ratios, they chose to use the pix2struct segmentation strategy.


Google researchers said that on some tasks, the ScreenAI model still needs further research to close the gap with larger models such as GPT-4 and Gemini.
