Easily understand 4K HD images! This large multi-modal model automatically analyzes the content of web posters, making it very convenient for office workers.
A large model that can automatically analyze the content of PDFs, web pages, posters, and Excel charts would be extremely convenient for office workers.
The InternLM-XComposer2-4KHD model (IXC2-4KHD for short), proposed by Shanghai AI Lab, the Chinese University of Hong Kong, and other research institutions, makes this a reality.
While other multi-modal large models are limited to resolutions of at most 1500×1500, this work raises the maximum input image resolution of a multi-modal large model beyond 4K (3840×1600) and supports arbitrary aspect ratios with dynamic resolutions ranging from 336 pixels up to 4K.
Three days after its release, the model topped the Hugging Face visual question answering model popularity list.
Easy 4K image understanding
Let’s take a look at the effect first~
The researchers input a screenshot of a paper (ShareGPT4V: Improving Large Multi-Modal Models with Better Captions, at 2550×3300 resolution) and asked which model achieves the highest performance on MMBench.
Notably, this information is not mentioned in the text of the input screenshot; it appears only in a rather complicated radar chart. Faced with such a tricky question, IXC2-4KHD successfully read the radar chart and answered correctly.
Faced with an even more extreme input resolution (816×5133), IXC2-4KHD easily recognizes that the image consists of 7 parts and accurately describes the text content of each part.
The researchers then comprehensively tested IXC2-4KHD on 16 multi-modal large model benchmarks, five of which (DocVQA, ChartQA, InfographicVQA, TextVQA, OCRBench) focus on high-resolution image understanding.
Using only 7B parameters, IXC2-4KHD matched or even surpassed GPT-4V and Gemini Pro on 10 of the benchmarks, demonstrating that it is not limited to high-resolution image understanding but generalizes across a variety of tasks and scenarios.
△With only 7B parameters, the performance of IXC2-4KHD is comparable to GPT-4V and Gemini-Pro
How is 4K dynamic resolution achieved?
In order to achieve the goal of 4K dynamic resolution, IXC2-4KHD includes three main designs:
(1) Dynamic resolution training:
△4K resolution image processing strategy
In the IXC2-4KHD framework, each input image is randomly resized during training to an intermediate size between its original area and a maximum area of 55×336×336 (equivalent to a 3840×1617 resolution).
The image is then automatically cut into multiple 336×336 tiles, and visual features are extracted from each tile separately. This dynamic resolution training strategy lets the model adapt to visual input of any resolution while also compensating for the scarcity of high-resolution training data.
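The resize-then-tile step can be sketched as follows. This is an illustrative approximation, not the model's exact implementation: the helper `tile_grid` and its rounding rule are assumptions, and the real pipeline pads the resized image before slicing it.

```python
import math

PATCH = 336  # side length of each vision-encoder tile

def tile_grid(width, height, max_tiles=55):
    """Choose a (rows, cols) grid of 336x336 tiles for an image,
    keeping rows * cols within the tile budget (55 tiles ~ 3840x1617).
    Illustrative sketch, not the paper's exact algorithm."""
    # Scale the image down so its total area fits within the tile budget.
    scale = min(1.0, math.sqrt(max_tiles * PATCH * PATCH / (width * height)))
    # Round each dimension up to a whole number of tiles (padding).
    cols = max(1, math.ceil(width * scale / PATCH))
    rows = max(1, math.ceil(height * scale / PATCH))
    # Rounding up may overshoot the budget; trim the longer side.
    while rows * cols > max_tiles:
        if cols >= rows:
            cols -= 1
        else:
            rows -= 1
    return rows, cols

print(tile_grid(3840, 1617))  # near-4K input -> (5, 11), i.e. 55 tiles
print(tile_grid(816, 5133))   # extreme aspect ratio -> (16, 3), 48 tiles
```

Each of the rows × cols tiles is then encoded independently by the vision encoder; a downsampled global view of the whole image is typically processed alongside the local tiles.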
Experiments show that as the upper limit of the dynamic resolution increases, the model achieves steady performance improvements on high-resolution image understanding tasks (InfographicVQA, DocVQA, TextVQA), and performance has still not saturated at 4K resolution, demonstrating the potential for further gains at even higher resolutions.
(2) Add tile layout information:
To help the model adapt to changing dynamic resolutions, the researchers found it necessary to provide the tile layout as additional input. They adopted a simple strategy: a special 'newline' ('\n') token is inserted after each row of tiles to inform the model of the tile layout. Experiments show that adding tile layout information has little effect on dynamic resolution training with relatively small variation (HD9, where the number of tiles does not exceed 9), but brings significant performance improvements for dynamic 4K resolution training.
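A minimal sketch of the row-break idea (the function name and the plain-text "\n" placeholder are illustrative assumptions; the actual model inserts a learnable embedding between the token sequences of adjacent tile rows):

```python
def add_row_breaks(tile_tokens, cols, newline="\n"):
    """tile_tokens: a flat, row-major list of per-tile token sequences.
    Appends a 'newline' marker after each complete row of tiles so the
    language model can recover the 2-D tile layout from the 1-D sequence."""
    out = []
    for i, tokens in enumerate(tile_tokens):
        out.extend(tokens)
        if (i + 1) % cols == 0:  # end of a tile row
            out.append(newline)
    return out

# 2 rows x 3 cols of dummy single-token tiles
tiles = [[f"t{i}"] for i in range(6)]
print(add_row_breaks(tiles, cols=3))
# -> ['t0', 't1', 't2', '\n', 't3', 't4', 't5', '\n']
```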
(3) Expanding the resolution during the inference phase
The researchers also found that a model trained with dynamic resolution can be directly extended at inference time by raising the maximum tile count, yielding additional performance gains. For example, evaluating a model trained at HD9 (up to 9 tiles) directly at HD16 brings a performance improvement of up to 8% on InfographicVQA.
IXC2-4KHD raises the resolution supported by multi-modal large models to the 4K level. The researchers note that the current strategy of supporting larger images by increasing the number of tiles runs into computational cost and GPU memory bottlenecks, so they plan to propose a more efficient strategy to support even higher resolutions in the future.
Paper link:
https://arxiv.org/pdf/2404.06512.pdf
Project link:
https://github.com/InternLM/InternLM-XComposer