
Megvii's open-source multimodal large model supports document-level OCR in both Chinese and English. Does it mark the end of OCR?

WBOY | 2024-01-05 21:23:58

Want to convert a document image into Markdown format?

In the past, this task required multiple steps: text recognition, layout detection and reading-order sorting, formula and table processing, text cleaning, and so on.

Now it takes just a one-sentence instruction: the multimodal large model Vary directly outputs the end-to-end result.

Whether it is a long passage of Chinese or English text,

a document image containing formulas,

or a screenshot of a mobile page, Vary handles it directly.

It can even convert a table in an image into LaTeX format.

Of course, as a multimodal large model, it also retains the general-purpose capabilities one would expect.

Vary shows great potential and a very high ceiling: OCR no longer requires a lengthy pipeline, results are produced end to end, and the output format (LaTeX, Word, Markdown, and so on) can be selected simply by changing the user prompt.
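
As an illustration of this prompt-driven format selection, here is a minimal sketch. The prompt wording and the request structure below are assumptions made for this example; the canonical prompts live in the project's own demo scripts.

```python
# Illustrative only: the output format is chosen by the instruction sent with the image.
# The exact prompt text a released Vary checkpoint expects is an assumption here;
# check the official demo scripts for the real wording.
PROMPTS = {
    "markdown": "Convert the document in the image to Markdown:",
    "latex_table": "Convert the table in the image to LaTeX:",
    "plain_text": "OCR the image and return the plain text:",
}

def build_request(image_path: str, fmt: str) -> dict:
    """Pack an image path and a format-selecting instruction into one request."""
    return {"image": image_path, "prompt": PROMPTS[fmt]}

if __name__ == "__main__":
    print(build_request("paper_page.png", "markdown"))
    print(build_request("report_table.png", "latex_table"))
```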

With strong language priors, this architecture can also avoid error-prone words in OCR output, such as "leverage" and "dupole". For blurry documents, those same language priors are likewise expected to yield stronger OCR results.

The project attracted wide attention and sparked discussion as soon as it launched. One netizen exclaimed after seeing it: "It's so awesome!"

How is this achieved?

Inspired by large language models

Currently, almost all large multimodal models use CLIP as the vision encoder, or visual vocabulary. Trained on 400M image-text pairs, CLIP indeed has strong image-text alignment capabilities and covers image encoding for most everyday tasks.

But for dense, fine-grained perception tasks such as document-level OCR and chart understanding, especially in non-English scenarios, CLIP shows clear encoding inefficiency and out-of-vocabulary problems.

When a pure NLP large model such as LLaMA transitions from English to Chinese (a "foreign language" for the model), the original vocabulary encodes Chinese inefficiently, so the text vocabulary must be expanded to achieve good results.
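
The text-side analogy can be sketched with the Hugging Face transformers API: new entries are appended to the existing vocabulary and the embedding matrix is resized rather than rebuilt. The checkpoint name and added tokens below are placeholders for illustration.

```python
# A minimal sketch of text vocabulary expansion (the analogy the authors draw on).
# Checkpoint and new tokens are placeholders; the point is that the vocabulary is
# extended in place and the embedding table grows to match.
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "meta-llama/Llama-2-7b-hf"  # placeholder checkpoint
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name)

# Chinese words that a mostly English vocabulary would split into many byte-level
# pieces; adding them lets each be encoded as a single token.
new_tokens = ["模型", "视觉", "文档"]
num_added = tokenizer.add_tokens(new_tokens)

# Grow the input/output embedding tables to the enlarged vocabulary size.
model.resize_token_embeddings(len(tokenizer))
print(f"added {num_added} tokens, new vocab size: {len(tokenizer)}")
```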

The research team drew inspiration from exactly this phenomenon.

Multimodal large models built on the CLIP visual vocabulary now face the same problem: when they encounter a "foreign-language image", such as a page of a paper densely packed with text, it is hard to tokenize the image efficiently.

Vary is the solution proposed for this problem: it efficiently expands the visual vocabulary without rebuilding the original one.

Unlike existing methods that directly use the ready-made CLIP vocabulary, Vary works in two stages:

In the first stage, a new, stronger visual vocabulary is generated autoregressively with the help of a small decoder-only network.

In the second stage, the new vocabulary is fused with the CLIP vocabulary to train the LVLM efficiently and give it new capabilities. Trained on documents, charts, and similar data, Vary greatly enhances fine-grained visual perception.

While maintaining vanilla multimodal capabilities, it also unlocks end-to-end understanding of Chinese and English document images, formula screenshots, and charts.
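
A rough PyTorch sketch of the fusion idea is below. It assumes both vocabularies emit 256 tokens per image and that their features are concatenated along the channel dimension before being projected to the language model's hidden size, so the LLM still sees 256 visual tokens; the released model's actual layer shapes and fusion details may differ.

```python
# Illustrative sketch of fusing two visual vocabularies before the language model.
# Feature dimensions and the single-linear-projector design are assumptions.
import torch
import torch.nn as nn

class FusedVisualVocabulary(nn.Module):
    def __init__(self, clip_dim=1024, new_dim=1024, llm_dim=4096):
        super().__init__()
        # One linear layer maps the concatenated features into the LLM embedding space.
        self.proj = nn.Linear(clip_dim + new_dim, llm_dim)

    def forward(self, clip_tokens, new_tokens):
        # clip_tokens: (B, 256, clip_dim); new_tokens: (B, 256, new_dim)
        fused = torch.cat([clip_tokens, new_tokens], dim=-1)  # (B, 256, clip_dim + new_dim)
        return self.proj(fused)                               # (B, 256, llm_dim)

# Dummy features standing in for the outputs of the two (frozen/new) encoders.
clip_feats = torch.randn(1, 256, 1024)
new_feats = torch.randn(1, 256, 1024)
print(FusedVisualVocabulary()(clip_feats, new_feats).shape)  # torch.Size([1, 256, 4096])
```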

In addition, the research team noted that page content which might originally have required thousands of text tokens can be fed in as a document image and compressed by Vary into just 256 image tokens, leaving much more room for imagination in further page analysis and summarization.
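
A back-of-envelope comparison (with assumed numbers) shows where that headroom comes from:

```python
# Rough estimate only; the words-per-page and tokens-per-word figures are assumptions.
words_per_page = 800          # a dense page of English text
tokens_per_word = 1.3         # typical subword tokenizer ratio
text_tokens = int(words_per_page * tokens_per_word)   # ~1040 tokens if typed out
image_tokens = 256                                     # fixed cost as a document image
print(text_tokens, image_tokens, round(text_tokens / image_tokens, 1))  # roughly 4x compression
```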

Currently, Vary’s code and model are open source, and a web demo is also provided for everyone to try.

Interested readers can give it a try~


Statement: This article is reproduced from 51cto.com.