HKU and ByteDance propose a new paradigm for multimodal large models: perceiving first and then reasoning, like humans, to accurately locate objects in images

Multimodal large models (MLLMs) have recently demonstrated strong cognitive understanding across a wide range of visual tasks.

However, most of them remain limited to one-way image understanding: it is difficult for them to map the content they understand back onto the image.

For example, a model can easily say what objects are in a picture, but it cannot accurately point out where in the picture those objects are.

This lack of grounding capability directly limits the application of multimodal large models in downstream fields such as image editing, autonomous driving, and robot control.

To address this problem, researchers from the University of Hong Kong and ByteDance's commercialization team have proposed a new paradigm, Groma, which uses regional image encoding to give multimodal large models perceptual grounding capabilities.

With grounding built in, Groma can directly link text content to image regions, making conversations markedly more interactive and targeted.


Core idea

How to give multimodal large models the ability to locate objects, that is, to associate text content with image regions so that the model's statements are visually grounded, is a major current research focus. Given an image and a text description, the model should find the image region the description refers to; this is often called the image-text grounding (alignment) problem.

A common approach is to fine-tune the large language model to output object coordinates directly as text (a minimal sketch follows the list below). However, this method has several limitations:

1. A large language model pre-trained only on text has no innate spatial understanding, so fine-tuning on a small amount of data is not enough for accurate localization.

2. Localization demands high input-image resolution, but raising the resolution sharply increases the computational cost of the multimodal model.

3. The output format of a large language model is ill-suited to fine-grained localization tasks such as segmentation.
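To make the baseline concrete, here is a minimal sketch of the coordinate-as-text approach, in which the answer string itself carries a bounding box (in the spirit of models like Qwen-VL). The <box> tag format and the 0-1000 coordinate normalization are illustrative assumptions, not Groma's method; Groma deliberately avoids this design.

```python
# Hedged sketch of the "LLM outputs coordinates as text" baseline.
# The <box> tag format and [0, 1000] normalization are assumptions for
# illustration; parsing such strings is brittle, which is one reason
# Groma moves localization out of the language model.
import re

def parse_box(answer: str, img_w: int, img_h: int):
    """Extract a box from an answer like
    'The dog is at <box>(120,340),(560,880)</box>.'"""
    m = re.search(r"<box>\((\d+),(\d+)\),\((\d+),(\d+)\)</box>", answer)
    if m is None:
        return None  # the model emitted no parseable location
    x1, y1, x2, y2 = (int(v) for v in m.groups())
    # Map coordinates, assumed normalized to [0, 1000], back to pixels.
    return (x1 * img_w // 1000, y1 * img_h // 1000,
            x2 * img_w // 1000, y2 * img_h // 1000)

print(parse_box("The dog is at <box>(120,340),(560,880)</box>.", 640, 480))
# -> (76, 163, 358, 422)
```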

Based on these considerations, Groma instead delegates localization to the vision tokenizer of the multimodal model: the vision tokenizer discovers and localizes potential objects, then hands them to the large language model for recognition.


This design also exploits the vision tokenizer's own spatial understanding, so no external expert model (such as SAM) is needed to assist with localization, avoiding the redundancy of bolting on extra models.

Specifically, Groma introduces region encoding on top of global image encoding to realize grounding: as shown in the figure below, a Region Proposer first localizes potential objects, and a Region Encoder then encodes each proposed region into a region token.
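A minimal PyTorch-style sketch of this two-stage design is shown below. The class name mirrors the paper's terminology, but the internals — a generic box input and ROI-align pooling into a single token per region — are simplifying assumptions for illustration, not the official implementation.

```python
# Illustrative Groma-style region tokenization (simplified assumption,
# not the official code).
import torch
import torch.nn as nn
from torchvision.ops import roi_align

class RegionEncoder(nn.Module):
    """Pools the image features inside each proposed box into one
    region token living in the LLM's embedding space."""
    def __init__(self, feat_dim: int, llm_dim: int):
        super().__init__()
        self.proj = nn.Linear(feat_dim, llm_dim)

    def forward(self, feat_map, boxes):
        # feat_map: (1, C, H, W); boxes: (N, 4) in feature-map coordinates
        rois = torch.cat([torch.zeros(len(boxes), 1), boxes], dim=1)  # prepend batch index
        pooled = roi_align(feat_map, rois, output_size=1, spatial_scale=1.0)
        return self.proj(pooled.flatten(1))  # (N, llm_dim) region tokens

# The Region Proposer (a detection head in the paper) supplies the boxes;
# here we fake two proposals just to show the shapes.
feat = torch.randn(1, 256, 64, 64)
boxes = torch.tensor([[4.0, 4.0, 20.0, 28.0], [30.0, 10.0, 60.0, 50.0]])
print(RegionEncoder(256, 4096)(feat, boxes).shape)  # torch.Size([2, 4096])
```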

The large language model can identify the corresponding region from the semantics of its region token and, by inserting region tokens into its output, achieve a hyperlink-like effect that yields visually grounded conversation.

Similarly, a user-specified region can be encoded into a region token by the Region Encoder and inserted into the user's instruction, letting the multimodal model focus on that region and produce a targeted answer.
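Concretely, the interleaving can be pictured as ordinary token sequences with region-token placeholders spliced in; the <r0>, <r1>, <r2> tags below are hypothetical names for those placeholders, since the article does not specify Groma's special-token vocabulary.

```python
# Illustrative only: how region tokens might interleave with text.
# Tag names (<r0>, <r1>, <r2>) are hypothetical placeholders.

# Grounded output: each tag acts like a hyperlink from a phrase to a
# proposed image region.
grounded_answer = "A dog <r0> is chasing a frisbee <r1> on the lawn."

# Region-referring input: a user-specified box, encoded by the Region
# Encoder, is spliced into the instruction so the reply targets it.
referring_prompt = "What is the person <r2> holding?"

print(grounded_answer)
print(referring_prompt)
```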

[Figure: the Groma architecture — the Region Proposer localizes candidate objects, and the Region Encoder turns them into region tokens for the large language model]

To improve the robustness and accuracy of localization, Groma pre-trains the Region Proposer on more than 8M training samples (including SA-1B). As a result, its proposals cover not only common whole objects but also object parts and broader background elements.

Moreover, thanks to this decoupled design, Groma can feed high-resolution feature maps to the Region Proposer/Encoder while feeding low-resolution feature maps to the large model, cutting computation without sacrificing localization performance.
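One way to picture this dual-resolution design is below: the full-resolution feature map feeds the localization path, while a downsampled copy becomes a much shorter image-token sequence for the language model. The average-pooling choice and the exact sizes are assumptions for illustration.

```python
# Sketch of the dual-resolution idea (pooling choice and sizes assumed).
import torch
import torch.nn.functional as F

feat_hi = torch.randn(1, 256, 64, 64)           # high-res map: Region Proposer/Encoder path
feat_lo = F.avg_pool2d(feat_hi, kernel_size=4)  # (1, 256, 16, 16): LLM path

llm_tokens = feat_lo.flatten(2).transpose(1, 2)  # (1, 256, 256) token sequence
print(feat_hi.shape[-2] * feat_hi.shape[-1])     # 4096 spatial positions for grounding
print(llm_tokens.shape[1])                       # only 256 image tokens reach the LLM
```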

Experimental results

On standard grounding benchmarks, Groma outperforms MiniGPT-v2 and Qwen-VL.


Groma also validates its dialogue and reasoning ability on LLaVA-COCO, a VQA benchmark commonly used for multimodal large models.


In visual comparisons, Groma likewise shows higher recall and fewer hallucinations.


In addition, Groma supports referential dialogue and grounded chat, which fuse conversational ability with localization.


Thanks to the powerful cognitive reasoning of large language models, multimodal large models excel at visual understanding tasks.

However, traditional vision tasks such as detection, segmentation, and depth estimation rely more on visual perception, which is precisely what large language models lack.

Groma offers a new answer to this problem: decouple perception from cognition, with the vision tokenizer handling perception and the large language model handling cognition.

This perceive-first, then-reason pipeline not only matches the human visual process more closely, but also avoids the computational overhead of retraining the large language model.

On May 15, ByteDance announced its self-developed Doubao large model. It offers multimodal capabilities, powers 50+ downstream businesses such as the Doubao app, Coze, and Jimeng, and is open to enterprise customers through Volcano Engine, helping companies improve efficiency and accelerate intelligent innovation. The Doubao app is currently the AIGC application with the largest user base in the Chinese market, and ByteDance continues to invest heavily in top talent and cutting-edge technology, taking on the industry's toughest technical challenges.

Project website: https://www.php.cn/link/07a81d45ff030b63fe2a0f375b779f09

Paper: https://www.php.cn/link/b82b80956cfbe75101bd223fe6319dec

Open-source code: https://www.php.cn/link/b984bddf9e7c8fb09854e208c0284764

