


Current large language models (LLMs) such as GPT-4 exhibit excellent multi-modal capabilities when following open-ended instructions given an image. However, the performance of these models depends heavily on the choices of network structure, training data, and training strategy, and these choices have not been widely discussed in the previous literature. In addition, there is currently a lack of suitable benchmarks to evaluate and compare these models, which limits the development of multi-modal LLMs.
- Paper: https://arxiv.org/abs/2307.02469
- Website: https://lynx-llm.github.io/
- Code: https://github.com/bytedance/lynx-llm
In this article, the authors conduct a systematic and comprehensive study of the training of such models from both quantitative and qualitative perspectives, setting up more than 20 variants. For the network structure, different LLM backbones and model designs are compared; for the training data, the impact of data selection and sampling strategies is studied; for instructions, the effect of diverse prompts on the model's instruction-following ability is explored. For benchmarking, the article first proposes Open-VQA, an open-ended visual question answering evaluation set covering both image and video tasks.
Based on the experimental conclusions, the authors propose Lynx, which shows the most accurate multi-modal understanding among existing open-source GPT4-style models while maintaining the best multi-modal generation ability.
Evaluation scheme
Unlike typical vision-language tasks, the main challenge in evaluating GPT4-style models lies in balancing two aspects of performance: text generation ability and multi-modal understanding accuracy. To address this, the authors propose a new benchmark, Open-VQA, covering both video and image data, and conduct a comprehensive evaluation of current open-source models.
Specifically, two quantitative evaluation schemes are adopted:
- Collect an open-ended visual question answering (Open-VQA) test set, which contains questions in different categories covering objects, OCR, counting, reasoning, action recognition, temporal ordering, and more. Unlike VQA data sets with standard answers, Open-VQA's answers are open-ended. To evaluate performance on Open-VQA, GPT-4 is used as the discriminator, and its judgments are 95% consistent with human evaluation (a minimal judging sketch is given after this list).
- In addition, the authors use the OwlEval data set provided by mPLUG-owl [1] to evaluate the text generation ability of the model. Although it only contains 50 pictures and 82 questions, it covers story generation, advertisement generation, code generation, and other diverse problems, and human annotators are recruited to score the performance of different models.
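The paper uses GPT-4 as the discriminator for Open-VQA answers. Below is a minimal sketch of how such a judging call could look; the prompt wording and model name are illustrative assumptions, not the paper's exact setup, and it assumes the `openai` Python client (v1+) with an API key configured.

```python
# Minimal GPT-4-as-judge sketch for open-ended VQA answers (illustrative only).
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

JUDGE_PROMPT = (
    "You are grading an open-ended visual question answering result.\n"
    "Question: {question}\n"
    "Reference answer: {reference}\n"
    "Model answer: {prediction}\n"
    "Reply 'correct' if the model answer matches the reference in meaning, "
    "otherwise reply 'incorrect'."
)

def judge_answer(question: str, reference: str, prediction: str) -> bool:
    """Return True if GPT-4 judges the predicted answer as correct."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": JUDGE_PROMPT.format(
            question=question, reference=reference, prediction=prediction)}],
        temperature=0,
    )
    verdict = response.choices[0].message.content.strip().lower()
    return verdict.startswith("correct")

# Accuracy over a set of (question, reference, prediction) triples:
# accuracy = sum(judge_answer(q, r, p) for q, r, p in samples) / len(samples)
```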
Conclusion
To study the training strategy of multi-modal LLMs in depth, the authors set up more than twenty variants along several axes: network structure (prefix fine-tuning / cross-attention), training data (data selection and mixing ratio), instructions (single instruction / diversified instructions), LLM backbone (LLaMA [5] / Vicuna [6]), and image resolution (420 / 224). Through experiments, the following main conclusions are drawn:
- Multi-modal LLMs are less capable of following instructions than LLMs. For example, InstructBLIP [2] tends to generate short replies regardless of the input instruction, while other models tend to generate long sentences regardless of the instruction; the authors attribute this to a lack of high-quality and diverse multi-modal instruction data.
- The quality of training data is crucial to model performance. Based on experiments with different data, using a small amount of high-quality data performs better than using large-scale noisy data. The authors attribute this to the difference between generative training and contrastive training: generative training directly learns the conditional distribution of words rather than the similarity between text and images. Therefore, for better model performance, the data needs to satisfy two conditions: 1) it contains high-quality, fluent text; 2) the text and image content are well aligned.
- Tasks and prompts are critical to zero-shot ability. Using diverse tasks and instructions improves the model's zero-shot generation ability on unseen tasks, which is consistent with observations on text-only models.
- It is important to balance correctness with language generation ability. If the model is under-trained on downstream tasks (such as VQA), it is more likely to generate fabricated content that does not match the visual input; if the model is over-trained on downstream tasks, it tends to produce short answers and fails to generate longer answers as instructed by the user.
- Prefix-finetuning (PT) is currently the best solution for multi-modal adaptation of LLMs. In the experiments, the prefix-finetuning structure improves the ability to follow diverse instructions faster and is easier to train than the cross-attention (CA) structure. (Prefix-finetuning and cross-attention are two model structures; see the Lynx model section below for details.)
Lynx model
The authors propose Lynx, a prefix-finetuning GPT4-style model trained in two stages. In the first stage, approximately 120M image-text pairs are used to align visual and language embeddings; in the second stage, about 20 multi-modal image and video tasks together with natural language processing (NLP) data are used to tune the model's instruction-following ability.
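The two stages can be summarized as a simple schedule. The sketch below is schematic only: the trainable-module lists and objectives follow the description above, while any other details (names, structure of the config) are illustrative assumptions rather than the paper's exact configuration.

```python
# Schematic two-stage training schedule for a prefix-finetuning GPT4-style model.
from dataclasses import dataclass, field

@dataclass
class StageConfig:
    name: str
    data: list = field(default_factory=list)       # data sources mixed in this stage
    trainable: list = field(default_factory=list)  # modules updated; the rest stays frozen
    objective: str = "next-token prediction"

stage1 = StageConfig(
    name="alignment_pretraining",
    data=["~120M image-text pairs"],
    trainable=["visual-to-LLM projection", "adapters"],
    objective="next-token prediction on captions conditioned on visual tokens",
)

stage2 = StageConfig(
    name="instruction_tuning",
    data=["~20 multi-modal image/video tasks", "NLP instruction data"],
    trainable=["visual-to-LLM projection", "adapters"],
    objective="next-token prediction on instruction-response pairs",
)

for stage in (stage1, stage2):
    print(f"{stage.name}: train {stage.trainable} with {stage.objective}")
```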
[Figure 1: Overall structure of the Lynx model]
The overall structure of the Lynx model is shown in Figure 1 above.
The visual input is processed by the visual encoder to obtain visual tokens $$W_v$$. After a mapping layer, these are concatenated with the instruction tokens $$W_l$$ as the input to the LLM. This structure is called "prefix-finetuning" in this article, to distinguish it from the cross-attention structure used by Flamingo [3]. A minimal sketch of this token concatenation is given below.
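The following is a minimal PyTorch-style sketch of the prefix-finetuning input construction. Module names and dimensions are illustrative assumptions; the actual Lynx implementation is in the repository linked above.

```python
# Sketch: project visual features and prepend them to the text embeddings.
import torch
import torch.nn as nn

class PrefixMultimodalInput(nn.Module):
    def __init__(self, vision_dim: int = 1024, llm_dim: int = 4096):
        super().__init__()
        # Maps frozen vision-encoder features into the LLM embedding space.
        self.projection = nn.Linear(vision_dim, llm_dim)

    def forward(self, visual_features: torch.Tensor,
                instruction_embeds: torch.Tensor) -> torch.Tensor:
        """visual_features: [batch, num_visual_tokens, vision_dim]
        instruction_embeds: [batch, num_text_tokens, llm_dim]
        Returns the concatenated sequence fed to the LLM."""
        visual_tokens = self.projection(visual_features)               # mapped W_v
        return torch.cat([visual_tokens, instruction_embeds], dim=1)   # prefix + W_l

# Example usage with random tensors:
# module = PrefixMultimodalInput()
# inputs = module(torch.randn(2, 32, 1024), torch.randn(2, 16, 4096))
# inputs.shape -> torch.Size([2, 48, 4096])
```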
In addition, the authors find that the training cost can be further reduced by adding adapters after certain layers of the frozen LLM; a hedged sketch of such an adapter follows.
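Below is a sketch of inserting lightweight adapters after frozen LLM layers. The bottleneck design with a residual connection is a common adapter choice and is an assumption here, not necessarily the exact Lynx configuration.

```python
# Sketch: bottleneck adapters added after frozen transformer blocks.
import torch
import torch.nn as nn

class Adapter(nn.Module):
    """Bottleneck adapter with a residual connection."""
    def __init__(self, hidden_dim: int = 4096, bottleneck_dim: int = 64):
        super().__init__()
        self.down = nn.Linear(hidden_dim, bottleneck_dim)
        self.up = nn.Linear(bottleneck_dim, hidden_dim)
        self.act = nn.GELU()

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        return hidden_states + self.up(self.act(self.down(hidden_states)))

class FrozenBlockWithAdapter(nn.Module):
    """Wraps a frozen transformer block; only the adapter receives gradients."""
    def __init__(self, block: nn.Module, hidden_dim: int = 4096):
        super().__init__()
        self.block = block
        for p in self.block.parameters():
            p.requires_grad = False   # the LLM weights stay frozen
        self.adapter = Adapter(hidden_dim)

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        return self.adapter(self.block(hidden_states))
```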
Results
The authors evaluate existing open-source multi-modal LLMs on Open-VQA, MME [4], and the OwlEval human evaluation (results are shown in the tables and figures below; evaluation details are in the paper). The Lynx model achieves the best performance on the Open-VQA image and video understanding tasks, the OwlEval human evaluation, and the MME Perception tasks. InstructBLIP also achieves high performance on most tasks, but its replies are too short; in comparison, in most cases the Lynx model gives concise reasons to support the correct answer, which makes it more user-friendly (see the case examples section below).
1. The results on the Open-VQA image test set are shown in Table 1 below:
[Table 1: Results on the Open-VQA image test set]
2. The results on the Open-VQA video test set are shown in Table 2 below.
[Table 2: Results on the Open-VQA video test set]
3. The models with the top scores on Open-VQA are selected for human evaluation on the OwlEval evaluation set; the results are shown in Figure 4 below. The human evaluation shows that the Lynx model has the best language generation performance.
[Figure 4: Human evaluation results on OwlEval]
4. On the MME benchmark, Lynx achieves the best performance on the Perception tasks, ranking first on 7 of the 14 subtasks. (See the appendix of the paper for detailed results.)
Case examples
Open-VQA image cases
OwlEval cases
Open-VQA video cases
Summary
In this article, the authors settle on prefix-finetuning as the main structure of the Lynx model and propose the Open-VQA evaluation scheme with open-ended answers. Experimental results show that the Lynx model achieves the most accurate multi-modal understanding while maintaining the best multi-modal generation ability.