Kai-Fu Lee's 01.AI releases a world-class open-source multi-modal large model

Topping two authoritative leaderboards in Chinese and English, Kai-Fu Lee's 01.AI has handed in its multi-modal large model report card!

Less than three months after releasing its first open-source large models, Yi-34B and Yi-6B, Kai-Fu Lee's 01.AI has released a world-class open-source multi-modal large model.

The model is called Yi Vision Language (Yi-VL), and it is now officially open-sourced to the world.

It belongs to the Yi series and likewise comes in two versions:

Yi-VL-34B and Yi-VL-6B.

Let's first look at two examples to get a sense of Yi-VL's performance in diverse scenarios such as image-text dialogue:


Yi-VL analyzed the picture in detail, not only explaining the content on the sign, but even taking the "ceiling" into account.

In Chinese, Yi-VL can likewise express itself clearly, methodically, and accurately:


In addition, official benchmark results were released.

Yi-VL-34B reaches 41.6% accuracy on the English dataset MMMU, second only to GPT-4V at 55.7% and ahead of a series of multi-modal large models.

On the Chinese dataset CMMMU, Yi-VL-34B reaches 36.5% accuracy, ahead of today's cutting-edge open-source multi-modal models.


# What does Yi-VL look like?

Yi-VL is developed on top of the Yi language model and inherits its strong text-understanding capabilities: you only need to align images with the language model to obtain a capable multi-modal vision-language model. This is one of the core highlights of the Yi-VL model.

In terms of architecture, the Yi-VL model is based on the open-source LLaVA architecture and contains three main modules:

  • Vision Transformer (ViT): used for image encoding. Trainable parameters are initialized from the open-source OpenCLIP ViT-H/14 model, and by learning to extract features from large-scale "image-text" pairs, the model gains the ability to process and understand images.
  • Projection module: gives the model the ability to spatially align image features with text features. It consists of a multilayer perceptron (MLP) with layer normalization, letting the model fuse and process visual and textual information more effectively and improving the accuracy of multi-modal understanding and generation (see the sketch after this list).
  • Large language model: the introduction of Yi-34B-Chat and Yi-6B-Chat provides Yi-VL with strong language understanding and generation capabilities, helping it grasp complex language structures and produce coherent, relevant text output.
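To make the projection design concrete, here is a minimal PyTorch sketch of an MLP projection with layer normalization in the LLaVA style described above. The class name, layer count, and dimensions are illustrative assumptions, not the released Yi-VL configuration.

```python
import torch
import torch.nn as nn

class ProjectionMLP(nn.Module):
    """Hypothetical LLaVA-style projection: maps ViT patch features
    into the language model's embedding space. Dimensions are
    illustrative (ViT-H/14 width 1280; LLM hidden size 7168)."""

    def __init__(self, vision_dim: int = 1280, text_dim: int = 7168):
        super().__init__()
        self.proj = nn.Sequential(
            nn.LayerNorm(vision_dim),          # normalize incoming image features
            nn.Linear(vision_dim, text_dim),   # project into the text space
            nn.GELU(),
            nn.LayerNorm(text_dim),
            nn.Linear(text_dim, text_dim),     # refine within the text space
        )

    def forward(self, image_features: torch.Tensor) -> torch.Tensor:
        # image_features: (batch, num_patches, vision_dim) from the ViT
        return self.proj(image_features)       # (batch, num_patches, text_dim)

# Example: 257 patch tokens from a ViT, projected for the LLM.
tokens = ProjectionMLP()(torch.randn(1, 257, 1280))
print(tokens.shape)  # torch.Size([1, 257, 7168])
```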
△Caption: Overview of the Yi-VL model architecture and training procedure

As for the training method, Yi-VL's training process is divided into three stages, aiming to comprehensively improve the model's visual and language processing capabilities.

In the first stage, the ViT and Projection modules are trained on a dataset of 100 million "image-text" pairs.

At this stage, the image resolution is set to 224×224 to strengthen ViT's knowledge acquisition within its architecture while achieving efficient alignment with the large language model.

In the second stage, the ViT's image resolution is increased to 448×448, making the model better at recognizing complex visual details. About 25 million "image-text" pairs are used in this stage.

In the third stage, the parameters of the entire model are unfrozen for training, with the goal of improving the model's performance in multi-modal chat interaction. The training data covers diverse sources, totaling approximately 1 million "image-text" pairs, ensuring breadth and balance.
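The schedule above can be summarized in code form. This is a hypothetical sketch: the field names are made up for illustration, and details the text does not state (which modules stay frozen in stage two, the stage-three resolution) are assumptions rather than 01.AI's actual configuration.

```python
# Hypothetical summary of Yi-VL's three-stage training schedule.
TRAINING_STAGES = [
    {
        "stage": 1,
        "trainable": ["vit", "projection"],        # LLM kept frozen
        "image_resolution": 224,
        "image_text_pairs": 100_000_000,
        "goal": "align image features with the language model",
    },
    {
        "stage": 2,
        "trainable": ["vit", "projection"],        # assumption: LLM still frozen
        "image_resolution": 448,                   # higher detail
        "image_text_pairs": 25_000_000,
        "goal": "recognize complex visual details",
    },
    {
        "stage": 3,
        "trainable": ["vit", "projection", "llm"], # entire model unfrozen
        "image_resolution": 448,                   # assumption: unchanged from stage 2
        "image_text_pairs": 1_000_000,
        "goal": "multi-modal chat interaction",
    },
]

for s in TRAINING_STAGES:
    print(f"stage {s['stage']}: {s['image_text_pairs']:,} pairs at {s['image_resolution']}px")
```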

The 01.AI technical team has also verified that, building on the Yi language model's strong language understanding and generation capabilities, other multi-modal training approaches such as BLIP, Flamingo, and EVA can likewise be used to quickly train multi-modal image-text models that deliver efficient image understanding and fluent image-text dialogue.

Yi-series models can thus serve as base language models for multi-modal models, offering a new option for the open-source community. At the same time, 01.AI's multi-modal team is exploring multi-modal pre-training from scratch in order to approach and surpass GPT-4V faster and reach the world's first tier.

Currently, the Yi-VL models are publicly available on platforms such as Hugging Face and ModelScope, where users can experience the models firsthand in diverse scenarios such as image-text dialogue.
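As a starting point, the weights can be fetched from the Hugging Face Hub with `huggingface_hub`. A minimal sketch, assuming a `Yi-VL-6B` repository under the `01-ai` organization linked below; the exact inference code and prompt format ship with each model card, so treat this as illustrative only.

```python
from huggingface_hub import snapshot_download

# Download the Yi-VL-6B weights locally (repo id assumed from the
# 01-ai organization page; check the hub listing for exact names).
local_dir = snapshot_download(repo_id="01-ai/Yi-VL-6B")
print(f"Model files downloaded to: {local_dir}")
```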

# Surpassing a series of multi-modal large models

On the new multi-modal benchmark MMMU, both the Yi-VL-34B and Yi-VL-6B versions performed well.

The MMMU dataset (Massive Multi-discipline Multi-modal Understanding & Reasoning) contains 11,500 questions spanning six core disciplines (Art & Design, Business, Science, Health & Medicine, Humanities & Social Sciences, and Technology & Engineering). Its highly heterogeneous image types and interleaved text-image information place extremely high demands on a model's advanced perception and reasoning abilities.


Yi-VL-34B surpassed a series of multi-modal large models on this test set with an accuracy of 41.6%, second only to GPT-4V (55.7%), demonstrating a strong ability to understand and apply interdisciplinary knowledge.


Similarly, on the CMMMU dataset built for Chinese-language scenarios, the Yi-VL models show the unique advantage of "understanding Chinese users better."

CMMMU contains about 12,000 Chinese multi-modal questions derived from university exams, tests and textbooks.


Here, GPT-4V achieves 43.7% accuracy on the test set, followed by Yi-VL-34B at 36.5%, which leads the current cutting-edge open-source multi-modal models.


Project addresses:

[1] https://huggingface.co/01-ai

[2] https://www.modelscope.cn/organization/01ai
