Mamba's super evolved form subverts Transformer in one fell swoop! Single A100 running 140K context

The Mamba architecture, which previously took the AI community by storm, has launched a super variant today!

AI unicorn AI21 Labs has just open-sourced Jamba, the world's first production-grade Mamba large model!


Jamba performs well across multiple benchmarks, holding its own against some of the strongest open-source Transformer models available today.

Even against Mixtral 8x7B, the best-performing of those models and likewise a MoE architecture, the two trade wins and losses.

Specifically, Jamba:

  • is the first production-grade Mamba model, built on the novel SSM-Transformer hybrid architecture
  • delivers 3x the long-text throughput of Mixtral 8x7B
  • achieves an ultra-long 256K context window
  • is the only model of its scale that can handle a 140K context on a single GPU
  • is released under the Apache 2.0 open-source license, with open weights


Due to various constraints, the previous Mamba only reached 3B parameters, and it was questioned whether it could truly take up the mantle from the Transformer; RWKV, Griffin, and other members of the linear-RNN family have likewise only scaled to 14B.

This time, Jamba goes straight to 52B, letting the Mamba architecture compete head-on with production-grade Transformers for the first time.


Building on the original Mamba architecture, Jamba incorporates the strengths of the Transformer to make up for the inherent limitations of the state space model (SSM).


You could consider this an entirely new architecture, a hybrid of Transformer and Mamba, and best of all, it runs on a single A100.

It provides an ultra-long context window of up to 256K; a single GPU can handle a 140K context, and its throughput is 3x that of a Transformer!


Compared with the Transformer, it is striking to see how Jamba scales to huge context lengths.

Jamba adopts a MoE design: 12B of its 52B parameters are active at any time. The model's weights are open under Apache 2.0 and can be downloaded from Hugging Face.


Model download: https://huggingface.co/ai21labs/Jamba-v0.1

A new milestone for LLMs

The release of Jamba marks two important milestones for LLMs:

First, the successful integration of Mamba with the Transformer architecture; second, the successful scaling of this new form of model (SSM-Transformer) to production-grade size and quality.

Today's strongest large models are all Transformer-based, even though the two main shortcomings of the Transformer architecture are widely recognized:

Large memory footprint: the Transformer's memory footprint grows with context length. Running long context windows or massively parallel batches demands substantial hardware resources, which limits large-scale experimentation and deployment.

Slower inference as context grows: the Transformer's attention mechanism makes inference time grow quadratically with sequence length, so throughput keeps dropping. Because each token depends on the entire sequence before it, very long contexts become quite difficult to achieve.
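To make these two costs concrete, here is a rough back-of-the-envelope sketch. The layer counts and dimensions are illustrative assumptions (loosely typical of a 7B-class Transformer), not Jamba's or any specific model's configuration:

```python
# Back-of-the-envelope cost model for a decoder-only Transformer.
# All dimensions below are illustrative assumptions.

def kv_cache_bytes(context_len, n_layers=32, n_heads=32, head_dim=128,
                   bytes_per_elem=2):  # fp16
    # Keys AND values are cached for every layer, head, and token,
    # so memory grows linearly with context length.
    return 2 * n_layers * n_heads * head_dim * context_len * bytes_per_elem

def attention_flops(context_len, n_layers=32, d_model=4096):
    # QK^T and the attention-weighted sum of V each cost O(n^2 * d)
    # per layer, so compute grows quadratically with context length.
    return 2 * n_layers * 2 * context_len ** 2 * d_model

for n in (4_096, 32_768, 140_000):
    print(f"{n:>7} tokens: KV cache ~ {kv_cache_bytes(n) / 2**30:6.1f} GiB, "
          f"attention ~ {attention_flops(n):.2e} FLOPs")
```

Doubling the context doubles the KV cache but quadruples the attention compute; this is precisely the scaling that motivated the search for alternatives.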

Not long ago, two heavyweights from Carnegie Mellon and Princeton proposed Mamba, which instantly ignited people's hopes.


Mamba builds on the SSM, adding the ability to selectively extract information plus a hardware-efficient algorithm, solving the Transformer's problems in one stroke.

The new field immediately drew a crowd of researchers, and arXiv was soon flooded with Mamba applications and improvements, such as Vision Mamba, which brings Mamba to computer vision.


It has to be said that research moves at a frantic pace these days: bringing the Transformer into vision (ViT) took three years, but going from Mamba to Vision Mamba took only a month.

However, the original Mamba had a short context length, and the model itself was never scaled up, so it struggled to beat SOTA Transformer models, especially on recall-heavy tasks.

Jamba goes a step further: through its Joint Attention and Mamba architecture, it integrates the strengths of the Transformer, Mamba, and mixture-of-experts (MoE), optimizing memory, throughput, and performance all at once.


Jamba is the first hybrid architecture to reach production-grade scale (52B parameters).

As shown in the figure below, AI21's Jamba architecture takes a blocks-and-layers approach that lets Jamba successfully integrate the two architectures.

Each Jamba block contains either an attention layer or a Mamba layer, followed by a multi-layer perceptron (MLP).

(Figure: Jamba's blocks-and-layers architecture)
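To make the blocks-and-layers idea concrete, here is a minimal PyTorch sketch. The dimensions, the attention-to-Mamba ratio, and the `MambaStub` stand-in are assumptions for illustration only, not AI21's released implementation:

```python
import torch
import torch.nn as nn

class MambaStub(nn.Module):
    """Placeholder for a real selective-SSM (Mamba) layer; here just a
    gated linear map so the sketch runs end to end."""
    def __init__(self, d_model):
        super().__init__()
        self.in_proj = nn.Linear(d_model, 2 * d_model)
        self.out_proj = nn.Linear(d_model, d_model)

    def forward(self, x):
        h, gate = self.in_proj(x).chunk(2, dim=-1)
        return self.out_proj(h * torch.sigmoid(gate))

class JambaLayer(nn.Module):
    """One layer: an attention OR Mamba mixer, followed by an MLP,
    each sub-layer with pre-norm and a residual connection."""
    def __init__(self, d_model, d_ff, use_attention):
        super().__init__()
        self.use_attention = use_attention
        self.norm1 = nn.LayerNorm(d_model)
        self.norm2 = nn.LayerNorm(d_model)
        self.attn = (nn.MultiheadAttention(d_model, num_heads=8,
                                           batch_first=True)
                     if use_attention else None)
        self.mamba = None if use_attention else MambaStub(d_model)
        self.mlp = nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(),
                                 nn.Linear(d_ff, d_model))

    def forward(self, x):
        h = self.norm1(x)
        if self.use_attention:
            h, _ = self.attn(h, h, h, need_weights=False)
        else:
            h = self.mamba(h)
        x = x + h
        return x + self.mlp(self.norm2(x))

# A "Jamba block": mostly Mamba layers with an occasional attention layer.
def jamba_block(d_model=512, d_ff=2048, n_layers=8, attn_every=8):
    return nn.Sequential(*[
        JambaLayer(d_model, d_ff, use_attention=(i % attn_every == 0))
        for i in range(n_layers)
    ])

x = torch.randn(1, 16, 512)    # (batch, sequence, features)
print(jamba_block()(x).shape)  # torch.Size([1, 16, 512])
```

Because most layers are (linear-time) Mamba mixers, the occasional attention layer is the only place the quadratic cost appears, which is how a hybrid keeps long-context inference cheap.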

Jamba's second feature is its use of MoE to increase the total number of model parameters while streamlining the number of active parameters used at inference, raising model capacity without raising compute requirements.

To maximize model quality and throughput on a single 80GB GPU, the researchers tuned the number of MoE layers and experts used, leaving enough memory for common inference workloads.
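A minimal sketch of how MoE decouples total parameters from active parameters: every expert holds weights, but each token is routed through only the top-k of them. The expert count and k below are illustrative assumptions; the article does not give Jamba's exact configuration:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MoELayer(nn.Module):
    """Top-k mixture-of-experts MLP: all experts hold parameters, but each
    token only runs through k of them, so active compute stays small."""
    def __init__(self, d_model, d_ff, n_experts=16, k=2):
        super().__init__()
        self.k = k
        self.router = nn.Linear(d_model, n_experts)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(),
                          nn.Linear(d_ff, d_model))
            for _ in range(n_experts))

    def forward(self, x):                  # x: (tokens, d_model)
        weights, idx = self.router(x).topk(self.k, dim=-1)
        weights = F.softmax(weights, dim=-1)
        out = torch.zeros_like(x)
        for slot in range(self.k):         # send each token to its k experts
            for e, expert in enumerate(self.experts):
                mask = idx[:, slot] == e
                if mask.any():
                    out[mask] += weights[mask, slot, None] * expert(x[mask])
        return out

layer = MoELayer(d_model=64, d_ff=256)
total = sum(p.numel() for p in layer.experts.parameters())
active = total // len(layer.experts) * layer.k
print(f"expert params: {total:,} total, ~{active:,} active per token")
print(layer(torch.randn(8, 64)).shape)     # torch.Size([8, 64])
```

This is the same principle behind Jamba's 12B-active-of-52B-total split: memory holds all experts, but each token pays only for the few it is routed to.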


Against similarly sized Transformer-based models such as Mixtral 8x7B, Jamba achieves a 3x speedup on long contexts.

Jamba will be added to the NVIDIA API catalog in the near future.

A new contender in long context

Lately, the major players have all been racing to extend context length.

Models with smaller context windows tend to forget the content of recent conversations, while models with larger contexts avoid this pitfall and get a better grasp of the data streams they receive.

However, models with long context windows tend to be compute-intensive.

The generative model from startup AI21 Labs proves this need not be the case.


Running on a single GPU with at least 80GB of memory (such as an A100), Jamba can handle up to 140,000 tokens.

That is roughly 105,000 words, or 210 pages: the length of a decent-sized novel. (The conversion assumes the common rule of thumb of about 0.75 words per token and 500 words per page.)

By comparison, Meta's Llama 2 has a context window of just 32,000 tokens and requires 12GB of GPU memory.

By today's standards, that context window is clearly on the small side.

Some netizens were quick to respond: never mind the benchmarks, the key point is that Jamba has a 256K context. Nobody but Gemini offers anything that long, and Jamba is open source.


What makes Jamba truly unique

On the surface, Jamba may not seem remarkable.

Between DBRX, which stole the spotlight just yesterday, and Llama 2, there is already no shortage of freely available, downloadable generative AI models.

Jamba's uniqueness is hidden beneath the hood: it combines two model architectures at once, the Transformer and the state space model (SSM).

On one hand, the Transformer is the architecture of choice for complex reasoning tasks. Its core defining feature is the attention mechanism: for each piece of input, the Transformer weighs the relevance of all the other inputs and draws on them to produce the output.
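That mechanism fits in a few lines. Below is the textbook scaled dot-product attention, shown purely to illustrate the idea rather than as Jamba-specific code:

```python
import torch
import torch.nn.functional as F

def attention(q, k, v):
    """Each output is a relevance-weighted mix of all value vectors:
    scores[i, j] says how much input j matters when producing output i."""
    scores = q @ k.transpose(-2, -1) / k.shape[-1] ** 0.5  # (n, n): all pairs
    return F.softmax(scores, dim=-1) @ v                   # weigh and combine

n, d = 6, 16                      # sequence length, feature dimension
x = torch.randn(n, d)
print(attention(x, x, x).shape)   # torch.Size([6, 16])
```

The (n, n) score matrix over all input pairs is also exactly where the quadratic cost discussed earlier comes from.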

On the other hand, the SSM combines several strengths of earlier AI models, such as recurrent neural networks and convolutional neural networks, allowing it to process long sequences with far greater computational efficiency.

The SSM does have limitations of its own. But early representatives, such as Mamba, proposed by researchers at Princeton and CMU, can handle larger amounts of data than Transformer models while outperforming them on language generation tasks.
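The efficiency argument is easiest to see in the recurrence itself: an SSM carries a fixed-size state that is updated once per token, so time is linear in sequence length and no n-by-n attention matrix is ever formed. Below is a generic (non-selective) linear SSM sketch; Mamba's key addition is making the dynamics input-dependent, i.e. "selective":

```python
import torch

def ssm_scan(u, A, B, C):
    """Linear state-space recurrence: x_t = A x_{t-1} + B u_t, y_t = C x_t.
    One O(state_size) update per token; no n x n attention matrix."""
    x = torch.zeros(A.shape[0])
    ys = []
    for u_t in u:                  # one pass over the sequence: O(n) time
        x = A @ x + B @ u_t        # fold the new input into the state
        ys.append(C @ x)           # read the output off the state
    return torch.stack(ys)

n, d_in, d_state = 1000, 4, 16
A = 0.9 * torch.eye(d_state)       # toy dynamics; Mamba learns these and
B = torch.randn(d_state, d_in)     # makes them depend on the current input
C = torch.randn(1, d_state)
print(ssm_scan(torch.randn(n, d_in), A, B, C).shape)  # torch.Size([1000, 1])
```

The trade-off is that everything the model remembers must fit into that fixed-size state, which is one reason pure SSMs have lagged on recall-heavy tasks.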

On this, AI21 Labs product lead Dagan said:

While there have been a few early examples of SSM models, Jamba is the first commercial-grade model at production scale.

In his view, beyond being innovative and interesting for the community to study further, Jamba opens up enormous possibilities for efficiency and throughput.

At present, Jamba is released under the Apache 2.0 license, which imposes relatively few usage restrictions, though the model is not intended for commercial use. A fine-tuned version is expected in the coming weeks.

Even at this early research stage, Dagan asserts, Jamba clearly demonstrates the great promise of the SSM architecture.

"The added value of this model, whether due to its size or its architectural innovation, is that it can easily fit onto a single GPU."
