PRO | Why do MoE-based large models deserve more attention?

Aug 07, 2024, 07:08 PM
theory · MoE · Machine Heart Pro

In 2023, almost every field of AI evolved at an unprecedented speed, and AI kept pushing the technological boundaries of key tracks such as embodied intelligence and autonomous driving. Under the multimodal trend, will Transformer be shaken as the mainstream architecture for large AI models? Why has exploring large models based on the MoE (Mixture of Experts) architecture become a new trend in the industry? Can the Large Vision Model (LVM) become a new breakthrough in general vision? ... From this site's 2023 PRO member newsletters released over the past six months, we have selected 10 special interpretations that analyze the technological trends and industrial changes in these fields in depth, to help you prepare for the new year. This interpretation comes from the Week 50 industry newsletter of 2023.

Special interpretation: Why do MoE-based large models deserve more attention?

Date: December 12

Event: Mistral AI open-sourced Mixtral 8x7B, a model based on the MoE (Mixture-of-Experts) architecture, whose performance reaches the level of Llama 2 70B and GPT-3.5. This issue provides an extended interpretation of that event.

First, let’s figure out what MoE is and its ins and outs

1. Concept:

MoE (Mixture of Experts) is a hybrid model composed of multiple sub-models (i.e., experts), each of which is a local model specializing in a subset of the input space. The core idea of MoE is to use a gating network to decide which expert(s) each piece of data should be handled and trained by, thereby mitigating interference between different types of samples.

2. Main components:

Mixture-of-experts (MoE) is a sparsely gated deep learning technique composed of expert models and a gating model. Through the gating network, MoE distributes tasks and training data among the different expert models, letting each expert focus on the tasks it handles best and thereby making the model sparse.

① When training the gating network, each sample is assigned to one or more experts;
② When training the expert networks, each expert is trained to minimize the error on the samples assigned to it. (A minimal sketch of such a layer follows below.)
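
As a concrete illustration, below is a minimal sketch of a dense MoE layer in PyTorch. It assumes a linear gating network and small feed-forward experts; every name and size is an illustrative assumption rather than a description of any specific system.

```python
# A minimal dense MoE layer: a gating network scores the experts and the
# layer returns a gate-weighted combination of the expert outputs.
# All names and sizes here are illustrative assumptions.
import torch
import torch.nn as nn


class SimpleMoE(nn.Module):
    def __init__(self, d_model: int, d_hidden: int, n_experts: int):
        super().__init__()
        # Each expert is a small feed-forward sub-model.
        self.experts = nn.ModuleList([
            nn.Sequential(
                nn.Linear(d_model, d_hidden),
                nn.ReLU(),
                nn.Linear(d_hidden, d_model),
            )
            for _ in range(n_experts)
        ])
        # The gating network decides how much each expert matters per input.
        self.gate = nn.Linear(d_model, n_experts)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, d_model); gate weights sum to 1 across experts.
        weights = torch.softmax(self.gate(x), dim=-1)                   # (batch, n_experts)
        expert_outs = torch.stack([e(x) for e in self.experts], dim=1)  # (batch, n_experts, d_model)
        # During training, a sample's loss flows mostly into the experts
        # the gate weighted highly, so each expert specializes.
        return (weights.unsqueeze(-1) * expert_outs).sum(dim=1)


moe = SimpleMoE(d_model=64, d_hidden=256, n_experts=4)
out = moe(torch.randn(8, 64))  # -> shape (8, 64)
```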

3. The "predecessor" of MoE:

The "predecessor" of MoE is Ensemble Learning. Ensemble learning is the process of training multiple models (base learners) to solve the same problem, and simply combining their predictions (such as voting or averaging). The main goal of ensemble learning is to improve prediction performance by reducing overfitting and improving generalization capabilities. Common ensemble learning methods include Bagging, Boosting and Stacking.

4. Historical origins of MoE:

① The roots of MoE can be traced back to the 1991 paper "Adaptive Mixtures of Local Experts". The idea is similar to ensemble approaches: it provides a supervised procedure for a system composed of different sub-networks, with each individual network, or expert, specializing in a different region of the input space. The weight of each expert is determined by a gating network, and during training both the experts and the gating network are trained.

② Between 2010 and 2015, two different research areas contributed to the further development of MoE:

One is experts as components: in a traditional MoE setup, the entire system consists of a gating network and multiple experts. MoEs as whole models had been explored with support vector machines, Gaussian processes, and other methods. The work "Learning Factored Representations in a Deep Mixture of Experts" explores the possibility of using MoEs as components of deeper networks, which allows the model to be both large and efficient.

The other is conditional computation: traditional networks pass all input data through every layer. During this period, Yoshua Bengio investigated ways to dynamically activate or deactivate components based on the input tokens.

③ As a result, people began to explore mixture-of-experts models in the context of natural language processing. In the paper "Outrageously Large Neural Networks: The Sparsely-Gated Mixture-of-Experts Layer", MoE was scaled to a 137B-parameter LSTM by introducing sparsity, achieving fast inference even at very large scale.
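
The mechanism that makes this possible is top-k sparse gating: only the k highest-scoring experts receive a non-zero weight for a given input, so the rest never have to be evaluated. A simplified sketch is shown below; the original paper additionally adds tunable noise to the gate and an auxiliary load-balancing loss, both of which are omitted here, and all names are illustrative.

```python
# Simplified top-k sparse gating: only the k highest-scoring experts get a
# non-zero weight, so the remaining experts are never evaluated for that
# input. (Noise term and load-balancing loss from the original paper are
# omitted; names are illustrative.)
import torch


def sparse_gate(logits: torch.Tensor, k: int) -> torch.Tensor:
    """logits: (batch, n_experts) router scores -> sparse gate weights."""
    topk_vals, topk_idx = torch.topk(logits, k, dim=-1)
    weights = torch.zeros_like(logits)
    # Renormalize only over the selected experts; everything else stays 0.
    weights.scatter_(-1, topk_idx, torch.softmax(topk_vals, dim=-1))
    return weights


gates = sparse_gate(torch.randn(4, 8), k=2)  # at most 2 non-zeros per row
```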

Why do MoE-based large models deserve attention?

1. Generally speaking, expanding model scale significantly increases training costs, and limited computing resources have become a bottleneck for training large dense models. To address this problem, deep learning architectures based on sparse MoE layers were proposed.

2. The sparse mixture-of-experts model (MoE) is a special neural network architecture that can add learnable parameters to large language models (LLMs) without increasing the cost of inference, while instruction tuning is a technique for training LLMs to follow instructions.

3. Combining MoE with instruction fine-tuning can greatly improve language model performance. In July 2023, researchers from Google, UC Berkeley, MIT and other institutions published the paper "Mixture-of-Experts Meets Instruction Tuning: A Winning Combination for Large Language Models", showing that the combination of the mixture-of-experts model (MoE) and instruction tuning can greatly improve the performance of large language models (LLMs).

① Specifically, the researchers applied sparsely activated MoE in a set of instruction-fine-tuned sparse mixture-of-experts models called FLAN-MoE, replacing the feed-forward component of the Transformer layer with a MoE layer to provide better model capacity and computational flexibility; they then fine-tuned FLAN-MoE on the FLAN collection of datasets.
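
Architecturally, this amounts to swapping the feed-forward sub-layer of a Transformer block for a MoE layer. The sketch below is an illustrative reading of that idea rather than the paper's implementation: the moe_layer argument stands in for any token-wise MoE module, and all sizes and names are assumptions.

```python
# Illustrative Transformer block whose feed-forward sub-layer is replaced
# by a MoE layer, in the spirit of FLAN-MoE. A sketch, not the paper's
# implementation; `moe_layer` can be any token-wise module.
import torch
import torch.nn as nn


class MoETransformerBlock(nn.Module):
    def __init__(self, d_model: int, n_heads: int, moe_layer: nn.Module):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm1 = nn.LayerNorm(d_model)
        self.norm2 = nn.LayerNorm(d_model)
        self.moe = moe_layer  # takes the place of the dense feed-forward network

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Standard self-attention sub-layer with a residual connection.
        attn_out, _ = self.attn(x, x, x, need_weights=False)
        x = self.norm1(x + attn_out)
        # The feed-forward sub-layer is now the MoE layer.
        return self.norm2(x + self.moe(x))


# Shape check with a placeholder module standing in for the MoE layer:
block = MoETransformerBlock(d_model=64, n_heads=4, moe_layer=nn.Linear(64, 64))
h = block(torch.randn(2, 10, 64))  # -> shape (2, 10, 64)
```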

② Based on this approach, the researchers compared LLM performance under three experimental settings: direct fine-tuning on a single downstream task without instruction tuning; instruction tuning followed by in-context few-shot or zero-shot generalization on downstream tasks; and instruction tuning followed by further fine-tuning on a single downstream task.

③ The experimental results show that without instruction tuning, MoE models often perform worse than dense models of comparable computational cost. But when combined with instruction tuning, things change: the instruction-tuned MoE model (FLAN-MoE) outperforms larger dense models on multiple tasks, even though its computational cost is only one-third that of the dense model. Compared with dense models, MoE models gain more from instruction tuning, so when both computational efficiency and performance are considered, MoE becomes a powerful tool for large language model training.

4. The newly released Mixtral 8x7B model also uses a sparse mixture-of-experts network.

① Mixtral 8x7B is a decoder-only model in which the feed-forward module selects from 8 distinct sets of parameters: at every layer of the network, for every token, a router network selects two of the eight groups (experts) to process the token and combines their outputs.
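
The sketch below illustrates that per-token routing, under the assumption that the combination weights come from a softmax over the two selected router logits; the dimensions and module names are illustrative, not Mixtral's actual configuration.

```python
# Sketch of Mixtral-style per-token routing: the router scores 8 experts,
# the top 2 are selected per token, and their outputs are summed with
# weights from a softmax over the two selected logits. All dimensions and
# names are illustrative assumptions.
import torch
import torch.nn as nn

n_experts, top_k, d_model = 8, 2, 16
router = nn.Linear(d_model, n_experts)
experts = nn.ModuleList([nn.Linear(d_model, d_model) for _ in range(n_experts)])

tokens = torch.randn(5, d_model)               # 5 token representations
logits = router(tokens)                        # (5, 8) router scores
vals, idx = torch.topk(logits, top_k, dim=-1)  # 2 experts per token
weights = torch.softmax(vals, dim=-1)          # renormalized over the 2

out = torch.zeros_like(tokens)
for t in range(tokens.size(0)):
    for j in range(top_k):
        # Only the two selected experts ever run for this token.
        out[t] += weights[t, j] * experts[int(idx[t, j])](tokens[t])
```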

② The Mixtral 8x7B model matches or outperforms Llama 2 70B and GPT-3.5 on most benchmarks, with roughly 6x faster inference.

An important advantage of MoE: what is sparsity?

1. In traditional dense models, every input is computed through the complete model. In a sparse mixture-of-experts model, only a few experts are activated when processing an input, while most experts remain inactive; this is what "sparsity" means. Sparsity is an important advantage of the mixture-of-experts model and the key to improving the efficiency of its training and inference.
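
A rough back-of-the-envelope calculation shows why this matters: with 8 experts and top-2 routing, the model stores every expert's parameters, but only two experts' worth of expert computation runs per token. All numbers below are hypothetical and purely illustrative.

```python
# Back-of-the-envelope illustration (all numbers hypothetical): with 8
# experts and top-2 routing, every expert's parameters are stored, but
# only two experts run per token.
params_per_expert = 100_000_000   # hypothetical size of one expert
shared_params     = 50_000_000    # attention, embeddings, router, ...
n_experts, top_k  = 8, 2

total_params  = shared_params + n_experts * params_per_expert   # 850,000,000 stored
active_params = shared_params + top_k * params_per_expert       # 250,000,000 used per token

print(f"parameters stored:         {total_params:,}")
print(f"parameters used per token: {active_params:,}")
```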
