


Yuanxiang's first MoE large model goes open source: 4.2B activated parameters, performance comparable to a 13B model
Yuanxiang has released XVERSE-MoE-A4.2B, a large model built on the industry's most cutting-edge Mixture of Experts (MoE) architecture. With only 4.2B activated parameters, its performance is comparable to a 13B model. The model is fully open source and unconditionally free for commercial use, letting small and medium-sized enterprises, researchers, and developers pick what they need from Yuanxiang's high-performance model lineup and deploy at low cost.
Mainstream large models such as GPT-3, Llama, and XVERSE follow the scaling law. During training and inference, every forward and backward pass activates all parameters, which is known as dense activation. As model scale grows, compute costs rise sharply.
A growing number of researchers believe that sparsely activated MoE models are a more effective way to scale up model size without a proportional increase in training and inference compute. Because the technique is relatively new, open-source MoE models and academic research in China are not yet widespread.
In Yuanxiang's own experiments, training on the same corpus of 2.7 trillion tokens, XVERSE-MoE-A4.2B, with only 4.2B activated parameters, leapfrogged XVERSE-13B-2 in performance while using only a fraction of the compute and cutting training time by 50%. Against open-source Llama baselines, the model significantly surpasses Llama2-13B and approaches Llama1-65B (figure below).
Figure: results across multiple authoritative benchmarks
On the open-source front, Yuanxiang's family of large models keeps iterating, pushing domestic open source toward a world-class level. On the application side, Yuanxiang draws on its unique strengths in AI and 3D technology to offer one-stop solutions such as large-model-driven 3D spaces and AIGC tools, serving industries including entertainment, tourism, and finance with intelligent customer service, creative experiences, and efficiency tools that deliver a leading user experience across scenarios.
Self-developed MoE technology and innovation
Mixture of Experts (MoE) is among the most cutting-edge model architectures in the industry. Because the technique is relatively new, domestic open-source models and academic research remain scarce. Yuanxiang independently developed an efficient MoE training and inference framework and innovated in three directions:
In terms of performance: based on the expert-routing and weight-computation logic unique to the MoE architecture, Yuanxiang developed a set of efficient fused operators that significantly improve compute efficiency. To address the high memory usage and heavy communication traffic of MoE models, it designed overlapping of computation, communication, and memory offloading to raise overall throughput, roughly in the spirit of the sketch below.
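To make the overlap idea concrete, here is a minimal, hypothetical double-buffering sketch, not Yuanxiang's actual framework: the all-to-all dispatch of the next token chunk runs on a side CUDA stream while experts compute on the current chunk. The helpers `dispatch_tokens` and `run_experts` are illustrative placeholders, and the real framework also overlaps memory offloading.

```python
import torch

comm_stream = torch.cuda.Stream()  # side stream for all-to-all communication

def moe_forward_pipelined(chunks, dispatch_tokens, run_experts):
    """Process token chunks so that dispatching chunk i+1 (communication)
    overlaps with expert computation on chunk i (double buffering)."""
    outputs = []
    pending = dispatch_tokens(chunks[0])          # dispatch the first chunk
    for i in range(len(chunks)):
        if i + 1 < len(chunks):
            with torch.cuda.stream(comm_stream):  # communicate next chunk...
                next_pending = dispatch_tokens(chunks[i + 1])
        outputs.append(run_experts(pending))      # ...while computing this one
        if i + 1 < len(chunks):
            # Do not consume the next chunk until its dispatch has finished.
            torch.cuda.current_stream().wait_stream(comm_stream)
            pending = next_pending
    return torch.cat(outputs)
```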
In terms of architecture: unlike traditional MoE designs (such as Mixtral 8x7B) that make each expert the size of a standard FFN, Yuanxiang adopts a more fine-grained expert design in which each expert is only a quarter the size of a standard FFN, improving model flexibility and performance. It also divides experts into two categories: shared experts, which stay active in every computation, and non-shared experts, which are selectively activated as needed. This design compresses general knowledge into the shared experts' parameters and reduces knowledge redundancy among the non-shared experts.
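A minimal sketch of the described layer layout may help: fine-grained experts at a quarter of the standard FFN width, a set of always-active shared experts, and top-k routing over the non-shared experts. All hyperparameter values and the naive dispatch loop are illustrative assumptions, not the released model's implementation.

```python
import torch
import torch.nn as nn

class FineGrainedMoE(nn.Module):
    """Sketch of the described layer: small experts (1/4 the standard FFN
    width), always-active shared experts, and top-k routing over the
    non-shared experts. Sizes and the dispatch loop are illustrative."""

    def __init__(self, d_model=1024, d_ffn=4096, n_shared=2, n_experts=32, top_k=6):
        super().__init__()
        d_expert = d_ffn // 4  # fine-grained: a quarter of the standard FFN
        make = lambda: nn.Sequential(
            nn.Linear(d_model, d_expert), nn.GELU(), nn.Linear(d_expert, d_model)
        )
        self.shared = nn.ModuleList(make() for _ in range(n_shared))
        self.experts = nn.ModuleList(make() for _ in range(n_experts))
        self.router = nn.Linear(d_model, n_experts)
        self.top_k = top_k

    def forward(self, x):                        # x: (tokens, d_model)
        out = sum(e(x) for e in self.shared)     # shared experts: always on
        probs = self.router(x).softmax(dim=-1)   # per-token routing scores
        weights, idx = probs.topk(self.top_k, dim=-1)
        for k in range(self.top_k):              # naive (unfused) dispatch
            for e, expert in enumerate(self.experts):
                mask = idx[:, k] == e            # tokens routed to expert e
                if mask.any():
                    out[mask] += weights[mask, k, None] * expert(x[mask])
        return out
```

A production implementation would replace the per-expert Python loop with the fused routing and dispatch operators described above.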
In terms of training: inspired by Switch Transformers, ST-MoE, and DeepSeekMoE, Yuanxiang introduces a load-balancing loss term to balance load across experts more evenly, and uses a router z-loss term to keep training efficient and stable.
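For reference, a sketch of these two auxiliary terms, following their published forms in Switch Transformers (load-balancing loss) and ST-MoE (router z-loss); the coefficients and exact shapes used in Yuanxiang's training code may differ.

```python
import torch
import torch.nn.functional as F

def moe_aux_losses(router_logits, expert_idx, n_experts):
    """Auxiliary loss sketch. router_logits: (tokens, n_experts) raw scores;
    expert_idx: (tokens, top_k) experts chosen for each token."""
    probs = router_logits.softmax(dim=-1)
    # Load-balancing loss (Switch Transformers): push the fraction of tokens
    # sent to each expert (f) and its mean routing probability (p) toward a
    # uniform distribution; the dot product is minimized when load is balanced.
    f = F.one_hot(expert_idx, n_experts).float().mean(dim=(0, 1))
    p = probs.mean(dim=0)
    balance_loss = n_experts * torch.sum(f * p)
    # Router z-loss (ST-MoE): penalize large router logits so the softmax
    # stays numerically stable and training remains smooth.
    z_loss = torch.logsumexp(router_logits, dim=-1).pow(2).mean()
    return balance_loss, z_loss
```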
The architecture was chosen through a series of comparative experiments (figure below). Experiments 2 and 3 had the same total and activated parameter counts, but the fine-grained expert design of Experiment 3 delivered higher performance. Building on this, Experiment 4 further split the experts into shared and non-shared types, which significantly improved results. Experiment 5 explored introducing shared experts while keeping each expert at the standard FFN size, but the results were not ideal.
Figure: comparison of experimental designs
Based on comprehensive test results (figure below), Yuanxiang ultimately adopted the architecture settings of Experiment 4. Looking to the future, newly open-sourced projects such as Google Gemma and …
Figure: comparison of experimental results
Free download of the large model
- Hugging Face: https://huggingface.co/xverse/XVERSE-MoE-A4.2B
- ModelScope: https://modelscope.cn/models/xverse/XVERSE-MoE-A4.2B
- GitHub: https://github.com/xverse-ai/XVERSE-MoE-A4.2B
- For inquiries, please email: opensource@xverse.cn
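A quick usage sketch for the Hugging Face release is below. XVERSE repositories ship custom modeling code, so `trust_remote_code=True` is required; the prompt and generation settings here are only illustrative.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "xverse/XVERSE-MoE-A4.2B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# The repository ships custom MoE modeling code, hence trust_remote_code.
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto", trust_remote_code=True
)
inputs = tokenizer("An introduction to Mixture of Experts:", return_tensors="pt")
outputs = model.generate(**inputs.to(model.device), max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```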