Baichuan Intelligence releases the Baichuan 2 large models: comprehensively ahead of LLaMA 2, with training checkpoints also open-sourced

Just as the industry was marveling that Baichuan Intelligence had been releasing a large model every 28 days on average, the company kept up the pace.

At a press conference on the afternoon of September 6, Baichuan Intelligence announced the official open-sourcing of the fine-tuned Baichuan 2 large models.

Zhang Bo, academician of the Chinese Academy of Sciences and honorary dean of the Institute for Artificial Intelligence at Tsinghua University, attended the press conference.

This is another new release from Baichuan since the Baichuan-53B large model in August. The open-source models include Baichuan2-7B, Baichuan2-13B, Baichuan2-13B-Chat, and their 4-bit quantized versions, all of which are free for commercial use.
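For readers who want to try the released models, the sketch below loads the chat model with the Hugging Face transformers library, following the usage documented in the Baichuan2 repository. Note that the `chat` helper is supplied by the model's own remote code rather than by transformers itself, so the exact call signatures should be verified against the repo.

```python
# A minimal sketch of running Baichuan2-13B-Chat via Hugging Face transformers,
# modeled on the usage shown in the Baichuan2 GitHub repository.
# trust_remote_code=True is required: the chat() helper is defined in the
# model's own code, not in the transformers library.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from transformers.generation.utils import GenerationConfig

repo = "baichuan-inc/Baichuan2-13B-Chat"
tokenizer = AutoTokenizer.from_pretrained(repo, use_fast=False, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    repo, device_map="auto", torch_dtype=torch.bfloat16, trust_remote_code=True
)
model.generation_config = GenerationConfig.from_pretrained(repo)

messages = [{"role": "user", "content": "Explain the Pythagorean theorem."}]
response = model.chat(tokenizer, messages)  # chat() comes from the remote code
print(response)
```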

In addition to fully releasing the models, Baichuan Intelligence has open-sourced checkpoints from the model-training process and published the Baichuan 2 technical report, detailing how the new models were trained. Wang Xiaochuan, founder and CEO of Baichuan Intelligence, said he hopes this will help academic institutions, developers, and enterprise users gain an in-depth understanding of how large models are trained, and better advance large-model research and the technological development of the community.

Baichuan 2 repository: https://github.com/baichuan-inc/Baichuan2

Technical report: https://cdn.baichuan-ai.com/paper/Baichuan2-technical-report.pdf

The models open-sourced today are on the "smaller" end of large models. Baichuan2-7B-Base and Baichuan2-13B-Base were both trained on 2.6 trillion tokens of high-quality, multilingual data. While retaining the strong generation and creative-writing ability, smooth multi-turn dialogue, and low deployment threshold of the previous generation of open-source models, both are significantly improved in mathematics, coding, security, logical reasoning, semantic understanding, and other capabilities.

"To put it simply, Baichuan7B's 7 billion parameter model is already on par with LLaMA2's 13 billion parameter model on the English benchmark. Therefore, we can make a big difference with a small one, and the small model is equivalent to the capabilities of the large model. , and models of the same size can achieve higher performance, comprehensively surpassing the performance of LLaMA2," Wang Xiaochuan said.

Compared with the previous-generation 13B model, Baichuan2-13B-Base improves mathematical ability by 49%, coding by 46%, security by 37%, logical reasoning by 25%, and semantic understanding by 15%.


According to the company, researchers at Baichuan Intelligence made extensive optimizations to the new models, from data acquisition through fine-tuning.

"We drew on more experience in previous searches, conducted multi-granularity content quality scoring on a large amount of model training data, used 260 million T of corpus level to train 7B and 13B models, and added multi-language support," Wang Xiaochuan said. "We can achieve a training performance of 180TFLOPS in the Qianka A800 cluster, and the machine utilization rate exceeds 50%. In addition, we have also completed a lot of security alignment work."

The two newly open-sourced models perform well on major evaluation leaderboards. On several authoritative benchmarks such as MMLU, CMMLU, and GSM8K, they lead LLaMA2 by a wide margin, and against other models with the same parameter count their results are also striking, significantly outperforming LLaMA2 competitors of the same size.

More notably, on multiple authoritative English benchmarks such as MMLU, the 7-billion-parameter Baichuan2-7B performs on par with the 13-billion-parameter LLaMA2 on mainstream English tasks.


Benchmark results of the 7B-parameter model.


Benchmark results of the 13B-parameter model.

Baichuan2-7B and Baichuan2-13B are not only fully open for academic research; developers can also use them commercially free of charge after applying by email for an official commercial license.

"In addition to model release, we also hope to provide more support to the academic field," Wang Xiaochuan said. "In addition to the technical report, we have also opened up the weight parameter model in the Baichuan2 large model training process. This can help everyone understand pre-training, or perform fine-tuning and enhancement. This is also the first time in China that a company has opened up such a model. Training process."

Training a large model involves many stages: acquiring massive amounts of high-quality data, keeping large-scale training clusters running stably, and tuning the model and algorithms. Each stage demands substantial talent, computing power, and other resources, and the high cost of training a model from zero to one has held the academic community back from in-depth research on large-model training.

Baichuan Intelligence has open-sourced checkpoints spanning the entire training run, from 220 billion to 2.64 trillion tokens. These are of great value to research institutions studying how large models train, continuing pre-training, and aligning models with human values, and they can advance large-model research in China.
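In practice, a researcher would pick one of these training slices and load it like any other Hugging Face checkpoint. The sketch below assumes the checkpoints are published as revisions of a single Hugging Face repository with tags named after the token count; both the repository name and the revision tag are illustrative and should be checked against the official Baichuan2 release.

```python
# A minimal sketch of loading one intermediate training checkpoint.
# The repo name and revision tag are assumptions modeled on the Baichuan2
# release; verify both against the official repository before use.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "baichuan-inc/Baichuan2-7B-Intermediate-Checkpoints"  # assumed repo name
rev = "train_00220B"  # assumed tag: weights after 220B training tokens

tokenizer = AutoTokenizer.from_pretrained(repo, revision=rev, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(repo, revision=rev, trust_remote_code=True)
```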


Previously, most open-source models disclosed only their weights and said little about training details, so developers could do no more than limited fine-tuning and found it hard to conduct in-depth research.

The Baichuan 2 technical report published by Baichuan Intelligence details the entire Baichuan 2 training process, including data processing, model-architecture optimization, scaling laws, and process metrics.

Since its founding, Baichuan Intelligence has treated promoting the prosperity of China's large-model ecosystem through open source as a key direction for the company. Less than four months after its establishment, it released two open-source, commercially free Chinese large models, Baichuan-7B and Baichuan-13B, as well as the search-enhanced large model Baichuan-53B. The two open-source models rank high on many authoritative leaderboards and have been downloaded more than 5 million times.

Last week, the granting of the first batch of licenses for large models to serve the public was major news in the technology field. Among the large-model companies founded this year, Baichuan Intelligence is the only one to have been registered under the "Interim Measures for the Management of Generative Artificial Intelligence Services" and is therefore able to officially offer services to the public.

Backed by industry-leading R&D and innovation in foundation models, the two open-source Baichuan 2 models have drawn positive responses from upstream and downstream companies: well-known firms including Tencent Cloud, Alibaba Cloud, Volcano Ark, Huawei, and MediaTek attended the conference and reached cooperation agreements with Baichuan Intelligence. According to the company, Baichuan's large models have been downloaded 3.37 million times on Hugging Face over the past month.

According to Baichuan Intelligence's earlier plan, the company will release a 100-billion-parameter large model this year and launch a "super application" in the first quarter of next year.


Statement: This article is reproduced from 机器之心 (Machine Heart).