Home  >  Article  >  Technology peripherals  >  The strongest model Llama 3.1 405B is officially released, Zuckerberg: Open source leads a new era

The strongest model Llama 3.1 405B is officially released, Zuckerberg: Open source leads a new era

PHPz
PHPzOriginal
2024-07-24 20:23:06577browse
Just now, the long-awaited Llama 3.1 has been officially released!

Meta officially issued the voice of "Open source leads a new era".
最强模型Llama 3.1 405B正式发布,扎克伯格:开源引领新时代
In the official blog, Meta said: "Until today, open source large language models have mostly lagged behind closed models in terms of functionality and performance. Now, we are ushering in a new era led by open source. We publicly release Meta Llama 3.1 405B, we believe this is the largest and most powerful open source base model in the world, with more than 300 million downloads of all Llama versions to date, and we are just getting started."

Founder of Meta. , CEO Zuckerberg also personally wrote a long article "Open Source AI Is the Path Forward", explaining why open source is a good thing for all developers, Meta, and the world.
最强模型Llama 3.1 405B正式发布,扎克伯格:开源引领新时代
Highlights from this release include:

  • The latest series of models extends context length to 128K, adds support for eight languages, and includes the top open source model Llama 3.1 405B;
  • Llama 3.1 405B is in a league of its own, and Meta officially says it is comparable to the best closed source models;
  • This release also provides more components (including reference systems) to be used with the model to make Llama a One system;
  • Users can experience Llama 3.1 405B through WhatsApp and meta.ai.
最强模型Llama 3.1 405B正式发布,扎克伯格:开源引领新时代
Address: https://llama.meta.com/

You can download it and try it out.

Llama 3.1 Introduction

Llama 3.1 405B is the first publicly available model that is comparable to top AI models in terms of common sense, manipulability, mathematics, tool usage and multi-language translation. .

Meta says the latest generation of Llama will inspire new applications and modeling paradigms, including leveraging synthetic data generation to boost and train smaller models, as well as model distillation - an approach never before seen in the open source space. ability to achieve.

At the same time, Meta has also launched upgraded versions of the 8B and 70B models, supporting multiple languages, with a context length of 128K and stronger reasoning capabilities. The latest models support advanced use cases such as long-form text summarization, multilingual conversational agents, and coding assistants.

For example, Llama 3.1 can translate stories into Spanish:

最强模型Llama 3.1 405B正式发布,扎克伯格:开源引领新时代

When the user asks "There are 3 shirts, 5 pairs of shorts and 1 dress, suppose you want to travel for 10 days. Prepare the clothes Is it enough? "The model can perform inference quickly.

最强模型Llama 3.1 405B正式发布,扎克伯格:开源引领新时代

Long context: For uploaded documents, Llama 3.1 is able to analyze and summarize large documents up to 8k tokens.

最强模型Llama 3.1 405B正式发布,扎克伯格:开源引领新时代

Coding Assistant, for user requirements, you can quickly write code:

最强模型Llama 3.1 405B正式发布,扎克伯格:开源引领新时代

In addition, the developer of Llama 3.1 405B also tweeted "spoiler", stating that the development of a model that integrates voice and visual capabilities like GPT-4o is still under development.
最强模型Llama 3.1 405B正式发布,扎克伯格:开源引领新时代
Meta has also made changes to the open source license to allow developers to use the output of Llama models (including 405B) to improve other models. Additionally, in keeping with its open source commitment, starting today, Meta is making these models available to the community for download at llama.meta.com and Hugging Face.

Download address:

  • https://huggingface.co/meta-llama
  • https://llama.meta.com/

Model evaluation

Meta is evaluated on more than 150 benchmark datasets, in addition, they also conduct extensive human evaluation.

Experimental results show that the flagship model Llama 3.1 405B is competitive with leading base models including GPT-4, GPT-4o and Claude 3.5 Sonnet across a range of tasks. Furthermore, the 8B and 70B small models are competitive with closed-source and open-source models with similar numbers of parameters.
最强模型Llama 3.1 405B正式发布,扎克伯格:开源引领新时代
最强模型Llama 3.1 405B正式发布,扎克伯格:开源引领新时代
最强模型Llama 3.1 405B正式发布,扎克伯格:开源引领新时代
Model Architecture

As Meta’s largest model to date, training Llama 3.1 405B using more than 15 trillion tokens is a major challenge. To enable training at this scale, Meta optimized the entire training stack and trained on over 16,000 H100 GPUs, making this model the first Llama model to be trained at this scale.
最强模型Llama 3.1 405B正式发布,扎克伯格:开源引领新时代
To solve this problem, Meta has made the following design choices, focusing on keeping the model development process scalable and simple.

  • A standard decoder Transformer model architecture with only minor adjustments was chosen instead of a hybrid expert model to maximize training stability.
  • Adopts an iterative post-training procedure, using supervised fine-tuning and direct preference optimization at each round. This enables Meta to create the highest quality synthetic data for every round and improve the performance of every feature.

Compared with previous versions of Llama, Meta has improved the quantity and quality of data used for pre-training and post-training, such as developing more careful pre-processing and management pipelines for pre-training data and post-training data. Develop more stringent quality assurance and filtration methods.

As expected from language model scaling laws, Meta’s new flagship model outperforms smaller models trained using the same procedure. Meta also uses 405B parameter models to improve the post-training quality of smaller models.

In order to support the large-scale inference output of 405B models, Meta quantized the model from 16 bits (BF16) to 8 bits (FP8), effectively reducing the required computing requirements and allowing the model to run on a single server node .

Command and Chat Tweaks

Llama 3.1 405B strives to improve the usefulness, quality and detailed instruction following of models in responding to user instructions, while ensuring a high level of security.

In the post-training phase, the research team built the final chat model by performing several rounds of alignment on the basis of the pre-trained model. Each round involves supervised fine-tuning (SFT), rejection sampling (RS), and direct preference optimization (DPO).

The research team uses synthetic data generation to produce the vast majority of SFT examples, and iterates multiple times to generate increasingly higher quality synthetic data across all features. Additionally, the research team employed multiple data processing techniques to filter these synthetic data to the highest quality and fine-tune the data volume across functional scalability.

Llama System

The Llama model has always existed as part of an AI system and can coordinate multiple components, including calling external tools. Meta is designed to go beyond the base model and give developers the flexibility to design and create custom products that fit their vision.

To responsibly develop AI beyond the model layer, Meta has released a complete reference system that includes multiple example applications as well as new components such as Llama Guard 3, a multilingual security model and Prompt Guard (a prompt injection filter). These sample applications are open source and can be built by the open source community.

In order to collaborate more broadly with industry, startups, and the open source community to help better define the interfaces of components, Meta has published a comment request for "Llama Stack" on GitHub. Llama Stack is a set of standardized interfaces for building canonical toolchain components (fine-tuning, synthetic data generation) and agent applications. This helps achieve interoperability more easily. 最强模型Llama 3.1 405B正式发布,扎克伯格:开源引领新时代
Unlike closed models, Llama model weights are available for download. Developers can fully customize the model to their needs and applications, train on new datasets, and perform additional fine-tuning.

Developed using Llama 3.1 405B

For ordinary developers, deploying such a large-scale model as 405B is undoubtedly a challenge, and it requires a lot of computing resources and professional skills. In communicating with the developer community, Meta realized that the development of generative AI is more than just giving input prompts to the model. They expect all developers to exploit the full potential of Llama 3.1 405B in the following areas:

  • Real-time and batch inference
  • Supervised fine-tuning
  • Testing and evaluating model performance in specific applications
  • Continuous pre-training
  • Retrieval Augmented Generation (RAG)
  • Function call
  • Synthetic data generation

Released from now on, Llama 3.1 40 All advanced features of the 5B model are will be open and developers can get started immediately. Developers can also explore higher-order workflows, such as synthetic data generation based on model distillation. In this upgrade, Meta also seamlessly integrates solutions provided by partners AWS, NVIDIA and Databricks to achieve more efficient retrieval augmentation generation (RAG). In addition, Groq has been optimized for low-latency inference for deploying models in the cloud, and similar performance improvements have been made for local systems.

Meta has also built-in a "tool gift package" for Llama 3.1 405B this time, including key projects such as vLLM, TensorRT and PyTorch, from model development to deployment "out of the box", all in one step.

Reference link: https://ai.meta.com/blog/meta-llama-3-1/

The above is the detailed content of The strongest model Llama 3.1 405B is officially released, Zuckerberg: Open source leads a new era. For more information, please follow other related articles on the PHP Chinese website!

Statement:
The content of this article is voluntarily contributed by netizens, and the copyright belongs to the original author. This site does not assume corresponding legal responsibility. If you find any content suspected of plagiarism or infringement, please contact admin@php.cn