Home  >  Article  >  Technology peripherals  >  Microsoft releases Phi-3, which has superior performance to Llama-3 and can be run on mobile phones

Microsoft releases Phi-3, which has superior performance to Llama-3 and can be run on mobile phones

王林
王林forward
2024-04-24 13:55:25853browse
Data has become the focus of improving the capabilities of large models.

## Not long after Llama-3 was released, competitors came, and they were small models that could be run on mobile phones. .

#On Tuesday, Microsoft released its self-developed small-size model Phi-3.

The new model has three versions, among which Phi-3 mini is a language model with 3.8 billion parameters. After training with 3.3 trillion tokens, its overall performance is on academic benchmarks and excellent results in internal tests.

Although Phi-3 mini is optimized to be deployed on mobile phones, its performance is comparable to models such as Mixtral 8x7B and GPT-3.5. Microsoft said the innovation lies primarily in the data sets used for training.
Microsoft releases Phi-3, which has superior performance to Llama-3 and can be run on mobile phones
At the same time, Phi-3 uses the same architecture as Llama-2, making it easier for the open source community to develop on its basis.
Microsoft releases Phi-3, which has superior performance to Llama-3 and can be run on mobile phones
Previously, Microsoft's Phi series models have aroused heated discussions. In June last year, Microsoft released the paper "Textbooks Are All You Need", with a scale of only The "textbook quality" data of 7B tokens trained the model phi-1 with 1.3B parameters and achieved good performance.

Last September, Microsoft further explored this path, allowing the 1.3B parameter Transformer architecture language model Phi-1.5 to show powerful coding capabilities.

At the end of last year, Phi-2 proposed by Microsoft has certain common sense capabilities, and its multiple benchmark test results exceeded Llama2 7B, Llama2 13B, and Llama2 13B in the order of 2.7B. Mistral 7B and other advanced models.
Microsoft releases Phi-3, which has superior performance to Llama-3 and can be run on mobile phones
Phi-3 Technical Report: https://arxiv.org/abs/2404.14219

Just proposed phi-3-mini is a 3.8 billion parameter language model trained on 3.3 trillion tokens. Experimental tests show that the overall performance of phi-3-mini is comparable to models such as Mixtral 8x7B and GPT-3.5. For example, phi-3-mini reaches 69% on MMLU and 8.38 on MT-bench.

Microsoft's previous research on the phi series of models has shown that high-quality "small data" can enable smaller models to achieve good performance. phi-3-mini is trained on heavily filtered network data and synthetic data (similar to phi-2) and further tuned for robustness, security, and chat format.

In addition, the research team also provides the initial parameter expansion results of the 7B and 14B models trained for 4.8T tokens, called phi-3-small and phi-3- medium, both are more capable than phi-3-mini.
Microsoft releases Phi-3, which has superior performance to Llama-3 and can be run on mobile phones
Academic Benchmarks

In standard open source benchmarks, phi-3-mini The comparison results with phi-2, Mistral-7b-v0.1, Mixtral-8x7B, Gemma 7B, Llama-3-instruct8B and GPT-3.5 are shown in the table below. To ensure comparability, all results are passed exactly The pipeline is obtained.
Microsoft releases Phi-3, which has superior performance to Llama-3 and can be run on mobile phones
Security

Phi-3-mini is based on Microsoft Responsible Artificial Intelligence developed in principle. The overall approach to securing large models includes post-training security tuning, red-teaming testing, automated testing, and assessment of dozens of RAI hazard categories. Microsoft leverages a modified useful and harmless preference dataset [BJN 22, JLD 23] inspired by [BSA 24] and multiple internally generated datasets to address RAI hazard categories for security post-training. An independent red team at Microsoft re-examined phi-3-mini to further identify areas for improvement in the post-training process.

Based on the feedback from the red team, the research team compiled additional data sets to improve the post-training data set. This process resulted in a significant reduction in deleterious response rates, as shown in Figure 3 .
Microsoft releases Phi-3, which has superior performance to Llama-3 and can be run on mobile phones
The following table shows the relationship between phi-3-mini-4k and phi-3-mini-128k and phi-2, Mistral-7B-v0.1, Gemma 7B Internal multi-turn dialogue RAI benchmark results.This benchmark leverages GPT-4 to simulate multiple rounds of conversations across five different categories and evaluate model responses.
Microsoft releases Phi-3, which has superior performance to Llama-3 and can be run on mobile phones
Defects

##Microsoft stated that in terms of LLM capabilities, although phi-3 -The mini model achieves similar levels of language understanding and reasoning capabilities as larger models, but it is still fundamentally limited by its size on some tasks. For example, the model simply doesn’t have the ability to store much “factual knowledge,” which can be seen in the low rating on TriviaQA. However, researchers believe these issues can be addressed through search engine enhancements.
Microsoft releases Phi-3, which has superior performance to Llama-3 and can be run on mobile phones
Reference content: https://news.ycombinator.com/item?id= 40127806

The above is the detailed content of Microsoft releases Phi-3, which has superior performance to Llama-3 and can be run on mobile phones. For more information, please follow other related articles on the PHP Chinese website!

Statement:
This article is reproduced at:jiqizhixin.com. If there is any infringement, please contact admin@php.cn delete