
750,000 rounds of one-on-one battle between large models, GPT-4 won the championship, and Llama 3 ranked fifth

WBOY
2024-04-23 15:28:01

Regarding Llama 3, another test result has been released——

The large-model evaluation community LMSYS has released a new leaderboard: Llama 3 ranks fifth overall, and on the English-only leaderboard it ties with GPT-4 for first place.


Unlike other benchmarks, this leaderboard is built from one-on-one model battles: evaluators from across the web pose their own questions and cast their own votes.

In the overall ranking, Llama 3 placed fifth, behind three different versions of GPT-4 and Claude 3's largest model, Opus.

On the English-only leaderboard, Llama 3 overtook Claude and tied with GPT-4.

Meta chief scientist Yann LeCun was pleased with the result, retweeting it with a "Nice".


Soumith Chintala, the father of PyTorch, also called the result incredible and said he was proud of Meta:

The 400B version of Llama 3 is not even out yet, and it took fifth place on 70B parameters alone...
I still remember that when GPT-4 was released in March last year, achieving this level of performance seemed almost impossible.
……
The democratization of AI today is truly incredible, and I am very proud of my colleagues at Meta AI for this success.


So, what specific results does this list show?

Nearly 90 models competed in 750,000 rounds

As of the latest leaderboard release, LMSYS had collected nearly 750,000 one-on-one battle results covering 89 models.

Among them, Llama 3 has fought about 12,700 battles; GPT-4 appears in multiple versions, the most active of which has fought some 68,000.


One chart released by LMSYS shows the number of battles and win rates for some popular models; neither metric counts draws.


The leaderboard is split into an overall ranking and several sub-rankings. On the overall list, GPT-4-Turbo ranks first, tied with the earlier 1106 version of GPT-4 and Claude 3 Opus.

Another version (0125) of GPT-4 ranks second, followed closely by Llama 3.

But what’s more interesting is that the newer version 0125 does not perform as well as the older version 1106.


On the English-only leaderboard, Llama 3 ties outright with the two leading GPT-4 versions, even edging out 0125.


First place in the Chinese-language ranking is shared by Claude 3 Opus and GPT-4-1106, while Llama 3 falls outside the top 20.


Beyond language ability, the leaderboard also includes rankings for long-text and coding ability, where Llama 3 likewise places near the top.

So what exactly are LMSYS's "rules of the game"?

A large model test that everyone can participate in

This is a large model test that everyone can participate in. The questions and evaluation criteria are decided by the participants.

The specific "competition" process is divided into two modes: battle and side-by-side.


In battle mode, after the user enters a question, the system randomly draws two models from the pool. The tester does not know which models were drawn; the interface shows only "Model A" and "Model B".

After both models answer, the evaluator chooses which answer is better or declares a tie; there are also options for when neither answer meets expectations.

Only after a vote is cast are the models' identities revealed.

In side-by-side mode, the user picks the two specific models to pit against each other; the rest of the process is the same as in battle mode.

However, only votes cast in anonymous battle mode count toward the rankings, and if a model accidentally reveals its identity during the conversation, that result is invalidated.
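The anonymous-battle flow can be sketched roughly as follows (the model names, pool, and scoring convention are illustrative assumptions, not LMSYS's actual code):

```python
import random

# Hypothetical model pool; the real arena draws from ~90 models.
MODELS = ["gpt-4-turbo", "claude-3-opus", "llama-3-70b-instruct"]

def new_battle():
    """Draw two distinct models; the user sees only anonymous labels."""
    model_a, model_b = random.sample(MODELS, 2)
    return {"Model A": model_a, "Model B": model_b}

def record_vote(battle, vote):
    """vote is 'A', 'B', or 'tie'; identities are revealed only now.
    Returns (model_a, model_b, score_for_a) for the later Elo update."""
    score_a = {"A": 1.0, "B": 0.0, "tie": 0.5}[vote]
    return battle["Model A"], battle["Model B"], score_a
```

Keeping the identities hidden until after the vote is what makes the comparison blind; any battle where a model names itself mid-conversation would simply be discarded.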


From each model's win rate against every other model, a pairwise win-rate chart can be drawn.

[Figure: pairwise win-rate heatmap; schematic from an earlier version of the leaderboard]

The final ranking is obtained by converting the win-rate data into scores via the Elo rating system.

The Elo rating system, devised by physics professor Arpad Elo, is a method for calculating players' relative skill levels.

In LMSYS's setup, every model starts with a rating (R) of 1000, and the expected win rate (E) is then calculated from:

E_A = 1 / (1 + 10^((R_B - R_A) / 400))

As testing proceeds, ratings are corrected using the actual score (S), which takes one of three values: 1, 0, or 0.5, corresponding to a win, a loss, or a draw.

The correction is given by the following formula, where K is a coefficient that the testers tune to the situation:

R'_A = R_A + K × (S_A - E_A)

Once all valid results have been fed through this calculation, each model's Elo score is obtained.
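Putting the two formulas together, a minimal Elo update can be sketched in a few lines of Python (K = 32 here is purely illustrative; LMSYS tunes its own coefficient):

```python
def expected_score(r_a, r_b):
    """Expected win rate of A against B under the Elo model:
    E_A = 1 / (1 + 10^((R_B - R_A) / 400))."""
    return 1 / (1 + 10 ** ((r_b - r_a) / 400))

def update(r_a, r_b, s_a, k=32):
    """Update both ratings after one battle.
    s_a is the actual score for A: 1 (win), 0 (loss), or 0.5 (draw)."""
    e_a = expected_score(r_a, r_b)
    new_a = r_a + k * (s_a - e_a)
    new_b = r_b + k * ((1 - s_a) - (1 - e_a))
    return new_a, new_b

# Both models start at 1000; A wins one battle.
a, b = update(1000, 1000, 1)  # a -> 1016.0, b -> 984.0
```

Note that an upset (a low-rated model beating a high-rated one) moves the ratings much more than an expected result, since the gain is proportional to S − E.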

In practice, however, the LMSYS team found this algorithm insufficiently stable, so they applied statistical corrections.

They applied bootstrap resampling, which yields more stable results and allows confidence intervals to be estimated.

The final revised Elo score became the basis for ranking in the list.
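As a rough illustration of the bootstrap idea (not LMSYS's actual implementation), one can resample the battle log with replacement, recompute Elo on each resample, and use the spread of the resulting ratings as an interval estimate:

```python
import random
import statistics

def elo_once(battles, k=4):
    """One Elo pass over a battle log of (model_a, model_b, score_a) tuples."""
    ratings = {}
    for a, b, s in battles:
        ra = ratings.setdefault(a, 1000.0)
        rb = ratings.setdefault(b, 1000.0)
        e = 1 / (1 + 10 ** ((rb - ra) / 400))
        ratings[a] = ra + k * (s - e)
        ratings[b] = rb + k * ((1 - s) - (1 - e))
    return ratings

def bootstrap_elo(battles, rounds=200, seed=0):
    """Resample the log with replacement and recompute Elo each round.
    Returns, per model: (median rating, min, max) over the bootstrap
    samples -- the min/max spread serving as a crude interval."""
    rng = random.Random(seed)
    samples = {}
    for _ in range(rounds):
        resampled = rng.choices(battles, k=len(battles))
        for model, r in elo_once(resampled).items():
            samples.setdefault(model, []).append(r)
    return {m: (statistics.median(rs), min(rs), max(rs))
            for m, rs in samples.items()}
```

A model whose interval is wide simply has not fought enough battles for its rating to be trusted, which is why the official leaderboard reports confidence intervals alongside scores.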

One More Thing

Llama 3 can already run on the large model inference platform Groq (not Musk’s Grok).

The platform's biggest selling point is speed: it has previously run the Mixtral model at nearly 500 tokens per second.

It runs Llama 3 quickly too: in hands-on tests, the 70B version reaches about 300 tokens per second, and the 8B version close to 800.


Reference link:
[1]https://lmsys.org/blog/2023-05-03-arena/
[2]https://chat.lmsys.org/?leaderboard
[3]https://twitter.com/lmsysorg/status/1782483699449332144

