GPT-4 refused to accept and was overtaken by Bard: the latest model has entered the arena
"Large Model Qualifying" authoritative list Chatbot Arena refreshed:
Google Bard has surpassed GPT-4 and now ranks second, behind only GPT-4 Turbo.
However, many netizens called the result unfair and voiced their dissatisfaction.
It turns out that Jeff Dean, head of Google AI, revealed that Bard's performance improved so much because it is now powered by a new Gemini Pro-scale model.
This also means that the Bard competing in these "ranked matches" has internet access.
The doubts of netizens revolve around this point:
Mixing online and offline large models on the same leaderboard is extremely unfair and easily misleading.
Hugging Face's "Chief Alpaca Officer" Omar Sanseviero also said:
In that case... can I also submit Mixtral with search functionality to lmsys?
Faced with these doubts, lmsys responded officially.
On the issue netizens care most about: the GPT-4 that Bard surpassed is the non-networked version. lmsys said that "if access to real-time data can improve the user experience, the rankings will reflect it."
It also directly @'ed OpenAI, Bing, and Microsoft executive Mikhail Parakhin, expressing its willingness to add the online version of GPT-4 or Bing Copilot to the arena.
The latest news is that OpenAI’s latest model gpt-4-0125-preview has now entered the arena and is waiting for users to participate in voting.
Chatbot Arena is an authoritative large-model leaderboard created by lmsys (the Large Model Systems Organization), led by UC Berkeley researchers.
The leaderboard uses anonymous 1v1 battle voting and ranks models with the Elo rating system.
Specifically, the voting page works as follows: the two models, Model A and Model B, are both anonymous. After asking several questions, users rate the answers with one of four options: A is better, B is better, A and B are equally good, or A and B are both bad.
It is worth mentioning that if the identity of the model is leaked during the question and answer process, the vote will be invalid.
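The Elo mechanics behind the votes can be sketched as follows. This is a minimal illustration of a standard Elo update after one battle, not lmsys's actual leaderboard code; the K-factor of 32 and the starting rating of 1000 are illustrative assumptions:

```python
def expected_score(rating_a, rating_b):
    """Probability that A beats B under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400))

def elo_update(rating_a, rating_b, score_a, k=32):
    """Update both ratings after one battle.

    score_a is 1.0 if A wins, 0.0 if A loses, 0.5 for a tie.
    k (the K-factor) controls how fast ratings move; 32 is a
    common choice, not necessarily what lmsys uses.
    """
    exp_a = expected_score(rating_a, rating_b)
    new_a = rating_a + k * (score_a - exp_a)
    new_b = rating_b + k * ((1.0 - score_a) - (1.0 - exp_a))
    return new_a, new_b

# Two equally rated models; A wins one battle.
a, b = elo_update(1000.0, 1000.0, 1.0)
print(round(a), round(b))  # 1016 984
```

The key property is that an upset (a low-rated model beating a high-rated one) moves ratings much more than an expected win, so scores converge toward each model's true strength as votes accumulate.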
According to the current list, there are 56 large models in the arena:
After the new version of Bard was released, it surpassed both versions of GPT-4 and climbed to second place, only 34 points behind first-place GPT-4 Turbo:
In more detail, across all Model A vs. Model B matchups (excluding ties), the proportion of wins for Model A is as follows:
There is also the number of battles for each pair of models (excluding ties):
Additionally, the Chatbot Arena leaderboard uses bootstrapping, randomly resampling the battle data 1,000 times, to estimate confidence intervals for the Elo scores.
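That bootstrap step can be sketched like this: resample the battle log with replacement many times, recompute Elo on each resample, and take percentiles as a confidence interval. `compute_elo` here is a naive sequential Elo pass, an assumption for illustration; the real leaderboard computation differs in detail:

```python
import random

def compute_elo(battles, k=32, init=1000.0):
    """Sequential Elo over (model_a, model_b, score_a) records."""
    ratings = {}
    for a, b, score_a in battles:
        ra = ratings.setdefault(a, init)
        rb = ratings.setdefault(b, init)
        exp_a = 1.0 / (1.0 + 10 ** ((rb - ra) / 400))
        ratings[a] = ra + k * (score_a - exp_a)
        ratings[b] = rb - k * (score_a - exp_a)
    return ratings

def bootstrap_ci(battles, n_rounds=1000, alpha=0.05, seed=0):
    """Bootstrap a (1 - alpha) confidence interval per model.

    Resamples the battle log with replacement n_rounds times,
    recomputes Elo on each resample, and returns the percentile
    bounds of the resulting rating distribution.
    """
    rng = random.Random(seed)
    samples = {}
    for _ in range(n_rounds):
        resample = rng.choices(battles, k=len(battles))
        for model, rating in compute_elo(resample).items():
            samples.setdefault(model, []).append(rating)
    bounds = {}
    for model, ratings in samples.items():
        ratings.sort()
        n = len(ratings)
        lo = ratings[int(alpha / 2 * n)]
        hi = ratings[min(n - 1, int((1 - alpha / 2) * n))]
        bounds[model] = (lo, hi)
    return bounds
```

A wide interval signals too few votes for a stable rank, which is exactly the caveat raised below about Bard's small vote count.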
The average winning rate of a single model relative to all other models is as follows:
However, note that the Arena ranking is updated in real time. Although Bard currently ranks second, it has received only a little over 3,000 votes.
In comparison, GPT-4 Turbo has received 30,000 votes, and each of the two surpassed versions of GPT-4 has several times as many votes as Bard.
Now that the latest version of GPT-4 has entered the arena (though the leaderboard has not yet been updated), we will have to wait and see the results~
Reference link: https://twitter.com/lmsysorg/status/1752035632489300239.