Yesterday, Cailian News exclusively reported that Baidu’s Wenxin Large Model 4.0 is in intensive training and close to release-ready. People have long been curious about Wenxin Yiyan, and today we obtained further news about Wenxin 4.0 covering key details such as the underlying architecture, infrastructure, training data, and costs, reportedly from a highly credible source.
Let’s talk about the core conclusions first:
1. Yesterday’s revelations are basically accurate. As far as we currently understand, Wenxin Large Model 4.0 has in fact already undergone small-traffic testing.
2. Wenxin 4.0 has more parameters than any LLM whose parameter count has been publicly disclosed. It is also said to be the first large model in China trained on a ten-thousand-GPU (“Wanka”) cluster.
3. The inference cost is said to be far higher than that of Wenxin 3.5, reportedly around 8-10 times! (Large models really are expensive!)
If these revelations are true, this would be a major milestone for Baidu, and for domestic large models in general, in catching up with GPT-4.
Next, let’s take a look at the details of the revelations.
The largest-parameter model ever trained on a ten-thousand-GPU cluster?
According to the information we have received, the parameter scale of Wenxin Large Model 4.0 is larger than that of any LLM with publicly disclosed parameters, which suggests it likely exceeds the trillion-parameter level.
Looking at the parameter count alone, many people might shrug: according to what has been revealed so far, GPT-4 is already rumored to sit at around 1.8 trillion parameters. However, the source further stated that Wenxin Large Model 4.0 is a single dense model, rather than adopting the mixture-of-experts (MoE) architecture reportedly used by GPT-4 and many other large language models.
Previously, "genius hacker" George Hotez broke the news that the reason why GPT-4 uses a hybrid model is because the parameter size of the model cannot exceed 220 billion. OpenAI wants the model to get better, but if it just takes longer to train, the effect is already diminishing.
So if Baidu really has achieved a breakthrough with a single dense model, whether its capabilities improve correspondingly is something we can only judge after the actual release.
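To make the dense-versus-MoE trade-off concrete, here is a back-of-envelope sketch. Every number in it is an illustrative assumption drawn from the rumors above (1.8T total parameters, 16 experts with 2 routed per token for the GPT-4-style MoE; a flat 1T for the dense model), not a confirmed specification.

```python
# Back-of-envelope comparison of a dense model vs. a mixture-of-experts (MoE) model.
# All figures are illustrative assumptions based on public rumors, not confirmed specs.

def dense_active_params(total_params: float) -> float:
    """In a dense model every parameter participates in every token."""
    return total_params

def moe_active_params(total_params: float, num_experts: int, experts_per_token: int) -> float:
    """In a simplified MoE model only the routed experts are active per token.
    Assumes expert weights dominate the parameter count."""
    return total_params * experts_per_token / num_experts

# Rumored GPT-4-style MoE: ~1.8T total parameters, 16 experts, 2 routed per token.
gpt4_total = 1.8e12
gpt4_active = moe_active_params(gpt4_total, num_experts=16, experts_per_token=2)

# Hypothetical dense trillion-parameter model (the Wenxin 4.0 rumor): all params active.
dense_total = 1.0e12
dense_active = dense_active_params(dense_total)

print(f"MoE:   {gpt4_total/1e12:.1f}T total, ~{gpt4_active/1e12:.2f}T active per token")
print(f"Dense: {dense_total/1e12:.1f}T total, {dense_active/1e12:.2f}T active per token")
```

Under these assumed numbers, the rumored 1.8T MoE only activates roughly 220 billion parameters per generated token, which lines up with Hotz's figure, while a dense trillion-parameter model would push all of its weights through every token, a markedly heavier per-token compute load.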
A model with this many parameters inevitably demands enormous compute. The current word is that Wenxin 4.0 was trained on a ten-thousand-GPU AI cluster, which would make it the first large language model in China trained at that scale.
What does a ten-thousand-GPU cluster mean in practice? In China, only Huawei and Alibaba have disclosed building AI clusters of that size, and we have yet to see a specific model trained on one.
This shows that such a cluster is not easy to build, and harder still to exploit to its full potential. According to one analysis, it is precisely the deep integration with PaddlePaddle (Baidu's deep learning framework, 飞桨) that allows a model of this scale to be trained efficiently on a ten-thousand-GPU cluster.
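For a sense of why a cluster of this size is needed at all, a rough training-time estimate using the common approximation "total training compute ≈ 6 × parameters × training tokens" is sketched below. Every input (parameter count, token count, per-chip throughput, utilization) is an assumption for illustration, not a disclosed Wenxin 4.0 figure.

```python
# Rough training-time estimate using the common approximation
# total training compute C ≈ 6 * N (parameters) * D (training tokens).
# All inputs are assumptions for illustration, not disclosed Wenxin 4.0 figures.

params = 1.0e12              # assumed parameter count (trillion-scale rumor)
tokens = 2.0e12              # assumed number of training tokens
gpus = 10_000                # "Wanka" = ten-thousand-GPU cluster
peak_flops_per_gpu = 300e12  # assumed ~300 TFLOPS peak per accelerator
mfu = 0.40                   # assumed model FLOPs utilization

total_flops = 6 * params * tokens
cluster_flops_per_sec = gpus * peak_flops_per_gpu * mfu
seconds = total_flops / cluster_flops_per_sec

print(f"Total compute: {total_flops:.2e} FLOPs")
print(f"Estimated wall-clock time: {seconds / 86400:.0f} days")
```

Even with an optimistic 40% utilization, these assumed numbers keep ten thousand accelerators busy for months, which is why framework-level efficiency (such as the claimed PaddlePaddle integration) matters so much at this scale.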
Costs have surged, and small-traffic public testing is quietly underway
Not only has the training cost risen; the inference cost of Wenxin 4.0 is also said to be far higher than that of 3.5. We have not obtained the exact cost per thousand tokens, but it is rumored to be roughly 8-10 times higher, and that is under high model FLOPs utilization (MFU). At lower utilization, the cost would climb further.
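As a rough illustration of why per-token inference cost tracks both parameter count and utilization, here is a simple cost-per-thousand-tokens estimator. The parameter sizes, per-chip throughput, and GPU hourly price are all hypothetical placeholders, not leaked figures for either Wenxin model.

```python
# Simple per-1k-token inference cost estimator for a dense model.
# Uses the rough rule that generating one token costs ~2 * N FLOPs.
# Every number below is a hypothetical assumption, not a leaked figure.

def cost_per_1k_tokens(params: float, mfu: float,
                       gpu_flops: float = 300e12, gpu_hour_price: float = 2.0) -> float:
    """Estimated dollar cost to generate 1,000 tokens on one accelerator."""
    flops_needed = 2 * params * 1000              # FLOPs for 1k generated tokens
    effective_flops_per_sec = gpu_flops * mfu     # utilization-adjusted throughput
    seconds = flops_needed / effective_flops_per_sec
    return seconds / 3600 * gpu_hour_price

small = cost_per_1k_tokens(params=100e9, mfu=0.4)   # assumed smaller "3.5-class" model
large = cost_per_1k_tokens(params=1000e9, mfu=0.4)  # assumed trillion-parameter model

print(f"Assumed 3.5-class: ${small:.4f} / 1k tokens")
print(f"Assumed 4.0-class: ${large:.4f} / 1k tokens (~{large/small:.0f}x)")
print(f"Same large model at lower MFU: ${cost_per_1k_tokens(1000e9, 0.2):.4f} / 1k tokens")
```

Under these assumed sizes, a tenfold jump in active parameters translates into roughly a tenfold jump in per-token cost, and halving the utilization doubles it again, which is broadly the shape of the rumored 8-10x gap.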
I have to say that large models are really expensive. Building a leading foundation model is a game for giants!
Finally, according to internal employees, Baidu has quietly begun small-traffic testing of Wenxin Large Model 4.0, and a small number of Wenxin Yiyan users are already being served by the newest model version.
This claim is widely seen as relatively credible, and recent chatter in the tech community offers some corroborating clues.
Perhaps when you ask a question on Wenxin Yiyan right now, you are already talking to Wenxin Large Model 4.0; whether its outputs can rival GPT-4 remains to be seen.
I emphasize again that none of the above has been officially confirmed; readers should judge its accuracy for themselves.