
Kai-Fu Lee officially announced the launch of the 'world's most powerful' open source large model: processing 400,000 Chinese characters, ranking first in both Chinese and English

PHPz
2023-11-06 18:13

Kai-Fu Lee pointed out: "We must make 01.AI join the first echelon of global large models."


The universe of open-source large models has a heavyweight new member: the "Yi" series of open-source large models from 01.AI (Lingyi Wanwu), the large-model company founded by Kai-Fu Lee, Chairman and CEO of Sinovation Ventures. 01.AI was officially established at the end of March this year and began operations in June and July; Dr. Kai-Fu Lee is its founder and CEO.

On November 6, 01.AI officially released the "Yi" series of pre-trained open-source large models, including Yi-6B and Yi-34B, giving the open-source large-model community "a little shock".
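
The released weights are published on the Hugging Face Hub under the 01-ai organization. As a minimal sketch of trying the model (the repo IDs 01-ai/Yi-6B and 01-ai/Yi-34B and the standard transformers loading path are assumed here; check the Hub for the exact release details):

```python
# Minimal sketch of loading a Yi base model with Hugging Face transformers.
# The repo ID is an assumption; check the Hub for the exact names and
# whether the release requires trust_remote_code.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "01-ai/Yi-6B"  # the 6B variant is easier to fit on one GPU

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # half precision to reduce memory use
    device_map="auto",           # spread layers across available devices
    trust_remote_code=True,
)

prompt = "Large language models are"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```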

According to the latest rankings on the Hugging Face open-source community platform (English) and the C-Eval Chinese benchmark, the Yi-34B pre-trained model has earned multiple SOTA (state-of-the-art) best-performance recognitions, becoming the "double champion" among global open-source large models and beating open-source competitors such as LLaMA2 and Falcon.


Yi-34B has also become the only domestic model to date to top the Hugging Face global open-source model leaderboard.


Punching above its weight: No. 1 on authoritative global model leaderboards in both English and Chinese
We learned that on Hugging Face's public English leaderboard for pre-trained open-source models, Yi-34B scored well across the various indicators, ranking first in the world at 70.72 and, punching well above its weight, beating far larger models such as LLaMA2-70B and Falcon-180B.

In parameter terms, Yi-34B uses less than half the parameters of LLaMA2-70B and about one-fifth of those of Falcon-180B, yet achieved the best results across the test tasks, outperforming the global leaders. On the strength of this showing, Yi-34B ranks among the most powerful open-source base models in the world.
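
The parameter comparison is simple arithmetic; here is a quick check of the ratios quoted above (parameter counts read off the model names, with a rough fp16 memory estimate for scale):

```python
# Rough check of the parameter-ratio claims. The fp16 memory estimate is
# params * 2 bytes and ignores activations, KV cache, and optimizer state.
models = {"Yi-34B": 34e9, "LLaMA2-70B": 70e9, "Falcon-180B": 180e9}

yi = models["Yi-34B"]
for name, params in models.items():
    print(f"{name}: {params / 1e9:.0f}B params, "
          f"~{params * 2 / 1e9:.0f} GB in fp16, "
          f"Yi-34B / {name} = {yi / params:.2f}")
# Yi-34B has ~0.49x the parameters of LLaMA2-70B ("less than half")
# and ~0.19x those of Falcon-180B ("about one-fifth").
```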

Source: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard


At the same time, Kai-Fu Lee said that as a home-grown large model, Yi-34B "understands" Chinese better, surpassing every open-source model in the world on the authoritative C-Eval Chinese leaderboard.

Against GPT-4, the reigning king of large models, Yi-34B holds an absolute advantage on the three main Chinese benchmarks of CMMLU, C-Eval, and Gaokao, underscoring its outstanding Chinese-language capability and its fit for the needs of the domestic market.

In a more comprehensive assessment, on MMLU (Massive Multitask Language Understanding), the most critical benchmark in global large-model evaluation, as well as BBH and other suites that reflect a model's all-around ability, Yi-34B was the standout performer, winning on multiple indicators including general ability, knowledge reasoning, and reading comprehension, results highly consistent with the Hugging Face evaluation.
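
Benchmarks like MMLU are typically scored by comparing the model's likelihood of each answer option rather than by free-form generation. A simplified illustration of that core idea (not the leaderboard's actual harness, which adds few-shot prompting and normalization):

```python
# Simplified MMLU-style multiple-choice scoring: pick the answer letter
# with the highest next-token log-probability. Real harnesses such as
# lm-evaluation-harness add few-shot examples and length normalization.
import torch

def score_choices(model, tokenizer, question: str, choices: list) -> int:
    letters = ["A", "B", "C", "D"]
    prompt = question + "\n" + "\n".join(
        f"{l}. {c}" for l, c in zip(letters, choices)
    ) + "\nAnswer:"
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    with torch.no_grad():
        logits = model(**inputs).logits[0, -1]  # logits for the next token
    logprobs = torch.log_softmax(logits, dim=-1)
    # token ids for " A", " B", ...; the leading space matters for BPE vocabs
    ids = [tokenizer.encode(f" {l}", add_special_tokens=False)[-1]
           for l in letters]
    return int(torch.stack([logprobs[i] for i in ids]).argmax())
```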


However, like LLaMA2, the Yi series open-source models score slightly below the GPT models on the GSM8k math and MBPP code benchmarks. Going forward, the Yi series plans to release continually trained models specializing in coding and mathematical ability.

A 200K ultra-long context window, open-sourced directly

When it comes to the context window, which is crucial to how large models perform in practice, the open-source Yi-34B release includes a version supporting a 200K ultra-long context window, the longest in the world. It can handle ultra-long inputs of roughly 400,000 Chinese characters, about the length of the classic novel The Scholars (Rulin Waishi). By comparison, OpenAI's GPT-4 offers a 32K context window, roughly 25,000 words of text.
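
Character-to-token ratios vary by tokenizer, so the 400,000-character figure is an approximation. A quick way to check whether a given document fits (the repo ID is again an assumption):

```python
# Sketch: count tokens in a long document against a 200K-token window.
from transformers import AutoTokenizer

CONTEXT_WINDOW = 200_000  # tokens, per the announced long-context version

tokenizer = AutoTokenizer.from_pretrained("01-ai/Yi-6B", trust_remote_code=True)

with open("long_document.txt", encoding="utf-8") as f:
    text = f.read()

n_tokens = len(tokenizer.encode(text))
verdict = "fits in" if n_tokens <= CONTEXT_WINDOW else "exceeds"
print(f"{len(text)} characters -> {n_tokens} tokens ({verdict} the window)")
```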


How was this achieved? We understand that the 01.AI technical team implemented a series of optimizations, including computation-communication overlap, sequence parallelism, and communication compression. These enhancements deliver a nearly 100-fold capability improvement in large-scale model training.
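
The article does not detail 01.AI's implementation, but computation-communication overlap generically means launching gradient synchronization asynchronously so that network transfer proceeds while the backward pass is still computing. A schematic PyTorch sketch of the idea:

```python
# Schematic sketch of computation-communication overlap, not 01.AI's code.
# An async all-reduce starts the moment each parameter's gradient is ready,
# so communication overlaps with the rest of the backward computation.
# Production systems (e.g. PyTorch DDP) additionally bucket gradients.
import torch
import torch.distributed as dist  # assumes init_process_group was called

def attach_overlap_hooks(model: torch.nn.Module) -> list:
    handles = []
    def hook(grad):
        # fires during backward as soon as this gradient is computed;
        # async_op=True returns immediately instead of blocking
        handles.append(dist.all_reduce(grad, async_op=True))
        return grad
    for p in model.parameters():
        p.register_hook(hook)
    return handles

# Per training step (register hooks once; clear handles each step):
#   handles.clear()
#   loss.backward()              # computation and communication overlap here
#   for h in handles: h.wait()   # all grads synced before optimizer.step()
```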

It is worth noting that 01.AI is also the first large-model company to open-source an ultra-long context window, letting developers use it directly.

Open-sourcing Yi-34B's 200K context window directly not only supplies richer semantic information, it also enables understanding of PDF documents of more than 1,000 pages, so many knowledge-base scenarios that previously depended on external vector databases can instead be handled by the context window alone. The open-source release likewise gives developers who want to fine-tune within a longer context window more room to work.
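
In pseudocode terms, the contrast is between retrieving top-k chunks from an external vector store and simply placing the entire document in the prompt. A sketch (embed, vector_db, and generate are hypothetical placeholders, not a real API):

```python
# Contrast between a retrieval pipeline and a long-context prompt.
# All helper names here are hypothetical placeholders.

# With a small context window, only retrieved excerpts fit:
#   chunks = split(document)                         # chunk the document
#   vector_db.add(embed(chunk) for chunk in chunks)  # index externally
#   context = vector_db.top_k(embed(question), k=5)  # retrieve excerpts
#   answer = generate(context + question)

# With a 200K-token window, the whole document can go in directly:
def ask_long_context(generate, document: str, question: str) -> str:
    prompt = f"{document}\n\nBased on the document above, answer:\n{question}"
    return generate(prompt)  # no external vector database involved
```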

A distinctive, scientific approach to model training that cuts training cost by 40%

Yi-34B owes its strength to two key factors: the AI Infra team and a self-developed large-scale training platform.

Kai-Fu Lee explained that 01.AI has built an internal AI Infra (AI Infrastructure) team, chiefly responsible for training and deploying large models and for providing the underlying technical facilities, including servers, operating systems, storage systems, network infrastructure, and cloud computing platforms. These have become the critical "guarantee technologies" behind the training of the Yi series models.

With AI Infra's strong support, the 01.AI team has achieved training results beyond the industry level: the measured training cost of the Yi-34B model fell by 40%, and the actual training completion time differed from the predicted time by less than one hour. Further simulations indicate that at the 100-billion-parameter scale, training cost could be cut by as much as 50%.

At the same time, 01.AI has completed the methodological shift from "crude alchemy" to "scientific training".

After several months of modeling and experiments, 01.AI developed a "scaled training experiment platform" to guide model design and optimization. Data-mixture ratios, hyperparameter searches, and model-structure experiments can all be run on a small-scale experimental platform, and the prediction error at each checkpoint of the 34B model is controlled within 0.5%. This stronger predictive ability greatly reduces the resources needed for comparison experiments and cuts the compute wasted on failed training runs.
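
Predicting a large run from small-scale experiments is usually done by fitting a scaling law, for example loss as a power law in training compute, on small models and extrapolating. An illustrative fit with synthetic numbers (not 01.AI's data or method):

```python
# Illustrative scaling-law extrapolation: fit loss = a * C^(-b) + c on
# small runs, then predict a much larger run. The sample points below
# are synthetic, purely for illustration.
import numpy as np
from scipy.optimize import curve_fit

def power_law(compute, a, b, c):
    return a * compute ** (-b) + c

# (training compute in PF-days, final loss) from hypothetical small runs
compute = np.array([1.0, 4.0, 16.0, 64.0])
loss = np.array([3.10, 2.72, 2.43, 2.21])

params, _ = curve_fit(power_law, compute, loss, p0=(1.5, 0.2, 1.5))
print("predicted loss at 1000 PF-days:", power_law(1000.0, *params))
```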

Together, the data-processing pipeline and the training capabilities built for large-scale prediction have turned the once-"alchemical" business of large-model training into something meticulous and scientific. They guarantee the high performance of the newly released Yi-34B and Yi-6B models, reduce the time and cost of training larger models in the future, and give the company the ability to scale up model size several times faster than the industry.

Finally, Kai-Fu Lee also announced that with the pre-training of Yi-34B complete, training of the next model at the 100-billion-parameter scale has already begun.
We expect to see more follow-up Yi models unveiled in the coming months.


Statement:
This article is reproduced from jiqizhixin.com. If there is any infringement, please contact admin@php.cn for deletion.