With hundreds of billions of parameters, Alibaba Cloud Tongyi Qianwen has evolved to 2.0: performance exceeding GPT-3.5 and accelerating to catch up with GPT-4
Alibaba Cloud officially released Tongyi Qianwen 2.0, a large model with hundreds of billions of parameters, on October 31. Across 10 authoritative benchmark evaluations, the comprehensive performance of Tongyi Qianwen 2.0 exceeds GPT-3.5 and is quickly catching up with GPT-4. On the same day, the Tongyi Qianwen APP was launched in major mobile application stores, so anyone can experience the capabilities of the latest model directly through the APP.
In the past six months, Tongyi Qianwen has made a huge leap in performance. Compared with version 1.0, released in April, Tongyi Qianwen 2.0 has significantly improved its capabilities in complex instruction understanding, literary creation, general mathematics, knowledge memory, and hallucination resistance. Its comprehensive performance now exceeds GPT-3.5 and is accelerating to catch up with GPT-4.
On 10 mainstream benchmark evaluation sets, including MMLU, C-Eval, GSM8K, HumanEval, and MATH, Tongyi Qianwen 2.0's overall scores surpassed Meta's Llama-2-70B. Against OpenAI's GPT-3.5 the record was nine wins and one loss; against GPT-4 it was four wins and six losses, further narrowing the gap with GPT-4.
Chinese and English understanding are the basic skills of large language models. On English tasks, Tongyi Qianwen 2.0 scored 82.5 on the MMLU benchmark, second only to GPT-4; by significantly increasing the number of parameters, the model can better understand and process complex language structures and concepts. On Chinese tasks, Tongyi Qianwen 2.0 achieved the highest score on the C-Eval benchmark by a clear margin, because the model learned from more Chinese corpus during training, further strengthening its Chinese understanding and expression capabilities.
Tongyi Qianwen 2.0 has also made significant progress in areas such as mathematical reasoning and code understanding. On the GSM8K reasoning benchmark, it ranked second, demonstrating strong calculation and logical reasoning capabilities. On HumanEval, which mainly measures a model's ability to understand and execute code snippets, its score closely followed GPT-4 and GPT-3.5; this capability is the basis for applying large models to scenarios such as programming assistance and automatic code repair.
Tongyi Qianwen is also more mature and easier to use. Tongyi Qianwen 2.0 has been technically optimized for instruction following, tool use, fine-grained creation, and more, so that it can be better integrated into downstream application scenarios. The official website of the Tongyi large model has launched multimodal and plug-in functions, supporting specialized tasks such as image input and document parsing.
At the same time, eight industry-specific models trained on the Tongyi large model were launched: Tongyi Lingma (intelligent coding assistant), Tongyi Zhiwen (AI reading assistant), Tongyi Tingwu (AI assistant for work and study), Tongyi Stardust (personalized character creation platform), Tongyi Dianjin (intelligent investment research assistant), Tongyi Xiaomi (intelligent customer service), Tongyi Renxin (personal health assistant), and Tongyi Farui (AI legal advisor). These eight industry models target the most popular vertical scenarios and are specially trained with domain data. Users can try the models directly on the official website, and developers can integrate model capabilities into their own large model applications and services through web page embedding, API/SDK calls, and other methods.
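For developers integrating via API, a minimal sketch of calling a hosted Qwen model through Alibaba Cloud's DashScope Python SDK might look like the following. The package name `dashscope`, the `qwen-turbo` model identifier, and the `Generation.call` interface are assumptions based on the publicly documented SDK and are not taken from this article; exact names and parameters may differ for the 2.0 release.

```python
# A minimal sketch of calling Tongyi Qianwen through the DashScope Python SDK.
# Assumptions: the `dashscope` package is installed (pip install dashscope) and
# a DASHSCOPE_API_KEY environment variable holds a valid API key; the model
# name "qwen-turbo" is one hosted Qwen variant and may differ for version 2.0.
import os

import dashscope
from dashscope import Generation

dashscope.api_key = os.environ["DASHSCOPE_API_KEY"]  # assumed credential setup

response = Generation.call(
    model="qwen-turbo",                 # hosted Qwen model identifier (assumption)
    prompt="用一句话介绍通义千问 2.0",     # "Introduce Tongyi Qianwen 2.0 in one sentence"
)

if response.status_code == 200:
    print(response.output.text)         # generated text
else:
    print("Request failed:", response.code, response.message)
```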
As of October, Alibaba Cloud has carried out in-depth cooperation with leading partners in more than 60 industries to promote the practical application of Tongyi Qianwen in fields such as office work, culture and tourism, electric power, government affairs, medical insurance, transportation, manufacturing, finance, and software development.
Zhou Jingren said that Alibaba Cloud plans to open source the 72B version of Tongyi Qianwen in the near future. Alibaba Cloud has previously open sourced the 7B and 14B versions of the model, whose cumulative downloads have exceeded 1 million. Alibaba Cloud will continue to support developers in various industries in using the Tongyi Qianwen open source models to build their own models and applications.
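For the open-sourced checkpoints, a minimal sketch of loading the 7B chat model with Hugging Face transformers is shown below. The repository id `Qwen/Qwen-7B-Chat` and the `chat()` convenience method are assumptions based on the published open-source releases, not details stated in this article, and the 14B or future 72B repositories may follow a similar pattern.

```python
# A minimal sketch of loading the open-sourced Qwen-7B-Chat checkpoint with
# Hugging Face transformers. Assumptions: the Hub repo id "Qwen/Qwen-7B-Chat",
# trust_remote_code for the custom chat() helper, and enough GPU memory.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen-7B-Chat"  # swap in the 14B (or a future 72B) repo as needed

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",          # place layers across available GPUs/CPU
    trust_remote_code=True,     # Qwen ships custom modeling and chat code
).eval()

# The Qwen chat checkpoints expose a chat() convenience method via remote code.
response, history = model.chat(tokenizer, "你好，通义千问", history=None)
print(response)
```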