Zhipu AI launches the third-generation large base model ChatGLM3 with a comprehensive breakthrough in performance
On October 27, 2023, Zhipu AI launched its fully self-developed third-generation base model ChatGLM3 and a related series of products at the 2023 China Computer Conference (CNCC), another major step after its earlier dialogue models ChatGLM and ChatGLM2. ChatGLM3 adopts an original multi-stage enhanced pre-training method that makes training more thorough. Evaluations across 44 public Chinese and English datasets show that ChatGLM3 ranks first among domestic models of the same size. Zhang Peng, CEO of Zhipu AI, introduced the new products on site and demonstrated the latest features in real time.
ChatGLM3: new technology upgrades, higher performance, lower cost
With richer training data and a better training scheme, ChatGLM3 delivers markedly stronger performance. Compared with ChatGLM2, its MMLU score improves by 36%, CEval by 33%, GSM8K by 179%, and BBH by 126%.
At the same time, ChatGLM3 targets GPT-4V and introduces several iteratively upgraded capabilities:
§ CogVLM, a multi-modal understanding module for image recognition and semantics, which has achieved SOTA results on more than 10 international standard image-text evaluation datasets;
§ Code Interpreter, a code-enhancement module that generates and executes code according to user needs, automatically completing complex tasks such as data analysis and file processing;
§ WebGLM, a web-search enhancement that can automatically find relevant information on the Internet based on a question and cite relevant literature or article links in its answer.
The semantic and logical capabilities of ChatGLM3 have also been greatly enhanced.
ChatGLM3 also integrates self-developed AgentTuning technology, activating the model's agent capabilities; in intelligent planning and execution in particular, it improves by 1000% over ChatGLM2. It also enables domestic large models to natively support complex scenarios such as tool calling, code execution, games, database operations, knowledge-graph search and reasoning, and operating-system interaction.
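The article does not show what native tool calling looks like in practice, so here is a minimal, hedged sketch of the surrounding plumbing an agent-enabled model typically needs: a tool description the model can read, and a dispatcher that executes the structured call the model emits. The tool name, schema shape, and dispatcher are illustrative assumptions, not the official ChatGLM3 API.

```python
# Hedged sketch of tool-calling plumbing around an agent-enabled model.
# The tool name, schema, and dispatcher are illustrative, not ChatGLM3's API.
import json

# A tool description of the kind an agent-enabled model consumes.
TOOLS = [
    {
        "name": "query_weather",  # hypothetical tool name
        "description": "Look up the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    }
]

def dispatch(tool_call: dict) -> str:
    """Execute the tool the model asked for; return an observation string."""
    if tool_call["name"] == "query_weather":
        city = tool_call["arguments"]["city"]
        return json.dumps({"city": city, "weather": "sunny"})  # stubbed result
    raise ValueError(f"unknown tool: {tool_call['name']}")

# Simulate the model emitting a structured tool call.
call = {"name": "query_weather", "arguments": {"city": "Beijing"}}
observation = dispatch(call)
print(observation)
```

In a real agent loop, the observation string would be fed back to the model so it can compose its final answer.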
In addition, ChatGLM3 this time launches the on-device models ChatGLM3-1.5B and ChatGLM3-3B, which can be deployed on mobile phones. They support a variety of phone and in-vehicle platforms, including vivo, Xiaomi, and Samsung, and even support inference on mobile CPU chips, with speeds up to 20 tokens/s. In terms of accuracy, the 1.5B and 3B models perform close to ChatGLM2-6B on public benchmarks.
Based on the latest efficient dynamic-inference and memory-optimization technology, ChatGLM3's inference framework outperforms the best current open-source implementations under the same hardware and model conditions, including vLLM from UC Berkeley and the latest version of Hugging Face's TGI: inference speed increases by 2-3x, and inference cost is halved, to as little as 0.5 fen (RMB) per thousand tokens.
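To make the pricing claim concrete, here is a back-of-the-envelope check, assuming "0.5 fen per thousand tokens" means RMB 0.005 per 1,000 tokens (an interpretation of the translated figure, not an official price sheet):

```python
# Back-of-the-envelope inference-cost check, assuming the quoted
# "0.5 fen per thousand tokens" means RMB 0.005 per 1,000 tokens.
PRICE_PER_1K_TOKENS_RMB = 0.005  # 0.5 fen

def cost_rmb(tokens: int) -> float:
    """Cost in RMB for a given number of generated tokens."""
    return tokens / 1000 * PRICE_PER_1K_TOKENS_RMB

# A 1-million-token workload would cost about 5 RMB at this rate.
print(round(cost_rmb(1_000_000), 2))

# A 2-3x speedup means the same job finishes in 1/2 to 1/3 the time.
baseline_seconds = 300  # hypothetical baseline serving time
print(baseline_seconds / 2, baseline_seconds / 3)
```

At this rate, even heavy workloads stay cheap, which is the point of the "lowest cost" claim.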
A new generation of "Zhipu Qingyan" launches, with the first code interaction capability in China
Empowered by the newly upgraded ChatGLM3, the generative AI assistant Zhipu Qingyan (https://chatglm.cn/main/code) has become the first domestic large-model product with code interaction (Code Interpreter) capability.
The "code" function currently supports image processing, mathematical calculations, data analysis and other usage scenarios. The following are:
§ Processing data to generate charts
§ Writing code to draw graphics
§ Uploading SQL code for analysis
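As an illustration of the first scenario, this is the kind of code a Code Interpreter feature generates and executes for "process data to generate charts": aggregating tabular data into chart-ready totals. The data and field names here are made up for the example.

```python
# Illustrative example of code-interpreter output for a charting request:
# aggregate (hypothetical) monthly sales data into chart-ready totals.
from collections import defaultdict

rows = [  # hypothetical uploaded data: (month, region, sales)
    ("Jan", "North", 120), ("Jan", "South", 80),
    ("Feb", "North", 150), ("Feb", "South", 95),
]

totals = defaultdict(int)
for month, _region, sales in rows:
    totals[month] += sales

# The per-month totals would feed a bar chart; printed here instead.
print(dict(totals))  # {'Jan': 200, 'Feb': 245}
```

In the product, the assistant would run code like this in a sandbox and render the resulting chart back into the conversation.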
With the addition of WebGLM's capabilities, Zhipu Qingyan also gains search enhancement: it can help users find online literature or article links relevant to a question and directly provide answers.
The previously released CogVLM model improves Zhipu Qingyan's Chinese image-text understanding, achieving image-understanding capability close to GPT-4V. It can answer various types of visual questions and complete complex object detection and labeling, enabling automatic data annotation.
Since the beginning of 2022, the GLM series of models launched by Zhipu AI has supported large-scale pre-training and inference on the Ascend, Sunway supercomputer, and Haiguang DCU architectures. To date, Zhipu AI's products have supported more than 10 domestic hardware ecosystems, including Ascend, Sunway supercomputer, Haiguang DCU, Haifeike, Muxixiyun, Computing Technology, Tianshu Intelligent Core, Cambricon, Moore Threads, Baidu Kunlunxin, Lingxi Technology, and Great Wall Chaoyun. Through joint innovation with domestic chip companies and continuous performance optimization, this will help domestic large models and domestic chips reach the international stage as soon as possible.
With ChatGLM3 and its related series of products, Zhipu AI has comprehensively improved model performance, built a more open ecosystem of open-source models for the industry, and further lowered the barrier for ordinary users to use AIGC products. AI is leading us into a new era, and large models will accelerate its arrival.