Tencent reveals the latest large model training method that saves 50% of computing power costs
In the context of a shortage of computing power, how to improve the efficiency of large model training and inference and reduce costs has become the focus of the industry.
On November 23, Tencent disclosed that Angel, the self-developed machine learning framework behind Tencent's Hunyuan large model, has been upgraded again: large model training efficiency is now 2.6 times that of mainstream open-source frameworks, and training a model with hundreds of billions of parameters can save 50% of computing power costs. The upgraded Angel supports ultra-large-scale training at the 10,000-card level in a single task, further improving the performance and efficiency of Tencent Cloud's HCC dedicated computing cluster for large models.
At the same time, Angel provides a one-stop platform from model development to application implementation, allowing users to quickly call Tencent Hunyuan large model capabilities through API interfaces or fine-tuning and accelerating the construction of large model applications. More than 300 Tencent products and scenarios, such as Tencent Meeting, Tencent News, and Tencent Video, have been connected to Tencent Hunyuan for internal testing.
Currently, these capabilities have been opened to the outside world through Tencent Cloud. Based on the upgraded Angel machine learning framework, the Tencent Cloud TI platform can provide better training and inference acceleration, and supports customers in using their own data for one-stop training and fine-tuning to create exclusive intelligent applications based on Tencent's Hunyuan large model.
With the advent of the era of large models, model parameter counts have grown exponentially, reaching the trillion level. Large models are gradually developing from supporting a single modality and task to supporting multiple tasks across multiple modalities. Under this trend, training large models requires enormous computing power, far exceeding the processing speed of a single chip, and multi-card distributed training incurs huge communication losses. How to improve hardware utilization has become an important prerequisite for the development and practicality of domestic large model technology.
To train large models, Tencent developed the AngelPTM machine learning training framework, which accelerates and optimizes the entire process of pre-training, model fine-tuning, and reinforcement learning. AngelPTM adopts the latest FP8 mixed-precision training technology and combines deeply optimized 4D parallelism with the ZeROCache mechanism to optimize storage. It is compatible with a variety of domestic hardware and can train larger models with fewer resources and at faster speeds.
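Tencent has not published AngelPTM's internals, but the core idea behind mixed-precision training can be sketched in a few lines. The toy example below (a hypothetical illustration, not AngelPTM code) uses float16 compute with float32 master weights and loss scaling; real FP8 training follows the same pattern but needs hardware and library support such as NVIDIA's Transformer Engine.

```python
import numpy as np

def mixed_precision_step(master_w, x, y, lr=0.01, loss_scale=1024.0):
    """One SGD step for a toy linear model y_hat = w * x, keeping
    float32 master weights while running the forward/backward pass
    in float16 -- the basic pattern of mixed-precision training."""
    w16 = master_w.astype(np.float16)   # low-precision working copy
    x16 = x.astype(np.float16)
    y_hat = w16 * x16                   # forward pass in fp16
    err = y_hat - y.astype(np.float16)
    # Scale the gradient so small fp16 values do not underflow.
    grad16 = (2 * err * x16) * np.float16(loss_scale)
    # Unscale in fp32 and update the fp32 master weights.
    grad32 = grad16.astype(np.float32) / loss_scale
    return master_w - lr * grad32.mean()

w = np.float32(0.0)
x = np.array([1.0, 2.0, 3.0], dtype=np.float32)
y = np.array([2.0, 4.0, 6.0], dtype=np.float32)
for _ in range(200):
    w = mixed_precision_step(w, x, y)   # converges toward w = 2
```

Keeping the master weights in full precision prevents tiny updates from being lost to rounding, while loss scaling keeps small gradients representable in the low-precision format.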
In April 2023, Tencent Cloud released a new generation of HCC high-performance computing clusters for large models, with performance three times that of the previous generation. In addition to hardware upgrades, HCC also performs system-level optimizations on network protocols, communication strategies, AI frameworks, and model compilation, greatly reducing training, tuning, and computing power costs. AngelPTM has previously provided services through HCC; this upgrade of the Angel machine learning framework will further improve the performance of HCC's dedicated computing cluster for large models and help enterprises accelerate the practical application of large models.
To address the training challenges and rising inference costs caused by growing model parameter counts, Tencent's self-developed large model inference framework AngelHCF improves performance by expanding parallel capabilities and adopting multiple attention optimization strategies. The framework is also adapted to a variety of compression algorithms to improve throughput, achieving faster inference and lower costs while supporting large model inference services.
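The article does not detail AngelHCF's attention optimizations, but one widely used inference-time technique is the KV cache: during autoregressive decoding, each token's keys and values are computed once and appended to a cache instead of being recomputed for the whole sequence at every step. A minimal NumPy sketch of the idea (illustrative only, not AngelHCF code):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def attend(q, k_cache, v_cache):
    """Scaled dot-product attention of one query over all cached K/V."""
    d = q.shape[-1]
    scores = (k_cache @ q) / np.sqrt(d)   # one score per cached token
    weights = softmax(scores)
    return weights @ v_cache              # weighted sum of cached values

rng = np.random.default_rng(0)
d = 8
k_cache = np.empty((0, d))
v_cache = np.empty((0, d))
outputs = []
for step in range(4):                     # decode 4 tokens
    q = rng.standard_normal(d)
    k = rng.standard_normal(d)
    v = rng.standard_normal(d)
    # Append only the new token's K/V; older entries are reused as-is.
    k_cache = np.vstack([k_cache, k])
    v_cache = np.vstack([v_cache, v])
    outputs.append(attend(q, k_cache, v_cache))
```

Caching turns the per-step cost of rebuilding keys and values for the entire prefix into a single append, which is one reason decoding latency drops in optimized inference frameworks.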
Compared with mainstream frameworks in the industry, AngelHCF's inference speed is 1.3 times faster. In Wenshengtu, the text-to-image application of Tencent's Hunyuan large model, inference time was shortened from the original 10 seconds to 3-4 seconds. In addition, AngelHCF supports a variety of flexible large model compression and quantization strategies, as well as automatic compression.
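AngelHCF's specific compression and quantization strategies are not public. As a generic illustration of what model quantization does, the sketch below shows symmetric per-tensor int8 post-training quantization, one of the most common compression techniques; all names here are hypothetical.

```python
import numpy as np

def quantize_int8(w):
    """Symmetric per-tensor int8 quantization: store int8 weights plus
    one float scale, shrinking weight memory roughly 4x vs float32."""
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover an approximate float32 tensor from int8 weights."""
    return q.astype(np.float32) * scale

w = np.random.default_rng(1).standard_normal((64, 64)).astype(np.float32)
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)
max_err = np.abs(w - w_hat).max()   # bounded by half a quantization step
```

The trade-off is explicit: a 4x reduction in weight memory in exchange for a rounding error of at most half a quantization step per weight, which is why quantization raises throughput with only a small accuracy cost in many serving scenarios.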
As a practical large model, Tencent's Hunyuan large model has been oriented toward application scenarios since the beginning of its development, addressing the difficulties of implementing large models in practice. Tencent has many types of products and applications with a large amount of traffic, making it very challenging to actually put the model to use. Based on Angel, Tencent has built a one-stop platform for large model access and application development, covering data processing, fine-tuning, model evaluation, one-click deployment, and prompt optimization, making it possible to use large models "out of the box".
In terms of model access, the Tencent Hunyuan large model is available in sizes of more than 100 billion, 10 billion, and 1 billion parameters, fully adapting to the needs of various application scenarios. With simple fine-tuning, it can meet business needs while reducing the resource costs of model training and inference services, making it more cost-effective in common application scenarios such as Q&A and content classification.
At the application development level, more than 300 businesses and application scenarios within Tencent have been connected to the Tencent Hunyuan large model for internal testing, double the number from a month earlier, covering fields such as text summarization, abstract generation, creative writing, translation, and coding.
In September 2023, Tencent Hunyuan, a practical large model independently developed by Tencent, was officially unveiled and opened through Tencent Cloud. Tencent Hunyuan has a parameter scale of more than 100 billion, and its pre-training corpus contains more than 2 trillion tokens. It integrates Tencent's technology accumulation in pre-training algorithms, machine learning platforms, and underlying computing resources, and continues to iterate in applications to continuously improve its capabilities. At present, customers from industries such as retail, education, finance, medical care, media, transportation, and government affairs have accessed the Tencent Hunyuan large model through Tencent Cloud.