
Behind the birth of Tongyi Tingwu, the first shot in AI large-model applications

WBOY
2023-06-05 13:50

(Image source: Canva)

At the beginning of 2023, ChatGPT's surge in popularity quickly drew the industry's attention to generative AI applications, and competition over large AI models has intensified ever since.

As a major player, Alibaba Cloud first launched the Tongyi Qianwen large model at the Alibaba Cloud Summit on April 11. Then, on June 1, Alibaba Cloud announced new progress for the Tongyi large model: a new AI product focused on audio and video content, "Tongyi Tingwu," was officially launched, becoming the first large-model application product in China open to public beta. This means Alibaba Cloud's large language model has taken another big step toward front-end applications. At the same time, the release of this AI large-model application is equivalent to dropping a bombshell into the current "large-model melee."

A "thousand-model war" is about to break out

In recent months, as major Silicon Valley companies such as Microsoft, Google, and Amazon have announced progress on large models and AIGC, and kicked off the AI competition with a series of applications such as AI search engines and AI office software, domestic Internet and cloud companies have charged in one after another. Not only the big manufacturers, but also many start-ups, VC/PE institutions, and giants from various industries have poured into the AI large-model track, each trying to get a piece of the action. According to incomplete statistics, no fewer than 50 large companies have announced plans to launch large AI models, and there are countless other participants.

In terms of structure, Internet technology companies represented by "BATH" (Baidu, Alibaba, Tencent, Huawei) have firmly secured the first echelon of the industry by virtue of comprehensive strengths in scenarios, computing power, and full-stack technical capabilities. Important industry companies such as SenseTime, China Telecom, JD.com, and 360 have relied on their influence in related fields to occupy the second echelon. Close behind are well-known entrepreneurs such as Meituan co-founder Wang Huiwen and former Sogou CEO Wang Xiaochuan, who have expertise, institutional backing, and relevant backgrounds; however, because their ventures are still at an early stage, they currently sit in the third echelon of the large-model melee.

In terms of classification, each enterprise has its own positioning and division of labor around general large models and specialized large models. According to industry insiders, current domestic large models fall mainly into two categories: general large models benchmarked against GPT, pursued by base-layer companies such as Alibaba and Baidu; and vertical large models trained on top of open-source large models, pursued by companies in vertical industries such as finance, healthcare, and transportation. Because the former demands high technical capability and high cost, many industries have begun cooperating with general large-model companies to train vertical large models suited to their own fields and circumstances.

In terms of the industry chain, computing-power manufacturers, cloud service providers, and front-end application makers are all involved: the system is complete, tightly connected, and broad in scope. Participants currently include computing-power manufacturer NVIDIA, cloud service provider Alibaba Cloud, and front-end application makers such as Kingsoft Office, UFIDA, and CloudWalk Technology, all actively engaged in the R&D and scenario implementation of AI large models. In short, the melee over large AI models is rapidly heating up.

Tongyi Tingwu fires the first shot in large-model applications

AI large models can be divided into four layers: the application layer, model layer, framework layer, and chip layer. At present, most companies on the market are still at the model layer; some go deeper into research at the framework and chip layers, while front-end applications have so far been largely absent from the industry. As the industry's first large-model application product in public beta, Tongyi Tingwu's demonstration effect is therefore all the more obvious.

On the one hand, compared with underlying technology, technology close to the application front end offers more room for imagination. Looking back at the history of technological evolution, it is not hard to see that the core value of a technology lies in how widely it is applied and how well it solves user problems. Precisely because of this, the front-end applications that carry this responsibility often have the greater upside.

Take Tongyi Tingwu, launched by Alibaba Cloud on June 1, as an example. Connected to Alibaba's Tongyi large model, it is no longer just a simple tool for audio and video transcription; it has become an efficient AI assistant for audio and video scenarios. It provides office services such as automatically taking notes, organizing interviews, and extracting PPT content, and it can convert audio and video into text and graphics, summarize content chapter by chapter, and distill the main points of a full text, with "human-like" retrieval and classification abilities.

In addition, it has many hidden-gem functions in niche scenarios. For example, with its Chrome plug-in enabled, foreign-language learners and hearing-impaired users can watch unsubtitled videos anytime, anywhere via a bilingual floating subtitle bar. Tingwu can also act as a "meeting stand-in" for professionals: while a meeting is muted, the AI can record it and organize the key points; transcription results can be downloaded as subtitle files to aid video post-production by new-media practitioners; and Tingwu's organized Q&A review lets reporters, analysts, lawyers, HR staff, and other groups compile interviews more efficiently. In short, in its ability to solve problems in specific scenarios, it has surpassed existing audio and video applications and raised the ceiling of user experience, and it is bound to cause quite a stir in the industry.
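The assistant workflow described above — transcribe, split into chapters, summarize each chapter, and extract key points — can be sketched conceptually as follows. This is purely an illustrative sketch: the function names and the naive sentence-based "summarizer" are placeholders invented for this example, not Tongyi Tingwu's real API (a real system would call a large language model at the summarization step).

```python
# Hypothetical sketch of an AI meeting-assistant pipeline like the one the
# article describes. All names here are illustrative placeholders.
from dataclasses import dataclass


@dataclass
class MeetingNotes:
    transcript: str
    chapter_summaries: list[str]
    key_points: list[str]


def summarize(text: str, max_points: int = 3) -> list[str]:
    """Placeholder summarizer: a real system would call an LLM here.
    This stub just returns the first few sentences."""
    sentences = [s.strip() for s in text.split(".") if s.strip()]
    return sentences[:max_points]


def process_meeting(transcript: str) -> MeetingNotes:
    # Split the transcript into rough "chapters" (blank-line separated here),
    # then summarize chapter by chapter and extract overall key points,
    # mirroring the features described above.
    chapters = [c for c in transcript.split("\n\n") if c.strip()]
    return MeetingNotes(
        transcript=transcript,
        chapter_summaries=[" ".join(summarize(c, 1)) for c in chapters],
        key_points=summarize(transcript),
    )
```

The point of the structure is that transcription, chaptering, and summarization are separable stages, so the same transcript can feed subtitle export, Q&A review, or note-taking without re-processing the audio.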

On the other hand, from Alibaba Cloud's own perspective, the Tongyi Qianwen large model had only just launched, yet the company was able to quickly release a technical application built on it. This shows that Alibaba Cloud's AI large model is indeed relatively mature and already capable of spawning AI applications.

Going from the model layer to the application layer, from an AI large model to the birth of a large-model application, looks simple but is in fact not easy to achieve. Generally speaking, vertical applications are built on the technical base of a general large model; without that base they usually cannot be implemented at all. This in turn requires the underlying general large model to be mature enough, otherwise it is very difficult to launch applications that are genuinely usable and exceed the capabilities of existing products. In the case of Tongyi Tingwu, such technical strength is not something every player in the industry possesses.

Full-system AI infrastructure becomes the key to victory

Judging from the hundreds of billions of parameters that training a large AI model requires, the difficulty and complexity of advancing it may far exceed what outsiders imagine. In the long run, only companies with full-stack AI large-model technical capabilities and infrastructure capabilities will be able to go further.

First, because generative AI is developing far faster than outside expectations, progress in any single link has only a limited effect on overall large-model training. According to OpenAI's calculations, since 2012 global demand for AI-model training compute has doubled every 3-4 months, an annual growth rate of up to 10x. By Moore's law, however, chip performance doubles only every 18-24 months, which means chip performance is far from keeping up with the requirements of large AI models. In related fields, CPU-based computing systems struggle to meet the high-bandwidth, low-latency network transmission requirements of large-model training. These problems cannot be solved in the short term simply by "piling up computing power," and doing so may not be economical. Coping with this change requires multi-level, whole-system support spanning algorithms, computing power, frameworks, and more.
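The gap in the paragraph above follows directly from the two quoted doubling periods. A quick calculation, assuming the figures cited in the text (demand doubling every 3-4 months, chip performance every 18-24 months), makes the mismatch concrete:

```python
# Annualized growth implied by the doubling periods quoted in the text.
# The doubling periods are the article's figures; midpoints are assumed.
DEMAND_DOUBLING_MONTHS = 3.5   # OpenAI estimate: compute demand doubles every 3-4 months
MOORE_DOUBLING_MONTHS = 21.0   # Moore's law: performance doubles every 18-24 months


def annual_growth(doubling_months: float) -> float:
    """Growth factor over 12 months, given a doubling period in months."""
    return 2 ** (12 / doubling_months)


demand = annual_growth(DEMAND_DOUBLING_MONTHS)  # roughly 10x per year
supply = annual_growth(MOORE_DOUBLING_MONTHS)   # roughly 1.5x per year
print(f"demand: {demand:.1f}x/year, chip performance: {supply:.1f}x/year")
```

At these rates demand grows roughly an order of magnitude per year while single-chip performance grows about 50%, which is why the text argues the shortfall must be closed at the system level (algorithms, frameworks, cluster networking) rather than by chips alone.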

Second, because developing a general-purpose large model requires vast computing power, costly inference and training, and huge volumes of data, the threshold is inherently very high. Companies lacking full-stack large-model development capabilities, scenario-implementation experience, and ecosystem openness will find it difficult to keep up the pace of change and will easily be eliminated. According to industry analysts, to create a successful general large model that can be commercialized externally, a manufacturer needs full-stack large-model training and R&D capabilities, experience implementing business scenarios, AI safety-governance measures, and ecosystem openness, among other core advantages, and ordinary enterprises can rarely possess all of these.

As the No. 1 cloud computing service provider in Asia and the third largest in the world, Alibaba Cloud has the strongest computing-power support system in China. For example, its Feitian cloud operating system can reach a single-cluster scale of 100,000 servers and handle files at the hundred-billion scale; its Feitian intelligent computing platform achieves 90% parallel efficiency across 1,000 GPUs; and its self-developed network architecture provides congestion-free, high-performance cluster communication for AI clusters at the 10,000-GPU scale. Alibaba Cloud's own deep-learning platform, PAI, can raise compute-resource utilization by more than 3x, AI training efficiency by 11x, and inference efficiency by 6x. In addition, Alibaba Cloud has taken the lead in establishing China's largest AI model community, ModelScope, to lower the cost of large-model development and promote AI inclusiveness. In algorithms, Alibaba sits in the country's first echelon across multiple technical dimensions, including language and multimodal capabilities, ultra-large models, and general unified models. This is the core reason the Tongyi large model could quickly "break out of the circle."

Third, in terms of business possibilities, companies with whole-system AI infrastructure capabilities will carry greater commercial value once MaaS (Model as a Service) arrives, and will have more "room for maneuver" in market competition. Take Alibaba Cloud as an example: later on, it can not only earn platform service fees by providing general large-model services, but also rent out computing power and training platforms. With relatively more ways to monetize, it can flexibly adjust product pricing according to market conditions to cope with operational challenges.

The industry ushers in the era of AI inclusiveness

With the birth of AI large-model applications, a new era characterized by deep AI inclusiveness is gradually beginning. AI embedding itself deeply into the real economy will become an irreversible industry trend.

On the one hand, the high threshold of general large models and the wide-ranging, differentiated needs of vertical fields mean that exclusive large models and industrial applications built on general large models will become the mainstream direction, accelerating AI's entry into thousands of industries. As noted above, the high threshold of general large models means that only a few companies at home and abroad can build them. And as AI models grow larger, the AI industry is moving from light, "handicraft-workshop" production toward intensive production, which requires high-performance, low-cost systematic infrastructure.

Not only do many small and medium-sized enterprises lack this capability; even for leading companies in various industries, optimizing large-model training from 0 to 1 is not economical in itself. Every industry needs AI infrastructure that is sufficiently low-cost, and existing manufacturers have no need to keep entering this field and "reinventing the wheel." By contrast, training vertical large models is relatively cheap, and companies with rich data scenarios in professional fields have better conditions and better data quality for building them, so the products they launch fit their vertical industries better. Therefore, "GPTs" for various vertical industries may become the mainstream large-model applications of the future, driving AI's rapid penetration into industry.

On the other hand, the short-term bottleneck in developing large AI models is computing power; in the long term, it is data. High-quality front-end applications can therefore help enterprises accumulate sufficient data assets faster, enhance their long-term competitiveness, and accelerate inclusive industrial application. At present, the rapid iteration and evolution of large models forces all players to keep accumulating computing-power resources and optimizing configurations across chips, cloud services, and more, to guarantee the compute that large-model training requires. Over a longer horizon, however, the algorithms for training large AI models are still being continuously optimized and adjusted; with future algorithmic breakthroughs, computing power may no longer be the bottleneck, and high-quality data will become the scarce resource that attracts the most attention.

As the industry's first application based on an AI large model, the launch of Tongyi Tingwu will help Alibaba accelerate its accumulation of high-quality data resources, speed up the process of industrial inclusiveness, and lay a solid foundation for longer-term development.


Statement: This article is reproduced from sohu.com. If there is any infringement, please contact admin@php.cn for deletion.