
Can vertical large model competition break through data “stuck points”?

WBOY
2023-05-27

Large AI models have taken off around the world, and they have sparked fresh enthusiasm for artificial intelligence applications across Chinese industries.


As major players enter the race, the market is splitting into two paths: general-purpose and vertical models. Differences between the two in parameter scale, application scenarios, and business models are gradually becoming apparent.

1. Enterprises are pouring into the vertical large model track

General-purpose AI large models such as ChatGPT can handle natural language across a wide range of fields and scenarios, but they require enormous computing resources and data volumes, and they have become flagship projects for major manufacturers at home and abroad.

These companies typically have strong technical teams and financial backing, as well as their own scenario and traffic advantages. Baidu, Alibaba, Tencent, ByteDance, Huawei, and others have deployed their own general-purpose large models in search, social networking, e-commerce, office software, and other areas.

By comparison, it is difficult for startups and companies in niche fields to gain a first-mover or differentiation advantage in this kind of competition.

A vertical AI large model, by contrast, focuses on a specific field or scenario, such as healthcare, finance, or education. It can draw on industry data and knowledge to provide more accurate and efficient solutions that better meet the needs and expectations of users in that domain.

At the same time, it can take an open-source or closed-source general-purpose model as its base and then apply instruction tuning to adapt it to its target field or scenario.

As a result, its parameter scale is roughly an order of magnitude smaller than that of a general-purpose model. If the data flywheel and model training are combined well, a vertical model can even outperform a general-purpose model in specific fields at a lower cost.
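To make this concrete, the sketch below shows what "taking an open-source base model and instruction-tuning it for a vertical domain" often looks like in practice, assuming a Hugging Face-style Python stack (transformers, peft, datasets). The base-model name and the instruction file are hypothetical placeholders, not anything mentioned in this article.

```python
# Minimal sketch: adapt an open-source base model to a vertical domain via
# parameter-efficient instruction tuning. Assumes transformers + peft + datasets;
# the base-model name and the JSONL instruction file are placeholders.
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling,
                          Trainer, TrainingArguments)
from peft import LoraConfig, get_peft_model
from datasets import load_dataset

BASE_MODEL = "some-open-source-base-model"          # placeholder identifier

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
if tokenizer.pad_token is None:                     # many causal LMs ship without one
    tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(BASE_MODEL)

# LoRA trains small adapter matrices instead of every weight, which keeps the
# tuning cost far below full pre-training.
model = get_peft_model(model, LoraConfig(r=16, lora_alpha=32, task_type="CAUSAL_LM"))

# Domain instruction data: one {"instruction": ..., "response": ...} object per line.
data = load_dataset("json", data_files="vertical_domain_instructions.jsonl")["train"]

def to_tokens(example):
    text = f"Instruction: {example['instruction']}\nResponse: {example['response']}"
    return tokenizer(text, truncation=True, max_length=1024)

tokenized = data.map(to_tokens, remove_columns=data.column_names)

Trainer(
    model=model,
    args=TrainingArguments(output_dir="vertical-model", num_train_epochs=3,
                           per_device_train_batch_size=4, learning_rate=2e-4),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False),
).train()
```

Parameter-efficient methods such as LoRA are one reason the tuning cost of a vertical model can stay well below the cost of pre-training a general-purpose model from scratch.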

In this context, more and more companies have joined the vertical large model track.

On May 18, Sangfor released what it called China's first self-developed large security model, the first application of GPT technology in the security field;

On May 5, Xueersi announced that it was developing a self-researched large model for mathematics, named MathGPT, aimed at mathematics enthusiasts and scientific research institutions around the world.

Clear commercialization scenarios and lower computing-power costs have opened the door for all kinds of enterprises to enter the vertical large-model space.

2. The test of the vertical large model

The advantage of the vertical large model lies precisely in not being that "large": it demands less computing power and poses lower algorithmic difficulty. But that does not mean anyone can build one.

As is well known, the three elements of large AI models, computing power, algorithms, and data, are the "feed" that nourishes AI.

Let’s talk about computing power first.

Large models are "large" because of their huge parameter counts and data volumes. The training compute required by a large AI model is roughly proportional to the product of the number of parameters and the amount of data.
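As a back-of-the-envelope illustration of that relationship, the sketch below uses the common rule of thumb from the scaling-laws literature that training compute is roughly 6 × parameters × training tokens. The vertical-model figures are illustrative assumptions, not numbers from this article.

```python
# Back-of-the-envelope training-compute estimate. The 6*N*D rule of thumb comes
# from the scaling-laws literature; the vertical-model figures below are
# illustrative assumptions only.
def training_flops(n_params: float, n_tokens: float) -> float:
    """Approximate training compute in floating-point operations."""
    return 6 * n_params * n_tokens

general  = training_flops(n_params=175e9, n_tokens=300e9)  # GPT-3-scale run
vertical = training_flops(n_params=10e9,  n_tokens=50e9)   # hypothetical vertical model

print(f"general-purpose: ~{general:.2e} FLOPs")            # ~3.2e23
print(f"vertical:        ~{vertical:.2e} FLOPs")           # ~3.0e21, roughly 100x less
```

On these assumptions, a 10-billion-parameter vertical model trained on a fraction of the data needs about two orders of magnitude less compute than a GPT-3-scale run.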

Over the past five years, the parameter counts of large AI models have grown by roughly an order of magnitude almost every year. GPT-4, for example, is reported to have about 1.6 trillion parameters, roughly an order of magnitude more than GPT-3's 175 billion.

With the introduction of multimodal data such as images, audio, and video, the data volumes behind large models are also expanding rapidly. This means that anyone who wants to play in the large-model space must have substantial computing power.

For reference, in digital-human technology scenarios, the training and inference cost of a vertical large model can be an order of magnitude lower than that of an OpenAI model of the same parameter scale. Wang Sijie, strategy director of Qiyuan World, once put it this way: first build smaller vertical models (say, at the billions or tens-of-billions-of-parameters scale) so that the data flywheel and model training combine well; in some fields, such vertical models may prove more effective and cheaper than OpenAI's.

Even though the computing power requirements of vertical large models are far lower than those of general-purpose models, the investment in computing infrastructure will still keep some small companies out.

Next, let’s talk about algorithms.

Among the three elements, algorithms are the relatively easier part. Each company has its own algorithmic path to building large models, and there are many open-source projects to draw on, so this is where Chinese companies can most easily narrow or even close the gap.

Finally, let’s talk about data.

High-quality data is the key to AI training and tuning; sufficient and rich data is the foundation of large AI models.

OpenAI previously disclosed that, to make AI converse as smoothly as humans, developers fed GPT-3.5 up to 45 TB of text corpus, roughly the equivalent of 4.72 million sets of China's Four Great Classical Novels. The corpus came from a wide range of sources, including Wikipedia, online articles, books and journals, and even the open-source code platform GitHub.
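As a rough sanity check on that comparison (the size of one set of the Four Great Classical Novels is an estimate, on the order of three million Chinese characters at roughly three bytes each in UTF-8):

```python
# Rough sanity check of "45 TB ≈ 4.72 million sets of the Four Great Classical
# Novels". The per-set character count is an estimate, not a figure from the article.
corpus_bytes = 45e12                     # 45 TB of text
sets = 4.72e6
print(f"{corpus_bytes / sets / 1e6:.1f} MB per set")        # ~9.5 MB

chars_per_set = 3.2e6                    # ~3.2 million Chinese characters (estimate)
bytes_per_char = 3                       # typical UTF-8 size of a Chinese character
print(f"estimated set size: {chars_per_set * bytes_per_char / 1e6:.1f} MB")  # ~9.6 MB
```

The two figures land in the same ballpark, so the comparison is at least internally consistent.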

But when the focus narrows to specific industry segments, data is not so easy to obtain.

Industrial Securities publicly stated that to train professional large-scale industry models, high-quality industry data and public data are crucial.

In China's domestic data market, according to official disclosures from the National Development and Reform Commission, government data resources account for more than three-quarters of the country's data resources, yet the scale of open data is less than 10% of that of the United States, and the portion usable by individuals and enterprises is less than 7% of the US level.

Industry data, moreover, is core private-domain data: the larger the volume and the higher the quality, the more valuable it is.

A medical company with rich clinical and case data has what it takes to develop vertical large-model products for the medical industry. Likewise, project data in construction, user-profile data in finance, and vessel-position data in shipping are all key data sources that underpin vertical large models.

However, this private-domain data sits in the hands of the enterprises themselves, and for data security and compliance reasons, most institutions require on-premises deployment before they will even attempt large-model training. It is hard to imagine enterprises handing their core data to outsiders for training.

In addition, how data is labeled and annotated matters a great deal: classifying data into different tiers can improve efficiency, and highly accurate labeled data can further enhance a large model's professional performance.

At this stage, however, it is relatively expensive for vertical industries to obtain high-precision annotated data, and public databases contain little professional industry data, all of which raises the bar for building vertical large models.
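To make the tiered-labeling idea above concrete, here is a small hypothetical sketch; the record fields, tier names, and label values are invented for illustration and are not from any real dataset.

```python
# Hypothetical sketch of tiered domain data: keep only expert-verified records
# for high-precision fine-tuning, and route the rest to cheaper uses such as
# retrieval. Field names and tiers are invented for illustration.
from dataclasses import dataclass

@dataclass
class DomainRecord:
    text: str    # e.g. an anonymized case note or project report
    label: str   # the domain-specific annotation
    tier: str    # "expert_verified" | "model_labeled" | "raw"

corpus = [
    DomainRecord("...", "diagnosis_code_A", "expert_verified"),
    DomainRecord("...", "diagnosis_code_B", "model_labeled"),
    DomainRecord("...", "", "raw"),
]

finetune_set = [r for r in corpus if r.tier == "expert_verified"]
retrieval_set = [r for r in corpus if r.tier != "raw"]
print(len(finetune_set), "records kept for high-precision tuning")
```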

Generally speaking, for anyone building a vertical large model, data matters far more than computing power or algorithms.

Data has become the "stuck point" that enterprises must break through in order to succeed with vertical large models.

3. Be one step ahead with industry data

Vertical large models follow an application-first, scenario-first logic, and in China they place particular emphasis on value for the industry side.

On the one hand, amid China's current wave of intelligent transformation, there is broad market demand for digital innovation on the industrial side; on the other hand, in the to-B ecosystem, practice grounded in vertical applications is also conducive to forming a data flywheel together with the scenario.

The premise of all this is that the company launching the vertical large model has established technical barriers and a moat in its industry, the competitive advantage of "having what no one else has".

It seems that companies that have been deeply involved in vertical industries for many years may have a greater chance of winning.

These companies have accumulated deep expertise in data processing, large models, and knowledge graphs, giving them an edge in model optimization. They also understand to-B customer needs and deployment scenarios well, which helps ensure the credibility and reliability of vertical large-model products and meets enterprise requirements for security, controllability, and compliance.

Currently, some large vertical models have been tested in finance, education, medicine, marketing and other scenarios.

For example, Bloomberg drew on its rich financial data sources and an open-source GPT-style framework to train BloombergGPT, a finance-specific large model;

NetEase Youdao, targeting education scenarios, is launching its self-developed ChatGPT-like model "Ziyue";

Only a few weeks after the release of ChatGPT, Google announced Med-PaLM, a large medical language model designed specifically to answer healthcare-related questions.

As more companies join in, vertical large models will emerge across industries and market segments. The companies that truly understand a vertical field, keep optimizing their models with high-quality data, close the business loop, and build an industrial ecosystem will ultimately be the ones to stretch the value chain far enough.
