Large AI models are booming worldwide, and the wave has rekindled enthusiasm for artificial intelligence applications across Chinese industries.
As major vendors join the competition, the market is splitting into two paths, general-purpose and vertical, and the differences between them in parameter scale, application scenarios, and business models are becoming apparent.
1. Enterprises are pouring into the vertical large model track
General-purpose large models such as ChatGPT can handle natural language across many fields and scenarios, but they demand enormous computing resources and data. Building them has therefore become a flagship project for major manufacturers at home and abroad.
These companies typically have strong technical teams and financial backing, along with their own scenario and traffic advantages. Baidu, Alibaba, Tencent, ByteDance, Huawei and others have deployed their own general-purpose large models in search, social networking, e-commerce, office software and other fields.
By comparison, it is difficult for startups and companies in niche fields to gain a first-mover or differentiation advantage in that kind of competition.
A vertical large model, by contrast, focuses on a single field or scenario, such as healthcare, finance, or education. It can draw on industry data and knowledge to provide more accurate and efficient solutions that better meet the needs and expectations of users in that field.
It can also take an open-source or closed-source general-purpose large model as a base and apply instruction tuning to adapt it to the target field or scenario, as sketched below.
Its parameter scale can therefore be an order of magnitude smaller than that of a general-purpose model, and if the data flywheel and model training are combined well, it can even be more effective and cheaper than a general-purpose model in certain fields.
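As an illustration of this base-plus-instruction-tuning approach, the sketch below fine-tunes an open-source base checkpoint on domain instruction data with LoRA adapters. The checkpoint name, data file, and hyperparameters are illustrative assumptions, not a recipe from any of the companies mentioned.

```python
# Minimal sketch (not a production recipe): instruction-tune an open-source base
# model on domain data with LoRA, so only a small fraction of weights is trained.
# The checkpoint name, data file and hyperparameters below are assumptions.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling,
                          Trainer, TrainingArguments)

base = "your-org/open-base-7b"  # placeholder: any open-source causal LM checkpoint
tokenizer = AutoTokenizer.from_pretrained(base)
if tokenizer.pad_token is None:            # many LLM tokenizers ship without a pad token
    tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(base)

# Wrap the base model with low-rank adapters (parameter-efficient tuning).
model = get_peft_model(model, LoraConfig(r=8, lora_alpha=16, task_type="CAUSAL_LM"))

# Domain instruction data: JSONL with "instruction" and "response" fields (assumed format).
data = load_dataset("json", data_files="domain_instructions.jsonl")["train"]

def tokenize(example):
    text = example["instruction"] + "\n" + example["response"]
    return tokenizer(text, truncation=True, max_length=512)

tokenized = data.map(tokenize, remove_columns=data.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out", per_device_train_batch_size=4,
                           num_train_epochs=1, learning_rate=2e-4),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

The point of the sketch is the shape of the workflow rather than the specific numbers: a smaller base model plus parameter-efficient tuning on curated domain data is what keeps the cost well below training a general-purpose model from scratch.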
In this context, more and more companies have joined the vertical large model track.
On May 18, Sangfor released China’s first self-developed large security model, becoming the first application of GPT technology in the security field;
On May 5, Xueersi announced that it was developing a self-researched mathematics large model, MathGPT, aimed at mathematics enthusiasts and research institutions around the world.
Clear commercialization scenarios and lower computing-power costs have opened the door for all kinds of enterprises to enter the vertical large-model track.
2. The test of the vertical large model
The advantage of a vertical large model is precisely that it is not so large: the computing power required is smaller and the algorithmic difficulty lower. But that does not mean everyone can build one.
As is well known, the three elements of large AI models, computing power, algorithms, and data, are the "feed" that nourishes AI.
Let’s talk about computing power first.
Large models are "large" because of their enormous parameter counts and training data. The training compute required is roughly proportional to the product of parameter count and data volume.
Over the past five years, the parameter counts of large AI models have grown by roughly an order of magnitude almost every year. GPT-4, for example, is widely reported (though never officially confirmed) to have on the order of 1.6 trillion parameters, roughly ten times GPT-3's 175 billion.
With the addition of multimodal data such as images, audio and video, the data volume behind large models is also expanding rapidly. In short, playing with large models requires large computing power.
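To make the "parameters times data" point concrete, here is a back-of-envelope comparison using the common approximation that training a dense transformer costs roughly 6 × parameters × training tokens in FLOPs; the two configurations are illustrative, not figures from the article.

```python
# Back-of-envelope training-compute estimate, using the common approximation
# FLOPs ≈ 6 × parameters × training tokens for dense transformer models.
# Both configurations below are illustrative assumptions.
def train_flops(params: float, tokens: float) -> float:
    """Approximate total training FLOPs for a dense transformer."""
    return 6 * params * tokens

general = train_flops(params=175e9, tokens=300e9)   # GPT-3-scale general-purpose model
vertical = train_flops(params=13e9, tokens=50e9)    # tens-of-billions-scale vertical model

print(f"general : {general:.2e} FLOPs")             # ~3.2e23
print(f"vertical: {vertical:.2e} FLOPs")            # ~3.9e21
print(f"ratio   : {general / vertical:.0f}x")       # ~80x less compute for the vertical model
```

Even this rough estimate shows why a tens-of-billions-parameter vertical model can be trained with a small fraction of the compute a GPT-3-scale model needs.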
As a reference point, Wang Sijie, strategy director of Qiyuan World, has said that in digital-human vertical scenarios the training and inference cost of a vertical large model can be an order of magnitude lower than OpenAI models of the same parameter scale. His suggestion: first build smaller vertical models (tens of billions, or even billions, of parameters) so that the data flywheel and model training can be combined well; in some fields such vertical models may be more effective and cheaper than OpenAI's.
Even though the computing power requirements of vertical large models are far lower than those of general-purpose models, the investment in computing infrastructure still keeps some small companies out.
Let’s talk about the algorithm.
Of the three elements, algorithms are the least difficult. Each company has its own technical path to building large models, and there are plenty of open-source projects to draw on, so this is where Chinese companies can most easily narrow or even close the gap.
Finally, let’s talk about data.
High-quality data is the key to assisting AI training and tuning. Sufficient and rich data is the foundation of large AI models.
OpenAI has disclosed that, to make the AI converse as fluently as humans, developers fed GPT-3.5 up to 45 TB of text corpus, roughly equivalent to 4.72 million sets of China's Four Great Classical Novels. The corpus comes from a wide range of sources, including Wikipedia, online articles, books and journals, and even the open-source code platform GitHub.
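A quick sanity check of that comparison (the roughly-three-bytes-per-UTF-8-Chinese-character assumption is mine, not from the article):

```python
# How big is one "set" implied by 45 TB ≈ 4.72 million sets of the
# Four Great Classical Novels?
corpus_bytes = 45e12            # 45 TB of text corpus
sets = 4.72e6                   # number of "sets" quoted in the article

bytes_per_set = corpus_bytes / sets
chars_per_set = bytes_per_set / 3   # Chinese characters are ~3 bytes in UTF-8 (assumption)

print(f"{bytes_per_set / 1e6:.1f} MB per set")                     # ≈ 9.5 MB
print(f"≈ {chars_per_set / 1e6:.1f} million characters per set")   # ≈ 3.2 million
# The four classical novels together run to roughly three million Chinese
# characters, so the article's equivalence is the right order of magnitude.
```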
But when focusing on subdivided industries, it is not so easy to obtain data.
Industrial Securities publicly stated that to train professional large-scale industry models, high-quality industry data and public data are crucial.
In China's domestic data market, according to disclosures from the National Development and Reform Commission, government data accounts for more than three quarters of the country's data resources, yet the scale of open data is less than 10% of that in the United States, and the share available for individuals and enterprises to use is less than 7% of the US figure.
Industry data, moreover, is core private-domain data: the larger its volume and the higher its quality, the more valuable it is.
A medical company with rich clinical and case data has the foundation to develop vertical large-model products for the medical industry. Likewise, project data in construction, user-profile data in finance, and vessel-position data in shipping are all key data sources that underpin vertical large models.
However, this private-domain data sits in the hands of the enterprises themselves, and for reasons of data security and compliance, most institutions insist on on-premises deployment before attempting any large-model training. It is hard to imagine enterprises handing their core data to outsiders for training.
In addition, how the data is labeled and annotated matters a great deal: classifying data into different tiers improves efficiency, and highly accurate labeled data further enhances the professional performance of a large model.
At this stage, however, the cost of obtaining high-precision annotated data in vertical industries is high, and public databases contain little specialised industry data, which raises the bar for building vertical large models.
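As one way to picture the "classify data into different tiers" idea, the sketch below tags each record with a quality tier and routes only the best tiers into fine-tuning; the field names, thresholds, and tier policy are illustrative assumptions.

```python
# Minimal sketch of tiered data labelling: tag each record with a quality tier,
# then let only the highest tiers feed model fine-tuning.
from dataclasses import dataclass

@dataclass
class Record:
    text: str
    source: str              # e.g. "clinical_guideline", "forum_post"
    expert_reviewed: bool
    label_agreement: float   # inter-annotator agreement, 0..1

def tier(r: Record) -> str:
    """Classify a record into the tier used for each training stage (assumed policy)."""
    if r.expert_reviewed and r.label_agreement >= 0.9:
        return "gold"        # instruction tuning / evaluation
    if r.label_agreement >= 0.7:
        return "silver"      # domain-adaptive pre-training
    return "bronze"          # kept for retrieval only, not for training

records = [
    Record("...", "clinical_guideline", True, 0.95),
    Record("...", "forum_post", False, 0.60),
]
for r in records:
    print(r.source, "->", tier(r))
```

In practice the tiering criteria would come from the industry itself (expert review workflows, agreement statistics, source provenance), which is exactly the kind of know-how incumbents in a vertical already have.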
Generally speaking, for building a vertical large model, data matters far more than computing power or algorithms.
Data has become the sticking point that enterprises must break through to make vertical large models work.
3. Be one step ahead with industry data
Vertical large models follow an application-first, scenario-first logic, and in China they emphasize value on the industry side.
On the one hand, amid China's current wave of intelligent transformation, there is broad market demand for digital innovation on the industrial side; on the other hand, in the to-B ecosystem, practice grounded in vertical applications also helps form a data flywheel and a scenario flywheel.
The premise of all this is that the company launching the vertical large model has built technical barriers and a moat in its industry, that is, the competitive advantage of having what others do not.
It seems that companies that have been deeply involved in vertical industries for many years may have a greater chance of winning.
These companies have deep accumulation in the fields of data processing, large-scale models and knowledge graphs, and have greater advantages in optimizing large-scale models. At the same time, they have a deep understanding of to B customer needs and implementation scenarios, which can better ensure the credibility and reliability of vertical large-model products and meet enterprise-level needs for security, controllability and compliance.
Currently, some large vertical models have been tested in finance, education, medicine, marketing and other scenarios.
For example, Bloomberg combined its rich proprietary financial data with public corpora to train BloombergGPT, a finance-specific large model;
NetEase Youdao launched "Ziyue", a self-developed ChatGPT-like model aimed at education scenarios;
and only a few weeks after ChatGPT's release, Google announced Med-PaLM, a large medical language model designed specifically to answer healthcare-related questions.
As more companies join, vertical large models will spring up across industries and sub-sectors. The companies that truly understand a vertical field, keep optimizing their models with high-quality data, close the business loop, and build an industrial ecosystem will ultimately be the ones that extend the value chain far enough.