360 releases large visual model; Zhou Hongyi: the combination of large models and the Internet of Things is the next trend
"The original AIoT is only vertical AI, not general AI. AIoT empowered by large models is 'real AI'", May 31, 360 (601360.SH, hereinafter referred to as "360") Wisdom Lifestyle Group held a large-scale visual model and AI hardware new product launch conference. Zhou Hongyi, founder of 360 Group, attended the conference and delivered a speech - the large-scale model opened a new era of AIoT.
Zhou Hongyi said that artificial intelligence in the past was weak AI, and the intelligent hardware built on it had no real intelligence. With the emergence of large models, computers can truly understand the world for the first time, which can give AIoT genuine intelligence. He said the emergence of large models marks the arrival of general artificial intelligence: AI has completed the evolution from the perception layer to the cognition layer. This is not only a disruptive revolution for traditional artificial intelligence, but can also drive progress in fields such as autonomous driving, protein computing, and robot control.
"Big models will bring about a new industrial revolution." Zhou Hongyi believes that all software, APPs, websites, and all industries are worthy of being reshaped with large models, and smart hardware is a hardware-based APP. Judging from the development trend of large models, multi-modality is the only way for the development of large models. The most important change of GPT-4 is that it has multi-modal processing capabilities. Therefore, Zhou Hongyi predicted that the combination of multi-modal large models and the Internet of Things will become the next trend.
He said the combination of multi-modal technology and intelligent hardware is the general trend. In the future, large models will serve as the brain of the Internet of Things, while IoT devices act as their sensing ends, giving large models "eyes and ears." Large models can also control IoT devices, evolving "mouths, hands, and feet" and thereby gaining the ability to act, ultimately completing the transition from perception to cognition, and from understanding to execution.
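To make the "eyes and ears, hands and feet" picture concrete, the sketch below shows one way such a perception-cognition-execution loop could be wired up: an IoT camera supplies perception, a multimodal model supplies cognition, and an actuator carries out the decision. This is an illustrative assumption, not 360's actual system; the endpoint URL, request and response fields, and device call are hypothetical placeholders.

```python
# Minimal sketch of the loop described above (hypothetical names and endpoint,
# not 360's system): camera frame -> multimodal model -> actuator command.
import base64

import requests

MODEL_ENDPOINT = "https://example.com/v1/multimodal/chat"  # hypothetical endpoint


def perceive(frame_path: str) -> str:
    """Perception: read a frame captured by the IoT camera and encode it."""
    with open(frame_path, "rb") as f:
        return base64.b64encode(f.read()).decode()


def decide(image_b64: str) -> str:
    """Cognition: ask the multimodal model to interpret the scene and pick an action."""
    resp = requests.post(
        MODEL_ENDPOINT,
        json={"prompt": "Describe the scene and recommend an action.", "image": image_b64},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["action"]  # hypothetical response field


def execute(action: str) -> None:
    """Execution: forward the decision to an actuator such as a smart lock or alarm."""
    print(f"actuator command: {action}")  # placeholder for a real device API call


if __name__ == "__main__":
    execute(decide(perceive("frame.jpg")))
```

In a real deployment, the final step would call the hardware's own control interface rather than printing, which is what gives the model its "hands and feet."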
At the event, Zhou Hongyi announced the release of the "360 Intelligent Brain - Visual Large Model." He said that a large language model is the foundation for building a large visual model: the core of enhancing multi-modal capability lies in the language model's cognitive, reasoning, and decision-making abilities. The visual large model is also an important component of the "360 Intelligent Brain," enabling it to understand pictures, videos, and sounds in the future.
It is understood that, building on visual perception capabilities, 360 integrated its "360 Intelligent Brain" large model with hundreds of billions of parameters, cleaned and trained it on billions of Internet image-text pairs, and fine-tuned it on millions of industry samples from security scenarios, ultimately producing a professional visual and multi-modal large model: the 360 Intelligent Brain - Visual Large Model.
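The recipe described here (a language-model backbone given visual input, large-scale image-text training, then a smaller domain-specific fine-tune) can be sketched in a few lines of PyTorch. The model below is a toy stand-in with tiny dimensions and synthetic data, offered only to illustrate the two-stage idea; it is not 360's architecture or code, and the class names, sizes, and learning rates are assumptions.

```python
# Toy two-stage vision-language training sketch (illustrative only).
import torch
import torch.nn as nn


class VisionLanguageModel(nn.Module):
    """Visual features are projected into the language model's embedding
    space and prepended to the text sequence, so the LLM backbone does the
    reasoning over both modalities."""

    def __init__(self, vision_dim=512, llm_dim=256, vocab=1000):
        super().__init__()
        self.projector = nn.Linear(vision_dim, llm_dim)      # vision -> LLM token space
        self.text_embed = nn.Embedding(vocab, llm_dim)
        layer = nn.TransformerEncoderLayer(llm_dim, nhead=4, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, num_layers=2)  # stand-in for the LLM
        self.lm_head = nn.Linear(llm_dim, vocab)

    def forward(self, image_feats, text_ids):
        img_tokens = self.projector(image_feats)              # (B, N_img, D)
        txt_tokens = self.text_embed(text_ids)                # (B, N_txt, D)
        seq = torch.cat([img_tokens, txt_tokens], dim=1)
        return self.lm_head(self.backbone(seq))               # token logits


def train_stage(model, batches, lr):
    """One training stage: stage 1 would stream cleaned web image-text pairs,
    stage 2 a much smaller security-domain set at a lower learning rate."""
    opt = torch.optim.AdamW(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for image_feats, text_ids, labels in batches:
        logits = model(image_feats, text_ids)
        # Supervise only the text positions (skip the visual prefix).
        txt_logits = logits[:, image_feats.size(1):, :]
        loss = loss_fn(txt_logits.reshape(-1, txt_logits.size(-1)), labels.reshape(-1))
        opt.zero_grad()
        loss.backward()
        opt.step()


if __name__ == "__main__":
    model = VisionLanguageModel()
    # Synthetic stand-ins for image-text batches; real data would be the
    # web-scale and security-domain corpora mentioned in the article.
    fake = [(torch.randn(2, 4, 512),
             torch.randint(0, 1000, (2, 8)),
             torch.randint(0, 1000, (2, 8))) for _ in range(3)]
    train_stage(model, fake, lr=1e-4)   # stage 1: image-text alignment
    train_stage(model, fake, lr=1e-5)   # stage 2: security-domain fine-tune
```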
"At present, the capabilities of large models are mainly reflected in the software layer. When large models are connected to intelligent hardware, the capabilities of large models will move from the digital world to the physical world." Zhou Hongyi said.