
The era of big models is here! How to deal with data security risks? Tips from experts at Xiaomanyao AI Summit

WBOY
2023-05-31 15:25:43

What is behind the popularity of large language models such as ChatGPT? Which industries stand to benefit? Where do the potential bubble risks lie? From May 25 to 26, at the 2023 Xiaomanyao Technology Conference sub-forum, the AIGC Special Summit "Toward the Intelligent Era and Realizing the Civilization Leap", more than 20 researchers and practitioners in the field of AI discussed AIGC applications and new business paradigms, new development paths across industries and fields, and the potential data security risks and ethical issues.

At the summit's new book release ceremony on the 26th, Long Zhiyong, author of "The Era of Big Models", a former senior product expert and business-unit deputy general manager at Alibaba, and co-founder and chief operating officer of a Silicon Valley AI startup, said frankly in an interview with Nandu that generative AI should follow a model of regulation first, development later. To address the potential bubble risks of large models there are technical means, such as model self-assessment and compliance algorithm review, as well as manual review processes; more importantly, only if the industry holds reasonable expectations about the difficulty and timeline of solving these problems can it avoid the risks that over-optimism brings.
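To make the "model self-assessment" idea concrete: a common pattern is to run a second pass in which the model reviews its own draft against a policy checklist before anything is released. The sketch below is a minimal illustration of that pattern using the OpenAI Python client; the policy wording, model name, and pass/fail protocol are hypothetical placeholders, not a description of any vendor's actual compliance pipeline.

```python
# A minimal sketch of LLM "self-assessment" as a compliance gate.
# Assumes the openai Python client (>= 1.0); the policy text, model name,
# and threshold below are illustrative placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

POLICY = "No personal data, no unverified factual claims, no harmful instructions."

def self_assess(draft: str) -> bool:
    """Ask the model to review a draft answer against a policy checklist."""
    review = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system",
             "content": f"You are a compliance reviewer. Policy: {POLICY} "
                        "Answer PASS or FAIL, then give one short reason."},
            {"role": "user", "content": draft},
        ],
    )
    verdict = review.choices[0].message.content or ""
    return verdict.strip().upper().startswith("PASS")

draft_answer = "..."  # output from a first generation pass
if not self_assess(draft_answer):
    draft_answer = "[withheld pending human review]"  # fall back to manual process
```

In practice such automated checks are paired with the manual review processes Long mentions, since the reviewing model can itself be wrong.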

Large models set off a new round of intelligence revolution and industrial restructuring

The real intelligent "brain" behind generative AI such as ChatGPT is the large language model! Technological breakthroughs built on large generative pre-trained models have brought a range of applications to individuals and industries, triggering a new round of intelligence revolution and industrial restructuring and building a new relationship of brain-computer collaboration.

The era of large models has arrived! Long Zhiyong said that "The Era of Big Models" analyzes and elaborates in depth on the technology, its applications, and the industrial changes: it explains in plain terms the principles behind the ChatGPT large model, depicts how large models will drive society into an era of intelligent revolution and brain-computer collaboration, summarizes the precautions and methodologies for enterprises applying large models in their own business, and offers suggestions for how individuals and enterprises can cope with the changes. According to him, large models are already being applied in fields such as knowledge work, commercial enterprise, and creative entertainment, and they mainly bring two types of innovation: incremental innovation and disruptive innovation.

In a keynote speech at the summit, artificial intelligence scientist Liu Zhiyi also noted that AI is empowering every field of economic and social development, and that demand for large models keeps rising as downstream fields pursue industrial upgrading. China's artificial intelligence market is estimated at about 370 billion yuan in 2022 and is expected to reach roughly 1.537 trillion yuan by 2027, continuing to penetrate downstream sectors such as manufacturing, transportation, finance, and medical care and to achieve large-scale implementation.


"The Era of Big Models" was released on May 26 at the 2023 AIGC special summit "Toward the Intelligent Era and Achieving a Civilization Leap".

Generative artificial intelligence brings risks such as trust erosion

However, as large models are widely applied, potential bubbles have also emerged. Less than 20 days after Samsung began allowing employees to use ChatGPT, confidential data was revealed to have been leaked. The legal risks, ethical issues, and data security problems raised by technologies such as AI face-swapping and AI painting are drawing growing public attention.

Speaking on "AI technological innovation and ethical governance in the era of large models", Liu Zhiyi said that generative AI does carry real risks; if those risks are not weighed and mitigated as deployment scales up, the pace of transformation may slow. Continuously updating trained models to improve performance can raise concerns about sensitive data, privacy, and security. Everyone involved in developing, consuming, discussing, and regulating generative AI should work to manage risks such as erosion of trust, long-term job displacement, bias and discrimination, data privacy, and the protection of intellectual property.

Liu Zhiyi shared three views in an interview with Nandu. First, as AI technology naturally enters every sector of the national economy and social systems, its risks expand, because the technology itself is a black box: a deep neural network computes through its algorithms in ways no one can follow step by step, making it opaque and unexplainable, and therefore risky. Second, AI technology is often tied to the construction of the digital world. Deep forgery, for example, including fake voices and images, turns physical identities into digital ones; the more developed the digital economy, the more it depends on such technologies, and the greater the risks they bring. Third, China attaches great importance to application scenarios and ecosystems; implementing those scenarios requires innovation, which inevitably brings risks that grow as the scenarios multiply, hence the preemptive regulation. For example, the "Measures for the Management of Generative Artificial Intelligence Services (Draft for Comments)" issued by the Cyberspace Administration of China and the "Opinions on Strengthening Ethical Governance of Science and Technology" issued by the Ministry of Science and Technology both weigh some of these risks in advance.


Long Zhiyong, author of "The Era of Big Models", former senior product expert and business-unit deputy general manager at Alibaba, and co-founder and chief operating officer of a Silicon Valley AI startup, speaks at the new book release ceremony.

Regulators put forward requirements for the reliability and transparency of large-model algorithms

"Data privacy is indeed an important issue for the GPT large model." Long Zhiyong said in an interview with Nandu that OpenAI recently made preparations in advance when responding to inquiries in the United States. For example, it provided the ability to turn off chat records in ChatGPT. For personal options, users can refuse large models to use their own private data for training; for corporate customers, OpenAI will provide privately deployed models to avoid companies worrying about their fine-tuned training data being shared by large models to competitors. These measures are likely to Will be adopted by domestic large models.

Regarding how to deal with the potential bubble risks of large models, and how to balance strong regulation of generative AI with promoting its development, Long Zhiyong said frankly that generative AI should follow a model of regulation first, development later. As the primary bearers of legal responsibility for AI-generated products, large-model service providers are responsible for the correctness and value orientation of AIGC content, and the compliance pressure on them remains considerable. "This is a strong norm," he said. "Beijing's document 'Several Measures for Promoting the Innovative Development of General Artificial Intelligence' mentions encouraging generative AI to achieve positive applications in non-public-service fields such as scientific research, and piloting inclusive, prudent regulation in the core area of Zhongguancun. I think that is a positive signal that strikes a balance between regulation and development."

He mentioned that regulators' thinking requires improvements in the reliability and transparency of large-model algorithms. "The Era of Big Models" warns of potential bubble risks in the industry, and one of the key factors is precisely the reliability and transparency of large models. OpenAI's chief scientist Ilya Sutskever believes that hallucination and information fabrication by large models are the main obstacles to GPT's application across industries. The hallucination problem is hard to eradicate, first because of the training objective and methods of large models, which reward plausible continuations rather than verified facts, and second because of the black-box nature of AI since the deep learning era: the models are opaque, and specific problems cannot be located inside them. Given that the mechanism by which large models' new capabilities emerge is likewise non-transparent and unpredictable, the large-model industry must pursue controllability amid loss of control and seek development within regulation. That is the biggest challenge.

Produced by: Nandu Big Data Research Institute

Researcher: Yuan Jiongxian


Statement: This article is reproduced from sohu.com.