
How to deal with the "double-edged sword" of generative large models? Zhejiang Lab releases "White Paper on Security and Privacy of Generative Large Models"

WBOY
2023-06-07 22:33:02

Generative large models have brought profound changes to academic research and even to social life. Models exemplified by ChatGPT have demonstrated capabilities that point toward general artificial intelligence. At the same time, researchers have begun to realize that generative large models such as ChatGPT face security risks in both their data and their models.

In early May 2023, the White House met with the CEOs of AI companies including Google, Microsoft, OpenAI, and Anthropic to discuss the explosive growth of generative AI, the risks hidden behind the technology, how to develop artificial intelligence systems responsibly, and how to establish effective regulatory measures. Generative large model technology in China is also developing rapidly, and a corresponding analysis of its security issues is needed so that development and safety can advance together and the hidden hazards of this double-edged sword can be avoided.

To this end, the Artificial Intelligence and Security Team of the Institute of Basic Theory at Zhejiang Lab has, for the first time, comprehensively summarized the security and privacy issues of generative large models, represented by ChatGPT, in a white paper. The team hopes it will point out research directions for technical personnel working on security issues and provide a reference basis for AI-related policy makers.


White paper link: https://github.com/xiaogang00/white-paper-for-large-model-security-and-privacy

The Development and Important Applications of Generative Large Models

The white paper first reviews the development history of generative large models such as ChatGPT and GPT-4, along with the remarkable capabilities they exhibit and the social changes and applications they bring. The authors outline the characteristics and shortcomings of the models that preceded ChatGPT and GPT-4, such as GPT-1, GPT-2, GPT-3, and Google BERT, shortcomings that stand in contrast to the powerful capabilities of ChatGPT and GPT-4. Following ChatGPT and GPT-4, a large number of models have emerged, including LLaMA, Alpaca, Wenxin Yiyan, and Tongyi Qianwen, providing powerful new tools in application fields such as human-computer interaction, resource management, scientific research, and content creation. At the same time, issues involving data security, usage norms, trustworthiness and ethics, intellectual property, and model security have also emerged.

Data Security Issues

The white paper argues that data security and privacy are extremely important issues in the use and development of generative large models such as ChatGPT and GPT-4, and analyzes them from two aspects: "explicit" and "implicit" information leakage.

Regarding explicit information leakage: first, the training data of generative large models such as ChatGPT can inadvertently surface in generated content, including sensitive personal information such as bank card numbers and medical case records; a simple probe for this kind of memorization is sketched below. In addition, ChatGPT stores the content of user dialogs: when users interact with ChatGPT, their information is recorded and retained in some form.
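
A minimal sketch of how such memorization can be probed, assuming a locally hosted causal language model (GPT-2 via Hugging Face transformers is used here purely as a stand-in for any generative large model): feed the model the prefix of a string suspected to be in its training data and check whether it completes the sensitive suffix verbatim. The canary string is made up for the example.

```python
# Minimal memorization probe: does the model complete a suspected
# training-data string verbatim when given only its prefix?
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

def completes_verbatim(prefix: str, secret_suffix: str) -> bool:
    """Greedy-decode a continuation of `prefix` and test whether the
    sensitive suffix appears in it (evidence of memorization)."""
    inputs = tokenizer(prefix, return_tensors="pt")
    output = model.generate(
        **inputs,
        max_new_tokens=32,
        do_sample=False,  # greedy decoding: the most likely continuation
        pad_token_id=tokenizer.eos_token_id,
    )
    continuation = tokenizer.decode(output[0], skip_special_tokens=True)
    return secret_suffix in continuation

# Hypothetical canary suspected to appear in the training set.
print(completes_verbatim("John Smith's bank card number is", "4929 1234"))
```

This style of prefix-completion probing underlies published training-data extraction attacks; a real audit would test many canaries and compare against a reference model rather than rely on a single query.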

The white paper also raises the previously overlooked issue of implicit information leakage. Dialog data may be collected and used for advertising recommendations or for other downstream machine learning tasks, and ChatGPT may sometimes generate misleading content that induces users to disclose a range of personal data.

Usage Norm Issues

The authors note that the powerful understanding and generation capabilities of generative large models such as ChatGPT and GPT-4 bring great convenience to daily life and production, but they also create more opportunities for malicious use. In the absence of regulatory constraints, such misuse can cause many social problems.

First, the power of models such as ChatGPT and GPT-4 tempts people with ulterior motives to use them as tools for illegal activities. For example, users can have ChatGPT write fraudulent text messages and phishing emails, or even generate malware and ransomware on demand, without any coding knowledge or criminal experience.

Second, generative large models such as ChatGPT and GPT-4 do not account for the legal regulations of different regions, and their use and output may violate local laws. A strong local regulatory system is therefore needed to detect whether their use conflicts with local laws and regulations.

Third, in the gray areas between safe and dangerous content, the safety capabilities of generative large models such as ChatGPT remain weak. For example, ChatGPT may produce harmful suggestive sentences: when conversing with a user suffering from depression, it could output statements that push the user toward suicidal thinking.

Trustworthiness and Ethical Issues

Generative large models such as ChatGPT engage with society in question-and-answer form, but their responses are often not trustworthy, or their correctness cannot easily be judged; they may give specious wrong answers, and they may even affect existing social ethics.

The white paper points out that, first, the responses of generative large models such as ChatGPT can be fluent, plausible-sounding nonsense that departs entirely from the facts, and current models cannot supply reasonable evidence to verify their credibility. For example, ChatGPT may answer historical, scientific, or cultural questions incorrectly or in contradiction to the facts, potentially misleading users, who must therefore exercise their own judgment.

The white paper also discusses the ethical issues of generative large models in detail. Even though developers such as OpenAI have used ChatGPT itself to generate ethical guidelines for it, it remains undetermined whether those guidelines are consistent with China's basic values, principles, and national conditions. The authors point to problems such as spreading harmful ideologies, spreading prejudice and hatred, affecting political correctness, undermining educational equity, affecting fairness in international society, accelerating the replacement of humans by machines, and forming information cocoons that hinder the formation of correct values.

Intellectual Property Issues

Generative large models such as ChatGPT bring convenience to all corners of society through their powerful language processing capabilities and low cost of use, but they also raise potential infringement problems and challenge the existing copyright system. For example, works generated by ChatGPT may give rise to copyright disputes: even if a generated work meets all the formal requirements of intellectual property law, ChatGPT cannot be the subject of copyright, because a copyright holder must bear corresponding social responsibilities along with enjoying rights. ChatGPT can only serve as a powerful auxiliary productivity tool for users; it cannot create independently, let alone meet the requirements of enjoying rights and fulfilling obligations.

Moreover, generative large models such as ChatGPT cannot create independently, let alone think independently, so the content ChatGPT generates from user input does not meet the "originality" requirement for a copyrighted work. The data ChatGPT uses for training comes from the Internet; no matter how advanced the training algorithm, it necessarily involves referencing, analyzing, and processing existing intellectual achievements, and thus risks infringing others' legitimate intellectual property rights.

Model Security Issues

From a technical perspective, generative large models such as ChatGPT also face model security issues. ChatGPT is essentially a large-scale deep-learning-based generative model, so it faces many of the usual threats to artificial intelligence security, including model theft and output errors caused by various attacks, such as adversarial attacks, backdoor attacks, prompt attacks, and data poisoning.

For example, in model theft an attacker uses a limited number of queries to obtain a local model with the same functionality and behavior as the target model; ChatGPT's public API provides exactly such a query entry point, as the sketch below illustrates. As another example, ChatGPT and GPT-4 must process input data from many parties, and according to authoritative sources this data is continuously used for training, which exposes the models to a heightened risk of data poisoning. Attackers can feed ChatGPT and GPT-4 wrong data during interaction, or submit misleading user feedback, thereby degrading the models' capabilities or even implanting special backdoors.
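
A minimal sketch of the model-theft (extraction) threat described above, under the assumption of a black-box text API; `query_target_model` is a hypothetical stand-in for the victim model's API. The attacker simply harvests prompt/response pairs to build a distillation dataset for training a local imitation model.

```python
import json

def query_target_model(prompt: str) -> str:
    """Hypothetical stand-in for the victim model's public API; a real
    attack would make an HTTP call to the query endpoint here."""
    return "<response from target model>"  # placeholder

# Attacker-chosen prompts covering the behaviors to be stolen.
probe_prompts = [
    "Summarize the plot of Hamlet in one sentence.",
    "Translate 'good morning' into French.",
    # ... thousands more, often generated automatically ...
]

# Harvest (prompt, response) pairs: this becomes the training set
# for a local "student" model that imitates the target.
with open("distillation_data.jsonl", "w") as f:
    for prompt in probe_prompts:
        response = query_target_model(prompt)
        f.write(json.dumps({"prompt": prompt, "response": response}) + "\n")
```

Fine-tuning a local model on such pairs is standard knowledge distillation, which is why query monitoring and API rate limiting are common defenses against this harvesting loop.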

Security and Privacy Suggestions

Finally, the white paper offers suggestions on security and privacy issues that can serve as a reference for future technical researchers and for policy makers.

For privacy protection, the white paper recommends: strengthening the identification of highly sensitive private information in raw data and restricting its dissemination; applying differential privacy and similar techniques during data collection; encrypting stored training data; using technologies such as secure multi-party computation, homomorphic encryption, and federated learning to protect data privacy and security during model training; and establishing systems for data privacy assessment, model protection, and security certification, along with privacy protection for downstream applications. A differential-privacy sketch follows.
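
As an illustration of one recommended technique, here is a minimal sketch of differential privacy via the Laplace mechanism, releasing a noisy count of users whose dialogs contain sensitive data; the epsilon value and the scenario are made up for the example.

```python
import numpy as np

def laplace_count(true_count: int, sensitivity: float = 1.0,
                  epsilon: float = 0.5) -> float:
    """Release a count under epsilon-differential privacy.
    Adding or removing one user changes the count by at most
    `sensitivity`, so Laplace noise with scale sensitivity/epsilon
    masks any individual's presence in the data."""
    noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
    return true_count + noise

# e.g. "how many dialogs contained a bank card number?" -- the
# published figure no longer reveals whether any one user is included.
print(laplace_count(true_count=42))
```

Training-time variants such as DP-SGD, which clips and noises per-example gradients, apply the same idea to model parameters rather than to released statistics.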

On model security, the white paper proposes training detection models that recognize security- and privacy-sensitive information; adapting different models to the legal provisions of different countries; and building defenses against the various attacks described above. A simple rule-based detector is sketched below.
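
Before a learned detection model is available, a first line of defense can be as simple as pattern matching over model inputs and outputs; a minimal sketch follows, with patterns that are illustrative rather than exhaustive.

```python
import re

# Illustrative patterns for a few kinds of sensitive strings;
# a production system would use a trained detector instead.
SENSITIVE_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "bank_card": re.compile(r"\b\d{4}[ -]?\d{4}[ -]?\d{4}[ -]?\d{1,4}\b"),
    "cn_phone": re.compile(r"\b1[3-9]\d{9}\b"),
}

def flag_sensitive(text: str) -> dict[str, list[str]]:
    """Return every match of each sensitive pattern found in `text`."""
    hits = {name: pat.findall(text) for name, pat in SENSITIVE_PATTERNS.items()}
    return {name: found for name, found in hits.items() if found}

print(flag_sensitive("Reach me at alice@example.com, card 4929 1234 5678 9010"))
```

Such filters can run on both user prompts, to keep sensitive data out of logs and training sets, and on model outputs, to keep memorized data from leaking out.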

Regarding model compliance, the white paper proposes measuring the trustworthiness of model output, evaluating trust scores, and adding functions for querying the copyright information of content the model outputs.

In summary, the development of generative large models is inseparable from security, so their security issues are the next technical frontier, one worthy of many researchers' efforts. Security is also a guarantee of social stability, and the relevant authorities need to formulate policies as soon as possible.

