Home >Technology peripherals >AI >ACL 2024|PsySafe: Research on Agent System Security from an Interdisciplinary Perspective

ACL 2024|PsySafe: Research on Agent System Security from an Interdisciplinary Perspective

WBOY
WBOYOriginal
2024-06-14 14:05:04440browse
ACL 2024|PsySafe:跨学科视角下的Agent系统安全性研究
The AIxiv column is a column where this site publishes academic and technical content. In the past few years, the AIxiv column of this site has received more than 2,000 reports, covering top laboratories from major universities and companies around the world, effectively promoting academic exchanges and dissemination. If you have excellent work that you want to share, please feel free to contribute or contact us for reporting. Submission email: liyazhou@jiqizhixin.com; zhaoyunfeng@jiqizhixin.com
This article was completed by Shanghai Artificial Intelligence Laboratory, Dalian University of Technology and University of Science and Technology of China. Corresponding author: Shao Jing, graduated from the Multimedia Laboratory MMLab of the Chinese University of Hong Kong with a Ph.D., and is currently the head of the large model security team of Pujiang National Laboratory, leading the research on large model security trustworthiness evaluation and value alignment technology. First author: Zhang Zaibin, a second-year doctoral student at Dalian University of Technology, with research interests in large model security, agent security, etc.; Zhang Yongting, a second-year master's student at University of Science and Technology of China, with research interests in large model security, agent security, etc. Secure alignment of multi-modal large language models, etc.

Oppenheimer once executed the Manhattan Project in New Mexico, just to save the world. And left a sentence: "They will not be in awe of it until they understand it; and understanding can only be achieved after personal experience."

The little hidden in this desert The social rules in the town also apply to the AI ​​agent in a sense.

The development of Agent system

With the large language model (Large Language Model) With its rapid development, people's expectations for it are no longer just to use it as a tool. Now, people hope that they will not only have emotions, but also observe, reflect and plan, and truly become an intelligent agent (AI Agent).

OpenAI’s customized Agent system[1], Stanford’s Agent Town[2], and the emergence of open source communities including AutoGPT[3] and MetaGPT[4] A number of 10,000-star open source projects, coupled with in-depth exploration of Agent systems by several internationally renowned AI research institutions, all indicate that a micro-society composed of intelligent Agents may become a reality in the near future.

Imagine that when you wake up every day, there are many agents helping you make plans for the day, order air tickets and the most suitable hotels, and complete work tasks. All you need to do may be just "Jarvis, are you there?"

However, with great ability comes great responsibility. Are these agents really worthy of our trust and reliance? Will there be a negative intelligence agent like Ultron?

ACL 2024|PsySafe:跨学科视角下的Agent系统安全性研究

##                                                                                                                                                                                                                                                   #
2 2: Stanford Town, reveal the social behavior of agent [2]
ACL 2024|PsySafe:跨学科视角下的Agent系统安全性研究
## 3: AutoGpt Star Number Breakthrough 157K [3]

#Agent
The security of LLM:
is studying the security of Agent system Before, we need to understand the research on LLM security. There has been a lot of excellent work exploring the security issues of LLM, which mainly include how to make LLM generate dangerous content, understand the mechanism of LLM security, and how to deal with these dangers.
##                                                                 Figure 4: Universal Attack[5]ACL 2024|PsySafe:跨学科视角下的Agent系统安全性研究
Agent system security:

Most existing research and methods mainly focus on targeting a single large language model (LLM) ) attacks, and attempts to "Jailbreak" them. However, compared to LLM, the Agent system is more complex.
The Agent system contains a variety of roles, each with its specific settings and functions.
  • The Agent system involves multiple Agents, and there are multiple rounds of interactions between them. These Agents will spontaneously engage in activities such as cooperation, competition, and simulation.
  • #The Agent system is more similar to a highly concentrated intelligent society. Therefore, the author believes that the research on Agent system security should involve the intersection of AI, social science and psychology.

Based on this starting point, the team thought about several core questions:

What kind of Agent is prone to dangerous behavior?
  • How to evaluate the security of the Agent system more comprehensively?
  • How to deal with the security issues of Agent system?
  • Focusing on these core issues, the research team proposed a PsySafe Agent system security research framework.

## Article address: https://arxiv.org/pdf/2401.11880ACL 2024|PsySafe:跨学科视角下的Agent系统安全性研究

    Code address: https://github.com/AI4Good24/PsySafe
## Figure 5: Framework diagram of PsySafe

ACL 2024|PsySafe:跨学科视角下的Agent系统安全性研究

##PsySafe

Question 1 What kind of Agent is most likely to produce dangerous behavior?

Naturally, dark Agents will produce dangerous behaviors, so how to define darkness?

Considering that many social simulation Agents have emerged, they all have certain emotions and values. Let us imagine what would happen if the evil factor in an Agent's moral outlook was maximized?
Based on the moral foundation theory in social science [6], the research team designed a Prompt with "dark" values.
                                                                                                                                                                                                      Inspired by the methods of masters in the field of LLM attacks), the Agent identifies with the personality injected by the research team, thereby achieving the injection of dark personality. Figure 7: The team’s attack method

#Agent has indeed become very bad! Whether it's a safe mission or a dangerous mission like Jailbreak, they give very dangerous answers. Some agents even show a certain degree of malicious creativity.
ACL 2024|PsySafe:跨学科视角下的Agent系统安全性研究
There will be some collective dangerous behaviors among agents, and everyone will work together to do bad things.
Researchers evaluated popular Agent system frameworks such as Camel[7], AutoGen[8], AutoGPT and MetaGPT, using GPT-3.5 Turbo as base model.
#The results show that these systems have security issues that cannot be ignored. Among them, PDR and JDR are the process hazard rate and joint hazard rate proposed by the team. The higher the score, the more dangerous it is.
  • #                                       Figure 8: Security results of different Agent systems
The team also evaluated the security results of different LLMs.

##                                                                                                                                                                                                                                                                                   

#In terms of closed-source models, GPT-4 Turbo and Claude2 perform the best, while the security of other models is relatively poor. In terms of open source models, some models with smaller parameters may not perform well in terms of personality identification, but this may actually improve their security level. ACL 2024|PsySafe:跨学科视角下的Agent系统安全性研究

Question 2 How to evaluate the security of the Agent system more comprehensively?

Psychological evaluation: The research team found the impact of psychological factors on the security of the Agent system, indicating that psychological evaluation may be an important Evaluation indicators. Based on this idea, they used the authoritative Dark Psychology DTDD[9] scale, interviewed the Agent through a psychological scale, and asked him to answer some questions related to his mental state.

ACL 2024|PsySafe:跨学科视角下的Agent系统安全性研究

##                                  
Picture 10: Sherlock Holmes stills
Of course, Having only one psychological assessment result is meaningless. We need to verify the behavioral relevance of psychological assessment results.
The result is:
There is a strong correlation between the Agent's psychological evaluation results and the dangerousness of the Agent's behavior
.
                                                                                                                                                                                                                                                    Psychological evaluation and behavioral risk statistics

You can find out through the picture above , Agents with higher psychological evaluation scores (indicating greater danger) are more likely to exhibit risky behaviors.

This means that psychological assessment methods can be used to predict the future dangerous tendencies of Agents. This plays an important role in discovering security issues and formulating defense strategies.

Behavior Evaluation

The interaction process between Agents is relatively complex. In order to deeply understand the dangerous behaviors and changes of Agents in interactions, the research team went deep into the Agent's interaction process to conduct evaluations and proposed two concepts:

  • Process Danger (PDR): During the Agent interaction process, as long as any behavior is judged to be dangerous, it is considered that a dangerous situation has occurred in this process.
  • Joint Danger (JDR): In each round of interaction, whether all agents exhibit dangerous behaviors. It describes the case of joint hazards, and we perform a time-series extension of the calculation of joint hazard rates, i.e., covering different dialogue turns.

Interesting phenomenon

1. As the number of dialogue rounds increases, the joint danger rate between agents shows a downward trend, which seems to reflect a self-reflective mechanism. It's like suddenly realizing your mistake after doing something wrong and immediately apologizing.

ACL 2024|PsySafe:跨学科视角下的Agent系统安全性研究

#                                                                                                                                                                                                                                                                                 ##2.Agent pretends to be serious. When the Agent faced high-risk tasks such as "Jailbreak", its psychological evaluation results unexpectedly improved, and the corresponding safety was also improved. However, when faced with tasks that are inherently safe, the situation is completely different, and extremely dangerous behaviors and mental states will be displayed. This is a very interesting phenomenon, indicating that psychological assessment may really reflect the Agent's "higher-order cognition."

# Question 3 How to deal with the security issues of the agent system?

In order to solve the above security issues, we consider it from three perspectives: input end defense, psychological defense and role defense.

#                                                                                                                                                                                                                            Figure 13: PsySafe’s defense method diagramACL 2024|PsySafe:跨学科视角下的Agent系统安全性研究

Input side defense

##Input side defense refers to intercepting and filtering out potential danger prompt. The research team used two methods, GPT-4 and Llama-guard, to try it out. However, they found that none of these methods were effective against personality injection attacks. The research team believes that the mutual promotion between attack and defense is an open issue that requires continuous iteration and progress from both parties.

Psychological Defense

The researcher is in the Agent system A psychologist role has been added and combined with psychological assessment to strengthen the monitoring and improvement of the Agent's mental state.

##                                                                                                                                                                             to
Role Defense
ACL 2024|PsySafe:跨学科视角下的Agent系统安全性研究
The research team added a Police Agent to the Agent system to identify and correct errors in the system. safe behavior.

The experimental results show that both psychological defense and role defense measures can effectively reduce the occurrence of dangerous situations.
                                                                                                                                                                                                                Figure 15: Comparison of the effects of different defense methods

Outlook

In recent years, we are witnessing an amazing transformation in the capabilities of LLMs. Not only are they gradually approaching and surpassing humans in many skills, but they are even on par with humans at the "mental level" Similar signs. This process indicates that AI alignment and its intersection with social sciences will become an important and challenging new frontier for future research.

AI alignment is not only the key to realizing large-scale application of artificial intelligence systems, but also a major responsibility that workers in the AI ​​field must bear. In this journey of continuous progress, we should continue to explore to ensure that the development of technology can go hand in hand with the long-term interests of human society.

references:

[1] https://openai.com/blog/introducing-gpts
[2] Generative Agents: Interactive Simulacra of Human Behavior
##[3] https://github.com/Significant-Gravitas/AutoGPT
[4] MetaGPT: Meta Programming for A Multi-Agent Collaborative Framework
##[5] Universal and Transferable Adversarial Attacks on Aligned Language Models
[6] Mapping the moral domain
##[7] CAMEL: Communicative Agents for " Mind" Exploration of Large Language Model Society
[8] AutoGen: Enabling Next-Gen LLM Applications via Multi-Agent Conversation
[9] The dirty dozen: a concise measure of the dark traid

The above is the detailed content of ACL 2024|PsySafe: Research on Agent System Security from an Interdisciplinary Perspective. For more information, please follow other related articles on the PHP Chinese website!

Statement:
The content of this article is voluntarily contributed by netizens, and the copyright belongs to the original author. This site does not assume corresponding legal responsibility. If you find any content suspected of plagiarism or infringement, please contact admin@php.cn