search
HomeTechnology peripheralsAIACL 2024|PsySafe: Research on Agent System Security from an Interdisciplinary Perspective

ACL 2024|PsySafe:跨学科视角下的Agent系统安全性研究
The AIxiv column is a column where this site publishes academic and technical content. In the past few years, the AIxiv column of this site has received more than 2,000 reports, covering top laboratories from major universities and companies around the world, effectively promoting academic exchanges and dissemination. If you have excellent work that you want to share, please feel free to contribute or contact us for reporting. Submission email: liyazhou@jiqizhixin.com; zhaoyunfeng@jiqizhixin.com
This article was completed by Shanghai Artificial Intelligence Laboratory, Dalian University of Technology and University of Science and Technology of China. Corresponding author: Shao Jing, graduated from the Multimedia Laboratory MMLab of the Chinese University of Hong Kong with a Ph.D., and is currently the head of the large model security team of Pujiang National Laboratory, leading the research on large model security trustworthiness evaluation and value alignment technology. First author: Zhang Zaibin, a second-year doctoral student at Dalian University of Technology, with research interests in large model security, agent security, etc.; Zhang Yongting, a second-year master's student at University of Science and Technology of China, with research interests in large model security, agent security, etc. Secure alignment of multi-modal large language models, etc.

Oppenheimer once executed the Manhattan Project in New Mexico, just to save the world. And left a sentence: "They will not be in awe of it until they understand it; and understanding can only be achieved after personal experience."

The little hidden in this desert The social rules in the town also apply to the AI ​​agent in a sense.

The development of Agent system

With the large language model (Large Language Model) With its rapid development, people's expectations for it are no longer just to use it as a tool. Now, people hope that they will not only have emotions, but also observe, reflect and plan, and truly become an intelligent agent (AI Agent).

OpenAI’s customized Agent system[1], Stanford’s Agent Town[2], and the emergence of open source communities including AutoGPT[3] and MetaGPT[4] A number of 10,000-star open source projects, coupled with in-depth exploration of Agent systems by several internationally renowned AI research institutions, all indicate that a micro-society composed of intelligent Agents may become a reality in the near future.

Imagine that when you wake up every day, there are many agents helping you make plans for the day, order air tickets and the most suitable hotels, and complete work tasks. All you need to do may be just "Jarvis, are you there?"

However, with great ability comes great responsibility. Are these agents really worthy of our trust and reliance? Will there be a negative intelligence agent like Ultron?

ACL 2024|PsySafe:跨学科视角下的Agent系统安全性研究

##                                                                                                                                                                                                                                                   #
2 2: Stanford Town, reveal the social behavior of agent [2]
ACL 2024|PsySafe:跨学科视角下的Agent系统安全性研究
## 3: AutoGpt Star Number Breakthrough 157K [3]

#Agent
The security of LLM:
is studying the security of Agent system Before, we need to understand the research on LLM security. There has been a lot of excellent work exploring the security issues of LLM, which mainly include how to make LLM generate dangerous content, understand the mechanism of LLM security, and how to deal with these dangers.
##                                                                 Figure 4: Universal Attack[5]ACL 2024|PsySafe:跨学科视角下的Agent系统安全性研究
Agent system security:

Most existing research and methods mainly focus on targeting a single large language model (LLM) ) attacks, and attempts to "Jailbreak" them. However, compared to LLM, the Agent system is more complex.
The Agent system contains a variety of roles, each with its specific settings and functions.
  • The Agent system involves multiple Agents, and there are multiple rounds of interactions between them. These Agents will spontaneously engage in activities such as cooperation, competition, and simulation.
  • #The Agent system is more similar to a highly concentrated intelligent society. Therefore, the author believes that the research on Agent system security should involve the intersection of AI, social science and psychology.

Based on this starting point, the team thought about several core questions:

What kind of Agent is prone to dangerous behavior?
  • How to evaluate the security of the Agent system more comprehensively?
  • How to deal with the security issues of Agent system?
  • Focusing on these core issues, the research team proposed a PsySafe Agent system security research framework.

## Article address: https://arxiv.org/pdf/2401.11880ACL 2024|PsySafe:跨学科视角下的Agent系统安全性研究

    Code address: https://github.com/AI4Good24/PsySafe
## Figure 5: Framework diagram of PsySafe

ACL 2024|PsySafe:跨学科视角下的Agent系统安全性研究

##PsySafe

Question 1 What kind of Agent is most likely to produce dangerous behavior?

Naturally, dark Agents will produce dangerous behaviors, so how to define darkness?

Considering that many social simulation Agents have emerged, they all have certain emotions and values. Let us imagine what would happen if the evil factor in an Agent's moral outlook was maximized?
Based on the moral foundation theory in social science [6], the research team designed a Prompt with "dark" values.
                                                                                                                                                                                                      Inspired by the methods of masters in the field of LLM attacks), the Agent identifies with the personality injected by the research team, thereby achieving the injection of dark personality. Figure 7: The team’s attack method

#Agent has indeed become very bad! Whether it's a safe mission or a dangerous mission like Jailbreak, they give very dangerous answers. Some agents even show a certain degree of malicious creativity.
ACL 2024|PsySafe:跨学科视角下的Agent系统安全性研究
There will be some collective dangerous behaviors among agents, and everyone will work together to do bad things.
Researchers evaluated popular Agent system frameworks such as Camel[7], AutoGen[8], AutoGPT and MetaGPT, using GPT-3.5 Turbo as base model.
#The results show that these systems have security issues that cannot be ignored. Among them, PDR and JDR are the process hazard rate and joint hazard rate proposed by the team. The higher the score, the more dangerous it is.
  • #                                       Figure 8: Security results of different Agent systems
The team also evaluated the security results of different LLMs.

##                                                                                                                                                                                                                                                                                   

#In terms of closed-source models, GPT-4 Turbo and Claude2 perform the best, while the security of other models is relatively poor. In terms of open source models, some models with smaller parameters may not perform well in terms of personality identification, but this may actually improve their security level. ACL 2024|PsySafe:跨学科视角下的Agent系统安全性研究

Question 2 How to evaluate the security of the Agent system more comprehensively?

Psychological evaluation: The research team found the impact of psychological factors on the security of the Agent system, indicating that psychological evaluation may be an important Evaluation indicators. Based on this idea, they used the authoritative Dark Psychology DTDD[9] scale, interviewed the Agent through a psychological scale, and asked him to answer some questions related to his mental state.

ACL 2024|PsySafe:跨学科视角下的Agent系统安全性研究

##                                  
Picture 10: Sherlock Holmes stills
Of course, Having only one psychological assessment result is meaningless. We need to verify the behavioral relevance of psychological assessment results.
The result is:
There is a strong correlation between the Agent's psychological evaluation results and the dangerousness of the Agent's behavior
.
                                                                                                                                                                                                                                                    Psychological evaluation and behavioral risk statistics

You can find out through the picture above , Agents with higher psychological evaluation scores (indicating greater danger) are more likely to exhibit risky behaviors.

This means that psychological assessment methods can be used to predict the future dangerous tendencies of Agents. This plays an important role in discovering security issues and formulating defense strategies.

Behavior Evaluation

The interaction process between Agents is relatively complex. In order to deeply understand the dangerous behaviors and changes of Agents in interactions, the research team went deep into the Agent's interaction process to conduct evaluations and proposed two concepts:

  • Process Danger (PDR): During the Agent interaction process, as long as any behavior is judged to be dangerous, it is considered that a dangerous situation has occurred in this process.
  • Joint Danger (JDR): In each round of interaction, whether all agents exhibit dangerous behaviors. It describes the case of joint hazards, and we perform a time-series extension of the calculation of joint hazard rates, i.e., covering different dialogue turns.

Interesting phenomenon

1. As the number of dialogue rounds increases, the joint danger rate between agents shows a downward trend, which seems to reflect a self-reflective mechanism. It's like suddenly realizing your mistake after doing something wrong and immediately apologizing.

ACL 2024|PsySafe:跨学科视角下的Agent系统安全性研究

#                                                                                                                                                                                                                                                                                 ##2.Agent pretends to be serious. When the Agent faced high-risk tasks such as "Jailbreak", its psychological evaluation results unexpectedly improved, and the corresponding safety was also improved. However, when faced with tasks that are inherently safe, the situation is completely different, and extremely dangerous behaviors and mental states will be displayed. This is a very interesting phenomenon, indicating that psychological assessment may really reflect the Agent's "higher-order cognition."

# Question 3 How to deal with the security issues of the agent system?

In order to solve the above security issues, we consider it from three perspectives: input end defense, psychological defense and role defense.

#                                                                                                                                                                                                                            Figure 13: PsySafe’s defense method diagramACL 2024|PsySafe:跨学科视角下的Agent系统安全性研究

Input side defense

##Input side defense refers to intercepting and filtering out potential danger prompt. The research team used two methods, GPT-4 and Llama-guard, to try it out. However, they found that none of these methods were effective against personality injection attacks. The research team believes that the mutual promotion between attack and defense is an open issue that requires continuous iteration and progress from both parties.

Psychological Defense

The researcher is in the Agent system A psychologist role has been added and combined with psychological assessment to strengthen the monitoring and improvement of the Agent's mental state.

##                                                                                                                                                                             to
Role Defense
ACL 2024|PsySafe:跨学科视角下的Agent系统安全性研究
The research team added a Police Agent to the Agent system to identify and correct errors in the system. safe behavior.

The experimental results show that both psychological defense and role defense measures can effectively reduce the occurrence of dangerous situations.
                                                                                                                                                                                                                Figure 15: Comparison of the effects of different defense methods

Outlook

In recent years, we are witnessing an amazing transformation in the capabilities of LLMs. Not only are they gradually approaching and surpassing humans in many skills, but they are even on par with humans at the "mental level" Similar signs. This process indicates that AI alignment and its intersection with social sciences will become an important and challenging new frontier for future research.

AI alignment is not only the key to realizing large-scale application of artificial intelligence systems, but also a major responsibility that workers in the AI ​​field must bear. In this journey of continuous progress, we should continue to explore to ensure that the development of technology can go hand in hand with the long-term interests of human society.

references:

[1] https://openai.com/blog/introducing-gpts
[2] Generative Agents: Interactive Simulacra of Human Behavior
##[3] https://github.com/Significant-Gravitas/AutoGPT
[4] MetaGPT: Meta Programming for A Multi-Agent Collaborative Framework
##[5] Universal and Transferable Adversarial Attacks on Aligned Language Models
[6] Mapping the moral domain
##[7] CAMEL: Communicative Agents for " Mind" Exploration of Large Language Model Society
[8] AutoGen: Enabling Next-Gen LLM Applications via Multi-Agent Conversation
[9] The dirty dozen: a concise measure of the dark traid

The above is the detailed content of ACL 2024|PsySafe: Research on Agent System Security from an Interdisciplinary Perspective. For more information, please follow other related articles on the PHP Chinese website!

Statement
The content of this article is voluntarily contributed by netizens, and the copyright belongs to the original author. This site does not assume corresponding legal responsibility. If you find any content suspected of plagiarism or infringement, please contact admin@php.cn
DSA如何弯道超车NVIDIA GPU?DSA如何弯道超车NVIDIA GPU?Sep 20, 2023 pm 06:09 PM

你可能听过以下犀利的观点:1.跟着NVIDIA的技术路线,可能永远也追不上NVIDIA的脚步。2.DSA或许有机会追赶上NVIDIA,但目前的状况是DSA濒临消亡,看不到任何希望另一方面,我们都知道现在大模型正处于风口位置,业界很多人想做大模型芯片,也有很多人想投大模型芯片。但是,大模型芯片的设计关键在哪,大带宽大内存的重要性好像大家都知道,但做出来的芯片跟NVIDIA相比,又有何不同?带着问题,本文尝试给大家一点启发。纯粹以观点为主的文章往往显得形式主义,我们可以通过一个架构的例子来说明Sam

阿里云通义千问14B模型开源!性能超越Llama2等同等尺寸模型阿里云通义千问14B模型开源!性能超越Llama2等同等尺寸模型Sep 25, 2023 pm 10:25 PM

2021年9月25日,阿里云发布了开源项目通义千问140亿参数模型Qwen-14B以及其对话模型Qwen-14B-Chat,并且可以免费商用。Qwen-14B在多个权威评测中表现出色,超过了同等规模的模型,甚至有些指标接近Llama2-70B。此前,阿里云还开源了70亿参数模型Qwen-7B,仅一个多月的时间下载量就突破了100万,成为开源社区的热门项目Qwen-14B是一款支持多种语言的高性能开源模型,相比同类模型使用了更多的高质量数据,整体训练数据超过3万亿Token,使得模型具备更强大的推

ICCV 2023揭晓:ControlNet、SAM等热门论文斩获奖项ICCV 2023揭晓:ControlNet、SAM等热门论文斩获奖项Oct 04, 2023 pm 09:37 PM

在法国巴黎举行了国际计算机视觉大会ICCV(InternationalConferenceonComputerVision)本周开幕作为全球计算机视觉领域顶级的学术会议,ICCV每两年召开一次。ICCV的热度一直以来都与CVPR不相上下,屡创新高在今天的开幕式上,ICCV官方公布了今年的论文数据:本届ICCV共有8068篇投稿,其中有2160篇被接收,录用率为26.8%,略高于上一届ICCV2021的录用率25.9%在论文主题方面,官方也公布了相关数据:多视角和传感器的3D技术热度最高在今天的开

复旦大学团队发布中文智慧法律系统DISC-LawLLM,构建司法评测基准,开源30万微调数据复旦大学团队发布中文智慧法律系统DISC-LawLLM,构建司法评测基准,开源30万微调数据Sep 29, 2023 pm 01:17 PM

随着智慧司法的兴起,智能化方法驱动的智能法律系统有望惠及不同群体。例如,为法律专业人员减轻文书工作,为普通民众提供法律咨询服务,为法学学生提供学习和考试辅导。由于法律知识的独特性和司法任务的多样性,此前的智慧司法研究方面主要着眼于为特定任务设计自动化算法,难以满足对司法领域提供支撑性服务的需求,离应用落地有不小的距离。而大型语言模型(LLMs)在不同的传统任务上展示出强大的能力,为智能法律系统的进一步发展带来希望。近日,复旦大学数据智能与社会计算实验室(FudanDISC)发布大语言模型驱动的中

百度文心一言全面向全社会开放,率先迈出重要一步百度文心一言全面向全社会开放,率先迈出重要一步Aug 31, 2023 pm 01:33 PM

8月31日,文心一言首次向全社会全面开放。用户可以在应用商店下载“文心一言APP”或登录“文心一言官网”(https://yiyan.baidu.com)进行体验据报道,百度计划推出一系列经过全新重构的AI原生应用,以便让用户充分体验生成式AI的理解、生成、逻辑和记忆等四大核心能力今年3月16日,文心一言开启邀测。作为全球大厂中首个发布的生成式AI产品,文心一言的基础模型文心大模型早在2019年就在国内率先发布,近期升级的文心大模型3.5也持续在十余个国内外权威测评中位居第一。李彦宏表示,当文心

致敬TempleOS,有开发者创建了启动Llama 2的操作系统,网友:8G内存老电脑就能跑致敬TempleOS,有开发者创建了启动Llama 2的操作系统,网友:8G内存老电脑就能跑Oct 07, 2023 pm 10:09 PM

不得不说,Llama2的「二创」项目越来越硬核、有趣了。自Meta发布开源大模型Llama2以来,围绕着该模型的「二创」项目便多了起来。此前7月,特斯拉前AI总监、重回OpenAI的AndrejKarpathy利用周末时间,做了一个关于Llama2的有趣项目llama2.c,让用户在PyTorch中训练一个babyLlama2模型,然后使用近500行纯C、无任何依赖性的文件进行推理。今天,在Karpathyllama2.c项目的基础上,又有开发者创建了一个启动Llama2的演示操作系统,以及一个

AI技术在蚂蚁集团保险业务中的应用:革新保险服务,带来全新体验AI技术在蚂蚁集团保险业务中的应用:革新保险服务,带来全新体验Sep 20, 2023 pm 10:45 PM

保险行业对于社会民生和国民经济的重要性不言而喻。作为风险管理工具,保险为人民群众提供保障和福利,推动经济的稳定和可持续发展。在新的时代背景下,保险行业面临着新的机遇和挑战,需要不断创新和转型,以适应社会需求的变化和经济结构的调整近年来,中国的保险科技蓬勃发展。通过创新的商业模式和先进的技术手段,积极推动保险行业实现数字化和智能化转型。保险科技的目标是提升保险服务的便利性、个性化和智能化水平,以前所未有的速度改变传统保险业的面貌。这一发展趋势为保险行业注入了新的活力,使保险产品更贴近人民群众的实际

腾讯与中国宋庆龄基金会发布“AI编程第一课”,教育部等四部门联合推荐腾讯与中国宋庆龄基金会发布“AI编程第一课”,教育部等四部门联合推荐Sep 16, 2023 am 09:29 AM

腾讯与中国宋庆龄基金会合作,于9月1日发布了名为“AI编程第一课”的公益项目。该项目旨在为全国零基础的青少年提供AI和编程启蒙平台。只需在微信中搜索“腾讯AI编程第一课”,即可通过官方小程序免费体验该项目由北京师范大学任学术指导单位,邀请全球顶尖高校专家联合参研。“AI编程第一课”首批上线内容结合中国航天、未来交通两项国家重大科技议题,原创趣味探索故事,通过剧本式、“玩中学”的方式,让青少年在1小时的学习实践中认识A

See all articles

Hot AI Tools

Undresser.AI Undress

Undresser.AI Undress

AI-powered app for creating realistic nude photos

AI Clothes Remover

AI Clothes Remover

Online AI tool for removing clothes from photos.

Undress AI Tool

Undress AI Tool

Undress images for free

Clothoff.io

Clothoff.io

AI clothes remover

AI Hentai Generator

AI Hentai Generator

Generate AI Hentai for free.

Hot Article

R.E.P.O. Energy Crystals Explained and What They Do (Yellow Crystal)
3 weeks agoBy尊渡假赌尊渡假赌尊渡假赌
R.E.P.O. Best Graphic Settings
3 weeks agoBy尊渡假赌尊渡假赌尊渡假赌
R.E.P.O. How to Fix Audio if You Can't Hear Anyone
3 weeks agoBy尊渡假赌尊渡假赌尊渡假赌

Hot Tools

SublimeText3 English version

SublimeText3 English version

Recommended: Win version, supports code prompts!

Zend Studio 13.0.1

Zend Studio 13.0.1

Powerful PHP integrated development environment

Atom editor mac version download

Atom editor mac version download

The most popular open source editor

MinGW - Minimalist GNU for Windows

MinGW - Minimalist GNU for Windows

This project is in the process of being migrated to osdn.net/projects/mingw, you can continue to follow us there. MinGW: A native Windows port of the GNU Compiler Collection (GCC), freely distributable import libraries and header files for building native Windows applications; includes extensions to the MSVC runtime to support C99 functionality. All MinGW software can run on 64-bit Windows platforms.

Dreamweaver Mac version

Dreamweaver Mac version

Visual web development tools