
Open and closed source model 'Chaos': Let's see which agent can best glimpse human beings' true intentions
AIxiv is the column where this site publishes academic and technical content. Over the past few years, the AIxiv column has received more than 2,000 reports, covering top laboratories from major universities and companies around the world and effectively promoting academic exchange and dissemination. If you have excellent work you would like to share, feel free to submit it or contact us for coverage. Submission email: liyazhou@jiqizhixin.com; zhaoyunfeng@jiqizhixin.com

The first authors of this article are Qian Cheng and He Bingxiang, undergraduate students in the Department of Computer Science at Tsinghua University and members of THUNLP. Qian Cheng's main research interests are tool learning and large-model-driven agents; he will soon begin a PhD at UIUC. He Bingxiang's main research interests are large model alignment and safety; he will soon begin a PhD at Tsinghua University. The corresponding authors are Cong Xin and Lin Yankai, and the supervising professor is Associate Professor Liu Zhiyuan.

Today, with the rapid development of artificial intelligence, we constantly probe the intelligence of machines, yet we often overlook how these intelligent agents understand us - their creators. Every interaction, every word, every action in human life is filled with intention and emotion. The real challenge is: how are these implicit intentions captured, parsed, and responded to by the agent? Traditional intelligent agents respond quickly to explicit commands, but they often fail to understand complex implicit human intentions.

In recent years, language models such as GPT and LLaMA have demonstrated remarkable capabilities in solving complex tasks. However, although agents built around them are good at formulating strategies and executing tasks, they rarely incorporate robust user-interaction strategies. The tasks users give are usually vague and short, which requires the agent not only to understand our literal requests but also to see through our implicit intentions.

Therefore, for a new generation of intelligent agents to reach the public, they need to be human-centered, focusing not only on the accuracy of task execution but also on establishing a more natural, smooth, and insightful way of communicating with humans.

To fill this gap, a joint team from Tsinghua University, Renmin University of China, and Tencent recently proposed a new agent interaction design. The work first introduces Intention-in-Interaction (IN3), a new benchmark that aims to understand users' implicit intentions through explicit interaction with users.

Using Mistral-7B as the backbone and training on IN3, the team builds Mistral-Interact, which can proactively assess the ambiguity of a task, query the user's intentions, and refine them into actionable goals before downstream agent execution begins. After embedding the model into the XAgent framework, the article comprehensively evaluates the complete agent system.
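Conceptually, Mistral-Interact sits as a thin interaction loop in front of the executing agent. The sketch below is only a minimal illustration of that flow; the `interaction_model` object with its `judge_ambiguity`, `next_query`, and `summarize` methods, and the `ask_user` callback, are hypothetical placeholders rather than the paper's actual interfaces.

```python
from dataclasses import dataclass, field

@dataclass
class RefinedTask:
    original_task: str
    is_ambiguous: bool
    gathered_details: dict = field(default_factory=dict)
    summary: str = ""

def refine_task(user_task, interaction_model, ask_user):
    """Interact with the user until the task is concrete, then summarize it."""
    task = RefinedTask(original_task=user_task,
                       is_ambiguous=interaction_model.judge_ambiguity(user_task))
    while task.is_ambiguous:
        # The interaction model proposes the next question plus suggested options,
        # or signals that no important detail is still missing.
        question, options = interaction_model.next_query(user_task, task.gathered_details)
        if question is None:
            break
        task.gathered_details[question] = ask_user(question, options)
    task.summary = interaction_model.summarize(user_task, task.gathered_details)
    return task
```

The refined summary, rather than the raw user request, is what the downstream agent receives as its execution goal.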

The results show that this solution performs strongly at identifying ambiguous user tasks, recovering and summarizing key missing information, setting accurate and necessary agent execution goals, and reducing redundant tool use. This method not only fills the gap in agent-user interaction, truly putting humans at the center of agent design, but also brings us a step closer to intelligent agents that are better aligned with human intentions.


  • Paper title: Tell Me More! Towards Implicit User Intention Understanding of Language Model Driven Agents
  • Paper link: https://arxiv.org/abs/2402.09205
  • Code repository: https://github.com/HBX-hbx/Mistral-Interact
  • Open source model: https://huggingface.co/hbx/Mistral-Interact
  • Open source dataset: https://huggingface.co/datasets/hbx/IN3


Comparison of fuzzy task and clear task execution

Current agent benchmarks often assume that a given task is clear and do not treat user intention understanding as an important aspect of evaluation. To address this incompleteness in evaluation, this work builds the Intention-in-Interaction (IN3) benchmark, which evaluates an agent's interactive capabilities through explicit task ambiguity judgment and user intention understanding.

IN3 benchmark data construction process

As shown in the figure above, human-written seed tasks form Step 1. The model then iteratively generates new tasks to augment the dataset, while sampling from the dataset as examples for the next round of generation (Step 2). After this Self-Instruct style generation, each task's ambiguity, its missing details, the importance of each detail, and potential options are manually annotated (Step 3).
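The generation loop itself is the standard Self-Instruct recipe. Below is a minimal sketch under stated assumptions: `generate_tasks` is a placeholder wrapping the model call, and the round and sample counts are illustrative, not taken from the paper.

```python
import random

def build_task_pool(seed_tasks, generate_tasks, rounds=3, samples_per_round=8):
    """Grow a task pool Self-Instruct style (Steps 1-2); annotation (Step 3) is manual."""
    pool = list(seed_tasks)                        # Step 1: human-written seed tasks
    for _ in range(rounds):                        # Step 2: iterative model generation
        examples = random.sample(pool, min(samples_per_round, len(pool)))
        pool.extend(generate_tasks(examples))      # new tasks conditioned on sampled examples
    return pool                                    # Step 3: ambiguity, missing details,
                                                   # importance, and options annotated by hand
```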

Mistral-Interact training process

Since large language models are at the core of agent design, this work first conducted a preliminary study to evaluate how well current open-source and closed-source models understand implicit intentions during interaction.
Specifically, the article randomly selects ten tasks from IN3, uses them to test LLaMA-2-7B-Chat, Mistral-7B-Instruct-v0.2, and GPT-4, and instructs these models to i) determine the ambiguity of the task, ii) ask the user for missing details when the task is ambiguous, and iii) summarize the detailed user task.
The preliminary results show that the open-source models perform reasonably but still fall short in understanding human intent. In contrast, GPT-4 comes closest to human intentions in judging task ambiguity and important missing details. At the same time, this preliminary exploration also shows that simple prompt engineering is not enough to further improve an agent's ability to understand implicit intentions in interaction; further training on top of current open-source models is needed to reach a level usable in practical agent applications.
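The probing setup of this preliminary study can be pictured as a simple three-step harness. The prompt wording, the `model_chat` interface, and the `simulated_user` callback below are assumptions made for illustration; they are not the paper's actual prompts or code.

```python
PROBE_PROMPT = (
    "You are given a user task. First judge whether the task is vague or clear. "
    "If it is vague, ask the user for the missing details you consider important. "
    "After the user replies, summarize the user's detailed goal.\n\nTask: {task}"
)

def probe_model(model_chat, task, simulated_user):
    """Run one IN3 task through the three steps: judge ambiguity, query, summarize."""
    history = [{"role": "user", "content": PROBE_PROMPT.format(task=task)}]
    judgement = model_chat(history)                     # i) ambiguity judgement (+ first query)
    history.append({"role": "assistant", "content": judgement})
    history.append({"role": "user", "content": simulated_user(judgement)})  # ii) user details
    summary = model_chat(history)                       # iii) summary of the detailed task
    return {"task": task, "judgement": judgement, "summary": summary}
```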

The construction process of training data (IN3 conversation records)

Referring to the figure above, based on IN3's annotations of task ambiguity, missing details, and potential options, the article applies several strategies (orange boxes) when constructing the conversation records: constructing an explicit initial reasoning chain, constructing queries with suggested options, constructing user responses with different tones, and constructing an explicit summary reasoning chain. These dialogue construction strategies better elicit the query and reasoning capabilities of the target model.
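To make these strategies concrete, one training record might look roughly like the following. The field names and example content are purely illustrative and are not the schema of the released IN3 data.

```python
# One illustrative conversation record shaped after the four strategies above.
conversation_record = {
    "task": "Plan a trip for me",
    "initial_reasoning": "The task is vague: destination, dates, budget, and companions are missing.",
    "turns": [
        {
            "query": "Where would you like to go, and roughly when?",
            "suggested_options": ["a beach destination", "a city break", "a hiking trip"],
            "user_response": "somewhere warm in march, nothing fancy",   # casual, lowercase tone
        },
        {
            "query": "What is your approximate budget, and is anyone traveling with you?",
            "suggested_options": ["under $1000", "$1000-$3000", "no fixed budget"],
            "user_response": "Under $1500 in total. Just me and my partner.",  # concise, formal tone
        },
    ],
    "summary_reasoning": "All important details are now known, so the task can be summarized.",
    "summary": "Plan a warm, low-budget trip in March for two people with a total budget under $1500.",
}
```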

Comprehensive evaluation of agent interaction capabilities

An agent's implicit intention understanding ability can be evaluated directly through user interaction or indirectly through the agent's execution of downstream tasks. User interaction focuses on intention understanding itself, while task execution focuses on the ultimate purpose of intention understanding: enhancing the agent's ability to handle tasks.

Therefore, to comprehensively evaluate the interactive agent design, the article divides the experiments into two parts: i) instruction understanding, which evaluates the agent's intention understanding ability during user interaction; and ii) instruction execution, which evaluates the agent's task execution performance after integrating the interaction model.

Instruction understanding does not involve any real-time agent execution, so the article directly evaluates the performance of different language models during the interaction process to determine their interaction capabilities as an upstream module in agent design. The results are shown below:


Instruction understanding test results (the arrows indicate whether a higher or lower score means stronger capability). Mistral-Interact performs best on metrics such as judging task ambiguity and coverage of missing details, and can produce clear and comprehensive summaries of detailed user intentions. Compared with other open-source models, Mistral-Interact provides more reasonable options when asking about missing details in ambiguous tasks and queries in a friendlier way, with performance comparable to GPT-4.

In terms of instruction execution, to evaluate how implicit intention understanding benefits agent task execution, the article integrates Mistral-Interact as an upstream interaction module into the XAgent framework for testing. XAgent can interact with environments such as web search, code execution, the command line, and the file system.
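A minimal sketch of loading the released checkpoint as such an upstream module follows. It assumes the tokenizer ships a Mistral-style chat template and that sufficient GPU memory is available; the generation settings are arbitrary rather than the authors' configuration.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("hbx/Mistral-Interact")
model = AutoModelForCausalLM.from_pretrained("hbx/Mistral-Interact", device_map="auto")

def interact_turn(history):
    """history: list of {"role": ..., "content": ...} messages; returns the next model message."""
    input_ids = tokenizer.apply_chat_template(
        history, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)
    output_ids = model.generate(input_ids, max_new_tokens=512)
    return tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True)

# After a few interact_turn() exchanges, the refined task summary (not the raw,
# possibly vague user request) is handed to XAgent as its execution goal.
```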

Instruction execution test results (ST stands for subtask, MS stands for milestone)


Quantitative evaluation results show that integrating Mistral-Interact helps: i) avoid setting unnecessary goals during the execution process, ii) make the execution process of the agent more consistent with detailed user intentions, and iii) reduce unnecessary tool calls and promote the efficiency of agent tool usage.

Agent Interaction Case Analysis

In terms of instruction understanding, to further demonstrate the robustness of Mistral-Interact in different dialogue scenarios, the article also provides three case studies.
Case studies of Mistral-Interact and users in different scenarios

Case A shows the impact of different user tones and conversation styles on Mistral-Interact. The article found that regardless of whether the user's answer was short or detailed, enthusiastic or cold, or even contained spelling errors, Mistral-Interact was able to accurately understand and provide an appropriate response, proving its robustness.

Case B tests whether Mistral-Interact can keep asking questions and guide the conversation back on track when the user shows an uncooperative attitude. The results show that even when users evade its questions, Mistral-Interact is still able to effectively redirect the conversation.

In case C, it can be observed that Mistral-Interact can incorporate additional information provided by the user into its summary, even when that information was not explicitly asked for by the interaction model. This shows that when the model's queries cannot fully cover the missing details or the user has specific requirements, the model can still reasonably and comprehensively summarize all user intentions, making it more user-friendly.

In order to more clearly illustrate the role of Mistral-Interact in terms of instruction execution, a comparative case study is provided in the figure below.


From the text marked in light red, it can be seen that when the user's goal is vague, XAgent cannot set subtasks that accurately reflect user needs. From the text marked in purple, it can be seen that XAgent often sets unnecessary subtasks: because the user's task is too vague to execute, the agent tends to fabricate unnecessary details, which is inconsistent with the user's true intention.

In contrast, clear task goals obtained through active interaction with Mistral-Interact enable XAgent to formulate more specific subtasks. The text marked in green in the figure demonstrates this consistency. At the same time, the agent's execution process becomes simpler and the number of tool calls is reduced, all of which reflect a more efficient agent execution process.

Conclusion

We are standing at a new starting point, ready to witness a new chapter of human-machine collaboration, mutual understanding, and learning. Intelligent agents will soon no longer be cold information processors but empathetic partners that, through refined interactive experiences, can deeply understand the needs and desires we may not clearly express at first. This revolution in human-centered agent design will open up endless possibilities in interaction, making intelligent agents truly indispensable helpers in our lives.
