Kuaishou has open-sourced its Agents system, models, and data!
Can a 7B-sized model power an AI Agent? Recently, Kuaishou open-sourced "KwaiAgents". Ask it about skiing this weekend, and it will not only find you a venue but also check the weather for that day.
As is well known, large language models (LLMs) acquire a large amount of knowledge through language modeling and exhibit a certain capacity for cognition and reasoning. However, even the strongest model today, GPT-4, produces hallucinated content when used on its own and cannot interact with the world in real time. AI Agents are one way to address this: by eliciting a large model's abilities to plan tasks, reflect, and call tools, they let the model use real-world tools to improve the accuracy of its output and even tackle complex problems. This time, "KwaiAgents", jointly developed by Kuaishou and Harbin Institute of Technology, enables "small" large models of 7B/13B parameters to surpass GPT-3.5, and the system, models, data, and benchmark are all open source!
The following content is available on the "KwaiAgents" GitHub homepage:
The main components of the KAgentSys system are a cognitive core built on a large language model, a memory mechanism, and a tool library, which together run an iterative, automated loop.
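To make the loop concrete, here is a minimal sketch of how a cognitive core, memory, and tool library can interact iteratively. It is not the actual KAgentSys implementation; all names (call_llm, TOOLS, run_agent) and the JSON decision format are illustrative assumptions.

```python
# Minimal sketch of an iterative agent loop: an LLM "cognitive core" plans,
# calls tools, and writes observations back into memory until it can answer.
# Names and the JSON protocol here are illustrative, not the KAgentSys API.
import json


def call_llm(prompt: str) -> str:
    """Placeholder for the underlying large language model."""
    raise NotImplementedError


TOOLS = {
    "web_search": lambda query: f"search results for {query!r}",
    "weather": lambda city: f"weather forecast for {city}",
}


def run_agent(task: str, max_steps: int = 5) -> str:
    memory: list[str] = []                       # accumulated observations
    for _ in range(max_steps):
        prompt = (
            f"Task: {task}\n"
            "Memory:\n" + "\n".join(memory) + "\n"
            'Reply with JSON: {"tool": ..., "args": ...} or {"finish": ...}'
        )
        decision = json.loads(call_llm(prompt))  # planning step
        if "finish" in decision:                 # the model decides it can answer
            return decision["finish"]
        tool = TOOLS[decision["tool"]]
        observation = tool(decision["args"])     # tool call
        memory.append(f"{decision['tool']} -> {observation}")  # memory update
    return "No answer within the step budget."
```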
Some KAgentSys features will be upgraded and opened up gradually; the contents of this open-source release are as follows:
To avoid the overfitting caused by training on a single prompt template, the team proposed the Meta-Agent Tuning (MAT) method: by introducing more agent prompt templates into the training data, it improves the generality of a large model's agent capabilities and boosts its performance.
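The core idea of training over diverse prompt templates can be sketched as follows. The templates and data fields below are invented for illustration; in MAT the templates are produced by a meta-agent rather than hand-written like this.

```python
import random

# A small pool of agent prompt templates with different structure and wording.
# These are illustrative stand-ins, not the templates MAT actually generates.
TEMPLATES = [
    "You are an assistant with tools: {tools}\nQuestion: {query}\nThought:",
    "Available APIs:\n{tools}\n\nUser request: {query}\nPlan your next action.",
    "### Tools\n{tools}\n### Task\n{query}\n### Response",
]


def render_sample(query: str, tools: str, response: str) -> dict:
    """Render one training example with a randomly chosen prompt template,
    so the model never sees a single fixed format during fine-tuning."""
    template = random.choice(TEMPLATES)
    return {"prompt": template.format(tools=tools, query=query),
            "response": response}
```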
Meta-Agent Tuning (MAT) is divided into two stages:
KAgentBench ships with thousands of manually annotated examples and can be used out of the box, letting anyone evaluate a large model's agent capabilities across different templates and dimensions with a single command.
As shown in the figure above, KAgentBench constructs inputs for different types of capabilities; each query comes with multiple prompt templates and multiple genuine, human-edited answers, so that both accuracy and generalization can be assessed comprehensively. The following table shows how MAT tuning improves the 7B and 13B models across these capabilities, surpassing GPT-3.5.
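The evaluation idea of pairing each query with several templates and several human-written references can be sketched roughly as below. The data layout and the crude exact-match scoring are simplifying assumptions, not KAgentBench's real format or metrics.

```python
# Simplified sketch: evaluate one capability across several prompt templates,
# scoring each prediction against multiple human-edited reference answers.
from statistics import mean


def evaluate(model, benchmark: list[dict]) -> float:
    scores = []
    for item in benchmark:
        query = item["query"]
        references = item["references"]          # human-edited gold answers
        for template in item["templates"]:       # same query, different prompts
            prediction = model(template.format(query=query))
            # crude exact-match score; the real benchmark uses richer metrics
            scores.append(max(
                1.0 if prediction.strip() == ref.strip() else 0.0
                for ref in references
            ))
    # averaging over templates rewards models that generalize across formats
    return mean(scores)
```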
The study also conducted a human evaluation, inviting annotators to write 200 factual and time-sensitive questions, such as "How old is Andy Lau this year?". The results show that the KAgentSys system and MAT tuning significantly improve the model (accuracy is given as a percentage; the average score on a 5-point scale is in parentheses).
For some long-tail and popular questions, results that rely solely on web search are often unsatisfactory. For example, for a long-tail question such as "How many days older is Antonella than Messi?", the search results typically return gossip about the couple without the key information. KAgentSys answers it accurately by calling an encyclopedia-search tool to obtain the exact birth dates and then using a time-difference tool to compute the gap.
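The two-step tool chain described here (encyclopedia lookup, then time-difference calculation) can be illustrated with a short sketch. The lookup function and the birth dates inside it are placeholders, not real retrieved values or the actual tools in the system.

```python
from datetime import date


def encyclopedia_birthdate(person: str) -> date:
    """Placeholder for the encyclopedia-search tool; the dates are dummies."""
    dummy = {"Person A": date(1988, 2, 26), "Person B": date(1987, 6, 24)}
    return dummy[person]


def days_between(person_a: str, person_b: str) -> int:
    """Time-difference tool: number of days between two people's birth dates."""
    delta = encyclopedia_birthdate(person_a) - encyclopedia_birthdate(person_b)
    return abs(delta.days)


print(days_between("Person A", "Person B"))  # exact day count, no guesswork
```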
The team said that AI Agents are a very promising technical direction. Going forward, it will continue to build up core technology and inject new vitality into the community, while also actively exploring how Agents technology can be combined with Kuaishou's business to deliver more interesting and valuable applications.