CREATOR makes and uses tools to realize LLM's 'self-evolution'
Since ancient times, the use of tools has been regarded as a key difference between humans and other species, and as a fundamental manifestation of intelligence. Today, artificial intelligence is no longer limited to simply using tools: models can already creatively build their own tools to solve the problems they face. In terms of thinking, this means that current large models can master higher-level abstract reasoning and separate it from concrete reasoning, with the two working together to solve problems. In terms of capability, the emergence of tool creation means that a model can transform what it has "learned" into the ability to "create", opening up endless possibilities.
In recent years, large language models (LLMs) have made significant research progress, including GPT-3, Codex, PaLM, LLaMA, ChatGPT, and the recently released GPT-4. These models excel at in-context learning, code generation, and a variety of other natural language processing tasks, pushing their potential further toward general artificial intelligence.
Despite their success in these areas, large models still have many shortcomings: they cannot recognize or answer questions about the latest real-time information, they struggle to achieve high accuracy in large-scale numerical computation, and their reasoning becomes unstable when a question's logic is complex. In response, researchers have begun introducing the ability to use external resources into current model architectures, such as calculators, question-answering systems, Wikipedia, and other external knowledge sources. This line of research laid the foundation for the tool learning ability of models.
However, the number of external tools used in current research is still limited, while the range of potential new task types is almost endless. When faced with a new type of problem, it is therefore often difficult to find an existing tool suited to solving it. Moreover, even when effective tools are available, the model must perform extensive searching, matching, and problem-specific planning over toolkit documentation, which imposes a heavy cognitive load and a high learning cost.
The research team therefore proposed a new paradigm: tool creation. Rather than merely having large models use tools, it adds a tool creation module, allowing the model to build its own tools and find solutions to the problems it faces.
Using large models to create tools increases the ubiquity, reusability, and diversity of tools beyond the limitations of any given API. The tool creation module also reduces the cognitive load on the model by decoupling abstract reasoning (creating generalizable, universal tools) from concrete reasoning (making decisions based on tool implementation details and usage documentation). At the same time, the framework uses code as the medium of tool creation, which makes the model more sensitive to errors and allows it to backtrack and correct itself based on problems encountered during tool creation and use.
The tool creation paradigm is more flexible than tool use and has stronger adaptability to different scenarios
CREATOR is a framework that uses large models to create tools to solve problems. It consists of four stages: creation, decision, execution, and rectification.
Process framework for tool creation and decision-making using large models
The large model first creates the required tools and their accompanying documentation based on the problem. The problem and the tool information are then fed back to the model together, so that it can decide on a solution and on how to use the tools. Finally, the model adjusts both the tools and its decisions according to the execution results, to better fit the problem and reach an answer.
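The loop described above can be sketched in a few lines of Python. This is a minimal, illustrative skeleton, not the paper's implementation: the `llm` function is a hypothetical stand-in for a real large-model API call and returns canned strings here, and the toy task (computing the area of a square) is invented for the example.

```python
def llm(prompt: str) -> str:
    """Placeholder for a real LLM API call; returns canned strings here."""
    if prompt.startswith("Create a tool"):
        # The "created tool" is a piece of code, as in the framework.
        return "def area_of_square(side):\n    return side * side"
    if prompt.startswith("Decide"):
        # The "decision" is a concrete call into the created tool.
        return "area_of_square(5)"
    return prompt  # rectification path, unused in this toy example

def creator_solve(problem: str, max_rounds: int = 2):
    tool_code = llm(f"Create a tool for: {problem}")           # stage 1: creation
    call = llm(f"Decide how to call the tool for: {problem}")  # stage 2: decision
    for _ in range(max_rounds):
        ns = {}
        try:
            exec(tool_code, ns)       # stage 3: execution of the created tool
            return eval(call, ns)
        except Exception as err:
            # stage 4: rectification — feed the error back so the model
            # can revise the tool and try again
            tool_code = llm(f"Fix the tool given error '{err}':\n{tool_code}")
    return None
```

Because the tool is code, a failed execution produces a concrete error message that can be fed back to the model, which is what makes the rectification stage possible.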
The framework flexibly exploits the different reasoning abilities of large models: abstract reasoning to extract the key information of a problem, concrete reasoning to make decisions about the implementation plan, and self-rectifying reasoning that seeks solutions based on observed errors. Decoupling these capabilities helps large models avoid the failures caused by entangling everything in an ordinary reasoning chain (Chain-of-Thought, CoT), and effectively improves their adaptability and task performance.
The author compares the CREATOR framework with the common chain-of-thought method (CoT), the program-of-thought method (Program-of-Thought, PoT), and simple tool use without creation. To verify the benefit of separating abstract and concrete reasoning, the author also introduces Tool Create - whole as a baseline, which merges the creation and decision stages of CREATOR into one, without decoupling the reasoning capabilities.
Creation Challenge dataset: a problem, standard tool, and decision-making example
The performance of the CREATOR framework on the MATH dataset is higher than that of other reasoning methods and simple tool use
For evaluation, the author chose the MATH and TabMWP datasets as the main benchmarks. The former contains difficult problems from American mathematics competitions, while the latter pairs problems with rich data tables; both test the model's reasoning and problem-solving in diverse scenarios. In addition, the author introduces a newly constructed Creation Challenge dataset, whose problems cannot be solved directly by existing tools or code packages, thus testing the model's tool creation ability.
The CREATOR framework is also significantly stronger on the TabMWP dataset and on Creation Challenge
The experimental results show that CREATOR's reasoning performance is significantly better than all baselines, especially the standard reasoning and program reasoning methods. The experiments also confirm that decoupling abstract and concrete reasoning effectively helps the model improve accuracy. On the Creation Challenge test set, the author further verifies that the model solves problems more reliably when given hints about what tools to create; hints and the decoupling of reasoning are therefore both important factors in tool creation.
Accuracy statistics of different methods for task difficulty
Performance improves with the participation of the rectification stage
The author also examines how the different methods perform as task difficulty rises, and how the number of rectification rounds relates to the improvement of the model's results. The results show that CREATOR remains more robust on difficult problems, and that the rectification stage greatly improves not only CREATOR but even the PoT reasoning method, confirming the rationale and effectiveness of introducing rectification.
Beyond the main experiments, the author also examines other advantages of tool creation and the tool creation abilities that current large models exhibit in different forms. Since the created artifact is a tool, one of its key advantages should be reusability. Following this idea, the author further demonstrates how reusing tools improves task performance.
The author designed 300 questions, divided into 100 groups of three. The three questions in each group have different scenarios but involve the same core knowledge (Core Knowledge), i.e., they are similar questions. The author then verified whether a tool created for one problem in a group can be applied to all scenarios in that group and effectively improve accuracy.
Tools created by large models can be transferred to other problems, effectively improving accuracy
The statistics show that transferring a correct, usable tool created by the model to other similar problem scenarios effectively improves problem-solving accuracy. This indicates that the tools created by large models are highly reusable and generalize well to similar problems.
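The reuse idea can be illustrated with a small sketch. The "created tool" and the three scenarios below are invented for illustration: the tool is created once (as code), then applied to three surface-different problems that share the same core knowledge (a quantity divided by a rate), mirroring the grouped-questions setup described above.

```python
# A tool the model might create once for "how long to fill a pool":
created_tool = """
def time_required(quantity, rate):
    return quantity / rate
"""

# Load the created tool exactly as the framework would: by executing its code.
ns = {}
exec(created_tool, ns)
time_required = ns["time_required"]

# Three scenarios with different surface stories but the same core knowledge:
problems = [
    (60.0, 12.0),   # pool: 60 m^3 filled at 12 m^3 per hour
    (300.0, 50.0),  # download: 300 MB at 50 MB per second
    (90.0, 45.0),   # travel: 90 km at 45 km per hour
]
answers = [time_required(q, r) for q, r in problems]
print(answers)  # [5.0, 6.0, 2.0]
```

One correct tool thus answers all three questions in the group, which is the sense in which reusability improves accuracy on similar problems.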
The author also identifies three dimensions of tool creation by large models: encapsulating existing tools for new purposes, combining different tools to achieve a target function, and creating hierarchical tools. These dimensions, from low to high, demonstrate the tool creation capabilities of current large models, and they help the models adapt to different scenarios more efficiently.
Three dimensions of tool creation for large models
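A toy sketch of what the three dimensions can look like in code. The geometry functions here are invented examples, not from the paper; each layer builds on the one below it.

```python
import math

# Dimension 1: encapsulation — wrap an existing tool (math.pi, the power
# operator) into a tool serving a new purpose.
def circle_area(radius):
    return math.pi * radius ** 2

# Dimension 2: combination — combine existing tools to achieve a target
# function (the area of a ring is the difference of two circle areas).
def ring_area(outer, inner):
    return circle_area(outer) - circle_area(inner)

# Dimension 3: hierarchy — a higher-level tool built on the lower layers,
# operating over many instances at once.
def total_ring_area(rings):
    return sum(ring_area(outer, inner) for outer, inner in rings)
```

Each created tool becomes a building block for the next, which is why higher dimensions of tool creation let a model adapt to richer scenarios without starting from scratch.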
By enabling tool creation, the CREATOR framework decouples the abstract and concrete reasoning abilities of large models, marking another major step in exploring the boundaries of model capability after tool learning. Future research will likely build on this foundation, continuing to demonstrate and strengthen the potential of models to use and create tools, and bringing us more surprises.
Qian Cheng is a third-year undergraduate student at Tsinghua University and a member of the THUNLP laboratory, advised by Liu Zhiyuan. His current research interests include large model pre-training, efficient fine-tuning of large models, and tool learning. He has received Tsinghua University's Outstanding Comprehensive Scholarship and has published papers as a co-author at international conferences such as EMNLP and ACL.
Personal homepage: https://qiancheng0.github.io/