
Interpretation of TaskMatrix.AI

王林
2023-04-28 15:37

ChatGPT demonstrates impressively powerful dialogue, in-context learning, and code generation capabilities on a wide range of open-domain tasks, and the commonsense knowledge it acquires can also produce high-level solution outlines for domain-specific tasks. But beyond stronger learning, understanding, and generation capabilities, what other problems does ChatGPT still need to solve?

Microsoft recently released TaskMatrix.AI, which may point to another direction for the artificial intelligence ecosystem: connecting foundation models with millions of APIs to complete tasks. It can be seen as a combination of Toolformer and ChatGPT, and may represent another possible future for LLMs.

1. Problem

ChatGPT and GPT-4 still struggle with some professional tasks, either because they lack sufficient domain-specific data during pre-training, or because errors often occur when neural network computation is applied to tasks that require precise execution. On the other hand, there are many existing models and systems (symbolic or neural-network based) that accomplish certain domain-specific tasks very well. However, they are not compatible with foundation models because of different implementations or working mechanisms.

Furthermore, the potential use cases for AI are endless: it can help with a wide variety of tasks not only in the digital world but also in the physical world, from photo manipulation to controlling smart home devices, and much more besides.

Therefore, a mechanism is needed that can leverage the foundation model to propose an outline of a task solution, and then automatically match subtasks in that outline with ready-made models and system APIs that have the required capabilities to complete them. TaskMatrix.AI is such a mechanism.

2. TaskMatrix.AI Overview

TaskMatrix.AI serves a wide variety of tasks by combining base models with existing models and APIs. The following are the tasks that TaskMatrix.AI can perform:

  • TaskMatrix.AI can understand different types of inputs (such as text, images, video, audio, and code), use the foundation model as its core system to perform both digital and physical tasks, and generate code that calls APIs to complete those tasks.
  • TaskMatrix.AI has an API platform that serves as a repository for tasks in various domains. All APIs on this platform have a consistent documentation format, making it easy to use the base model and easy for developers to add new APIs.
  • TaskMatrix.AI has strong lifelong learning capabilities as it can expand its skills to handle new tasks by adding new APIs with specific functionality to the API platform.
  • TaskMatrix.AI's responses are better interpretable because both the task resolution logic (i.e. the operation code) and the results of the API are understandable.

3. TaskMatrix.AI Architecture

The overall architecture of TaskMatrix.AI consists of four main components:

  • Multimodal Conversational Foundation Model (MCFM): responsible for communicating with users, understanding their goals and (multimodal) context, and generating executable, API-based code to accomplish specific tasks.
  • API Platform: Provides a unified API documentation schema to store millions of APIs with varying functionality and allows API developers or owners to register, update, and delete their APIs.
  • API selector: Recommend relevant APIs based on MCFM’s understanding of user commands.
  • API executor: Execute the generated operation code by calling relevant APIs, and return intermediate and final execution results.

[Figure: Overall architecture of TaskMatrix.AI]

These four subsystems work together to enable TaskMatrix.AI to understand user goals and execute API-based code for specific tasks. The Multimodal Conversational Foundation Model (MCFM), as the main interface for communicating with users, understands multimodal context. The API platform provides a unified API documentation schema and a place to store millions of APIs. The API selector uses MCFM's understanding of the user's goals to recommend relevant APIs. Finally, the action executor runs the action code generated against the relevant APIs and returns the results. In addition, the team uses reinforcement learning from human feedback (RLHF) to train a reward model that optimizes TaskMatrix.AI; this helps MCFM and the API selector find better strategies and improves performance on complex tasks.
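To make the interaction among these components concrete, here is a minimal sketch (in Python) of how a request might flow through them; the class and method names are assumptions for illustration, not the actual TaskMatrix.AI implementation.

# A minimal sketch (an assumption, not the actual TaskMatrix.AI implementation) of how
# a request might flow through the four components described above.

class TaskMatrixPipeline:
    def __init__(self, mcfm, api_platform, api_selector, action_executor):
        self.mcfm = mcfm                        # multimodal conversational foundation model
        self.api_platform = api_platform        # repository of API documents
        self.api_selector = api_selector        # narrows millions of APIs down to relevant ones
        self.action_executor = action_executor  # runs the generated action code

    def handle(self, user_instruction, context):
        # 1. MCFM interprets the (possibly multimodal) instruction and drafts a solution outline.
        outline = self.mcfm.understand(user_instruction, context)
        # 2. The API selector retrieves the APIs that match the outline.
        candidate_apis = self.api_selector.select(outline, self.api_platform)
        # 3. MCFM generates executable action code against the selected APIs.
        action_code = self.mcfm.generate_code(outline, candidate_apis)
        # 4. The action executor runs the code and returns intermediate and final results.
        return self.action_executor.run(action_code)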

3.1 Multimodal Conversational Foundation Model (MCFM)

MCFM has four inputs: the foundation model's parameters, the API platform, the user's instructions, and the conversation context. Using these inputs, the model generates action code to fulfil the user's instructions. An ideal MCFM should have the following four main features:

  • Get multi-modal input and generate executable code based on task-specific APIs.
  • Extract specific tasks from user instructions and propose a solution outline.
  • Understand how to use the API from the documentation and match it to a specific task based on common sense and API usage history.
  • Contains an explicit code verification mechanism to confirm reliability and trustworthiness.

ChatGPT and GPT-4 are two examples of models with the capabilities required of an MCFM. However, GPT-4 is more suitable because it supports multimodal input.
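As an illustration of the contract described above, the sketch below shows the four inputs being assembled into a prompt and an action-code string coming back; the prompt layout and the complete() call are assumptions for illustration, not the interface described in the paper.

# Illustrative sketch (an assumption, not the paper's actual interface) of the MCFM
# contract: four inputs in, a string of action code out.

def generate_action_code(mcfm, api_documents, user_instruction, conversation_context):
    """Ask the foundation model to turn a user instruction into executable API calls."""
    prompt = (
        "Available APIs:\n" + "\n".join(api_documents) + "\n"
        "Conversation so far:\n" + conversation_context + "\n"
        "User instruction: " + user_instruction + "\n"
        "Respond with executable code that calls only the APIs listed above."
    )
    # mcfm.complete() stands in for whatever generation interface the model exposes.
    return mcfm.complete(prompt)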

3.2 API Platform

The API platform has two main functions: storing APIs and managing API developers or owners. The API platform has a unified API document template that includes five aspects of each API document:

  • API name: Provides an overview of the API and serves as the entry point for the action executor.
  • Parameter list: Including input parameters and return values, each parameter has a name, description, data type and default value.
  • API Description: Contains information about what the API does, how it works, its inputs and outputs, and potential errors or exceptions.
  • Application example (optional): Show how to use the API.
  • Composition guidance (optional): Provides guidance on how to combine multiple APIs to complete complex user instructions.

API description example: Open a file

API Name: open_local_file
API Parameters: (file_path: string, mode: string = "r")
  - file_path: string, the pathname (absolute or relative to the current working directory) of the file to be opened.
  - mode: string = "r". The mode is an optional string that specifies how the file is opened. It defaults to "r", which means open for reading in text mode; another common value is "w" for writing. The call returns a File object or raises an OSError.
API Description: Open the file and return a corresponding file object. If the file cannot be opened, an OSError is raised.
Usage Example: f = open_local_file("example.txt", "w")
Composition Instructions: open_local_file should be used before reading and editing. The file should be closed with close_local_file after all operations.
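To show how such a document might be stored on the API platform, here is a minimal sketch that captures the five-field schema as structured data; the APIDocument dataclass and its fields are illustrative assumptions, not the platform's real interface.

# A minimal sketch of how the document above could be stored on the API platform
# using the unified five-field schema. The APIDocument dataclass is an illustrative
# assumption, not the platform's real interface.

from dataclasses import dataclass
from typing import Optional

@dataclass
class APIDocument:
    name: str                                        # API name
    parameters: dict                                 # parameter list: name -> (type, default, description)
    description: str                                 # what the API does, inputs/outputs, possible errors
    usage_example: Optional[str] = None              # optional application example
    composition_instructions: Optional[str] = None   # optional guidance for combining APIs

open_local_file_doc = APIDocument(
    name="open_local_file",
    parameters={
        "file_path": ("string", None, "path of the file to be opened"),
        "mode": ("string", '"r"', "open mode; 'r' for reading (default), 'w' for writing"),
    },
    description="Open the file and return a corresponding file object; raises OSError if the file cannot be opened.",
    usage_example='f = open_local_file("example.txt", "w")',
    composition_instructions="Use before reading/editing; close with close_local_file after all operations.",
)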

3.3 API selector

The API selector is designed to identify and select, from the API platform, the APIs that best suit the task requirements. It narrows down the huge number of APIs the platform may hold by retrieving semantically relevant ones, and it can use a module strategy to quickly locate related APIs.

The module strategy is a method of organizing APIs into specific packages or modules based on their domains. Each module corresponds to a specific area, such as visual models, mathematics, specific software, or physical devices. Using this strategy, the API selector can quickly locate relevant APIs that fit the task requirements and the solution outline as understood by MCFM. This approach simplifies the API selection process and makes it easier to retrieve semantically relevant APIs from the API platform.
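A minimal sketch of how such a selector might work, assuming embedding-based retrieval plus a module filter (the paper does not prescribe a specific retrieval implementation, and the embed() callback and doc.module / doc.description attributes are assumptions):

# A minimal sketch of embedding-based API retrieval with a module (domain package)
# filter. The embed() callback and the doc.module / doc.description attributes are
# assumptions; the paper does not prescribe a concrete retrieval implementation.

import numpy as np

def select_apis(task_outline, api_documents, embed, domain=None, top_k=5):
    # Module strategy: restrict the search to APIs in the relevant domain package first.
    candidates = [doc for doc in api_documents if domain is None or doc.module == domain]
    # Rank the remaining API documents by semantic similarity to the task outline.
    query_vec = embed(task_outline)
    scored = [(float(np.dot(query_vec, embed(doc.description))), doc) for doc in candidates]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [doc for _, doc in scored[:top_k]]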

3.4 Action Executor

The action executor is designed to execute action code. TaskMatrix.AI uses it to run a variety of APIs, from simple HTTP requests to complex algorithms or AI models that require multiple input parameters.

Action executors also need a verification mechanism to improve accuracy and reliability, and to confirm whether the results of the generated code meet the tasks specified by humans.
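A minimal sketch of an action executor along these lines; the convention that the generated code stores its output in a variable named result, and the verify() callback, are assumptions for illustration, since the paper only states that some check of the results is needed.

# A minimal sketch of an action executor. It exposes only the selected APIs to the
# generated code, runs the code, and applies a verification step. The `result`
# convention and the verify() callback are assumptions for illustration.

def execute_action_code(action_code, selected_apis, verify):
    namespace = {api.__name__: api for api in selected_apis}  # whitelist the callable APIs
    exec(action_code, namespace)                              # run the generated action code
    result = namespace.get("result")                          # output produced by the code
    if not verify(result):                                    # confirm it satisfies the user's task
        raise RuntimeError("Verification failed: result does not satisfy the instruction")
    return result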

3.5 Reinforcement Learning from Human Feedback (RLHF)

TaskMatrix.AI will leverage RLHF to enhance MCFM and API selectors to provide better performance in complex tasks.

RLHF will be used specifically to optimize the API selector, using a reward model trained on API feedback. In addition:

  • API developers will receive feedback on whether their API is accomplishing the tasks assigned to it.

This will help them write API documentation that describes the most effective way to use a given API.
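A minimal sketch of how execution feedback might be turned into a scalar reward for the selector, assuming a hypothetical reward_model interface and feature fields (the paper only states that such a reward model is trained from API feedback):

# A minimal sketch of turning execution feedback into a scalar reward for the API
# selector. The reward_model interface and the feature fields are illustrative
# assumptions.

def reward_for_episode(reward_model, instruction, selected_apis, execution_result, user_rating):
    features = {
        "instruction": instruction,
        "apis": [api.name for api in selected_apis],
        "succeeded": execution_result is not None,
        "user_rating": user_rating,           # e.g. thumbs up / thumbs down from the user
    }
    return reward_model.score(features)       # scalar reward used to update the selector's policy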

4. Use cases of TaskMatrix.AI

What tasks can TaskMatrix.AI help solve?

TaskMatrix.AI, combined with the continued development of foundation models, cloud services, robotics, and the Internet of Things, has the potential to create a future world of greater productivity and creativity.

4.1 Visualization Task

Thanks to the multimodal capabilities of MCFM, TaskMatrix.AI can perform visual tasks and can take both language and images as input. The image below shows how TaskMatrix.AI builds on Visual ChatGPT and is able to handle VQA tasks better.

[Figure: TaskMatrix.AI built on top of Visual ChatGPT handling a VQA task]

Image editing, such as deleting or replacing objects in an image, can also be done through TaskMatrix.AI. Using image processing techniques or algorithms (Image-to-Sketch/Depth/HED/Line), images can be converted into sketches, depth maps, holistically nested edge detections, or line drawings. Sketch/Depth/HED/Line-to-Image is the reverse: it generates an image from the given input.

The image below shows an example of how TaskMatrix.AI defines and executes a solution outline using three API calls (Image Q&A, Image Captioning, and Replace Objects in Image).

[Figure: A solution outline executed with three API calls: Image Q&A, Image Captioning, and Replace Objects in Image]
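For illustration, the action code generated for this outline might look roughly like the sketch below; the three function names are hypothetical wrappers for the APIs mentioned above, assumed to be injected into the execution namespace by the action executor.

# A rough sketch of the action code MCFM might generate for this outline. The three
# function names are hypothetical wrappers for the APIs mentioned above, assumed to be
# made available by the action executor.

def edit_photo(image_path, new_object):
    # 1. Image Q&A: identify the object the user is asking about.
    target = image_qa(image_path, question="What is the main object in the image?")
    # 2. Image Captioning: describe the scene to provide context for the edit.
    caption = image_captioning(image_path)
    # 3. Replace Objects in Image: swap the detected object for the requested one.
    return replace_object_in_image(image_path, old_object=target,
                                   new_object=new_object, context=caption)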

4.2 Multi-modal long content generation

Another use case for TaskMatrix.AI is the generation of long multimodal (image and text) content, removing the length limitations of other models.

In the example below, we can see how TaskMatrix.AI takes high-level instructions from the user and generates a reasonable response.

[Figure: TaskMatrix.AI generating long multimodal content from a high-level user instruction]

4.3 Office Automation

TaskMatrix.AI can easily reduce office workload by understanding user instructions received through voice and automating tasks. Additionally, it enables the use of complex software without extensive training, allowing employees to focus on more urgent tasks.

The example below shows a conversation between TaskMatrix.AI and someone using different APIs when creating PowerPoint slides.

[Figure: A conversation in which TaskMatrix.AI uses different APIs to create PowerPoint slides]
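For illustration, the action code behind such a conversation might resemble the sketch below; the slide-related function names are hypothetical stand-ins for whatever PowerPoint APIs are registered on the platform, not APIs documented in the paper.

# A rough sketch of action code for the slide-creation conversation above.
# create_presentation, create_slide, add_title, add_bullet_points and
# save_presentation are hypothetical API names.

def build_slide(topic, bullet_points):
    deck = create_presentation()                 # start a new presentation
    slide = create_slide(deck)                   # add an empty slide
    add_title(slide, topic)                      # title taken from the user's instruction
    add_bullet_points(slide, bullet_points)      # content supplied or generated earlier
    save_presentation(deck, topic + ".pptx")     # persist the deck for the user
    return deck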

4.4 Utilization of cloud services

TaskMatrix.AI can act as a smart-home automation hub, communicating with all the devices in a home and serving as the central connection point between them. The image below shows a conversation between a person and TaskMatrix.AI, which uses in-home robot software and hardware to complete daily tasks.

[Figure: A conversation in which TaskMatrix.AI controls in-home robots to complete daily tasks]
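For illustration, a physical-world instruction such as scheduling the cleaning robot might compile into action code like the sketch below; the device API names are hypothetical examples of the kind of IoT and robot APIs TaskMatrix.AI would call.

# A rough sketch of action code for a physical-world instruction such as
# "clean the living room and then dim the lights". start_cleaning_robot,
# wait_until_done and set_light_brightness are hypothetical device APIs.

def evening_routine():
    start_cleaning_robot(room="living room")              # physical task via a robot API
    wait_until_done(device="cleaning_robot")              # block until the robot reports completion
    set_light_brightness(room="living room", level=0.3)   # then dim the lights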

Additionally, TaskMatrix.AI can be used in many other scenarios; the only requirement is that the scenario exposes APIs it can leverage, for example access to the Metaverse or Web3.

5. Challenges of TaskMatrix.AI

TaskMatrix.AI still has quite a few shortcomings and limitations that need to be addressed and dealt with, for example:

  • A foundation model needs to be built that can handle a variety of tasks and a variety of inputs, learn from human feedback, and use commonsense reasoning to complete tasks with the highest quality. Determining the minimum set of modalities TaskMatrix.AI requires, and training such a model, remains challenging.
  • Creating and maintaining a platform that hosts millions of APIs requires solving several challenges, including API documentation generation, API quality assurance, and recommendations for API creation. On this basis, the API platform should also provide guidance that helps API developers create new APIs for tasks that are not yet covered.
  • Leveraging millions of APIs to complete user instructions brings new challenges beyond free-text generation; recommending the relevant APIs that let MCFM solve a specific task is critical. For complex tasks, TaskMatrix.AI may not be able to come up with a solution immediately; instead, MCFM should interact with the user and try different possible solutions to find the most suitable one.
  • Security and privacy can be an issue: it must be verified that the model completes the user's instructions and does nothing beyond the user's intent. Data transfer should be secure, and access to sensitive data must be properly authorized when integrating APIs from different domains.
  • TaskMatrix.AI needs a personalization strategy, both to help individual developers build their own personalized AI interfaces and to give users their own personal assistants. Reducing the cost of scaling and aligning the system with only a small number of examples from each user remain challenges.

6. Summary

Looking back at Moore's Law, perhaps, "the number of AIs doubles every 18 months" will become a new law.

TaskMatrix.AI integrates underlying models with millions of existing models and system APIs, resulting in a “super artificial intelligence” capable of performing a variety of digital and physical tasks. As an AI platform, it allows humans to utilize large models and APIs to perform a large number of diverse tasks. It can handle every common task (for example, making PPT slides or running a cleaning robot to clean the house on a schedule), making us more productive and creative.

[Reference]

TaskMatrix.AI: Completing Tasks by Connecting Foundation Models with Millions of APIs, https://arxiv.org/pdf/2303.16434.pdf


