Interpretation of TaskMatrix.AI
ChatGPT demonstrates impressively powerful dialogue, in-context learning, and code generation capabilities on a wide range of open-domain tasks, and the commonsense knowledge it acquires also lets it produce high-level solution outlines for domain-specific tasks. However, beyond stronger learning, understanding, and generation capabilities, what other problems does ChatGPT still need to solve?
Microsoft recently released TaskMatrix.AI, which may point to another direction for the artificial intelligence ecosystem: connecting foundation models with millions of APIs to complete tasks. It can be seen as a combination of Toolformer and ChatGPT, and perhaps another possible future for LLMs.
ChatGPT and GPT-4 still struggle with some professional tasks, either because they lack sufficient domain-specific data during pre-training, or because errors often occur in neural-network computation when a task requires precise execution. On the other hand, many existing models and systems (symbolic or neural-network based) already accomplish certain domain-specific tasks very well. However, due to different implementations or working mechanisms, they are not directly compatible with foundation models.
Furthermore, the use cases for AI are nearly endless: it can help not only in the digital world but also in the physical world, with tasks ranging from photo manipulation to controlling smart home devices, far beyond what one might imagine.
Therefore, a mechanism is needed that can use the foundation model to propose an outline of a task solution, and then automatically match subtasks in the outline to ready-made models and system APIs with the required capabilities to complete them. TaskMatrix.AI is such a mechanism.
TaskMatrix.AI serves a wide variety of tasks by combining foundation models with existing models and APIs. Its overall architecture consists of four main components: the multimodal conversational foundation model (MCFM), the API platform, the API selector, and the action executor.
These four subsystems work together so that TaskMatrix.AI can understand user goals and generate and run API-based executable code for specific tasks. The multimodal conversational foundation model (MCFM), as the main interface for communicating with users, can understand multimodal contexts. The API platform provides a unified API documentation schema and a place to store millions of APIs. The API selector uses the MCFM's understanding of the user's goals to recommend relevant APIs. Finally, the action executor runs the action code generated for the selected APIs and returns the results. In addition, the team uses reinforcement learning from human feedback (RLHF) to train a reward model that optimizes TaskMatrix.AI as a whole, helping the MCFM and API selector find better strategies and improving performance on complex tasks.
The MCFM has four inputs: the parameters of the foundation model, the API platform, the user instruction, and the conversation context. Using these inputs, the model generates action code to fulfill the user's instruction. An ideal MCFM should have four main capabilities. ChatGPT and GPT-4 are two examples of models with the capabilities required for an MCFM; GPT-4 is the more suitable choice because it supports multimodal input.
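To make the interface concrete, below is a minimal sketch of how an MCFM could turn those four inputs into action code. The function name, prompt format, and ConversationContext type are illustrative assumptions, not the paper's implementation; the foundation model is represented here simply as a callable.
<code>
# Minimal sketch of the MCFM interface: four inputs in, action code out.
# All names here are hypothetical placeholders.
from dataclasses import dataclass, field

@dataclass
class ConversationContext:
    history: list = field(default_factory=list)      # prior (role, text) turns
    attachments: list = field(default_factory=list)  # e.g. image file paths

def mcfm_generate_action_code(foundation_model, api_docs, user_instruction, context):
    """Combine the four inputs (model, API platform docs, user instruction,
    conversation context) into one prompt and ask the model for action code."""
    prompt = (
        "You can call the following APIs:\n" + "\n".join(api_docs)
        + "\nConversation so far:\n"
        + "\n".join(f"{role}: {text}" for role, text in context.history)
        + f"\nUser instruction: {user_instruction}\n"
        + "Write action code that completes the instruction."
    )
    return foundation_model(prompt)  # expected to return a string of action code

# Example with a stand-in model that just echoes a canned action.
fake_model = lambda prompt: 'caption = image_captioning("photo.png")'
code = mcfm_generate_action_code(fake_model,
                                 ["image_captioning(image_path) -> str"],
                                 "Describe photo.png",
                                 ConversationContext())
print(code)
</code>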
The API platform has two main functions: storing APIs and managing API developers or owners. It defines a unified API documentation schema covering five aspects of each API: the API name, the parameters, the description, a usage example, and composition instructions.
An example API document for opening a local file:
<code>
API Name: open_local_file
API Parameter: (file_path: string, mode: string = "r"). file_path: the pathname (absolute or relative to the current working directory) of the file to be opened. mode: an optional string that specifies the mode in which the file is opened; it defaults to "r", which means open for reading in text mode. Another common value is "w" for writing.
API Description: Open the file and return a corresponding file object. If the file cannot be opened, an OSError is raised.
Usage Example: f = open_local_file("example.txt", "w")
Composition Instructions: open_local_file should be used before reading and editing. The file should be closed by close_local_file after all operations.
</code>
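Mirroring the five fields shown in that example, the unified schema could be represented in code roughly as follows. This is only a sketch; the APIDocument class and its field names are assumptions based on the example above, not an official schema.
<code>
from dataclasses import dataclass

@dataclass
class APIDocument:
    """Unified API documentation schema, mirroring the five aspects
    shown in the open_local_file example."""
    name: str                      # API Name
    parameters: str                # API Parameter: signature and argument notes
    description: str               # API Description: behaviour and errors
    usage_example: str             # Usage Example: a one-line invocation
    composition_instructions: str  # Composition Instructions: ordering constraints

open_local_file_doc = APIDocument(
    name="open_local_file",
    parameters='(file_path: string, mode: string = "r")',
    description="Open the file and return a corresponding file object; "
                "an OSError is raised if the file cannot be opened.",
    usage_example='f = open_local_file("example.txt", "w")',
    composition_instructions="Use before reading and editing; close the file "
                             "with close_local_file after all operations.",
)
</code>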
The API selector is designed to identify and select, from the API platform, the APIs that best fit the task requirements. Because the platform may host a huge number of APIs, the selector narrows them down by retrieving semantically relevant ones, and it can use a module strategy to quickly locate related APIs.
The module strategy organizes APIs into packages or modules by domain, where each module corresponds to a specific area such as visual models, mathematics, specific software, or physical devices. Using this strategy, the API selector can quickly locate the relevant APIs that fit the task requirements and the solution outline understood by the MCFM. This simplifies API selection and makes it easier to retrieve semantically relevant APIs from the API platform.
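The toy sketch below illustrates the idea: APIs are grouped into domain modules, and a crude keyword-overlap score stands in for real semantic retrieval. The module names, API names, and scoring function are all illustrative assumptions.
<code>
# Toy API selector: modules group APIs by domain, and candidates within a
# module are ranked by overlap with the solution outline (a stand-in for
# proper semantic retrieval).
API_MODULES = {
    "visual": ["image_qa", "image_captioning", "replace_object_in_image"],
    "office": ["create_slide", "insert_text_into_slide"],
    "file":   ["open_local_file", "close_local_file"],
}

def select_apis(solution_outline: str, module: str, top_k: int = 3) -> list:
    """Rank the APIs inside one module by word overlap with the outline."""
    outline_words = set(solution_outline.lower().split())

    def score(api_name: str) -> int:
        return len(outline_words & set(api_name.lower().split("_")))

    candidates = API_MODULES.get(module, [])
    return sorted(candidates, key=score, reverse=True)[:top_k]

# Example: the MCFM decided the task is visual and wants an object replaced.
print(select_apis("replace the dog in the image", module="visual"))
</code>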
The action executor is designed to run the generated action code. TaskMatrix.AI uses the action executor to invoke a wide variety of APIs, from simple HTTP requests to complex algorithms or AI models that require multiple input parameters.
The action executor also needs a verification mechanism to improve accuracy and reliability and to confirm that the results of the generated code satisfy the task specified by the human.
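A minimal sketch of such an executor is shown below: it runs generated action code in a restricted namespace that exposes only registered APIs, then applies a simple verification step. The registry, placeholder API, and check are illustrative assumptions rather than the paper's mechanism.
<code>
# Toy action executor with a simple verification step.
def image_captioning(image_path: str) -> str:          # placeholder API
    return f"a caption for {image_path}"

API_REGISTRY = {"image_captioning": image_captioning}

def execute_action_code(action_code: str, expected_vars: list) -> dict:
    """Execute generated code with only registered APIs visible, then verify
    that the expected result variables were actually produced."""
    namespace = dict(API_REGISTRY)
    exec(action_code, {"__builtins__": {}}, namespace)   # restricted execution
    missing = [v for v in expected_vars if v not in namespace]
    if missing:
        raise RuntimeError(f"verification failed, missing results: {missing}")
    return {v: namespace[v] for v in expected_vars}

# Example: run a one-line action the model might have produced.
results = execute_action_code('caption = image_captioning("photo.png")',
                              expected_vars=["caption"])
print(results["caption"])
</code>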
TaskMatrix.AI leverages RLHF to enhance the MCFM and the API selector so that they perform better on complex tasks.
RLHF is used in particular to optimize the API selector, using a reward model trained on feedback about API usage.
This feedback can also guide API developers toward writing documentation in the way that best helps the model use a given API.
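As a rough intuition for how feedback on API choices could feed back into selection, here is a toy sketch. The tabular reward and incremental update rule are illustrative only; the paper trains a learned reward model with RLHF rather than a lookup table.
<code>
# Toy feedback loop: human scores on API choices nudge a reward estimate,
# which is then used to re-rank candidate APIs for similar tasks.
from collections import defaultdict

reward = defaultdict(float)   # (task_type, api_name) -> estimated reward
LEARNING_RATE = 0.1

def record_feedback(task_type: str, api_name: str, human_score: float) -> None:
    """Move the reward estimate toward the human-provided score (e.g. -1..1)."""
    key = (task_type, api_name)
    reward[key] += LEARNING_RATE * (human_score - reward[key])

def rerank_apis(task_type: str, candidates: list) -> list:
    """Prefer APIs that received better human feedback for this task type."""
    return sorted(candidates, key=lambda api: reward[(task_type, api)], reverse=True)

record_feedback("image_editing", "replace_object_in_image", 1.0)
record_feedback("image_editing", "image_qa", -0.5)
print(rerank_apis("image_editing", ["image_qa", "replace_object_in_image"]))
</code>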
What tasks can TaskMatrix.AI help solve?
Combined with continued progress in foundation models, cloud services, robotics, and the Internet of Things, TaskMatrix.AI has the potential to create a future of increased productivity and creativity.
Thanks to the multimodal nature of the MCFM, TaskMatrix.AI can take both language and images as input and perform visual tasks. The figure below shows how TaskMatrix.AI builds on VisualChatGPT and handles visual question answering (VQA) tasks better.
Image editing, such as deleting or replacing objects in an image, can also be done through TaskMatrix.AI. Using image-processing techniques or computer-vision models, Image-to-Sketch/Depth/Hed/Line converts an image into a sketch, a depth map, holistically-nested edge detection output, or line art; Sketch/Depth/Hed/Line-to-Image does the opposite, generating an image from the given input.
The figure below shows an example of how TaskMatrix.AI defines and executes a solution outline using three API calls (Image Q&A, Image Captioning, and Replace Objects in Image).
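The action code for such an outline might look like the sketch below, which simply composes the three APIs mentioned above. The function names, signatures, and placeholder implementations are hypothetical; real calls would be backed by vision models.
<code>
# Placeholder implementations standing in for real model-backed APIs.
def image_qa(image_path, question):
    return f"answer about {image_path}"

def image_captioning(image_path):
    return f"a caption for {image_path}"

def replace_object_in_image(image_path, target, replacement):
    return f"{image_path} with {target} replaced by {replacement}"

def handle_image_request(image_path, question, target_object, replacement):
    """Compose the three APIs from the solution outline."""
    answer = image_qa(image_path, question)                  # Image Q&A
    caption = image_captioning(image_path)                   # Image Captioning
    edited = replace_object_in_image(image_path,             # Replace Objects in Image
                                     target_object, replacement)
    return {"answer": answer, "caption": caption, "edited_image": edited}

print(handle_image_request("photo.png", "what is in the picture?", "dog", "cat"))
</code>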
Another use case for TaskMatrix.AI is the creation of long multimodal (image and text) content, removing the character-count limitations of other models.
In the example below, we can see how TaskMatrix.AI takes high-level instructions from the user and generates a reasonable response.
TaskMatrix.AI can easily reduce office workload by understanding user instructions received through voice and automating tasks. Additionally, it enables the use of complex software without extensive training, allowing employees to focus on more urgent tasks.
The example below shows a conversation between TaskMatrix.AI and someone using different APIs when creating PowerPoint slides.
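In that scenario, the generated action code might resemble the sketch below. The slide APIs (create_slide, insert_text, save_presentation) are hypothetical placeholders for whatever a real PowerPoint module on the API platform would expose.
<code>
# Toy PowerPoint-style APIs and the action code the model might emit for
# "make a two-slide deck about TaskMatrix.AI".
slides = []

def create_slide(title: str) -> dict:
    slide = {"title": title, "body": []}
    slides.append(slide)
    return slide

def insert_text(slide: dict, text: str) -> None:
    slide["body"].append(text)

def save_presentation(path: str) -> str:
    return f"saved {len(slides)} slides to {path}"

intro = create_slide("TaskMatrix.AI")
insert_text(intro, "Connecting foundation models with millions of APIs")
arch = create_slide("Architecture")
insert_text(arch, "MCFM, API platform, API selector, action executor")
print(save_presentation("taskmatrix_overview.pptx"))
</code>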
TaskMatrix.AI can also drive smart home automation, communicating with all the devices in a home and acting as the central hub that connects them. The image below shows a conversation between a person and TaskMatrix.AI, which uses home robot software and hardware to complete daily tasks.
Additionally, TaskMatrix.AI can be used in many other scenarios; the only requirement is that the scenario exposes APIs it can leverage, such as access to the metaverse or Web3.
TaskMatrix.AI still has quite a few shortcomings and limitations that need to be addressed.
Looking back at Moore's Law, perhaps "the number of AIs doubles every 18 months" will become a new law.
TaskMatrix.AI integrates foundation models with millions of existing models and system APIs, resulting in a "super AI" capable of performing a wide variety of digital and physical tasks. As an AI platform, it lets humans harness large models and APIs to carry out a large number of diverse tasks. It can handle many common tasks (for example, making PPT slides or scheduling a cleaning robot to clean the house), making us more productive and creative.
Reference
TaskMatrix.AI: Completing Tasks by Connecting Foundation Models with Millions of APIs, https://arxiv.org/pdf/2303.16434.pdf