# Mind-blowing! The HuggingGPT online demo makes a stunning debut, and netizens test its image generation firsthand

The strongest combination, HuggingFace + ChatGPT = "Jarvis", is now open for demo.
Some time ago, Zhejiang University and Microsoft released HuggingGPT, a large model collaboration system that became an instant hit.
The researchers proposed using ChatGPT as a controller to connect various AI models in the HuggingFace community to complete multi-modal complex tasks.
In the entire process, all you need to do is: output your requirements in natural language.
An NVIDIA scientist said it was the most interesting paper he had read that week, and that its idea is very close to the "Everything App" he had described before: everything is an app, and information is read directly by AI.
Now, HuggingGPT has added a Gradio demo.
## Project address: https://github.com/microsoft/JARVIS
Some netizens started to try it out. First, let's ask it to "identify how many people are in the picture."

Based on the inference results, HuggingGPT concluded that there are two people walking on the street in the picture.
The specific process is as follows:
First, the image-to-text model nlpconnect/vit-gpt2-image-captioning is used for image captioning, generating the text "Two women walking on a street with a train."

Next, the object detection model facebook/detr-resnet-50 is used to detect the number of people in the picture. The model detected 7 objects, 2 of which were people.

Then the visual question answering model dandelin/vilt-b32-finetuned-vqa is used to obtain the result. Finally, the system provides a detailed response along with information about the models used to answer the question.
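The chain above can be sketched as follows. This is a minimal mock, not the actual HuggingGPT code: the model outputs are hard-coded stand-ins for real inference calls, and the response logic is illustrative.

```python
# Illustrative sketch: how a controller might combine the outputs of the
# captioning and detection models named above to answer "how many people
# are in the picture". Outputs are hard-coded stand-ins for real inference.

def answer_people_count(caption: str, detections: list[dict]) -> str:
    """Combine an image caption and object detections into an answer."""
    people = [d for d in detections if d["label"] == "person"]
    return (f"There are {len(people)} people in the picture. "
            f"Caption: {caption!r}")

# Stand-in results, mirroring what the article reports:
caption = "Two women walking on a street with a train."
detections = [
    {"label": "person", "score": 0.99},
    {"label": "person", "score": 0.98},
    {"label": "train", "score": 0.97},
    # (the article reports 7 objects in total; only 3 are shown here)
]

print(answer_people_count(caption, detections))
```

In the real system these outputs would come from HuggingFace inference endpoints, and ChatGPT, not a hand-written function, would compose the final answer.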
Next, it was asked to understand the sentiment of the sentence "I love you" and translate it into Tamil (Tamiḻ).

HuggingGPT called the following models:

First, the model "dslim/bert-base-NER" was used to classify the emotion of the text "I love you" as "romantic".

Then, "ChatGPT" was used to translate the text into Tamil: "Nan unnai kadalikiren".
No generated image, audio, or video files appear in the inference results.
HuggingGPT failed when transcribing an MP3 file. One netizen said, "Not sure if this is a problem with my input file."
Let's look at the image generation capabilities.
Enter "A dancing cat" and ask it to add the text "I LOVE YOU" as an overlay on the image.

HuggingGPT first used the "runwayml/stable-diffusion-v1-5" model to generate a picture of a "dancing cat" from the given text.

Then, the same model was used to generate an image of the text "I LOVE YOU".

Finally, the two images were merged to produce the following picture:
A few days after release, Jarvis had already received 12.5k stars and 811 forks on GitHub.
Researchers pointed out that solving the current problems of large language models (LLMs) may be the first and crucial step towards AGI.
Because current large language model technology still has shortcomings, there are pressing challenges on the road to building AGI systems.
To handle complex AI tasks, LLMs should be able to coordinate with external models to leverage their capabilities.
Therefore, the key question is how to choose suitable middleware to bridge LLMs and AI models.

In the paper, the researchers propose that in HuggingGPT, language serves as a universal interface. The workflow is divided into four main steps:
Paper address: https://arxiv.org/pdf/2303.17580.pdf
The first is task planning. ChatGPT parses the user request, breaks it down into multiple tasks, and plans the task sequence and dependencies based on its knowledge.

Next comes model selection. The LLM assigns the parsed tasks to expert models based on the model descriptions on HuggingFace.

Then comes task execution. The expert models execute their assigned tasks on inference endpoints, and the execution information and inference results are returned to the LLM.

Finally, the response is generated. The LLM summarizes the execution logs and inference results and returns the summary to the user.
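The four stages can be sketched as a minimal controller loop. Everything here is mocked for illustration: in the real system, planning and model selection are done by ChatGPT and execution runs on HuggingFace inference endpoints.

```python
# Minimal sketch of the four-stage HuggingGPT workflow with mocked components.
# The planner, model zoo, and "inference" functions are all illustrative stubs.

# A tiny "model zoo": task type -> (model id, mocked inference function)
MODEL_ZOO = {
    "image-to-text": ("nlpconnect/vit-gpt2-image-captioning",
                      lambda x: "Two women walking on a street with a train."),
    "object-detection": ("facebook/detr-resnet-50",
                         lambda x: ["person", "person", "train"]),
}

def plan_tasks(request: str) -> list[str]:
    """Stage 1, task planning: parse the request into a task list (stubbed;
    the real system uses ChatGPT for this)."""
    tasks = []
    if "describe" in request:
        tasks.append("image-to-text")
    if "how many" in request:
        tasks.append("object-detection")
    return tasks

def run(request: str, image: str) -> str:
    results = {}
    for task in plan_tasks(request):              # stage 1: task planning
        model_id, model = MODEL_ZOO[task]         # stage 2: model selection
        results[task] = (model_id, model(image))  # stage 3: task execution
    # Stage 4, response generation: summarize logs and inference results.
    lines = [f"{t}: {out!r} (via {mid})" for t, (mid, out) in results.items()]
    return "\n".join(lines)

print(run("describe the image and how many people are there", "street.jpg"))
```

The point of the sketch is the division of labor: the controller never runs inference itself; it only plans, dispatches, and summarizes.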
If such a request is given:
Please generate a picture of a girl reading a book, with the same pose as the boy in example.jpg. Then use your voice to describe the new image.

You can see how HuggingGPT decomposes it into 6 subtasks and selects a model to execute each, obtaining the final result.
By incorporating AI model descriptions into prompts, ChatGPT can be considered the brain that manages AI models. Therefore, this method allows ChatGPT to call external models to solve practical tasks.
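Concretely, "incorporating model descriptions into prompts" might look like the following. This is a hypothetical prompt template, not the actual prompts from the paper; the descriptions are illustrative.

```python
# Hypothetical sketch: build a planning prompt that embeds model descriptions,
# so the LLM can select an expert model by name. Template and descriptions
# are illustrative, not the actual HuggingGPT prompts.

MODEL_DESCRIPTIONS = {
    "facebook/detr-resnet-50":
        "Object detection: finds and labels objects in an image.",
    "dandelin/vilt-b32-finetuned-vqa":
        "Visual question answering: answers questions about an image.",
}

def build_planning_prompt(user_request: str) -> str:
    catalog = "\n".join(f"- {name}: {desc}"
                        for name, desc in MODEL_DESCRIPTIONS.items())
    return (
        "You are a controller that plans tasks and selects expert models.\n"
        f"Available models:\n{catalog}\n"
        f"User request: {user_request}\n"
        "Reply with the ordered list of models to call."
    )

print(build_planning_prompt("How many people are in photo.jpg?"))
```

Because the model catalog lives in the prompt rather than in the model weights, new expert models can be added without retraining anything.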
To put it simply, HuggingGPT is a collaboration system, not a large model.
Its function is to connect ChatGPT and HuggingFace to process input in different modalities and solve many complex artificial intelligence tasks.
So, every AI model in the HuggingFace community has a corresponding model description in the HuggingGPT library, which is integrated into the prompt to establish a connection with ChatGPT.
HuggingGPT then uses ChatGPT as the brain to determine the answer to the question.
So far, HuggingGPT has integrated hundreds of HuggingFace models around ChatGPT, covering 24 tasks such as text classification, object detection, semantic segmentation, image generation, question answering, text-to-speech, and text-to-video.
Experimental results show that HuggingGPT performs well on complex tasks across various modalities.
Some netizens noted that HuggingGPT is similar to the Visual ChatGPT previously proposed by Microsoft; it seems the original idea has been expanded onto a huge collection of pretrained models.

Visual ChatGPT is built directly on ChatGPT, with many visual foundation models (VFMs) injected into it. The paper proposes a Prompt Manager.

With the help of the Prompt Manager, ChatGPT can utilize these VFMs and receive their feedback iteratively, until the user's requirements are met or an end condition is reached.
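The iterative loop described here can be sketched as follows. The VFM is mocked and the names are hypothetical; this illustrates only the control flow, not the real Visual ChatGPT implementation.

```python
# Sketch of Visual ChatGPT's iterative pattern: keep invoking visual
# foundation models (VFMs) and feeding results back until an end condition
# is reached. The VFM and the end condition are mocked for illustration.

def mock_vfm(state: int) -> int:
    """Stand-in VFM: each call refines the intermediate result a little."""
    return state + 1

def iterate_until_done(max_rounds: int = 5, target: int = 3) -> tuple[int, int]:
    """Invoke the VFM repeatedly, checking the end condition each round."""
    state, rounds = 0, 0
    while rounds < max_rounds:
        state = mock_vfm(state)  # invoke a VFM and take its feedback
        rounds += 1
        if state >= target:      # user requirement / end condition met
            break
    return state, rounds

print(iterate_until_done())  # → (3, 3)
```

The `max_rounds` cap matters in practice: without it, a loop driven by an LLM's own judgment of "done" could iterate indefinitely.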
Some netizens believe this idea is very promising: using an LLM as the center for semantic understanding and task planning can greatly extend its capabilities. By combining an LLM with other functional or domain experts, we can create more powerful and flexible AI systems that better adapt to a variety of tasks and needs.
This is how I have always imagined AGI: an AI model that can understand complex tasks and then assign smaller subtasks to other, more specialized AI models.

Just like the brain, which also has different regions for specific tasks; it sounds logical.