New technology launched: IDEA Research Institute releases the T-Rex model, letting users mark "prompts" directly on the image
Following the popularity of Grounded SAM, the team at IDEA Research Institute is back with another major release: T-Rex, a brand-new visual prompt model that identifies objects from image examples and works right out of the box.
Draw a box, detect, done. At the just-concluded IDEA Conference 2023, Shen Xiangyang (Harry Shum), founding chairman of IDEA Research Institute and a foreign member of the US National Academy of Engineering, demonstrated a new object detection experience based on visual prompts and launched the model playground for the new visual prompt model T-Rex, Interactive Visual Prompt (iVP), which drew crowds of attendees eager to try it on site.
On iVP, users can experience first-hand how "a picture is worth a thousand words": mark the objects of interest on an image to give the model visual examples, and the model detects all similar instances in the target image. The whole process is interactive and takes only a few steps.
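To make the workflow concrete, here is a minimal sketch of this box-prompt-then-detect loop. The `TRexClient` class, its method names, and the box format are hypothetical stand-ins for illustration, not the actual T-Rex interface, which is exposed through the iVP playground:

```python
# Minimal sketch of the iVP workflow: draw boxes on an image as visual
# examples, get back all similar instances. TRexClient and its method
# names are hypothetical stand-ins, not the real T-Rex interface.
from dataclasses import dataclass
from typing import List, Tuple

Box = Tuple[float, float, float, float]  # (x1, y1, x2, y2) in pixel coordinates

@dataclass
class Detection:
    box: Box      # location of a detected instance
    score: float  # confidence that it matches the visual prompt

class TRexClient:
    """Hypothetical wrapper around a visual-prompt detector."""

    def detect(self, image_path: str, prompt_boxes: List[Box]) -> List[Detection]:
        """Return all instances in the image similar to the prompt boxes."""
        # Placeholder: a real implementation would send the image and
        # boxes to the model and parse its predictions.
        return []

# One user-drawn box around an object of interest is enough to start.
client = TRexClient()
results = client.detect("shelf.jpg", prompt_boxes=[(120, 80, 260, 210)])
print(f"{len(results)} similar instances detected")
```

Counting then falls out naturally: the count is simply the number of detections returned.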
Grounded SAM (Grounding DINO + SAM), released by IDEA Research Institute in April, was a hit on GitHub and has accumulated 11K stars to date. Unlike Grounded SAM, which supports only text prompts, the newly released T-Rex model provides a visual prompt capability built around strong interactivity.
T-Rex works out of the box: it can detect objects the model never saw during training, without retraining or fine-tuning. The model applies not only to detection tasks in general, including counting, but also offers a new solution for intelligent interactive annotation scenarios.
The team said the development of visual prompt technology grew out of pain points observed in real-world scenarios. Some partners wanted to use vision models to count goods loaded on trucks, but with text prompts alone the model could not identify each item individually, because objects in industrial scenes are rarely seen in daily life and hard to describe in words. In such cases, visual prompts are clearly the more effective approach. Intuitive visual feedback and strong interactivity also help improve detection efficiency and accuracy.
Based on these insights into real usage requirements, the team designed T-Rex to accept multiple visual prompts and to prompt across images. Beyond the most basic single-round prompt mode, the current model also supports three advanced modes; a sketch of what such prompting could look like follows.
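As an illustration only, the sketch below builds on the hypothetical `TRexClient` above to show the two capabilities the text mentions: multiple visual prompts on one image, and prompting across images. The `detect_cross_image` helper and its signature are assumptions, not a documented T-Rex API:

```python
# Illustrative only: multiple visual prompts and cross-image prompting,
# building on the hypothetical TRexClient sketched above.

ref_boxes = [(30, 40, 95, 120), (200, 55, 270, 140)]  # two visual examples

# Several prompt boxes on one image refine what "similar" means.
same_image = client.detect("warehouse.jpg", prompt_boxes=ref_boxes)

def detect_cross_image(c: TRexClient, prompt_image: str,
                       prompt_boxes: List[Box], target_image: str) -> List[Detection]:
    """Cross-image prompting: examples drawn on a reference image,
    detection run on a different target image. Placeholder wiring; a
    real model would encode the prompts from the reference image and
    match them against the target image."""
    return c.detect(target_image, prompt_boxes)

goods = detect_cross_image(client, "reference.jpg", ref_boxes, "truck.jpg")
print(f"Counted {len(goods)} goods on the truck")
```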
In the technical report released alongside the model, the team summarizes four main features of the T-Rex model.
The research team pointed out that in object detection scenarios, visual prompts can compensate for some of the shortcomings of text prompts. Going forward, combining the two will further unlock the potential of CV technology in more vertical fields.
For technical details of the T-Rex model, please refer to the technical report released at the same time.
iVP model playground: https://deepdataspace.com/playground/ivp
Project page: trex-counting.github.io
This work comes from the Computer Vision and Robotics Research Center of IDEA Research Institute. The team's previously open-sourced detection model DINO was the first DETR-based model to reach first place on the COCO object detection leaderboard; the highly popular zero-shot detector Grounding DINO on GitHub, and Grounded SAM, which can detect and segment any object, are also this team's work.