
The endgame of video segmentation? Zhejiang University releases SAM-Track: universal intelligent video segmentation in one click

WBOY | 2023-05-23

Recently, the ReLER Lab at Zhejiang University combined SAM with video object segmentation and released Segment-and-Track Anything (SAM-Track).

SAM-Track gives SAM the ability to track video targets and supports multiple interaction modes (clicks, brushes, and text).

On this basis, SAM-Track unifies several traditional video segmentation tasks, enables one-click segmentation and tracking of any target in any video, and extends traditional video segmentation to universal video segmentation.

SAM-Track delivers strong performance: it can stably track hundreds of targets with high quality in complex scenes on a single GPU.


Project address: https://github.com/z-x-yang/Segment-and-Track-Anything

Paper address: https://arxiv.org/abs/2305.06558

Demonstrations

SAM-Track supports language input as a prompt. For example, given the category text "panda", it can perform instance-level segmentation in one click and track every target belonging to the category "panda".


You can also give a more detailed description. For example, given the text "the leftmost panda", SAM-Track can locate that specific target and segment and track it.


Compared with traditional video tracking algorithms, another strength of SAM-Track is that it can track and segment a large number of targets simultaneously and automatically detect newly appearing objects.


SAM-Track also supports combining multiple interaction modes, which users can mix and match as needed. For example, a brush stroke can frame a skateboard that is pressed tightly against the person, preventing extra regions from being segmented, and a click can then select the person, as sketched below.
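As an illustration, here is a minimal sketch of this kind of mixed-prompt interaction using the public `segment_anything` package, with a box prompt standing in for the brush. The checkpoint path and coordinates are placeholders, not values from the project.

```python
# Mixed-prompt selection on the reference frame: a box for the skateboard,
# a click for the person. Paths and coordinates are placeholders.
import cv2
import numpy as np
from segment_anything import sam_model_registry, SamPredictor

sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b_01ec64.pth")
predictor = SamPredictor(sam)

frame = cv2.cvtColor(cv2.imread("frame_0000.jpg"), cv2.COLOR_BGR2RGB)
predictor.set_image(frame)

# Object 1: a box prompt tightly framing the skateboard, so the nearby body
# is not absorbed into the mask.
skateboard_box = np.array([140, 360, 330, 430])        # x1, y1, x2, y2
skate_masks, _, _ = predictor.predict(box=skateboard_box, multimask_output=False)

# Object 2: a single positive click on the person.
person_point = np.array([[250, 200]])
person_label = np.array([1])                           # 1 = foreground click
person_masks, _, _ = predictor.predict(
    point_coords=person_point, point_labels=person_label, multimask_output=False
)

# Two boolean masks, ready to serve as the reference-frame annotation
# for the tracker (one ID per object).
reference_masks = [skate_masks[0], person_masks[0]]
```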

Fully automatic video object segmentation and tracking is, of course, also supported. Application scenarios such as street scenes, aerial footage, AR, animation, and medical imaging can all be segmented and tracked automatically in one click, with newly appearing objects detected along the way.


If users are not satisfied with the automatic segmentation result, they can edit and correct it, for example using clicks to fix an over-segmented tram.


In addition, the latest version of SAM-Track supports browsing tracking results online: you can pick the segmentation result of any intermediate frame, modify it or add new targets, and then re-run tracking.


To make the model easy to try online, the project provides a WebUI that can be deployed with one click via Colab.


Model composition

The SAM-Track model is built on DeAOT, which won four tracks of the VOT 2022 challenge (ECCV'22 VOT Workshop).

DeAOT is an efficient multi-object video object segmentation (VOS) model. Given the object annotations of the first frame, it can track and segment those objects in the remaining frames of the video.

DeAOT uses an identification mechanism to embed multiple targets in a video into the same high-dimensional space, enabling simultaneous tracking of multiple objects.

As a result, DeAOT's speed when tracking multiple objects is comparable to that of other VOS methods tracking a single object.

In addition, through a hierarchical Transformer-based propagation mechanism, DeAOT aggregates long-term and short-term information more effectively and delivers excellent tracking performance.
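The snippet below is only a conceptual sketch of the identification idea, not the official DeAOT code: each object ID indexes a learnable embedding, so a multi-object mask collapses into one high-dimensional map that can be propagated alongside the visual features. The names and sizes are illustrative assumptions.

```python
# Conceptual sketch of an identification mechanism: N objects share one
# high-dimensional ID-embedding map built from a learnable ID bank.
import torch
import torch.nn as nn

max_objects, id_dim = 10, 256
id_bank = nn.Embedding(max_objects + 1, id_dim)    # index 0 = background

def encode_id_map(mask: torch.Tensor) -> torch.Tensor:
    """mask: (H, W) integer map with 0 = background, 1..N = object IDs.
    Returns an (id_dim, H, W) embedding representing all objects at once."""
    return id_bank(mask).permute(2, 0, 1)           # look up one vector per pixel

mask = torch.zeros(64, 64, dtype=torch.long)
mask[10:30, 10:30] = 1                              # object 1
mask[40:60, 35:55] = 2                              # object 2
id_map = encode_id_map(mask)                        # shape (256, 64, 64)

# In DeAOT, this ID map and the visual features of past frames are propagated
# to the current frame by a hierarchical transformer; here we only show that
# multiple objects collapse into one shared high-dimensional embedding.
print(id_map.shape)
```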

Since DeAOT needs reference-frame annotations for initialization, SAM-Track uses the Segment Anything Model (SAM), which has recently made a splash in image segmentation, to obtain this annotation information conveniently.

Leveraging SAM's excellent zero-shot transfer capability and multiple interaction modes, SAM-Track can efficiently obtain high-quality reference-frame annotations for DeAOT.

Although SAM performs well on image segmentation, it cannot output semantic labels, and its text prompts do not adequately support referring object segmentation and other tasks that rely on deep semantic understanding.

Therefore, SAM-Track further integrates Grounding-DINO to achieve high-precision language-guided video segmentation. Grounding-DINO is an open-set object detection model with strong language understanding.

Given an input category name or a detailed description of the target object, Grounding-DINO detects the target and returns its bounding box.
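Conceptually, the language-guided path can be sketched with the public `groundingdino` and `segment_anything` packages: the text prompt yields boxes, and each box is turned into a mask by SAM. Config and checkpoint paths are placeholders, and this is not the project's own code.

```python
# Text prompt -> Grounding-DINO boxes -> SAM masks (hedged sketch).
import torch
from groundingdino.util.inference import load_model, load_image, predict
from groundingdino.util import box_ops
from segment_anything import sam_model_registry, SamPredictor

dino = load_model("GroundingDINO_SwinT_OGC.py", "groundingdino_swint_ogc.pth")
image_source, image = load_image("frame_0000.jpg")

boxes, logits, phrases = predict(
    model=dino, image=image,
    caption="the leftmost panda",
    box_threshold=0.35, text_threshold=0.25,
)

# Grounding-DINO returns normalized (cx, cy, w, h) boxes; convert to absolute x1y1x2y2.
h, w, _ = image_source.shape
boxes_xyxy = box_ops.box_cxcywh_to_xyxy(boxes) * torch.tensor([w, h, w, h])

sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b_01ec64.pth")
predictor = SamPredictor(sam)
predictor.set_image(image_source)
boxes_t = predictor.transform.apply_boxes_torch(
    boxes_xyxy, image_source.shape[:2]
).to(predictor.device)
masks, _, _ = predictor.predict_torch(
    point_coords=None, point_labels=None, boxes=boxes_t, multimask_output=False
)
# `masks` holds one reference mask per detected phrase, ready for tracking.
```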

SAM-Track model architecture

As shown in the figure below, SAM-Track supports three object tracking modes: interactive tracking, automatic tracking, and fusion mode.

[Figure: SAM-Track model architecture]

In interactive tracking mode, SAM-Track first applies SAM, selecting targets in the reference frame with clicks or boxes until an interactive segmentation result that satisfies the user is obtained.

For language-guided video object segmentation, SAM-Track first calls Grounding-DINO with the input text to obtain the bounding box of the target object, and then uses SAM to obtain the segmentation result of the object of interest from that box.

Finally, DeAOT takes the interactive segmentation result as the reference frame and tracks the selected targets. During tracking, DeAOT hierarchically propagates the visual embeddings and high-dimensional ID embeddings of past frames to the current frame, achieving frame-by-frame tracking and segmentation of multiple targets. In this way, SAM-Track can segment and track objects of interest throughout a video via multimodal interaction.
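A heavily simplified sketch of this flow is given below. `NaiveTracker` is a stand-in that merely carries the reference masks forward; in SAM-Track this role is played by DeAOT's hierarchical propagation, so only the loop structure (reference masks in, per-frame masks out) reflects the description above. File paths and mask shapes are placeholders.

```python
# Interactive-tracking flow: SAM-produced reference masks are propagated
# frame by frame. NaiveTracker is a placeholder for DeAOT.
import cv2
import numpy as np

class NaiveTracker:
    def __init__(self):
        self.masks = None
    def add_reference(self, frame, masks):
        self.masks = masks                      # {object_id: boolean HxW mask}
    def track(self, frame):
        return self.masks                       # placeholder propagation only

def read_frames(path):
    cap = cv2.VideoCapture(path)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        yield cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    cap.release()

frames = read_frames("input.mp4")
first_frame = next(frames)

# In practice these masks come from the interactive SAM step (clicks / boxes / text);
# here a blank placeholder mask keeps the sketch self-contained.
reference = {1: np.zeros(first_frame.shape[:2], dtype=bool)}

tracker = NaiveTracker()
tracker.add_reference(first_frame, reference)

for frame in frames:
    masks = tracker.track(frame)                # one mask per object ID, per frame
```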

However, interactive tracking mode cannot handle objects that newly appear in the video, which limits the application of SAM-Track in fields such as autonomous driving and smart cities.

To further expand SAM-Track's applicability and performance, an automatic tracking mode is implemented to track objects that newly appear in the video.

Automatic tracking mode uses segment-everything and object-of-interest segmentation to obtain annotations for objects that newly appear every n frames. To assign IDs to these new objects, SAM-Track uses a comparing-mask module (CMR) to determine each new object's ID.
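A hedged sketch of the idea: every n frames, run SAM's automatic mask generator ("segment everything") and keep proposals that barely overlap the masks already being tracked, treating them as new objects. The simple overlap test below stands in for the CMR module described above, and the threshold and checkpoint path are placeholders.

```python
# Detect newly appearing objects by comparing segment-everything proposals
# against the union of currently tracked masks (simplified CMR-style check).
import numpy as np
from segment_anything import sam_model_registry, SamAutomaticMaskGenerator

sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b_01ec64.pth")
mask_generator = SamAutomaticMaskGenerator(sam)

def find_new_objects(frame_rgb, tracked_masks, overlap_thresh=0.1):
    """tracked_masks: list of boolean HxW masks currently being tracked."""
    if tracked_masks:
        tracked_union = np.any(np.stack(tracked_masks), axis=0)
    else:
        tracked_union = np.zeros(frame_rgb.shape[:2], dtype=bool)

    new_masks = []
    for proposal in mask_generator.generate(frame_rgb):   # "segment everything"
        seg = proposal["segmentation"]                     # boolean HxW mask
        overlap = np.logical_and(seg, tracked_union).sum() / max(seg.sum(), 1)
        if overlap < overlap_thresh:                       # mostly untracked pixels
            new_masks.append(seg)                          # assign it a fresh ID
    return new_masks
```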

Fusion mode combines interactive and automatic tracking. Interactive tracking lets users easily obtain annotations for the first frame of a video, while automatic tracking handles unselected objects that newly appear in subsequent frames. Combining the two broadens SAM-Track's application scope and makes it more practical.


Source: reproduced from 51cto.com.