CVPR 2024 perfect-score paper! Meta proposes EfficientSAM: segment everything fast!

This work, EfficientSAM, was accepted to CVPR 2024 with a perfect review score of 5/5/5. One of the authors announced the result on social media.

Turing Award winner Yann LeCun also strongly recommended the work!

In this work, Meta researchers propose SAM-leveraged masked image pretraining (SAMI), which combines MAE pre-training techniques with the SAM model to obtain high-quality pre-trained ViT encoders. Through SAMI, the researchers aim to improve both the performance and the efficiency of the model and to provide a better solution for vision tasks; combining different pre-training techniques and model structures in this way opens new directions for computer vision and deep learning.


  • Paper link: https://arxiv.org/pdf/2312.00863
  • Code: https://github.com/yformer/EfficientSAM
  • Homepage: https://yformer.github.io/efficient-sam/

This approach reduces the complexity of SAM while maintaining good performance. Specifically, SAMI uses SAM's ViT-H image encoder to generate feature embeddings and trains a masked image model with a lightweight encoder, which reconstructs features from SAM's ViT-H rather than raw image patches. The resulting general-purpose ViT backbones can be used for downstream tasks such as image classification, object detection, and segmentation. The pre-trained lightweight encoder is then fine-tuned with the SAM decoder to perform the segment anything task.
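
The defining twist relative to plain MAE is the reconstruction target: pixel patches versus SAM features. Below is a minimal sketch of that contrast, assuming hypothetical helpers `patchify` (splits an image into flattened patches) and `sam_vit_h_encoder` (the frozen SAM ViT-H image encoder); neither name comes from the authors' code.

    import torch

    def mae_target(images, patchify):
        # Plain MAE reconstructs the raw pixel patches themselves.
        return patchify(images)

    def sami_target(images, sam_vit_h_encoder):
        # SAMI instead reconstructs the frozen SAM ViT-H encoder's feature
        # embeddings, distilling SAM's representation into whatever
        # lightweight encoder is being pre-trained.
        with torch.no_grad():
            return sam_vit_h_encoder(images)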

To verify the effectiveness of this approach, the researchers used a masked image pre-training transfer learning setting. Specifically, they first pre-trained the model with a reconstruction loss on the ImageNet dataset at an image resolution of 224×224, and then fine-tuned the model using supervised data from the target task. Because the pre-training stage has already taught the model to extract features from raw data, this strategy reuses knowledge learned on a large-scale dataset, helping the model learn quickly and adapt more easily to different tasks.

With SAMI pre-training, models such as ViT-Tiny/-Small/-Base can be trained on ImageNet-1K with improved generalization. For the ViT-Small model, fine-tuning for 100 epochs on ImageNet-1K reaches 82.7% Top-1 accuracy, better than other state-of-the-art image pre-training baselines.

The researchers also fine-tuned the pre-trained models on object detection, instance segmentation, and semantic segmentation. On all of these tasks, the method achieves better results than other pre-trained baselines and, more importantly, delivers significant gains on small models.

Yunyang Xiong, an author of the paper, said: the proposed EfficientSAM has 20 times fewer parameters and runs 20 times faster, stays within 2 percentage points of the original SAM model, and greatly outperforms MobileSAM/FastSAM.

In the demo, clicking an animal in the image prompts EfficientSAM to segment the object quickly.

EfficientSAM can also accurately identify and segment the person in the image.

Trial address: https://ab348ea7942fe2af48.gradio.live/

Method

EfficientSAM consists of two stages: 1) pre-training SAMI on ImageNet (top); 2) fine-tuning SAM on SA-1B (bottom).

EfficientSAM mainly contains the following components:

Cross-attention decoder: Under the supervision of SAM features, the authors observe that only the masked tokens need to be reconstructed by the decoder, while the encoder's outputs can serve as anchors during reconstruction. In the cross-attention decoder, the queries come from the masked tokens, and the keys and values are derived from the unmasked features from the encoder together with the masked features. The output features of the masked tokens from the cross-attention decoder are merged with the output features of the unmasked tokens from the encoder to form the MAE output embedding; these combined features are then reordered to the original positions of the input image tokens in the final MAE output.
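
To make the token flow concrete, here is a minimal PyTorch sketch of such a cross-attention decoder, based on a reading of the description above rather than the released implementation; the single attention layer and names such as `ids_keep`/`ids_mask` are illustrative assumptions.

    import torch
    import torch.nn as nn

    class CrossAttentionDecoder(nn.Module):
        def __init__(self, dim, num_heads=8):
            super().__init__()
            self.mask_token = nn.Parameter(torch.zeros(1, 1, dim))
            self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
            self.norm = nn.LayerNorm(dim)

        def forward(self, visible_feats, pos_embed, ids_keep, ids_mask):
            # visible_feats: (B, N_vis, D) encoder outputs for unmasked patches
            # pos_embed:     (1, N, D) positional embeddings for all N positions
            # ids_keep / ids_mask: indices of unmasked / masked patch positions
            B, _, D = visible_feats.shape
            pos = pos_embed.expand(B, -1, -1)

            def pos_at(ids):
                return torch.gather(pos, 1, ids.unsqueeze(-1).expand(-1, -1, D))

            # Queries: mask tokens placed at the masked positions.
            queries = self.mask_token.expand(B, ids_mask.shape[1], -1) + pos_at(ids_mask)
            # Keys/values: unmasked encoder features (the anchors) plus the masked tokens.
            keys = torch.cat([visible_feats + pos_at(ids_keep), queries], dim=1)
            recon, _ = self.attn(self.norm(queries), keys, keys)

            # Merge reconstructed masked tokens with the encoder's unmasked
            # features and restore the original patch order.
            N = ids_keep.shape[1] + ids_mask.shape[1]
            out = torch.zeros(B, N, D, device=visible_feats.device)
            out.scatter_(1, ids_keep.unsqueeze(-1).expand(-1, -1, D), visible_feats)
            out.scatter_(1, ids_mask.unsqueeze(-1).expand(-1, -1, D), recon)
            return out  # (B, N, D): merged, reordered MAE output embedding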

Linear projection head: The outputs obtained from the encoder and the cross-attention decoder are then fed into a small projection head to align them with the features of the SAM image encoder. For simplicity, only a linear projection head is used to resolve the feature-dimension mismatch between the SAM image encoder and the MAE output.

Reconstruction loss: Each training iteration consists of a forward pass through the SAM image encoder and forward and backward passes through the MAE. The outputs of the SAM image encoder and of the MAE's linear projection head are compared to compute the reconstruction loss.
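
Putting the three components together, one training iteration might look like the following sketch (an illustration under stated assumptions, not the paper's code): `student` is the lightweight encoder, `decoder` the cross-attention decoder sketched above, and `proj` the linear projection head; `student.num_patches` and `student.pos_embed` are assumed attributes, and a mean-squared error stands in for the paper's reconstruction loss.

    import torch
    import torch.nn.functional as F

    def sami_train_step(images, sam_encoder, student, decoder, proj,
                        optimizer, mask_ratio=0.75):
        # MAE-style random masking: shuffle patch indices, keep a fraction.
        B = images.shape[0]
        N = student.num_patches
        ids = torch.rand(B, N, device=images.device).argsort(dim=1)
        n_keep = int(N * (1 - mask_ratio))
        ids_keep, ids_mask = ids[:, :n_keep], ids[:, n_keep:]

        with torch.no_grad():                 # the SAM ViT-H encoder is frozen
            target = sam_encoder(images)      # (B, N, D) feature targets

        visible = student(images, ids_keep)   # encode only the unmasked patches
        merged = decoder(visible, student.pos_embed, ids_keep, ids_mask)
        pred = proj(merged)                   # match the SAM feature dimension

        loss = F.mse_loss(pred, target)       # reconstruction loss
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return loss.item()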

After pre-training, the encoder can extract feature representations for various visual tasks, and the decoder is discarded. In particular, to build an efficient SAM model for the segment anything task, the SAMI pre-trained lightweight encoders (e.g., ViT-Tiny and ViT-Small) are taken as the image encoder of EfficientSAM and combined with SAM's default mask decoder, as shown in Figure 2 (bottom). The EfficientSAM model is then fine-tuned on the SA-1B dataset.
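
As a rough sketch of this assembly (hypothetical class and argument names; the actual model definitions live in the EfficientSAM repository linked above, and a prompt encoder is assumed here as in the original SAM design):

    import torch.nn as nn

    class EfficientSAM(nn.Module):
        def __init__(self, image_encoder, prompt_encoder, mask_decoder):
            super().__init__()
            self.image_encoder = image_encoder    # SAMI pre-trained ViT-Tiny/-Small
            self.prompt_encoder = prompt_encoder  # prompt encoder, as in SAM
            self.mask_decoder = mask_decoder      # SAM's default mask decoder

        def forward(self, images, prompts):
            feats = self.image_encoder(images)        # image embeddings
            tokens = self.prompt_encoder(prompts)     # point/box prompt tokens
            return self.mask_decoder(feats, tokens)   # predicted masks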

Experiments

Image classification. To evaluate the effectiveness of the method on image classification, the researchers applied the SAMI idea to ViT models and compared their performance on ImageNet-1K.

As shown in Table 1, SAMI is compared with pre-training methods such as MAE, iBOT, CAE and BEiT, and distillation methods such as DeiT and SSTA.

SAMI-B reaches 84.8% Top-1 accuracy, higher than the pre-trained baselines MAE, DMAE, iBOT, CAE, and BEiT. SAMI also shows large improvements over distillation methods such as DeiT and SSTA. For lightweight models such as ViT-Tiny and ViT-Small, SAMI yields significant gains compared with DeiT, SSTA, DMAE, and MAE.

Object detection and instance segmentation. The SAMI pre-trained ViT backbones are also extended to downstream object detection and instance segmentation tasks and compared with other pre-trained baselines on the COCO dataset. As shown in Table 2, SAMI consistently outperforms the other baselines.

These experimental results show that the pre-trained detector backbones provided by SAMI are highly effective for object detection and instance segmentation tasks.

Semantic segmentation. The pre-trained backbones are further extended to semantic segmentation to evaluate their effectiveness. As shown in Table 3, Mask2former with a SAMI pre-trained backbone (pre-trained on ImageNet-1K) achieves better mIoU than with a MAE pre-trained backbone. These results confirm that the proposed technique generalizes well to various downstream tasks.

Table 4 compares EfficientSAMs with SAM, MobileSAM, and SAM-MAE-Ti. On COCO, EfficientSAM-Ti outperforms MobileSAM, and its SAMI pre-trained weights also perform better than MAE pre-trained weights.

In addition, with 20 times fewer parameters, EfficientSAM-S is only 1.5 mIoU lower than SAM with COCO box prompts and 3.5 mIoU lower with LVIS box prompts. EfficientSAM also performs well under multiple clicks compared with MobileSAM and SAM-MAE-Ti.

Table 5 reports AP, APS, APM, and APL for zero-shot instance segmentation. Compared with FastSAM, EfficientSAM-S gains more than 6.5 AP on COCO and 7.8 AP on LVIS. EfficientSAM-Ti also remains significantly better than FastSAM, by 4.1 AP on COCO and 5.3 AP on LVIS, and exceeds MobileSAM by 3.6 AP on COCO and 5.5 AP on LVIS.

Moreover, EfficientSAM is much lighter than FastSAM: EfficientSAM-Ti has 9.8M parameters, versus 68M for FastSAM.

Figures 3, 4, and 5 provide qualitative results that give readers a complementary view of the instance segmentation capabilities of EfficientSAMs.


For more research details, please refer to the original paper.
