
Can BERT also be used on CNN? ByteDance's research results selected for ICLR 2023 Spotlight

How to run BERT on a convolutional neural network?

You can use SparK directly: "Designing BERT for Convolutional Networks: Sparse and Hierarchical Masked Modeling", proposed by the ByteDance technical team. It was recently accepted by the top artificial intelligence conference ICLR 2023 as a Spotlight paper:



Paper link: https://www.php.cn/link/e38e37a99f7de1f45d169efcdb288dd1

Open source code: https://www.php.cn/link/9dfcf16f0adbc5e2a55ef02db36bac7f

This is also BERT's first success on convolutional neural networks (CNNs). Let's first get a feel for SparK's pre-training performance. Input an incomplete picture:


It is restored to a puppy: [image]

Input another mutilated picture: [image]

It turns out to be a bagel sandwich: [image]

Restoration also works in other scenes: [image]

The perfect match between BERT and Transformer

Every great action and idea has a humble beginning. Behind the BERT pre-training algorithm lies a simple yet profound design: BERT uses a "cloze" task, randomly deleting several words from a sentence and training the model to recover them.

BERT relies heavily on the core model of the NLP field, the Transformer. The Transformer is naturally suited to variable-length sequence data (such as an English sentence), so it easily copes with the "random deletions" of the cloze task.
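The cloze idea can be sketched in a few lines. This is a toy illustration, not BERT's actual implementation: the token ids, the 25% mask ratio, and the reserved [MASK] id are all made up for the example (BERT itself masks about 15% of tokens).

```python
import random

random.seed(0)

# A toy "sentence" of token ids; id 0 is reserved here as the [MASK] token.
tokens = [5, 17, 42, 8, 23, 99, 31, 7]
mask_ratio = 0.25  # illustrative ratio, not BERT's actual 15%

# Randomly pick positions to corrupt, as in BERT's cloze objective.
n_mask = max(1, round(mask_ratio * len(tokens)))
masked_pos = sorted(random.sample(range(len(tokens)), n_mask))

# Corrupt the input: masked positions are replaced with [MASK].
corrupted = [0 if i in masked_pos else t for i, t in enumerate(tokens)]

# The model is trained to predict the original ids only at the masked
# positions; a cross-entropy loss over these targets would go here.
targets = [tokens[i] for i in masked_pos]
```

A Transformer handles such an input trivially, since the masked positions are just ordinary tokens in a variable-length sequence.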

CNNs in the vision field also want to enjoy BERT: what are the two challenges?

Looking back at the history of computer vision, convolutional neural networks condense the essence of many classic designs, such as translation equivariance and multi-scale structure, and can be called the mainstay of the CV world. But unlike the Transformer, a CNN inherently cannot adapt to data that has been "hollowed out" by the cloze task and is full of random holes, so at first glance it cannot enjoy the dividends of BERT pre-training.



Panel a above shows MAE (Masked Autoencoders Are Scalable Vision Learners). Because it uses a Transformer rather than a CNN, it can flexibly cope with inputs full of holes and is thus a "natural match" for BERT.


Panel b shows a crude way to fuse BERT with a CNN: simply "blacken" all the masked regions and feed the resulting "black mosaic" image into the CNN. The outcome is predictable: a severe pixel intensity distribution shift that leads to poor performance (verified later in the experiments). This is Challenge 1, the obstacle to applying BERT successfully to CNNs.

In addition, the author team points out that the BERT algorithm, which originates in NLP, has no built-in notion of "multi-scale", whereas the multi-scale pyramid structure is the "gold standard" throughout the long history of computer vision. The conflict between single-scale BERT and the naturally multi-scale CNN is

Challenge 2.

Solution SparK: Sparse and Hierarchical Masked Modeling


The author team proposed SparK (Sparse and hierarchical masKed modeling) to solve the two challenges above.


First, inspired by the processing of three-dimensional point cloud data, the author team proposed to treat the fragmented image left after the masking (hollowing-out) operation as a sparse point cloud, and to encode it with submanifold sparse convolution. This lets a convolutional network handle randomly deleted images with ease.
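The idea of computing only at visible sites can be sketched with a toy "submanifold-style" 3x3 convolution. This is an illustrative simplification, not the SparK code (which uses a real submanifold sparse convolution library); the image size, mask ratio, and averaging kernel are arbitrary choices for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 8x8 single-channel "image" and a random visibility mask (True = kept).
img = rng.standard_normal((8, 8))
keep = rng.random((8, 8)) > 0.6

def submanifold_conv3x3(x, mask, kernel):
    """Toy submanifold-style 3x3 convolution: outputs are computed only at
    visible sites, and only visible neighbours contribute, so the hole
    pattern is preserved exactly instead of being blurred away."""
    h, w = x.shape
    out = np.zeros_like(x)
    for i in range(h):
        for j in range(w):
            if not mask[i, j]:
                continue  # an empty site stays empty
            acc = 0.0
            for di in (-1, 0, 1):
                for dj in (-1, 0, 1):
                    ni, nj = i + di, j + dj
                    if 0 <= ni < h and 0 <= nj < w and mask[ni, nj]:
                        acc += kernel[di + 1, dj + 1] * x[ni, nj]
            out[i, j] = acc
    return out

kernel = np.ones((3, 3)) / 9.0  # simple averaging kernel, for illustration
y = submanifold_conv3x3(img * keep, keep, kernel)

# Unlike a dense convolution on a zero-filled image, the masked holes
# remain exactly empty after the operation.
assert np.all(y[~keep] == 0.0)
```

This is exactly the property a "blackened" dense input lacks: an ordinary convolution would smear values into the holes and shift the pixel statistics.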

Second, inspired by the elegant design of UNet, the author team designed an encoder-decoder model with lateral connections, letting multi-scale features flow across the levels of the model and allowing BERT to fully embrace computer vision's multi-scale gold standard.
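The hierarchical decoding idea can be sketched as follows. This is a minimal stand-in, not the SparK architecture: the "conv stages" are replaced by plain subsampling and the lateral fusion by simple addition, purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def down(x):
    # Stand-in for an encoder stage with 2x downsampling.
    return x[::2, ::2]

def up(x):
    # Stand-in for 2x nearest-neighbour upsampling in the decoder.
    return np.repeat(np.repeat(x, 2, axis=0), 2, axis=1)

# Encoder: a CNN naturally yields a multi-scale feature pyramid.
f0 = rng.standard_normal((16, 16))  # full resolution
f1 = down(f0)                       # 8x8
f2 = down(f1)                       # 4x4 bottleneck

# Decoder with UNet-style lateral connections: each upsampled feature is
# fused (here simply added) with the encoder feature at the same scale,
# so information flows across every level of the hierarchy.
d1 = up(f2) + f1                    # 8x8
d0 = up(d1) + f0                    # 16x16 reconstruction features
```

The lateral connections are what let the single-scale BERT objective exploit the multi-scale pyramid that CNNs produce for free.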

At this point, SparK, a sparse, multi-scale masked modeling algorithm tailored for convolutional networks (CNNs), was born.

SparK is general: it can be used directly on any convolutional network without modifying its structure or introducing any extra components. Whether it is the familiar classic ResNet or the recently advanced ConvNeXt, both benefit from SparK directly.

From ResNet to ConvNeXt: performance improvements on three major visual tasks

The author team selected two representative convolutional model families, ResNet and ConvNeXt, and tested performance on image classification, object detection, and instance segmentation tasks.

On the classic ResNet-50 model, SparK, as the only generative pre-training method, achieved state-of-the-art performance:


On the ConvNeXt models, SparK still leads. Before pre-training, ConvNeXt and Swin-Transformer were evenly matched; after pre-training, ConvNeXt overwhelmingly surpassed Swin-Transformer on all three tasks:


Verifying SparK across complete model families from small to large, one can observe that models big or small, new or old, all benefit from SparK, and the gains grow as model size and training overhead increase, reflecting the scaling capability of the SparK algorithm:


Finally, the author team also designed confirmatory ablation experiments, from which we can see that sparse masking and the hierarchical structure (rows 3 and 4 of the table) are critical designs: removing either causes serious performance degradation.


Statement
This article is reproduced from 51CTO.COM. If there is any infringement, please contact admin@php.cn to delete it.