


- The paper's research direction spans vision-language pre-training (VLP), cross-modal image-text retrieval (CMITR), and related fields. This selection marks renewed international recognition of NetEase Fuxi Lab's multi-modal capabilities. The relevant technology has already been applied in NetEase Fuxi's self-developed multi-modal intelligent assistant "Dan Qing Yue".
- ACM MM, initiated by the Association for Computing Machinery (ACM), is the most influential top-tier international conference in multimedia processing, analysis, and computing, and a Class A international academic conference in multimedia as recommended by the China Computer Federation (CCF). As the top conference in the field, ACM MM attracts wide attention from well-known companies and scholars at home and abroad. This year's ACM MM received 4,385 valid submissions, of which 1,149 were accepted, for an acceptance rate of 26.20%.
As a leading artificial intelligence research institution in China, NetEase Fuxi has accumulated nearly six years of experience in large-model research, along with rich algorithmic and engineering expertise, and has built dozens of text and multi-modal pre-trained models, including large models for text understanding and generation, image-text understanding, and image-text generation. These achievements not only effectively promote the application of large models in the game field, but also lay a solid foundation for developing cross-modal understanding capabilities, which help better integrate knowledge from multiple domains and align rich data modalities and information.
Building on this foundation, NetEase Fuxi further innovated on its image-text understanding large model and proposed a cross-modal retrieval method based on the selection and reconstruction of key local information, laying a technical foundation for solving domain-specific image-text interaction problems for multi-modal agents.
The following is a summary of the selected papers:
"Selection and Reconstruction of Key Locals: A Novel Specific Domain Image-Text Retrieval Method"
Keywords: key local information, fine-grained, interpretable
Fields involved: vision-language pre-training (VLP), cross-modal image-text retrieval (CMITR)
In recent years, with the rise of Vision-Language Pretraining (VLP) models, significant progress has been made in the field of Cross-Modal Image-Text Retrieval (CMITR). Although VLP models such as CLIP perform well on general-domain CMITR tasks, their performance often falls short in Specific Domain Image-Text Retrieval (SDITR), because a specific domain often has unique data characteristics that distinguish it from the general domain.
In a specific domain, images may exhibit a high degree of visual similarity to one another, while semantic differences tend to concentrate in key local details, such as particular object regions in an image or meaningful words in the text. Even small changes in these local segments can significantly affect the meaning of the whole, highlighting the importance of this key local information. SDITR therefore requires the model to focus on key local information fragments to enhance the expression of image and text features in a shared representation space, thereby improving the alignment accuracy between images and text.
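The shared-representation-space alignment described above can be pictured with a minimal CLIP-style retrieval sketch (illustrative only: random vectors stand in for the outputs of real image and text encoders that map into a common space):

```python
import numpy as np

rng = np.random.default_rng(0)

def l2_normalize(x, axis=-1):
    # Unit-normalize so that dot products equal cosine similarity.
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

# Toy stand-ins for trained image/text encoders mapping into a shared 64-d space.
image_embeds = l2_normalize(rng.normal(size=(5, 64)))  # 5 candidate images
text_embed = l2_normalize(rng.normal(size=(64,)))      # 1 query caption

# Retrieval: rank candidate images by cosine similarity to the text query.
scores = image_embeds @ text_embed                     # (5,)
ranking = np.argsort(-scores)                          # best match first
print("ranked image indices:", ranking.tolist())
```

In this global-feature setup, two visually similar domain-specific images can receive nearly identical scores, which is exactly the failure mode that motivates attending to key local details.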
This work explores the application of vision-language pre-training models to domain-specific image-text retrieval and studies how to exploit local features in such tasks. Its main contribution is a method that uses discriminative fine-grained local information to optimize the alignment of images and text in a shared representation space.
To this end, we design an explicit key-local-information selection and reconstruction framework, together with a key local segment reconstruction strategy based on multi-modal interaction. These methods effectively exploit discriminative fine-grained local information, significantly improving the quality of image-text alignment in the shared space. Extensive experiments demonstrate the advancement and effectiveness of the proposed strategy.
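The paper's exact architecture is not described here, so as a loose, hypothetical illustration of the general idea of selecting key local information, one can score each local image patch by its relevance to the paired text and keep only the top-k patches for fine-grained alignment (the scoring rule and aggregation below are assumptions, not the authors' implementation):

```python
import numpy as np

rng = np.random.default_rng(1)

def l2n(x):
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

patch_feats = l2n(rng.normal(size=(49, 32)))  # 7x7 grid of local image-patch features
word_feats = l2n(rng.normal(size=(12, 32)))   # token features of the paired caption

# Score each patch by its maximum similarity to any caption word: a common
# fine-grained relevance heuristic (the paper's actual criterion may differ).
relevance = (patch_feats @ word_feats.T).max(axis=1)  # (49,)
top_k = np.argsort(-relevance)[:8]                    # indices of 8 key local patches

# Aggregate only the selected key locals into a compact image representation
# and compare it with a pooled text representation in the shared space.
key_local_repr = l2n(patch_feats[top_k].mean(axis=0))
text_repr = l2n(word_feats.mean(axis=0))
alignment = float(key_local_repr @ text_repr)
print("fine-grained alignment score:", round(alignment, 3))
```

Restricting the comparison to selected key locals is what lets semantically decisive but spatially small regions dominate the score, rather than being averaged away by a global representation.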
Special thanks to the IPIU Laboratory of Xidian University for its strong support and important research contributions to this paper.
Currently, NetEase Fuxi's multi-modal understanding capabilities are widely used across multiple business units of NetEase Group, including NetEase Leihuo, NetEase Cloud Music, and NetEase Yuanqi. These applications cover scenarios such as innovative text-driven face-customization gameplay in games, cross-modal resource search, and personalized content recommendation, demonstrating significant business value.
In the future, with deeper research and technological advancement, this achievement is expected to promote the widespread application of artificial intelligence in education, healthcare, e-commerce, and other industries, providing users with a more personalized and intelligent service experience. NetEase Fuxi will continue to deepen exchanges and cooperation with top academic institutions at home and abroad, explore more cutting-edge research areas, and contribute to building a more efficient and smarter society.
Scan the QR code below to experience "Dan Qing Yue" and enjoy a multi-modal image-text interactive experience that "understands you better"!
The above is the detailed content of "ACM MM 2024 | NetEase Fuxi's multimodal research gains international recognition again, driving new breakthroughs in domain-specific cross-modal understanding".
