


ICLR 2024 | Providing a new perspective for audio-visual separation, Tsinghua University's Hu Xiaolin team launches RTFS-Net
The main purpose of audio-visual speech separation (AVSS) technology is to identify and separate the target speaker's voice from a mixed signal, using facial information to achieve this goal. The technology has wide applications in fields such as smart assistants, remote conferencing, and augmented reality. AVSS can significantly improve the quality of speech signals in noisy environments, thereby improving speech recognition and communication, and bringing convenience to people's daily life and work.
Traditional audio-visual speech separation methods usually require complex models and large amounts of computing resources, and their performance is easily limited in noisy backgrounds or multi-speaker scenarios. To overcome these problems, researchers began exploring deep-learning-based methods. However, existing deep learning techniques face the challenges of high computational complexity and difficulty adapting to unknown environments.
Specifically, current audio-visual speech separation methods have the following problems:
Time-domain methods: they can provide high-quality audio separation, but their large number of parameters leads to high computational complexity and slow processing speed.
Time-frequency domain methods: they are more computationally efficient, but have historically performed worse than time-domain methods. They face three main challenges:
1. Lack of independent modeling of the time and frequency dimensions.
2. Visual cues from multiple receptive fields are not fully utilized to improve model performance.
3. Improper processing of complex features leads to the loss of critical amplitude and phase information.
To address these challenges, researchers from the team of Associate Professor Hu Xiaolin of Tsinghua University proposed a new audio-visual speech separation model called RTFS-Net. The model adopts a compression-reconstruction approach that significantly reduces computational complexity and parameter count while improving separation performance. RTFS-Net is the first audio-visual speech separation method with fewer than 1 million parameters, and the first time-frequency domain multi-modal separation method to outperform all time-domain models.
Paper address: https://arxiv.org/abs/2309.17189
Paper home page: https://cslikai.cn/RTFS-Net/AV-Model-Demo.html
Code address: https://github.com/spkgyk/RTFS-Net (coming soon)
Method introduction
The overall network architecture of RTFS-Net is shown in Figure 1 below:
Figure 1. Network framework of RTFS-Net
Among them, the RTFS block (shown in Figure 2) compresses and independently models the acoustic dimensions (time and frequency), minimizing information loss while creating low-complexity subspaces. Specifically, the RTFS block employs a dual-path architecture to process the audio signal efficiently in both the time and frequency dimensions. With this approach, RTFS blocks can reduce computational complexity while maintaining high sensitivity and accuracy to the audio signal. The specific workflow of the RTFS block is as follows (a minimal sketch follows the list):
1. Time-frequency compression: The RTFS block first compresses the input audio features in the time and frequency dimensions.
2. Independent dimension modeling: After completing the compression, the RTFS block independently models the time and frequency dimensions.
3. Dimension fusion: After processing the time and frequency dimensions independently, the RTFS block merges the information of the two dimensions through a fusion module.
4. Reconstruction and output: Finally, the fused features are reconstructed back to the original time-frequency space through a series of deconvolution layers.
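To make the four steps above concrete, here is a minimal PyTorch sketch of this compression, independent modeling, fusion, and reconstruction loop. The layer choices (a strided convolution for compression, plain GRUs for the per-dimension modeling, a simple sum for the fusion step) and all tensor sizes are illustrative assumptions, not the authors' exact implementation.

```python
import torch
import torch.nn as nn

class RTFSBlockSketch(nn.Module):
    """Hypothetical sketch of an RTFS-style dual-path block."""
    def __init__(self, channels=64, hidden=64, pool=2):
        super().__init__()
        # 1. Time-frequency compression: strided conv downsamples T and F.
        self.compress = nn.Conv2d(channels, channels, kernel_size=pool, stride=pool)
        # 2. Independent dimension modeling: one recurrent unit per axis.
        self.time_rnn = nn.GRU(channels, hidden, batch_first=True, bidirectional=True)
        self.freq_rnn = nn.GRU(channels, hidden, batch_first=True, bidirectional=True)
        self.time_proj = nn.Linear(2 * hidden, channels)
        self.freq_proj = nn.Linear(2 * hidden, channels)
        # 4. Reconstruction: transposed conv restores the original T x F size.
        self.reconstruct = nn.ConvTranspose2d(channels, channels, kernel_size=pool, stride=pool)

    def forward(self, x):                      # x: (B, C, T, F)
        residual = x
        z = self.compress(x)                   # (B, C, T', F')
        b, c, t, f = z.shape
        # Model the time axis independently for every frequency bin.
        zt = z.permute(0, 3, 2, 1).reshape(b * f, t, c)
        zt, _ = self.time_rnn(zt)
        zt = self.time_proj(zt).reshape(b, f, t, c).permute(0, 3, 2, 1)
        # Model the frequency axis independently for every time frame.
        zf = z.permute(0, 2, 3, 1).reshape(b * t, f, c)
        zf, _ = self.freq_rnn(zf)
        zf = self.freq_proj(zf).reshape(b, t, f, c).permute(0, 3, 1, 2)
        # 3. Dimension fusion: here simply summed; the paper uses a dedicated fusion module.
        fused = zt + zf
        # 4. Reconstruct back to the original resolution, plus a skip connection.
        out = self.reconstruct(fused, output_size=residual.shape)
        return out + residual

# Usage: 2 spectrogram-like feature maps, 64 channels, 100 frames, 128 frequency bins.
features = torch.randn(2, 64, 100, 128)
print(RTFSBlockSketch()(features).shape)       # torch.Size([2, 64, 100, 128])
```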
Figure 2. The network structure of the RTFS block
The CAF fusion module (shown in Figure 3) effectively fuses audio and visual information and enhances the speech separation effect, with a computational complexity of only 1.3% of the previous SOTA fusion method. Specifically, the CAF module first generates attention weights using depthwise and grouped convolution operations. These weights are adjusted dynamically based on the importance of the input features, allowing the model to focus on the most relevant information. The generated attention weights are then applied to the visual and auditory features, enabling the CAF module to attend to key information across multiple dimensions; features of different dimensions are weighted and fused to produce a comprehensive feature representation. In addition to the attention mechanism, the CAF module can also adopt a gating mechanism to further control the degree to which features from different sources are fused. This enhances the flexibility of the model and allows finer control of the information flow.
Figure 3. Schematic structural diagram of the CAF fusion module
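As a rough illustration of how depthwise/grouped-convolution attention weights and a gate can combine two modalities, here is a minimal PyTorch sketch. The layer layout, group counts, and the broadcasting of the 1-D visual stream over the frequency axis are assumptions made for this example and do not reproduce the CAF module exactly.

```python
import torch
import torch.nn as nn

class CAFSketch(nn.Module):
    """Hypothetical attention-plus-gating fusion in the spirit of CAF."""
    def __init__(self, audio_ch=64, video_ch=64, groups=4):
        super().__init__()
        # Depthwise conv over the audio features produces per-channel attention logits;
        # grouped convs map the visual stream to attention and gate tensors.
        self.audio_attn = nn.Conv2d(audio_ch, audio_ch, kernel_size=3, padding=1, groups=audio_ch)
        self.video_attn = nn.Conv1d(video_ch, audio_ch, kernel_size=3, padding=1, groups=groups)
        self.video_gate = nn.Conv1d(video_ch, audio_ch, kernel_size=3, padding=1, groups=groups)

    def forward(self, audio, video):
        # audio: (B, Ca, T, F)   video: (B, Cv, T), already aligned in time
        attn_a = torch.softmax(self.audio_attn(audio), dim=1)
        # Broadcast the 1-D visual attention and gate over the frequency axis.
        attn_v = torch.sigmoid(self.video_attn(video)).unsqueeze(-1)
        gate_v = torch.tanh(self.video_gate(video)).unsqueeze(-1)
        # Weight the audio features by both attention maps, then add the gated visual contribution.
        return audio * attn_a + audio * attn_v * gate_v

# Usage with dummy tensors: 100 frames, 128 frequency bins.
audio = torch.randn(2, 64, 100, 128)
video = torch.randn(2, 64, 100)
print(CAFSketch()(audio, video).shape)   # torch.Size([2, 64, 100, 128])
```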
The design idea of the spectral source separation (S^3) block is to use spectral information represented as complex numbers to effectively extract the target speaker's speech features from the mixed audio. This approach makes full use of the phase and amplitude information of the audio signal, improving the accuracy and efficiency of source separation. Using a complex-valued representation enables the S^3 block to process the signal more accurately when isolating the target speaker's speech, especially in preserving details and reducing artifacts, as shown below. Likewise, the design of the S^3 block allows easy integration into different audio processing frameworks, is suitable for a variety of source separation tasks, and has good generalization ability.
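Below is a minimal sketch of complex spectral masking in the spirit of the S^3 block. The single-convolution mask estimator and all tensor shapes are illustrative assumptions; the real block is more elaborate, but the key idea shown here is that a complex-valued mask multiplied with the mixture spectrogram preserves both amplitude and phase.

```python
import torch
import torch.nn as nn

class S3Sketch(nn.Module):
    """Hypothetical complex spectral masking step."""
    def __init__(self, channels=64):
        super().__init__()
        # Predict a complex mask (real + imaginary parts) for the target speaker
        # from the separator's feature map.
        self.to_mask = nn.Conv2d(channels, 2, kernel_size=3, padding=1)

    def forward(self, features, mix_spec):
        # features: (B, C, T, F)   mix_spec: (B, T, F) complex STFT of the mixture
        m = self.to_mask(features)                 # (B, 2, T, F)
        mask = torch.complex(m[:, 0], m[:, 1])     # (B, T, F), complex-valued
        # Complex multiplication keeps both amplitude and phase of the target speaker.
        return mask * mix_spec

# Usage with dummy tensors: 100 frames, 129 frequency bins.
feats = torch.randn(2, 64, 100, 129)
mix_spec = torch.randn(2, 100, 129, dtype=torch.complex64)
print(S3Sketch()(feats, mix_spec).shape)           # torch.Size([2, 100, 129])
```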
Experimental results

Separation effect
On three benchmark multi-modal speech separation datasets (LRS2, LRS3 and VoxCeleb2), as shown below, RTFS-Net approaches or exceeds the current state-of-the-art performance while significantly reducing model parameters and computational complexity. The trade-off between efficiency and performance is demonstrated through variants with different numbers of RTFS blocks (4, 6, and 12 blocks), where RTFS-Net-6 provides a good balance of performance and efficiency. RTFS-Net-12 performs best on all tested datasets, demonstrating the advantages of time-frequency domain methods in handling complex audio-visual synchronized separation tasks.

Actual effect
Mixed video:

Female speaker audio:
Male speaker audio:
Summary
With the continuous development of large-model technology, the field of audio-visual speech separation has also been pursuing larger models to improve separation quality. However, this is not feasible for end devices. RTFS-Net achieves significant performance improvements while keeping computational complexity and parameter count markedly lower. This demonstrates that improving AVSS performance does not necessarily require larger models, but rather innovative, efficient architectures that better capture the intricate interplay between the audio and visual modalities.
