
ICLR 2024 | Providing a new perspective for audio and video separation, Tsinghua University’s Hu Xiaolin team launched RTFS-Net

WBOY | 2024-03-06

The main purpose of audio-visual speech separation (AVSS) technology is to identify and separate the target speaker's voice from a mixed signal, using the speaker's facial information as a cue. The technology has wide applications in fields such as smart assistants, remote conferencing, and augmented reality. With AVSS, the quality of speech signals in noisy environments can be significantly improved, which in turn improves speech recognition and communication, bringing convenience to people's daily life and work.

Traditional audio-visual speech separation methods usually require complex models and large amounts of computing resources, and their performance degrades easily in noisy backgrounds or multi-speaker scenarios. To overcome these problems, researchers have turned to deep-learning-based methods. However, existing deep learning techniques still suffer from high computational complexity and difficulty adapting to unseen environments.

Specifically, current audio-visual speech separation methods face the following problems:

  • Time-domain methods: they deliver high-quality audio separation, but their large parameter counts lead to high computational complexity and slow processing.

  • Time-frequency-domain methods: they are more computationally efficient, but have historically performed worse than time-domain methods. They face three main challenges:

1. Time and frequency dimensions are not modeled independently.

2. Visual cues from multiple receptive fields are not fully exploited to improve model performance.

3. Complex-valued features are handled improperly, losing critical amplitude and phase information (see the short example after this list).
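To make the third challenge concrete, here is a minimal sketch (using PyTorch's standard STFT as a stand-in for the paper's actual front end) of how a time-frequency representation carries both amplitude and phase: a model that keeps only the magnitude discards the phase term entirely.

```python
import torch

# One second of a mixed waveform at 16 kHz (random stand-in for real audio).
waveform = torch.randn(16000)

# Complex spectrogram: shape (freq_bins, frames), dtype complex64.
window = torch.hann_window(256)
spec = torch.stft(waveform, n_fft=256, hop_length=128,
                  window=window, return_complex=True)

amplitude = spec.abs()    # magnitude -- what magnitude-only models keep
phase = spec.angle()      # phase -- lost if only the magnitude is modeled

# Keeping both components allows an exact inverse transform; discarding the
# phase (e.g., assuming it is zero) audibly degrades the reconstruction.
reconstructed = torch.istft(torch.polar(amplitude, phase),
                            n_fft=256, hop_length=128, window=window)
print(spec.shape, reconstructed.shape)
```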

To address these challenges, researchers from Associate Professor Hu Xiaolin's team at Tsinghua University proposed a new audio-visual speech separation model called RTFS-Net. The model adopts a compression-reconstruction approach that significantly reduces computational complexity and parameter count while improving separation performance. RTFS-Net is the first audio-visual speech separation method with fewer than 1 million parameters, and it is also the first time-frequency-domain multimodal separation method to outperform all time-domain models.


  • Paper address: https://arxiv.org/abs/2309.17189

  • Paper home page: https://cslikai.cn/RTFS-Net/AV-Model-Demo.html

  • Code address: https://github.com/spkgyk/RTFS-Net (coming soon)

Method introduction

The overall network architecture of RTFS-Net is shown in Figure 1 below:

Figure 1. Network framework of RTFS-Net

Among them, the RTFS block (shown in Figure 2) is responsible for compressing and independently modeling the acoustic dimensions (time and frequency), minimizing information loss while creating low-complexity subspaces. Specifically, the RTFS block employs a dual-path architecture to process the audio signal efficiently along both the time and frequency dimensions. With this approach, RTFS blocks reduce computational complexity while maintaining high sensitivity and accuracy to the audio signal. The specific workflow of the RTFS block is as follows (a minimal code sketch appears after Figure 2):

1. Time-frequency compression: The RTFS block first compresses the input audio features in the time and frequency dimensions.

2. Independent dimension modeling: After completing the compression, the RTFS block independently models the time and frequency dimensions.

3. Dimension fusion: After processing the time and frequency dimensions independently, the RTFS block merges the information of the two dimensions through a fusion module.

4. Reconstruction and output: Finally, the fused features are reconstructed back to the original time-frequency space through a series of deconvolution layers.

Figure 2. Network structure of the RTFS block
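To make the compress-model-reconstruct pattern above more tangible, here is a minimal PyTorch sketch. It is not the authors' implementation: the strided convolution used for compression, the GRUs used for the two paths, the transposed convolution used for reconstruction, and all layer sizes are illustrative assumptions based only on the workflow described above.

```python
import torch
import torch.nn as nn


class DualPathTFBlock(nn.Module):
    """Illustrative compress -> model (time, then frequency) -> reconstruct block.

    Operates on a (batch, channels, time, freq) audio feature tensor.
    Layer choices are assumptions for the sketch, not the paper's exact design.
    """

    def __init__(self, channels: int = 64, hidden: int = 32):
        super().__init__()
        # 1. Compression: halve the time and frequency resolution.
        self.compress = nn.Conv2d(channels, channels, kernel_size=4, stride=2, padding=1)
        # 2. Independent modeling: one RNN scans the time axis, another the frequency axis.
        self.time_rnn = nn.GRU(channels, hidden, batch_first=True, bidirectional=True)
        self.freq_rnn = nn.GRU(channels, hidden, batch_first=True, bidirectional=True)
        self.time_proj = nn.Linear(2 * hidden, channels)
        self.freq_proj = nn.Linear(2 * hidden, channels)
        # 4. Reconstruction: upsample back to the original time-frequency resolution.
        self.reconstruct = nn.ConvTranspose2d(channels, channels, kernel_size=4, stride=2, padding=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        residual = x                               # (B, C, T, F)
        z = self.compress(x)                       # (B, C, T/2, F/2)
        b, c, t, f = z.shape

        # Time path: treat every frequency bin as an independent sequence over time.
        zt = z.permute(0, 3, 2, 1).reshape(b * f, t, c)
        zt = self.time_proj(self.time_rnn(zt)[0])
        z = z + zt.reshape(b, f, t, c).permute(0, 3, 2, 1)

        # Frequency path: treat every time frame as an independent sequence over frequency.
        zf = z.permute(0, 2, 3, 1).reshape(b * t, f, c)
        zf = self.freq_proj(self.freq_rnn(zf)[0])
        z = z + zf.reshape(b, t, f, c).permute(0, 3, 1, 2)   # 3. fuse the two paths

        # 4. Reconstruct and add the residual so information is not lost outright.
        return self.reconstruct(z) + residual


x = torch.randn(1, 64, 100, 128)                   # (batch, channels, time, freq)
print(DualPathTFBlock()(x).shape)                  # torch.Size([1, 64, 100, 128])
```

The residual connection in the sketch mirrors the stated goal of minimizing information loss while the heavy modeling happens in the compressed, lower-resolution subspace.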

The Cross-dimensional Attention Fusion (CAF) module (shown in Figure 3) effectively fuses audio and visual information and enhances the speech separation effect, with a computational cost of only 1.3% of the previous SOTA method. Specifically, the CAF module first generates attention weights using depthwise and grouped convolution operations. These weights adjust dynamically according to the importance of the input features, allowing the model to focus on the most relevant information. The generated attention weights are then applied to the visual and auditory features, so the CAF module can attend to key information across multiple dimensions; this step weights and fuses features of different dimensions to produce a comprehensive feature representation. In addition to the attention mechanism, the CAF module also employs a gating mechanism to further control how strongly features from different sources are fused, which increases the flexibility of the model and allows finer control of the information flow.

Figure 3. Schematic diagram of the CAF fusion module
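The sketch below illustrates the kind of attention-plus-gating fusion described above: depthwise and grouped convolutions produce attention weights and a gate, which are then applied to the audio features. It is a simplified, assumption-based illustration rather than the CAF module itself; the channel counts, group sizes, and the nearest-neighbor upsampling of the visual stream are all placeholders.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class AttentionGatedFusion(nn.Module):
    """Illustrative audio-visual fusion with conv-generated attention and a gate.

    Audio features: (B, Ca, T, Fq).  Visual features: (B, Cv, Tv) from a lip stream.
    All layer choices are assumptions for the sketch.
    """

    def __init__(self, audio_ch: int = 64, visual_ch: int = 512, groups: int = 8):
        super().__init__()
        # Depthwise conv over the audio features produces per-channel attention logits.
        self.audio_attn = nn.Conv2d(audio_ch, audio_ch, kernel_size=3, padding=1, groups=audio_ch)
        # Grouped 1-D convs turn the visual stream into attention weights and a gate.
        self.visual_attn = nn.Conv1d(visual_ch, audio_ch, kernel_size=3, padding=1, groups=groups)
        self.visual_gate = nn.Conv1d(visual_ch, audio_ch, kernel_size=3, padding=1, groups=groups)

    def forward(self, audio: torch.Tensor, visual: torch.Tensor) -> torch.Tensor:
        b, c, t, f = audio.shape
        # Match the visual frame rate to the audio time axis.
        visual = F.interpolate(visual, size=t, mode="nearest")

        # Attention weights from both modalities, broadcast over the frequency axis.
        a_attn = torch.sigmoid(self.audio_attn(audio))            # (B, Ca, T, Fq)
        v_attn = torch.softmax(self.visual_attn(visual), dim=1)   # (B, Ca, T)
        v_gate = torch.sigmoid(self.visual_gate(visual))          # (B, Ca, T)

        attended = audio * a_attn * v_attn.unsqueeze(-1)          # attention-weighted fusion
        gated = audio * v_gate.unsqueeze(-1)                      # gated information flow
        return attended + gated


audio = torch.randn(1, 64, 100, 128)      # (batch, channels, time, freq)
visual = torch.randn(1, 512, 25)          # (batch, channels, video frames)
print(AttentionGatedFusion()(audio, visual).shape)   # torch.Size([1, 64, 100, 128])
```

Keeping the attention term and the gate as separate branches makes the two fusion mechanisms described above explicit, at the cost of a few extra lightweight convolutions.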

The design idea of the Spectral Source Separation (S^3) block is to use complex-valued spectral information to effectively extract the target speaker's speech features from the mixed audio. This approach makes full use of the phase and amplitude information of the audio signal, improving the accuracy and efficiency of source separation. Using a complex-valued network also enables the S^3 block to process the signal more precisely when isolating the target speaker's speech, particularly in preserving detail and reducing artifacts, as shown below. In addition, the design of the S^3 block allows it to be easily integrated into different audio processing frameworks, making it suitable for a variety of source separation tasks with good generalization ability.

[Figure: example separation result produced by the S^3 block]
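As a rough illustration of complex-valued spectral masking (not the S^3 block's exact formulation), the snippet below applies a hypothetical network-predicted complex mask to a mixture spectrogram, so both the amplitude and the phase of the target speaker are estimated rather than the magnitude alone.

```python
import torch

# Mixture spectrogram and a hypothetical network-predicted mask, both complex-valued.
freq_bins, frames = 129, 100
mixture_spec = torch.randn(freq_bins, frames, dtype=torch.complex64)
mask_real = torch.randn(freq_bins, frames)    # real part predicted by the separator
mask_imag = torch.randn(freq_bins, frames)    # imaginary part predicted by the separator
complex_mask = torch.complex(mask_real, mask_imag)

# Complex multiplication scales the amplitude AND rotates the phase of every bin,
# unlike a real-valued magnitude mask, which can only scale the amplitude.
target_spec = mixture_spec * complex_mask

print(target_spec.abs().shape, target_spec.angle().shape)
```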

Experimental results

Separation effect

On three benchmark multimodal speech separation datasets (LRS2, LRS3, and VoxCeleb2), as shown below, RTFS-Net approaches or exceeds the current state-of-the-art performance while significantly reducing model parameters and computational complexity. The trade-off between efficiency and performance is demonstrated through variants with different numbers of RTFS blocks (4, 6, and 12): RTFS-Net-6 offers a good balance of performance and efficiency, while RTFS-Net-12 performs best on all tested datasets, demonstrating the advantage of time-frequency-domain methods in handling complex audio-visual synchronized separation tasks.

[Table: separation results of RTFS-Net variants on LRS2, LRS3, and VoxCeleb2]

Actual results

Mixed video: [demo on the project page]

Female speaker audio: [demo on the project page]

Male speaker audio: [demo on the project page]

Summary

With the continuous development of large-model technology, the audio-visual speech separation field has also been pursuing larger models to improve separation quality. However, this is not feasible for end devices. RTFS-Net achieves significant performance improvements while keeping computational complexity and parameter count markedly lower. This demonstrates that improving AVSS performance does not necessarily require larger models, but rather innovative, efficient architectures that better capture the intricate interplay between the audio and visual modalities.


Source: jiqizhixin.com