
Meta releases audio AI model that simulates real-person speech in just 2 seconds

WBOY | 2023-06-21 15:20:17

Recently, Meta released the Voicebox AI model, which has significant advantages in audio simulation.

It is reported that Voicebox needs only a 2-second audio sample to accurately capture a speaker's vocal characteristics and timbre, and can then synthesize speech from input text in that voice.


Voicebox is a generative AI model that helps with audio editing, sampling, and styling.

In the future, this technology could help creators easily edit audio tracks, assist people with damaged vocal cords in "speaking" again, let visually impaired people hear their friends' written messages read aloud, and allow people to speak any foreign language in their own voice.

It can also automatically fill in missing content in a voice clip based on the audio before and after the gap.

According to Meta, Voicebox could provide natural, realistic voices for AI assistants or NPCs in the future metaverse, greatly improving user immersion.

Voicebox’s versatility supports a variety of tasks, including:

Contextual text-to-speech synthesis: Using audio samples as short as two seconds, Voicebox can match the style of the sample and apply it to text-to-speech generation.

Voice Editing and Noise Reduction: Voicebox can recreate parts of speech interrupted by noise or replace misspoken words without having to re-record the entire speech. For example, you can identify a segment of speech interrupted by a barking dog, crop it, and then instruct Voicebox to regenerate the segment—like an eraser for audio editing.

Cross-language conversion: Given a sample of someone's speech and a text in English, French, German, Spanish, Polish, or Portuguese, Voicebox can generate a reading of the text in any of these languages, even if the sample speech and the text are in different languages. In the future, people may be able to use this feature to communicate naturally and authentically even when they do not speak each other's languages.

Flow matching, the training method used by Voicebox, has been shown to improve on the performance of diffusion models. Voicebox outperforms VALL-E, the previous state-of-the-art English model, in intelligibility (1.9% vs. 5.9% word error rate) and audio similarity (0.681 vs. 0.580), while being up to 20x faster. For cross-language style transfer, Voicebox outperforms YourTTS, reducing the average word error rate from 10.9% to 5.2% and improving audio similarity from 0.335 to 0.481.
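The article does not detail how flow matching works, so the following is a minimal illustrative sketch of the conditional flow-matching objective, not Meta's actual training code. It assumes the common setup of a straight-line path from noise to data, with a toy placeholder model standing in for the real neural network:

```python
import numpy as np

rng = np.random.default_rng(0)

def flow_matching_pair(x0, x1, t):
    """Linear path x_t between noise x0 and data x1, and its target velocity."""
    xt = (1.0 - t) * x0 + t * x1
    v_target = x1 - x0  # velocity of the straight-line path is constant
    return xt, v_target

def flow_matching_loss(model, x1_batch):
    """One training step's loss: regress model(x_t, t) onto the path velocity."""
    x0 = rng.standard_normal(x1_batch.shape)       # noise sample
    t = rng.uniform(size=(x1_batch.shape[0], 1))   # random time in [0, 1]
    xt, v = flow_matching_pair(x0, x1_batch, t)
    pred = model(xt, t)
    return float(np.mean((pred - v) ** 2))

# Toy stand-in for the network (the real model is a large Transformer).
def toy_model(xt, t):
    return 0.1 * xt + t

data = rng.standard_normal((8, 4))  # pretend batch of audio features
loss = flow_matching_loss(toy_model, data)
print("flow matching loss:", loss)
```

Minimizing this regression loss trains the model to predict the velocity field that transports noise to data, which can then be integrated at inference time to generate samples.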


Voicebox achieves new state-of-the-art results, outperforming VALL-E and YourTTS in word error rate.
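Word error rate, the metric cited above, is the word-level edit distance between a reference transcript and the transcript of the generated speech, divided by the reference length. A minimal sketch of the standard computation:

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER = word-level Levenshtein distance / number of reference words."""
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic-programming table for edit distance over words
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i  # deletions
    for j in range(len(hyp) + 1):
        d[0][j] = j  # insertions
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution
    return d[len(ref)][len(hyp)] / max(len(ref), 1)

print(word_error_rate("the cat sat", "the cat sat"))  # → 0.0
print(word_error_rate("the cat sat", "the bat sat"))  # one substitution in three words → 1/3
```

A 1.9% WER, for example, means roughly two word errors per hundred reference words.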


Voicebox also achieves state-of-the-art results on audio style similarity metrics in both English and multilingual benchmarks.

It is worth mentioning that Meta is aware of the potential for misuse of Voicebox in audio forgery, and is therefore researching ways to distinguish real speech from Voicebox-generated speech.

Until a solution is found, Meta will not release the Voicebox model to the public, to avoid potential harm.

Editor's comment: AI is now being applied across many fields. As the first versatile, efficient speech model to successfully generalize across tasks, Voicebox may well open a new era of speech-generation AI. But if Meta cannot effectively counter audio fraud, the technology may have to remain restricted.


Statement: This article is reproduced from sohu.com. If there is any infringement, please contact admin@php.cn for removal.