# ChatGPT that can understand speech is here: 10 hours of recording thrown in, ask whatever you want
Large language models (LLMs) are changing user expectations in every industry. However, building generative AI products centered on human speech remains difficult, because audio files pose a challenge for LLMs.

A key obstacle in applying LLMs to audio is the model's context window. Before an audio file can be fed to an LLM, it must be transcribed to text, and the longer the recording, the harder it becomes to engineer around the context window limit. Yet in real work scenarios we often need an LLM to process very long recordings, such as extracting the core content of a several-hour meeting or finding the answer to a specific question in an interview.
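The usual workaround is to split a long transcript into pieces that each fit the model's context window. A minimal sketch of such a chunker (a hypothetical helper for illustration, not AssemblyAI's implementation; the tokens-per-word ratio is a rough heuristic, real systems use the model's own tokenizer):

```python
def chunk_transcript(text: str, max_tokens: int = 8000,
                     tokens_per_word: float = 1.3) -> list[str]:
    """Split a transcript into chunks that each fit an LLM context window.

    Token counts are estimated from word counts; this is only a heuristic.
    """
    words = text.split()
    words_per_chunk = int(max_tokens / tokens_per_word)
    return [
        " ".join(words[i:i + words_per_chunk])
        for i in range(0, len(words), words_per_chunk)
    ]

# A 10-hour meeting at ~150 words per minute yields ~90,000 words,
# far beyond a vanilla 8K-token context window.
transcript = ("word " * 90_000).strip()
chunks = chunk_transcript(transcript)
print(len(chunks))  # → 15
```

Each chunk is then summarized or queried separately, and the per-chunk results are merged, which is exactly the engineering burden LeMUR aims to hide behind a single call.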
Recently, the speech recognition AI company AssemblyAI launched a new model called LeMUR. Just as ChatGPT can process dozens of pages of PDF text, LeMUR can transcribe and process up to 10 hours of recordings, then summarize the core content of the speech and answer users' questions.
Trial address: https://www.assemblyai.com/playground/v2/source
LeMUR is short for Leveraging Large Language Models to Understand Recognized Speech. It is a new framework for applying powerful LLMs to transcribed speech. With just one line of code (via AssemblyAI's Python SDK), LeMUR can quickly process transcripts of up to 10 hours of audio, which amounts to roughly 150,000 tokens of text. In contrast, off-the-shelf, vanilla LLMs are limited to context windows of about 8K tokens, or roughly 45 minutes of transcribed audio.
To reduce the complexity of applying LLMs to transcribed audio, LeMUR's pipeline combines intelligent segmentation, a fast vector database, and several reasoning steps (such as chain-of-thought prompting and self-evaluation), as shown below:
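The segmentation and vector-retrieval stages of such a pipeline can be illustrated with a toy example: transcript segments are embedded, stored in an index, and the segment most relevant to the user's question is retrieved and spliced into the LLM prompt. This is purely an illustrative sketch using a bag-of-words "embedding", not LeMUR's actual components:

```python
import math
import re
from collections import Counter

def embed(text: str) -> Counter:
    # Toy bag-of-words "embedding"; real pipelines use learned dense vectors.
    return Counter(re.findall(r"\w+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Intelligent segmentation would produce segments like these from a transcript.
segments = [
    "The quarterly sales numbers exceeded the forecast.",
    "We discussed hiring two backend engineers.",
    "The customer asked about refund policy and pricing.",
]
index = [(seg, embed(seg)) for seg in segments]  # the "vector database"

question = "What did the customer ask about pricing?"
q_vec = embed(question)
best = max(index, key=lambda item: cosine(q_vec, item[1]))[0]

# The retrieved segment is spliced into the reasoning prompt sent to the LLM.
prompt = f"Context: {best}\nQuestion: {question}\nAnswer step by step:"
print(best)
```

Retrieval keeps the prompt small regardless of how long the original recording was; the reasoning steps (chain-of-thought, self-evaluation) then operate only on the retrieved context.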
Figure 1: LeMUR's architecture lets users send long and/or multiple audio transcripts to an LLM with a single API call.
In the future, LeMUR is expected to see wide use in customer service and other fields.

"LeMUR unlocks some amazing new possibilities that I didn't think would be feasible just a few years ago. Being able to effortlessly extract valuable insights, such as determining the best next action and discerning call outcomes like sales, appointments, or the purpose of the call, feels truly amazing." — Ryan Johnson, Chief Product Officer at CallRail, a call tracking and analytics company
## What possibilities does LeMUR unlock?

## Apply LLMs to multiple audio transcripts

LeMUR lets users get LLM feedback on multiple audio files in a single call, handling up to 10 hours of voice transcription, roughly 150K tokens of converted text.
## Reliable and safe output

Because LeMUR includes safety measures and content filters, the LLM responses it returns are less likely to contain harmful or biased language.
## Supplement context at inference time

Users can supply additional contextual information, which the LLM leverages to produce personalized and more accurate output.
## Modular, rapid integration

LeMUR always returns structured data as processable JSON. Users can further customize LeMUR's output format to ensure the LLM's response matches the format expected by the next piece of business logic (for example, converting an answer to a Boolean value). With this in place, users no longer need to write custom code to post-process the LLM's output.

Using the trial link provided by AssemblyAI, Machine Heart tested LeMUR. The interface supports two input methods: uploading an audio or video file, or pasting a web link.
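The answer-to-Boolean conversion described above could look like this in downstream business logic. The JSON shape here is hypothetical, not LeMUR's documented schema:

```python
import json

def answer_to_bool(payload: str) -> bool:
    """Map a yes/no LLM answer onto a Boolean for downstream business logic."""
    answer = json.loads(payload)["answer"].strip().lower()
    return answer in {"yes", "true", "y"}

# Hypothetical structured response; the real LeMUR schema may differ.
raw = '{"question": "Did the call end in a sale?", "answer": "Yes"}'
print(answer_to_bool(raw))  # → True
```

Because the response is already structured JSON, the calling code stays a simple parse-and-map, rather than free-text scraping of the LLM's reply.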
We used a recent interview with Hinton as input to test the performance of LeMUR.
After uploading, the system prompts us to wait for a while because it needs to convert speech into text first.
The interface after transcription is as follows:
On the right side of the page, we can ask LeMUR to summarize the interview or answer questions. LeMUR basically gets the job done with ease:
If the audio you are processing is a speech or a customer service call, you can also ask LeMUR for improvement suggestions. However, LeMUR does not yet appear to support Chinese. Interested readers can try it out.