
Will AI speech generators become the next major security threat?

王林 | 2023-04-27 16:55:08

Artificial intelligence is a powerful technology that promises to revolutionize our lives. That has never been more apparent than today, when powerful AI tools are available to anyone with an Internet connection.

These tools include AI speech generators: advanced software that imitates human speech so closely that it can be impossible to distinguish between the two. What does this mean for cybersecurity?

How do AI speech generators work?

Speech synthesis, the process of artificially generating human speech, has been around for decades. Like all technology, it has undergone significant changes over the years.

Users of Windows 2000 and XP may remember Microsoft Sam, the default male text-to-speech voice in those operating systems. Microsoft Sam got the job done, but the voice it produced was mechanical, stiff, and unmistakably artificial. The tools we have at our fingertips today are far more advanced, thanks in large part to deep learning.

Deep learning is a machine learning method based on artificial neural networks. Thanks to these networks, modern AI can process data much the way neurons in the human brain interpret information. In other words, the more human-like artificial intelligence becomes, the better it gets at imitating human behavior.

In a nutshell, that is how modern AI speech generators work: the more speech data they are exposed to, the better they become at imitating human speech. Thanks to recent advances in the technology, state-of-the-art text-to-speech software can essentially replicate any voice it is fed.
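To make this concrete, here is a minimal sketch of what modern neural text-to-speech looks like in practice. It assumes the open-source Coqui TTS Python package and one of its pretrained English models; the model name and output path are illustrative choices, not anything the original article prescribes:

```python
# A minimal neural text-to-speech sketch, assuming the open-source
# Coqui TTS package (pip install TTS); model and paths are illustrative.
from TTS.api import TTS

# Load a pretrained English model (downloaded on first use).
tts = TTS(model_name="tts_models/en/ljspeech/tacotron2-DDC")

# Synthesize speech from plain text and write it to a WAV file.
tts.tts_to_file(
    text="Deep learning lets software imitate human speech.",
    file_path="output.wav",
)
```

A model like this is trained on many hours of recorded speech; as noted above, the more speech data a model sees during training, the more natural its output becomes.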

How do threat actors use artificial intelligence speech generators?

As you might expect, this technology is being abused by threat actors: not just cybercriminals in the classic sense, but also disinformation agents, scammers, black-hat marketers, and trolls.

The moment ElevenLabs released a beta version of its text-to-speech software in January 2023, far-right trolls on the message board 4chan began abusing it. They used the technology to replicate the voices of celebrities such as BBC presenter David Attenborough and actress Emma Watson, making it appear as if these celebrities were delivering vicious, hateful tirades.

As Vice reported at the time, ElevenLabs admitted that some people were abusing its software, particularly its voice-cloning feature. This feature allows anyone to "clone" another person's voice: you upload a minute-long recording and let the AI do the rest. Presumably, the longer the recording, the more realistic the output.
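To illustrate how low the barrier has become, here is a sketch of zero-shot voice cloning using an open-source model rather than ElevenLabs' proprietary service. It assumes Coqui TTS's multilingual XTTS model; the reference recording my_voice_sample.wav is a hypothetical one-minute clip of the target speaker:

```python
# A sketch of zero-shot voice cloning with an open-source model (Coqui XTTS).
# "my_voice_sample.wav" is a hypothetical short recording of the target voice.
from TTS.api import TTS

tts = TTS(model_name="tts_models/multilingual/multi-dataset/xtts_v2")

tts.tts_to_file(
    text="This sentence was never actually spoken by the target speaker.",
    speaker_wav="my_voice_sample.wav",  # reference recording to clone
    language="en",
    file_path="cloned_output.wav",
)
```

That a single audio file and a few lines of code suffice is precisely what makes the abuse described here so easy to scale.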

In March 2023, a video that went viral on TikTok caught the attention of The New York Times Magazine. In it, celebrity podcast host Joe Rogan and Dr. Andrew Huberman, a frequent guest on his show The Joe Rogan Experience, appear to discuss a "libido-enhancing" caffeine drink. The video makes it seem as if both Rogan and Huberman unequivocally endorse the product; in reality, their voices were cloned using artificial intelligence.

Around the same time, Santa Clara, California-based Silicon Valley Bank collapsed due to risk-management failures and other problems and was taken over by regulators. It was the largest U.S. bank failure since the 2008 financial crisis, and it sent shockwaves through global markets.

Adding to the panic was a fake audio recording of U.S. President Joe Biden. In the recording, Biden can clearly be heard warning of an impending "collapse" and instructing the U.S. government to "fully use the power of the media to reassure the public." Fact-checking sites like PolitiFact quickly debunked the clip, but millions of people may have heard it by then.

If AI voice generators can be used to impersonate celebrities, they can also be used to impersonate ordinary people, and that is exactly what cybercriminals have been doing. According to ZDNet, thousands of Americans fall for voice phishing (vishing) scams every year. In 2023, an elderly couple made national headlines when they received a phone call from their "grandson," who claimed to be in jail and asked them for money.

If you've ever uploaded a YouTube video (or appeared in one), joined a large group call with people you don't know, or otherwise put your voice on the internet, then in theory you or your loved ones could be at risk. What could stop a scammer from uploading your voice to an AI generator, cloning it, and then impersonating you in a call to your family?

AI voice generators are upending the cybersecurity landscape

You don't have to be a cybersecurity expert to recognize how dangerous artificial intelligence can be in the wrong hands. While the same can be said of all technologies, AI is a unique threat for several reasons.

First, it is a relatively new technology, which means we don't yet fully understand what it is capable of. Modern AI tools allow cybercriminals to scale and automate their campaigns like never before, while exploiting the public's relative ignorance of the issue. In addition, generative AI enables threat actors who lack knowledge and skills to create malicious code, build deceptive websites, spread spam, write phishing emails, generate realistic images, and produce endless hours of fake audio and video content.

Crucially, this cuts both ways: AI is also being used to protect systems, and likely will be for decades to come. Predictably, something of an AI arms race will play out between cybercriminals and the cybersecurity industry, since the defensive and offensive capacities of these tools are essentially equal.
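As a taste of what that defensive side can look like, here is an illustrative toy detector that flags synthetic speech from spectral (MFCC) features. Real deepfake-audio detectors are far more sophisticated, and the training file lists here are hypothetical placeholders:

```python
# An illustrative toy detector for synthetic speech, using librosa for
# MFCC features and scikit-learn for classification. The file names are
# hypothetical; production detectors are far more sophisticated.
import librosa
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def mfcc_features(path: str) -> np.ndarray:
    """Summarize a clip as the mean of its MFCC frames."""
    audio, sr = librosa.load(path, sr=16000)
    mfcc = librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=20)
    return mfcc.mean(axis=1)

real_files = ["real_01.wav", "real_02.wav"]      # genuine recordings
fake_files = ["cloned_01.wav", "cloned_02.wav"]  # AI-generated recordings

X = np.array([mfcc_features(f) for f in real_files + fake_files])
y = np.array([0] * len(real_files) + [1] * len(fake_files))  # 1 = synthetic

clf = RandomForestClassifier(n_estimators=100).fit(X, y)
print(clf.predict([mfcc_features("suspicious_call.wav")]))  # 1 -> likely fake
```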

For ordinary people, the proliferation of generative AI demands a complete rethinking of security practices. However exciting and useful AI may be, at a minimum it blurs the line between what is real and what is not, and at worst it exacerbates existing security problems and gives threat actors a new arena to operate in.

Speech generators demonstrate the disruptive potential of artificial intelligence

As soon as ChatGPT hit the market, talk of regulating artificial intelligence began to heat up. But any attempt to curb this technology would likely require international cooperation on a scale we haven't seen in decades, which makes it unlikely to succeed.

The genie is out of the bottle. All we can do is get used to the technology and adapt to it, and hope the cybersecurity industry adjusts accordingly.

Source: https://www.makeuseof.com/ai-voice-generators-security-threat

