


Recently, the Baotou City Public Security Bureau announced that Mr. Guo, the legal representative of a technology company in Fuzhou, was defrauded of 4.3 million yuan in 10 minutes through AI face-swapping, triggering widespread discussion about the safety of artificial intelligence.
Regarding the potential risks posed by AI, "AI godfather" Geoffrey Hinton once told The New York Times that, compared with climate change, AI may pose a "more urgent" threat to humanity.
Judging from reality, this statement is no alarmism: the "evils" in Pandora's box are gradually emerging, even as AIGC rapidly pushes AI development into a new stage.
Beware of the "Evils" of Technology: Seeing Is Not Necessarily Believing
According to the "Safe Baotou" WeChat public account, the Telecommunications and Cybercrime Investigation Bureau of the Baotou City Public Security Bureau announced a telecom fraud case in which Mr. Guo, the legal representative of a technology company in Fuzhou, was defrauded of 4.3 million yuan in 10 minutes; fortunately, 3.3684 million yuan of the defrauded funds has been intercepted. The scheme relied on "intelligent AI face-swapping and voice-imitation technology." How did they do it?
This is a complicated process. From the news itself, the victim, Mr. Guo, was deceived mainly by face-swapping and voice-imitation technology, but there may be other hidden conditions behind the incident: through which channel the criminal established contact with the victim, how the account on the relevant social software was obtained, how the fraudster knew whom to impersonate, how the corresponding image and voice materials were collected, and how the victim was selected.
If we focus on the two key issues of face-swapping and voice-imitation technology, the main problems are obtaining basic data, a training framework, and real-time rendering. At the data level, audio and video of the person being impersonated can be harvested from social media and social engineering databases, or extracted from the photo album and chat history of a lost or stolen phone. At the framework level, the editing software of several mainstream domestic short-video platforms already supports face-swapping effects without any warning message, and there are more than 30 synthesis methods in foreign open-source frameworks. At the rendering level, real-time hijacked rendering can be achieved through attacks such as re-flashing the phone's firmware or installing intrusive software.
Therefore,"Seeing is not necessarily believing". There is always deliberate "evil" behind technology, which must be guarded against.
Nip It in the Bud: Defeat "Magic" with "Magic"
Risks are inevitable, so how can we use "magic" to defeat "magic"? At present, many financial institutions have implemented relevant measures and technologies; for example, liveness anti-spoofing and intermediary identification have already been applied in financial business scenarios.
"The core function of technologies such as liveness anti-spoofing and intermediary identification is to make up for the shortcomings of face recognition and effectively identify photos, videos, masks, and simulation models, especially the face-swapping and voice-changing fraud that appeared in this incident," said Feng Yue, an intelligent-core expert at the MaMaConsumer Artificial Intelligence Research Institute. "Based on such technological innovations, MaMaConsumer Finance is currently able to intercept 99% of batch fraud attacks and effectively prevent risks such as forged information, intermediary agents, counterfeit applications, long-term lending, and telecom fraud."
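To make the idea of liveness anti-spoofing more concrete, below is a minimal, illustrative sketch (not MaMaConsumer's actual system) of one classic approach: genuine faces and replayed photos or screens differ in micro-texture, which local binary patterns (LBP) can capture and a simple classifier can separate. The data pipeline, model choice, and threshold here are assumptions for illustration only.

```python
# Illustrative sketch only: LBP-texture liveness classifier (assumed data pipeline).
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC

P, R = 8, 1.0  # LBP neighbours and radius

def lbp_histogram(gray_face: np.ndarray) -> np.ndarray:
    """Normalized uniform-LBP histogram of a grayscale face crop."""
    lbp = local_binary_pattern(gray_face, P, R, method="uniform")
    n_bins = P + 2  # uniform LBP yields P + 2 distinct codes
    hist, _ = np.histogram(lbp, bins=n_bins, range=(0, n_bins), density=True)
    return hist

def train_liveness_classifier(live_faces, spoof_faces):
    """live_faces / spoof_faces: lists of grayscale face crops (hypothetical data)."""
    X = [lbp_histogram(f) for f in live_faces] + [lbp_histogram(f) for f in spoof_faces]
    y = [1] * len(live_faces) + [0] * len(spoof_faces)  # 1 = live, 0 = photo/video/mask
    return SVC(kernel="rbf", probability=True).fit(X, y)

def is_live(clf, gray_face: np.ndarray, threshold: float = 0.5) -> bool:
    """Reject the frame as a presentation attack if the 'live' probability is low."""
    return clf.predict_proba([lbp_histogram(gray_face)])[0][1] >= threshold
```

Production systems combine many such signals (texture, depth, motion, challenge-response) rather than a single texture cue, but the structure is the same: extract features that replayed media cannot reproduce, then classify.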
Compared with financial institutions, which sit at the end of the overall process, social software is obliged to introduce counterfeit-detection mechanisms and provide real-time call authentication reminders that flag whether the peer user shows abnormal behavior, abnormal image forgery, or abnormal voice forgery. Earlier risk warnings can prevent harmful outcomes more effectively. In addition to platform-side measures, the public can also take self-help measures. There is currently a class of techniques that can protect user data from being used for face-swapping and voice-changing, known as adversarial sample (adversarial example) technology. Taking face images as an example, mixing an adversarial perturbation mask that is invisible to the naked eye into the image can render face-swapping ineffective. Users can apply this kind of protection to their images, voice recordings, and other media before publishing them on social media.
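As a rough illustration of the idea, the sketch below assumes a pretrained face-embedding network (a hypothetical `embedder` in PyTorch) and applies a single FGSM-style perturbation that pushes the photo's embedding away from its original value while keeping the per-pixel change imperceptible. Real "cloaking" tools use far stronger and more transferable perturbations; this is only a conceptual sketch.

```python
# Illustrative sketch only: FGSM-style "cloaking" of a face photo.
import torch
import torch.nn.functional as F

def cloak_image(embedder: torch.nn.Module, image: torch.Tensor,
                epsilon: float = 4 / 255) -> torch.Tensor:
    """Return a visually similar image whose face embedding is disturbed.

    embedder: assumed pretrained face-embedding network (hypothetical).
    image:    float tensor in [0, 1], shape (1, 3, H, W).
    epsilon:  maximum per-pixel change; small values keep the edit invisible.
    """
    embedder.eval()  # freeze normalization statistics for the sketch
    image = image.clone().detach().requires_grad_(True)
    with torch.no_grad():
        original = embedder(image.detach())          # embedding of the clean photo
    # Loss grows as the cloaked embedding drifts away from the original one.
    loss = -F.cosine_similarity(embedder(image), original).mean()
    loss.backward()
    perturbation = epsilon * image.grad.sign()       # single FGSM step
    return (image + perturbation).clamp(0.0, 1.0).detach()
```

After cloaking, the published photo still looks the same to a person, but downstream face-swapping or recognition models that rely on the embedding no longer see the original identity cleanly.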
Finally,"precautionary awareness" is the cornerstone of all "magic", and the public also needs to strengthen the security protection of personal belongings and information.
"Don't click on unknown text message links, email links, etc., don't scan QR codes from unknown sources, download APPs, don't easily provide your face and other personal biometric information to others, don't easily reveal your ID card, bank card , verification codes and other information, and do not overly disclose or share animations, videos, etc.," Feng Yue suggested. "There is no guarantee that mobile phones and other mobile devices will not be lost. You can regularly clear WeChat chat records, hide relevant traces with people you have close relationships with, encrypt mobile photo albums, or activate the data protection function of your mobile phone. These daily behaviors may lead to fraud. Intercepted outside."
Lasting results take sustained, step-by-step effort. Preventing the "misuse" of artificial intelligence technology will remain a core issue of the digital era.
Source: Financial Industry Information
