With a daily income of 4 million, the first batch of AI scammers have taken up their posts
While you are still worrying about whether ChatGPT will one day replace your job, and thinking about how to use artificial intelligence to improve your productivity, one group of people has already made a fortune from this new technology.
They are... scammers.
Scammed 4.3 million in 10 minutes,
The first batch of people who got rich by relying on AI were actually scammers
Imagine you receive a WeChat video call from a friend one day. The person on the other side of the camera looks and sounds exactly like the friend you remember. When he asks to borrow 4.3 million yuan as a deposit for a job bid, what would you do?
Recently, Mr. Guo, the legal representative of a technology company in Fuzhou, faced exactly this situation. Trusting what he saw, he transferred 4.3 million yuan to his "friend's" account. Only when he called his friend afterwards did he discover that a scammer had hijacked the friend's WeChat account and then used AI face-swapping and voice-mimicking technology to defraud him.
The frequency of similar things happening across the ocean is also growing explosively.
According to CNN, in April this year Jennifer DeStefano, who lives in Arizona, received a strange phone call. The voice on the line was her daughter Brianna, who was away preparing for a ski competition, crying for help. Seconds later, a deep male voice threatened over the phone: "Listen, your daughter is in my hands. I want a ransom of 1 million US dollars. If you call the police or tell anyone, you will never see her again."
When Jennifer said she couldn't afford the $1 million, the man on the other end dropped the ransom to $50,000. Desperate to save her daughter, Jennifer ignored the warnings of her friends and husband and began discussing how to pay. Only when Brianna herself called to say she was safe was the loss avoided.
In March of this year, the "Washington Post" also reported a fraud case with almost the same modus operandi, except that the victims were an elderly couple over 70 years old.
The victimized elderly person (Photo source: "Washington Post")
In May, the U.S. Federal Trade Commission (FTC) issued a warning: criminals are using AI voice technology to fake emergencies and defraud victims of money or information. Impersonating a victim's relatives or friends is nothing new, but there is no doubt that AI has made it trivially easy to clone a person's voice and fake a person's video. The number of such scams in the United States surged by 70% last year, with victims losing a total of $2.6 billion.
If this trend continues, I am afraid that the first people to achieve financial freedom through AI technology will be a group of scammers hiding behind the screen.
The dark side of artificial intelligence
Forging a person's voice and video still requires a certain technical threshold, but the emergence of ChatGPT has made AI fraud easier still.
According to the overseas cybersecurity platform GBHackers, ChatGPT has attracted large numbers of online fraudsters because of its high productivity and extremely low barrier to use.
For example, scammers use ChatGPT to conduct a "fake romance": self-introductions, chat scripts and carefully crafted love letters can all be mass-produced by artificial intelligence, and can even be personalized by feeding in details about the target, so that the person on the other side of the screen falls for the con faster. ChatGPT can also help scammers write payment-collection programs or phishing websites that steal victims' bank card information.
If you directly ask ChatGPT to write a phishing program for you, it will refuse; but if you claim to be a teacher who wants to show students what phishing software looks like, it will obligingly write the website for you.
What's even more frightening is that people struggle to tell whether the person on the other side of the screen is a human or a machine. McAfee, the world's largest security technology company, once used AI to generate a love letter and sent it to 5,000 users worldwide. Even after being told the letter might have been generated by artificial intelligence, 33% of respondents were still willing to believe it was written by a real person.
In fact, using ChatGPT to carry on a "fake romance" with a victim is only an entry-level scam. More skilled hackers have begun using artificial intelligence to generate ransomware and malicious code in batches.
To make it easier for developers to build applications on top of the GPT models, OpenAI provides application programming interfaces (APIs). Hackers use these interfaces to hook the GPT models into external applications, bypassing the safety guardrails and getting the models to write criminal programs.
Programs that evade these safety controls are openly sold on the dark web for as little as a few dollars. What buyers can do with them is alarming: steal source code and private user information, and generate attack tools and ransomware.
The Financial Times recently reported on a SIM-swap attack script generated with the help of ChatGPT. Scammers can use such a program to defeat the mobile carrier's controls over phone numbers and move a number from the owner's SIM card to one controlled by the attacker, thereby taking over the victim's phone.
"Although ChatGPT is currently just a content-generation tool and is not directly involved in crime, this marks the point where people start using artificial intelligence to break into other people's systems. Criminals with lower technical skills will gain far more powerful means of attack," an artificial intelligence practitioner told the Financial Times.
Pandora’s box, can it still be closed?
As artificial intelligence's growing social influence mixes with its criminal potential, ChatGPT's various security vulnerabilities have made people increasingly uneasy. "How to regulate ChatGPT" has become a focus of debate in many countries.
In an article, IBM's global ethics institute urged companies to put ethics and responsibility at the top of their AI agendas. Many tech luminaries, Musk among them, signed an open letter demanding that before anyone trains an artificial intelligence system more powerful than GPT-4, the industry should agree on a shared safety protocol, reviewed and supervised by external experts.
ChatGPT has raised concerns among legislators worldwide, who are weighing whether to bring it into a regulatory framework. More worrying than the safety of artificial intelligence itself, some government officials say, is lawmakers' lagging understanding of the technology.
The Associated Press argues that because technology giants have driven American innovation for the past 20 years, the government has long been reluctant to regulate Big Tech for fear of strangling ideas. As a result, many of the people now moving to tighten oversight of emerging technologies have little understanding of them.
After all, the last time the U.S. Congress enacted legislation to regulate technology was the Children's Online Privacy Protection Act of 1998.
According to Reuters, many countries have begun introducing rules to govern artificial intelligence of the kind OpenAI represents. In March this year, Italy briefly banned ChatGPT over data-security concerns, lifting the ban a month later. An Italian government official said in May that the government would hire artificial intelligence experts to oversee its compliant use.
Facing the concerns of governments around the world, OpenAI Chief Technology Officer Mira Murati told Reuters that "the company welcomes all parties, including regulators and governments, to start getting involved." How lawmakers will keep pace with technological development, especially in a field evolving as fast as artificial intelligence, remains to be seen.
The only certainty is this: once Pandora's box is opened, it will not be closed so easily.
References:
CNN: ‘Mom, these bad men have me’: She believes scammers cloned her daughter’s voice in a fake kidnapping
REUTERS: Factbox: governments race to regulate AI tools
Financial Times: Musk and other tech experts call for ‘pause’ on advanced AI systems
Editor: Echo
Unless otherwise noted, pictures come from Oriental IC