
"AI fraud" has become a hot search topic. How can AI "do good" after being "let loose"? | Titanium Hot Review

WBOY
2023-06-07 10:01:26


Recently, the Telecom Cybercrime Investigation Bureau of the Baotou Public Security Bureau disclosed a case of telecom fraud carried out with AI technology: the victim was defrauded of 4.3 million yuan in just 10 minutes, and "AI fraud" quickly became a trending search topic.

As AI technology matures, its output is becoming ever more realistic and deceptive. AI-generated singers and streamers who look just like celebrities show that both appearance and voice can now be "highly imitated". How can people guard against this?

What progress has been made in setting standards for generative AI? How can applications of AI technology avoid legal risks?

Will the current wave of infringement and fraud incidents hold back the industry's development?

For this issue of "Titanium Hot Review", we invited senior media figures to discuss the trending topic: after "AI fraud" has been "let loose", how can AI be made to "do good"? Below are some highlights of the discussion.

On how people can protect themselves when appearance and voice can be "highly imitated":

Jia Xiaojun, manager of Beido Finance, said that tools are neutral; the key lies in how they are used, and "AI fraud" is a typical example. For ordinary people, scams built on this kind of AI technology are very hard to spot, especially when both voice and video are simulated.

The root cause of AI fraud is the leakage of personal information. With sensitive information in hand, scammers can accurately cite accounts, addresses and other details, and even obtain address books, which lets them talk victims out of their money.

When money is involved, ordinary people must stay alert, check carefully for anything suspicious, cross-verify through other channels where possible, and keep a close eye on their wallets.

From a regulatory perspective, authorities should bring in more technical talent and develop countermeasure technologies that can warn and protect users in time, while platforms should raise their access thresholds and take reasonable interception measures.

Guo Shiliang, an expert at the "Whale Platform" think tank, said that everyone is talking about AI, yet people mostly see its benefits and rarely its downsides. In the Baotou case, someone was defrauded of 4.3 million yuan in 10 minutes. The key was that the scammer used AI face-swapping technology: the victim verified the other party over video before transferring the money, never suspecting it was a scam, and a sophisticated, seemingly flawless one at that.

Later, with the bank's full assistance, 3.3684 million yuan of the defrauded funds were intercepted in the fraudulent account within 10 minutes, but 931,600 yuan is still being recovered. The scam was very sophisticated, combining voice synthesis with AI face-swapping, and the scammers even compromised the contact channels of the victim's friends to carry out the theft.

The AI era brings both opportunities and challenges. Scammers are exploiting new technologies, so anti-fraud methods must keep pace, and everyone's fraud awareness needs to improve: whenever something sensitive like a transfer is involved, be on guard. Laws and regulations also need to catch up, and as the technology advances, regulatory enforcement and anti-fraud technology must advance with it.

Jiang Han, a senior researcher at Pangu Think Tank, said that as artificial intelligence develops, AI fraud has begun to appear alongside it. For criminals, this method enables higher-frequency attacks, more precise targeting and more effective deception; it is the double-edged sword of technological progress. How should we view and respond to this type of fraud?

First, the frequency of these incidents means everyone must keep learning and sharpen their ability to spot potential fraud. AI has strengthened criminals' capabilities in data analysis, model training and automated decision-making. To guard against such scams, consumers need to improve their technical awareness and risk-identification skills: avoid answering calls from unknown numbers, do not be gullible, and protect your privacy and property.

Second, enterprises need to strengthen self-discipline, regulators need to guide market standards, and individuals should further strengthen their awareness of risk prevention. AI can bring efficiency and competitive advantages to enterprises, but they must balance innovation with compliance, especially in sensitive areas such as user information and privacy. Regulators, for their part, should guide the development of market standards and crack down on such criminal activity. Individuals should take an active part in public safety, learn about new fraud techniques, and improve their ability to recognize potential scams.

In the long run, the trend toward generative large models will continue, but putting AI to good use requires joint effort. AI development must attend to compliance and security and reconcile the technology with existing law, morality and ethics. It cannot simply chase speed and efficiency; it should also be grounded in an understanding of human nature, stay true to the goal of using AI to serve society, refine AI's paradigms, guard against abuse and harm, and support the sound development of the field.

Bi Xiaojuan, chief editor of the New Economic Observer Group, said that, as the saying goes, technology has always been a double-edged sword. Over the past few years, big data and the sharing economy have delivered considerable economic and social benefits, but the by-products have included large-scale leaks of personal information and a steady stream of telecom fraud and harassing text messages. With regulatory gaps inspected and corrected, policies following up in time, and public awareness improving, such fraud has been alleviated to a certain extent.

The same process is now playing out in AI. The commercial value of AI technology continues to be unlocked, but it has also given fraudsters wings: by impersonating a victim's relatives and friends through face-swapping, voice cloning and similar means, they can pass off the fake as real, making scams hard to detect and easy to fall for. Worse, AI lets criminals target large numbers of users simultaneously, widening the scope of harm and the scale of property losses.

But ordinary users are by no means helpless against AI. First, raise your awareness of risks to personal property and be wary of this new type of fraud. Second, protect your personal information and avoid registering on large numbers of unofficial or dating apps. Third, if someone asks to borrow money or requests a transfer over audio or video, verify through multiple channels no matter how urgent they sound, and confirm offline before transferring if possible; for large transfers, consider handling them at a bank counter. Finally, if you are unfortunately scammed, call the police immediately and contact the bank to stop the loss as much as possible.

Laws, regulations and supervision should follow up in a timely manner, setting thresholds and firewalls for the development and application of AI technology and steering AI toward good. The good news is that in early April, the Cyberspace Administration of China drafted the "Measures for the Management of Generative Artificial Intelligence Services (Draft for Comments)" and solicited public opinions, aiming to promote the scientific, orderly, healthy and standardized development of China's AI industry and to prevent chaos and violations.

AI practitioners must follow the direction of national regulatory policy, stay sensitive to AI ethics, and unify technological development with corporate responsibility.

Industry observer Wenzi said that research on AI safety dates back to 2008 and now covers many areas. In a 2022 survey of the natural language processing (NLP) community, 37% agreed or slightly agreed that AI decision-making could lead to a catastrophe "at least as bad as all-out nuclear war". There are dissenting voices, however: Andrew Ng, an adjunct professor at Stanford University, likened such fears to "worrying about overpopulation on Mars before we have even set foot on the planet".

Dr. Samuel Bowman's view is a more balanced take on the issue: rather than halting AI/ML research worldwide, people should ensure that every sufficiently powerful AI system is built and deployed responsibly.

Zhang Jingke, founder of Internet Beijing Diary, said that with AI mass-producing information at high speed, humanity will inevitably face a flood of false information a hundred times larger than that of the Internet information explosion.

To protect most people from harm, we can borrow from the traditional models of news attribution and copyright protection: disseminated information should carry its source, and AI-written text should be labeled as such, with its author identified. This is the necessary price netizens and capital pay for cutting out information intermediaries; in the long run, a free flow of information without supervision will be neither fair nor efficient.
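As a rough illustration of this labeling idea, the sketch below attaches a provenance record to a piece of content and signs it so later tampering can be detected. Everything here is an assumption for the example: the field names, the `generator` tag for AI-written text, and the signing key do not correspond to any existing standard.

```python
import hashlib
import hmac
import json

# Hypothetical signing key held by the publishing platform (illustrative only).
PLATFORM_KEY = b"platform-secret-key"

def label_content(text, source, generator=None):
    """Attach a provenance label to a piece of content.

    `generator` names the AI tool if the text is machine-written,
    or is None for human-authored content.
    """
    record = {
        "body": text,
        "source": source,
        "generator": generator,  # e.g. "demo-model" for AI writing
        "digest": hashlib.sha256(text.encode()).hexdigest(),
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(PLATFORM_KEY, payload, "sha256").hexdigest()
    return record

def verify_label(record):
    """Check that the label was not altered after signing."""
    unsigned = {k: v for k, v in record.items() if k != "signature"}
    payload = json.dumps(unsigned, sort_keys=True).encode()
    expected = hmac.new(PLATFORM_KEY, payload, "sha256").hexdigest()
    return hmac.compare_digest(expected, record["signature"])

labeled = label_content("Market summary ...", source="NewsDesk", generator="demo-model")
assert verify_label(labeled)
assert labeled["generator"] == "demo-model"
```

A reader of the labeled record can see at a glance whether the text was AI-generated and whether the label itself is intact; a real scheme would use public-key signatures rather than a shared secret.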

What is the current progress in building standards for generative AI? How can applications of AI technology avoid legal risks?

Chu Shaojun, director of Duoduoshuo, said that, first, in the field of communication and public opinion, "good news stays at home while bad news travels a thousand miles". Using AI to defraud is nothing new in essence; it is just one of thousands of fraud methods. But because it is tied to popular AI technologies and applications, it easily becomes a trending topic and attracts attention from all sides, at times even turning into a crusade against new technologies and new fields, with calls for regulation flaring up again and again. For now, there is no need to over-focus on these new areas or regulate them prematurely: any new technology needs time and tolerance to develop, and sometimes even a certain amount of room for "wild growth".

Second, for new fields and technologies, regulation and legislation often lag behind, which calls for more self-discipline from enterprises and industries. While developing their business and technology, enterprises must take their social responsibilities into account and do more prediction and prevention at the R&D and design stages. User education and science popularization should also be strengthened, so that users and the general public understand these new technologies and problems can be nipped in the bud. Users themselves, meanwhile, must keep learning and stay attentive: be cautious, verify anything involving money, lending or transfers through multiple channels, and never believe that "a pie will fall from the sky".

Third, while new technologies can be used for fraud, they can equally be used for fraud prevention. Technology is largely neutral; what matters is how it is applied and regulated. The current spate of infringement and fraud incidents will not hold back the industry; it will push it toward standardized development. After a period of wild growth, frequent negative incidents trigger public-opinion oversight and prompt enterprises and industries to discipline themselves, while regulation and legislation are sure to accelerate, ultimately pushing the industry toward more standardized development.

Finally, we must guard against extreme thinking and over-regulation. Even when negative events occur, regulation and legislative standards are needed on the one hand, but on the other they must leave the industry enough tolerance, time and room to develop. In the bigger picture, the future of AI, and of AIGC in particular, may affect not only enterprises and industries but also a country's voice in the technology arena. All parties share the responsibility of helping new technologies develop better and more steadily.

Internet technology blogger Pipitui said that AI technology is becoming more and more mature, so are there risks, and how can they be avoided? The answer given by Tongyi Qianwen is: the development of AI has indeed brought people many conveniences and opportunities, but it has also brought risks and challenges.

Avoiding AI risks can start from the following points. First, data privacy: collecting and using personal data must follow strict privacy-protection rules so that data is neither abused nor leaked. Second, transparency and explainability: AI systems should be transparent and explainable so that people can understand their decisions and behavior. Third, security: AI systems must be secured against malicious attacks and support data backup, recovery and error correction. Fourth, legal compliance: AI systems must comply with relevant laws, regulations and ethical norms. Fifth, technical quality: the technical soundness of AI systems must be assured to prevent abuse or misuse. Finally, cooperation and collaboration: work with other stakeholders to ensure that AI systems are developed and applied in line with the public interest and social responsibility.

In short, avoiding AI risks means ensuring that the development, use and management of AI systems meet ethical, legal and safety standards, while strengthening education and public participation to promote the healthy development of AI technology.

Zheng Yang, director of the Strategic Development Department of Zhonghuo International, said that current AI fraud mainly takes the form of voice synthesis, AI face-swapping, and the use of big data and AI to screen information and pinpoint target groups. Technology is a double-edged sword: SMS and phone fraud, account theft, P2P lending and other historical episodes have confirmed time and again that every iteration of new technology brings a fresh wave of technology-enabled fraud.

Thinking in reverse: first, the core of AI fraud is its ability to pass off the fake as real, so prevention and regulation should focus on making AI-generated content easier to identify and on giving ordinary people a solid, common-sense understanding of the risks AI technology can pose. Second, the foundation of AI fraud is data, so prevention and regulation should focus on stopping the leakage and abuse of personal information.

In addition, from a regulatory perspective, vulnerable groups such as empty-nest elderly people and fanatical celebrity fans should receive targeted education and protection in advance, while channels such as online dating, matchmaking, lending and online games should be strictly supervised.

In the marketization of any technology, social value and economic value must be balanced. For AI today, the more prominent social issues include privacy and data-protection risks, intellectual-property infringement risks, and moral and ethical risks. These problems will inevitably slow the commercialization of AI technology, but that is itself a necessary part of bringing a technology to the ground.

For AI technology to fundamentally reduce legal risks, on the one hand the companies that build AI tools need to self-regulate; Google's practice of labeling every AI-generated image created by its tools is a good start. On the other hand, the relevant authorities need to establish regulatory laws, regulations and standards as soon as possible to clarify legal boundaries and responsible parties; for example, the draft measures issued by the Cyberspace Administration of China in April require organizations and individuals providing AI services to bear the responsibilities of a content producer.

Wei Li, founder of Dali Finance, said that judging from published AI fraud cases, the technologies involved are mainly deep-synthesis technologies, including AI face-swapping, voice synthesis and text-generation models. Criminals can use face-swapping to create fake videos or photos and defraud others by impersonating someone the victim trusts.

At the technical level, many technology companies and researchers are actively working on "technical countermeasures" to identify this kind of AI deep-synthesis content. These products are usually based on deep learning; for example, they identify AI-generated videos by analyzing video characteristics and processing traces and by detecting "inconsistencies" in facial features.
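As a toy illustration of the "inconsistency" idea only: real countermeasure products are deep-learning models, not hand-written rules. The sketch below flags a facial-landmark coordinate track whose frame-to-frame jitter is abnormally large, a crude stand-in for the temporal inconsistencies that detectors look for; the threshold and the data are made up for the example.

```python
def jitter_score(track):
    """Mean absolute frame-to-frame change of a landmark coordinate track.

    Smooth motion yields a low score; erratic synthetic artifacts yield
    a high one. This statistic only illustrates the concept.
    """
    deltas = [abs(b - a) for a, b in zip(track, track[1:])]
    return sum(deltas) / len(deltas)

def flag_suspicious(track, threshold=2.0):
    """Flag a track whose jitter exceeds an (assumed) threshold."""
    return jitter_score(track) > threshold

# A smooth track (plausible real footage) vs an erratic one (synthetic artifact).
smooth = [100 + 0.5 * i for i in range(20)]    # drifts 0.5 px per frame
erratic = [100 + (5 if i % 2 else -5) for i in range(20)]  # jumps 10 px per frame

assert not flag_suspicious(smooth)
assert flag_suspicious(erratic)
```

In practice the "track" would come from a face-landmark model run over video frames, and the classifier would be trained rather than thresholded by hand.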

In terms of legal prevention and control, state agencies, platforms and individuals must work together. While platforms keep improving their review and monitoring capabilities in line with regulatory requirements, individuals need to stay vigilant at all times. It may also be worth building shared databases that collect and store confirmed fraud-video samples, or forming anti-AI-fraud alliances and similar enforcement mechanisms.
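A shared sample database of the kind suggested here could be sketched as follows. The class name and the use of exact SHA-256 hashes are illustrative assumptions; a real system would use perceptual hashes that survive re-encoding, plus a distributed store shared among alliance members.

```python
import hashlib

class FraudSampleRegistry:
    """A minimal shared registry of confirmed fraud-video samples.

    Stores content hashes rather than the videos themselves, so partners
    can check for matches without exchanging raw footage.
    """
    def __init__(self):
        self._hashes = set()

    def register(self, video_bytes):
        """Record a confirmed fraud sample; returns its hash."""
        digest = hashlib.sha256(video_bytes).hexdigest()
        self._hashes.add(digest)
        return digest

    def is_known(self, video_bytes):
        """Check whether a sample matches a confirmed fraud video."""
        return hashlib.sha256(video_bytes).hexdigest() in self._hashes

registry = FraudSampleRegistry()
registry.register(b"confirmed-fraud-sample")
assert registry.is_known(b"confirmed-fraud-sample")
assert not registry.is_known(b"new-unseen-clip")
```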

Making artificial intelligence "technology for good" has become an urgent problem. First, data privacy and security are paramount: supervision of AI technology must be strengthened to prevent the abuse and misuse of personal information. Second, the safety, reliability, transparency and fairness of AI systems need reinforcing. Finally, education about AI should be expanded to raise the public's technological literacy and safety awareness, so that fewer people are deceived and victimized.

On whether the current frequent infringement and fraud incidents will restrict the development of the industry:

Zu Tengfei, a senior media figure, said that "technology itself is innocent"; what matters is who uses it. Neither new technologies like AI nor the "bad actors" who use them for fraud are anything new, and we should not give up eating for fear of choking. Globally, AI is a direction almost every country is developing vigorously; it can be applied in many fields to free up productivity and improve efficiency.

Tracing the problem to its root, AI fraud stems from the leakage of personal information. Earlier leaks already subjected everyone to harassing calls and text-message bombardment; with AI fraud now added, protecting personal information can no longer be put off. All kinds of apps demand access to phone numbers, photos, locations and more at download time. Is that information effectively protected by the software companies, or is it being resold for profit by people with ulterior motives?

To deal with this new type of AI fraud, we should still follow the advice popularized by anti-fraud bloggers: protect personal information, verify messages, and do not transfer money or make payments on request. At the same time, the companies involved must strictly abide by relevant policies, improve their AI ethics, and strengthen safety supervision.

Tang Chen, who runs the media account "Tang Chen", said that negative cases are inevitable in the development of AIGC; AI fraud, digital humans and face-swapping are all manifestations of what AI tools can do. Fundamentally, avoiding the negative effects of AI development requires regulating who may use it. A few days earlier, Stefanie Sun responded to the copyright controversy over "AI Stefanie", saying, "Everything is possible and nothing matters. I think it is enough to have pure thoughts and be yourself." Her response was widely praised, not only for its literary flair and broad-mindedness, but more deeply because it displays human confidence: the confidence that AI will not replace humans any time soon. Her attitude is worth the public's reference.

The same logic applies to any industry AI is transforming. As OpenAI CEO Sam Altman has said, AI will reshape society as we know it; it may be "the greatest technology developed by mankind so far" and could greatly improve human life, but it carries real dangers, and people should be glad to be a little scared of it. Only with awe can we put AI to human use as the technology develops. What worries people in this process is not the technology itself, just as Stefanie Sun is not overly worried about "AI Stefanie", but the motives of the humans wielding it. Everyone should ask: what do you want the technology to become, and what, beyond the technology, will it turn humans into? That may be the essence of the problem.

"Titanium Hot Review" is a hot-event observation column launched by Titanium Media. It invites media and industry practitioners with unique insights into different industries and business models to interpret trending events from multiple angles and present their impact and significance.

If you follow the latest trends, have opinions of your own, and want to exchange views with like-minded people, you are welcome to add the "Titanium Hot Review" community assistant on WeChat (taiduzhushou) and join the "Titanium Hot Review" community, where we are building a community of thinkers so that valuable thinking can be seen by more people!


Statement: This article is reproduced from sohu.com.