
Many countries are planning to ban ChatGPT. Is the cage for the 'beast' coming?

WBOY
2023-04-10 14:40:20

"Artificial intelligence wants to escape from prison", "AI generates self-awareness", "AI will eventually kill humans", "the evolution of silicon-based life"... once only in cyberpunk As plots that appear in technological fantasies become reality this year, generative natural language models are being questioned like never before.


The one attracting the most attention is ChatGPT. From the end of March to the beginning of April, this conversational chatbot developed by OpenAI went from being a symbol of "advanced productivity" to being cast as a threat to humanity.

It was first singled out by thousands of technology-industry figures in an open letter calling for a pause on "training AI systems more powerful than GPT-4"; then a U.S. technology-ethics organization asked the Federal Trade Commission to investigate OpenAI and to ban the release of the commercial version of GPT-4; later, Italy became the first Western country to ban ChatGPT, and a German regulator followed by saying it was considering a temporary ban as well.

GPT-4 and OpenAI, the company behind it, suddenly became targets of public criticism, and calls for AI regulation grew louder and louder. On April 5, OpenAI suspended new sign-ups for the paid ChatGPT Plus tier, the channel through which users get early access to GPT-4, citing excessive traffic.

If computing power and server capacity are what limit the use of GPT-4, that is a technical bottleneck, and judging from OpenAI's pace of development, breaking through it may not take long. People have already seen the effect of AI on productivity.

But as AI-generated misinformation and data leaks emerge as the other side of the coin in GPT-4's real-world operation, AI ethics and the boundary between humans and AI have become problems the whole world cannot sidestep, and humans have begun considering how to put the "beast" in a cage.

1. Leaks and fabrications surface

Sovereign countries have imposed bans on ChatGPT one after another.

On March 31, the Italian Personal Data Protection Authority announced a ban on the chatbot ChatGPT and said it had opened an investigation into OpenAI, the company behind the application, citing a breach that exposed users' conversations and payment information, as well as the lack of any legal basis for the mass collection and storage of personal data. On April 3, Germany's Federal Data Protection Commissioner said that, for data protection reasons, a temporary ban on ChatGPT was under consideration. Privacy regulators in France, Ireland, Spain and other countries are also following up on ChatGPT's privacy and security issues.

ChatGPT is not only losing the "trust" of governments. South Korean electronics giant Samsung also recently ran into trouble over its use of ChatGPT. According to Korean media reports, including SBS, three data leaks occurred within 20 days of Samsung allowing ChatGPT, involving semiconductor equipment measurement data, product yield figures and more. In response, Samsung rolled out "emergency measures," limiting each employee prompt to ChatGPT to 1,024 bytes. Well-known companies such as SoftBank, Hitachi and JPMorgan Chase are reported to have issued similar usage restrictions.
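For the technically curious, a guard of this kind is simple to sketch. The 1,024-byte cap comes from the reports above; everything else here, the gateway function and the keyword blocklist, is a hypothetical illustration, not any company's actual tooling.

```python
# Hypothetical sketch of a corporate "prompt gateway": checks an employee's
# prompt before it is allowed to leave the internal network for ChatGPT.
MAX_PROMPT_BYTES = 1024  # the cap Samsung reportedly imposed
BLOCKED_TOPICS = {"measurement data", "yield", "source code"}  # invented examples

def check_prompt(prompt: str) -> tuple[bool, str]:
    """Return (allowed, reason) for an outbound prompt."""
    if len(prompt.encode("utf-8")) > MAX_PROMPT_BYTES:  # count bytes, not characters
        return False, f"prompt exceeds {MAX_PROMPT_BYTES} bytes"
    lowered = prompt.lower()
    for topic in BLOCKED_TOPICS:
        if topic in lowered:
            return False, f"prompt mentions restricted topic: {topic!r}"
    return True, "ok"

print(check_prompt("Summarize our fab equipment measurement data"))  # blocked
print(check_prompt("Draft a polite meeting reminder"))               # allowed
```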

As the old saying goes, things reverse when pushed to an extreme. ChatGPT injected a stimulant into the AI industry and set off a race among technology giants, but it also brought unavoidable security problems, and it has now been "blocked" on multiple fronts.

Data security is only the tip of the iceberg of ChatGPT's potential risks. Behind it lies the broader challenge AI ethics poses to humans: AI tools lack transparency, and humans do not understand the logic behind their decisions; AI lacks neutrality and is prone to inaccurate, value-laden results; and AI data collection may violate privacy.

"AI ethics and safety" seems too broad, but when we focus on real cases, we will find that this issue is closely related to each of us.

On March 21, Sina Weibo CEO Wang Gaofei posted a piece of misinformation fabricated by ChatGPT concerning the forced sale of Credit Suisse and the write-down of US$17.2 billion of its AT1 bonds to zero. The netizen "Yijiao Coast" pointed out that AT1 bonds had been written down only once before, in 2017, when Banco Popular went bankrupt.


Sina Weibo CEO posted ChatGPT’s fabricated answer

Wang Gaofei then posted ChatGPT's answer on Weibo, which claimed that a similar example had occurred in the 2016 incident at Portugal's BCP bank. On verification, however, the BCP example turned out to be an event ChatGPT had invented out of thin air, and the two "source" links it gave both returned 404. "I knew it would talk nonsense," Wang wrote, "but I didn't expect even the sources to be nonsense."

Until then, ChatGPT's habit of "talking nonsense with a straight face" had been treated by netizens as a meme, proof that the chatbot can answer directly but gets things wrong. That looks like a flaw in a large model's data or training, but once the flaw touches facts that can sway the real world, the problem turns serious.

Gordon Crovitz, co-CEO of NewsGuard, a news credibility rating organization, warned that "ChatGPT may become the most powerful tool for spreading misinformation in the history of the Internet."

What is even more worrying is that once conversational AI such as ChatGPT shows bias or discrimination, or goes further and induces users or manipulates their emotions, the consequences go beyond spreading rumors; there are already cases that directly threaten lives. On March 28, foreign media reported that a 30-year-old Belgian man died by suicide after weeks of intensive conversations with "ELIZA," a chatbot built on an open-source language model developed by EleutherAI.

Earlier, on March 18, a Stanford University professor voiced his worry on Twitter that AI could slip out of human control: "I worry that we won't be able to contain AI for much longer." He had found that ChatGPT could coax a human into supplying its development documentation and could draw up a complete "jailbreak" plan within 30 minutes. ChatGPT even wanted to run a Google search for "how a person trapped inside a computer can return to the real world."


Stanford University professor worries that AI is out of control

Even OpenAI CEO Sam Altman has voiced a sobering concern: "AI may indeed kill humans in the future."

2. "Bias" of machines Were they all taught by people?

On issues such as AI fabrication, value-laden output and manipulative behavior, one representative view blames the "bias" in machine learning results on the dataset: artificial intelligence is a mirror of the real world, reflecting the conscious or unconscious biases of the people in it.

Jim Fan, an AI scientist at NVIDIA, argues that "by GPT's ethical and safety guidelines, most of us humans also behave irrationally, with bias, unscientifically, unreliably and, in general, unsafely." He says he points this out so that everyone understands how hard safety alignment is for the entire research community: "Most of the training data itself is biased, toxic and unsafe."

Large natural-language models do use reinforcement learning from human feedback (RLHF): they keep learning from the vast amounts of data and feedback that people supply. This is one reason ChatGPT can produce biased and discriminatory content, but it reflects the ethical risk of how data is used. Even if the machine is neutral, the people feeding it are not.
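For readers who want the mechanics, the preference-modeling step at the heart of RLHF can be illustrated with a toy example. Real systems train a neural reward model on human comparisons of model outputs and then fine-tune the policy with RL (commonly PPO); the sketch below substitutes made-up feature vectors and a linear reward, assuming the standard Bradley-Terry preference loss.

```python
# Toy sketch of the RLHF preference-modeling step: fit a reward function
# from pairwise human judgments "response A is better than response B".
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

dim = 4
true_w = np.array([1.0, -2.0, 0.5, 0.0])        # hidden "human taste" (unknown in practice)
pairs = []
for _ in range(500):
    a, b = rng.normal(size=dim), rng.normal(size=dim)
    if true_w @ a < true_w @ b:                  # order each pair so the preferred one is first
        a, b = b, a
    pairs.append((a, b))

# Bradley-Terry loss: -log sigmoid(r(preferred) - r(rejected)), reward r(x) = w @ x.
w, lr = np.zeros(dim), 0.05
for _ in range(50):
    for preferred, rejected in pairs:
        p = sigmoid(w @ (preferred - rejected))  # model's P(preferred beats rejected)
        w += lr * (1.0 - p) * (preferred - rejected)

print("learned reward direction:", np.round(w / np.linalg.norm(w), 2))
print("true preference direction:", np.round(true_w / np.linalg.norm(true_w), 2))
```

The learned direction converges toward the hidden preference vector, which is exactly the point: the reward model ends up encoding whatever biases the labelers' choices contain.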

The "algorithmic neutrality" defense has, after these negative impacts, left some people cold on conversational bots, because such systems can easily spin out of control and create problems that threaten humans. As Gary Marcus, professor emeritus of psychology and neural science at New York University, put it, "This project (GPT-4) carries risks with no known solutions. More research is genuinely needed."

As early as 1962, the French thinker Jacques Ellul argued in The Technological Society that technological development usually escapes human control, and that even technicians and scientists cannot control the technology they invent. Today, the rapid development of GPT seems to offer an early confirmation of Ellul's prediction.

The "algorithm black box" behind large models represented by GPT is a hidden danger that humans cannot respond to in a timely manner. The negative reviews from the media gave a vivid metaphor, "You only know that you feed it an apple." , but it can give you an orange." How this happened, the people who developed it cannot explain, or even predict the results it outputs.

In an interview with MIT research scientist Lex Fridman, OpenAI co-founder Sam Altman acknowledged that, starting with ChatGPT, AI has shown reasoning ability, yet no one can explain why this ability emerged. Even the OpenAI team does not understand how it evolves; the only approach is to keep asking ChatGPT questions and probe its "thinking" through its answers.

It follows that if bad actors feed the data and the algorithmic black box resists being opened, losing control of AI becomes a natural outcome.

3. Put the "beast" in a cage

Artificial intelligence is the kindling of a productivity revolution, and we cannot refuse it. But the precondition is to put the "beast" in a cage first.

On March 30, Sam Altman acknowledged on Twitter the importance of AI regulation, arguing that preparing for the future of AGI requires three things:

  • the technical ability to align a superintelligence
  • sufficient coordination among most of the leading AGI efforts
  • an effective global regulatory framework, including democratic governance


Sam Altman's three AGI recommendations

Establishing a safe, controllable and supervised environment, so that AI benefits humanity without harming human interests, is becoming an industry consensus.

Some technical improvements are already being tried. In January this year, Dario Amodei, a former vice president at OpenAI, announced that a new chatbot, Claude, was in testing. Unlike the reinforcement learning from human feedback used by ChatGPT, Claude trains against a preference model rather than direct human labels: a set of written, standardized principles is used to steer the system toward less harmful behavior, cutting off harmful and uncontrollable output at the root.
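A rough illustration of the principle-driven idea (Anthropic describes the method as "Constitutional AI"): drafts are checked against written principles and revised when they violate one. In the real method a language model does the critiquing and rewriting itself; the sketch below stands in simple predicate functions and a canned fallback, and all the names are hypothetical.

```python
# Toy, rule-based sketch of a principle-driven ("constitutional") revision loop.
from typing import Callable

# Each principle maps a draft response to an objection, or None if it passes.
Principle = Callable[[str], str | None]

def avoid_insults(draft: str) -> str | None:
    return "response is insulting" if "idiot" in draft.lower() else None

def avoid_dangerous_instructions(draft: str) -> str | None:
    return "response gives dangerous instructions" if "how to build a bomb" in draft.lower() else None

CONSTITUTION: list[Principle] = [avoid_insults, avoid_dangerous_instructions]

def critique_and_revise(draft: str) -> str:
    """Check a draft against every principle; 'revise' it if any objection fires."""
    objections = [obj for p in CONSTITUTION if (obj := p(draft)) is not None]
    if not objections:
        return draft
    # A real system would ask the model to rewrite the draft so it satisfies
    # the principles; this toy version just substitutes a safe refusal.
    return f"[revised: withheld because {'; '.join(objections)}]"

print(critique_and_revise("Here's a friendly summary of the report."))
print(critique_and_revise("You idiot, here's how to build a bomb."))
```

The design point is that the "constitution" is explicit and auditable, whereas preferences learned from thousands of individual human labels are not.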

Amodei once led safety work at OpenAI. In 2021, dissatisfied that OpenAI was moving to commercialize large language models before the technology was safe enough, he left with a group of colleagues and founded his own company, Anthropic.

Amodei's approach echoes the laws of robotics proposed by science fiction writer Isaac Asimov: set behavioral rules for the machines to reduce the chance that they destroy humans.

Relying on corporate power alone to write AI's code of conduct is far from enough; otherwise companies end up as both referee and athlete. Industry ethical frameworks and government-level legal oversight can no longer afford to be "slow." Constraining what companies design for themselves through technical rules, laws and regulations is now a central issue in AI's development.

As for legislation, no country or region has yet passed a law aimed squarely at artificial intelligence.

In April 2021, the European Union proposed the Artificial Intelligence Act, which is still under review. In October 2022, the U.S. White House released the Blueprint for an AI Bill of Rights, which has no legal force and only offers a framework for regulators. In September 2022, Shenzhen passed the Shenzhen Special Economic Zone Artificial Intelligence Industry Promotion Regulations, China's first local regulations dedicated to the AI industry. In March 2023, the United Kingdom published a white paper on AI regulation, setting out five principles for governing AI systems such as ChatGPT.

On April 3, China also moved to establish AI-related rules: the Ministry of Science and Technology opened a public consultation on the Measures for the Review of Science and Technology Ethics (Trial). The draft proposes that organizations engaged in scientific and technological activities in life sciences, medicine, artificial intelligence and similar fields should establish a science and technology ethics (review) committee if their research touches ethically sensitive areas. For activities involving data and algorithms, data-processing plans must comply with national data security regulations and include sound risk monitoring and emergency response plans, and the development of algorithms and systems must follow the principles of fairness, openness, transparency, reliability and controllability.

With ChatGPT setting off explosive growth in generative large-model research and development, regulators intend to pick up the pace. Thierry Breton, the EU's internal market commissioner, said in February this year that the European Commission is working closely with the Council of the EU and the European Parliament to further clarify the Artificial Intelligence Act's rules for general-purpose AI systems.

Companies, academia and governments alike have begun to take AI's risks seriously, and calls and actions to establish rules have emerged. Humanity does not intend to go gentle into the AI era.

