
The father of ChatGPT "cannot sleep at night": I am really worried that the large model will be abused

WBOY
2023-04-11 14:34:03



News on March 21st: In an interview last week, Sam Altman, CEO of the artificial intelligence startup OpenAI, said that artificial intelligence may be "the greatest technology humanity has developed so far" and will reshape society. At the same time, he admitted that the technology carries risks and that he is still "a little afraid" of it.

Altman emphasized that artificial intelligence brings real dangers, but that it may also be "the greatest technology humanity has developed so far," capable of reshaping society and bringing earth-shaking changes to daily life.

He said: "We have to be careful in this regard... I think, thankfully, we are a little afraid of it."

In the interview, Altman discussed GPT-4, the recently released latest version of OpenAI's artificial intelligence language model. He emphasized that the rollout of ChatGPT requires the joint participation of regulators and society at large, and that broad feedback can help prevent potential negative impacts of artificial intelligence technology on humanity. He added that he has been in "regular contact" with government officials.

In just a few months since ChatGPT was launched, the number of monthly active users has exceeded 100 million, and it is recognized as the fastest growing consumer application in history. According to a UBS study, it took TikTok nine months to attract 100 million monthly active users, and Instagram nearly three years.

Although it is still "not perfect," Altman said that GPT-4 scored around the 90th percentile on the American bar exam and achieved a near-perfect score on the SAT mathematics test. GPT-4 can now write computer code proficiently in most mainstream programming languages.

GPT-4 is just a small step toward OpenAI's ultimate goal of creating artificial general intelligence. When the technology makes a breakthrough, artificial intelligence systems smarter than humans will appear; this is what is meant by artificial general intelligence.

Although Altman praises the success of his product, he also admits that the possible dangers of artificial intelligence keep him up at night.

"I'm particularly concerned that these models will be used to generate disinformation at scale," Altman said. "Now that they are getting better and better at writing computer code, they can be used for offensive cyber attacks."

But Altman does not believe that artificial intelligence models will, as depicted in science fiction, no longer need humans, make decisions on their own, and plot to rule the world. "It waits for someone to give it input," Altman said. "It is still very much a tool under human control."

However, Altman said he does worry about who will control artificial intelligence. "There will always be people who don't follow some of the safety rules we've set," he added. "I think society as a whole doesn't have much time left to figure out how to respond to this, how to regulate it, how to deal with it."

Commenting on the idea that whoever masters artificial intelligence technology may "dominate the world," Altman said: "That is certainly a chilling statement. What I hope instead is that we keep developing more and more powerful systems and let people use them in different ways, integrating the systems into our daily lives and into the economy so that they serve what people want."

Concerns about Misinformation

According to OpenAI, GPT-4 has made huge improvements through iteration, such as the ability to accept images as input. Demonstrations have shown that GPT-4 can describe the contents of a refrigerator, solve puzzles, and even explain the hidden meaning behind Internet memes.

But Altman said that an inherent problem with artificial intelligence language models such as ChatGPT is misinformation: the program may provide users with factually inaccurate information.

Altman said: "What I most want to caution people about is the so-called 'hallucination problem.' The model will state things with confidence, making something entirely fabricated appear to be fact."

According to OpenAI, artificial intelligence models have this problem in part because they use deductive reasoning instead of memory.

OpenAI Chief Technology Officer Mira Murati once said: "One of the biggest differences between GPT-3.5 and GPT-4 is that the latter has stronger emergence capabilities in reasoning."

“Our goal is for AI to predict which word will come next, which is an understanding of the nature of language,” Murati said. “We want these models to see and understand the world in the same way humans do.”
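The "predict which word will come next" objective Murati describes can be illustrated with a toy sketch. The following bigram counter is nothing like GPT-4's actual transformer architecture (the corpus and function names here are invented for illustration); it only shows the bare idea of predicting the next word from observed frequencies:

```python
from collections import Counter, defaultdict

# Hypothetical tiny corpus, purely for illustration.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word is followed by each other word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the word most frequently observed after `word`, or None."""
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "cat" follows "the" most often in this corpus
```

Real language models replace these raw counts with learned probability distributions over an entire vocabulary, conditioned on long contexts rather than a single preceding word; the training objective, however, is the same next-word prediction.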

"You have to realize that the models we developed are reasoning engines, not databases of facts," Altman said. "Of course, they can also serve as factual databases, but that is not what makes them special. What we want them to do is reason, not memorize."

Altman and his team hope that "over time, this model will become a true reasoning engine." He said the model will ultimately rely on Internet information and its own deductive reasoning to identify which information is real and which is fiction. According to OpenAI's data, GPT-4 is 40% more likely to produce accurate information than previous versions. But Altman said people should not rely on the system as a primary source of accurate information, and he encouraged users to carefully check the results the program generates.

How to protect against bad guys

Many people are also concerned about what kind of information ChatGPT and other artificial intelligence language models will generate. According to Altman, ChatGPT won't tell users how to build a bomb because of the security measures built into the system.

"What I'm concerned about is ... we're not the only ones creating this technology," Altman said. "There will always be people who don't add the kinds of security measures we do."

According to Altman, there are solutions and safeguards for all these potential dangers of artificial intelligence. One is to let society play around with ChatGPT when the stakes are low to see how people use it.

Murati said that the main reason ChatGPT is now open to the public is to "gather a lot of feedback." She believes that as the public keeps testing OpenAI's applications, it will become easier to determine where safeguards are needed.

Murati said: "What people use them for, what problems they have, and what shortcomings they have: all of this lets us intervene in time and improve the technology."

Altman also emphasized that it is important for the public to be able to interact with each version of ChatGPT. He said: "If we just sat in the laboratory doing secret research, developing, say, GPT-7, and then released it all at once... I think that would bring more negative effects." "People need time to update their understanding, react, and eventually get used to this technology, and we also need time to understand where the system's shortcomings are and how to mitigate the adverse effects."

Altman revealed that OpenAI has a team dedicated to formulating policy, deciding what information can be entered into ChatGPT and what information ChatGPT can share with users.

Altman added: "(We) are talking with policy and safety experts to audit the system, precisely to address these problems and release something we believe is safe and reliable. Again, we won't get it perfect the first time, but it is important to learn the lessons and find the safety margins while the risks are relatively low."

Will artificial intelligence replace jobs?

Among people's concerns about the disruptive power of AI technology is the belief that it will replace existing jobs. Artificial intelligence may replace some jobs in the near future, Altman said, but how disruptive that is will depend on how quickly it happens.

Altman said: "I think that over the course of several generations, humanity has proven it can adapt to major technological change." "But if this happens within a few years, what I do worry most about is what changes it will bring..."

But he encouraged people to think of ChatGPT as a tool, not a replacement for their own work. Altman added, "Human creativity is unlimited. We will find new jobs and new things to do."

Altman believes that, as a human tool, ChatGPT's benefits outweigh its risks.

He said: "We can all have a great, tailor-made educator in our pocket to help us learn." "We can provide medical advice to everyone, which is something we cannot achieve today."

“Co-pilot”

ChatGPT has caused a lot of controversy in the education field because some students use ChatGPT to cheat. Educators have mixed opinions on whether ChatGPT can serve as an extension of student learning or whether it will affect students' motivation to learn on their own.

"Education is going to have to change, but that has happened many times before as technology developed," Altman said. He added that students' teachers will no longer be limited to the classroom: "What excites me most is that it can bring better personalized learning to each student."

Altman and his team hope that users in any field can regard ChatGPT as their "co-pilot", such as helping users write large amounts of computer code or solve problems.

"We can provide services like this to every industry so people can have a higher quality of life," Altman said. "There will also be new things we can't even imagine today, and that is also our commitment."


Statement: This article is reproduced from 51cto.com.