
Nature | GPT-4 has exploded in popularity, and scientists are worried!

WBOY
2023-04-28 17:16:07

The emergence of GPT-4 is both exciting and frustrating.

Although GPT-4 has astonishing creative and reasoning capabilities, scientists have expressed concerns about the safety of the technology.

Since OpenAI, going against its founding mission, has neither open-sourced GPT-4 nor published the model's training methods and data, how it actually works remains unknown.

The scientific community is very frustrated about this.

Sasha Luccioni, a scientist specializing in environmental research at the open-source AI community HuggingFace, said, "OpenAI can keep building on their research, but for the community as a whole, all these closed-source models are like a dead end in science."


Fortunately, there is red-team testing

Andrew White, a chemical engineer at the University of Rochester, has had privileged access to GPT-4 as a member of its "red team."

OpenAI pays red-teamers to probe the platform and try to get it to do bad things, so White has had the opportunity to work with GPT-4 over the past six months.

He asked GPT-4 what chemical reaction steps are needed to make a compound, and asked it to predict the reaction yield and choose a catalyst.

"Compared to previous iterations, GPT-4 seemed no different, and I thought it was nothing. But then it was really surprising, it looked so realistic, It would spawn an atom here and skip a step there."


But when he continued testing and gave GPT-4 access to scientific papers, things changed dramatically.

"We suddenly realized that maybe these models weren't all that great. But when you start connecting them to tools like backtracking synthesis planners or calculators, all of a sudden, New abilities have emerged."

With the emergence of these abilities, people began to worry: could GPT-4, for example, enable the manufacture of hazardous chemicals?

White notes that with test input from red-teamers like himself, which OpenAI engineers feed back into their models, GPT-4 can be stopped from producing dangerous, illegal or harmful content.

False facts

Outputting false information is another problem.

Luccioni said that models like GPT-4 have not yet solved the hallucination problem, which means they can still utter nonsense.

"You can't rely on this type of model because there are too many hallucinations, and although OpenAI says it has improved security in GPT-4, this is still a problem in the latest version Problem."


With no access to the training data, OpenAI's safety assurances are insufficient in Luccioni's view.

"You don't know what the data is. So you can't improve it. It's completely impossible to do science with such a model."

The mystery of how GPT-4 was trained also troubles psychologist Claudi Bockting: "It is very difficult for humans to take responsibility for something you cannot oversee."

Luccioni also believes that GPT-4 will inherit biases from its training data, and that without access to the code behind GPT-4 it is impossible to see where those biases originate, or to remedy them.

Ethical Discussion

Scientists have always had reservations about GPT.

When ChatGPT was launched, scientists had already objected to GPT appearing in author lists.


Publishers likewise believe that AI such as ChatGPT does not meet the standards for research authorship, because it cannot take responsibility for the content and integrity of scientific papers. AI contributions to the writing of a paper can, however, be acknowledged outside the author list.

Additionally, there are concerns that these AI systems are increasingly concentrated in the hands of large technology companies, when such technologies should be tested and validated by scientists.

We urgently need to develop a set of guidelines to govern the use and development of artificial intelligence and tools such as GPT-4.

Despite such concerns, White said, GPT-4 and its future iterations will shake up science: "I think it's going to be a huge infrastructure change in science, much as the Internet was. We are starting to realize that we can connect papers, data, programs, libraries, computational work and even robotic experiments. It will not replace scientists, but it can help with some tasks."

However, legislation surrounding artificial intelligence seems to be struggling to keep up with the pace of development.

On April 11, the University of Amsterdam will convene an invitational summit to discuss these questions with representatives from organizations such as UNESCO's science-ethics committee, the Organisation for Economic Co-operation and Development and the World Economic Forum.

Main topics include: insisting on human verification of LLM output; establishing rules of accountability within the scientific community, aimed at transparency, integrity and fairness; investing in reliable and transparent large language models owned by independent non-profit organizations; embracing the advantages of AI while weighing its benefits against the loss of autonomy; and inviting the scientific community to discuss GPT with all relevant parties, from publishers to ethicists.

