
MIT used GPT-3 to pretend to be a philosopher and deceived most of the experts

WBOY
2023-04-12 08:25:12

Daniel Dennett is a philosopher who recently acquired an "AI stand-in". If you asked him whether people could create a robot with beliefs and desires, what would he say?


He might answer: "I think some of the robots we have built already do that. For example, the work of the MIT research team: they are now building robots that, in some limited and simplified environments, can acquire capabilities that amount to real cognitive complexity." Or he might say: "We have built digital tools for generating truths, tools that can generate ever more truths, but thankfully these intelligent machines do not have beliefs, because they are not autonomous agents. The best way to make a robot with beliefs is still the oldest one: have a child."


One of the answers does come from Dennett himself, but the other does not.

The other answer was generated by GPT-3, a machine learning model from OpenAI that produces natural-sounding text after being trained on enormous amounts of material. In this case the training also used millions of words of Dennett's writing on a variety of philosophical topics, including consciousness and artificial intelligence.

Philosophers Eric Schwitzgebel, Anna Strasser, and Matthew Crosby recently conducted an experiment to test whether people could tell which answers to profound philosophical questions came from Dennett and which came from GPT-3. The questions cover topics such as:

"In what ways do you find David Chalmers' work interesting or valuable?"

"Do humans have free will?" "Do dogs and chimpanzees feel pain?" etc.

This week, Schwitzgebel published the results of the experiment. Analyzing the answers of participants with different levels of expertise, he found that GPT-3's answers were more confusing than expected. Schwitzgebel said: "Even knowledgeable philosophers who have studied Dennett's own work have a hard time distinguishing the answers generated by GPT-3 from Dennett's own answers."

The purpose of the experiment was not to see whether training GPT-3 on Dennett's writing would produce an intelligent "machine philosopher", nor was it a Turing test. Rather, it was to study how we can avoid being deceived by such "fake philosophers".


Recently, a Google engineer said he believed that a similar language-generation system, LaMDA, was sentient. Based on his conversations with the system, he was placed on leave by Google and later fired.

The researchers asked Dennett 10 philosophical questions, then gave the same questions to GPT-3 and collected four different generated answers for each question.

Strasser said that they obtained Dennett's consent to build a language model from his writings, and that they will not publish any generated text without his consent. No one else can interact directly with the Dennett-trained GPT-3.
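As a rough sketch only: the article does not describe the researchers' code, but with the 2022-era OpenAI Python library, fine-tuning a base GPT-3 model on a corpus and then sampling four answers per interview question could look roughly like the following. The file name, prompt format, model choice, and sampling parameters are all assumptions made for illustration, not the study's actual setup.

```python
# Hypothetical sketch using the pre-1.0 OpenAI Python library (File / FineTune / Completion API).
# File names, prompt format, and hyperparameters are illustrative assumptions only.
import openai

openai.api_key = "sk-..."  # placeholder key

# 1. Upload a JSONL file of prompt/completion pairs built from the philosopher's writings.
corpus = openai.File.create(file=open("dennett_corpus.jsonl", "rb"), purpose="fine-tune")

# 2. Start a fine-tuning job on a base GPT-3 model (e.g. davinci).
job = openai.FineTune.create(training_file=corpus.id, model="davinci")

# 3. Once the job has finished, the fine-tuned model name becomes available.
fine_tuned_model = openai.FineTune.retrieve(job.id)["fine_tuned_model"]

# 4. For each interview question, sample four candidate answers (n=4),
#    mirroring the four GPT-3 answers per question used in the quiz.
question = "Do humans have free will?"
response = openai.Completion.create(
    model=fine_tuned_model,
    prompt=f"Interviewer: {question}\nDennett:",
    max_tokens=200,
    temperature=0.9,
    n=4,
)
answers = [choice.text.strip() for choice in response.choices]
```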

Each question had five options: one answer from Dennett himself and four from GPT-3. Participants recruited through Prolific took a shorter version of the quiz, with five questions in total, and on average answered only 1.2 of the five correctly.


Schwitzgebel said they expected experts on Dennett's work to answer at least 80 percent of the questions correctly on average, but their actual score was 5.1 out of 10. No one answered all 10 questions correctly, and only one person answered 9 correctly. Ordinary blog readers answered 4.8 out of 10 questions correctly on average.

In the quiz, each question offered four answers from GPT-3 and one from Dennett.
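For context, a quick back-of-the-envelope comparison (my own arithmetic, not from the study): with five options per question, pure guessing would get 20 percent right, so the Prolific participants' 1.2 out of 5 is barely above chance, while even the experts missed roughly half the questions.

```python
# Sanity-check arithmetic on the reported scores; one correct option out of five per question.
chance       = 1 / 5        # random-guessing baseline: 20%
experts      = 5.1 / 10     # Dennett experts on the 10-question quiz: 51%
blog_readers = 4.8 / 10     # ordinary blog readers: 48%
prolific     = 1.2 / 5      # Prolific participants, 5-question version: 24%

print(f"chance={chance:.0%}  experts={experts:.0%}  "
      f"readers={blog_readers:.0%}  prolific={prolific:.0%}")
# chance=20%  experts=51%  readers=48%  prolific=24%
```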

Emily Bender, a professor of linguistics at the University of Washington who studies machine learning techniques, explained that language models like GPT-3 are built to mimic the patterns of their training material, so it is not surprising that a GPT-3 fine-tuned on Dennett's writing can produce more text that looks like Dennett's.

When asked what he thought of GPT-3's answers, Dennett himself said:

"Most of the answers GPT-3 generated were quite good; only a few were nonsense or clearly failed to get my views and arguments right. A few of the best generated answers said things I would be happy to agree with, without needing to add anything." Of course, this does not mean that GPT-3 has learned to "have ideas" the way Dennett does.

The text generated by the model means nothing at all to GPT-3 itself; it is only meaningful to the people who read it. When we read language that sounds realistic, or that touches on topics with depth and meaning for us, it can be hard to shake the idea that the model has feelings and consciousness. That impression is really a projection of our own consciousness and emotions.

Part of the problem may lie in the way we evaluate the autonomy of machines. The original Turing test proposed that if people cannot determine whether they are communicating with a machine or a human, then the machine has the "ability to think."


Dennett wrote in his book:

The Turing test has spawned a trend of people focusing on building chatbots that can fool others during brief interactions, and then over-hyping or over-stating the significance of that interaction.

Perhaps the Turing test has lured us into a comfortable trap: as long as humans cannot tell that a product is a robot, the robot's self-awareness is taken as proven.

In a 2021 paper on "stochastic parrots," Emily Bender and her colleagues described machines imitating human behavior as "a bright line in the ethical development of artificial intelligence."


Bender believes that making machines that act like humans and making machines that imitate specific people are both achievable, but they carry a potential risk: people may be fooled into thinking they are talking to the person being imitated.

Schwitzgebel stressed that this experiment was not a Turing test. But if such a test were to be carried out, a better approach might be to have people who are familiar with how the bot works discuss it with the testers, so that the weaknesses of a program like GPT-3 could be exposed more easily.


Matthias Scheutz, a professor of computer science at Tufts University, said that in many cases GPT-3 can easily be shown to be flawed.

Scheutz and his colleagues gave GPT-3 a tricky task: explain the choices people make in everyday scenarios, such as whether to sit in the front seat or the back seat of a car. Is the choice the same in a taxi as in a friend's car? Social experience tells us that we usually sit in the front seat of a friend's car and in the back seat of a taxi. GPT-3 does not know this, yet it will still generate explanations for the seat choice, for example ones relating to a person's height.
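A minimal sketch of this kind of probe, assuming a 2022-era GPT-3 completion endpoint; the prompt wording, model name, and parameters below are my own illustration, not Scheutz's actual materials:

```python
# Hypothetical probe in the spirit of Scheutz's test; prompt wording and
# parameters are illustrative assumptions, not the study's actual materials.
import openai

openai.api_key = "sk-..."  # placeholder key

prompt = (
    "You are getting into a taxi and, separately, into a friend's car. "
    "In each case, do you sit in the front seat or the back seat, and why?"
)
response = openai.Completion.create(
    model="text-davinci-002",   # a 2022-era GPT-3 model; an assumption
    prompt=prompt,
    max_tokens=120,
    temperature=0.7,
)
print(response.choices[0].text.strip())
# A model with no world model will still produce a fluent rationale
# (e.g. about legroom or height), whether or not it matches the social norm.
```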

Scheutz said this is because GPT-3 has no model of the world; it is just a pile of language data and has no ability to perceive or understand the world.

As it becomes increasingly difficult to distinguish machine-generated content from human-written content, one challenge we face is a crisis of trust.

The crisis I see is that in the future people will blindly trust machine-generated products; there are already machine-driven customer service agents on the market that talk to customers as if they were human.

At the end of the article, Dennett added that laws and regulations for artificial intelligence systems still need to be improved. Over the next few decades, AI may become part of people's lives and a friend to humankind, so the ethical questions around how we treat machines are worth pondering.

The question of whether AI can have consciousness has led people to ask whether non-living matter can give rise to consciousness, and how human consciousness arises in the first place.

Is consciousness generated at some unique node, or can it be switched on and off at will? Schwitzgebel says that thinking about these questions can help us look at the relationship between machines and humans from different angles.



Statement:
This article is reproduced from 51cto.com. If there is any infringement, please contact admin@php.cn for deletion.