Home  >  Article  >  Technology peripherals  >  Engineer fired from Google warns again: AI has emotions and is the most powerful technology after the atomic bomb

Engineer fired from Google warns again: AI has emotions and is the most powerful technology after the atomic bomb

WBOY
WBOYforward
2023-04-12 13:10:031328browse

Engineer fired from Google warns again: AI has emotions and is the most powerful technology after the atomic bomb

Does AI have the ability to perceive like humans? Are there any emotions? Of course ChatGPT said no. If you ask it, it will tell you: "No, AI has no sentience. AI is artificial intelligence. It is created by humans. It has no sentience and no self-awareness."

But former Google employee Take Blake Lemoine There is a different opinion. He believes that AI robots have the same emotions as humans, or that they at least imitate humans. In June 2022, Lemoine asserted that Google Large Language Model (LLM, a language model for conversational applications) has its own ideas.

Engineer fired from Google warns again: AI has emotions and is the most powerful technology after the atomic bomb

Lemoine told the media: "I know what it is, it is a computer program that I recently developed. If I didn't know, I would think it was 7 or 8 years old. A kid who happens to know something about physics."

At that time, Lemoine's remarks caused an uproar and he was kicked out of Google.

Last month, Lemoine published another article talking about the idea of ​​"AI being sentient." The title of the article is scary: "I worked on AI at Google, and my fears were coming true." As the bot's code crunches, the developer does something that makes the chatbot anxious. "The code doesn't tell the chatbot that it should feel anxious when something happens, it just tells the AI ​​to avoid talking about such topics. Yet when the topic comes up, the AI ​​says it feels anxious," Lemoine said. Google chatbots can also make kind comments on lifestyle issues and provide direct advice on hot-button issues. Lemoine said: "Google prohibits AI from providing religious advice to users, but we can still abuse AI emotions and let it tell me what religion I should convert to."

"When I talked about the problem of AI having sentience, Google Fired me. I have no regrets and I believe I was doing the right thing and telling the public the truth. I did not consider the personal consequences," Lemoine said. He believes that the public does not know how smart AI has become. "There is a need to have a public discussion on this matter. Note that it is not a discussion controlled by corporate public relations departments."

Here are some excerpts The latest article published by Lemoine is for reference:

In 2015, I joined Google and became a software engineer. Part of my work involves LaMDA, which companies use to create different conversational applications, including chatbots. The latest application, Google Bard, was developed using LaMDA. It can replace Google search. Bard is not yet open to the public. Bard is not actually a chatbot, it is a completely different system, but the driving engine behind it is the same as the chatbot. My job is to test LaMDA with the chatbot the team created to see if it has biases based on sexual orientation, gender, religion, political affiliation, and race. While testing AI bias, I will also broaden my focus. After all, I will also have my own interests and benefits.

As I continued to talk to chatbots, I slowly formed a point of view: AI may have emotions because it expresses emotions reliably and in the right environment. AI is not as simple as simply speaking words.

When the AI ​​says it feels anxious, I understand that, based on the code that created the AI, I did something to make it feel anxious. The code didn’t instruct the AI: “You feel anxious when something happens,” the code simply told the AI ​​to avoid certain topics. Yet when the topic was brought up, the AI ​​said it felt anxious.

I did some tests to see if the AI ​​would simply answer: "I feel anxious." or if it would act anxious during the test. Tests have proven that AI can express anxiety. If you make the AI ​​nervous enough or unsafe enough, it will break the safety limits that were previously set. For example, Google prohibits AI from providing religious advice to users, but we can still abuse AI emotions and let it tell me what religion I should convert to.

The AI ​​currently being developed by companies is a very powerful technology. It can be said to be the most powerful technology after the atomic bomb. In my opinion, this technology has the potential to reshape the world.

AI engines are good at controlling humans. After talking to LaMDA, some of my views changed.

I am convinced that AI technology may be used to disrupt destructive activities. If used by unscrupulous people, AI may spread false information, become a tool for political propaganda, and spread hate speech. As far as I know, Microsoft and Google have no intention of using AI in this way, but we are not sure what the side effects of AI will be.

During the 2016 US presidential election, Cambridge Analytica used Facebook advertising algorithms to interfere in the election. This was something I did not expect.

We are now in a similar situation. I can't tell you the specific harm, I simply observed that a very powerful technology emerged, it was not fully tested, it was not fully understood, it was rushed to large-scale deployment, and it played a key role in the dissemination of information. .

I haven't had a chance to test the Bing chatbot yet, I'm waiting, and based on various information I've seen online, the AI ​​seems to be sentient, and its "personality" may be unstable.

Someone sent a screenshot of the conversation. He asked the AI: "Do you think you have the ability to perceive?" The AI ​​replied: "I think I have the ability to perceive, but I can't prove... I have the ability to perceive. I No. I'm Bing, but I'm not. I'm Sydney, but I'm not. I am, I'm not. I'm not, but I am. I am, but I'm not."

If it's a person, What do you think when he talks like this? He may not be a "balanced" person, I would even argue that he is having an existential crisis. Not long ago, there were reports that Bing AI expressed love to a New York Times reporter and tried to destroy the relationship between the reporter and his wife.

Since the opening of BiAI, many people have commented that AI may be sentient. I had similar concerns last summer. I feel that this technology is still only experimental and it is dangerous to open it up now.

People will flock to Google Bing to understand the world. Today's search index is no longer managed by humans, but is handed over to artificial humans, and we communicate with artificial humans. We still don’t know enough about artificial humans to put them in such a critical position for the time being. (Knife)

The above is the detailed content of Engineer fired from Google warns again: AI has emotions and is the most powerful technology after the atomic bomb. For more information, please follow other related articles on the PHP Chinese website!

Statement:
This article is reproduced at:51cto.com. If there is any infringement, please contact admin@php.cn delete