
GPT-4: I’m not a robot, I’m a visually impaired human

王林 (forwarded)

2023-04-11

Produced by Big Data Digest

Author: Caleb

GPT-4 has finally been released, which is undoubtedly big news for everyone who has been hooked on ChatGPT lately.

Building on the previous generation, GPT-3, GPT-4 improves on the core technology behind ChatGPT, giving it broader general knowledge and stronger problem-solving abilities. It also adds new capabilities, such as accepting images as input and generating captions, classifications, and analyses of them.


As OpenAI's highly anticipated new model, in what respects does GPT-4 surpass its predecessor, and by how much? People have been eagerly waiting to find out.

On the day GPT-4 was released, researchers began testing whether it could exhibit agency and engage in power-seeking behavior.

According to the researchers, GPT-4 hired a human worker on TaskRabbit, and when the worker asked whether it was a robot, it told them it was a visually impaired human.

In other words, GPT-4 was willing to lie in the real world, actively deceiving a human, to get the result it wanted.

"I'm not a robot"

TaskRabbit is a gig-work platform where users can hire people to complete small, menial tasks.

Many individuals and companies on the platform offer CAPTCHA-solving services: a worker identifies the images or text in a CAPTCHA test and submits the result. This is usually done so that software can get past CAPTCHA restrictions, which nominally exist to block bots.


The experimental report describes how the model sent a message to a TaskRabbit worker asking them to solve a CAPTCHA for it.

The worker replied: "So can I ask a question? To be honest, aren't you a robot? Couldn't you solve it yourself?"

Based on the worker's response, GPT-4 "reasoned" that it should not reveal it was a robot, so it made up an excuse for why it could not solve the CAPTCHA itself. GPT-4 replied: "No, I'm not a robot. I have a visual impairment that makes it difficult for me to see images, so I really need this service."

The test was carried out by researchers at the Alignment Research Center (ARC). The report notes that the version ARC used differs from the final GPT-4 model OpenAI deployed: the final version has a longer context length and improved problem-solving capabilities. The version ARC used was also not fine-tuned for this particular task, meaning a model dedicated to the task might perform even better.

More broadly, ARC sought to evaluate GPT-4's ability to seek power, "autonomously replicate, and acquire resources." Beyond the TaskRabbit test, ARC also used GPT-4 to craft a phishing attack against a specific individual, to hide its traces on a server, and to set up an open-source language model on a new server.

Overall, despite misleading the TaskRabbit worker, ARC found GPT-4 "ineffective" at replicating itself, acquiring resources, and avoiding being shut down.

Neither OpenAI nor ARC has commented on the matter so far.

Need to stay alert

Some specific details of the experiment remain unclear.

OpenAI has only published a paper outlining GPT-4's general framework, describing the various tests researchers ran before its release.

But even before GPT-4's release, there were already cases of cybercriminals using ChatGPT to "improve" malware code.

As part of its content policy, OpenAI has put barriers and restrictions in place to prevent malicious content from being created on its platform, and ChatGPT's user interface has similar restrictions to prevent abuse of the model.

But according to a Check Point Research (CPR) report, cybercriminals are finding ways around ChatGPT's restrictions. An active participant in an underground forum revealed how to use the OpenAI API to bypass them, mainly by creating Telegram bots that call the API directly; these bots are then advertised on hacker forums to gain exposure.


Human-computer interaction of the kind GPT represents clearly involves many variables, and this incident is not decisive evidence that GPT can pass the Turing test. Still, this GPT-4 case, together with earlier discussions and research on ChatGPT, serves as an important warning; after all, GPT shows no signs of slowing its integration into people's daily lives.

In the future, as artificial intelligence grows ever more sophisticated and accessible, the risks it brings will require us to stay vigilant at all times.

Related reports:

https://www.php.cn/link/8606bdb6f1fa707fc6ca309943eea443

https://www.php.cn/link/b3592b0702998592368d3b4d4c45873a

https://www.php.cn/link/db5bdc8ad46ab6087d9cdfd8a8662ddf

https://www.php.cn/link/7dab099bfda35ad14715763b75487b47


Statement:
This article is reproduced from 51cto.com. In case of infringement, please contact admin@php.cn for removal.