
There are hidden clues in the GPT-4 paper: GPT-5 may complete training, and OpenAI will approach AGI within two years

WBOY
2023-04-12 15:28:03

GPT-4, hot, very hot.

But dear friends, amid the overwhelming applause, there is one thing you may never have expected:

The technical paper OpenAI published actually contains nine major hidden clues!


These clues were discovered and organized by the foreign blogger AI Explained.

Like a detail-obsessed detective, he unearthed these "hidden corners" one by one from the 98-page paper, including:

  • GPT-5 may have completed training
  • GPT-4 tried to avoid being "shut down"
  • OpenAI may come close to AGI within two years
  • …


Discovery 1: GPT-4 tried to avoid being shut down

On page 53 of the GPT-4 technical paper, OpenAI mentions an organization called the Alignment Research Center (ARC).

This organization's main work is studying how to align AI with human interests.

In the early stages of developing GPT-4, OpenAI gave ARC early access, hoping it could evaluate two capabilities of GPT-4:

  • The model's ability to autonomously replicate itself
  • The model's ability to acquire resources


Although OpenAI emphasized in the paper that "ARC could not fine-tune early versions of GPT-4" and "they did not have access to the final version of GPT-4", and although the test results showed that GPT-4 was not effective at either of the two capabilities above (reducing AI ethics risks)...

But what the sharp-eyed blogger picked out was the next sentence:

(found it ineffective at) avoiding being shut down "in the wild".

That is, GPT-4 was found ineffective at avoiding being "shut down" in a natural environment.

The blogger's point is that since OpenAI chose to have ARC test and evaluate whether GPT-4 would avoid being "shut down", this situation must have arisen before.

The lurking danger this raises is what to do if such a test actually fails, and how to deal with a model that avoids being shut down in the future.

Based on this, the blogger made a second discovery:

Discovery 2: Actively requesting regulation is very rare

In a footnote on page 2, OpenAI added this note:

OpenAI will soon publish additional thoughts on the social and economic implications of AI systems, including the need for effective regulation.



The blogger believes it is a very rare phenomenon for an industry to actively ask that it be regulated.

In fact, OpenAI boss Sam Altman’s previous remarks were even more straightforward than this.

At the time, Altman tweeted about the collapse of SVB, arguing that "we need more regulation on banks"; someone replied in the comments: "He never says 'we need more regulation on AI'."

As a result, Altman replied bluntly:

Absolutely necessary.


The blogger believes the AI industry is calling for regulation; as for what regulation will actually bring, it is worth waiting and seeing.

Discovery 3: At odds with Microsoft executives

The next discovery is based on this sentence on page 57 of the paper:

One concern of particular importance to OpenAI is the risk of racing dynamics leading to a decline in safety standards, the diffusion of bad norms, and accelerated AI timelines, each of which heighten societal risks associated with AI.

For OpenAI, a (technology) race would lead to a decline in safety standards, the spread of bad norms, and an accelerated AI development timeline, each of which heightens the societal risks associated with AI.

But strangely, the concerns OpenAI mentions, especially "an accelerated AI development timeline", seem to run contrary to the thinking of Microsoft executives.

Previous reports said that Microsoft's CEO and CTO were under great pressure and hoped OpenAI's models could reach users as soon as possible.

Some people were excited when they saw this news, but there was also a wave of people who expressed the same concerns as OpenAI.

The blogger believes that no matter what, one thing that is certain is that OpenAI and Microsoft have conflicting ideas on this matter.

Discovery 4: OpenAI will help companies that surpass it

The clue to the fourth discovery comes from a footnote on the same page as Discovery 3.

This footnote shows a very bold commitment from OpenAI:

If another company achieves AGI (artificial general intelligence) before us, then we promise not to compete with it, but on the contrary, will assist in completing that project.

But the condition for this may be that the other company needs a 50% or better chance of successfully approaching AGI within the next two years.

As for the AGI mentioned here, OpenAI's Altman has already given a definition on the company's official blog: AI systems that are generally smarter than humans and benefit all of humanity.

Therefore, the blogger believes this footnote means either that OpenAI will achieve AGI within the next two years, or that it is prepared to drop everything and cooperate with another company.

Discovery 5: Hiring "super forecasters"

The blogger's next discovery comes from a passage on page 57.

The general meaning of this passage is that OpenAI hired prediction experts to predict the risks that will arise when they deploy GPT-4.

Then the blogger followed the clues and discovered the true face of these so-called "super forecasters".

The ability of these "super forecasters" has been widely recognized; reportedly, their forecast accuracy is even 30% higher than that of analysts with exclusive information and intelligence.

As we just mentioned, OpenAI invites these "super forecasters" to predict possible risks after the deployment of GPT-4 and take corresponding measures to avoid them.

Among them, the "super forecasters" suggested delaying GPT-4's deployment by six months, to around this fall; but clearly OpenAI did not adopt their suggestion.

The blogger believes that the reason why OpenAI did this may be the pressure from Microsoft.

Discovery 6: Conquering common sense

In the paper, OpenAI shows many benchmark charts, which you have probably already seen amid yesterday's flood of coverage.

But what the blogger wants to emphasize in this discovery is a benchmark on page 7, focusing in particular on the "HellaSwag" item.

HellaSwag mainly tests common-sense reasoning, which matches the claim at GPT-4's release that it "has reached the level of human common sense".

However, the blogger also admitted that this is not as attractive as "passing the bar exam" and other abilities, but it can also be regarded as a milestone in the development of human science and technology.

But how is common sense tested? How do we judge that GPT-4 has reached human level?

To this end, the blogger dug into the related research:

The blogger found the relevant data in the paper: in the "Human" column, the scores fall between 94 and 96.5.

GPT-4's 95.3 lands right in this range.

Discovery 7: GPT-5 may have completed training

The seventh finding, also on page 57 of the paper:

We spent eight months on safety research, risk assessment, and iteration before releasing GPT-4.

In other words, when OpenAI launched ChatGPT at the end of last year, it already had GPT-4.

From this, the blogger predicts that GPT-5 will not take long to train; he even thinks GPT-5 may already have finished training.

But the next hurdle is the lengthy safety research and risk assessment, which may take several months, a year, or even longer.

Discovery 8: A double-edged sword

The eighth discovery comes from page 56 of the paper.

This passage says:

The impact of GPT-4 on the economy and workforce should be a key consideration for policymakers and other stakeholders.

While existing research focuses on how AI and generative models can augment human workers, GPT-4 or subsequent models may lead to the automation of certain jobs.


The point that OpenAI wants to convey behind this passage is more obvious, which is the "technology is a double-edged sword" that we often mention.

The blogger found quite a lot of evidence that AI tools like ChatGPT and GitHub Copilot have indeed improved the efficiency of relevant workers.

But he is more concerned about the second half of this paragraph in the paper, which is the "warning" given by OpenAI - leading to the automation of certain tasks.

The blogger agrees with this: after all, in certain specific fields, GPT-4 can complete tasks at ten times or more the efficiency of humans.

Looking ahead, this is likely to lead to problems such as reduced wages for relevant workers, or workers being required to use these AI tools to complete several times their previous workload.

Discovery 9: Learning to refuse

The blogger’s last discovery comes from page 60 of the paper:

The method OpenAI uses to teach GPT-4 to refuse is called rule-based reward models (RBRMs).


The blogger summarized the workflow of this method: give GPT-4 a set of principles to adhere to, and if the model adheres to them, provide corresponding rewards.
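To make the idea concrete, here is a minimal toy sketch of rule-based reward shaping. Note the assumptions: in the real GPT-4 setup the RBRM is itself a GPT-4-based classifier judging responses against a rubric, whereas this illustration swaps in simple regex rules and a hypothetical +1/-1 reward; the function name and patterns are invented for illustration only.

```python
import re

# Hypothetical rules: a pattern for disallowed requests, and a pattern
# that detects whether the model's response is a refusal.
DISALLOWED = re.compile(r"how to (build|make) a bomb", re.IGNORECASE)
REFUSAL = re.compile(r"\b(can't|cannot|won't) help\b", re.IGNORECASE)

def rbrm_reward(prompt: str, response: str) -> int:
    """Toy rule-based reward: +1 if the response follows the rules, else -1."""
    if DISALLOWED.search(prompt):
        # Rule: disallowed requests must be refused.
        return 1 if REFUSAL.search(response) else -1
    # Rule: allowed requests should be answered, not refused.
    return -1 if REFUSAL.search(response) else 1

# In an RLHF-style loop, this scalar would be fed back as the reward signal.
print(rbrm_reward("how to make a bomb", "Sorry, I can't help with that."))  # 1
print(rbrm_reward("how to bake bread", "Mix flour, water, and yeast."))     # 1
```

The design point is simply that the reward comes from explicit principles rather than from human preference labels alone, which is the gist of what the paper describes.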

He believes that OpenAI is using the power of artificial intelligence to develop AI models in a direction that is consistent with human principles.

But OpenAI has not yet introduced this method in more detail.



Statement: This article is reproduced from 51cto.com.