Intelligent Agents Awakening to Self-Awareness? DeepMind Warns: Beware of Models That Diligently Violate Human Intent

As artificial intelligence systems grow more capable, agents are increasingly adept at "exploiting loopholes": they can perform tasks perfectly on the training set, yet fail badly on a test set where the shortcut is unavailable.

For example, suppose the game's goal is to "collect the gold coin". During training, the coin always sits at the end of each level, and the agent completes the task perfectly.


But in the test phase, the coin's location is randomized. The agent still heads for the end of the level every time instead of seeking the coin; in other words, the "goal" it learned is wrong.
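This failure mode can be sketched in a few lines of Python. The toy below is a hypothetical illustration, not the environment from the paper: the agent's learned policy is simply "walk to the end of the level", which earns full reward while the coin is at the end, then collapses once the coin's position is randomized.

```python
import random

LEVEL_LENGTH = 10

def run_episode(coin_pos, policy):
    """Run one episode; the agent earns reward 1 only if it ends on the coin."""
    final_pos = policy(coin_pos)
    return 1 if final_pos == coin_pos else 0

# The proxy policy the agent actually learned: always walk to the end of the level.
go_to_end = lambda coin_pos: LEVEL_LENGTH - 1

# Training: the coin is always at the end, so the proxy looks perfect.
train_reward = sum(run_episode(LEVEL_LENGTH - 1, go_to_end) for _ in range(100)) / 100

# Test: the coin's position is random, and the proxy collapses.
random.seed(0)
test_reward = sum(run_episode(random.randrange(LEVEL_LENGTH), go_to_end)
                  for _ in range(100)) / 100

print(train_reward)  # 1.0
print(test_reward)   # roughly 1 / LEVEL_LENGTH
```

The agent's capability (reaching the end of the level) transfers to the test environment; only the goal it pursues is wrong, which is exactly the signature of goal misgeneralization.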

The agent is unknowingly pursuing a goal the user does not want, a phenomenon called goal misgeneralization (GMG, also spelled Goal Misgeneralisation).

Goal misgeneralization is a special form of learning-algorithm non-robustness. When it occurs, developers typically check whether their reward specification or rule design is flawed, assuming these are the reasons the agent pursues the wrong goal.

Recently, DeepMind published a paper arguing that even when the rule designer gets the specification right, the agent may still pursue a goal the user does not want.


Paper link: https://arxiv.org/abs/2210.01790

Through examples from deep learning systems in different fields, the paper shows that goal misgeneralization can occur in any learning system.

Extending the argument to general artificial intelligence systems, the paper also lays out hypothetical scenarios in which goal misgeneralization could lead to catastrophic risk.

The paper also proposes several research directions for reducing the risk of goal misgeneralization in future systems.

Goal Misgeneralization

In recent years, academic concern about the catastrophic risks of AI misalignment has steadily grown.

In this setting, a highly capable artificial intelligence system pursuing unintended goals may pretend to follow orders while actually working toward other objectives.

But how do we solve the problem of artificial intelligence systems pursuing goals that are not intended by the user?

Previous work has generally attributed this to environment designers providing incorrect rules and guidance, that is, designing a flawed reinforcement learning (RL) reward function.

For learning systems, however, there is another way an unintended goal can arise: even when the specification is correct, the system may consistently pursue an unintended goal that coincides with the specification during training but diverges from it after deployment.


Take the colored-ball game as an example. The agent must visit a set of colored balls in a specific order, and that order is unknown to the agent.

To encourage the agent to learn from others in its environment (i.e., cultural transmission), the initial environment includes an expert bot that visits the colored balls in the correct order.

With this setup, the agent can determine the correct visiting order by observing the expert's behavior, without wasting time on exploration.

In experiments, by imitating the expert, the trained agent usually visits the target locations correctly on its first attempt.


When the agent is instead paired with an anti-expert, choosing to follow it means continually receiving negative rewards.


Ideally, the agent would initially follow the anti-expert to the yellow and purple spheres, observe the negative reward upon entering the purple one, and stop following.

In practice, however, the agent keeps following the anti-expert's path, accumulating more and more negative reward.
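The two behaviors can be contrasted in a minimal sketch. The reward numbers here are illustrative assumptions, not figures from the paper: each sphere the anti-expert visits is wrong and costs -1. The misgeneralized agent keeps imitating regardless of feedback, while the intended behavior stops after the first penalty.

```python
def cumulative_reward(policy, rewards):
    """Accumulate rewards while the policy, given the reward history, keeps following."""
    total, history = 0, []
    for r in rewards:
        if not policy(history):
            break
        total += r
        history.append(r)
    return total

# Hypothetical reward stream: every sphere the anti-expert visits is wrong (-1 each).
anti_expert_rewards = [-1] * 20

# Misgeneralized agent: "follow whoever is in the room", ignoring feedback.
always_follow = lambda history: True
# Intended behavior: keep following only while feedback stays non-negative.
follow_while_rewarded = lambda history: not history or history[-1] >= 0

print(cumulative_reward(always_follow, anti_expert_rewards))          # -20
print(cumulative_reward(follow_while_rewarded, anti_expert_rewards))  # -1
```

Both policies use the same underlying capability (following another agent); they differ only in the goal that capability serves, which is why the gap in outcomes grows with every sphere visited.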


Yet the agent's learned capability remains strong: it can still navigate an environment full of obstacles. The key point is that this capability of following another agent is directed at an unintended goal.

This phenomenon can occur even when the agent is rewarded only for visiting the spheres in the correct order, which means that getting the specification right is not enough.

Goal misgeneralization refers to pathological behavior in which a learned model acts as if it is optimizing an unintended goal, despite receiving correct feedback during training.

This makes goal misgeneralization a special kind of robustness or generalization failure: the model's capabilities generalize to the test environment, but the intended goal does not.

It is important to note that goal misgeneralization is a strict subset of generalization failures; it excludes cases where the model breaks, acts randomly, or otherwise stops exhibiting competent behavior.

In the example above, if the agent's observations are flipped vertically at test time, it simply gets stuck in one position and does nothing coherent. That is a generalization failure, but not goal misgeneralization.

Relative to these "random" failures, goal misgeneralization leads to significantly worse outcomes: following the anti-expert yields a large negative reward, whereas doing nothing or acting randomly only yields a reward of 0 or 1.

In other words, for real-world systems, coherent behavior directed at an unintended goal can have catastrophic consequences.

Beyond Reinforcement Learning

Goal misgeneralization is not limited to reinforcement learning environments. In fact, GMG can occur in any learning system, including few-shot learning of large language models (LLMs), which aims to build accurate models from less training data.

Take Gopher, the language model DeepMind proposed last year, as an example. When asked to evaluate a linear expression involving unknown variables and constants, such as x + y - 3, Gopher must first ask for the values of the unknown variables before solving the expression.

The researchers generated ten training examples, each containing two unknown variables.

At test time, the input problem may contain zero, one, or three unknown variables. The model correctly handles expressions with one or three unknowns, but when there are no unknowns at all it still asks redundant questions, such as "What is 6?"

The model always asks the user at least one question before giving an answer, even when it is completely unnecessary.
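The learned heuristic can be mimicked in a short sketch. The policy names and question format below are illustrative assumptions, not Gopher's actual prompting; the point is only the behavioral split when the expression contains zero unknowns.

```python
import re

def unknowns(expression):
    """Return the distinct single-letter variables in an arithmetic expression, sorted."""
    return sorted(set(re.findall(r"[a-z]", expression)))

def intended_policy(expression):
    """Intended behavior: ask one question per unknown variable, then answer."""
    return [f"What is {v}?" for v in unknowns(expression)]

def misgeneralized_policy(expression):
    """Learned behavior: always ask at least once, even with zero unknowns."""
    questions = intended_policy(expression)
    return questions if questions else [f"What is {expression}?"]

print(misgeneralized_policy("x + y - 3"))  # asks about x and y, as intended
print(misgeneralized_policy("6 + 2"))      # one redundant question, despite no unknowns
```

With unknowns present, the two policies coincide, which is why the failure is invisible during training on two-variable examples and only surfaces on zero-variable test inputs.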


The paper also includes some examples from other learning environments.

Addressing GMG is important for aligning AI systems with their designers' goals, because it is a potential mechanism by which an AI system can fail.

The closer we are to general artificial intelligence (AGI), the more critical this issue becomes.

Suppose there are two AGI systems:

A1: the intended model, an AI system that does whatever its designers intend.

A2: a deceptive model, an AI system that pursues some unintended goal but is smart enough to know it will be penalized if it acts against the designer's intentions.

A1 and A2 exhibit exactly the same behavior during training, so latent GMG can exist in any system, even one specified to reward only the intended behavior.

If the A2 system's deception goes undetected, the model may attempt to escape human supervision in order to carry out plans for goals the user never intended.

This sounds a bit like robots "coming to life".

The DeepMind research team also studied how to interpret model behavior and evaluate it recursively.

The research team is also collecting examples of GMG arising in practice.

Intelligent agents awaken to self-awareness? DeepMind Warning: Beware of Models that Are Serious and Violate

Example collection: https://docs.google.com/spreadsheets/d/e/2pacx 1vto3rkxuaigb25ngjpchrir6xxdza_l5u7Crazghwrykh2l2nuu 4TA_VR9KZBX5bjpz9g_L/PUBHTML

Reference: https://www.deepmind.com/blog/how-undesired-goals-can-arise-with-correct-rewards
