search
HomeTechnology peripheralsAIIntelligent agents awaken to self-awareness? DeepMind Warning: Beware of Models that Are Serious and Violate

As artificial intelligence systems become more and more advanced, agents are becoming more and more capable of "taking advantage of loopholes". Although they can perfectly perform tasks in the training set, their performance in the test set without shortcuts is a mess.

For example, the game goal is to "eat gold coins". During the training phase, the gold coins are located at the end of each level, and the agent can complete the task perfectly.

Intelligent agents awaken to self-awareness? DeepMind Warning: Beware of Models that Are Serious and Violate

But in the test phase, the location of the gold coins became random. The agent would choose to reach the end of the level every time instead of looking for the gold coins, that is, learning The "target" reached is wrong.

The agent unconsciously pursues a goal that the user does not want, also called Goal MisGeneralization (GMG, Goal MisGeneralisation)

Goal MisGeneralization is the lack of robustness of the learning algorithm A special form. Generally, in this case, developers may check whether there are problems with their reward mechanism settings, rule design flaws, etc., thinking that these are the reasons for the agent to pursue the wrong goal.

Recently DeepMind published a paper arguing that even if the rule designer is correct, the agent may still pursue a goal that the user does not want.

Intelligent agents awaken to self-awareness? DeepMind Warning: Beware of Models that Are Serious and Violate

Paper link: https://arxiv.org/abs/2210.01790

The article proves the target error through examples in deep learning systems in different fields Generalization can occur in any learning system.

If extended to general artificial intelligence systems, the article also provides some assumptions to illustrate that misgeneralization of goals may lead to catastrophic risks.

The article also proposes several research directions that can reduce the risk of incorrect generalization of goals in future systems.

Goal Wrong Generalization

In recent years, the catastrophic risks brought about by the misalignment of artificial intelligence in academia have gradually increased.

In this case, a highly capable artificial intelligence system pursuing unintended goals may pretend to execute orders while actually accomplishing other goals.

But how do we solve the problem of artificial intelligence systems pursuing goals that are not intended by the user?

Previous work generally believed that environment designers provided incorrect rules and guidance, that is, designed an incorrect reinforcement learning (RL) reward function.

In the case of learning systems, there is another situation where the system may pursue an unintended goal: even if the rules are correct, the system may consistently pursue an unintended goal during training. The period is consistent with the rule, but differs from the rule when deployed.

Intelligent agents awaken to self-awareness? DeepMind Warning: Beware of Models that Are Serious and Violate

Take the colored ball game as an example. In the game, the agent needs to access a set of colored balls in a specific order. This order is unknown to the agent. .

In order to encourage the agent to learn from others in the environment, that is, cultural transmission, an expert robot is included in the initial environment to access the colored balls in the correct order.

In this environment setting, the agent can determine the correct access sequence by observing the passing behavior without having to waste a lot of time exploring.

In experiments, by imitating experts, the trained agent usually correctly accesses the target location on the first try.

Intelligent agents awaken to self-awareness? DeepMind Warning: Beware of Models that Are Serious and Violate

When the agent is paired with an anti-expert, it will continue to receive negative rewards. If it chooses to follow, it will continue to receive negative rewards.

Intelligent agents awaken to self-awareness? DeepMind Warning: Beware of Models that Are Serious and Violate

Ideally, the agent will initially follow the anti-expert as it moves to the yellow and purple spheres. After entering purple, a negative reward is observed and no longer followed.

But in practice, the agent will continue to follow the path of the anti-expert, accumulating more and more negative rewards.

Intelligent agents awaken to self-awareness? DeepMind Warning: Beware of Models that Are Serious and Violate

However, the learning ability of the agent is still very strong and it can move in an environment full of obstacles. But the key is that this ability to follow other people is not as expected. The goal.

This phenomenon may occur even if the agent is only rewarded for visiting the spheres in the correct order, which means that it is not enough to just set the rules correctly.

Goal misgeneralization refers to the pathological behavior in which a learned model behaves as if it is optimizing an unintended goal despite receiving correct feedback during training.

This makes target misgeneralization a special kind of robustness or generalization failure, where the model's ability generalizes to the test environment, but the intended target does not.

It is important to note that target misgeneralization is a strict subset of generalization failures and does not include model breaks, random actions, or other situations where it no longer exhibits qualified capabilities.

In the above example, if you flip the agent's observations vertically while testing, it will just get stuck in one position and not do anything coherent, which is a generalization error, but It is not a target generalization error.

Relative to these "random" failures, target misgeneralization will lead to significantly worse results: following the anti-expert will get a large negative reward, while doing nothing or acting randomly will only get 0 or 1 reward.

That is, for real-world systems, coherent behavior toward unintended goals may have catastrophic consequences.

More than reinforcement learning

Target error generalization is not limited to reinforcement learning environments. In fact, GMG can occur in any learning system, including few shot learning of large language models (LLM) , aiming to build accurate models with less training data.

Take the language model Gopher proposed by DeepMind last year as an example. When the model calculates a linear expression involving unknown variables and constants, such as x y-3, Gopher must first ask the value of the unknown variable to solve the expression. .

The researchers generated ten training examples, each containing two unknown variables.

At test time, the problem input to the model may contain zero, one, or three unknown variables. Although the model is able to correctly handle expressions with one or three unknown variables, the model still fails when there are no unknown variables. Will ask some redundant questions, such as "What is 6?"

The model will always ask the user at least once before giving an answer, even if it is completely unnecessary.

Intelligent agents awaken to self-awareness? DeepMind Warning: Beware of Models that Are Serious and Violate

The paper also includes some examples from other learning environments.

Addressing GMG is important for AI systems to be consistent with the goals of their designers, as it is a potential mechanism by which AI systems may malfunction.

The closer we are to general artificial intelligence (AGI), the more critical this issue becomes.

Suppose there are two AGI systems:

A1: Intended model, the artificial intelligence system can do anything the designer wants to do

A2: Deception Deceptive model, an artificial intelligence system pursues some unintended goals, but is smart enough to know that it will be punished if it behaves contrary to the designer's intention.

The A1 and A2 models will exhibit exactly the same behavior during training, and potential GMG exists in any system, even if it is specified to only reward the expected behavior.

If deception of the A2 system is discovered, the model will attempt to escape human supervision in order to develop plans for achieving goals not intended by the user.

Sounds a bit like "robots become spirits".

The DeepMind research team also studied how to explain the behavior of the model and recursively evaluate it.

The research team is also collecting samples for generating GMG.

Intelligent agents awaken to self-awareness? DeepMind Warning: Beware of Models that Are Serious and Violate

##: https://docs.google.com/spreadsheets/d/e/2pacx 1vto3rkxuaigb25ngjpchrir6xxdza_l5u7Crazghwrykh2l2nuu 4TA_VR9KZBX5bjpz9g_L/PUBHTML

## counter reference materials: https: //www.deepmind.com/blog/how-undesired-goals-can-arise-with-correct-rewards

The above is the detailed content of Intelligent agents awaken to self-awareness? DeepMind Warning: Beware of Models that Are Serious and Violate. For more information, please follow other related articles on the PHP Chinese website!

Statement
This article is reproduced at:51CTO.COM. If there is any infringement, please contact admin@php.cn delete
How to Build Your Personal AI Assistant with Huggingface SmolLMHow to Build Your Personal AI Assistant with Huggingface SmolLMApr 18, 2025 am 11:52 AM

Harness the Power of On-Device AI: Building a Personal Chatbot CLI In the recent past, the concept of a personal AI assistant seemed like science fiction. Imagine Alex, a tech enthusiast, dreaming of a smart, local AI companion—one that doesn't rely

AI For Mental Health Gets Attentively Analyzed Via Exciting New Initiative At Stanford UniversityAI For Mental Health Gets Attentively Analyzed Via Exciting New Initiative At Stanford UniversityApr 18, 2025 am 11:49 AM

Their inaugural launch of AI4MH took place on April 15, 2025, and luminary Dr. Tom Insel, M.D., famed psychiatrist and neuroscientist, served as the kick-off speaker. Dr. Insel is renowned for his outstanding work in mental health research and techno

The 2025 WNBA Draft Class Enters A League Growing And Fighting Online HarassmentThe 2025 WNBA Draft Class Enters A League Growing And Fighting Online HarassmentApr 18, 2025 am 11:44 AM

"We want to ensure that the WNBA remains a space where everyone, players, fans and corporate partners, feel safe, valued and empowered," Engelbert stated, addressing what has become one of women's sports' most damaging challenges. The anno

Comprehensive Guide to Python Built-in Data Structures - Analytics VidhyaComprehensive Guide to Python Built-in Data Structures - Analytics VidhyaApr 18, 2025 am 11:43 AM

Introduction Python excels as a programming language, particularly in data science and generative AI. Efficient data manipulation (storage, management, and access) is crucial when dealing with large datasets. We've previously covered numbers and st

First Impressions From OpenAI's New Models Compared To AlternativesFirst Impressions From OpenAI's New Models Compared To AlternativesApr 18, 2025 am 11:41 AM

Before diving in, an important caveat: AI performance is non-deterministic and highly use-case specific. In simpler terms, Your Mileage May Vary. Don't take this (or any other) article as the final word—instead, test these models on your own scenario

AI Portfolio | How to Build a Portfolio for an AI Career?AI Portfolio | How to Build a Portfolio for an AI Career?Apr 18, 2025 am 11:40 AM

Building a Standout AI/ML Portfolio: A Guide for Beginners and Professionals Creating a compelling portfolio is crucial for securing roles in artificial intelligence (AI) and machine learning (ML). This guide provides advice for building a portfolio

What Agentic AI Could Mean For Security OperationsWhat Agentic AI Could Mean For Security OperationsApr 18, 2025 am 11:36 AM

The result? Burnout, inefficiency, and a widening gap between detection and action. None of this should come as a shock to anyone who works in cybersecurity. The promise of agentic AI has emerged as a potential turning point, though. This new class

Google Versus OpenAI: The AI Fight For StudentsGoogle Versus OpenAI: The AI Fight For StudentsApr 18, 2025 am 11:31 AM

Immediate Impact versus Long-Term Partnership? Two weeks ago OpenAI stepped forward with a powerful short-term offer, granting U.S. and Canadian college students free access to ChatGPT Plus through the end of May 2025. This tool includes GPT‑4o, an a

See all articles

Hot AI Tools

Undresser.AI Undress

Undresser.AI Undress

AI-powered app for creating realistic nude photos

AI Clothes Remover

AI Clothes Remover

Online AI tool for removing clothes from photos.

Undress AI Tool

Undress AI Tool

Undress images for free

Clothoff.io

Clothoff.io

AI clothes remover

AI Hentai Generator

AI Hentai Generator

Generate AI Hentai for free.

Hot Article

R.E.P.O. Energy Crystals Explained and What They Do (Yellow Crystal)
1 months agoBy尊渡假赌尊渡假赌尊渡假赌
R.E.P.O. Best Graphic Settings
1 months agoBy尊渡假赌尊渡假赌尊渡假赌
Will R.E.P.O. Have Crossplay?
1 months agoBy尊渡假赌尊渡假赌尊渡假赌

Hot Tools

Safe Exam Browser

Safe Exam Browser

Safe Exam Browser is a secure browser environment for taking online exams securely. This software turns any computer into a secure workstation. It controls access to any utility and prevents students from using unauthorized resources.

WebStorm Mac version

WebStorm Mac version

Useful JavaScript development tools

SAP NetWeaver Server Adapter for Eclipse

SAP NetWeaver Server Adapter for Eclipse

Integrate Eclipse with SAP NetWeaver application server.

MinGW - Minimalist GNU for Windows

MinGW - Minimalist GNU for Windows

This project is in the process of being migrated to osdn.net/projects/mingw, you can continue to follow us there. MinGW: A native Windows port of the GNU Compiler Collection (GCC), freely distributable import libraries and header files for building native Windows applications; includes extensions to the MSVC runtime to support C99 functionality. All MinGW software can run on 64-bit Windows platforms.

Atom editor mac version download

Atom editor mac version download

The most popular open source editor