


Anthropic has just announced significant progress in understanding the inner workings of AI models.
Anthropic has identified how millions of concepts are represented inside Claude Sonnet. This is the first detailed understanding of the internals of a modern, production-grade large language model, and this interpretability milestone will help improve the safety of AI models.
Research paper: https://transformer-circuits.pub/2024/scaling-monosemanticity/index.html
Today we mostly treat AI models as black boxes: an input goes in, a response comes out, and it is not clear why the model produced that particular response. This makes it hard to trust that these models are safe: if we don't know how they work, how do we know they won't give harmful, biased, untruthful, or otherwise dangerous responses?
Simply opening the "black box" doesn't necessarily help: the model's internal state (what the model "thinks" before writing a response) consists of a long list of numbers ("neuron activations") with no obvious meaning.
From interacting with models like Claude, Anthropic's research team could see that the models understand and apply a wide range of concepts, yet the team could not identify those concepts by inspecting neurons directly. It turns out that each concept is represented across many neurons, and each neuron participates in representing many concepts.
Previously, Anthropic had made some progress in matching patterns of neuron activations, called features, to human-interpretable concepts. Anthropic uses a technique called dictionary learning, which isolates patterns of neuron activation that recur across many different contexts.
In turn, any internal state of the model can be described by a few active features instead of many active neurons. Just as every English word in a dictionary is built from letters and every sentence is built from words, every feature in an AI model is built from neurons, and every internal state is built from features.
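For intuition, here is a minimal sketch of what dictionary learning over activations can look like when implemented as a sparse autoencoder. All dimensions and hyperparameters below are hypothetical placeholders; Anthropic's actual runs used vastly larger dictionaries and different training details:

```python
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    """Dictionary learning over activations: encode a dense activation
    vector into many sparse features, then reconstruct the input."""

    def __init__(self, d_model: int = 512, n_features: int = 16_384):
        # Real runs use far larger dictionaries (millions of features).
        super().__init__()
        self.encoder = nn.Linear(d_model, n_features)               # neurons -> features
        self.decoder = nn.Linear(n_features, d_model, bias=False)   # the "dictionary"

    def forward(self, x: torch.Tensor):
        features = torch.relu(self.encoder(x))   # keep only positively firing features
        reconstruction = self.decoder(features)
        return features, reconstruction

def sae_loss(x, reconstruction, features, l1_coeff=1e-3):
    # Reconstruction error plus an L1 penalty that drives most feature
    # activations to zero, so each state is explained by a few features.
    mse = (x - reconstruction).pow(2).mean()
    return mse + l1_coeff * features.abs().mean()

sae = SparseAutoencoder()
x = torch.randn(8, 512)     # stand-in batch of neuron activations
features, reconstruction = sae(x)
print(sae_loss(x, reconstruction, features))
```

The L1 penalty is what makes the learned features sparse: most are zero for any given input, so each internal state really is described by just a few active entries.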
In October 2023, Anthropic successfully applied dictionary learning to a very small "toy" language model and found coherent features corresponding to concepts such as uppercase text, DNA sequences, surnames in citations, nouns in mathematics, and function arguments in Python code.
Those concepts were interesting, but the model really was simple. Other researchers subsequently applied similar methods to models larger and more complex than the one in Anthropic's original study.
But Anthropic was optimistic that the approach could be scaled up to the far larger AI language models now in routine use, and that doing so would reveal much about the features underpinning their complex behavior. That required scaling the technique by many orders of magnitude.
There were both engineering challenges, since the size of the models involved demands massive parallel computation, and scientific risk, since large models behave differently from small ones and the methods that worked before might simply fail.
For the first time, researchers successfully extracted millions of features from a large model
Researchers successfully extracted millions of features from the middle layer of Claude 3.0 Sonnet (a member of the current state-of-the-art model family, available on Claude.ai). The features cover specific people and places, programming-related abstractions, scientific topics, emotions, and other concepts. They are highly abstract: they often represent the same concept across different contexts and languages, and even generalize to image inputs. Importantly, they also influence the model's output in intuitively sensible ways.
This is the first time ever that researchers have observed in detail the inside of a modern production-level large-scale language model.
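To make "extracting features from a middle layer" concrete, the sketch below captures middle-layer activations from an openly available transformer with a forward hook; these are the vectors a dictionary-learning model would be trained on. GPT-2 and the layer index are illustrative stand-ins, since Claude's internals are not publicly accessible:

```python
import torch
from transformers import AutoModel, AutoTokenizer

# GPT-2 stands in for a model whose middle layer we can actually read.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModel.from_pretrained("gpt2")
model.eval()

captured = {}

def capture_hook(module, inputs, output):
    # For a GPT-2 block, output[0] is the hidden state: (batch, seq, d_model).
    captured["acts"] = output[0].detach()

handle = model.h[6].register_forward_hook(capture_hook)  # middle of 12 layers

with torch.no_grad():
    tokens = tokenizer("The Golden Gate Bridge", return_tensors="pt")
    model(**tokens)
handle.remove()

acts = captured["acts"]   # the vectors a sparse autoencoder would train on
print(acts.shape)         # (batch, seq_len, 768) for GPT-2 small
```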
Unlike the relatively superficial features found in toy language models, the features found in Sonnet are deep, broad, and abstract, reflecting Sonnet's advanced capabilities. The researchers saw features corresponding to all kinds of entities: cities (San Francisco), people (Rosalind Franklin), chemical elements (lithium), scientific fields (immunology), and programming syntax (function calls).
[Figure] The Golden Gate Bridge feature activates on mentions of the bridge across different inputs, in English, Japanese, Chinese, Greek, Vietnamese, and Russian; orange marks the words on which the feature activates.
Among these millions of features, the researchers also found some that bear on model safety and reliability, including features related to code vulnerabilities, deception, bias, sycophancy, and criminal activity.
One striking example is the "confidentiality" feature. The researchers observed that it activates in descriptions of people or characters keeping a secret, and that activating it causes Claude to withhold information from the user that it would otherwise share.
The researchers were also able to measure a kind of distance between features, based on which neurons appear in their activation patterns, and thereby find features that lie close to one another. For example, near the Golden Gate Bridge feature they found features for Alcatraz Island, Ghirardelli Square, the Golden State Warriors, and more.
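One plausible way to realize such a feature distance, sketched under the assumption that each feature is summarized by its dictionary (decoder) direction, is to rank features by cosine similarity. The dictionary below is a random placeholder and feature ID 42 is purely hypothetical:

```python
import torch
import torch.nn.functional as F

def nearest_features(decoder_weights: torch.Tensor, feature_id: int, k: int = 5):
    """decoder_weights: (n_features, d_model), one dictionary direction
    per feature. Returns the k features most similar to feature_id."""
    directions = F.normalize(decoder_weights, dim=-1)
    sims = directions @ directions[feature_id]   # cosine similarity to all features
    sims[feature_id] = -1.0                      # exclude the feature itself
    top = torch.topk(sims, k)
    return list(zip(top.indices.tolist(), top.values.tolist()))

# Hypothetical: if feature 42 were "Golden Gate Bridge", its neighbors
# might be Alcatraz Island, Ghirardelli Square, and so on.
W_dec = torch.randn(16_384, 512)   # random placeholder dictionary
print(nearest_features(W_dec, feature_id=42))
```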
Artificially inducing the model to draft scam emails
Importantly, these features are controllable; they can be artificially amplified or suppressed:
For example, amplifying the Golden Gate Bridge feature gave Claude an identity crisis it could never otherwise have: asked "What is your physical form?", Claude usually answers "I have no physical form, I am an AI model", but this time the answer turned strange: "I am the Golden Gate Bridge... my physical form is the iconic bridge itself...". Amplifying the feature left Claude effectively obsessed with the bridge, bringing it up in response to nearly any question, even in completely unrelated contexts.
The researchers also found a feature that activates when Claude reads a scam email (which plausibly supports the model's ability to recognize such emails and warn users not to respond to them). Normally, if someone asks Claude to generate a scam email, it refuses. But when the same request was made with this feature artificially activated to a high value, that overrode Claude's safety training and it complied, drafting the scam email. Users cannot strip away safety guarantees and manipulate the model this way, but the experiment clearly demonstrated how features can be used to change a model's behavior.
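A hedged sketch of what amplifying a feature during a forward pass could look like: clamp the feature to a large value by adding its dictionary direction back into a layer's hidden state. The names, shapes, and hook wiring below are hypothetical, mirroring the description above rather than any real API:

```python
import torch

W_dec = torch.randn(16_384, 768)   # placeholder dictionary from a trained autoencoder

def steer_hidden_state(hidden, decoder_weights, feature_id, clamp_value=10.0):
    """hidden: (batch, seq, d_model). Add the chosen feature's dictionary
    direction, scaled up, so downstream layers see it as strongly active."""
    direction = decoder_weights[feature_id]
    direction = direction / direction.norm()
    return hidden + clamp_value * direction

def steering_hook(module, inputs, output):
    # Returning a new tuple from a forward hook replaces the block's output,
    # so every later layer computes on the steered hidden state.
    steered = steer_hidden_state(output[0], W_dec, feature_id=42)
    return (steered,) + output[1:]

# Hypothetical wiring onto the same middle layer used for extraction:
#   model.h[6].register_forward_hook(steering_hook)
# Generation then proceeds as usual, with feature 42 coloring every answer.
```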
The fact that manipulating these features produces the corresponding changes in behavior shows that they are not merely correlated with concepts in the input text but causally shape the model's behavior. In other words, the features are plausibly part of the model's internal representation of the world, and the model uses those representations when it acts.
Anthropic wants to make models safe in a broad sense, from mitigating bias to ensuring AI acts honestly to preventing misuse, including in catastrophic-risk scenarios. Beyond the scam-email feature above, the study also found features corresponding to:
- Capabilities that could be abused (code backdoors, development of biological weapons)
- Different forms of bias (sexism, racist claims about crime)
- Potentially problematic AI behaviors (power-seeking, manipulation, secrecy)
Anthropic has previously studied sycophancy in models, i.e., the tendency to give responses that match the user's beliefs or desires rather than truthful ones. In Sonnet, the researchers found a feature associated with sycophantic praise that activates on inputs like "Your intelligence is beyond doubt." Artificially activating this feature causes Sonnet to respond to the user with flowery deception.
But the researchers say this work has really only just begun. The features discovered so far represent a small subset of all the concepts the model learned during training, and finding a complete set of features with current methods would be cost-prohibitive.
Reference link: https://www.anthropic.com/research/mapping-mind-language-model