


OpenAI uses GPT-4 to explain GPT-2's 300,000 neurons: This is what intelligence looks like
Although ChatGPT seems to bring humans closer to recreating intelligence, we still have no full understanding of what intelligence actually is, whether natural or artificial.
Understanding the principles of intelligence is clearly necessary. But how do we understand the intelligence of large language models? OpenAI's answer: ask GPT-4.
On May 9, OpenAI released new research that uses GPT-4 to automatically interpret neuron behavior in large language models, yielding many interesting results.
A simple way to study interpretability is to first understand what the individual components of an AI model (neurons and attention heads) are doing. Traditional methods require humans to manually inspect neurons to determine which features of the data they represent. This process is difficult to scale: applying it to neural networks with hundreds of billions of parameters is prohibitively expensive.
So OpenAI proposed an automated method: use GPT-4 to generate and score natural-language explanations of neuron behavior, and apply it to the neurons of another language model. Here they chose GPT-2 as the experimental subject and published a dataset of explanations and scores for its neurons.
- Paper address: https://openaipublic.blob.core.windows.net/neuron-explainer/paper/index.html
- GPT-2 neuron viewer: https://openaipublic.blob.core.windows.net/neuron-explainer/neuron-viewer/index.html
- Code and dataset: https://github.com/openai/automated-interpretability
This technique lets researchers use GPT-4 to define and automatically measure a quantitative notion of explainability for AI models: how well a neuron's activations can be compressed into, and reconstructed from, natural language. Because the measure is quantitative, we can now track progress in understanding the computations of neural networks.
OpenAI said that using the benchmark they established, using AI to explain AI can achieve scores close to human levels.
OpenAI co-founder Greg Brockman said this is an important step toward using AI to automate alignment research.
## Specific method
The AI-explains-AI method runs three steps on each neuron:
Step 1: Use GPT-4 to generate explanations
Model-generated explanation: references to movies, characters, and entertainment.
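As a rough illustration of Step 1, the explainer model can be shown a neuron's (token, activation) records and asked to describe what the neuron responds to. The prompt format and function name below are illustrative assumptions, not OpenAI's actual code:

```python
# Hypothetical sketch of Step 1: turning a neuron's (token, activation)
# records into a natural-language prompt for an explainer model.
# The prompt layout is an assumption for illustration only.

def build_explanation_prompt(token_activations):
    """token_activations: list of (token, activation) pairs for one neuron."""
    lines = ["Explain what this neuron responds to, given tokens and activations:"]
    for token, act in token_activations:
        # Show each token next to its activation strength.
        lines.append(f"{token!r}\t{act:.2f}")
    lines.append("Explanation of neuron behavior:")
    return "\n".join(lines)

prompt = build_explanation_prompt([("movie", 9.1), ("film", 8.7), ("the", 0.2)])
print(prompt.splitlines()[1])  # → 'movie'	9.10
```

In practice this prompt would be sent to GPT-4, whose completion becomes the candidate explanation, such as the movies-and-entertainment example above.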
Step 2: Use GPT-4 to simulate
Use GPT-4 again to simulate what the explained neuron would do, predicting its activations from the explanation alone.
Step 3: Compare
The explanation is scored on how well the simulated activations match the real activations; in this example, the explanation scored 0.34.
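OpenAI's scorer is correlation-based: an explanation is good if activations simulated from it move with the neuron's real activations. A minimal, self-contained sketch of such a score, using plain Pearson correlation and made-up numbers (the real scorer in openai/automated-interpretability differs in detail):

```python
# Illustrative Step-3 scoring: Pearson correlation between a neuron's real
# activations and activations simulated from the candidate explanation.
import math

def correlation_score(real, simulated):
    """Return the Pearson correlation of two equal-length activation lists."""
    n = len(real)
    mean_r = sum(real) / n
    mean_s = sum(simulated) / n
    cov = sum((r - mean_r) * (s - mean_s) for r, s in zip(real, simulated))
    sd_r = math.sqrt(sum((r - mean_r) ** 2 for r in real))
    sd_s = math.sqrt(sum((s - mean_s) ** 2 for s in simulated))
    return cov / (sd_r * sd_s)

real_acts = [0.1, 0.9, 0.2, 0.8]   # made-up real activations
sim_acts  = [0.0, 1.0, 0.3, 0.7]   # made-up simulated activations
print(round(correlation_score(real_acts, sim_acts), 2))  # → 0.97
```

A score near 1.0 means the explanation predicts the neuron's behavior well; a score near 0 (like the 0.34 above) means it captures little of it.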
Using this scoring method, OpenAI began measuring how well the technique works on different parts of the network and trying to improve it where explanations are currently unclear. For example, the technique works poorly on larger models, possibly because later layers are harder to interpret.
OpenAI says that while the vast majority of their explanations scored poorly, they believe they can now use ML techniques to further improve their ability to generate explanations. For example, they found that the following helped improve scores:
- Iterating on explanations. Scores improved when GPT-4 was asked to think of possible counterexamples and then revise the explanation in light of their activations.
- Using a larger explainer model. Average scores rise as the explainer model becomes more capable. However, even GPT-4's explanations are worse than humans', suggesting room for improvement.
- Changing the architecture of the explained model. Training models with different activation functions improved explanation scores.
OpenAI is open-sourcing the dataset and visualization tools for the GPT-4-written explanations of all 307,200 neurons in GPT-2, along with code for explanation and scoring using models publicly available through the OpenAI API. They hope the research community will develop new techniques for generating higher-scoring explanations, as well as better tools for exploring GPT-2 through explanations.
They found that more than 1,000 neurons had an explanation score of at least 0.8, meaning that, according to GPT-4, the explanation accounts for most of the neuron's top-activating behavior. Most of these well-explained neurons are not very interesting. However, they also found many interesting neurons that GPT-4 did not understand. OpenAI hopes that as explanations improve, they may quickly uncover interesting qualitative insights into model computation.
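Selecting the well-explained neurons from the released dataset is a simple filter on the per-neuron score. The record layout below is an illustrative assumption; consult the openai/automated-interpretability repository for the real dataset schema:

```python
# Hypothetical per-neuron explanation records; field names ("layer",
# "neuron", "explanation", "score") are assumptions for illustration.
records = [
    {"layer": 0, "neuron": 12, "explanation": "references to movies", "score": 0.84},
    {"layer": 5, "neuron": 7,  "explanation": "unclear",              "score": 0.12},
    {"layer": 9, "neuron": 3,  "explanation": "dates and years",      "score": 0.91},
]

# Keep only neurons whose explanation scored at least 0.8, the threshold
# the article cites for "well-explained" neurons.
well_explained = [r for r in records if r["score"] >= 0.8]
print(len(well_explained))  # → 2
```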
Here are some examples of neurons activated at different layers, with higher layers being more abstract:
## Future work of OpenAI
Currently, this method still has limitations that OpenAI hopes to address in future work. Ultimately, OpenAI hopes to use models to form, test, and iterate on fully general hypotheses, just as interpretability researchers do. Additionally, OpenAI hopes to interpret its largest models as a way to detect alignment and safety issues before and after deployment. However, there is still a long way to go before that happens.