Large Language Models (LLMs) have made remarkable progress and can perform a wide variety of tasks, from generating human-like text to answering questions. However, understanding how these models work internally remains challenging, in part because of a phenomenon called superposition, in which multiple features are mixed into a single neuron, making it very difficult to extract human-understandable representations from the raw model structure. This is why methods like the sparse autoencoder appear able to disentangle features and improve interpretability.
In this blog post, we will use a sparse autoencoder to look for feature circuits in a particularly interesting case, subject-verb agreement, and understand how the model's components contribute to the task.
Key Concepts
Feature circuits
In the context of neural networks, a feature circuit describes how the network combines input features into more complex patterns at higher levels. We use the metaphor of a "circuit" because the way features are processed across the layers of a neural network resembles how signals are processed and combined in electronic circuits. These feature circuits form gradually through the connections between neurons and layers: each neuron or layer is responsible for transforming input features, and their interactions yield useful feature combinations that work together to produce the final prediction.
Here is an example of a feature circuit: in many vision neural networks, we can find "a circuit as a family of units detecting curves in different angular orientations. Curve detectors are mainly implemented by earlier, less sophisticated curve detectors and line detectors. These curve detectors are used in the next layer to create 3D geometry and complex shape detectors" [1].
In the following sections, we will examine a feature circuit for the subject-verb agreement task in an LLM.
Superposition and sparse autoencoders
In machine learning, we sometimes observe superposition: the phenomenon in which one neuron in a model represents multiple overlapping features rather than a single, distinct one. For example, InceptionV1 contains a neuron that responds to cat faces, fronts of cars, and cat legs.
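To make superposition concrete, here is a toy numerical sketch. The weights are invented for illustration and are not taken from InceptionV1; the point is only that one activation value can have several different causes:

```python
# Hypothetical illustration: a single "polysemantic" neuron whose
# activation is a weighted mix of three unrelated features.
def neuron_activation(cat_face, car_front, cat_leg):
    # One neuron responds to all three features at once (superposition).
    return max(0.0, 0.9 * cat_face + 0.7 * car_front + 0.5 * cat_leg)

# The neuron fires for any one of the features alone...
print(neuron_activation(1.0, 0.0, 0.0))  # 0.9
print(neuron_activation(0.0, 1.0, 0.0))  # 0.7

# ...so a given activation value is ambiguous about which feature caused it:
a = neuron_activation(1.0, 0.0, 0.0)   # a cat face
b = neuron_activation(0.0, 0.0, 1.8)   # a strongly visible cat leg
print(abs(a - b) < 1e-9)  # True: same activation, different causes
```

Reading a single feature off such a neuron is impossible by construction, which is exactly the problem sparse autoencoders are meant to address.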
This is where the sparse autoencoder (SAE) comes in.
An SAE helps us decompose the network's activations into a sparse set of features. These sparse features are usually human-understandable, allowing us to get a better grasp of the model. By applying an SAE to the hidden-layer activations of an LLM, we can isolate the features that contribute to the model's output.
You can find details on how SAE works in my previous blog post.
Case Study: Subject-verb agreement
Subject-verb agreement
Subject-verb agreement is a basic grammatical rule in English: the subject and the verb of a sentence must agree in number, i.e., both singular or both plural. For example:
- "The cat runs." (singular subject, singular verb)
- "The cats run." (plural subject, plural verb)
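The rule itself is simple enough to state in code. Below is a toy rule-based checker for the regular cases above; it is a hypothetical helper, not part of any model, and it ignores irregular forms such as "children" or "is":

```python
# Toy subject-verb agreement checker for regular English nouns and verbs.
# (Hypothetical helper: "cats" -> plural noun, "runs" -> singular verb.)
def agrees(subject: str, verb: str) -> bool:
    subject_plural = subject.endswith("s")   # "cats" is plural
    verb_singular = verb.endswith("s")       # "runs" is 3rd-person singular
    # Agreement holds when exactly one of the two carries the -s ending.
    return subject_plural != verb_singular

print(agrees("cat", "runs"))   # True
print(agrees("cats", "run"))   # True
print(agrees("cat", "run"))    # False
```

The question in the rest of the post is whether a neural network internalizes something like this rule, and if so, through which circuit.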
For humans, this simple rule matters for tasks such as text generation, translation, and question answering. But how do we know whether an LLM has really learned this rule?
We will now explore how an LLM forms a feature circuit for this task.
Building the feature circuit
Let's now walk through the process of building the feature circuit. We will proceed in four steps:
- We first feed sentences into the model. For this case study, we consider the following pair:
- "The cat runs." (singular subject)
- "The cats run." (plural subject)
- We run the model on these sentences to obtain hidden activations. These activations represent how the model processes the sentences at each layer.
- We pass the activations to the SAE to "decompress" them into features.
- We construct the feature circuit as a computational graph:
- Input nodes represent the singular and plural sentences.
- Hidden nodes represent the model layers that process the input.
- Sparse nodes represent the features obtained from the SAE.
- The output node represents the final decision, in this case: runs or run.
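The four steps above can be sketched end to end. The tiny networks below are untrained stand-ins with hypothetical shapes (a 2-dimensional input encoding, 4 hidden units, 8 sparse features), chosen only to show how the pieces connect:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Step 1: toy input encoding (hypothetical):
# [1, 0] = singular subject, [0, 1] = plural subject.
sentences = {"The cat runs.": torch.tensor([1.0, 0.0]),
             "The cats run.": torch.tensor([0.0, 1.0])}

# Step 2: a stand-in model layer produces hidden activations.
hidden_layer = nn.Sequential(nn.Linear(2, 4), nn.ReLU())

# Step 3: a stand-in SAE encoder "decompresses" them into sparse features.
sae_encoder = nn.Sequential(nn.Linear(4, 8), nn.ReLU())

# Step 4: collect the nodes of the computational graph.
graph_nodes = {}
for text, x in sentences.items():
    h = hidden_layer(x)      # hidden nodes
    f = sae_encoder(h)       # sparse nodes
    graph_nodes[text] = {"hidden": h.detach(), "sparse": f.detach()}

for text, nodes in graph_nodes.items():
    print(text, "->", tuple(nodes["sparse"].shape))
```

In the real pipeline the hidden activations come from a trained model and the encoder from a trained SAE; the graph structure (input, hidden, sparse, output nodes) stays the same.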
Toy model
We first build a toy language model; the code below may not make much practical sense, but it illustrates the idea. It is a simple two-layer neural network.
For subject-verb agreement, the model should:
- Take a sentence with a singular or plural subject as input.
- Have its hidden layer convert this information into an abstract representation.
- Select the correct verb form as output.
<code># ====== Define the base model (simulating subject-verb agreement) ======
class SubjectVerbAgreementNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.hidden = nn.Linear(2, 4)   # 2 inputs -> 4 hidden activations
        self.output = nn.Linear(4, 2)   # 4 hidden -> 2 outputs (runs/run)
        self.relu = nn.ReLU()

    def forward(self, x):
        x = self.relu(self.hidden(x))   # compute hidden activations
        return self.output(x)           # predict the verb</code>
We cannot directly see what is happening inside the hidden layer. Therefore, we introduce the following sparse autoencoder:
<code># ====== Define the sparse autoencoder (SAE) ======
class SparseAutoencoder(nn.Module):
    def __init__(self, input_dim, hidden_dim):
        super().__init__()
        self.encoder = nn.Linear(input_dim, hidden_dim)  # expand into sparse features
        self.decoder = nn.Linear(hidden_dim, input_dim)  # reconstruct
        self.relu = nn.ReLU()

    def forward(self, x):
        encoded = self.relu(self.encoder(x))  # sparse activations
        decoded = self.decoder(encoded)       # reconstruct the original activations
        return encoded, decoded</code>
We train the base model SubjectVerbAgreementNN and the SparseAutoencoder, using sentences designed to represent different singular and plural verb forms, such as "The cat runs" and "The babies run". As before, since these are toy models, the data need not be fully realistic.
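The original training code is not shown here, so the following is a minimal sketch under some assumptions: inputs are encoded as two-dimensional one-hot vectors for singular/plural subjects, and the SAE is trained with an L1 sparsity penalty on the frozen hidden activations. The model classes are reproduced so the snippet is self-contained:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Reproduced from above so this sketch runs on its own.
class SubjectVerbAgreementNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.hidden = nn.Linear(2, 4)
        self.output = nn.Linear(4, 2)
        self.relu = nn.ReLU()
    def forward(self, x):
        return self.output(self.relu(self.hidden(x)))

class SparseAutoencoder(nn.Module):
    def __init__(self, input_dim, hidden_dim):
        super().__init__()
        self.encoder = nn.Linear(input_dim, hidden_dim)
        self.decoder = nn.Linear(hidden_dim, input_dim)
        self.relu = nn.ReLU()
    def forward(self, x):
        encoded = self.relu(self.encoder(x))
        return encoded, self.decoder(encoded)

# Hypothetical toy encoding: [1, 0] = singular subject -> "runs" (class 0),
# [0, 1] = plural subject -> "run" (class 1).
X = torch.tensor([[1.0, 0.0], [0.0, 1.0]] * 50)
y = torch.tensor([0, 1] * 50)

# 1) Train the base model on the agreement task.
model = SubjectVerbAgreementNN()
opt = torch.optim.Adam(model.parameters(), lr=0.05)
loss_fn = nn.CrossEntropyLoss()
for _ in range(200):
    opt.zero_grad()
    loss_fn(model(X), y).backward()
    opt.step()

# 2) Train the SAE to reconstruct the frozen hidden activations,
#    with an L1 penalty encouraging sparse features.
with torch.no_grad():
    H = torch.relu(model.hidden(X))      # frozen hidden activations
sae = SparseAutoencoder(input_dim=4, hidden_dim=8)
sae_opt = torch.optim.Adam(sae.parameters(), lr=0.01)
for _ in range(200):
    sae_opt.zero_grad()
    encoded, decoded = sae(H)
    loss = ((decoded - H) ** 2).mean() + 1e-3 * encoded.abs().mean()
    loss.backward()
    sae_opt.step()
```

Training the SAE on frozen activations, rather than jointly with the model, is what makes its features a readout of what the model already computes.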
Now we visualize the feature circuit. As mentioned earlier, a feature circuit is a group of units that process specific features. In our model, the circuit consists of:
- The hidden layer, which converts linguistic attributes into an abstract representation.
- The SAE, which provides independent features that contribute directly to the subject-verb agreement task.
- Hidden activations and SAE encoder outputs, which are both nodes of the graph.
- An output node for the correct verb.
- Edges weighted by activation strength, showing which paths are most important in the subject-verb agreement decision. For example, you can see that the path from H3 to F2 plays an important role.
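One simple way to assign such edge weights (a sketch; the exact scheme used here is not spelled out in the text) is the absolute contribution |W[i, j] · h[j]| of hidden unit j to SAE feature i, shown below with random stand-in weights:

```python
import torch

torch.manual_seed(0)

# Stand-in values: 4 hidden units (H1..H4) and 8 SAE features (F1..F8).
hidden_act = torch.relu(torch.randn(4))  # hidden-node activations h
W_enc = torch.randn(8, 4)                # SAE encoder weight matrix W

# Edge weight from hidden unit j to feature i: |W[i, j] * h[j]|
edge_weights = (W_enc * hidden_act).abs()  # shape (8, 4), broadcast over rows

# Rank edges to surface the most important paths in the circuit.
flat = [(f"H{j + 1}", f"F{i + 1}", edge_weights[i, j].item())
        for i in range(8) for j in range(4)]
flat.sort(key=lambda e: -e[2])
for src, dst, w in flat[:3]:
    print(f"{src} -> {dst}: {w:.3f}")
```

With the real trained weights and activations in place of the random ones, the top-ranked edges are the paths one would highlight in the circuit diagram.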
GPT2-Small
For a real case, we run similar code on GPT2-small. Here we show a feature circuit diagram representing the decision to select the singular verb.
Conclusion
Feature circuits help us understand how different parts of a complex LLM contribute to the final output. We have shown that it is possible to form feature circuits using an SAE for the subject-verb agreement task.
However, we must admit that this approach still requires some human intervention, because we do not always know whether circuits can really be formed without careful design.
References
[1] Zoom In: An Introduction to Circuits
The above is the detailed content of Formulation of Feature Circuits with Sparse Autoencoders in LLM.
