Large Language Models (LLMs) have made remarkable progress and can perform a wide variety of tasks, from generating human-like text to answering questions. However, understanding how these models work internally remains challenging, in part because of a phenomenon called superposition, where multiple features are mixed into a single neuron, making it very difficult to extract human-understandable representations from the raw model structure. This is where methods like the sparse autoencoder come in: they can disentangle features and improve interpretability.
In this blog post, we will use a sparse autoencoder to look for feature circuits in a particularly interesting case, subject-verb agreement, and to understand how the model's components contribute to the task.
Key Concepts
Feature circuits
In the context of neural networks, a feature circuit describes how the network learns to combine input features to form complex, higher-level patterns. We use the metaphor of a "circuit" because the way features are processed across the layers of a neural network resembles how signals are processed and combined in electronic circuits. These feature circuits form gradually through the connections between neurons and layers: each neuron or layer transforms the input features, and their interactions yield feature combinations that work together to make the final prediction.
Here is an example of a feature circuit: in many vision networks, we can find "a circuit as a family of units detecting curves in different orientations. These curve detectors are implemented primarily from earlier, less sophisticated curve detectors and line detectors. The curve detectors are then used in the next layer to create 3D geometry and complex shape detectors" [1].
In the following sections, we will examine a feature circuit for the subject-verb agreement task in an LLM.
Superposition and sparse autoencoders
In the context of machine learning, we sometimes observe superposition: the phenomenon in which a single neuron in a model represents multiple overlapping features rather than one distinct feature. For example, InceptionV1 contains a neuron that responds to cat faces, fronts of cars, and cat legs.
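To make superposition concrete, here is a tiny numeric sketch. The three feature directions are hypothetical, chosen by hand rather than taken from any trained model: three features are packed into a two-dimensional activation space, so reading one feature back picks up interference from the others.

```python
import torch

# Superposition sketch: store 3 sparse features in a 2-dimensional space.
# Hypothetical feature directions, spread evenly at 0°, 120°, and 240°.
angles = torch.tensor([0.0, 2.0943951, 4.1887902])
W = torch.stack([torch.cos(angles), torch.sin(angles)], dim=1)  # (3 features, 2 dims)

# Encode a sparse input where only feature 0 is active.
x = torch.tensor([1.0, 0.0, 0.0])
hidden = x @ W            # 2-dim "neuron" activations
recovered = hidden @ W.T  # project back onto the feature directions

# Feature 0 is recovered strongly, but the other two pick up
# interference (-0.5 each) because the directions are not orthogonal.
print(recovered)  # prints tensor([ 1.0000, -0.5000, -0.5000])
```

Because more features are stored than there are dimensions, any single neuron's activation mixes several features, which is exactly what makes direct inspection hard.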
This is where the sparse autoencoder (SAE) comes in.
An SAE helps us disentangle the network's activations into a sparse set of features. These sparse features are usually human-interpretable, allowing us to better understand the model. By applying an SAE to the hidden-layer activations of an LLM, we can isolate the features that contribute to the model's output.
You can find details on how SAE works in my previous blog post.
Case Study: Subject-Verb Agreement
Subject-verb agreement
Subject-verb agreement is a basic grammatical rule in English: the subject and the verb in a sentence must agree in number, i.e., both singular or both plural. For example:
- "The cat runs." (singular subject, singular verb)
- "The cats run." (plural subject, plural verb)
For humans, this simple rule is essential for tasks such as text generation, translation, and question answering. But how do we know whether an LLM has actually learned this rule?
We will now explore how an LLM forms a feature circuit for this task.
Building the feature circuit
Let's now walk through the process of creating the feature circuit. We will proceed in four steps:
- First, we feed sentences into the model. For this case study, we consider the following sentences:
- "The cat runs." (singular subject)
- "The cats run." (plural subject)
- We run the model on these sentences to obtain hidden activations. These activations represent how the model processes the sentences at each layer.
- We pass the activations to the SAE to "decompress" them into features.
- We construct the feature circuit as a computation graph:
- Input nodes represent the singular and plural sentences.
- Hidden nodes represent the model layers processing the input.
- Sparse nodes represent the features obtained from the SAE.
- The output node represents the final decision: in this case, runs or run.
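The four steps above can be sketched end-to-end with stand-in components. The sentence encodings, layer sizes, and feature counts below are illustrative assumptions (untrained, randomly initialized layers), not values from a real model:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Step 1: encode the two example sentences as toy 2-dim inputs
# (a one-hot singular/plural flag; a real model would use token embeddings).
sentences = {"The cat runs.": torch.tensor([1.0, 0.0]),
             "The cats run.": torch.tensor([0.0, 1.0])}

# Step 2: a stand-in hidden layer producing activations.
hidden_layer = nn.Sequential(nn.Linear(2, 4), nn.ReLU())

# Step 3: a stand-in sparse encoder "decompressing" the activations.
sparse_encoder = nn.Sequential(nn.Linear(4, 6), nn.ReLU())

# Step 4: collect the node values for the computation graph.
graph_nodes = {}
with torch.no_grad():
    for text, x in sentences.items():
        h = hidden_layer(x)       # hidden nodes
        f = sparse_encoder(h)     # sparse-feature nodes
        graph_nodes[text] = {"input": x, "hidden": h, "features": f}

print({k: v["features"].shape for k, v in graph_nodes.items()})
```

The toy model in the next section fleshes out the same pipeline with trained components.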
Toy model
We first build a toy language model with the code below. It may carry no real linguistic meaning; it is simply a neural network with two layers.
For subject-verb agreement, the model should:
- Take as input a sentence with a singular or plural subject.
- Have the hidden layer convert this information into an abstract representation.
- Select the correct verb form as output.
<code>import torch
import torch.nn as nn

# ====== Base model (simulating subject-verb agreement) ======
class SubjectVerbAgreementNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.hidden = nn.Linear(2, 4)   # 2 inputs -> 4 hidden activations
        self.output = nn.Linear(4, 2)   # 4 hidden -> 2 outputs (runs/run)
        self.relu = nn.ReLU()

    def forward(self, x):
        x = self.relu(self.hidden(x))   # compute hidden activations
        return self.output(x)           # predict the verb</code>
However, it is not clear what is happening inside the hidden layer. So we introduce the following sparse autoencoder:
<code># ====== Sparse autoencoder (SAE) ======
class SparseAutoencoder(nn.Module):
    def __init__(self, input_dim, hidden_dim):
        super().__init__()
        self.encoder = nn.Linear(input_dim, hidden_dim)  # decompress into sparse features
        self.decoder = nn.Linear(hidden_dim, input_dim)  # reconstruct
        self.relu = nn.ReLU()

    def forward(self, x):
        encoded = self.relu(self.encoder(x))  # sparse activations
        decoded = self.decoder(encoded)       # reconstruct the original activations
        return encoded, decoded</code>
We train the base model SubjectVerbAgreementNN and the SparseAutoencoder using sentences designed to represent singular and plural verb forms, such as "The cat runs" and "The babies run." As before, since these are toy models, the data may not carry real linguistic meaning.
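A minimal training sketch for the toy setup might look like the following. The data encoding (a one-hot singular/plural flag), the optimizer, and all hyperparameters are illustrative assumptions; the model is written compactly with `nn.Sequential` so the sketch runs standalone.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Compact stand-in for SubjectVerbAgreementNN so this sketch is self-contained.
model = nn.Sequential(nn.Linear(2, 4), nn.ReLU(), nn.Linear(4, 2))

# Toy dataset: [1, 0] = singular subject -> "runs" (class 0),
#              [0, 1] = plural subject  -> "run"  (class 1).
X = torch.tensor([[1.0, 0.0], [0.0, 1.0]] * 50)
y = torch.tensor([0, 1] * 50)

opt = torch.optim.Adam(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()
for _ in range(300):
    opt.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    opt.step()

# After training, the toy model picks a verb form for each input type.
preds = model(X[:2]).argmax(dim=1)
print(preds.tolist())
```

The SAE would then be trained separately on the hidden activations collected from this model, with a reconstruction loss plus a sparsity penalty on the encoded features.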
Now we can visualize the feature circuit. As mentioned earlier, a feature circuit is a group of neural units that process a specific feature. In our model:
- The hidden layer converts language attributes into a more abstract representation.
- The SAE provides independent features that directly contribute to the subject-verb agreement task.
- Both the hidden activations and the SAE encoder outputs are nodes in the graph.
- We also have an output node for the correct verb.
- The edges in the graph are weighted by activation strength, showing which paths matter most in the subject-verb agreement decision. For example, you can see that the path from H3 to F2 plays an important role.
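One simple way to compute such edge weights is to scale each encoder weight by how active the corresponding hidden unit is on a given input. The sketch below uses untrained stand-in layers with hypothetical sizes (4 hidden units H1–H4, 6 sparse features F1–F6); a real run would use the trained model and SAE instead.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

hidden = nn.Sequential(nn.Linear(2, 4), nn.ReLU())   # H1..H4
encoder = nn.Sequential(nn.Linear(4, 6), nn.ReLU())  # F1..F6 (sparse features)

x = torch.tensor([1.0, 0.0])  # toy "singular subject" input
with torch.no_grad():
    h = hidden(x)

# Edge weight from hidden unit Hi to sparse feature Fj:
# |encoder weight| scaled by how active Hi actually is on this input.
W = encoder[0].weight.detach().abs()  # shape (6, 4)
edges = {}
for j in range(W.shape[0]):
    for i in range(W.shape[1]):
        edges[(f"H{i+1}", f"F{j+1}")] = (W[j, i] * h[i]).item()

# The strongest edges mark the most important paths in the circuit.
top = max(edges, key=edges.get)
print(top, round(edges[top], 3))
```

Edges from inactive hidden units get weight zero, so only paths that actually fire on this input show up as important.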
GPT2-Small
For a real case, we run similar code on GPT2-small. We show a feature-circuit graph representing the circuit used in the decision to select the singular verb.
Conclusion
Feature circuits help us understand how different parts of a complex LLM contribute to the final output. We demonstrated the possibility of forming feature circuits with SAEs for the subject-verb agreement task.
However, we must admit that this approach still requires some human intervention: we do not always know whether a circuit can really be formed without careful design.
References
[1] Zoom In: An Introduction to Circuits