Why have you given up on LangChain?
Perhaps from the day it was born, LangChain was destined to be a product with a polarizing reputation.
Those who are optimistic about LangChain appreciate its rich set of tools and components and its ease of integration. Those who are pessimistic believe it is doomed to fail: in an era when technology changes this fast, building everything on top of LangChain is simply not feasible.
One exaggerated take:
"In my consulting work, I spend 70% of my energy convincing people not to use LangChain or LlamaIndex. This solves 90% of their problems."
Recently, an article complaining about LangChain has once again become the focus of heated discussion:
The author, Fabian Both, is a deep learning engineer at Octomind, an AI testing tool. The Octomind team uses AI agents backed by multiple LLMs to automatically create and fix end-to-end tests in Playwright.
This is a story that spans more than a year: it starts with choosing LangChain, moves into a stage of tenacious struggle with it, and ends in 2024, when the team finally decided to say goodbye to LangChain.
Let's see what they went through:
"LangChain was the best choice"
We used LangChain in production for over 12 months, starting in early 2023 and removing it in 2024.
In 2023, LangChain seemed like our best choice. It had an impressive array of components and tools, and its popularity was skyrocketing. LangChain promised to "let developers go from an idea to runnable code in an afternoon," but as our requirements grew more complex, problems began to surface.
LangChain became a source of friction rather than a source of productivity.
As LangChain's inflexibility began to show, we started digging into its internals to improve the system's underlying behavior. However, because LangChain deliberately abstracts away so many details, we could not easily write the low-level code we needed.
As we all know, AI and LLMs are rapidly changing fields, with new concepts and ideas emerging every week. A framework like LangChain, designed around abstractions of several still-emerging technologies, is unlikely to stand the test of time.
Why LangChain is so abstract
Initially, LangChain helped, while our simple requirements matched its usage assumptions. But its high-level abstractions quickly made our code harder to understand and frustrating to maintain. When a team spends as much time understanding and debugging LangChain as it does building features, that is not a good sign.
The problem with LangChain's approach to abstraction can be illustrated by a trivial example: translating an English word into Italian.
Here is a Python example using only the OpenAI package:
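The original code sample did not survive extraction. Below is a minimal sketch of what the plain-OpenAI version looks like; the model name and prompt wording are illustrative, and the actual API call is guarded so the file can be run without credentials:

```python
from typing import Dict, List


def build_messages(text: str, language: str) -> List[Dict[str, str]]:
    """Assemble the chat messages for a one-shot translation request."""
    return [
        {"role": "system", "content": "You are an expert translator."},
        {"role": "user", "content": f"Translate the following from English into {language}: {text}"},
    ]


if __name__ == "__main__":
    from openai import OpenAI  # pip install openai

    client = OpenAI()  # the one class; reads OPENAI_API_KEY from the environment
    response = client.chat.completions.create(  # the one function call
        model="gpt-4o",
        messages=build_messages("hello!", "Italian"),
    )
    print(response.choices[0].message.content)
```

Everything outside the client call is ordinary Python: plain dicts in, a plain string out.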
This is simple, easy-to-understand code containing only one class and one function call. The rest is standard Python.
Compare this to LangChain’s version:
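The LangChain sample is likewise missing; here is a sketch of the LCEL equivalent, using class names from the langchain-core and langchain-openai packages as of the article's timeframe (running it requires those packages and an API key, and the exact imports may differ across versions):

```python
from langchain_openai import ChatOpenAI
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate

# Abstraction 1: the prompt template
prompt = ChatPromptTemplate.from_messages([
    ("system", "You are an expert translator."),
    ("user", "Translate the following from English into {language}: {text}"),
])

# Abstraction 3 (the chain, built with LCEL's overloaded | operator) composes
# the template, the model, and Abstraction 2 (the output parser):
chain = prompt | ChatOpenAI(model="gpt-4o") | StrOutputParser()
result = chain.invoke({"language": "Italian", "text": "hello!"})
```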
The code does roughly the same thing, but that's where the similarity ends.
We now have three classes and four function calls. More worryingly, LangChain introduces three new abstractions:
Prompt template: supplies the prompt to the LLM;
Output parser: processes the output from the LLM;
Chain: LangChain's "LCEL" syntax, which overloads Python's | operator.
All LangChain does is increase the complexity of the code without any obvious benefits.
This kind of code may be fine for early prototypes. But for production use, every component must be reasonably well understood so that it does not crash unexpectedly under real-world conditions. You have to adhere to LangChain's given data structures and design your application around its abstractions.
Let’s look at another abstract comparison in Python, this time getting JSON from an API.
Use the built-in http package:
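The stdlib sample is also missing from the extraction; a sketch using only `http.client` and `json` might look like this (the host and path are placeholders for whatever API you are calling):

```python
import http.client
import json


def fetch_json(host: str, path: str, port: int = 80) -> dict:
    """GET `path` from `host` and decode the JSON body by hand."""
    conn = http.client.HTTPConnection(host, port, timeout=10)
    try:
        conn.request("GET", path, headers={"Accept": "application/json"})
        resp = conn.getresponse()
        if resp.status != 200:
            raise RuntimeError(f"HTTP {resp.status} for {path}")
        return json.loads(resp.read().decode("utf-8"))
    finally:
        conn.close()
```

Note how much bookkeeping is yours: opening and closing the connection, checking the status code, decoding bytes, parsing JSON.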
Use the requests package:
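And a sketch of the same fetch with the third-party requests library (`pip install requests`):

```python
import requests


def fetch_json(url: str) -> dict:
    """GET `url` and return the decoded JSON body."""
    resp = requests.get(url, headers={"Accept": "application/json"}, timeout=10)
    resp.raise_for_status()  # status check is one call
    return resp.json()       # decoding and parsing are one call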
The difference is obvious. This is what good abstraction feels like.
Of course, these are trivial examples. But what I'm trying to say is that good abstractions simplify code and reduce the cognitive load required to understand it.
LangChain tries to make your life easier by hiding details so you can do more with less code. But when this comes at the expense of simplicity and flexibility, the abstraction loses its value.
LangChain also habitually stacks abstractions on top of other abstractions, so you often have to think in nested abstractions just to use the API correctly. This inevitably means deciphering huge stack traces and debugging internal framework code you didn't write, instead of implementing new features.
Impact of LangChain on Development Teams
Our application makes heavy use of AI agents to perform different types of tasks, such as discovering test cases, generating Playwright tests, and auto-fixing them.
When we wanted to move from a single sequential-agent architecture to something more complex, LangChain became the limiting factor: for example, spawning sub-agents that interact with the original agent, or having multiple specialized agents interact with each other.
In another case, we needed to dynamically change which tools the agent could access, based on business logic and the LLM's output. But LangChain provides no way to observe the agent's state from the outside, so we ended up shrinking the scope of our implementation to fit the limited functionality of LangChain's agents.
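Once the agent loop is plain code, this kind of control is straightforward: tool availability becomes a pure function of observable state, computed before each LLM call. A hedged sketch, with all state fields and tool names invented for illustration:

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class AgentState:
    """Hypothetical observable agent state."""
    failures: int = 0
    discovered_cases: List[str] = field(default_factory=list)


# Illustrative tool names loosely modeled on the tasks described above.
ALL_TOOLS = ["discover_test_cases", "generate_playwright_test", "auto_fix_test"]


def available_tools(state: AgentState) -> List[str]:
    """Business logic decides which tools the next LLM call may use."""
    tools = ["discover_test_cases"]
    if state.discovered_cases:
        tools.append("generate_playwright_test")  # only after discovery
    if state.failures > 0:
        tools.append("auto_fix_test")  # only once something has failed
    return tools
```

Because the state lives in your own code, nothing stops you from inspecting or modifying it between steps.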
Once we removed it, we no longer needed to translate our requirements into LangChain-shaped solutions. We could just write code.
So, if not using LangChain, what framework should you use? Maybe you don't need a framework at all.
Do we really need a framework for building AI applications?
LangChain helped us early on by providing LLM functionality out of the box, letting us focus on building the application. But in hindsight, we would have been better off in the long run without a framework.
LangChain's long list of components gives the impression that building an LLM-powered application is very complex. But the core components most applications need are usually just:
A client for LLM communication
Functions/tools for function calling
A vector database for RAG
An observability platform for tracing, evaluation, and so on
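The "functions/tools" piece in particular needs no framework: in the no-framework style, tools are plain functions and dispatch is a dict lookup. A hedged sketch, with every tool name and payload invented for illustration:

```python
import json
from typing import Callable, Dict


def find_test_cases(page_url: str) -> str:
    """Hypothetical tool: pretend to discover test cases for a page."""
    return json.dumps({"page": page_url, "cases": ["login", "signup"]})


# The tool registry the LLM's function-call schema would be generated from.
TOOLS: Dict[str, Callable[..., str]] = {
    "find_test_cases": find_test_cases,
}


def dispatch_tool_call(name: str, arguments_json: str) -> str:
    """Route a model-emitted tool call (name + JSON args) to ordinary Python."""
    args = json.loads(arguments_json)
    return TOOLS[name](**args)
```

When the model emits a tool call, you look it up and invoke it; there is no agent-executor class to subclass or work around.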
The agent space is evolving rapidly, bringing exciting possibilities and interesting use cases, but our advice is to keep it simple for now, until agent usage patterns solidify. Much of the development work in AI is driven by experimentation and prototyping.
The above is Fabian Both's personal experience over the past year, but LangChain is not entirely without merit.
Another developer, Tim Valishev, says he will stick with LangChain for a while longer:
I really like LangSmith:
Visual logging out of the box
Prompt playground: you can instantly tweak a prompt from the logs and see how it performs on the same inputs
Easily build test datasets directly from the logs, with one-click runs of simple test sets against prompts (or full end-to-end testing driven from code)
Test score history
Prompt version control
And it provides good support for streaming across the entire chain, which takes some time to implement manually.
What's more, relying solely on raw APIs is not enough: every model vendor's API is different, so "seamless switching" isn't possible out of the box.
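The manual alternative to what a framework provides here is a thin adapter layer: one small interface the application depends on, plus one adapter per vendor. A hedged sketch; the interface, class names, and message shape are all invented, and a stand-in "vendor" is used so it runs without credentials:

```python
from abc import ABC, abstractmethod
from typing import Dict, List

Message = Dict[str, str]  # e.g. {"role": "user", "content": "..."}


class ChatClient(ABC):
    """The single seam the application depends on."""

    @abstractmethod
    def complete(self, messages: List[Message]) -> str:
        """Send a chat transcript, return the model's reply text."""


class EchoClient(ChatClient):
    """Stand-in 'vendor' for the sketch; a real OpenAIClient or
    AnthropicClient would translate `messages` into that vendor's wire format."""

    def complete(self, messages: List[Message]) -> str:
        return messages[-1]["content"].upper()
```

Switching vendors then means writing one adapter, not rewriting call sites; whether that work belongs in your code or a framework's is exactly the trade-off under debate.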
What do you think?
Original link: https://www.octomind.dev/blog/why-we-no-longer-use-langchain-for-building-our-ai-agents