
A Complete Guide to LangChain in JavaScript

William Shakespeare
2025-02-08

LangChainJS: A powerful framework for building AI-driven JavaScript language models and agents


Key points:

  • LangChainJS is a powerful JavaScript framework that enables developers to build and experiment with AI-driven language models and agents that are seamlessly integrated into web applications.
  • This framework allows the creation of agents that can leverage various tools and data sources to perform complex language tasks such as Internet searches and mathematical calculations, thereby improving the accuracy and relevance of responses.
  • LangChain supports a variety of models, including language models for simple text output, chat models for interactive conversations, and embedding models for converting text into numeric vectors, thereby facilitating the development of various NLP applications.
  • Text data can be managed and processed efficiently through customizable chunking methods, ensuring optimal performance and contextual relevance when processing large text.
  • In addition to using the OpenAI model, LangChain is compatible with other large language models (LLMs) and AI services, providing flexibility and extension capabilities for developers exploring the integration of different AIs in their projects.

This guide will dive into the key components of LangChain and demonstrate how to leverage its power in JavaScript. LangChainJS is a versatile JavaScript framework that enables developers and researchers to create, experiment with, and analyze language models and agents. It offers natural language processing (NLP) enthusiasts a rich set of capabilities, from building custom models to efficiently manipulating text data. As a JavaScript framework, it also allows developers to easily integrate their AI applications into web applications.

Prerequisites:

To follow along with this article, create a new folder and install the LangChain npm package:

<code class="language-bash">npm install -S langchain</code>

After creating the folder, create a new JS module file with the .mjs suffix (for example, test1.mjs).

Agents:

In LangChain, an agent is an entity that can understand and generate text. Agents can be configured with specific behaviors and data sources, and trained to perform various language-related tasks, making them versatile tools for a wide range of applications.

Create LangChain agent:

Agents can be configured to use "tools" to gather the data they need and formulate a good response. Take a look at the example below. It uses the Serp API (an internet search API) to search for information relevant to a question or input, and uses it to respond. It also uses the llm-math tool to perform mathematical operations, for example, converting units or finding the percentage change between two values:


After creating the model with modelName: "gpt-3.5-turbo" and temperature: 0, we create an executor that combines the model with the specified tools (SerpAPI and Calculator). In the input, I ask the LLM to search the internet (using SerpAPI) to find which artist has released more albums since 2010, Nas or Boldy James, and to show the percentage difference (using Calculator).

In this example, I had to explicitly tell the LLM, "via a search of the internet...", to make it use up-to-date data rather than relying on OpenAI's training data, which (for gpt-3.5-turbo) stops in 2021.

Here is the complete code:

<code class="language-javascript">import { initializeAgentExecutorWithOptions } from "langchain/agents";
import { ChatOpenAI } from "langchain/chat_models/openai";
import { SerpAPI } from "langchain/tools";
import { Calculator } from "langchain/tools/calculator";

process.env["OPENAI_API_KEY"] = "YOUR_OPENAI_KEY"
process.env["SERPAPI_API_KEY"] = "YOUR_SERPAPI_KEY"

const tools = [new Calculator(), new SerpAPI()];
const model = new ChatOpenAI({ modelName: "gpt-3.5-turbo", temperature: 0 });

const executor = await initializeAgentExecutorWithOptions(tools, model, {
  agentType: "openai-functions",
  verbose: false,
});

const result = await executor.run("Via a search of the internet, find how many albums Boldy James has released since 2010 and how many albums Nas has released since 2010? Find out who has released more albums and show the percentage difference.");
console.log(result);</code>

Since the agent searches the live web, the exact output will depend on the search results at the time you run it.

Models:

There are three types of models in LangChain: LLM, chat model, and text embedding model. Let's explore each type of model with some examples.

Language Model:

LangChain provides a way to use language models in JavaScript to generate text output based on text input. It is not as complex as the chat model and is best suited for simple input-output language tasks. Here is an example using OpenAI:


The code below uses the gpt-3.5-turbo model to list all red berries. In this example, I set the temperature to 0 to make the LLM as factually accurate as possible:

<code class="language-javascript">import { OpenAI } from "langchain/llms/openai";

const llm = new OpenAI({
  openAIApiKey: "YOUR_OPENAI_KEY",
  modelName: "gpt-3.5-turbo",
  temperature: 0
});

const res = await llm.call("List all red berries");

console.log(res);</code>

The output will be a list of red berries.

Chat Model:

If you want more complex answers and conversations, you need to use a chat model. Technically, how does a chat model differ from a language model? In the words of the LangChain documentation:

Chat models are a variation on language models. While chat models use language models under the hood, they use a slightly different interface. Rather than a "text in, text out" API, they use "chat messages" as the interface for inputs and outputs.

This is a simple (quite useless but interesting) JavaScript chat model script:


The code below uses a prompt template to tell the chatbot to be a poetic assistant that always answers in rhymes, and then asks it which is the better tennis player: Djokovic, Federer, or Nadal. Running this chat model will produce a rhyming answer:

<code class="language-javascript">import { ChatOpenAI } from "langchain/chat_models/openai";
import { PromptTemplate } from "langchain/prompts";

const chat = new ChatOpenAI({
  openAIApiKey: "YOUR_OPENAI_KEY",
  modelName: "gpt-3.5-turbo",
  temperature: 0
});
const prompt = PromptTemplate.fromTemplate(`You are a poetic assistant, always answering in rhymes: {question}`);
const runnable = prompt.pipe(chat);
const response = await runnable.invoke({ question: "Who is better, Djokovic, Federer, or Nadal?" });
console.log(response);</code>

Embeddings:

Embedding models provide a way to convert words and numbers in a text into vectors that can then be associated with other words or numbers. This may sound abstract, so let's look at an example:


The script below embeds a short question; running it returns a long list of floating-point numbers:

<code class="language-javascript">import { OpenAIEmbeddings } from "langchain/embeddings/openai";

process.env["OPENAI_API_KEY"] = "YOUR_OPENAI_KEY"

const embeddings = new OpenAIEmbeddings();
const res = await embeddings.embedQuery("Who created the World Wide Web?");
console.log(res);</code>

This is what an embedding looks like: so many floating-point numbers for just six words!

This embedding can then be used to associate the input text with potential answers, related texts, names, and more.

Now let's look at a use case for embedding models: a script that takes the question "What is the heaviest animal?" and finds the correct answer in a list of possible answers by using embeddings.

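Here's a minimal sketch of that idea: embed the question and each candidate answer, then pick the answer whose vector is most similar (by cosine similarity) to the question's vector. The helper names cosineSimilarity and findClosestAnswer are illustrative, not part of LangChain; only embedQuery comes from the library, and it is loaded lazily so the file can be run without an API key:

```javascript
// Cosine similarity between two vectors of equal length:
// dot(a, b) / (|a| * |b|). Values close to 1 mean "very similar".
function cosineSimilarity(a, b) {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Embed the question and every candidate answer, then return the
// candidate whose embedding is closest to the question's embedding.
// (Requires OPENAI_API_KEY; the import is lazy so the module loads
// without the key.)
async function findClosestAnswer(question, answers) {
  const { OpenAIEmbeddings } = await import("langchain/embeddings/openai");
  const embeddings = new OpenAIEmbeddings();
  const questionVec = await embeddings.embedQuery(question);
  const answerVecs = await Promise.all(answers.map((a) => embeddings.embedQuery(a)));
  let best = 0;
  for (let i = 1; i < answers.length; i++) {
    if (cosineSimilarity(questionVec, answerVecs[i]) >
        cosineSimilarity(questionVec, answerVecs[best])) best = i;
  }
  return answers[best];
}

// Usage (with a valid key, the blue whale should win):
// const answer = await findClosestAnswer("What is the heaviest animal?",
//   ["The blue whale", "The cheetah", "The elephant"]);
```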

Chunks:

LangChain models cannot process long texts in one go and use them to generate responses. This is where chunking and text splitting come into play. Let me show you two simple ways to split your text data into chunks before feeding it to LangChain.

Splitting by character:

To avoid abrupt mid-sentence breaks in your chunks, you can split the text by paragraph, splitting on each occurrence of a newline character:
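As a plain-JavaScript illustration of the idea (LangChain's own CharacterTextSplitter from "langchain/text_splitter" does this when configured with the separator "\n"; check the docs for its exact options):

```javascript
// Split a text into paragraph chunks: one chunk per line,
// discarding empty lines left by blank rows.
function splitByNewline(text) {
  return text
    .split("\n")
    .map((chunk) => chunk.trim())
    .filter((chunk) => chunk.length > 0);
}

const text = "First paragraph.\nSecond paragraph.\n\nThird paragraph.";
console.log(splitByNewline(text)); // logs the three non-empty paragraphs
```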


This is a useful way to split text. However, you can use any character as the chunk separator, not just \n.

Recursively splitting chunks:

If you want to strictly divide text by characters of a certain length, you can use RecursiveCharacterTextSplitter:

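As a plain-JavaScript sketch of the mechanics (fixed-size chunks with a fixed overlap; this illustrates what happens, not the RecursiveCharacterTextSplitter API itself):

```javascript
// Cut text into chunks of chunkSize characters, where consecutive
// chunks share chunkOverlap characters (assumes chunkOverlap < chunkSize).
function chunkWithOverlap(text, chunkSize, chunkOverlap) {
  const chunks = [];
  const step = chunkSize - chunkOverlap; // how far each chunk advances
  for (let i = 0; i < text.length; i += step) {
    chunks.push(text.slice(i, i + chunkSize));
    if (i + chunkSize >= text.length) break; // last chunk reached
  }
  return chunks;
}

console.log(chunkWithOverlap("abcdefghij", 4, 2));
// → [ 'abcd', 'cdef', 'efgh', 'ghij' ]
```

With LangChain, you would instead construct the splitter itself, for example with { chunkSize: 100, chunkOverlap: 15 }.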

In this example, the text is split every 100 characters, with a chunk overlap of 15 characters.

Chunk size and overlap:

Looking at these examples, you may have begun to wonder what exactly the chunk size and overlap parameters mean, and how they affect performance. Let me briefly explain both.

  • The chunk size determines the number of characters in each chunk. The larger the chunk size, the more data each chunk carries, and the longer it takes LangChain to process it and produce an output; the smaller the chunk size, the quicker the processing.
  • Chunk overlap is the amount of content shared between adjacent chunks, so that they share some context. The higher the chunk overlap, the more redundant your chunks are; the lower it is, the less context the chunks share. Typically, a good chunk overlap is around 10% to 20% of the chunk size, although the ideal overlap varies with the text type and use case.
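As a tiny worked example of this rule of thumb (15% of a 100-character chunk, matching the settings used above):

```javascript
// Derive the overlap from the chunk size rather than hard-coding it.
const chunkSize = 100;
const chunkOverlap = Math.round(chunkSize * 0.15); // 15% of 100
console.log(chunkOverlap); // → 15
```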

Chains:

Chains are basically multiple LLM functions linked together to perform more complex tasks that can't be achieved with a simple LLM input-output flow.

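To see the mechanics without an API key, here is a stand-in sketch where a fake model replaces the LLM. Each step is just a function, and the chain is their composition; with LangChain you would pipe PromptTemplates into a real model (prompt.pipe(chat)), as in the chat example above:

```javascript
// A stand-in "model" so the chain runs without an API key.
const fakeLLM = (prompt) => `[LLM answer to: ${prompt}]`;

// Step 1: turn a raw question into a prompt.
const makePrompt = (question) => `Answer concisely: ${question}`;

// Step 2: post-process the model output.
const cleanup = (answer) => answer.trim();

// The chain: each step's output feeds the next step's input.
const chain = (question) => cleanup(fakeLLM(makePrompt(question)));

console.log(chain("Who created JavaScript?"));
// → [LLM answer to: Answer concisely: Who created JavaScript?]
```

In a real chain, the fake model would be a ChatOpenAI instance, and extra steps (more prompts, output parsers, tools) can be piped on in the same way.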

Beyond OpenAI:

Even though I have used OpenAI models as examples for the different LangChain features, LangChain is not limited to OpenAI models. You can use it with numerous other LLMs and AI services. You can find the complete list of LLMs with JavaScript integrations in the LangChain documentation.

For example, you can use Cohere with LangChain. After installing Cohere with npm install cohere-ai, you can create a simple question-and-answer script using LangChain and Cohere.

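Here is a sketch, assuming the 0.0.x-era import path "langchain/llms/cohere" used elsewhere in this article (check the current docs, as import paths have changed in newer releases). The function name askCohere is illustrative, and the import is done lazily so the file loads without the package or an API key:

```javascript
// Ask Cohere a question through LangChain's Cohere LLM wrapper.
// apiKey and maxTokens are the wrapper's constructor options.
async function askCohere(question, apiKey) {
  const { Cohere } = await import("langchain/llms/cohere");
  const model = new Cohere({ apiKey, maxTokens: 50 });
  return model.call(question);
}

// Usage (requires a Cohere API key):
// const answer = await askCohere("Where is the capital of France?", "YOUR_COHERE_KEY");
// console.log(answer);
```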

Conclusion:

In this guide, you have seen the different aspects and features of LangChain in JavaScript. Using LangChain, you can easily develop AI-powered web applications in JavaScript and experiment with LLMs. Be sure to refer to the LangChainJS documentation for more details on specific features.

Happy coding and experimenting with LangChain in JavaScript! If you enjoyed this article, you might also want to read about using LangChain with Python.

