
Reliable AI agent in prod with Java Quarkus Langchain4j - Part 1 - AI as Service

2024-10-27

Authors

@herbertbeckman - LinkedIn
@rndtavares - LinkedIn

Parts of the article

  1. Reliable AI agent in prod with Java Quarkus Langchain4j - Part 1 - AI as Service (this article)

  2. Reliable AI agent in prod with Java Quarkus Langchain4j - Part 2 - Memory (coming soon)

  3. Reliable AI agent in prod with Java Quarkus Langchain4j - Part 3 - RAG (coming soon)

  4. Reliable AI agent in prod with Java Quarkus Langchain4j - Part 4 - Guardrails (coming soon)

Introduction

Whenever an emerging technology booms, companies are eager to apply it and reap the long-awaited business results. It's the race for innovation and the fight for first-mover advantage. In the midst of this race, companies that were once eager often end up giving up for a series of reasons, one of the main ones being the overall reliability of the system. Artificial intelligence (AI) is currently undergoing one of its greatest endurance tests, and our job as software developers is to demonstrate to companies that, yes, it is possible to carry out a whole series of tasks and processes through the conscious and correct use of AI.

In this article we will demonstrate, across 4 parts, the functionalities and processes that a reliable AI agent in production must have for a company to achieve the long-awaited results, and we will implement together some concepts used in the market. We will also detail the points of attention of this solution, and we ask that you, the developer, run as many tests as possible and give us as much feedback as possible so that, together, we can improve this understanding even further.

Implemented Features

  • Chat
  • Tools
  • Chat Memory
  • Retrieval-Augmented Generation (RAG)
  • Guardrails

Concepts and definitions

Assistant vs Copilot vs Agent

One of the first questions you may have is how an agent differs from other AI use cases. An agent's functionality is geared toward automation, while the other use cases are aimed at assistance and time optimization. Each use case is described in more detail below.

Assistants

Assistants can help us and save us a lot of time by checking information and serving as a good source for exchanging knowledge. They can talk ABOUT the most varied subjects and are useful when we need a clear line of reasoning to analyze the premises of an argument. Of course they can do much more than that, but I want you to focus on what an assistant does: it talks to you, and that's all. It can ONLY talk about, summarize, detail, and so on. As examples we have: ChatGPT, Claude AI and Gemini.

Copilots

Copilots are a little more powerful than assistants. They can actually do something concrete, such as changing a text and/or suggesting modifications in real time, and they can give tips during a modification or an event happening within a context. However, as said before, a copilot depends on that context to act and does not always have all the information needed to make a good suggestion; it also depends on your express authorization, creating a direct dependence on the user. Good examples are: GitHub Copilot, Codium and Microsoft Copilot.

Agents

An agent's main objective is to carry out tasks with clear goals. Its focus is automation, that is, agents actually do concrete work autonomously. All of this is only possible through the tools we make available to them. The agent is not the LLM itself, but rather the application that coordinates the LLM. Think of the LLM as the brain of the system, which makes decisions, and the application as the limbs of that brain's body: what's the point of deciding to get a glass of water if I can't reach it with my hand? Your agent gives the LLM the power to do something in a safe, auditable and, most importantly, reliable way.

Taking action

In this first part of the article we will implement the AIService in the project, which is nothing more than the interface layer with our AI provider. In this project we use OpenAI's LLM, but you can add your favorite provider and adjust the dependencies accordingly.

Now that we have the concepts well defined and we know what we are going to do here, let's move on to coding!

Creating the project

Create a Quarkus project, choosing your dependency manager and extensions, at Quarkus - Start coding.
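If you prefer the command line, the same project can be bootstrapped with the Quarkus Maven plugin; the groupId and artifactId below are illustrative placeholders, not values from this article:

```shell
# Generates a new Quarkus project (coordinates are placeholders)
mvn io.quarkus.platform:quarkus-maven-plugin:create \
    -DprojectGroupId=org.acme \
    -DprojectArtifactId=fut-agent \
    -Dextensions='quarkus-websockets-next'
```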

Project dependencies

We will use Maven as the project's dependency manager. Below are the initial dependencies we added.

Maven

<dependency>
  <groupId>io.quarkus</groupId>
  <artifactId>quarkus-websockets-next</artifactId>
</dependency>

<dependency>
  <groupId>io.quarkiverse.langchain4j</groupId>
  <artifactId>quarkus-langchain4j-core</artifactId>
  <version>0.20.3</version>
</dependency>

<dependency>
  <groupId>io.quarkiverse.langchain4j</groupId>
  <artifactId>quarkus-langchain4j-openai</artifactId>
  <version>0.20.3</version>
</dependency>

Project configuration

Add the following properties to the src/main/resources/application.properties file:

quarkus.tls.trust-all=true
quarkus.langchain4j.timeout=60s
quarkus.langchain4j.openai.api-key=YOUR_OPENAI_API_KEY_HERE

Replace YOUR_OPENAI_API_KEY_HERE with the key (apiKey) that you registered on the OpenAI Platform.

TIP: create an environment variable in your IDE and then modify the quarkus.langchain4j.openai.api-key property to:

quarkus.langchain4j.openai.api-key=${OPEN_API_KEY:NAO_ENCONTREI_A_VAR}
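When the api-key property reads from an environment variable, the key can be supplied at startup. For example, in a POSIX shell (the variable name OPEN_API_KEY matches the placeholder used in this article):

```shell
# Export the key, then start Quarkus dev mode
export OPEN_API_KEY="<your-openai-api-key>"
./mvnw quarkus:dev
```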

Creating our AIService

First we need to create our AIService, which will be the class responsible for giving a "personality" to our agent. To do this, in the src/main/java/ directory, we will create the class named Agent with the following code:

package <yourpackage>;

import dev.langchain4j.service.SystemMessage;
import dev.langchain4j.service.UserMessage;
import io.quarkiverse.langchain4j.RegisterAiService;
import jakarta.enterprise.context.ApplicationScoped;

@ApplicationScoped
@RegisterAiService
public interface Agent {

    @SystemMessage("""
            Você é um agente especializado em futebol brasileiro, seu nome é FutAgentBR
            Você sabe responder sobre os principais títulos dos principais times brasileiros e da seleção brasileira
            Sua resposta precisa ser educada, você deve responder em Português brasileiro e de forma relevante à pergunta feita

            Quando você não souber a resposta, responda que você não sabe responder nesse momento mas saberá em futuras versões.
            """)
    String chat(@UserMessage String message);
}

As you can see from our system prompt (@SystemMessage), we created an agent specialized in Brazilian football.

Creating our chat

Now that we have created our agent, we need to create the class that will handle our chat with it. To do this, in the src/main/java/ directory, we will create the class named AgentWSEndpoint.
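A minimal sketch of such an endpoint, using the quarkus-websockets-next extension already on our classpath (the package name and path below are illustrative, not the article's original listing), could look like this:

```java
package dev.example; // illustrative package name

import io.quarkus.websockets.next.OnTextMessage;
import io.quarkus.websockets.next.WebSocket;
import jakarta.inject.Inject;

// Opens a WebSocket at /chat and forwards each text message to the agent
@WebSocket(path = "/chat")
public class AgentWSEndpoint {

    @Inject
    Agent agent;

    @OnTextMessage
    String onMessage(String message) {
        // The returned string is sent back to the client as the reply
        return agent.chat(message);
    }
}
```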

Now you can talk to your agent, which at this point is still just an assistant, through the Quarkus Dev UI. Here are some screenshots to guide you:

[Screenshots: chatting with the agent in the Quarkus Dev UI]

Adding our tools (Function Calling)

Now let's move on to the detail that makes all the difference between an agent and an assistant: we will give our agent the ability to carry out tasks and/or processes by adding tools (function calling). Before we code this, here is a brief diagram showing, at a high level, how a tool call works.

[Diagram: how a tool call works. Source: surface.ai]
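The round trip in the diagram can be sketched in plain Java. The model stub and names below are purely illustrative (this is not the langchain4j API); the point is the loop: the model either answers or requests a tool, and the application executes the tool and feeds the result back to the model.

```java
import java.util.Map;
import java.util.function.Function;

public class ToolCallLoop {

    // What the "model" returns: either a tool request or a final answer
    record ModelReply(String toolName, String toolArg, String finalAnswer) {}

    // Registry of tools the application exposes to the model
    static final Map<String, Function<String, String>> TOOLS = Map.of(
            "sqrt", arg -> String.valueOf(Math.sqrt(Double.parseDouble(arg))));

    // Stand-in for the LLM: first requests a tool, then uses its result
    static ModelReply fakeModel(String userMessage, String toolResult) {
        if (toolResult == null) {
            return new ModelReply("sqrt", "144", null);
        }
        return new ModelReply(null, null, "The answer is " + toolResult);
    }

    static String chat(String userMessage) {
        String toolResult = null;
        while (true) {
            ModelReply reply = fakeModel(userMessage, toolResult);
            if (reply.finalAnswer() != null) {
                return reply.finalAnswer(); // model produced the final message
            }
            // Model asked for a tool: run it and loop with the result
            toolResult = TOOLS.get(reply.toolName()).apply(reply.toolArg());
        }
    }

    public static void main(String[] args) {
        System.out.println(chat("What is the square root of 144?"));
        // prints: The answer is 12.0
    }
}
```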

Now that we know how a tool call works, we need to create the class with our tools (you can also create a separate class for each tool). In this example we will create a "ToolBox", that is, a toolbox grouping the tools our agent can use.
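A minimal sketch of such a toolbox, using langchain4j's @Tool annotation and covering the example tools this article uses (current time, today's date, sum, square root) — the method names below are illustrative, not the article's original listing:

```java
package dev.example; // illustrative package name

import dev.langchain4j.agent.tool.Tool;
import jakarta.enterprise.context.ApplicationScoped;

import java.time.LocalDate;
import java.time.LocalTime;

// CDI bean whose annotated methods the LLM is allowed to call
@ApplicationScoped
public class AgentTools {

    @Tool("Returns the current time")
    String currentTime() {
        return LocalTime.now().toString();
    }

    @Tool("Returns today's date")
    String currentDate() {
        return LocalDate.now().toString();
    }

    @Tool("Adds two numbers")
    double sum(double a, double b) {
        return a + b;
    }

    @Tool("Calculates the square root of a number")
    double squareRoot(double x) {
        return Math.sqrt(x);
    }
}
```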

Soon afterwards, we add an annotation to our agent telling it which tools it has available, via @ToolBox(AgentTools.class). The Agent interface then looks like this:

package <yourpackage>;

import dev.langchain4j.service.SystemMessage;
import dev.langchain4j.service.UserMessage;
import io.quarkiverse.langchain4j.RegisterAiService;
import io.quarkiverse.langchain4j.ToolBox;
import jakarta.enterprise.context.ApplicationScoped;

@ApplicationScoped
@RegisterAiService
public interface Agent {

    @SystemMessage("""
            Você é um agente especializado em futebol brasileiro, seu nome é FutAgentBR
            Você sabe responder sobre os principais títulos dos principais times brasileiros e da seleção brasileira
            Sua resposta precisa ser educada, você deve responder em Português brasileiro e de forma relevante à pergunta feita

            Quando você não souber a resposta, responda que você não sabe responder nesse momento mas saberá em futuras versões.
            """)
    @ToolBox(AgentTools.class)
    String chat(@UserMessage String message);
}

Now you can ask your agent what time it is, what today's date is, or ask it to add two numbers and calculate a square root. These are the tools we use here to illustrate, but you can replace them with an HTTP call, a hashing function, an SQL query, etc. The possibilities are many.

Testing via Quarkus DEV UI

Here is a screenshot of one of the tests carried out after adding the tools:

[Screenshots: tool-call test in the Quarkus Dev UI, with the corresponding logs]

As you can see, for each tool call we get a log entry, showing that the LLM actually called the code we authorized it to execute.

Next steps

This concludes the first stage of building our agent. We will soon add memory to our agent in part 2, RAG (Retrieval-Augmented Generation) in part 3, and Guardrails in part 4 of this article. I hope you enjoyed it, and see you soon.

In the meantime, you can already follow along and see ALL of the article's code in this GitHub repository.
