
Running local LLM (Ollama) in your nodejs project.

Barbara Streisand
2024-11-28 18:45:13

We all love AI, and in recent years the boom in artificial intelligence has changed the world and is taking it into a new era. There is an AI use case for almost any problem: asking Gemini for a cooking recipe, ChatGPT for help with assignments, Claude for programming, V0 for frontend design. Developers and students depend on AI so heavily these days that almost every day a new startup emerges featuring it.


This leads aspiring developers like me to ask: how can I build something like this? The answer is API calls to these models. But they are not cheap, and an unemployed student like me has no means to purchase a subscription. That led to the idea of running the AI locally and then serving it on a port for API calls. This article gives you a step-by-step guide on how to set up Ollama and access its LLMs from your Node.js code.

Installing Ollama

This step is for Windows users. If you are on another operating system, follow this guide instead.

  • Head over to Ollama and download the installer.


  • Once done, fire up the setup and install the application.


  • This will then install the client on your machine, and now you can head over to the library section of ollama's official website to pick the model you want to use.


  • Here, I'll be using codellama:7b on my machine.
  • Open CMD or PowerShell and run ollama run codellama:7b. This downloads the model to your machine if it does not already exist, then runs it.

Serving LLM on Port

  • Now that you have Ollama on your system along with the required LLM, the next step is to serve it on a port so your Node app can access it.
  • Before proceeding, close Ollama in the background and check whether the default port assigned to Ollama is free by running ollama serve. If this throws an error, the port is occupied.
  • You'll need to clear that port before proceeding; the default port for Ollama is 11434.
  • Use the following command to check which process is running on that port: netstat -ano | findstr :11434
  • Note down the PID from the result and clear the port with taskkill /PID <PID> /F, replacing <PID> with the number you noted.
  • Once done, open a new CMD terminal and run ollama serve.
  • You should now see output confirming the server is listening, which means your LLMs are accessible through API calls.
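With the server running, you can confirm from Node that it is reachable before making any model calls. Here is a minimal sketch, assuming the default port 11434 from the steps above; the ollamaUrl helper is my own convenience function, not part of any library:

```typescript
// Base address of the local Ollama server (default port assumed).
const OLLAMA_BASE = "http://localhost:11434";

// Builds a full endpoint URL; reusable later for paths like /api/chat.
export function ollamaUrl(path: string): string {
  // Strip any leading slashes so we always join with exactly one.
  return `${OLLAMA_BASE}/${path.replace(/^\/+/, "")}`;
}

// A GET on the root of the Ollama port answers when the server is up.
export async function isOllamaUp(): Promise<boolean> {
  try {
    const res = await fetch(ollamaUrl("/"));
    return res.ok;
  } catch {
    return false; // connection refused means nothing is serving on that port
  }
}
```

Calling isOllamaUp() in your startup code lets you fail fast with a readable error instead of a timeout deep inside a chat request.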


Using the ollama npm package for request/response handling

  • Start your Node project with the following commands:
npm init -y
npm i typescript ollama
npx tsc --init
  • This will scaffold the project for you. First head over to the tsconfig.json file, then uncomment and set these values:
"rootDir": "./src",
"outDir": "./dist",
  • Create a src folder and inside it create the index.ts file.
import ollama from 'ollama';

async function main() {
    // Send a single chat request to the locally served model
    const response = await ollama.chat({
        model: 'codellama:7b',
        messages: [
            {
                role: 'user',
                content: 'What color is the sky?'
            }
        ],
    });
    // The reply text lives on response.message.content
    console.log(response.message.content);
}

main();

  • Now before running the code, edit the scripts in package.json
"scripts": {
    "dev": "tsc -b && node dist/index.js"
  },
  • This builds the TypeScript code into JavaScript before running it.
  • Run the application by using the command npm run dev inside the terminal.
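For longer replies you may want tokens printed as they arrive rather than waiting for the full response. As a sketch of what a streaming call looks like against Ollama's underlying REST API (assuming the default port and the same codellama:7b model; the parseChatLine helper is my own):

```typescript
// Each streamed line from /api/chat is one JSON object. Shape assumed from
// the Ollama REST API: { message: { content: string }, done: boolean }.
export function parseChatLine(line: string): { content: string; done: boolean } {
  const obj = JSON.parse(line);
  return { content: obj.message?.content ?? "", done: Boolean(obj.done) };
}

async function streamChat(prompt: string): Promise<string> {
  const res = await fetch("http://localhost:11434/api/chat", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "codellama:7b",
      messages: [{ role: "user", content: prompt }],
      stream: true, // Ollama streams newline-delimited JSON when stream is true
    }),
  });
  if (!res.ok || !res.body) throw new Error(`Ollama returned ${res.status}`);

  const decoder = new TextDecoder();
  let full = "";
  let buffered = "";
  for await (const chunk of res.body as any) {
    buffered += decoder.decode(chunk, { stream: true });
    let nl: number;
    // Split the buffer on newlines; each complete line is one JSON chunk.
    while ((nl = buffered.indexOf("\n")) >= 0) {
      const line = buffered.slice(0, nl).trim();
      buffered = buffered.slice(nl + 1);
      if (!line) continue;
      const { content, done } = parseChatLine(line);
      process.stdout.write(content); // print tokens as they arrive
      full += content;
      if (done) return full;
    }
  }
  return full;
}
```

The ollama package offers streaming too, so treat this as a look under the hood at the HTTP traffic the package generates for you.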


  • There you are: you can now access your local LLM from Node.js.
  • You can read more about the node package ollama here.
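The intro mentioned serving the model on a port for API calls. As a minimal sketch of that last step (assuming the same local server and codellama:7b model; the port 3000, the extractPrompt helper, and the request shape are all my own choices for illustration), you can wrap the model in a tiny HTTP endpoint with Node's built-in http module:

```typescript
import http from "node:http";

// Pulls the "prompt" field out of a JSON request body; null if missing or invalid.
export function extractPrompt(body: string): string | null {
  try {
    const parsed = JSON.parse(body);
    return typeof parsed.prompt === "string" ? parsed.prompt : null;
  } catch {
    return null;
  }
}

// Forwards a prompt to the local Ollama server and returns the reply text.
async function askModel(prompt: string): Promise<string> {
  const res = await fetch("http://localhost:11434/api/chat", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "codellama:7b",
      messages: [{ role: "user", content: prompt }],
      stream: false, // one JSON response instead of a token stream
    }),
  });
  const data = await res.json();
  return data.message.content;
}

// Call startServer() from your entry point to expose the model on a port.
export function startServer(port = 3000) {
  const server = http.createServer((req, res) => {
    let body = "";
    req.on("data", (chunk) => (body += chunk));
    req.on("end", async () => {
      const prompt = extractPrompt(body);
      if (!prompt) {
        res.writeHead(400, { "Content-Type": "application/json" });
        res.end(JSON.stringify({ error: "expected a JSON body with a prompt field" }));
        return;
      }
      try {
        const answer = await askModel(prompt);
        res.writeHead(200, { "Content-Type": "application/json" });
        res.end(JSON.stringify({ answer }));
      } catch (err) {
        // The Ollama server may be down or the model not pulled yet.
        res.writeHead(502, { "Content-Type": "application/json" });
        res.end(JSON.stringify({ error: String(err) }));
      }
    });
  });
  return server.listen(port, () => console.log(`Listening on http://localhost:${port}`));
}
```

With this running, any client on your machine can POST {"prompt": "..."} to port 3000 without knowing anything about Ollama.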

Thank you for reading, Hope this article could help you in any case and if it did then feel free to connect on my socials!

Linkedin | Github

The above is the detailed content of "Running local LLM (Ollama) in your nodejs project". For more information, please follow other related articles on the PHP Chinese website!
