
How to Run LLMs Locally in 1 Minute?

Jennifer Aniston · 2025-03-11


Large language models (LLMs) such as GPT and Llama have completely changed the way we handle language tasks, from building smart chatbots to generating complex code snippets. Cloud platforms such as Hugging Face make these models easy to use, but in some cases running an LLM locally on your own computer is the smarter choice. Why? Because it offers greater privacy, lets you customize the model to your specific needs, and can significantly reduce costs. Running an LLM locally gives you full control, so you can take advantage of its power on your own terms.

Let's see how to run an LLM on your system with Ollama and Hugging Face in just a few simple steps!

The following video explains the process step by step:

How to run LLMs locally in one minute [beginner friendly], using Ollama and Hugging Face (video link)

— dylan (@dylanebert), January 6, 2025

Steps to run an LLM locally

Step 1: Download Ollama

First, search for "Ollama" in your browser, then download and install it on your system.
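If you prefer the terminal, here is a minimal sketch of the install, assuming the standard Ollama install script on Linux and the Homebrew package on macOS (the download button on ollama.com works just as well):

    # Linux: one-line install script (assumes the script is still hosted at this URL)
    curl -fsSL https://ollama.com/install.sh | sh

    # macOS: download the app from ollama.com, or use Homebrew if you already have it
    brew install ollama

    # Verify the install; the background service normally starts automatically
    ollama --version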

Step 2: Find the best open source LLM

Next, search for the "Hugging Face LLM Leaderboard" to find a list of the top open-source language models.

Step 3: Filter the model based on your device

After seeing the list, apply a filter to find the best model for your setup. For example:

  • Select consumer-grade home hardware.
  • Select only official providers to avoid unofficial or unverified models.
  • If your laptop is equipped with a low-end GPU, choose a model designed for edge devices.

Click a top-ranked model, such as Qwen/Qwen2.5-32B. In the upper-right corner of the screen, click "Use this model". However, you won't find Ollama listed as an option here.

This is because Ollama uses a special format called gguf, which is a smaller, faster, quantized version of the model.

(Note: Quantization slightly reduces quality, but makes the model far better suited to local use.)
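As a rough back-of-envelope estimate: a model's memory footprint is roughly its parameter count times the bits per weight, divided by 8. A 7B-parameter model at 16-bit precision therefore needs about 7 × 16 / 8 ≈ 14 GB of memory, while the same model quantized to around 5 bits per weight (a Q5 gguf) needs only about 4–5 GB, which is why quantized files fit comfortably on consumer GPUs.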

Step 4: Get the model in gguf format

  • Go to the "Quantizations" section on the model page – there are roughly 80 quantized versions available here. Sort them by most downloads.

  • Look for repositories with "gguf" in their names – those published by Bartowski are a good choice.

  • Select that model and click "Use this model with Ollama".
  • For the quantization setting, pick a file that is 1–2 GB smaller than your GPU's VRAM, or go with the recommended option, such as Q5_K_M (see the sketch below for a quick way to check your GPU memory).
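To choose a sensible file size, it helps to know how much video memory you actually have. Here is a minimal sketch for checking, assuming an NVIDIA GPU with the standard driver tools installed (on an Apple Silicon Mac the GPU shares system RAM, so total memory is the number that matters):

    # NVIDIA: report total and used GPU memory
    nvidia-smi --query-gpu=memory.total,memory.used --format=csv

    # macOS (Apple Silicon): total unified memory in bytes
    sysctl hw.memsize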

Step 5: Download and start using the model

Copy the command provided for your chosen model, paste it into your terminal, press Enter, and wait for the download to complete.
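For reference, the generated command typically looks something like the sketch below; the repository name and quantization tag are only illustrative, so copy the exact command shown on the model page rather than this one:

    # Pull and run a gguf model straight from Hugging Face (illustrative repo and tag)
    ollama run hf.co/bartowski/Qwen2.5-14B-Instruct-GGUF:Q5_K_M

    # List the models you have already downloaded
    ollama list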

After the download is complete, you can start chatting with the model like you would with any other LLM. Simple and fun!
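Beyond the interactive prompt in your terminal, Ollama also exposes a local HTTP API (on port 11434 by default), so other tools on your machine can talk to the model. A minimal sketch, using the illustrative model name from the previous step:

    # Send a single prompt to the locally running model over the HTTP API
    curl http://localhost:11434/api/generate -d '{
      "model": "hf.co/bartowski/Qwen2.5-14B-Instruct-GGUF:Q5_K_M",
      "prompt": "Explain quantization in one sentence.",
      "stream": false
    }'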

That's it! You are now running a powerful LLM locally on your device. Let me know in the comments below whether these steps worked for you.

