Large language models (LLMs) such as GPT and Llama have transformed how we handle language tasks, from building smart chatbots to generating complex code snippets. Cloud platforms such as Hugging Face make these models easy to use, but in some cases running an LLM locally on your own computer is the smarter choice. Why? It offers greater privacy, lets you customize models to your specific needs, and can significantly reduce costs. Running an LLM locally gives you full control, letting you harness its power on your own terms.
Let's see how to run an LLM on your system with Ollama and Hugging Face in just a few simple steps!
The following video explains the process step by step:
"How to run LLMs locally in one minute [beginner friendly]" — using Ollama and Hugging Face (video link).
— dylan (@dylanebert) January 6, 2025
Steps to run an LLM locally
Step 1: Download Ollama
First, search for "Ollama" in your browser, then download and install it on your system.
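If you prefer the terminal, Ollama also publishes a one-line install script for Linux (macOS and Windows users can grab the installer from ollama.com/download instead). This sketch checks for an existing install first:

```shell
# Check whether Ollama is already installed; if not, print the official
# Linux install one-liner (run it yourself - it downloads the binary
# and typically needs sudo).
if command -v ollama >/dev/null 2>&1; then
  ollama --version
else
  echo "To install: curl -fsSL https://ollama.com/install.sh | sh"
fi
```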
Step 2: Find the best open source LLM
Next, search for the Hugging Face "Open LLM Leaderboard" to find a ranked list of top open-source language models.
Step 3: Filter models based on your device
Once you see the list, apply filters to find the best model for your setup. For example:
- Select consumer-grade hardware (the kind found in home PCs).
- Select only official providers to avoid unofficial or unverified models.
- If your laptop has a low-end GPU, choose a model designed for edge devices.
Click a top-ranked model, such as Qwen/Qwen2.5-35B, then click "Use this model" in the upper-right corner of the page. You won't find Ollama listed as an option here, however.
That's because Ollama uses a special format called GGUF: a smaller, faster, quantized version of the model.
(Note: quantization slightly reduces quality, but makes the model far more practical to run locally.)
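As a back-of-envelope illustration of why quantization matters for local use (rough estimates from parameter count alone, ignoring overhead such as the KV cache):

```shell
# Rough model-size estimate: parameters * bits-per-weight / 8 bytes.
# A 7B model at full 16-bit precision vs. 8-bit and 4-bit quantization:
params=7000000000
for bits in 16 8 4; do
  gb=$(awk -v p="$params" -v b="$bits" 'BEGIN { printf "%.1f", p * b / 8 / 1e9 }')
  echo "7B model at ${bits}-bit: ~${gb} GB"
done
```

At 4 bits, a 7B model shrinks from roughly 14 GB to about 3.5 GB, which is what makes it fit on a consumer GPU.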
Step 4: Get the model in GGUF format
- Go to the "Quantizations" section — about 80 quantized models are listed there. Sort them by most downloads.
- Look for models with "GGUF" in their names, from well-known quantization providers such as bartowski; these are a good choice.
- Select the model and click "Use this model with Ollama".
- For the quantization setting, pick a file size that is 1-2 GB smaller than your GPU's VRAM, or go with the recommended option, such as Q5_K_M.
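That sizing rule of thumb can be expressed as a quick calculation (the 8 GB VRAM figure below is just an example; substitute your own card's VRAM):

```shell
# Rule of thumb from the step above: choose a GGUF file 1-2 GB smaller
# than your GPU's VRAM, leaving headroom for context and overhead.
vram_gb=8
max_file_gb=$((vram_gb - 2))
echo "With ${vram_gb} GB VRAM, look for quantized files up to ~${max_file_gb} GB (e.g. a Q5_K_M build)"
```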
Step 5: Download and start using the model
Copy the command provided for your chosen model, paste it into your terminal, press Enter, and wait for the download to complete.
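For reference, the command Hugging Face generates follows the pattern `ollama run hf.co/{user}/{repo}:{quant}`. The repository and quant tag below are illustrative placeholders — use the exact command from your model's page:

```shell
# Example of the command the "Use this model with Ollama" button produces;
# the repo and quant tag here are placeholders for whatever you selected.
MODEL="hf.co/bartowski/Qwen2.5-7B-Instruct-GGUF:Q5_K_M"
echo "ollama run $MODEL"
# Running it for real requires Ollama installed and a few GB of free disk:
#   ollama run "$MODEL"
```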
Once the download finishes, you can chat with the model just as you would with any other LLM. Simple and fun!
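Beyond the interactive chat, Ollama also exposes a local REST API (on port 11434 by default) that your own scripts can call. This sketch assumes a model has already been pulled, and guards against the server not running:

```shell
# Query Ollama's local REST API, if the server is up; the model name
# "qwen2.5" is a placeholder for whichever model you pulled.
if curl -fsS http://localhost:11434/api/tags >/dev/null 2>&1; then
  curl -s http://localhost:11434/api/generate \
    -d '{"model": "qwen2.5", "prompt": "Hello!", "stream": false}'
else
  echo "Ollama server not running; start it with: ollama serve"
fi
```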
That's it! You are now running a powerful LLM locally on your device. Let me know in the comments section below whether these steps worked for you.
The above is the detailed content of How to Run LLMs Locally in 1 Minute?. For more information, please follow other related articles on the PHP Chinese website!
