How to Run LLMs Locally in 1 Minute?
Large language models (LLMs) such as GPT and Llama have transformed how we handle language tasks, from building smart chatbots to generating complex code snippets. Cloud platforms such as Hugging Face make these models easy to use, but in some cases running an LLM locally on your own computer is the smarter choice: it offers greater privacy, lets you customize models to your specific needs, and can significantly reduce costs. Running an LLM locally gives you full control, so you can take advantage of its power on your own terms.
Let's see how to run an LLM on your system with Ollama and Hugging Face in just a few simple steps!
The following video walks through the process step by step: "How to run LLM locally in one minute [beginner friendly]", shared by dylan (@dylanebert) on January 6, 2025, using Ollama and Hugging Face.
Step 1: Download Ollama
First, search for "Ollama" in your browser, then download and install it on your system.
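If you prefer the terminal, the sketch below shows the install path on Linux; the one-line installer is Ollama's documented script, but check ollama.com for the current instructions for your platform (macOS and Windows use a regular installer download):

```bash
# Linux: Ollama's one-line install script (verify at ollama.com before running)
curl -fsSL https://ollama.com/install.sh | sh

# Confirm the installation
ollama --version
```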
Step 2: Find the best open source LLM
Next, search for the Hugging Face "Open LLM Leaderboard" to find a ranked list of top open-source language models.
Step 3: Filter the model based on your device
After seeing the list, apply filters to find the best model for your setup. For example, click a top-ranked model such as Qwen/Qwen2.5-32B, then click "Use this model" in the upper right corner of the screen. However, you won't find Ollama as an option there.
This is because Ollama uses a special format called GGUF, a smaller, faster, quantized version of the model.
(Note: Quantization slightly reduces quality, but makes the model far more practical for local use. For example, a 7B-parameter model stored as 16-bit weights needs roughly 14 GB, while a 4-bit GGUF quantization of the same model comes in closer to 4 GB.)
Step 4: Get the model in GGUF format
Look for models with "GGUF" in their names; quantized uploads from community accounts such as bartowski are a good choice.
Step 5: Download and start using the model
Copy the commands provided for the model of your choice and paste them into your terminal. Press the "Enter" key and wait for the download to complete.
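For a GGUF repository, the provided command typically uses Ollama's Hugging Face integration, which pulls models by their hf.co path. A minimal sketch, assuming the bartowski Qwen2.5 repository and a Q4_K_M quantization tag (both are placeholders, so substitute the model you actually picked):

```bash
# Pull and run a GGUF model directly from Hugging Face
# (repository name and quantization tag are examples)
ollama run hf.co/bartowski/Qwen2.5-7B-Instruct-GGUF:Q4_K_M
```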
After the download is complete, you can start chatting with the model like you would with any other LLM. Simple and fun!
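Chatting in the terminal isn't the only option. Ollama also serves a local HTTP API, by default on port 11434, that scripts and apps can call. A minimal sketch, assuming the model name matches what you downloaded above:

```bash
# Send a one-off prompt to the locally running model via Ollama's REST API
curl http://localhost:11434/api/generate -d '{
  "model": "hf.co/bartowski/Qwen2.5-7B-Instruct-GGUF:Q4_K_M",
  "prompt": "Explain quantization in one sentence.",
  "stream": false
}'
```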
That's it! You are now running a powerful LLM locally on your device. Let me know in the comments section below whether these steps worked for you.
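Once you have a few models installed, two standard Ollama subcommands help with housekeeping (the model name below is the same placeholder as above):

```bash
# See which models are stored locally
ollama list

# Remove a model you no longer need to free disk space
ollama rm hf.co/bartowski/Qwen2.5-7B-Instruct-GGUF:Q4_K_M
```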