# Docker completes local deployment of LLama3 open source large model in three minutes
LLaMA-3 (Large Language Model Meta AI 3) is a large-scale open source generative artificial intelligence model developed by Meta. Compared with the previous generation, LLaMA-2, its model architecture is largely unchanged.
The LLaMA-3 family is released in several sizes to fit different application requirements and computing resources: the small model has 8B parameters, the medium model 70B, and the large model reaches 400B. During training, the goal is multi-modal and multi-language capability, with results expected to be comparable to GPT-4/GPT-4V.
Ollama is an open source large language model (LLM) service tool that allows users to run and deploy large language models on their local machine. Ollama is designed as a framework that simplifies the process of deploying and managing large language models in Docker containers, making the process quick and easy. Users can quickly run open source large-scale language models such as Llama 3 locally through simple command line operations.
Official website address: https://ollama.com/download
Ollama supports multiple platforms, including Mac and Linux, and provides Docker images to simplify installation. Users can import and customize additional models by writing a Modelfile, which plays a role similar to a Dockerfile. Ollama also exposes a REST API for running and managing models, along with a command-line toolset for interacting with them.
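For example, a minimal Modelfile that customizes llama3:8b might look like the sketch below (the parameter value and system prompt are illustrative, not from the original article):

```
FROM llama3:8b
PARAMETER temperature 0.7
SYSTEM "You are a helpful assistant. Always answer in Chinese."
```

A custom model built from this file would then be created with `ollama create my-llama3 -f Modelfile` and run with `ollama run my-llama3` (the name `my-llama3` is just an example).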
```shell
ollama pull llama3:8b
```
The default download is llama3:8b. In the name llama3:8b, the part before the colon is the model name and the part after it is the tag. You can view all available tags for llama3 in the Ollama model library.
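The name:tag convention works like Docker image tags. A tiny Python sketch (the helper name is illustrative) shows how such a reference splits, assuming a bare name falls back to the `latest` tag:

```python
def split_model_ref(ref: str, default_tag: str = "latest"):
    """Split an Ollama-style model reference into (name, tag).

    "llama3:8b" -> ("llama3", "8b"); a bare "llama3" gets the default tag.
    """
    name, sep, tag = ref.partition(":")
    return (name, tag if sep else default_tag)

print(split_model_ref("llama3:8b"))  # ('llama3', '8b')
print(split_model_ref("llama3"))     # ('llama3', 'latest')
```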
Note: If you want the model to reply in Chinese, please enter: Hello! Please reply in Chinese
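Outside the command line, the same kind of instruction can be sent through Ollama's REST API, which listens on port 11434 by default. The sketch below only builds and prints the request payload; actually sending it (commented out) assumes a running local Ollama server, and the system prompt text is an illustrative assumption:

```python
import json

def build_generate_request(model, prompt, system=None):
    """Build a payload for Ollama's /api/generate endpoint."""
    payload = {"model": model, "prompt": prompt, "stream": False}
    if system:
        payload["system"] = system  # steers the reply language
    return payload

req = build_generate_request(
    "llama3:8b",
    "Hello! Please reply in Chinese",
    system="You are a helpful assistant. Always answer in Chinese.",
)
print(json.dumps(req, ensure_ascii=False, indent=2))
# To actually send it (requires a running Ollama server):
#   requests.post("http://127.0.0.1:11434/api/generate", json=req)
```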
```shell
docker run -d -p 3000:8080 --add-host=host.docker.internal:host-gateway -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main
```
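For repeatable setups, the same flags can be kept as a Compose file. This is a sketch equivalent to the docker run command above (same image, port mapping, extra host, and named volume):

```yaml
services:
  open-webui:
    image: ghcr.io/open-webui/open-webui:main
    container_name: open-webui
    restart: always
    ports:
      - "3000:8080"
    extra_hosts:
      - "host.docker.internal:host-gateway"
    volumes:
      - open-webui:/app/backend/data

volumes:
  open-webui:
```

It would then be started with `docker compose up -d` from the directory containing the file.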
Open http://127.0.0.1:3000 in a browser to access it.
The first visit requires registration; after registering an account, you can log in. The remaining setup in Open WebUI:

1. Switch the interface language to Chinese (optional).
2. Download the model: enter llama3:8b and wait for the download to complete.
3. Select the model from the model list.
4. Start using the model.