Windows on Ollama: A new tool for running large language models (LLM) locally
Feb 28, 2024


Recently, both OpenAI Translator and NextChat have added support for large language models running locally through Ollama, which opens up a new way to play for "newbie" enthusiasts.

Moreover, the launch of Ollama on Windows (preview version) has completely changed how AI development is done on Windows devices, charting a clear path for explorers in the AI field and ordinary "test players" alike.

What is Ollama?

Ollama is a groundbreaking artificial intelligence (AI) and machine learning (ML) tool platform that dramatically simplifies the development and use of AI models.

In the technical community, hardware configuration and environment setup for AI models have long been thorny issues; Ollama emerged to address exactly these needs:

  • It not only provides a set of tools, but more importantly, these tools are intuitive and efficient to use. Whether you are an AI professional or a novice in the field, you can find the support you need in Ollama.
  • Beyond ease of use, Ollama makes access to advanced AI models and computing resources no longer the privilege of a few. For the AI and ML communities, the birth of Ollama is a milestone: it promotes the popularization of AI technology and lets more people try out and practice their own AI ideas.

Why does Ollama stand out?

Among many AI tools, Ollama stands out with the following key advantages. These features not only highlight its uniqueness, but also solve the most common problems encountered by AI developers and enthusiasts:

  • Automatic hardware acceleration: Ollama automatically identifies and makes full use of the optimal hardware resources on a Windows system. Whether you have an NVIDIA GPU or a CPU that supports advanced instruction sets such as AVX and AVX2, Ollama applies targeted optimizations so the AI model runs more efficiently. You no longer have to worry about complex hardware configuration and can focus your time and energy on the project itself.
  • No virtualization required: AI development has often required setting up a virtual machine or configuring a complex software environment. With Ollama, none of that is an obstacle: you can start developing AI projects directly, which keeps the whole process simple and fast. This convenience lowers the barrier to entry for individuals and organizations who want to try AI technology.
  • Access to the complete Ollama model library: Ollama provides a rich library of AI models, including advanced image-recognition models like LLaVA and Google's latest Gemma model. With such a comprehensive "arsenal", you can easily try and apply various open source models without spending time and effort hunting for integrations yourself. Whether you want to do text analysis, image processing, or other AI tasks, Ollama's model library provides strong support.
  • Ollama's resident API: In today's interconnected software world, integrating AI capabilities into your own applications is extremely valuable. Ollama's resident API greatly simplifies this: it runs silently in the background, ready to connect powerful AI capabilities to your project without extra complicated setup. With it, Ollama's rich AI capabilities are available at any time and integrate naturally into your development process, further improving work efficiency.

Through these carefully designed features, Ollama not only solves common problems in AI development, but also allows more people to easily access and apply advanced AI technology, greatly expanding the application prospects of AI.

Using Ollama on Windows

Welcome to the new era of AI and ML! Next, we'll take you through every step of getting started, and we'll also provide some practical code and command examples to make sure you have a smooth journey.

Step 1: Download and Install

1. Visit the Ollama Windows Preview page and download the OllamaSetup.exe installer.

2. Double-click the file and click "Install" to begin the installation.

3. Once the installation completes, you can start using Ollama on Windows. Isn't that simple?

Step 2: Start Ollama and get the model

To launch Ollama and get an open source AI model from the model library, follow these steps:

1. Click the Ollama icon in the "Start" menu. Once running, an icon will sit in the taskbar tray.

2. Right-click the taskbar icon and select "View logs" to open a command-line window.

3. Execute the following command to run Ollama and load a model:

ollama run [modelname]

After executing the command above, Ollama initializes and automatically pulls the selected model from the Ollama model library. Once it is ready, you can send it instructions and it will understand and respond using the chosen model.

Remember to replace modelname with the name of the model you want to run. Commonly used models include:

Model        Parameters  Size    Installation command     Publisher
Llama 2      7B          3.8GB   ollama run llama2        Meta
Code Llama   7B          3.8GB   ollama run codellama     Meta
Llama 2 13B  13B         7.3GB   ollama run llama2:13b    Meta
Llama 2 70B  70B         39GB    ollama run llama2:70b    Meta
Mistral      7B          4.1GB   ollama run mistral       Mistral AI
Mixtral      8x7B        26GB    ollama run mixtral:8x7b  Mistral AI
Phi-2        2.7B        1.7GB   ollama run phi           Microsoft Research
LLaVA        7B          4.5GB   ollama run llava         Microsoft Research / Columbia University / University of Wisconsin–Madison
Gemma 2B     2B          1.4GB   ollama run gemma:2b      Google
Gemma 7B     7B          4.8GB   ollama run gemma:7b      Google
Qwen 4B      4B          2.3GB   ollama run qwen:4b       Alibaba
Qwen 7B      7B          4.5GB   ollama run qwen:7b       Alibaba
Qwen 14B     14B         8.2GB   ollama run qwen:14b      Alibaba

Running a 7B model requires at least 8GB of RAM; running a 13B model requires at least 16GB.
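The download sizes in the table roughly track 4-bit quantization: about half a byte per parameter plus some overhead. As an illustrative sanity check (the 0.55 bytes-per-parameter factor below is an assumption for estimation, not a documented Ollama figure):

```python
def approx_q4_size_gb(params_billions: float, bytes_per_param: float = 0.55) -> float:
    """Rough download size of a 4-bit quantized model in GB.

    Assumes ~0.5 bytes per parameter plus overhead; 0.55 is an
    illustrative estimate, not an official constant.
    """
    return params_billions * bytes_per_param

# Compare against the table above: 7B ~3.8GB, 13B ~7.3GB, 70B ~39GB
for label, billions in [("7B", 7), ("13B", 13), ("70B", 70)]:
    print(f"{label}: ~{approx_q4_size_gb(billions):.1f} GB")
```

This is only a rule of thumb; actual sizes depend on the quantization scheme each model ships with.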

Step 3: Use the model

As mentioned earlier, Ollama supports completing different tasks with a wide variety of open source models. Here is how to use them.

  • Text-based models: Once a text model is loaded, you can type text directly on the command line to start a "conversation" with the model. For example, Alibaba's Qwen (Tongyi Qianwen):
  • Image-based models: To use an image-processing model such as LLaVA 1.6, load it with the following command:
ollama run llava

Ollama will analyze the image with the model you selected and return results such as the image's content and classification, whether the image has been modified, or other analyses (depending on the model used).

Step 4: Connect to the Ollama API

You won't use Ollama only through the command line; connecting applications to the Ollama API is an important step. It lets you integrate AI capabilities into your own software, or call them from front-end tools such as OpenAI Translator and NextChat.

Here is how to connect to and use the Ollama API:

  • Default address and port: The Ollama API's default address is http://localhost:11434, and it can be called directly on the system where Ollama is installed.
  • Changing the API's listening address and port: If you want to serve the API over the network, you can change its listening address and port.

1. Right-click the taskbar icon and select "Quit Ollama" to stop it running in the background.

2. Press Windows + R to open the "Run" dialog, enter the following command, then press Ctrl + Shift + Enter to launch "Environment Variables" with administrator privileges.

C:\Windows\system32\rundll32.exe sysdm.cpl,EditEnvironmentVariables

3. To change the listening address and port, add the following environment variable:

  • Variable name: OLLAMA_HOST
  • Variable value (port): :8000

Specifying only the port number makes Ollama listen on port 8000 of all IPv4 and IPv6 addresses simultaneously.

Using IPv6 requires Ollama 0.0.20 or later.

4. If multiple models are installed, the OLLAMA_MODELS environment variable can be set to change the directory where they are stored.

5. After making the changes, restart Ollama, then test access in a browser to verify that the changes took effect.

6. Example API call: To use the Ollama API, send HTTP requests from your own program. Here is an example of sending a text prompt to the Gemma model using the curl command in a terminal:

curl http://192.168.100.10:8000/api/generate -d '{
  "model": "gemma:7b",
  "prompt": "Why is the sky blue?"
}'

Currently, responses are returned in JSON format only.
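The /api/generate endpoint streams its reply as one JSON object per line, each carrying a "response" fragment and a "done" flag. Below is a minimal sketch of reassembling the full answer from such a stream; the sample lines mimic the shape of that output and are illustrative, not captured server data:

```python
import json

# Illustrative stream in the shape of Ollama's /api/generate output:
# one JSON object per line, with a "response" fragment and a "done" flag.
sample_stream = [
    '{"model":"gemma:7b","response":"The sky ","done":false}',
    '{"model":"gemma:7b","response":"is blue.","done":true}',
]

def collect_response(lines):
    """Concatenate 'response' fragments until a chunk reports done=true."""
    parts = []
    for line in lines:
        chunk = json.loads(line)
        parts.append(chunk.get("response", ""))
        if chunk.get("done"):
            break
    return "".join(parts)

print(collect_response(sample_stream))
```

In a real client you would iterate over the HTTP response body line by line instead of a list; adding "stream": false to the request makes Ollama return a single JSON object instead.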

Common Ollama commands include:

# Check the Ollama version
ollama -v

# List installed models
ollama list

# Remove a specified model
ollama rm [modelname]

# Model storage path
# C:\Users\<username>\.ollama\models

By following the steps above and referring to the command examples, you can fully experience Ollama's power on Windows. Whether you issue instructions directly on the command line, integrate AI models into your software through the API, or call them from a front-end wrapper, Ollama's door is open to you.

Best practices for Ollama on Windows

To get the most out of Ollama on Windows, keep the following best practices and tips in mind; they will help you optimize performance and resolve common issues:

Optimize Ollama performance:

  • Check hardware configuration: Make sure your device meets Ollama's recommended hardware requirements, especially when running large models. If you have an NVIDIA GPU, you can also enjoy automatic hardware acceleration provided by Ollama, which greatly improves computing speed.
  • Update Drivers: Keep your graphics card drivers up to date to ensure compatibility and optimal performance with Ollama.
  • Free up system resources: When running large models or performing complex tasks, close unnecessary programs to free up system resources.
  • Select the appropriate model: Choose a model based on task requirements. Large-parameter models may be more accurate, but they also demand more computing power. For simple tasks, small-parameter models are more efficient.

Ollama FAQ

Installation issues

  • Make sure your Windows system is the latest version.
  • Make sure you have the necessary permissions to install the software.
  • Try running the installer as administrator.

Model loading error

  • Check whether the entered command is correct.
  • Confirm that the model name matches the name in the Ollama model library.
  • Check Ollama version and update.

Ollama API connection issue

  • Make sure Ollama is running.
  • Check the listening address and port, especially whether the port is occupied by other applications.

In this tutorial, we learned how to install and use Ollama on Windows: executing basic commands, using the Ollama model library, and connecting to Ollama through the API. I recommend digging into Ollama and trying out a variety of different models.

Ollama has unlimited potential, and with it, you can achieve more!


Statement
This article is reproduced from 每日运维. If there is any infringement, please contact admin@php.cn for deletion.