NVIDIA RTX graphics card speeds up AI inference by 5 times! RTX PC easily handles large models locally

王林 | 2023-11-17 23:05:43

At the Microsoft Ignite global technology conference, Microsoft announced a series of new AI-related optimized models and developer tool resources, aiming to help developers make fuller use of hardware performance and expand the range of AI applications.

NVIDIA, which currently holds a dominant position in the AI field, received a particularly large gift package this time: whether it is the TensorRT-LLM wrapper for the OpenAI Chat API, RTX-driven performance improvements to DirectML for Llama 2, or other popular large language models (LLMs), all can now be better accelerated and deployed on NVIDIA hardware.

Among them, TensorRT-LLM is an open-source library for accelerating LLM inference that can greatly improve AI inference performance, and it is continuously updated to support more and more language models.

NVIDIA released TensorRT-LLM for Windows in October. On desktops and laptops with RTX 30/40-series GPUs, as long as the card has 8GB of video memory or more, demanding AI workloads can be handled more easily.
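To make this concrete, here is a minimal sketch of loading and querying a model through TensorRT-LLM's high-level Python LLM API. Note this API comes from releases newer than the version discussed in this article (the v0.5/0.6-era builds used per-model build and run scripts instead), and the model name is an assumption for illustration.

```python
# A minimal sketch, assuming the high-level LLM API from a recent
# TensorRT-LLM release; the v0.5/0.6-era Windows builds described
# here used per-model build/run scripts instead. The model name is
# an assumption for illustration.
from tensorrt_llm import LLM, SamplingParams

def main():
    # Builds (or loads a cached) TensorRT engine for the model,
    # then runs inference entirely on the local RTX GPU.
    llm = LLM(model="meta-llama/Llama-2-7b-chat-hf")

    params = SamplingParams(temperature=0.7, max_tokens=128)
    for output in llm.generate(["What does TensorRT-LLM accelerate?"], params):
        print(output.outputs[0].text)

if __name__ == "__main__":
    main()
```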

Now, TensorRT-LLM for Windows is compatible with OpenAI's popular Chat API through a new wrapper interface, so related applications can run directly on the local machine without connecting to the cloud. This helps keep private and proprietary data on the PC and prevents privacy leaks.

Any large language model optimized with TensorRT-LLM can be used with this wrapper interface, including Llama 2, Mistral, and NV LLM.

For developers, there is no tedious code rewriting or porting: modify just one or two lines of code and the AI application runs locally, as the sketch below shows.
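As an illustration, here is a minimal sketch of that change: the standard openai Python client is simply pointed at a local endpoint instead of the cloud. The URL, port, placeholder API key, and model name are assumptions that depend on how the local TensorRT-LLM wrapper is configured.

```python
# A minimal sketch of the "one or two line" change described above.
# The base_url, api_key placeholder, and model name are assumptions;
# they depend on how the local OpenAI-compatible wrapper is set up.
from openai import OpenAI

# The only change vs. a cloud setup: base_url targets the local wrapper.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")

response = client.chat.completions.create(
    model="llama-2-7b-chat",  # hypothetical local model name
    messages=[{"role": "user", "content": "Summarize TensorRT-LLM in one sentence."}],
)
print(response.choices[0].message.content)
```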

↑↑↑ Continue.dev coding assistant, a Microsoft Visual Studio Code plugin built on TensorRT-LLM

TensorRT-LLM v0.6.0, arriving at the end of this month, will bring up to a 5x improvement in inference performance on RTX GPUs and support for more popular LLMs, including the 7-billion-parameter Mistral and the 8-billion-parameter Nemotron-3, allowing desktops and laptops to run LLMs locally at any time, quickly and accurately.

According to measured data, an RTX 4060 paired with TensorRT-LLM reaches an inference rate of 319 tokens per second, more than five times the 61 tokens per second of other backends.

An RTX 4090 sees inference performance rise 2.8 times, to 829 tokens per second.
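For context, throughput figures like these are typically obtained by timing a generation request and dividing the number of completion tokens by the elapsed time. Below is a rough sketch of such a measurement against a local OpenAI-compatible endpoint; the URL and model name are assumptions, not details from the article.

```python
# A rough sketch of measuring tokens-per-second against a local
# OpenAI-compatible endpoint. URL and model name are assumptions.
import time
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")

start = time.perf_counter()
response = client.chat.completions.create(
    model="llama-2-7b-chat",  # hypothetical local model name
    messages=[{"role": "user", "content": "Write a short story about a GPU."}],
    max_tokens=256,
)
elapsed = time.perf_counter() - start

generated = response.usage.completion_tokens  # tokens actually produced
print(f"{generated} tokens in {elapsed:.2f}s -> {generated / elapsed:.1f} tokens/s")
```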

With its powerful hardware, rich development ecosystem, and wide range of application scenarios, NVIDIA RTX is becoming an indispensable assistant for local AI. Meanwhile, as optimizations, models, and resources continue to grow, the rollout of AI features across hundreds of millions of RTX PCs is also accelerating.

Currently, more than 400 partners have released AI applications and games that support RTX GPU acceleration. As models become ever easier to use, more and more AIGC features are expected to appear on the Windows PC platform.
