


At the Microsoft Ignite global technology conference, Microsoft released a series of new AI-related optimized models and development tool resources, aiming to help developers make full use of hardware performance and expand the range of AI applications.
Microsoft delivered a particularly large gift package for NVIDIA, which currently holds a dominant position in the AI field: whether it is the TensorRT-LLM wrapper for the OpenAI Chat API or RTX-driven DirectML performance improvements for Llama 2, these and other popular large language models (LLMs) can now be accelerated and deployed more effectively on NVIDIA hardware.
Among them, TensorRT-LLM is an open-source library for accelerating LLM inference that can greatly improve AI inference performance, and it is continually updated to support more and more language models.
NVIDIA released TensorRT-LLM for the Windows platform in October. On desktops and laptops equipped with RTX 30/40-series GPUs with at least 8GB of graphics memory, demanding AI workloads can be completed more easily.
Now, TensorRT-LLM for Windows is compatible with OpenAI's popular Chat API through a new wrapper interface, so related applications can run directly on the local machine without connecting to the cloud, which helps keep private and proprietary data on the PC and prevents privacy leaks.
Any large language model optimized with TensorRT-LLM can be used through this wrapper interface, including Llama 2, Mistral, NV LLM, and others.
For developers, there is no tedious code rewriting or porting: modifying just one or two lines of code lets an AI application run locally.
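As a rough illustration of that "one or two line change": because the wrapper speaks the OpenAI Chat API, an application only needs to redirect its requests from the cloud endpoint to a local one. The endpoint URL and model name below are assumptions for illustration, not documented values.

```python
# Sketch: redirecting an OpenAI-style chat request to a hypothetical local
# TensorRT-LLM wrapper. The request body is unchanged; only the base URL
# differs from a cloud deployment.
import json

# Cloud deployment would use https://api.openai.com/v1; the local wrapper
# is assumed here to listen on localhost (hypothetical address and port).
LOCAL_BASE_URL = "http://localhost:8000/v1"

def build_chat_request(model: str, user_message: str) -> dict:
    """Build an OpenAI Chat API-compatible request body."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
    }

payload = build_chat_request("llama-2-13b", "Summarize TensorRT-LLM in one line.")
body = json.dumps(payload)
# POSTing `body` to f"{LOCAL_BASE_URL}/chat/completions" would then be
# served entirely on the local RTX GPU (requires a running local server).
```

The point of the sketch is that the payload format stays identical to the cloud API, so existing client code keeps working after the base-URL swap.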
[Image: Continue.dev coding assistant, a Microsoft Visual Studio Code plug-in based on TensorRT-LLM]
TensorRT-LLM v0.6.0, due at the end of this month, will bring up to a 5x improvement in inference performance on RTX GPUs and support for more popular LLMs, including the 7-billion-parameter Mistral and the 8-billion-parameter Nemotron-3, allowing desktops and laptops to run LLMs locally, quickly, and accurately at any time.
According to measured data, an RTX 4060 graphics card paired with TensorRT-LLM reaches an inference throughput of 319 tokens per second, a 4.2x increase over the 61 tokens per second of other backends.
An RTX 4090 accelerates to 829 tokens per second, an increase of 2.8 times.
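A quick check of the arithmetic behind the quoted RTX 4060 figures: an "increase of 4.2x" over the baseline corresponds to a throughput ratio of roughly 5.2x, which matches the roughly fivefold speedup in the headline.

```python
# Distinguish throughput *ratio* (new/old) from throughput *increase*
# (how much more than the old figure), using the RTX 4060 numbers quoted
# in the text: 319 tokens/s with TensorRT-LLM vs. 61 tokens/s elsewhere.
def speedup(new_tps: float, old_tps: float) -> tuple[float, float]:
    ratio = new_tps / old_tps   # how many times the old throughput
    increase = ratio - 1.0      # how much *more* than the old throughput
    return ratio, increase

ratio, increase = speedup(319, 61)
# ratio is about 5.2, increase is about 4.2
```

So "4.2 times faster" and "roughly 5x the throughput" describe the same measurement.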
With its powerful hardware performance, rich development ecosystem, and wide range of application scenarios, NVIDIA RTX is becoming an indispensable assistant for local AI. At the same time, as optimizations, models, and resources continue to grow, AI features are reaching hundreds of millions of RTX PCs at an accelerating pace.
Currently, more than 400 partners have released AI applications and games that support RTX GPU acceleration, and as models become easier to use, more and more generative AI (AIGC) features are expected to appear on the Windows PC platform.
The above is the detailed content of "NVIDIA RTX graphics card speeds up AI inference by 5 times! RTX PC easily handles large models locally". For more information, please follow other related articles on the PHP Chinese website!


