
NVIDIA slaps AMD in the face: With software support, H100's AI performance is 47% faster than MI300X!

According to news on December 14, AMD launched its most powerful AI chip, the Instinct MI300X, earlier this month, claiming that the AI performance of its 8-GPU server is 60% higher than that of an NVIDIA H100 8-GPU system. In response, NVIDIA recently released a set of performance comparisons between the H100 and MI300X, showing how the H100 can deliver faster AI performance than the MI300X when paired with the right software.

According to data previously released by AMD, the FP8/FP16 performance of the MI300X reaches 1.3 times that of the NVIDIA H100, and it runs Llama 2 70B and FlashAttention 2 models 20% faster than the H100. In 8v8 server comparisons, the MI300X is 40% faster than the H100 when running the Llama 2 70B model, and 60% faster when running the Bloom 176B model.

However, it should be noted that when comparing the MI300X with the NVIDIA H100, AMD obtained these numbers using the optimization libraries in the latest ROCm 6.0 suite (which supports the latest compute formats such as FP16, BF16, and FP8, including sparsity). In contrast, the NVIDIA H100 was tested without optimization software such as NVIDIA's TensorRT-LLM.

AMD's footnote on its NVIDIA H100 test indicates that it used vLLM v0.2.2 inference software on an NVIDIA DGX H100 system, with Llama 2 70B queries using an input sequence length of 2048 and an output sequence length of 128.

The latest test results released by NVIDIA for the DGX H100 (with eight NVIDIA H100 Tensor Core GPUs, each with 80 GB of HBM3) used the public NVIDIA TensorRT-LLM software: v0.5.0 for the Batch-1 test and v0.6.1 for the latency-threshold measurements. The workload details are the same as in AMD's earlier tests.


According to the results, with optimized software the performance of the NVIDIA DGX H100 server more than doubles, making it 47% faster than the MI300X 8-GPU server AMD showcased.
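A quick sanity check shows that the two vendors' claims are at least arithmetically consistent with each other. The sketch below uses only the percentages quoted in this article; the arithmetic itself is just an illustration of how the claims relate, not additional benchmark data.

```python
# All percentages below are taken from the article; the baseline of 1.0 is
# an arbitrary unit for an unoptimized H100 8-GPU server on Llama 2 70B.
unoptimized_h100 = 1.0
mi300x = unoptimized_h100 * 1.40   # AMD: MI300X 40% faster than (unoptimized) H100
optimized_h100 = mi300x * 1.47     # NVIDIA: TensorRT-LLM H100 47% faster than MI300X

# Implied software speedup for the same DGX H100 hardware:
speedup_from_software = optimized_h100 / unoptimized_h100
print(round(speedup_from_software, 2))  # 2.06 -> consistent with "more than 2x"
```

In other words, NVIDIA's "more than 2x" and "47% faster than MI300X" figures fit together with AMD's earlier "40% faster than H100" claim once the software difference is accounted for.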

The DGX H100 can handle a single inference task in 1.7 seconds. To balance response time against data center throughput, cloud services set a fixed response time for a given service. This lets them combine multiple inference requests into larger "batches", increasing the server's overall inferences per second. Industry-standard benchmarks such as MLPerf also measure performance using this fixed-response-time metric.

Slight tradeoffs in response time can yield large gains in the number of inference requests the server can handle in real time. Using a fixed 2.5-second response time budget, the NVIDIA DGX H100 server can handle more than five Llama 2 70B inferences per second, compared with fewer than one per second in Batch-1 mode.
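The batching effect described above can be sketched with simple arithmetic. Only the 1.7-second Batch-1 latency and the 2.5-second budget come from the article; the batch size of 14 below is a hypothetical value chosen purely to illustrate how the ">5 inferences per second" figure can arise.

```python
BATCH1_LATENCY_S = 1.7  # single Llama 2 70B inference on DGX H100 (article)
BUDGET_S = 2.5          # fixed response-time budget (article)

def throughput(batch_size: int, batch_latency_s: float) -> float:
    """Completed inferences per second for one batch finishing in batch_latency_s."""
    return batch_size / batch_latency_s

# Batch-1: fewer than one inference per second.
print(throughput(1, BATCH1_LATENCY_S))  # ~0.59 inferences/s

# Hypothetical: if a batch of 14 requests still finishes inside the 2.5 s
# budget, throughput exceeds 5 inferences/s, matching the article's claim.
hyp_batch, hyp_latency = 14, 2.5
assert hyp_latency <= BUDGET_S
print(throughput(hyp_batch, hyp_latency))  # 5.6 inferences/s
```

The tradeoff is exactly the one the article describes: each individual request waits slightly longer (up to 2.5 s instead of 1.7 s), but the server completes many times more requests per second.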

Arguably, it is fair for NVIDIA to use these new benchmarks. After all, AMD also used its own optimization software to evaluate its GPUs, so why not do the same when testing the NVIDIA H100?

It is worth noting that NVIDIA's software stack is built around the CUDA ecosystem and, after years of development, holds a very strong position in the artificial intelligence market, while AMD's ROCm 6.0 is new and has not yet been proven in real-world scenarios.

According to information previously disclosed by AMD, it has struck major deals with large companies such as Microsoft and Meta, which see its MI300X GPU as an alternative to NVIDIA's H100.

AMD's latest Instinct MI300X is expected to ship in large quantities in the first half of 2024. By then, however, NVIDIA's more powerful H200 GPU will also be shipping, and NVIDIA will launch its next-generation Blackwell B100 in the second half of 2024. In addition, Intel will launch its new-generation AI chip, Gaudi 3. Competition in the field of artificial intelligence looks set to intensify.

Editor: Xinzhixun-Rurounijian


Statement
This article is reproduced from 搜狐 (Sohu).