
DeepGEMM Released on Day 3 of DeepSeek Open Source Week


DeepSeek Releases DeepGEMM: A High-Performance FP8 GEMM Library for AI

As part of #OpenSourceWeek, DeepSeek unveiled DeepGEMM, a cutting-edge library optimized for efficient FP8 General Matrix Multiplications (GEMMs). This library supports both dense and Mixture-of-Experts (MoE) GEMMs, proving invaluable for V3/R1 model training and inference. DeepGEMM aims to significantly boost performance and efficiency in AI workloads, reinforcing DeepSeek's commitment to open-source innovation.

Day 3 of #OpenSourceWeek: DeepGEMM

Introducing DeepGEMM – an FP8 GEMM library supporting dense and MoE GEMMs, powering V3/R1 training and inference.

⚡ Up to 1350 FP8 TFLOPS on Hopper GPUs
✅ Minimal dependencies, designed for ease of use
✅ Fully Just-In-Time compiled…

— DeepSeek (@deepseek_ai) February 26, 2025

This release follows the successful launches of DeepSeek FlashMLA (Day 1) and DeepSeek DeepEP (Day 2).

Table of Contents

  • What is GEMM?
  • What is FP8?
  • The Need for DeepGEMM
  • Key Features of DeepGEMM
  • Performance Benchmarks
  • Installation Instructions
  • Conclusion

What is GEMM?

General Matrix Multiplication (GEMM) is a fundamental linear algebra operation that multiplies two matrices and accumulates the result into a third. It is widely used across numerous applications, and its standard form is:

C = α × (A × B) + β × C, where A and B are the input matrices, C is the output matrix (which may also serve as an accumulator), and α and β are scalar coefficients.

GEMM is crucial for model performance optimization, particularly in deep learning for neural network training and inference.
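
As a concrete illustration, here is a minimal NumPy sketch of the same operation (plain FP32, purely for clarity; it is not how DeepGEMM's GPU kernels work internally):

import numpy as np

# GEMM: C <- alpha * (A @ B) + beta * C
M, K, N = 64, 128, 32
alpha, beta = 1.0, 0.0

A = np.random.rand(M, K).astype(np.float32)   # M x K input matrix
B = np.random.rand(K, N).astype(np.float32)   # K x N input matrix
C = np.zeros((M, N), dtype=np.float32)        # M x N output / accumulator

C = alpha * (A @ B) + beta * C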

[Figure: GEMM tiling, with matrices partitioned into Mtile, Ntile, and Ktile blocks]

This illustration shows GEMM, highlighting tiling (dividing matrices into smaller blocks – Mtile, Ntile, Ktile) for optimized cache utilization. This improves performance through enhanced data locality and parallelism.
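
The blocking idea can be sketched in a few lines of Python (illustrative only; real GPU kernels tile into shared memory and registers rather than looping on the CPU):

import numpy as np

def tiled_matmul(A, B, m_tile=64, n_tile=64, k_tile=64):
    # Multiply A (M x K) by B (K x N) one small block at a time, so each
    # block of A, B, and C stays in fast memory (cache) while it is reused.
    M, K = A.shape
    _, N = B.shape
    C = np.zeros((M, N), dtype=A.dtype)
    for m in range(0, M, m_tile):
        for n in range(0, N, n_tile):
            for k in range(0, K, k_tile):
                C[m:m+m_tile, n:n+n_tile] += (
                    A[m:m+m_tile, k:k+k_tile] @ B[k:k+k_tile, n:n+n_tile]
                )
    return C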

What is FP8?

FP8 (8-bit floating point) is a reduced-precision numerical format that represents data compactly for high-throughput computing. It is particularly beneficial for handling the computational demands of large datasets in machine learning.

FP8 comes in two common variants:

  • E4M3: 1 sign bit, 4 exponent bits, 3 fraction bits
  • E5M2: 1 sign bit, 5 exponent bits, 2 fraction bits

This compact structure enables faster computations and reduced memory usage, ideal for training large models. While precision might be slightly compromised, this is often acceptable, even leading to performance gains due to reduced computational overhead.

[Figure: Bit layouts of FP8 (E4M3, E5M2) compared with FP16 and BF16]

This image compares FP8 (E4M3 and E5M2 formats) with FP16 and BF16, illustrating the trade-offs between precision and range for different floating-point formats.
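
Recent PyTorch releases (2.1 and later) ship prototype FP8 tensor dtypes for both variants, which makes this trade-off easy to observe. The snippet below is a minimal illustration and is unrelated to DeepGEMM's own kernels:

import torch

x = torch.tensor([0.1234, 3.1416, 100.0])

# E4M3: 4 exponent bits, 3 fraction bits -- finer precision, narrower range
x_e4m3 = x.to(torch.float8_e4m3fn)
# E5M2: 5 exponent bits, 2 fraction bits -- wider range, coarser precision
x_e5m2 = x.to(torch.float8_e5m2)

print(x_e4m3.float())   # values rounded to the nearest representable FP8 number
print(x_e5m2.float())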

The Need for DeepGEMM

DeepGEMM addresses matrix multiplication challenges by offering a lightweight, high-performance, and user-friendly library for diverse GEMM operations.

  • Fills a critical need for optimized FP8 GEMM in the AI community.
  • High performance with a small memory footprint.
  • Supports both dense and MoE layouts.
  • Crucial for large-scale AI model training and execution.
  • Optimizes MoE architectures with specialized GEMM types.
  • Directly enhances DeepSeek's AI models.
  • Benefits the broader AI development ecosystem.

Key Features of DeepGEMM

DeepGEMM's strengths include:

  • High Performance: Achieves up to 1350 FP8 TFLOPS on NVIDIA Hopper GPUs.
  • Lightweight Design: Minimal dependencies for simplified usage.
  • Just-In-Time Compilation: Compiles kernels at runtime for streamlined user experience.
  • Concise Core Logic: Approximately 300 lines of core code, outperforming many expert-tuned kernels.
  • Support for Diverse Layouts: Supports dense and two MoE layouts.

Performance Benchmarks

DeepGEMM's efficiency across various matrix configurations is shown below:

M      N      K      Computation     Memory Bandwidth    Speedup
64     2112   7168   206 TFLOPS      1688 GB/s           2.7x
128    7168   2048   510 TFLOPS      2277 GB/s           1.7x
4096   4096   7168   1304 TFLOPS     500 GB/s            1.1x

Table 1: DeepGEMM Performance Benchmarks
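
The throughput figures follow the standard GEMM operation count of 2 · M · N · K floating-point operations, so a quick back-of-the-envelope check is straightforward (the implied kernel time below assumes the reported TFLOPS are sustained for the whole kernel):

# A dense GEMM performs roughly 2 * M * N * K floating-point operations.
M, N, K = 4096, 4096, 7168
flops = 2 * M * N * K                       # ~2.4e11 operations
reported_tflops = 1304                      # last row of Table 1
runtime_s = flops / (reported_tflops * 1e12)
print(f"Implied kernel time: {runtime_s * 1e6:.0f} microseconds")   # ~184 us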

Installation Instructions

DeepGEMM installation is straightforward:

Step 1: Prerequisites

  • Hopper architecture GPUs (sm_90a)
  • Python 3.8 or above
  • CUDA 12.3 or above (12.8 or above recommended)
  • PyTorch 2.1 or above
  • CUTLASS 3.6 or above (can be pulled in as a Git submodule)
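
A quick way to sanity-check most of these prerequisites from Python (the version thresholds mirror the list above):

import torch

print("PyTorch version:", torch.__version__)               # expect 2.1 or above
print("CUDA seen by PyTorch:", torch.version.cuda)          # expect 12.3 or above
if torch.cuda.is_available():
    major, minor = torch.cuda.get_device_capability()
    print(f"GPU compute capability: sm_{major}{minor}")     # Hopper reports sm_90
else:
    print("No CUDA device visible")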

Step 2: Clone the Repository

git clone --recursive git@github.com:deepseek-ai/DeepGEMM.git

Step 3: Install the Library

python setup.py install

Step 4: Import DeepGEMM

import deep_gemm

See the DeepGEMM GitHub repository for detailed instructions.
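
For orientation only, here is a rough sketch of what calling a dense FP8 kernel might look like once the library is installed. The function name and argument layout in the final comment are assumptions based on the project README at the time of writing (FP8 E4M3 inputs with scaling factors, BF16 output, kernels JIT-compiled on first use); defer to the repository documentation for the authoritative API.

import torch
import deep_gemm

# Hypothetical dense FP8 GEMM shapes: (M x K) @ (N x K)^T -> (M x N)
M, N, K = 128, 4096, 7168

x = torch.randn(M, K, device="cuda", dtype=torch.bfloat16).to(torch.float8_e4m3fn)
y = torch.randn(N, K, device="cuda", dtype=torch.bfloat16).to(torch.float8_e4m3fn)
out = torch.empty(M, N, device="cuda", dtype=torch.bfloat16)

# The dense kernel is exposed roughly as below; the name and the scaling-factor
# arguments (x_scales, y_scales) are assumptions -- check the README for the
# exact signature before use:
# deep_gemm.gemm_fp8_fp8_bf16_nt((x, x_scales), (y, y_scales), out)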

Conclusion

DeepGEMM is a high-performance, user-friendly FP8 GEMM library ideal for advanced machine learning tasks. Its lightweight design, speed, and flexibility make it a valuable tool for AI developers. Check the Analytics Vidhya Blog for updates on DeepSeek's Day 4 release!
