A brief analysis of calculating GMAC and GFLOPS
GMAC stands for "Giga Multiply-Accumulate operations" and is an indicator used to measure the computational complexity of deep learning models. It expresses the amount of computation a model requires in billions of multiply-accumulate operations.
The multiply-accumulate (MAC) operation is fundamental in many mathematical calculations, including matrix multiplication, convolution, and other tensor operations commonly used in deep learning. Each MAC operation involves multiplying two numbers and adding the result to an accumulator.
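As a minimal illustration, a dot product is nothing more than a chain of MAC operations (plain Python, no framework needed):
<code># a dot product is a sequence of multiply-accumulate (MAC) operations
a = [1.0, 2.0, 3.0]
b = [4.0, 5.0, 6.0]

acc = 0.0
for x, y in zip(a, b):
    acc += x * y  # one MAC: one multiplication, one addition

print(acc)  # 32.0 -> 3 MACs, i.e. 6 floating point operations</code>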
The GMAC indicator can be calculated using the following formula:
<code>GMAC = (number of multiply-accumulate operations) / 10⁹</code>
The number of multiply-add operations is usually determined by analyzing the network architecture and the dimensions of the model parameters, such as weights and biases.
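As a sketch of what this analysis looks like in practice, the MAC count of a fully connected layer follows directly from its weight shape (the layer sizes here are illustrative):
<code># MACs of a fully connected layer = input features x output features
in_features, out_features = 784, 512
macs = in_features * out_features  # 401,408 MACs
gmacs = macs / 1e9                 # 0.000401408 GMAC

print(f"{macs} MACs = {gmacs} GMAC")</code>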
With the GMAC metric, researchers and practitioners can make informed decisions about model selection, hardware requirements, and optimization strategies for efficient and effective deep learning computations.
GFLOPS is a measure of the computing performance of a computer system or a specific operation, expressed in billions (giga) of floating-point operations per second.
Floating point arithmetic refers to performing arithmetic calculations on real numbers represented in IEEE 754 floating point format. These operations typically include addition, subtraction, multiplication, division, and other mathematical operations.
GFLOPS is commonly used in high-performance computing (HPC) and benchmarking, especially in areas that require heavy computational tasks, such as scientific simulations, data analysis, and deep learning.
GFLOPS is calculated with the following formula:
<code>GFLOPS = (number of floating-point operations) / (runtime in seconds) / 10⁹</code>
GFLOPS is an effective measure for comparing the computing power of different computer systems, processors, or specific operations. It helps evaluate the speed and efficiency of hardware or algorithms that perform floating point calculations. Note, however, that a quoted GFLOPS figure usually refers to theoretical peak performance and may not reflect the performance achieved in real-world scenarios, because it does not account for factors such as memory access, parallelization, and other system limitations.
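As a rough sketch (the timing methodology is simplified; real benchmarks need warm-up runs and repeated measurements), GFLOPS can be estimated by timing a matrix multiplication whose FLOP count is known analytically:
<code>import time
import torch

n = 1024
a = torch.randn(n, n)
b = torch.randn(n, n)

start = time.perf_counter()
c = a @ b
elapsed = time.perf_counter() - start

# an n x n matrix multiplication takes about 2 * n^3 floating point operations
flops = 2 * n ** 3
print(f"~{flops / elapsed / 1e9:.2f} GFLOPS")</code>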
The relationship between GMAC and GFLOPS
Each MAC consists of one multiplication and one addition, so one MAC counts as two floating-point operations:
<code>1 GMAC = 2 GFLOPs</code>
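A one-line conversion makes the relationship concrete (the 7.82 GMac figure comes from the ptflops example below):
<code>def gmac_to_gflops(gmacs):
    # one MAC = one multiply + one add = 2 floating point operations
    return gmacs * 2

print(gmac_to_gflops(7.82))  # 15.64</code>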
Writing code to calculate these two indicators by hand is tedious, but Python already has a ready-made library for us to use:
The ptflops library can calculate GMAC (and, from it, GFLOPs):
<code>pip install ptflops</code>
It is also very simple to use:
<code>import re
import torch
import torchvision.models as models
from ptflops import get_model_complexity_info

# a model that is already available in torchvision
net = models.densenet161()
macs, params = get_model_complexity_info(net, (3, 224, 224), as_strings=True,
                                         print_per_layer_stat=True, verbose=True)
# extract the numerical value and double it: 1 MAC = 2 FLOPs
flops = float(re.findall(r'([\d.]+)', macs)[0]) * 2
# extract the unit prefix (e.g. the "G" in "GMac")
flops_unit = re.findall(r'([A-Za-z]+)', macs)[0][0]

print('Computational complexity: {}'.format(macs))
print('Computational complexity: {} {}Flops'.format(flops, flops_unit))
print('Number of parameters: {}'.format(params))</code>
The results are as follows:
<code>Computational complexity: 7.82 GMac
Computational complexity: 15.64 GFlops
Number of parameters: 28.68 M</code>
We can also define a custom model to check whether the result is correct:
<code>import re
import torch
from torch import nn
from ptflops import get_model_complexity_info

class NeuralNetwork(nn.Module):
    def __init__(self):
        super().__init__()
        self.flatten = nn.Flatten()
        self.linear_relu_stack = nn.Sequential(
            nn.Linear(28*28, 512),
            nn.ReLU(),
            nn.Linear(512, 512),
            nn.ReLU(),
            nn.Linear(512, 10),
        )

    def forward(self, x):
        x = self.flatten(x)
        logits = self.linear_relu_stack(x)
        return logits

custom_net = NeuralNetwork()
macs, params = get_model_complexity_info(custom_net, (28, 28), as_strings=True,
                                         print_per_layer_stat=True, verbose=True)
# extract the numerical value and double it: 1 MAC = 2 FLOPs
flops = float(re.findall(r'([\d.]+)', macs)[0]) * 2
# extract the unit prefix (e.g. the "K" in "KMac")
flops_unit = re.findall(r'([A-Za-z]+)', macs)[0][0]

print('Computational complexity: {}'.format(macs))
print('Computational complexity: {} {}Flops'.format(flops, flops_unit))
print('Number of parameters: {}'.format(params))</code>
The result is as follows:
<code>Computational complexity: 670.73 KMac
Computational complexity: 1341.46 KFlops
Number of parameters: 669.71 k</code>
For ease of demonstration, we handle only fully connected layers when calculating GMAC by hand. The key is to iterate over the model's weight parameters: the number of multiply-add operations in each layer is determined by the shape of its weight tensor. For a fully connected layer, counting the multiplication and the addition separately, this comes to 2 × (input dimension × output dimension). The total is obtained by summing this quantity over the weight tensors of all linear layers, following the structure of the model.
<code>import torch
import torch.nn as nn

def compute_gmac(model):
    gmac_count = 0
    for param in model.parameters():
        shape = param.shape
        if len(shape) == 2:  # weight of a fully connected layer
            gmac_count += shape[0] * shape[1] * 2
    gmac_count = gmac_count / 1e9  # convert to units of billions
    return gmac_count</code>
Running this on the model defined above gives the following result:
<code>0.66972288</code>
Since the GMAC result is expressed in billions, it differs little from the result we calculated with the library above. Finally, calculating the GMAC of a convolution is a little more involved. The per-output-position formula is (input channels × kernel height × kernel width) × output channels × 2; note that the total count also depends on the spatial size of the output feature map, which cannot be read from the weight shapes alone. Here is a simple piece of code for reference; it may not be completely correct:
<code>def compute_gmac(model):
    gmac_count = 0
    for param in model.parameters():
        shape = param.shape
        if len(shape) == 2:  # weight of a fully connected layer
            gmac_count += shape[0] * shape[1] * 2
        elif len(shape) == 4:  # weight of a convolutional layer
            gmac_count += shape[0] * shape[1] * shape[2] * shape[3] * 2
    gmac_count = gmac_count / 1e9  # convert to units of billions
    return gmac_count</code>
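As noted, the weight shape alone misses the output spatial size of a convolution. A minimal sketch that accounts for it (not from the original article; the function name is ours) uses PyTorch forward hooks to record each layer's output shape, and keeps the article's convention of counting the multiply and the add separately:
<code>import torch
import torch.nn as nn

def compute_gmac_with_hooks(model, input_size):
    # a sketch: run one dummy forward pass and let hooks observe output shapes
    gmac_count = 0
    hooks = []

    def conv_hook(module, inputs, output):
        nonlocal gmac_count
        out_c, out_h, out_w = output.shape[1:]
        in_c = module.in_channels // module.groups
        k_h, k_w = module.kernel_size
        # (in channels x kernel h x kernel w) x out channels x output h x output w, x2
        gmac_count += in_c * k_h * k_w * out_c * out_h * out_w * 2

    def linear_hook(module, inputs, output):
        nonlocal gmac_count
        gmac_count += module.in_features * module.out_features * 2

    for m in model.modules():
        if isinstance(m, nn.Conv2d):
            hooks.append(m.register_forward_hook(conv_hook))
        elif isinstance(m, nn.Linear):
            hooks.append(m.register_forward_hook(linear_hook))

    with torch.no_grad():
        model(torch.zeros(1, *input_size))  # dummy input triggers the hooks
    for h in hooks:
        h.remove()
    return gmac_count / 1e9  # convert to units of billions</code>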