OpenAI programming language accelerates BERT inference 12x, and the Kernl engine draws attention

WBOY
2023-04-23

How much can one line of code do? The Kernl library introduced here lets users run PyTorch transformer models several times faster on the GPU with just one line of code, greatly speeding up model inference.
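As a rough sketch of what that one line looks like in practice, based on the usage shown in the project README (treat the exact import path and surrounding setup as assumptions rather than a guaranteed API):

```python
# Hedged sketch of Kernl's advertised one-line usage; the import path
# follows the project README and may change between versions.
import torch
from transformers import AutoModel
from kernl.model_optimization import optimize_model

model = AutoModel.from_pretrained("bert-base-uncased").eval().cuda()
optimize_model(model)  # the single line: swaps in fused Triton kernels

inputs = {
    "input_ids": torch.randint(0, 30000, (1, 128), device="cuda"),
    "attention_mask": torch.ones(1, 128, dtype=torch.int64, device="cuda"),
}
with torch.inference_mode(), torch.cuda.amp.autocast():
    outputs = model(**inputs)
```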

Specifically, with Kernl, BERT inference is up to 12x faster than the Hugging Face baseline. The speedup comes mainly from custom GPU kernels written in Triton, OpenAI's new GPU programming language, combined with TorchDynamo. The project comes from Lefebvre Sarrut.

GitHub address: https://github.com/ELS-RD/kernl/

The chart below compares Kernl with other inference engines. The bracketed numbers on the x-axis are batch size and sequence length, respectively; the y-axis is the inference speedup.

[Benchmark chart: inference speedup of Kernl vs. other engines across (batch size, sequence length) configurations]

Benchmarks were run on an RTX 3090 GPU and a 12-core Intel CPU.

From these results, Kernl is the fastest inference engine on long input sequences (right half of the figure above) and close to NVIDIA's TensorRT on short ones (left half). Moreover, the kernel code is short and easy to understand and modify. The project even ships a Triton debugger and Fx-based tools to simplify kernel replacement, so no changes to the PyTorch model source code are required.

Project author Michaël Benesty summed up the release: Kernl is a library for accelerating transformer inference that is very fast, sometimes reaching SOTA performance, and hackable enough to match most transformer architectures.

They also tested it on T5 and got a 6x speedup; Benesty said this is just the beginning.

Why was Kernl created?

At Lefebvre Sarrut, the project authors run several transformer models in production, some of them latency-sensitive, mainly for search and recommendation. They also use ONNX Runtime and TensorRT, and even created the transformer-deploy OSS library to share their knowledge with the community.

Recently, the authors have been testing generative language models and working to speed them up, but doing so with traditional tools has proven difficult. In their view, ONNX is another interesting option: an open file format designed for machine learning, used to store trained models, with broad hardware support.

However, the ONNX ecosystem (primarily the inference engines) runs into several limitations when dealing with new LLM architectures:

  • Exporting a model without control flow to ONNX is simple because tracing can be relied on, but dynamic behavior is harder to capture;
  • Unlike PyTorch, ONNX Runtime and TensorRT do not yet have native support for the multi-GPU tasks that implement tensor parallelism;
  • TensorRT cannot manage two dynamic axes for a transformer model within a single profile, so since you usually want to feed inputs of different lengths, you need to build one model per batch size;
  • Very large models are common, but ONNX (as a protobuf file) has file-size limits, which have to be worked around by storing the weights outside the model (see the sketch after this list).
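For that last point, the standard workaround uses ONNX's external-data feature. A minimal sketch using the documented onnx.save_model flags; the toy model and file names here are illustrative, not from the project:

```python
# Work around the protobuf size limit by moving tensors out of the
# .onnx file into a side file; names below are illustrative.
import torch
import onnx

model = torch.nn.Linear(1024, 1024)
torch.onnx.export(model, torch.randn(1, 1024), "model.onnx")

proto = onnx.load("model.onnx")
onnx.save_model(
    proto,
    "model_external.onnx",
    save_as_external_data=True,   # weights leave the protobuf
    all_tensors_to_one_file=True,
    location="model_weights.bin",  # illustrative side-file name
)
```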

A particularly annoying fact is that new models are never accelerated out of the box: you have to wait for someone else to write custom CUDA kernels for them. It's not that existing solutions are bad; one of the great things about ONNX Runtime is its multi-hardware support, and TensorRT is known to be very fast.

So the project authors wanted an optimizer as fast as TensorRT but living in Python/PyTorch, which is why they created Kernl.

How does it work?

Memory bandwidth is usually the bottleneck in deep learning, so reducing memory accesses is often a good strategy for speeding up inference. On short input sequences, the bottleneck is instead CPU overhead, which must be eliminated. The project mainly relies on the following three technologies:

The first is OpenAI Triton, a language for writing GPU kernels, comparable to CUDA (not to be confused with the NVIDIA Triton inference server), that makes kernels much more efficient to write. The improvements come from fusing several operations, so computations are chained without intermediate results being written back to GPU memory. The authors use it to rewrite attention (replaced by Flash Attention), linear layers with their activations, and LayerNorm/RMSNorm.
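Kernl's real kernels are more involved, but a toy Triton kernel illustrates the fusion idea: add and ReLU are computed in one pass, so the intermediate sum stays in registers instead of round-tripping through GPU memory. This is an illustrative sketch, not code from the project:

```python
# Toy fused add + ReLU Triton kernel: one load per input, one store,
# no intermediate tensor written to global memory.
import torch
import triton
import triton.language as tl

@triton.jit
def fused_add_relu_kernel(x_ptr, y_ptr, out_ptr, n_elements, BLOCK: tl.constexpr):
    pid = tl.program_id(axis=0)
    offsets = pid * BLOCK + tl.arange(0, BLOCK)
    mask = offsets < n_elements
    x = tl.load(x_ptr + offsets, mask=mask)
    y = tl.load(y_ptr + offsets, mask=mask)
    z = tl.maximum(x + y, 0.0)  # fused: the sum never leaves registers
    tl.store(out_ptr + offsets, z, mask=mask)

def fused_add_relu(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    out = torch.empty_like(x)
    n = x.numel()
    grid = (triton.cdiv(n, 1024),)
    fused_add_relu_kernel[grid](x, y, out, n, BLOCK=1024)
    return out
```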

The second is CUDA graphs. During a warmup step, they record each launched kernel and its parameters; the entire inference pass can then be replayed as a single graph.
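The mechanism can be illustrated with PyTorch's public CUDA graph API (a generic sketch, not Kernl's internal code): record the kernels once during warmup, then replay the whole sequence with a single launch:

```python
# Generic CUDA graph capture/replay sketch with PyTorch's public API.
import torch

model = torch.nn.Linear(512, 512).cuda().eval()
static_input = torch.randn(8, 512, device="cuda")

# Warmup on a side stream so capture sees a clean state.
s = torch.cuda.Stream()
s.wait_stream(torch.cuda.current_stream())
with torch.cuda.stream(s):
    for _ in range(3):
        _ = model(static_input)
torch.cuda.current_stream().wait_stream(s)

g = torch.cuda.CUDAGraph()
with torch.cuda.graph(g):
    static_output = model(static_input)

# Replay: copy new data into the captured buffer, relaunch everything at once.
static_input.copy_(torch.randn(8, 512, device="cuda"))
g.replay()
print(static_output[0, :4])
```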

Finally, there is TorchDynamo, a prototype from Meta that helps deal with dynamic behavior. During the warmup step, it traces the model and produces an Fx graph (a static computation graph). The authors replace some Fx graph operations with their own kernels and recompile the result in Python.
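The mechanism looks roughly like the sketch below, written against the torch.compile entry point that TorchDynamo later shipped under. The backend here only prints the Fx graph, where Kernl would rewrite nodes before recompiling:

```python
# Hedged sketch of a custom TorchDynamo backend: it receives the traced
# Fx graph plus example inputs and returns a callable. Kernl swaps some
# graph nodes for its Triton kernels; this toy backend just inspects them.
import torch

def inspect_backend(gm: torch.fx.GraphModule, example_inputs):
    gm.graph.print_tabular()  # show the static graph TorchDynamo captured
    return gm.forward         # run the graph unchanged

@torch.compile(backend=inspect_backend)
def f(x):
    return torch.relu(x) + 1.0

f(torch.randn(4))
```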

Going forward, the project roadmap covers faster warmup, ragged inference (no computation wasted on padding), training support (with long sequences), multi-GPU support (multiple parallelization modes), quantization (PTQ), testing new batches of CUTLASS kernels, and improved hardware support.

Please refer to the original project for more details.
