What is Intel® Extension for Transformers?
Intel® Extension for Transformers[1] is an innovative toolkit from Intel that significantly accelerates Transformer-based large language models (LLMs) on Intel® architecture platforms, especially 4th Gen Intel® Xeon® Scalable processors (codenamed Sapphire Rapids[2], SPR). Its main features include:
- A seamless model compression experience, achieved by extending the Hugging Face transformers API[3] and leveraging Intel® Neural Compressor[4];
- An LLM inference runtime with low-bit quantization kernels (NeurIPS 2023: Efficient LLM Inference on CPU[5]), supporting common LLMs such as Falcon, LLaMA, MPT, Llama2, BLOOM, OPT, ChatGLM2, GPT-J-6B, Baichuan-13B-Base, Baichuan2-13B-Base, Qwen-7B, Qwen-14B and Dolly-v2-3B[6];
- An advanced compression-aware runtime[7] (NeurIPS 2022: Fast Distillation on CPU and QuaLA-MiniLM: Quantized Length Adaptive MiniLM; NeurIPS 2021: Prune Once for All: Sparse Pre-Trained Language Models).
This article focuses on the LLM inference runtime (referred to as "LLM Runtime"), covering how to use its Transformers-style API to achieve more efficient LLM inference on Intel® Xeon® Scalable processors, and how to address the issues LLMs face in chat-application scenarios.
LLM Runtime
The LLM Runtime[8] provided by Intel® Extension for Transformers is a lightweight yet efficient LLM inference runtime. It is inspired by GGML[9] and compatible with llama.cpp[10], and has the following characteristics:
- The kernels are optimized for the AI acceleration technologies built into Intel® Xeon® CPUs (such as AMX and VNNI) as well as the AVX512F and AVX2 instruction sets;
- More quantization options are available, such as different granularities (per-channel or per-group) and different group sizes (e.g., 32/128) — see the sketch after this list;
- Better KV cache access and memory allocation strategies;
- Tensor parallelism, which helps distributed inference on multi-socket systems.
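To make the "per-group" granularity concrete, below is a toy sketch of group-wise symmetric INT4 weight quantization with a group size of 32. It only illustrates what per-group scales mean; it is not LLM Runtime's actual kernel, and the shapes and group size are illustrative assumptions.

```python
import numpy as np

def quantize_int4_per_group(w: np.ndarray, group_size: int = 32):
    """Symmetric group-wise INT4 quantization of a 1-D weight row (toy sketch)."""
    groups = w.reshape(-1, group_size)                        # split the row into groups
    scales = np.abs(groups).max(axis=1, keepdims=True) / 7.0  # one scale per group, INT4 range [-8, 7]
    q = np.clip(np.round(groups / scales), -8, 7).astype(np.int8)
    return q, scales

def dequantize(q: np.ndarray, scales: np.ndarray) -> np.ndarray:
    return (q.astype(np.float32) * scales).reshape(-1)

w = np.random.randn(4096).astype(np.float32)                  # one illustrative weight row
q, scales = quantize_int4_per_group(w, group_size=32)
w_hat = dequantize(q, scales)
print("mean abs error:", np.abs(w - w_hat).mean())            # smaller groups -> finer scales -> lower error
```

Smaller group sizes give each scale fewer weights to cover, which generally improves accuracy at the cost of storing more scales.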
LLM Runtime also exposes a Hugging Face Transformers-style Python API. The sample code below loads a model with the default quantization settings:

```python
from transformers import AutoTokenizer, TextStreamer
from intel_extension_for_transformers.transformers import AutoModelForCausalLM

model_name = "Intel/neural-chat-7b-v3-1"
prompt = "Once upon a time, there existed a little girl,"

tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
inputs = tokenizer(prompt, return_tensors="pt").input_ids
streamer = TextStreamer(tokenizer)

# load_in_4bit=True enables the default weight-only quantization
model = AutoModelForCausalLM.from_pretrained(model_name, load_in_4bit=True)
outputs = model.generate(inputs, streamer=streamer, max_new_tokens=300)
```

The default setting stores weights in 4 bits and performs computation in 8 bits. However, different combinations of compute data type (dtype) and weight data type are also supported, and users can modify the settings as needed. Sample code for this feature is provided below:
```python
from transformers import AutoTokenizer, TextStreamer
from intel_extension_for_transformers.transformers import AutoModelForCausalLM, WeightOnlyQuantConfig

model_name = "Intel/neural-chat-7b-v3-1"
prompt = "Once upon a time, there existed a little girl,"

# compute in INT8 while storing weights in INT4
woq_config = WeightOnlyQuantConfig(compute_dtype="int8", weight_dtype="int4")

tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
inputs = tokenizer(prompt, return_tensors="pt").input_ids
streamer = TextStreamer(tokenizer)

model = AutoModelForCausalLM.from_pretrained(model_name, quantization_config=woq_config)
outputs = model.generate(inputs, streamer=streamer, max_new_tokens=300)
```

Performance Test

After continuous optimization, the INT4 performance of the above scheme has improved significantly. This article compares its performance with llama.cpp on a system equipped with
a 4th Gen Intel® Xeon® Scalable processor, 256 GB total memory (16 x 16 GB DDR5 4800 MT/s), BIOS 3A14.TEL2P1, microcode 0x2b0001b0, and CentOS Stream 8. The inference performance test results are shown in the table below, with input size = 32, output size = 32, and beam = 1.
△ Table 1. Comparison of inference performance between LLM Runtime and llama.cpp (input size = 32, output size = 32, beam = 1)
The inference performance results for input size = 1024, output size = 32, and beam = 1 are detailed in the table below:
△ Table 2. Comparison of inference performance between LLM Runtime and llama.cpp (input size = 1024, output size = 32, beam = 1)
According to Table 2: compared with llama.cpp running on the same 4th Gen Intel® Xeon® Scalable processor, LLM Runtime significantly reduces latency for both the first token and subsequent tokens, improving first-token and next-token inference speed by up to 40x[a] (Baichuan-13B, input size 1024) and 2.68x[b] (MPT-7B, input size 1024), respectively. The llama.cpp tests use its default code base[10].
Combining the results in Table 1 and Table 2: compared with llama.cpp running on the same 4th Gen Intel® Xeon® Scalable processor, LLM Runtime significantly improves the overall performance of many common LLMs, achieving a 3.58x to 21.5x improvement when the input size is 1024, and a 1.76x to 3.43x improvement when the input size is 32[c].
Accuracy Test
Intel® Extension for Transformers can leverage quantization methods such as SignRound[11], RTN and GPTQ[12] from Intel® Neural Compressor, and the INT4 inference accuracy was verified using the lambada_openai, piqa, winogrande and hellaswag datasets. The table below compares the averaged test results with FP32 accuracy.
△Table 3. Accuracy comparison between INT4 and FP32
As can be seen from Table 3, the accuracy loss of INT4 inference with LLM Runtime is so small across multiple models that it can almost be ignored. We verified many models, but only some are listed here due to space limitations. For more information or details, please visit: https://medium.com/@NeuralCompressor/llm-performance-of-intel-extension-for-transformers-f7d061556176.
More advanced features: meeting the needs of LLM applications in more scenarios
LLM Runtime[8] also supports tensor parallelism across dual-socket CPUs, making it one of the first products with this capability. Dual-node support will be added in the future.
However, the advantage of LLM Runtime is not only its better performance and accuracy. We have also invested significant effort in enhancing its capabilities for chat applications and in addressing the following problems that LLMs encounter in chat scenarios:
- Dialogue involves more than a single round of LLM inference; dialogue history is also very useful.
- Limited output length: LLM pre-training is mostly performed with a limited sequence length, so accuracy degrades when the sequence length exceeds the attention window size used during pre-training.
- Inefficiency: during the decoding stage, a Transformer-based LLM stores the key-value (KV) states of all previously generated tokens, resulting in excessive memory usage and increased decoding latency. A rough estimate of this memory footprint is sketched below.
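To illustrate the third point, here is a back-of-the-envelope estimate of KV cache size. The model dimensions used are illustrative assumptions roughly matching a 7B-parameter LLaMA-style model, not measurements of any specific runtime.

```python
def kv_cache_bytes(n_layers: int, n_heads: int, head_dim: int,
                   seq_len: int, bytes_per_elem: int = 2, batch: int = 1) -> int:
    """Memory needed to keep K and V for every prompt/generated token (FP16 by default)."""
    return 2 * n_layers * n_heads * head_dim * seq_len * bytes_per_elem * batch

# Illustrative 7B-class configuration: 32 layers, 32 heads, head_dim 128.
for seq_len in (512, 2048, 8192):
    gib = kv_cache_bytes(32, 32, 128, seq_len) / 2**30
    print(f"seq_len={seq_len:5d}: ~{gib:.2f} GiB of KV cache")
# The cache grows linearly with sequence length, which motivates the Streaming LLM approach below.
```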
Regarding the first issue, LLM Runtime's dialogue functionality addresses it by incorporating more dialogue history data and generating more output, something llama.cpp is not yet well equipped to handle. A simplified sketch of how a multi-turn prompt can be assembled from dialogue history is shown below.
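The following is a minimal sketch of the general idea of carrying dialogue history into each generation request. The prompt template and role labels are illustrative assumptions, not the exact chat format used by LLM Runtime or the neural-chat model.

```python
from typing import List, Tuple

def build_chat_prompt(history: List[Tuple[str, str]], user_msg: str) -> str:
    """Concatenate previous (user, assistant) turns plus the new user message into one prompt."""
    parts = [f"User: {u}\nAssistant: {a}" for u, a in history]
    parts.append(f"User: {user_msg}\nAssistant:")
    return "\n".join(parts)

history = [("Hello, who are you?", "I am an AI assistant running on a Xeon CPU.")]
prompt = build_chat_prompt(history, "Can you summarize what LLM Runtime does?")
# `prompt` would then be tokenized and passed to model.generate(); as the history grows,
# so does the prompt length, which is why KV cache handling matters for chat.
```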
Regarding the second and third issues, we integrated Streaming LLM into Intel® Extension for Transformers, which can significantly optimize memory usage and reduce inference latency.
Streaming LLM
Unlike the traditional KV caching algorithm, our method combines an attention sink (the 4 initial tokens) to stabilize attention computation with a rolling KV cache that retains the most recent tokens, which is crucial for language modeling. The design is highly flexible and can be seamlessly integrated into autoregressive language models that use rotary position embedding (RoPE) or relative position encoding (ALiBi).
△ Figure 2. KV cache of Streaming LLM using attention sinks to implement an efficient streaming language model (image source: [13])
Moreover, unlike llama.cpp, this optimization adds parameters such as "n_keep" and "n_discard" to enhance the Streaming LLM strategy. Users can use the "n_keep" parameter to specify the number of tokens to keep in the KV cache, and the "n_discard" parameter to determine how many of the generated tokens to discard. To better balance performance and accuracy, by default the system discards half of the most recent tokens in the KV cache. A simplified sketch of this eviction policy follows.
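Below is a simplified, pure-Python sketch of the eviction policy described above, operating on token indices rather than the actual KV tensors. The parameter semantics (keep n_keep sink tokens, drop n_discard of the following tokens, or half of them when n_discard = -1) follow the description above; the details are an assumption, not LLM Runtime's exact implementation.

```python
from typing import List

def evict_kv(cache: List[int], n_ctx: int, n_keep: int = 4, n_discard: int = -1) -> List[int]:
    """Drop tokens from a full KV cache: keep the first n_keep 'attention sink' tokens,
    discard n_discard of the following tokens (half of them if n_discard == -1),
    and keep the most recent remainder."""
    if len(cache) < n_ctx:
        return cache                      # cache not full yet, nothing to do
    drop = n_discard if n_discard > 0 else (len(cache) - n_keep) // 2
    return cache[:n_keep] + cache[n_keep + drop:]

cache = list(range(16))                   # token positions 0..15, n_ctx = 16
print(evict_kv(cache, n_ctx=16, n_keep=4, n_discard=1))
# -> [0, 1, 2, 3, 5, 6, ..., 15]: sinks kept, one token dropped, recent tokens retained
```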
At the same time, to further improve performance, we also added Streaming LLM to the MHA fusion mode. If the model uses rotary position embedding (RoPE), then applying a "shift operation" to the existing K-cache is enough to avoid repeated computation on previously generated tokens that have not been discarded. This approach not only takes full advantage of the full context size when generating long text, but also incurs no additional overhead until the KV cache context is completely filled.
The "shift operation" relies on the commutativity and associativity of rotations, i.e., complex multiplication. For example, if a token's K-tensor was initially placed at position m and rotated by m×θi for i ∈ [0, d/2), then when it needs to move to position m-1 it can simply be rotated by (-1)×θi for i ∈ [0, d/2). This is exactly what happens each time n_discard tokens are dropped from the cache, at which point every remaining token needs to be "shifted" by n_discard positions. The figure below illustrates this process with n_keep=4, n_ctx=16 and n_discard=1 as an example.
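As a quick numerical check of this property, here is a standalone sketch that uses complex numbers to represent RoPE's 2-D rotations (not code from LLM Runtime): rotating an already-rotated key back by n_discard positions gives the same result as rotating the original key directly at its new position.

```python
import numpy as np

d, m, n_discard = 8, 10, 3                               # head dim (as pairs), original position, shift
theta = 10000.0 ** (-2 * np.arange(d // 2) / d)          # RoPE frequencies theta_i
k = np.random.randn(d // 2) + 1j * np.random.randn(d // 2)  # one key as d/2 complex pairs

k_at_m = k * np.exp(1j * m * theta)                      # key rotated for position m (as stored in cache)
shifted = k_at_m * np.exp(-1j * n_discard * theta)       # "shift operation": rotate back by n_discard
k_at_new = k * np.exp(1j * (m - n_discard) * theta)      # key rotated directly at position m - n_discard

print(np.allclose(shifted, k_at_new))                    # True: the shift is just one extra rotation
```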
△ Figure 3. How the ring-buffer KV cache and Shift-RoPE work
Note that the fused attention layer does not need to be aware of this process. If the K-cache and V-cache are shuffled in the same way, the attention layer will output almost the same result (there may be tiny differences due to floating-point error).
You can enable Streaming LLM with the following code:
```python
from transformers import AutoTokenizer, TextStreamer
from intel_extension_for_transformers.transformers import AutoModelForCausalLM, WeightOnlyQuantConfig

model_name = "Intel/neural-chat-7b-v1-1"   # Hugging Face model_id or local model
woq_config = WeightOnlyQuantConfig(compute_dtype="int8", weight_dtype="int4")
prompt = "Once upon a time, a little girl"

tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
inputs = tokenizer(prompt, return_tensors="pt").input_ids
streamer = TextStreamer(tokenizer)
model = AutoModelForCausalLM.from_pretrained(model_name, quantization_config=woq_config, trust_remote_code=True)

# Recommended: n_keep=4 to keep the attention sinks (four initial tokens) and
# n_discard=-1 to drop half of the most recent tokens when the length threshold is reached
outputs = model.generate(inputs, streamer=streamer, max_new_tokens=300,
                         ctx_size=100, n_keep=4, n_discard=-1)
```
Conclusion and Outlook
Based on the practices described above, this article presents a solution for efficient low-bit (INT4) LLM inference on Intel® Xeon® Scalable processors, verifies its generality on a range of common LLMs, and demonstrates its performance advantage over other CPU-based open-source solutions. In the future, we will further improve the CPU tensor library and cross-node parallel performance.
You are welcome to try Intel® Extension for Transformers[1] and run LLM inference more efficiently on Intel® platforms! You are also welcome to submit pull requests, issues or questions to the code repository. We look forward to your feedback!
Special Thanks
We would like to thank Intel senior AI manager 张瀚文 and engineers 许震中, 余振滔, 刘振卫, 丁艺, 王哲 and 刘宇澄 for their contributions to this article.
[a] Calculated from the first-token test results for Baichuan-13B in Table 2.
[b] Calculated from the next-token test results for MPT-7B in Table 2.
[c] When the input size is 1024, overall performance = first-token performance + 1023 × next-token performance; when the input size is 32, overall performance = first-token performance + 31 × next-token performance.