


This new work from Apple opens up enormous possibilities for running large models on future iPhones.
In recent years, large language models (LLMs) such as GPT-3, OPT, and PaLM have demonstrated strong performance across a wide range of natural language processing (NLP) tasks. Achieving this performance, however, requires substantial compute and memory at inference time, because these models can contain hundreds of billions or even trillions of parameters, making them difficult to load and run efficiently on resource-constrained devices.
The current standard solution is to load the entire model into DRAM for inference. However, this severely limits the maximum model size that can be run. For example, a model with 7 billion parameters requires more than 14GB of memory just to hold its parameters in half-precision floating point, which exceeds the capabilities of most edge devices.
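As a quick sanity check on that figure, here is the arithmetic; the 7-billion parameter count and half-precision width come from the text above, and the note about extra runtime buffers is our own assumption:

```python
# Rough memory estimate for holding a 7B-parameter model in half precision.
params = 7_000_000_000          # 7 billion parameters
bytes_per_param = 2             # fp16 / bf16 = 2 bytes each
weight_bytes = params * bytes_per_param
print(f"weights alone: {weight_bytes / 1e9:.1f} GB")   # ~14.0 GB
# KV cache, activations, and runtime buffers push the real footprint higher,
# which is why the article says "more than 14GB".
```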
To work around this limitation, Apple researchers propose storing the model parameters in flash memory, which offers at least an order of magnitude more capacity than DRAM. During inference, the required parameters are then loaded directly from flash on demand, eliminating the need to fit the entire model into DRAM.
This approach builds on recent work showing that LLMs exhibit a high degree of sparsity in their feedforward network (FFN) layers, with models such as OPT and Falcon reaching sparsity above 90%. The researchers therefore exploit this sparsity to selectively load from flash only those parameters that have non-zero inputs or are predicted to have non-zero outputs.
Paper address: https://arxiv.org/pdf/2312.11514.pdf
Specifically, the researchers construct a hardware-inspired cost model that covers flash memory, DRAM, and the compute cores (CPU or GPU). They then introduce two complementary techniques to minimize data transfer and maximize flash throughput:
Windowing: load parameters only for the most recent tokens and reuse activations from recently computed tokens. This sliding-window approach reduces the number of I/O requests needed to load weights;
Row-column bundling: store concatenated rows and columns of the up-projection and down-projection layers so that larger contiguous chunks of flash can be read at once. Reading larger blocks increases throughput (a minimal sketch of this layout follows below).
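To make the bundling idea concrete, here is a minimal NumPy sketch of a bundled on-disk layout, not the authors' implementation. It assumes an OPT-6.7B-like hidden size of 4096 (so one bundled neuron is 2 × 4096 × 4 bytes = 32 KiB in fp32) and uses a deliberately small FFN width so the demo file stays small; the file name is hypothetical.

```python
import numpy as np

d_model = 4096      # OPT-6.7B-like hidden size (assumed): one bundle = 2*4096*4 B = 32 KiB
d_ff = 512          # deliberately small FFN width so the demo file stays small

# One "neuron" owns one row of the up-projection and one row of the down-projection
# (stored here with one row per neuron for simplicity).
w_up = np.random.randn(d_ff, d_model).astype(np.float32)
w_down = np.random.randn(d_ff, d_model).astype(np.float32)

# Row-column bundling: lay each neuron's up row and down row back to back on disk,
# so fetching a neuron from flash is one contiguous 32 KiB read instead of two
# scattered 16 KiB reads.
bundled = np.concatenate([w_up, w_down], axis=1)   # shape (d_ff, 2 * d_model)
bundled.tofile("ffn_bundled.bin")                  # hypothetical file name

def load_neuron(path, idx, d_model=d_model, dtype=np.float32):
    """Read one bundled neuron (up row + down row) with a single seek + read."""
    row_bytes = 2 * d_model * dtype().itemsize
    with open(path, "rb") as f:
        f.seek(idx * row_bytes)
        buf = np.frombuffer(f.read(row_bytes), dtype=dtype)
    return buf[:d_model], buf[d_model:]

up_row, down_row = load_neuron("ffn_bundled.bin", 7)
assert np.allclose(up_row, w_up[7]) and np.allclose(down_row, w_down[7])
```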
To further reduce the number of weights transferred from flash to DRAM, the researchers predict FFN sparsity and avoid loading parameters that will be zero. By combining windowing with sparsity prediction, only about 2% of the FFN layer needs to be loaded from flash per inference query. They also propose static memory pre-allocation to minimize transfers within DRAM and reduce inference latency.
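Below is a hedged sketch of how windowing and sparsity prediction could fit together. The class name and interface are illustrative only; the per-token set of predicted-active neurons would come from the paper's sparsity predictor, which is not reproduced here.

```python
class ActiveNeuronWindow:
    """Track which FFN neurons are predicted active over the last k tokens.
    Only neurons newly entering the window are fetched from flash; neurons
    falling out of the window can be freed. (Sketch only, not the paper's code.)"""

    def __init__(self, window_size=5):
        self.window_size = window_size
        self.history = []        # per-token sets of predicted-active neuron ids
        self.in_dram = set()     # neurons currently resident in DRAM

    def step(self, predicted_active):
        # predicted_active would come from the sparsity predictor for the new token.
        self.history.append(set(predicted_active))
        if len(self.history) > self.window_size:
            self.history.pop(0)
        needed = set().union(*self.history)
        to_load = needed - self.in_dram   # only these trigger flash I/O
        to_free = self.in_dram - needed   # these rows can be dropped from DRAM
        self.in_dram = needed
        return to_load, to_free

# Example: with a window of 5 tokens, repeated neurons cost no additional I/O.
win = ActiveNeuronWindow(window_size=5)
print(win.step([3, 17, 42]))   # ({3, 17, 42}, set())
print(win.step([3, 99]))       # ({99}, set())  -> only neuron 99 is newly loaded
```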
The paper's flash-loading cost model strikes a balance between loading less data and reading larger blocks. A flash strategy that optimizes this cost model and selectively loads parameters on demand can run models up to twice the size of available DRAM, and improves inference speed by 4-5x on CPU and 20-25x on GPU compared with naive loading.
Some people commented that this work will make iOS development more interesting.
Flash Memory and LLM Inference
Bandwidth and Energy Limitations
Although modern NAND flash memory offers high bandwidth and low latency, it still falls short of DRAM performance, especially in memory-constrained systems. Figure 2a below illustrates these differences.
A naive inference implementation that relies on NAND flash may need to reload the entire model for each forward pass, a time-consuming process that can take several seconds even for a compressed model. In addition, transferring data from DRAM to CPU or GPU memory consumes further energy.
In scenarios where DRAM is sufficient, the cost of loading data drops and the model can reside in DRAM. However, the initial load still consumes energy, which matters when the first token must be produced quickly. The approach here exploits activation sparsity in the LLM to address these challenges: by selectively reading model weights, it reduces both time and energy costs.
Improving the Data Transfer Rate
Flash systems perform best with large sequential reads. For example, the Apple MacBook Pro M2 ships with 2TB of flash, and in benchmarks the sequential read speed for 1GiB of uncached data exceeds 6GiB/s. However, smaller random reads cannot reach this bandwidth because each request passes through multiple stages, including the operating system, drivers, intermediate processors, and the flash controller. Each stage adds latency, which has a disproportionately large impact on small reads.
To circumvent these limitations, researchers advocate two main strategies, which can be used simultaneously.
The first strategy is to read larger blocks of data. Although throughput does not grow linearly with block size (larger blocks take longer to transfer), the initial-byte latency becomes a smaller fraction of the total request time, making the read more efficient. Figure 2b illustrates this principle. A counter-intuitive but interesting observation is that, in some cases, reading more data than needed (in larger chunks) and then discarding the excess is faster than reading only what is needed in smaller chunks.
The second strategy is to exploit the inherent parallelism of the storage stack and flash controller to achieve parallel reads. The results show that it is possible to achieve throughput suitable for sparse LLM inference using multi-threaded random reads of 32KiB or larger on standard hardware.
The key to maximizing throughput lies in how the weights are stored, as a layout that increases the average block length can significantly increase bandwidth. In some cases, it may be beneficial to read and subsequently discard excess data, rather than splitting the data into smaller, less efficient chunks.
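The following sketch shows one way to measure multi-threaded random-read throughput at the 32 KiB granularity discussed above. The file path and thread count are placeholders, and the OS page cache is not bypassed, so a rigorous benchmark like the one in the paper would need more care.

```python
import os, random, time
from concurrent.futures import ThreadPoolExecutor

PATH = "model_weights.bin"    # placeholder: any large file resident on flash
CHUNK = 32 * 1024             # 32 KiB reads, the granularity cited above
N_READS, N_THREADS = 4096, 8

def read_chunk(offset):
    # Each worker opens its own file handle so reads can proceed in parallel;
    # Python threads release the GIL during file I/O.
    with open(PATH, "rb", buffering=0) as f:
        f.seek(offset)
        return len(f.read(CHUNK))

file_size = os.path.getsize(PATH)                       # assumes file_size >> CHUNK
offsets = [random.randrange(0, file_size - CHUNK) // CHUNK * CHUNK
           for _ in range(N_READS)]

start = time.perf_counter()
with ThreadPoolExecutor(max_workers=N_THREADS) as pool:
    total_bytes = sum(pool.map(read_chunk, offsets))
elapsed = time.perf_counter() - start

# Note: the OS page cache is not bypassed here, so repeated runs will overstate
# raw flash throughput; a rigorous benchmark would use O_DIRECT or equivalent.
print(f"{total_bytes / elapsed / 2**30:.2f} GiB/s with {N_THREADS} threads")
```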
Flash loading
Motivated by the challenges above, the researchers propose methods that reduce the volume of data transferred and increase the transfer rate, thereby improving inference speed. This section discusses the challenges of performing inference on devices where the available compute memory is much smaller than the model.
The analysis assumes that the complete model weights are stored in flash memory. The primary metric used to evaluate the various flash-loading strategies is latency, which is broken into three components: the I/O cost of the flash reads, the memory overhead of managing the newly loaded data, and the compute cost of the inference itself.
Apple divides solutions for reducing latency under memory constraints into three strategic areas, each targeting a specific aspect of latency:
1. Reduce the amount of data loaded: load less data to reduce the latency associated with flash I/O operations.
2. Optimize the data block size: improve flash throughput by increasing the size of the loaded data blocks, thereby reducing latency.
The strategies used by the researchers to increase the data block size and improve flash read efficiency are:
Bundle columns and rows
Co-activation-based bundling
3. Manage loaded data efficiently: streamline the management of data once it has been loaded into memory to minimize overhead.
Although moving data within DRAM is far more efficient than accessing flash, it still incurs a non-negligible cost. When data for new neurons is brought in, re-allocating matrices and appending new ones can cause significant overhead, because existing neuron data in DRAM has to be rewritten. This is especially costly when a large portion (roughly 25%) of the feedforward network (FFN) resident in DRAM must be rewritten.
To solve this problem, the researchers adopt a different memory-management strategy: all necessary memory is pre-allocated up front, and a corresponding data structure is established for efficient management. As shown in Figure 6, the data structure includes elements such as pointers, the matrix, offsets, the number of rows in use, and last_k_active.
Figure 6: Memory management. The last element is first copied over the deleted neuron to keep the memory block contiguous, and the newly required elements are then stacked at the end, which avoids copying the entire data multiple times.
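Below is a minimal sketch of the kind of pre-allocated structure Figure 6 describes, with swap-delete on removal and stack-style insertion at the end. The field names and types are approximations rather than the authors' actual data structure.

```python
import numpy as np

class PreallocatedNeuronCache:
    """Fixed-size DRAM buffer for FFN neuron rows. Deleting a neuron copies the
    last used row into the freed slot (so the used region stays contiguous);
    inserting appends at the end. Field names approximate Figure 6's structure."""

    def __init__(self, capacity, row_width):
        self.matrix = np.zeros((capacity, row_width), dtype=np.float16)  # allocated once
        self.num_used = 0
        self.slot_of = {}     # neuron id -> row index (the "pointer" table)
        self.id_at = {}       # row index -> neuron id

    def delete(self, neuron_id):
        slot = self.slot_of.pop(neuron_id)
        last = self.num_used - 1
        if slot != last:                        # move the last row into the hole
            self.matrix[slot] = self.matrix[last]
            moved_id = self.id_at[last]
            self.slot_of[moved_id] = slot
            self.id_at[slot] = moved_id
        del self.id_at[last]
        self.num_used -= 1

    def insert(self, neuron_id, row):
        assert self.num_used < len(self.matrix), "pre-allocated capacity exceeded"
        slot = self.num_used                    # stack new rows at the end
        self.matrix[slot] = row
        self.slot_of[neuron_id] = slot
        self.id_at[slot] = neuron_id
        self.num_used += 1
```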
Note that the compute itself is not the focus, since it is unaffected by the core contribution of this work. This separation lets the researchers concentrate on optimizing flash interaction and memory management to achieve efficient inference on memory-limited devices.
Experimental Results
OPT 6.7B Model Results
Predictor. As shown in Figure 3a, the predictor accurately identifies most activated neurons, but occasionally misidentifies non-activated neurons whose values are close to zero. Notably, eliminating these false-negative neurons with near-zero values does not significantly change the final output. Furthermore, as shown in Table 1, this level of prediction accuracy does not adversely affect the model's performance on zero-shot tasks.
Latency analysis. With a window size of 5, each token needs to access 2.4% of the feedforward network (FFN) neurons. For the 32-bit model, each read is a block of 2 × d_model × 4 bytes = 32 KiB, since it involves concatenated rows and columns. On the M1 Max, flash loading costs 125 milliseconds per token and memory management (deleting and adding neurons) costs 65 milliseconds, so the total memory-related latency is under 190 milliseconds per token (see Figure 1). By comparison, the baseline approach must load 13.4GB of data at 6.1GB/s, giving a latency of roughly 2330 milliseconds per token. The proposed method is therefore a large improvement over the baseline.
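Reproducing the arithmetic in this paragraph (all input values are taken from the text above):

```python
# Values below are taken from the paragraph above.
d_model = 4096                         # OPT 6.7B hidden size
block_bytes = 2 * d_model * 4          # one bundled column + row in 32-bit floats
print(block_bytes / 1024, "KiB")       # -> 32.0 KiB per read

flash_ms, mem_mgmt_ms = 125, 65
print(flash_ms + mem_mgmt_ms, "ms per token (memory-related)")   # -> 190 ms

baseline_bytes = 13.4e9                # naive approach reloads the whole model
bandwidth = 6.1e9                      # bytes/s sequential flash read
print(round(baseline_bytes / bandwidth * 1000), "ms baseline per token")
# -> ~2197 ms; the article quotes ~2330 ms, the gap coming from GB vs GiB conventions.
```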
For the 16-bit model on a machine with a GPU, flash load time drops to 40.5 milliseconds and memory management takes 40 milliseconds, slightly higher because of the extra overhead of transferring data from the CPU to the GPU. Even so, the baseline method's I/O time still exceeds 2000 milliseconds.
Table 2 provides a detailed comparison of the performance impact of each method.
Results for Falcon 7B model
Latency analysis. Using a window size of 4, each token needs to access 3.1% of the feedforward network (FFN) neurons. In the 32-bit model this corresponds to a block size of 35.5 KiB per read (2 × d_model × 4 bytes). On an M1 Max device, flash-loading this data takes about 161 milliseconds and memory management adds another 90 milliseconds, so the total latency per token is about 250 milliseconds. Compared with the baseline latency of roughly 2330 milliseconds, the method is approximately 9 to 10 times faster.