LLM Inference on Multiple GPUs with the Accelerate Library

Large language models (LLMs) have transformed natural language processing. As these models grow in size and complexity, the computational demands of inference grow significantly as well, and using multiple GPUs becomes essential to meet that challenge.


This article therefore walks through running inference on multiple GPUs in parallel. It covers an introduction to the Accelerate library, a simple approach with working code examples, and performance benchmarks across different numbers of GPUs.

We will use multiple RTX 3090s to scale llama2-7b inference across GPUs.


Basic Example

We start with a simple example that demonstrates multi-GPU "message passing" with Accelerate.

from accelerate import Accelerator
from accelerate.utils import gather_object

accelerator = Accelerator()

# each GPU creates a string
message = [f"Hello this is GPU {accelerator.process_index}"]

# collect the messages from all GPUs
messages = gather_object(message)

# output the messages only on the main process with accelerator.print()
accelerator.print(messages)

The output looks like this:

['Hello this is GPU 0', 'Hello this is GPU 1', 'Hello this is GPU 2', 'Hello this is GPU 3', 'Hello this is GPU 4']
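A script like this is not started with plain python; it is launched once per process, one process per GPU. One typical way to do that is the Accelerate CLI. The command below is only a sketch: it assumes 5 GPUs and a script saved as message.py, both of which are placeholders for your own setup.

accelerate launch --num_processes 5 message.py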

Multi-GPU Inference

Below is a simple, non-batched inference approach. The code stays short because the Accelerate library already does most of the heavy lifting for us, so we can use it directly:

from accelerate import Accelerator
from accelerate.utils import gather_object
from transformers import AutoModelForCausalLM, AutoTokenizer
from statistics import mean
import torch, time, json

accelerator = Accelerator()

# 10*10 Prompts. Source: https://www.penguin.co.uk/articles/2022/04/best-first-lines-in-books
prompts_all=[
    "The King is dead. Long live the Queen.",
    "Once there were four children whose names were Peter, Susan, Edmund, and Lucy.",
    "The story so far: in the beginning, the universe was created.",
    "It was a bright cold day in April, and the clocks were striking thirteen.",
    "It is a truth universally acknowledged, that a single man in possession of a good fortune, must be in want of a wife.",
    "The sweat wis lashing oafay Sick Boy; he wis trembling.",
    "124 was spiteful. Full of Baby's venom.",
    "As Gregor Samsa awoke one morning from uneasy dreams he found himself transformed in his bed into a gigantic insect.",
    "I write this sitting in the kitchen sink.",
    "We were somewhere around Barstow on the edge of the desert when the drugs began to take hold.",
] * 10

# load a base model and tokenizer
model_path="models/llama2-7b"
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    device_map={"": accelerator.process_index},
    torch_dtype=torch.bfloat16,
)
tokenizer = AutoTokenizer.from_pretrained(model_path)

# sync GPUs and start the timer
accelerator.wait_for_everyone()
start=time.time()

# divide the prompt list onto the available GPUs
with accelerator.split_between_processes(prompts_all) as prompts:
    # store output of generations in dict
    results=dict(outputs=[], num_tokens=0)

    # have each GPU do inference, prompt by prompt
    for prompt in prompts:
        prompt_tokenized=tokenizer(prompt, return_tensors="pt").to("cuda")
        output_tokenized = model.generate(**prompt_tokenized, max_new_tokens=100)[0]

        # remove prompt from output
        output_tokenized=output_tokenized[len(prompt_tokenized["input_ids"][0]):]

        # store outputs and number of tokens in results{}
        results["outputs"].append( tokenizer.decode(output_tokenized) )
        results["num_tokens"] += len(output_tokenized)

    results=[ results ] # transform to list, otherwise gather_object() will not collect correctly

# collect results from all the GPUs
results_gathered=gather_object(results)

if accelerator.is_main_process:
    timediff=time.time()-start
    num_tokens=sum([r["num_tokens"] for r in results_gathered ])

    print(f"tokens/sec: {num_tokens//timediff}, time {timediff}, total tokens {num_tokens}, total prompts {len(prompts_all)}")
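After gather_object(), results_gathered is a list with one results dict per GPU, collected in process order. As a small usage sketch (the variable names simply follow the code above), the generations can be flattened back into a single list on the main process:

# runs on the main process only, after gather_object()
if accelerator.is_main_process:
    outputs_all = [out for r in results_gathered for out in r["outputs"]]
    print(len(outputs_all))  # equals len(prompts_all), i.e. 100 here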

Using multiple GPUs introduces some communication overhead: in this particular setup, throughput scales roughly linearly up to 4 GPUs and then levels off. Performance depends on many parameters such as model size and quantization, prompt length, number of generated tokens, and sampling strategy, so the numbers below only describe the general picture (a quick efficiency calculation follows them):

1 GPU: 44 tokens/sec, time: 225.5 s
2 GPUs: 88 tokens/sec, time: 112.9 s
3 GPUs: 128 tokens/sec, time: 77.6 s
4 GPUs: 137 tokens/sec, time: 72.7 s
5 GPUs: 119 tokens/sec, time: 83.8 s
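To make the overhead visible, the measurements above can be turned into speedup and parallel efficiency relative to the single-GPU run. This is just arithmetic over the numbers already reported, not an extra benchmark:

# speedup and parallel efficiency, computed from the measured tokens/sec above
baseline = 44  # tokens/sec on 1 GPU
for gpus, tps in [(2, 88), (3, 128), (4, 137), (5, 119)]:
    speedup = tps / baseline
    efficiency = speedup / gpus
    print(f"{gpus} GPUs: {speedup:.2f}x speedup, {efficiency:.0%} efficiency")
# roughly 100% at 2 GPUs, ~78% at 4 GPUs, ~54% at 5 GPUs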


Batched Inference on Multiple GPUs

In practice, we can speed things up with batched inference. This reduces the communication between GPUs and speeds up inference. We only need to add a prepare_prompts function that feeds the model a batch of prompts at a time instead of a single prompt:

from accelerate import Accelerator
from accelerate.utils import gather_object
from transformers import AutoModelForCausalLM, AutoTokenizer
from statistics import mean
import torch, time, json

accelerator = Accelerator()

def write_pretty_json(file_path, data):
    import json
    with open(file_path, "w") as write_file:
        json.dump(data, write_file, indent=4)

# 10*10 Prompts. Source: https://www.penguin.co.uk/articles/2022/04/best-first-lines-in-books
prompts_all=[
    "The King is dead. Long live the Queen.",
    "Once there were four children whose names were Peter, Susan, Edmund, and Lucy.",
    "The story so far: in the beginning, the universe was created.",
    "It was a bright cold day in April, and the clocks were striking thirteen.",
    "It is a truth universally acknowledged, that a single man in possession of a good fortune, must be in want of a wife.",
    "The sweat wis lashing oafay Sick Boy; he wis trembling.",
    "124 was spiteful. Full of Baby's venom.",
    "As Gregor Samsa awoke one morning from uneasy dreams he found himself transformed in his bed into a gigantic insect.",
    "I write this sitting in the kitchen sink.",
    "We were somewhere around Barstow on the edge of the desert when the drugs began to take hold.",
] * 10

# load a base model and tokenizer
model_path="models/llama2-7b"
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    device_map={"": accelerator.process_index},
    torch_dtype=torch.bfloat16,
)
tokenizer = AutoTokenizer.from_pretrained(model_path)
tokenizer.pad_token = tokenizer.eos_token

# batch, left pad (for inference), and tokenize
def prepare_prompts(prompts, tokenizer, batch_size=16):
    batches=[prompts[i:i + batch_size] for i in range(0, len(prompts), batch_size)]
    batches_tok=[]
    tokenizer.padding_side="left"
    for prompt_batch in batches:
        batches_tok.append(
            tokenizer(
                prompt_batch,
                return_tensors="pt",
                padding='longest',
                truncation=False,
                pad_to_multiple_of=8,
                add_special_tokens=False).to("cuda")
            )
    tokenizer.padding_side="right"
    return batches_tok

# sync GPUs and start the timer
accelerator.wait_for_everyone()
start=time.time()

# divide the prompt list onto the available GPUs
with accelerator.split_between_processes(prompts_all) as prompts:
    results=dict(outputs=[], num_tokens=0)

    # have each GPU do inference in batches
    prompt_batches=prepare_prompts(prompts, tokenizer, batch_size=16)

    for prompts_tokenized in prompt_batches:
        outputs_tokenized=model.generate(**prompts_tokenized, max_new_tokens=100)

        # remove prompt from gen. tokens
        outputs_tokenized=[ tok_out[len(tok_in):]
            for tok_in, tok_out in zip(prompts_tokenized["input_ids"], outputs_tokenized) ]

        # count and decode gen. tokens
        num_tokens=sum([ len(t) for t in outputs_tokenized ])
        outputs=tokenizer.batch_decode(outputs_tokenized)

        # store in results{} to be gathered by accelerate
        results["outputs"].extend(outputs)
        results["num_tokens"] += num_tokens

    results=[ results ] # transform to list, otherwise gather_object() will not collect correctly

# collect results from all the GPUs
results_gathered=gather_object(results)

if accelerator.is_main_process:
    timediff=time.time()-start
    num_tokens=sum([r["num_tokens"] for r in results_gathered ])

    print(f"tokens/sec: {num_tokens//timediff}, time elapsed: {timediff}, num_tokens {num_tokens}")
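The detail worth noting in prepare_prompts is the left padding. Decoder-only models continue generating from the last position of each row, so padding on the right would leave pad tokens between a short prompt and its generation. A minimal standalone sketch of the difference, assuming the same tokenizer loaded above (the two sample strings are just placeholders):

# compare padding sides on a small batch (pad token already set to eos above)
sample = ["short prompt", "a somewhat longer example prompt"]

tokenizer.padding_side = "left"
left = tokenizer(sample, return_tensors="pt", padding="longest")
# pad tokens sit at the start of each row; the last position is always real text,
# so generate() picks up right where every prompt ends

tokenizer.padding_side = "right"
right = tokenizer(sample, return_tensors="pt", padding="longest")
# pad tokens sit at the end; generation would start after the padding instead

print(left["input_ids"])
print(right["input_ids"])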

As you can see, batching speeds things up considerably.

1 GPU: 520 tokens/sec, time: 19.2 s
2 GPUs: 900 tokens/sec, time: 11.1 s
3 GPUs: 1205 tokens/sec, time: 8.2 s
4 GPUs: 1655 tokens/sec, time: 6.0 s
5 GPUs: 1658 tokens/sec, time: 6.0 s
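Put against the non-batched run, most of the gain comes from batching itself: a single GPU already goes from 44 to 520 tokens/sec (roughly 12x), and 4 GPUs with batching reach about 1655/44 ≈ 38x the non-batched single-GPU baseline, while the step from 4 to 5 GPUs adds essentially nothing.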


Summary

As of this writing, llama.cpp and ctransformers do not support multi-GPU inference. llama.cpp appears to have had a multi-GPU merge back in June, but I have not seen it in an official release, so for now I am treating multi-GPU as unsupported there. If anyone can confirm that it works across multiple GPUs, please leave a comment.

Hugging Face's Accelerate package, on the other hand, gives us a very convenient way to use multiple GPUs. Multi-GPU inference can improve performance significantly, but the communication overhead between GPUs also grows noticeably as the number of GPUs increases.
