
ByteDance Model Large-Scale Deployment in Practice

1. Background introduction

At ByteDance, applications based on deep learning are blossoming everywhere. While engineers focus on model quality, they also need to pay attention to the consistency and performance of the online service. In the early days, this usually required algorithm experts and engineering experts to divide the work and cooperate closely, a model that carries relatively high costs for diff troubleshooting, verification, and so on.

With the popularity of the PyTorch/TensorFlow frameworks, deep learning model training and online inference have become unified: developers only need to focus on the specific algorithm logic and call the framework's Python API to complete the training and verification process, after which the model can be conveniently serialized and exported, with inference handled by a unified, high-performance C++ engine. This has improved the developer experience from training through deployment.
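For example, in PyTorch a trained model can be serialized with TorchScript and reloaded anywhere for inference, including from the C++ libtorch runtime. A minimal sketch with a placeholder model:

import torch

class TinyModel(torch.nn.Module):
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.relu(x)

# Serialize the model once training is done...
scripted = torch.jit.script(TinyModel())
scripted.save("model.jit")

# ...and reload it for inference (also loadable via the C++ libtorch API).
reloaded = torch.jit.load("model.jit")
print(reloaded(torch.ones(2, 2)))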

However, a complete service usually still contains a large amount of business logic such as pre-processing and post-processing. This kind of logic typically transforms the various raw inputs into Tensors that are fed into the model, and then converts the model's output Tensors into the target format. Some typical scenarios are as follows:

  • Bert
  • Resnet

[Figure: typical pre/post-processing pipelines for the Bert and Resnet scenarios]
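To make this concrete, here is a minimal sketch of the glue logic around a text classifier; the tokenizer, vocabulary, and label set are all hypothetical:

from typing import Dict, List

import torch

VOCAB: Dict[str, int] = {"hello": 0, "world": 1, "[UNK]": 2}
LABELS: List[str] = ["negative", "positive"]

def preprocess(text: str) -> torch.Tensor:
    # Convert raw text into a batch of token ids, shape [1, seq_len].
    ids = [VOCAB.get(w, VOCAB["[UNK]"]) for w in text.split()]
    return torch.tensor([ids], dtype=torch.long)

def postprocess(logits: torch.Tensor) -> str:
    # Map the model's output Tensor back to a human-readable label.
    return LABELS[int(logits.argmax(dim=-1).item())]

Historically this glue logic has been rewritten in C++ for serving, which is exactly where training/serving diffs creep in.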

Our goal is to provide an automated and unified training-and-inference solution for the above end-to-end process, alleviating problems such as manually developing the inference process and aligning diffs, and thereby achieving a unified large-scale deployment solution.

2. Core issues

Frameworks such as PyTorch/TensorFlow have largely solved the problem of unifying model training and inference, so the model computation itself does not suffer from a training/inference integration problem (operator performance optimization is beyond the scope of this discussion).

The core problem to be solved is: pre-processing and post-processing need a high-performance solution that integrates training and inference.

For this type of logic, TensorFlow 2.x provides tf.function (still incomplete) and PyTorch provides TorchScript, both of which, without exception, select a subset of native Python syntax. But even so, there are still problems that cannot be ignored:

  • Performance: These solutions are mostly implemented on virtual machines. The virtual-machine approach is flexible and very controllable, but the virtual machines inside deep learning frameworks usually do not perform well enough. The frameworks were originally designed for Tensor computing, where the cost of each operator is very high and the cost of virtual-machine dispatch and scheduling can be ignored; once ported to general programming-language workloads, however, that overhead is hard to ignore, and logic-heavy code becomes a performance bottleneck. According to our tests, the performance of the TorchScript interpreter is only about 1/5 of native Python, and tf.function performs even worse.
  • Incomplete functionality: When applied to real scenarios, we can still find many important features that tf.function/TorchScript do not support, for example: custom resources cannot be packaged, only built-in types can be serialized; strings are processed as bytes, so Unicode text such as Chinese produces diffs; containers must be homogeneous and custom types are not supported; and so on...
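As an illustration of the subset approach, TorchScript can compile a simple tokenizer, but anything outside the supported subset fails at compile time. A sketch (exact string/Unicode behavior should be verified against your framework version):

from typing import List

import torch

@torch.jit.script
def whitespace_tokenize(text: str) -> List[str]:
    # str methods and basic containers are inside the TorchScript subset.
    return text.split(" ")

print(whitespace_tokenize("hello world"))
# Custom resource classes, heterogeneous containers, or third-party
# objects generally fall outside the subset and fail to compile.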

Furthermore, there are many non-deep-learning tasks. For example, natural language processing still contains many non-deep-learning applications or subtasks, such as sequence labeling, language-model decoding, and handcrafted feature construction for tree models. These usually have more flexible feature paradigms, but at the same time lack a complete end-to-end integrated training-and-inference solution, still requiring a great deal of development and correctness-verification work.

To solve the above problems, we developed a compilation-based preprocessing solution: MATXScript!

3. MATXScript

In deep learning algorithm development, developers usually use Python for rapid iteration and experimentation, while using C++ to develop high-performance online services; correctness verification and service development then become a heavy burden!

MatxScript (https://github.com/bytedance/matxscript) is an AOT compiler for a Python sub-language that automatically translates Python into C++ and provides one-click packaging and publishing. Using MATXScript lets developers iterate on models quickly while deploying high-performance services at a lower cost.

The core architecture is as follows:

[Figure: MATXScript core architecture]

  • The lowest layer is a pure C++/CUDA basic library, developed by high-performance operator experts.
  • On top of the basic library, a Python library is wrapped according to conventions and can be used directly in the training process.
  • When inference is required, MATXScript translates the Python code into equivalent C++ code, compiles it into a dynamic link library, packages it together with the model and other dependent resources, and publishes the bundle.

Among these components, the compiler plays a critical role, and its core process is as follows:

[Figure: MATXScript compiler pipeline]

Through the above process, the preprocessing code written by the user is compiled into a JitOp in the Pipeline. To link the pre- and post-processing with the model, we also developed a tracing system (whose interface design references PyTorch's). Its architecture is as follows:

[Figure: tracing system architecture]

Based on MATXScript, we can use the same set of code for both training and inference, which greatly reduces the cost of model deployment. At the same time, the architecture and the algorithm are decoupled: algorithm engineers can work entirely in Python, while infrastructure engineers focus on compiler development and runtime optimization. At ByteDance, this solution has been validated by large-scale deployment!

4. A small test

Here we take the simplest English text preprocessing task as an example to show how to use MATXScript.

Goal: Convert a piece of English text into indexes

  1. Write basic dictionary lookup logic

from typing import Dict, List

class Text2Ids:
    def __init__(self) -> None:
        self.table: Dict[str, int] = {
            "hello": 0,
            "world": 1,
            "[UNK]": 2,
        }

    def lookup(self, word: str) -> int:
        return self.table.get(word, 2)

    def __call__(self, words: List[str]) -> List[int]:
        return [self.lookup(w) for w in words]
  2. Write the Pipeline

import matx

class WorkFlow:
    def __init__(self):
        # Compilation happens here: the Python code is automatically
        # compiled and wrapped into a Callable object.
        self.text2ids = matx.script(Text2Ids)()

    def process(self, texts):
        ids = self.text2ids(texts)
        return ids

# test
handler = WorkFlow()
print(handler.process("hello world unknown".split(' ')))
# output: [0, 1, 2]
  3. Trace and export to disk

# dump
mod = matx.trace(handler.process, "hello world".split(' '))
print(mod.run({"texts": "hello world".split(' ')}))
mod.save('./my_dir')

# load
mod = matx.load('./my_dir', -1)
print(mod.run({"texts": "hello world".split(' ')}))
  4. Load and run in C++

#include <iostream>
#include <map>
#include <string>
#include <unordered_map>
#include <vector>

#include <matxscript/pipeline/tx_session.h>

using namespace ::matxscript::runtime;

int main()
{
  // test case: feed a list of unicode words, matching the traced input
  std::unordered_map<std::string, RTValue> feed_dict;
  feed_dict.emplace("texts", List({Unicode(U"hello"), Unicode(U"world")}));
  const char* module_path = "./my_dir";
  const char* module_name = "model.spec.json";
  {
    // -1 means cpu
    auto sess = TXSession::Load(module_path, module_name, -1);
    auto result = sess->Run(feed_dict);
    for (auto& r : result) {
      std::cout << "key: " << r.first << ", value: " << r.second << std::endl;
    }
  }
  return 0;
}

For the complete code, see: https://github.com/bytedance/matxscript/tree/main/examples/text2ids

Summary: the above is a very simple piece of preprocessing logic implemented in pure Python that can be loaded and run by a piece of generic C++ code. Next, we combine it with a model to show a real end-to-end multimodal case!

5. Multi-modal case

Here we take image-text multimodality (Bert+Resnet) as an example, with the model written in PyTorch, to show the actual workflow for training and deployment.

  1. Configure the environment
    a. Configure gcc/CUDA and other infrastructure (usually already done by the ops team)
    b. Install MATXScript and the basic libraries developed on top of it (text, vision, etc.)
  2. Write the model code
    a. Omitted here; you can refer to papers or other open-source implementations
  3. Write the preprocessing code

a. text

from typing import List, Dict, Tuple
import libcut
import matx

class Vocabulary:
    ...

def utf8_decoder(s: List[bytes]):
    return [x.decode() for x in s]

class TextNDArrayBuilder:
    ...

class TextPipeline:
    def __init__(self, mode: str = "eval"):
        self.mode = mode
        self.cut_engine = libcut.Cutter('/path/to/cut_models', ...)
        self.vocab = matx.script(Vocabulary)('/path/to/vocab.txt')
        self.decoder = matx.script(utf8_decoder)
        self.input_builder = matx.script(TextNDArrayBuilder)(self.vocab)

    def process(self, text: List[bytes]):
        # List[bytes] corresponds to std::vector<std::string> on the C++ side
        text: List[str] = self.decoder(text)
        words: List[List[str]] = self.cut_engine(text)
        batch_ids: List[List[int]] = self.vocab(words)
        input_ids, segment_ids, mask_ids = self.input_builder(batch_ids, 32)
        if self.mode == "train":
            return input_ids.torch(), segment_ids.torch(), mask_ids.torch()
        return input_ids, segment_ids, mask_ids
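In training mode, TextPipeline can be called directly as ordinary Python (a usage sketch; the resource paths above are placeholders):

text_pipe = TextPipeline(mode="train")
input_ids, segment_ids, mask_ids = text_pipe.process(["hello world".encode()])
# In "train" mode all three outputs are torch Tensors, ready for the model.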

b. vision

from typing import List, Dict, Tuple
import matx
from matx import vision

class VisionPipeline:
    def __init__(self,
                 device_id: int = 0,
                 mode: str = "eval",
                 image_size: int = 224,):
        self.is_training = mode == 'train'
        self.mode = mode
        ...

    def process(self, image,):
        if self.is_training:
            decode_nds = self.random_crop_decode(image)
            flip_nds = self.random_flip(decode_nds)
            resize_nds = self.resize(flip_nds)
            transpose_nd = self.transpose_norm(resize_nds, vision.SYNC)
        else:
            decode_nds = self.decode(image)
            resize_nds = self.resize(decode_nds)
            crop_nds = self.center_crop(resize_nds)
            transpose_nd = self.transpose_norm(crop_nds, vision.SYNC)
        if self.mode == "trace":
            return transpose_nd
        return transpose_nd.torch()
  4. Connect to the DataLoader
    a. TextPipeline can be used as a normal Python class and plugged directly into a Dataset (a minimal sketch is shown after the export code below)
    b. VisionPipeline involves GPU preprocessing and is better suited to batch processing, so you need to construct a separate DataLoader yourself (a note here: we will open-source ByteDance's internal multi-threaded DataLoader later)
  5. Add the model code and start training
  6. Export the end-to-end inference model
class MultimodalEvalPipeline:
    def __init__(self):
        self.text_pipe = TextPipeline(mode="eval", ...)
        self.vision_pipe = VisionPipeline(mode="eval", ...)
        self.torch_model = torch.jit.load('/path/to/multimodal.jit', map_location='cuda:0')
        self.tx_model_op = matx.script(self.torch_model, device=0)

    def eval(self, texts: List[bytes], images: List[bytes]):
        input_ids, segment_ids, mask_ids = self.text_pipe.process(texts)
        images = self.vision_pipe.process(images)
        scores = self.tx_model_op(input_ids, segment_ids, mask_ids, images)
        return scores

# examples
example_batch_size = 8
text_examples = ['hello, world'.encode()] * example_batch_size
with open('/path/image.jpg', 'rb') as f:
    image_example = f.read()
image_examples = [image_example] * example_batch_size

# pipeline instance
pipe = MultimodalEvalPipeline(...)
mod = matx.trace(pipe.eval, text_examples, image_examples)

# test
print(mod.run({"texts": text_examples, "images": image_examples}))

# save
mod.save('/path/to/my_multimodal')
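For step 4 above, here is a minimal sketch of how TextPipeline could plug into a standard PyTorch Dataset; the data source and per-sample handling are hypothetical:

from typing import List

import torch
from torch.utils.data import DataLoader, Dataset

class TextDataset(Dataset):
    # Wraps TextPipeline as an ordinary map-style Dataset.
    def __init__(self, samples: List[bytes]):
        self.samples = samples
        self.text_pipe = TextPipeline(mode="train")

    def __len__(self) -> int:
        return len(self.samples)

    def __getitem__(self, idx: int):
        # Preprocess a single-element batch, then strip the batch dimension.
        input_ids, segment_ids, mask_ids = self.text_pipe.process([self.samples[idx]])
        return input_ids[0], segment_ids[0], mask_ids[0]

loader = DataLoader(TextDataset([b"hello world"] * 8), batch_size=4)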

Summary: after the above steps, we can complete the end-to-end training & release work, and the entire process is done in pure Python code that can be fully controlled by the algorithm engineers themselves. Of course, if the model computation itself has performance issues, it can also be optimized behind the scenes through automatic graph rewriting.

Note: For complete code examples, see https://github.com/bytedance/matxscript/tree/main/examples/e2e_multi_modal

6. Unified Server

In the previous chapter, we obtained a model package released by an algorithm engineer. This chapter discusses how to load and run it with a unified service.

A complete server includes: the IDL protocol, the batching strategy, thread scheduling and orchestration, model inference...

Here we discuss only model inference; the rest can be developed according to agreed conventions. We use a main function to illustrate how a model is loaded and run:

#include <iostream>
#include <map>
#include <string>
#include <unordered_map>
#include <vector>

#include <matxscript/pipeline/tx_session.h>

using namespace ::matxscript::runtime;

int main()
{
  // test case
  std::unordered_map<std::string, RTValue> feed_dict;
  feed_dict.emplace("texts", List({String("hello world")}));
  feed_dict.emplace("images", List({String("......")}));
  const char* module_path = "/path/to/my_multimodal";
  const char* module_name = "model.spec.json";
  {
    // cuda:0
    auto sess = TXSession::Load(module_path, module_name, 0);
    auto result = sess->Run(feed_dict);
    for (auto& r : result) {
      std::cout << "key: " << r.first << ", value: " << r.second << std::endl;
    }
  }
  return 0;
}

The above code is the simplest case of loading a multimodal model in C++. For engineers developing the Server, only some simple abstractions and conventions are needed to turn the code above into a unified C++ model-serving framework.

7. More information

We are the ByteDance AML (machine learning systems) team, committed to providing the company with a unified, high-performance framework that integrates training and inference, and to serving partner companies through the Volcano Engine Machine Learning Platform. The Volcano Engine Machine Learning Platform is expected to provide MATX-related support starting in 2023, including preset container images, public samples for common scenarios, and technical support during enterprise onboarding and use, enabling low-cost acceleration and integration of training and inference scenarios. Learn more about our products at https://www.volcengine.com/product/ml-platform.
