Let's talk about how to implement lightweight process pool and thread pool using Node
This article's discussion is oriented toward Node.js.
>> Show Me Code — the code currently lives on the dev branch; unit tests are complete, but not every scenario has been tested yet.
>> It is recommended to read through the official Node.js guide "Don't Block the Event Loop" first.
Node.js is server-side JavaScript. Thanks to its different host environment, it has more capabilities than JavaScript in the browser: full file-system access, network protocols and socket programming, process and thread operations, source-level support for C++ addons, binary Buffers, and built-in support for the Crypto suite.
Node.js runs JavaScript on a single thread and is built on the V8 engine. V8 was originally designed as an engine for parsing and running JavaScript in the browser, and its defining characteristic is being single-threaded. That design avoids a whole class of multi-threaded state-synchronization problems, making it lighter and easier to pick up.
1. Definitions
Academically speaking, a process is one dynamic execution of a program with some independent functionality over a data set; it is the operating system's independent unit of resource allocation and scheduling, and the carrier in which an application runs. Here we compare a process to a workshop in a factory: it represents a single task the CPU can handle. At any given moment the CPU runs one process while the others are in a non-running state.
A process has the following characteristics:
Early operating systems had no concept of threads: the process was the smallest unit that could own resources and run independently, and also the smallest unit of program execution. Task scheduling used preemptive time-slice round-robin, with the process as the smallest schedulable unit; each process had its own block of memory, so the address spaces of different processes were isolated from one another.
Later, as computers developed, demands on the CPU kept growing; switching between processes was expensive and could no longer satisfy increasingly complex programs. So the thread was invented. A thread is a single sequential flow of control within a running program — the smallest unit of the execution flow. Here we compare a thread to a worker in the workshop: one workshop can have several workers cooperating on a job, just as one process can contain several threads.
A thread has the following characteristics:
Node.js multi-processing helps make full use of CPU and other resources, while Node.js multi-threading improves the parallel processing capability of tasks within a single process.
In Node.js, each worker thread has its own V8 instance and event loop. Unlike processes, however, workers can share memory with each other.
2. The Node.js asynchronous mechanism
Node.js being single-threaded means the program's main execution thread is a single thread, and that main thread also runs the event loop. Internally, however, the runtime creates a thread pool to handle the asynchronous tasks generated by the main thread's network IO, file IO, timer, and similar calls. Timers are a good example: when you set a timer in Node.js, a timer thread does the timing; when it expires, the callback is placed into the macro-task queue on the main thread. Only after the event-loop system has finished the main thread's synchronous code and all micro tasks of the current phase is that callback finally taken out and executed. Node.js timers are therefore imprecise: they only guarantee that the callback is queued for execution at the scheduled time, not that it runs at that time.
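This imprecision is easy to demonstrate (the 300 ms busy-wait below is arbitrary):

```javascript
// Sketch: the 100 ms timer below fires late, because a synchronous loop
// keeps the main thread busy for ~300 ms and the event loop can only run
// the callback once control returns to it.
const start = Date.now();
let firedAfter = 0;

setTimeout(() => {
  firedAfter = Date.now() - start;
  console.log(`timer fired after ${firedAfter} ms`); // ~300 ms, not 100 ms
}, 100);

// Busy-wait: blocks the main thread, so the timer cannot fire on time.
while (Date.now() - start < 300) { /* spin */ }
```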
This internal multi-threading works together with the Node.js event-loop system so that developers can use asynchronous mechanisms — timers, IO, network requests — from a single thread. To achieve a highly responsive, high-performance server, the Node.js event loop additionally prioritizes among macro tasks.
Macro-task priority in Node.js: Timers > Pending > Poll > Check > Close.
Micro-task priority in Node.js: process.nextTick > Promise.
Before Node 11, the Node.js event loop did not, as the browser does, execute one macro task and then all micro tasks. Instead it executed a certain number of Timers macro tasks, then all micro tasks; then a certain number of Pending macro tasks, then all micro tasks; and likewise for the remaining Poll, Check, and Close phases. Since Node 11, it drains all micro tasks after each individual macro task.
Node.js macro tasks also have priorities among themselves. If the event loop ran every macro task of the current priority before moving on to the next priority, "starvation" could occur: with too many macro tasks in one phase, the next phase would never execute. Each type of macro task therefore has a cap on how many are executed per turn, and the remainder are handed over to subsequent event-loop iterations.
In practice this means: execute a capped number of Timers macro tasks, draining all micro tasks between each one; then execute a capped number of Pending Callback macro tasks, again draining all micro tasks between each one; and so on through the remaining phases.
3. Node.js multi-process
```javascript
const spawn = require('child_process').spawn;
const ls = spawn('ls', ['-lh', '/usr']);

ls.stdout.on('data', (data) => {
  console.log(`stdout: ${data}`);
});

ls.stderr.on('data', (data) => {
  console.log(`stderr: ${data}`);
});

ls.on('close', (code) => {
  console.log(`child process exited with code ${code}`);
});
```
```javascript
const { execFile } = require('child_process');

execFile('/path/to/node', ['--version'], function (error, stdout, stderr) {
  if (error) {
    throw error;
  }
  console.log(stdout);
});
```
```javascript
const { exec } = require('child_process');

exec('ls -al', function (error, stdout, stderr) {
  if (error) {
    console.error('error:' + error);
    return;
  }
  console.log('stdout:' + stdout);
  console.log('stderr:' + typeof stderr);
});
```
```javascript
const child_process = require('child_process');

const child = child_process.fork('./anotherSilentChild.js', { silent: true });

child.stdout.setEncoding('utf8');
child.stdout.on('data', function (data) {
  console.log(data);
});
```

Among these, spawn is the basis of all the methods, and exec calls execFile under the hood.
The Cluster module can create an HTTP service cluster. In the example below, the same JS file is used both by the primary and by the workers. Inside the file, cluster.isPrimary determines whether the current execution environment is the primary process or a child process: if it is the primary, the current file is used to fork worker instances; if it is a worker, it enters the worker's business-processing flow.
```javascript
/* Simple example: create a Cluster of child processes from the same JS file */
const cluster = require('node:cluster');
const http = require('node:http');
const numCPUs = require('node:os').cpus().length;
const process = require('node:process');

if (cluster.isPrimary) {
  console.log(`Primary ${process.pid} is running`);

  // Fork workers.
  for (let i = 0; i < numCPUs; i++) {
    cluster.fork();
  }

  cluster.on('exit', (worker, code, signal) => {
    console.log(`worker ${worker.process.pid} died`);
  });
} else {
  // Workers can share any TCP connection
  http.createServer((req, res) => {
    res.writeHead(200);
    res.end('hello world\n');
  }).listen(8000);

  console.log(`Worker ${process.pid} started`);
}
```
The Cluster module sets up one primary process and several child processes. It uses child_process.fork() internally to create the children implicitly, and the primary monitors, controls, and coordinates their operation. Clustering builds on the http/net built-in modules: the Node.js primary process listens on the target port and, on receiving a request, distributes it to a child process according to the load-balancing policy.
PM2 is a popular Node process-management tool. It provides application-management capabilities for Node.js apps such as automatic reload, performance monitoring, and load balancing. It is mainly used for process management of standalone applications and is a good fit for single-machine Node.js deployments. In production it can start multiple instances of the same application to raise CPU utilization and provide resilience, hot reload, and other capabilities.
Since it is an external library, install it with the npm package manager:
$: npm install -g pm2
pm2 can launch the project by running server.js directly, like so:
$: pm2 start server.js
This starts the Node.js application; on success you will see output like:
```
┌──────────┬────┬─────────┬──────┬───────┬────────┬─────────┬────────┬─────┬─────────┬───────┬──────────┐
│ App name │ id │ version │ mode │ pid   │ status │ restart │ uptime │ cpu │ mem     │ user  │ watching │
├──────────┼────┼─────────┼──────┼───────┼────────┼─────────┼────────┼─────┼─────────┼───────┼──────────┤
│ server   │ 0  │ 1.0.0   │ fork │ 24776 │ online │ 9       │ 19m    │ 0%  │ 35.4 MB │ 23101 │ disabled │
└──────────┴────┴─────────┴──────┴───────┴────────┴─────────┴────────┴─────┴─────────┴───────┴──────────┘
```
pm2 also supports starting from a configuration file; through the file ecosystem.config.js you can customize all of pm2's parameters:
```javascript
module.exports = {
  apps: [{
    name: 'API',              // application name
    script: 'app.js',         // entry script
    args: 'one two',          // command-line arguments
    instances: 1,             // number of instances to start
    autorestart: true,        // restart automatically
    watch: false,             // file-change watcher
    max_memory_restart: '1G', // restart when memory usage exceeds this
    env: {
      // default (development) environment variables
      // pm2 start ecosystem.config.js --watch --env development
      NODE_ENV: 'development'
    },
    env_production: {
      // custom production environment variables
      NODE_ENV: 'production'
    }
  }],
  deploy: {
    production: {
      user: 'node',
      host: '212.83.163.1',
      ref: 'origin/master',
      repo: 'git@github.com:repo.git',
      path: '/var/www/production',
      'post-deploy': 'npm install && pm2 reload ecosystem.config.js --env production'
    }
  }
};
```
The pm2 logs feature is also very powerful:
$: pm2 logs
The tasks we run on a computer generally fall into the following types:
CPU-intensive tasks: tasks containing heavy computation, with high CPU usage.
```javascript
// CPU-bound example (the loop bounds are illustrative): filling a large
// matrix keeps the CPU busy for the whole duration.
const matrix = {};
for (let i = 0; i < 1e3; i++) {
  matrix[i] = [];
  for (let j = 0; j < 1e3; j++) {
    matrix[i][j] = i * j;
  }
}
```
IO-intensive tasks: tasks with frequent, sustained network IO and disk IO calls.
```javascript
const { copyFileSync, constants } = require('fs');

// Blocking disk IO: copies the whole file before returning.
copyFileSync('big-file.zip', 'destination.zip');
```
Mixed tasks: both computation and IO.
1. When to use a process pool
The biggest value of a process pool is making full use of multi-core CPU resources while reducing the resource cost of creating and destroying child processes.
A process is the operating system's basic unit of resource allocation, so a multi-process architecture can obtain more CPU time, memory, and other resources. To handle CPU-sensitive scenarios and fully exploit multi-core CPUs, Node provides the child_process module for creating child processes.
Creating and destroying a child process carries a significant resource cost, so we pool the creation and destruction of child processes and let a process pool manage them all.
Beyond that, child processes are also the only way to execute binaries from Node.js, and Node.js can communicate with a child process via streams (stdin/stdout/stderr) or IPC.
Communicating via streams
```javascript
const { spawn } = require('child_process');
const ls = spawn('ls', ['-lh', '/usr']);

ls.stdout.on('data', (data) => {
  console.log(`stdout: ${data}`);
});

ls.stderr.on('data', (data) => {
  console.error(`stderr: ${data}`);
});

ls.on('close', (code) => {
  console.log(`child process exited with code ${code}`);
});
```
Communicating via IPC
```javascript
const cp = require('child_process');
const n = cp.fork(`${__dirname}/sub.js`);

n.on('message', (m) => {
  console.log('PARENT got message:', m);
});

n.send({ hello: 'world' });
```
2. When to use a thread pool
The biggest value of a thread pool is running tasks in parallel to take pressure off the main thread, while reducing the resource cost of creating and destroying threads. A single CPU-intensive computation does not run faster on a thread; on the contrary, thread creation, destruction, context switching, inter-thread communication, and data serialization all add extra overhead.
But if a program has many blocking tasks of the same type to execute, handing them to a thread pool can cut the total execution time severalfold, because multiple threads compute in parallel at the same time. If those tasks ran only on the main thread, the total time consumed would add up linearly, and a blocked main thread would also delay the handling of other tasks.
For a single-main-thread runtime like Node.js in particular, a main thread that spends too much time on such long-running tasks is fatal to the performance of the whole process instance. Operations that hog CPU time leave other tasks with too little CPU, or with CPU responses that come too late, driving the affected tasks into a "starvation" state.
After Node.js starts, the main thread should therefore act mostly as a scheduler: batches of heavy CPU-bound work should go to extra worker threads, with the main thread collecting the workers' results and returning them to the caller. IO, on the other hand, is already optimized and supported internally by Node.js, so IO operations should go straight to the main thread, which dispatches them to the internal thread pool.
Can Node.js's asynchrony solve the problem of executing too many CPU-hogging tasks?
The answer is no: too many asynchronous CPU-bound tasks will still block the event loop.
Node.js's asynchrony shines for network IO and disk IO, where the macro/micro-task system plus internal thread calls relieve the main process of execution pressure. But merely pushing a CPU-bound task onto the macro- or micro-task queue does nothing to speed it up; it is only an optimization of how tasks are scheduled.
We have merely delayed the task's execution, or split a huge task into several batches, but the work still ends up executing on the main thread. With too many such tasks, the benefit of slicing and deferring disappears entirely — one task may be fine, but what about ten, or a hundred? Quantity turns into quality.
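A sketch of task slicing with setImmediate shows both the benefit and the limitation (the summation workload is a stand-in for any divisible computation):

```javascript
// Sketch: split a long computation into slices with setImmediate so the
// event loop can service other callbacks between slices. This improves
// responsiveness but NOT total throughput — every slice still runs on
// the main thread, which is exactly the limitation described above.
function sumInChunks(total, chunkSize, done) {
  let sum = 0;
  let i = 0;
  function runChunk() {
    const end = Math.min(i + chunkSize, total);
    for (; i < end; i++) sum += i;         // one slice of the heavy work
    if (i < total) setImmediate(runChunk); // yield back to the event loop
    else done(sum);
  }
  runChunk();
}

sumInChunks(1e6, 1e4, (sum) => console.log('sum:', sum)); // 499999500000
```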
Here is the relevant passage from the official Node.js blog:
"If you need to do something more complex, partitioning is not a good option. This is because partitioning uses only the Event Loop, and you won't benefit from your machine's multiple cores. Remember, the Event Loop should orchestrate client requests, not fulfill them itself. For a complicated task, move the work off of the Event Loop onto a Worker Pool."
Scenario: intermittently paralyzing the main process
Every second, the main thread is occupied for half the time:
```javascript
// this task costs 100ms
function doHeavyTask() { /* ... */ }

setInterval(() => {
  doHeavyTask(); // 100ms
  doHeavyTask(); // 200ms
  doHeavyTask(); // 300ms
  doHeavyTask(); // 400ms
  doHeavyTask(); // 500ms
}, 1e3);
```
Scenario: high-frequency half-paralysis of the main process
Every 200 ms, the main thread is occupied for half the time:
```javascript
// this task costs 100ms
function doHeavyTask() { /* ... */ }

setInterval(() => { doHeavyTask(); }, 1e3);
setInterval(() => { doHeavyTask(); }, 1.2e3);
setInterval(() => { doHeavyTask(); }, 1.4e3);
setInterval(() => { doHeavyTask(); }, 1.6e3);
setInterval(() => { doHeavyTask(); }, 1.8e3);
```
Another excerpt from the official blog:
"You should make sure you never block the Event Loop. In other words, each of your JavaScript callbacks should complete quickly. This of course also applies to your await's and your Promise.then's."
A process pool is an application, or a body of program logic, that manages the creation of processes, the execution of their tasks, and their destruction. It is called a pool because it holds multiple process instances that continuously transition between states inside it; created instances can be reused rather than destroyed each time a batch of tasks finishes. Part of the point of a process pool, then, is to reduce the resource cost of process creation.
Beyond that, the pool's most important job is distributing tasks to the individual processes. Which process each task is dispatched to is decided by the pool's load-balancing computation, aiming for the highest CPU and memory utilization. Common load-balancing algorithms include:
"Fine-grained control over a single task matters less; what needs attention is each process's overall resource usage."
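As a sketch, the simplest common strategy — round-robin polling, the strategy used in the ChildProcessPool example below — fits in a few lines (the class and method names are illustrative, not the electron-re API; weighted or CPU/memory-based strategies would replace pickIndex() with a calculation over collected load metrics):

```javascript
// Sketch: round-robin (POLLING) load balancing. Each call returns the
// next target in sequence, wrapping around at the end of the list.
class RoundRobinBalancer {
  constructor(targets) {
    this.targets = targets; // e.g. child-process ids
    this.cursor = 0;
  }
  pickIndex() {
    const index = this.cursor;
    this.cursor = (this.cursor + 1) % this.targets.length;
    return index;
  }
  pick() {
    return this.targets[this.pickIndex()];
  }
}

const balancer = new RoundRobinBalancer(['pid-1', 'pid-2', 'pid-3']);
console.log(balancer.pick(), balancer.pick(), balancer.pick(), balancer.pick());
// pid-1 pid-2 pid-3 pid-1
```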
For the process-pool architecture diagram, refer to my earlier article on developing a process-management tool; this article only concerns the process-pool part.
main.js
```javascript
const path = require('path');
const { ChildProcessPool, LoadBalancer } = require('electron-re');

const processPool = new ChildProcessPool({
  path: path.join(__dirname, 'child_process/child.js'),
  max: 4,
  strategy: LoadBalancer.ALGORITHM.POLLING,
});
```
child.js
```javascript
const { ProcessHost } = require('electron-re');

ProcessHost
  .registry('test1', (params) => {
    console.log('test1');
    return 1 + 1;
  })
  .registry('test2', (params) => {
    console.log('test2');
    return new Promise((resolve) => resolve(true));
  });
```
```javascript
processPool.send('test1', { value: 'test1' }).then((result) => {
  console.log(result);
});
```
```javascript
processPool.sendToAll('test1', { value: 'test1' }).then((results) => {
  console.log(results);
});
```
1) Basic proxy principle:
2) How the client works with a single process:
3) How the client works with multiple processes:
The above describes the mode in which the client connects to a single node. The load-balancing mode of a node subscription group requires starting multiple child processes at once: each child runs the ss-local executable, occupies one local port, and connects to one remote server node.
The port each child picks at startup can vary, because some ports may already be taken by the system, so the program must first find an unused one. And the browser's proxy tool cannot connect to multiple ss-local services running on our locally started children at the same time. We therefore need an intermediate node on a fixed port that receives the proxy tool's connection requests and forwards the TCP traffic to each child's ss-local port according to some distribution rule.
I previously built a client supporting multi-file segmented upload over the SMB protocol; in one version, upload-task management, IO operations, and the rest of the Node.js side were all implemented with multiple processes — though that version never left a GitLab experimental branch.
To reduce the system overhead of CPU-intensive computation, Node.js introduced a new feature, worker_threads, which first appeared as an experimental feature in v10.5.0. With worker_threads, multiple threads can be created within one process; the main thread and worker threads communicate through parentPort, and worker threads can talk to each other directly through a MessageChannel. As an important feature for working with threads, worker_threads became usable in production in the stable v12.11.0 release.
Creating a thread, however, costs additional CPU and memory. A thread that will be used repeatedly should be kept around, and a thread that is no longer needed should be closed promptly to reduce memory usage. Imagine creating a thread whenever we need one and destroying it right after use: the cost of creating and destroying it may well exceed whatever the thread saved us. Node.js does use a thread pool internally, but it is completely transparent and invisible to developers. Hence the value of wrapping a thread-pool utility that manages the thread life cycle.
To improve the scheduling of many asynchronous tasks, the thread pool maintains not only threads but also a task queue. Without one, a request sent to the pool to run an asynchronous task when no thread is idle would simply be discarded — obviously not the desired effect.
So we add task-queue scheduling logic to the pool: when no thread is idle, the task is placed in a pending queue (FIFO); at some opportunity the pool takes a task out and has an idle thread execute it; when execution completes, the asynchronous callback fires and the result is returned to the caller. The number of tasks in the queue should be capped, however, to prevent excessive load on the pool from degrading the overall performance of the Node.js application.
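A bounded FIFO queue of the kind described can be sketched as follows (the names are illustrative, not the actual electron-re internals):

```javascript
// Sketch: bounded FIFO task queue. Tasks are queued when no thread is
// idle; enqueueing beyond maxTasks throws, so the caller learns
// immediately that the task was rejected.
class TaskQueue {
  constructor(maxTasks) {
    this.maxTasks = maxTasks;
    this.tasks = [];
  }
  enqueue(task) {
    if (this.tasks.length >= this.maxTasks) {
      throw new Error('task queue is full');
    }
    this.tasks.push(task); // FIFO: append at the tail
  }
  dequeue() {
    return this.tasks.shift(); // take from the head
  }
  get size() {
    return this.tasks.length;
  }
}

const queue = new TaskQueue(2);
queue.enqueue('task-1');
queue.enqueue('task-2');
console.log(queue.dequeue()); // task-1
```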
"It is important to control each single task; there is no need to watch the resource usage of a single thread."
Callers dispatch tasks to the thread pool through StaticPool/StaticExcutor/DynamicPool/DynamicExcutor instances (the key terms are explained below). The biggest difference among these instances is how dynamic their execution parameters can be.
Tasks are generated inside the thread pool. Once generated, a task acts as the main carrier throughout its journey: on one hand it holds the computation parameters passed in by the user, and on the other it records state changes along the way — task status, start time, end time, task ID, retry count, whether retries are supported, task type, and so on.
After a task is generated, first check whether the number of threads in the pool has reached the upper limit. If not, create a new thread, put it into the thread store, and use that thread to execute the current task directly.
If the thread count is at the limit, check whether any idle thread is not executing a task; once an idle thread is found, use it to execute the current task directly.
If there is no idle thread, check whether the pending task queue is full. If it is, throw an error so the caller learns immediately that the task was not executed.
If the queue is not full, put the task into it to await the task-loop system, which will later take it out and execute it.
After the task executes in any of the three cases above (steps 4/5/6), check whether it succeeded. On success, the success callback fires and the Promise is fulfilled. On failure, check whether retries are supported: if so, the task's retry counter is incremented and the task is placed back at the tail of the queue; if not, it fails outright, the failure callback fires, and the Promise is rejected.
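The success/retry/reject decision in this step can be sketched as follows (the function and field names, and the returned strings, are illustrative):

```javascript
// Sketch: result handling for one executed task. A failed task that
// still has retries left goes back to the tail of the queue; otherwise
// its promise is rejected. A successful task fulfils its promise.
function handleTaskResult(task, queue, error, result) {
  if (!error) {
    task.resolve(result);   // fulfil the caller's promise
    return 'resolved';
  }
  if (task.retriesLeft > 0) {
    task.retriesLeft -= 1;
    queue.push(task);       // back to the end of the task queue
    return 'requeued';
  }
  task.reject(error);       // out of retries: fail the task
  return 'rejected';
}
```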
Throughout the thread pool's life cycle a task-loop system runs: at a fixed periodic frequency it takes tasks from the head of the queue, fetches an idle thread from the thread store, and executes the task on that thread — the flow again following the description in step 7.
Besides taking tasks out for execution, if the pool has a task timeout configured, the task-loop system also checks whether any running task has timed out; on timeout, all code running on that task's thread is terminated.
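The timeout check can be sketched with Promise.race (the worker and task objects here are illustrative stand-ins, not the electron-re API):

```javascript
// Sketch: enforce a task timeout by racing the task's promise against a
// timer. On timeout the worker is terminated, which stops all code
// running on that thread, and the task's promise rejects.
function execWithTimeout(worker, taskPromise, timeoutMs) {
  let timer;
  const timeout = new Promise((_, reject) => {
    timer = setTimeout(() => {
      worker.terminate(); // kill everything running on the thread
      reject(new Error('task timed out'));
    }, timeoutMs);
  });
  return Promise.race([taskPromise, timeout])
    .finally(() => clearTimeout(timer)); // don't leave the timer pending
}
```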
- StaticThreadPool: created with fixed execFunction/execString/execFile execution parameters used to start the worker threads; the execution parameters cannot be changed after the pool is created.
- StaticExcutor: created from a StaticThreadPool; it inherits the pool's execFunction/execString/execFile and cannot change them, while other parameters — task timeout, task retry count, transferList, and so on — can be changed at any time through its API.
- DynamicThreadPool: can be created with execFunction/execString/execFile left as null; the execution parameters are passed in dynamically when exec() is called, so they need not be fixed.
- DynamicExcutor: created from a DynamicThreadPool; its execFunction/execString/execFile can be changed at any time.

All of them are built on the worker_threads API.

main.js
```javascript
const path = require('path');
const { StaticThreadPool } = require('electron-re');

const threadPool = new StaticThreadPool({
  execPath: path.join(__dirname, './worker_threads/worker.js'),
  lazyLoad: true,    // create threads lazily
  maxThreads: 24,    // maximum number of threads
  maxTasks: 48,      // maximum number of queued tasks
  taskRetry: 1,      // task retry count
  taskLoopTime: 1e3, // task-loop polling interval
});

const executor = threadPool.createExecutor();
```
worker.js
```javascript
const fibonaccis = (n) => {
  if (n < 3) return 1;
  return fibonaccis(n - 1) + fibonaccis(n - 2);
};

module.exports = (value) => {
  return fibonaccis(value);
};
```
```javascript
threadPool.exec(15).then((res) => {
  console.log(+res.data === 610);
});

executor
  .setTaskRetry(2)     // does not affect the pool's global setting
  .setTaskTimeout(2e3) // does not affect the pool's global setting
  .exec(15)
  .then((res) => {
    console.log(+res.data === 610);
  });
```
```javascript
const { DynamicThreadPool } = require('electron-re');

const threadPool = new DynamicThreadPool({
  maxThreads: 24, // maximum number of threads
  maxTasks: 48,   // maximum number of queued tasks
  taskRetry: 1,   // task retry count
});

const executor = threadPool.createExecutor({
  execFunction: (value) => {
    return 'dynamic:' + value;
  },
});

threadPool.exec('test', {
  execString: `module.exports = (value) => { return 'dynamic:' + value; };`,
});

executor.exec('test');

executor
  .setExecPath('/path/to/exec-file.js')
  .exec('test');
```
I have not yet used this in a real project; CPU-intensive front-end work such as image pixel processing or audio/video transcoding would be good places to put it into practice.
There is an article describing some application scenarios for web workers; web workers and worker_threads are similar — only the host environment, and with it some permissions and capabilities, differ.
The project began as a toolset for Electron application development, providing BrowserService, ChildProcessPool, a simple process-monitoring UI, inter-process communication, and other functions. The thread pool was actually not planned at the start; it is self-contained, does not depend on the other modules in electron-re, and should be split out on its own in the future.
The process-pool and thread-pool implementations still have room for improvement.
For example, the process pool does not support letting an idle child process exit automatically to release the resources it occupies. At one point another version monitored task-execution status through ProcessHost in order to put idle children to sleep and save resources that way; but since there is no Node.js API-level support for telling when a child process is idle, and the sleep/wake capability for a child process turned out to be of limited use (there were attempts to achieve it by sending SIGSTOP/SIGCONT signals to the child), the feature was eventually dropped.
A CPU/memory load-balancing algorithm could be supported later; resource-usage collection is already implemented in the project through the ProcessManager module.
The thread pool is comparatively usable already. It offers two levels of call management, pool and excutor, supports chained calls, and in scenarios where data-transfer performance matters it supports the transferList option to avoid cloning data. Compared with other open-source Node thread-pool solutions, it focuses on strengthening the task queue, with features such as task retry and task timeout.