# Detailed explanation of some overlooked usage of Buffer in Node.js
There are plenty of articles online about Buffer usage in Node.js, but I feel they are still not detailed enough, so this article introduces some usage of Buffer in Node.js that you may not know about. The introduction is fairly detailed; friends who need it can refer to it. Let's take a look.
## Preface
Most articles introducing Buffer focus on two things: data concatenation and memory allocation. For example, when using the fs module to read the contents of a file, a Buffer is returned:

```js
fs.readFile('filename', function (err, buf) {
  // <Buffer 2f 2a 2a 0a 20 2a 20 53 75 ... >
});
```

When receiving network data with the net or http module, the argument of the data event is also a Buffer, and we typically use Buffer.concat() to join the chunks:

```js
var bufs = [];
conn.on('data', function (buf) {
  bufs.push(buf);
});
conn.on('end', function () {
  // After the data has been received, concatenate all received Buffer objects
  var buf = Buffer.concat(bufs);
});
```

We can also use Buffer.toString() to convert to and from base64 or hexadecimal strings, for example:

```js
console.log(new Buffer('hello, world!').toString('base64'));
// Convert to a base64 string: aGVsbG8sIHdvcmxkIQ==
console.log(new Buffer('aGVsbG8sIHdvcmxkIQ==', 'base64').toString());
// Restore from the base64 string: hello, world!
console.log(new Buffer('hello, world!').toString('hex'));
// Convert to a hexadecimal string: 68656c6c6f2c20776f726c6421
console.log(new Buffer('68656c6c6f2c20776f726c6421', 'hex').toString());
// Restore from the hexadecimal string: hello, world!
```

Generally, a single Node.js process has a maximum memory limit. Here is the explanation from the official documentation:

> What is the memory limit on a node process? Currently, by default v8 has a memory limit of 512MB on 32-bit systems, and 1.4GB on 64-bit systems. The limit can be raised by setting --max_old_space_size to a maximum of ~1024 (~1GB) (32-bit) and ~4096 (~4GB) (64-bit), but it is recommended that you split your single process into several workers if you are hitting memory limits.

Since the memory occupied by Buffer objects is not counted against the V8 heap limit of the Node.js process, we often use Buffer to hold data that requires a large amount of memory:
```js
// Allocate 1G-1 bytes of data
// (allocating more than this in a single call throws
// RangeError: Invalid typed array length)
var buf = new Buffer(1024 * 1024 * 1024 - 1);
```
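As a side note, you can observe this yourself: in newer versions of Node.js, process.memoryUsage() reports Buffer memory under the external field (and within rss), while heapUsed barely moves. A minimal sketch (the exact numbers will vary by machine and Node version):

```js
var before = process.memoryUsage();

// Allocate 100MB; this memory lives outside the V8 heap
var big = new Buffer(100 * 1024 * 1024);

var after = process.memoryUsage();
console.log('heapUsed growth:', after.heapUsed - before.heapUsed); // small
console.log('external growth:', after.external - before.external); // roughly 100MB
```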
These are the common uses of Buffer. However, reading through the Buffer API documentation, we find many more methods whose names start with readXXX() and writeXXX(), for example:
- buf.readDoubleBE(offset[, noAssert])
- buf.write(string[, offset][, length][, encoding])
- buf.writeDoubleLE(value, offset[, noAssert])
- buf.writeDoubleBE(value, offset[, noAssert])
These methods read or write a number of the given type at a specified offset within the Buffer, in either big-endian (BE) or little-endian (LE) byte order. For example, writeUIntBE() and readUIntBE() handle unsigned integers of up to 6 bytes (48 bits):

```js
var buf = new Buffer(6);
buf.writeUIntBE(1447656645380, 0, 6);
// <Buffer 01 51 0f 0f 63 04>
buf.readUIntBE(0, 6);
// 1447656645380
```
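Incidentally, the BE/LE suffixes decide the byte order in which a value is stored; as a quick illustration, the same 16-bit value produces mirrored bytes:

```js
var buf = new Buffer(2);

buf.writeUInt16BE(0x0102, 0); // big-endian: most significant byte first
console.log(buf); // <Buffer 01 02>

buf.writeUInt16LE(0x0102, 0); // little-endian: least significant byte first
console.log(buf); // <Buffer 02 01>
```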
## Working with structured data
Suppose we have a database of students' exam results, where each record has the following structure:
| Student ID | Course code | Score |
| --- | --- | --- |
| XXXXXX | XXXX | XX |
The student ID is a 6-digit number, the course code is a 4-digit number, and the highest score is 100.
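These value ranges are what make the compact binary layout shown further below possible; a quick sanity check:

```js
// Maximum values representable by the byte widths used in the binary layout below
console.log(Math.pow(2, 8 * 3) - 1); // 16777215 -> a 6-digit ID (max 999999) fits in 3 bytes
console.log(Math.pow(2, 8 * 2) - 1); // 65535    -> a 4-digit code (max 9999) fits in 2 bytes
console.log(Math.pow(2, 8 * 1) - 1); // 255      -> a score of at most 100 fits in 1 byte
```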
If we store this data as text, for example in CSV format, it might look like this:
```
100001,1001,99
100002,1001,67
100003,1001,88
```
Each record occupies 15 bytes of space. Stored in binary, the structure would instead be:
| Student ID | Course code | Score |
| --- | --- | --- |
| 3 bytes | 2 bytes | 1 byte |
Each record now needs only 6 bytes of space, just 40% of the text version! Below is a program to read and write these records:
```js
// Write a record
// buf     Buffer object
// offset  start position of this record within the Buffer
// data    {number, lesson, score}
function writeRecord (buf, offset, data) {
  buf.writeUIntBE(data.number, offset, 3);
  buf.writeUInt16BE(data.lesson, offset + 3);
  buf.writeInt8(data.score, offset + 5);
}

// Read a record
// buf     Buffer object
// offset  start position of this record within the Buffer
function readRecord (buf, offset) {
  return {
    number: buf.readUIntBE(offset, 3),
    lesson: buf.readUInt16BE(offset + 3),
    score: buf.readInt8(offset + 5)
  };
}

// Write a list of records
// list  record list, each item containing {number, lesson, score}
function writeList (list) {
  var buf = new Buffer(list.length * 6);
  var offset = 0;
  for (var i = 0; i < list.length; i++) {
    writeRecord(buf, offset, list[i]);
    offset += 6;
  }
  return buf;
}

// Read a list of records
// buf  Buffer object
function readList (buf) {
  var offset = 0;
  var list = [];
  while (offset < buf.length) {
    list.push(readRecord(buf, offset));
    offset += 6;
  }
  return list;
}
```
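To see how a single record maps to bytes, take {number: 100001, lesson: 1001, score: 99}: 100001 is 0x0186a1 (3 bytes), 1001 is 0x03e9 (2 bytes), and 99 is 0x63 (1 byte), so the record serializes to <Buffer 01 86 a1 03 e9 63> — exactly the first 6 bytes of the output below.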
We can write a short program to see it in action:
```js
var list = [
  {number: 100001, lesson: 1001, score: 99},
  {number: 100002, lesson: 1001, score: 88},
  {number: 100003, lesson: 1001, score: 77},
  {number: 100004, lesson: 1001, score: 66},
  {number: 100005, lesson: 1001, score: 55}
];
console.log(list);

var buf = writeList(list);
console.log(buf);
// Output: <Buffer 01 86 a1 03 e9 63 01 86 a2 03 e9 58 01 86 a3 03 e9 4d 01 86 a4 03 e9 42 01 86 a5 03 e9 37>

var ret = readList(buf);
console.log(ret);
/* Output:
[ { number: 100001, lesson: 1001, score: 99 },
  { number: 100002, lesson: 1001, score: 88 },
  { number: 100003, lesson: 1001, score: 77 },
  { number: 100004, lesson: 1001, score: 66 },
  { number: 100005, lesson: 1001, score: 55 } ]
*/
```
## Introducing the lei-proto module
In the example above, whenever the structure of a record changes, we have to modify readRecord() and writeRecord() and recalculate the offset of every field within the Buffer, which is error-prone once the record has more than a few fields. To solve this, I wrote the lei-proto module, which generates the corresponding readRecord() and writeRecord() functions from a simple definition of the record structure.
First, install the module with the following command:
```
$ npm install lei-proto --save
```
Using the lei-proto module, the earlier example can be rewritten like this:
```js
var parsePorto = require('lei-proto');

// Generate an encoder/decoder for the given record structure
var record = parsePorto([
  ['number', 'uint', 3],
  ['lesson', 'uint', 2],
  ['score', 'uint', 1]
]);

function readList (buf) {
  var list = [];
  var offset = 0;
  while (offset < buf.length) {
    list.push(record.decode(buf.slice(offset, offset + 6)));
    offset += 6;
  }
  return list;
}

function writeList (list) {
  return Buffer.concat(list.map(record.encodeEx));
}
```
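Based only on the calls used above, a single-record round trip looks roughly like this (a sketch; the exact return shapes are defined by lei-proto, so consult its documentation):

```js
// encodeEx() takes a record object and returns a Buffer, as used in writeList()
var one = record.encodeEx({number: 100001, lesson: 1001, score: 99});
console.log(one); // expected: <Buffer 01 86 a1 03 e9 63>

// decode() takes a Buffer and returns a record object, as used in readList()
console.log(record.decode(one)); // expected: { number: 100001, lesson: 1001, score: 99 }
```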
Running the same test program as before shows the same result:
```
<Buffer 01 86 a1 03 e9 63 01 86 a2 03 e9 58 01 86 a3 03 e9 4d 01 86 a4 03 e9 42 01 86 a5 03 e9 37>
[ { number: 100001, lesson: 1001, score: 99 },
  { number: 100002, lesson: 1001, score: 88 },
  { number: 100003, lesson: 1001, score: 77 },
  { number: 100004, lesson: 1001, score: 66 },
  { number: 100005, lesson: 1001, score: 55 } ]
```
## Summary

Buffer is useful for more than concatenating network data and allocating large blocks of memory: its readXXX() and writeXXX() methods make it straightforward to read and write structured binary data, which in the example above cut storage to 40% of the equivalent CSV text, and a module such as lei-proto can generate the encoding/decoding functions from a simple structure definition.