How do you compress content with Node.js? This article walks through implementing HTTP content compression (gzip/br/deflate) on the Node side in practice. I hope it helps!
While checking my application's logs, I noticed that the log page always took several seconds to load (the interface is not paginated), so I opened the network panel to investigate. Only then did I discover that the data returned by the interface was not compressed. I had assumed that, since the interface sits behind an Nginx reverse proxy, Nginx would handle that layer automatically (I will explore this later; it is theoretically feasible). The backend here is a Node service.

This article shares related knowledge about, and practice of, HTTP data compression on the Node side.
In what follows, "client" refers to the browser.
When the client initiates a request to the server, it adds the accept-encoding field to the request headers; its value indicates the content-encoding formats (compression algorithms) the client supports. After compressing the response body, the server tells the browser which algorithm was actually used by setting the content-encoding response header.
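As a sketch of this negotiation (the header value below is illustrative, not taken from the article), the server can parse accept-encoding and pick the first algorithm it implements:

```javascript
// Hypothetical accept-encoding value; real browsers send something similar.
const acceptEncoding = 'gzip, deflate, br'

// Extract the encodings the client supports (ignoring quality values like ";q=0.8").
const supported = acceptEncoding.split(',').map(s => s.trim().split(';')[0])

// Server-side preference order: pick the first algorithm the client also supports.
const chosen = ['br', 'gzip', 'deflate'].find(alg => supported.includes(alg))

console.log(chosen) // 'br' — this value would be sent back as content-encoding
```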
deflate/gzip/br

deflate is a lossless data compression algorithm that uses both the LZ77 algorithm and Huffman coding.

gzip is an algorithm based on DEFLATE.

br refers to Brotli, a data format aiming at a further improved compression ratio: on text it can increase compression density by about 20% relative to deflate, while compression and decompression speeds remain roughly unchanged.

Node provides the zlib module, which exposes compression implemented with Gzip, Deflate/Inflate, and Brotli. Here we take gzip as an example to list the various usage methods by scenario; Deflate/Inflate and Brotli are used in the same way, just with different APIs.
The examples below cover both stream-based and buffer-based operations.

First, require the needed modules:
<pre class="brush:js;toolbar:false;">const zlib = require('zlib')
const fs = require('fs')
const stream = require('stream')

const testFile = 'tests/origin.log'
const targetFile = `${testFile}.gz`
const decodeFile = `${testFile}.un.gz`</pre>
Compressing/decompressing files

The du command is used here to directly check the sizes before and after compression and decompression:
<pre class="brush:js;toolbar:false;"># run
du -ah tests
# output</pre>
108K tests/origin.log.gz
2.2M tests/origin.log
2.2M tests/origin.log.un.gz
4.6M tests</pre>
Stream-based operation

Use createGzip and createUnzip. Note that the compression and decompression code in the following examples should be executed separately (they redeclare the same const variables), otherwise an error will be reported.

Method 1: directly use the pipe method on the stream instance:
<pre class="brush:js;toolbar:false;">// compress
const readStream = fs.createReadStream(testFile)
const writeStream = fs.createWriteStream(targetFile)
readStream.pipe(zlib.createGzip()).pipe(writeStream)
// decompress
const readStream = fs.createReadStream(targetFile)
const writeStream = fs.createWriteStream(decodeFile)
readStream.pipe(zlib.createUnzip()).pipe(writeStream)</pre>
Method 2: use pipeline on stream, whose callback lets you handle errors separately:
<pre class="brush:js;toolbar:false;">// compress
const readStream = fs.createReadStream(testFile)
const writeStream = fs.createWriteStream(targetFile)
stream.pipeline(readStream, zlib.createGzip(), writeStream, err => {
if (err) {
console.error(err);
}
})
// decompress
const readStream = fs.createReadStream(targetFile)
const writeStream = fs.createWriteStream(decodeFile)
stream.pipeline(readStream, zlib.createUnzip(), writeStream, err => {
if (err) {
console.error(err);
}
})</pre>
Method 3: promisify the pipeline method:
<pre class="brush:js;toolbar:false;">const { promisify } = require('util')
const pipeline = promisify(stream.pipeline)
// compress
const readStream = fs.createReadStream(testFile)
const writeStream = fs.createWriteStream(targetFile)
pipeline(readStream, zlib.createGzip(), writeStream)
.catch(err => {
console.error(err);
})
// decompress
const readStream = fs.createReadStream(targetFile)
const writeStream = fs.createWriteStream(decodeFile)
pipeline(readStream, zlib.createUnzip(), writeStream)
.catch(err => {
console.error(err);
})</pre>
Buffer-based operation

Use the gzip and unzip APIs; each of these methods has synchronous and asynchronous variants.

Method 1: convert the readStream to a Buffer, then operate on it further.

gzip: asynchronous
<pre class="brush:js;toolbar:false;">// compress
const readStream = fs.createReadStream(testFile)
const buff = []
readStream.on('data', (chunk) => {
  buff.push(chunk)
})
readStream.on('end', () => {
  zlib.gzip(Buffer.concat(buff), (err, resBuff) => {
    if (err) {
      console.error(err)
      process.exit()
    }
    fs.writeFileSync(targetFile, resBuff)
  })
})</pre>

gzipSync: synchronous
<pre class="brush:js;toolbar:false;">// compress
const readStream = fs.createReadStream(testFile)
const buff = []
readStream.on('data', (chunk) => {
  buff.push(chunk)
})
readStream.on('end', () => {
  fs.writeFileSync(targetFile, zlib.gzipSync(Buffer.concat(buff)))
})</pre>
Method 2: use readFileSync to read the file content, then compress/decompress it.
<pre class="brush:js;toolbar:false;">// compress
const readBuffer = fs.readFileSync(testFile)
const gzipBuffer = zlib.gzipSync(readBuffer)
fs.writeFileSync(targetFile, gzipBuffer)

// decompress
const gzippedBuffer = fs.readFileSync(targetFile)
const decodeBuffer = zlib.unzipSync(gzippedBuffer)
fs.writeFileSync(decodeFile, decodeBuffer)</pre>
Besides compressing files, sometimes you may need to compress or decompress transmitted content directly. Here we take compressing text content as an example:
<pre class="brush:js;toolbar:false;">// test data
const testData = fs.readFileSync(testFile, { encoding: 'utf-8' })</pre>
For stream-based operation, just consider the string => buffer => stream conversion.

string => buffer:
<pre class="brush:js;toolbar:false;">const buffer = Buffer.from(testData)</pre>

buffer => stream:
<pre class="brush:js;toolbar:false;">const transformStream = new stream.PassThrough()
transformStream.write(buffer)
// or
const transformStream = new stream.Duplex()
transformStream.push(Buffer.from(testData))
transformStream.push(null)</pre>
Here we write to a file as an example; of course, you can also write to other streams, such as an HTTP response (covered separately below).
<pre class="brush:js;toolbar:false;">transformStream
  .pipe(zlib.createGzip())
  .pipe(fs.createWriteStream(targetFile))</pre>
For buffer-based operation, likewise use Buffer.from to convert the string to a buffer:
<pre class="brush:js;toolbar:false;">const buffer = Buffer.from(testData)</pre>

Then convert it directly with the synchronous API; here result is the compressed content:
<pre class="brush:js;toolbar:false;">const result = zlib.gzipSync(buffer)</pre>
It can be written to a file, and in an HTTP server the compressed content can also be returned directly:
<pre class="brush:js;toolbar:false;">fs.writeFileSync(targetFile, result)</pre>
Here we use Node's built-in http module to create a simple server for demonstration. Other Node web frameworks handle this with a similar approach, and they generally also have ready-made plugins for one-click integration.
<pre class="brush:js;toolbar:false;">const http = require('http')
const { PassThrough, pipeline } = require('stream')
const zlib = require('zlib')

// test data
const testTxt = 'test data 123'.repeat(1000)

const app = http.createServer((req, res) => {
  const { url } = req
  // read the compression algorithms the client supports
  // (guard against a missing header or no match)
  const acceptEncoding = (req.headers['accept-encoding'] || '').match(/(br|deflate|gzip)/g) || []
  // default response content type
  res.setHeader('Content-Type', 'application/json; charset=utf-8')
  // a few example routes
  const routes = [
    ['/gzip', () => {
      if (acceptEncoding.includes('gzip')) {
        res.setHeader('content-encoding', 'gzip')
        // compress the text content directly with the synchronous API
        res.end(zlib.gzipSync(Buffer.from(testTxt)))
        return
      }
      res.end(testTxt)
    }],
    ['/deflate', () => {
      if (acceptEncoding.includes('deflate')) {
        res.setHeader('content-encoding', 'deflate')
        // single stream-based write
        const originStream = new PassThrough()
        originStream.write(Buffer.from(testTxt))
        originStream.pipe(zlib.createDeflate()).pipe(res)
        originStream.end()
        return
      }
      res.end(testTxt)
    }],
    ['/br', () => {
      if (acceptEncoding.includes('br')) {
        res.setHeader('content-encoding', 'br')
        res.setHeader('Content-Type', 'text/html; charset=utf-8')
        // multiple stream-based writes
        const originStream = new PassThrough()
        pipeline(originStream, zlib.createBrotliCompress(), res, (err) => {
          if (err) {
            console.error(err)
          }
        })
        originStream.write(Buffer.from('&lt;h1&gt;BrotliCompress&lt;/h1&gt;'))
        originStream.write(Buffer.from('&lt;h2&gt;Test data&lt;/h2&gt;'))
        originStream.write(Buffer.from(testTxt))
        originStream.end()
        return
      }
      res.end(testTxt)
    }]
  ]
  const route = routes.find(v => url.startsWith(v[0]))
  if (route) {
    route[1]()
    return
  }
  // fallback
  res.setHeader('Content-Type', 'text/html; charset=utf-8')
  res.end(`&lt;h1&gt;404: ${url}&lt;/h1&gt;
&lt;h2&gt;Registered routes&lt;/h2&gt;
&lt;ul&gt;
${routes.map(r => `&lt;li&gt;&lt;a href="${r[0]}"&gt;${r[0]}&lt;/a&gt;&lt;/li&gt;`).join('')}
&lt;/ul&gt;`)
})

app.listen(3000)</pre>
The above is the detailed content of Let's talk about how to use Node to achieve content compression through practice. For more information, please follow other related articles on the PHP Chinese website!