
Let’s talk about encoding in Node.js Buffer

青灯夜游 · 2021-08-31

This article walks through encoding in Node.js Buffer. I hope you find it helpful!


The smallest unit in a computer is the bit, 0 or 1, which corresponds to high and low voltage levels in hardware. A single bit carries too little information, so 8 bits are grouped into a byte, and everything else, numbers, strings and so on, is stored in terms of bytes.

How are characters stored? Through encoding: each character is mapped to a different code. When a character needs to be rendered, the font library is looked up by that code and the corresponding glyph is drawn.
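For example, the code assigned to a character can be seen directly in JavaScript (a minimal sketch, not part of the original article):

// Each character maps to a numeric code; codePointAt exposes the Unicode code point.
console.log('A'.codePointAt(0));  // 65 — same value as in ASCII
console.log('你'.codePointAt(0)); // 20320 (0x4f60)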

Character set

The earliest character set (charset) was ASCII, 128 characters covering a–z, A–Z, 0–9 and so on, because the computer was first invented in the United States. Later Europe developed its own family of character set standards, the ISO 8859 series, and China developed GBK.

The International Organization for Standardization felt that everyone having their own character set would not work, since the same code could mean different characters in different character sets, so Unicode was proposed to cover most of the world's characters and give each character a unique code.

But ASCII needs only 1 byte per character, while GBK needs 2, and some character sets need 3 or more. Using 2 bytes to store a character that fits in 1 would waste space, so different encoding schemes appeared: UTF-8, UTF-16, UTF-32 and so on.

UTF-8, UTF-16 and UTF-32 are all encodings of Unicode; they differ only in how the code points are laid out in bytes.

To save space, UTF-8 uses a variable-length scheme of 1 to 4 bytes per character (the original design allowed up to 6). UTF-16 uses 2 or 4 bytes, and UTF-32 is fixed at 4 bytes.
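The difference is easy to see from Node.js itself. A minimal sketch using Buffer.byteLength (utf16le is the UTF-16 flavor that Buffer exposes):

// The same characters occupy different numbers of bytes under different encodings.
console.log(Buffer.byteLength('a', 'utf8'));     // 1 — the ASCII range fits in one byte
console.log(Buffer.byteLength('你', 'utf8'));    // 3 — most CJK characters take three bytes
console.log(Buffer.byteLength('a', 'utf16le'));  // 2 — UTF-16 always uses at least two bytes
console.log(Buffer.byteLength('你', 'utf16le')); // 2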


Finally, UTF-8 is widely used because it takes up the least space.

Node.js Buffer encoding

Each language supports character set encoding and decoding, and Node.js does the same.

In Node.js, binary data is stored in a Buffer, and converting that data to a string requires specifying a character set. Buffer methods such as from, byteLength and lastIndexOf accept an encoding argument.

The specifically supported encodings are:

utf8, ucs2, utf16le, latin1, ascii, base64, hex
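A short sketch of the methods mentioned above, each taking one of these encodings (the values are purely illustrative):

const buf = Buffer.from('68656c6c6f', 'hex');        // parse the string as hex digits
console.log(buf.toString('utf8'));                   // hello

console.log(Buffer.byteLength('hello', 'utf16le'));  // 10 — two bytes per code unit

const haystack = Buffer.from('hello hello');
console.log(haystack.lastIndexOf('68656c6c6f', undefined, 'hex')); // 6 — search value decoded as hex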

Some of you may be wondering: base64 and hex are not character sets, so why are they on the list?

Right: beyond character sets, byte-to-text encoding schemes also include base64, which turns bytes into printable characters, and hex, which turns bytes into hexadecimal digits.

This is why Node.js calls it encoding rather than charset: the supported encoding and decoding schemes are not limited to character sets.
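For example, the same bytes can be rendered as hex or base64 text (a minimal sketch):

const bytes = Buffer.from('hello world', 'utf8');
console.log(bytes.toString('hex'));    // 68656c6c6f20776f726c64
console.log(bytes.toString('base64')); // aGVsbG8gd29ybGQ=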

If encoding is not specified, the default is utf8.

// fill an 11-byte Buffer with the bytes decoded from the base64 string
const buf = Buffer.alloc(11, 'aGVsbG8gd29ybGQ=', 'base64');
console.log(buf.toString()); // defaults to utf8: hello world
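The more common way to decode is Buffer.from, which sizes the Buffer automatically; for example:

console.log(Buffer.from('aGVsbG8gd29ybGQ=', 'base64').toString()); // hello world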

Encoding source code

Let's walk through the Node.js source code that deals with encoding. It is implemented here: https://github.com/nodejs/node/blob/master/lib/buffer.js#L587-L726

You can see that each encoding implements the same set of operations: encoding, encodingVal, byteLength, write, slice and indexOf. These APIs behave differently under different encoding schemes, so Node.js returns a different ops object depending on the encoding passed in. This is the idea of polymorphism.

const encodingOps = {
  utf8: {
    encoding: 'utf8',
    encodingVal: encodingsMap.utf8,
    byteLength: byteLengthUtf8,
    write: (buf, string, offset, len) => buf.utf8Write(string, offset, len),
    slice: (buf, start, end) => buf.utf8Slice(start, end),
    indexOf: (buf, val, byteOffset, dir) =>
      indexOfString(buf, val, byteOffset, encodingsMap.utf8, dir)
  },
  ucs2: {
    encoding: 'ucs2',
    encodingVal: encodingsMap.utf16le,
    byteLength: (string) => string.length * 2,
    write: (buf, string, offset, len) => buf.ucs2Write(string, offset, len),
    slice: (buf, start, end) => buf.ucs2Slice(start, end),
    indexOf: (buf, val, byteOffset, dir) =>
      indexOfString(buf, val, byteOffset, encodingsMap.utf16le, dir)
  },
  utf16le: {
    encoding: 'utf16le',
    encodingVal: encodingsMap.utf16le,
    byteLength: (string) => string.length * 2,
    write: (buf, string, offset, len) => buf.ucs2Write(string, offset, len),
    slice: (buf, start, end) => buf.ucs2Slice(start, end),
    indexOf: (buf, val, byteOffset, dir) =>
      indexOfString(buf, val, byteOffset, encodingsMap.utf16le, dir)
  },
  latin1: {
    encoding: 'latin1',
    encodingVal: encodingsMap.latin1,
    byteLength: (string) => string.length,
    write: (buf, string, offset, len) => buf.latin1Write(string, offset, len),
    slice: (buf, start, end) => buf.latin1Slice(start, end),
    indexOf: (buf, val, byteOffset, dir) =>
      indexOfString(buf, val, byteOffset, encodingsMap.latin1, dir)
  },
  ascii: {
    encoding: 'ascii',
    encodingVal: encodingsMap.ascii,
    byteLength: (string) => string.length,
    write: (buf, string, offset, len) => buf.asciiWrite(string, offset, len),
    slice: (buf, start, end) => buf.asciiSlice(start, end),
    indexOf: (buf, val, byteOffset, dir) =>
      indexOfBuffer(buf,
                    fromStringFast(val, encodingOps.ascii),
                    byteOffset,
                    encodingsMap.ascii,
                    dir)
  },
  base64: {
    encoding: 'base64',
    encodingVal: encodingsMap.base64,
    byteLength: (string) => base64ByteLength(string, string.length),
    write: (buf, string, offset, len) => buf.base64Write(string, offset, len),
    slice: (buf, start, end) => buf.base64Slice(start, end),
    indexOf: (buf, val, byteOffset, dir) =>
      indexOfBuffer(buf,
                    fromStringFast(val, encodingOps.base64),
                    byteOffset,
                    encodingsMap.base64,
                    dir)
  },
  hex: {
    encoding: 'hex',
    encodingVal: encodingsMap.hex,
    byteLength: (string) => string.length >>> 1,
    write: (buf, string, offset, len) => buf.hexWrite(string, offset, len),
    slice: (buf, start, end) => buf.hexSlice(start, end),
    indexOf: (buf, val, byteOffset, dir) =>
      indexOfBuffer(buf,
                    fromStringFast(val, encodingOps.hex),
                    byteOffset,
                    encodingsMap.hex,
                    dir)
  }
};
function getEncodingOps(encoding) {
  encoding += '';
  switch (encoding.length) {
    case 4:
      if (encoding === 'utf8') return encodingOps.utf8;
      if (encoding === 'ucs2') return encodingOps.ucs2;
      encoding = StringPrototypeToLowerCase(encoding);
      if (encoding === 'utf8') return encodingOps.utf8;
      if (encoding === 'ucs2') return encodingOps.ucs2;
      break;
    case 5:
      if (encoding === 'utf-8') return encodingOps.utf8;
      if (encoding === 'ascii') return encodingOps.ascii;
      if (encoding === 'ucs-2') return encodingOps.ucs2;
      encoding = StringPrototypeToLowerCase(encoding);
      if (encoding === 'utf-8') return encodingOps.utf8;
      if (encoding === 'ascii') return encodingOps.ascii;
      if (encoding === 'ucs-2') return encodingOps.ucs2;
      break;
    case 7:
      if (encoding === 'utf16le' ||
          StringPrototypeToLowerCase(encoding) === 'utf16le')
        return encodingOps.utf16le;
      break;
    case 8:
      if (encoding === 'utf-16le' ||
          StringPrototypeToLowerCase(encoding) === 'utf-16le')
        return encodingOps.utf16le;
      break;
    case 6:
      if (encoding === 'latin1' || encoding === 'binary')
        return encodingOps.latin1;
      if (encoding === 'base64') return encodingOps.base64;
      encoding = StringPrototypeToLowerCase(encoding);
      if (encoding === 'latin1' || encoding === 'binary')
        return encodingOps.latin1;
      if (encoding === 'base64') return encodingOps.base64;
      break;
    case 3:
      if (encoding === 'hex' || StringPrototypeToLowerCase(encoding) === 'hex')
        return encodingOps.hex;
      break;
  }
}
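The effect of this table can be observed through the public API: the same input produces different results depending on which ops object getEncodingOps selects, and aliases such as 'binary' or different letter cases resolve to the same object. A small sketch (run against a recent Node.js version):

console.log(Buffer.byteLength('abcd', 'utf8'));       // 4
console.log(Buffer.byteLength('abcd', 'ucs2'));       // 8 — string.length * 2
console.log(Buffer.byteLength('abcd', 'hex'));        // 2 — string.length >>> 1
console.log(Buffer.byteLength('aGVsbG8=', 'base64')); // 5 — base64 padding is subtracted

console.log(Buffer.byteLength('abcd', 'UTF-8'));      // 4 — lookup is case-insensitive
console.log(Buffer.byteLength('abcd', 'binary'));     // 4 — 'binary' is an alias for latin1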

Summary

The smallest unit of data storage in a computer is the bit, but the smallest unit of information is the byte. On top of bytes, encodings establish the mapping between codes and characters. Various character sets appeared, including ASCII, the ISO 8859 series and GBK, and the International Organization for Standardization proposed Unicode to cover all characters. Unicode has several implementation schemes, UTF-8, UTF-16 and UTF-32, which use different numbers of bytes to store a character. Among them UTF-8 is variable length and takes the least space, so it is the most widely used.

Node.js stores binary data in a Buffer, and converting it to a string requires specifying an encoding scheme. These schemes are not limited to character sets (charsets); hex and base64 are also supported. The full list is:

utf8, ucs2, utf16le, latin1, ascii, base64, hex

We looked at the Node.js source code for encoding and found that each encoding scheme implements the same set of APIs. This is the idea of polymorphism.

Encoding is a concept you run into frequently when learning Node.js, and Node.js encodings are not limited to charsets. I hope this article helps you understand encoding and character sets.

