
Introduction to numerical ranges in JavaScript_javascript tips

WBOY (Original)
2016-05-16 16:23:39

All numbers in JavaScript, whether integers or decimals, are of type Number. Internally, the Number type is a 64-bit floating-point number, equivalent to the double type in Java; in this sense, all numbers in JavaScript are floating-point numbers. Following the IEEE 754 standard (the floating-point arithmetic standard), the largest magnitude JavaScript can represent is ±1.7976931348623157 × 10^308, and the smallest nonzero magnitude is ±5 × 10^-324. These two boundary values can be read from the MAX_VALUE and MIN_VALUE properties of the Number object, respectively.
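A quick sketch of what happens at these boundaries: results that exceed MAX_VALUE overflow to Infinity, and positive values smaller than MIN_VALUE underflow to 0.

```javascript
// Overflow: doubling the largest representable value yields Infinity.
console.log(Number.MAX_VALUE * 2);   // Infinity

// Underflow: halving the smallest positive value rounds to 0.
console.log(Number.MIN_VALUE / 2);   // 0

// Near the top of the range, adding 1 is lost to rounding entirely.
console.log(Number.MAX_VALUE + 1 === Number.MAX_VALUE); // true
```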

For integers, according to the ECMAScript standard (http://ecma262-5.com/ELS5_HTML.htm#Section_8.5), the range of integers that JavaScript can represent and operate on exactly is plus or minus 2 to the 53rd power, that is, from -9007199254740992 to 9007199254740992. JavaScript can still compute with integers outside this range, but it no longer guarantees exact results. It is worth noting that for integer bitwise operations (such as shifts), JavaScript only supports 32-bit integers, that is, integers from -2147483648 to 2147483647.
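The 2^53 boundary can be checked directly. Below it, consecutive integers are all exactly representable; at it, adding 1 produces a value that rounds back to the same number:

```javascript
var MAX = Math.pow(2, 53); // 9007199254740992

// 2^53 + 1 cannot be represented, so it rounds back down to 2^53.
console.log(MAX === MAX + 1);     // true

// Below 2^53, every integer is exact, so these remain distinct.
console.log(MAX - 1 === MAX - 2); // false
```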

Experiment

Display the absolute value of the largest number and the absolute value of the smallest decimal in JavaScript:


console.log(Number.MAX_VALUE);
console.log(Number.MIN_VALUE);

The displayed results are 1.7976931348623157e+308 and 5e-324.

For integers outside the range of plus or minus 2 raised to the 53rd power, JavaScript cannot give accurate calculation results:


var a = 9007199254740992;
console.log(a + 3);


The mathematically correct result is 9007199254740995, but JavaScript prints 9007199254740996. Trying other expressions shows that once integers exceed 9007199254740992, such rounding errors occur routinely. A wrong digit may be tolerable in some applications, but the consequences in the following example are more serious:



var MAX_INT = 9007199254740992;
for (var i = MAX_INT; i < MAX_INT + 2; i++) {
// infinite loop
}


Because MAX_INT + 1 rounds back to MAX_INT, the increment i++ never advances i, and the for statement above falls into an infinite loop.
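One way to see the failure, and a sketch of a workaround: verify that the increment is lost at the boundary, then loop over a small offset counter instead of incrementing the large value itself. (The variable names here are illustrative, not from a standard API.)

```javascript
var MAX_INT = 9007199254740992;

// The root cause: incrementing at the boundary is a no-op.
console.log(MAX_INT + 1 === MAX_INT); // true

// Workaround sketch: keep the loop counter small so i++ stays exact.
var iterations = 0;
for (var offset = 0; offset < 2; offset++) {
  iterations++; // loop body runs exactly twice and terminates
}
console.log(iterations); // 2
```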

For bitwise operations, JavaScript only supports 32-bit integers:



var smallInt = 256;
var bigInt = 2200000000;
console.log(smallInt / 2);
console.log(smallInt >> 1);
console.log(bigInt / 2);
console.log(bigInt >> 1);


It can be seen that for an integer within the 32-bit range (256), JavaScript performs the bitwise shift correctly, and the result matches the division (128). For an integer outside the 32-bit range (2200000000), JavaScript still divides correctly (1100000000), but the bitwise shift produces a result far from the correct one (-1047483648).
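The wrong result is not random: before any bitwise operation, JavaScript converts the operand to a 32-bit signed integer (the spec's ToInt32 operation), wrapping it modulo 2^32. A sketch of that conversion, using `| 0` to expose it directly:

```javascript
var bigInt = 2200000000;

// ToInt32 wraps 2200000000 modulo 2^32 into the signed 32-bit range:
// 2200000000 - 4294967296 = -2094967296
console.log(bigInt | 0);        // -2094967296

// The shift then operates on that wrapped value, which explains
// the -1047483648 seen above.
console.log((bigInt | 0) >> 1); // -1047483648
```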
