Problem Analysis and Solutions to JavaScript Floating Point Operations
Many common decimal fractions have no finite binary representation; they become repeating binary fractions:

Decimal    Binary
0.3        0.0100 1100 1100 1100 ...
0.4        0.0110 0110 0110 0110 ...
0.5        0.1
0.6        0.1001 1001 1001 1001 ...
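You can inspect these expansions directly in JavaScript: Number.prototype.toString with radix 2 prints the binary form of the double that is actually stored (a quick console sketch; the repeating pattern is cut off at the 52-bit mantissa):

// Print the binary expansion of the stored double for each value
[0.3, 0.4, 0.5, 0.6].forEach(function (x) {
    console.log(x + ' = ' + x.toString(2));
});
// e.g. 0.3 = 0.010011001100110011... (truncated), 0.5 = 0.1 (exact)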
So a number such as 1.1 cannot actually be stored as '1.1'; the engine can only keep an approximation to a certain precision, and this loss is unavoidable:
1.09999999999999999
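The loss is easy to see in any JavaScript console (a minimal check; the printed digits are the shortest decimal strings that round-trip to the stored doubles):

console.log(1.0 - 0.9); // 0.09999999999999998, not 0.1
console.log(0.1 + 0.2); // 0.30000000000000004, not 0.3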
In JavaScript the problem is more complicated because the symptoms vary from case to case. Here are some test results from Chrome:
Input               Result
1.0 - 0.9 == 0.1    false
1.0 - 0.7 == 0.3    false
1.0 - 0.6 == 0.4    true
1.0 - 0.5 == 0.5    true
1.0 - 0.4 == 0.6    true
1.0 - 0.3 == 0.7    true
1.0 - 0.2 == 0.8    true
1.0 - 0.1 == 0.9    true
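The table can be reproduced with a short loop (a sketch; the operands are listed literally, mirroring the table, rather than generated by repeated addition, which would introduce its own rounding error):

var subtrahends = [0.9, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2, 0.1];
var expected    = [0.1, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9];
for (var i = 0; i < subtrahends.length; i++) {
    // Strict comparison of the raw floating point result
    console.log('1.0 - ' + subtrahends[i] + ' == ' + expected[i] + ' -> ' +
                (1.0 - subtrahends[i] === expected[i]));
}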
Solution
So how can we avoid tripping over non-bugs like 1.0 - 0.9 != 0.1? A commonly used workaround is to reduce the precision of both values before comparing them, because the precision reduction performed by toFixed always rounds the result:
Number.prototype.isEqual = function (number, digits) {
    digits = digits === undefined ? 10 : digits; // default precision is 10 decimal places
    return this.toFixed(digits) === number.toFixed(digits);
};

(1.0 - 0.7).isEqual(0.3); // returns true
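A few more calls against the same helper illustrate the effect of the digits parameter (all of these pairs agree once rounded to 10 decimal places; with enough digits, the rounding no longer hides the error):

(1.0 - 0.9).isEqual(0.1);     // true
(0.1 + 0.2).isEqual(0.3);     // true
(0.1 + 0.2).isEqual(0.3, 20); // false: at 20 digits the stored values differ

Note that toFixed accepts at most 100 digits in modern engines, and the default of 10 is a trade-off: small enough to absorb typical double rounding error, yet large enough not to collapse genuinely different values.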