Why Doesn't `x == 0.1` Always Work with C#'s `double` Data Type?
C# double Comparisons: Precision Issues
Working with floating-point numbers (like C#'s double type) often presents unexpected challenges when comparing values. A common example is comparing a double variable to 0.1:
<code class="language-csharp">double x = 0.1;

if (x == 0.1)
{
    // Code that may not run, even though x was assigned 0.1
}</code>
This seemingly simple comparison can, surprisingly, evaluate to false.
Understanding the Problem: Binary vs. Decimal Representation
The root cause lies in how floating-point numbers are stored. A double value is stored as a binary fraction, not a decimal one. Many decimal values, including 0.1, cannot be represented exactly as a finite binary fraction, so the computer stores a close approximation instead. Those tiny differences are what break equality comparisons.
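To see the approximation error accumulate, here is a minimal sketch (assuming console output via Console.WriteLine) that sums 0.1 ten times; on a typical .NET runtime the result is not exactly 1.0:
<code class="language-csharp">using System;

// Each 0.1 is only a binary approximation, so the errors accumulate.
double sum = 0.0;
for (int i = 0; i < 10; i++)
{
    sum += 0.1;
}

Console.WriteLine(sum == 1.0);          // False
Console.WriteLine(sum.ToString("G17")); // Typically prints 0.99999999999999989</code>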
The Solution: Using the decimal Data Type
To avoid this particular precision issue, use the decimal data type. decimal values are stored in base 10, so numbers like 0.1 can be represented exactly.
<code class="language-csharp">decimal x = 0.1m;

if (x == 0.1m)
{
    // Code that runs reliably: 0.1m is stored exactly
}</code>
Using decimal ensures that 0.1 is stored and compared exactly.
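As a counterpart to the earlier double sketch, the same loop with decimal produces exactly 1.0m (again assuming console output; this is only an illustrative sketch):
<code class="language-csharp">using System;

// With decimal, ten additions of 0.1m yield exactly 1.0m,
// because 0.1 is representable exactly in base 10.
decimal sum = 0.0m;
for (int i = 0; i < 10; i++)
{
    sum += 0.1m;
}

Console.WriteLine(sum == 1.0m); // True
Console.WriteLine(sum);         // 1.0</code>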
Floating-Point Representation: A Deeper Look
To illustrate the problem, consider decimal representation. 12.34 is:
<code>1 * 10^1 + 2 * 10^0 + 3 * 10^-1 + 4 * 10^-2</code>
Similarly, 0.1 is:
<code>1 * 10^-1</code>
In binary, however, some numbers have no finite representation. One tenth (0.1 in decimal) becomes the repeating binary fraction 0.000110011001100..., so any fixed number of bits can only store an approximation. That approximation is why x == 0.1 might fail when x is a double.
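A short sketch of how to inspect the stored approximation, assuming a modern .NET runtime where the "G17" format round-trips a double and BitConverter.DoubleToInt64Bits exposes the underlying bits:
<code class="language-csharp">using System;

double x = 0.1;

// "G17" shows enough digits to reveal the stored approximation.
Console.WriteLine(x.ToString("G17"));
// Typically prints: 0.10000000000000001

// The raw 64-bit pattern makes the approximation explicit.
Console.WriteLine(BitConverter.DoubleToInt64Bits(x).ToString("X"));
// Typically prints: 3FB999999999999A</code>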