
Why Do C# and C Produce Different Output When Formatting Doubles?

Mary-Kate Olsen · 2025-01-04


Formatting Doubles for Output in C#

In a recent experiment, an attempt was made to emulate C's printf-style output formatting in C#, but the actual output did not match what the C code produced.

The C code:

double i = 10 * 0.69;
printf("%f\n", i);
printf("  %.20f\n", i);
printf("+ %.20f\n", 6.9 - i);
printf("= %.20f\n", 6.9);

produced the following output:

6.900000
6.89999999999999946709
+ 0.00000000000000088818
= 6.90000000000000035527

The C# code:

double i = 10 * 0.69;
Console.WriteLine(i);
Console.WriteLine(String.Format("  {0:F20}", i));
Console.WriteLine(String.Format("+ {0:F20}", 6.9 - i));
Console.WriteLine(String.Format("= {0:F20}", 6.9));

yielded:

6.9
6.90000000000000000000
+ 0.00000000000000088818
= 6.90000000000000000000

Despite i appearing as 6.89999999999999946709 in the debugger, the C# output shows only zeros beyond the first fractional digit.

The difference is not in how the value is stored: both C and .NET hold the same IEEE 754 binary double. It is the formatting routines that differ. .NET's standard numeric formatting first rounds the value to 15 significant decimal digits and only then applies the requested format string, so everything past those 15 digits comes out as zeros. C's printf, by contrast, expands the stored binary value to however many decimal places the format requests.
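To see more of the stored value without any helper code, the round-trip style format specifiers ask the formatter for up to 17 significant digits instead of 15. A minimal sketch (the class name is illustrative, and the exact behavior of F20 differs between .NET Framework and newer runtimes):

using System;

class RoundTripDemo
{
    static void Main()
    {
        double i = 10 * 0.69;

        // "G17" requests up to 17 significant digits, enough to round-trip a double.
        Console.WriteLine(i.ToString("G17"));   // 6.8999999999999995

        // "F20" still goes through the 15-significant-digit rounding described above
        // on .NET Framework, so the extra places are padded with zeros.
        Console.WriteLine(i.ToString("F20"));   // 6.90000000000000000000 (on .NET Framework)
    }
}

Even G17 only prints enough digits to reconstruct the double exactly; it does not show the full decimal expansion, which is where the options below come in.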

To avoid this issue, one can either:

  • Access the exact decimal value of the double: decode the internal binary bits and build the decimal string manually (see the sketch after this list).
  • Use a custom formatting library: Leverage Jon Skeet's DoubleConverter class, which provides methods for retrieving the exact decimal value of a double and rounding the output to a specified precision.
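A minimal sketch of the first option, using a hypothetical helper class named ExactDouble (not part of the .NET libraries). Every finite double is mantissa * 2^exponent, and because 1/2^k equals 5^k/10^k, the fractional part can be written out exactly in decimal with BigInteger arithmetic:

using System;
using System.Numerics;

static class ExactDouble
{
    // Hypothetical helper: returns the exact decimal expansion of a finite double
    // by decoding its IEEE 754 bit pattern.
    public static string ToExactString(double d)
    {
        long bits = BitConverter.DoubleToInt64Bits(d);
        bool negative = bits < 0;
        int biasedExponent = (int)((bits >> 52) & 0x7FF);
        long mantissa = bits & 0xFFFFFFFFFFFFFL;

        if (biasedExponent == 0x7FF)                 // NaN and infinities have no decimal expansion
            return d.ToString();

        if (biasedExponent == 0) biasedExponent = 1; // subnormal: no implicit leading 1
        else mantissa |= 1L << 52;                   // normal: restore the implicit leading 1

        // The stored value is mantissa * 2^(biasedExponent - 1075).
        int exponent = biasedExponent - 1075;
        BigInteger scaled = mantissa;
        int fractionDigits = 0;

        if (exponent >= 0)
        {
            scaled <<= exponent;                     // integral value, no fractional part
        }
        else
        {
            // mantissa / 2^k == mantissa * 5^k / 10^k, so the fraction has exactly k digits
            scaled *= BigInteger.Pow(5, -exponent);
            fractionDigits = -exponent;
        }

        string digits = scaled.ToString().PadLeft(fractionDigits + 1, '0');
        string intPart = digits.Substring(0, digits.Length - fractionDigits);
        string fracPart = digits.Substring(digits.Length - fractionDigits).TrimEnd('0');

        string result = fracPart.Length == 0 ? intPart : intPart + "." + fracPart;
        return negative ? "-" + result : result;
    }
}

// Console.WriteLine(ExactDouble.ToExactString(10 * 0.69));
// -> 6.89999999999999946709294817992486059665679931640625

Jon Skeet's DoubleConverter takes essentially the same bit-decoding approach, and additionally supports rounding the result to a requested number of digits.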

Example usage of DoubleConverter:

Console.WriteLine(DoubleConverter.ToExactString(i));
Console.WriteLine(DoubleConverter.ToExactString(6.9 - i));
Console.WriteLine(DoubleConverter.ToExactString(6.9));

// 6.89999999999999946709294817992486059665679931640625
// 0.00000000000000088817841970012523233890533447265625
// 6.9000000000000003552713678800500929355621337890625

