Bug #53918 printf of floating point variable prints maximum of 53 decimal places
Submitted: 2011-02-03 12:56 UTC Modified: 2011-02-21 02:22 UTC
From: exploringbinary at gmail dot com Assigned:
Status: Wont fix Package: Math related
PHP Version: 5.3.5 OS: Windows (32-bit system)
Private report: No CVE-ID: None

 [2011-02-03 12:56 UTC] exploringbinary at gmail dot com
Description:
------------
Bug #47168, "printf of floating point variable prints maximum of 40 decimal places", was not fixed as expected. Instead of the previous arbitrary limit of 40 digits, there is now an arbitrary limit of 53 digits.

These three examples all print powers of two, which have exact representations in double-precision floating-point:

<?php printf("%1.176f\n",1.044048714879763924273647057481047608912186281291034647641381832875155719135597796355663380296618925058282911777496337890625e-53); /* 2^-176 */?>
 
<?php printf("%1.177f\n",5.220243574398819621368235287405238044560931406455173238206909164375778595677988981778316901483094625291414558887481689453125e-54); /* 2^-177 */?>

<?php printf("%1.178f\n",2.6101217871994098106841176437026190222804657032275866191034545821878892978389944908891584507415473126457072794437408447265625e-54); /* 2^-178 */?> 

The output is:
0.00000000000000000000000000000000000000000000000000001
0.00000000000000000000000000000000000000000000000000001
0.00000000000000000000000000000000000000000000000000000

(The first prints 1 significant digit, the second rounds to 1 significant digit, and the third prints no significant digits.)

Compare this to gcc C on Linux:
 printf("%1.176f\n",1.044048714879763924273647057481047608912186281291034647641381832875155719135597796355663380296618925058282911777496337890625e-53); /* 2^-176 */
 printf("%1.177f\n",5.220243574398819621368235287405238044560931406455173238206909164375778595677988981778316901483094625291414558887481689453125e-54); /* 2^-177 */
 printf("%1.178f\n",2.6101217871994098106841176437026190222804657032275866191034545821878892978389944908891584507415473126457072794437408447265625e-54); /* 2^-178 */

The output (which is correct) is:
0.00000000000000000000000000000000000000000000000000001044048714879763924273647057481047608912186281291034647641381832875155719135597796355663380296618925058282911777496337890625

0.000000000000000000000000000000000000000000000000000005220243574398819621368235287405238044560931406455173238206909164375778595677988981778316901483094625291414558887481689453125

0.0000000000000000000000000000000000000000000000000000026101217871994098106841176437026190222804657032275866191034545821878892978389944908891584507415473126457072794437408447265625

(I see that in formatted_print.c, 

#define MAX_FLOAT_PRECISION 40

was changed to

#define MAX_FLOAT_PRECISION 53)

Test script:
---------------
<?php printf("%1.178f\n",2.6101217871994098106841176437026190222804657032275866191034545821878892978389944908891584507415473126457072794437408447265625e-54); /* 2^-178 */?>

Expected result:
----------------
All requested decimal places are printed.

Actual result:
--------------
Only 53 decimal places are printed.


History

 [2011-02-03 14:23 UTC] iliaa@php.net
-Status: Open +Status: Bogus
 [2011-02-03 14:23 UTC] iliaa@php.net
Thank you for taking the time to write to us, but this is not
a bug. Please double-check the documentation available at
http://www.php.net/manual/ and the instructions on how to report
a bug at http://bugs.php.net/how-to-report.php

53 digits is the maximum limit enforced by most programming languages.

http://www.exploringbinary.com/print-precision-of-dyadic-fractions-varies-by-language/
 [2011-02-03 14:28 UTC] cataphract@php.net
-Status: Bogus +Status: Open
 [2011-02-03 14:28 UTC] cataphract@php.net
Well, can't you just print the numbers in exponential format?

A representation with more than 50 leading digits in the fractional part is not particularly useful.

This seems a better option than showing more than 53 decimal places when the machine precision is only 15.9546 decimal digits.

I imagine the magic arbitrary 53-digit limit was chosen because the machine precision is 53 binary digits, though this is of course irrelevant for decimal representations.
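
For what it's worth, here is what the exponential-format workaround looks like, using one of the values from the report (a sketch; the exact digits printed may vary by platform):

<?php
// Scientific notation carries all 17 significant digits needed to round-trip
// a double, without running into the 53-decimal-place cap of %f.
$x = 2.6101217871994098106841176437026190222804657032275866191034545821878892978389944908891584507415473126457072794437408447265625e-54; /* 2^-178 */
printf("%.16e\n", $x);  // something like 2.6101217871994098e-54
?>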
 [2011-02-03 15:19 UTC] exploringbinary at gmail dot com
I got the same "this is not a bug" response for bug #47168 -- until Rasmus stepped in, that is. Please reconsider.

I COULD print the numbers in exponential format, but that's beside the point. If you accept that bug #47168 was valid, then this bug should be too.

The "15.9546 decimal digits" of precision does not apply here; my examples are powers of two: every digit is accurate.
 [2011-02-04 01:11 UTC] cataphract@php.net
I'm sorry, my classes of numerical analysis are long gone.

First let's establish what we're talking about. Let's take Mathematica's definitions:

http://reference.wolfram.com/mathematica/ref/Precision.html
Precision[x] gives the effective number of digits of precision in the number x.
Precision[x] gives a measure of the relative uncertainty in the value of x.
With absolute uncertainty dx, Precision[x] is -Log[10, dx/x].
The meaning of "absolute uncertainty" becomes clear from this passage:
«Mathematica is set up so that if a number x has uncertainty dx, then its true value can lie anywhere in an interval of size dx from x-dx/2 to x+dx/2.»

So, in this convention, 60.0000000000000000 (60. + 16 zeros) represents 60 ± 0.5e-16 -- 60 with an absolute uncertainty of 1e-16. Representing that takes a precision of -log10(1e-16/60), or 17.7782 -- more than a double provides.

The (implicit) *precision* of a (normalized) double is _always_ -log10(2^-53), or around 15.9546. That's the 52 bits in the mantissa plus an implicit leading bit for normalized doubles.
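
A quick way to reproduce that figure in PHP:

<?php
// Implicit precision of a normalized double: 53 significant bits.
printf("%.4f\n", 53 * log10(2));  // 15.9546
?>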

Now for accuracy:

http://reference.wolfram.com/mathematica/ref/Accuracy.html
Accuracy[x] gives the effective number of digits to the right of the decimal point in the number x.
Accuracy[x] gives a measure of the absolute uncertainty in the value of x.
With uncertainty dx, Accuracy[x] is -Log[10,dx]. 
Accuracy[x] is equal to Precision[x]-RealExponent[x] (where RealExponent[x] is log10(abs(x)))

From these definitions, it becomes clear that what matters for this bug report is the *accuracy* of the representation, i.e., only the digits to the right of the decimal point matter.

It's also clear that the smaller the (absolute value of the) number, the greater the accuracy. In fact, by the relationship given above, the accuracy of a double is given by:

15.9546 - log10(abs(x))

And the accuracy needed to represent the effective digits of the smallest normalized double, 2.2250738585072014e-308 (2^-1022), is 323.607. You can have smaller denormalized numbers (for which -RealExponent gets larger), but the precision also dwindles. The smallest possible denormalized double, 2^-1074, only has one effective "binary digit" of precision. The precision is log10(2), or 0.30103; adding -RealExponent(2^-1074) gives the same accuracy of 323.607.
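
The same relationship, worked out in PHP (a quick sketch, just reproducing the figures above):

<?php
// Accuracy of a double x, per the relationship above: precision - log10(|x|).
$precision = 53 * log10(2);                                     // ~15.9546
printf("%.3f\n", $precision - log10(2.2250738585072014e-308));  // 323.607 for 2^-1022
// Smallest denormal, 2^-1074: one effective bit, so its precision is log10(2).
printf("%.3f\n", log10(2) + 1074 * log10(2));                   // again 323.607
?>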

So would a limit of 324 digits be enough? Yes, but only if you want to show only the effective digits. If you want to show the exact number represented by the double, you'll need a lot more digits.

For instance, to fully represent the value stored in the double 001ffffffffffffe, you'd need 766 decimal digits, not counting the 307 leading zeros. See
http://www.wolframalpha.com/input/?i=2%5E-1074%2A16%5E%5E1FFFFFFFFFFFFE
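
That digit count can be checked in PHP itself, assuming the BCMath extension is available (a sketch, not part of the original discussion):

<?php
// 001ffffffffffffe encodes (2^53 - 2) * 2^-1074. Expanding it exactly should
// give 1073 decimal places: 307 leading zeros followed by 766 other digits.
$exact = bcdiv(bcsub(bcpow('2', '53'), '2'), bcpow('2', '1074'), 1080);
$frac  = rtrim(substr($exact, 2), '0');                    // digits after "0."
echo strlen($frac), ' ', strlen(ltrim($frac, '0')), "\n";  // 1073 766
?>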

In my opinion, it's misleading to show decimal places beyond the effective digits. Those digits are rendered useless by the rounding errors. Except for some exotic application that uses only numbers exactly representable as IEEE doubles, I fail to see any point in showing them.

The current limit of 53 decimal places, albeit arbitrary, is more than generous. 16 decimal places would be enough to show the effective digits in scientific notation. The remaining places leave some room to show leading zeros and insignificant digits in non-scientific representations.

Then there's the problem that not all C libraries support these accuracies.

Therefore, I would mark both this bug and bug #47168 as Wont Fix.
 [2011-02-04 03:24 UTC] exploringbinary at gmail dot com
I'm not sure what that formula tells you. 2^-1022 has 1022 decimal digits: 307 leading 0s followed by 715 other digits. 

As for 2^-1074, it only NEEDS 1 bit of precision. All 1074 of its digits can be printed: 323 leading 0s followed by 751 other digits. (1074 digits would be the limit for a double.)

These articles might be of interest
 
http://www.exploringbinary.com/a-simple-c-program-that-prints-2098-powers-of-two/ (using gcc it will print all the powers of two to full accuracy, even the subnormal ones)

http://www.exploringbinary.com/converting-floating-point-numbers-to-binary-strings-in-c/ (I use a limit of 1077 -- 1074 plus "0." and string terminator)

As for "Except for some exotic application that would that uses numbers only in the domain of the numbers representable in the IEEE double...". I don't disagree with that. But glibc lets you do it, as do Python and Perl. If it's a killer to implement, don't bother. If it's as simple as changing that constant from 53 to 1074, I say why not?
 [2011-02-04 14:23 UTC] cataphract@php.net
> I'm not sure what that formula tells you. 2^-1022 has 1022
> decimal digits: 307 leading 0s followed by 715 other digits.

Yes, but not all digits are born the same.

The accuracy of the IEEE double that represents 2^-1022 is 323.607. That means all the decimal digits beyond that could be wrong due to rounding errors.

With uncertainty dx, Accuracy[x] is -Log[10,dx]. For accuracy = 323.607, dx = 10^-323.607 = 2.470328229206*10^-324.

Which means that, unless you're sure your number is actually exactly represented in an IEEE double (an unlikely scenario of application), your IEEE double doesn't represent 2^-1022. It represents 2^-1022 ± 1.235164114603*10^-324.

You can easily see this with a small C program. Let's take our 2^-1022, which has a nice binary representation:

#include <stdio.h>

int main(void) {
	double u, v;
	u = 2.22507385850720138309023271733e-308; /* given with 30 digits */
	v = 2.2250738585072014e-308;              /* given with 17 digits */
	/* reinterpret each double's bits as a 64-bit integer and print in hex */
	printf("%016llx %016llx\n",
		*((unsigned long long *)&u), *((unsigned long long *)&v));
	return 0;
}

This prints the same double's bit pattern twice (most-significant byte first):
0010000000000000 0010000000000000
 [2011-02-04 15:03 UTC] exploringbinary at gmail dot com
The formula tells you the number of leading zeros plus approximately 17, which I agree applies for any decimal value that's not exactly representable. And I agree that there are unlimited decimal values that map to the same double, e.g. DBL_MIN = 2^-1022. But if you enter the decimal value representing 2^-1022 in your source code, it maps directly to 2^-1022 in a double -- all its digits are accurate so the formula does not apply. In other words, I'm sure this "is actually exactly represented in an IEEE double."

Uses of this are limited, and I already said I agree. But one use is my C program that prints all the powers of two representable by a double, using only double-precision arithmetic (see link above).
 [2011-02-06 18:25 UTC] iliaa@php.net
-Status: Open +Status: Wont fix
 [2011-02-06 18:25 UTC] iliaa@php.net
Current PHP behaviour is analogous to other programming languages.
 [2011-02-06 19:15 UTC] exploringbinary at gmail dot com
"Current PHP behaviour is analogous with other programming languages"

Except, for example, the three that I mentioned: gcc C (glibc), Python and Perl.
 [2011-02-21 02:22 UTC] cataphract@php.net
I'd just add this is not as simple as changing the MAX_FLOAT_PRECISION define, as values larger than around 512 (the value of NUM_BUF_SIZE) will result in a buffer overflow.
 