Less-than-optimal decimal real literal conversion to x86 extended floats
pineapple via Digitalmars-d
digitalmars-d at puremagic.com
Sat Jan 21 17:54:59 PST 2017
I'm not sure whether this should be counted as a bug, but I ran
into it and thought it deserved mentioning. I've been testing
this with DMD on Windows.
I wrote this function to support parsing of strings as floating
point values:
https://github.com/pineapplemachine/mach.d/blob/master/mach/math/floats/inject.d#L28
It is a direct copy of the relevant parts of strtod as
implemented here:
https://opensource.apple.com/source/tcl/tcl-10/tcl/compat/strtod.c
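Roughly, the strtod approach scales an integer significand by
binary-weighted powers of ten. A minimal sketch of that scheme in D
(names and signature made up for illustration; the actual code in
inject.d differs):

    // Sketch of the strtod.c scaling scheme, not the mach.d implementation.
    import std.stdio : writefln;

    real composeDecimal(long significand, int exponent10)
    {
        // Precomputed powers of ten: 10^1, 10^2, 10^4, 10^8, ...
        static immutable real[] powersOf10 = [
            1e1L, 1e2L, 1e4L, 1e8L, 1e16L, 1e32L, 1e64L, 1e128L, 1e256L
        ];
        real fraction = significand;
        real scale = 1.0L;
        uint e = exponent10 < 0 ? -exponent10 : exponent10;
        // Multiply together the powers selected by the set bits of the exponent.
        foreach (i, p; powersOf10)
        {
            if (e & (1u << i)) scale *= p;
        }
        return exponent10 < 0 ? fraction / scale : fraction * scale;
    }

    void main()
    {
        writefln("%.25f", composeDecimal(5, -4)); // roughly 0.0005
    }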
When writing tests for this code I assumed that the result of
expressions like `float x = some_literal` would be at least as
accurate as the values returned by my code, so I wrote tests
asserting that the literals and the values my function produced
should be exactly equal.
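Concretely, the assertions were of this shape (myfunc is a stand-in
name for the linked function, used the same way further below):

    // Shape of the tests only; myfunc stands in for the linked function.
    assert(myfunc!float(5, -4)  == 5e-4f); // passes
    assert(myfunc!double(5, -4) == 5e-4);  // passes
    assert(myfunc!real(5, -4)   == 5e-4L); // fails, as described below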
Take for example 0.0005, which is the input that caused me quite
a lot of trouble. When producing floats and doubles, the literals
and the function outputs are exactly equal, e.g.
`myfunc!double(5, -4) == 5e-4`. This is not the case for reals.
When you write `real x = 0.0005;`, x in fact represents a value of
about
0.000500000000000000000032187251995663412884596255025826394, which
is about 3.2 * 10^-23 more than 0.0005.
The output of my function in this case was about
0.000499999999999999999979247692792269641692826098733348771, which
is about 2.1 * 10^-23 less than 0.0005.
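For reference, the value the compiler actually stores for the literal
can be printed directly (a standalone check, not part of my code):

    import std.stdio : writefln;

    void main()
    {
        real x = 0.0005;
        writefln("%.60f", x); // prints x to 60 decimal places
        writefln("%a", x);    // hex-float form, showing the exact bits stored
    }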
In this case, at least, the function I wrote produced a more
accurate value than the compiler did. Would it be possible and/or
desirable to make DMD use a more accurate string-to-float
algorithm?