[Issue 23856] The problem of accuracy loss in double division
d-bugmail at puremagic.com
Sat Oct 14 03:59:23 UTC 2023
https://issues.dlang.org/show_bug.cgi?id=23856
Basile-z <b2.temp at gmx.com> changed:
           What    |Removed                     |Added
----------------------------------------------------------------------------
                 CC|                            |b2.temp at gmx.com
--- Comment #5 from Basile-z <b2.temp at gmx.com> ---
I suspect that this is not a bug and, even better, that `1999` would actually
be the most accurate result.
1. `0.0025` is not exactly representable[0] as a double; the nearest double is
actually *slightly* larger.
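This is easy to check with exact rational arithmetic — `Fraction(0.0025)` gives the exact value of the double nearest to the decimal 0.0025, and it is indeed above 25/10000:

```python
from fractions import Fraction

# Exact rational value of the double produced by the literal 0.0025
d = Fraction(0.0025)

# The nearest double is slightly ABOVE the true decimal value 25/10000
assert d > Fraction(25, 10000)
```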
2. The division
   - DMD-generated code always uses the x87 FPU, so the division (FDIV) is
     carried out with 80-bit internal precision
   - LDC-generated code uses an SSE instruction, with 64-bit precision
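To illustrate the difference (using 5 / 0.0025 as an assumed example — the actual values from the report may differ): since the stored divisor is slightly above 0.0025, the mathematically exact quotient falls just below 2000. A single 64-bit division must round that to the nearest double, which is exactly 2000.0, while an 80-bit x87 division has enough extra precision to keep the result below 2000:

```python
from fractions import Fraction

d = Fraction(0.0025)          # exact value of the double divisor
exact = Fraction(5) / d       # the infinitely precise quotient

# The exact quotient is just below 2000...
assert exact < 2000

# ...but 64-bit (SSE-style) division rounds it to the nearest double, 2000.0
assert 5.0 / 0.0025 == 2000.0
```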
3. The truncation
   - DMD
     - in conv_err, FISTP is applied directly to the internal 80-bit value
     - in conv_ok, the 80-bit value is first stored to a 64-bit local,
       then reloaded into the FPU (the 64-bit extended mantissa is rounded
       to the 53-bit double mantissa, losing precision), and finally the
       truncation happens (still via FISTP)
   - LDC behaves the same in both conv_err and conv_ok because the
     80-bit intermediate value never existed.
See disasm here[1].
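Putting the two paths together (still with the hypothetical 5 / 0.0025 from above): truncating the high-precision quotient, as the 80-bit path effectively does, yields 1999, whereas truncating the already-rounded 64-bit result yields 2000. A sketch in Python, whose floats are 64-bit SSE doubles:

```python
from fractions import Fraction

# Exact quotient of the assumed inputs, just below 2000
q = Fraction(5) / Fraction(0.0025)

# Truncating the (80-bit-like) high-precision value -> 1999
assert int(q) == 1999

# Truncating the already-rounded 64-bit result -> 2000
assert int(5.0 / 0.0025) == 2000
```

So the DMD result differs not because the x87 division is wrong, but because it preserves the "just below 2000" fact that the 64-bit rounding step erases.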
Note: in the assembly you can see that the rounding mode is set/saved/restored
in several places, so maybe I'm completely wrong and this is actually a DMD
backend bug related to that.
[0]:
https://www.binaryconvert.com/result_double.html?decimal=048046048048050053
[1]: https://godbolt.org/z/3969TsPG1