std.math unittests - accuracy of floating point results
Johan Engelen via digitalmars-d-ldc
digitalmars-d-ldc at puremagic.com
Sun Mar 1 15:08:53 PST 2015
Hi all,
I am working on making more of the std.math unittests pass (I'm
new to the project, and it is a nice niche thing to tinker on,
learning the codebase, workflow, etc.).
I've hit on a problem that I do not know how to handle: floating
point comparison.
There are some tests that check whether exp(x) works well,
including overflow checks for different x. See phobos/std/math.d
line 2083. The checks are defined for 80-bit reals, and I am
converting them to 64-bit reals (Win64). The problem is that the
checks are bit-precise (i.e. assert(x == y)), but the calculation
results are sometimes 1 ulp off. For example, the results of two
tests:
std.math.E = 0x4005bf0a8b145769 = 2.7182818284590450
exp(1.0L) = 0x4005bf0a8b145769 = 2.7182818284590450 [1]
Wolfram Alpha = 2.718281828459045235...
E*E*E = 0x403415e5bf6fb105 = 20.085536923187664
exp(3.0L) = 0x403415e5bf6fb106 = 20.085536923187668
Wolfram Alpha = 20.08553692318766774...
I do not know how I can make the second test pass without
breaking the first one. I feel the tests are too strict and
should allow an error of 1 ulp.
dmd 2.066.1 passes these unittests with values corresponding to
Wolfram Alpha's.
(Incidentally, an inaccuracy of 1 ulp also haunts a std.csv
unittest, but I do not yet know exactly why.)
How should I go about fixing these unittests for us?
Thanks,
Johan
[1] I was able to obtain the correct result for exp(1.0L) by
enabling the LLVM intrinsic for exp, although a comment there
says that doing so actually causes unittest failures. Without the
LLVM intrinsic, exp(1.0L) is 1 ulp off.