Phobos unit testing uncovers a CPU bug
e at ee.com
Fri Nov 26 13:06:55 PST 2010
== Quote from Don (nospam at nospam.com)'s article
> The code below compiles to a single machine instruction, yet the results
> are CPU manufacturer-dependent.
> ----
> import std.math;
> void main()
> {
>     assert( yl2x(0x1.0076fc5cc7933866p+40L, LN2)
>             == 0x1.bba4a9f774f49d0ap+4L); // Passes on Intel, fails on AMD
> }
> ----
> The results for yl2x(0x1.0076fc5cc7933866p+40L, LN2) are:
> Intel: 0x1.bba4a9f774f49d0ap+4L
> AMD: 0x1.bba4a9f774f49d0cp+4L
> The least significant bit is different. This corresponds to only a
> fraction of a bit of accuracy, so it is hardly important in practice
> (for comparison, sin and cos on x86 lose nearly sixty bits of accuracy
> in some cases!). Its significance is only that it is an undocumented
> difference between manufacturers.
> The difference was discovered through the unit tests for the
> mathematical Special Functions that will be included in the next
> compiler release. The discrepancy came to light only because of
> several features of D:
> - built-in unit tests (encouraging tests to be run on many machines)
> - built-in code coverage (the tests include extreme cases simply
> because I was trying to push the code coverage up to high values)
> - built-in support for hexadecimal floating-point literals. Without
> this feature, the discrepancy would have been blamed on differences
> in the floating-point conversion functions in the C standard library.
> This experience reinforces my belief that D is an excellent language for
> scientific computing.
> Thanks to David Simcha and Dmitry Olshansky for help in tracking this down.
Must have made you smile ;)
Slightly related, do you have some code to convert a hex float string to float?
I think the hex format is a nice compromise between size and readability.
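For reference, one workaround is to lean on the C runtime's strtold, which
accepts the 0x1.xxxxpNN format. A minimal sketch (hexToReal is just an
illustrative name, not a Phobos function):
----
import core.stdc.stdlib : strtold;
import std.string : toStringz;
import std.stdio : writefln;

// Parse a hex-float string such as "0x1.bba4a9f774f49d0ap+4" by handing
// it, null-terminated, to the C runtime's strtold.
real hexToReal(string s)
{
    return strtold(s.toStringz, null);
}

void main()
{
    // %a prints the value back out in hex-float form for comparison.
    writefln("%a", hexToReal("0x1.bba4a9f774f49d0ap+4"));
}
----
std.conv may also accept hex-float strings directly; the strtold route is
just a lowest-common-denominator fallback.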
Regarding unit tests, I should really use them :(
I use std2 in my D1 project, and a few of std2's own unit tests fail, so I
run my tests() manually...
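On that note, the built-in mechanism is just a unittest block compiled in
with dmd -unittest (plus -cov for the coverage figures Don mentions). A
minimal sketch, with hypotenuse as a made-up example function:
----
import std.math : sqrt, feqrel;

real hypotenuse(real a, real b)
{
    return sqrt(a * a + b * b);
}

// Runs automatically before main() when the program is compiled with
// dmd -unittest (add -cov to get a per-line coverage report).
unittest
{
    // feqrel reports how many leading mantissa bits of the two values agree.
    assert(feqrel(hypotenuse(3.0L, 4.0L), 5.0L) >= real.mant_dig - 1);
}

void main() {}
----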