Phobos unit testing uncovers a CPU bug

Don nospam at nospam.com
Fri Nov 26 12:02:23 PST 2010


The code below compiles to a single machine instruction (the x87 fyl2x), yet 
its result depends on the CPU manufacturer.
----
import std.math;

void main()
{
    assert(yl2x(0x1.0076fc5cc7933866p+40L, LN2)
        == 0x1.bba4a9f774f49d0ap+4L); // Passes on Intel, fails on AMD
}
----
The results for yl2x(0x1.0076fc5cc7933866p+40L, LN2) are:

Intel:  0x1.bba4a9f774f49d0ap+4L
AMD:    0x1.bba4a9f774f49d0cp+4L

The least significant bit differs. That amounts to only a fraction of a 
bit of error, so it is hardly important for accuracy (for comparison, sin 
and cos on x86 lose nearly sixty bits of accuracy in some cases!). It 
matters only because it is an undocumented difference between 
manufacturers.
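For context, yl2x(x, y) evaluates y * log2(x), so yl2x(x, LN2) is simply 
the natural logarithm of x. A quick sanity check of the values above can be 
sketched in Python (an illustration only: Python has 53-bit doubles, not 
the 80-bit reals used here, so the check is approximate):

```python
import math

# The argument and the Intel result above, rounded to 53-bit doubles.
x     = float.fromhex('0x1.0076fc5cc7933866p+40')
intel = float.fromhex('0x1.bba4a9f774f49d0ap+4')

# yl2x(x, LN2) == LN2 * log2(x) == ln(x)
r = math.log(x)
assert abs(r - intel) / intel < 1e-12  # agrees to double precision
```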

The difference was discovered through the unit tests for the mathematical 
special functions that will be included in the next compiler release. The 
discrepancy came to light only because of several features of D:

- built-in unit tests (encourages tests to be run on many machines)

- built-in code coverage (the tests include extreme cases, simply 
because I was trying to increase the code coverage to high values)

- D supports hexadecimal floating-point literals. Without this feature, 
the discrepancy would have been blamed on differences in the 
floating-point conversion functions in the C standard library.
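Hex float notation exists in other languages too (C99's %a format, or 
Python's float.hex/float.fromhex for 64-bit doubles); a short Python 
sketch shows why it matters here. Hex literals round-trip exactly, with 
no decimal conversion involved, and rounding the two 80-bit results to 
double precision makes them identical, so the discrepancy is only 
observable with extended-precision reals and exact hex literals:

```python
# Hex float notation is exact: round trips are lossless.
v = float.fromhex('0x1.921fb54442d18p+1')
assert float.fromhex(v.hex()) == v

# The Intel and AMD results differ only below the 53rd bit of the
# fraction, so rounded to a 64-bit double they are the same number.
intel = float.fromhex('0x1.bba4a9f774f49d0ap+4')
amd   = float.fromhex('0x1.bba4a9f774f49d0cp+4')
assert intel == amd
```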

This experience reinforces my belief that D is an excellent language for 
scientific computing.

Thanks to David Simcha and Dmitry Olshansky for help in tracking this down.


More information about the Digitalmars-d-announce mailing list