std.math performance (SSE vs. real)

Joseph Rushton Wakeling via Digitalmars-d digitalmars-d at puremagic.com
Thu Jul 3 06:51:45 PDT 2014


On Thursday, 3 July 2014 at 11:21:34 UTC, Iain Buclaw via 
Digitalmars-d wrote:
> The spec should be clearer on that.  The language should 
> respect the long double ABI of the platform it is targeting
> - so if the compiler is targeting a system where real is
> 96-bit, but the max supported on the chip is 128-bit, the
> compiler should *still* map real to the 96-bit long doubles,
> unless explicitly told otherwise on the command-line.

This would be a change in the standard, no?  "The long double ABI 
of the target platform" is not necessarily the same as the 
current definition of real as the largest hardware-supported 
floating-point type.

I can't help but feel that this is another case where the 
definition of real in the D spec, and its practical use in the 
implementation, have wound up in conflict because of assumptions 
made relative to x86, where it's simply a nice coincidence that 
the largest hardware-supported FP type and the long double type 
happen to be the same.
