std.math performance (SSE vs. real)

Iain Buclaw via Digitalmars-d digitalmars-d at puremagic.com
Sun Jun 29 14:28:19 PDT 2014


On 29 June 2014 22:04, John Colvin via Digitalmars-d
<digitalmars-d at puremagic.com> wrote:
> On Sunday, 29 June 2014 at 19:22:16 UTC, Walter Bright wrote:
>>
>> On 6/29/2014 11:21 AM, Russel Winder via Digitalmars-d wrote:
>>>
>>> Because when reading the code you haven't got a f####### clue how
>>> accurate the floating point number is until you ask and answer the
>>> question "and which processor are you running this code on".
>>
>>
>> That is not true with D. D specifies that float and double are IEEE 754
>> types which have a specified size and behavior. D's real type is the largest
>> the underlying hardware will support.
>>
>> D also specifies 'int' is 32 bits, 'long' is 64, and 'byte' is 8, 'short'
>> is 16.
>
>
> I'm afraid that it is exactly true if you use `real`.
>

There seems to be a circular argument going round here; it's tiring to
bring up the same point over and over again.


> What important use-case is there for using `real` that shouldn't also be
> accompanied by a `static assert(real.sizeof >= 10);` or similar, for
> correctness reasons?
>

That breaks portability.  There is just too much code out there that uses
real, and besides, druntime/phobos math has already been ported to
handle all cases where real == 64 bits.

> Assuming there isn't one, then what is the point of having a type with
> hardware dependant precision? Isn't it just a useless abstraction over the
> hardware that obscures useful intent?
>
> mixin(`alias real` ~ (real.sizeof*8).stringof ~ ` = real;`);
>

Good luck guessing which one to use.  On GDC you have a choice of
three or four, depending on what the default -m flags are. ;)


More information about the Digitalmars-d mailing list