std.math performance (SSE vs. real)
dennis luehring via Digitalmars-d
digitalmars-d at puremagic.com
Fri Jun 27 06:04:34 PDT 2014
On 27.06.2014 at 14:20, Russel Winder via Digitalmars-d wrote:
> On Fri, 2014-06-27 at 11:10 +0000, John Colvin via Digitalmars-d wrote:
> […]
>> I understand why the current situation exists. In 2000 x87 was
>> the standard and the 80-bit precision came for free.
>
> Real programmers have been using 128-bit floating point for decades. All
> this namby-pamby 80-bit stuff is just an aberration and should never
> have happened.
What consumer hardware and compilers support 128-bit floating point?