Why is std.math slower than the C baseline?
Robert M. Münch
robert.muench at saphirion.com
Tue Jun 9 10:12:10 UTC 2020
On 2020-06-06 05:58:37 +0000, Nathan S. said:
> On Friday, 5 June 2020 at 20:05:56 UTC, jmh530 wrote:
>> On Friday, 5 June 2020 at 19:39:26 UTC, Andrei Alexandrescu wrote:
>>> [snip]
>>>
>>> This needs to change. It's one thing to offer more precision to the
>>> user who consciously uses 80-bit reals, and it's an entirely different
>>> thing to force that bias on the user who's okay with double precision.
>>
>> I agree with you that more precision should be opt-in.
>>
>> However, I have always been sympathetic to Walter's argument in favor
>> doing intermediates at the highest precision. There are many good
>> reasons why critical calculations need to be done at the highest
>> precision possible.
>
> I believe that decision was based on a time when floating point math on
> common computers was done in higher precision anyway
It's still this way today; your statement reads as if x87 were gone. It's not.
> so explicit `real` didn't cost anything and avoided needless rounding.
It's still that way today.
And one feature I like a lot about D is that I have simple access to
80-bit FP precision. I would even like to have 128-bit FP, but
Intel won't do it.
All the GPU and AI/ML hype is focused on 64-bit FP or less. There is no
glory in giving up additional precision.
As already stated in this thread: why not implement the code as
templates and provide some pre-instantiated wrappers?
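The template-plus-wrappers idea would look roughly like this. Sketched in C++ rather than D (the mechanism is the same: one generic implementation, instantiated once per precision); `my_exp` and the wrapper names are hypothetical, and the Taylor series is a toy body standing in for a real math routine:

```cpp
#include <cmath>

// One generic implementation, written once against the element type T.
// Illustration only: a truncated Taylor series for exp(x).
template <typename T>
T my_exp(T x) {
    T term = 1, sum = 1;
    for (int i = 1; i < 30; ++i) {
        term *= x / i;   // x^i / i!
        sum += term;
    }
    return sum;
}

// Pre-instantiated wrappers, so callers who want a fixed precision
// (and its speed) never pay for more than they asked for.
float       my_expf(float x)       { return my_exp(x); }
double      my_expd(double x)      { return my_exp(x); }
long double my_expl(long double x) { return my_exp(x); }
```

Users who consciously want 80-bit intermediates call the `long double` (or, in D, `real`) instantiation; everyone else gets the narrower, faster one by default.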
--
Robert M. Münch
http://www.saphirion.com
smarter | better | faster
More information about the Digitalmars-d mailing list