std.math performance (SSE vs. real)

Manu via Digitalmars-d digitalmars-d at puremagic.com
Sun Jun 29 19:58:05 PDT 2014


On 28 June 2014 15:16, Walter Bright via Digitalmars-d
<digitalmars-d at puremagic.com> wrote:
> On 6/27/2014 3:50 AM, Manu via Digitalmars-d wrote:
>>
>> Totally agree.
>> Maintaining commitment to deprecated hardware which could be removed
>> from the silicon at any time is a bit of a problem looking forwards.
>> Regardless of the decision about whether overloads are created, at
>> very least, I'd suggest x64 should define real as double, since the
>> x87 is deprecated, and x64 ABI uses the SSE unit. It makes no sense at
>> all to use real under any general circumstances in x64 builds.
>>
>> And aside from that, if you *think* you need real for precision, the
>> truth is, you probably have bigger problems.
>> Double already has massive precision. I find it's extremely rare to
>> have precision problems even with float under most normal usage
>> circumstances, assuming you are conscious of the relative magnitudes
>> of your terms.
>
>
> That's a common perception of people who do not use the floating point unit
> for numerical work, and whose main concern is speed instead of accuracy.
>
> I've done numerical floating point work. Two common cases where such
> precision matters:
>
> 1. numerical integration
> 2. inverting matrices
>
> It's amazing how quickly precision gets overwhelmed and you get garbage
> answers. For example, when inverting a matrix with doubles, the results are
> garbage for larger than 14*14 matrices or so. There are techniques for
> dealing with this, but they are complex and difficult to implement.

This is what I was alluding to wrt being aware of the relative
magnitudes of terms in operations.
You're right that it can be a little complex, but it's usually just a
case of rearranging the operations a bit or, at worst, a temporary
renormalisation from time to time.
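
To make that concrete, here's a rough sketch of the kind of
rearrangement I mean (illustrative numbers only, not from any real
codebase): summing the terms of similar magnitude first stops float
from throwing the small ones away.

import std.stdio;

void main()
{
    // Naive order: each tiny term is added straight to a huge
    // accumulator and is rounded away, because 1e-3 is far below one
    // ulp of 1e8 in float.
    float naive = 1e8f;
    foreach (i; 0 .. 10_000)
        naive += 1e-3f;

    // Rearranged: sum the terms of similar magnitude first, then add
    // the large term once at the end.
    float small = 0.0f;
    foreach (i; 0 .. 10_000)
        small += 1e-3f;
    float rearranged = 1e8f + small;

    writefln("naive:      %.1f", naive);       // 100000000.0 -- the 10.0 vanished
    writefln("rearranged: %.1f", rearranged);  // ~100000008.0, within float's ulp of 8
}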

> Increasing the precision is the most straightforward way to deal with it.

Is a 14*14 matrix really any more common than a 16*16 matrix though?
It just moves the goal post a bit. Numerical integration will always
manage to find its way into crazy big or crazy small numbers. It's
all about relative magnitude with floats.
'real' is only good for about 4 more significant digits... I've often
thought they went a bit overboard on exponent and skimped on mantissa.
Surely most users would reach for a lib in these cases anyway, and
they would be written by an expert.
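
For reference, the built-in type properties spell the gap out; this is
just querying the language, nothing std.math-specific. On x86 with an
80-bit real you get 18 decimal digits against double's 15.

import std.stdio;

void main()
{
    // Decimal digits of precision and mantissa width of each type.
    writefln("float:  %2d digits, %2d-bit mantissa", float.dig,  float.mant_dig);
    writefln("double: %2d digits, %2d-bit mantissa", double.dig, double.mant_dig);
    writefln("real:   %2d digits, %2d-bit mantissa", real.dig,   real.mant_dig);
    // On x86/x86_64 with 80-bit real: 6/24, 15/53, 18/64 respectively.
}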

Either way, I don't think it's sensible to have a std api defy the arch ABI.

> Note that the 80 bit precision comes from W.F. Kahan, and he's no fool when
> dealing with these issues.

I never argued this. I'm just saying I can't see how defying the ABI
in a std api could be seen as a good idea applied generally to all
software.

> Another boring Boeing anecdote: calculators have around 10 digits of
> precision. A colleague of mine was doing a multi-step calculation, and
> rounded each step to 2 decimal points. I told him he needed to keep the full
> 10 digits. He ridiculed me - but his final answer was off by a factor of 2.
> He could not understand why, and I'd explain, but he could never get how his
> 2 places past the decimal point did not work.

Rounding down to 2 decimal places is rather different from rounding
from 19 significant digits down to 15.
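
A deliberately contrived illustration of that difference (just
compounding a made-up 0.4% factor, not the calculation from the
anecdote): chopping every intermediate result to 2 decimal places
destroys the answer outright, while double's ~15 digits don't even
notice.

import std.stdio;
import std.math : round;

void main()
{
    // Compound a 0.4% growth factor 100 times, rounding every
    // intermediate result to 2 decimal places, calculator-style.
    double chopped = 1.0;
    foreach (i; 0 .. 100)
        chopped = round(chopped * 1.004 * 100) / 100;

    // The same calculation carried at full double precision.
    double full = 1.0;
    foreach (i; 0 .. 100)
        full *= 1.004;

    writefln("2 decimals each step: %.4f", chopped);  // 1.0000 -- the growth never registers
    writefln("full double:          %.4f", full);     // ~1.4907
}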

> Do you think engineers like that will ever understand the problems with
> double precision, or have the remotest idea how to deal with them beyond
> increasing the precision? I don't.

I think they would use a library.
Either way, those jobs are so rare that I don't see it's worth
defying the arch ABI across the board for them.

I think there should be a 'double' overload. The existing real
overload would be chosen when people use the real type explicitly.
Another advantage of this is that when people are using the double
type, the API will produce the same results on all architectures,
including the ones that don't have 'real'.
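
Something like this is all I'm picturing. The names are hypothetical
and the bodies just forward to std.math.sin so the sketch compiles; in
the real thing each overload would map to the appropriate double/SSE
or single-precision routine.

import std.math : sin;
import std.stdio;

// Hypothetical overload set -- illustrative only, not actual std.math source.
real   mySin(real x)   { return sin(x); }                         // existing real path
double mySin(double x) { return cast(double) sin(cast(real) x); } // would be a double/SSE routine
float  mySin(float x)  { return cast(float) sin(cast(real) x); }  // would be a single-precision routine

void main()
{
    double d = 1.0;
    real   r = 1.0L;
    writeln(mySin(d));  // resolves to the double overload: same result on every
                        // architecture, including ones that don't have 'real'
    writeln(mySin(r));  // the real overload is picked only when the caller
                        // explicitly uses the real type
}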

>> I find it's extremely rare to have precision problems even with float
>> under most normal usage
>> circumstances,
>
> Then you aren't doing numerical work, because it happens right away.

My key skillset includes physics, lighting, rendering, animation.
These are all highly numerical workloads.
While I am comfortable with some acceptable level of precision loss
for performance, I possibly have to worry about maintaining numerical
precision even more, since I use low-precision types exclusively. I
understand the problem very well, probably better than most. More
often than not, the problems are easily mitigated by rearranging
operations so that they are performed against terms of similar
relative magnitude, or in some instances, by temporarily
renormalising terms.
I agree these aren't skills that most people have, but most people use
libraries for complex numerical work... or would, if such a robust
library existed.
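
'Temporary renormalisation' in practice is often as simple as scaling
by the biggest term, doing the work in a friendly range, and scaling
back. A throwaway example (not from any engine):

import std.stdio;
import std.math : sqrt, fmax, fabs;

// Length of a 2D vector in float without blowing up: x*x overflows
// float well before x itself does, so renormalise by the largest
// component first.
float length2D(float x, float y)
{
    float m = fmax(fabs(x), fabs(y));
    if (m == 0.0f)
        return 0.0f;
    float nx = x / m, ny = y / m;        // both now in [-1, 1]
    return m * sqrt(nx * nx + ny * ny);  // squares can't overflow or underflow
}

void main()
{
    // Naive sqrt(x*x + y*y) overflows float here (3e20^2 > float.max),
    // even though the true length, ~3.16e20, fits comfortably in a float.
    float x = 3e20f, y = 1e20f;
    writeln(sqrt(x * x + y * y));  // inf
    writeln(length2D(x, y));       // ~3.162e+20
}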

Thing is, *everybody* will use std.math.

