std.math performance (SSE vs. real)
Walter Bright via Digitalmars-d
digitalmars-d at puremagic.com
Fri Jun 27 22:16:30 PDT 2014
On 6/27/2014 3:50 AM, Manu via Digitalmars-d wrote:
> Totally agree.
> Maintaining commitment to deprecated hardware which could be removed
> from the silicon at any time is a bit of a problem looking forwards.
> Regardless of the decision about whether overloads are created, at
> very least, I'd suggest x64 should define real as double, since the
> x87 is deprecated, and x64 ABI uses the SSE unit. It makes no sense at
> all to use real under any general circumstances in x64 builds.
>
> And aside from that, if you *think* you need real for precision, the
> truth is, you probably have bigger problems.
> Double already has massive precision. I find it's extremely rare to
> have precision problems even with float under most normal usage
> circumstances, assuming you are conscious of the relative magnitudes
> of your terms.
That's a common perception of people who do not use the floating point unit for
numerical work, and whose main concern is speed instead of accuracy.
I've done numerical floating point work. Two common cases where such precision
matters:
1. numerical integration
2. inverting matrices
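To illustrate the first case: a numerical integrator is essentially one long accumulation, and the lower the precision, the sooner the running total swamps the terms being added. A quick sketch (Python rather than D, purely so it's easy to run anywhere; single precision is simulated by rounding each operation through a 32-bit float with struct):

```python
import struct

def to_f32(x):
    """Round a double to the nearest 32-bit float (simulated single precision)."""
    return struct.unpack('f', struct.pack('f', x))[0]

N = 5_000_000

# The harmonic sum 1 + 1/2 + ... + 1/N: the same kind of long
# accumulation a numerical integrator performs.
s64 = 0.0  # accumulate in double
s32 = 0.0  # accumulate in simulated single precision
for n in range(1, N + 1):
    term = 1.0 / n
    s64 += term
    s32 = to_f32(s32 + to_f32(term))

print(f"double: {s64:.6f}")  # ~16.0021, i.e. about ln(N) + 0.5772
print(f"single: {s32:.6f}")  # ~15.40: the sum froze once 1/n fell below
                             # half an ulp of the running total
```

The double result is still good to many digits, while the single precision sum stopped moving long before the loop ended: terms smaller than half an ulp of the running total simply vanish. The same mechanism, further out, bites doubles too.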
It's amazing how quickly precision gets overwhelmed and you get garbage answers.
For example, when inverting a matrix of doubles, the results are garbage for
matrices larger than about 14*14. There are techniques for dealing with this,
but they are complex and difficult to implement.
Increasing the precision is the most straightforward way to deal with it.
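The matrix case is easy to reproduce. The sketch below (again Python for convenience, not D) runs the same Gauss-Jordan inversion twice on the Hilbert matrix, a classic ill-conditioned test case, once in IEEE doubles and once in exact rational arithmetic, and measures how wrong the double result is:

```python
from fractions import Fraction

def invert(a):
    """Gauss-Jordan elimination with partial pivoting. Works elementwise,
    so it runs in whatever arithmetic the entries use (float or Fraction)."""
    n = len(a)
    one, zero = a[0][0] / a[0][0], a[0][0] - a[0][0]  # 1 and 0 of the entry type
    # augment with the identity matrix
    m = [list(row) + [one if i == j else zero for j in range(n)]
         for i, row in enumerate(a)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        p = m[col][col]
        m[col] = [x / p for x in m[col]]
        for r in range(n):
            if r != col and m[r][col] != zero:
                f = m[r][col]
                m[r] = [x - f * y for x, y in zip(m[r], m[col])]
    return [row[n:] for row in m]

def hilbert(n, num):
    """Hilbert matrix H[i][j] = 1/(i+j+1); num picks the arithmetic."""
    return [[num(1) / num(i + j + 1) for j in range(n)] for i in range(n)]

n = 13
approx = invert(hilbert(n, float))     # IEEE double arithmetic
exact  = invert(hilbert(n, Fraction))  # exact rational arithmetic

# Largest relative error in the double-precision inverse:
err = max(abs((approx[i][j] - exact[i][j]) / exact[i][j])
          for i in range(n) for j in range(n))
print(f"{n}x{n} Hilbert inverse, max relative error in doubles: {err:.3g}")
```

At n = 13 the double result isn't merely imprecise, the worst entries don't have a single correct digit, while at n = 6 or so the same routine still does fine. The Hilbert matrix is an extreme case, but it shows how an inversion can go from fine to garbage well before the matrix looks "large".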
Note that the 80-bit extended precision format comes from W. Kahan, and he's no
fool when dealing with these issues.
Another boring Boeing anecdote: calculators have around 10 digits of precision.
A colleague of mine was doing a multi-step calculation, rounding each step to
2 decimal places. I told him he needed to keep the full 10 digits. He ridiculed
me - but his final answer was off by a factor of 2. He could not understand why,
and though I explained it, he never grasped how keeping only 2 places past the
decimal point could fail.
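A contrived multi-step calculation (not the one from the anecdote) shows how intermediate rounding can do exactly this: compound a 0.4% growth factor 200 times, rounding each step to 2 decimal places.

```python
full = 1.0
rounded = 1.0
for _ in range(200):
    full *= 1.004
    # keep only 2 places past the decimal point at every step
    rounded = round(rounded * 1.004, 2)

print(f"full precision: {full:.4f}")     # ~2.2220
print(f"rounded steps:  {rounded:.2f}")  # 1.00: 1.004 rounds back down
                                         # to 1.00 every single time
```

Each per-step change is smaller than the precision being kept, so it is rounded away entirely, and the final answer comes out off by more than a factor of 2.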
Do you think engineers like that will ever understand the problems with double
precision, or have the remotest idea how to deal with them beyond increasing the
precision? I don't.
> I find it's extremely rare to have precision problems even with float
> under most normal usage circumstances,
Then you aren't doing numerical work, because it happens right away.