std.math performance (SSE vs. real)

francesco cattoglio via Digitalmars-d digitalmars-d at puremagic.com
Sat Jun 28 02:47:52 PDT 2014


On Saturday, 28 June 2014 at 09:07:17 UTC, John Colvin wrote:
> On Saturday, 28 June 2014 at 06:16:51 UTC, Walter Bright wrote:
>> On 6/27/2014 10:18 PM, Walter Bright wrote:
>>> On 6/27/2014 4:10 AM, John Colvin wrote:
>>>> *The number of algorithms that are both numerically 
>>>> stable/correct and benefit
>>>> significantly from > 64bit doubles is very small.
>>>
>>> To be blunt, baloney. I ran into these problems ALL THE TIME 
>>> when doing
>>> professional numerical work.
>>>
>> Sorry for being so abrupt. FP is important to me - it's not 
>> just about performance, it's also about accuracy.

When you need accuracy, 999 times out of 1000 you change the
numerical technique; you don't just blindly upgrade the precision.
The only real reason to use 80 bits is when you actually need to
add values that differ by more than about 16 orders of magnitude,
and I've never seen that happen in any numerical paper I've read.
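
To make that concrete, here is a minimal sketch (the function name,
the array size and the 1e-17 magnitude are just picked for the
example): naive 64-bit summation silently drops terms smaller than
half an ulp of the running sum, while compensated (Kahan) summation
recovers them without ever reaching for an 80-bit real:

import std.stdio;

double kahanSum(const(double)[] xs)
{
    double sum = 0.0;
    double c   = 0.0;            // compensation for lost low-order bits
    foreach (x; xs)
    {
        immutable y = x - c;
        immutable t = sum + y;
        c   = (t - sum) - y;     // what got rounded away in sum + y
        sum = t;
    }
    return sum;
}

void main()
{
    // 1.0 followed by ten million terms of 1e-17: each term is below
    // half an ulp of the running sum, so naive addition drops them all.
    auto xs = new double[](10_000_001);
    xs[]  = 1e-17;
    xs[0] = 1.0;

    double naive = 0.0;
    foreach (x; xs)
        naive += x;

    writefln("naive summation: %.17g", naive);        // stays exactly 1.0
    writefln("Kahan summation: %.17g", kahanSum(xs)); // ~1.0000000001
}

The fix is algorithmic, not a wider accumulator.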

> I still maintain that the need for the precision of 80bit reals 
> is a niche demand. It's a very important niche, but it doesn't
> justify having its relatively extreme requirements be the 
> default. Someone writing a matrix inversion has only themselves 
> to blame if they don't know plenty of numerical analysis and 
> look very carefully at the specifications of all operations 
> they are using.

Couldn't agree more. 80-bit precision IS a niche, and a really
nice one to have, but it shouldn't be the default if it costs us
performance.

> Paying the cost of moving to/from the FPU, missing out on
> increasingly large SIMD units, these make everyone pay the
> price.

The numerical analysts themselves will be the first to pay that
price: 64-bit math HAS to be as fast as possible if you want to
be competitive in any kind of numerical work.
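
Here is the kind of comparison I have in mind, as a rough sketch
only (the array size, the repetition count and the accumulator
functions are made up for illustration, and the actual numbers
depend entirely on your compiler and flags): sum the same array
once with a double accumulator and once with a real one.

import std.datetime.stopwatch : benchmark;
import std.stdio;

double sumDouble(const(double)[] xs)
{
    double s = 0.0;
    foreach (x; xs) s += x;      // plain 64-bit loop, SIMD-friendly
    return s;
}

real sumReal(const(double)[] xs)
{
    real s = 0.0;                // 80-bit accumulator (x87 on x86)
    foreach (x; xs) s += x;
    return s;
}

void main()
{
    auto xs = new double[](10_000_000);
    xs[] = 1.000001;

    double rd;
    real   rr;
    auto times = benchmark!(() { rd = sumDouble(xs); },
                            () { rr = sumReal(xs); })(20);

    writeln("double accumulator: ", times[0]);
    writeln("real   accumulator: ", times[1]);
    writeln(rd, " ", rr);        // keep the results alive
}

On x86 the double loop can be vectorised with SSE/AVX, while the
real loop has to go through the x87 unit, which is exactly the
price John is talking about.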

