What features of D you would not miss?

Timon Gehr timon.gehr at gmx.ch
Sun Sep 18 23:11:14 UTC 2022


On 18.09.22 22:33, Walter Bright wrote:
> 
>> (There is a trick where you use two or more doubles to represent a 
>> number with more mantissa bits though and it is possible that with 
>> AVX, performance may be competitive with 80 bit floats, but 
>> vectorising that code manually is more work.)
> 
> I didn't know about this. Is there an article on how it works?

I picked it up back in high school for a CPU-based fractal rendering 
project using vectorization, multithreading and some clever algorithms 
for avoiding computations. (I should port it to D and publish it; a lot 
of it is Intel-style x86 inline assembly.) Unfortunately I don't recall 
all the online resources I found at the time. I only needed addition 
and multiplication. Addition is rather straightforward, but 
multiplication required splitting the mantissa with Dekker's trick.
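
For illustration, here is a minimal sketch in D of the two building 
blocks I mean: an error-free addition (Knuth's two-sum) and an 
error-free product via Dekker's splitting. The function names are just 
for this post, not from any particular library:

import std.stdio;

// Error-free addition (Knuth's two-sum): s + e == a + b exactly,
// where s = fl(a + b) and e captures the rounding error.
void twoSum(double a, double b, out double s, out double e)
{
    s = a + b;
    double bv = s - a;
    e = (a - (s - bv)) + (b - bv);
}

// Dekker's splitting: write x as hi + lo, where each part fits in
// half of the 53-bit mantissa, so products of the parts are exact.
void split(double x, out double hi, out double lo)
{
    enum double splitter = 134217729.0; // 2^27 + 1
    double t = splitter * x;
    hi = t - (t - x);
    lo = x - hi;
}

// Error-free multiplication (Dekker's two-product): p + e == a * b exactly.
void twoProd(double a, double b, out double p, out double e)
{
    p = a * b;
    double ahi, alo, bhi, blo;
    split(a, ahi, alo);
    split(b, bhi, blo);
    e = ((ahi * bhi - p) + ahi * blo + alo * bhi) + alo * blo;
}

void main()
{
    double s, e;
    twoSum(1.0, 1e-30, s, e);
    writefln("%.17g + %g", s, e); // e recovers the part lost to rounding
}

A double-double addition or multiplication then combines these error 
terms with the low words of the operands and renormalizes the result.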

I think this is the original paper (it also credits several others with 
similar ideas; e.g., Kahan summation, which is in Phobos, is based on 
the same error-compensation idea, and the multi-double data types are 
basically a generalization of that idea to other operations):
https://csclub.uwaterloo.ca/~pbarfuss/dekker1971.pdf
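
For comparison, a minimal sketch of compensated (Kahan) summation in D, 
to show the error-compensation idea that the multi-double formats 
generalize. This is just my own illustration, not the Phobos 
implementation:

import std.stdio;

// Kahan summation: the running compensation c plays the same role as
// the error term e in two-sum above.
double kahanSum(const double[] xs)
{
    double s = 0.0, c = 0.0;
    foreach (x; xs)
    {
        double y = x - c;  // apply the correction from the previous step
        double t = s + y;  // low-order bits of y are lost here...
        c = (t - s) - y;   // ...and recovered into c
        s = t;
    }
    return s;
}

void main()
{
    auto xs = [1.0, 1e-16, 1e-16, 1e-16, 1e-16];
    writefln("naive: %.17g", xs[0] + xs[1] + xs[2] + xs[3] + xs[4]);
    writefln("kahan: %.17g", kahanSum(xs));
}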

This seems to be a more recent account; maybe it is easier to read:
https://web.mit.edu/tabbott/Public/quaddouble-debian/qd-2.3.4-old/docs/qd.pdf

I also found this Julia library: 
https://github.com/JuliaMath/DoubleFloats.jl

I used the search terms "dekker double-double floating point"; there 
might be even better articles out there.

(This is an example of an application where implicitly extending 
precision for some subset of the calculations can give you worse 
overall results: the error-free transformations above only work if 
every operation is rounded to exactly double precision, so evaluating 
some intermediates at 80-bit precision produces incorrect error terms.)

