Implement the "unum" representation in D?

Anthony Di Franco via Digitalmars-d digitalmars-d at puremagic.com
Thu Sep 17 16:53:28 PDT 2015


On Tuesday, 15 September 2015 at 05:16:53 UTC, deadalnix wrote:
> On Saturday, 11 July 2015 at 03:02:24 UTC, Nick B wrote:
>> On Thursday, 20 February 2014 at 10:10:13 UTC, Nick B wrote:
...
>>
>> If you are at all interested in computer arithmetic or 
>> numerical methods, read this book. It is destined to be a 
>> classic.
>
> To be honest, that sounds like a snake-oil salesman's pitch to 
> me rather than science. It's all hand-waving; nothing concrete 
> is provided, and the whole thing is wrapped in far too many 
> superlatives.
>
> The guy seems to have good credentials. Why should I read that 
> book?

I read the whole book and did not regret it at all, though, 
admittedly, I was already looking for good interval arithmetic 
implementations. I found that the techniques are not too 
different (though improved in important ways) from what is 
mainstream in verified computing, and I found signs that 
techniques like this are standard in beam physics. (Caveat: I am 
not a beam physicist, but a friend of mine at CERN is.) The 
bibliography for MSU's COSY INFINITY, a verified computing tool 
from the beam physics community, provided a lot of interesting 
background information: 
http://www.bt.pa.msu.edu/pub/
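For anyone who hasn't used interval arithmetic before, here is a 
minimal sketch in D of the basic idea. It is my own illustration, 
not code from the book: the Interval type is hypothetical, and it 
approximates directed rounding by widening each result one ulp 
outward with nextDown/nextUp instead of switching the FPU 
rounding mode.

    import std.math : nextDown, nextUp;

    /// A closed interval [lo, hi]. Every operation widens its
    /// result by one ulp on each side, a conservative stand-in for
    /// true directed rounding, so the exact real result is always
    /// enclosed.
    struct Interval
    {
        double lo, hi;

        Interval opBinary(string op : "+")(Interval rhs) const
        {
            return Interval(nextDown(lo + rhs.lo), nextUp(hi + rhs.hi));
        }

        Interval opBinary(string op : "*")(Interval rhs) const
        {
            import std.algorithm.searching : minElement, maxElement;
            double[4] p = [lo * rhs.lo, lo * rhs.hi,
                           hi * rhs.lo, hi * rhs.hi];
            return Interval(nextDown(p[].minElement), nextUp(p[].maxElement));
        }

        bool contains(double x) const { return lo <= x && x <= hi; }
    }

    unittest
    {
        // 0.1 has no exact binary representation; the enclosure
        // still provably contains the true sum 2.1.
        auto a = Interval(1.0, 2.0);
        auto b = Interval(0.1, 0.1);
        assert((a + b).contains(2.1));
    }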

What is perhaps most surprising is not that the techniques work 
well but that they can be implemented efficiently in a sensible 
amount of hardware, little more expensive than bare 
floating-point hardware. Even the most surprising results, those 
on search-based global optimization with unums, have precedent; 
see e.g. "Rigorous Global Search using Taylor Models," M. Berz 
and K. Makino, Symbolic Numeric Computation 2009, pp. 11-19, 
http://bt.pa.msu.edu/cgi-bin/display.pl?name=GOSNC09 
(Taylor-model techniques are similar to direct computation with 
intervals but add a bit more sophistication.)
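To make the connection to rigorous global search concrete: even a 
plain interval branch-and-bound, without the Taylor-model 
refinement of Berz and Makino, can verifiably bound a global 
minimum. The following is my own hypothetical sketch reusing the 
Interval type above; rigorousMin and its width tolerance are 
illustrative names, not anything from the paper.

    /// Hypothetical 1-D branch-and-bound over the Interval type
    /// above. Returns a rigorous upper bound on the global minimum
    /// of f over the domain: any box whose range enclosure lies
    /// entirely above the current bound provably cannot contain
    /// the minimizer and is discarded.
    double rigorousMin(F)(F f, Interval domain, double widthTol)
    {
        import std.algorithm.comparison : min;

        Interval[] work = [domain];
        double best = double.infinity;

        while (work.length)
        {
            Interval x = work[$ - 1];
            work = work[0 .. $ - 1];

            Interval fx = f(x);
            if (fx.lo > best)
                continue;            // cannot contain the global minimum
            best = min(best, fx.hi); // min of f over x is at most fx.hi

            if (x.hi - x.lo > widthTol)
            {
                // bisect and keep refining both halves
                double mid = 0.5 * (x.lo + x.hi);
                work ~= Interval(x.lo, mid);
                work ~= Interval(mid, x.hi);
            }
        }
        return best;
    }

For instance, rigorousMin((Interval x) => x * x, 
Interval(-1.0, 2.0), 1e-6) brackets the true minimum 0 from 
above. The dependency problem, where a variable appearing twice 
in one expression inflates the naive enclosure, is exactly the 
slack that Taylor models cut down.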

So, from my perspective, a unum library would at least be 
interesting and useful, and one roughly in the style of the 
existing Mathematica / Python implementations could reduce unum 
interval computations to floating-point computations with a 
modest amount of overhead. There is a sense in which we might 
expect that small up-front overhead to be well worth it in the 
overall system: less haste, more speed. Hacks that try to 
compensate for incorrect behavior after the fact may end up being 
more costly overall, certainly to the programmer but perhaps also 
to the machine. Consider, for example, the common need to stick a 
loop over an entire vector or matrix into the inner loop of an 
iterative method, renormalizing so that floating-point rounding 
errors do not accumulate.
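As a concrete (and hypothetical) instance of such a 
renormalization pass, consider plain power iteration; the 
function below is my own illustration, not a reference to any 
existing library. The extra O(n) normalization loop inside every 
step exists purely for numerical hygiene.

    import std.math : sqrt;
    import std.numeric : dotProduct;

    /// Power iteration toward the dominant eigenvector of a dense
    /// matrix. The normalization pass is a whole extra sweep over
    /// the vector on every step: without it the iterate's magnitude
    /// grows or shrinks geometrically until it overflows,
    /// underflows, or loses precision.
    double[] powerIteration(double[][] a, double[] v, size_t steps)
    {
        foreach (_; 0 .. steps)
        {
            auto w = new double[](v.length);
            foreach (i, row; a)
                w[i] = dotProduct(row, v);  // w = A * v

            immutable norm = sqrt(dotProduct(w, w));
            foreach (ref x; w)              // the renormalization loop
                x /= norm;
            v = w;
        }
        return v;
    }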

Whether this library should be part of the standard library, I 
don't know; that seems to depend on how much people want the 
standard library to support verified numerical computing. If it 
is clear that such computing needs good support there, something 
like unums should be included, maybe even with other techniques 
built on top of them (Taylor models or Levi-Civita numbers, for 
example).

Anthony
