Efficient Symmetric level-index arithmetic in D?

Michelle Long HappyDance321 at gmail.com
Fri Mar 1 16:07:53 UTC 2019


On Friday, 1 March 2019 at 15:35:33 UTC, H. S. Teoh wrote:
> On Fri, Mar 01, 2019 at 03:02:04PM +0000, Simen Kjærås via 
> Digitalmars-d wrote:
>> On Friday, 1 March 2019 at 14:00:46 UTC, Olivier FAURE wrote:
>> > On Wednesday, 27 February 2019 at 20:47:34 UTC, Michelle 
>> > Long wrote:
>> > > https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4927227/ 
>> > > https://en.wikipedia.org/wiki/Symmetric_level-index_arithmetic
>> > 
>> > tl;dr ?
>> 
>> It's a different way of storing and operating on numbers, 
>> essentially.
> [...]
>> I'd never heard of it before, but found it an interesting 
>> idea, like the unum format that has been posted here once or 
>> twice. Interesting on a theoretical level, but probably not 
>> useful for D.
> [...]
>
> I've encountered such number representations before, in the 
> form of a Perl script called hypercalc.pl that, paraphrasing 
> the author, allows you to compute the factorial of the US 
> national debt raised to the power of the number of atoms in the 
> universe, and make meaningful comparisons of the result with 
> other humongous numbers.
>
> I've never heard of such a representation being useful outside 
> of pure mathematical curiosity, though.  For one thing, it 
> (greatly!) expands the range of representable numbers by 
> (greatly!) diminishing accuracy. A number like e^e^e^...(n 
> times)..^x has an error margin of roughly e^e^... (n-1 times), 
> which for any non-trivial values of n is pretty huge. e^e^1, 
> for example, is "only" around 15, but e^e^e^1 is around 
> 3814279.10476. So e^e^e^e^x has a really big margin of error, 
> and that's only for n=4. For large values of n, the error is so 
> large it doesn't even fit in IEEE 754 double(!).  This 
> inaccuracy is most obvious when subtracting values of equal (or 
> almost equal) magnitude.  Because of the inherent inaccuracy in 
> the representation, it's often not possible to tell whether the 
> result of a subtraction is 0, or some number on the order of 
> magnitude of e^e^...(n-k times)...^x for some small k, which, 
> for large n, can be so huge you can't even represent it in an 
> IEEE double. So while inequalities generally work fine, 
> equality is in general not a very meaningful operation with 
> this representation (except for exact equality, which in real 
> computations rarely ever happens).
>
> So I don't think such a representation has general 
> applicability in terms of computations most people would be 
> working with, as opposed to specialized applications that 
> primarily deal with humongous numbers (or near-infinitesimal 
> numbers).
>
>
It is no worse than FP. In the range -1..1 it is essentially FP 
with a few bits shaved off, so whatever issues it has, FP has too, 
and vice versa.

It uses those few bits to add very large and very small 
representable numbers (it approximates the infinitely large and the 
infinitesimally small very well).
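
Roughly, the level-index idea looks like this in code (just a 
sketch that uses a plain double as the carrier for the level and 
index, not a real packed SLI format):

import std.math : exp, log;
import std.stdio : writeln;

struct LevelIndex
{
    int level;     // number of nested exponentials
    double index;  // what is left over, in [0, 1)
}

// Decompose x >= 0 into x = exp(exp(...exp(index)...)) with
// `level` nested exps (level 0 when 0 <= x < 1).
LevelIndex toLevelIndex(double x)
{
    int level = 0;
    while (x >= 1)
    {
        x = log(x);  // peel off one exponentiation
        ++level;
    }
    return LevelIndex(level, x);
}

// Rebuild the value from its level-index form.
double fromLevelIndex(LevelIndex li)
{
    double x = li.index;
    foreach (_; 0 .. li.level)
        x = exp(x);
    return x;
}

void main()
{
    auto li = toLevelIndex(3814279.1);    // roughly e^e^e
    writeln(li.level, " ", li.index);     // a packed format stores these
    writeln(fromLevelIndex(li));          // round-trips up to rounding
}

A real format packs the level and index into one fixed-size word; 
the sign plus a reciprocal flag give the "symmetric" part for 
magnitudes below 1.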


The benefit is that MANY scientific algorithms can stall at 
certain points because the representation cannot express the values 
the algorithm needs to move beyond them.

With SLI one can represent those values, so the algorithm can 
actually produce a new value that would not be possible otherwise.


With FP one essentially gets 0 for any value below the smallest 
representable magnitude. With SLI one can get far closer to it. So 
it works well for unstable problems such as matrices with 
determinants near 0, and other numerically unstable problems, which 
are in fact common.
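
For example (again just a sketch, and SLI/fromPow are names I made 
up here): a magnitude like 1.5^-3000, about 1e-528, underflows an 
IEEE double to 0, but its symmetric level-index form is perfectly 
finite because magnitudes below 1 are stored as the level-index of 
their reciprocal:

import std.math : log;
import std.stdio : writeln;

// Hypothetical symmetric form: magnitudes below 1 are stored as
// the level-index of their reciprocal plus a flag.
struct SLI
{
    bool reciprocal;  // true: the stored form encodes 1/|x|
    int level;
    double index;
}

// Build the SLI form of base^exponent without ever materialising
// the value itself, so magnitudes far outside double range work.
SLI fromPow(double base, long exponent)
{
    SLI s;
    double x = exponent * log(base);            // log of the magnitude
    if (x < 0) { s.reciprocal = true; x = -x; } // store the reciprocal
    s.level = 1;                                // one log already taken
    while (x >= 1) { x = log(x); ++s.level; }
    s.index = x;
    return s;
}

void main()
{
    auto s = fromPow(1.5, -3000);  // a double flushes this value to 0
    writeln(s.reciprocal, " ", s.level, " ", s.index);
}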


Here is a trivial algorithm that fails with FP but passes with 
SLI:

A(x) = { A(x/1.5) if x >= fp.e, else 0}

Then for x = 1,

the recursion is working with x/1.5^n at step n.

Now, we know this is 0 in the limit, but eventually the FP 
representation will not be able to produce anything below fp.e and 
the algorithm will end up in an infinite loop.

With SLI, the value stays representable well past the point where 
FP stalls, although highly inaccurately, so the recursion keeps 
making progress and eventually produces 0.
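
To make the FP side concrete, here is the iteration in D (a 
sketch; I'm reading fp.e as the smallest positive double, and a 
step cap is added so the demonstration halts):

import std.stdio : writefln;

void main()
{
    enum double smallest = double.min_normal * double.epsilon; // 2^-1074
    double x = 1.0;
    ulong steps = 0;
    // Once x reaches the smallest subnormal, x / 1.5 rounds right
    // back to it, so without the cap this loop would spin forever.
    while (x >= smallest && steps < 5_000)
    {
        x /= 1.5;
        ++steps;
    }
    writefln("stuck at x = %a after %s steps", x, steps);
}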

Of course, it's as if we are just using a much larger epsilon 
(whatever the "epsilon" for SLI is)... but that is the point. We 
get the ability to represent the very small changes that some 
algorithms require to be stable, even though we cannot represent 
small values precisely. Sometimes the precision is not that 
important because it is already precise beyond what we need.

In fact, the above algorithm sort of shows it: with FP the 
algorithm is always unstable, but with SLI it passes even though we 
have less precision (since some bits are spent on the tetration 
level).
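
A quick way to see that cost (a sketch; keeping only float 
precision in the index is just a stand-in for whatever bits a real 
SLI word would keep):

import std.math : exp, log;
import std.stdio : writefln;

void main()
{
    double x = 1.0e10;

    // to level-index
    int level = 0;
    double t = x;
    while (t >= 1) { t = log(t); ++level; }

    // keep only float precision in the index, then rebuild
    double index = cast(float) t;
    double y = index;
    foreach (_; 0 .. level) y = exp(y);

    // The relative error comes out many orders of magnitude above
    // double.epsilon (~2.2e-16): the price paid for the extra range.
    writefln("relative error: %.3g", (y - x) / x);
}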

The first link I gave demonstrates this problem for some PDEs.

Essentially SLI is "reshaping" the floating-point format so that 
we sacrifice some of the bits to represent the extremes. In FP 
those extremes are impossible to represent; in many problems they 
are not needed, but in some they are, and without them the problem 
cannot be solved correctly.

More information about the Digitalmars-d mailing list