Efficient Symmetric level-index arithmetic in D?

Michelle Long HappyDance321 at gmail.com
Fri Mar 1 14:56:06 UTC 2019


On Friday, 1 March 2019 at 14:00:46 UTC, Olivier FAURE wrote:
> On Wednesday, 27 February 2019 at 20:47:34 UTC, Michelle Long 
> wrote:
>> https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4927227/
>> https://en.wikipedia.org/wiki/Symmetric_level-index_arithmetic
>
> tl;dr ?

It uses tetration to represent very small and very large numbers 
(far larger and smaller than anyone could ever use). This 
improves the accuracy of many algorithms, since such extreme 
intermediate values can still be represented well enough to 
compute with. The first link shows examples of how much better 
it works than floating point.

e^e^...^e^(frac(x))

where the iteration occurs floor(x) times.

The integer part of x is called the level, and it essentially 
controls how large the number is. Even a few levels give 
astronomical or infinitesimal values.
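
To make that concrete, here is a minimal sketch in D (the names 
psi and psiInverse are mine, not from any library): psi evaluates 
the tower for an index x, and the inverse takes logs until the 
value drops below 1, counting the levels as it goes.

import std.math : exp, log, floor;
import std.stdio : writeln;

// psi(x) = e^e^...^e^frac(x), with floor(x) nested exponentials.
double psi(double x)
{
    double r = x - floor(x);             // the fractional part seeds the tower
    foreach (i; 0 .. cast(int) floor(x))
        r = exp(r);                      // one exponentiation per level
    return r;
}

// Inverse for v >= 0: the number of logs needed to drop below 1
// is the level; what remains is the fractional part.
double psiInverse(double v)
{
    int level = 0;
    while (v >= 1)
    {
        v = log(v);
        ++level;
    }
    return level + v;
}

void main()
{
    writeln(psi(2.5));              // e^(e^0.5), about 5.20
    writeln(psiInverse(psi(2.5)));  // recovers about 2.5
}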


It's really not much different from FP, except that instead of a 
fixed base raised to an exponent, the representation is an 
iterated (tetrated) exponential, which lets far larger values be 
expressed. The cost, of course, is resolution at those extreme 
values, but that is generally not a problem, since we usually 
only care about the most significant digits.


E.g., an FP format spreads a fixed number of significand bits 
over its range, so its relative resolution is roughly uniform 
between min and max. An SLI format spends less resolution near 
the extremes and in exchange reaches far larger (and smaller) 
values.
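
To see the trade-off (a toy measurement; psi is repeated from the 
sketch above so this stands alone), take a fixed step of 1e-9 in 
index space and look at the gap it corresponds to in value space 
at each level:

import std.math : exp, floor;
import std.stdio : writefln;

double psi(double x)
{
    double r = x - floor(x);
    foreach (i; 0 .. cast(int) floor(x))
        r = exp(r);
    return r;
}

void main()
{
    enum step = 1e-9;    // a fixed step in index space
    foreach (level; 1 .. 5)
    {
        double x = level;
        // the same index step maps to ever larger gaps between values
        writefln("level %s: value %.6g, gap %.3g",
                 level, psi(x), psi(x + step) - psi(x));
    }
}

The gap grows from roughly 1e-9 at level 1 to roughly 0.16 at 
level 4: the same index resolution, wildly different value 
resolution.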

It's better suited to certain problems that can become unstable: 
inside the algorithm, intermediate values get tetrated up into 
very large or very small numbers that are not representable in 
FP, so the FP computation fails, whereas with SLI those values 
can still be represented.
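
For instance (a toy illustration; real SLI arithmetic works on 
the indices directly rather than materializing the values), an 
exponential tower overflows raw doubles after only a few levels, 
while the index that describes the same tower stays tiny:

import std.math : exp;
import std.stdio : writefln;

void main()
{
    // e-tower of height n in raw doubles: e, e^e, e^e^e, ...
    double v = 1.0;
    foreach (n; 1 .. 7)
    {
        v = exp(v);
        // the tower of height n is psi(n + 1): the index stays small
        writefln("height %s: double = %s, index = %s", n, v, n + 1);
    }
    // the double hits inf at height 4; the index never exceeded 7
}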

What also makes it useful is that it offers very small values. 
These allow building a representation that is far more accurate 
near zero than anything achievable with FP.

E.g., FP has a smallest representable value; one cannot achieve 
any better accuracy below that, because nothing more precise 
exists in the format.
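
A sketch of that idea (the struct and its names are hypothetical, 
just to show the shape of a symmetric representation): magnitudes 
below 1 are stored as the reciprocal of a tower, so only a small 
index ever has to fit in hardware.

import std.math : exp, floor;
import std.stdio : writeln;

// Hypothetical symmetric level-index value: |X| >= 1 is psi(index),
// |X| < 1 is 1/psi(index).  (The sign bit is omitted for brevity.)
struct Sli
{
    bool reciprocal; // true: the value is 1 / psi(index)
    double index;    // level = floor(index); frac(index) seeds the tower
}

// Best-effort conversion to double; extreme levels over- or underflow.
double toDouble(Sli s)
{
    double r = s.index - floor(s.index);
    foreach (i; 0 .. cast(int) floor(s.index))
        r = exp(r);
    return s.reciprocal ? 1.0 / r : r;
}

void main()
{
    auto tiny = Sli(true, 5.0);  // 1 / (e^e^e^e^1), far below double's range
    writeln(toDouble(tiny));     // underflows to 0 as a double, yet the
                                 // Sli denotes it with just the index 5
}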

With SLI, much smaller values are available, so far more 
precision can be represented (though the gaps between 
neighbouring representable values are larger). What makes it 
different is that these small differences are incoherent and act 
as a sort of noise that helps some algorithms converge faster. 
With FP one is stuck at a fixed minimum, and if an algorithm gets 
caught in a "cycle" there, it cannot escape. With tetration one 
can reach smaller values and escape the cycle (as if adding 
finer-grained noise to the value).

If, say, the smallest representable step is 1 and the algorithm 
is something like x = 1 - x^2, then all we can ever get is 0 and 
1. If we can represent smaller values, say 1/2, then one step 
gives 3/4, and the iteration escapes into new values. With 
floating point it's impossible to get below the format's minimum, 
but with tetration one can represent values smaller than anything 
achievable in the universe at very little cost.
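
The same thing in code (plain doubles standing in for the two 
grids; an illustration of the idea, not SLI itself):

import std.stdio : writefln;

void main()
{
    // If only 0 and 1 are representable, x -> 1 - x^2 is locked
    // in the 2-cycle 0, 1, 0, 1, ...
    double x = 1.0;
    foreach (i; 0 .. 4)
    {
        x = 1.0 - x * x;   // stays exactly on {0, 1}
        writefln("coarse %s: %s", i, x);
    }

    // With 1/2 representable, one step reaches 3/4 and the orbit
    // starts visiting new values.
    x = 0.5;
    foreach (i; 0 .. 4)
    {
        x = 1.0 - x * x;
        writefln("fine   %s: %s", i, x);
    }
}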

Ultimately it's just a non-uniform scaling that provides some 
very small and very large values in a clean way that uniform 
scaling can't, and that helps with problems that need better 
precision at the extremes. It also avoids having to work around 
those problems with more complicated algorithms.
