Complexity nomenclature

Jonny via Digitalmars-d digitalmars-d at puremagic.com
Thu Dec 10 14:25:29 PST 2015


On Monday, 7 December 2015 at 16:15:46 UTC, Andrei Alexandrescu 
wrote:
> On 12/07/2015 09:43 AM, Timon Gehr wrote:
>>
>> 1-2) BigO("1")
>> 3) BigO("count")
>> 4) BigO("distance(first,last)")
>> 5) BigO("ilist.size()")
>>
>> There's also this:
>> On 12/06/2015 12:55 AM, Andrei Alexandrescu wrote:
>>> Well you'd need multiple terms if you want to say things like,
>>> "inserting a range into an array is O(array[].walkLength +
>>> r.walkLength)." When you insert a range in a binary search 
>>> tree, the
>>> complexity would be O(log(array[].walkLength) * r.walkLength).
>>
>> BigO("array[].walkLength + r.walkLength");
>> BigO("log(array[].walkLength) * r.walkLength");
>
> These are still expressible without a DSL: 
> BigO(Atom("array[].walkLength") + Atom("r.walkLength")) etc.
>
> Somewhat independently of DSL or not: At this point I'm unclear 
> whether supporting free variables with arbitrary names is a 
> good thing. The key to unleashing the power of BigO is to 
> compute it when combining functions, and the names seem to be 
> specific to the function and therefore not easy to combine.
>
>
> Andrei

There are many "operations" that can be carried out on methods.

Many are composable and decomposable in the exact same way and 
can be generalized as an algorithm:

let f, f1, f2, etc. be functions,

if f = f1 + f2
then L(f) = L(f1) + L(f2)


Hence, if L is a linear operator, this decomposition always holds. 
Addition (+) is linear, and BigO behaves linearly in this sense. 
Composing linear operators again yields a linear operator (though, 
unlike addition, composition has no inverse).
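As a hedged sketch (not part of any actual D library): if we restrict complexities to polynomials n^k, the linearity above can be modeled by a small value type whose + keeps the dominant term and whose * models loop nesting. The `BigO` struct here is invented for illustration:

```d
// Hypothetical sketch: a minimal BigO value restricted to
// polynomial complexities n^k, where k is the exponent.
struct BigO
{
    int exp; // the complexity is O(n^exp)

    // "Adding" two costs keeps the dominant term:
    // O(n^a) + O(n^b) = O(n^max(a,b))
    BigO opBinary(string op : "+")(BigO rhs) const
    {
        import std.algorithm : max;
        return BigO(max(exp, rhs.exp));
    }

    // Nesting (a loop around a call) multiplies:
    // O(n^a) * O(n^b) = O(n^(a+b))
    BigO opBinary(string op : "*")(BigO rhs) const
    {
        return BigO(exp + rhs.exp);
    }
}

unittest
{
    auto bar1 = BigO(1); // O(n)
    auto bar2 = BigO(2); // O(n^2)
    // L(f) = L(f1) + L(f2): the sum is dominated by the larger term
    assert(bar1 + bar2 == BigO(2));
    // a doubly nested loop around an O(n) body: O(n^2) * O(n) = O(n^3)
    assert(BigO(2) * bar1 == BigO(3));
}
```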


If D supported some generic way to handle such linear operators 
acting on methods, treating calls to other methods as the 
decomposition of the method, one could do many cool things quite 
easily in D. BigO would just be one of them.


e.g.,

void foo()
{
     bar1();
     bar2();
     for(auto i = 1; i < 100; i++)
        for(auto j = 1; j < 100; j++)
            bar3();
}

then we can see L(foo) = L(bar1) + L(bar2) + X(n^2)*L(bar3)

(Here, X(n^2) appears because the doubly nested for loop 
multiplies the complexity of bar3 by n^2 in big-O notation. To be 
more precise, we should refactor the code inside the function so 
that each piece can be analyzed on its own, i.e.,

void foo()
{
     bar1();
     bar2();
}

void fooTemp()
{
     for(auto i = 1; i < 100; i++)
        for(auto j = 1; j < 100; j++)
            bar3();
}

which then gives the more precise decomposition as

L(foo) = L(bar1) + L(bar2) + L(fooTemp)


The main point of all this is that L can be many things. If one 
has the complete hierarchical call stack (with root = Main), then 
L can be seen as working recursively on the tree.

In this case, addition would rather be "insertion into the tree". 
We could then easily check relations such as whether foo is called 
by, calls, or is disjoint from bar.
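A hedged sketch of what such relation queries could look like (the `CallTree` type is invented here, and it assumes an acyclic call graph, i.e. no recursion, to keep the traversal simple):

```d
// Hypothetical sketch: a call hierarchy stored as caller -> callees,
// supporting the relation queries mentioned above.
struct CallTree
{
    string[][string] callees; // caller name -> its direct callees

    // "Addition" here is insertion of an edge into the tree.
    void addCall(string caller, string callee)
    {
        callees[caller] ~= callee;
    }

    // Does a call b, directly or transitively?
    bool calls(string a, string b)
    {
        foreach (c; callees.get(a, null))
            if (c == b || calls(c, b))
                return true;
        return false;
    }

    // a and b are disjoint if neither reaches the other.
    bool disjoint(string a, string b)
    {
        return !calls(a, b) && !calls(b, a);
    }
}

unittest
{
    CallTree t;
    t.addCall("main", "foo");
    t.addCall("foo", "bar");
    assert(t.calls("main", "bar"));  // transitive reachability
    assert(!t.disjoint("bar", "foo"));
    assert(t.disjoint("bar", "baz")); // unrelated functions
}
```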


We can, as already stated, implement complexity analysis 
automatically:

BigO - the sup of the complexity of a function, assuming all 
unknowns (loops, user input, unknown sub-function calls) have 
maximal complexity.
bigO - the sup of the complexity of a function, assuming all 
unknowns have zero complexity.
LittleO - the inf of the complexity...
littleO - ...


The point is that by simply implementing a decomposable structure 
on the "call hierarchy", one can implement all kinds of cool 
stuff. If it is exposed to D at run time, then even more amazing 
things can happen.

e.g., one could implement a real-time profiler. L would be an 
"operator" that simply records the current ticks when first called 
and again when returning. The "addition" is really just code that 
does the computations on the differences it receives.
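A hedged sketch of such a hook, using a scope guard so the constructor fires on entry and the destructor on return (the `Timed` type and `totalHnsecs` table are invented for illustration):

```d
import core.time : MonoTime;

long[string] totalHnsecs; // accumulated wall time per function

struct Timed
{
    string name;
    MonoTime start;

    this(string name)
    {
        this.name = name;
        this.start = MonoTime.currTime; // the "first called" tick
    }

    ~this() // fires when the enclosing function returns
    {
        totalHnsecs[name] += (MonoTime.currTime - start).total!"hnsecs";
    }
}

void bar3()
{
    auto hook = Timed("bar3"); // entry/return hook for this call
    // ... actual work ...
}

unittest
{
    foreach (_; 0 .. 3) bar3();
    assert("bar3" in totalHnsecs); // three calls were accumulated
}
```

In a real implementation the compiler (or an instrumentation pass) would inject the hook, rather than each function declaring it by hand.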

Imagine hitting some hotkey and your current app showing a 
tree (something like a fractal tree) where each node is a function 
call and its child nodes are the functions it calls ("uses").

Color the tree based on use rate (how many times each function is 
called, possibly accumulated hierarchically), execution time, or 
even BigO. You'll then see a visual of where the program is most 
active. It might look something like a heat distribution map of 
the earth.

I realize it is more complex to implement than I'm making it out 
to be, but it's basically tree traversal and hook machinery.

Once you start allowing meta coding, the power becomes magnified:

e.g. (pseudo-code)

// Define the generic meta-code function composer (the + in the 
// algorithm) for BigO
@function!composer!BigO(functions)
{
     let f1, ..., fn = functions
     return function!composer!BigO(f1) + ... + 
function!composer!BigO(fn)

     // Would need to parse each fi so that it is composed only of 
// calls to functions or non-call code (the loop stuff I mentioned 
// above)
}

// The linear operator BigO, which returns the complexity of a 
// function f
@function!composer!BigO(f)
{
    return complexity(f);
}


// Explicitly specify the complexity for foo
void foo()
[function!composer!BigO() { return 0; }]
{
     ...
}
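For comparison, today's D already gets partway there with user-defined attributes plus compile-time reflection. A hedged sketch (the `Complexity` struct and `complexityOf` helper are invented here, not library code):

```d
// Hypothetical sketch: annotate functions with an explicit
// complexity via a user-defined attribute, then read it back
// with compile-time reflection.
struct Complexity { string expr; }

@Complexity("1") void foo() { /* constant-time work */ }
@Complexity("n * log(n)") void sortAll() { /* ... */ }

// Returns the annotated complexity of f, or "unknown".
string complexityOf(alias f)()
{
    foreach (attr; __traits(getAttributes, f))
        static if (is(typeof(attr) == Complexity))
            return attr.expr;
    return "unknown";
}

unittest
{
    assert(complexityOf!foo() == "1");
    assert(complexityOf!sortAll() == "n * log(n)");
}
```

This only covers explicit annotation; the automatic composition over the call hierarchy is the part that would need compiler support.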


------------------------------------

The above represents a mock-up of a way to let the programmer 
"hook" into the compiler's call hierarchy structure. Obviously it 
would require some work to actually implement. But once such a 
feature exists, the programmer can do all sorts of things: 
implement "attributes" (not that they are needed, of course), 
real-time profiling, BigO, code replacement/self-modifying 
programs (essentially changing how a function behaves), etc...

Of course, maybe we are starting to get into AI here? ;) (e.g., 
one could ship multiple similar functions in a program and add a 
few lines of code to have the program reconfigure itself to use 
the most performant one.)


Also, why stop at functions? What about all the sub-structures in 
the program? We could work on classes, structs, etc...

